CN114964212B - Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration - Google Patents
Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration
- Publication number: CN114964212B
- Application number: CN202210622880.0A
- Authority: CN (China)
- Prior art keywords: inertial, odometer, point, fusion, information
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/25—Fusion techniques
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
- Y02T10/40—Engine management systems
Abstract
Description
Technical Field
The present invention relates to the field of robotics, and in particular to a multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration.
Background Art
Complex unknown spaces such as underground karst caves and primeval forests generally shield satellite positioning signals, constrain communication, and present complex terrain in which landmarks cannot be placed in advance. In the absence of prior environmental information, it is therefore important for a robot exploring such spaces to use its onboard sensors to model the surrounding environment while simultaneously estimating its own pose during motion. In recent years researchers have continuously improved simultaneous localization and mapping technologies, but most application scenarios target indoor or open outdoor environments; studies applied to unknown space exploration remain few.

A single sensor is limited in perception capability and robustness and cannot cope with the various challenges posed by complex environments, so multi-sensor fusion is the current mainstream solution. However, because of payload and power constraints, the onboard computer of a robot exploring unknown spaces has limited computing resources, and communication-constrained unknown spaces preclude the use of cloud computing resources. Running a multi-sensor fusion algorithm in real time on such a platform is therefore highly challenging. Existing solutions mainly reduce computing cost by loosely coupling the sensors; this approach lacks comprehensive consideration of the sensors' own characteristics and is not conducive to improving the robot's positioning accuracy.

When exploring unknown spaces, a robot encounters feature-sparse scenes with many repetitive geometric structures or uniform textures. Lacking constraints in certain directions, such scenes typically cause the state estimation of some sensors to degenerate, severely affecting system stability. Existing methods mainly judge sensor degeneracy by comparing predicted and observed values; they do not analyze the sensor's own observation equations and are therefore limited.

Constrained by sensor mounting positions and occlusion by surrounding obstacles, a robot exploring unknown spaces has large perception blind zones. Moreover, a single robot performing a long mapping task suffers from long mapping times and high computing demands, making real-time requirements hard to meet. Existing research on multi-robot collaborative mapping is still in its infancy and focuses mostly on indoor mapping, which cannot satisfy the needs of unknown space exploration.
Summary of the Invention
The purpose of the present invention is to propose a multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration, achieving high-precision positioning and mapping of robots in unknown, complex environments.

To this end, the present invention adopts the following technical solution. The multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration comprises the following steps:
A lightweight multi-sensor fusion odometry algorithm designs an inertial odometer, a laser-inertial odometer, and a visual-inertial odometer to fuse multi-sensor information; the inertial odometer, laser-inertial odometer, and visual-inertial odometer each estimate the robot's pose.
A degradation identification and fusion decision strategy algorithm based on observability analysis judges whether the laser-inertial odometer and the visual-inertial odometer have degenerated. If degeneracy is present, the pose estimates obtained by the laser-inertial odometer and the visual-inertial odometer undergo degeneracy processing, and the processed pose estimates are fused with the inertial odometer's pose estimate according to a fusion decision to obtain the total pose estimate output. If no degeneracy is present, the pose estimates of the inertial, laser-inertial, and visual-inertial odometers are fused according to the fusion decision to obtain the total pose estimate output.
A multi-machine collaborative mapping algorithm: the robots complete the construction of a globally consistent map and the allocation of collaborative mapping tasks through an information sharing network.
As an optional embodiment, the odometry system fuses multi-sensor information as follows:
The inertial odometer forms a dead-reckoning system from an inertial measurement unit and an airspeed sensor; the dead-reckoning system provides the robot with high-frequency initial position estimates, and the height constraints produced by a barometer and a laser ranging module eliminate the inertial odometer's vertical position drift.
The laser-inertial odometer receives the inertial odometer's initial pose estimate and the solid-state lidar observations, and provides a corrected pose for the inertial odometer by constructing an iterated error-state Kalman filter estimator, as sketched below.
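A minimal numerical sketch of such an iterated error-state Kalman update, assuming a generic measurement model h(x) with Jacobian H(x); the function names, iteration count, and the range-measurement example are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def ieskf_update(x_pred, P_pred, z, h, H_jac, R, n_iter=5, tol=1e-6):
    """Iterated (error-state) Kalman update: relinearize the measurement
    model around the current estimate and reapply the correction until
    the estimate stops changing."""
    x = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x)                              # Jacobian at current estimate
        S = H @ P_pred @ H.T + R                  # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (z - h(x) - H @ (x_pred - x))
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    P = (np.eye(len(x)) - K @ H) @ P_pred         # posterior covariance
    return x, P

# Example: correct a predicted 2-D position with one noisy range measurement.
h = lambda x: np.array([np.linalg.norm(x)])
H_jac = lambda x: (x / np.linalg.norm(x)).reshape(1, -1)
x, P = ieskf_update(np.array([1.0, 1.0]), np.eye(2), np.array([1.6]),
                    h, H_jac, np.array([[0.01]]))
```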
The visual-inertial odometer receives the inertial odometer's initial pose estimate together with the image data and point cloud data collected by the visual sensor, and constructs a local sliding-window optimization estimator according to formula (1):

$$\hat{\mathcal{X}}_k=\arg\min_{\mathcal{X}_k}\left\{\|r_0\|_{\Sigma_0}^2+\sum_{(i,j)\in\mathcal{K}_k}\left(\|r_{ij}^{LIO}\|_{\Sigma_{LIO}}^2+\|r_{ij}^{VIO}\|_{\Sigma_{VIO}}^2+\|r_{ij}^{IMU}\|_{\Sigma_{IMU}}^2\right)\right\}\qquad(1)$$

where $\hat{\mathcal{X}}_k$ denotes the optimal estimate of the set $\mathcal{X}_k$; $\mathcal{K}_k$ denotes the set of all keyframes in the sliding window at time k; $r_{ij}^{LIO}$, $r_{ij}^{VIO}$, and $r_{ij}^{IMU}$ denote the relative constraints of the laser-inertial odometer, the visual-inertial odometer, and the inertial odometer between times i and j; $r_0$ denotes the prior constraint on the first state in the sliding window; and $\Sigma_{LIO}$, $\Sigma_{VIO}$, $\Sigma_{IMU}$, and $\Sigma_0$ denote the covariance matrices of the respective constraints.
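A hedged sketch of how the total cost of formula (1) can be assembled: the prior term plus the LIO/VIO/IMU relative-constraint terms, each weighted by the inverse of its covariance (a squared Mahalanobis norm). The residual functions here are placeholders (assumptions), not the patent's constraint models:

```python
import numpy as np

def mahalanobis_sq(r, cov):
    """Squared Mahalanobis norm ||r||^2_cov = r^T cov^-1 r."""
    return float(r @ np.linalg.solve(cov, r))

def window_cost(states, prior, constraints, covs):
    """states: keyframe poses in the window; prior: (r0 function, Sigma_0);
    constraints: list of (i, j, r_lio, r_vio, r_imu) residual functions."""
    r0_fn, cov0 = prior
    cost = mahalanobis_sq(r0_fn(states), cov0)            # prior on the first state
    for i, j, r_lio, r_vio, r_imu in constraints:
        cost += mahalanobis_sq(r_lio(states[i], states[j]), covs["LIO"])
        cost += mahalanobis_sq(r_vio(states[i], states[j]), covs["VIO"])
        cost += mahalanobis_sq(r_imu(states[i], states[j]), covs["IMU"])
    return cost                                           # minimized over the window
```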
As an optional embodiment, whether the laser-inertial odometer and the visual-inertial odometer have degenerated is judged based on observability analysis as follows:

The information matrices of the laser-inertial odometer and the visual-inertial odometer are constructed from the laser constraints and the visual constraints.

Eigenvalue decomposition is performed on each information matrix, and the magnitude of the eigenvalues characterizes the degree of degeneracy of the odometer:

If an eigenvalue is smaller than a set threshold, the odometer is judged to have degenerated, and the eigenvector corresponding to that eigenvalue is taken as the degenerate direction vector; if the eigenvalue is greater than or equal to the threshold, the odometer is judged not to have degenerated. A sketch of this test follows.
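A sketch of the eigenvalue test; the threshold value is an illustrative assumption (the patent does not disclose a numeric value):

```python
import numpy as np

def detect_degeneracy(info_matrix, threshold=100.0):
    """Return (degenerate?, degenerate direction vectors) for a symmetric
    information matrix, flagging eigenvalues below the threshold."""
    eigvals, eigvecs = np.linalg.eigh(info_matrix)
    degenerate_dirs = [eigvecs[:, k] for k in range(len(eigvals))
                       if eigvals[k] < threshold]
    return len(degenerate_dirs) > 0, degenerate_dirs

# Example: a corridor-like information matrix, weakly constrained along x.
A = np.diag([1e-3, 5e2, 8e2])
print(detect_degeneracy(A))          # degenerate along the first axis
```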
As an optional embodiment, degeneracy processing comprises the following steps (a sketch of the resulting update follows the list):

The eigenvalues determine the direction vectors of the nonlinear optimization solution of the pose estimation problem in the degenerate and non-degenerate directions.

The nonlinear optimization solution of pose estimation is decomposed into the projection of the true solution onto the non-degenerate directions and the projection of the predicted value onto the degenerate directions; constraints are added in the corresponding subspace of the pose estimation problem, no iterative optimization is executed along the degenerate directions, and only the solution in the non-degenerate directions is output.

By discarding the components of the optimal solution along the degenerate directions and superimposing only the increments of the non-degenerate-direction solution onto the state, the pose estimate after degeneracy processing is obtained.
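A sketch of that update rule, assuming the eigen-test above: the optimization increment is projected onto the span of the non-degenerate eigenvectors, so nothing is added along degenerate directions:

```python
import numpy as np

def remap_increment(dx, info_matrix, threshold=100.0):
    """Project the optimization increment dx onto the span of the
    non-degenerate eigen-directions; degenerate components are dropped."""
    eigvals, eigvecs = np.linalg.eigh(info_matrix)
    V = eigvecs[:, eigvals >= threshold]   # non-degenerate directions only
    return V @ (V.T @ dx)

# state += remap_increment(dx, info_matrix)   # no update along degenerate directions
```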
As an optional embodiment, obtaining the total pose estimate output comprises the following steps:

The degradation state information of the laser-inertial subsystem and the visual-inertial subsystem is input into a fusion decision table, and the decision output of the table is obtained from the degradation states.

By fusing the decision output of the decision table with the observations of each odometer subsystem, the total pose estimate of the odometry system is obtained and input into the back-end optimization system.

The back-end optimization system builds the relations between the states at each time through a pose graph and models them as a batch recursive smoothing problem, as in formula (2):

$$\hat{\mathcal{X}}_k=\arg\min_{\mathcal{X}_k}\sum_{(i,j)\in\mathcal{S}_k}\left(\|r_{ij}^{odom}\|_{\Sigma_{odom}}^2+\|r_{ij}^{loop}\|_{\Sigma_{loop}}^2\right)\qquad(2)$$

where $\hat{\mathcal{X}}_k$ denotes the optimal estimate of the set $\mathcal{X}_k$; $\mathcal{S}_k$ denotes the set of all keyframes up to time k; $r_{ij}^{odom}$ and $r_{ij}^{loop}$ denote the relative constraints of the odometry and loop closures between times i and j; and $\Sigma_{odom}$, $\Sigma_{loop}$ denote the covariance matrices of the respective constraints. A toy numerical example follows.
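To make the batch smoothing of formula (2) concrete, a toy 1-D pose graph with two odometry constraints and one loop closure can be solved as linear least squares; all values are illustrative:

```python
import numpy as np

# Unknowns: x1, x2 (x0 is fixed at 0 by the prior).
# Odometry: x1 - x0 = 1.0, x2 - x1 = 1.0; loop closure: x2 - x0 = 1.8.
A = np.array([[ 1.0, 0.0],    # x1 - x0 (odometry)
              [-1.0, 1.0],    # x2 - x1 (odometry)
              [ 0.0, 1.0]])   # x2 - x0 (loop closure)
b = np.array([1.0, 1.0, 1.8])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # ~[0.93, 1.87]: odometry drift pulled toward loop consistency
```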
As an optional embodiment, completing the construction of a globally consistent map and the allocation of collaborative mapping tasks among the robots through the information sharing network comprises the following steps:

A clustering algorithm based on communication quality and swarm field-of-view constraints maximizes, under the specified constraints, the coupled performance composed of the UAV communication quality and the swarm operation field-of-view set, obtaining a set of optimal solutions that partition the UAV swarm into information sharing regions; within each region a network master control unit is selected on the principle of optimal computing resources.

Information association is performed by finding overlapping regions between the robots' maps, determining the coordinate transformation relations between robots, and constructing the globally consistent map on this basis. Overlapping regions are detected with a keyframe-based bag-of-words model, which uses the similarity between two images to judge whether they come from the same scene; if two robots have passed through the same scene, their maps very likely contain an overlapping region (see the sketch below).
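A toy sketch of the keyframe similarity test; real systems typically use tf-idf-weighted bag-of-words vectors, and the vocabulary, word counts, and threshold here are assumptions:

```python
import numpy as np

def bow_similarity(v1, v2):
    """Cosine similarity between two visual-word histograms."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

kf_robot_a = np.array([3, 0, 2, 5, 0, 1], dtype=float)    # word counts, robot A keyframe
kf_robot_b = np.array([2, 1, 2, 4, 0, 0], dtype=float)    # word counts, robot B keyframe
if bow_similarity(kf_robot_a, kf_robot_b) > 0.8:          # assumed threshold
    print("likely same scene -> candidate map overlap")
```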
After receiving the similar keyframes and local map data sent by a robot, the master control unit first performs feature matching between the similar keyframes and its own keyframes and obtains an initial pose transformation matrix from the epipolar constraint; this matrix is provided as the initial value to the normal distributions transform algorithm, which then iteratively computes a precise pose transformation matrix, and the local map sent by the robot is merged with the global map maintained by the master control unit according to the final pose transformation matrix.
The normal distributions transform algorithm performs registration mainly by using the probability density function of the point distribution and determines the best match between point clouds through an optimization algorithm, comprising the following steps (a compact sketch follows the list):

Step 1: Gridding. Divide the global point cloud map maintained by the master control unit into volumetric cells of uniform size.

Step 2: Compute the multidimensional normal distribution parameters. The mean vector q and covariance matrix Σ of the spatial points in each cell are obtained from formulas (3) and (4):

$$q=\frac{1}{n}\sum_{k=1}^{n}x_k\qquad(3)$$

$$\Sigma=\frac{1}{n-1}\sum_{k=1}^{n}(x_k-q)(x_k-q)^T\qquad(4)$$

where n is the total number of points in the cell and $x_k$ is a point in the cell.

Step 3: Coordinate transformation. Let the pose transformation matrix be T, with its initial value provided by the epipolar constraint, and

$$T=\begin{bmatrix}R&t\\0^T&1\end{bmatrix}$$

where R is a 3×3 rotation matrix and t a 3×1 translation vector. Every point of the local point cloud map is transformed onto the global map using T; the transformed points are denoted $y_i\,(i=1,\dots,m)$, where m is the total number of points in the local point cloud map.

Step 4: Compute the probability density. The probability density of each transformed point is computed from the normal distribution parameters of the cell containing $y_i$, according to formula (5):

$$p(y_i)=\frac{1}{(2\pi)^{3/2}\sqrt{\det\Sigma}}\exp\left(-\frac{(y_i-q)^T\Sigma^{-1}(y_i-q)}{2}\right)\qquad(5)$$

where det Σ denotes the determinant of the covariance matrix Σ.

Step 5: Create the objective function. From the probability density of every point, formula (6) constructs the objective function that evaluates the pose transformation matrix T:

$$s(T)=-\sum_{i=1}^{m}p(y_i)\qquad(6)$$

Step 6: Optimize the objective function. Newton's method is used to find the minimum of the objective function, yielding the best pose transformation matrix T.
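A compact sketch of steps 1 through 5 under the formulas above; the cell size, minimum point count, and covariance regularization are assumptions, and the Newton iteration of step 6 is omitted:

```python
import numpy as np

def fit_cells(global_pts, cell=2.0):
    """Steps 1-2: grid the global map and fit (q, Sigma) per cell."""
    buckets = {}
    for p in global_pts:
        buckets.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    params = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < 4:                            # too few points for a stable covariance
            continue
        q = pts.mean(axis=0)                        # formula (3)
        sigma = np.cov(pts.T) + 1e-6 * np.eye(3)    # formula (4), regularized
        params[key] = (q, sigma)
    return params

def ndt_score(local_pts, R, t, params, cell=2.0):
    """Steps 3-5: transform local points and accumulate -sum p(y_i)."""
    score = 0.0
    for p in local_pts:
        y = R @ p + t                               # step 3
        key = tuple(np.floor(y / cell).astype(int))
        if key not in params:
            continue
        q, sigma = params[key]
        d = y - q                                   # step 4: per-point density
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(sigma))
        score -= norm * np.exp(-0.5 * d @ np.linalg.solve(sigma, d))
    return score                                    # step 6 minimizes this over T

rng = np.random.default_rng(0)
global_pts = rng.uniform(0.0, 10.0, size=(800, 3))
params = fit_cells(global_pts)
print(ndt_score(global_pts[:100], np.eye(3), np.zeros(3), params))
```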
The boundary points between the already-mapped area and the unknown area are found with the rapidly-exploring random tree (RRT) algorithm, which comprises (a 2-D sketch follows the list):

Step 1: Call the Init function to initialize the tree T.

Step 2: Call the RandomSample function to sample a random point $x_{rand}$ in the state space X.

Step 3: Call the Nearest function to find the point $x_{near}$ on the current tree T closest to $x_{rand}$.

Step 4: Call the Steer function: starting from $x_{near}$ with step length ε, iteratively check whether the points between $x_{near}$ and $x_{rand}$ are collision-free; if not, stop the iteration. The farthest traversable point from $x_{near}$ becomes the new node $x_{new}$.

Step 5: Add the new node $x_{new}$ and the new edge between $x_{near}$ and $x_{new}$ to the tree T, completing this expansion and entering the next iteration.

Step 6: When the iterations end, a new tree T has been generated.
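A 2-D sketch of this expansion loop; the obstacle, bounds, step lengths, and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
EPS, SUBSTEP = 0.5, 0.1                             # steer length and collision-check step

def collision_free(p):                              # one rectangular obstacle (illustrative)
    return not (2.0 <= p[0] <= 4.0 and 2.0 <= p[1] <= 4.0)

tree = {0: (np.zeros(2), None)}                     # Step 1: Init(T), node -> (position, parent)
for _ in range(200):
    x_rand = rng.uniform(0.0, 10.0, size=2)         # Step 2: RandomSample in X
    near_id = min(tree, key=lambda i: np.linalg.norm(tree[i][0] - x_rand))
    x_near = tree[near_id][0]                       # Step 3: Nearest node on T
    d = x_rand - x_near
    d /= np.linalg.norm(d) + 1e-12
    reach = min(EPS, np.linalg.norm(x_rand - x_near))
    x_new, s = None, SUBSTEP
    while s <= reach + 1e-9:                        # Step 4: Steer toward x_rand
        cand = x_near + s * d
        if not collision_free(cand):
            break                                   # stop at the first collision
        x_new, s = cand, s + SUBSTEP                # farthest free point so far
    if x_new is not None:                           # Step 5: add new node and edge
        tree[len(tree)] = (x_new, near_id)
print(len(tree), "nodes after expansion")           # Step 6: the expanded tree
```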
The master control unit screens the large number of boundary points obtained by the global and local detectors with the mean-shift clustering algorithm to obtain the required task goal points. The mean-shift clustering algorithm comprises the following steps (a sketch follows the list):

Step 1: Randomly select one of the unmarked boundary points as the starting center point $x_{center}$.

Step 2: Find all boundary points within the spherical region of radius r centered at $x_{center}$ and mark them.

Step 3: Compute the vectors from $x_{center}$ to every boundary point in the spherical region and sum them to obtain the shift vector $x_{shift}$.

Step 4: Move the center point according to $x_{center}=x_{center}+x_{shift}$, i.e., $x_{center}$ moves along the direction of $x_{shift}$ by the distance $\|x_{shift}\|$.

Step 5: Repeat Steps 2 to 4 until $\|x_{shift}\|$ is very small, indicating that the iteration has converged; record the current center point $x_{center}$. All boundary points encountered during the iteration are marked.

Step 6: If, at convergence, the distance between the current center point $x_{center}$ and an existing center point is smaller than a threshold, keep the center point whose spherical region contains more boundary points; otherwise take $x_{center}$ as a new center point.

Step 7: Repeat the preceding steps until all points have been marked as visited.
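A sketch of these steps; the drift is applied as the mean of the offsets (the sum of Step 3 scaled by 1/n, for stable steps), the Step 6 tie-break is simplified to keeping the first center, and the radius and thresholds are assumptions:

```python
import numpy as np

def mean_shift(points, r=1.0, tol=1e-3, merge_dist=0.5):
    points = np.asarray(points, dtype=float)
    visited = np.zeros(len(points), dtype=bool)
    centers = []
    while not visited.all():
        center = points[np.flatnonzero(~visited)[0]].copy()    # Step 1: pick a start point
        while True:
            in_ball = np.linalg.norm(points - center, axis=1) <= r
            visited |= in_ball                                 # Step 2: mark members
            shift = points[in_ball].mean(axis=0) - center      # Step 3: drift vector (mean form)
            center += shift                                    # Step 4: move the center
            if np.linalg.norm(shift) < tol:                    # Step 5: converged
                break
        if all(np.linalg.norm(center - c) >= merge_dist for c in centers):
            centers.append(center)                             # Step 6: keep as a new center
    return np.array(centers)                                   # the task goal points

frontier = np.vstack([np.random.default_rng(1).normal(m, 0.2, size=(30, 2))
                      for m in ([0.0, 0.0], [5.0, 5.0])])
print(mean_shift(frontier))                                    # two centers -> two task goals
```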
The master control unit then allocates the task goal points to the robots in the shared network through an auction mechanism, sketched below.
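A sketch of one auction round, framed as cost minimization (the lowest-cost bid is the highest-return bid); the cost weights and the fixed overlap/island inputs are placeholder assumptions:

```python
import numpy as np

def bid(robot_pos, goal, overlap_ratio, n_islands, w=(1.0, 5.0, 3.0)):
    """Cost of a bid: path length + field-of-view overlap + island penalty."""
    path_len = np.linalg.norm(np.asarray(goal, float) - np.asarray(robot_pos, float))
    return w[0] * path_len + w[1] * overlap_ratio + w[2] * n_islands

def auction(goals, robots):
    """robots: name -> position; returns goal -> winning robot (one per round)."""
    assignment, free = {}, dict(robots)
    for g in goals:
        if not free:
            break
        winner = min(free, key=lambda r: bid(free[r], g, overlap_ratio=0.1, n_islands=0))
        assignment[tuple(g)] = winner
        free.pop(winner)                       # an idle robot wins at most one task
    return assignment

print(auction([(5.0, 5.0), (0.0, 9.0)], {"uav1": (0.0, 0.0), "uav2": (6.0, 4.0)}))
```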
The present invention also provides a multi-machine collaborative fusion positioning and mapping device oriented to unknown space exploration, comprising a memory, a processor, and a program of the multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration stored in the memory and executable on the processor; when the processor executes the program, the steps of the above method are implemented.

A computer-readable storage medium stores a program of the multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration; when the program is executed by a processor, the steps of the above method are implemented.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:

The lightweight multi-sensor fusion odometry algorithm designed by the present invention achieves tightly coupled state estimation across the inertial, laser-inertial, and visual-inertial odometers. Compared with existing methods, it effectively improves the accuracy of the robot's pose estimation and its robustness against the various challenges of unknown spaces, while allowing the multi-sensor fusion algorithm to run in real time on the onboard computer.

The degradation identification and fusion decision algorithm based on observability analysis can determine whether an odometer has degenerated and can fuse the odometry information according to the odometer's degradation state. Compared with existing methods, it improves the robot's pose estimation in feature-sparse scenes and realizes multi-source information fusion, providing a reliable guarantee for the robot's autonomous navigation, real-time obstacle avoidance, and map construction in complex unknown spaces.

The multi-machine collaborative mapping algorithm realizes map information sharing among the robots and optimal allocation of computing resources, and completes the construction of a globally consistent map and the allocation of collaborative mapping tasks over the information sharing network. Compared with existing methods, this design improves the efficiency with which robots map unknown spaces.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the algorithm framework of one embodiment of the present invention;

FIG. 2 is a schematic diagram of the sensor information fusion process of one embodiment;

FIG. 3 is a schematic diagram of the iterated error-state Kalman filter of one embodiment;

FIG. 4 is a schematic diagram of the odometry fusion process of one embodiment;

FIG. 5 is a schematic diagram of degeneracy judgment based on observability analysis in one embodiment;

FIG. 6 is a schematic diagram of the information fusion framework of one embodiment;

FIG. 7 is a schematic diagram of the overlapping-region detection process of one embodiment;

FIG. 8 is a schematic diagram of the process of constructing a globally consistent map in one embodiment;

FIG. 9 is a schematic diagram of the collaborative mapping system framework of one embodiment;

FIG. 10 is a schematic diagram of the auction mechanism flow of one embodiment.
Detailed Description
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.

The multi-machine collaborative fusion positioning and mapping method and system oriented to unknown space exploration according to embodiments of the present invention are described below with reference to FIGS. 1 to 10.

For complex unknown space environments, the robot's autonomous positioning and environment perception capabilities urgently need improvement. As shown in FIG. 1, the present invention improves the robustness and smoothness of positioning and mapping during exploration of unknown complex environments by designing a lightweight multi-sensor fusion odometry algorithm, a degradation identification and fusion decision strategy algorithm based on observability analysis, and a multi-machine collaborative mapping algorithm.
As an optional embodiment, the design of the lightweight multi-sensor fusion odometry algorithm is shown in FIG. 2. The algorithm adopts a redundant design with three subsystems, an inertial odometer, a laser-inertial odometer, and a visual-inertial odometer, fusing multi-sensor information and enhancing the robot's robustness against the various challenges of unknown spaces.

First, the inertial odometer forms a dead-reckoning system from an inertial measurement unit and an airspeed sensor, providing the robot with high-frequency initial pose estimates. Meanwhile, the height constraints produced by the barometer and the laser ranging module effectively eliminate the inertial odometer's vertical pose drift (a sketch of this correction follows). Because the inertial odometer estimates only the robot's own state, it keeps the pose estimate accurate for a short time even if the visual-inertial and laser-inertial odometers degenerate or fail.
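A minimal sketch of such a vertical correction, written as a complementary fusion of the dead-reckoned height with barometric and laser-range references; the gains and values are illustrative assumptions, not the patent's estimator:

```python
def fuse_height(z_imu, z_baro, z_laser, k_baro=0.05, k_laser=0.2):
    """Pull the dead-reckoned height toward the absolute references."""
    z = z_imu
    z += k_baro * (z_baro - z)       # slow correction from barometric height
    z += k_laser * (z_laser - z)     # stronger correction from the laser rangefinder
    return z

print(fuse_height(z_imu=10.8, z_baro=10.2, z_laser=10.1))   # drifted estimate pulled down
```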
Second, the laser-inertial odometer receives the inertial odometer's initial pose estimate and the solid-state lidar observations; as shown in FIG. 3, it provides a corrected pose to the inertial odometer by constructing an iterated error-state Kalman filter estimator, eliminating the inertial odometer's accumulated error.

Finally, the visual-inertial odometer receives the inertial odometer's initial pose estimate, the visual sensor's image data, and point cloud data, and constructs the local sliding-window optimization estimator of formula (1):

$$\hat{\mathcal{X}}_k=\arg\min_{\mathcal{X}_k}\left\{\|r_0\|_{\Sigma_0}^2+\sum_{(i,j)\in\mathcal{K}_k}\left(\|r_{ij}^{LIO}\|_{\Sigma_{LIO}}^2+\|r_{ij}^{VIO}\|_{\Sigma_{VIO}}^2+\|r_{ij}^{IMU}\|_{\Sigma_{IMU}}^2\right)\right\}\qquad(1)$$

where $\hat{\mathcal{X}}_k$ is the optimal estimate of the set $\mathcal{X}_k$, $\mathcal{K}_k$ the set of all keyframes in the sliding window at time k, $r_{ij}^{LIO}$, $r_{ij}^{VIO}$, $r_{ij}^{IMU}$ the relative constraints of the laser-inertial, visual-inertial, and inertial odometers between times i and j, $r_0$ the prior constraint on the first state in the window, and $\Sigma_{LIO}$, $\Sigma_{VIO}$, $\Sigma_{IMU}$, $\Sigma_0$ the covariance matrices of the respective constraints. This fusion method uses the point cloud data to assist the visual depth estimation, overcoming the problem of inaccurate depth estimates degrading the pose estimate.
The design of the degradation identification and fusion decision algorithm based on observability analysis is further described in the following embodiment.

The inertial, visual-inertial, and laser-inertial odometers each produce a pose estimate for the robot; the pose estimates of the three odometers must then be fused.

For the visual-inertial and laser-inertial odometers, the visual sensor relies on texture information to determine feature points, while the laser sensor relies on geometric structure. In feature-sparse scenes with uniform texture or repetitive geometry, such as long straight corridors, long tunnels, single-sided walls, open roads, and bridges, neither sensor can extract enough feature points, and the lack of features causes the state estimators of both odometers to degenerate, affecting the accuracy and stability of pose estimation. To address this possible degeneracy, the present invention judges whether an odometer has degenerated based on observability analysis and fuses the pose estimates of the odometer subsystems according to their degradation states.

As shown in FIG. 4, since the inertial odometer is unaffected by structure-sparse scenes, its pose estimate is input directly into the fusion decision system. The laser-inertial and visual-inertial odometers must first be checked for degeneracy: if none has occurred, their pose estimates are likewise input directly, while the pose estimate of a degenerated odometer is input only after processing.
As shown in FIG. 5, the present invention judges, based on observability analysis, whether the visual-inertial and laser-inertial odometers have degenerated in feature-sparse scenes.

First, the information matrices of the laser-inertial and visual-inertial odometers are constructed from the laser and visual constraints respectively, converting the observability analysis into the analysis of the observable subspace. Eigenvalue decomposition is then performed on each information matrix, and the eigenvalue magnitudes characterize the degree of degeneracy: when an eigenvalue is below the set threshold the odometer is considered degenerated, and the corresponding eigenvector is taken as the degenerate direction vector.

When the laser-inertial and/or visual-inertial odometer is judged to have degenerated, its pose estimate must be processed. For an odometry system, pose estimation ultimately reduces to a nonlinear optimization problem, and whether the odometer has degenerated has no necessary bearing on whether that problem has a solution; even under degeneracy a solution may exist. The eigenvalues determine the direction vectors of the nonlinear optimization solution in the degenerate and non-degenerate directions. The present invention therefore decomposes the solution into the projection of the true solution onto the non-degenerate directions and the projection of the predicted value onto the degenerate directions, adds constraints in the corresponding subspace of the pose estimation problem, executes no iterative optimization along the degenerate directions, and outputs only the non-degenerate-direction solution. By discarding the components of the optimal solution along the degenerate directions and superimposing only the non-degenerate increments onto the state, the optimal solution is prevented from drifting in the null space, preserving its accuracy. The processed pose estimate is then input into the fusion decision system; if the odometer has degenerated in all directions, the pose estimate is discarded and not input into the fusion decision system.
The fusion decision system receives the pose estimates of the three odometers. For the information fusion of the robot's multiple odometer subsystems, the present invention designs a decision-driven fusion method based on degradation state. The degradation state information of the laser-inertial and visual-inertial subsystems is first input into the decision table, and the table's decision output is obtained from the degradation states of the odometer subsystems; evidently, an odometer that has not degenerated, or has degenerated only slightly, receives a higher share of the decision output. By fusing the decision data of the table with the observations of each odometer subsystem, the total pose estimate of the lightweight multi-sensor fusion odometry is decided and input into the back-end optimization system (a sketch of such a table-driven fusion follows).
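A sketch of a degradation-state-driven decision table; the states, weights, and the position-only weighted combination are illustrative assumptions (orientation fusion would require proper rotation averaging):

```python
import numpy as np

DECISION_TABLE = {                     # (lio_degraded, vio_degraded) -> fusion weights
    (False, False): {"LIO": 0.5, "VIO": 0.4, "IMU": 0.1},
    (True,  False): {"LIO": 0.1, "VIO": 0.7, "IMU": 0.2},
    (False, True):  {"LIO": 0.7, "VIO": 0.1, "IMU": 0.2},
    (True,  True):  {"LIO": 0.0, "VIO": 0.0, "IMU": 1.0},   # fall back to inertial only
}

def fuse_positions(positions, lio_degraded, vio_degraded):
    w = DECISION_TABLE[(lio_degraded, vio_degraded)]
    return sum(w[k] * np.asarray(positions[k], float) for k in positions)

positions = {"LIO": [1.00, 2.00, 0.0], "VIO": [1.02, 1.98, 0.0], "IMU": [0.90, 2.10, 0.0]}
print(fuse_positions(positions, lio_degraded=True, vio_degraded=False))
```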
The back-end optimization system builds the relations between the states at each time through a pose graph and models them as the batch recursive smoothing problem of formula (2):

$$\hat{\mathcal{X}}_k=\arg\min_{\mathcal{X}_k}\sum_{(i,j)\in\mathcal{S}_k}\left(\|r_{ij}^{odom}\|_{\Sigma_{odom}}^2+\|r_{ij}^{loop}\|_{\Sigma_{loop}}^2\right)\qquad(2)$$

where $\hat{\mathcal{X}}_k$ is the optimal estimate of the set $\mathcal{X}_k$, $\mathcal{S}_k$ the set of all keyframes up to time k, $r_{ij}^{odom}$ and $r_{ij}^{loop}$ the relative constraints of the odometry and loop closures between times i and j, and $\Sigma_{odom}$, $\Sigma_{loop}$ the covariance matrices of the respective constraints. Solving formula (2) yields a globally consistent trajectory and, from it, a dense three-dimensional point cloud map.

As shown in FIG. 6, each odometer subsystem fuses its multi-sensor data in a tightly coupled manner for state estimation, after which the pose estimates of the odometers are fused by decision in a loosely coupled manner. This combination of tight and loose coupling improves positioning accuracy while preserving system stability.
An embodiment of the multi-machine collaborative mapping algorithm is further described below.

First, to address the complexity of unknown-space networks and their limited communication, the present invention uses the clustering algorithm based on communication quality and swarm field-of-view constraints: under the specified constraints it maximizes the coupled performance composed of the UAV communication quality and the swarm operation field-of-view set, obtaining a set of near-optimal solutions that efficiently partition the UAV swarm into information sharing regions; within each region the network master control unit is selected on the principle of optimal computing resources.

Second, since robots rarely rendezvous in unknown spaces and landmarks cannot be placed in advance, the present invention builds the globally consistent map from map overlap regions. Information association is performed by finding overlapping regions between the robots' maps, determining the coordinate transformation relations between robots, and constructing the globally consistent map on that basis. Overlap detection uses the keyframe-based bag-of-words model, which measures the similarity between two images and thus judges whether they come from the same scene. Comparing keyframes between two robots with the bag-of-words model therefore reveals whether the robots have passed through the same scene; if they have, their maps very likely contain an overlapping region.

Meanwhile, as shown in FIG. 7, to keep system communication real-time rather than continuously transmitting three-dimensional map data, the master control unit first broadcasts the global descriptors of its keyframes over the communication network to every robot. On receiving the descriptors, each robot uses the bag-of-words model to judge whether its map overlaps with the master control unit's; only when an overlap exists does the robot transmit data, sending the corresponding similar keyframes and local map data to the master control unit. This mechanism effectively avoids transmitting large amounts of redundant data and improves transmission efficiency.

As shown in FIG. 8, after receiving the similar keyframes and local map data sent by a robot, the master control unit first performs feature matching between those keyframes and its own, obtains the initial pose transformation matrix from the epipolar constraint, provides it as the initial value to the normal distributions transform algorithm, iteratively computes the precise pose transformation matrix, and merges the robot's local map with the global map it maintains according to the final pose transformation matrix.
The Normal Distributions Transform (NDT) algorithm performs registration mainly by using the probability density function of the point distribution and determines the best match between point clouds through an optimization algorithm. The specific steps are as follows:

Step 1: Gridding. Divide the global point cloud map maintained by the master control unit into volumetric cells of uniform size.

Step 2: Compute the multidimensional normal distribution parameters. The mean vector q and covariance matrix Σ of the spatial points in each cell are obtained from formulas (3) and (4):

$$q=\frac{1}{n}\sum_{k=1}^{n}x_k\qquad(3)$$

$$\Sigma=\frac{1}{n-1}\sum_{k=1}^{n}(x_k-q)(x_k-q)^T\qquad(4)$$

where n is the total number of points in the cell and $x_k$ is a point in the cell.

Step 3: Coordinate transformation. Let the pose transformation matrix be T, with its initial value provided by the epipolar constraint, and

$$T=\begin{bmatrix}R&t\\0^T&1\end{bmatrix}$$

where R is a 3×3 rotation matrix and t a 3×1 translation vector. Every point of the local point cloud map is transformed onto the global map using T; the transformed points are denoted $y_i\,(i=1,\dots,m)$, where m is the total number of points in the local point cloud map.

Step 4: Compute the probability density. The probability density of each transformed point is computed from the normal distribution parameters of the cell containing $y_i$, according to formula (5):

$$p(y_i)=\frac{1}{(2\pi)^{3/2}\sqrt{\det\Sigma}}\exp\left(-\frac{(y_i-q)^T\Sigma^{-1}(y_i-q)}{2}\right)\qquad(5)$$

where det Σ denotes the determinant of the covariance matrix Σ.

Step 5: Create the objective function. From the probability density of every point, formula (6) constructs the objective function that evaluates the pose transformation matrix T:

$$s(T)=-\sum_{i=1}^{m}p(y_i)\qquad(6)$$

Step 6: Optimize the objective function. Newton's method finds the minimum of the objective function, yielding the best pose transformation matrix T.

Global optimization based on the pose graph optimization algorithm then yields a higher-precision globally consistent map and the pose constraint relations between the robots.
Finally, for the problem of how multiple robots map collaboratively while exploring unknown space, the Rapidly-exploring Random Trees (RRT) algorithm is used to find the boundary points between the already-mapped area and the unknown area. The RRT algorithm comprises:

Step 1: Call the Init function to initialize the tree T.

Step 2: Call the RandomSample function to sample a random point $x_{rand}$ in the state space X.

Step 3: Call the Nearest function to find the point $x_{near}$ on the current tree T closest to $x_{rand}$.

Step 4: Call the Steer function: starting from $x_{near}$ with step length ε, iteratively check whether the points between $x_{near}$ and $x_{rand}$ are collision-free; if not, stop the iteration. The farthest traversable point from $x_{near}$ becomes the new node $x_{new}$.

Step 5: Add the new node $x_{new}$ and the new edge between $x_{near}$ and $x_{new}$ to the tree T, completing this expansion and entering the next iteration.

Step 6: When the iterations end, a new tree T has been generated.

As shown in FIG. 9, the present invention balances the efficiency and completeness of boundary-point search by deploying both a global boundary detector and local boundary detectors. Each robot in the network maintains a local detector that randomly probes the surrounding environment with the RRT algorithm; when a boundary point is detected, the robot deletes its current tree and restarts tree expansion from its current position, accelerating boundary-point detection. The global boundary detector is maintained by the master control unit; unlike the local detectors, its tree root node is never updated and only one tree is generated during the task, so the global detector can detect boundary points the local detectors miss and supply more valuable boundary points when a local detector's RRT becomes trapped in a growth "trap". The master control unit screens the many boundary points from the global and local detectors with the mean-shift clustering algorithm to obtain the required task goal points.
The specific steps of the mean-shift clustering algorithm are as follows:

Step 1: Randomly select one of the unmarked boundary points as the starting center point $x_{center}$.

Step 2: Find all boundary points within the spherical region of radius r centered at $x_{center}$ and mark them.

Step 3: Compute the vectors from $x_{center}$ to every boundary point in the spherical region and sum them to obtain the shift vector $x_{shift}$.

Step 4: Move the center point according to $x_{center}=x_{center}+x_{shift}$, i.e., $x_{center}$ moves along the direction of $x_{shift}$ by the distance $\|x_{shift}\|$.

Step 5: Repeat Steps 2 to 4 until $\|x_{shift}\|$ is very small, indicating that the iteration has converged; record the current center point $x_{center}$. All boundary points encountered during the iteration are marked.

Step 6: If, at convergence, the distance between the current center point $x_{center}$ and an existing center point is smaller than a threshold, keep the center point whose spherical region contains more boundary points; otherwise take $x_{center}$ as a new center point.

Step 7: Repeat the preceding steps until all points have been marked as visited.

After mean-shift clustering, the originally numerous boundary points are clustered onto a few center points; the centers are kept and the remaining boundary points deleted, and these centers become the robots' task goal points.
After the task goal points are obtained, the master control unit allocates them to the robots in the shared network through the auction mechanism shown in FIG. 10. The auction algorithm simulates a market auction: the master control unit acts as auctioneer, and the idle robots in the information sharing network act as bidders. On the principle of maximizing system benefit, the auctioneer selects, among all bidders, the one with the highest return as the winner. This design takes exploration path length, field-of-view overlap, and "island" formation as cost criteria, assigning task nodes by weighing each robot's path length to the task node, the degree of repeated coverage, and the number of "islands" that would form, thereby achieving optimal allocation of the multi-robot collaborative mapping task.
The present invention further discloses a multi-machine collaborative fusion positioning and mapping device for unknown space exploration, comprising a memory, a processor, and a program of the multi-machine collaborative fusion positioning and mapping method for unknown space exploration that is stored in the memory and executable on the processor; when the processor executes this program, the steps of the multi-machine collaborative fusion positioning and mapping method for unknown space exploration described above are implemented.
The device comprises at least one processor and a memory. The processor is an integrated circuit chip with signal-processing capability. During implementation, each step of the above method can be completed by integrated logic circuits in the processor's hardware or by instructions in software form. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and it can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. A software module may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium resides in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It should be understood that the memory in the embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory of the systems and methods described in the embodiments of the present invention is intended to include, without limitation, these and any other suitable types of memory.
The present invention further discloses a computer-readable storage medium storing a program of the multi-machine collaborative fusion positioning and mapping method for unknown space exploration; when this program is executed by a processor, the steps of the multi-machine collaborative fusion positioning and mapping method for unknown space exploration described above are implemented.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Other configurations and operations of the multi-machine collaborative fusion positioning and mapping method for unknown space exploration according to the embodiments of the present invention are known to those of ordinary skill in the art and are not described in detail here.
In the description of this specification, reference to the terms "embodiment", "example", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations may be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the claims and their equivalents.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210622880.0A CN114964212B (en) | 2022-06-02 | 2022-06-02 | Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114964212A CN114964212A (en) | 2022-08-30 |
CN114964212B true CN114964212B (en) | 2023-04-18 |
Family
ID=82959466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210622880.0A Active CN114964212B (en) | 2022-06-02 | 2022-06-02 | Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114964212B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115792931A (en) * | 2022-10-25 | 2023-03-14 | 上海有个机器人有限公司 | Robot positioning method, device, equipment and storage medium |
CN115711617B (en) * | 2022-10-31 | 2024-10-15 | 广东工业大学 | Strong consistency odometer and high-precision mapping method and system for complex nearshore waters |
CN115597585A (en) * | 2022-12-16 | 2023-01-13 | 青岛通产智能科技股份有限公司(Cn) | Robot positioning method, device, equipment and storage medium |
CN116380935A (en) * | 2023-06-02 | 2023-07-04 | 中南大学 | High-speed railway box girder damage detection robot car and damage detection method |
CN116704388B (en) * | 2023-08-09 | 2023-11-03 | 南京航空航天大学 | Multi-unmanned aerial vehicle cooperative target positioning method based on vision |
CN117213470B (en) * | 2023-11-07 | 2024-01-23 | 武汉大学 | A multi-machine fragment map aggregation and update method and system |
CN118537606B (en) * | 2024-07-26 | 2024-10-25 | 集美大学 | A method, device and program product for processing degradation of three-dimensional point cloud matching |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3078935A1 (en) * | 2015-04-10 | 2016-10-12 | The European Atomic Energy Community (EURATOM), represented by the European Commission | Method and device for real-time mapping and localization |
CN109682373B (en) * | 2018-12-28 | 2021-03-09 | 中国兵器工业计算机应用技术研究所 | Perception system of unmanned platform |
CN110726406A (en) * | 2019-06-03 | 2020-01-24 | 北京建筑大学 | An Improved Nonlinear Optimization Method for Monocular Inertial Navigation SLAM |
CN111707261A (en) * | 2020-04-10 | 2020-09-25 | 南京非空航空科技有限公司 | High-speed sensing and positioning method for micro unmanned aerial vehicle |
CN114022519A (en) * | 2020-07-28 | 2022-02-08 | 清华大学 | Coordinate system conversion method and device and multi-source fusion SLAM system comprising device |
CN112304307B (en) * | 2020-09-15 | 2024-09-06 | 浙江大华技术股份有限公司 | Positioning method and device based on multi-sensor fusion and storage medium |
CN114001733B (en) * | 2021-10-28 | 2024-03-15 | 浙江大学 | Map-based consistent efficient visual inertial positioning algorithm |
- 2022-06-02 CN CN202210622880.0A patent/CN114964212B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN113819905A (en) * | 2020-06-19 | 2021-12-21 | 北京图森未来科技有限公司 | An odometer method and device based on multi-sensor fusion |
CN112634451A (en) * | 2021-01-11 | 2021-04-09 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
CN113433553A (en) * | 2021-06-23 | 2021-09-24 | 哈尔滨工程大学 | Precise navigation method for multi-source acoustic information fusion of underwater robot |
Non-Patent Citations (2)
Title |
---|
Raouf Rasoulzadeh et al. "Implementation of a low-cost multi-IMU hardware by using a homogenous multi-sensor fusion." 2016 4th International Conference on Control, Instrumentation, and Automation (ICCIA), 2016, pp. 451-456. *
Wei Wenle; Jin Guodong; Tan Lining; Lu Libin; Chen Danqi. "Real-time SLAM algorithm using an inertial measurement unit to determine keyframes." Journal of Computer Applications, 2020, no. 4, pp. 233-239. *
Also Published As
Publication number | Publication date |
---|---|
CN114964212A (en) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114964212B (en) | Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration | |
CN111583369B (en) | Laser SLAM method based on facial line angular point feature extraction | |
CN106595659A (en) | Map merging method of unmanned aerial vehicle visual SLAM under city complex environment | |
CN111427047A (en) | A SLAM method for autonomous mobile robots in large scenes | |
CN115494533B (en) | Vehicle positioning method, device, storage medium and positioning system | |
CN114047766B (en) | Mobile robot data collection system and method for long-term application in indoor and outdoor scenes | |
CN116703984B (en) | Robot path planning and infrared light image fusion method, system and storage medium | |
Xu et al. | Dynamic vehicle pose estimation and tracking based on motion feedback for LiDARs | |
MAILKA et al. | An efficient end-to-end EKF-SLAM architecture based on LiDAR, GNSS, and IMU data sensor fusion for autonomous ground vehicles | |
Martinez et al. | Pit30m: A benchmark for global localization in the age of self-driving cars | |
Pan et al. | LiDAR-IMU Tightly-Coupled SLAM Method Based on IEKF and Loop Closure Detection | |
CN115371662B (en) | Static map construction method for removing dynamic objects based on probability grids | |
US11898869B2 (en) | Multi-agent map generation | |
Guo et al. | Occupancy grid based urban localization using weighted point cloud | |
Liu et al. | Laser 3D tightly coupled mapping method based on visual information | |
Liu et al. | Fast and robust LiDAR-inertial odometry by tightly-coupled iterated Kalman smoother and robocentric voxels | |
Zhang et al. | R-LVIO: Resilient LiDAR-Visual-Inertial Odometry for UAVs in GNSS-denied Environment | |
CN118135133A (en) | A vehicle-machine collaborative mapping method and device for wide-area search | |
CN117631697A (en) | Map construction method, device, equipment and storage medium based on air-ground cooperation | |
CN115908737A (en) | An Incremental 3D Composition Method for Intelligent Obstacle Avoidance of Unmanned Aerial Vehicles | |
Li et al. | Research on the role of multi-sensor system information fusion in improving hardware control accuracy of intelligent system | |
Chen et al. | A 3D Lidar SLAM Algorithm Based on Graph Optimization in Indoor Scene | |
Zheng et al. | An improved gtsam-based nonlinear optimization algorithm in ORBSLAM3 | |
CN117191005B (en) | An air-ground heterogeneous collaborative mapping method, device, equipment and storage medium | |
KR102815180B1 (en) | Slam system and method for dynamic environment response based on local map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||