CN107680133A - A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm - Google Patents



Publication number
CN107680133A
Authority
CN
China
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN201710839954.5A
Other languages
Chinese (zh)
Inventor
胡章芳
鲍合章
罗元
孙林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201710839954.5A
Publication of CN107680133A


Classifications

    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06F18/22 Pattern recognition; matching criteria, e.g. proximity measures
    • G06T7/13 Edge detection
    • G06T7/337 Determination of transform parameters for image registration using feature-based methods involving reference images or patches
    • G06T7/35 Determination of transform parameters for image registration using statistical methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V20/582 Recognition of traffic signs
    • G06T2207/20164 Salient point detection; corner detection
    • G06T2207/30252 Vehicle exterior; vicinity of vehicle


Abstract

The invention claims a mobile robot visual SLAM method based on an improved loop-closure detection algorithm, comprising the steps: S1, calibrating the Kinect with Zhang Zhengyou's calibration method; S2, extracting ORB features from the acquired RGB images and matching them with FLANN; S3, removing mismatches, obtaining the spatial coordinates of the matched points, and estimating the inter-frame pose transformation (R, t) with the PnP algorithm; S4, performing structure-free iterative optimization of the pose transformation obtained by PnP; S5, preprocessing the image frames, describing the images with a bag of visual words, and matching images with an improved similarity-score method to obtain loop-closure candidates, then screening out the correct loop closures; S6, optimizing poses and landmarks with a graph-optimization method built around bundle adjustment, iterating continuously to obtain more accurate camera poses and landmarks. The invention yields more accurate pose estimation and better three-dimensional reconstruction in indoor environments.

Description

A Visual SLAM Method for Mobile Robots Based on an Improved Loop-Closure Detection Algorithm

Technical field

The invention belongs to the field of mobile robot navigation, and in particular relates to a vision-based simultaneous localization and mapping method.

Background

Simultaneous Localization and Mapping (SLAM) uses the sensors a robot carries to gather information about its surroundings, estimate the robot's pose, and build a map of the environment; it is the key technology for fully autonomous robot motion. Visual SLAM is also central to current VR technology and has broad application prospects in the future virtual-reality industry.

Vision sensors capture rich image information similar to what the human eye perceives; this rich environmental information adds detail to map construction and yields better maps. Visual SLAM has made great progress. In 2007, Klein et al. proposed PTAM, which parallelized tracking and mapping and was the first system to use nonlinear optimization as its back end. In 2014, Endres proposed RGBD-SLAM based on depth cameras, integrating image features, loop-closure detection, and point clouds to build three-dimensional maps of indoor scenes. In 2015, Raúl Mur-Artal et al., building on the PTAM architecture, added map initialization and loop-closure detection and optimized keyframe selection, achieving good results in both processing speed and map accuracy.

Although visual SLAM has achieved good processing speed and map accuracy in recent years, its demanding requirements on both leave much room for improvement. Long-term operation of a mobile robot accumulates error, making localization inconsistent with the map; introducing loop-closure detection reduces this accumulated error, but traditional loop-closure detection algorithms detect loops inefficiently and with low accuracy. The invention therefore introduces an improved loop-closure detection algorithm to raise detection accuracy and efficiency. To further reduce systematic error, a nonlinear optimization method locally optimizes the front-end pose estimates and then globally optimizes the loop-closure and front-end information. The proposed mobile robot visual SLAM method based on the improved loop-closure detection algorithm thus meets real-time requirements while delivering more accurate pose estimation and better three-dimensional reconstruction.

Summary of the invention

The invention aims to solve the above problems of the prior art by proposing a mobile robot visual SLAM method, based on an improved loop-closure detection algorithm, that estimates poses more accurately. The technical scheme of the invention comprises the following steps:

S1. Calibrate the Kinect with Zhang Zhengyou's calibration method to obtain the camera intrinsics.

S2. Extract ORB features from the acquired RGB images and match them with the FLANN (fast approximate nearest-neighbour) algorithm; the result contains both correct matches and mismatches.

S3. Remove the mismatches of step S2 with RANSAC (random sample consensus); obtain the spatial coordinates of the matched points through the Kinect camera model, then estimate the inter-frame pose transformation (R, t) with the PnP (perspective-n-point) algorithm.

S4. Perform structure-free iterative optimization of the pose transformation obtained by the PnP algorithm in step S3.

S5. Preprocess the image frames obtained in step S2, describe the images with a bag of visual words, and match images with an improved similarity-score method whose matches better reflect the actual similarity between images, thereby obtaining loop-closure candidates; meanwhile, screen out the correct loop closures using temporal continuity and spatial consistency.

S6. Optimize poses and landmarks with a graph-optimization method built around bundle adjustment (BA), iterating continuously to obtain more accurate camera poses and landmarks.

Further, the ORB feature extraction of step S2 proceeds as follows:

First, FAST corners are extracted from the RGB image, and an image pyramid together with the intensity-centroid method makes the ORB feature points scale- and rotation-invariant.

Then, BRIEF produces a binary description of the image around the feature points extracted in the previous step.

Further, in step S3 the spatial coordinates of the matched points are obtained through the Kinect camera model, solved as:

x_o = (z / f_x)(u - c_x)
y_o = (z / f_y)(v - c_y)
z_o = z

If the depth value obtained by the Kinect for pixel coordinates (u, v) is z, the spatial coordinates (x_o, y_o, z_o) follow from the Kinect camera model above.

Further, the optimization of the pose transformation obtained by PnP in step S4 proceeds as follows:

First, the camera pose is taken as the optimization variable, and the optimization problem is constructed by minimizing the reprojection error.

Then, the PnP value is taken as the initial value in the original PnP function, and g2o is called to optimize it.

Further, the loop-closure candidate detection of step S5 proceeds as follows:

First, the acquired images are preprocessed, and the image-similarity score is computed with the similarity function

H = (V_l, V_c) / (||V_l|| ||V_c||)

where V_l is the weighted vector of the previous frame, V_c is the weighted vector of the current frame, and the value H is the image similarity. If the score exceeds a set threshold, the current frame is discarded; otherwise the current frame represents a new scene and participates in loop-closure detection.

Next, a hierarchical k-means clustering algorithm describes the images, and the improved single-node score function below matches image-similarity scores.

A top-down similarity-increment scheme avoids accumulating similarity repeatedly; the layer-l similarity-score increment is defined as

ΔS^l(X, Y) = S^l(X, Y) - S^(l+1)(X, Y)  for 1 ≤ l < d,  ΔS^d(X, Y) = S^d(X, Y)

and the matching kernel is defined as

K(X, Y) = Σ_{l=1}^{d} η_l ΔS^l(X, Y)

where η_l is the matching-strength coefficient of layer l.

Further, the improved similarity-score function is defined as

S_i^l(X, Y) = min{ω_i^l(X), ω_i^l(Y)} + 1 / (1 + |ω_i^l(X) - ω_i^l(Y)|)

and, with this new single-node score, the layer-l similarity score is

S^l(X, Y) = Σ_i S_i^l(X, Y)

where ω_i^l(X) is the score of image X at the i-th node of layer l of the tree, and ω_i^l(Y) is the score of image Y at that node.

Further, optimizing poses and landmarks with a graph-optimization method built around bundle adjustment (BA) comprises the following. First, the overall cost function (objective function) of poses and landmarks is constructed from the observation error

e = z - h(ξ, p)

where ξ is the Lie algebra element corresponding to the camera pose, i.e. the extrinsics (R, t); p is the three-dimensional landmark point; z is the pixel-coordinate data observed by the Kinect; e is the observation error; and z_ij denotes the pixel-coordinate data produced by observing landmark p_j from pose ξ_i.

This least-squares problem is then solved by defining the independent variable as all variables to be optimized:

x = [ξ_1, ..., ξ_m, p_1, ..., p_n]

Following the idea of nonlinear optimization, a descent direction Δx is sought repeatedly to find the optimum of the cost function; when the independent variable receives an increment, the objective function becomes

(1/2) ||f(x + Δx)||² ≈ (1/2) Σ_ij ||e_ij + F_ij Δξ_i + E_ij Δp_j||²

where F_ij and E_ij are, at the current state, the partial derivatives of the error with respect to the camera pose and with respect to the landmark position, respectively.

Finally, the incremental equation

H Δx = g

is solved, where g is a constant vector at the current state and

H = J^T J + λI
J = [F E]

with the matrices E and F denoting the derivatives of the overall objective function with respect to the overall variables.

The advantages and beneficial effects of the invention are as follows:

The invention provides a mobile robot visual SLAM method based on an improved loop-closure detection algorithm. ORB features speed up feature extraction and matching, improving the system's real-time performance; the improved loop-closure detection algorithm raises detection accuracy, avoids perceptual aliasing, and reduces the accumulated system error; and local optimization of the poses together with global optimization of the whole system's data reduces errors from system noise, yielding more accurate pose estimation and better three-dimensional reconstruction.

Description of drawings

Figure 1 is a flow chart of the mobile robot visual SLAM method of the invention based on the improved loop-closure detection algorithm.

Detailed description

The technical solutions in the embodiments of the invention are described clearly and in detail below with reference to the drawings; the described embodiments are only some of the embodiments of the invention.

The technical scheme by which the invention solves the above technical problems is as follows:

As shown in Figure 1, the invention provides a mobile robot visual SLAM method based on an improved loop-closure detection algorithm, comprising the following steps:

S1. Calibrate the Kinect with Zhang Zhengyou's calibration method, correcting the camera distortion and obtaining the camera intrinsics. Zhang Zhengyou's method obtains the intrinsics using nothing more than a printed checkerboard.
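The role of the calibrated intrinsics can be illustrated with the pinhole model they parameterize. The sketch below is hypothetical: the numeric values are typical Kinect-class numbers rather than figures from the patent, and in practice a checkerboard calibration routine would supply them:

```python
import numpy as np

# Hypothetical Kinect-class intrinsics of the kind the checkerboard
# calibration produces; illustrative values, not the patent's.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_3d):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    x, y, z = point_3d
    return fx * x / z + cx, fy * y / z + cy

u, v = project((0.2, -0.1, 2.0))
```

Every later step that maps between pixels and 3-D points (the PnP estimate, the reprojection error, the back-projection) reuses these four numbers.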

S2. Extract ORB features from the RGB images with OpenCV; ORB features are robust and fast to compute, which suits real-time SLAM well. The fast approximate nearest-neighbour (FLANN) algorithm suits cases with very many points to match and completes the feature matching efficiently.
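The matching principle can be illustrated with toy binary descriptors: ORB descriptors are 256-bit strings compared by Hamming distance. The sketch below uses a brute-force nearest-neighbour search with a ratio test as a stand-in for FLANN's approximate index; the random descriptors and the 0.7 ratio are assumptions of the sketch, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(a ^ b).sum())

# Toy 256-bit (32-byte) binary descriptors standing in for ORB output
train = rng.integers(0, 256, size=(50, 32), dtype=np.uint8)
query = train[7].copy()
query[0] ^= 0b00000001          # flip one bit: a near-duplicate of train[7]

# Brute-force nearest neighbours (FLANN would index these approximately)
order = sorted(range(len(train)), key=lambda i: hamming(query, train[i]))
best, second = order[0], order[1]
# Ratio test: accept the match only if it is clearly better than the runner-up
good = hamming(query, train[best]) < 0.7 * hamming(query, train[second])
```

The ratio test is what lets the later RANSAC stage start from a mostly clean match set.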

S3. Remove mismatches with random sample consensus (RANSAC); obtain the spatial coordinates of the matched points through the Kinect camera model, then estimate the inter-frame pose transformation (R, t) with the PnP algorithm. The spatial coordinates are solved as:

x_o = (z / f_x)(u - c_x)
y_o = (z / f_y)(v - c_y)
z_o = z

If the depth value obtained by the Kinect for pixel coordinates (u, v) is z, the spatial coordinates (x_o, y_o, z_o) follow from the Kinect camera model above.
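A minimal sketch of this back-projection, assuming hypothetical Kinect-class intrinsics f_x, f_y, c_x, c_y:

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed placeholder intrinsics

def back_project(u, v, z):
    """Recover camera-frame coordinates (x_o, y_o, z_o) from pixel (u, v) and depth z."""
    x_o = (z / fx) * (u - cx)
    y_o = (z / fy) * (v - cy)
    return x_o, y_o, z

x_o, y_o, z_o = back_project(372.0, 213.25, 2.0)
```

Applying this to both frames of a match pair yields the 3-D/2-D correspondences the PnP algorithm consumes.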

S4. To give the back-end global optimization a more stable and accurate pose estimate, perform structure-free iterative optimization of the pose transformation obtained by PnP:

First, take the camera pose as the optimization variable and construct the optimization problem by minimizing the reprojection error.

Then, take the PnP value as the initial value in the original PnP function and call g2o to optimize it.
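The quantity minimized in this step can be sketched as a reprojection-error function; the intrinsics below are assumed placeholder values, and an actual implementation would hand this cost to g2o to minimize over the pose rather than evaluate it directly:

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed placeholder intrinsics

def reprojection_error(R, t, points_3d, pixels):
    """Sum of squared pixel errors ||z - h(xi, p)||^2 for a candidate pose (R, t)."""
    err = 0.0
    for p, (u, v) in zip(points_3d, pixels):
        x, y, z = R @ p + t                     # transform landmark into camera frame
        err += (fx * x / z + cx - u) ** 2 + (fy * y / z + cy - v) ** 2
    return err

R, t = np.eye(3), np.zeros(3)
pts = [np.array([0.2, -0.1, 2.0])]
obs = [(372.0, 213.25)]
e_true = reprojection_error(R, t, pts, obs)                     # near zero at the true pose
e_off = reprojection_error(R, t + np.array([0.05, 0, 0]), pts, obs)   # grows when the pose is perturbed
```

The PnP result seeds the optimizer close to the minimum of this cost, which is why the refinement converges quickly.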

S5. First, preprocess the acquired images: compute the image-similarity score with the similarity function

H = (V_l, V_c) / (||V_l|| ||V_c||)

where V_l is the weighted vector of the previous frame and V_c that of the current frame. If the score exceeds a set threshold, discard the current frame; otherwise the current frame represents a new scene and participates in loop-closure detection. The preprocessing threshold should be set reasonably according to image quality and acquisition rate: the smaller the threshold, the more image frames are deleted and the higher the accuracy of identifying new scenes; but if it is set too low, no loop closure is detected even when the robot revisits an area it has been to, so the threshold must be chosen to suit the situation.
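A sketch of this preprocessing filter, treating H as the cosine similarity of the weighted bag-of-words vectors; the threshold value here is an assumed placeholder to be tuned as described:

```python
import numpy as np

def similarity(v_prev, v_curr):
    """H = (V_l, V_c) / (||V_l|| ||V_c||): cosine similarity of weighted BoW vectors."""
    return float(np.dot(v_prev, v_curr) /
                 (np.linalg.norm(v_prev) * np.linalg.norm(v_curr)))

THRESH = 0.9        # assumed value; tune to image quality and frame rate

def is_new_scene(v_prev, v_curr):
    """Frames too similar to the previous one are discarded; the rest enter loop detection."""
    return similarity(v_prev, v_curr) <= THRESH

v1 = np.array([1.0, 2.0, 0.0, 1.0])
```

Filtering near-duplicate frames keeps the loop-closure database small, which is part of how the method stays real-time.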

Next, a hierarchical k-means clustering algorithm describes the images, and the improved single-node score function below matches similarity scores.

The improved similarity-score function is defined as

S_i^l(X, Y) = min{ω_i^l(X), ω_i^l(Y)} + 1 / (1 + |ω_i^l(X) - ω_i^l(Y)|)

and, with this new single-node score, the layer-l similarity score is

S^l(X, Y) = Σ_i S_i^l(X, Y)

A top-down similarity-increment scheme avoids accumulating similarity repeatedly; the layer-l similarity-score increment is defined as

ΔS^l(X, Y) = S^l(X, Y) - S^(l+1)(X, Y)  for 1 ≤ l < d,  ΔS^d(X, Y) = S^d(X, Y)

and the matching kernel is defined as

K(X, Y) = Σ_{l=1}^{d} η_l ΔS^l(X, Y)

where η_l is the matching-strength coefficient of layer l.
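The layered scoring just described can be sketched as follows. Two details are assumptions of this sketch rather than statements from the patent: the layer score is restricted to nodes where both images have nonzero weight, and the numbers in the usage line are toy values:

```python
def node_score(wx, wy):
    """Improved single-node score: min of the weights plus a bonus that grows as they agree."""
    return min(wx, wy) + 1.0 / (1.0 + abs(wx - wy))

def layer_score(weights_x, weights_y):
    """S^l: sum node scores over nodes where both images have weight (assumption of the sketch)."""
    return sum(node_score(wx, wy)
               for wx, wy in zip(weights_x, weights_y) if wx > 0 and wy > 0)

def matching_kernel(S, eta):
    """K(X, Y) = sum_l eta_l * dS^l with dS^l = S^l - S^(l+1) for l < d and dS^d = S^d."""
    d = len(S)
    total = 0.0
    for l in range(d):                      # index l = 0 .. d-1 stands for layers 1 .. d
        dS = S[l] - S[l + 1] if l < d - 1 else S[l]
        total += eta[l] * dS
    return total

# Toy layer scores S^1..S^3 and uniform matching-strength coefficients
kernel_value = matching_kernel([5.0, 3.0, 2.0], [1.0, 1.0, 1.0])
```

Because each increment counts only the similarity added at its own layer, matches deep in the vocabulary tree are not double-counted by their ancestors.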

Comparing the similarity score with a set threshold screens out the loop-closure candidates; temporal continuity and spatial consistency then select the correct loop closures from among the candidates.

S6. First, to optimize the poses and landmark points computed by the visual odometry, the overall cost function (also called the objective function) is constructed from the observation error

e = z - h(ξ, p)

where ξ is the Lie algebra element corresponding to the camera pose, i.e. the extrinsics (R, t); p is the three-dimensional landmark point; z is the pixel-coordinate data observed by the Kinect; e is the observation error; and z_ij denotes the pixel-coordinate data produced by observing landmark p_j from pose ξ_i.

This least-squares problem is then solved by defining the independent variable as all variables to be optimized:

x = [ξ_1, ..., ξ_m, p_1, ..., p_n]

Following the idea of nonlinear optimization, a descent direction Δx is sought repeatedly to find the optimum of the cost function; when the independent variable receives an increment, the objective function becomes

(1/2) ||f(x + Δx)||² ≈ (1/2) Σ_ij ||e_ij + F_ij Δξ_i + E_ij Δp_j||²

where F_ij and E_ij are, at the current state, the partial derivatives of the error with respect to the camera pose and with respect to the landmark position, respectively.

Finally, the incremental equation

H Δx = g

is solved, where

H = J^T J + λI
J = [F E]

Finally, after multiple iterations the state variable converges to a value x satisfying the objective equation, giving the optimized poses and landmarks.
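Solving the incremental equation can be illustrated on a toy dense least-squares problem. This sketch assumes a fixed damping factor λ and a small dense Jacobian; a real BA back end such as g2o exploits the sparsity of H (for example via the Schur complement over poses and landmarks):

```python
import numpy as np

def lm_step(J, e, lam=1e-3):
    """Solve (J^T J + lambda*I) dx = g with g = -J^T e for the damped update dx."""
    H = J.T @ J + lam * np.eye(J.shape[1])
    g = -J.T @ e
    return np.linalg.solve(H, g)

# Toy linear problem J x ≈ b with residual e = J x - b
J = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = np.zeros(2)
for _ in range(50):              # iterate until the update is negligible
    e = J @ x - b
    x = x + lm_step(J, e)
```

On this consistent system the iterates converge to the exact solution; in BA the same loop runs with the reprojection residuals and their pose/landmark Jacobians.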

The above embodiments should be understood as merely illustrating the invention, not as limiting its scope of protection. After reading the disclosure, a skilled person may make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope defined by the claims of the invention.

Claims (7)

1.一种基于改进闭环检测算法的移动机器人视觉SLAM方法,其特征在于,包括以下步骤:1. a mobile robot visual SLAM method based on improved closed-loop detection algorithm, is characterized in that, comprises the following steps: S1、使用张定友标定法对Kinect进行标定,获取相机内参;S1. Use Zhang Dingyou's calibration method to calibrate Kinect to obtain the camera's internal parameters; S2、对获取的RGB图像进行ORB特征提取,并采用FLANN快速近似最近邻算法进行特征匹配,包括正确匹配结果和误匹配结果;S2. Perform ORB feature extraction on the acquired RGB image, and use the FLANN fast approximate nearest neighbor algorithm to perform feature matching, including correct matching results and incorrect matching results; S3、针对步骤S2的误匹配结果,利用RANSAC随机采样一致性删除误匹配;通过Kinect相机模型获取匹配点空间坐标,然后采用n点透视问题算法PnP估计出帧间位姿变换(R,t);S3. For the mismatching result of step S2, use RANSAC random sampling consistency to delete the mismatching; obtain the spatial coordinates of the matching point through the Kinect camera model, and then use the n-point perspective problem algorithm PnP to estimate the inter-frame pose transformation (R, t) ; S4、对步骤S3采用PnP算法求取的位姿变换进行无结构的迭代优化;S4. Perform unstructured iterative optimization on the pose transformation calculated by the PnP algorithm in step S3; S5、通过对步骤S2获取的图像帧进行预处理,通过视觉词袋来对图像进行描述,并用一个改进的相似性得分匹配方法进行图像匹配,得出的匹配更符合图像间实际相似度,从而获取候选闭环,同时,利用时间连续性和空间一致性筛选出正确的闭环;S5. By preprocessing the image frame obtained in step S2, the image is described through the bag of visual words, and an improved similarity score matching method is used for image matching, and the matching obtained is more in line with the actual similarity between images, thereby Obtain candidate closed loops, and at the same time, use time continuity and spatial consistency to filter out the correct closed loops; S6、采用以集束调整BA为核心的图优化方法来对位姿和路标进行优化,通过不断的迭代优化获取更精确的相机位姿和路标。S6. Use the graph optimization method with bundle adjustment BA as the core to optimize the pose and landmarks, and obtain more accurate camera poses and landmarks through continuous iterative optimization. 
2.根据权利要求1所述的移动机器人视觉同步定位与地图构建方法,其特征在于,步骤S2中的ORB特征提取步骤为:2. mobile robot vision synchronous positioning and map construction method according to claim 1, is characterized in that, the ORB feature extraction step in the step S2 is: 首先,对RGB图像进行FAST角点提取,并利用构建图像金字塔和灰度质心法让ORB特征点具有尺度和旋转的不变性;First, FAST corner point extraction is performed on the RGB image, and the ORB feature points are invariant to scale and rotation by using the image pyramid and gray-scale centroid method; 再次,通过BRIEF对前一步提取出的特征点周围图像进行二进制描述。Again, the binary description of the surrounding image of the feature points extracted in the previous step is performed through BRIEF. 3.根据权利要求2所述的移动机器人视觉同步定位与地图构建方法,其特征在于,所述步骤S3中通过Kinect相机模型获取匹配点空间坐标,空间坐标求解方法为:3. mobile robot vision synchronous location and map construction method according to claim 2, it is characterized in that, in described step S3, obtain matching point spatial coordinates by Kinect camera model, spatial coordinate solution method is: <mrow> <msub> <mi>x</mi> <mi>o</mi> </msub> <mo>=</mo> <mfrac> <mi>z</mi> <msub> <mi>f</mi> <mi>x</mi> </msub> </mfrac> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <msub> <mi>c</mi> <mi>x</mi> </msub> <mo>)</mo> </mrow> </mrow> <mrow><msub><mi>x</mi><mi>o</mi></msub><mo>=</mo><mfrac><mi>z</mi><msub><mi>f</mi><mi>x</mi></msub></mfrac><mrow><mo>(</mo><mi>u</mi><mo>-</mo><msub><mi>c</mi><mi>x</mi></msub><mo>)</mo></mrow></mrow> <mrow> <msub> <mi>y</mi> <mi>o</mi> </msub> <mo>=</mo> <mfrac> <mi>z</mi> <msub> <mi>f</mi> <mi>y</mi> </msub> </mfrac> <mrow> <mo>(</mo> <mi>v</mi> <mo>-</mo> <msub> <mi>c</mi> <mi>y</mi> </msub> <mo>)</mo> </mrow> </mrow> <mrow><msub><mi>y</mi><mi>o</mi></msub><mo>=</mo><mfrac><mi>z</mi><msub><mi>f</mi><mi>y</mi></msub></mfrac><mrow><mo>(</mo><mi>v</mi><mo>-</mo><msub><mi>c</mi><mi>y</mi></msub><mo>)</mo></mrow></mrow> zo=zz o = z 若Kinect获得图像中像素坐标(u,v)对应的深度值为z,则根据以上Kinect相机模型可以获得空间坐标(xo,yo,zo)。If Kinect obtains the depth value z corresponding to the pixel coordinates (u, v) in the image, then the 
space coordinates (x_o, y_o, z_o) can be obtained according to the above Kinect camera model.

4. The mobile robot visual SLAM method based on an improved closed-loop detection algorithm according to claim 3, characterized in that, in step S4, the pose transformation obtained by PnP is optimized as follows:

First, with the camera pose as the optimization variable, an optimization problem is constructed by minimizing the reprojection error;

Then, the PnP result is taken as the initial value in the original PnP function, and g2o is called to perform the optimization.

5. The mobile robot visual SLAM method based on an improved closed-loop detection algorithm according to claim 4, characterized in that the candidate closed-loop detection of step S5 comprises:

First, the acquired image is preprocessed and an image similarity score is obtained with the similarity function below; when the score is greater than a set threshold, the current frame is discarded; otherwise, the current frame represents a new scene and participates in closed-loop detection:

H = \frac{(V_l, V_c)}{\|V_l\| \, \|V_c\|}

where V_l is the weighted vector of the previous frame, V_c is the weighted vector of the current frame, and H is the image similarity;

Next, the image is described with a hierarchical K-means clustering algorithm, and image similarity scores are matched with the improved single-node score function; computing similarity increments from the top of the tree downward avoids accumulating the same similarity repeatedly, and the similarity score increment of layer l is defined as:

\Delta S^l(X, Y) =
\begin{cases}
S^l(X, Y) - S^{l+1}(X, Y), & 1 \le l < d \\
S^d(X, Y), & l = d
\end{cases}

The matching kernel is defined as:

K(X, Y) = \sum_{l=1}^{d} \eta_l \, \Delta S^l(X, Y)

where \eta_l is the matching strength coefficient of layer l.
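As a minimal sketch (not part of the claim text), the frame pre-filter and the matching kernel above could be computed as follows; the flat list holding the per-layer scores S^l and the use of NumPy vectors for V_l and V_c are illustrative assumptions:

```python
import numpy as np

def similarity_score(v_prev, v_curr):
    """H = (V_l, V_c) / (||V_l|| ||V_c||): cosine similarity of the
    weighted vectors of the previous and current frames."""
    return float(np.dot(v_prev, v_curr) /
                 (np.linalg.norm(v_prev) * np.linalg.norm(v_curr)))

def matching_kernel(layer_scores, eta):
    """K(X, Y) = sum_l eta_l * dS^l(X, Y), where dS^l = S^l - S^(l+1)
    for 1 <= l < d and dS^d = S^d, so similarity already counted at a
    deeper layer is not accumulated again.
    layer_scores[l - 1] holds S^l(X, Y) for l = 1..d."""
    d = len(layer_scores)
    total = 0.0
    for l in range(1, d + 1):
        delta = layer_scores[l - 1] - layer_scores[l] if l < d else layer_scores[d - 1]
        total += eta[l - 1] * delta
    return total
```

Note that with all η_l equal to 1 the increments telescope and K collapses to the top-layer score S^1; unequal coefficients let deeper (more discriminative) layers contribute more.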
6. The mobile robot visual SLAM method based on an improved closed-loop detection algorithm according to claim 5, characterized in that the improved similarity score function is defined as:

S_i^l(X, Y) = \min\{\omega_i^l(X), \omega_i^l(Y)\} + \frac{1}{1 + \left|\omega_i^l(X) - \omega_i^l(Y)\right|}

With this new single-node similarity score function, the similarity score of layer l is:

S^l(X, Y) = \sum_{i=1}^{k^l} S_i^l(X, Y) = \sum_{i=1}^{k^l} \min\{\omega_i^l(X), \omega_i^l(Y)\} + \sum_{i=1}^{k^l} \frac{1}{1 + \left|\omega_i^l(X) - \omega_i^l(Y)\right|}

where \omega_i^l(X) and \omega_i^l(Y) are the scores of images X and Y, respectively, at the i-th node of layer l of the tree.

7.
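A short sketch of the improved single-node score and the resulting layer score; representing the per-node scores ω_i^l(X) and ω_i^l(Y) as two parallel lists of weights is an assumption for illustration:

```python
def node_score(w_x, w_y):
    """S_i^l(X, Y) = min{w_x, w_y} + 1 / (1 + |w_x - w_y|): the usual
    min term plus a term that rewards nearly equal node weights."""
    return min(w_x, w_y) + 1.0 / (1.0 + abs(w_x - w_y))

def layer_score(weights_x, weights_y):
    """S^l(X, Y): sum of the single-node scores over the k^l nodes of
    layer l; weights_x[i] is w_i^l(X) and weights_y[i] is w_i^l(Y)."""
    return sum(node_score(wx, wy) for wx, wy in zip(weights_x, weights_y))
```

The second term is maximal (equal to 1) when the two images give a node identical weight, so matching node scores are rewarded beyond the plain min-intersection used in a standard vocabulary tree.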
The mobile robot visual SLAM method based on an improved closed-loop detection algorithm according to claim 5 or 6, characterized in that optimizing the poses and landmarks with a graph optimization method centered on bundle adjustment (BA) specifically comprises:

First, the overall cost function over the poses and landmarks, i.e. the objective function, is constructed as:

\frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \|e_{ij}\|^2 = \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \|z_{ij} - h(\xi_i, p_j)\|^2

e = z - h(\xi, p)

where \xi is the Lie algebra corresponding to the camera pose, i.e. the extrinsic parameters (R, t); p is the three-dimensional coordinate of a landmark; z is the pixel-coordinate data observed by the Kinect; e is the observation error; and z_{ij} is the pixel-coordinate data produced by observing landmark p_j at pose \xi_i;

Then this least-squares problem is solved, with the state vector defined as all variables to be optimized:

x = [\xi_1, \ldots, \xi_m, p_1, \ldots, p_n]

Following the idea of nonlinear optimization, the optimal solution of the cost function is found by repeatedly searching for a descent direction \Delta x; when the state is given an increment, the objective function becomes:

\frac{1}{2} \|f(x + \Delta x)\|^2 \approx \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \|e_{ij} + F_{ij} \Delta \xi_i + E_{ij} \Delta p_j\|^2

where F_{ij} and E_{ij} are the partial derivatives of the cost at the current state with respect to the camera pose and the landmark position, respectively;

Finally, the following incremental equation is solved:

H \Delta x = g

where g is equivalent to a constant at the current state,

H = J^T J + \lambda I

and J = [F \; E], where the matrices E and F are the derivatives of the overall objective function with respect to the overall variables.
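The incremental equation at the end of claim 7 matches a standard damped Gauss-Newton (Levenberg-Marquardt) step; a minimal dense sketch, assuming the common convention g = -Jᵀe for the right-hand side (the claim only states that g is a constant):

```python
import numpy as np

def lm_step(J, e, lam):
    """Solve H * dx = g with H = J^T J + lam * I and g = -J^T e.
    J stacks the Jacobian blocks [F E] (pose and landmark partials)
    and e stacks the reprojection errors e_ij."""
    H = J.T @ J + lam * np.eye(J.shape[1])
    g = -J.T @ e
    return np.linalg.solve(H, g)
```

With lam = 0 this reduces to a pure Gauss-Newton step, while a large lam biases the update toward scaled gradient descent; BA back ends such as g2o exploit the sparse block structure of H rather than forming it densely as done here.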
CN201710839954.5A 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm Pending CN107680133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710839954.5A CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710839954.5A CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Publications (1)

Publication Number Publication Date
CN107680133A true CN107680133A (en) 2018-02-09

Family

ID=61135809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710839954.5A Pending CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Country Status (1)

Country Link
CN (1) CN107680133A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102706342A (en) * 2012-05-31 2012-10-03 重庆邮电大学 Location and environment modeling method of intelligent movable robot
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106556395A (en) * 2016-11-17 2017-04-05 北京联合大学 A kind of air navigation aid of the single camera vision system based on quaternary number
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ADRIEN ANGELI ET AL: "Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words", IEEE Transactions on Robotics *
FELIX ENDRES ET AL: "3-D Mapping With an RGB-D Camera", IEEE Transactions on Robotics *
LIU GUOZHONG ET AL: "Fast Loop-Closure Detection Based on SURF and ORB Global Features", Robot *
ZHANG YI ET AL: "Graph-Optimization-Based Visual SLAM for Mobile Robots", CAAI Transactions on Intelligent Systems *
CAO MINGWEI ET AL: "A Fast and Robust Bundle Adjustment Algorithm for Large-Scale 3D Reconstruction", Journal of Computer-Aided Design & Computer Graphics *
LI YONGFENG ET AL: "An Improved Loop-Closure Detection Algorithm Based on a Historical Model Set", Robot *
CHEN CHAO ET AL: "Research on Visual Loop-Closure Detection for Mobile Robots Based on Scene Appearance Modeling", Industry and Technology Forum *

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110462683B (en) * 2018-03-06 2022-04-12 斯坦德机器人(深圳)有限公司 Method, terminal and computer readable storage medium for tightly coupling visual SLAM
CN110462683A (en) * 2018-03-06 2019-11-15 斯坦德机器人(深圳)有限公司 Tightly coupled visual SLAM method, terminal and computer-readable storage medium
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN108648274A (en) * 2018-05-10 2018-10-12 华南理工大学 A kind of cognition point cloud map creation system of vision SLAM
CN108648274B (en) * 2018-05-10 2020-05-22 华南理工大学 A cognitive point cloud map creation system for visual SLAM
CN108830191A (en) * 2018-05-30 2018-11-16 上海电力学院 Based on the mobile robot SLAM method for improving EMM and ORB algorithm
CN108830191B (en) * 2018-05-30 2022-04-01 上海电力学院 Mobile robot SLAM method based on improved environment measurement module EMM and ORB algorithm
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109509211B (en) * 2018-09-28 2021-11-16 北京大学 Feature point extraction and matching method and system in simultaneous positioning and mapping technology
CN109556611A (en) * 2018-11-30 2019-04-02 广州高新兴机器人有限公司 A kind of fusion and positioning method based on figure optimization and particle filter
CN109556611B (en) * 2018-11-30 2020-11-10 广州高新兴机器人有限公司 Fusion positioning method based on graph optimization and particle filtering
CN109766758A (en) * 2018-12-12 2019-05-17 北京计算机技术及应用研究所 A kind of vision SLAM method based on ORB feature
CN111322993B (en) * 2018-12-13 2022-03-04 杭州海康机器人技术有限公司 Visual positioning method and device
CN111322993A (en) * 2018-12-13 2020-06-23 杭州海康机器人技术有限公司 Visual positioning method and device
CN109676604B (en) * 2018-12-26 2020-09-22 清华大学 Robot surface kinematic positioning method and kinematic positioning system
CN109676604A (en) * 2018-12-26 2019-04-26 清华大学 Robot non-plane motion localization method and its motion locating system
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN109800692B (en) * 2019-01-07 2022-12-27 重庆邮电大学 Visual SLAM loop detection method based on pre-training convolutional neural network
CN109902619A (en) * 2019-02-26 2019-06-18 上海大学 Image closed-loop detection method and system
CN109934857A (en) * 2019-03-04 2019-06-25 大连理工大学 Loop detection method based on convolutional neural network and ORB characteristics
CN109934857B (en) * 2019-03-04 2021-03-19 大连理工大学 A Loop Closure Detection Method Based on Convolutional Neural Network and ORB Features
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A Visual Navigation Method for Indoor Mobile Robots Based on Improved Point Cloud Matching Algorithm
CN110119144B (en) * 2019-04-19 2022-04-22 苏州大学 Multi-robot SLAM algorithm based on sub-map feature matching
CN110119144A (en) * 2019-04-19 2019-08-13 苏州大学 Based on the matched multirobot SLAM algorithm of sub- map feature
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning
CN110322549A (en) * 2019-06-12 2019-10-11 清华大学 A kind of method and system of the three-dimensional reconstruction based on image
WO2020259185A1 (en) * 2019-06-25 2020-12-30 京东方科技集团股份有限公司 Method and apparatus for implementing visual odometer
CN110335315A (en) * 2019-06-27 2019-10-15 Oppo广东移动通信有限公司 An image processing method and device, and a computer-readable storage medium
CN110335315B (en) * 2019-06-27 2021-11-02 Oppo广东移动通信有限公司 An image processing method and device, and a computer-readable storage medium
CN112304311A (en) * 2019-07-29 2021-02-02 南京理工大学 A method for solving BA problem based on differential evolution algorithm for SLAM process
CN112304311B (en) * 2019-07-29 2023-08-22 南京理工大学 Method for solving BA problem based on differential evolution algorithm for SLAM process
CN110827353B (en) * 2019-10-18 2023-03-28 天津大学 Robot positioning method based on monocular camera assistance
CN110827353A (en) * 2019-10-18 2020-02-21 天津大学 A robot positioning method based on monocular camera assistance
CN111024078B (en) * 2019-11-05 2021-03-16 广东工业大学 Unmanned aerial vehicle vision SLAM method based on GPU acceleration
CN111024078A (en) * 2019-11-05 2020-04-17 广东工业大学 UAV Vision SLAM Method Based on GPU Acceleration
CN111044039B (en) * 2019-12-25 2024-03-19 中航华东光电有限公司 Monocular target area self-adaptive high-precision distance measurement device and method based on IMU
CN111044039A (en) * 2019-12-25 2020-04-21 中航华东光电有限公司 Monocular target area self-adaptive high-precision distance measuring device and method based on IMU
CN111160373A (en) * 2019-12-30 2020-05-15 重庆邮电大学 A method for image feature extraction and detection and classification of defects in variable speed drum parts
WO2021147549A1 (en) * 2020-01-20 2021-07-29 深圳市普渡科技有限公司 Closed-loop detection method and system, multi-sensor fusion slam system, robot, and medium
CN111275763A (en) * 2020-01-20 2020-06-12 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN111275763B (en) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN112364881A (en) * 2020-04-01 2021-02-12 武汉理工大学 Advanced sampling consistency image matching algorithm
CN112364881B (en) * 2020-04-01 2022-06-28 武汉理工大学 An Advanced Sampling Consistency Image Matching Method
CN111582123B (en) * 2020-04-29 2023-02-14 华南理工大学 AGV positioning method based on beacon identification and visual SLAM
CN111582123A (en) * 2020-04-29 2020-08-25 华南理工大学 AGV positioning method based on beacon identification and visual SLAM
CN111882663A (en) * 2020-07-03 2020-11-03 广州万维创新科技有限公司 Visual SLAM closed-loop detection method achieved by fusing semantic information
CN112560648B (en) * 2020-12-09 2023-04-07 长安大学 SLAM method based on RGB-D image
CN112560648A (en) * 2020-12-09 2021-03-26 长安大学 SLAM method based on RGB-D image
CN113010724A (en) * 2021-04-29 2021-06-22 山东新一代信息产业技术研究院有限公司 Robot map selection method and system based on visual feature point matching
CN113743413B (en) * 2021-07-30 2023-12-01 的卢技术有限公司 Visual SLAM method and system combining image semantic information
CN113743413A (en) * 2021-07-30 2021-12-03 的卢技术有限公司 Visual SLAM method and system combining image semantic information
CN114418927A (en) * 2021-11-09 2022-04-29 四川大学 A closed-loop detection method and system based on spatial relationship feature matching
CN114111803A (en) * 2022-01-26 2022-03-01 中国人民解放军战略支援部队航天工程大学 A Visual Navigation Method for Indoor Satellite Platform
CN114603555A (en) * 2022-02-24 2022-06-10 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114603555B (en) * 2022-02-24 2023-12-08 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114694013A (en) * 2022-04-11 2022-07-01 北京理工大学 Distributed multi-machine cooperative vision SLAM method and system
CN115235505A (en) * 2022-07-12 2022-10-25 重庆邮电大学 Visual odometer method based on nonlinear optimization
CN115272494A (en) * 2022-09-29 2022-11-01 腾讯科技(深圳)有限公司 Calibration method and device for camera and inertial measurement unit and computer equipment
CN117115238A (en) * 2023-04-12 2023-11-24 荣耀终端有限公司 A method, electronic device and storage medium for determining posture

Similar Documents

Publication Publication Date Title
CN107680133A (en) A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN109631855B (en) High-precision vehicle localization method based on ORB-SLAM
CN110223348B (en) Adaptive pose estimation method for robot scene based on RGB-D camera
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN110108258B (en) Monocular vision odometer positioning method
CN107301654B (en) A multi-sensor high-precision real-time localization and mapping method
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN106940704B (en) Positioning method and device based on grid map
CN104732518B (en) An Improved Method of PTAM Based on Ground Features of Intelligent Robot
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN110490928A (en) A kind of camera Attitude estimation method based on deep neural network
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN108460779A (en) A method for image vision positioning of mobile robot in dynamic environment
CN110146099A (en) A Synchronous Localization and Map Construction Method Based on Deep Learning
CN103413352A (en) 3D scene reconstruction method based on RGBD multi-sensor fusion
CN104036524A (en) Fast target tracking method with improved SIFT algorithm
CN111368759B (en) Monocular vision-based mobile robot semantic map construction system
CN108416385A (en) It is a kind of to be positioned based on the synchronization for improving Image Matching Strategy and build drawing method
CN111105460B (en) A RGB-D Camera Pose Estimation Method for 3D Reconstruction of Indoor Scenes
CN111998862B (en) BNN-based dense binocular SLAM method
CN111161318A (en) Dynamic scene SLAM method based on YOLO algorithm and GMS feature matching
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
CN114663880B (en) Three-dimensional object detection method based on multi-level cross-modal self-attention mechanism
Gao et al. Pose refinement with joint optimization of visual points and lines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180209