CN110060277A - A kind of vision SLAM method of multiple features fusion - Google Patents
A kind of vision SLAM method of multiple features fusion
- Publication number
- CN110060277A (application number CN201910357796.9A)
- Authority
- CN
- China
- Prior art keywords
- feature
- point
- line
- features
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
A multi-feature-fusion visual SLAM method, relating to the field of robot visual localization and mapping. The invention discloses a depth-camera-based visual SLAM method with multi-feature fusion that solves the visual localization problem when pure point features fail, by making full use of the point and line features extracted from the images and constructing plane features from them. An adaptive-threshold method is used to extract point features so that they are distributed more uniformly; line features are extracted, short segments are removed and split segments are merged, improving the accuracy of line-feature matching; the point and line features are used for inter-frame pose estimation and for building the local map; plane features are computed with a minimal parameterization to reduce the amount of computation; and, by constructing a back-projection error function over the fused features, the point, line and plane features are tightly coupled and a global map is built for global pose optimization. The invention is a visual SLAM method with high accuracy, good real-time performance and strong robustness, and it solves the problem that feature-point-based visual SLAM loses accuracy or even fails in low-texture environments.
Description
Technical field:
The invention belongs to the technical field of simultaneous localization and mapping (SLAM) for robots, and in particular relates to a multi-feature-fusion SLAM method.
Background art:
With the development of visual SLAM technology, inter-frame optimization and graph optimization have become the mainstream frameworks for the visual SLAM problem. The graph-optimization framework introduces motion estimation and bundle adjustment into visual SLAM: the robot pose and the surrounding environment features are treated as one global optimization problem; features extracted from the images are tracked, an error function is built, and either a linearized optimization derived from linearity assumptions or a direct nonlinear optimization is solved to obtain the robot pose that minimizes the error function while the landmarks are optimized at the same time. Earlier structure-from-motion (SfM) methods spent too much time on feature extraction, matching and the subsequent optimization, so they could only perform pose optimization and 3D reconstruction offline and could not localize and map online in real time. With the discovery of the sparsity of bundle adjustment in recent years and the upgrading of computer hardware, the time consumed by each stage has dropped greatly, so visual SLAM based on the graph-optimization framework can achieve real-time localization and mapping.
Visual SLAM methods fall mainly into two categories: intensity-based direct methods and feature-based indirect methods. Direct methods perform well in some situations: they need no feature extraction or matching and only track intensity gradients, which saves a large amount of computation and helps maintain the continuity of visual SLAM in regions with few features. At the same time, because direct methods rely only on pixel gradients for inter-frame matching and motion tracking, they are strongly affected by illumination changes and noise. Feature-based indirect methods do consume some computing resources, but with better hardware this cost has dropped greatly; the extracted features are hand-designed and relatively stable, and they support feature tracking, mapping and loop-closure detection, covering the whole process of simultaneous localization and mapping. Feature-based visual SLAM has therefore gradually become the mainstream of SLAM research.
Summary of the invention:
The present invention addresses the problems that, in low-texture environments, the point features computed by existing point-feature-based visual SLAM algorithms cannot be tracked stably, the constraints derived from the point features weaken, and an accurate environment map cannot be built. To this end, the invention provides a multi-feature-fusion visual SLAM method.
The SLAM problem is first modeled in terms of the inter-frame motion and its uncertainty, ζt,t+1 ~ N(ζ̄t,t+1, Ωζt,t+1):
ζt,t+1 is a six-dimensional vector on SE(3) representing the motion of the robot from time t to t+1, and Ωζt,t+1 is the covariance matrix of the motion estimate, approximated by the inverse of the Hessian matrix of the loss function from the previous iteration.
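How this covariance can be obtained is implied by the Hessian-based approximation above: at convergence of the motion estimate, the normal-equation matrix of the last Gauss-Newton iteration is inverted. A minimal sketch of that relation in LaTeX is given below; the residual vector r, Jacobian J and weight matrix W are illustrative symbols, not notation from the patent:

% last Gauss-Newton iteration of the inter-frame motion estimate
%   H \, \delta\zeta = - J^{\top} W r, \qquad H \approx J^{\top} W J
% the motion covariance is then taken as the inverse of this Hessian
\Omega_{\zeta_{t,t+1}} \approx H^{-1} = \left( J^{\top} W J \right)^{-1}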
The technical solution adopted by the present invention is a visual SLAM method based on the fusion of point, line and plane features extracted from images, characterized by comprising the following steps:
Step 1: calibrate the depth camera and determine the camera intrinsics;
Step 2: apply Gaussian filtering to the video stream captured by the camera on the mobile-robot platform, to reduce the influence of noise on the subsequent steps;
Step 3: extract features from the corrected images, obtaining the point and line features of every frame online. Specifically, FAST corner features are extracted from each frame and described with the rBRIEF descriptor; line features are detected with the LSD algorithm and described with the LBD descriptor; the geometric information provided by the segments is used to screen out segments that differ in direction and length or whose endpoints are far apart;
Step 4: construct plane features from the extracted point and line features. Candidate plane features are constructed in three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines;
Step 5: match the point and line features between the current frame and the previous frame, obtain the matched point/line pairs, and establish the geometric correspondence between the frames;
Step 6: back-project the features according to the inter-frame correspondences obtained in Step 5 and minimize the feature back-projection error with the Gauss-Newton method; to handle outliers, a two-step minimization with a pseudo-Huber cost function is performed, giving the incremental motion between two consecutive frames;
Step 7: select keyframes. The uncertainty of the covariance matrix is converted into an entropy by
h(ζ) = 3(1 + log(2π)) + 0.5 log(|Ωζ|),
where Ω is the covariance matrix of the incremental motion, and the ratio α = h(ζi,i+u) / h(ζi,i+1) is computed.
If α < 0.9 and the number of features shared by the current frame and the previous keyframe exceeds 300, the frame is added to the keyframe list; otherwise return to Step 2 and continue processing the incoming image data;
Step 8: build a local map following the graph-based formulation. The local map consists of all keyframes associated with the current frame and the features observed from those keyframes. Define ψ as the set of variables in se(3), including the incremental motion between every two consecutive frames, the spatial coordinates of each point feature, the spatial coordinates of the line-feature endpoints, and the minimal representation vector of each plane; the distance between the observations and the projected landmarks is then minimized to build an accurate local map;
Step 9: determine whether the robot trajectory forms a loop. If it does, the local maps at the two ends are corrected, again with graph optimization: the robot poses represented by the keyframes serve as nodes and the constraints between the two local maps serve as edges for correcting the trajectory. Otherwise, check for a stop signal; if no stop signal has been received, return to Step 2 and continue the visual SLAM process, otherwise go to Step 10;
Step 10: stop.
Further, Step 1 is implemented as follows:
Several images of a fixed-size checkerboard are captured in advance from different viewpoints; the camera intrinsics are then computed from the captured checkerboard images with Zhang Zhengyou's calibration method, giving the calibration result.
Further, Step 3 is implemented as follows:
Step 3.1: extract the FAST corner features of each frame with the M16 template shown in Fig. 2; a pixel is a feature point if 12 consecutive points on the circle are all brighter than Ip + T or all darker than Ip − T. An adaptive threshold T is first computed from the mean gray value of all pixels on the current circle. The gray values of points 1, 5, 9 and 13 are compared with T, and only when at least three of these points satisfy the condition are the remaining 12 points evaluated. The orientation of a feature point is determined with the gray-centroid method: the centroid position C is computed in a neighborhood S whose moments are defined as
mpq = Σ(x,y)∈S x^p y^q I(x, y),
the centroid is
C = (m10/m00, m01/m00),
and the orientation of the feature point is
θ = atan2(m01, m10).
The extracted corner features are then described with the rotation-invariant rBRIEF descriptor;
Step 3.2: detect candidate line-segment features in the image with the LSD algorithm and merge segments that are likely to belong to the same straight line: for any two line features, the angle between them and the sum of the distances from each segment's midpoint to the other segment are computed, and if these are smaller than the thresholds Tlang and Tldis respectively, the two segments are considered the same line and are merged; the refined line features are then described with the LBD algorithm to obtain the line descriptors.
Further, Step 4 is implemented as follows:
Step 4.1: compute the spatial coordinates of the point and line features extracted in Step 3; in particular, after the spatial coordinates of a line feature are obtained, the equation of the spatial line it belongs to is computed. The features are then divided into two sets: point features form the set Sp and line features form the set Sl;
Step 4.2: construct candidate plane features in three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines; and compute the general equation of the candidate plane: p1x + p2y + p3z = p4;
Step 4.3: normalize the coefficients of the plane equation:
||[p1, p2, p3, p4]T|| = 1,
so that a plane is represented by three quantities: p = [p1, p2, p3]T;
Further, Step 5 is implemented as follows:
Step 5.1: match the point, line and plane features obtained in Steps 3 and 4. Point features are sorted in ascending order of descriptor distance and the best 60% of pairs are kept as matches; line features are likewise sorted in ascending order of descriptor distance and the best 50% of pairs are kept; for plane features, a distance measure is computed,
the candidates are sorted in ascending order, and the best 3-5 pairs are kept;
Step 5.2: use RANSAC to reject wrong matches from the obtained matching pairs;
Further, Step 6 is implemented as follows:
Step 6.1: perform 3D reconstruction from the matched features to obtain their 3D coordinates, re-project them according to the geometric relationship between adjacent frames, compute the feature reprojection errors, and minimize the projection errors of the points and lines with the Gauss-Newton method;
Step 6.2: apply a pseudo-Huber loss function and perform a two-step minimization, finally obtaining the incremental motion between two consecutive keyframes.
Further, Step 7 is implemented as follows:
Step 7.1: convert the uncertainty in the covariance matrix into a scalar entropy:
h(ζ) = 3(1 + log(2π)) + 0.5 log(|Ωζ|),
where Ω is the covariance matrix of the incremental motion. For a given input frame, the entropy of the motion estimate between the last keyframe i and the current frame i+u is checked against that of the first frame after the keyframe, i+1, by computing the ratio α = h(ζi,i+u) / h(ζi,i+1).
If α falls below a threshold, set to 0.9 in this method, and the number of features shared by the current frame and the last keyframe exceeds 300, frame i+u is inserted into the system as a new keyframe;
Step 7.2: since ζt,t+1 is only an estimate between adjacent frames, this series of estimates is composed through first-order Gaussian propagation to obtain the covariance between two non-consecutive frames.
Further, Step 8 is implemented as follows:
Step 8.1: refine the estimates of the relative pose changes between the current frame and all previous keyframes and perform data association, using the incremental motion obtained in Step 6 as the initial input of the Gauss-Newton optimization;
Step 8.2: define ψ as the set of variables in se(3), including the incremental motion between every two consecutive frames, the spatial coordinates of each point feature, the spatial coordinates of the line-feature endpoints, and the minimal representation vector of each plane, and minimize the reprojection error:
where Kl, Pl, Ll and Ml denote the local-map keyframe set, the point-feature set, the line-feature set and the plane-feature set respectively, and Ω is the covariance matrix of the incremental motion. The point-feature projection error eij is the two-dimensional distance observed for the j-th map point in frame i, computed as
eij = xij − π(ζiw, Xwj),
where π denotes the mapping from se(3) to SE(3) and Xwj is the 3D spatial coordinate of the point feature. The projection error of a line feature differs from that of a point feature: the endpoints of the 3D line segment are projected onto the image plane, and their distances to the corresponding infinite line in the image plane form the error function; eik is computed from
the endpoint coordinates P and Q of the line feature and the equation lik of the line in the image plane, given by Step 3. The projection error eim of a plane feature is computed from the logarithmic map of quaternions:
eim = log{q[T(ζiw)T p(ζim)]−1 q(ζiw)},
where p(ζim) denotes the plane, given by Step 4, q(·) is a quaternion, and T(ζiw) is the homogeneous form of a transformation matrix. A second update stage is then carried out with the Levenberg-Marquardt optimization method to obtain the optimal incremental motion.
Further, Step 9 is implemented as follows:
Step 9.1: compute BRIEF binary descriptions of the extracted line features; the resulting line descriptors, together with the point descriptors, form the visual word vector of the current frame, which is matched against a visual bag-of-words model built offline in advance;
Step 9.2: if the matching succeeds, the two local maps that contain the two keyframes are optimized and merged, and the local maps connected with them are updated. Within the graph-optimization framework the inter-frame incremental motions serve as the graph nodes and the edges are constructed as in Step 8; the iterative update is carried out according to an update rule in which the log and exp functions denote the logarithmic map from the Lie group to the Lie algebra and the exponential map from the Lie algebra to the Lie group, respectively;
Step 9.3: if the matching fails, check whether a stop signal is present; if no stop signal has been received, return to Step 2 and continue running, otherwise go to Step 10.
The present invention has the following beneficial effects:
1. Tightly coupling multiple features effectively solves the problems that the accuracy of feature-based systems drops sharply, or the system fails altogether, in low-texture environments;
2. The method localizes the mobile platform well in structured indoor environments and builds surrounding-environment features that carry structural information; it obtains high-accuracy results on a variety of public experimental datasets, uses the matched point, line and plane features to map the pose of the mobile platform and the surrounding environment in real time and efficiently, and performs loop-closure detection, making full use of the point and line features to reduce accumulated error.
Description of the drawings:
Fig. 1 is a flowchart of the multi-feature-fusion visual SLAM method of the embodiment;
Fig. 2 is a schematic diagram of the M16 template of the FAST feature used in the SLAM method;
Fig. 3 is the probabilistic graph model, based on the graph-optimization idea, used in the SLAM method;
Fig. 4 is a schematic diagram of the plane features used in the SLAM method;
Fig. 5 is a schematic diagram of the reprojection of the point and line features in the SLAM method.
Detailed description of the embodiments:
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
The technical solution adopted by the present invention is a visual SLAM method based on multi-feature fusion; the implementation flow is shown in Fig. 1 and mainly comprises the following steps:
Step 1: calibrate the depth camera and determine the camera intrinsics;
Step 2: apply Gaussian filtering to the video stream captured by the camera on the mobile-robot platform, to reduce the influence of noise on the subsequent steps;
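As a concrete illustration of this preprocessing step, a minimal sketch in Python with OpenCV is given below; the 5x5 kernel and the sigma value are illustrative assumptions and are not fixed by the method:

import cv2

def preprocess_frame(frame_bgr):
    # Gaussian filtering suppresses sensor noise before feature extraction (Step 2).
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), sigmaX=1.0)
    # subsequent point/line extraction works on the gray image
    return cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)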
Step 3: extract features from the corrected images, obtaining the point and line features of every frame online. Specifically, FAST corner features are extracted from each frame and described with the rBRIEF descriptor; line features are detected with the LSD algorithm and described with the LBD descriptor; the geometric information provided by the segments is used to screen out segments that differ in direction and length or whose endpoints are far apart;
Step 4: construct plane features from the extracted point and line features. Candidate plane features are constructed in three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines;
Step 5: match the point and line features between the current frame and the previous frame, obtain the matched point/line pairs, and establish the geometric correspondence between the frames;
Step 6: back-project the features according to the inter-frame correspondences obtained in Step 5 and minimize the feature back-projection error with the Gauss-Newton method; to handle outliers, a two-step minimization with a pseudo-Huber cost function is performed, giving the incremental motion between two consecutive frames;
Step 7: select keyframes. The uncertainty of the covariance matrix is converted into an entropy by the following formula:
h(ζ) = 3(1 + log(2π)) + 0.5 log(|Ωζ|),
and the ratio α = h(ζi,i+u) / h(ζi,i+1) is computed.
If α < 0.9, the image is added to the keyframe list; otherwise return to Step 2 and continue processing the input image data;
Step 8: build a local map following the graph-based formulation. The local map consists of all keyframes associated with the current frame and the features observed from those keyframes. Define ψ as the set of variables in se(3), including the incremental motion between every two consecutive frames, the spatial coordinates of each point feature, the spatial coordinates of the line-feature endpoints, and the minimal representation vector of each plane; the distance between the observations and the projected landmarks is then minimized to build an accurate local map;
Step 9: determine whether the robot trajectory forms a loop. If it does, the local maps at the two ends are corrected, again with graph optimization: the robot poses represented by the keyframes serve as nodes and the constraints between the two local maps serve as edges for correcting the trajectory. Otherwise, check for a stop signal; if no stop signal has been received, return to Step 2 and continue the visual SLAM process, otherwise go to Step 10;
Step 10: stop.
Further, Step 1 is implemented as follows:
Several images of a fixed-size checkerboard are captured in advance from different viewpoints; the camera intrinsics are then computed from the captured checkerboard images with Zhang Zhengyou's calibration method, giving the calibration result.
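A minimal sketch of this calibration step in Python with OpenCV is given below; the 9x6 inner-corner pattern, the 25 mm square size and the image folder are illustrative assumptions, not values fixed by the method:

import cv2
import glob
import numpy as np

pattern = (9, 6)                                    # assumed inner-corner layout of the checkerboard
square = 0.025                                      # assumed square size in metres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):               # hypothetical folder of calibration views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), term)
        obj_pts.append(objp)
        img_pts.append(corners)

# Zhang-style calibration: K is the intrinsic matrix, dist the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)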
Further, Step 3 is implemented as follows:
Step 3.1: extract the FAST corner features of each frame with the M16 template shown in Fig. 2; a pixel is a feature point if 12 consecutive points on the circle are all brighter than Ip + T or all darker than Ip − T. An adaptive threshold T is first computed from the mean gray value of all pixels on the current circle. The gray values of points 1, 5, 9 and 13 are compared with T, and only when at least three of these points satisfy the condition are the remaining 12 points evaluated. The orientation of a feature point is determined with the gray-centroid method: the centroid position C is computed in a neighborhood S whose moments are defined as
mpq = Σ(x,y)∈S x^p y^q I(x, y),
the centroid is
C = (m10/m00, m01/m00),
and the orientation of the feature point is
θ = atan2(m01, m10).
The extracted corner features are then described with the rotation-invariant rBRIEF descriptor;
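A sketch of the adaptive-threshold corner test and the gray-centroid orientation of Step 3.1 is given below (pure NumPy). The circle offsets follow the standard 16-point FAST layout, and the way T is scaled from the mean circle intensity (factor k) is an assumption, since the patent gives the threshold formula only as an image:

import numpy as np

# 16 pixel offsets of the Bresenham circle used by the M16 template (standard FAST-16 layout).
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, k=0.2):
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    Ip = int(img[y, x])
    T = k * ring.mean()                                   # assumed form of the adaptive threshold
    quick = [ring[0], ring[4], ring[8], ring[12]]         # points 1, 5, 9, 13
    if sum(abs(q - Ip) > T for q in quick) < 3:
        return False                                      # early rejection test
    bright = np.concatenate([ring > Ip + T] * 2)          # duplicate to handle wrap-around
    dark = np.concatenate([ring < Ip - T] * 2)
    run = lambda m: max((len(s) for s in ''.join('1' if v else '0' for v in m).split('0')), default=0)
    return run(bright) >= 12 or run(dark) >= 12           # 12 contiguous brighter/darker points

def orientation(img, x, y, r=3):
    # Gray-centroid orientation: theta = atan2(m01, m10) over a (2r+1)x(2r+1) patch.
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m10, m01 = (xs * patch).sum(), (ys * patch).sum()
    return np.arctan2(m01, m10)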
Step 3.2: detect candidate line-segment features in the image with the LSD algorithm and merge segments that are likely to belong to the same straight line: for any two line features, the angle between them and the sum of the distances from each segment's midpoint to the other segment are computed, and if these are smaller than the thresholds Tlang and Tldis respectively, the two segments are considered the same line and are merged; the refined line features are then described with the LBD algorithm to obtain the line descriptors.
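A sketch of this merging criterion is given below (NumPy); segments are endpoint pairs such as those returned by an LSD detector, and the concrete values of Tlang and Tldis are illustrative assumptions:

import numpy as np

def _point_to_segment_dist(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def same_line(seg1, seg2, T_lang=np.deg2rad(3.0), T_ldis=5.0):
    # seg = (P, Q) endpoints as 2D numpy arrays; True if the segments should be merged.
    p1, q1 = seg1
    p2, q2 = seg2
    d1, d2 = q1 - p1, q2 - p2
    # angle between the two segments
    cosang = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    # sum of distances from each midpoint to the other segment
    m1, m2 = 0.5 * (p1 + q1), 0.5 * (p2 + q2)
    dist = _point_to_segment_dist(m1, p2, q2) + _point_to_segment_dist(m2, p1, q1)
    return angle < T_lang and dist < T_ldis

def merged(seg1, seg2):
    # merge by keeping the two farthest endpoints of the four
    pts = np.vstack([seg1[0], seg1[1], seg2[0], seg2[1]])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return pts[i], pts[j]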
Further, Step 4 is implemented as follows:
Step 4.1: compute the spatial coordinates of the point and line features extracted in Step 3; in particular, after the spatial coordinates of a line feature are obtained, the equation of the spatial line it belongs to is computed. The features are then divided into two sets: point features form the set Sp and line features form the set Sl;
Step 4.2: construct candidate plane features in three ways: 1) three coplanar feature points; 2) one feature point and one feature line; 3) two coplanar feature lines; and compute the general equation of the candidate plane: p1x + p2y + p3z = p4;
Step 4.3: normalize the coefficients of the plane equation:
||[p1, p2, p3, p4]T|| = 1,
so that a plane is represented by three quantities: [p1, p2, p3]T;
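A sketch of the first construction (three coplanar feature points) together with the normalization of Steps 4.2-4.3 is given below; the other two constructions reduce to the same computation by taking two points on a feature line. The degeneracy tolerance is an assumption:

import numpy as np

def plane_from_three_points(X1, X2, X3):
    # Returns the minimal 3-parameter representation [p1, p2, p3] with
    # ||[p1, p2, p3, p4]|| = 1, or None if the points are (nearly) collinear.
    n = np.cross(X2 - X1, X3 - X1)          # normal of the candidate plane
    if np.linalg.norm(n) < 1e-6:            # assumed degeneracy tolerance
        return None
    p4 = np.dot(n, X1)                      # n . X = p4  <=>  p1 x + p2 y + p3 z = p4
    coeffs = np.append(n, p4)
    coeffs /= np.linalg.norm(coeffs)        # ||[p1, p2, p3, p4]|| = 1
    return coeffs[:3]                       # minimal representation used by the method

def plane_from_point_and_line(X, L_start, L_end):
    # construction 2): a feature point together with the two endpoints of a feature line
    return plane_from_three_points(X, L_start, L_end)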
Further, Step 5 is implemented as follows:
Step 5.1: match the point, line and plane features obtained in Steps 3 and 4. Point features are sorted in ascending order of descriptor distance and the best 60% of pairs are kept as matches; line features are likewise sorted in ascending order of descriptor distance and the best 50% of pairs are kept; for plane features, a distance measure is computed,
the candidates are sorted in ascending order, and the best 3-5 pairs are kept;
Step 5.2: use RANSAC to reject wrong matches from the obtained matching pairs;
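A sketch of the point-feature part of this matching step is given below (Python with OpenCV). The binary descriptors are compared with the Hamming norm and the best 60% of matches are kept, then RANSAC rejects wrong matches; the fundamental-matrix model used for the RANSAC stage is an illustrative choice, since the patent only states that RANSAC is applied:

import cv2
import numpy as np

def match_point_features(desc1, desc2, kp1, kp2, keep=0.6):
    # Hamming distance for binary (rBRIEF) descriptors, sorted ascending; keep the best 60%.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(desc1, desc2), key=lambda m: m.distance)
    matches = matches[: int(keep * len(matches))]
    if len(matches) < 8:
        return matches
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejection of wrong matches (Step 5.2), here via a fundamental-matrix model.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if mask is None:
        return matches
    return [m for m, ok in zip(matches, mask.ravel()) if ok]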
Further, Step 6 is implemented as follows:
Step 6.1: perform 3D reconstruction from the matched features to obtain their 3D coordinates, re-project them according to the geometric relationship between adjacent frames, compute the feature reprojection errors, and minimize the projection errors of the points and lines with the Gauss-Newton method;
Step 6.2: apply a pseudo-Huber loss function and perform a two-step minimization, finally obtaining the incremental motion between two consecutive keyframes.
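A sketch of a robust Gauss-Newton iteration for the point features is given below (NumPy/OpenCV); line features are handled analogously with their own error terms. For brevity the pose is parameterized as a rotation vector plus translation, the Jacobian is formed numerically, and a single reweighted loop stands in for the full two-step scheme; the pseudo-Huber parameter delta is an assumption:

import cv2
import numpy as np

def residuals(pose, X3d, x2d, K):
    rvec = pose[:3].reshape(3, 1)
    tvec = pose[3:].reshape(3, 1)
    proj, _ = cv2.projectPoints(X3d, rvec, tvec, K, None)
    return (x2d - proj.reshape(-1, 2)).ravel()

def pseudo_huber_weights(r, delta=2.0):
    # w = rho'(r)/r for the pseudo-Huber cost rho(r) = delta^2 * (sqrt(1 + (r/delta)^2) - 1)
    return 1.0 / np.sqrt(1.0 + (r / delta) ** 2)

def estimate_motion(X3d, x2d, K, iters=10):
    pose = np.zeros(6)                               # [rvec | tvec], identity initial guess
    for _ in range(iters):
        r = residuals(pose, X3d, x2d, K)
        J = np.zeros((r.size, 6))                    # numerical Jacobian of the residuals
        for k in range(6):
            d = np.zeros(6); d[k] = 1e-6
            J[:, k] = (residuals(pose + d, X3d, x2d, K) - r) / 1e-6
        W = np.diag(pseudo_huber_weights(r))         # robust reweighting of the residuals
        H = J.T @ W @ J                              # Gauss-Newton normal equations
        g = J.T @ W @ r
        step = np.linalg.solve(H + 1e-9 * np.eye(6), g)
        pose -= step                                 # Gauss-Newton update of the incremental motion
        if np.linalg.norm(step) < 1e-8:
            break
    return pose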
Further, Step 7 is implemented as follows:
Step 7.1: convert the uncertainty in the covariance matrix into a scalar entropy:
h(ζ) = 3(1 + log(2π)) + 0.5 log(|Ωζ|).
For a given input frame, the entropy of the motion estimate between the last keyframe i and the current frame i+u is checked against that of the first frame after the keyframe, i+1, by computing the ratio α = h(ζi,i+u) / h(ζi,i+1).
If α falls below a threshold, set to 0.9 in this method, and the number of features shared by the current frame and the last keyframe exceeds 300, frame i+u is inserted into the system as a new keyframe;
Step 7.2: since ζt,t+1 is only an estimate between adjacent frames, this series of estimates is composed through first-order Gaussian propagation to obtain the covariance between two non-consecutive frames.
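A sketch of this keyframe test is given below (NumPy); the 6x6 covariance matrices are assumed to come from the motion estimator of Step 6, and the 0.9 threshold and 300-feature condition follow the text:

import numpy as np

def entropy(cov):
    # h(zeta) = 3(1 + log(2*pi)) + 0.5 * log(|Omega|)
    _, logdet = np.linalg.slogdet(cov)
    return 3.0 * (1.0 + np.log(2.0 * np.pi)) + 0.5 * logdet

def is_new_keyframe(cov_i_iu, cov_i_i1, shared_features, alpha_thr=0.9, min_shared=300):
    # cov_i_iu: covariance of the motion from keyframe i to the current frame i+u
    # cov_i_i1: covariance of the motion from keyframe i to the first frame i+1
    alpha = entropy(cov_i_iu) / entropy(cov_i_i1)
    return alpha < alpha_thr and shared_features > min_shared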
Further, Step 8 is implemented as follows:
Step 8.1: refine the estimates of the relative pose changes between the current frame and all previous keyframes and perform data association, using the incremental motion obtained in Step 6 as the initial input of the Gauss-Newton optimization;
Step 8.2: define ψ as the set of variables in se(3), including the incremental motion between every two consecutive frames, the spatial coordinates of each point feature, the spatial coordinates of the line-feature endpoints, and the minimal representation vector of each plane, and minimize the reprojection error:
where Kl, Pl, Ll and Ml denote the local-map keyframe set, the point-feature set, the line-feature set and the plane-feature set respectively. The point-feature projection error eij is the two-dimensional distance observed for the j-th map point in frame i, computed as
eij = xij − π(ζiw, Xwj),
where π denotes the mapping from se(3) to SE(3) and Xwj is the 3D spatial coordinate of the point feature. The projection error of a line feature differs from that of a point feature: the endpoints of the 3D line segment are projected onto the image plane, and their distances to the corresponding infinite line in the image plane form the error function; eik is computed from
the endpoint coordinates P and Q of the line feature and the equation lik of the line in the image plane, given by Step 3. The projection error eim of a plane feature is computed from the logarithmic map of quaternions:
eim = log{q[T(ζiw)T p(ζim)]−1 q(ζiw)},
where p(ζim) denotes the plane, given by Step 4, q(·) is a quaternion, and T(ζiw) is the homogeneous form of a transformation matrix. A second update stage is then carried out with the Levenberg-Marquardt optimization method to obtain the optimal incremental motion.
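A sketch of the point and line reprojection error terms used in this local-map optimization is given below (NumPy). The pinhole projection, the homogeneous representation of the observed image line and the stacking of the two endpoint distances are assumptions about how the formulas (given only as images in the original) are realized; the quaternion-based plane error term is omitted:

import numpy as np

def project(K, R, t, Xw):
    # pinhole projection of a world point into frame i with pose (R, t)
    Xc = R @ Xw + t
    x = K @ Xc
    return x[:2] / x[2]

def point_error(K, R, t, Xw, x_obs):
    # e_ij = x_ij - pi(zeta_iw, X_wj)
    return x_obs - project(K, R, t, Xw)

def line_error(K, R, t, P_w, Q_w, l_obs):
    # P_w, Q_w: 3D endpoints of the map line; l_obs = (a, b, c): homogeneous
    # coefficients of the observed infinite image line, normalized so a^2 + b^2 = 1.
    # The error stacks the distances of the two projected endpoints to that line.
    p = np.append(project(K, R, t, P_w), 1.0)
    q = np.append(project(K, R, t, Q_w), 1.0)
    return np.array([np.dot(l_obs, p), np.dot(l_obs, q)])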
Further, Step 9 is implemented as follows:
Step 9.1: compute BRIEF binary descriptions of the extracted line features; the resulting line descriptors, together with the point descriptors, form the visual word vector of the current frame, which is matched against a visual bag-of-words model built offline in advance;
Step 9.2: if the matching succeeds, the two local maps that contain the two keyframes are optimized and merged, and the local maps connected with them are updated. Within the graph-optimization framework the inter-frame incremental motions serve as the graph nodes and the edges are constructed as in Step 8; the iterative update is carried out according to an update rule in which the log and exp functions denote the logarithmic map from the Lie group to the Lie algebra and the exponential map from the Lie algebra to the Lie group, respectively;
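A sketch of such a manifold update for a single pose node is given below (Python with OpenCV); for brevity only the rotation part is shown, using the Rodrigues transform as the SO(3) exponential and logarithmic maps. Extending the update to the full SE(3) pose, and the correction vector delta itself, are left as assumptions:

import cv2
import numpy as np

def so3_exp(phi):
    # exponential map: axis-angle vector -> rotation matrix
    R, _ = cv2.Rodrigues(phi.reshape(3, 1).astype(np.float64))
    return R

def so3_log(R):
    # logarithmic map: rotation matrix -> axis-angle vector
    phi, _ = cv2.Rodrigues(R)
    return phi.ravel()

def update_rotation_node(phi_node, delta):
    # phi <- log(exp(phi) * exp(delta)): compose the current estimate with a small correction
    return so3_log(so3_exp(phi_node) @ so3_exp(delta))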
Step 9.3: if the matching fails, check whether a stop signal is present; if no stop signal has been received, return to Step 2 and continue running, otherwise go to Step 10.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910357796.9A CN110060277A (en) | 2019-04-30 | 2019-04-30 | A kind of vision SLAM method of multiple features fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910357796.9A CN110060277A (en) | 2019-04-30 | 2019-04-30 | A kind of vision SLAM method of multiple features fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110060277A true CN110060277A (en) | 2019-07-26 |
Family
ID=67321742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910357796.9A Pending CN110060277A (en) | 2019-04-30 | 2019-04-30 | A kind of vision SLAM method of multiple features fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110060277A (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599569A (en) * | 2019-09-16 | 2019-12-20 | 上海市刑事科学技术研究院 | Method for generating two-dimensional plane graph inside building, storage device and terminal |
CN110631586A (en) * | 2019-09-26 | 2019-12-31 | 珠海市一微半导体有限公司 | Map construction method based on visual SLAM, navigation system and device |
CN110807799A (en) * | 2019-09-29 | 2020-02-18 | 哈尔滨工程大学 | A Line Feature Visual Odometry Method Combined with Depth Map Inference |
CN110926405A (en) * | 2019-12-04 | 2020-03-27 | 中科新松有限公司 | ARV attitude measurement method based on monocular vision vanishing point detection |
CN111275764A (en) * | 2020-02-12 | 2020-06-12 | 南开大学 | Depth camera visual mileage measurement method based on line segment shadow |
CN111310772A (en) * | 2020-03-16 | 2020-06-19 | 上海交通大学 | Point and line feature selection method and system for binocular vision SLAM |
CN111369594A (en) * | 2020-03-31 | 2020-07-03 | 北京旋极信息技术股份有限公司 | Method, device, computer storage medium and terminal for realizing target tracking |
CN111461141A (en) * | 2020-03-30 | 2020-07-28 | 歌尔科技有限公司 | Equipment pose calculation method device and equipment |
CN111693047A (en) * | 2020-05-08 | 2020-09-22 | 中国航空工业集团公司西安航空计算技术研究所 | Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene |
CN111899334A (en) * | 2020-07-28 | 2020-11-06 | 北京科技大学 | A method and device for simultaneous visual positioning and map construction based on point and line features |
CN112114966A (en) * | 2020-09-15 | 2020-12-22 | 杭州未名信科科技有限公司 | Light beam adjustment calculation method of visual SLAM |
CN112258391A (en) * | 2020-10-12 | 2021-01-22 | 武汉中海庭数据技术有限公司 | Fragmented map splicing method based on road traffic marking |
CN112305558A (en) * | 2020-10-22 | 2021-02-02 | 中国人民解放军战略支援部队信息工程大学 | Mobile robot track determination method and device by using laser point cloud data |
CN112381890A (en) * | 2020-11-27 | 2021-02-19 | 上海工程技术大学 | RGB-D vision SLAM method based on dotted line characteristics |
CN112419497A (en) * | 2020-11-13 | 2021-02-26 | 天津大学 | Monocular vision-based SLAM method combining feature method and direct method |
CN112631271A (en) * | 2020-10-09 | 2021-04-09 | 南京凌华微电子科技有限公司 | Map generation method based on robot perception fusion |
CN112802196A (en) * | 2021-02-01 | 2021-05-14 | 北京理工大学 | Binocular inertia simultaneous positioning and map construction method based on dotted line feature fusion |
CN112880687A (en) * | 2021-01-21 | 2021-06-01 | 深圳市普渡科技有限公司 | Indoor positioning method, device, equipment and computer readable storage medium |
CN112950709A (en) * | 2021-02-21 | 2021-06-11 | 深圳市优必选科技股份有限公司 | Pose prediction method, pose prediction device and robot |
CN112945233A (en) * | 2021-01-15 | 2021-06-11 | 北京理工大学 | Global drift-free autonomous robot simultaneous positioning and map building method |
WO2021135645A1 (en) * | 2019-12-31 | 2021-07-08 | 歌尔股份有限公司 | Map updating method and device |
CN113608236A (en) * | 2021-08-03 | 2021-11-05 | 哈尔滨智兀科技有限公司 | Mine robot positioning and image building method based on laser radar and binocular camera |
CN113689499A (en) * | 2021-08-18 | 2021-11-23 | 哈尔滨工业大学 | A fast visual positioning method, device and system based on point-surface feature fusion |
CN113763481A (en) * | 2021-08-16 | 2021-12-07 | 北京易航远智科技有限公司 | Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene |
CN114004900A (en) * | 2021-11-17 | 2022-02-01 | 中国科学院合肥物质科学研究院 | Indoor binocular vision odometer method based on point-line-surface characteristics |
CN114119805A (en) * | 2021-10-28 | 2022-03-01 | 北京理工大学 | A SLAM method for semantic mapping based on point-line-surface fusion |
CN114202035A (en) * | 2021-12-16 | 2022-03-18 | 成都理工大学 | A large-scale network community detection algorithm based on multi-feature fusion |
CN115578620A (en) * | 2022-10-28 | 2023-01-06 | 北京理工大学 | Point-line-surface multi-dimensional feature-visible light fusion slam method |
CN115741702A (en) * | 2022-11-23 | 2023-03-07 | 大连理工大学 | Binocular vision system error modeling method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150103183A1 (en) * | 2013-10-10 | 2015-04-16 | Nvidia Corporation | Method and apparatus for device orientation tracking using a visual gyroscope |
CN104851094A (en) * | 2015-05-14 | 2015-08-19 | 西安电子科技大学 | An Improved Method of SLAM Algorithm Based on RGB-D |
US20180173991A1 (en) * | 2016-02-24 | 2018-06-21 | Soinn Holdings Llc | Feature value extraction method and feature value extraction apparatus |
CN109523589A (en) * | 2018-11-13 | 2019-03-26 | 浙江工业大学 | A kind of design method of more robust visual odometry |
CN109544636A (en) * | 2018-10-10 | 2019-03-29 | 广州大学 | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method |
- 2019-04-30: application CN201910357796.9A filed; published as CN110060277A (status: pending)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150103183A1 (en) * | 2013-10-10 | 2015-04-16 | Nvidia Corporation | Method and apparatus for device orientation tracking using a visual gyroscope |
CN104851094A (en) * | 2015-05-14 | 2015-08-19 | 西安电子科技大学 | An Improved Method of SLAM Algorithm Based on RGB-D |
US20180173991A1 (en) * | 2016-02-24 | 2018-06-21 | Soinn Holdings Llc | Feature value extraction method and feature value extraction apparatus |
CN109544636A (en) * | 2018-10-10 | 2019-03-29 | 广州大学 | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method |
CN109523589A (en) * | 2018-11-13 | 2019-03-26 | 浙江工业大学 | A kind of design method of more robust visual odometry |
Non-Patent Citations (5)
Title |
---|
DYLAN THOMAS等: "A monocular SLAM method for satellite proximity operations", 《2016 AMERICAN CONTROL CONFERENCE (ACC)》 * |
仇翔等: "基于RGB-D相机的视觉里程计实现", 《浙江工业大学学报》 * |
彭天博等: "增强室内视觉里程计实用性的方法", 《模式识别与人工智能》 * |
林志诚等: "移动机器人视觉SLAM过程中图像匹配及相机位姿求解的研究", 《机械设计与制造工程》 * |
梁强: "基于深度相机的室内定位系统研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599569A (en) * | 2019-09-16 | 2019-12-20 | 上海市刑事科学技术研究院 | Method for generating two-dimensional plane graph inside building, storage device and terminal |
CN110631586A (en) * | 2019-09-26 | 2019-12-31 | 珠海市一微半导体有限公司 | Map construction method based on visual SLAM, navigation system and device |
CN110807799A (en) * | 2019-09-29 | 2020-02-18 | 哈尔滨工程大学 | A Line Feature Visual Odometry Method Combined with Depth Map Inference |
CN110807799B (en) * | 2019-09-29 | 2023-05-30 | 哈尔滨工程大学 | A Line Feature Visual Odometry Method Combined with Depth Map Inference |
CN110926405A (en) * | 2019-12-04 | 2020-03-27 | 中科新松有限公司 | ARV attitude measurement method based on monocular vision vanishing point detection |
CN110926405B (en) * | 2019-12-04 | 2022-02-22 | 中科新松有限公司 | ARV attitude measurement method based on monocular vision vanishing point detection |
WO2021135645A1 (en) * | 2019-12-31 | 2021-07-08 | 歌尔股份有限公司 | Map updating method and device |
US12031837B2 (en) | 2019-12-31 | 2024-07-09 | Goertek Inc. | Method and device for updating map |
CN111275764B (en) * | 2020-02-12 | 2023-05-16 | 南开大学 | Depth camera visual odometry method based on line segment shadow |
CN111275764A (en) * | 2020-02-12 | 2020-06-12 | 南开大学 | Depth camera visual mileage measurement method based on line segment shadow |
CN111310772B (en) * | 2020-03-16 | 2023-04-21 | 上海交通大学 | Point line characteristic selection method and system for binocular vision SLAM |
CN111310772A (en) * | 2020-03-16 | 2020-06-19 | 上海交通大学 | Point and line feature selection method and system for binocular vision SLAM |
CN111461141A (en) * | 2020-03-30 | 2020-07-28 | 歌尔科技有限公司 | Equipment pose calculation method device and equipment |
CN111461141B (en) * | 2020-03-30 | 2023-08-29 | 歌尔科技有限公司 | Equipment pose calculating method and device |
CN111369594A (en) * | 2020-03-31 | 2020-07-03 | 北京旋极信息技术股份有限公司 | Method, device, computer storage medium and terminal for realizing target tracking |
CN111693047A (en) * | 2020-05-08 | 2020-09-22 | 中国航空工业集团公司西安航空计算技术研究所 | Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene |
CN111693047B (en) * | 2020-05-08 | 2022-07-05 | 中国航空工业集团公司西安航空计算技术研究所 | Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene |
CN111899334A (en) * | 2020-07-28 | 2020-11-06 | 北京科技大学 | A method and device for simultaneous visual positioning and map construction based on point and line features |
CN111899334B (en) * | 2020-07-28 | 2023-04-18 | 北京科技大学 | Visual synchronous positioning and map building method and device based on point-line characteristics |
CN112114966A (en) * | 2020-09-15 | 2020-12-22 | 杭州未名信科科技有限公司 | Light beam adjustment calculation method of visual SLAM |
CN112114966B (en) * | 2020-09-15 | 2024-08-13 | 杭州未名信科科技有限公司 | Light beam adjustment calculation method of vision SLAM |
CN112631271A (en) * | 2020-10-09 | 2021-04-09 | 南京凌华微电子科技有限公司 | Map generation method based on robot perception fusion |
CN112258391A (en) * | 2020-10-12 | 2021-01-22 | 武汉中海庭数据技术有限公司 | Fragmented map splicing method based on road traffic marking |
CN112305558A (en) * | 2020-10-22 | 2021-02-02 | 中国人民解放军战略支援部队信息工程大学 | Mobile robot track determination method and device by using laser point cloud data |
CN112305558B (en) * | 2020-10-22 | 2023-08-01 | 中国人民解放军战略支援部队信息工程大学 | A mobile robot trajectory determination method and device using laser point cloud data |
CN112419497A (en) * | 2020-11-13 | 2021-02-26 | 天津大学 | Monocular vision-based SLAM method combining feature method and direct method |
CN112381890B (en) * | 2020-11-27 | 2022-08-02 | 上海工程技术大学 | RGB-D vision SLAM method based on dotted line characteristics |
CN112381890A (en) * | 2020-11-27 | 2021-02-19 | 上海工程技术大学 | RGB-D vision SLAM method based on dotted line characteristics |
CN112945233B (en) * | 2021-01-15 | 2024-02-20 | 北京理工大学 | Global drift-free autonomous robot simultaneous positioning and map construction method |
CN112945233A (en) * | 2021-01-15 | 2021-06-11 | 北京理工大学 | Global drift-free autonomous robot simultaneous positioning and map building method |
CN112880687B (en) * | 2021-01-21 | 2024-05-17 | 深圳市普渡科技有限公司 | Indoor positioning method, device, equipment and computer readable storage medium |
CN112880687A (en) * | 2021-01-21 | 2021-06-01 | 深圳市普渡科技有限公司 | Indoor positioning method, device, equipment and computer readable storage medium |
CN112802196B (en) * | 2021-02-01 | 2022-10-21 | 北京理工大学 | Binocular inertial simultaneous localization and map construction method based on point-line feature fusion |
CN112802196A (en) * | 2021-02-01 | 2021-05-14 | 北京理工大学 | Binocular inertia simultaneous positioning and map construction method based on dotted line feature fusion |
CN112950709B (en) * | 2021-02-21 | 2023-10-24 | 深圳市优必选科技股份有限公司 | Pose prediction method, pose prediction device and robot |
CN112950709A (en) * | 2021-02-21 | 2021-06-11 | 深圳市优必选科技股份有限公司 | Pose prediction method, pose prediction device and robot |
CN113608236A (en) * | 2021-08-03 | 2021-11-05 | 哈尔滨智兀科技有限公司 | Mine robot positioning and image building method based on laser radar and binocular camera |
CN113763481A (en) * | 2021-08-16 | 2021-12-07 | 北京易航远智科技有限公司 | Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene |
CN113763481B (en) * | 2021-08-16 | 2024-04-05 | 北京易航远智科技有限公司 | Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene |
CN113689499A (en) * | 2021-08-18 | 2021-11-23 | 哈尔滨工业大学 | A fast visual positioning method, device and system based on point-surface feature fusion |
CN114119805A (en) * | 2021-10-28 | 2022-03-01 | 北京理工大学 | A SLAM method for semantic mapping based on point-line-surface fusion |
CN114119805B (en) * | 2021-10-28 | 2024-06-04 | 北京理工大学 | Semantic mapping SLAM method for point-line-plane fusion |
CN114004900A (en) * | 2021-11-17 | 2022-02-01 | 中国科学院合肥物质科学研究院 | Indoor binocular vision odometer method based on point-line-surface characteristics |
CN114202035A (en) * | 2021-12-16 | 2022-03-18 | 成都理工大学 | A large-scale network community detection algorithm based on multi-feature fusion |
CN115578620B (en) * | 2022-10-28 | 2023-07-18 | 北京理工大学 | A point-line-surface multidimensional feature-visible light fusion slam method |
CN115578620A (en) * | 2022-10-28 | 2023-01-06 | 北京理工大学 | Point-line-surface multi-dimensional feature-visible light fusion slam method |
CN115741702A (en) * | 2022-11-23 | 2023-03-07 | 大连理工大学 | Binocular vision system error modeling method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110060277A (en) | A kind of vision SLAM method of multiple features fusion | |
CN109544677B (en) | Indoor scene main structure reconstruction method and system based on depth image key frame | |
CN111899334B (en) | Visual synchronous positioning and map building method and device based on point-line characteristics | |
CN112258618B (en) | Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map | |
CN110223348B (en) | Adaptive pose estimation method for robot scene based on RGB-D camera | |
CN108564616B (en) | Fast robust RGB-D indoor three-dimensional scene reconstruction method | |
CN110378997B (en) | ORB-SLAM 2-based dynamic scene mapping and positioning method | |
CN111105460B (en) | A RGB-D Camera Pose Estimation Method for 3D Reconstruction of Indoor Scenes | |
CN109934862A (en) | A binocular vision SLAM method combining point and line features | |
Tang et al. | ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans | |
CN108682027A (en) | VSLAM realization method and systems based on point, line Fusion Features | |
KR101257207B1 (en) | Method, apparatus and computer-readable recording medium for head tracking | |
CN110766024B (en) | Deep learning-based visual odometer feature point extraction method and visual odometer | |
CN107679537A (en) | A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings | |
CN111582036B (en) | Cross-view-angle person identification method based on shape and posture under wearable device | |
CN113256698A (en) | Monocular 3D reconstruction method with depth prediction | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
Yu et al. | Drso-slam: A dynamic rgb-d slam algorithm for indoor dynamic scenes | |
CN111652901A (en) | A Textureless 3D Object Tracking Method Based on Confidence and Feature Fusion | |
CN111709893B (en) | ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment | |
CN116468786B (en) | Semantic SLAM method based on point-line combination and oriented to dynamic environment | |
CN117036404A (en) | Monocular thermal imaging simultaneous positioning and mapping method and system | |
CN104156933A (en) | Image registering method based on optical flow field | |
Zhao et al. | 3D object tracking via boundary constrained region-based model | |
CN118071873A (en) | Dense Gaussian map reconstruction method and system in dynamic environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20190726 |