CN104732518B - Improved PTAM Method Based on Ground Features for an Intelligent Robot
- Publication number: CN104732518B (application CN201510025810.7A, filed 2015-01-19; published as CN104732518A on 2015-06-24; granted 2017-09-01)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Description
Technical Field
The invention belongs to the field of robot vision and relates to an improvement of the PTAM algorithm based on ground features.
Background Art
As robots and humans interact ever more closely, technologies for intelligent robots have received great attention. Simultaneous Localization and Mapping (SLAM) is currently one of the most mainstream localization technologies for intelligent mobile robots. It is essentially a motion estimation problem: internal and external sensor data are used to compute the position of the mobile robot at each moment while simultaneously building the map model the robot relies on. Vision-based SLAM belongs to the research field of visual measurement. Visual sensors have unique advantages: small size, light weight, low price, easy installation, and very rich extracted external information. These advantages have further promoted research on and application of vision-based SLAM. Chinese patent application 201310678018.2 adopts SURF as the feature detection operator for monocular visual SLAM and innovates in three respects: interest point detection, SURF descriptor generation, and SURF point matching. It proposes a region-growing SURF feature matching method to accomplish simultaneous localization and mapping, in which each SURF descriptor is compared only against the descriptors most likely to match it, significantly reducing the number of comparisons in monocular visual SLAM and improving matching speed. That method, however, still cannot build a metric map or provide three-dimensional feature information, and its robustness and real-time performance need further improvement.
Structure from motion builds an environment model from camera movement, simultaneously solving for the camera trajectory and the scene structure model. As a representative SFM-style method, PTAM (Parallel Tracking and Mapping) was first proposed by G. Klein et al. in 2007 in the field of augmented reality (AR) to solve environment recognition in real time. The unimproved PTAM algorithm, however, also cannot build a metric map, and it imposes strict restrictions on camera movement.
Summary of the Invention
In view of the above problems, the present invention proposes an improved PTAM algorithm based on ground features. With ground features added, PTAM estimates the current pose of a monocular camera and the three-dimensional space points simultaneously. While the FAST corner method updates the detected feature points, optimal local and global bundle adjustment continuously updates the poses of the camera and the three-dimensional feature points, achieving high-precision localization and map creation. The invention effectively solves the inability to create a metric map, greatly improves real-time performance and three-dimensional feature extraction, and is of significant value for robot navigation and localization and for three-dimensional reconstruction.
The present invention adopts the following technical scheme.
First, parameter correction is completed, including parameter definition and camera correction. Features in the key frames are then tracked, the pose is initialized, and the plane parameters are estimated on that basis. The camera then acquires the texture of the current environment, a four-level Gaussian image pyramid is built, the FAST corner detection algorithm extracts feature information from the current image, data association is established between corner features, and a pose estimation model is obtained. In the initial map-building stage two key frames are acquired: the camera is mounted on the mobile robot, and the initialization process starts after the camera is calibrated against the ground. During initialization the mobile robot starts to move while the camera captures corner information in the current scene and establishes associations; an initial pose estimate is obtained from the pose estimation model, and an accurate estimate of the current pose is then obtained with the ground-feature-based pose estimation method. After the three-dimensional sparse map is initialized, key frames are updated and a sub-pixel-accuracy mapping between feature points is established by epipolar search and block matching; combined with the pose estimation model, this achieves accurate camera relocalization. Finally the matched points are projected into space to complete the three-dimensional map of the current global environment, and global and local bundle adjustment is used to optimize the consistency of the algorithm.
The method comprises the following steps:
Step 1, parameter correction
Step 1.1, parameter definition
(1) Robot pose
The world coordinate system is established at the starting position of the mobile robot. During the robot's motion, its pose can be expressed as:

T_r = \begin{bmatrix} \cos\theta & -\sin\theta & x \\ \sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{bmatrix}    (1)

From the above formula, the pose of the mobile robot can be described by x = (x, y, θ), where x and y are the translations of the robot along the x-axis and y-axis respectively, and θ is the rotation angle of the robot about the z-axis.
In the world coordinate system, the pose of the camera can be expressed as:

T_c = \begin{bmatrix} R_c & t_c \\ 0^T & 1 \end{bmatrix}    (2)
where T_c ∈ SE(3), SE(3) is the special rigid-body transformation group, R_c is a 3×3 rotation matrix, and t_c is a 3×1 translation vector. This pose matrix establishes the correspondence between a point p_w in the world coordinate system and the point p_c in the camera coordinate system, i.e. p_c = T_c p_w. As shown in Fig. 3, in this invention the camera is fixed on the mobile robot, so a fixed rotation-translation relationship T_cr exists between the camera and the robot. If the robot's pose increment over some time interval is T_rs and the corresponding camera pose increment is T_cs, then:
T_{rs} = T_{rc} T_{cs} T_{cr}    (3)
where T_{rc} = T_{cr}^{-1}.
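To make the frame bookkeeping concrete, the following is a minimal numpy sketch of eq. (3), assuming homogeneous 4×4 transforms and a camera-to-robot extrinsic T_cr obtained from the calibration of step 1; the function names are illustrative, not taken from the patent.

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def robot_increment(T_cs, T_cr):
    """Eq. (3): map a camera pose increment T_cs into a robot pose increment,
    using the fixed camera-robot extrinsic T_cr (with T_rc = inv(T_cr))."""
    T_rc = np.linalg.inv(T_cr)
    return T_rc @ T_cs @ T_cr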
(2) Ground plane calibration parameters
The camera is fixed on the mobile robot and the calibration target is laid flat on the ground. By acquiring images of the calibration target, the camera establishes the pose relationship between the camera and the robot. T_cp ∈ SE(3) is the transformation matrix between the camera and the calibration target, R_cp and t_cp are the rotation and translation components of T_cp, and its inverse transform is T_{pc} = T_{cp}^{-1}. It is assumed that the coordinate origin of the robot coincides with that of the camera and that the robot's x-axis points in the same direction as the camera's z-axis. In the target coordinate system, the robot's x-direction vector can then be represented by the first two components of the third row vector of T_cp. The transformation between the robot coordinate system and the target coordinate system can thus be expressed as:

T_{pr} = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 & x_r \\ \sin\gamma & \cos\gamma & 0 & y_r \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (4)
where γ is the rotation angle between the robot coordinate system and the target coordinate system, and (x_r, y_r) are the coordinates of the robot origin in the target coordinate system, which can be determined from the translation vector between the camera and the target. The transformation between the robot and the camera can therefore be expressed as T_{cr} = T_{cp} T_{pr}. From the pose relationship between the camera and the target plane, the ground plane equation in the camera coordinate system is obtained:
(0, 0, 1, 0) \, T_{pc} \, p_c = 0    (5)
where p_c is a three-dimensional point in the camera coordinate system. The ground plane parameters can be expressed as (n^T, d), with n = (0, 0, 1) R_{pc} and d = t_{pc}(3), the third component of the translation of T_{pc}.
Step 1.2, camera correction
Lenses exhibit distortion for manufacturing reasons; it consists mainly of radial distortion caused by the lens shape and tangential distortion caused by camera assembly. The invention adopts the FOV model to correct the monocular camera; the correction model is shown in Fig. 2. The model is a de-distortion method for wide-angle cameras proposed by Devernay and Faugeras, described mathematically as:

u_d = \frac{r_d}{r_u} x_u, \qquad r_d = \frac{1}{\omega} \arctan\!\left( 2 r_u \tan\frac{\omega}{2} \right)    (6)
where x_u are the normalized pixel coordinates, u_d are the distorted pixel coordinates, ω is the distortion coefficient, and r_d and r_u are the distances from the normalized coordinates to the center of the coordinate system before and after correction, respectively.
Using the above camera model, the image pixel coordinates are mapped onto the normalized coordinate plane and, combined with the camera intrinsic parameter matrix K, image distortion correction is achieved:
u = K x_u    (7)
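As an illustration, the following sketch inverts eq. (6) to undistort a single pixel under the FOV model, assuming a numpy 3×3 intrinsic matrix K and one distortion coefficient omega; the helper name and the small-radius guard are implementation details, not part of the patent.

```python
import numpy as np

def undistort_pixel_fov(u_d, K, omega):
    """Invert eq. (6): map a distorted pixel u_d to undistorted normalized
    coordinates, then back to pixel coordinates via eq. (7), u = K x_u."""
    x_d = np.linalg.inv(K) @ np.array([u_d[0], u_d[1], 1.0])  # normalized, distorted
    r_d = np.hypot(x_d[0], x_d[1])
    if r_d > 1e-8:
        r_u = np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))
        x_u = x_d * (r_u / r_d)
    else:
        x_u = x_d.copy()              # at the image center the distortion vanishes
    x_u[2] = 1.0
    return (K @ x_u)[:2]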
Step 2, initialization based on ground features
Step 2.1, feature tracking
FAST corner detection is a simple and fast corner detection algorithm proposed by Edward Rosten and Tom Drummond. It uses the differences between the gray values of neighboring pixels within some neighborhood of a pixel (usually a circle) to decide whether that pixel is a corner:

N = \sum_{s \in c(u_f)} \mathbf{1}\!\left( \left| I(s) - I(u_f) \right| > \varepsilon_d \right)    (8)
where u_f is the candidate corner, s is any point in the neighborhood of the candidate corner, c(u_f) denotes the neighborhood of u_f, I(x) is the image gray value at x, and ε_d is the gray-difference threshold. If N exceeds the threshold condition, the candidate point u_f is declared a corner.
In the camera tracking thread, the system uses the camera to acquire the texture of the current environment and builds a four-level Gaussian image pyramid. The FAST corner detection algorithm, combined with block search, establishes data association between corner features; the projection relationship is used to predict the correspondence between three-dimensional points and feature points in the current frame, and a fixed region around the predicted position is searched to obtain accurate matches.
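A minimal sketch of this front end with OpenCV follows, assuming cv2 is available; the FAST threshold and the scaling of detections back to level-0 pixel coordinates are illustrative choices, not specified by the patent.

```python
import cv2

def detect_pyramid_corners(gray, levels=4, fast_threshold=10):
    """Build a four-level Gaussian image pyramid and run FAST on each level,
    reporting corners in level-0 pixel coordinates together with their level."""
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold,
                                          nonmaxSuppression=True)
    pyramid = [gray]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))   # Gaussian blur + 2x downsample
    corners = []
    for lvl, img in enumerate(pyramid):
        for kp in fast.detect(img, None):
            s = 2 ** lvl
            corners.append((kp.pt[0] * s, kp.pt[1] * s, lvl))
    return corners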
Step 2.2, pose initialization
While the robot moves, ground information is acquired through the camera and the FAST algorithm tracks the corner features. From the above data association, the homography relationship between plane points can be established:
\pi(H u_r) = u_p    (9)
where π(p) is the projection function, with p = (x, y, z) and π(p) = (x/z, y/z, 1), and u_r and u_p are the corresponding feature points in the reference frame and the current frame. From the camera motion T_c and the ground plane parameters (n^T, d), the plane-point homography can be written as:
H = K T_c (I \mid n_d)^T K^{-1}    (10)
where n_d = n/d and K is the camera intrinsic parameter matrix. Substituting eq. (3) into eq. (10), the homography relationship between the robot pose and the ground features becomes:
H = K T_{cr} T_{rs} T_{rc} (I \mid n_d)^T K^{-1}    (11)
Using the above homography relationship between image feature points, the pose estimation model is established:

\hat{x} = \arg\min_x \sum_i \| r_i(x) \|^2    (12)
where r(x) = π(H(x) u_ref) − u_p is the homography error function, i.e. the projection error function, and H(x) is the homography matrix. Under the small-displacement assumption, the first-order Taylor expansion of the error function r is:

r(x) \approx r(0) + J x, \qquad J = \left. \frac{\partial r}{\partial x} \right|_{x=0}    (13)
The extremum condition of the energy function is then:

J^T \left( r(0) + J x \right) = 0    (14)
that is:

x = -\left( J^T J \right)^{-1} J^T r(0)    (15)
where J is the Jacobian of the stacked error functions over all matched features. The pose estimate of the mobile robot is obtained by solving the above equation iteratively. To further improve localization accuracy, robust weights are introduced to build an M-estimation pose estimation model, whose energy function is:

E(x) = \sum_i w(r_i) \| r_i(x) \|^2    (16)
where w(x) is the robust Tukey weighting function. The extremum condition of the energy function then becomes:

x = -\left( J^T W J \right)^{-1} J^T W \, r(0)    (17)

where W is the diagonal matrix of the weights w(r_i).
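The following sketch assembles eqs. (11)-(17) into one iteratively re-weighted Gauss-Newton solver over the planar increment x = (t_x, t_y, θ), assuming numpy, matched ground-feature pixel arrays, and the standard plane-induced homography form H = K (R + t n_d^T) K^{-1} that eq. (10) abbreviates; the numerical Jacobian and the fixed Tukey cutoff are illustrative simplifications (the patent derives the analytic Jacobian, and the cutoff is normally scaled by a robust sigma estimate).

```python
import numpy as np

def tukey_weight(r, c=4.6851):
    """Tukey biweight used in eqs. (16)-(17): w = (1 - (r/c)^2)^2 inside the cutoff."""
    a = (r / c) ** 2
    return (1.0 - a) ** 2 if abs(r) < c else 0.0

def plane_homography(R, t, n_d, K, K_inv):
    """Plane-induced homography H = K (R + t n_d^T) K^{-1}, with n_d = n/d."""
    return K @ (R + np.outer(t, n_d)) @ K_inv

def pose_from_ground_points(u_ref, u_cur, K, T_cr, n_d, iters=10):
    """Estimate the planar robot increment from matched ground corners,
    iterating the weighted normal equations of eq. (17).
    u_ref, u_cur: Nx2 pixel arrays; T_cr: 4x4 camera-robot extrinsic."""
    K_inv = np.linalg.inv(K)
    x = np.zeros(3)

    def residuals(x):
        tx, ty, th = x
        T_rs = np.array([[np.cos(th), -np.sin(th), 0, tx],
                         [np.sin(th),  np.cos(th), 0, ty],
                         [0, 0, 1, 0], [0, 0, 0, 1]])
        T_cs = T_cr @ T_rs @ np.linalg.inv(T_cr)       # eq. (3), camera side
        H = plane_homography(T_cs[:3, :3], T_cs[:3, 3], n_d, K, K_inv)
        p = (H @ np.c_[u_ref, np.ones(len(u_ref))].T).T
        return (p[:, :2] / p[:, 2:3] - u_cur).ravel()  # r(x) = pi(H u_ref) - u_p

    for _ in range(iters):
        r = residuals(x)
        J = np.empty((r.size, 3))
        eps = 1e-6
        for k in range(3):                             # numerical Jacobian
            dx = np.zeros(3); dx[k] = eps
            J[:, k] = (residuals(x + dx) - r) / eps
        w = np.array([tukey_weight(v) for v in r])
        A = J.T @ (w[:, None] * J)                     # J^T W J
        b = J.T @ (w * r)                              # J^T W r
        x = x - np.linalg.solve(A, b)                  # eq. (17) update
    return x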
Step 2.3, plane parameter estimation
The robot uses the ground-feature homography for localization, but non-ground features make it difficult to guarantee accuracy, so the invention adopts a fast method for removing non-ground features. A normal-estimation model can be built from the cross product of parallel vectors, but experiments show that errors in corner detection prevent the ideal cross-product condition from holding, so that model fits poorly. Alternatively, the projection error r can serve as the criterion for identifying non-ground features, but its discriminative power drops noticeably as the distance between a non-ground feature and the observation point decreases. In contrast to these methods, the invention derives the normal-estimation model directly from the homography matrix on the basis of the initial pose estimate, and decides whether a corner is a ground feature from the difference between that estimate and the ground plane parameters. From the plane homography relation, the estimation model for the plane parameter n_d is:

\hat{n}_d = \arg\min_{n_d} \sum_i \| r_i(n_d) \|^2    (18)
where r(n_d) = π(H(n_d) u_ref) − u_p is the corresponding projection error function. As in the pose estimation, under the small-displacement assumption the solution of eq. (18) follows from the first-order Taylor expansion of the error function.
The plane parameters are then obtained by iterating:

n_d^{k+1} = n_d^k - \left( J_n^T J_n \right)^{-1} J_n^T r(n_d^k)    (19)
where J_n = ∂r/∂n_d is the Jacobian of the projection error with respect to the plane parameters.
Step 3, pose estimation
The main function of the pose estimation module is to update the camera pose in real time. Its implementation is as follows. After initialization is complete, a pose estimation model is built from the projection errors:

\hat{\mu} = \arg\min_{\mu} \sum_j \mathrm{Obj}\!\left( \frac{|e_j|}{\sigma_T}, \sigma_T \right)    (20)
where e_j is the projection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is an unbiased estimate of the standard deviation of the feature-point matches, and μ is the six-component vector describing the camera pose.
To handle tracking failures, a feature observation ratio is proposed to evaluate the tracking quality of each frame. If the ratio falls below a set threshold, tracking quality is considered poor: tracking continues, but the system does not add the frame to the map. If the feature observation ratio stays below the threshold over several consecutive frames, tracking is considered lost and the tracking process is re-initialized. These steps yield an accurate pose estimate and keep the tracking process running normally.
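A sketch of this tracking-quality gate follows; the ratio threshold and the number of consecutive poor frames are tunable parameters whose values here are illustrative, not taken from the patent.

```python
class TrackingMonitor:
    """Feature-observation-ratio gate: poor frames are tracked but not
    mapped; several consecutive poor frames trigger re-initialization."""

    def __init__(self, ratio_min=0.3, max_bad_frames=5):
        self.ratio_min = ratio_min
        self.max_bad_frames = max_bad_frames
        self.bad_frames = 0

    def update(self, n_observed, n_predicted):
        ratio = n_observed / max(n_predicted, 1)   # feature observation ratio
        if ratio >= self.ratio_min:
            self.bad_frames = 0
            return "good"    # frame may be promoted to a key frame
        self.bad_frames += 1
        if self.bad_frames >= self.max_bad_frames:
            return "lost"    # re-initialize the tracking process
        return "poor"        # keep tracking, but do not update the map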
Step 4, map creation
Step 4.1, map initialization
The system builds the initial map of the current environment using ground features and the standard stereo-camera algorithm model. During map initialization, the FAST corner matches between images are combined with the RANSAC algorithm and the ground-feature-based initial pose estimation method; triangulation then computes the three-dimensional coordinates of the current feature points, the starting point is fixed as the origin of the global world coordinate system, and the three-dimensional sparse map is initialized.
In the initial map-building stage, the two key frames are obtained as follows. The camera is mounted on the mobile robot and the initialization process starts after the camera is calibrated against the ground. During initialization the robot starts to move while the camera captures corner information in the current scene and establishes associations; an initial pose estimate is obtained by solving eq. (12). Once this initial estimate exceeds a set threshold d_0, the ground-feature-based pose estimation method of step 2.2 is applied to obtain an accurate estimate of the current pose.
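A sketch of this two-key-frame initialization with OpenCV, assuming float32 Nx2 matched corner arrays and 3×4 world-to-camera matrices [R|t] for the two key frames (the first fixing the world origin); the RANSAC reprojection threshold is an illustrative value.

```python
import cv2
import numpy as np

def init_map(pts_ref, pts_cur, K, T_ref, T_cur, ransac_px=2.0):
    """RANSAC-filter FAST matches with a homography (the tracked points
    lie on the ground plane), then triangulate the inliers from the two
    key-frame poses. Returns Nx3 points at the metric scale fixed by the
    ground calibration."""
    H, mask = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, ransac_px)
    inl = mask.ravel().astype(bool)
    P_ref, P_cur = K @ T_ref, K @ T_cur               # 3x4 projection matrices
    X = cv2.triangulatePoints(P_ref, P_cur, pts_ref[inl].T, pts_cur[inl].T)
    return (X[:3] / X[3]).T                           # de-homogenize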
Step 4.2, key frame update and epipolar search
At initialization the map contains only two key frames and can describe only a small nearby volume; as the camera leaves the initial position, new key frames and map features must be added. A key frame is added automatically when the number of image frames since the current key frame exceeds a threshold, the tracking quality is at its best, and the camera is at least a minimum distance from the nearest map key point. Because of real-time constraints, the tracking system may only have measured a subset of the features visible in the frame, so the mapping thread must re-plan and measure the remaining map features.
When a key frame is updated, the system first applies a Shi-Tomasi evaluation to all corners in the new key frame to obtain the currently salient corners; salient points near successfully observed feature points are discarded, and the remaining salient points become candidate map nodes. A new map node needs depth information, which a single frame cannot supply, so the nearest key frame is selected as a second view, and a sub-pixel-accuracy mapping between feature points is established by epipolar search and block matching. In the second view, zero-mean SSD matching is combined with the pose estimation model to achieve accurate camera relocalization. Finally, the matched points are projected into space to generate the three-dimensional map of the current global environment.
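A sketch of the zero-mean SSD block matching used in the second view follows, assuming a grayscale numpy image and a list of integer candidate pixels already sampled along the epipolar line; the patch size is an illustrative choice.

```python
import numpy as np

def zmssd(patch_a, patch_b):
    """Zero-mean SSD between equal-sized patches: subtracting each patch's
    mean makes the score robust to brightness offsets."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    return np.sum((a - b) ** 2)

def best_epipolar_match(img, ref_patch, candidates, half=3):
    """Scan candidate pixels along the epipolar line in the second key frame
    and keep the one whose patch minimizes the ZMSSD score."""
    best, best_uv = np.inf, None
    for (u, v) in candidates:
        patch = img[v - half:v + half + 1, u - half:u + half + 1]
        if patch.shape != ref_patch.shape:
            continue                   # skip windows clipped by the border
        s = zmssd(ref_patch, patch)
        if s < best:
            best, best_uv = s, (u, v)
    return best_uv, best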
Step 4.3, optimization
Optimal local and global LM (Levenberg-Marquardt) bundle adjustment is used to optimize the global consistency of the current map and improve robustness. The bundle adjustment is described mathematically as:

\left\{ \{\mu_i\}, \{p_j\} \right\} = \arg\min \sum_{i} \sum_{j \in S_i} \mathrm{Obj}\!\left( \frac{|e_{ji}|}{\sigma_{ji}}, \sigma_T \right)    (21)
where σ_ji is the unbiased estimate of the FAST feature-point matching standard deviation in the i-th key frame, μ_i is the six-component pose vector of the i-th key frame, and p_j is a point in the global map.
Repeated bundle adjustment drives the robust cost function in eq. (21) toward a minimum. Global bundle adjustment refines the poses of all key frames and the positions of all map points; exploiting the inherent sparsity of the structure-from-motion problem reduces the complexity of the overall matrix decomposition from O((N+M)^3) to O(N^3). In most estimation settings, however, it remains constrained by the outer products of the generated point vectors, and this computation grows markedly as the map grows.
To address this, local bundle adjustment is introduced, in which only a subset of key-frame poses is adjusted. The objective function becomes:

\left\{ \{\mu_x : x \in X\}, \{p_z : z \in Z\} \right\} = \arg\min \sum_{i \in X \cup Y} \sum_{j \in Z \cap S_i} \mathrm{Obj}\!\left( \frac{|e_{ji}|}{\sigma_{ji}}, \sigma_T \right)    (22)
Here X is the set of key frames to be adjusted, Y is the set of fixed key frames, and Z is the subset of map points. X consists of the newest key frame in the map and the four key frames closest to it; all map points visible in those key frames form Z; and Y contains all key frames that have measurements of the points in Z. Local bundle adjustment thus optimizes the poses of the newest and nearest key frames and updates all the points in the map, while the complexity drops markedly, to no more than O(NM).
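As a sketch of eq. (22)'s local adjustment, the following delegates the Levenberg-Marquardt-style optimization to scipy.optimize.least_squares, assuming Rodrigues pose vectors and cv2.projectPoints for the camera model; the Cauchy loss stands in for the Tukey objective, and the data layout is illustrative rather than the patent's.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def local_bundle_adjust(poses, points, obs, K, adjust_ids):
    """Refine only the key frames in adjust_ids (set X) and all observed
    points (set Z), keeping the remaining key frames (set Y) fixed.
    poses: dict id -> (rvec, tvec); points: Mx3; obs: (frame_id, point_id, u, v)."""
    adj = list(adjust_ids)

    def pack():
        return np.hstack([np.hstack(poses[i]) for i in adj] + [points.ravel()])

    def unpack(x):
        ps = dict(poses)                               # fixed frames (Y) untouched
        for k, i in enumerate(adj):
            ps[i] = (x[6*k:6*k+3], x[6*k+3:6*k+6])
        return ps, x[6*len(adj):].reshape(-1, 3)

    def residuals(x):
        ps, pts = unpack(x)
        r = []
        for fid, pid, u, v in obs:
            rvec, tvec = ps[fid]
            proj, _ = cv2.projectPoints(pts[pid].reshape(1, 1, 3),
                                        rvec, tvec, K, None)
            r.extend(proj.ravel() - (u, v))            # reprojection error e_ji
        return np.asarray(r)

    sol = least_squares(residuals, pack(), loss='cauchy')  # robust loss in
    return unpack(sol.x)                                   # place of Tukey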
Compared with the original PTAM algorithm, the present invention improves it by adding ground features and achieves the following advantages:
(1) The initialization process is improved, effectively removing the strict restrictions the PTAM algorithm places on camera movement.
(2) A relatively accurate metric map can be established, providing a reliable basis for further environmental measurement.
Description of the Drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 is the camera correction model;
Fig. 3 is a schematic diagram of plane parameter calibration.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
The flowchart of the improved PTAM algorithm based on ground features is shown in Fig. 1; the algorithm comprises the following steps:
Step 1, parameter correction
Step 1.1, parameter definition
The robot pose representation is constructed from the relationship between the robot coordinate system and the world coordinate system, and the ground plane calibration parameters are determined from the pose relationship between the camera and the target plane.
Step 1.2, camera correction
The FOV model is adopted to correct the monocular camera: image pixel coordinates are mapped onto the normalized coordinate plane and, combined with the camera intrinsic parameter matrix K, image distortion correction is achieved.
Step 2, initialization based on ground features
Step 2.1, feature tracking
Environment texture information is acquired, features are extracted, and corner feature associations are established.
Step 2.2, pose initialization
The homography relationship and the pose initialization estimation model are established to initialize the pose.
Step 2.3, plane parameter estimation
Step 3, pose estimation
On the basis of the completed initialization, the camera's initial pose is estimated with the ground-feature-based pose estimation model. Further, from the initialized camera pose, PTAM uses epipolar search to establish sub-pixel-accuracy matches between corner features in the image pyramid, feeds them back into the pose estimation model, and achieves accurate camera relocalization.
Step 4, map creation
Step 4.1, map initialization
The system builds the initial map of the current environment with the standard stereo-camera algorithm model.
Step 4.2, key frame update and epipolar search
New key frames are added, a sub-pixel-accuracy mapping between feature points is established by epipolar search and block matching and, combined with the pose estimation model, accurate camera relocalization is achieved. Finally, the matched points are projected into space to complete the three-dimensional map of the current global environment.
Step 4.3, optimization
Global and local bundle adjustment is used to optimize map consistency.