CN102692236A - Visual odometry method based on an RGB-D camera - Google Patents

Visual odometry method based on an RGB-D camera

Info

Publication number
CN102692236A
CN102692236A (application CN2012101514249A / CN201210151424A)
Authority
CN
China
Prior art keywords
camera
coordinate system
point
image
dimensional
Prior art date
Application number
CN2012101514249A
Other languages
Chinese (zh)
Inventor
刘济林 (Liu Jilin)
曹腾 (Cao Teng)
龚小谨 (Gong Xiaojin)
Original Assignee
浙江大学 (Zhejiang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Priority to CN2012101514249A
Publication of CN102692236A


Abstract

The invention discloses a visual odometry method based on an RGB-D camera. Existing visual odometry methods are all based on a monocular or binocular camera, and suffer either from the inability to obtain three-dimensional scene information or from complicated equipment that is inconvenient to install. The system used by the method comprises an RGB-D camera, a host computer, and an autonomous vehicle: the host computer is installed inside the autonomous vehicle, the RGB-D camera is fixed at the top of the periphery of the autonomous vehicle, and the RGB-D camera is connected to the host computer through a USB (universal serial bus) interface (or a 1394 interface). Starting from the aligned RGB-D image sequence obtained by the RGB-D camera, the method performs feature extraction, feature matching and tracking, and motion estimation on consecutive frames so as to obtain the distance and direction of the vehicle body's motion. The method has the advantages that the equipment is simple and convenient to install, the cost is low, the image-processing workload is small, accurate three-dimensional scene information can be obtained, and the motion-estimation results are accurate and reliable.

Description

Visual odometry method based on an RGB-D camera

TECHNICAL FIELD

[0001] The present invention belongs to the field of autonomous vehicle navigation, and in particular relates to a visual odometry method based on an RGB-D camera.

BACKGROUND

[0002] Odometry is crucial to vehicle navigation and positioning. Visual odometry is a method that estimates the distance and direction of vehicle motion from visual information. It avoids the measurement errors that wheel odometry suffers when the wheels slip, and it is free of errors caused by degraded sensor accuracy or inertial-navigation drift, so it is an effective complement to the traditional methods.

[0003] At present, all proposed visual odometry methods are based on monocular or binocular cameras. Relying on the image sequence obtained by a monocular or binocular camera, visual odometry derives the six-degree-of-freedom update of the vehicle body (position and attitude) through feature extraction, feature matching and tracking, and motion estimation.

[0004] Monocular visual odometry assumes a flat road surface. Calibration gives the correspondence between the world coordinates of points on the road surface and the corresponding image coordinates; the same points are then matched between the two frames captured before and after the vehicle moves, and a motion-estimation algorithm recovers the motion parameters of the vehicle body. The biggest limitation of monocular visual odometry is that it can only handle scene points lying on a single plane and cannot obtain three-dimensional scene information, so the algorithm fails when the road surface contains undulating or protruding parts.

[0005] Binocular visual odometry is built on stereo vision. It acquires a stereo image sequence with a binocular stereo camera and recovers the motion data of the vehicle through feature extraction, stereo matching of feature points, feature-point tracking and matching, coordinate transformation, and motion estimation. It is generally believed that a binocular stereo system achieves more reliable, accurate, and convenient results than a monocular system (Nistér, "Visual Odometry for Ground Vehicle Applications", Journal of Field Robotics, 2006). However, binocular visual odometry usually involves complex equipment that is inconvenient to install, its image-processing workload is heavy, and existing commercial binocular stereo cameras are expensive.

SUMMARY OF THE INVENTION

[0006] The object of the present invention is to address the deficiencies of the prior art by providing a visual odometry method based on an RGB-D camera.

[0007] The present invention comprises an RGB-D camera, a host computer, and an autonomous vehicle. The host computer is installed inside the autonomous vehicle; the RGB-D camera is fixed at the top of the periphery of the autonomous vehicle and is connected to the host computer through a USB interface (or a 1394 interface).

[0008] The technical solution adopted by the present invention to solve its technical problem comprises the following steps:

Step (1). Fix the RGB-D camera at the top of the periphery of the autonomous vehicle, in place of a traditional monocular or binocular camera, to perceive the environment, and output a color image I_C and a depth image I_D.

[0009] Step (2). Since the color camera and the depth camera inside the RGB-D camera are at different positions, align the given depth image I_D with the color image I_C.

[0010] The steps for aligning the depth image I_D with the color image I_C are as follows:

2-1. The homogeneous-coordinate form of the pinhole camera imaging model is:

m = K[R | t]M    (1)

where M = [X, Y, Z, 1]^T is a three-dimensional point in the world coordinate system (in homogeneous coordinates), m = [u, v, 1]^T is the projection of the point M on the two-dimensional image plane, K is the intrinsic parameter matrix of the pinhole camera, [R | t] is the extrinsic parameter matrix of the pinhole camera, R is the rotation matrix of the pinhole camera coordinate system relative to the world coordinate system, t is the translation vector of the pinhole camera coordinate system relative to the world coordinate system, X, Y, Z are the coordinates of the three-dimensional point M in the world coordinate system, and (u, v) are the coordinates of the projected point m on the two-dimensional image plane. (The equalities here hold up to the homogeneous scale factor.)

[0011] 2-2. Expanding equation (1) gives:

m = KR[I | -C]M = KR[X - C_1, Y - C_2, Z - C_3]^T = KR(M' - C)    (2)

where M' = [X, Y, Z]^T and t = -RC; C = [C_1, C_2, C_3]^T is another way of expressing the translation vector t, and denotes the displacement between the origin of the pinhole camera coordinate system and the origin of the world coordinate system; t is the pinhole camera translation vector and R is the pinhole camera rotation matrix.

[0012] 2-3. According to equation (2), taking the coordinate system of the depth camera as the world coordinate system, the projection model of the color camera is:

m_C = K_C R_D (M'_D - C_D)    (3)

Taking the coordinate system of the depth camera as the world coordinate system, the projection model of the depth camera is:

m_D = K_D M'_D    (4)

From equations (3) and (4):

m_C = K_C R_D (K_D^(-1) m_D - C_D), or equivalently m_C = K_C (R_D K_D^(-1) m_D + t_D)    (5)

where m_C is the projection, on the two-dimensional image plane of the color camera, of a three-dimensional point expressed in the depth camera coordinate system taken as the world coordinate system; m_D is the projection of the same three-dimensional point on the two-dimensional image plane of the depth camera; K_C is the intrinsic parameter matrix of the color camera; K_D is the intrinsic parameter matrix of the depth camera; [R_D | t_D] is the extrinsic parameter matrix of the color camera relative to the depth camera, R_D being the rotation matrix of the color camera coordinate system relative to the depth camera coordinate system and t_D the translation vector of the color camera coordinate system relative to the depth camera coordinate system; C_D is another way of expressing t_D, and denotes the displacement between the origin of the color camera coordinate system and the origin of the depth camera coordinate system; and M'_D denotes a three-dimensional point in the depth camera coordinate system taken as the world coordinate system.

[0013] 2-4. Using equation (4), compute for every pixel of the depth image I_D its three-dimensional coordinates in the depth camera coordinate system taken as the world coordinate system; at the same time, using equation (5), compute for every pixel of the depth image I_D the corresponding projection-plane coordinates in the color image I_C. This yields a depth image aligned with the color image I_C; the aligned depth image, which carries the three-dimensional coordinate information, is denoted I'_D. (A sketch of this alignment follows.)
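As a minimal sketch of step 2-4 (not part of the patent; the function name and the assumption of metric depth values are illustrative), the following back-projects each depth pixel with equation (4), moves it into the color camera frame, and re-projects it with equation (5):

```python
import numpy as np

def align_depth_to_color(depth, K_D, K_C, R_D, t_D):
    """Warp a depth image onto the color camera's image plane.

    depth : (H, W) array of metric depths Z_D, 0 where invalid.
    K_D, K_C : 3x3 intrinsic matrices of the depth and color cameras.
    R_D, t_D : extrinsics [R_D | t_D] of the color camera relative
               to the depth camera, as in equation (5).
    Returns an (H, W) depth image aligned with the color image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixels

    d = depth.ravel()
    ok = d > 0                                    # skip invalid depth readings

    # Equation (4) inverted: back-project each valid pixel, scaled by its depth.
    pts_d = (np.linalg.inv(K_D) @ pix[:, ok]) * d[ok]

    # Move the points into the color camera frame and project them, equation (5).
    pts_c = R_D @ pts_d + t_D.reshape(3, 1)
    proj = K_C @ pts_c
    z = pts_c[2]
    front = z > 0                                 # keep points in front of the camera
    uc = np.round(proj[0, front] / z[front]).astype(int)
    vc = np.round(proj[1, front] / z[front]).astype(int)

    aligned = np.zeros_like(depth)
    keep = (uc >= 0) & (uc < W) & (vc >= 0) & (vc < H)
    aligned[vc[keep], uc[keep]] = z[front][keep]  # depth as seen from the color camera
    return aligned
```

Occlusions (several depth pixels landing on the same color pixel) are ignored here; a z-buffer test on the stored depth would resolve them.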

[0014] Step (3). Remove invalid and unstable regions from the depth image I'_D, and at the same time apply an image-smoothing operation to the color image I_C, obtaining an accurate and reliable two-dimensional RGB-D image I_RGBD.
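The patent does not specify how invalid regions are detected or which smoothing filter is used; one plausible sketch is a range test plus a median filter for depth and a Gaussian blur for color (all choices and thresholds below are assumptions):

```python
import cv2
import numpy as np

def clean_rgbd(color, depth, d_min=0.4, d_max=8.0):
    """Drop invalid/unstable depth and smooth the color image (step 3).

    d_min, d_max are illustrative range limits in meters.
    """
    depth = depth.astype(np.float32).copy()
    depth[(depth < d_min) | (depth > d_max)] = 0.0   # reject out-of-range depth
    depth = cv2.medianBlur(depth, 5)                 # suppress depth speckle noise
    color = cv2.GaussianBlur(color, (5, 5), 0)       # smooth the color image
    return color, depth
```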

[0015] Step (4). With period T, uniformly capture the color image I_C and the depth image I_D, and convert each captured pair into a two-dimensional RGB-D image according to steps (2) and (3), thereby obtaining a two-dimensional RGB-D image sequence {I_RGBD} that is continuous along the time axis.

[0016] Step (5). Following the order of the time axis, successively select two consecutive frames I_RGBD^k and I_RGBD^(k+1) from the two-dimensional RGB-D image sequence, and perform feature-point extraction and description on each of the two frames, obtaining feature point sets {F^k} and {F^(k+1)}; all feature points in the feature point sets have the same dimension n, where n is a positive integer.
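The patent leaves the detector and descriptor open (see advantage 4 above); as one concrete stand-in, a minimal ORB-based sketch using OpenCV:

```python
import cv2

def extract_features(color_image):
    """Detect and describe feature points in one frame (step 5).

    ORB is an illustrative choice; any detector/descriptor pair that
    produces fixed-dimension feature vectors fits the method.
    """
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```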

[0017] Step (6). For each feature point F^k in the feature point set {F^k}, find in the feature point set {F^(k+1)} the feature point F^(k+1) whose feature vector is nearest to that of F^k.

If the distance between the feature vectors of F^k and F^(k+1) is smaller than a threshold T_m, record F^k and F^(k+1) as a matched pair of feature points, denoted (F^k, F^(k+1));

if the distance between the feature vectors of F^k and F^(k+1) is greater than or equal to the threshold T_m, then the feature point set {F^(k+1)} contains no feature point matching F^k.

A feature point pair represents the two different projections of the same point of the three-dimensional scene in two consecutive frames; step (6) yields a set of feature point pairs {(F^k, F^(k+1))}, as illustrated in the sketch below.
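A minimal nearest-neighbour matcher with a distance threshold, as described in step (6). The threshold value and the brute-force search are illustrative, not prescribed by the patent; for binary descriptors such as ORB's, Hamming distance would replace the Euclidean norm used here:

```python
import numpy as np

def match_features(desc_k, desc_k1, t_m=64.0):
    """Match frame-k descriptors to their nearest neighbours in frame k+1.

    desc_k, desc_k1 : (N, n) and (M, n) descriptor arrays.
    t_m : the matching threshold T_m (placeholder value).
    Returns a list of index pairs (i, j).
    """
    pairs = []
    a = desc_k.astype(np.float64)
    b = desc_k1.astype(np.float64)
    for i, d in enumerate(a):
        dist = np.linalg.norm(b - d, axis=1)  # distance to every frame-(k+1) point
        j = int(np.argmin(dist))              # nearest neighbour
        if dist[j] < t_m:                     # accept only sufficiently close pairs
            pairs.append((i, j))
    return pairs
```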

[0018] Step (7). Screen the obtained set of feature point pairs {(F^k, F^(k+1))} to obtain better three-dimensional point pairs for determining the motion parameters.

[0019] The screening steps for the set of feature point pairs are as follows:

7-1. Randomly sample the set of feature point pairs {(F^k, F^(k+1))} many times; each time, randomly draw a feature point pairs from the set, and compute the motion parameters between the two frames, namely the rotation matrix R and the translation vector t, from these a pairs using a rigid-body motion-parameter estimation method, where a is a positive integer (a combined sketch of steps 7-1 to 7-3 is given after step 7-3).

[0020] 7-2. For each feature point pair (F^k, F^(k+1)) in the set {(F^k, F^(k+1))}, compute the three-dimensional coordinates (F^(k+1))' obtained by applying the rotation matrix R and the translation vector t to the feature point F^k of the previous frame I_RGBD^k.

If the three-dimensional distance between (F^(k+1))' and F^(k+1) is smaller than a threshold T_M, the feature point pair is classified as an inlier;

if the three-dimensional distance between (F^(k+1))' and F^(k+1) is greater than or equal to the threshold T_M, the feature point pair is classified as an outlier.

[0021] 7-3. After b random-sampling trials, find the trial with the largest number of inliers, where b is a positive integer; take the feature point pairs drawn in that trial as the final screened three-dimensional point pairs for determining the motion parameters, and take the motion parameters estimated in that trial as the finally determined motion parameters.
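A minimal sketch of the screening loop of steps 7-1 to 7-3 (not from the patent). It uses an SVD-based rigid-motion solver, one of the estimation methods named later in the description; the sample size a = 3, the trial count b, and the threshold T_M are illustrative parameters:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares R, t with Q ≈ R P + t (SVD / Kabsch method).

    P, Q : (m, 3) arrays of corresponding 3D points.
    """
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

def ransac_motion(P, Q, a=3, b=200, t_m=0.05, rng=None):
    """Steps 7-1 to 7-3: keep the trial with the most inliers.

    P, Q : (N, 3) matched 3D points from frames k and k+1.
    a    : point pairs drawn per trial; b : number of trials;
    t_m  : inlier threshold T_M on 3D distance (placeholder value).
    """
    rng = rng or np.random.default_rng()
    best = (None, None, -1)
    for _ in range(b):
        idx = rng.choice(len(P), size=a, replace=False)   # 7-1: random sample
        R, t = estimate_rigid_motion(P[idx], Q[idx])      # 7-1: motion estimate
        pred = P @ R.T + t                                # 7-2: transform frame-k points
        inliers = np.linalg.norm(pred - Q, axis=1) < t_m  # 7-2: inlier test
        if inliers.sum() > best[2]:                       # 7-3: keep the best trial
            best = (R, t, inliers.sum())
    return best[0], best[1]
```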

[0022] Compared with the prior art, the present invention has the following beneficial effects:

1) The equipment of the invention is simple, convenient to install, and low in cost;

2) The invention requires no stereo matching between left and right images, so the image-processing workload is small;
3) The invention can obtain accurate three-dimensional scene information, and the motion-estimation results are accurate and reliable;

4) The invention is highly flexible: different inter-frame matching algorithms and rigid-body motion-estimation methods can be chosen according to different requirements.

BRIEF DESCRIPTION OF THE DRAWINGS

[0023] Figure 1 is the basic structural diagram of the present invention;

Figure 2 is the workflow diagram of the present invention.

DETAILED DESCRIPTION

[0024] The method of the present invention is further described below with reference to the accompanying drawings.

[0025] As shown in Figure 1, the present invention consists of three parts: an RGB-D camera, a host computer, and an autonomous vehicle. The host computer is installed inside the autonomous vehicle; the RGB-D camera is fixed at the top of the autonomous vehicle and is connected to the host computer through a USB interface (or a 1394 interface).

[0026] As shown in Figure 2, the specific steps of the method of the present invention are as follows:

Step (1). Fix the RGB-D camera at the top of the periphery of the autonomous vehicle, in place of a traditional monocular or binocular camera, to perceive the environment, and output a color image I_C and a depth image I_D.

[0027] Step (2). Since the color camera and the depth camera inside the RGB-D camera are at different positions, the images they form are projections of the three-dimensional world from two different viewpoints, which is inconvenient for later processing; therefore the given depth image I_D is aligned with the color image I_C. The effect of the alignment is equivalent to placing a depth camera at the position of the color camera and imaging the three-dimensional scene with both simultaneously, so that every pixel of the resulting two-dimensional image contains both color information and depth information.

[0028] The steps for aligning the depth image I_D with the color image I_C are as follows:

2-1. The homogeneous-coordinate form of the pinhole camera imaging model is:

m = K[R | t]M    (1)

where M = [X, Y, Z, 1]^T is a three-dimensional point in the world coordinate system (in homogeneous coordinates), K is the intrinsic parameter matrix of the pinhole camera, [R | t] is the extrinsic parameter matrix of the pinhole camera, m = [u, v, 1]^T is the projection of the point M on the two-dimensional image plane, (u, v) are the coordinates of the projected point m on the two-dimensional image plane, and X, Y, Z are the coordinates of the three-dimensional point M in the world coordinate system.

[0029] 2-2. Expanding equation (1) gives:

m = KR[I | -C]M = KR[X - C_1, Y - C_2, Z - C_3]^T = KR(M' - C)    (2)

where M' = [X, Y, Z]^T and t = -RC; C = [C_1, C_2, C_3]^T is another way of expressing the translation vector t, and denotes the displacement between the origin of the pinhole camera coordinate system and the origin of the world coordinate system; t is the pinhole camera translation vector and R is the pinhole camera rotation matrix.

[0030] 2-3. According to equation (2), taking the coordinate system of the depth camera as the world coordinate system, the projection model of the color camera is:

m_C = K_C R_D (M'_D - C_D)    (3)

Taking the coordinate system of the depth camera as the world coordinate system, the projection model of the depth camera is:

m_D = K_D M'_D    (4)

From equations (3) and (4):

m_C = K_C R_D (K_D^(-1) m_D - C_D), or equivalently m_C = K_C (R_D K_D^(-1) m_D + t_D)    (5)

where m_C is the projection, on the two-dimensional image plane of the color camera, of a three-dimensional point expressed in the depth camera coordinate system taken as the world coordinate system; m_D is the projection of the same point on the two-dimensional image plane of the depth camera; K_C is the intrinsic parameter matrix of the color camera; K_D is the intrinsic parameter matrix of the depth camera; [R_D | t_D] is the extrinsic parameter matrix of the color camera relative to the depth camera, R_D being the rotation matrix of the color camera coordinate system relative to the depth camera coordinate system and t_D the translation vector of the color camera coordinate system relative to the depth camera coordinate system; C_D is another way of expressing t_D, and denotes the displacement between the origin of the color camera coordinate system and the origin of the depth camera coordinate system; and M'_D denotes a three-dimensional point in the depth camera coordinate system taken as the world coordinate system.

[0031] 2-4. According to equation (5), align the depth image I_D onto the projection plane of the color image I_C, obtaining the aligned depth image; at the same time, according to equation (4), compute for every pixel of the aligned depth image its three-dimensional coordinates in the world coordinate system; the aligned depth image, which carries the three-dimensional coordinate information, is denoted I'_D.

[0032] Step (3). Remove invalid and unstable regions from the depth image I'_D, and at the same time apply an image-smoothing operation to the color image I_C, obtaining an accurate and reliable two-dimensional RGB-D image I_RGBD.

[0033] Step (4). With period T, uniformly capture the color image I_C and the depth image I_D, and convert each captured pair into a two-dimensional RGB-D image according to steps (2) and (3), thereby obtaining a two-dimensional RGB-D image sequence {I_RGBD} that is continuous along the time axis.

[0034] Step (5). Select two consecutive frames I_RGBD^k and I_RGBD^(k+1) from the two-dimensional RGB-D image sequence {I_RGBD}, and perform feature-point extraction and description on each of the two frames, obtaining feature point sets {F^k} and {F^(k+1)}; all feature points in the feature point sets have the same dimension n, where n is a positive integer.

[0035] Step (6). For each feature point F^k in the feature point set {F^k}, find in the feature point set {F^(k+1)} the feature point whose feature vector is nearest to that of F^k, denoted F^(k+1).

If the distance between the feature vectors of F^k and F^(k+1) is smaller than the threshold T_m, record F^k and F^(k+1) as a matched pair of feature points, denoted (F^k, F^(k+1));

if the distance between the feature vectors of F^k and F^(k+1) is greater than or equal to the threshold T_m, then the feature point set {F^(k+1)} contains no feature point matching F^k.

A feature point pair (F^k, F^(k+1)) represents the two different projections of the same point of the three-dimensional scene in two consecutive frames; this step yields a set of feature point pairs {(F^k, F^(k+1))}.

[0036] Step (7). Estimating rigid-body motion parameters requires at least two pairs of three-dimensional points matched between consecutive frames (common estimation methods include the singular value decomposition method, the orthogonal decomposition method, the unit-quaternion method, and the "8-point algorithm" based on two-dimensional feature points). The set of matched feature point pairs {(F^k, F^(k+1))} obtained through step (6) contains a large number of three-dimensional point pairs, among which there are also some falsely matched pairs. This step screens the obtained set of feature point pairs {(F^k, F^(k+1))}: based on the idea of random sample consensus (RANSAC), it obtains better three-dimensional point pairs for determining the motion parameters and removes the false matches that are present. (A sketch of the unit-quaternion alternative follows.)
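The SVD-based solver was sketched after step 7-3 above; for the unit-quaternion method also named in this paragraph, here is a minimal illustrative solver (Horn's closed-form solution; the implementation is ours, not the patent's own code):

```python
import numpy as np

def rigid_motion_quaternion(P, Q):
    """Horn's unit-quaternion solution for R, t with Q ≈ R P + t.

    P, Q : (m, 3) arrays of corresponding 3D points.
    """
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    S = (P - p0).T @ (Q - q0)          # cross-covariance matrix, S[a, b] = sum p_a q_b
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx       ],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz       ],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy       ],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz ]])
    w, V = np.linalg.eigh(N)           # symmetric matrix: eigenvalues ascending
    qw, qx, qy, qz = V[:, -1]          # eigenvector of the largest eigenvalue
    R = np.array([                     # rotation matrix from the unit quaternion
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)    ],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)    ],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)]])
    t = q0 - R @ p0
    return R, t
```

Either this solver or the SVD one can be plugged into the RANSAC loop sketched earlier, which is exactly the flexibility claimed in advantage 4.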

[0037] The screening steps for the set of feature point pairs {(F^k, F^(k+1))} are as follows:

7-1. Randomly sample the set of feature point pairs {(F^k, F^(k+1))} many times; each time, randomly draw a matched point pairs from the set, and compute the motion parameters between the two frames, namely the rotation matrix R and the translation vector t, from these a pairs using a rigid-body motion-parameter estimation method, where a is a positive integer.

[0038] 7-2. For each feature point pair (F^k, F^(k+1)) in the set {(F^k, F^(k+1))}, compute the three-dimensional coordinates (F^(k+1))' obtained by applying the rotation matrix R and the translation vector t to the feature point F^k of the previous frame.

If the three-dimensional distance between (F^(k+1))' and F^(k+1) is smaller than the threshold T_M, the feature point pair is classified as an inlier;

if the three-dimensional distance between (F^(k+1))' and F^(k+1) is greater than or equal to the threshold T_M, the feature point pair is classified as an outlier.

[0039] 7-3. Because the model determined by correctly matched point pairs approximates the true model, most of the data will be inliers under it; the model determined by falsely matched pairs is fitted to disordered data, so most of the data will be outliers under it. After b random-sampling trials, find the trial with the largest number of inliers, where b is a positive integer; take the feature point pairs drawn in that trial as the final screened three-dimensional point pairs for determining the motion parameters, and take the motion parameters estimated in that trial as the finally determined motion parameters.

Claims (1)

1. A visual odometry method based on an RGB-D camera, characterized by comprising the following steps:

Step (1). Fix the RGB-D camera at the top of the periphery of the autonomous vehicle, in place of a traditional monocular or binocular camera, to perceive the environment, and output a color image I_C and a depth image I_D;

Step (2). Since the color camera and the depth camera inside the RGB-D camera are at different positions, align the given depth image I_D with the color image I_C;

the steps for aligning the depth image I_D with the color image I_C are as follows:

2-1. The homogeneous-coordinate form of the pinhole camera imaging model is:

m = K[R | t]M    (1)

where M = [X, Y, Z, 1]^T is a three-dimensional point in the world coordinate system, m = [u, v, 1]^T is the projection of the point M on the two-dimensional image plane, K is the intrinsic parameter matrix of the pinhole camera, [R | t] is the extrinsic parameter matrix of the pinhole camera, R is the rotation matrix of the pinhole camera coordinate system relative to the world coordinate system, t is the translation vector of the pinhole camera coordinate system relative to the world coordinate system, X, Y, Z are the coordinates of the three-dimensional point M in the world coordinate system, and (u, v) are the coordinates of the projected point m on the two-dimensional image plane;

2-2. Expanding equation (1) gives:

m = KR[I | -C]M = KR[X - C_1, Y - C_2, Z - C_3]^T = KR(M' - C)    (2)

where M' = [X, Y, Z]^T and t = -RC; C = [C_1, C_2, C_3]^T is another way of expressing the translation vector t, and denotes the displacement between the origin of the pinhole camera coordinate system and the origin of the world coordinate system; t is the pinhole camera translation vector and R is the pinhole camera rotation matrix;

2-3. According to equation (2), taking the coordinate system of the depth camera as the world coordinate system, the projection model of the color camera is:

m_C = K_C R_D (M'_D - C_D)    (3)

taking the coordinate system of the depth camera as the world coordinate system, the projection model of the depth camera is:

m_D = K_D M'_D    (4)

from equations (3) and (4):

m_C = K_C R_D (K_D^(-1) m_D - C_D), or equivalently m_C = K_C (R_D K_D^(-1) m_D + t_D)    (5)

where m_C is the projection, on the two-dimensional image plane of the color camera, of a three-dimensional point expressed in the depth camera coordinate system taken as the world coordinate system; m_D is the projection of the same point on the two-dimensional image plane of the depth camera; K_C is the intrinsic parameter matrix of the color camera; K_D is the intrinsic parameter matrix of the depth camera; [R_D | t_D] is the extrinsic parameter matrix of the color camera relative to the depth camera; R_D is the rotation matrix of the color camera coordinate system relative to the depth camera coordinate system, and t_D is the translation vector of the color camera coordinate system relative to the depth camera coordinate system; C_D is another way of expressing t_D, and denotes the displacement between the origin of the color camera coordinate system and the origin of the depth camera coordinate system; M'_D denotes a three-dimensional point in the depth camera coordinate system taken as the world coordinate system;

2-4. Using equation (4), compute for every pixel of the depth image I_D its three-dimensional coordinates in the depth camera coordinate system taken as the world coordinate system; at the same time, using equation (5), compute for every pixel of the depth image I_D the corresponding projection-plane coordinates in the color image I_C, obtaining a depth image aligned with the color image I_C; the aligned depth image, which carries the three-dimensional coordinate information, is denoted I'_D;

Step (3). Remove invalid and unstable regions from the depth image I'_D, and at the same time apply an image-smoothing operation to the color image I_C, obtaining an accurate and reliable two-dimensional RGB-D image I_RGBD;

Step (4). With period T, uniformly capture the color image I_C and the depth image I_D, and convert each captured pair into a two-dimensional RGB-D image I_RGBD according to steps (2) and (3), thereby obtaining a two-dimensional RGB-D image sequence {I_RGBD} that is continuous along the time axis;

Step (5). Following the order of the time axis, successively select two consecutive frames I_RGBD^k and I_RGBD^(k+1) from the two-dimensional RGB-D image sequence, and perform feature-point extraction and description on each of the two frames, obtaining feature point sets {F^k} and {F^(k+1)}; all feature points in the feature point sets have the same dimension n, where n is a positive integer;

Step (6). For each feature point F^k in the feature point set {F^k}, find in the feature point set {F^(k+1)} the feature point F^(k+1) whose feature vector is nearest to that of F^k;

if the distance between the feature vectors of F^k and F^(k+1) is smaller than a threshold T_m, record F^k and F^(k+1) as a matched pair of feature points, denoted (F^k, F^(k+1));

if the distance between the feature vectors of F^k and F^(k+1) is greater than or equal to the threshold T_m, then the feature point set {F^(k+1)} contains no feature point matching F^k;

a feature point pair (F^k, F^(k+1)) represents the two different projections of the same point of the three-dimensional scene in two consecutive frames; step (6) yields a set of feature point pairs {(F^k, F^(k+1))};

Step (7). Screen the obtained set of feature point pairs {(F^k, F^(k+1))} to obtain better three-dimensional point pairs for determining the motion parameters;

the screening steps for the set of feature point pairs {(F^k, F^(k+1))} are as follows:

7-1. Randomly sample the set of feature point pairs {(F^k, F^(k+1))} many times; each time, randomly draw a feature point pairs from the set, and compute the motion parameters between the two frames, namely the rotation matrix R and the translation vector t, from these a pairs using a rigid-body motion-parameter estimation method, where a is a positive integer;

7-2. For each feature point pair (F^k, F^(k+1)) in the set, compute the three-dimensional coordinates (F^(k+1))' obtained by applying the rotation matrix R and the translation vector t to the feature point F^k of the previous frame I_RGBD^k; if the three-dimensional distance between (F^(k+1))' and F^(k+1) is smaller than a threshold T_M, the feature point pair is classified as an inlier; if the three-dimensional distance between (F^(k+1))' and F^(k+1) is greater than or equal to the threshold T_M, the feature point pair is classified as an outlier;

7-3. After b random-sampling trials, find the trial with the largest number of inliers, where b is a positive integer; take the feature point pairs drawn in that trial as the final screened three-dimensional point pairs for determining the motion parameters, and take the motion parameters estimated in that trial as the finally determined motion parameters.
CN2012101514249A 2012-05-16 2012-05-16 Visual odometry method based on an RGB-D camera CN102692236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101514249A 2012-05-16 2012-05-16 Visual odometry method based on an RGB-D camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012101514249A 2012-05-16 2012-05-16 Visual odometry method based on an RGB-D camera

Publications (1)

Publication Number Publication Date
CN102692236A true CN102692236A (en) 2012-09-26

Family

ID=46857843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101514249A Visual odometry method based on an RGB-D camera 2012-05-16 2012-05-16

Country Status (1)

Country Link
CN (1) CN102692236A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038678A (en) * 2007-04-19 2007-09-19 北京理工大学 Smooth symmetrical surface rebuilding method based on single image
CN101082988A (en) * 2007-06-19 2007-12-05 北京航空航天大学 Automatic deepness image registration method
KR100951309B1 (en) * 2008-07-14 2010-04-05 성균관대학교산학협력단 New Calibration Method of Multi-view Camera for a Optical Motion Capture System
CN101315661A (en) * 2008-07-18 2008-12-03 东南大学 Fast three-dimensional face recognition method for reducing expression influence
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
CN102176243A (en) * 2010-12-30 2011-09-07 浙江理工大学 Target ranging method based on visible light and infrared camera
CN102435172A (en) * 2011-09-02 2012-05-02 北京邮电大学 Visual locating system of spherical robot and visual locating method thereof

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ALBERT S. HUANG: "Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera", IEEE *
CEDRIC AUDRAS: "Real-time dense appearance-based SLAM for RGB-D sensors", Proceedings of Australasian Conference on Robotics and Automation *
FRANK STEINBRUCKER: "Real-Time Visual Odometry from Dense RGB-D Images", IEEE International Conference on Computer Vision Workshops *
MAXIME MEILLAND et al.: "A Spherical Robot-Centered Representation for Urban Navigation", IEEE *
RENE WAGNER: "Rapid Development of Manifold-Based Graph Optimization Systems for", IEEE *
PENG Bo (彭勃): "Research on Key Technologies and Applications of Stereo Visual Odometry" (立体视觉里程计关键技术与应用研究), China Master's Theses Full-text Database, Information Science and Technology *
LI Zhi (李智) et al.: "Stereo Visual Odometry Based on Disparity Space in Dynamic Scenes" (动态场景下基于视差空间的立体视觉里程计), Journal of Zhejiang University (Engineering Science) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933755A (en) * 2014-03-18 2015-09-23 华为技术有限公司 Static object reconstruction method and system
CN104933755B (en) * 2014-03-18 2017-11-28 华为技术有限公司 A kind of stationary body method for reconstructing and system
CN103926933A (en) * 2014-03-29 2014-07-16 北京航空航天大学 Indoor simultaneous locating and environment modeling method for unmanned aerial vehicle
CN104121902A (en) * 2014-06-28 2014-10-29 福州大学 Implementation method of indoor robot visual odometer based on Xtion camera
CN104121902B (en) * 2014-06-28 2017-01-25 福州大学 Implementation method of indoor robot visual odometer based on Xtion camera
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN107569181A (en) * 2016-07-04 2018-01-12 九阳股份有限公司 A kind of Intelligent cleaning robot and cleaning method
CN106254854B (en) * 2016-08-19 2018-12-25 深圳奥比中光科技有限公司 Preparation method, the apparatus and system of 3-D image
CN106254854A (en) * 2016-08-19 2016-12-21 深圳奥比中光科技有限公司 The preparation method of 3-D view, Apparatus and system
CN106556412A (en) * 2016-11-01 2017-04-05 哈尔滨工程大学 The RGB D visual odometry methods of surface constraints are considered under a kind of indoor environment
CN106780601A (en) * 2016-12-01 2017-05-31 北京未动科技有限公司 A kind of locus method for tracing, device and smart machine
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 A kind of hand-held quick three-dimensional scanning system and method
CN107103626A (en) * 2017-02-17 2017-08-29 杭州电子科技大学 A kind of scene reconstruction method based on smart mobile phone

Similar Documents

Publication Publication Date Title
Forster et al. SVO: Fast semi-direct monocular visual odometry
US9443350B2 (en) Real-time 3D reconstruction with power efficient depth sensor usage
CN101019096B (en) Apparatus and method for detecting a pointer relative to a touch surface
US20120237114A1 (en) Method and apparatus for feature-based stereo matching
JP5660648B2 (en) Online reference generation and tracking in multi-user augmented reality
Tanskanen et al. Live metric 3d reconstruction on mobile phones
Ke et al. Transforming camera geometry to a virtual downward-looking camera: Robust ego-motion estimation and ground-layer detection
Heng et al. Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry
US9177381B2 (en) Depth estimate determination, systems and methods
Hee Lee et al. Motion estimation for self-driving cars with a generalized camera
JP4814669B2 (en) 3D coordinate acquisition device
CN100489833C (en) Position posture measuring method, position posture measuring device
JP6198230B2 (en) Head posture tracking using depth camera
US20110249117A1 (en) Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program
Sola et al. Fusing monocular information in multicamera SLAM
Kneip et al. Robust real-time visual odometry with a single camera and an IMU
JP2016516977A (en) Generating a 3D model of the environment
Bazin et al. Motion estimation by decoupling rotation and translation in catadioptric vision
EP1741059B1 (en) Method for determining the position of a marker in an augmented reality system
Quiroga et al. Dense semi-rigid scene flow estimation from rgbd images
US9269003B2 (en) Diminished and mediated reality effects from reconstruction
EP2430837A1 (en) Image processing method for determining depth information from at least two input images recorded using a stereo camera system
CN101706957B (en) Self-calibration method for binocular stereo vision device
Ligorio et al. Extended Kalman filter-based methods for pose estimation using visual, inertial and magnetic sensors: Comparative analysis and performance evaluation
CN104520898A (en) A multi-frame image calibrator

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C05 Deemed withdrawal (patent law before 1993)