CN102221358A - Monocular Vision Positioning Method Based on Inverse Perspective Projection Transformation - Google Patents
- Publication number
- CN102221358A (application CN201110070941A; granted as CN102221358B)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- wheeled vehicle
- perspective projection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides a monocular vision positioning method based on inverse perspective projection transformation. The adopted technical scheme is as follows: an attitude sensor is fixed together with a camera and mounted on a wheeled vehicle, and the images captured while driving are processed in three steps. First, inverse perspective projection transformation is applied to the image sequence; second, the transformation matrix between adjacent images is computed; third, the driving trajectory curve of the wheeled vehicle is determined. The beneficial effects of the invention are that the real-time attitude information obtained by the attitude sensor assists the positioning of the wheeled vehicle, yielding high-precision positioning results, and that applying inverse perspective projection transformation to the images eliminates the perspective effect, further improving the positioning accuracy of the wheeled vehicle.
Description
Technical Field
The invention relates to the technical field of videometrics and image processing, and in particular to a method for monocular vision positioning of a motion platform using an inverse perspective projection transformation algorithm.
Background
Autonomous driving has been a hot research topic at home and abroad in recent years, and one of its key problems is how to achieve real-time, accurate self-positioning of a wheeled vehicle. At present, the common positioning solution is an integrated navigation system combining GPS (Global Positioning System) with an IMU (Inertial Measurement Unit), but such systems are generally expensive, and when the environment prevents the motion platform from receiving GPS signals, this solution cannot perform its positioning function.
Existing visual positioning methods fall mainly into two categories: stereo vision positioning and monocular vision positioning. Stereo vision positioning detects environmental feature points in three-dimensional space and estimates the motion of the wheeled vehicle on that basis. It has two disadvantages: first, the algorithm is complex and time-consuming, making real-time requirements hard to meet; second, when the background lacks distinct texture features, the limited number of extracted feature points leads to large measurement errors. Stereo vision positioning therefore still has some way to go before reaching true engineering application.
By contrast, monocular vision positioning assumes a relatively flat road surface and computes the vehicle trajectory by solving the simple displacement relationship between images in a sequence. The algorithm is simple, fast, and easy to install. However, traditional monocular vision positioning requires that the road surface be flat and that the camera attitude not change as the wheeled vehicle moves. Lü Qiang et al. studied monocular vision positioning in "Implementation of a monocular visual odometer based on SIFT feature extraction in a navigation system" (Chinese Journal of Sensors and Actuators, Vol. 20, No. 5, pp. 1148-1152). Their method performs SIFT feature matching directly on the perspective images captured by the camera and computes the motion state of the wheeled vehicle from the matched points using a derived theoretical model. Because a perspective image renders near objects large and far objects small, the farther the world-coordinate point corresponding to an extracted image feature is from the vehicle, the larger the resulting error in solving the image mapping relationship. If the texture near the vehicle is uniform and feature points are hard to extract there, the algorithm suffers large measurement errors or even fails. Since that theoretical model involves approximations, does not account for the camera attitude changes that inevitably occur as the camera moves with the vehicle, and matching features directly on perspective images has inherent limitations, the final positioning accuracy is unsatisfactory.
Summary of the Invention
The technical problem solved by the invention is as follows: addressing the deficiencies of existing visual positioning technology, a monocular vision positioning method based on inverse perspective projection transformation is proposed. Compared with stereo vision positioning, the algorithm of the invention is simple and easy to implement; compared with existing monocular vision positioning methods, the invention achieves higher positioning accuracy.
The specific technical scheme adopted by the invention is as follows:
The attitude sensor is fixed together with the camera, and the camera is positioned so that it can capture the road surface in a chosen direction around the vehicle body. The camera frame rate and resolution are set as needed; the frame rate must guarantee that, during normal driving of the wheeled vehicle, the fields of view of any two adjacent images captured by the camera overlap. Assume the starting moment of recording the vehicle trajectory curve is t1, at which the camera captures image P1; at time ti (i = 1, 2, ..., n, where n is the total number of images), the camera captures the i-th image Pi, and the attitude sensor reports the camera attitude angle ai. The following steps are then carried out:
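The patent only states that consecutive views must overlap, without giving numbers. A minimal back-of-the-envelope sketch of the resulting frame-rate constraint (the function name and the 50 % default overlap are our assumptions, not values from the patent):

```python
def min_frame_rate(speed_mps, fov_depth_m, overlap_frac=0.5):
    """Lowest frame rate at which two consecutive images still share
    at least `overlap_frac` of the longitudinal field of view."""
    if not 0.0 < overlap_frac < 1.0:
        raise ValueError("overlap_frac must lie in (0, 1)")
    # The vehicle may advance at most this far between frames:
    max_advance_m = fov_depth_m * (1.0 - overlap_frac)
    return speed_mps / max_advance_m
```

For example, at 10 m/s with 5 m of road in view and a required 50 % overlap, at least 4 frames per second are needed.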
Step 1: apply inverse perspective projection transformation to the image sequence.
At time ti, the camera extrinsic parameter matrix Ai is obtained from the camera attitude angle ai, and combined with the camera intrinsic parameter matrix it yields the inverse perspective projection transformation matrix Bi of the image. Using Bi, the image Pi captured at time ti is transformed to obtain the top-down view P′i of the road surface.
The top-down views of the road surface at all times form the sequence P′1, P′2, ..., P′n.
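The patent derives Bi from the attitude angle and the intrinsic matrix but does not print the matrices in this excerpt. A minimal pinhole-camera sketch of how such a ground-plane homography can be built and inverted (the axis conventions, the nadir reference orientation, and reducing the attitude to a single pitch angle are our simplifications):

```python
import numpy as np

def rot_x(a):
    """Rotation about the camera x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def ipm_homography(K, pitch_rad, height_m):
    """Homography taking ground-plane points (X, Y, Z=0) to pixels for a
    camera `height_m` above the road, tilted forward by `pitch_rad` from
    straight down.  Its inverse realises the inverse perspective mapping."""
    R0 = np.diag([1.0, -1.0, -1.0])      # straight-down (nadir) orientation
    R = rot_x(pitch_rad) @ R0            # tilt forward by the measured pitch
    C = np.array([0.0, 0.0, height_m])   # camera centre above the origin
    t = -R @ C
    # For the plane Z = 0 the projection collapses to a 3x3 homography
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def pixel_to_ground(H, u, v):
    """Inverse perspective mapping of a single pixel to road coordinates."""
    g = np.linalg.solve(H, np.array([u, v, 1.0]))
    return g[:2] / g[2]
```

In practice the full image would be resampled through the inverse homography (e.g. with a warp function) rather than pixel by pixel; the per-pixel form above just makes the mapping explicit.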
Step 2: compute the transformation matrix between adjacent images.
For every pair of adjacent images P′q and P′q+1 in the top-down view sequence P′1, P′2, ..., P′n (q = 1, 2, ..., n−1), the following processing is performed:
Step (1): extract feature points.
The SURF feature descriptor is used to extract features from the adjacent images P′q and P′q+1; after extraction, the feature point set of image P′q is Fq and that of P′q+1 is Fq+1. The number of feature points in each set depends on the image resolution, the texture complexity, and the parameter settings of the SURF operator, and can be chosen as needed.
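SURF itself is covered by the cited literature (and, in OpenCV, lives in the non-free contrib modules), but the matching stage that follows extraction is descriptor-agnostic and can be sketched directly. A minimal brute-force matcher with a ratio test (the function name and the 0.8 ratio are our assumptions, not details from the patent):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test:
    descriptor i of image A matches its nearest neighbour j in image B
    only if the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test discards ambiguous correspondences, which matters here because repetitive road texture produces many near-duplicate descriptors.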
Step (2): obtain the parameters of the rigid-body transformation model between the images.
For the adjacent images P′q and P′q+1, a rigid-body transformation model describes the relationship between the two images:

x′k = xj·cos θq − yj·sin θq + dxq
y′k = xj·sin θq + yj·cos θq + dyq

where {(xj, yj)}, j = 1...m, is the pixel coordinate set of the feature point set Fq of image P′q, and {(x′k, y′k)}, k = 1...m′, is the pixel coordinate set of the feature point set Fq+1 of image P′q+1, with m and m′ the numbers of feature points in Fq and Fq+1 respectively; θq is the rotation angle of image P′q+1 relative to P′q, and (dxq, dyq) is the pixel-coordinate translation of P′q+1 relative to P′q.
Using the feature pixel coordinate sets of images P′q and P′q+1, the rigid-body transformation model is solved with the RANSAC estimation algorithm, yielding the model parameters θq and (dxq, dyq).
Step 3: determine the driving trajectory curve of the wheeled vehicle.
From the obtained rigid-body model parameters θq and (dxq, dyq), the moving distance of the wheeled vehicle between adjacent times tq and tq+1 is computed as

dLq = (D/M)·√(dxq² + dyq²),  q = 1, 2, ..., n−1,

where M is the vertical resolution of the top-down views (the same for every image in the sequence) and D is the actual longitudinal field-of-view extent corresponding to that vertical resolution; the values of M and D may be chosen as needed. Combining the heading information θq, the position coordinate Locq+1 of the wheeled vehicle in the world coordinate system at time tq+1 can be deduced. Loc1 lies at the coordinate origin, i.e. Loc1 = (0, 0); writing the accumulated heading as Θq = θ1 + θ2 + ... + θq, the horizontal and vertical coordinates of the remaining Locq+1 (q = 1, 2, ..., n−1) follow by recursion:

Xq+1 = Xq + dLq·sin Θq,  Yq+1 = Yq + dLq·cos Θq.
Connecting the position coordinate points Loci (i = 1, 2, ..., n) of successive times in order yields the driving trajectory curve of the wheeled vehicle.
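The distance formula and dead-reckoning recursion above can be sketched as follows (the accumulated-heading convention and the initial heading along +Y are our reading of the trajectory construction, and the function names are ours):

```python
import numpy as np

def step_length(dx_px, dy_px, D_m, M_px):
    """Convert the per-frame pixel translation into metres using the
    ground-sampling scale D/M of the top-down views."""
    return (D_m / M_px) * float(np.hypot(dx_px, dy_px))

def dead_reckon(thetas, dists):
    """Chain the per-frame rotations theta_q and distances dL_q into
    world positions Loc_1..Loc_n, with Loc_1 at the origin and the
    heading accumulated from frame to frame."""
    heading = 0.0
    pos = [np.zeros(2)]
    for theta, dL in zip(thetas, dists):
        heading += theta
        pos.append(pos[-1] + dL * np.array([np.sin(heading), np.cos(heading)]))
    return np.array(pos)
```

For instance, three unit steps with rotations (0, 0, π/2) trace two steps straight ahead and then one step to the right.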
The beneficial effects of the invention are:
1. On the basis of traditional monocular vision positioning, the real-time attitude information obtained by the attitude sensor assists the positioning of the wheeled vehicle, yielding higher-precision positioning results.
2. Applying inverse perspective projection transformation to the images eliminates the perspective effect; performing feature extraction and model parameter calculation on this basis avoids the large errors in solving the image mapping relationship that affect traditional methods, and improves the positioning accuracy of the wheeled vehicle.
3. The invention uses only one camera and one attitude sensor to compute the driving trajectory of the wheeled vehicle; its structure is therefore simple, installation is easy, and no complicated calibration is required.
Brief Description of the Drawings
Figure 1 is a flowchart of a specific implementation of the invention;
Figure 2 is a schematic diagram of the computation of the wheeled vehicle trajectory curve;
Figure 3 is a schematic diagram of the working principle of monocular vision positioning;
Figure 4 is the 34th road surface image captured by the camera in the experiment;
Figure 5 is the top-down view of the road surface obtained from the 34th image of Figure 4 by inverse perspective transformation;
Figure 6 is the wheeled vehicle driving trajectory curve obtained in the experiment.
Detailed Description
Figure 1 shows the flowchart of a specific implementation of the invention. Step 1 applies inverse perspective projection transformation to the image sequence; the method for solving the camera intrinsic and extrinsic parameter matrices is described on pages 22-33 of the book "Principles and Applications of Videometrics" (Science Press, by Yu Qifeng and Shang Yang). Step 2 computes the transformation matrix between adjacent images. Its step (1) extracts feature points using the SURF operator; for the properties and usage of such feature operators see "Distinctive image features from scale-invariant keypoints" (International Journal of Computer Vision, 2004, 60(2), pp. 91-110, David G. Lowe) and "SURF: Speeded up robust features" (Proceedings of the 9th European Conference on Computer Vision, 2006, 3951(1), pp. 404-417, Herbert Bay, Tinne Tuytelaars and Luc Van Gool). Its step (2) obtains the parameters of the rigid-body transformation model between the images; the model is solved with the RANSAC estimation algorithm, which is fast, yields accurate parameter estimates, and is widely used, and whose principle is detailed in "Preemptive RANSAC for Live Structure and Motion Estimation" (Proceedings of the Ninth IEEE International Conference on Computer Vision, ICCV 2003, David Nistér, Sarnoff Corporation and Princeton). Step 3 determines the driving trajectory curve of the wheeled vehicle. Figure 2 illustrates the computation of the trajectory curve. The XY axes in the figure represent the world coordinate system; the origin corresponds to the position coordinate Loc1 of the wheeled vehicle when the camera captured the first image, at the starting moment t1 of positioning. θ1 is the angular rotation between the first and second images, dL1 is the translation distance between them, and Loc2 is the vehicle position in the world coordinate system when the camera captured the second image. Likewise, θ2 is the angular rotation between the second and third images, dL2 the translation distance between them, and Loc3 the position at the third image; θn−1 is the angular rotation between the (n−1)-th and n-th images (where n is the total number of images captured), dLn−1 the translation distance between them, and Locn−1 and Locn the vehicle positions in the world coordinate system at the (n−1)-th and n-th images respectively. Connecting the positions Loc1, Loc2, ..., Locn in the world coordinate system yields the driving trajectory curve of the wheeled vehicle.
An experiment was carried out using this specific implementation. It was conducted on a flat outdoor site, with the camera mounted at the front of the wheeled vehicle; in practical applications the camera may also be mounted on either side or at the rear of the vehicle, as long as its field of view covers only the road surface. In this experiment the camera collected images at 3 frames per second, 93 images in total, i.e. n = 93; the attitude sensor (model MTI) and the camera (model FLE2-14S3) collected data synchronously. The working principle of the experiment is shown in Figure 3. In Figure 3, mark 1 indicates the position of the wheeled vehicle at time tq (q = 1, 2, ..., n−1, where n is the total number of images captured); at that moment the camera's field of view is the trapezoidal region 3 and the captured image is Pq. Mark 2 indicates the position of the wheeled vehicle at time tq+1; at that moment the field of view is the trapezoidal region 4 and the captured image is Pq+1. The black arrow represents the distance travelled by the vehicle between times tq and tq+1, and the overlap of the fields of view of images Pq and Pq+1 is indicated by mark 5. The moving distance and direction of the wheeled vehicle are obtained precisely from the relative position of this overlap region within each field of view.
In the experiment, the vehicle drove a certain distance along a serpentine curve. Figure 4 shows the 34th image P34 of the sequence captured by the camera, and Figure 5 shows the top-down road view P′34 obtained from P34 by inverse perspective transformation. Figure 6 shows the wheeled vehicle driving trajectory curve obtained with the invention; XY in the figure represents the world coordinate system, in metres, and each green square point represents the vehicle position coordinate Loci at the time of the corresponding image. Connecting all the points yields the trajectory curve. From Figure 6, the travelled distance obtained by the method of the invention is 22.76 m, while the actual distance measured on site with a tape measure is 22.63 m, an error of about 6‰.
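As a quick check of the reported accuracy, the relative error of the measured path length works out to roughly the stated 6 per mille:

```python
measured_m = 22.76      # distance from the invention's trajectory
ground_truth_m = 22.63  # distance measured on site with a tape measure

# Relative error of the vision-based estimate against the tape measure
rel_error = abs(measured_m - ground_truth_m) / ground_truth_m
```

This evaluates to about 0.0057, consistent with the "about 6‰" figure in the text.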
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110070941 CN102221358B (en) | 2011-03-23 | 2011-03-23 | Monocular visual positioning method based on inverse perspective projection transformation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110070941 CN102221358B (en) | 2011-03-23 | 2011-03-23 | Monocular visual positioning method based on inverse perspective projection transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102221358A true CN102221358A (en) | 2011-10-19 |
CN102221358B CN102221358B (en) | 2012-12-12 |
Family
ID=44777980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110070941 Expired - Fee Related CN102221358B (en) | 2011-03-23 | 2011-03-23 | Monocular visual positioning method based on inverse perspective projection transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102221358B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102829763A (en) * | 2012-07-30 | 2012-12-19 | 中国人民解放军国防科学技术大学 | Pavement image collecting method and system based on monocular vision location |
CN103292807A (en) * | 2012-03-02 | 2013-09-11 | 江阴中科矿业安全科技有限公司 | Drill carriage posture measurement method based on monocular vision |
CN104359464A (en) * | 2014-11-02 | 2015-02-18 | 天津理工大学 | Mobile robot positioning method based on stereoscopic vision |
CN105976402A (en) * | 2016-05-26 | 2016-09-28 | 同济大学 | Real scale obtaining method of monocular vision odometer |
CN106462762A (en) * | 2016-09-16 | 2017-02-22 | 香港应用科技研究院有限公司 | Enhanced inverse perspective transform based vehicle detection, tracking and localization |
CN104180818B (en) * | 2014-08-12 | 2017-08-11 | 北京理工大学 | A kind of monocular vision mileage calculation device |
CN108051012A (en) * | 2017-12-06 | 2018-05-18 | 爱易成技术(天津)有限公司 | Mobile object space coordinate setting display methods, apparatus and system |
CN108921060A (en) * | 2018-06-20 | 2018-11-30 | 安徽金赛弗信息技术有限公司 | Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method |
CN109242907A (en) * | 2018-09-29 | 2019-01-18 | 武汉光庭信息技术股份有限公司 | A kind of vehicle positioning method and device based on according to ground high speed camera |
CN110335317A (en) * | 2019-07-02 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Image processing method, device, equipment and medium based on terminal device positioning |
CN110567728A (en) * | 2018-09-03 | 2019-12-13 | 阿里巴巴集团控股有限公司 | Method, device and equipment for identifying shooting intention of user |
CN110986890A (en) * | 2019-11-26 | 2020-04-10 | 北京经纬恒润科技有限公司 | Height detection method and device |
CN112212873A (en) * | 2019-07-09 | 2021-01-12 | 北京地平线机器人技术研发有限公司 | High-precision map construction method and device |
CN112710308A (en) * | 2019-10-25 | 2021-04-27 | 阿里巴巴集团控股有限公司 | Positioning method, device and system of robot |
CN112927306A (en) * | 2021-02-24 | 2021-06-08 | 深圳市优必选科技股份有限公司 | Calibration method and device of shooting device and terminal equipment |
CN113167579A (en) * | 2018-12-12 | 2021-07-23 | 国立大学法人东京大学 | Measurement system, measurement method, and measurement procedure |
CN114782549A (en) * | 2022-04-22 | 2022-07-22 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed point identification |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020080235A1 (en) * | 2000-12-27 | 2002-06-27 | Yong-Won Jeon | Image processing method for preventing lane deviation |
JP2006101816A (en) * | 2004-10-08 | 2006-04-20 | Univ Of Tokyo | Steering control method and apparatus |
-
2011
- 2011-03-23 CN CN 201110070941 patent/CN102221358B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020080235A1 (en) * | 2000-12-27 | 2002-06-27 | Yong-Won Jeon | Image processing method for preventing lane deviation |
JP2006101816A (en) * | 2004-10-08 | 2006-04-20 | Univ Of Tokyo | Steering control method and apparatus |
Non-Patent Citations (2)
Title |
---|
Cao Yu et al., "Monocular Visual Odometry based on Inverse Perspective Mapping", Proc. of SPIE, Vol. 8194, 2011-05-24, pp. 1-7 *
Gao Dezhi et al., "Intelligent vehicle positioning technology based on inverse perspective transformation", Computer Measurement & Control, Vol. 17, No. 9, 2009-09-30, pp. 1810-1812 *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103292807A (en) * | 2012-03-02 | 2013-09-11 | 江阴中科矿业安全科技有限公司 | Drill carriage posture measurement method based on monocular vision |
CN103292807B (en) * | 2012-03-02 | 2016-04-20 | 江阴中科矿业安全科技有限公司 | Drill carriage attitude measurement method based on monocular vision |
CN102829763B (en) * | 2012-07-30 | 2014-12-24 | 中国人民解放军国防科学技术大学 | Pavement image collecting method and system based on monocular vision location |
CN102829763A (en) * | 2012-07-30 | 2012-12-19 | 中国人民解放军国防科学技术大学 | Pavement image collecting method and system based on monocular vision location |
CN104180818B (en) * | 2014-08-12 | 2017-08-11 | 北京理工大学 | A kind of monocular vision mileage calculation device |
CN104359464A (en) * | 2014-11-02 | 2015-02-18 | 天津理工大学 | Mobile robot positioning method based on stereoscopic vision |
CN105976402A (en) * | 2016-05-26 | 2016-09-28 | 同济大学 | Real scale obtaining method of monocular vision odometer |
CN106462762A (en) * | 2016-09-16 | 2017-02-22 | 香港应用科技研究院有限公司 | Enhanced inverse perspective transform based vehicle detection, tracking and localization |
CN108051012A (en) * | 2017-12-06 | 2018-05-18 | 爱易成技术(天津)有限公司 | Mobile object space coordinate setting display methods, apparatus and system |
CN108921060A (en) * | 2018-06-20 | 2018-11-30 | 安徽金赛弗信息技术有限公司 | Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method |
CN110567728A (en) * | 2018-09-03 | 2019-12-13 | 阿里巴巴集团控股有限公司 | Method, device and equipment for identifying shooting intention of user |
CN110567728B (en) * | 2018-09-03 | 2021-08-20 | 创新先进技术有限公司 | Method, device and equipment for identifying shooting intention of user |
CN109242907A (en) * | 2018-09-29 | 2019-01-18 | 武汉光庭信息技术股份有限公司 | A kind of vehicle positioning method and device based on according to ground high speed camera |
CN113167579A (en) * | 2018-12-12 | 2021-07-23 | 国立大学法人东京大学 | Measurement system, measurement method, and measurement procedure |
CN110335317A (en) * | 2019-07-02 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Image processing method, device, equipment and medium based on terminal device positioning |
CN112212873A (en) * | 2019-07-09 | 2021-01-12 | 北京地平线机器人技术研发有限公司 | High-precision map construction method and device |
CN112710308A (en) * | 2019-10-25 | 2021-04-27 | 阿里巴巴集团控股有限公司 | Positioning method, device and system of robot |
CN112710308B (en) * | 2019-10-25 | 2024-05-31 | 阿里巴巴集团控股有限公司 | Positioning method, device and system of robot |
CN110986890A (en) * | 2019-11-26 | 2020-04-10 | 北京经纬恒润科技有限公司 | Height detection method and device |
CN112927306A (en) * | 2021-02-24 | 2021-06-08 | 深圳市优必选科技股份有限公司 | Calibration method and device of shooting device and terminal equipment |
CN112927306B (en) * | 2021-02-24 | 2024-01-16 | 深圳市优必选科技股份有限公司 | Calibration method and device of shooting device and terminal equipment |
CN114782549A (en) * | 2022-04-22 | 2022-07-22 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed point identification |
CN114782549B (en) * | 2022-04-22 | 2023-11-24 | 南京新远见智能科技有限公司 | Camera calibration method and system based on fixed point identification |
Also Published As
Publication number | Publication date |
---|---|
CN102221358B (en) | 2012-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102221358A (en) | Monocular Vision Positioning Method Based on Inverse Perspective Projection Transformation | |
CN109631887B (en) | Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope | |
CN107229908B (en) | A kind of method for detecting lane lines | |
CN103499350B (en) | Vehicle high-precision localization method and the device of multi-source information is merged under GPS blind area | |
CN101604448B (en) | Method and system for measuring speed of moving targets | |
CN201269758Y (en) | Vehicle mounted full automatic detection recording system for traffic signs | |
CN108534782B (en) | A real-time positioning method of landmark map vehicles based on binocular vision system | |
CN102721409B (en) | Measuring method of three-dimensional movement track of moving vehicle based on vehicle body control point | |
CN102967305B (en) | Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square | |
CN109842756A (en) | A method and system for lens distortion correction and feature extraction | |
CN113376669B (en) | A monocular VIO-GNSS fusion positioning algorithm based on point-line features | |
CN108759823B (en) | Localization and deviation correction method of low-speed autonomous vehicles on designated roads based on image matching | |
CN102692236A (en) | Visual milemeter method based on RGB-D camera | |
CN105157609A (en) | Two-sets-of-camera-based global morphology measurement method of large parts | |
CN110031829A (en) | A kind of targeting accuracy distance measuring method based on monocular vision | |
CN109596121B (en) | A method for automatic target detection and spatial positioning of a mobile station | |
CN109766757A (en) | A kind of parking position high-precision locating method and system merging vehicle and visual information | |
CN109493385A (en) | Autonomic positioning method in a kind of mobile robot room of combination scene point line feature | |
CN103345630A (en) | Traffic sign positioning method based on spherical panoramic video | |
CN107284455A (en) | A kind of ADAS systems based on image procossing | |
CN101893443A (en) | The Making System of Road Digital Orthophoto Map | |
CN101545776A (en) | Method for obtaining digital photo orientation elements based on digital map | |
CN102222333A (en) | Method and device of mobile augmented reality of underground engineering based on mixed registration | |
CN109029442A (en) | Based on the matched positioning device of multi-angle of view and method | |
CN105163065A (en) | Traffic speed detecting method based on camera front-end processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20121212 Termination date: 20150323 |
|
EXPY | Termination of patent right or utility model |