CN102682292B - Method based on monocular vision for detecting and roughly positioning edge of road - Google Patents

Method based on monocular vision for detecting and roughly positioning edge of road

Info

Publication number
CN102682292B
CN102682292B · CN201210144082A
Authority
CN
China
Prior art keywords
step
road
edge
image
value
Prior art date
Application number
CN 201210144082
Other languages
Chinese (zh)
Other versions
CN102682292A (en)
Inventor
王宏
严润晨
赵云鹏
Original Assignee
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Priority to CN201210144082A
Publication of CN102682292A
Application granted
Publication of CN102682292B


Abstract

A road edge detection and coarse positioning method based on monocular vision, relating to the fields of machine vision and intelligent control. The invention provides two road edge detection methods for continuous roads with different edge characteristics, suitable for semi-structured and unstructured roads and usable for visual navigation and intelligent control of robots. One is a color-based road edge detection method; the other is a road edge detection method based on threshold segmentation. Once the road edge has been obtained, an inverse perspective projection transform is applied to the image to convert the front view into a top view; from the linear correspondence between pixels in the transformed image and actual distance, the perpendicular distance between the robot's current position and the road edge, as well as the heading angle, are computed. The method of the invention is simple to implement, robust to interference, runs in real time, and is applicable to semi-structured and unstructured roads.

Description

Road edge detection and coarse positioning method based on monocular vision

Technical Field

[0001] The invention relates to the fields of computer vision and pattern recognition, and to mobile robot navigation techniques and methods.

Technical Background

[0002] An intelligent mobile robot is a complex system comprising multiple modules such as environment perception, control and decision-making, and path planning, and applying computer technology, information technology, robotics, and microelectronics. It typically fuses multiple sensor technologies, including GPS, laser radar, cameras, ultrasound, and gyroscopes; the sensed data are processed and fused to obtain the robot's localization information and information about obstacles ahead, which are used for motion planning and decision-making. Mobile robots are now widely used in military, medical, agricultural, industrial, and other fields. Especially in extreme environments and hazardous conditions, such as nuclear contamination, high pressure, the deep sea, and even outer space, mobile robots can effectively reduce the risks of manual operation and improve efficiency, advancing the progress and development of society, science, and technology.

[0003] Mobile robots can be classified in several ways; by control type they can be divided into remotely operated robots and autonomous robots.

[0004] On November 26, 2011, the United States successfully launched "Curiosity", a new-generation Mars rover: a nuclear-powered, remotely operated mobile robot incorporating the latest technology. It is the fourth Mars rover after "Spirit", "Opportunity", and "Phoenix"; it has strong mobility, carries a large suite of environmental analysis and monitoring instruments, and can send a variety of analytical data back to Earth via satellite.

[0005] Autonomous mobile robots are the future trend of development. They integrate artificial intelligence technologies such as computer vision, pattern recognition, fuzzy control, and expert systems, and are evolving from autonomous movement in known structured and semi-structured environments toward autonomous movement and exploration in unknown unstructured environments.

[0006] Among these, the intelligent vehicle is currently one of the most closely watched application areas of mobile robot technology. The intelligent vehicle THMR-V, developed at Tsinghua University, can travel on ordinary roads and highways, with a design speed of 80 km/h on highways and 20 km/h on ordinary roads. The vehicle is equipped with a color camera and a laser rangefinder, as well as navigation and environment sensing systems such as differential GPS, a magnetic compass, and optical encoders, and can drive and avoid obstacles in unstructured environments such as a campus. In 2010, Google announced the intelligent vehicle it was developing, based on a Toyota Prius. Fitted with video cameras, laser rangefinders, radar sensors, and GPS satellite positioning, it obtains traffic conditions from its sensor information and has driven more than 160,000 miles under Google's own map navigation (including occasional stretches under human control). Intelligent vehicles perceive the road environment by fusing information from multiple sensors, which places high demands on sensor accuracy.

[0007] For mobile robots, the most commonly used sensor is the vision sensor, i.e., an installed color camera. Computer vision and pattern recognition technologies play a crucial role in mobile robot navigation. Visual recognition algorithms for structured roads generally detect lane lines, traffic signs, signal lights, and the road area. Visual recognition algorithms for unstructured roads detect the drivable area and obstacles. Detection algorithms for structured roads with lane lines are by now fairly mature, whereas the detection of rural roads and of urban roads without lane lines still lacks algorithms of strong generality.

[0008] In view of the shortcomings of most current visual recognition algorithms, namely high sensor requirements and high time complexity, and of the current situation in which recognizing unstructured environments still requires many conditions to be satisfied and accurate navigation information cannot be provided, the present invention proposes a road edge detection and coarse positioning method based on monocular vision.

Summary of the Invention

[0009] Addressing the problems that mobile robot vision algorithms place high demands on sensors, have weak real-time performance, and generalize poorly to unstructured roads, the present invention proposes a road edge detection and coarse positioning method based on monocular vision.

[0010] The present invention is characterized in that it is a semi-automatic road edge detection and coarse positioning method based on the monocular vision of a mobile robot, implemented in a monocular-vision road edge detection and coarse positioning system formed by connecting a computer operated by an operator with a single-lens CCD camera mounted on the mobile robot; the method is implemented by the following steps in sequence:

[0011] Step (1): the single-lens CCD camera inputs road images, captured as the robot advances and reflecting the color difference between the two sides of the road/non-road boundary, into the computer for display;

[0012] Step (2): according to the color difference between the two sides of the road/non-road boundary, the operator selectively starts one of two different software modules provided in the computer;

[0013] For RGB color-space color road images in which the colors on the two sides of the road/non-road boundary differ significantly, the color-based road edge detection module is started;

[0014] For RGB color-space color road images in which the colors on the two sides of the road/non-road boundary are blurred and there is no obvious boundary, the road edge detection module based on threshold segmentation is started;

[0015] Step (3): the color-based road edge detection module performs the following steps in sequence to detect the road edge;

[0016] Step (3.1): parameter initialization; read in the RGB color-space color road image and set the region of interest ROI to 1/4 to 1/2 of the road image, starting the partition from the bottom-left corner of the color road image;

[0017] Step (3.2): extract the ROI image from the color road image and convert it to the HSV color space, obtaining a new road image;

[0018] Brightness V′ = max(R, G, B),

[0019] $$S' = \begin{cases} \dfrac{V' - \min(R, G, B)}{V'}, & V' \neq 0 \\ 0, & \text{otherwise} \end{cases}$$

[0020] $$H' = \begin{cases} 60(G - B)/(V' - \min(R, G, B)), & V' = R \\ 120 + 60(B - R)/(V' - \min(R, G, B)), & V' = G \\ 240 + 60(R - G)/(V' - \min(R, G, B)), & V' = B \end{cases}$$

[0021] If H′ < 0, replace it with (H′ + 360), where R, G, and B are the red, green, and blue brightness values of the original pixel; let V = 255V′, S = 255S′, H = H′/2;
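For reference, a minimal Python sketch of steps (3.1)-(3.2) (not part of the patent text; it assumes a BGR frame and uses OpenCV's cv2.cvtColor, which applies the same formulas as above in 8-bit form):

```python
import cv2

def bgr_roi_to_hsv(image_bgr, roi):
    """Crop the ROI (x, y, w, h) and convert it to 8-bit HSV.

    cv2.cvtColor with COLOR_BGR2HSV implements the formulas above:
    V = max(R,G,B), S = (V - min)/V scaled to 255, H halved to fit one byte.
    """
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    return cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)

# Example (illustrative ROI: bottom-left quarter of a 640x480 frame):
# frame = cv2.imread("road.png")
# hsv = bgr_roi_to_hsv(frame, (0, 240, 320, 240))
```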

[0022] Step (3.3): perform Canny edge extraction with the cvCanny function module of the OpenCV library in the following manner, obtaining the Canny map:

[0023] On the result image of step (3.2), set a gradient threshold of 50 for the controlling edges at the edge connections and a gradient threshold of 150 for the controlling strong edges at the initial inner segmentation; the inputs are the HSV color-space image of the region of interest ROI and the two gradient thresholds above; the output is the Canny edge map; [0024] Step (3.4): set the HSV color range intervals and extract the green or ochre regions, where the green interval is (20, 0, 0) to (70, 40, 0) and the ochre interval is (10, 0, 0) to (30, 0, 150);

[0025] Step (3.5): erode the green or ochre region obtained in step (3.4) with the cvErode function module of the OpenCV library; the output is the result image;

[0026] Step (3.6): dilate the result image of step (3.5) with the cvDilate function module of the OpenCV library; the output is the green or ochre region result image after the erosion-dilation operation;
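Steps (3.4)-(3.6) amount to color-range thresholding followed by morphological cleanup. A hedged sketch with the modern OpenCV API (the kernel size and the widened V upper bound in the usage line are illustrative assumptions, not values from the patent):

```python
import numpy as np
import cv2

def extract_region_mask(hsv, lower, upper, kernel_size=3, iterations=1):
    """Threshold an HSV image to a binary mask, then erode and dilate it.

    `lower`/`upper` are (H, S, V) bounds; the exact tuple interpretation of
    the patent's intervals is our assumption.
    """
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size,) * 2)
    mask = cv2.erode(mask, kernel, iterations=iterations)   # remove speckle
    mask = cv2.dilate(mask, kernel, iterations=iterations)  # restore area
    return mask

# green_mask = extract_region_mask(hsv, (20, 0, 0), (70, 40, 255))
```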

[0027] Step (3.7): extract the edge points used for fitting from the result image of step (3.6). For the left road edge, scan the result image of step (3.6) from right to left: if there are green points within the 5 pixels to the left of an edge point in the Canny edge map and no corresponding green points within the 5 pixels to its right, the edge point is considered a true edge point; edge points with fewer than 5 pixels on either side are simply discarded. This yields a sample set of true edge points; proceed likewise for the ochre region;
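A sketch of the step (3.7) scan under the stated 5-pixel rule (keeping at most one candidate per row, as the detailed description notes; the tie-breaking choice of the first Canny pixel per row is our assumption):

```python
import numpy as np

def true_edge_points(canny, mask, side="left", window=5):
    """Keep a Canny edge pixel as a 'true' road-edge point when the color
    mask is present within `window` px on one side and absent on the other
    (mask on the left, none on the right, for a left road edge)."""
    points = []
    h, w = canny.shape
    for row in range(h - 1, -1, -1):                     # scan bottom-up
        cols = range(w - 1, -1, -1) if side == "left" else range(w)
        for col in cols:
            if canny[row, col] == 0:
                continue
            if col < window or col >= w - window:
                break                                    # too close to border: discard
            left = mask[row, col - window:col]
            right = mask[row, col + 1:col + 1 + window]
            if left.any() and not right.any():
                points.append((col, row))
            break                                        # at most one point per row
    return np.array(points)
```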

[0028] Step (3.8): judge the true edge points:

[0029] If the number m of true edge points in the sample set satisfies m > 30, perform the following steps;

[0030] Step (3.8.1): from the sample set of true edge points, randomly select two true edge points as two sample data;

[0031] Step (3.8.2): fit a straight line by least squares with the cvFitLine function module of the OpenCV library; the input is the two sample data obtained in step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle tolerances are both 0.01; the output is the slope and intercept of the fitted line. Then count the number k of edge points whose distance to the line is less than 4 pixels; these serve as the fitting points for the subsequent step;

[0032] Step (3.8.3): if k/m is greater than 0.3, accept the fitted model and refit the line through the k points by least squares, obtaining the final result; otherwise return to step (3.8.1). The procedure fails when the number of repetitions exceeds the set maximum of 200 cycles, or when the number of true edge points in the sample set is less than 30;
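Steps (3.8.1)-(3.8.3) form a RANSAC loop. A self-contained NumPy sketch (np.polyfit stands in for cvFitLine's least-squares refit; the degenerate-sample handling is our assumption):

```python
import numpy as np

def ransac_line(points, min_points=30, dist_px=4.0, accept_ratio=0.3, max_iter=200):
    """Sample two points, fit a line, count inliers within 4 px, accept when
    inliers/m > 0.3, then least-squares refit on the inliers.
    Returns (slope, intercept) or None on failure."""
    m = len(points)
    if m <= min_points:
        return None
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng()
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = pts[rng.choice(m, size=2, replace=False)]
        if x1 == x2:
            continue                       # skip a degenerate vertical sample
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        # perpendicular distance of every point to y = kx + b
        dist = np.abs(k * pts[:, 0] - pts[:, 1] + b) / np.hypot(k, 1.0)
        inliers = pts[dist < dist_px]
        if len(inliers) / m > accept_ratio:
            k, b = np.polyfit(inliers[:, 0], inliers[:, 1], 1)  # final refit
            return k, b
    return None
```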

[0033] Step (4): the road edge detection module based on threshold segmentation performs the following steps in sequence, at the right road edge, to detect the road edge;

[0034] Step (4.1): set the region of interest ROI to the bottom-right quarter of the image;

[0035] Step (4.2): extract the region-of-interest image and convert it to a grayscale image by the following formula;

[0036] v₀(x, y) = 0.212671·R(x, y) + 0.715160·G(x, y) + 0.072169·B(x, y),

[0037] x and y denote the pixel coordinates in the image with the top-left corner as origin; v₀ denotes the gray value of pixel (x, y); R denotes the red channel value of the pixel in the color image, G the green channel value, and B the blue channel value; v₀, R, G, B all lie in [0, 255];

[0038] Step (4.3): apply mean blurring to pixel (x, y) by the following formula: convolve the 5×5 pixel-value matrix V centered at (x, y) with the kernel matrix K, obtaining the new pixel value v₁;

[0039] v₁(x, y) = V ∗ K, where ∗ denotes convolution,

[0040]
$$K = \frac{1}{25}\begin{bmatrix}1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\end{bmatrix}$$
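A short sketch of steps (4.2)-(4.3) with the patent's Rec. 709 luminance weights; cv2.blur performs exactly the 5×5 mean-kernel convolution above (the manual channel mix is used because cv2.cvtColor's grayscale weights differ from the patent's):

```python
import numpy as np
import cv2

def gray_and_blur(roi_bgr):
    """Rec. 709 luminance (the patent's weights), then a 5x5 mean blur;
    cv2.blur convolves with K = (1/25) * ones((5, 5))."""
    b, g, r = cv2.split(roi_bgr.astype(np.float32))
    gray = (0.212671 * r + 0.715160 * g + 0.072169 * b).astype(np.uint8)
    return cv2.blur(gray, (5, 5))
```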

[0042] Step (4.4): count the number of pixels corresponding to each gray value and build a color histogram;

[0043] Step (4.5): the region-of-interest image acquired by the CCD camera has gray-space range [0, l−1], l ∈ (0, 255]:

[0044] Compute the criterion value η(t) by the formula below, traverse the criterion values of all gray values, and select the gray value with the largest criterion value as the optimal gray threshold for classification;

[0045]
$$\eta(t) = \frac{\sigma_B^2(t)}{\sigma_W^2(t)}$$

[0046] where σ_B²(t) is the between-class variance of the two classes of pixels, those above and those at or below the gray classification threshold t, obtained by computing the occurrence probabilities ω₀(t) and ω₁(t);

[0047]
$$\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl[\mu_0(t) - \mu_1(t)\bigr]^2$$

[0048]

[0049] p_i is the occurrence probability of pixels with gray value i. The total number of pixels in the image is
$$N = \sum_{i=0}^{l-1} n_i,$$
where n_i denotes the number of pixels with gray value i; using the histogram, the occurrence probability of a pixel with gray value i is computed as
$$p_i = \frac{n_i}{N}$$

[0050] ω₀(t) and ω₁(t) are the occurrence probabilities of the two classes of gray values partitioned by the gray threshold t; μ_T is the probability-weighted sum over the pixels of the whole gray-value range;
[0051]
$$\omega_0(t) = \sum_{i=0}^{t} p_i, \qquad \omega_1(t) = \sum_{i=t+1}^{l-1} p_i, \qquad \mu_T = \sum_{i=0}^{l-1} i\,p_i$$

[0052] σ_W²(t) is the within-class variance of the two classes of pixels, those above and those at or below the gray classification threshold t, obtained from their occurrence probabilities as σ_W²(t) = ω₀(t)σ₀²(t) + ω₁(t)σ₁²(t);

[0053] where μ₀(t) and μ₁(t) are the probability-weighted means of the pixels over the gray-value ranges at or below and above the threshold t, respectively;
[0054]
$$\mu_0(t) = \frac{1}{\omega_0(t)}\sum_{i=0}^{t} i\,p_i, \qquad \mu_1(t) = \frac{1}{\omega_1(t)}\sum_{i=t+1}^{l-1} i\,p_i$$

[0055] Step (4.6): based on the gray classification threshold t obtained in step (4.5), binarize the gray values of the two classes of pixels, those above and those below the gray classification threshold t, by the following formula;

[0056]
$$v(x, y) = \begin{cases} 255, & v_1(x, y) > t \\ 0, & v_1(x, y) \le t \end{cases}$$

[0057] v(x, y) is the pixel gray value after binarization;
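Steps (4.4)-(4.6) are Otsu's method. A direct sketch of the exhaustive search, maximizing σ_B²/σ_W² as defined above (OpenCV's cv2.threshold with THRESH_OTSU maximizes the equivalent σ_B² criterion; a fixed 256-level range is assumed here):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the optimal threshold t and the binarized image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                           # p_i = n_i / N
    levels = np.arange(256)
    best_t, best_eta = 0, -1.0
    for t in range(1, 255):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()   # omega_0, omega_1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = np.dot(levels[:t + 1], p[:t + 1]) / w0
        mu1 = np.dot(levels[t + 1:], p[t + 1:]) / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        sigma_w = (np.dot((levels[:t + 1] - mu0) ** 2, p[:t + 1])
                   + np.dot((levels[t + 1:] - mu1) ** 2, p[t + 1:]))
        if sigma_w > 0 and sigma_b / sigma_w > best_eta:
            best_eta, best_t = sigma_b / sigma_w, t
    binary = np.where(gray > best_t, 255, 0).astype(np.uint8)
    return best_t, binary
```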

[0058] Step (4.7): perform Canny edge extraction on the binary image obtained in step (4.6) with the cvCanny function module of the OpenCV library; the output is the Canny edge image. Set the pixel gradient threshold controlling edge linking at the edge connections to 50, and the pixel gradient threshold controlling strong edges at the initial edge cut to 100;

[0059] Step (4.8): on the binarized Canny edge map obtained in step (4.7), take the first white edge point from left to right in each row as a boundary point, obtaining a boundary point set;

[0060] Step (4.9): perform Hough line fitting on all boundary points of the boundary point set obtained in step (4.8) with the cvHoughLines function module of the OpenCV library, obtaining the road edge. Set the polar radius resolution to 3 pixels and the polar angle resolution to π/180; the minimum number of curves that must pass through a line when it is converted to polar coordinates is 50;
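A sketch of steps (4.7)-(4.9) with the modern OpenCV API (the π/180 angle resolution is an assumption; the patent's figure carrying this value did not survive extraction):

```python
import numpy as np
import cv2

def right_edge_line(binary):
    """Canny on the binary image, first white pixel per row as boundary
    point, then a standard Hough transform (rho step 3 px, threshold 50)."""
    edges = cv2.Canny(binary, 50, 100)
    boundary = np.zeros_like(edges)
    for row in range(edges.shape[0]):
        cols = np.flatnonzero(edges[row])
        if cols.size:
            boundary[row, cols[0]] = 255      # keep the leftmost edge pixel only
    lines = cv2.HoughLines(boundary, 3, np.pi / 180, 50)
    return lines                              # array of (rho, theta), or None
```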

[0061] Step (5): perform relative coarse positioning of the robot and the road edge by the following steps;

[0062] Step (5.1): convert the forward view of the road edge detected in step (4) into a top view of the road edge by an inverse perspective projection transform, as follows:

[0063] Step (5.1.1): with the camera on the robot, capture a forward-view image containing a trapezoidal pattern that is rectangular in the top view;

[0064] Step (5.1.2): in the forward view obtained in step (5.1.1), select the four vertices of a trapezoid and record their coordinates;

[0065] Step (5.1.3): set an arbitrary correspondence between rectangle size and pixels in the top view, thereby obtaining the four vertices, in the top view produced by the inverse perspective projection transform, that correspond to the four selected vertices; record their coordinates;

[0066] Step (5.1.4): compute the inverse perspective projection transform matrix H with the cvWarpPerspectiveQMatrix function module of OpenCV; the input is two 4×2 matrices formed from the two sets of coordinates of steps (5.1.2) and (5.1.3), and the output is the inverse perspective projection transform matrix H;

[0067] Step (5.1.5): feed the forward view and the inverse perspective projection transform matrix into the cvWarpPerspective function module of OpenCV, obtaining the top view after the inverse perspective projection transform;
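Steps (5.1.2)-(5.1.5) map four point pairs to a homography. In the modern OpenCV API, cv2.getPerspectiveTransform plays the role of cvWarpPerspectiveQMatrix (a sketch with caller-supplied points; the output size is the caller's choice):

```python
import numpy as np
import cv2

def birds_eye(front_view, src_quad, dst_rect, out_size):
    """Map the four trapezoid vertices in the front view onto the chosen
    rectangle in the top view, then warp the whole image."""
    H = cv2.getPerspectiveTransform(np.float32(src_quad), np.float32(dst_rect))
    top = cv2.warpPerspective(front_view, H, out_size)
    return H, top
```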

[0068] Step (5.2): compute the heading angle deviation and the lateral displacement as follows:

[0069] The heading angle deviation is the angle Θ between the robot's direction of travel AC and the extension line AE of the road edge line;

[0070] The lateral displacement is the perpendicular distance BD from the robot center B on the direction of travel AC to the extension of the road edge; [0071] Step (5.2.1): find the heading angle deviation Θ;

[0072] Take any two points on the road edge line in the forward view and multiply the coordinates of each point by the inverse perspective projection transform matrix H to obtain the coordinates (x₁, y₁), (x₂, y₂) of the corresponding two points on the extension of the road edge line. The slope of the corresponding line AE is k = (y₂ − y₁)/(x₂ − x₁), so Θ = 90° − arctan(k); if x₁ = x₂, then Θ = 0°;

[0073] Step (5.2.2): find the lateral displacement BD;

[0074] Step (5.2.2.1): set the top-left corner of the top view as the coordinate origin, with coordinates expressed in pixels; each pixel corresponds to an actual length of 7.5 mm, fixed when the top-view coordinates were set in step (5.1.3). Solving the following simultaneous equations gives the ordinate h of point A on the robot's direction of travel AC:
$$\begin{cases} x = \mathrm{width}/2 \\ y = kx + b \end{cases}$$
where width denotes the image width and k, b are the slope and intercept of the road edge extension line AE,

[0076] Step (5.2.2.2): let the ordinate of the intersection C of the robot's direction of travel AC with the bottom of the top view be the image height, height; then the perpendicular distance from point C to the extension of the road edge line is CE = sin(Θ)·7.5·(h − height);

[0077] Step (5.2.2.3): compute the lateral displacement BD by:

[0078] BD = sin(Θ)·7.5·(h − height − d/7.5)

[0079] d is the perpendicular distance BC between the robot center B and the lower edge of the top view, i.e., the blind-zone distance in front of the robot, in mm.
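A sketch of the step (5.2) computations (mm_per_px = 7.5 comes from step (5.1.3); the default blind-zone distance d_mm is hardware-dependent and only an illustrative assumption):

```python
import numpy as np

def coarse_pose(p1, p2, width, height, mm_per_px=7.5, d_mm=200.0):
    """Heading angle (degrees) and lateral displacement BD (mm) from two
    points (x1, y1), (x2, y2) on the top-view road edge line."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return 0.0, 0.0                   # patent's convention: theta = 0
    k = (y2 - y1) / (x2 - x1)
    b = y1 - k * x1
    theta = np.pi / 2.0 - np.arctan(k)    # heading angle deviation
    h = k * (width / 2.0) + b             # ordinate of A on the centerline
    bd = np.sin(theta) * mm_per_px * (h - height - d_mm / mm_per_px)
    return np.degrees(theta), bd
```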

[0080] The main advantages of the present invention are:

[0081] 1. The present invention provides two road edge detection methods for continuous roads with different edge characteristics; it adapts well to varied road environments and generalizes well. One method suits cases where the color difference between the two sides of the road/non-road boundary is large (e.g., roads flanked by green belts) and is only weakly sensitive to lighting conditions and shadows; the other suits roads whose boundary is relatively indistinct, and is sensitive to shadows. Both methods are mainly applicable to semi-structured and unstructured roads.

[0082] 2. Sometimes, because of the camera's field of view or mounting angle, or because the road is wide, the image cannot show both sides of the road completely. The road edge detection method of the present invention targets single-sided edge detection, lowering the requirements on the camera's field of view and mounting angle; the method does not need a high-resolution camera, lowering the requirements on camera sharpness.

[0083] 3. After detecting the road edge, the present invention performs coarse positioning of the robot relative to the road edge and outputs the positioning result, which can better assist in controlling the robot's travel.

[0084] The present invention implements two road edge detection methods based on monocular vision and, building on them, implements coarse positioning of the robot relative to the road edge.

[0085] The first method is color-based road edge detection. The image is first converted to the HSV color space; exploiting the color difference between road and non-road, with the H value as the primary range interval and S and V as auxiliary range intervals, the image is roughly divided into a binary gray map of road and non-road regions. Combined with the Canny edge map of the original image, road edge points are extracted according to the large gradient at the road edge and the different colors on its two sides. Only when the number of road edge points is large enough does processing continue; otherwise this frame is discarded and the next frame is processed. The road edge points are then fitted to a line with the RANSAC method: two of the available road edge points are chosen at random for line fitting; when the fit is good, points with large interference are removed, and the remaining edge points are randomly sampled and fitted again. Fitting twice makes the result more accurate and removes the interference of noise points in the image well.

[0086] The other method is road edge detection based on threshold segmentation. The original image is first converted to a grayscale image; a gray threshold t is selected to divide the image into two classes, road and non-road, and the between-class variance of the two classes is computed; traversing the values of t, the value that maximizes the between-class variance is chosen as the critical value between road and non-road regions; the image is binarized with the gray threshold t, and the edge points extracted from the binarized image are the road edge points; Hough line detection is applied to the road edge points to obtain the road edge line.

[0087] After the road edge is detected, relative coarse positioning of the robot and the road edge is performed, computing the angle between the robot and the road direction and the perpendicular distance between the robot and the road edge. The obtained road-edge-line image undergoes an inverse perspective projection transform, converting the image from the camera's normal viewing angle into a top view. The image after the inverse perspective projection transform has a linear correspondence to actual distance: one pixel represents a length of α mm in reality. From the slope of the line in the image, the current angle Θ between the robot and the road direction can be computed, and the perpendicular distance δ between the robot and the road edge can be obtained from the image by trigonometry.

[0088] Strengths and weaknesses of the present invention: the road edge detection methods proposed here are applicable to semi-structured and unstructured roads, are robust to image instability and sensor color deviation, and can be used on small mobile robots. At the same time, they meet the needs of real-time navigation well, with low processing delay, and apply to a wide range of unstructured roads. The final result of the algorithms can assist robot localization and plays a decisive role in robot control decisions. Since the algorithms are based on monocular vision, they share the common weakness of vision algorithms: they are rather sensitive to lighting conditions and shadows, and detection performance will suffer under such conditions.

Brief Description of the Drawings

[0089] Figures 1 and 2 illustrate the steps of the color-based road edge detection algorithm, where:

[0090] Fig. 1(a) is the original image after ROI partitioning; Fig. 1(b) is the Canny edge extraction result; Fig. 1(c) is the green region extraction result; Fig. 1(d) is the green region after the erosion-dilation processing.

[0091] Fig. 2 shows the RANSAC line fitting result.

[0092] Figures 3, 4, and 5 illustrate the steps of the road edge detection algorithm based on threshold segmentation, where:

[0093] Fig. 3(a) is the original image after ROI partitioning; Fig. 3(b) is the image converted to gray space.

[0094] Fig. 4(a) is the binarization result using the gray threshold; Fig. 4(b) is the simplified, de-redundant result after Canny edge extraction.

[0095] Figures 6, 7, 8, and 9 illustrate the steps of the robot coarse positioning method, where:

[0096] Fig. 5 is the final line fitting result.

[0097] Fig. 6 is a schematic of the world coordinate system and the image coordinate system.

[0098] Fig. 7 compares an actual road before and after the inverse perspective projection transform, where (a) is the original image and (b) is the image after the inverse perspective projection transform.

[0099] Fig. 8 shows the images used to compute the inverse perspective projection transform matrix, where (a) is the original image, with the four selected points marked, and (b) is the rendering after the inverse perspective projection transform.

[0100] Fig. 9 is a schematic of the heading angle deviation and lateral displacement computation.

[0101] Fig. 10 is a flowchart of the road edge detection and coarse positioning method based on monocular vision.

Detailed Description

[0102] The present invention designs a road edge detection and coarse positioning method based on monocular vision. Figure 10 shows the flow of road edge detection and coarse positioning. Two road edge detection methods are designed: one is color-based, the other is based on threshold segmentation. On the basis of road edge detection, coarse positioning of the robot relative to the road edge is performed. The image processing in the implementation uses the OpenCV open-source library.

[0103] I. Selecting the road detection algorithm according to the actual environment

[0104] First choose which algorithm to use according to the actual road environment: if the color difference between the two sides of the road/non-road boundary is large (e.g., a road flanked by green belts on both sides), choose the color-based road edge detection method; otherwise choose the road edge detection method based on threshold segmentation.

[0105] II. Road edge detection algorithms

[0106] Color-based road edge detection method

[0107] The keys of this method are the extraction of road edge points and the fitting of the road edge line. Since the colors on the two sides of the road edge generally differ greatly and the image gradient at the road edge is large, this method extracts edge points by combining color range intervals with the Canny image; the extracted edge points may contain some interference, so the method fits the road edge line with the interference-resistant RANSAC method. Taking a road flanked by green belts as an example, the specific steps are as follows:

[0108] 1. Parameter initialization: set the H, S, V values of the color range intervals, e.g. green: (20, 0, 0) to (70, 40, 0), ochre: (10, 0, 0) to (30, 0, 150); and set the size of the ROI (region of interest), ranging from one quarter to one half of the image, partitioned from the bottom-left or bottom-right corner of the image;

[0109] 2. Read in the image, extract its ROI, and convert the original image to HSV space with the following formulas:

[0110] a) Converting a color image from the RGB color space to the HSV color space

[0111] i. The H, S, V component values of each pixel are computed as follows:

$$V = \max(R, G, B), \qquad S = \begin{cases} \dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\ 0, & \text{otherwise} \end{cases}$$

$$H = \begin{cases} 60(G - B)/(V - \min(R, G, B)), & V = R \\ 120 + 60(B - R)/(V - \min(R, G, B)), & V = G \\ 240 + 60(R - G)/(V - \min(R, G, B)), & V = B \end{cases}$$

[0115] If H < 0, replace it with (H + 360),

[0116] where R, G, B are the red, green, and blue brightness values of the point.

[0117] ii. For convenience of storage and computation, the H, S, V component values of each point are scaled as follows:

[0118] V′ = 255V, S′ = 255S, H′ = H/2;

[0119] For a color image in the RGB color space, performing the above computation for each pixel converts it to an HSV color-space image.

[0120] 3. Perform Canny edge extraction to obtain the Canny map: use the cvCanny function of the OpenCV library; the input parameter is the original image, with 50 as the gradient threshold controlling edges and 150 as the strong-edge pixel threshold at the initial inner segmentation; the output is the Canny edge map. Fig. 1(a) is the image after ROI partitioning, and Fig. 1(b) is the Canny edge map after extracting the ROI; [0121] 4. Extract the green or ochre region with the HSV range intervals, as in Fig. 1(c);

[0122] 5. Erode the green or ochre region map: use the cvErode function of the OpenCV library with the original image as input; the remaining parameters default to a rectangular structuring element and need not be given; the output is the result image;

[0123] 6. Dilate the green or ochre region map: use the cvDilate function of the OpenCV library with the original image as input; the remaining parameters are defaults and need not be given; the output is the result image, as in Fig. 1(d);

[0124] 7. Extract the edge points used for fitting: scan the Canny map from bottom to top and from right to left (for the left road edge); if an edge point in the Canny map has green or ochre points within the 5 pixels to its left and no green or ochre points within the 5 pixels to its right, it is identified as a true edge point. Each row has at most one true edge point. Points with fewer than 5 pixels on either side are simply discarded;

[0125] 8. If the number of true edge points is less than 30, no fit is possible; if the number of true edge points is greater than 30, fit a line to the true edge points obtained in the steps above with the RANSAC algorithm:

[0126] a) Let the sample set of true edge points contain m points in total; randomly select two sample data;

[0127] b) Fit a line through these two points by least squares: use the CvFitLine function of the OpenCV library; the input is the point set of the two points, the error type is CV_DIST_L2, and the polar radius and angle tolerances are both 0.01; the output is the line parameters. Then count the number K of points whose distance to the line is less than 4 pixels, i.e., the points that fit;

[0128] c) If K/m is greater than 0.3, accept the fitted model and the fitting result, and fit a line by least squares through all the data points that fit; otherwise re-execute a)-c);

[0129] d) If the number of repetitions exceeds the maximum allowed, 200, the procedure fails;

[0130] The final result is shown in Fig. 2.

[0131] Road edge detection algorithm based on threshold segmentation

[0132] For some unstructured roads, where the edge of the road region is indistinct and the color difference between road and non-road regions is large, the system uses a maximal between-class variance method: the image captured by the camera is adaptively clustered in gray space and dynamically threshold-segmented, thereby identifying the drivable road region.

[0133] The algorithm divides the road region and non-road region into two classes C₀ and C₁; by computing the between-class variance and within-class variance of the two classes, the optimal gray segmentation threshold t is obtained, i.e., the gray threshold t that maximizes the between-class variance and minimizes the within-class variance. The decision criterion of the method derives from the fact that, when the image is split in two, pixels of the same class are similar in color while pixels of different classes differ greatly in color.

[0134] The specific implementation steps are as follows:

[0135] 1. Set the region of interest ROI to the bottom-right quarter of the image. Fig. 3(a) is the image obtained after partitioning the region of interest;

[0136] 2. Convert the image to a grayscale image with the following formula (where x, y denote the pixel coordinates in the image with the top-left corner as origin, v₀ denotes the gray value of pixel (x, y), R denotes the red channel value of the pixel in the color image, G the green channel value, B the blue channel value, and v₀, R, G, B ∈ [0, 255]):

[0137] v₀(x, y) = 0.212671·R(x, y) + 0.715160·G(x, y) + 0.072169·B(x, y)

[0138] 3. Blur the image with a mean-blur algorithm to reduce isolated color outliers. Fig. 3(b) is the image in gray space. The formula is as follows (where x, y denote the pixel coordinates in the image with the top-left corner as origin, v₁ denotes the blurred gray value of pixel (x, y), K denotes the mean-blur kernel matrix, V denotes the 5×5 pixel-value matrix centered at (x, y), and ∗ denotes convolution):

$$v_1(x, y) = V * K, \qquad K = \frac{1}{25}\begin{bmatrix}1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\end{bmatrix}$$

[0142] 4. Compute the color-histogram statistics of the grayscale image, i.e., the number of pixels corresponding to each gray value;

[0143] 5. The image acquired by the camera has gray-space range [0, l−1] (l ∈ [0, 255]). Let the number of pixels with gray value i be n_i; then the total number of pixels in the image is
$$N = \sum_{i=0}^{l-1} n_i,$$
and from the histogram results the occurrence probability of a pixel with gray value i can be computed as
[0144]
$$p_i = \frac{n_i}{N}$$

[0145] From the above formulas we obtain:
$$p_i \ge 0, \qquad \sum_{i=0}^{l-1} p_i = 1$$

[0147] Taking some gray value t as the threshold, the gray values can be divided into two classes, whose occurrence probabilities are computed by:
[0148]
$$\omega_0(t) = \sum_{i=0}^{t} p_i, \qquad \omega_1(t) = \sum_{i=t+1}^{l-1} p_i = 1 - \omega_0(t)$$

[0149] Therefore the means of the two classes, and the overall gray probability mean of the image, can be computed as:
[0150]
$$\mu_0(t) = \frac{1}{\omega_0(t)}\sum_{i=0}^{t} i\,p_i, \qquad \mu_1(t) = \frac{1}{\omega_1(t)}\sum_{i=t+1}^{l-1} i\,p_i, \qquad \mu_T = \sum_{i=0}^{l-1} i\,p_i$$

[0151] From this, the between-class variance of the two classes is:
[0152]
$$\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl[\mu_0(t) - \mu_1(t)\bigr]^2$$

[0153] The within-class variance is:
[0154]
$$\sigma_W^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t), \quad \sigma_0^2(t) = \frac{1}{\omega_0(t)}\sum_{i=0}^{t} \bigl(i - \mu_0(t)\bigr)^2 p_i, \quad \sigma_1^2(t) = \frac{1}{\omega_1(t)}\sum_{i=t+1}^{l-1} \bigl(i - \mu_1(t)\bigr)^2 p_i$$

[0155] The final criterion value is computed as
$$\eta(t) = \frac{\sigma_B^2(t)}{\sigma_W^2(t)}$$

[0157] Traverse the criterion values of all gray values and select the gray value with the largest final criterion value, i.e., the optimal gray threshold for classification.

[0158] 6. With the gray classification threshold obtained in the previous step, apply simple black-and-white binarization to the two classes of pixels above and below this value. The result is shown in Fig. 4(a). The formula is as follows (where x, y denote the pixel coordinates in the image with the top-left corner as origin, v₁ denotes the gray value of pixel (x, y) before binarization, v denotes the pixel's gray value after binarization, and t is the gray classification threshold obtained in the previous step):

$$v(x, y) = \begin{cases} 255, & v_1(x, y) > t \\ 0, & v_1(x, y) \le t \end{cases}$$

[0160] 7. Perform Canny edge extraction on this binary image: use the cvCanny function of the OpenCV library; the input parameter is the original image, with 50 as the gradient threshold controlling edges and 150 as the strong-edge pixel threshold at the initial inner segmentation; the output is the Canny edge map;

[0161] 8. From the Canny edge map of the previous step, take the first white point in each row from left to right (for the right road edge), obtaining a boundary point set; this removes redundant information and simplifies the result. Fig. 4(b) is the result after edge extraction and edge simplification;

[0162] 9. Perform Hough line fitting with the boundary point set obtained in the previous step: use the cvHoughLines function of the OpenCV library; the input is the original image, with a polar radius resolution of 3 pixels, a polar angle resolution of π/180, and a minimum of 50 curve intersections required for a line; the output is the Hough fitting map, i.e., the road edge is obtained, as in Fig. 5.

[0163] III. Relative coarse positioning of the robot and the road edge

[0164] While the robot travels, the lateral displacement and the heading angle deviation between the current robot and the road edge must be computed continuously from the road edge detected by the camera. The lateral displacement is the perpendicular distance from the robot center to the road edge. The heading angle deviation is the angle between the robot's current direction of travel and the planned route it intends to follow. The specific computation steps are as follows:

[0165] 1. Inverse perspective projection transform

[0166] To conveniently compute the robot's lateral displacement deviation and heading angle deviation, the detected lane-edge image is subjected to an inverse perspective projection transform, converting the forward view into a top view, as in Fig. 7.

[0167] Define two coordinate systems W and I in Euclidean space, denoting the world coordinate system and the image coordinate system respectively:
$$W = \{(x, y, z)\} \subset E^3, \qquad I = \{(u, v)\} \subset E^2$$
[0170] where E is Euclidean space; x, y, z are the coordinates in the world coordinate system W, and the plane z = 0 is the ground plane; u, v are the coordinates in the image coordinate system, whose origin is the top-left corner of the image.

[0171] The essence of the inverse perspective projection transform is to map the road image in the image coordinate system I, i.e., the image acquired by the camera, onto the z = 0 plane of the world coordinate system W; the relationship between the two is shown in Fig. 6.

[0172] Since z = 0 and the transformed image involves no rotational transform, the image after the inverse perspective projection transform can be obtained simply by multiplying the original image coordinates by a 3×3 transform matrix.

[0173] f(x, y) = (x, y, 1) × H

[0174] where (x, y) are the coordinates of a point in the original image, H is the 3×3 inverse perspective projection transform matrix, and f(x, y) are the transformed coordinates.

[0175] The specific steps are as follows:

[0176] (1) With the camera on the robot, capture a forward view containing a trapezoidal pattern that is rectangular in the top view;

[0177] (2) Take the four vertices of a trapezoid in the picture and record their coordinates;

[0178] (3) Set the x, y coordinates of these four rectangle vertices in the new coordinate system according to the correspondence between rectangle size and pixels in the top view;

[0179] (4) Compute the inverse perspective projection matrix H with the cvWarpPerspectiveQMatrix function in OpenCV. The input to the cvWarpPerspectiveQMatrix function is two 4×2 matrices formed from the two sets of coordinates above; the output is the inverse perspective projection transform matrix H.

[0180] (5) Use the cvWarpPerspective function in OpenCV; the function's inputs are the original image and the inverse perspective projection transform matrix H, and its output is the transformed image.

[0181] The image used to compute the inverse perspective projection transform matrix and the image after the inverse perspective projection transform are shown in Fig. 8. [0182] Concrete example: as in Fig. 8, the image resolution is 768×576. Take four points with coordinates (92, 574), (230, 437), (722, 566), (572, 432), and set their coordinates in the new coordinate system to (304, 575), (304, 375), (464, 575), (464, 375). Compute the inverse perspective projection transform matrix with the cvWarpPerspectiveQMatrix function of OpenCV; its input is two 4×2 matrices, each formed from the horizontal and vertical coordinates of the four points, i.e., the coordinates before and after the transform. The resulting inverse perspective projection transform matrix is:

[0183]
$$H = \begin{bmatrix} -0.291892 & -1.36039 & 487.334 \\ -0.0454482 & -2.89973 & 1062.64 \\ -0.000060317 & -0.00356855 & 1 \end{bmatrix}$$

[0186] The image after the inverse perspective projection transform is obtained; after the inverse perspective projection transform, the original four points are essentially the four vertices of a square.

[0187] Since the actual width of one floor tile in the picture is 600 mm and the coordinates of each vertex in the image are known, it can be computed that, after the inverse perspective projection transform, one pixel corresponds to an actual distance of 7.5 mm, which is an important basis for the subsequent computation of the robot's travel state. Since the coordinates of the four points in the transformed image are set by hand, the positions of these four points can be controlled to adjust the actual distance corresponding to one pixel in the image.
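The same matrix can be reproduced with the modern OpenCV API (a sketch; the pairing of source and destination points follows the order quoted above, and the result should closely match the 3×3 matrix printed above):

```python
import numpy as np
import cv2

src = np.float32([(92, 574), (230, 437), (722, 566), (572, 432)])
dst = np.float32([(304, 575), (304, 375), (464, 575), (464, 375)])
H = cv2.getPerspectiveTransform(src, dst)
print(H)
# top = cv2.warpPerspective(front_view, H, (768, 576))
```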

[0188] 2. Computing the heading angle deviation and lateral displacement

[0189] In Fig. 9, suppose the robot's position is point B, the midpoint of AC; the solid center line (line ABC) is the robot's direction of travel; the direction of the lane edge is the lane edge line after the inverse perspective projection transform, and the dashed line is the extension of the lane edge line. The angle between it and the solid center line (line ABC) is the robot's heading angle deviation Θ, and the length of segment BD is the lateral displacement.

[0190] (1) Find the heading angle deviation Θ. The detection algorithm above gives the coordinates of every point on the road edge line in the original image; multiplying them by the inverse perspective projection transform matrix H yields the coordinates of any point on line DE. Then, with the coordinates of two points (x₁, y₁), (x₂, y₂) (x₁ ≠ x₂), compute the slope of the line k = (y₂ − y₁)/(x₂ − x₁), from which Θ = 90° − arctan(k); if x₁ = x₂, then Θ = 0°.

[0191] (2) Find the lateral displacement deviation (BD). [0192] i. The coordinates of point A in the figure can be found by solving the equations of line ABC and line DE simultaneously. Suppose the top-left corner of the image is the coordinate origin, coordinates are expressed in pixels, and the ordinate of point A is h. The equations are:
$$\begin{cases} x = \mathrm{width}/2 \\ y = kx + b \end{cases}$$
where width is the image width and k, b are the slope and intercept of line DE.

[0194] ii. When performing the inverse perspective projection transform there is a selectable scale, namely the actual length corresponding to one pixel (e.g., the 7.5 mm mentioned above), denoted α; it then follows that the actual length of segment CE is sin(Θ)·α·(h − height), where height is the pixel height of the image, i.e., the ordinate of C.

[0195] iii. Suppose the actual distance between the robot center and the bottom of the image (BC) is d (mm); this distance is the blind-zone distance left after the camera is mounted on the robot. Converted to pixels it is d/α, so the computation of the lateral displacement (BD) becomes sin(Θ)·α·(h − height − d/α).

Claims (1)

1. A road edge detection and rough positioning method based on monocular vision, characterized in that it is a semi-automatic road edge detection and rough positioning method based on the monocular vision of a mobile robot, implemented in a monocular-vision road edge detection and rough positioning system formed by connecting in series a computer operated by an operator and a single-lens CCD camera mounted on the mobile robot, the method being realized by the following steps in sequence:

Step (1): as the robot advances, the single-lens CCD camera captures road images that reflect the color difference between the two sides of the road/non-road boundary and inputs them into the computer for display.

Step (2): according to the color difference between the two sides of the road/non-road boundary, the operator selectively starts one of two different software modules provided in the computer:

for RGB color-space color road images in which the colors on the two sides of the road/non-road boundary differ significantly, the color-based road edge detection module is started;

for RGB color-space color road images in which the colors on the two sides of the boundary are blurred and no obvious boundary exists, the threshold-segmentation-based road edge detection module is started.

Step (3): the color-based road edge detection module detects the road edge by executing the following steps in sequence.

Step (3.1): initialize the parameters, read in the RGB color-space color road image, and set the range of the region of interest (ROI) to 1/4 to 1/2 of the road image, delimited from the lower-left corner of the color road image.

Step (3.2): extract the ROI image from the color road image and convert it to the HSV color space to obtain a new road image:

brightness V' = max(R, G, B),
S' = (V' − min(R, G, B)) / V', with S' = 0 when V' = 0,
H' = 60×(G − B)/(V' − min(R, G, B)) if V' = R,
H' = 120 + 60×(B − R)/(V' − min(R, G, B)) if V' = G,
H' = 240 + 60×(R − G)/(V' − min(R, G, B)) if V' = B;

if H' < 0, replace it with (H' + 360), where R, G, B are the red, green, and blue brightness values of the original pixel; let V = 255V', S = 255S', H = H'/2.

Step (3.3): perform Canny edge extraction with the cvCanny function module of the openCV library as follows to obtain a Canny edge map: for the result image of step (3.2), set the gradient threshold controlling edge linking at edge connections to 50 and the gradient threshold controlling strong edges at the initial inner segmentation to 150; the input is the HSV color-space image of the ROI together with these two gradient thresholds, and the output is the Canny edge map.

Step (3.4): set the HSV color range intervals and extract the green or earth-yellow regions, the green interval being (20, 0, 0) to (70, 40, 0) and the earth-yellow interval being (10, 0, 0) to (30, 0, 150).

Step (3.5): perform an erosion operation on the green or earth-yellow region obtained in step (3.4) with the cvErode function module of the openCV library; the output is a result image.

Step (3.6): perform a dilation operation on the result image obtained in step (3.5) with the cvDilate function module of the openCV library; the output is the green or earth-yellow region result image after the erosion and dilation operations.

Step (3.7): extract the edge points used for fitting from the result image obtained in step (3.6). For the left road edge, scan the result image of step (3.6) from right to left; if there are green points within the 5 pixels to the left of an edge point of the Canny edge map and no corresponding green points within the 5 pixels to its right, the edge point is considered a true edge point; an edge point with fewer than 5 pixels on either side is discarded directly. This yields a sample set of true edge points. Proceed by the same steps for the earth-yellow region.

Step (3.8): judge the true edge points: if the number m of true edge points in the sample set satisfies m > 30, execute the following steps.

Step (3.8.1): randomly select two true edge points from the sample set as two sample data.

Step (3.8.2): fit a straight line by the least squares method with the cvFitLine function module of the openCV library; the input is the two sample data obtained in step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle tolerances are both 0.01; the output is the slope and intercept of the fitted line. Then count the number k of edge points whose distance to this line is less than 4 pixels; these serve as the fitting points of the following step.

Step (3.8.3): if k/m is greater than 0.3, accept the fitted model and refit the line by least squares through the k points, which gives the final result; otherwise return to step (3.8.1). If the number of repetitions exceeds the set maximum of 200 loops, the detection fails; it also fails if the sample set contains fewer than 30 true edge points.
Step (4): at the right road edge, the threshold-segmentation-based road edge detection module performs road edge detection by the following steps in sequence.

Step (4.1): set the region of interest (ROI) to the lower-right 1/4 of the image.

Step (4.2): extract the ROI image and convert it to a grayscale image by the following formula:

V0(x, y) = 0.212671×R(x, y) + 0.715160×G(x, y) + 0.072169×B(x, y),

where x and y are the pixel coordinates in the image with the origin at the upper-left corner, V0(x, y) is the gray value of pixel (x, y), R, G, and B are the red, green, and blue channel values of the corresponding pixel of the color image, and V0, R, G, B all lie within [0, 255].

Step (4.3): apply mean blurring to pixel (x, y) by the following formula, convolving the 5×5 pixel-value matrix V centered on (x, y) with the kernel matrix K to obtain the new pixel value V1:

V1(x, y) = V∗K, where ∗ denotes convolution and
K is the 5×5 kernel matrix with every entry equal to 1/25, that is, the mean filter kernel.
Step (4.4): count the number of pixels corresponding to each gray value and build the color histogram.

Step (4.5): let the value range of the ROI image acquired by the CCD camera in gray space be [0, l − 1].
Figure CN102682292BC00041
Compute the criterion value H(t) by the following formula, traverse the criterion values of all gray values, and select the gray value whose criterion value is largest as the optimal classification gray threshold:
H(t) = σB²(t) / σW²(t)
where σB²(t) is the between-class variance of the two classes of pixels divided by the gray classification threshold t (those greater than t and those less than or equal to t), obtained by computing the occurrence probabilities P0(t) and P1(t):
σB²(t) = P0(t)×P1(t)×(μ0(t) − μ1(t))²
pi is the occurrence probability of pixels with gray value i; the total number of pixels of the image is
N = n0 + n1 + … + n(l−1), where ni
denotes the number of pixels whose gray value is i; using the histogram results, the occurrence probability of gray value i is computed as pi = ni / N; P0(t) and P1(t) are the occurrence probabilities of the two classes of gray values divided by the gray threshold t, and μT is the probability-weighted sum of the pixels over the whole gray-value range:
P0(t) = p0 + p1 + … + pt, P1(t) = 1 − P0(t), μT = 0×p0 + 1×p1 + … + (l−1)×p(l−1)
σW²(t) is the within-class variance of the two classes of pixels divided by the gray threshold t (those greater than t and those less than or equal to t), obtained by computing the occurrence probabilities, where μ0(t) and μ1(t) are the probability-weighted mean gray values of the pixels in the ranges at or below and above the threshold t, respectively:
μ0(t) = (0×p0 + 1×p1 + … + t×pt) / P0(t), μ1(t) = ((t+1)×p(t+1) + … + (l−1)×p(l−1)) / P1(t)
Step (4.6): based on the gray classification threshold t obtained in step (4.5), binarize the gray values of the two classes of pixels, those above the threshold t and those at or below it, by the following formula:
v(x, y) = 255 if V1(x, y) > t, and v(x, y) = 0 if V1(x, y) ≤ t;
here v(x, y) is the pixel gray value after binarization.

Step (4.7): perform Canny edge extraction on the binary image obtained in step (4.6) with the cvCanny function module of the openCV library; the output is the Canny edge image; the pixel gradient threshold controlling edge linking at edge connections is set to 50, and the pixel gradient threshold controlling strong edge connections at the initial edge cut is set to 100.

Step (4.8): in the binarized Canny edge map obtained in step (4.7), take the first white edge point from left to right as a boundary point, obtaining a boundary point set.

Step (4.9): perform Hough line fitting on all boundary points of the boundary point set obtained in step (4.8) with the cvHoughLines function module of the openCV library to obtain the road edge; the polar radius resolution is set to 3 pixels and the polar angle resolution to $; the minimum number of curves that must pass through a straight line when it is converted to polar coordinates is 50.

Step (5): perform the relative rough positioning of the robot and the road edge by the following steps.

Step (5.1): obtain a top view of the road edge by applying an inverse perspective projection transform to the forward view of the road edge detected in step (4), as follows.

Step (5.1.1): capture with the camera on the robot a forward view containing a trapezoid that is rectangular in the top view.

Step (5.1.2): select the four vertices of the trapezoid in the forward view obtained in step (5.1.1) and record their coordinates.

Step (5.1.3): set arbitrarily the correspondence between rectangle size and pixels in the top view, thereby obtaining the four vertices in the top view produced by the inverse perspective projection transform, and record their coordinates.

Step (5.1.4): compute the inverse perspective projection transform matrix H with the cvWarpPerspectiveQMatrix function module of openCV; the input is the two 4×2 matrices formed by the two sets of coordinates of steps (5.1.2) and (5.1.3), and the output is the inverse perspective projection transform matrix H.

Step (5.1.5): input the forward view and the inverse perspective projection transform matrix into the cvWarpPerspective function module of openCV to obtain the top view after the inverse perspective projection transform.

Step (5.2): compute the heading angle deviation and the lateral displacement as follows. The heading angle deviation is the angle θ between the robot travel direction AC and the extension line AE of the road edge line; the lateral displacement is the perpendicular distance BD from the robot center B on the travel direction AC to the extension line of the road edge.

Step (5.2.1): find the heading angle deviation θ: take any two points on the road edge line of the forward view and multiply the coordinates of each point by the inverse perspective projection transform matrix H to obtain the coordinates (x1, y1) and (x2, y2) of the two corresponding points on the extension line of the road edge line; the slope of the corresponding line AE is k = (y2 − y1)/(x2 − x1), and θ = 90° − arctan(k); if x1 = x2, then θ = 0°.

Step (5.2.2): find the lateral displacement BD.

Step (5.2.2.1): set the upper-left corner of the top view as the coordinate origin, with coordinates expressed in pixels; each pixel corresponds to an actual length of 7.5 mm, as set when the top-view coordinates were fixed in step (5.1.3). Solving the following simultaneous equations gives the ordinate h of point A on the robot travel direction AC:
Figure CN102682292BC00051
Step (5.2.2.2): let the ordinate of the intersection point C of the robot travel direction AC with the lower edge of the top view be the image height, height; then the perpendicular distance from point C to the extension line of the road edge line is CE = sin(θ)×7.5×(h − height).

Step (5.2.2.3): compute the lateral displacement BD as

BD = sin(θ)×7.5×(h − height − d/7.5),

where d is the perpendicular distance BC between the robot center B and the lower edge of the top view, that is, the blind-zone distance in front of the robot, in mm.
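To make the threshold-segmentation branch of claim 1 concrete, here is a minimal C++ sketch of steps (4.1) to (4.9) against the modern OpenCV API; the claim names the legacy C functions cvCanny and cvHoughLines, for which cv::Canny and cv::HoughLines are the current equivalents. Two substitutions are flagged in the comments: cv::cvtColor uses Rec.601 grayscale weights (0.299, 0.587, 0.114) rather than the claim's coefficients, and cv::threshold with THRESH_OTSU maximizes the between-class variance, which selects the same threshold as maximizing H(t) = σB²(t)/σW²(t), since the between-class and within-class variances sum to a constant. The input file name and the 1° Hough angle step are assumptions.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("road.png");  // forward-view frame; path is illustrative
    if (frame.empty()) return 1;

    // Step (4.1): ROI = lower-right quarter of the image.
    cv::Rect roiRect(frame.cols / 2, frame.rows / 2, frame.cols / 2, frame.rows / 2);
    cv::Mat roi = frame(roiRect);

    // Step (4.2): grayscale conversion. cv::cvtColor applies Rec.601 weights,
    // slightly different from the claim's 0.212671/0.715160/0.072169.
    cv::Mat gray;
    cv::cvtColor(roi, gray, cv::COLOR_BGR2GRAY);

    // Step (4.3): 5x5 mean blur, i.e. convolution with K = (1/25) * ones(5, 5).
    cv::Mat blurred;
    cv::blur(gray, blurred, cv::Size(5, 5));

    // Steps (4.4)-(4.6): histogram-based optimal threshold and binarization.
    // THRESH_OTSU maximizes the between-class variance over all gray levels,
    // equivalent to maximizing the claim's criterion H(t).
    cv::Mat binary;
    double t = cv::threshold(blurred, binary, 0, 255,
                             cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::printf("selected gray threshold t = %.0f\n", t);

    // Step (4.7): Canny edge extraction with thresholds 50 (edge linking)
    // and 100 (strong edges), as in the claim.
    cv::Mat edges;
    cv::Canny(binary, edges, 50, 100);

    // Step (4.8): the first white edge point from the left in each row
    // forms the boundary point set.
    cv::Mat boundary = cv::Mat::zeros(edges.size(), CV_8U);
    for (int y = 0; y < edges.rows; ++y)
        for (int x = 0; x < edges.cols; ++x)
            if (edges.at<uchar>(y, x) > 0) {
                boundary.at<uchar>(y, x) = 255;
                break;
            }

    // Step (4.9): Hough line fit; the 3 px radius resolution and accumulator
    // threshold 50 follow the claim, the 1-degree angle step is an assumption
    // (the claim's angle resolution did not survive transcription).
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(boundary, lines, 3.0, CV_PI / 180.0, 50);
    if (!lines.empty())
        std::printf("road edge: rho = %.1f px, theta = %.3f rad\n",
                    lines[0][0], lines[0][1]);
    return 0;
}

For the positioning of step (5.1), cv::getPerspectiveTransform and cv::warpPerspective are the current counterparts of the cvWarpPerspectiveQMatrix and cvWarpPerspective modules named in the claim.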

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210144082 CN102682292B (en) 2012-05-10 2012-05-10 Method based on monocular vision for detecting and roughly positioning edge of road

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201210144082 CN102682292B (en) 2012-05-10 2012-05-10 Method based on monocular vision for detecting and roughly positioning edge of road

Publications (2)

Publication Number Publication Date
CN102682292A CN102682292A (en) 2012-09-19
CN102682292B true CN102682292B (en) 2014-01-29

Family

ID=46814188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210144082 CN102682292B (en) 2012-05-10 2012-05-10 Method based on monocular vision for detecting and roughly positioning edge of road

Country Status (1)

Country Link
CN (1) CN102682292B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102997853A (en) * 2012-11-22 2013-03-27 华中科技大学 Device and method for detecting ice and snow thickness
CN103971361B (en) * 2013-02-06 2017-05-10 富士通株式会社 An image processing apparatus and method
CN103308056B (en) * 2013-05-23 2015-09-16 中国科学院自动化研究所 A road marking detection method
CN103400150B (en) * 2013-08-14 2017-07-07 浙江大学 A method and apparatus for identifying the road edge based on the mobile platform
CN103630122B (en) * 2013-10-15 2015-07-15 北京航天科工世纪卫星科技有限公司 Monocular vision lane line detection method and distance measurement method thereof
CN103696381B (en) * 2013-12-20 2016-02-03 长沙中联重科环卫机械有限公司 One kind curb cleaning control method, a control apparatus, control system and sweepers
CN103714538B (en) * 2013-12-20 2016-12-28 中联重科股份有限公司 Road edge detection method, apparatus and vehicle
CN103809186B (en) * 2013-12-20 2016-10-05 长沙中联重科环卫机械有限公司 Curb detection systems, methods, apparatus, and construction machinery
CN103699899B (en) * 2013-12-23 2016-08-17 北京理工大学 Equidistant curve model based on lane line detection method
CN104021388B (en) * 2014-05-14 2017-08-22 西安理工大学 Based on binocular vision reversing automatic obstacle detection and early warning in
CN104036246B (en) * 2014-06-10 2017-02-15 电子科技大学 The method of one kind of lane line locating polymorphic feature fusion and - means
CN105224908A (en) * 2014-07-01 2016-01-06 北京四维图新科技股份有限公司 Road marking acquisition method based on orthographic projection and device
CN104760812B (en) * 2015-02-26 2017-06-30 三峡大学 Based on the product conveyor monocular vision methods and RTLS
CN105005761B (en) * 2015-06-16 2018-08-17 北京师范大学 Analysis of significance in combination of a full-color image of high resolution remote sensing method for detecting a road
CN105549603B (en) * 2015-12-07 2018-08-28 北京航空航天大学 Intelligent road patrol control method kinds of multi-rotor UAVs
CN105701496B (en) * 2016-01-12 2019-07-05 北京万同科技有限公司 A kind of go disk recognition methods based on artificial intelligence technology
CN105740826A (en) * 2016-02-02 2016-07-06 大连楼兰科技股份有限公司 Lane mark binaryzation detection method based on dual scales
CN105844614A (en) * 2016-03-15 2016-08-10 广东工业大学 Vision north finding method based on correction of angle of robot
CN106127177A (en) * 2016-07-01 2016-11-16 蔡雄 Unmanned road roller
CN106295556A (en) * 2016-08-09 2017-01-04 中国科学院遥感与数字地球研究所 Road detection method based on small unmanned aerial vehicle aerial image
CN107831675A (en) * 2016-09-16 2018-03-23 天津思博科科技发展有限公司 Online robot control device based on intelligent learning technology
CN106500714A (en) * 2016-09-22 2017-03-15 福建网龙计算机网络信息技术有限公司 Video-based robot navigation method and system
CN106503636A (en) * 2016-10-12 2017-03-15 同济大学 Road sight distance detection method and device based on visual image
CN106872995A (en) * 2017-04-14 2017-06-20 北京佳讯飞鸿电气股份有限公司 Laser radar detection method and laser radar detection device
CN107221070A (en) * 2017-05-24 2017-09-29 广州市银科电子有限公司 Bill anti-fake identification method based on main pattern fluorescent feature identification
CN107665489A (en) * 2017-09-18 2018-02-06 华中科技大学 Glass dihedral angle detection method based on machine vision
CN107831762A (en) * 2017-10-18 2018-03-23 江苏卡威汽车工业集团股份有限公司 Path planning system and method of new energy vehicle
CN107525525A (en) * 2017-10-18 2017-12-29 江苏卡威汽车工业集团股份有限公司 System and method for calibrating paths of new energy automobiles
CN107807638A (en) * 2017-10-18 2018-03-16 江苏卡威汽车工业集团股份有限公司 New energy automobile image processing system and method
CN107728620A (en) * 2017-10-18 2018-02-23 江苏卡威汽车工业集团股份有限公司 Driverless system and method of new energy car
CN107560634A (en) * 2017-10-18 2018-01-09 江苏卡威汽车工业集团股份有限公司 Route regulating system and route regulating method of new-energy automobile

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4248558B2 (en) * 2006-03-24 2009-04-02 トヨタ自動車株式会社 Road dividing line detection device
CN102156977A (en) * 2010-12-22 2011-08-17 浙江大学 Vision-based road detection method
CN102436644B (en) * 2011-11-02 2013-10-16 南京物联网研究院发展有限公司 Unstructured road detection method based on adaptive edge registration

Also Published As

Publication number Publication date
CN102682292A (en) 2012-09-19

Similar Documents

Publication Publication Date Title
Hillel et al. Recent progress in road and lane detection: a survey
Oniga et al. Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection
Toulminet et al. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis
US6906620B2 (en) Obstacle detection device and method therefor
US6826293B2 (en) Image processing device, singular spot detection method, and recording medium upon which singular spot detection program is recorded
Kastrinaki et al. A survey of video processing techniques for traffic applications
Kong et al. Vanishing point detection for road detection
CN100502463C (en) Method for collecting characteristics in telecommunication flow information video detection
US9435885B2 (en) Road-terrain detection method and system for driver assistance systems
US9652980B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
CN101966846B Travel's clear path detection method for motor vehicle involving object detecting and enhancing
Kong et al. General road detection from a single image
Kluge Extracting road curvature and orientation from image edge points without perceptual grouping into features
Li et al. A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios
Liang et al. Video stabilization for a camcorder mounted on a moving vehicle
Kenue Lanelok: Detection Of Lane Boundaries And Vehicle Tracking Using Image-Processing Techniques-Part I: Hough-Transform, Region-Tracing And Correlation Algorithms
US8332134B2 (en) Three-dimensional LIDAR-based clear path detection
US8487991B2 (en) Clear path detection using a vanishing point
Hautière et al. Real-time disparity contrast combination for onboard estimation of the visibility distance
Li et al. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection
US9852357B2 (en) Clear path detection using an example-based approach
Levinson et al. Traffic light mapping, localization, and state detection for autonomous vehicles
Son et al. Real-time illumination invariant lane detection for lane departure warning system
US8428305B2 (en) Method for detecting a clear path through topographical variation analysis
US8670592B2 (en) Clear path detection using segmentation-based method

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model
CF01 Termination of patent right due to non-payment of annual fee