CN111783666A - A Fast Lane Line Detection Method Based on Corner Feature Matching of Continuous Video Frames - Google Patents
- Publication number
- CN111783666A (application CN202010625087.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- lane line
- lane
- method based
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/247—Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
Abstract
The invention relates to a fast lane line detection method based on corner feature matching of consecutive video frames, comprising: calculating the image distortion matrix; performing image acquisition and image correction; delineating a dynamic region of interest by corner detection and matching (if lane line recognition failed in the previous frame, the region of interest is set to the whole image and this step is skipped); obtaining the coordinates of matched feature point pairs in consecutive frames: when the previous frame's lane line recognition result is true, estimating optical flow at points with salient corner features in the previous frame and solving by least squares; predicting the position of the lane lines in the current frame and selecting the recognition region of interest; binarizing the image within the region of interest to obtain a binary image; applying a perspective transformation to the image; locating lane line pixels in the top view and fitting the lane lines with polynomial equations; and computing the lane line curvature and lane departure distance and annotating them on the original image.
Description
Technical Field
The invention belongs to the field of environment perception and image processing for intelligent vehicles, and in particular relates to a fast lane line detection method based on corner feature matching of consecutive video frames.
Background
According to statistics from the Ministry of Public Security, 32.14 million motor vehicles were newly registered nationwide in 2019, bringing the total to 348 million. Private car ownership exceeded 200 million for the first time, reaching 207 million. China's total road mileage has reached 4,846,500 km, including 142,600 km of expressways, the most in the world. The continued growth of vehicle ownership and road mileage has increased the frequency of traffic accidents; at the same time, because vehicle travel is concentrated in certain areas, congestion has worsened. According to the Traffic Administration of the Ministry of Public Security, 238,351 road traffic accidents occurred nationwide in 2019, causing 67,759 deaths, 275,125 injuries, and 910 million yuan in direct property losses. Frequent traffic accidents endanger people's lives and property, waste social resources, and cause serious direct and indirect economic losses.
Consequently, governments strongly support Intelligent Transportation Systems (ITS), and companies and research institutions around the world have invested substantial manpower and material resources in related research. Intelligent vehicles, a key component of intelligent transportation systems, play an indispensable role. An intelligent vehicle perceives its surroundings in real time through sensors such as on-board cameras and radars, builds a local map algorithmically from the sensor data, and controls the vehicle through an intelligent software system so that it travels on the road more safely and reliably.
Constrained by the current state of technology and network transmission speeds, vehicles cannot yet drive fully autonomously, so many research institutions first implement Advanced Driver Assistance Systems (ADAS). ADAS integrates perception of the vehicle's surroundings with vehicle control and realizes basic intelligent-driving functions, including adaptive cruise, automatic collision avoidance, lane departure warning, and automatic parking. Among the many on-board sensors, cameras are widely adopted because they are inexpensive and capture rich color information; they are mainly used to detect lane lines, vehicles, road signs, pedestrians, and other information around the vehicle.
Current lane line recognition algorithms mostly use the Hough transform or linear equation fitting. These methods are serviceable and can fit lane line positions on straight roads, but the Hough transform tends to lose information on curved roads, while linear fitting requires good lane line edge features as a precondition and, lacking global constraints, is susceptible to noise.
Summary of the Invention
The purpose of the present invention is to provide a fast lane line detection method based on corner feature matching of consecutive video frames, in order to solve the problems that existing lane line recognition methods lack curvature information and have high complexity.
The fast lane line detection method based on corner feature matching of consecutive video frames of the present invention comprises: Step 1, calculating the image distortion matrix; Step 2, performing image acquisition and image correction; Step 3, delineating a dynamic region of interest by corner detection and matching, which includes: if lane line recognition failed in the previous frame, setting the region of interest to the whole image and skipping Step 3; obtaining the coordinates of matched feature point pairs in consecutive frames: when the previous frame's lane line recognition result is true, estimating optical flow at points with salient corner features in the previous frame and solving by least squares; predicting the position of the lane lines in the current frame and selecting the recognition region of interest; Step 4, binarizing the image within the region of interest to obtain a binary image; Step 5, applying a perspective transformation to the image to obtain a top view; Step 6, locating lane line pixels in the top view and fitting the lane lines with polynomial equations; Step 7, computing the lane line curvature and lane departure distance and annotating them on the original image; Step 8, repeating Steps 2 to 6 until image acquisition fails or a termination signal is received.
The present invention proposes a fast lane line detection method based on corner feature matching of consecutive video frames. Corner points, the intersections of contours, are features that usually remain stable for the same scene even when the viewing angle changes. While preserving the important features of the image, corners greatly reduce the amount of data to be processed; their information density is high, which speeds up computation and aids reliable image matching, making real-time processing possible.
Description of the Drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is the image after distortion correction;
Fig. 3 is the image after grayscale conversion and binarization;
Fig. 4 is the image after perspective transformation;
Fig. 5 is the image after lane line fitting;
Fig. 6 is the annotated image.
Detailed Description
To make the purpose, content, and advantages of the present invention clearer, specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples.
Fig. 1 is a flow chart of the method of the present invention; Fig. 2 is the image after distortion correction; Fig. 3 is the image after grayscale conversion and binarization; Fig. 4 is the image after perspective transformation; Fig. 5 is the image after lane line fitting; Fig. 6 is the annotated image. As shown in Figs. 1 to 6, the fast lane line detection method based on corner feature matching of consecutive video frames of the present invention comprises the following steps:
Step 1: Calculate the image distortion matrix
Deviations introduced during the manufacture and assembly of the camera lens cause distortion, so the captured raw image is warped. During use, the captured images therefore need to be corrected through calibration.
Step 1.1: Using the interface provided by the camera driver, capture chessboard calibration images from 20 different angles.
Step 1.2: Convert the images to grayscale using the weighted average method.
Step 1.3: Using Zhang Zhengyou's calibration method, locate the inner corners of the calibration board in the grayscale images and compute the distortion matrix.
步骤2,图像采集和图像校正Step 2, Image Acquisition and Image Correction
在每帧图像处理周期开始,首先需要确定当前处理周期是否为初次处理,如果为初次规划,则将上帧识别车道线结果标记为False。At the beginning of each frame of image processing cycle, it is first necessary to determine whether the current processing cycle is the initial processing.
调用摄像头驱动提供的接口采集车辆前方图像,并使用畸变矩阵对采集到的图像进行畸变校正,得到校正后的图像,如图2所示。Call the interface provided by the camera driver to collect the image in front of the vehicle, and use the distortion matrix to perform distortion correction on the collected image to obtain the corrected image, as shown in Figure 2.
Step 3: Delineate a dynamic region of interest (ROI) using corner detection and matching
Step 3.1: If the previous frame's lane line recognition result is False, set the region of interest to the whole image and skip the rest of Step 3.
Step 3.2: Obtain the coordinates of matched feature point pairs in consecutive frames
When the previous frame's lane line recognition result is True, use the L-K (Lucas-Kanade) method to estimate optical flow at points with salient corner features in the previous frame. Since the optical flow equations form an overdetermined linear system, solve them by least squares to obtain multiple pairs of matched feature point coordinates across the two frames.
Step 3.3: Predict the position of the lane lines in the current frame and select the recognition region of interest
Step 3.3.1: According to each coordinate's position relative to the vertical centerline of the image, divide the feature point pairs into a left group and a right group.
Step 3.3.2: Remove foreground pixels
For the left and right groups of points, apply the DBSCAN clustering algorithm to the displacement vectors of the coordinate points and discard foreground pixels, keeping the dominant background points.
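The foreground rejection can be sketched with a minimal, self-contained DBSCAN over the 2-D displacement vectors; the `eps` and `min_samples` values are assumptions, and a library implementation such as scikit-learn's `DBSCAN` would serve equally well:

```python
import numpy as np

def dbscan_background_mask(vectors, eps=2.0, min_samples=5):
    """Minimal DBSCAN over displacement vectors; returns a boolean mask
    selecting the largest cluster, assumed to be the static background."""
    v = np.asarray(vectors, dtype=float)
    n = len(v)
    dist = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2)
    neighbours = dist <= eps                      # includes the point itself
    core = neighbours.sum(axis=1) >= min_samples  # density criterion
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # grow a new cluster from this unvisited core point (BFS expansion);
        # border points receive the label but are not expanded further
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in np.nonzero(neighbours[j])[0]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if core[k]:
                        stack.append(k)
        cluster += 1
    if cluster == 0:
        return np.zeros(n, dtype=bool)
    sizes = [(labels == c).sum() for c in range(cluster)]
    return labels == int(np.argmax(sizes))
```

Points whose displacement follows the camera's ego-motion form one dense cluster; moving vehicles produce outlying displacement vectors and are masked out.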
Step 3.3.3: Using the weighted mean method, solve for the relative displacement vector of the background ahead
Step 3.3.4: Set the region of interest
Use the obtained displacement vector to compute the predicted curves of the left and right lanes, and set a sliding window extending b pixels (b = 50) to each side of the predicted curve as the region of interest.
Step 4: Binarize the image within the region of interest to obtain a binary image
Step 4.1: Convert the image to grayscale using a composite operator
The Sobel operator is computed as

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where Gx and Gy are the edge-detection images in the horizontal and vertical directions respectively, * denotes the planar convolution operation, and A is the original image.
The magnitude and gradient direction of the Sobel response are computed as

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$

where G and θ are the magnitude and gradient direction at each pixel, respectively.
Step 4.1.1: Grayscale the image with the Sobel operator in the horizontal direction, obtaining the image corresponding to Gx.
Step 4.1.2: Grayscale the image using the Sobel magnitude, obtaining the image corresponding to G.
Step 4.1.3: Grayscale the image using the Sobel gradient direction, obtaining the image corresponding to θ.
Step 4.1.4: Take the saturation channel component of the image
Separate the hue, saturation, and lightness channel components of the image in the HSL color space and take the saturation channel component.
Step 4.2: Average the four grayscale images to obtain the composite-operator grayscale image, as shown in Fig. 3.
Step 5: Apply a perspective transformation to the image to obtain the top view, as shown in Fig. 4.
Step 6: Locate lane line pixels in the top view and fit the lane lines with polynomial equations
Step 6.1: Locate lane line pixels
Step 6.1.1: Compute the histogram of the lower half of the image and find its peak positions on the left and right sides.
Step 6.1.2: Slice the image
Divide the image horizontally into 9 equal slices; in the bottom slice, place two rectangular sliding windows, each one slice high and 200 pixels wide, covering the left and right histogram peaks.
Step 6.1.3: Search for lane line pixels
Move the sliding windows from bottom to top, collecting the lane line pixels in each slice in turn and re-centering each sliding rectangle for the slice above.
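Steps 6.1.1 to 6.1.3 can be sketched in NumPy as below. The slice count and window width follow the text; the re-centering rule (the mean x of the pixels a window captures) is an assumption:

```python
import numpy as np

def sliding_window_search(binary, n_slices=9, win_width=200):
    """Collect left/right lane pixels from a binary top view."""
    h, w = binary.shape
    # histogram of the bottom half gives the starting x of each lane line
    hist = binary[h // 2:, :].sum(axis=0)
    lx = int(np.argmax(hist[:w // 2]))
    rx = int(np.argmax(hist[w // 2:]) + w // 2)
    ys, xs = binary.nonzero()
    left_sel, right_sel = [], []
    slice_h = h // n_slices
    for i in range(n_slices):                       # bottom slice first
        y_hi, y_lo = h - i * slice_h, h - (i + 1) * slice_h
        for cx, out in ((lx, left_sel), (rx, right_sel)):
            out.append((ys >= y_lo) & (ys < y_hi) &
                       (xs >= cx - win_width // 2) & (xs < cx + win_width // 2))
        # re-centre each window on the mean x of the pixels it captured
        if left_sel[-1].any():
            lx = int(xs[left_sel[-1]].mean())
        if right_sel[-1].any():
            rx = int(xs[right_sel[-1]].mean())
    left = np.logical_or.reduce(left_sel)
    right = np.logical_or.reduce(right_sel)
    return (xs[left], ys[left]), (xs[right], ys[right])
```

The two returned pixel sets feed the polynomial fit of Step 6.2.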
Step 6.2: Fit a second-order polynomial by least squares
For the left and right groups of lane line pixels, perform a second-order polynomial fit by least squares to obtain the lane line equation under the perspective transformation:

$$x = a_2 y^2 + a_1 y + a_0$$

where the polynomial coefficients a0, a1, and a2 are those that minimize the squared fitting error

$$\sum_i \left( x_i - (a_2 y_i^2 + a_1 y_i + a_0) \right)^2$$

in which xi and yi are the horizontal and vertical coordinates of the i-th lane line pixel found in Step 6.1.3.
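The least-squares fit can be sketched with `np.polyfit`, treating x as a function of y since the lane lines are near-vertical in the top view:

```python
import numpy as np

def fit_lane(xs, ys):
    """Second-order least-squares fit x = a2*y^2 + a1*y + a0."""
    a2, a1, a0 = np.polyfit(ys, xs, 2)  # highest-degree coefficient first
    return a2, a1, a0
```

Each of the two lane lines is fitted independently on its own pixel group.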
The positions of the fitted lane lines in the image are shown in Fig. 5.
Step 6.3: Collect lane line pixel statistics and set the lane line recognition status flag
If the number of lane line pixels is below the threshold of 200, or the curvature or extent of the fitted curve falls outside the threshold range, the detection is considered to have failed: the lane line recognition status flag is set to False (so that the next frame uses global detection) and the current frame reuses the previous frame's detection result. Otherwise, the flag is set to True.
Step 7: Compute the lane line curvature and the lane departure distance, and annotate this information on the original image
Step 7.1: Compute the lane curvature and convert the units to meters according to the pixel-to-meter correspondence.
Step 7.2: Compute the lane departure distance and convert the units to meters according to the pixel-to-meter correspondence.
Step 7.3: Annotate the information.
On a blank image of the same height and width as the perspective-transformed image, draw the left and right lane lines and fill the area between them in green. Apply the inverse perspective transformation to this image and fuse it with the original image; finally, print the lane line curvature and lane departure distance as text in the upper-left part of the image. The annotated image is shown in Fig. 6.
Step 8: Repeat Steps 2 to 6 until image acquisition fails or a termination signal is received
The present invention provides another embodiment of the fast lane line detection method based on corner feature matching of consecutive video frames, comprising the following steps:
Step 1: Calculate the image distortion matrix
Deviations introduced during the manufacture and assembly of the camera lens cause distortion, so the raw image is warped; the captured images therefore need to be corrected.
Step 1.1: Using the interface provided by the camera driver, capture chessboard calibration images from 20 different angles
Step 1.2: Convert the images to grayscale using the weighted average method:

f(i, j) = 0.3 R(i, j) + 0.59 G(i, j) + 0.11 B(i, j)

where f(i, j) is the grayscale value of the pixel at coordinates (i, j) after conversion, and R(i, j), G(i, j), and B(i, j) are the red, green, and blue channel components of the pixel at (i, j) in the color image.
Step 1.3: Using Zhang Zhengyou's calibration method, locate the inner corners of the calibration board in the grayscale images and compute the distortion matrix
Step 2: Image acquisition and image correction
At the beginning of each frame's processing cycle, first determine whether the current cycle is the initial one; if so, mark the previous frame's lane line recognition result as False.
Call the interface provided by the camera driver to capture the image in front of the vehicle, and use the distortion matrix to correct the captured image, obtaining the corrected image shown in Fig. 2.
Step 3: Delineate a dynamic region of interest (ROI) using corner detection and matching
Step 3.1: If the previous frame's lane line recognition result is False, set the region of interest to the whole image and skip this step
Step 3.2: Estimate optical flow and find the least-squares solution
When the previous frame's lane line recognition result is True, use the L-K (Lucas-Kanade) method to estimate optical flow at points with salient corner features in the previous frame. Since the optical flow constraints form an overdetermined linear system, the least-squares solution is

$$\begin{bmatrix} u \\ v \end{bmatrix} = (A^{T}A)^{-1}A^{T}b$$

where u and v are the relative displacement coordinates of the feature point between video frames, and the symbols are computed as

$$A = \begin{bmatrix} I_x(p_1) & I_y(p_1) \\ \vdots & \vdots \\ I_x(p_n) & I_y(p_n) \end{bmatrix}, \qquad b = \begin{bmatrix} -I_t(p_1) \\ \vdots \\ -I_t(p_n) \end{bmatrix}$$

with Ix, Iy, and It the partial derivatives of image intensity with respect to x, y, and time at the pixels p1, ..., pn of the local window.
After this computation, multiple pairs of matched feature point coordinates across the two frames are obtained.
Step 3.3: Predict the position of the lane lines in the current frame and select the recognition region of interest
Step 3.3.1: According to each coordinate's position relative to the vertical centerline of the image, divide the feature point pairs into a left group and a right group.
Step 3.3.2: Remove foreground pixels
For the left and right groups of points, apply the DBSCAN clustering algorithm to the displacement vectors of the coordinate points and discard foreground pixels, keeping the dominant background points.
Step 3.3.3: Solve for the relative displacement vector of the background ahead
Using the weighted mean method, solve for the relative displacement vector of the background ahead:

$$\bar{v} = \frac{\sum_i w_i v_i}{\sum_i w_i}, \qquad w_i = \frac{1}{d_i}$$

where v̄ is the resulting relative displacement vector of the background ahead, vi is the displacement vector of the i-th group of feature pixels, and the weight wi is the reciprocal of the distance di between the i-th feature pixel and the lane fitting curve.
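The weighted mean above can be sketched directly in NumPy:

```python
import numpy as np

def weighted_displacement(vectors, distances):
    """Weighted mean of background displacement vectors; the weight of each
    point is the reciprocal of its distance to the previous lane-fit curve,
    so points nearer the lane contribute more."""
    w = 1.0 / np.asarray(distances, dtype=float)
    v = np.asarray(vectors, dtype=float)
    return (w[:, None] * v).sum(axis=0) / w.sum()
```

The resulting vector shifts the previous frame's lane curves to predict where the lanes lie in the current frame (Step 3.3.4).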
Step 3.3.4: Set the region of interest
Use the obtained displacement vector to compute the predicted curves of the left and right lanes, and set a sliding window extending b pixels (b = 50) to each side of the predicted curve as the region of interest.
Step 4: Binarize the image within the region of interest to obtain a binary image
Step 4.1: Convert the image to grayscale using a composite operator
The Sobel operator is computed as

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

where Gx and Gy are the edge-detection images in the horizontal and vertical directions respectively, * denotes the planar convolution operation, and A is the original image.
The magnitude and gradient direction of the Sobel response are computed as

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$

where G and θ are the magnitude and gradient direction at each pixel, respectively.
Step 4.1.1: Grayscale the image with the Sobel operator in the horizontal direction, obtaining the image corresponding to Gx
Step 4.1.2: Grayscale the image using the Sobel magnitude, obtaining the image corresponding to G
Step 4.1.3: Grayscale the image using the Sobel gradient direction, obtaining the image corresponding to θ
Step 4.1.4: Separate the image saturation channel component
Separate the hue, saturation, and lightness channel components of the image in the HSL color space and take the saturation channel component.
Step 4.2: Average the four grayscale images to obtain the composite-operator grayscale image, as shown in Fig. 3
Step 5: Apply a perspective transformation to the image to obtain the top view, as shown in Fig. 4
Step 6: Locate lane line pixels in the top view and fit the lane lines with polynomial equations
Step 6.1: Locate lane line pixels
Step 6.1.1: Compute the histogram of the lower half of the image and find its peak positions on the left and right sides
Step 6.1.2: Divide the image horizontally into 9 equal slices; in the bottom slice, place two rectangular sliding windows, each one slice high and 200 pixels wide, covering the left and right histogram peaks
Step 6.1.3: Move the sliding windows from bottom to top, collecting the lane line pixels in each slice in turn and re-centering each sliding rectangle for the slice above
Step 6.2: perform second-order polynomial fitting.
For the left and right sets of lane-line pixels, fit a second-order polynomial to each by the least squares method to obtain the lane-line equation under the perspective transformation. The lane-line equation has the form
x = a0 + a1·y + a2·y²
where the polynomial coefficients a0, a1, and a2 are the least-squares solution minimizing Σ(xi − a0 − a1·yi − a2·yi²)², with xi and yi the horizontal and vertical coordinates of the i-th lane-line pixel found in step 6.1.3.
The positions of the fitted lane lines in the image are shown in Figure 5.
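A minimal sketch of the fit in step 6.2, assuming the lane line is modeled as x as a function of y (consistent with the per-ordinate statistics in step 7, since lane lines in the top view are near-vertical):

```python
import numpy as np

def fit_lane(xs, ys):
    """Least-squares second-order fit x = a2*y**2 + a1*y + a0.
    np.polyfit returns coefficients highest-degree first: (a2, a1, a0)."""
    return np.polyfit(ys, xs, 2)
```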
Step 6.3: compute lane-line pixel statistics and set the lane-line detection status flag.
If fewer than the threshold of 200 pixels are found, or the curvature or extent of the fitted curve falls outside the allowed range, the detection is considered to have failed: the lane-line status flag is set to False (so that the next frame uses a global search) and the detection result of the previous frame is reused for the current frame. Otherwise the lane-line status flag is set to True.
Step 7: compute the lane-line curvature and the lane-departure distance, and annotate this information on the original image.
Step 7.1: compute the lane curvature.
Using the left and right lane-line fitting equations, compute for each line the mean lane-line curvature over every ordinate in the lower half of the image, convert the result to meters using the scales of 3.7 m per 700 pixels horizontally and 30 m per 720 pixels vertically, and average the left and right mean curvatures.
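A sketch of the curvature computation with the stated pixel-to-meter scales, using the standard radius-of-curvature formula for a plane curve x(y); refitting the pixels in meter coordinates is one common way to apply the conversion and is an assumption here, not a detail given in the text:

```python
import numpy as np

# Pixel-to-meter scales stated in step 7.1.
XM_PER_PIX = 3.7 / 700    # horizontal: 3.7 m per 700 px
YM_PER_PIX = 30.0 / 720   # vertical: 30 m per 720 px

def curvature_radius_m(xs_pix, ys_pix, y_eval_pix):
    """Radius of curvature in meters at ordinate y_eval_pix for a lane
    line fitted as x = a2*y**2 + a1*y + a0. The fit is redone in meter
    coordinates so the radius comes out in meters directly."""
    a2, a1, _a0 = np.polyfit(ys_pix * YM_PER_PIX, xs_pix * XM_PER_PIX, 2)
    y = y_eval_pix * YM_PER_PIX
    # R = (1 + x'(y)^2)^(3/2) / |x''(y)| for a curve x(y).
    return (1 + (2 * a2 * y + a1) ** 2) ** 1.5 / abs(2 * a2)
```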
Step 7.2: compute the lane-departure distance.
Using the left and right lane-line fitting equations, compute, for every ordinate pixel in the lower half of the image, the difference between the midpoint of the left and right lane lines and the horizontal center of the image, and convert the result to meters using the scale of 3.7 m per 700 pixels.
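At a single ordinate, step 7.2 reduces to a signed pixel offset scaled to meters (a sketch; the 1280-pixel default image width is an assumption, and the camera is assumed to be mounted at the horizontal center of the vehicle):

```python
def lane_departure_m(left_x_pix, right_x_pix, image_width=1280,
                     xm_per_pix=3.7 / 700):
    """Signed offset of the image center from the lane center, in meters.
    left_x_pix / right_x_pix are the fitted lane-line x positions at a
    given ordinate; positive means the vehicle sits right of center."""
    lane_center = (left_x_pix + right_x_pix) / 2.0
    return (image_width / 2.0 - lane_center) * xm_per_pix
```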
Step 7.3: annotate the information.
On a blank image of the same height and width as the perspective-transformed image, draw the left and right lane lines and fill the region between them in green. Apply the inverse perspective transformation to this image and fuse it with the original image; finally, print the lane-line curvature and the lane-departure distance as text in the upper-left part of the image. The annotated image is shown in Figure 6.
Step 8: repeat steps 2 to 6 until image acquisition fails or a signal to terminate recognition is received.
The fast lane line detection method based on corner feature matching of continuous video frames proposed by the present invention is mainly intended for lane line recognition in driver-assistance and autonomous-driving applications. The method detects the relative movement of the background in front of the vehicle by matching corner points across continuous video frames, and uses the temporal correlation between frames to dynamically predict the lane-line range and shrink the region of interest (ROI). The image is then converted to grayscale with the composite operator and binarized. Finally, the image is perspective-transformed, the lane-line pixels are located, and the lane lines are fitted with a second-order polynomial equation.
To address the lack of curvature information and the high computational complexity of existing lane line recognition methods, the present invention proposes a fast lane line detection method based on corner feature matching of continuous video frames. Corner points are the intersections between contours; for the same scene they are usually stable features even when the viewing angle changes. While preserving the important features of the image, corner points greatly reduce the amount of data to process, so the retained information is highly concentrated; this speeds up computation, facilitates reliable image matching, and makes real-time processing possible.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make further improvements and modifications without departing from the technical principle of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010625087.7A CN111783666A (en) | 2020-07-01 | 2020-07-01 | A Fast Lane Line Detection Method Based on Corner Feature Matching of Continuous Video Frames |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010625087.7A CN111783666A (en) | 2020-07-01 | 2020-07-01 | A Fast Lane Line Detection Method Based on Corner Feature Matching of Continuous Video Frames |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111783666A true CN111783666A (en) | 2020-10-16 |
Family
ID=72757800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010625087.7A Pending CN111783666A (en) | 2020-07-01 | 2020-07-01 | A Fast Lane Line Detection Method Based on Corner Feature Matching of Continuous Video Frames |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111783666A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113095283A (en) * | 2021-04-30 | 2021-07-09 | 南京工程学院 | Lane line extraction method based on dynamic ROI and improved firefly algorithm |
CN113378719A (en) * | 2021-06-11 | 2021-09-10 | 许杰 | Lane line recognition method and device, computer equipment and storage medium |
CN113505747A (en) * | 2021-07-27 | 2021-10-15 | 浙江大华技术股份有限公司 | Lane line recognition method and apparatus, storage medium, and electronic device |
CN113591565A (en) * | 2021-06-25 | 2021-11-02 | 江苏理工学院 | Machine vision-based lane line detection method, detection system and detection device |
CN115063761A (en) * | 2022-05-19 | 2022-09-16 | 广州文远知行科技有限公司 | Lane line detection method, device, equipment and storage medium |
CN115116018A (en) * | 2022-06-30 | 2022-09-27 | 北京旋极信息技术股份有限公司 | Method and device for fitting lane line |
CN115546753A (en) * | 2022-10-27 | 2022-12-30 | 佛山仙湖实验室 | Lane detection method, system and equipment based on DBSCAN algorithm |
CN117710795A (en) * | 2024-02-06 | 2024-03-15 | 成都同步新创科技股份有限公司 | Machine room line safety detection method and system based on deep learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785291A (en) * | 2018-12-20 | 2019-05-21 | 南京莱斯电子设备有限公司 | A kind of lane line self-adapting detecting method |
CN110647850A (en) * | 2019-09-27 | 2020-01-03 | 福建农林大学 | Automatic lane deviation measuring method based on inverse perspective principle |
2020-07-01: CN202010625087.7A patent application filed; status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785291A (en) * | 2018-12-20 | 2019-05-21 | 南京莱斯电子设备有限公司 | A kind of lane line self-adapting detecting method |
CN110647850A (en) * | 2019-09-27 | 2020-01-03 | 福建农林大学 | Automatic lane deviation measuring method based on inverse perspective principle |
Non-Patent Citations (1)
Title |
---|
Zhuang Boyang: "Research on a fast lane line recognition algorithm based on the optical flow method", Computer Measurement & Control, vol. 27, no. 9, pages 146-150 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113095283A (en) * | 2021-04-30 | 2021-07-09 | 南京工程学院 | Lane line extraction method based on dynamic ROI and improved firefly algorithm |
CN113095283B (en) * | 2021-04-30 | 2023-08-25 | 南京工程学院 | Lane line extraction method based on dynamic ROI and improved firefly algorithm |
CN113378719A (en) * | 2021-06-11 | 2021-09-10 | 许杰 | Lane line recognition method and device, computer equipment and storage medium |
CN113378719B (en) * | 2021-06-11 | 2024-04-05 | 北京清维如风科技有限公司 | Lane line identification method, lane line identification device, computer equipment and storage medium |
CN113591565A (en) * | 2021-06-25 | 2021-11-02 | 江苏理工学院 | Machine vision-based lane line detection method, detection system and detection device |
CN113591565B (en) * | 2021-06-25 | 2023-07-18 | 江苏理工学院 | Lane line detection method, detection system and detection device based on machine vision |
CN113505747A (en) * | 2021-07-27 | 2021-10-15 | 浙江大华技术股份有限公司 | Lane line recognition method and apparatus, storage medium, and electronic device |
CN115063761A (en) * | 2022-05-19 | 2022-09-16 | 广州文远知行科技有限公司 | Lane line detection method, device, equipment and storage medium |
CN115116018A (en) * | 2022-06-30 | 2022-09-27 | 北京旋极信息技术股份有限公司 | Method and device for fitting lane line |
CN115546753A (en) * | 2022-10-27 | 2022-12-30 | 佛山仙湖实验室 | Lane detection method, system and equipment based on DBSCAN algorithm |
CN117710795A (en) * | 2024-02-06 | 2024-03-15 | 成都同步新创科技股份有限公司 | Machine room line safety detection method and system based on deep learning |
CN117710795B (en) * | 2024-02-06 | 2024-06-07 | 成都同步新创科技股份有限公司 | Machine room line safety detection method and system based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111783666A (en) | A Fast Lane Line Detection Method Based on Corner Feature Matching of Continuous Video Frames | |
CN106778593B (en) | Lane level positioning method based on multi-ground sign fusion | |
CN109190523B (en) | Vehicle detection tracking early warning method based on vision | |
CN104021378B (en) | Traffic lights real-time identification method based on space time correlation Yu priori | |
CN101608924B (en) | Method for detecting lane lines based on grayscale estimation and cascade Hough transform | |
CN108038416B (en) | Lane line detection method and system | |
CN110569704A (en) | A Multi-strategy Adaptive Lane Line Detection Method Based on Stereo Vision | |
CN109299674B (en) | Tunnel illegal lane change detection method based on car lamp | |
CN107025432A (en) | A kind of efficient lane detection tracking and system | |
CN108805065A (en) | One kind being based on the improved method for detecting lane lines of geometric properties | |
CN111915883A (en) | Road traffic condition detection method based on vehicle-mounted camera shooting | |
CN107491753A (en) | A kind of parking offense detection method based on background modeling | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN105488454A (en) | Monocular vision based front vehicle detection and ranging method | |
CN104036246A (en) | Lane line positioning method based on multi-feature fusion and polymorphism mean value | |
CN104504364B (en) | Stop line Real time identification and distance-finding method based on space time correlation | |
CN110210451A (en) | A kind of zebra line detecting method | |
CN110379168A (en) | A Method of Traffic Vehicle Information Acquisition Based on Mask R-CNN | |
CN114715168A (en) | Vehicle yaw early warning method and system under road marking missing environment | |
Wu et al. | Adjacent lane detection and lateral vehicle distance measurement using vision-based neuro-fuzzy approaches | |
CN109190483A (en) | A kind of method for detecting lane lines of view-based access control model | |
CN109948552A (en) | A method of lane line detection in complex traffic environment | |
CN106503640A (en) | A kind of detection method for taking bus zone | |
CN105354857B (en) | A kind of track of vehicle matching process for thering is viaduct to block | |
CN108230667A (en) | A kind of vehicle peccancy behavioral value method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20201016 |