CN109583365B - Lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting


Info

Publication number: CN109583365B
Application number: CN201811427546.XA
Authority: CN (China)
Other versions: CN109583365A (application publication, Chinese)
Inventors: 穆柯楠, 赵祥模, 王会峰, 惠飞, 卢勇, 杨澜, 景首才
Assignee: Changan University
Application filed by Changan University; granted as CN109583365B.
Legal status: Active

Classifications

    • G06V 20/588 - Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06T 5/40 - Image enhancement or restoration using histogram techniques
    • G06T 5/70 - Denoising; smoothing
    • G06T 7/13 - Image analysis; segmentation; edge detection
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/20032 - Median filtering
    • G06T 2207/20192 - Edge enhancement; edge preservation
    • G06T 2207/30256 - Lane; road marking


Abstract

A lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting. First, median filtering and histogram equalization are applied to the image to obtain an enhanced lane line image. Second, edge detection is performed with the Canny operator to obtain a lane line edge image. Then, Hough-transform line detection is applied to the edge image, improving edge continuity while suppressing background interference edges. Next, on the basis of the camera's geometric imaging model and under the assumptions that the optical axis of the camera is parallel to the road plane and that the left and right lane lines are parallel, a control-point estimation model constrained by the lane line-camera imaging model is derived. Finally, the parameters of the non-uniform B-spline curve model are solved from the positions of the lane line edge pixels to fit the lane lines. The method effectively improves control-point positioning precision and lane line detection accuracy, and improves the robustness of curve-fitting-based lane line detection to background interference.

Description

A Lane Line Detection Method Based on Imaging-Model-Constrained Non-Uniform B-Spline Curve Fitting

Technical Field

The invention belongs to the field of traffic video detection, and in particular relates to a lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting.

Background

Driverless technology is currently a research hotspot in intelligent transportation, and heavy research investment by institutes and enterprises at home and abroad has greatly accelerated its development. Lane lines provide the information a vehicle needs for lane keeping, lane changing and other driving behaviors, and are therefore an important environmental datum in an autonomous vehicle's perception of its driving environment. The performance of lane line detection thus has a non-negligible impact on the perception system and on the safety of the entire autonomous driving system.

The main purpose of lane line detection is to extract the positions of lane lines from video images. Commonly used methods fall roughly into three categories: region-based, feature-based and model-based, of which model-based methods are the most common. Such methods rest on the idea that the course of the lane lines on a structured road can be approximated by a specific mathematical model: straight lines, parabolas, hyperbolas, splines and other models are fitted to straight, parabolic, S-shaped and other lane line geometries, greatly reducing detection cost while preserving detection accuracy. Spline curves, expressed as piecewise polynomials, can accurately fit curves of arbitrary shape and are therefore widely used in lane line detection. Related research shows that determining the control points is the key to fitting lane lines with B-spline curves; however, interference such as vehicle occlusion, tree shade, building shadows and damaged road surfaces makes control-point extraction considerably harder, degrading the accuracy of the lane line fit or even causing it to fail. How to make control-point extraction robust to background interference (vehicle occlusion, tree shade, building shadows, other road-surface markings, road damage, etc.) and to improve control-point positioning precision while keeping the algorithm's time cost in check is therefore the key issue for the efficiency of spline-model-based lane line detection.

Summary of the Invention

The purpose of the present invention is to overcome the problem that traditional spline-model-based lane line detection is susceptible to background interference, which makes control-point positioning inaccurate or causes it to fail, by providing a lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting. The method improves control-point positioning precision while keeping the algorithm's time cost in check, yields good detection results, and effectively improves the efficiency of lane line detection.

To achieve the above object, the present invention adopts the following technical solution:

A lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting comprises the following steps:

Step 1: image preprocessing;

Obtain the original lane line image I from the lane line standard image library of the Carnegie Mellon image database; apply median filtering to I to remove salt-and-pepper noise, then apply histogram equalization to enhance the brightness and contrast of the image so that edge features stand out, yielding the enhanced lane line image I1;

Step 2: edge detection;

Apply the Canny operator to the enhanced lane line image I1 for edge detection, obtaining the initial lane line edge image I2;

Step 3: Hough line detection;

Perform Hough line detection on the initial lane line edge image I2 obtained in Step 2, retain the edges that contain line-detection results, and remove the remaining interference edges, obtaining the edge image I3;

Step 4: derivation of the imaging-model constraints;

The derived imaging-model constraints are as follows:

the line-segment length Δv in column u of the image coordinate system, corresponding to a lane line segment of length ΔY in the world coordinate system, is:

Figure BDA0001881976450000021

the spacing width Δu in row v of the image coordinate system, corresponding to a left-right lane line spacing of width ΔX in the world coordinate system, is:

Figure BDA0001881976450000031

Step 5: extraction of the non-uniform B-spline curve control points;

Scan lines are placed in the image according to the imaging-model constraints, at spacings corresponding to a length ΔY in the world coordinate system; the intersections of each scan line with the left and right lane edges form one pair of control points;

Step 6: lane line fitting;

With the control-point information for the non-uniform B-spline curve obtained in Step 5, fit the left and right lane lines by the NUBS interpolation method to complete the lane line detection.

A further improvement of the present invention is that the median filtering in Step 1 uses the median filter function f^(x,y):

f^(x,y) = median{ f(a,b) : (a,b) ∈ S_xy }  (1)

where f^(x,y) is the median-filter output, S_xy denotes the coordinate set of a rectangular sub-image window of size M×N centered at (x,y), and f(a,b) is the gray value of the pixel at coordinates (a,b).
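As an illustration only (not the patent's implementation), the median filtering of Eq. (1) can be sketched in Python with NumPy; the function name, the 3×3 window size and the tiny test image are hypothetical:

```python
import numpy as np

def median_filter(img, M=3, N=3):
    """Eq. (1): f^(x,y) = median of f(a,b) over the M x N window S_xy centered at (x,y)."""
    pm, pn = M // 2, N // 2
    padded = np.pad(img, ((pm, pm), (pn, pn)), mode="edge")  # replicate borders
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            window = padded[x:x + M, y:y + N]  # the coordinate set S_xy
            out[x, y] = np.median(window)
    return out

noisy = np.full((3, 3), 10, dtype=np.uint8)
noisy[1, 1] = 255                      # a single "salt" pixel
print(median_filter(noisy)[1, 1])      # 10: the outlier is removed
```

Running it on the small test image suppresses the lone salt pixel, which is exactly the salt-and-pepper removal Step 1 relies on.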

A further improvement of the present invention is that the histogram equalization in Step 1 uses the equalization function s_k:

s_k = Σ_{i=0}^{k} n_i / n  (2)

where s_k is the histogram-equalization output; r_k denotes a discrete gray level, 0 ≤ r_k ≤ 255, k = 0, 1, 2, …, n−1; n_i is the number of pixels with gray level r_i in the image; n is the total number of pixels in the image; and n_i/n is the frequency of gray level r_i in the sense of probability theory.
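Likewise, a minimal sketch of the cumulative-frequency mapping s_k of Eq. (2); rescaling the result to the 8-bit range is an added assumption, since the patent's exact normalization survives only as an equation image:

```python
import numpy as np

def equalize_hist(img):
    """Eq. (2): s_k = sum over i <= k of n_i / n (cumulative frequency),
    rescaled here to 0..255 so the output is again an 8-bit image."""
    n_i = np.bincount(img.ravel(), minlength=256)  # pixel count per gray level r_i
    s = n_i.cumsum() / img.size                    # cumulative frequencies s_k
    lut = np.round(255 * s).astype(np.uint8)       # lookup table over gray levels
    return lut[img]

flat = np.full((4, 4), 100, dtype=np.uint8)        # hypothetical low-contrast image
print(equalize_hist(flat)[0, 0])                   # 255: the only occupied level maps to the top
```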

A further improvement of the present invention is that Step 2 consists of the following sub-steps:

(1) Smooth the image I1 with a Gaussian filter;

The Gaussian smoothing function G(x,y) is:

G(x,y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²)  (3)

Convolve G(x,y) with the enhanced lane line image I1 to obtain the smoothed image f1:

f1(x,y) = I1(x,y) * G(x,y)  (4)

(2) Compute the gradient magnitude and direction using finite differences of the first-order partial derivatives, obtaining the gradient image f2;

The first-order differential convolution template is:

Figure BDA0001881976450000041

(3) Apply non-maximum suppression to the gradient magnitudes, obtaining the non-maximum-suppressed image f3;

At each point of the gradient image f2, compare the center pixel S of its 8-neighborhood with the two pixels along the gradient line; if the gradient value of S is not larger than the gradient values of the two adjacent pixels along the gradient line, set S = 0;

(4) Detect and link edges with a double-threshold algorithm;

Set two thresholds T1 and T2 for the non-maximum-suppressed image f3, with T1 = 0.4·T2. Setting to 0 the gray values of pixels whose gradient value is below T1 yields the image f4; setting to 0 those below T2 yields the image f5. Taking f5 as the base and f4 as the supplement, link the edges of the image to obtain the initial lane line edge image I2.
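Sub-step (4) can be sketched as follows. This is a simplified single-pass version of the edge linking ("f5 as base, f4 as supplement"); the threshold values and the toy gradient array are hypothetical:

```python
import numpy as np

def double_threshold(grad, t2, ratio=0.4):
    """Sub-step (4): pixels >= T2 form f5 (strong edges); pixels >= T1 = 0.4*T2 form f4.
    Weak pixels are kept only if 8-connected to a strong pixel (one linking pass shown;
    the full algorithm iterates until no more pixels are promoted)."""
    t1 = ratio * t2
    strong = grad >= t2                # definite edge pixels (image f5)
    weak = (grad >= t1) & ~strong      # pixels present in f4 but not in f5
    out = strong.copy()
    for x, y in zip(*np.nonzero(weak)):
        x0, x1 = max(x - 1, 0), min(x + 2, grad.shape[0])
        y0, y1 = max(y - 1, 0), min(y + 2, grad.shape[1])
        if strong[x0:x1, y0:y1].any():  # touches a strong edge, so keep it
            out[x, y] = True
    return out

grad = np.array([[0., 30., 100.],
                 [0., 30.,   0.],
                 [0.,  0.,   0.]])
print(double_threshold(grad, t2=50).astype(int))  # the weak 30s survive via the strong 100
```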

A further improvement of the present invention is that Step 3 consists of the following sub-steps:

(1) Hough line detection;

For any point A(x0, y0) in the Cartesian coordinate system, a straight line through A satisfies

y = kx + l  (5)

where k is the slope and l is the intercept. The family of lines through A(x0, y0) in the X-Y plane can all be expressed by Eq. (5), except lines perpendicular to the X axis, whose slope is infinite and cannot be represented; the Cartesian coordinate system is therefore converted to polar coordinates;

In the polar coordinate system a straight line is represented by the equation ρ = x·cosθ + y·sinθ  (6)

where ρ is the normal distance from the origin to the line and θ is the angle between the normal and the positive X axis. A point in image space then corresponds to a sinusoid in the ρ-θ polar space, so lines in image space are detected by finding intersection points in ρ-θ space. Discretize ρ and θ; for each value of θ, compute the corresponding value of ρ from Eq. (6) and add 1 to the corresponding accumulator cell. Finally, count the value of each accumulator cell: if it exceeds a preset threshold H, that parameter pair is taken to describe a line in image space, and the line is marked in the image;

(2) Remove interference edges;

For each edge pixel on a line marked in sub-step (1), search for the entire edge containing that pixel and retain it; edges that share no pixel with any marked line are discarded, yielding the edge image I3.
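A compact sketch of the accumulator voting described in sub-step (1), using Eq. (6); the 1-degree θ resolution, 1-pixel ρ resolution and the threshold H are hypothetical choices:

```python
import numpy as np

def hough_lines(edge_img, threshold):
    """Vote in a discretized (rho, theta) accumulator via Eq. (6),
    rho = x*cos(theta) + y*sin(theta); return cells with >= threshold votes
    as (rho, theta-in-degrees) pairs."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(180))            # theta in [0, 180), 1-degree steps
    diag = int(np.ceil(np.hypot(h, w)))            # bound on |rho|
    acc = np.zeros((2 * diag, len(thetas)), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):                       # each edge pixel votes once per theta
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(len(thetas))] += 1
    return [(rho - diag, int(t)) for rho, t in np.argwhere(acc >= threshold)]

img = np.zeros((5, 5), dtype=bool)
img[2, :] = True                                   # a horizontal line at row y = 2
lines = hough_lines(img, threshold=5)              # H = 5 votes
print((2, 90) in lines)                            # the cell rho = 2, theta = 90 deg peaks
```

All five pixels of the row vote into the same cell at θ = 90°, which is exactly the sinusoid-intersection detection the text describes.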

A further improvement of the present invention is that the specific process of Step 4 is as follows:

Assume that the optical axis of the camera is parallel to the road plane on which the vehicle travels, and that the left and right lane lines are parallel;

Given the world coordinate system (X, Y, Z) and the image coordinate system (U, V), let the camera's maximum horizontal viewing angle be α and its maximum vertical viewing angle be β, and let the camera be mounted at C(d, 0, h) in world coordinates, where h is the mounting height (the camera's value on the world Z axis) and d is the horizontal mounting offset (the camera's value on the world X axis). The optical axis of the camera is parallel to the road plane and makes an angle γ with the lane line. According to the geometric imaging model of the camera, the mapping between a point P(x, y, 0) on the road surface in the world coordinate system (X, Y, Z) and its corresponding point Q(u, v) in the image coordinate system (U, V) is:

Figure BDA0001881976450000051

Figure BDA0001881976450000052

Figure BDA0001881976450000053

Figure BDA0001881976450000054

where HI and WI are respectively the horizontal and vertical resolutions of the image formed by the camera;

According to the imaging principle of the camera, the imaged length of a lane line segment shortens as the distance between the segment and the camera increases in the world coordinate system; likewise, for the same spacing between the left and right lane lines on the road surface, the imaged spacing is wider in the near field of view and narrower in the far field of view. Combining this with the geometric camera imaging model, the line-segment length Δv in column u of the image coordinate system, corresponding to a lane line segment of length ΔY in the world coordinate system, is derived as:

Figure BDA0001881976450000061

and the spacing width Δu in row v of the image coordinate system, corresponding to a left-right lane line spacing of width ΔX in the world coordinate system, is:

Figure BDA0001881976450000062

A further improvement of the present invention is that the specific process of Step 5 is as follows:

Starting from the bottom of the lane line edge image, set a horizontal scan line Line_i at row v_i, with m ≤ i ≤ n, and obtain the control-point pair (L_i, R_i) at the intersections of Line_i with the left and right lane lines, where L_i has coordinates (u_i, v_i) and R_i has coordinates (u′_i, v_i). From the imaging-model constraints, v_i is defined by:

Figure BDA0001881976450000063

From Eqs. (8), (10) and (11) it follows that:

Figure BDA0001881976450000064

where v_1 and Δv_1 are preset values. Substituting Eq. (14) into Eq. (13) yields the values of v_i in turn, so the ordinate of the control points determined by the i-th scan line Line_i equals v_i. Searching for edge points from the midpoint of each scan line toward the left and right, the first intersections of the scan line with the left and right lane lines are the control points, which determines the coordinates (u_i, v_i) and (u′_i, v_i) of a control-point pair.
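The midpoint-outward search on one scan line can be sketched as follows. The scan row v is taken as given, since the v_i recursion itself is defined by Eqs. (13)-(14), which survive only as equation images; the function name and the toy edge image are hypothetical:

```python
import numpy as np

def control_points_on_row(edge_img, v):
    """One control-point pair (L_i, R_i) on scan line v: search outward from the
    row midpoint for the first edge pixel on each side (None if an edge is missing)."""
    row = edge_img[v]
    mid = len(row) // 2
    left = next((u for u in range(mid, -1, -1) if row[u]), None)
    right = next((u for u in range(mid, len(row)) if row[u]), None)
    return (left, v), (right, v)

edges = np.zeros((6, 10), dtype=bool)
edges[4, 2] = True                     # left lane edge pixel on row v = 4
edges[4, 8] = True                     # right lane edge pixel on row v = 4
print(control_points_on_row(edges, 4)) # ((2, 4), (8, 4))
```

Returning None for a missing side corresponds to the lost-control-point case handled below by Eqs. (15)-(17).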

A further improvement of the present invention is that the process of Step 5 further comprises:

Assuming that the left and right lane lines are parallel, solve for the abscissas u_i and u′_i of the control points L_i and R_i determined by the i-th scan line Line_i. From the imaging-model constraints and Eq. (14), the relation between Δu_{i+1} and Δu_i is derived as:

Figure BDA0001881976450000065

u′_i = u_i + Δu_i  (16)

where the angle γ between the optical axis of the camera and the lane line is computed as:

Figure BDA0001881976450000071

When a control point is lost because the lane line edge is missing, its abscissa u_2 is computed from Eqs. (15)-(17), given that the adjacent control-point pairs L_1 and L_2 are known. When a control point is mislocated because of a false edge, it is detected by verifying whether the spacing-width ratios of all adjacent control-point pairs satisfy Eq. (15); the erroneous control-point coordinates are then repositioned according to Eqs. (15)-(17).

A further improvement of the present invention is that the specific process of Step 6 is as follows:

Assume the B-spline curve S is defined by a set of n+1 control points {P_0, P_1, …, P_n}; then every point on S satisfies:

Figure BDA0001881976450000072

where B_{i,m}(o) is the basic B-spline function, 2 ≤ m ≤ n+1, t_min ≤ u ≤ t_max, and t_j, j = 0, …, i+m, are the knots. When the knots t_j are equally spaced, the B-spline curve is called uniform; otherwise it is non-uniform. According to the NUBS interpolation method, if m pairs of control points are known, with m ≥ 3, the lane line is fitted with a polynomial function of degree m−1: if 4 pairs of control points can be determined, NUBS interpolation with a third-order polynomial function fits the lane line; if only 3 pairs are determined, a second-order polynomial function is used.
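The degree-selection rule above (m control-point pairs give a degree m−1 polynomial) can be illustrated with a plain polynomial fit standing in for NUBS interpolation; this substitution is a simplification for the sketch, since the actual basis functions are B-splines, and the control-point values are hypothetical:

```python
import numpy as np

def fit_lane(control_points):
    """Step 6's rule: m control-point pairs -> a polynomial of degree m - 1
    (4 pairs -> cubic, 3 pairs -> quadratic), fitting u as a function of v."""
    pts = np.asarray(control_points, dtype=float)
    u, v = pts[:, 0], pts[:, 1]
    return np.polyfit(v, u, deg=len(pts) - 1)

left = [(100, 200), (120, 150), (150, 100)]   # three hypothetical (u_i, v_i) pairs
coeffs = fit_lane(left)                       # quadratic, since m = 3
print(round(float(np.polyval(coeffs, 150)), 6))  # 120.0: interpolates the middle point
```

With exactly m points and degree m−1 the fit interpolates every control point, which is the behavior the NUBS interpolation relies on.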

Compared with the prior art, the present invention has the following beneficial effects. The invention first applies median filtering and histogram equalization to the image to obtain an enhanced lane line image; next it applies the Canny operator for edge detection to obtain a lane line edge image; it then performs Hough-transform line detection on the edge image, improving edge continuity while reducing background interference edges; finally it solves the parameters of the non-uniform B-spline curve model from the positions of the lane line edge pixels to fit the lane lines. The invention effectively improves control-point positioning precision and lane line detection accuracy, and improves the robustness of curve-fitting-based lane line detection to background interference.

Further, under the assumptions that the optical axis of the camera is parallel to the road plane and that the left and right lane lines are parallel, a control-point estimation model constrained by the lane line-camera imaging model is derived on the basis of the geometric camera imaging model. This reduces the interference that vehicle occlusion, tree shade, building shadows, road damage and various non-lane road-surface markings cause in control-point determination, improves the robustness of the non-uniform B-spline control-point extraction method, and improves lane line detection accuracy.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the camera's position in the world coordinate system;

Figure 2 is a schematic diagram of the control-point determination process when the lane lines are continuous;

Figure 3 shows lane line detection result 1;

Figure 4 is a schematic diagram of the control-point determination process when the lane lines are discontinuous;

Figure 5 shows lane line detection result 2;

Figure 6 is a flowchart of the imaging-model-constrained non-uniform B-spline curve-fitting lane line detection algorithm.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings.

The lane line detection method based on imaging-model-constrained non-uniform B-spline curve fitting provided by the present invention comprises the following steps:

Step 1: image preprocessing;

Obtain the original lane line image I from the lane line standard image library of the Carnegie Mellon image database; apply median filtering to I to remove salt-and-pepper noise, then apply histogram equalization to enhance the brightness and contrast of the image so that edge features stand out, yielding the enhanced lane line image I1.

The median filter function f^(x,y) is:

f^(x,y) = median{ f(a,b) : (a,b) ∈ S_xy }  (1)

where f^(x,y) is the median-filter output, S_xy denotes the coordinate set of a rectangular sub-image window of size M×N centered at (x,y), and f(a,b) is the gray value of the pixel at coordinates (a,b).

The histogram equalization function s_k is:

s_k = Σ_{i=0}^{k} n_i / n  (2)

where s_k is the histogram-equalization output; r_k denotes a discrete gray level (0 ≤ r_k ≤ 255, k = 0, 1, 2, …, n−1); n_i is the number of pixels with gray level r_i in the image; n is the total number of pixels in the image; and n_i/n is the frequency of gray level r_i in the sense of probability theory.

Step 2: edge detection;

Apply the Canny operator to the enhanced lane line image I1 for edge detection, obtaining the initial lane line edge image I2.

The specific steps are as follows:

(1) Smooth the image I1 with a Gaussian filter;

The Gaussian smoothing function G(x,y) is:

G(x,y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²)  (3)

Convolving G(x,y) with the image I1 gives the smoothed image f1:

f1(x,y) = I1(x,y) * G(x,y)  (4)

(2) Compute the gradient magnitude and direction using finite differences of the first-order partial derivatives, obtaining the gradient image f2;

The first-order differential convolution template is:

Figure BDA0001881976450000094

(3) Apply non-maximum suppression to the gradient magnitudes, obtaining the non-maximum-suppressed image f3;

At each point of the gradient image f2, compare the center pixel S of its 8-neighborhood with the two pixels along the gradient line. If the gradient value of S is not larger than the gradient values of the two adjacent pixels along the gradient line, set S = 0.

(4) Detect and link edges with a double-threshold algorithm.

Set two thresholds T1 and T2 for the non-maximum-suppressed image f3, with T1 = 0.4·T2. Setting to 0 the gray values of pixels whose gradient value is below T1 yields the image f4; setting to 0 those below T2 yields the image f5. Taking f5 as the base and f4 as the supplement, link the edges of the image to obtain the edge image I2.

Step 3: Hough line detection;

Perform Hough line detection on the lane line edge image I2 obtained in Step 2, retaining only the edges that contain line-detection results and removing the remaining interference edges to obtain the edge image I3. The specific steps are as follows:

(1) Hough line detection;

For any point A(x0, y0) in the Cartesian coordinate system, a line through A satisfies

y = kx + l   (5)

where k is the slope and l is the intercept. Every line of the family through A(x0, y0) in the X-Y plane can be expressed by Eq. (5), except lines perpendicular to the X axis, whose slope is infinite and cannot be so expressed. Converting from the Cartesian coordinate system to a polar coordinate system handles this special case.

In polar coordinates a line is expressed by the equation

ρ = x·cosθ + y·sinθ   (6)

where ρ is the normal distance from the origin to the line and θ is the angle between the normal and the positive X axis. A point in image space then corresponds to a sinusoid in the ρ-θ polar space, so lines in image space are detected by finding intersection points in ρ-θ space. ρ and θ are discretized; for each discrete value of the parameter θ, the corresponding value of ρ is computed from Eq. (6), and the matching parameter accumulation unit is incremented by 1. Finally, the count in each accumulation unit is examined: any count exceeding the preset threshold H means that this group of parameters describes a line in image space, and that line is marked in the image;
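The ρ-θ voting procedure above can be sketched as follows. This is a minimal NumPy implementation under assumed discretizations (1° in θ, 1 px in ρ) and an illustrative accumulator threshold; the patent uses default parameters:

```python
import numpy as np

def hough_lines(edge, n_theta=180, accum_threshold=2):
    """Vote each edge pixel into a discretized (rho, theta) accumulator
    per Eq. (6): rho = x*cos(theta) + y*sin(theta). Returns the
    (rho, theta-in-degrees) pairs whose vote counts exceed the threshold H
    (here `accum_threshold`)."""
    h, w = edge.shape
    thetas = np.deg2rad(np.arange(n_theta))          # 0 .. 179 degrees
    diag = int(np.ceil(np.hypot(h, w)))              # bound on |rho|
    accum = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accum[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta
    peaks = np.argwhere(accum > accum_threshold)
    return [(int(r - diag), float(np.rad2deg(thetas[t]))) for r, t in peaks]
```

For a horizontal edge at image row y = 2, the accumulator peaks at (ρ, θ) = (2, 90°), since every pixel on that row votes for the same cell there.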

(2) Remove interfering edges;

For each edge pixel on a line marked in step (1), the entire connected edge containing that pixel is found and retained; edges sharing no pixel with any marked line are removed, yielding the edge image I3.

Step 4: Derivation of the imaging-model constraints;

Based on the geometric camera imaging model, the imaging-model constraints are derived under two assumptions: the camera's optical axis is parallel to the plane of the road on which the vehicle travels, and the left and right lane lines are parallel.

Given the world coordinate system (X, Y, Z) and the image coordinate system (U, V), let the camera's maximum horizontal viewing angle be α and its maximum vertical viewing angle be β, and let the camera be mounted at C(d, 0, h) in world coordinates, where h is the mounting height (the camera's value on the Z axis of the world coordinate system) and d is the horizontal mounting offset (the camera's value on the X axis). The camera's optical axis is parallel to the road plane and forms an angle γ with the lane lines. From the geometric camera imaging model, the mapping between a road-surface point P(x, y, 0) in the world coordinate system (X, Y, Z) and the corresponding point Q(u, v) in the image coordinate system (U, V) is:

[Equations (7)-(10): the mapping model, rendered as images in the original]

where HI and WI are, respectively, the horizontal and vertical resolutions of the image formed by the camera.

From the camera imaging principle, the imaged length of a lane-line segment shrinks as the distance between that segment and the camera in the world coordinate system increases: lane-line segments of equal length on the road surface appear longer when imaged in the near field of view and shorter in the far field of view. Likewise, for the same left-right lane spacing on the road surface, the imaged lane spacing is wider in the near field and narrower in the far field. From this imaging fact, combined with the camera imaging model above, the segment length Δv in column u of the image coordinate system corresponding to a lane-line segment of length ΔY in the world coordinate system is derived as:

[Equation (11), rendered as an image in the original]

and the spacing width Δu in row v of the image coordinate system corresponding to a left-right lane spacing of width ΔX in the world coordinate system is:

[Equation (12), rendered as an image in the original]
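Since the patent's equations for Δv and Δu are rendered as images here, the near/far scaling they formalize can be illustrated with a generic pinhole ground-plane model (camera at height h, optical axis parallel to the flat road, looking along +Y). The focal length and distances below are assumed values, and this is not the patent's exact mapping:

```python
def ground_point_to_image(f_px, h, X, Y):
    """Generic pinhole camera at height h above a flat road, optical axis
    parallel to the road.  Returns (u, v) offsets from the principal point
    for a road point (X, Y, 0); v is measured downward from the horizon."""
    u = f_px * X / Y
    v = f_px * h / Y
    return u, v

f_px, h = 400.0, 1.2            # assumed focal length (px) and camera height (m)
# Equal-length 1 m lane-line segments image shorter as they recede:
dv_near = ground_point_to_image(f_px, h, 0.0, 5.0)[1] - ground_point_to_image(f_px, h, 0.0, 6.0)[1]
dv_far = ground_point_to_image(f_px, h, 0.0, 20.0)[1] - ground_point_to_image(f_px, h, 0.0, 21.0)[1]
# A fixed 3.5 m lane spacing images narrower as it recedes:
du_near = ground_point_to_image(f_px, h, 3.5, 5.0)[0] - ground_point_to_image(f_px, h, 0.0, 5.0)[0]
du_far = ground_point_to_image(f_px, h, 3.5, 20.0)[0] - ground_point_to_image(f_px, h, 0.0, 20.0)[0]
assert dv_near > dv_far and du_near > du_far   # near field: longer segments, wider spacing
```

This reproduces the qualitative constraint the derivation relies on: both the per-column segment length Δv and the per-row spacing Δu decrease monotonically with distance from the camera.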

Step 5: Extraction of the non-uniform B-spline curve control points;

To solve for the NUBS curve-model parameters, suitable control points must first be determined. The method is as follows: starting from the bottom of the lane-line edge image, scan lines are placed in the image according to the imaging-model constraint, spaced so that each interval corresponds to a length ΔY in the world coordinate system; the intersections of each scan line with the left and right lane edges form a pair of control points. Mathematically: starting from the bottom of the lane-line edge image, a horizontal scan line Line i (m ≤ i ≤ n) is placed at row vi, and the control-point pair (Li, Ri) is obtained at the intersections of Line i with the left and right lane lines, where Li has coordinates (ui, vi) and Ri has coordinates (u′i, vi). Since the lane lines extend roughly vertically in the image, the ordinates of successive control-point pairs decrease monotonically; this trend is unaffected by the angle γ between the camera's optical axis and the lane lines, so γ is ignored to simplify the computation. Under the imaging-model constraint, vi is defined by the formula:

[Equation (13), rendered as an image in the original]

From Eqs. (8), (10), and (11) it is derived that:

[Equation (14), rendered as an image in the original]

where v1 and Δv1 are preset values. Substituting Eq. (14) into Eq. (13) yields the values of vi in sequence, from which the ordinate of the control points determined by the i-th scan line Line i is found to equal vi. Edge points are then searched from the midpoint of the scan line toward the left and right sides; the first intersections of the scan line with the left and right lane lines are taken as the control points, whose abscissas then follow, determining their coordinates (ui, vi) and (u′i, vi).
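The midpoint-outward search on each scan line can be sketched as follows, assuming a binary edge image; the function name is illustrative, and a `None` result on one side corresponds to the occlusion/discontinuity cases discussed next:

```python
import numpy as np

def control_points_on_scanline(edge, v):
    """Search from the midpoint of the horizontal scan line at row v
    outward to the left and right; the first edge pixels reached are
    taken as the control-point pair (L, R) = ((uL, v), (uR, v)).
    Returns None for a side with no edge pixel."""
    row = edge[v]
    mid = len(row) // 2
    uL = uR = None
    for u in range(mid, -1, -1):          # leftward search
        if row[u]:
            uL = u
            break
    for u in range(mid + 1, len(row)):    # rightward search
        if row[u]:
            uR = u
            break
    left = (uL, v) if uL is not None else None
    right = (uR, v) if uR is not None else None
    return left, right
```

Running this at each row vi computed from Eqs. (13)-(14) yields the control-point pairs (Li, Ri) described above.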

The control-point determination above assumes that neither lane line is occluded, i.e. that the control points determined by each scan line are the true intersections of the scan line with the left and right lane lines. In practice, however, detected lane lines often contain interfering or discontinuous edges caused by vehicle occlusion, tree shade, building shadows, or pavement damage. In addition, the edges of dashed lane lines are inherently discontinuous. As a result, the intersection of a scan line with the left or right lane line may not be the true intersection, or may not exist at all.

To address this, the invention builds on the control-point determination method above and, using the assumption that the left and right lane lines are parallel, solves for the abscissas ui and u′i of the control points Li and Ri determined by the i-th scan line Line i. From the imaging-model constraint and Eq. (12), the relation between Δui+1 and Δui is derived as:

[Equation (15), rendered as an image in the original]

u′i = ui + Δui   (16)

where the angle γ between the camera's optical axis and the lane line is computed as:

[Equation (17), rendered as an image in the original]

When a control point is lost because a lane-line edge is missing, its abscissa u2 can be computed from Eqs. (15)-(17) given a pair of known adjacent control points L1, L2. When a control point is mislocated because of a false edge, one can verify whether the spacing-width ratios of all adjacent control-point pairs satisfy Eq. (15), thereby detecting the erroneous control-point coordinates and repositioning them according to Eqs. (15)-(17).

Step 6: Lane-line fitting.

With the lane-line edge control-point information obtained by the control-point determination method of Step 5, the left and right lane lines can be fitted using the NUBS interpolation method. The mathematical model of a B-spline curve is as follows:

Assume the B-spline curve S is defined by a set of n+1 control points {P0, P1, ..., Pn}; then every point on S satisfies:

S(u) = Σ_{i=0..n} Pi · Bi,m(u)   (18)

where Bi,m(u) is the B-spline basis function, 2 ≤ m ≤ n+1, tmin ≤ u ≤ tmax, and tj (j = 0, ..., i+m) are the knots. When the knots tj are equally spaced, the curve is called a uniform B-spline; otherwise it is a non-uniform B-spline. By the NUBS interpolation method, if m (m ≥ 3) pairs of control points are known, the lane line can be fitted with a polynomial function of order m−1. If 4 pairs of control points can be determined, a third-order polynomial function is used for NUBS interpolation to fit the lane line; if only 3 pairs are determined, a second-order polynomial function is used.
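As a rough stand-in for the NUBS interpolation step, the order-(m−1) fit through m control points can be sketched with a plain interpolating polynomial u = p(v). The control points below are the left-lane values from the embodiment described later; `np.polyfit` is used here for illustration rather than a true B-spline evaluation:

```python
import numpy as np

def fit_lane(points, order=None):
    """Fit one lane line through its control points (u_i, v_i) with a
    polynomial u = p(v) of order (#points - 1), mirroring the rule that
    m control-point pairs support an order-(m-1) fit.  A plain polynomial
    interpolant stands in for the NUBS interpolation of the text."""
    pts = np.asarray(points, dtype=float)
    u, v = pts[:, 0], pts[:, 1]
    if order is None:
        order = len(pts) - 1
    coeffs = np.polyfit(v, u, order)   # exact interpolation at this order
    return np.poly1d(coeffs)

# 4 control points -> cubic fit that passes through every control point
left = [(92, 21), (80, 46), (67, 78), (51, 117)]
p = fit_lane(left)
assert all(abs(p(v) - u) < 1e-6 for u, v in left)
```

Evaluating p(v) over the row range of the image then traces the fitted lane curve for display, analogous to evaluating S(u) from Eq. (18).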

Substituting the control-point coordinates determined in Step 5 into Eq. (18) yields the spline curve S(u), which is displayed in the original lane-line image I, completing the lane-line detection.

A specific embodiment is described below.

Referring to Fig. 6, the method of the invention performs edge detection on a lane-line image of size WI×HI (240×256); the control points are determined from the imaging-model constraints combined with the lane-line edge positions, the non-uniform B-spline curve parameters are solved, and lane-line detection is achieved.

The implementation comprises the following steps:

Step 1: The original lane-line image I is obtained from the lane-line standard image library of the Carnegie Mellon image database. Median filtering is applied to I to remove salt-and-pepper noise, followed by histogram equalization to enhance brightness and contrast and make the edge features prominent, giving the enhanced lane-line image I1. In the invention, a 3×3 rectangular sub-image window is used for the median filtering, and the discrete gray-level parameter rk for histogram equalization satisfies 0 ≤ rk ≤ 255.
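Step 1 can be sketched as follows: a minimal NumPy version with the 3×3 median window and 0-255 gray levels stated above, taking the equalization mapping sk as the cumulative histogram (a standard choice, assumed here since the patent's formula is rendered as an image):

```python
import numpy as np

def preprocess(img):
    """3x3 median filter for salt-and-pepper noise, then histogram
    equalization over gray levels 0..255, assuming a uint8 grayscale image."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    filtered = np.empty_like(img)
    for y in range(h):                       # 3x3 rectangular window
        for x in range(w):
            filtered[y, x] = np.median(padded[y:y + 3, x:x + 3])
    hist = np.bincount(filtered.ravel(), minlength=256)
    cdf = hist.cumsum() / filtered.size      # cumulative frequencies n_i / n
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[filtered]                     # remap gray levels
```

A single saturated "salt" pixel in a flat region is removed by the median window before equalization stretches the remaining gray levels.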

Step 2: Edge detection is performed on the enhanced lane-line image I1 with the Canny operator, giving the initial lane-line edge image I2. The smoothing parameter of the Gaussian smoothing function used for edge detection is σ = 1; the first-order differential convolution template is rendered as an image in the original; the two thresholds T1 and T2 take their default values, satisfying T1 = 0.4·T2.

Step 3: Hough line detection is applied to the initial lane-line edge image I2 using the Hough algorithm. For each edge pixel on a marked line, the entire edge containing that pixel is found and retained; edges sharing no pixel with any marked line are removed, giving the edge image I3. All parameters of the Hough algorithm take their default values;

Step 4: Starting from the bottom of the lane-line edge image I3, a horizontal scan line Line i (2 ≤ i ≤ 6) is placed at row vi, and the control-point pair (Li, Ri) is obtained at the intersections of Line i with the left and right lane lines, where Li has coordinates (ui, vi) and Ri has coordinates (u′i, vi). In the invention, v1 = 0 and Δv1 = 20 are the preset values. Substituting Eq. (14) into Eq. (13) yields the values of vi in sequence, so the ordinate of the control points determined by the i-th scan line equals vi. Edge points are searched from the midpoint of each scan line toward the left and right; the first intersections with the left and right lane lines are the control points, giving their abscissas and hence the coordinates (ui, vi) and (u′i, vi).

The lane lines actually detected may contain interfering or discontinuous edges caused by vehicle occlusion, tree shade, building shadows, pavement damage, or dashed lane lines. When a control point is lost because a lane-line edge is missing, its abscissa u2 can be computed from Eqs. (15)-(17) given a pair of known adjacent control points L1, L2; when a control point is mislocated because of a false edge, one can verify whether the spacing-width ratios of all adjacent control-point pairs satisfy Eq. (15), thereby detecting the erroneous control-point coordinates and repositioning them according to Eqs. (15)-(17).

Step 5: With the lane-line edge control-point information obtained by the control-point determination method of Step 4, the left and right lane lines are fitted by the NUBS interpolation method. If 4 pairs of control points can be determined, a third-order polynomial function is used for NUBS interpolation to fit the lane line; if only 3 pairs are determined, a second-order polynomial function is used. Substituting the control-point coordinates determined in Step 4 into Eq. (18) yields the spline curve S(u), which is displayed in the original lane-line image I, completing the lane-line detection.

For a lane-line image of size 240×256, median filtering and histogram equalization are applied, and the Canny edge-detection operator extracts the lane-line edge features. Hough line detection is then applied to the lane-line edge image: exploiting the fact that lane-line edges in the near field of view are straight, only the edges containing line-detection results are retained, which further eliminates the interfering edges formed by background buildings, tree shade, obstacles, pavement holes, cracks, and so on.

Fig. 1 is a schematic diagram of the camera position in the world coordinate system; from the camera imaging principle, the imaging model is derived as Eqs. (11)-(12).

Fig. 2 illustrates the control-point determination process when the lane-line edges are continuous. With the preset values v1 = 0 and Δv1 = 20, substituting Eq. (14) into Eq. (13) yields in sequence: v2 = 167, v3 = 117, v4 = 78, v5 = 46, v6 = 21. Starting from the bottom of the lane-line edge image, a horizontal scan line Line i (2 ≤ i ≤ 6) is placed at row vi, and the control-point pair (Li, Ri) is obtained at the intersections of Line i with the left and right lane lines, where Li has coordinates (ui, vi) and Ri has coordinates (u′i, vi); the ordinate of the control points determined by the i-th scan line equals vi. Edge points are searched from the midpoint of each scan line toward the left and right; the first intersections with the left and right lane lines are the control points, giving the coordinates (ui, vi) and (u′i, vi). This yields 5 control-point pairs: {(92,21),(117,21)}, {(80,46),(145,46)}, {(67,78),(175,78)}, {(51,117),(212,117)}, {(33,167),(254,167)}. NUBS interpolation with a fourth-order polynomial function then gives the fitted curves, shown overlaid on the lane-line image in Fig. 3.

Fig. 4 illustrates the control-point determination process when the lane-line edges are discontinuous. With the same presets v1 = 0 and Δv1 = 20, substituting Eq. (14) into Eq. (13) yields: v2 = 167, v3 = 117, v4 = 78, v5 = 46, v6 = 21. Starting from the bottom of the lane-line edge image, a horizontal scan line Line i (2 ≤ i ≤ 6) is placed at row vi. The intersections of Line 5 and Line 6 with the left lane line are missing, so only 3 left-lane control points are initially determined: {(15,167),(39,117),(59,78)}; the 5 right-lane control points are {(253,167),(213,117),(181,78),(155,46),(134,21)}. From Eqs. (10)-(12) it is estimated that Δu5 = 80, u5 = 75 and Δu6 = 46, u6 = 88, so the coordinates of the two missing left-lane control points are determined as (75,46) and (88,21). NUBS interpolation with a fourth-order polynomial function then gives the fitted curves, shown overlaid on the lane-line image in Fig. 5.

As can be seen from Figs. 3 and 5, lane-line detection by the above method achieves good results. This embodiment shows that the scheme of the invention effectively improves the control-point localization accuracy and success rate of the non-uniform B-spline curve model while keeping the computational load in check, giving good detection accuracy and real-time performance.

The specific embodiments of the present invention are given above. It should be noted that the invention is not limited to these specific embodiments; all equivalent transformations made on the basis of the solution of this application fall within the protection scope of the invention.

Claims (7)

1. A method for detecting a lane line based on imaging-model-constrained non-uniform B-spline curve fitting, characterized by comprising the following steps:
step one: preprocessing an image;
acquiring an original lane line image I from a lane line standard image library in the Carnegie Mellon image database, performing median filtering on the original lane line image I to remove salt-and-pepper noise, and performing histogram equalization to enhance the brightness and contrast of the image so that edge features are prominent, obtaining an enhanced lane line image I1;
Step two: detecting edges;
performing edge detection on the enhanced lane line image I1 by adopting a Canny operator to obtain an initial lane line edge image I2;
Step three: detecting a Hough straight line;
performing Hough line detection on the initial lane line edge image I2 obtained in step two, retaining the edges containing line detection results, and removing the remaining interfering edges to obtain an edge image I3;
Step four: deducing constraint conditions of the imaging model;
the derived imaging model constraints are as follows:
the line segment length Δv in the u-th column of the image coordinate system corresponding to a lane line segment of length ΔY in the world coordinate system is:

[Equation (11), rendered as an image in the original]

the spacing width Δu in the v-th row of the image coordinate system corresponding to a left-right lane line spacing of width ΔX in the world coordinate system is:

[Equation (12), rendered as an image in the original]
the concrete process of the step four is as follows:
assuming that the optical axis of the camera is parallel to the plane of the road on which the vehicle travels, and that the left and right lane lines are parallel;
given the world coordinate system (X, Y, Z) and the image coordinate system (U, V), the maximum horizontal viewing angle of the camera is α, the maximum vertical viewing angle is β, and the camera mounting position in the world coordinate system is C(d, 0, h), where h is the camera mounting height, i.e., the value of the camera on the Z axis of the world coordinate system, and d is the camera mounting horizontal offset, i.e., the value of the camera on the X axis of the world coordinate system; the optical axis of the camera is parallel to the plane of the road on which the vehicle travels, and the included angle between the optical axis and the lane line is γ; according to the geometric camera imaging model, the mapping model between a point P(x, y, 0) on the road surface in the world coordinate system (X, Y, Z) and the corresponding point Q(u, v) in the image coordinate system (U, V) is:

[Equations (7)-(10): the mapping model, rendered as images in the original]

where HI and WI are, respectively, the horizontal and vertical resolutions of the image formed by the camera;
according to the camera imaging principle, the length of a lane line segment in the imaged image shortens as the distance between the lane line segment and the camera in the world coordinate system increases; similarly, for the same left-right lane line spacing on the road surface in the world coordinate system, the lane line spacing obtained by imaging in the near field of view is wider and that obtained in the far field of view is narrower; combining this with the geometric camera imaging model, the line segment length Δv in the u-th column of the image coordinate system corresponding to a lane line segment of length ΔY in the world coordinate system is deduced as:

[Equation (11), rendered as an image in the original]

and the spacing width Δu in the v-th row of the image coordinate system corresponding to a left-right lane line spacing of width ΔX in the world coordinate system is:

[Equation (12), rendered as an image in the original]
step five: extracting control points of the non-uniform B-spline curve;
setting scanning lines in the image according to the imaging model constraint so that their spacing corresponds to the length ΔY in the world coordinate system, the intersection points of each scanning line with the left and right lane edges being a pair of control points; the specific process is as follows:
starting from the bottom of the lane line edge image, arranging a horizontal scanning line Line i at row vi, with m ≤ i ≤ n, and obtaining the control point pair (Li, Ri) at the intersection points of Line i with the left and right lane lines, wherein Li has the coordinates (ui, vi) and Ri has the coordinates (u′i, vi); according to the imaging model constraint, vi is defined by the calculation formula:

[Equation (13), rendered as an image in the original]

derived from equations (8), (10), (11):

[Equation (14), rendered as an image in the original]

wherein v1, Δv1 are preset values; the values of vi are obtained in sequence by substituting formula (14) into formula (13), whereby the ordinate of the control points determined by the i-th scanning line Line i is found to equal vi; edge points are searched from the midpoint of the scanning line toward the left and right sides respectively, the first intersection points of the scanning line with the left and right lane lines are taken as the control points, and their coordinates (ui, vi) and (u′i, vi) are thereby determined;
Step six: fitting a lane line;
obtaining the non-uniform B-spline curve control point information by the control point determination method of step five, and fitting the left and right lane lines by using the NUBS interpolation method to complete the detection of the lane lines.
2. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, wherein in step one, during median filtering, the median filtering function f̂(x, y) adopted is:

f̂(x, y) = median{ f(a, b) : (a, b) ∈ Sxy }   (1)

wherein f̂(x, y) is the median-filtered output, Sxy represents the set of coordinates of a rectangular sub-image window centered at (x, y) and of size M × N, and f(a, b) is the pixel gray value at coordinates (a, b).
3. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, wherein in step one, during histogram equalization, the histogram equalization function sk is:

sk = Σ_{i=0..k} ni / n   (2)

wherein sk is the histogram-equalized output, rk represents the discrete gray levels, 0 ≤ rk ≤ 255, k = 0, 1, 2, …, n−1, ni is the number of occurrences of gray level ri in the image, n is the total number of pixels in the image, and ni/n is the frequency in the sense of probability theory.
4. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, wherein the specific steps of step two are as follows:
(1) smoothing the image I1 with a Gaussian filter;
the Gaussian smoothing function G(x, y) is:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (3)

convolving G(x, y) with the enhanced lane line image I1 to obtain the smoothed image f1:

f1(x, y) = I1(x, y) * G(x, y)   (4)

(2) calculating the magnitude and direction of the gradient by using finite differences of the first-order partial derivatives to obtain the gradient image f2;
the first-order differential convolution template is rendered as an image in the original;
(3) performing non-maximum suppression on the gradient magnitude to obtain the non-maximum-suppressed image f3;
at each point of the gradient image f2, comparing the center pixel S of the 8-neighborhood with the two pixels along the gradient line; if the gradient value of S is not greater than the gradient values of the two neighboring pixels along the gradient line, setting S = 0;
(4) detecting and connecting edges by using a dual-threshold algorithm;
setting two thresholds T1 and T2 for the non-maximum-suppressed image f3, with T1 = 0.4·T2; setting to 0 the gray value of pixels whose gradient value is less than T1 to obtain the image f4; then setting to 0 the gray value of pixels whose gradient value is less than T2 to obtain the image f5; taking the image f5 as the basis and the image f4 as the supplement, connecting the edges of the image to obtain the initial lane line edge image I2.
5. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, characterized in that the concrete steps of step three are as follows:
(1) detecting a Hough straight line;
for any point A(x_0, y_0) in the rectangular coordinate system, a straight line passing through point A satisfies

y = kx + l    (5)

where k is the slope and l is the intercept; the cluster of straight lines through point A(x_0, y_0) in the X-Y plane can all be expressed by formula (5), except that a line perpendicular to the X axis has infinite slope and cannot be expressed; the rectangular coordinate system is therefore converted to a polar coordinate system;

the equation representing a straight line in the polar coordinate system is

ρ = x·cosθ + y·sinθ    (6)

wherein ρ is the normal distance from the origin to the straight line and θ is the positive included angle between the normal and the X axis; one point in image space corresponds to one sinusoid in the polar ρ-θ space; straight lines in image space are detected by detecting intersection points in ρ-θ space; ρ and θ are discretized, the value of parameter ρ corresponding to each value of parameter θ is calculated by formula (6), and the corresponding accumulator cell is incremented by 1; finally, the value of each accumulator cell is counted, and if it is larger than a preset threshold H the parameters are taken to be the parameters of a straight line in image space, so that the straight line is marked in the image;
(2) removing interference edges;
for each edge pixel in the straight lines marked in step (1), the whole edge containing that pixel is searched for and retained, and edges having no common pixels with the marked straight lines are eliminated, thereby obtaining the edge image I_3.
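The ρ-θ accumulation described above can be sketched as follows (a minimal illustration; the resolution parameters `rho_res` and `theta_res` and the default vote threshold are assumptions, not values from the claim):

```python
import numpy as np

def hough_lines(edge_img, rho_res=1.0, theta_res=np.pi / 180, H=None):
    """Accumulate rho = x*cos(theta) + y*sin(theta) for every edge pixel,
    then keep accumulator cells whose vote count exceeds threshold H."""
    ys, xs = np.nonzero(edge_img)
    thetas = np.arange(0, np.pi, theta_res)
    diag = int(np.ceil(np.hypot(*edge_img.shape)))      # max |rho|
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=int)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)  # formula (6)
        idx = np.round(rhos / rho_res).astype(int) + diag  # shift: rho may be < 0
        acc[idx, np.arange(len(thetas))] += 1
    H = H if H is not None else int(0.5 * max(len(xs), 1))
    peaks = np.argwhere(acc > H)
    return [((r - diag) * rho_res, thetas[t]) for r, t in peaks]
```

A vertical edge at x = 5 should vote heavily for the cell (ρ = 5, θ = 0).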
6. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, characterized in that the concrete process of step five is as follows:
assuming that the left lane line and the right lane line are parallel, solving for the abscissas u_i and u'_i of the control points L_i and R_i determined by the i-th scanning line; according to the imaging model constraints and formula (14), the relation between Δu_{i+1} and Δu_i is derived as formula (15):

[formula (15), reproduced only as an image in the original]

u'_i = u_i + Δu_i    (16)

the included angle γ between the camera optical axis and the lane line is calculated by formula (17):

[formula (17), reproduced only as an image in the original]

for control points lost due to missing lane line edges, when the adjacent control point pairs L_1 and L_2 are known, the abscissa u_2 of the lost control point is calculated according to formulas (15)-(17); for control points mispositioned by false edges, it is verified whether the interval width ratios of all adjacent control point pairs satisfy formula (15), so that erroneous control point coordinates are detected and repositioned according to formulas (15)-(17).
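Formulas (15) and (17) appear only as images in the source, so the exact relation cannot be reproduced here. The consistency check and repositioning step can nevertheless be sketched under the common pinhole-model assumption that, for parallel lane borders, the image-plane lane width Δu_i shrinks linearly with the distance of row v_i from the horizon row v_0; every name below is hypothetical and not taken from the patent:

```python
import numpy as np

def repair_widths(v_rows, widths, v0, tol=0.15):
    """Assumed model (NOT the patent's formula (15)): du_i = c * (v_i - v0).
    Check each interval width against the model and replace outliers."""
    v = np.asarray(v_rows, dtype=float)
    w = np.asarray(widths, dtype=float)
    c = np.median(w / (v - v0))            # robust estimate of the slope c
    pred = c * (v - v0)                    # model-predicted interval widths
    bad = np.abs(w - pred) > tol * pred    # widths violating the ratio constraint
    w[bad] = pred[bad]                     # reposition the erroneous widths
    return w, bad
```

A width corrupted by a false edge is flagged and snapped back onto the model.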
7. The imaging model constraint-based non-uniform B-spline curve fitting lane line detection method according to claim 1, wherein the specific process of the sixth step is as follows:
suppose the B-spline curve S is defined by a set of n+1 control points {P_0, P_1, …, P_n}; then each point on the curve S satisfies:

S(t) = Σ_{i=0}^{n} P_i · B_{i,m}(t)

wherein B_{i,m}(t) is the basic B-spline function, 2 ≤ m ≤ n+1, t_min ≤ t ≤ t_max, and t_j (j = 0, …, i+m) are the knots; when the knots t_j are equally spaced, the B-spline curve is called a uniform B-spline curve, otherwise a non-uniform B-spline curve; according to the NUBS interpolation method, for known control points with m ≥ 3, the lane lines are fitted with a polynomial function of order m−1; if 4 pairs of control points can be determined, NUBS interpolation is performed with a third-order polynomial function to fit the lane line; if only 3 pairs of control points are determined, a second-order polynomial function is used to fit the lane lines.
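The curve definition above can be evaluated directly with the Cox-de Boor recursion (an illustrative sketch; the clamped knot vector and order m = 4 in the usage example are assumptions for demonstration, while a NUBS fit would use an unequally spaced knot vector):

```python
import numpy as np

def bspline_basis(i, m, t, knots):
    """Cox-de Boor recursion for the basis B_{i,m}(t) (order m, degree m-1);
    divisions by a zero knot span are taken as 0 by convention."""
    if m == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    d1 = knots[i + m - 1] - knots[i]
    d2 = knots[i + m] - knots[i + 1]
    a = (t - knots[i]) / d1 * bspline_basis(i, m - 1, t, knots) if d1 > 0 else 0.0
    b = (knots[i + m] - t) / d2 * bspline_basis(i + 1, m - 1, t, knots) if d2 > 0 else 0.0
    return a + b

def bspline_point(t, ctrl, m, knots):
    """S(t) = sum_i P_i * B_{i,m}(t) over the n+1 control points."""
    ctrl = np.asarray(ctrl, dtype=float)
    return sum(bspline_basis(i, m, t, knots) * ctrl[i] for i in range(len(ctrl)))
```

With 4 control points, order m = 4 and the clamped knots [0,0,0,0,1,1,1,1], the curve reduces to a cubic Bezier segment, matching the third-order polynomial case in the claim.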
CN201811427546.XA 2018-11-27 2018-11-27 Method for detecting lane line fitting based on imaging model constrained non-uniform B-spline curve Active CN109583365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811427546.XA CN109583365B (en) 2018-11-27 2018-11-27 Method for detecting lane line fitting based on imaging model constrained non-uniform B-spline curve

Publications (2)

Publication Number Publication Date
CN109583365A CN109583365A (en) 2019-04-05
CN109583365B true CN109583365B (en) 2022-07-26

Family

ID=65924533


Country Status (1)

Country Link
CN (1) CN109583365B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135252A (en) * 2019-04-11 2019-08-16 长安大学 An Adaptive Accurate Lane Detection and Departure Warning Method for Unmanned Vehicles
CN112654998B (en) * 2020-10-22 2022-04-15 华为技术有限公司 Lane line detection method and device
CA3196453A1 (en) * 2020-10-22 2022-04-28 Daxin LUO Lane line detection method and apparatus
CN112818873B (en) * 2021-02-04 2023-05-26 苏州魔视智能科技有限公司 Lane line detection method and system and electronic equipment
CN113450380A (en) * 2021-07-17 2021-09-28 普达迪泰(天津)智能装备科技有限公司 Track calibration method based on airport runway scribed lines
CN114912159B (en) * 2022-07-18 2022-09-13 中国铁路设计集团有限公司 Method for fitting geometric line shape of rail transit line plane
CN115123218B (en) * 2022-09-02 2022-11-22 小米汽车科技有限公司 Vehicle detection method and device and electronic equipment thereof
CN115311314B (en) * 2022-10-13 2023-02-17 深圳市华汉伟业科技有限公司 Resampling method, system and storage medium for line laser contour data
EP4475093A1 (en) * 2023-06-09 2024-12-11 Volkswagen Aktiengesellschaft Method for identifying sections belonging to one lane boundary of a lane of a road
CN118209053B (en) * 2024-05-22 2024-08-30 致真精密仪器(青岛)有限公司 Linear magnetic domain width measuring method, device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102103747A (en) * 2009-12-16 2011-06-22 中国科学院电子学研究所 Method for calibrating external parameters of monitoring camera by adopting reference height
EP2602744A1 (en) * 2011-12-08 2013-06-12 Delphi Technologies, Inc. Method for detecting and tracking lane markings
CN103177246A (en) * 2013-03-26 2013-06-26 北京理工大学 Dual-model lane line identification method based on dynamic area division
CN108280450A (en) * 2017-12-29 2018-07-13 安徽农业大学 A kind of express highway pavement detection method based on lane line

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8676494B2 (en) * 2010-09-29 2014-03-18 Navteq B.V. Multi-dimensional road representation

Non-Patent Citations (2)

Title
Lane line detection and tracking method based on an imaging model; Chen Long et al.; China Journal of Highway and Transport; 20111115; Vol. 24, No. 6; pp. 96-102 *
Research on driving environment perception methods based on vehicle-road visual cooperation; Mu Kenan; China Doctoral Dissertations Full-text Database; 20170515; No. 5; pp. 25-41 *


Similar Documents

Publication Publication Date Title
CN109583365B (en) Method for detecting lane line fitting based on imaging model constrained non-uniform B-spline curve
CN110569704B (en) A Multi-strategy Adaptive Lane Line Detection Method Based on Stereo Vision
CN109785291B (en) Lane line self-adaptive detection method
CN107045629B (en) A multi-lane line detection method
CN104408460B (en) A kind of lane detection and tracking detection method
CN107895375B (en) Complex road route extraction method based on visual multi-features
CN101750051A (en) Visual navigation based multi-crop row detection method
Youjin et al. A robust lane detection method based on vanishing point estimation
CN108280450A (en) A kind of express highway pavement detection method based on lane line
CN104063711B (en) A kind of corridor end point fast algorithm of detecting based on K means methods
CN103902985B (en) High-robustness real-time lane detection algorithm based on ROI
CN106960449A (en) The heterologous method for registering constrained based on multiple features
CN109544635B (en) An automatic camera calibration method based on enumeration and heuristic
CN106991407A (en) The method and device of a kind of lane detection
CN105138990A (en) Single-camera-based gesture convex hull detection and palm positioning method
CN105809104A (en) Complex marking determining device and complex marking determining method
CN106875430B (en) Single moving target tracking method and device based on fixed form under dynamic background
CN111242000A (en) A road edge detection method combined with laser point cloud steering
CN110245600B (en) Adaptive start fast stroke width UAV road detection method
CN114387329A (en) Progressive regularization method of building outline based on high-resolution remote sensing images
CN106709952A (en) Automatic calibration method of display screen
TWI383690B (en) Method for image processing
CN107563314B (en) Lane line detection method based on parallel coordinate system
CN112116644B (en) Obstacle detection method and device based on vision and obstacle distance calculation method and device
CN111161264B (en) Method for segmenting TFT circuit image with defects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant