CN107045629B — Multi-lane line detection method (Google Patents)
Publication number: CN107045629B (application CN201710256771.0A)
Authority: CN (China)
Prior art keywords: lane line, image, grid map, value, information
Legal status: Active (the legal status is an assumption by Google Patents, not a legal conclusion)
Classifications (CPC)
G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS › G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints:
G06K9/00624—Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects › G06K9/00791—Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes › G06K9/00798—Recognition of lanes or road borders, e.g. of lane markings, or recognition of the driver's driving pattern in relation to lanes perceived from the vehicle; analysis of car trajectory relative to detected road
G06K9/20—Image acquisition › G06K9/34—Segmentation of touching or overlapping patterns in the image field › G06K9/342—Cutting or merging image elements, e.g. region growing, watershed, clustering-based techniques
G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image › G06K9/46—Extraction of features or characteristics of the image › G06K9/4604—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections
Abstract
The invention discloses a multi-lane line detection method. First, color images of consecutive frames are acquired and the current frame is converted to grayscale; an inverse perspective transformation is then applied to the grayscale image, using the camera parameters and the chosen grid-map size, to obtain a grid map of the road region. Next, guided by prior information on the control points, the grid map is partitioned into regions with Thiessen polygons, and each region is binarized independently. The regions are then grouped according to prior information on the lane lines, and the coordinates of the nonzero pixels in each group are recorded. Finally, the final lane line equation is computed by combining the fitted lane line equation with a particle-filter prediction of the control points of the current image, and a perspective transformation maps it back to obtain the lane line equation in the original image. The proposed method detects lane lines with high precision and good robustness, and can detect multiple lane lines simultaneously.
Description
Technical Field
The invention belongs to the fields of computer vision and automatic driving, and in particular relates to a multi-lane line detection method.
Background
In the field of automatic driving, lane line detection is an important link and a hot topic of current research. However, most existing computer-vision-based lane line detection methods cannot detect lane lines robustly, mainly for the following reason: lane line detection on the road is disturbed by illumination, shadows on the road, other obstacles on the road (vehicles, pedestrians, etc.), and other traffic markings on the road.
Many computer-vision-based multi-lane detection algorithms already exist. Journals and conferences on automatic driving, both domestic and international, list lane line detection as a key research area, and scholars have produced a large number of useful results. In recent years, the main approaches in the literature include the following:
In 2008, M. Aly converted the image into an inverse perspective image and fitted lane lines with the RANSAC algorithm and Bezier spline curves, enabling simultaneous detection of multiple lane lines. Also in 2008, M. Nieto et al. converted the road area to be processed into inverse perspective space and modeled the road with a hierarchical structure to detect lane lines. In 2011, M. Nieto et al. applied an inverse perspective transformation to the region of interest, classified the transformed image pixel by pixel with a recursive Bayesian model, and finally modeled the road with circular arcs to detect lane lines. In 2011, Xin et al. modeled roads directly on perspective images with straight lines and parabolas to achieve long-range multi-lane line detection. In 2011, M. Sebdani et al. collected images with a fixed traffic camera, performed basic image processing directly on the original image, and detected lane lines with the Hough transform. In 2012, Xu et al. detected road edges with an improved Canny operator and then detected lane lines with the probabilistic Hough transform. In 2012, Liu et al. converted the camera's color images into HSV space, performed basic image processing, detected road edges with the Canny operator, and detected lane lines with the probabilistic Hough transform. In 2013, J. Y. Deng et al. applied an inverse perspective transformation to the region of interest, performed basic image processing, and detected lane lines with an optimized RANSAC algorithm combined with B-splines. In 2014, Seo et al. applied an inverse perspective transformation to the region of interest, segmented lane lines by color features, and detected them with the RANSAC algorithm combined with the Hough transform.
On the patent side, Chinese patent application CN102722705A detects multiple lane lines with the RANSAC algorithm; Chinese patent CN103617412A determines a region of interest from the vanishing point and then detects the lane lines of the current two lanes; Chinese patent CN103632140A detects lane lines by dividing image areas; Chinese patent CN103940434A detects lane lines in real time based on monocular vision and an inertial navigation unit; Chinese patent CN103971081A detects multiple lane lines with a bilateral-constraint method; Chinese patent CN104751151A detects lane lines based on the change in color brightness on their left and right sides; Chinese patent CN104951790A detects lane lines based on seamless stitching of multi-source inverse perspective images; Chinese patent CN105160309A detects the lane lines of three lanes based on image-morphology segmentation and region growing; Chinese patent CN105354553A distinguishes edge features such as edges and corners with the structure tensor and vector-field divergence to detect lane lines; Chinese patent CN105426864A detects multiple lane lines with an equidistant edge-point matching method; Chinese patent CN105678285A detects lane lines with an adaptive bird's-eye-view road transformation.
However, none of the above methods achieves robust multi-lane line detection, for two main reasons. First, the whole image is processed uniformly in the preprocessing stage, so interference such as illumination and shadows has a large influence. Second, when the RANSAC algorithm fits a lane line equation directly to the point set, it cannot fully exploit the elongated, strip-like geometry of lane lines, so it cannot obtain good results within a limited computation budget.
Disclosure of Invention
An object of the present invention is to provide a multi-lane line detection method that detects multiple lane lines simultaneously and is highly robust to illumination, shadows, and similar interference.
The technical solution for realizing this purpose is a multi-lane line detection method comprising the following steps:
step 1, initializing prior information of a control point and prior information of a lane line;
step 2, obtaining color images of consecutive frames and converting the current frame to grayscale. Specifically, a camera mounted on the autonomous vehicle collects RGB three-channel color images in real time, and only the R channel of the color image is extracted and used as the grayscale image to be processed.
Step 3, setting the size of the grid map and computing the inverse perspective image of the region of interest in the grayscale map of the current frame, combining the camera intrinsic parameters and the camera mounting height; this inverse perspective image is the grid map of the current frame. Specifically, according to actual requirements, first set the forward and lateral sensing ranges of the current vehicle and the resolution of the grid map, i.e., the size of the physical-world region corresponding to one pixel of the grid map; then compute the size of the grid map; finally, compute the inverse perspective of the grayscale image, combining the camera parameters and the grid-map size, to obtain the final grid map. The concrete calculation proceeds as follows: first compute the mapping from the world coordinate system to the grid map, then the mapping from the world coordinate system to the camera coordinate system, then the mapping from the camera coordinate system to the image coordinate system; finally, according to the composed mappings, read the pixel value at the corresponding position of the original grayscale map and fill it into the grid map. The process is shown in FIGS. 1 and 2.
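As an illustration of how the grid-map size falls out of the chosen sensing ranges and resolution, the following sketch uses the example values given in the embodiment below (−5 to 5 m laterally, 7 to 87 m ahead, 0.05 × 0.2 m per pixel). The camera intrinsics and the world-to-image part of the mapping are omitted, and the helper name `world_to_grid` is ours, not the patent's.

```python
# Illustrative sketch of the grid-map geometry in step 3, under the
# embodiment's example ranges and resolution (assumed values).
X_RANGE = (-5.0, 5.0)     # lateral sensing range, metres
Y_RANGE = (7.0, 87.0)     # forward sensing range, metres
RES_X, RES_Y = 0.05, 0.2  # metres per pixel (width, height)

# The grid-map size follows directly from range / resolution.
WIDTH = int((X_RANGE[1] - X_RANGE[0]) / RES_X)   # 200 pixels
HEIGHT = int((Y_RANGE[1] - Y_RANGE[0]) / RES_Y)  # 400 pixels

def world_to_grid(xw, yw):
    """Map a ground-plane point (xw, yw) in metres to a grid-map pixel
    (col, row); row 0 corresponds to the farthest forward distance."""
    col = int((xw - X_RANGE[0]) / RES_X)
    row = int((Y_RANGE[1] - yw) / RES_Y)
    return col, row
```

The full pipeline would compose this with the world-to-camera and camera-to-image mappings to read grayscale values into the grid map.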
Step 4, according to the prior information of the control points, applying a Delaunay triangulation algorithm and Thiessen polygons to the grid map to divide it into regions; the divided regions do not overlap;
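The Thiessen (Voronoi) partition of step 4 can be obtained on a discrete grid by labeling every pixel with its nearest control point. A minimal NumPy sketch follows; the function name and seed values are illustrative, and a production implementation would use a Delaunay/Voronoi library rather than this brute-force distance computation:

```python
import numpy as np

def thiessen_labels(shape, seeds):
    """Label each grid-map pixel with the index of its nearest control
    point (seed): the discrete Thiessen/Voronoi partition.
    `seeds` is an (N, 2) array of (row, col) control-point coordinates."""
    rows, cols = np.indices(shape)                       # (2, H, W)
    pix = np.stack([rows, cols], axis=-1)[..., None, :]  # (H, W, 1, 2)
    d2 = ((pix - seeds[None, None]) ** 2).sum(-1)        # (H, W, N)
    return d2.argmin(-1)                                 # nearest seed id

# Example: three control points on a 60x60 grid map.
seeds = np.array([[10, 10], [10, 50], [50, 30]])
labels = thiessen_labels((60, 60), seeds)
```

Each resulting region contains exactly one control point, and the regions tile the grid map without overlap, as the patent requires.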
and 5, for each divided region, detecting lane line edges with a Sobel operator, obtaining a binary image with an adaptive thresholding operation, and removing part of the noise with an erosion operation to obtain the final binary image. Specifically, the edges are detected with a Sobel operator taking the second-order derivative in the horizontal direction and the zeroth-order derivative in the vertical direction, with a kernel size of 5 × 5; for the thresholding, the maximum pixel value of each region is obtained, pixels whose value exceeds 95% of that maximum are reassigned to 255, and all others to 0; the erosion uses a kernel of size 3 × 3. Combined with step 4, a visualization of the final output is shown in FIG. 3.
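The per-region adaptive thresholding of step 5 can be sketched as follows. The Sobel filtering that precedes it and the 3 × 3 erosion that follows are omitted (in practice both would come from an image library such as OpenCV), and `binarize_regions` is an illustrative name:

```python
import numpy as np

def binarize_regions(edge_img, labels, frac=0.95):
    """Per-region adaptive thresholding as described in step 5: pixels
    above `frac` of their own region's maximum edge response become 255,
    everything else 0. `labels` is the Thiessen partition of the grid map."""
    out = np.zeros_like(edge_img, dtype=np.uint8)
    for k in np.unique(labels):
        mask = labels == k
        vmax = edge_img[mask].max()          # region-local maximum
        out[mask & (edge_img > frac * vmax)] = 255
    return out
```

Thresholding each Thiessen region against its own maximum is what gives the method its robustness to uneven illumination and shadow: a dark, shadowed region keeps its own (lower) threshold instead of being suppressed by a global one.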
Step 6, grouping the divided regions according to the prior information of the lane lines, placing regions that contained the same lane line in the previous frame into the same group, and recording the horizontal and vertical coordinates of the nonzero pixels contained in each group of the current frame;
step 7, fitting a curve to each group of nonzero pixels with an improved RANSAC algorithm to obtain a lane line equation. Specifically: first, randomly select one point from each region from top to bottom, skipping a region directly if it contains no points; then fit a curve to the selected random points; then compute the distances from the remaining points to the curve and count the nonzero pixels whose distance is smaller than a threshold; repeat these steps up to the maximum number of iterations, and finally take the curve with the largest count as the fitted lane line.
Step 8, predicting the control point coordinates of the current frame with a particle filter algorithm, using the prior information of the control points, and combining them with the lane line equation fitted in step 7 to obtain the final lane line equation. Specifically: predict the current-frame control point coordinates with the particle filter; draw horizontal lines in the grid map at fixed intervals from top to bottom and take the intersections of the lane line equation fitted in step 7 with those lines as the control points of the current frame; then compute the difference between each calculated abscissa and the corresponding predicted abscissa. If the difference is larger than a set threshold, take the predicted control point coordinates as the prediction of the particle filter and the calculated control point coordinates as its measurement, compute the distances between the abscissas of the predicted particles and the measured abscissa, normalize them, and use the normalized values as the probabilities of the predicted control point coordinates; if the difference is smaller than the threshold, do not update the particle probabilities. Finally, select the particle with the highest probability as the new control point, and fit each group of new control points by least squares to obtain the final lane line equation.
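A simplified sketch of the fusion rule in step 8, for a single control point: the patent's full particle filter maintains weighted particles per control point, whereas here the "particles" are just candidate abscissas, so this illustrates only the threshold-and-reweight logic. The function name and the inverse-distance weighting are our assumptions:

```python
import numpy as np

def update_control_point(particles, x_meas, threshold=10.0):
    """Illustrative fusion of predicted and measured control-point
    abscissas: if the measurement disagrees with the mean prediction by
    more than `threshold` pixels, re-weight the predicted particles by
    their (inverted, normalised) distance to the measurement and keep
    the most probable one; otherwise trust the measurement as-is."""
    x_pred = particles.mean()
    if abs(x_meas - x_pred) <= threshold:
        return x_meas                    # prediction agrees; keep measurement
    d = np.abs(particles - x_meas)
    w = 1.0 / (d + 1e-9)                 # closer particles get higher weight
    w /= w.sum()                         # normalise to probabilities
    return particles[np.argmax(w)]       # max-probability particle wins
```

The effect is a sanity check: an abrupt jump of a control point (e.g. a shadow edge misdetected as a lane line) is pulled back toward the temporally filtered prediction.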
Step 9, applying a perspective transformation to the final lane line equation obtained in the grid map to obtain the lane line equation of the current frame in the original image, i.e., the detected lane lines. If unprocessed image frames remain, use the final lane line equation from step 8 as the lane line prior information of the next frame, determine the control point prior information of the next frame image from it, and jump to step 2 to process the next frame. The control point prior information of the next frame image is determined as follows: draw horizontal lines in the grid map at fixed intervals from top to bottom, and take the intersections of those lines with the final lane line equation obtained in step 8 as the control points of the current frame, which serve as the control point prior information of the next frame image.
Compared with the prior art, the invention has the following advantages: 1) the regions are divided with Thiessen polygons and basic image processing is performed per region, which makes the algorithm robust to illumination, occlusion, shadows, and the like; 2) the improved RANSAC algorithm fits the lane line in less time than ordinary RANSAC for a fixed number of iterations and finds the most suitable curve equation more easily; 3) the method detects multiple lane lines well even when other markings or vehicles are on the road, and has high practical value.
Drawings
FIG. 1 shows the coordinate-system transformation from the original image to the inverse perspective image, wherein (a) is the world coordinate system; (b) is the conversion from the world coordinate system to the grid map; (c) is the camera coordinate system; (d) is the image coordinate system;
FIG. 2 is a conversion of an original image to a grid map;
FIG. 3 shows the basic image processing results and Thiessen polygon area partitioning;
FIG. 4 shows the lane line detection results of the method, wherein (a) is the detection result when only lane line information is present on the road; (b) is the result with black vehicles on the road (the black vehicles are framed in white); (c) is the result with other traffic markings on the road, e.g., a marking transverse to the road; (d) is the result with white vehicles and other traffic markings on the road;
FIG. 5 is a flow chart of the method of the present invention.
Detailed Description
The present invention is described in more detail below with reference to specific embodiments, which are intended to help the reader understand the invention but not to limit it in any way. It should be noted that, for a person skilled in the art, various variations and modifications can be made without departing from the inventive concept; these all fall within the scope of the present invention.
This embodiment provides a multi-lane line detection method. First, the red channel of the color image acquired by the camera is separated and used as the grayscale image to be processed; the original image is then inverse-perspective-transformed, according to the camera's mounting position on the autonomous vehicle, the camera intrinsic parameters, and the chosen grid-map size, to obtain the grid map. Second, using the control point information provided by the previous frame, the grid map is partitioned with a Delaunay triangulation algorithm and Thiessen polygons, and basic image processing of the partitioned regions yields a binary image. Then, the regions are grouped according to the lane lines detected in the previous frame, the nonzero pixel coordinates of each group are recorded, and a lane line model is fitted to them with an improved RANSAC algorithm to obtain a lane line equation. Finally, the current lane line equation is combined with the particle-filter prediction of the control points of the current frame to compute the final lane line equation and new control points for use by the next frame, and the final equation is perspective-transformed to obtain the lane line equation in the original image.
When computing the grayscale image, using only the red channel saves time without affecting the final result. The camera parameters in this embodiment are: height 1.8 m, pitch angle −0.034 rad, yaw angle 0.021 rad. The physical region to be processed extends from −5 to 5 meters laterally and from 7 to 87 meters ahead of the vehicle. The physical size of each grid-map pixel is 0.05 m/pixel in width and 0.2 m/pixel in height. The grayscale map is then inverse-perspective-transformed with these parameters to obtain a grid map of width 200 pixels and height 400 pixels.
The control points are computed as follows: a horizontal line is drawn every 20 pixels in the grid map, and the intersections of these lines with the detected lane lines are taken as control points. As shown in FIG. 3, there are 19 control points on each lane line.
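The control-point construction in this paragraph amounts to sampling the fitted lane line, written here as x = a·y + b, at every 20th row of the 400-pixel-high grid map, which yields the 19 interior intersections mentioned above. A small sketch (the function name and the straight-line parameterization are illustrative):

```python
import numpy as np

def control_points(a, b, height=400, step=20):
    """Sample the fitted lane line x = a*y + b at horizontal lines drawn
    every `step` pixels; returns an array of (x, y) control points.
    A 400-px-high grid map gives y = 20, 40, ..., 380: 19 points."""
    ys = np.arange(step, height, step)
    xs = a * ys + b
    return np.stack([xs, ys], axis=1)
```

These intersections then serve both as the seeds of the next frame's Thiessen partition and as the measurements fed to the particle filter.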
According to the control points, the grid map is partitioned with a Delaunay triangulation algorithm, and the Thiessen polygons are then solved directly to obtain the final region division; the divided regions do not overlap, and each region contains exactly one control point.
For the basic image processing of each divided region, the lane line edges are detected with a Sobel operator taking the second-order derivative in the horizontal direction and the zeroth-order derivative in the vertical direction, with a kernel size of 5 × 5. The maximum value v_k of each region is then obtained, where k is the region index, and each region is binarized as follows:

p_kij = 255 if p_kij > 0.95 v_k, and p_kij = 0 otherwise,

where p_kij denotes the pixel value at position (i, j) in the k-th region. Finally, an erosion operation is performed with a kernel of size 3 × 3.
The regions are grouped according to the lane lines detected in the previous frame, and the pixel coordinates of the nonzero pixels in each group are recorded. For example, for the first lane line, denote the regions from top to bottom by G_g, where g indexes the g-th region from top to bottom within the group; the set of nonzero pixel coordinates in each region is then S_g, g = 1, 2, …, 19.
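Collecting the per-region nonzero coordinate sets S_g can be sketched with NumPy, given the binary image from the previous step and the region labels from the Thiessen partition (the function name is illustrative):

```python
import numpy as np

def nonzero_sets(binary, labels, group):
    """For each region index g in `group` (ordered top to bottom), collect
    the (x, y) coordinates of nonzero pixels of the binary image that fall
    inside that region — the sets S_g fed to the RANSAC fit."""
    sets = []
    for g in group:
        rows, cols = np.nonzero((labels == g) & (binary > 0))
        sets.append(np.stack([cols, rows], axis=1))  # (x, y) order
    return sets
```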
When fitting a lane line with the modified RANSAC algorithm, the following settings are used: maximum number of iterations 200, threshold t = 3, and a straight line as the line model.
The specific algorithm is as follows:
Step 1: randomly select one pixel coordinate from each set S_g, g = 1, 2, …, 19; any empty set of nonzero pixel coordinates is skipped directly. Determine a straight line l from the selected pixel coordinates.
Step 2: according to the threshold t, determine the set S(l) of pixel coordinates whose geometric distance to the line l is less than t; S(l) is called the consistent set of the line l.
Step 3: repeat the random selection 200 times to obtain lines l_i, i = 1, 2, …, 200, and the corresponding consistent sets S(l_1), S(l_2), …, S(l_200).
Step 4: take the line corresponding to the largest consistent set as the best-matching line of the data points.
The best-matching line obtained above is the detected equation of the first lane line; lane lines in the other groups are detected in the same way.
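The four steps above can be sketched as follows, with a straight line x = a·y + b as the model (lane lines are near-vertical in the grid map, so x is fitted as a function of y). The sampling, consensus counting, and selection mirror steps 1–4; the function name and the least-squares fit via `np.polyfit` are implementation choices of ours:

```python
import random
import numpy as np

def region_ransac(groups, iters=200, t=3.0):
    """Sketch of the modified RANSAC of steps 1-4: draw one point per
    region (top to bottom), fit a line x = a*y + b to the draw, and keep
    the draw whose consensus set (points within distance t) is largest.
    `groups` is a list of (N_g, 2) arrays of (x, y) pixel coordinates,
    one per region; empty regions are skipped."""
    pts = np.vstack([g for g in groups if len(g)])
    best_line, best_count = None, -1
    for _ in range(iters):
        sample = np.array([g[random.randrange(len(g))]
                           for g in groups if len(g)])
        a, b = np.polyfit(sample[:, 1], sample[:, 0], 1)
        # geometric distance from each point to the line x - a*y - b = 0
        d = np.abs(pts[:, 0] - (a * pts[:, 1] + b)) / np.sqrt(a * a + 1)
        count = int((d < t).sum())
        if count > best_count:
            best_line, best_count = (a, b), count
    return best_line
```

Drawing exactly one point per region is the "improvement" over plain RANSAC: it spreads every sample along the full length of the lane line, exploiting its strip-like geometry, so far fewer iterations are wasted on degenerate, locally clustered samples.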
The control point coordinates are then recalculated from the detected lane lines, and the positions of the control points in the current image are predicted with the particle filter algorithm from the control point positions of the previous frame. If the absolute difference between the predicted and the calculated abscissa of a control point exceeds 10 pixels, the control point in the current image is updated; after all control points are updated, the final lane line equation is obtained and new control points are computed for the next frame. Finally, the computed lane line equation is perspective-transformed to obtain the lane line equation in the original image.
Fig. 4 shows the lane line detection results of the method. All four images contain partial shadows on the road; the results show that the method detects lane lines well whether the road carries only lane line information or also black vehicles, white vehicles, or other traffic markings.
The above description is a detailed description of specific embodiments of the present invention. It should be noted that the present invention is not limited to the above specific embodiments, and those skilled in the art can make various changes or modifications within the scope of the claims without affecting the essence of the present invention.
Claims (6)
1. A multi-lane line detection method, characterized by comprising the following steps:
step 1, initializing prior information of a control point and prior information of a lane line;
step 2, acquiring color images of continuous frames, and carrying out gray processing on the current frame images;
step 3, setting the size of the grid map, and combining the camera intrinsic parameters and the camera erection height to obtain an inverse perspective image of the region of interest in the gray scale map of the current frame, wherein the inverse perspective image is the grid map of the current frame;
step 4, according to the prior information of the control points, applying a Delaunay triangulation algorithm and Thiessen polygons on the grid map to obtain the region division of the grid map, wherein the divided regions are not overlapped with each other;
step 5, for each divided region, performing edge detection of the current-frame lane lines with a Sobel operator, then obtaining a binary image with an adaptive thresholding operation, and eliminating part of the noise with an erosion operation to obtain the final binary image;
step 6, grouping the divided areas according to the prior information of the lane lines, grouping the areas containing the same lane line of the previous frame into the same group, and recording the horizontal coordinate and the vertical coordinate of the position of the nonzero pixel value contained in each group of the current frame;
step 7, performing curve fitting on each group of nonzero pixel points by using an improved RANSAC algorithm to obtain a lane line equation;
step 8, predicting the control point coordinates of the current frame by adopting a particle filter algorithm and combining the prior information of the control points, and obtaining a final lane line equation by combining the lane line equation fitted in the step 7;
step 9, performing perspective transformation on the final lane line equation obtained in the grid map to obtain the lane line equation of the current frame on the original image, i.e., the detected lane lines; if unprocessed image frames exist, using the final lane line equation obtained in step 8 as the lane line prior information of the next frame, determining the control point prior information of the next frame image from it, and jumping to step 2 to process the next frame;
the specific method for determining the final lane line equation in the step 8 is as follows:
81, predicting the coordinates of the control points of the current frame by adopting a particle filter algorithm;
82, drawing a transverse line in the grid map at intervals from top to bottom, and selecting the intersection point of the lane line equation fitted in the step 7 and the transverse line as a control point of the current frame;
step 83, computing the difference between the abscissas obtained in steps 82 and 81; if the difference is greater than a set threshold, taking the control point coordinates predicted in step 81 as the predicted value of the particle filter algorithm and the control point coordinates calculated in step 82 as its measured value, then computing the distances between the abscissas of the predicted particles and the measured abscissa, normalizing them, and taking the normalized values as the probabilities of the predicted control point coordinates; if the difference is less than the set threshold, not updating the particle probabilities of the particle filter algorithm; finally, selecting the particle with the highest probability as the new control point;
and 84, obtaining new control points according to the step 83, and then obtaining a final lane line equation by adopting least square fitting to each group of new control points.
2. The method as claimed in claim 1, wherein in step 2 only the red channel of the color image is extracted for graying.
3. The method for detecting multilane lines according to claim 1, wherein the specific method for acquiring the grid map in step 3 is as follows:
step 31, setting a front sensing range, a left sensing range and a right sensing range of a current vehicle and the resolution of a grid map, namely the size of a region in a physical world corresponding to one pixel in the grid map;
step 32, calculating the size of the grid map;
and 33, calculating the inverse perspective of the gray level image by combining the camera parameters and the size of the grid map, namely the final grid map.
4. The method as claimed in claim 1, wherein the edge detection of step 5 detects the lane line edges with a Sobel operator taking the second-order derivative in the horizontal direction and the zeroth-order derivative in the vertical direction, with a kernel size of 5 × 5; the adaptive thresholding operation obtains the maximum pixel value of each region, reassigns pixels whose value exceeds 95% of that maximum to 255 and all others to 0; and the erosion operation uses a kernel of size 3 × 3.
5. The method as claimed in claim 1, wherein the curve fitting of each group of nonzero pixels in step 7 comprises:
step 71, randomly generating one point in each region from top to bottom, and directly skipping a region if it contains no points;
step 72, performing curve fitting on the generated random points;
step 73, computing the distances from the remaining points to the fitted curve, and counting the nonzero pixels whose distance is smaller than a threshold;
and step 74, repeating steps 71 to 73 until the maximum number of iterations is reached, and selecting the curve with the maximum count as the finally fitted lane line.
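Steps 71-74 describe a RANSAC-style loop, which could be sketched as follows. The equal-height y-bands, the parabola model x = f(y), and all parameter defaults are assumptions for illustration.

```python
import numpy as np

def ransac_fit_lane(points, n_regions=5, max_iter=50, dist_thresh=3.0, degree=2):
    """Steps 71-74 (sketch): RANSAC-style fit of x = f(y) to one group of
    nonzero pixels. points: (N, 2) array of (x, y) pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    y_edges = np.linspace(pts[:, 1].min(), pts[:, 1].max() + 1e-9, n_regions + 1)
    rng = np.random.default_rng(0)
    best_coeffs, best_count = None, -1
    for _ in range(max_iter):
        sample = []
        for lo, hi in zip(y_edges[:-1], y_edges[1:]):
            band = pts[(pts[:, 1] >= lo) & (pts[:, 1] < hi)]
            if len(band):                      # step 71: skip empty regions
                sample.append(band[rng.integers(len(band))])
        if len(sample) <= degree:
            continue
        sample = np.array(sample)
        coeffs = np.polyfit(sample[:, 1], sample[:, 0], degree)  # step 72
        # Step 73: count inliers (pixels close to the fitted curve).
        residuals = np.abs(np.polyval(coeffs, pts[:, 1]) - pts[:, 0])
        count = int((residuals < dist_thresh).sum())
        if count > best_count:                 # step 74: keep the best curve
            best_coeffs, best_count = coeffs, count
    return best_coeffs
```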
6. The method as claimed in claim 1, wherein the specific method for determining the prior information of the control points in the next frame image in step 9 is: drawing horizontal lines in the grid map at fixed intervals from top to bottom, and selecting the intersection points of these horizontal lines with the final lane line equation obtained in step 8 as the control points of the current frame, namely the prior information of the control points for the next frame image.
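With a polynomial lane model x = f(y), claim 6's control-point extraction reduces to evaluating the fitted equation at evenly spaced rows. The polynomial representation and the row spacing are assumptions; the claim does not specify either.

```python
import numpy as np

def control_points_from_equation(coeffs, map_height, spacing=20):
    """Claim 6 (sketch): intersect evenly spaced horizontal lines with the
    fitted lane equation x = f(y) to get next-frame control-point priors."""
    ys = np.arange(0, map_height, spacing)        # horizontal lines, top down
    xs = np.polyval(coeffs, ys)                   # lane abscissa on each line
    return list(zip(xs.tolist(), ys.tolist()))
```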
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201710256771.0A CN107045629B (en)  20170419  20170419  Multilane line detection method 
Publications (2)
Publication Number  Publication Date 

CN107045629A CN107045629A (en)  20170815 
CN107045629B true CN107045629B (en)  20200626 
Family
ID=59545641
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201710256771.0A Active CN107045629B (en)  20170419  20170419  Multilane line detection method 
Country Status (1)
Country  Link 

CN (1)  CN107045629B (en) 
Families Citing this family (11)
Publication number  Priority date  Publication date  Assignee  Title 

CN107832732B (en) *  20171124  20210226  Henan Polytechnic University  Lane line detection method based on treble traversal 
CN108256446B (en) *  20171229  20201211  Baidu Online Network Technology (Beijing) Co., Ltd.  Method, device and equipment for determining lane line in road 
CN108256445B (en) *  20171229  20201106  Beijing Huahang Radio Measurement Research Institute  Lane line detection method and system 
CN110395257A (en) *  20180420  20191101  Beijing Tusen Weilai Technology Co., Ltd.  Lane line instance detection method and apparatus, and autonomous vehicle 
CN108805074B (en) *  20180606  20201009  Anhui Jianghuai Automobile Group Co., Ltd.  Lane line detection method and device 
CN109145860B (en) *  20180904  20191213  Baidu Online Network Technology (Beijing) Co., Ltd.  Lane line tracking method and device 
CN109543520A (en) *  20181017  20190329  Tianjin University  Lane line parameterization method for semantic segmentation results 
CN109359602B (en) *  20181022  20210226  Changsha Intelligent Driving Institute Co., Ltd.  Lane line detection method and device 
WO2020181426A1 (en) *  20190308  20200917  SZ DJI Technology Co., Ltd.  Lane line detection method and device, mobile platform, and storage medium 
CN110008851A (en) *  20190315  20190712  DeepBlue Technology (Shanghai) Co., Ltd.  Lane line detection method and apparatus 
CN109948552A (en) *  20190320  20190628  Sichuan University  Lane line detection method in a complex traffic environment 
Citations (6)
Publication number  Priority date  Publication date  Assignee  Title 

CN102722705A (en) *  20120612  20121010  Wuhan University  Method for detecting multilane lines based on the random sample consensus (RANSAC) algorithm 
CN104008645A (en) *  20140612  20140827  Hunan University  Lane line predicting and early warning method suitable for city roads 
CN104318258A (en) *  20140929  20150128  Nanjing University of Posts and Telecommunications  Time-domain fuzzy and Kalman filter-based lane detection method 
CN105261020A (en) *  20151016  20160120  Guilin University of Electronic Technology  Fast lane line detection method 
CN105426864A (en) *  20151204  20160323  Huazhong University of Science and Technology  Multiple lane line detection method based on isometric peripheral point matching 
CN105760812A (en) *  20160115  20160713  Beijing University of Technology  Hough transform-based lane line detection method 

NonPatent Citations (2)
Title 

A Novel Lane Detection System With Efficient Ground Truth Generation; Amol Borkar et al.; IEEE Transactions on Intelligent Transportation Systems; 20120331; pp. 365-374 * 
Lane detection and tracking using B-Snake; Yue Wang et al.; Image and Vision Computing; 20041231; pp. 269-280 * 
Similar Documents
Publication  Publication Date  Title 

US9396548B2 (en)  Multi-cue object detection and analysis  
Caltagirone et al.  Fast LIDAR-based road detection using fully convolutional neural networks  
US10429193B2 (en)  Method and apparatus for generating high precision map  
US9846946B2 (en)  Object recognition in a 3D scene  
KR101856401B1 (en)  Method, apparatus, storage medium, and device for processing lane line data  
Son et al.  Real-time illumination invariant lane detection for lane departure warning system  
Menze et al.  Object scene flow for autonomous vehicles  
US10699134B2 (en)  Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line  
CN104318258B (en)  Time-domain fuzzy and Kalman filter-based lane detection method  
Serna et al.  Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning  
Yang et al.  Hierarchical extraction of urban objects from mobile laser scanning data  
Chen et al.  Vehicle detection in high-resolution aerial images via sparse representation and superpixels  
Hadi et al.  Vehicle detection and tracking techniques: a concise review  
Xu et al.  Multiple-entity based classification of airborne laser scanning data in urban areas  
Chen et al.  Lidar-histogram for fast road and obstacle detection  
Khoshelham et al.  Performance evaluation of automated approaches to building detection in multi-source aerial data  
Kong et al.  General road detection from a single image  
US8750567B2 (en)  Road structure detection and tracking  
US9311542B2 (en)  Method and apparatus for detecting continuous road partition  
US8803966B2 (en)  Clear path detection using an example-based approach  
Broggi  Robust real-time lane and road detection in critical shadow conditions  
US8848978B2 (en)  Fast obstacle detection  
EP3596449A1 (en)  Structure defect detection using machine learning algorithms  
Yenikaya et al.  Keeping the vehicle on the road: A survey on on-road lane detection systems  
US8699754B2 (en)  Clear path detection through road modeling 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
SE01  Entry into force of request for substantive examination  
GR01  Patent grant 