CN105912977B - Lane line detection method based on point clustering - Google Patents

Info

Publication number
CN105912977B
Authority
CN
China
Prior art keywords
line
lane
clustering
lane line
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610195295.1A
Other languages
Chinese (zh)
Other versions
CN105912977A (en)
Inventor
解梅
刘伸展
张锦宇
王博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610195295.1A
Publication of CN105912977A
Application granted
Publication of CN105912977B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Abstract

The invention provides a lane line detection method based on point clustering, which applies the coordinate transformation idea underlying Hough line detection to lane line detection and converts the problem into a clustering of points. After straight lines are mapped from the x-y coordinate system to the k-b coordinate system, the center of each cluster of points in the k-b coordinate system carries the position and slope information of a lane line, and the lane lines can be extracted effectively by analyzing this information. The invention does not depend excessively on the gray-level information of the image; it makes full use of the edge information obtained by Canny edge detection and extracts the lane lines by mapping straight lines from the x-y coordinate system to the k-b coordinate system and clustering the resulting discrete points, thereby realizing lane line detection.

Description

Lane line detection method based on point clustering
Technical Field
The invention belongs to the field of image processing and pattern recognition, and mainly relates to a lane line detection technology.
Background
Lane line detection is the task of accurately and quickly locating the lane lines in a road image by means of a suitable algorithm. With the camera calibration data, the vehicle can then compute its position relative to the lane lines, which is the basis of lane departure warning. The quality of the lane line detection algorithm directly determines the performance of a lane departure system. In practice, existing lane line detection algorithms tend to give unstable results on virtual (dashed) lane lines: the detected lane repeatedly jumps between the real lane line and the virtual lane line, and once a lane line is detected erroneously the tracking remains wrong.
In the field of digital images, the Hough transform is an important technique for extracting shaped objects, and it works particularly well for straight lines, ellipses and the like. The transform uses a voting principle in a transformed space to obtain the parameter values that best describe a shape of interest in the image. The central idea of the Hough transform is as follows: an object of a given shape is converted from one space to another, so that a shape characteristic in the original space becomes a characteristic that is easier to compute in the new space, and the parameter values of the object are then found by voting.
In the ordinary x-y coordinate space, any straight line can be represented by the equation y = k·x + b, where x and y are variables denoting the x-axis and y-axis coordinates of points on the line, and k and b are scalars denoting the slope and intercept of the line, respectively.
The idea of the Hough transform is to convert from one coordinate space to another in which straight lines are easier to compute. A straight line is converted from the x-y coordinate system into the k-b coordinate system, i.e. the equation y = k·x + b is rewritten as b = -x·k + y. In this form k and b are the variables, representing the coordinates along the k axis and the b axis, while x and y are constants. As shown in FIG. 1, the three points in the x-y coordinate space of FIG. 1(a) correspond to three straight lines in the k-b coordinate system of FIG. 1(b). Similarly, three straight lines in the x-y coordinate space correspond to three points in the k-b coordinate system.
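This duality can be verified with a short numerical example. The following minimal sketch (an illustration added here, not part of the patent; the sample line y = 2x + 1 is an arbitrary choice) maps three collinear x-y points to their lines b = -x·k + y and shows that all three lines pass through the single point (k, b) = (2, 1), which is exactly the slope and intercept of the line joining the points.

```cpp
// Numerical check of the x-y / k-b duality: three points on y = 2x + 1 map to
// three lines b = -x*k + y in the k-b plane, all passing through (k, b) = (2, 1).
#include <cstdio>

int main() {
    const double pts[3][2] = {{1.0, 3.0}, {2.0, 5.0}, {3.0, 7.0}};  // points on y = 2x + 1
    const double k = 2.0;                        // slope of the line through the points
    for (const auto& p : pts) {
        double b_at_k = -p[0] * k + p[1];        // evaluate b = -x*k + y at k = 2
        std::printf("point (%g, %g) -> line b = -%g*k + %g, b(2) = %g\n",
                    p[0], p[1], p[0], p[1], b_at_k);
    }
    return 0;
}
```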
Disclosure of Invention
The technical problem to be solved by the invention is to provide a lane line detection method of higher robustness that identifies straight lines by means of the Hough transform.
The technical solution adopted by the invention to solve this problem is a lane line detection method based on point clustering, comprising the following steps:
1) an edge detection step: cropping the lower half of the driving image and performing Canny edge detection on it to obtain an edge-detected image;
2) a Hough line detection step:
performing Hough transform line detection on the edge-detected image, and connecting disconnected lane line segments lying on the same straight line by setting a maximum line gap maxLineGap;
3) an image segmentation step:
dividing the image obtained after Hough line detection uniformly into equal horizontal intervals, searching for straight lines within each interval, and mapping each piece of line found to one point in the k-b coordinate system;
4) a clustering step: performing fuzzy c-means clustering on the points in the k-b coordinate system; the number of classes is the preset number of candidate lane lines, and the center point of each class gives the straight line corresponding to one candidate lane line;
5) a lane determining step:
from the candidate lane lines, selecting the line with slope greater than zero that is closest to the vertical center line of the image as the correct left lane line, and selecting the line with slope less than zero that is closest to the vertical center line of the image as the correct right lane line.
The method applies the coordinate transformation idea underlying Hough line detection to lane line detection and converts the problem into a clustering of points. After straight lines are mapped from the x-y coordinate system to the k-b coordinate system, the center of each cluster of points in the k-b coordinate system carries the position and angle (slope) information of a lane line, and the lane lines can be extracted effectively by analyzing this information.
The beneficial effect of the invention is a new lane line detection algorithm that makes full use of the edge information obtained by Canny edge detection without depending excessively on the gray-level information of the image, and that extracts the lane lines by mapping straight lines from the x-y coordinate system to the k-b coordinate system and clustering the resulting discrete points, thereby realizing lane line detection.
Drawings
Fig. 1 is a schematic diagram of Hough transform.
Fig. 2 is a picture obtained by the vehicle-mounted camera.
Fig. 3 is a canny edge detection diagram.
Fig. 4 is a Hough line detection diagram.
Fig. 5 is a line expansion diagram.
Fig. 6 is a schematic diagram of a segmented image.
FIG. 7 is a graph showing the results of detection.
Detailed Description
The image with the lane lines to be detected is shown in Fig. 2. The detection method is implemented in C++ on the VS2010 platform and comprises the following steps:
1. canny edge detection
In order to eliminate a large amount of interference information while keeping the information of the current lane lines, this embodiment selects the bottom 3/10 of the image for Canny edge detection. As can be seen from Fig. 3, the lane line edges are preserved, although some non-lane-line information still remains; in the multi-lane case this cropping also confines detection to the lane the vehicle is currently in.
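A minimal sketch of this step with the OpenCV C++ API is given below. The input file name "lane.jpg" and the Canny thresholds 50/150 are assumptions made for illustration; only the bottom-3/10 crop comes from the text.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Load the on-board camera frame as a grayscale image (file name is illustrative).
    cv::Mat frame = cv::imread("lane.jpg", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return -1;

    // Keep only the bottom 3/10 of the image, where the current lane lines appear.
    int roiTop = frame.rows * 7 / 10;
    cv::Mat roi = frame(cv::Rect(0, roiTop, frame.cols, frame.rows - roiTop));

    // Canny edge detection on the cropped region (threshold values are illustrative).
    cv::Mat edges;
    cv::Canny(roi, edges, 50, 150);

    cv::imwrite("edges.png", edges);
    return 0;
}
```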
2. Hough line detection
Hough line detection is performed on the image processed by Canny edge detection. The maximum line gap maxLineGap of the Hough line detection is set so that disconnected lane line segments lying on the same straight line are connected; the result is shown in Fig. 5. This is important for correctly detecting virtual (dashed) lane lines, whose detected length is thereby extended.
The maximum line gap decides whether two segments with the same slope and intercept that are separated by a gap are regarded as one straight line: if the gap is larger than this value they are regarded as two segments, otherwise as one. The specific value of maxLineGap can be adjusted by those skilled in the art according to actual test data for virtual lane lines; in this embodiment it is set to 50.
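A minimal sketch of this step with OpenCV's probabilistic Hough transform follows, assuming `edges` is the Canny edge image from step 1. The rho, theta, vote threshold and minLineLength values are illustrative assumptions; only maxLineGap = 50 is taken from the text.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect line segments in a Canny edge image; each result is (x1, y1, x2, y2).
std::vector<cv::Vec4i> detectLines(const cv::Mat& edges) {
    std::vector<cv::Vec4i> lines;
    double rho = 1.0;               // distance resolution of the accumulator, in pixels
    double theta = CV_PI / 180.0;   // angle resolution of the accumulator, in radians
    int votes = 30;                 // minimum number of votes for a line
    double minLineLength = 20.0;    // discard very short segments
    double maxLineGap = 50.0;       // bridge gaps between collinear segments (value from the text)
    cv::HoughLinesP(edges, lines, rho, theta, votes, minLineLength, maxLineGap);
    return lines;
}
```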
3. Segmenting images
The image after Hough line detection is segmented uniformly in the horizontal direction, as shown in Fig. 6. A straight line is searched for within each small interval, and each piece of line found is mapped to a point in the k-b coordinate system, so that a scatter diagram is obtained in the k-b coordinate system.
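The following minimal sketch illustrates this mapping under stated assumptions: the uniform horizontal segmentation is taken to mean equal-height horizontal bands (the band count of 8 is an illustrative choice), `lines` is assumed to be the HoughLinesP output of step 2, and the piece of each segment falling inside a band contributes one (k, b) point to the scatter diagram.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

struct KBPoint { double k, b; };   // one point of the k-b scatter diagram

std::vector<KBPoint> mapToKB(const std::vector<cv::Vec4i>& lines,
                             int imageHeight, int numBands = 8) {
    std::vector<KBPoint> points;
    double bandHeight = static_cast<double>(imageHeight) / numBands;
    for (const cv::Vec4i& l : lines) {
        double x1 = l[0], y1 = l[1], x2 = l[2], y2 = l[3];
        if (std::abs(x2 - x1) < 1e-6) continue;   // near-vertical: slope undefined in y = kx + b
        double k = (y2 - y1) / (x2 - x1);         // slope of the segment
        double b = y1 - k * x1;                   // intercept of the segment
        // One (k, b) point for every horizontal band the segment passes through.
        int firstBand = static_cast<int>(std::min(y1, y2) / bandHeight);
        int lastBand  = static_cast<int>(std::max(y1, y2) / bandHeight);
        for (int band = std::max(firstBand, 0); band <= lastBand && band < numBands; ++band)
            points.push_back({k, b});
    }
    return points;
}
```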
4. Clustering
Fuzzy c-means clustering is performed on the scatter diagram obtained in the k-b coordinate system. The number of classes equals the number of candidate lane lines, and the center point of each class gives the straight line corresponding to one candidate lane line; in this embodiment the number of classes is set to 4.
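A self-contained fuzzy c-means sketch for this step is given below. The patent names fuzzy c-means but gives no implementation details, so the fuzziness exponent m = 2, the fixed iteration count and the random initialization are assumptions; only the class count of 4 is taken from the text. Each resulting cluster center (k, b) is one candidate lane line.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct KBPoint { double k, b; };   // a point (slope, intercept) of the k-b scatter diagram

// Fuzzy c-means over 2-D (k, b) points; returns the c cluster centers.
std::vector<KBPoint> fuzzyCMeans(const std::vector<KBPoint>& pts,
                                 int c = 4, double m = 2.0, int iters = 100) {
    if (pts.empty()) return {};
    int n = static_cast<int>(pts.size());
    std::vector<std::vector<double>> u(n, std::vector<double>(c));
    std::vector<KBPoint> centers(c);

    // Random membership initialization; each row is normalized to sum to 1.
    for (int i = 0; i < n; ++i) {
        double s = 0.0;
        for (int j = 0; j < c; ++j) { u[i][j] = std::rand() / (double)RAND_MAX + 1e-6; s += u[i][j]; }
        for (int j = 0; j < c; ++j) u[i][j] /= s;
    }

    for (int it = 0; it < iters; ++it) {
        // Update each cluster center as the membership-weighted mean of the points.
        for (int j = 0; j < c; ++j) {
            double sk = 0.0, sb = 0.0, sw = 0.0;
            for (int i = 0; i < n; ++i) {
                double w = std::pow(u[i][j], m);
                sk += w * pts[i].k; sb += w * pts[i].b; sw += w;
            }
            centers[j] = {sk / sw, sb / sw};
        }
        // Update memberships from the distances to the new centers.
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < c; ++j) {
                double dj = std::hypot(pts[i].k - centers[j].k, pts[i].b - centers[j].b) + 1e-9;
                double sum = 0.0;
                for (int l = 0; l < c; ++l) {
                    double dl = std::hypot(pts[i].k - centers[l].k, pts[i].b - centers[l].b) + 1e-9;
                    sum += std::pow(dj / dl, 2.0 / (m - 1.0));
                }
                u[i][j] = 1.0 / sum;
            }
        }
    }
    return centers;   // each center is a candidate lane line y = k*x + b
}
```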
5. Converting the clustering problem into finding the projection in the vertical direction
Clustering groups closely spaced points into one class, which is equivalent to finding the straight line with the largest projection in the vertical direction, i.e. the lane line to be detected. However, to cope with interference information and the intermittent nature of virtual (dashed) lane lines, the number of classes is set to 4 so that the left and right sides can each keep the two candidate lines of greatest length. The left and right candidates closest to the vertical center line of the image are selected and (y2-y1)/(x2-x1) is computed for each; if the left result is greater than zero and the right result is less than zero, the two lines are judged to be the correct lane lines. If the two selected lines do not give one positive and one negative result, a lane line whose sign is opposite to the previous result is chosen from the remaining candidates as one correct lane line, and the other correct lane line is the candidate with the same sign as before that is closest to the center line. The result is shown in Fig. 7.
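A minimal sketch of the lane decision follows, using the sign convention stated above (left lane line: slope greater than zero, right lane line: slope less than zero) and assuming `candidates` are the cluster centers from step 4. Measuring the distance to the vertical center line at the bottom image row is an assumption; the patent only says "closest to the vertical center line of the image".

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct KBPoint { double k, b; };          // candidate lane line y = k*x + b

struct LanePair { KBPoint left{}, right{}; bool hasLeft = false, hasRight = false; };

// Pick the correct left/right lane lines from the cluster centers.
LanePair selectLanes(const std::vector<KBPoint>& candidates,
                     int imageWidth, int imageHeight) {
    double centerX = imageWidth / 2.0;    // vertical center line of the image
    double bottomY = imageHeight - 1.0;   // comparison row (an assumption, see text above)
    LanePair out;
    double bestLeft  = std::numeric_limits<double>::max();
    double bestRight = std::numeric_limits<double>::max();
    for (const KBPoint& c : candidates) {
        if (std::abs(c.k) < 1e-6) continue;            // ignore near-horizontal lines
        double xAtBottom = (bottomY - c.b) / c.k;      // where the line crosses the bottom row
        double dist = std::abs(xAtBottom - centerX);   // distance to the vertical center line
        if (c.k > 0 && dist < bestLeft)  { out.left  = c; out.hasLeft  = true; bestLeft  = dist; }
        if (c.k < 0 && dist < bestRight) { out.right = c; out.hasRight = true; bestRight = dist; }
    }
    return out;
}
```

This sketch covers only the primary case of one positive-slope and one negative-slope candidate; the fallback described above, in which a line of the opposite sign is re-selected from the remaining candidates, is not shown.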

Claims (4)

1. A lane line detection method based on point clustering, characterized by comprising the following steps:
1) an edge detection step: cropping the lower half of the driving image and performing Canny edge detection on it to obtain an edge-detected image;
2) a Hough line detection step:
performing Hough transform line detection on the edge-detected image, and connecting disconnected lane line segments lying on the same straight line by setting a maximum line gap maxLineGap;
3) an image segmentation step:
dividing the image obtained after Hough line detection uniformly into equal horizontal intervals, searching for straight lines within each interval, and mapping each piece of line found to one point in the k-b coordinate system;
4) a clustering step: performing fuzzy c-means clustering on the points in the k-b coordinate system; the number of classes is the preset number of candidate lane lines, and the center point of each class gives the straight line corresponding to one candidate lane line;
5) a lane determining step:
from the candidate lane lines, selecting the line with slope greater than zero that is closest to the vertical center line of the image as the correct left lane line, and selecting the line with slope less than zero that is closest to the vertical center line of the image as the correct right lane line.
2. The lane line detection method based on point clustering according to claim 1, wherein the step of cropping the lower half of the driving image specifically comprises cropping the bottom 3/10 of the driving image.
3. The lane line detection method based on point clustering according to claim 1, wherein the maximum line gap maxLineGap is set to 50.
4. The lane line detection method based on point clustering according to claim 1, wherein the number of classes set in the clustering step is 4.
CN201610195295.1A 2016-03-31 2016-03-31 Lane line detection method based on point clustering Active CN105912977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610195295.1A CN105912977B (en) 2016-03-31 2016-03-31 Lane line detection method based on point clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610195295.1A CN105912977B (en) 2016-03-31 2016-03-31 Lane line detection method based on point clustering

Publications (2)

Publication Number Publication Date
CN105912977A CN105912977A (en) 2016-08-31
CN105912977B (en) 2021-03-30

Family

ID=56745258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610195295.1A Active CN105912977B (en) 2016-03-31 2016-03-31 Lane line detection method based on point clustering

Country Status (1)

Country Link
CN (1) CN105912977B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529488A (en) * 2016-11-18 2017-03-22 北京联合大学 Lane line detection method based on ORB feature extraction
JP6989766B2 (en) * 2017-09-29 2022-01-12 ミツミ電機株式会社 Radar device and target detection method
CN107977608B (en) * 2017-11-20 2021-09-03 土豆数据科技集团有限公司 Method for extracting road area of highway video image
CN109636877B (en) * 2018-10-31 2021-06-01 百度在线网络技术(北京)有限公司 Lane line adjustment processing method and device and electronic equipment
CN109829366B (en) * 2018-12-20 2021-04-30 中国科学院自动化研究所南京人工智能芯片创新研究院 Lane detection method, device and equipment and computer readable storage medium
CN110345952A (en) * 2019-07-09 2019-10-18 同济人工智能研究院(苏州)有限公司 A kind of serializing lane line map constructing method and building system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839264A (en) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Detection method of lane line
CN104036246A (en) * 2014-06-10 2014-09-10 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN105069415A (en) * 2015-07-24 2015-11-18 深圳市佳信捷技术股份有限公司 Lane line detection method and device
CN105160309A (en) * 2015-08-24 2015-12-16 北京工业大学 Three-lane detection method based on image morphological segmentation and region growing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4988786B2 (en) * 2009-04-09 2012-08-01 株式会社日本自動車部品総合研究所 Boundary line recognition device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839264A (en) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Detection method of lane line
CN104036246A (en) * 2014-06-10 2014-09-10 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN105069415A (en) * 2015-07-24 2015-11-18 深圳市佳信捷技术股份有限公司 Lane line detection method and device
CN105160309A (en) * 2015-08-24 2015-12-16 北京工业大学 Three-lane detection method based on image morphological segmentation and region growing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Road area extraction for expressway curve scenes; Long Di (隆迪); China Master's Theses Full-text Database (Information Science and Technology); 2016-03-15 (No. 03); p. I138-7410 *

Also Published As

Publication number Publication date
CN105912977A (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN105912977B (en) Lane line detection method based on point clustering
Luvizon et al. A video-based system for vehicle speed measurement in urban roadways
CN107045629B (en) Multi-lane line detection method
JP5699788B2 (en) Screen area detection method and system
CN106204572B (en) Road target depth estimation method based on scene depth mapping
US8902053B2 (en) Method and system for lane departure warning
JP6062791B2 (en) License plate character segmentation using likelihood maximization
CN108280450B (en) Expressway pavement detection method based on lane lines
CN107424142B (en) Weld joint identification method based on image significance detection
CN106778712B (en) Multi-target detection and tracking method
CN111047615B (en) Image-based straight line detection method and device and electronic equipment
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
US20180253852A1 (en) Method and device for locating image edge in natural background
CN110415296B (en) Method for positioning rectangular electric device under shadow illumination
KR101742115B1 (en) An inlier selection and redundant removal method for building recognition of multi-view images
JP2009163682A (en) Image discrimination device and program
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
US9443312B2 (en) Line parametric object estimation
CN112101108A (en) Left-right-to-pass sign identification method based on pole position characteristics of graph
CN111695373A (en) Zebra crossing positioning method, system, medium and device
Wang et al. Self-calibration of traffic surveillance cameras based on moving vehicle appearance and 3-D vehicle modeling
JP5027201B2 (en) Telop character area detection method, telop character area detection device, and telop character area detection program
CN103544495A (en) Method and system for recognizing of image categories
JP2013080389A (en) Vanishing point estimation method, vanishing point estimation device, and computer program
CN104766321A (en) Infrared pedestrian image accurate segmentation method utilizing shortest annular path

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant