CN105678287A - Ridge-measure-based lane line detection method - Google Patents


Info

Publication number
CN105678287A
Authority
CN
China
Prior art keywords
line segment
ridge
line
sigma
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610119349.6A
Other languages
Chinese (zh)
Other versions
CN105678287B (en)
Inventor
王海
蔡英凤
陈龙
徐兴
袁朝春
陈小波
何友国
李�诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201610119349.6A priority Critical patent/CN105678287B/en
Publication of CN105678287A publication Critical patent/CN105678287A/en
Application granted granted Critical
Publication of CN105678287B publication Critical patent/CN105678287B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211: Selection of the most significant subset of features
    • G06F 18/2113: Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/48: Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention, which belongs to the technical field of automotive active safety, discloses a ridge-measure-based lane line detection method. The method comprises: step 1, an original image of the road ahead of the vehicle is collected; step 2, a region of interest is determined; step 3, the ridge measure values of the image within the region of interest are calculated; step 4, the mean value p and the variance σ of the ridge measure values of all pixels are calculated, every pixel whose ridge measure value is larger than p-3σ is regarded as a potential lane line feature point and set to 1, and all other pixels in the image are set to 0, so that a binary image containing only the potential lane line feature points is obtained; step 5, the extracted lane line feature points are screened to obtain a binary image Img_B containing only lane line segments; and step 6, a straight-line Hough transform is performed on the binary image Img_B obtained in step 5 to obtain the straight-line model parameters. By means of ridge-measure feature point extraction and the Hough transform, the invention extracts the lane line edge features and establishes a straight-line model. The method has the advantages of high stability and high robustness.

Description

A lane line detection method based on a ridge measure
Technical field
The invention belongs to the technical field of image processing and relates to image segmentation and the detection of geometric properties of curves in images; it specifically relates to a highway lane line detection method based on a ridge measure.
Background technology
Traffic safety has become a major international issue, and the impact of automotive safety on human life and property is self-evident. With the development of highways and the improvement of vehicle performance, driving speeds have increased accordingly; together with the growing number of vehicles and increasingly busy road transportation, the casualties and property losses caused by traffic accidents have become a social problem that cannot be ignored, and the traffic safety of automobiles appears all the more important. Traditional passive safety measures are far from able to prevent accidents from occurring, so the concept of active safety has gradually taken shape and is being continuously refined. Because visual sensing provides rich information at low cost, it is widely applied in the field of automotive active safety.
Lane line detection refers to the technology of detecting road lane markings by means such as image sensing, and it is one of the key technologies in the field of automotive active safety. In a vision-based lane keeping system, lane line detection and tracking are basic and essential functions: they can prevent the vehicle from departing from its lane, and they can also provide important road environment information to other active safety systems such as collision warning. Since the mid-1990s, countries in America and Europe, including the United States and Germany, have carried out a large amount of research in this direction and have successfully developed a number of different lane departure warning systems. These systems warn the driver when the vehicle drifts or shows a tendency to drift, and may even actively intervene in vehicle control, so as to prevent accidents.
Existing highway lane line detection methods often adopt relatively simple techniques and loose constraints to obtain lane line edge feature points, and then use complex models in the lane line parameter estimation stage, such as optimal Bayesian estimation and maximum likelihood estimation. Such methods achieve good detection results in most scenes, but on roads affected by trees, lighting, potholes, uneven pavement material, pavement markings, shadows and other factors, that is, when the lane markings are complicated and the road surface is not uniform, they often misjudge a large number of non-lane-line feature points as lane line feature points, leading to biased lane line parameter estimates.
For convenience of describing the present disclosure, several concepts are explained first.
Concept 1, camera parameters and camera calibration: the camera parameters describe the imaging geometry of the camera itself. They characterize the transformation by which an object is mapped from the three-dimensional world coordinate system to the two-dimensional image coordinate system. The process of obtaining these parameters through experiments is called camera calibration. The camera parameters include internal parameters and external parameters; the internal parameters include the principal point coordinates, the focal length, etc., and the external parameters include the camera position, attitude, etc.
Concept 2, region of interest (ROI: Region of Interest): in the field of image processing, the region of interest (ROI) is a local region selected from the image on which the image analysis is focused. This region is determined so that further processing can be restricted to it; using an ROI often reduces processing time and increases accuracy.
Concept 3, Hough transform: its basic idea is to exploit the duality between points and lines, namely that collinear points in image space correspond to straight lines intersecting at a common point in parameter space and, conversely, all straight lines intersecting at the same point in parameter space correspond to collinear points in image space. The Hough transform thus converts the problem of searching for straight-line features into a maximum search problem in parameter space.
Summary of the invention
In view of the above problems, the present invention proposes a lane line detection method based on a ridge measure. Compared with methods of the same kind, the ridge measure calculation is applied to a neighborhood of pixels around the lane marking, whereas conventional edge extraction only considers the difference between two adjacent pixels. The proposed method therefore has strong stability and is applicable to a relatively wide range of working conditions.
Definition of the ridge measure used in the present invention: a measure of how closely the gray values of the image approximate the shape of a ridge. A ridge is high in the middle and low on both sides, with a certain symmetry; if a region of the image has large gray values with low gray values on both sides and good symmetry, the ridge measure at that point is large, and otherwise it is small.
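For intuition only, the following is a tiny one-dimensional sketch of this idea (it is not the patent's two-dimensional computation): the magnitude of the divergence of the unit gradient direction peaks at the centre of a symmetric bright stripe and is small elsewhere.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 101)
g = np.exp(-x**2)                  # gray-value profile of a symmetric bright "stripe"
w = np.sign(np.gradient(g))        # unit gradient direction, a 1-D analogue of the eigenvector field
ridge = np.abs(np.gradient(w))     # |divergence|: large only where the gradient direction flips
print(x[np.argmax(ridge)])         # approximately 0.0, i.e. the centre of the stripe
```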
The technical scheme of the present invention is realized as follows:
A lane line detection method based on a ridge measure comprises the following steps:
Step 1: collect an original image Img of the road ahead of the vehicle;
Step 2: determine the region of interest;
Step 3: calculate the ridge measure values of the image within the region of interest;
Step 4: calculate the mean value p and the variance σ of the ridge measure values of all pixels; regard every pixel whose ridge measure value is larger than p - 3σ as a potential lane line feature point and set it to 1, and set the remaining pixels of the image to 0, so as to obtain a binary image containing only the potential lane line feature points;
Step 5: screen the lane line feature points extracted in step 4 to obtain a binary image Img_B containing only lane line segments;
Step 6: perform a conventional straight-line Hough transform on the binary image Img_B obtained in step 5 and obtain the straight-line model parameters.
In a preferred scheme, step 1 is implemented as follows: the original image Img of the road ahead of the vehicle is collected by a camera mounted inside or outside the vehicle.
In a preferred scheme, the region of interest in step 2 is determined as follows: according to the internal and external parameters of the camera, the region of the camera view below the ground-plane vanishing line and within the left and right image borders is taken as the region of interest.
In a preferred scheme, step 3 is implemented as follows:
Step 3-1: convolve the grayscale image g(x) of the original ROI with a two-dimensional Gaussian filter $G_{\sigma_d}$:
$L_{\sigma_d}(x) = G_{\sigma_d}(x) * g(x)$;
where $G_{\sigma_d}$ is an anisotropic Gaussian kernel with covariance matrix $\Sigma = \mathrm{diag}(\sigma_{dx}, \sigma_{dy})$, $\sigma_{dy}$ is the constant $H_R$, and $\sigma_{dx}$ is a variable whose value is half the lane marking width corresponding to each image row;
Step 3-2: compute, for each pixel x of the image, the gradient vector field along the row direction u and the column direction v:
$w_{\sigma_d}(x) = (\partial_u L_{\sigma_d}(x), \partial_v L_{\sigma_d}(x))^T$;
in addition, compute the matrix:
$s_{\sigma_d}(x) = w_{\sigma_d}(x) \cdot (w_{\sigma_d}(x))^T$;
Step 3-3: compute the structure tensor field:
$s_{\sigma_d\sigma_i}(x) = G_{\sigma_i}(x) * s_{\sigma_d}(x)$;
where $G_{\sigma_i}$ is another Gaussian kernel;
Step 3-4: let $w_{\sigma_d\sigma_i}(x)$ be the eigenvector of $s_{\sigma_d\sigma_i}(x)$ corresponding to its maximum eigenvalue; the ridge measure $R_{\sigma_d\sigma_i}(x)$ of a pixel x is then computed as
$R_{\sigma_d\sigma_i}(x) = |\mathrm{div}(w_{\sigma_d\sigma_i}(x))|$;
where div denotes the divergence;
Step 3-5: carry out the ridge measure calculation for all pixels in the region of interest of the image.
In a preferred scheme, step 5 comprises the following steps:
Step 5-1: line segment statistics: count all continuous line segments in the binary image and regard each segment as a unit $U_i$;
Step 5-2: line segment parameter calculation: for each segment unit $U_i$, compute its length $l_i$, its average slope $a_i$ and its slope consistency $\delta_i$;
the average slope $a_i$ of a segment unit $U_i$ is computed as
$a_i = \frac{1}{n-1}\sum_{k=1}^{n-1} a_k$;
where $a_k$, called the sub-slope, is the slope between two adjacent points of the segment, computed from the coordinates $(u_k, v_k)$ of the segment points;
the slope consistency $\delta_i$ of a segment unit $U_i$ is computed as the standard deviation of the sub-slopes:
$\delta_i = \sqrt{E[(a_k - a_i)^2]}$;
Step 5-3: perform line segment screening and obtain the binary image Img_B that contains only lane line segments after the interfering segments have been removed.
In a preferred scheme, the screening rules for the line segments in step 5-3 are:
1) remove segments whose length is less than $0.07H_R$, i.e., short segments;
2) remove segments whose average slope lies outside the ranges [π/8, 3π/8] and [5π/8, 7π/8], i.e., segments whose slope differs too much from that of the lane lines of the host vehicle's lane;
3) remove segments whose slope consistency satisfies $\delta_i > 6.73$, i.e., irregularly shaped segments.
In a preferred scheme, step 6 comprises the following steps:
Step 6-1: traverse every pixel (x, y) of image Img_B and compute ρ = x·cos(θ) + y·sin(θ) for θ ∈ [0°, 180°], obtaining the family of straight lines {(ρ, θ) | θ ∈ [0°, 180°]} passing through pixel (x, y);
where (x, y) denotes the position of a pixel in image Img_B, ρ denotes the distance from the origin, i.e., the lower-left corner of image Img_B, to a straight line passing through pixel (x, y), and θ denotes the angle, θ ∈ [0°, 180°];
Step 6-2: map the line families {(ρ, θ) | θ ∈ [0°, 180°]} of all pixels (x, y) in image Img_B into the H(ρ, θ) space to obtain the ρ-θ parameter-space accumulator image Img_H;
Step 6-3: find the two maxima $M_1$ and $M_2$ in the upper half and the lower half of the ρ-θ parameter-space accumulator image Img_H, respectively; their corresponding ρ-θ parameter pairs $(\rho_l, \theta_l)$ and $(\rho_r, \theta_r)$ are the straight-line model parameters of the left and right lane lines in polar form within the region of interest ROI;
after further conversion, the straight-line models of the left and right lane lines in the pixel coordinate system are
$x\sin\theta_l + y\cos\theta_l = \rho_l$, $x\sin\theta_r + y\cos\theta_r = \rho_r$;
where the subscript l denotes the left lane line and r denotes the right lane line.
Beneficial effects of the present invention:
The present invention makes full use of the distribution characteristics of the lane lines in the near and far fields of view of the image; through ridge-measure feature point extraction and the Hough transform, the lane line edge features are effectively extracted and a straight-line model is established. Each of the key steps (ridge measure, feature point screening strategy, Hough transform) adopts algorithms with strong adaptability and a certain degree of fault tolerance, which greatly improves the stability and robustness of the present invention.
Brief description of the drawings
Fig. 1 is the flow chart of the ridge-measure-based lane line detection method proposed by the present invention.
Detailed description of the invention
The present invention will be further described below in conjunction with the drawings and specific embodiments.
Fig. 1 shows the flow chart of the ridge-measure-based lane line detection method proposed by the present invention; the method specifically comprises the following steps:
Step 1: collect an original image Img of the road ahead of the vehicle.
While the vehicle is moving forward, the original image Img of the road ahead is collected by a camera mounted inside or outside the vehicle.
Step 2: determine the region of interest ROI.
According to the internal and external parameters of the camera, the region of the camera view below the ground-plane vanishing line and within the left and right image borders is taken as the region of interest ROI; let its height and width in the image be $H_R$ and $W_R$, respectively.
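As an illustration of this step, the sketch below selects the ROI from the calibrated camera parameters under a simple pinhole model. The horizon formula, the assumption of negligible roll and the helper names are not taken from the patent, which only requires the region below the ground-plane vanishing line.

```python
import numpy as np

def horizon_row(fy: float, cy: float, pitch_rad: float) -> int:
    """Row index of the ground-plane vanishing line for a pinhole camera pitched
    down by pitch_rad (roll assumed negligible): a horizontal ray projects
    fy*tan(pitch) pixels above the principal point, with v growing downward."""
    return int(round(cy - fy * np.tan(pitch_rad)))

def extract_roi(img_gray: np.ndarray, fy: float, cy: float, pitch_rad: float) -> np.ndarray:
    """Keep only the image strip below the vanishing line (the region of interest)."""
    H, _ = img_gray.shape
    top = int(np.clip(horizon_row(fy, cy, pitch_rad), 0, H - 1))
    return img_gray[top:, :]
```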
Step 3: calculate the ridge measure values of the image within the region of interest ROI. The concrete calculation comprises the following steps:
Step 3-1: convolve the grayscale image g(x) of the original ROI with a two-dimensional Gaussian filter $G_{\sigma_d}$:
$L_{\sigma_d}(x) = G_{\sigma_d}(x) * g(x)$   (1)
In formula (1), $G_{\sigma_d}$ is an anisotropic Gaussian kernel with covariance matrix $\Sigma = \mathrm{diag}(\sigma_{dx}, \sigma_{dy})$ (here Σ denotes the covariance matrix, not a summation sign), where $\sigma_{dy}$ is the constant $H_R$ and $\sigma_{dx}$ is a variable whose value is half the lane marking width (measured in pixels) corresponding to each image row.
Step 3-2: compute, for each pixel x of the image, the gradient vector field along the row direction u and the column direction v:
$w_{\sigma_d}(x) = (\partial_u L_{\sigma_d}(x), \partial_v L_{\sigma_d}(x))^T$   (2)
In addition, compute the matrix:
$s_{\sigma_d}(x) = w_{\sigma_d}(x) \cdot (w_{\sigma_d}(x))^T$   (3)
Step 3-3: compute the structure tensor field:
$s_{\sigma_d\sigma_i}(x) = G_{\sigma_i}(x) * s_{\sigma_d}(x)$   (4)
In formula (4), $G_{\sigma_i}$ is another Gaussian kernel.
Step 3-4: let $w_{\sigma_d\sigma_i}(x)$ be the eigenvector of $s_{\sigma_d\sigma_i}(x)$ corresponding to its maximum eigenvalue; the ridge measure $R_{\sigma_d\sigma_i}(x)$ of a pixel x is then computed as
$R_{\sigma_d\sigma_i}(x) = |\mathrm{div}(w_{\sigma_d\sigma_i}(x))|$   (5)
In formula (5), div denotes the divergence.
Step 3-5: carry out the ridge measure calculation for all pixels within the region of interest ROI of the image.
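A hedged sketch of the step-3 chain (anisotropic smoothing, gradient field, structure tensor, absolute divergence of the dominant eigenvector field) is given below. The discrete filters, the per-row σ_dx schedule, the integration scale σ_i and the eigenvector sign handling are assumptions; the patent fixes only the sequence of operations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def ridge_measure(roi: np.ndarray, lane_width_px: np.ndarray,
                  sigma_dy: float, sigma_i: float = 3.0) -> np.ndarray:
    """Per-pixel ridge measure for a grayscale ROI.

    roi           : H_R x W_R grayscale region of interest
    lane_width_px : expected lane-marking width in pixels for each ROI row;
                    sigma_dx for that row is taken as half this width
    sigma_dy      : smoothing scale along the row-index (vertical) direction
    sigma_i       : integration scale of the structure tensor (assumed value)
    """
    g = roi.astype(np.float64)

    # Step 3-1: anisotropic Gaussian smoothing with a row-dependent sigma_dx.
    L = gaussian_filter1d(g, sigma=sigma_dy, axis=0)          # constant sigma_dy vertically
    for r in range(L.shape[0]):                               # per-row horizontal smoothing
        L[r] = gaussian_filter1d(L[r], sigma=max(lane_width_px[r] / 2.0, 0.5))

    # Step 3-2: gradient field w and the outer products s = w w^T (three distinct entries).
    dv, du = np.gradient(L)            # dv: derivative along rows, du: along columns
    s_uu, s_uv, s_vv = du * du, du * dv, dv * dv

    # Step 3-3: structure tensor field = Gaussian-smoothed outer products.
    S_uu = gaussian_filter(s_uu, sigma_i)
    S_uv = gaussian_filter(s_uv, sigma_i)
    S_vv = gaussian_filter(s_vv, sigma_i)

    # Step 3-4: dominant eigenvector of each 2x2 tensor, then |divergence| of that field.
    theta = 0.5 * np.arctan2(2.0 * S_uv, S_uu - S_vv)  # orientation of the max-eigenvalue axis
    wu, wv = np.cos(theta), np.sin(theta)
    sign = np.sign(wu * du + wv * dv)                  # resolve the +/- eigenvector ambiguity
    sign[sign == 0] = 1.0
    wu, wv = wu * sign, wv * sign

    div = np.gradient(wu, axis=1) + np.gradient(wv, axis=0)
    return np.abs(div)
```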
Step 4: in this step, the mean value p and the variance σ of the ridge measure values of all pixels are calculated; every pixel whose ridge measure value is larger than p - 3σ is regarded as a potential lane line feature point and set to 1, while the remaining pixels of the image are set to 0. A binary image containing only the potential lane line feature points is thus obtained.
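A minimal sketch of the step-4 thresholding; reading the patent's "variance σ" as the standard deviation of the ridge-measure values is an assumption.

```python
import numpy as np

def threshold_ridge(R: np.ndarray) -> np.ndarray:
    """Binarize the ridge-measure map at p - 3*sigma, as stated in step 4."""
    p = R.mean()                     # mean value p of the ridge measure values
    sigma = R.std()                  # read here as the standard deviation
    return (R > p - 3.0 * sigma).astype(np.uint8)   # 1 = potential feature point, 0 = background
```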
Step 5: screen the lane line feature points extracted in step 4 to obtain the binary image Img_B that contains only lane line segments. This specifically comprises the following steps:
Step 5-1: line segment statistics. The binary image contains a large number of line segments; in this step all continuous line segments are counted, and each segment is regarded as a unit $U_i$.
Step 5-2: line segment parameter calculation. For each segment unit $U_i$, compute its length $l_i$ (i.e., the number n of points contained in the segment), its average slope $a_i$ and its slope consistency $\delta_i$.
The average slope $a_i$ of a segment unit $U_i$ is computed as
$a_i = \frac{1}{n-1}\sum_{k=1}^{n-1} a_k$   (6)
In formula (6), $a_k$, called the sub-slope, is the slope between two adjacent points of the segment, computed from the coordinates $(u_k, v_k)$ of the segment points.
The slope consistency $\delta_i$ of a segment unit $U_i$ is computed as the standard deviation of the sub-slopes:
$\delta_i = \sqrt{E[(a_k - a_i)^2]}$   (7)
Step 5-3: line segment screening. There are three screening rules:
1) remove segments whose length is less than $0.07H_R$, i.e., short segments;
2) remove segments whose average slope lies outside the ranges [π/8, 3π/8] and [5π/8, 7π/8], i.e., segments whose slope differs too much from that of the lane lines of the host vehicle's lane;
3) remove segments whose slope consistency satisfies $\delta_i > 6.73$, i.e., irregularly shaped segments.
In this way, the binary image Img_B that contains only lane line segments, with the interfering segments removed, is obtained.
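The sketch below illustrates step 5 under stated assumptions: segments are taken as 8-connected components of the binary image, the sub-slopes are measured between consecutive points ordered by row, the "mean square deviation" of the sub-slopes is read as their standard deviation, and the average slope is converted to an angle in [0, π) for rule 2. None of these details is spelled out in the patent.

```python
import numpy as np
from scipy.ndimage import label

def screen_segments(binary: np.ndarray, H_R: int, max_delta: float = 6.73) -> np.ndarray:
    """Keep only the connected segments that pass the three screening rules of step 5."""
    out = np.zeros_like(binary)
    labels, num = label(binary, structure=np.ones((3, 3), dtype=int))  # 8-connected units U_i
    for i in range(1, num + 1):
        vs, us = np.nonzero(labels == i)              # (row v, column u) of the segment points
        n = len(us)
        if n < 2 or n < 0.07 * H_R:                   # rule 1: remove short segments
            continue
        order = np.lexsort((us, vs))                  # walk the segment row by row
        u = us[order].astype(float)
        v = vs[order].astype(float)
        du, dv = np.diff(u), np.diff(v)
        a_k = np.divide(dv, du, out=np.full_like(dv, 1e3), where=du != 0)  # sub-slopes
        a_i = a_k.mean()                              # average slope of U_i
        delta_i = a_k.std()                           # slope consistency (std of the sub-slopes)
        angle = np.arctan(a_i) % np.pi                # average slope expressed as an angle in [0, pi)
        ok_angle = (np.pi / 8 <= angle <= 3 * np.pi / 8) or (5 * np.pi / 8 <= angle <= 7 * np.pi / 8)
        if not ok_angle:                              # rule 2: slope too far from plausible lane lines
            continue
        if delta_i > max_delta:                       # rule 3: irregularly shaped segment
            continue
        out[labels == i] = 1
    return out
```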
Step 6: because the radius of curvature of a highway is relatively large, the lane lines can be approximately treated with a straight-line model; for the binary image Img_B obtained in step 5, a conventional straight-line Hough transform is carried out and the straight-line model parameters are obtained. This specifically comprises the following steps:
Step 6-1: traverse every pixel (x, y) of image Img_B and compute ρ = x·cos(θ) + y·sin(θ) for θ ∈ [0°, 180°], obtaining the family of straight lines {(ρ, θ) | θ ∈ [0°, 180°]} passing through pixel (x, y); here (x, y) denotes the position of a pixel in image Img_B, ρ denotes the distance from the origin, i.e., the lower-left corner of image Img_B, to a straight line passing through pixel (x, y), and θ denotes the angle, θ ∈ [0°, 180°].
Step 6-2: map the line families {(ρ, θ) | θ ∈ [0°, 180°]} of all pixels (x, y) in image Img_B into the H(ρ, θ) space to obtain the ρ-θ parameter-space accumulator image Img_H.
Step 6-3: find the two maxima $M_1$ and $M_2$ in the upper half and the lower half of the ρ-θ parameter-space accumulator image Img_H, respectively. Their corresponding ρ-θ parameter pairs $(\rho_l, \theta_l)$ and $(\rho_r, \theta_r)$ are the straight-line model parameters of the left and right lane lines in polar form within the region of interest ROI.
After further conversion, the straight-line models of the left and right lane lines in the pixel coordinate system are:
$x\sin\theta_l + y\cos\theta_l = \rho_l$, $x\sin\theta_r + y\cos\theta_r = \rho_r$   (8)
where the subscript l denotes the left lane line and r denotes the right lane line.
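A sketch of the step-6 Hough accumulation and peak search follows. Measuring (x, y) from the lower-left corner of Img_B matches the text; splitting the accumulator at θ = 90° (one half per lane-line slope sign) and the left/right labelling of the two peaks are assumptions about what the patent calls the upper and lower halves of Img_H.

```python
import numpy as np

def hough_lane_lines(img_b: np.ndarray, theta_res_deg: float = 1.0, rho_res: float = 1.0):
    """Return two (rho, theta) pairs in the polar form rho = x*cos(theta) + y*sin(theta),
    with (x, y) measured from the lower-left corner of img_b."""
    H, W = img_b.shape
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res_deg))
    diag = np.hypot(H, W)
    rhos = np.arange(-diag, diag, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)

    vs, us = np.nonzero(img_b)
    xs, ys = us.astype(float), (H - 1 - vs).astype(float)     # origin at the lower-left corner
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    cols = np.arange(len(thetas))
    for x, y in zip(xs, ys):
        rho = x * cos_t + y * sin_t                           # one rho per theta for this point
        idx = np.clip(np.round((rho + diag) / rho_res).astype(int), 0, len(rhos) - 1)
        acc[idx, cols] += 1                                   # vote in the rho-theta accumulator

    def peak(block, theta_offset):
        r, t = np.unravel_index(np.argmax(block), block.shape)
        return rhos[r], thetas[t + theta_offset]

    half = len(thetas) // 2
    return peak(acc[:, :half], 0), peak(acc[:, half:], half)  # one peak per half (M1, M2)
```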
The specific embodiment of the invention
Using the method of the present invention, lane line detection software was first written in C++, and the camera was mounted on the vehicle (either inside or outside). The internal and external parameters of the camera were then calibrated, and forward-view images were acquired while the vehicle was driving; the captured original images (720x480) were fed into the lane line detection software for processing. In total, about 90 hours of video were collected under various working conditions. In fair weather the lane line detection success rate of the present invention is about 98%, and under adverse conditions such as night, rain and snow it is still about 96%. The average processing time per frame is about 50 ms, with the running environment being Windows 7 on a quad-core 2.4 GHz CPU.
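For orientation, a per-frame pipeline corresponding to steps 1 through 6 might be wired together as below, reusing the hypothetical helpers sketched in the preceding sections; the camera parameters and the lane-width schedule are placeholder assumptions, not values from the patent.

```python
import numpy as np

def detect_lane_lines(frame_gray: np.ndarray, fy: float, cy: float, pitch_rad: float):
    """One frame through steps 1-6; returns the polar parameters of the two lane lines."""
    roi = extract_roi(frame_gray, fy, cy, pitch_rad)          # steps 1-2: image and ROI
    H_R, _ = roi.shape
    # Assumed lane-width schedule: markings appear wider towards the bottom (near field).
    lane_width_px = np.linspace(2.0, 20.0, H_R)
    R = ridge_measure(roi, lane_width_px, sigma_dy=H_R)       # step 3: ridge measure
    binary = threshold_ridge(R)                               # step 4: p - 3*sigma threshold
    img_b = screen_segments(binary, H_R)                      # step 5: segment screening
    line_a, line_b = hough_lane_lines(img_b)                  # step 6: Hough transform peaks
    return line_a, line_b                                     # which is left vs right depends on the split
```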
In summary, the present invention makes full use of the lane line features in the near and far fields of view and adopts a lane line detection strategy based on a ridge measure, thereby accurately detecting the lane lines from the provided input source images.
The above is only intended to describe the technical scheme and specific embodiments of the present invention and is not intended to limit its scope of protection; modifications and refinements made without departing from the substance and spirit of the present invention also fall within its scope of protection.

Claims (7)

1. A lane line detection method based on a ridge measure, characterized in that it comprises the following steps:
Step 1: collect an original image Img of the road ahead of the vehicle;
Step 2: determine the region of interest;
Step 3: calculate the ridge measure values of the image within the region of interest;
Step 4: calculate the mean value p and the variance σ of the ridge measure values of all pixels; regard every pixel whose ridge measure value is larger than p - 3σ as a potential lane line feature point and set it to 1, and set the remaining pixels of the image to 0, so as to obtain a binary image containing only the potential lane line feature points;
Step 5: screen the lane line feature points extracted in step 4 to obtain a binary image Img_B containing only lane line segments;
Step 6: perform a conventional straight-line Hough transform on the binary image Img_B obtained in step 5 and obtain the straight-line model parameters.
2. The lane line detection method based on a ridge measure according to claim 1, characterized in that step 1 is implemented as follows: the original image Img of the road ahead of the vehicle is collected by a camera mounted inside or outside the vehicle.
3. The lane line detection method based on a ridge measure according to claim 2, characterized in that the region of interest in step 2 is determined as follows: according to the internal and external parameters of the camera, the region of the camera view below the ground-plane vanishing line and within the left and right image borders is taken as the region of interest.
4. The lane line detection method based on a ridge measure according to claim 1, characterized in that step 3 is implemented as follows:
Step 3-1: convolve the grayscale image g(x) of the original ROI with a two-dimensional Gaussian filter $G_{\sigma_d}$:
$L_{\sigma_d}(x) = G_{\sigma_d}(x) * g(x)$;
where $G_{\sigma_d}$ is an anisotropic Gaussian kernel with covariance matrix $\Sigma = \mathrm{diag}(\sigma_{dx}, \sigma_{dy})$, $\sigma_{dy}$ is the constant $H_R$, and $\sigma_{dx}$ is a variable whose value is half the lane marking width corresponding to each image row;
Step 3-2: compute, for each pixel x of the image, the gradient vector field along the row direction u and the column direction v:
$w_{\sigma_d}(x) = (\partial_u L_{\sigma_d}(x), \partial_v L_{\sigma_d}(x))^T$;
in addition, compute the matrix:
$s_{\sigma_d}(x) = w_{\sigma_d}(x) \cdot (w_{\sigma_d}(x))^T$;
Step 3-3: compute the structure tensor field:
$s_{\sigma_d\sigma_i}(x) = G_{\sigma_i}(x) * s_{\sigma_d}(x)$;
where $G_{\sigma_i}$ is another Gaussian kernel;
Step 3-4: let $w_{\sigma_d\sigma_i}(x)$ be the eigenvector of $s_{\sigma_d\sigma_i}(x)$ corresponding to its maximum eigenvalue; the ridge measure $R_{\sigma_d\sigma_i}(x)$ of a pixel x is then computed as
$R_{\sigma_d\sigma_i}(x) = |\mathrm{div}(w_{\sigma_d\sigma_i}(x))|$;
where div denotes the divergence;
Step 3-5: carry out the ridge measure calculation for all pixels within the region of interest of the image.
5. The lane line detection method based on a ridge measure according to claim 1, characterized in that step 5 comprises the following steps:
Step 5-1: line segment statistics: count all continuous line segments in the binary image and regard each segment as a unit $U_i$;
Step 5-2: line segment parameter calculation: for each segment unit $U_i$, compute its length $l_i$, its average slope $a_i$ and its slope consistency $\delta_i$;
the average slope $a_i$ of a segment unit $U_i$ is computed as
$a_i = \frac{1}{n-1}\sum_{k=1}^{n-1} a_k$;
where $a_k$, called the sub-slope, is the slope between two adjacent points of the segment, computed from the coordinates $(u_k, v_k)$ of the segment points;
the slope consistency $\delta_i$ of a segment unit $U_i$ is computed as the standard deviation of the sub-slopes:
$\delta_i = \sqrt{E[(a_k - a_i)^2]}$;
Step 5-3: perform line segment screening and obtain the binary image Img_B that contains only lane line segments after the interfering segments have been removed.
6. The lane line detection method based on a ridge measure according to claim 5, characterized in that the screening rules for the line segments in step 5-3 are:
1) remove segments whose length is less than $0.07H_R$, i.e., short segments;
2) remove segments whose average slope lies outside the ranges [π/8, 3π/8] and [5π/8, 7π/8], i.e., segments whose slope differs too much from that of the lane lines of the host vehicle's lane;
3) remove segments whose slope consistency satisfies $\delta_i > 6.73$, i.e., irregularly shaped segments.
7. The lane line detection method based on a ridge measure according to claim 1, characterized in that step 6 comprises the following steps:
Step 6-1: traverse every pixel (x, y) of image Img_B and compute ρ = x·cos(θ) + y·sin(θ) for θ ∈ [0°, 180°], obtaining the family of straight lines {(ρ, θ) | θ ∈ [0°, 180°]} passing through pixel (x, y);
where (x, y) denotes the position of a pixel in image Img_B, ρ denotes the distance from the origin, i.e., the lower-left corner of image Img_B, to a straight line passing through pixel (x, y), and θ denotes the angle, θ ∈ [0°, 180°];
Step 6-2: map the line families {(ρ, θ) | θ ∈ [0°, 180°]} of all pixels (x, y) in image Img_B into the H(ρ, θ) space to obtain the ρ-θ parameter-space accumulator image Img_H;
Step 6-3: find the two maxima $M_1$ and $M_2$ in the upper half and the lower half of the ρ-θ parameter-space accumulator image Img_H, respectively; their corresponding ρ-θ parameter pairs $(\rho_l, \theta_l)$ and $(\rho_r, \theta_r)$ are the straight-line model parameters of the left and right lane lines in polar form within the region of interest ROI;
after further conversion, the straight-line models of the left and right lane lines in the pixel coordinate system are
$x\sin\theta_l + y\cos\theta_l = \rho_l$, $x\sin\theta_r + y\cos\theta_r = \rho_r$;
where the subscript l denotes the left lane line and r denotes the right lane line.
CN201610119349.6A 2016-03-02 2016-03-02 A kind of method for detecting lane lines based on ridge measurement Active CN105678287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610119349.6A CN105678287B (en) 2016-03-02 2016-03-02 A kind of method for detecting lane lines based on ridge measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610119349.6A CN105678287B (en) 2016-03-02 2016-03-02 A kind of method for detecting lane lines based on ridge measurement

Publications (2)

Publication Number Publication Date
CN105678287A true CN105678287A (en) 2016-06-15
CN105678287B CN105678287B (en) 2019-04-30

Family

ID=56307843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610119349.6A Active CN105678287B (en) 2016-03-02 2016-03-02 A kind of method for detecting lane lines based on ridge measurement

Country Status (1)

Country Link
CN (1) CN105678287B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608924A (en) * 2009-05-20 2009-12-23 电子科技大学 A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform
CN103473762A (en) * 2013-08-29 2013-12-25 奇瑞汽车股份有限公司 Lane line detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XINXIN DU等: "Vision-based approach towards lane line detection and vehicle localization", 《MACHINE VISION AND APPLICATIONS》 *
WANG Hai et al.: "Lane line detection method based on direction-variable Haar features and a hyperbolic model", Journal of Traffic and Transportation Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803066A (en) * 2016-12-29 2017-06-06 广州大学 A kind of vehicle yaw angle based on Hough transform determines method
CN107229908A (en) * 2017-05-16 2017-10-03 浙江理工大学 A kind of method for detecting lane lines
CN107284455A (en) * 2017-05-16 2017-10-24 浙江理工大学 A kind of ADAS systems based on image procossing
CN107284455B (en) * 2017-05-16 2019-06-21 浙江理工大学 A kind of ADAS system based on image procossing
CN107229908B (en) * 2017-05-16 2019-11-29 浙江理工大学 A kind of method for detecting lane lines
CN109325389A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Lane detection method, apparatus and vehicle
CN110147698A (en) * 2018-02-13 2019-08-20 Kpit技术有限责任公司 System and method for lane detection
CN110209924A (en) * 2018-07-26 2019-09-06 腾讯科技(深圳)有限公司 Recommended parameter acquisition methods, device, server and storage medium
CN109948552A (en) * 2019-03-20 2019-06-28 四川大学 It is a kind of complexity traffic environment in lane detection method
CN112712091A (en) * 2019-10-27 2021-04-27 北京易讯理想科技有限公司 Image characteristic region detection method

Also Published As

Publication number Publication date
CN105678287B (en) 2019-04-30


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant