CN109190483B - Lane line detection method based on vision - Google Patents
- Publication number
- CN109190483B (granted from application CN201810886340.7A)
- Authority
- CN
- China
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
The invention provides a vision-based lane line detection method. An image is collected by a camera and converted into a gray image; the central point of the gray image is set as a reference point, and a region of interest is demarcated from it. Rising edge points and falling edge points are extracted in the region of interest by a line-scanning gradient-value method; inverse perspective transformation maps them to inverse-perspective rising and falling edge points, which are then filtered according to the lane width characteristic to obtain screened rising and falling edge points. A customized parameter space transformation is applied to the screened rising and falling edge points, the numbers of screened rising and falling edge points whose line angles and lateral offsets are equal are counted, a lane curve is fitted, and the lane line position of the current frame road image is constructed. Finally, the lane line position of the next frame road image is associated with that of the current frame.
Description
Technical Field
The invention relates to the technical field of signal processing, and in particular to a vision-based lane line detection method.
Background
A leading cause of frequent traffic accidents is the driver unintentionally drifting out of the lane. With vehicle ownership rising year by year, the road environment is becoming increasingly complex, and automatic or assisted driving has become a research hot spot aimed at making travel more efficient. Lane line recognition is the most basic component: it is the key unit that allows a vehicle to interpret the road scene reliably, and it provides accurate position information for lane departure warning, thereby relieving traffic pressure to some extent and reducing the occurrence of accidents. Implementing lane line recognition and departure warning on an embedded system offers low cost, low power consumption, miniaturization, and easy integration, and therefore has high practical value and broad application prospects.
With the development of computer technology, systems that warn drivers of danger in time have great potential to save lives. Systems that assist the driver are called advanced driver assistance systems (ADAS) and provide functions such as adaptive cruise control, collision avoidance, blind spot detection, and traffic sign detection; lane departure warning is one such function. Lane detection means locating the lane markings on the road and presenting their position information to an intelligent system. In an intelligent transportation system, intelligent vehicles integrated with intelligent infrastructure can provide a safer environment and better traffic conditions.
At present, lane line detection methods fall into three main categories: model-based, feature-based, and learning-based. Feature-based methods locate the lane lines in the road image using basic lane characteristics such as color and texture. Model-based methods analyze the lane with straight-line or curve templates; once a model is defined, detection is straightforward. Deep-learning methods label a large sample set in advance and train a convolutional neural network on it to obtain network parameters for lane detection and classification. Feature- and model-based road information detection remains necessary alongside deep learning: using models and features as a reference, a deep learning method can identify lane lines more accurately, and under limited hardware conditions model- and feature-based methods retain great potential.
Lane line detection algorithms face the following problems. Detection must run in real time. Road conditions are complex, and occlusion, missing lane lines, ground markings, tunnels, and the like lower the detection rate. Since the lane line position changes little across consecutive frames except when changing lanes, multi-frame detection must be stable so that interfering lines and the true lane lines are not repeatedly swapped. Before a lane change, the lane line position must be detected accurately to provide correct guidance for departure warning. Given this situation, how to achieve efficient, real-time, and stable lane line detection is a problem to be solved in the field.
Disclosure of Invention
The method designs an efficient, strongly real-time lane line detection algorithm for an embedded platform and has strong adaptability. It aims to solve the problem of low lane line detection efficiency in the prior art.
In order to achieve the above object, the present invention provides a lane line detection method based on vision, which comprises the following steps:
step 1: acquiring an image through a camera, converting the acquired image into a gray image, setting a central point of the gray image as a reference point, and defining an interested area according to the reference point;
step 2: respectively extracting an ascending edge point and a descending edge point in the region of interest by a line scanning gradient value method, respectively obtaining an inverse perspective ascending edge point and an inverse perspective descending edge point by the ascending edge point and the descending edge point through inverse perspective transformation, and respectively obtaining a screened ascending edge point and a screened descending edge point by the inverse perspective ascending edge point and the inverse perspective descending edge point through lane width characteristic filtering;
Step 3: performing customized parameter space transformation on the screened rising edge points and the screened falling edge points, counting the numbers of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal, obtaining candidate lane lines, and fitting a lane curve;
Step 4: associating the lane line position of the next frame road image with the lane line position of the current frame road image.
Preferably, the width of the collected image in the step 1 is u, and the height is v;
the central point of the gray image in step 1 is (u/2, v/2), and this central point is set as the reference point of the gray image;
the step 1 of defining the region of interest according to the reference points comprises the following steps:
according to the reference pointDefining a rectangular square block with the width value range of the rectangular square block asThe height of the rectangular block is in the range of
Preferably, the step 2 of extracting the edge points in the region of interest by the line scanning gradient value method is as follows:
the edge intensity of each pixel on a horizontal scan line is calculated as:

E(i, j) = Σ_{k=1..L} I(i, j+k) − Σ_{k=1..L} I(i, j−k)

wherein I(i, j+k) represents the pixel value at the i-th row and (j+k)-th column of the region of interest, i ranges over the image rows of the region of interest, j ranges over its image columns, and L represents the filter length of each row;
the pixel edge intensities are compared with a first threshold Th1 and a second threshold Th2, and the pixel points in the region of interest are classified accordingly: when E(i, j) > Th1, I(i, j) is a rising edge point; when E(i, j) < Th2, I(i, j) is a falling edge point;
converting the rising edge point and the falling edge point in the region of interest into edge feature points in the actual road under a world coordinate system through inverse perspective transformation, namely the inverse perspective rising edge point and the inverse perspective falling edge point in the step 2;
interference points among the inverse-perspective rising and falling edge points are removed by lane width characteristic filtering: for an inverse-perspective rising edge point and an inverse-perspective falling edge point on the same image row of the region of interest, the Euclidean distance dis between them is calculated, and the pair is kept if |dis − D| ≤ dh, where D is the distance threshold and dh is the allowed distance error. The retained inverse-perspective rising edge points are the screened rising edge points of step 2:

(x_m, y_m), m ∈ [1, M]

wherein M is the number of screened rising edge points; and the retained inverse-perspective falling edge points are the screened falling edge points of step 2:

(x_n, y_n), n ∈ [1, N]

wherein N is the number of screened falling edge points;
preferably, the customized parameter space transformation in step 3 is:
the self-defined parameter space of the rising edge points after screening is as follows:
x_m = p_{k,m} + y_m * tan(θ_{k,m})

wherein (x_m, y_m) are the coordinates of the screened rising edge points from step 2, θ_{k,m} denotes the angle of the screened rising edge point line with θ_{k,m} ∈ [α, β] and k ∈ [1, K], K denotes the number of candidate angles of the rising edge point line, and p_{k,m} denotes the lateral offset of the rising edge point line; traversing θ_{k,m} yields the corresponding p_{k,m};
The customized parameter space of the screened falling edge points is:

x_n = p_{l,n} + y_n * tan(θ_{l,n})

wherein (x_n, y_n) are the coordinates of the screened falling edge points from step 2, θ_{l,n} denotes the angle of the falling edge point line with θ_{l,n} ∈ [α, β] and l ∈ [1, L], L here denoting the number of candidate angles of the falling edge point line, and p_{l,n} denotes the lateral offset of the falling edge point line; traversing θ_{l,n} yields the corresponding p_{l,n};
The counting in step 3 of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal is performed as follows:

in the parameter space defined by the screened rising edge points, the rising edge point line angles and the rising edge point line lateral offsets of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

wherein H_r(p, θ) is the number of screened rising edge points in the r-th group having equal rising edge point line angle and equal rising edge point line lateral offset;

in the parameter space defined by the screened falling edge points, the falling edge point line angles and the falling edge point line lateral offsets of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

wherein H_d(p, θ) is the number of screened falling edge points in the d-th group having equal falling edge point line angle and equal falling edge point line lateral offset;
among the N_r groups of screened rising edge points with equal rising edge point line angle and equal lateral offset, the groups are sorted by H_r(p, θ) from high to low and the top G groups are selected:

(p_g, θ_g), g ∈ [1, G]

different (p_g, θ_g) represent different straight lines according to the rising edge point line angle and the rising edge point line lateral offset values;

among the N_d groups of screened falling edge points with equal falling edge point line angle and equal lateral offset, the groups are sorted by H_d(p, θ) from high to low and the top G groups are likewise selected; different selected parameter pairs represent different straight lines according to the falling edge point line angle and the falling edge point line lateral offset values;
in step 3, the candidate lane lines are obtained as follows:

for the rising edge points, the straight line determined by the parameter values (p_g, θ_g), g ∈ [1, G], is:

x_i = p_g + y_i * tan(θ_g)

wherein y_i ranges over the image rows, x_i is computed from the straight-line formula, and (x_i, y_i) are the coordinates of points on the straight line; taking (x_i, y_i) as reference, the screened edge points are further filtered by outward expansion, and only the rising edge points within the expansion range are retained, δ and φ being set thresholds;

the falling edge points are processed the same way as the rising edge points, and only the falling edge points within the expansion range are retained;
The lane curve fitting in step 3 is as follows:

polynomial fitting is performed on the rising edge points within the expansion range to obtain a rising fitted lane curve and its parameter values;

polynomial fitting is performed on the falling edge points within the expansion range to obtain a falling fitted lane curve and its parameter values;

the polynomial fitting may use the least squares method or a Bezier curve method;

the rising fitted lane curve and the falling fitted lane curve together form the lane line position of the current frame road image;
preferably, the association in step 4 is as follows: the rising fitted lane curve and the falling fitted lane curve of the next frame road image each have their own parameter values; if the differences between these parameter values and those of the current frame remain within the set thresholds, the lane line position of the next frame road image is valid; otherwise it is invalid.
The method accurately detects lane lines in each scene in real time and has very good anti-interference performance. It combines lane structure characteristic information, searches for breakthrough points in real time, proposes a customized parameter space transformation detection algorithm, and is applied in an embedded environment. A region of interest is defined automatically from the vanishing point, which avoids complex computation over the whole image, eliminates redundant information, and improves detection efficiency. Edge gradient values are extracted by line scanning, and the extracted edge points are used for fast inverse perspective transformation; the edge points in the inverse-perspective road image are then fused with the edge points extracted by scanning the original image, interference points are eliminated, and only valid edge feature points are retained, guaranteeing efficient parameter space transformation. After the valid edge information is obtained, a customized parameter space transformation method suited to edge points yields the candidate lane lines, which are then screened using the characteristic information; the obtained lane lines are used to achieve stable detection in subsequent frame images.
Drawings
FIG. 1: flow chart of the method of the invention;
FIG. 2: schematic diagram of the region of interest and scan line settings in the lane line detection algorithm embodiment of the invention;
FIG. 3: edge point extraction result and inverse perspective transformation bird's-eye view in the lane line detection algorithm embodiment of the invention;
FIG. 4: result after width characteristic filtering in the lane line detection algorithm embodiment of the invention;
FIG. 5: schematic diagram of the parameter space in the lane line detection algorithm embodiment of the invention;
FIG. 6: result of fusing the inner boundaries of the lane lines in the lane line detection algorithm embodiment of the invention;
FIG. 7: lane line detection result in the lane line detection algorithm embodiment of the invention.
Detailed Description
To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the embodiments described here are merely illustrative and explanatory and do not limit the invention.
The following describes an embodiment of the present invention with reference to fig. 1 to 7, and specifically includes the following steps:
step 1: acquiring an image through a camera, converting the acquired image into a gray image, setting a central point of the gray image as a reference point, and defining an interested area according to the reference point;
in the step 1, the width u of the acquired image is 1280, and the height v of the acquired image is 720;
the central point of the gray image in step 1 is (u/2, v/2), and this central point is set as the reference point of the gray image;
the step 1 of defining the region of interest according to the reference points comprises the following steps:
according to the reference point (u/2, v/2), a rectangular block is defined; the width of the rectangular block spans a set range about the reference point, and its height spans a corresponding set range.
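The ROI construction of step 1 can be sketched in a few lines. The exact width and height ranges of the rectangle are given by formulas in the original that did not survive extraction, so the fractional sizes below (`w_frac`, `h_frac`) and the function name are illustrative assumptions; only the 1280×720 frame size and the centre reference point come from the embodiment.

```python
# Sketch of step 1: a rectangular region of interest centred on the
# reference point (u/2, v/2). The w_frac/h_frac factors are assumptions,
# since the embodiment's exact range formulas were lost in extraction.

def region_of_interest(width, height, w_frac=0.5, h_frac=0.25):
    """Return (x0, x1, y0, y1) of a rectangle centred on (width/2, height/2)."""
    cx, cy = width // 2, height // 2          # reference point of the gray image
    half_w = int(width * w_frac / 2)          # assumed half-width of the ROI
    half_h = int(height * h_frac / 2)         # assumed half-height of the ROI
    return cx - half_w, cx + half_w, cy - half_h, cy + half_h

x0, x1, y0, y1 = region_of_interest(1280, 720)  # the embodiment's 1280x720 frame
```

Restricting all later scanning to this rectangle is what avoids processing the full image.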
Step 2: respectively extracting an ascending edge point and a descending edge point in the region of interest by a line scanning gradient value method, respectively obtaining an inverse perspective ascending edge point and an inverse perspective descending edge point by the ascending edge point and the descending edge point through inverse perspective transformation, and respectively obtaining a screened ascending edge point and a screened descending edge point by the inverse perspective ascending edge point and the inverse perspective descending edge point through lane width characteristic filtering;
in the step 2, the extraction of the edge points in the region of interest by the line scanning gradient value method is as follows:
the edge intensity of each pixel on a horizontal scan line is calculated as:

E(i, j) = Σ_{k=1..L} I(i, j+k) − Σ_{k=1..L} I(i, j−k)

wherein I(i, j+k) represents the pixel value at the i-th row and (j+k)-th column of the region of interest, i ranges over the image rows of the region of interest, j ranges over its image columns, and L represents the filter length of each row, L = 8;
the pixel edge intensities are compared with the first threshold Th1 and the second threshold Th2, and the pixel points in the region of interest are classified accordingly: when E(i, j) > Th1, I(i, j) is a rising edge point; when E(i, j) < Th2, I(i, j) is a falling edge point, Th2 = −16;
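A minimal sketch of this line-scan classification, assuming the edge strength at column j is the sum of the L pixels to its right minus the sum of the L pixels to its left (the exact kernel of the original formula did not survive extraction). Th2 = −16 is the embodiment's value; the symmetric Th1 = 16 default is an assumption.

```python
# Sketch of step 2 edge extraction on one horizontal scan line.
# Assumed kernel: right-sum minus left-sum over L pixels each side.

def edge_strength(row, j, L):
    right = sum(row[j + 1 : j + 1 + L])   # L pixels to the right of column j
    left = sum(row[j - L : j])            # L pixels to the left of column j
    return right - left

def classify_row(row, L=8, th1=16, th2=-16):
    """Return (rising, falling) column indices on one scan line."""
    rising, falling = [], []
    for j in range(L, len(row) - L):
        e = edge_strength(row, j, L)
        if e > th1:
            rising.append(j)      # dark-to-bright: left boundary of a marking
        elif e < th2:
            falling.append(j)     # bright-to-dark: right boundary of a marking
    return rising, falling
```

On a row containing one bright lane marking, rising indices cluster at its left boundary and falling indices at its right boundary.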
Converting the rising edge point and the falling edge point in the region of interest into edge feature points in the actual road under a world coordinate system through inverse perspective transformation, namely the inverse perspective rising edge point and the inverse perspective falling edge point in the step 2;
interference points among the inverse-perspective rising and falling edge points are removed by lane width characteristic filtering: for an inverse-perspective rising edge point and an inverse-perspective falling edge point on the same image row of the region of interest, the Euclidean distance dis between them is calculated, and the pair is kept if |dis − D| ≤ dh, where D is the distance threshold, a distance of 14 pixels, and dh is the allowed distance error, a distance of 4 pixels. The retained inverse-perspective rising edge points are the screened rising edge points of step 2:

(x_m, y_m), m ∈ [1, M]

wherein M is the number of screened rising edge points; and the retained inverse-perspective falling edge points are the screened falling edge points of step 2:

(x_n, y_n), n ∈ [1, N]

wherein N is the number of screened falling edge points;
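The width filter can be sketched as follows, assuming the inverse-perspective edge points are grouped by image row (the dictionary layout and function name are illustrative); D = 14 and dh = 4 pixels are the embodiment's values.

```python
# Sketch of the lane-width filter of step 2: a rising/falling pair on the
# same row survives only if its spacing is within dh of the expected
# marking width D.

def width_filter(rising, falling, D=14.0, dh=4.0):
    """rising/falling: dicts mapping row -> list of x coordinates."""
    kept_r, kept_f = [], []
    for row, xs_r in rising.items():
        for xr in xs_r:
            for xf in falling.get(row, []):
                dis = abs(xf - xr)          # Euclidean distance on the same row
                if abs(dis - D) <= dh:      # lane-width-consistent pair: keep it
                    kept_r.append((xr, row))
                    kept_f.append((xf, row))
    return kept_r, kept_f
```

Isolated bright spots and wide ground markings produce pairs with the wrong spacing and are discarded.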
Step 3: performing customized parameter space transformation on the screened rising edge points and the screened falling edge points, counting the numbers of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal, obtaining candidate lane lines, and fitting a lane curve;
the customized parameter space transformation in step 3 is:
the self-defined parameter space of the rising edge points after screening is as follows:
x_m = p_{k,m} + y_m * tan(θ_{k,m})

wherein (x_m, y_m) are the coordinates of the screened rising edge points from step 2, θ_{k,m} denotes the angle of the screened rising edge point line with θ_{k,m} ∈ [α, β] and k ∈ [1, K], where α = 1 and β = 75, K denotes the number of candidate angles of the rising edge point line, and p_{k,m} denotes the lateral offset of the rising edge point line; traversing θ_{k,m} yields the corresponding p_{k,m};
The customized parameter space of the screened falling edge points is:

x_n = p_{l,n} + y_n * tan(θ_{l,n})

wherein (x_n, y_n) are the coordinates of the screened falling edge points from step 2, θ_{l,n} denotes the angle of the falling edge point line with θ_{l,n} ∈ [α, β], α = 1 and β = 75, l ∈ [1, L], L here denoting the number of candidate angles of the falling edge point line, and p_{l,n} denotes the lateral offset of the falling edge point line; traversing θ_{l,n} yields the corresponding p_{l,n};
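A sketch of this transform: rearranging x = p + y·tan(θ) gives p = x − y·tan(θ), so each screened point contributes one lateral offset p for every candidate angle θ in [α, β] = [1, 75] degrees. The 1-degree step and the rounding of p to an integer cell are assumptions about the quantisation, which the text does not specify.

```python
# Sketch of the customized parameter-space transform of step 3:
# each screened edge point (x, y) is swept over the candidate angles and
# votes for the offset p = x - y * tan(theta) at each angle.
import math

def to_parameter_space(points, alpha=1, beta=75, step=1):
    """Return a list of (p, theta_deg) votes for all points and angles."""
    votes = []
    for x, y in points:
        for theta in range(alpha, beta + 1, step):
            p = x - y * math.tan(math.radians(theta))
            votes.append((round(p), theta))   # quantise p for accumulation
    return votes
```

Collinear points vote for the same (p, θ) cell, which is what the subsequent counting step exploits.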
The counting in step 3 of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal is performed as follows:

in the parameter space defined by the screened rising edge points, the rising edge point line angles and the rising edge point line lateral offsets of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

wherein H_r(p, θ) is the number of screened rising edge points in the r-th group having equal rising edge point line angle and equal rising edge point line lateral offset;

in the parameter space defined by the screened falling edge points, the falling edge point line angles and the falling edge point line lateral offsets of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

wherein H_d(p, θ) is the number of screened falling edge points in the d-th group having equal falling edge point line angle and equal falling edge point line lateral offset;
among the N_r groups of screened rising edge points with equal rising edge point line angle and equal lateral offset, the groups are sorted by H_r(p, θ) from high to low and the top G groups are selected:

(p_g, θ_g), g ∈ [1, G], G = 10

different (p_g, θ_g) represent different straight lines according to the rising edge point line angle and the rising edge point line lateral offset values;

among the N_d groups of screened falling edge points with equal falling edge point line angle and equal lateral offset, the groups are sorted by H_d(p, θ) from high to low and the top G groups are likewise selected; different selected parameter pairs represent different straight lines according to the falling edge point line angle and the falling edge point line lateral offset values;
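The voting and top-G selection can be sketched with a hash-based accumulator. The original increments H(p, θ) once per matching pair of points; counting occurrences per cell, as below, ranks the cells identically. G = 10 is the embodiment's value.

```python
# Sketch of the step-3 vote counting: identical (p, theta) pairs from
# different edge points accumulate in H(p, theta), and the G strongest
# cells become the candidate lane lines.
from collections import Counter

def top_candidates(votes, G=10):
    """votes: iterable of (p, theta) pairs; returns the G most-voted cells."""
    acc = Counter(votes)                       # H(p, theta) accumulator
    return [cell for cell, _count in acc.most_common(G)]
```

This is the same accumulate-and-peak-pick pattern as a Hough transform, applied to the custom (p, θ) parameterisation.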
in step 3, the candidate lane lines are obtained as follows:

for the rising edge points, the straight line determined by the parameter values (p_g, θ_g), g ∈ [1, G], is:

x_i = p_g + y_i * tan(θ_g)

wherein y_i ranges over the image rows, x_i is computed from the straight-line formula, and (x_i, y_i) are the coordinates of points on the straight line; taking (x_i, y_i) as reference, the screened edge points are further filtered by outward expansion, and only the rising edge points within the expansion range are retained, δ and φ being set thresholds, with δ = 3;

the falling edge points are processed the same way as the rising edge points, and only the falling edge points within the expansion range are retained;
The lane curve fitting in step 3 is as follows:

polynomial fitting is performed on the rising edge points within the expansion range to obtain a rising fitted lane curve and its parameter values;

polynomial fitting is performed on the falling edge points within the expansion range to obtain a falling fitted lane curve and its parameter values;

the polynomial fitting may use the least squares method or a Bezier curve method;

the rising fitted lane curve and the falling fitted lane curve together form the lane line position of the current frame road image;
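A pure-Python sketch of the least-squares branch of the fitting step, assuming a second-order polynomial x = a·y² + b·y + c (the patent allows least squares or Bezier fitting but does not fix the polynomial order, so the quadratic model is an assumption).

```python
# Sketch of step-3 curve fitting: ordinary least squares for
# x = a*y^2 + b*y + c via the 3x3 normal equations.

def fit_quadratic(points):
    """points: list of (x, y); returns (a, b, c) of x = a*y^2 + b*y + c."""
    s = [0.0] * 5                      # sums of y^0 .. y^4
    t = [0.0] * 3                      # sums of x*y^0 .. x*y^2
    for x, y in points:
        for k in range(5):
            s[k] += y ** k
        for k in range(3):
            t[k] += x * y ** k
    # Normal equations (A^T A) w = A^T x with design rows [y^2, y, 1].
    M = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    v = [t[2], t[1], t[0]]
    # Forward elimination (well-conditioned for modest y ranges).
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f * M[i][k]
            v[j] -= f * v[i]
    # Back-substitution.
    w = [0.0] * 3
    for i in (2, 1, 0):
        w[i] = (v[i] - sum(M[i][k] * w[k] for k in range(i + 1, 3))) / M[i][i]
    return tuple(w)                    # (a, b, c)
```

For points lying exactly on a straight lane line, the quadratic term a comes out near zero and the fit reduces to the candidate straight line.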
Step 4: associating the lane line position of the next frame road image with the lane line position of the current frame road image;
the association in step 4 is as follows: the rising fitted lane curve and the falling fitted lane curve of the next frame road image each have their own parameter values;

if the differences between the rising fitted lane curve parameters of the current frame and the next frame remain within the set thresholds α, β, γ, λ, where α = 20, β = 25, γ = 6, λ = 7, the lane line position of the next frame road image is valid; otherwise it is invalid;

if the differences between the falling fitted lane curve parameters of the current frame and the next frame remain within the set thresholds α, β, γ, λ, where α = 20, β = 25, γ = 6, λ = 7, the lane line position of the next frame road image is valid; otherwise it is invalid.
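The inter-frame association can be sketched as an element-wise threshold check on the fitted curve parameters. The embodiment's thresholds are α = 20, β = 25, γ = 6, λ = 7; which threshold guards which parameter is an assumption here, since the exact inequalities were lost in extraction.

```python
# Sketch of step 4: accept the next frame's fitted curve only if each
# parameter stays within its set threshold of the current frame's value.

def lane_is_valid(curr, nxt, thresholds=(20, 25, 6, 7)):
    """curr, nxt: curve-parameter tuples; element-wise |difference| check."""
    return all(abs(c - n) <= th for c, n, th in zip(curr, nxt, thresholds))
```

This guard exploits the observation in the background section that lane position changes little between consecutive frames except during a lane change.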
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A lane line detection method based on vision is characterized by comprising the following steps:
step 1: acquiring an image through a camera, converting the acquired image into a gray image, setting a central point of the gray image as a reference point, and defining an interested area according to the reference point;
step 2: respectively extracting an ascending edge point and a descending edge point in the region of interest by a line scanning gradient value method, respectively obtaining an inverse perspective ascending edge point and an inverse perspective descending edge point by the ascending edge point and the descending edge point through inverse perspective transformation, and respectively obtaining a screened ascending edge point and a screened descending edge point by the inverse perspective ascending edge point and the inverse perspective descending edge point through lane width characteristic filtering;
Step 3: performing customized parameter space transformation on the screened rising edge points and the screened falling edge points, counting the numbers of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal, obtaining candidate lane lines, and fitting a lane curve;
the customized parameter space transformation in step 3 is:
the self-defined parameter space of the rising edge points after screening is as follows:
x_m = p_{k,m} + y_m * tan(θ_{k,m})

wherein (x_m, y_m) are the coordinates of the screened rising edge points from step 2, θ_{k,m} denotes the angle of the screened rising edge point line with θ_{k,m} ∈ [α, β] and k ∈ [1, K], K denotes the number of candidate angles of the rising edge point line, and p_{k,m} denotes the lateral offset of the rising edge point line; traversing θ_{k,m} yields the corresponding p_{k,m};
The customized parameter space of the screened falling edge points is:

x_n = p_{l,n} + y_n * tan(θ_{l,n})

wherein (x_n, y_n) are the coordinates of the screened falling edge points from step 2, θ_{l,n} denotes the angle of the falling edge point line with θ_{l,n} ∈ [α, β] and l ∈ [1, L], L here denoting the number of candidate angles of the falling edge point line, and p_{l,n} denotes the lateral offset of the falling edge point line; traversing θ_{l,n} yields the corresponding p_{l,n};
The counting in step 3 of screened rising edge points and screened falling edge points whose line angles and lateral offsets are equal is performed as follows:

in the parameter space defined by the screened rising edge points, the rising edge point line angles and the rising edge point line lateral offsets of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

wherein H_r(p, θ) is the number of screened rising edge points in the r-th group having equal rising edge point line angle and equal rising edge point line lateral offset;

in the parameter space defined by the screened falling edge points, the falling edge point line angles and the falling edge point line lateral offsets of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

wherein H_d(p, θ) is the number of screened falling edge points in the d-th group having equal falling edge point line angle and equal falling edge point line lateral offset;
From the N_r groups of screened rising edge points with equal rising edge point line angle and equal rising edge point line lateral offset, the first G groups are selected by sorting the H_r(p, θ) values from high to low:
(p_g, θ_g), g ∈ [1, G]; different (p_g, θ_g) represent different straight lines according to the rising edge point line angle and the rising edge point line lateral offset value;
From the N_d groups of screened falling edge points with equal falling edge point line angle and equal falling edge point line lateral offset, the first G groups are selected by sorting the H_d(p, θ) values from high to low:
(p'_g, θ'_g), g ∈ [1, G]; different (p'_g, θ'_g) represent different straight lines according to the falling edge point line angle and the falling edge point line lateral offset value;
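The vote counting and top-G selection can be sketched as below. This is a hedged illustration: each point's (p, θ) pair is quantised, shared line parameters accumulate votes (H(p, θ) += 1), and the first G cells sorted by vote count from high to low are kept. The bin width `p_step` and all names here are assumptions, not taken from the patent.

```python
from collections import Counter

import numpy as np

def top_lines(thetas, offsets, p_step=1.0, G=2):
    """Accumulate votes over quantised (p, theta) cells and return the
    G cells with the most supporting edge points."""
    votes = Counter()
    K, M = offsets.shape
    for k in range(K):
        for m in range(M):
            p_bin = round(offsets[k, m] / p_step) * p_step
            votes[(p_bin, round(float(thetas[k]), 6))] += 1
    # (p_g, theta_g), g in [1, G], ordered by vote count high to low
    return [cell for cell, _ in votes.most_common(G)]
```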
In step 3, the candidate lane lines are obtained as follows:
for the rising edge points, the straight line is determined by the parameter values (p_g, θ_g), g ∈ [1, G], as:
x_i = p_g + y_i · tan θ_g
wherein x_i is the specific value calculated by the straight-line formula and (x_i, y_i) is a coordinate on the straight line; taking (x_i, y_i) as reference, the screened rising edge points obtained in step 2 are further screened by outward expansion, and only the rising edge points (x_m, y_m) within the expansion range |x_m − x_i| ≤ δ and |y_m − y_i| ≤ φ are retained, where δ and φ are set thresholds;
The falling edge points are processed in the same way as the rising edge points, and only the falling edge points (x'_n, y'_n) within the expansion range are retained;
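The outward-expansion screening around one candidate line can be sketched as follows. This is a minimal illustration that keeps only the screened edge points whose horizontal distance to the line x = p_g + y · tan θ_g is within δ; the threshold form and the function name are assumed readings of the claim.

```python
import numpy as np

def points_near_line(points, p_g, theta_g, delta):
    """Keep edge points within horizontal distance delta of the
    candidate line x = p_g + y * tan(theta_g)."""
    pts = np.asarray(points, dtype=float)
    x_line = p_g + pts[:, 1] * np.tan(theta_g)   # x_i at each point's row y_i
    return pts[np.abs(pts[:, 0] - x_line) <= delta]
```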
The fitted lane line in step 3 is obtained as follows:
polynomial fitting is performed on the rising edge points within the expansion range to obtain a rising fitted lane curve and its parameter values;
polynomial fitting is performed on the falling edge points within the expansion range to obtain a falling fitted lane curve and its parameter values;
the polynomial fitting can adopt a least square method or a Bezier curve method;
the rising fitted lane curve and the falling fitted lane curve together form the lane line position of the current frame road image;
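The least-squares option for the polynomial fitting can be sketched as below: a quadratic fit x = a·y² + b·y + c through the retained edge points. Degree 2 is an assumption; the claim fixes neither the polynomial order nor the parameter names, only that a least square method or a Bezier curve method may be used.

```python
import numpy as np

def fit_lane_curve(points, degree=2):
    """Least-squares polynomial fit x = f(y) through edge points."""
    pts = np.asarray(points, dtype=float)
    # np.polyfit returns the curve parameter values, highest power first
    return np.polyfit(pts[:, 1], pts[:, 0], degree)
```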
Step 4: the lane line position of the next frame of road image is associated with the lane line position of the current frame road image.
2. The vision-based lane line detection method of claim 1, wherein: in step 1, the width of the collected image is u and the height of the collected image is v;
the central point of the gray image in step 1 is (u/2, v/2), and this central point is set as the reference point of the gray image;
the region of interest is defined from the reference point in step 1 as follows:
a rectangular block is defined around the reference point (u/2, v/2), the width value range and the height value range of the rectangular block being set with respect to u and v respectively.
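The region-of-interest step of claim 2 can be sketched as follows: take the image centre (u/2, v/2) as the reference point and cut a rectangular block around it. The half-width and half-height fractions below are assumptions standing in for the patent's own value-range formulas.

```python
def region_of_interest(gray, w_frac=0.5, h_frac=0.5):
    """Crop a rectangular block centred on the image reference point."""
    v, u = len(gray), len(gray[0])              # height v, width u
    cx, cy = u // 2, v // 2                     # reference point (u/2, v/2)
    hw, hh = int(u * w_frac / 2), int(v * h_frac / 2)
    return [row[cx - hw:cx + hw] for row in gray[cy - hh:cy + hh]]
```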
3. The vision-based lane line detection method of claim 1, wherein in step 2 the edge points in the region of interest are extracted by the line scanning gradient value method as follows:
The edge strength of each pixel on a horizontal scan line is calculated as:
E(i, j) = Σ_{k=1..L} I(i, j + k) − Σ_{k=1..L} I(i, j − k)
wherein I(i, j + k) represents the pixel value of the i-th row and the (j + k)-th column of the region of interest, i represents the row index of the region-of-interest image, j represents the column index of the region-of-interest image, and L represents the filtering length of each row;
The pixel edge strength is compared with a first threshold and a second threshold respectively, and the pixel points in the region of interest are classified according to the result: when E(i, j) > Th_1, I(i, j) is a rising edge point; when E(i, j) < Th_2, I(i, j) is a falling edge point;
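The line-scanning gradient classification can be sketched as below, assuming E(i, j) on row i is the sum of the L pixels to the right of column j minus the sum of the L pixels to its left, so a dark-to-bright transition gives a large positive E (rising edge) and a bright-to-dark transition a large negative E (falling edge). Names and the exact formula reading are assumptions.

```python
def edge_strength(row, j, L):
    """Difference between the L right-hand and L left-hand neighbours."""
    return sum(row[j + 1:j + 1 + L]) - sum(row[j - L:j])

def classify_edges(row, L, th1, th2):
    rising, falling = [], []
    for j in range(L, len(row) - L):
        e = edge_strength(row, j, L)
        if e > th1:          # E(i, j) > Th1 -> rising edge point
            rising.append(j)
        elif e < th2:        # E(i, j) < Th2 -> falling edge point
            falling.append(j)
    return rising, falling
```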
The rising edge points and falling edge points in the region of interest are converted by inverse perspective transformation into edge feature points of the actual road in the world coordinate system, namely the inverse perspective rising edge points and the inverse perspective falling edge points of step 2;
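The inverse perspective transformation can be sketched as a planar homography H mapping image pixels to road-plane coordinates in the world frame. In practice H comes from the camera calibration, which the patent does not spell out, so any matrix used in a call is purely illustrative.

```python
import numpy as np

def inverse_perspective(points, H):
    """Apply a 3x3 homography to (x, y) pixel points and dehomogenise."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # (x, y, 1)
    mapped = homog @ np.asarray(H, dtype=float).T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale w
```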
Interference points are removed from the inverse perspective rising edge points and the inverse perspective falling edge points by lane width characteristic filtering: the Euclidean distance dis is calculated between the inverse perspective rising edge point and the inverse perspective falling edge point of the same image row in the region of interest; if |dis − D| ≤ dh, where D is the distance threshold and dh is the distance error, the inverse perspective rising edge point is a screened rising edge point of step 2:
(x_m, y_m), m ∈ [1, M]
wherein M is the number of screened rising edge points;
and the corresponding inverse perspective falling edge point is a screened falling edge point of step 2: (x'_n, y'_n), n ∈ [1, N], wherein N is the number of screened falling edge points.
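The lane width characteristic filtering can be sketched as below: for each rising/falling inverse perspective edge point pair taken from the same image row, compute the Euclidean distance dis and keep the pair only when |dis − D| ≤ dh, with D the distance threshold (expected marking width) and dh the distance error. Function and variable names are illustrative.

```python
import math

def width_filter(rising_pts, falling_pts, D, dh):
    """Keep rising/falling pairs whose separation matches the lane width."""
    kept = []
    for (xr, yr), (xf, yf) in zip(rising_pts, falling_pts):
        dis = math.hypot(xf - xr, yf - yr)   # Euclidean distance dis
        if abs(dis - D) <= dh:               # |dis - D| <= dh
            kept.append(((xr, yr), (xf, yf)))
    return kept
```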
4. The vision-based lane line detection method of claim 1, wherein the association in step 4 is:
the rising fitted lane curve in the lane line position of the next frame of road image has its own parameter values, and the falling fitted lane curve in the lane line position of the next frame of road image has its own parameter values;
if the differences between these parameter values and the corresponding parameter values of the current frame fitted lane curves are within a set threshold, the lane line position of the next frame of road image is valid; otherwise, it is invalid.
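The inter-frame association of claim 4 can be sketched as below, assuming validity is decided by thresholding the change in fitted-curve parameter values between the current and the next frame; the exact criterion and the tolerance form are assumptions.

```python
def lane_position_valid(params_current, params_next, tol):
    """Accept the next frame's lane position only if every fitted-curve
    parameter stays within tol of the current frame's value."""
    return all(abs(a - b) <= tol for a, b in zip(params_current, params_next))
```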
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810886340.7A CN109190483B (en) | 2018-08-06 | 2018-08-06 | Lane line detection method based on vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109190483A CN109190483A (en) | 2019-01-11 |
CN109190483B true CN109190483B (en) | 2021-04-02 |
Family
ID=64920295
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110077399B (en) * | 2019-04-09 | 2020-11-06 | 魔视智能科技(上海)有限公司 | Vehicle anti-collision method based on road marking and wheel detection fusion |
CN110569704B (en) * | 2019-05-11 | 2022-11-22 | 北京工业大学 | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision |
CN110472578B (en) * | 2019-08-15 | 2020-09-18 | 宁波中车时代传感技术有限公司 | Lane line keeping method based on lane curvature |
CN110675637A (en) * | 2019-10-15 | 2020-01-10 | 上海眼控科技股份有限公司 | Vehicle illegal video processing method and device, computer equipment and storage medium |
CN111563412B (en) * | 2020-03-31 | 2022-05-17 | 武汉大学 | Rapid lane line detection method based on parameter space voting and Bessel fitting |
WO2022082574A1 (en) * | 2020-10-22 | 2022-04-28 | 华为技术有限公司 | Lane line detection method and apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103136341A (en) * | 2013-02-04 | 2013-06-05 | 北京航空航天大学 | Lane line reconstruction device based on Bezier curve |
CN104657727A (en) * | 2015-03-18 | 2015-05-27 | 厦门麦克玛视电子信息技术有限公司 | Lane line detection method |
DE102014109063A1 (en) * | 2014-06-27 | 2015-12-31 | Connaught Electronics Ltd. | Method for detecting an object having a predetermined geometric shape in a surrounding area of a motor vehicle, camera system and motor vehicle |
CN107031623A (en) * | 2017-03-16 | 2017-08-11 | 浙江零跑科技有限公司 | A kind of road method for early warning based on vehicle-mounted blind area camera |
Non-Patent Citations (1)
Title |
---|
A robust lane detection method for autonomous car-like robot; Sun T et al.; 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), IEEE; pp. 373-378 *
Also Published As
Publication number | Publication date |
---|---|
CN109190483A (en) | 2019-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109190483B (en) | Lane line detection method based on vision | |
CN111563412B (en) | Rapid lane line detection method based on parameter space voting and Bessel fitting | |
CN104392212B (en) | Vision-based road information detection and front vehicle recognition method | |
CN102682292B (en) | Method based on monocular vision for detecting and roughly positioning edge of road | |
CN103324930B (en) | A kind of registration number character dividing method based on grey level histogram binaryzation | |
CN112819094B (en) | Target detection and identification method based on structural similarity measurement | |
CN103116751B (en) | A kind of Method of Automatic Recognition for Character of Lcecse Plate | |
CN105005771B (en) | Detection method for solid lane lines based on optical flow point trajectory statistics | |
US8670592B2 (en) | Clear path detection using segmentation-based method | |
CN105678285B (en) | A kind of adaptive road birds-eye view transform method and road track detection method | |
CN101334836B (en) | License plate positioning method incorporating color, size and texture characteristic | |
CN106647776B (en) | Method and device for judging lane changing trend of vehicle and computer storage medium | |
CN110210451B (en) | Zebra crossing detection method | |
CN109299674B (en) | Tunnel illegal lane change detection method based on car lamp | |
Gomez et al. | Traffic lights detection and state estimation using hidden markov models | |
CN105654073B (en) | Automatic speed control method based on vision detection | |
CN103927526A (en) | Vehicle detecting method based on Gauss difference multi-scale edge fusion | |
CN105740782A (en) | Monocular vision based driver lane-changing process quantization method | |
CN109800752B (en) | Automobile license plate character segmentation and recognition algorithm based on machine vision | |
CN106887004A (en) | A kind of method for detecting lane lines based on Block- matching | |
CN109034019B (en) | Yellow double-row license plate character segmentation method based on row segmentation lines | |
CN102419820A (en) | Method for rapidly detecting car logo in videos and images | |
CN111539303B (en) | Monocular vision-based vehicle driving deviation early warning method | |
CN108647664B (en) | Lane line detection method based on look-around image | |
CN104915642B (en) | Front vehicles distance measuring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||