CN111178193A - Lane line detection method, lane line detection device and computer-readable storage medium - Google Patents


Info

Publication number
CN111178193A
CN111178193A (application CN201911312555.9A)
Authority
CN
China
Prior art keywords
edge point
edge
lane line
points
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911312555.9A
Other languages
Chinese (zh)
Inventor
李扬
庞建新
熊友军
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911312555.9A
Publication of CN111178193A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a lane line detection method, a lane line detection device, and a computer-readable storage medium. The method comprises the following steps: acquiring a road image; extracting a plurality of edge points that satisfy a lane line extraction condition from the road image; calculating the gradient direction of the edge points; dividing the edge points into different edge point sets according to their gradient directions; screening the edge point sets to obtain an optimal edge point set; and fitting an edge curve of the lane line from the optimal edge point set. In this manner, the accuracy of lane line detection can be improved.

Description

Lane line detection method, lane line detection device and computer-readable storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method and an apparatus for detecting a lane line, and a computer-readable storage medium.
Background
Lane line detection is a key technology in Advanced Driver Assistance Systems (ADAS) and automated driving applications. A camera mounted on the vehicle captures images of the road ahead, the lane lines in front of the vehicle are detected, the position of the vehicle relative to the lane is estimated, and the system either warns of a lane departure or keeps the vehicle running normally within its lane.
A traditional lane line detection algorithm may use the Canny edge detector to find lane line edges, detect potential straight lines in the image with the Hough line transform, and identify the lane lines through geometric constraints of the lane. However, the Hough line transform only detects straight or approximately straight lanes well; when the curvature of a lane line is large, line-detection schemes fail. Lane line detection methods based on deep learning can detect lane lines with large curvature, but their data requirements are high: a large amount of labeled data is required for training.
Disclosure of Invention
The application provides a method and a device for detecting a lane line and a computer-readable storage medium, which can improve the accuracy of lane line detection.
In order to solve the technical problem, the technical scheme adopted by the application is as follows: provided is a method for detecting a lane line, the method including: acquiring a road image; extracting a plurality of edge points satisfying a lane line extraction condition from the road image; calculating the gradient direction of the edge points; dividing a plurality of edge points into different edge point sets according to the gradient direction; screening the edge point set to obtain an optimal edge point set; and fitting an edge curve of the lane line according to the optimal edge point set.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a lane line detection apparatus comprising a memory and a processor connected to each other, wherein the memory is used for storing a computer program, and the computer program, when executed by the processor, is used for implementing the above-mentioned lane line detection method.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer-readable storage medium for storing a computer program for implementing the above-described lane line detection method when the computer program is executed by a processor.
Through the scheme, the beneficial effects of the application are that: the method comprises the steps of extracting a plurality of edge points meeting the extraction conditions of the lane line from an obtained road image, then calculating the gradient direction of each edge point, classifying all the edge points according to the gradient direction of the edge points to obtain a plurality of edge point sets, selecting an optimal edge point set from the edge point sets, and performing curve fitting on the edge points in the optimal edge point set to obtain a fitted lane line, so that the accuracy of lane line detection can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
fig. 1 is a schematic flowchart of an embodiment of a lane line detection method provided in the present application;
fig. 2 is a schematic flow chart of another embodiment of the lane marking detection method provided in the present application;
fig. 3 is a schematic structural diagram of an embodiment of a lane line detection device provided in the present application;
FIG. 4 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a lane line detection method provided in the present application, where the method includes:
step 11: and acquiring a road image.
In the driving process of the vehicle, the imaging device on the vehicle can be used for shooting the current road to obtain the road image in the current scene.
Step 12: a plurality of edge points satisfying the lane line extraction condition are extracted from the road image.
After the current road image is obtained, a plurality of edge points can be extracted from it with an edge detection method; these edge points may lie on a lane line. Lane lines are generally yellow or white and differ in color from the road surface, so to simplify edge extraction the road image can be a grayscale image: a color image collected by the imaging device can be converted to grayscale.
Step 13: the gradient direction of the edge points is calculated.
After the edge points are extracted from the road image, the gradient directions of the edge points can be calculated.
Step 14: and dividing the plurality of edge points into different edge point sets according to the gradient direction.
All edge points are divided into several edge point sets according to the gradient direction of each edge point. Each edge point set can correspond to one edge; a set contains at least one edge point, and different edge point sets contain no common edge points. For example, suppose edge points E1 to E10 are detected and divided into three edge point sets A1 to A3: set A1 contains edge points E1, E2, and E3; set A2 contains edge points E4, E5, E6, and E7; and set A3 contains edge points E8, E9, and E10.
Step 15: and screening the edge point set to obtain an optimal edge point set.
After a plurality of edge point sets are divided, the edge point sets are screened, and the optimal edge point set closest to the actual lane line is selected from the edge point sets.
Step 16: and fitting an edge curve of the lane line according to the optimal edge point set.
And converting the optimal edge point on the road image into the optimal edge point on the ground by utilizing the corresponding relation between the point on the road image and the point on the ground, and performing curve fitting on the optimal edge point on the ground to obtain an edge curve of the lane line.
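As an illustrative sketch of this fitting step (the function name and the use of a plain least-squares polynomial fit are our assumptions; the embodiment described later uses an extended Kalman filter instead), a cubic curve x = f(y) can be fitted to the ground-plane edge points:

```python
import numpy as np

def fit_lane_edge(ground_points, degree=3):
    """Least-squares fit of x = f(y) to edge points on the ground.

    ground_points: iterable of (x, y) ground coordinates.
    Returns polynomial coefficients, highest order first, so that
    x ≈ a + b*y + c*y**2 + d*y**3.
    """
    pts = np.asarray(ground_points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return np.polyfit(y, x, degree)

# Points sampled from the straight lane edge x = 1 + 0.5*y:
coeffs = fit_lane_edge([(1.0, 0.0), (1.5, 1.0), (2.0, 2.0), (2.5, 3.0)])
```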
The embodiment provides a method for detecting a lane line, which includes extracting a plurality of edge points meeting lane line extraction conditions from a captured road image, then calculating a gradient direction of each edge point, classifying the edge points according to the gradient directions of the edge points to obtain a plurality of edge point sets, screening an optimal edge point set from the edge point sets, and fitting the edge points in the optimal edge point set to obtain a fitted lane line, so that the accuracy of lane line detection can be improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a lane line detection method provided in the present application, the method including:
step 201: and acquiring a road image, and traversing pixel points on the road image along a preset scanning direction.
The predetermined scanning direction may be from bottom to top and, within each row, from left to right.
Step 202: and carrying out difference operation on the summation of the pixel values of the forward pixel points which are positioned in the forward direction of the traversed current pixel point and are in the predetermined number along the scanning direction and the summation of the pixel values of the backward pixel points which are positioned in the backward direction of the current pixel point and are in the predetermined number so as to obtain the gradient value of the current pixel point.
The preset number is determined by the pixel width of the lane line in the road image; this pixel width is pre-calculated from the actual width of the lane line and the imaging parameters of the imaging device that captures the road image. The actual width of the lane line is known, for example 300 mm. The imaging device may be a camera, and the imaging parameters may be the intrinsic and extrinsic parameters of the camera. The camera may be mounted on the front windshield of the vehicle or on a robot on the vehicle, and its intrinsic and extrinsic parameters can be calibrated in advance to determine the camera model, so that measurement information can be computed from the calibrated imaging parameters. For example, if a point A on the ground maps to a point a on the road image, the distance from point A to the camera can be calculated from the coordinates of point a on the road image; and if two points A and B lie at a distance of 1 m from the camera and the distance between them is 100 mm, the pixel distance between the corresponding points a and b on the road image can be calculated.
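The mapping from the known 300 mm physical width to a per-row pixel width depends on the calibrated camera model. A rough sketch under a simple pinhole assumption (the function name and parameters are illustrative, not from the patent):

```python
def lane_pixel_width(focal_px, lane_width_mm, depth_mm):
    """Approximate pixel width of a lane marking of known physical width
    seen at a given depth, using a pinhole camera model.

    focal_px: focal length in pixels (from intrinsic calibration, assumed known)
    lane_width_mm: physical width of the marking, e.g. 300 mm
    depth_mm: distance from the camera to the marking along the optical axis
    """
    return focal_px * lane_width_mm / depth_mm

# A 300 mm marking at 10 m depth with an 800 px focal length spans 24 px.
```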
In a specific embodiment, assume the size of the road image is M × N. For the i-th row of the road image, the 300 mm lane line width maps to a width of p(i) pixels. When the i-th row is scanned from left to right, the difference between the sum of the pixel values of the p(i) pixels to the right of the scanning point and the sum of the pixel values of the p(i) pixels to its left is taken as the gradient value of the scanning point. For example, if the road image is 640 pixels wide and 480 pixels high, scanning starts from row 480. Assuming that in row 480 a width of 10 pixels represents 300 mm on the ground, the gradient value at coordinates (480, 11) can be calculated as

G(480, 11) = Σ_{j=12..21} I(480, j) − Σ_{j=1..10} I(480, j)

where I(i, j) is the pixel value at row i, column j. The gradient values of the other pixel points in the row are calculated in the same way, and the image is then scanned row by row upward from the i-th row to obtain the gradient values of the pixel points in the other rows. In other embodiments, to increase the scanning speed, rows may be scanned at a predetermined interval; for example, with an interval of 4 pixels, row 476 and then row 472 are scanned after row 480.
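The row-wise gradient described above can be sketched as follows (a hypothetical helper with a fixed window p for simplicity, whereas the method uses a per-row width p(i); prefix sums keep the window sums cheap):

```python
import numpy as np

def row_gradient(row, p):
    """Gradient value at each pixel of one image row: sum of the p pixel
    values to the right minus the sum of the p pixel values to the left.
    Positions without p full neighbours on both sides are left at 0.
    """
    row = np.asarray(row, dtype=float)
    n = row.size
    c = np.concatenate(([0.0], np.cumsum(row)))  # c[k] = sum(row[:k])
    g = np.zeros(n)
    for j in range(p, n - p):
        left = c[j] - c[j - p]           # sum of row[j-p : j]
        right = c[j + 1 + p] - c[j + 1]  # sum of row[j+1 : j+1+p]
        g[j] = right - left
    return g
```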
Step 203: and sequentially selecting pixel points corresponding to two adjacent peak values of the gradient values along the scanning direction as candidate pairs.
After the gradient values along the row direction are calculated, a function relating gradient value to coordinate position in the current scanning row is obtained. The minima and maxima of the gradient values of the current scanning row can be counted, and the pixel points corresponding to an adjacent maximum and minimum are taken as a candidate pair. For a road image with two lane lines, scanning from left to right the pixel values generally increase, decrease, increase, and decrease again; the corresponding gradient values gradually rise to a maximum, fall to a minimum, and then rise and fall once more. A left edge point of a lane line corresponds to a maximum and a right edge point corresponds to a minimum, so the pixel points corresponding to all maxima and minima can be stored as potential left and right edge points of the lane lines.
Step 203: and determining the edge points of the lane line according to the candidate pixel values.
Judge whether the absolute difference between the pixel distance between the two pixel points in the candidate pair and the pixel width is smaller than a preset threshold. Specifically, the pixel distance between the pixel points corresponding to the minimum value and the maximum value in the candidate pair is calculated, the absolute value of the difference between this pixel distance and the pixel width is computed, and this absolute difference is compared with a preset threshold, which can be a predetermined value. If the absolute difference is greater than or equal to the preset threshold, the two pixel points in the candidate pair are false edge points and are removed from the edge points.
If the absolute difference between the pixel distance between the two pixel points in the candidate pair and the pixel width is smaller than the preset threshold, the two pixel points in the candidate pair are taken as edge points of the lane line; edge points satisfying the threshold condition can be stored for subsequent processing.
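The candidate-pair selection and width check above can be sketched as follows (peak picking is simplified to local comparisons and each maximum is paired with the nearest following minimum; all names and thresholds are illustrative):

```python
import numpy as np

def lane_edge_points(gradient_row, pixel_width, threshold):
    """Pair adjacent gradient maxima (potential left edges) with the
    following minima (potential right edges) and keep the pairs whose
    separation is close to the expected lane-marking pixel width.
    Returns a list of (left_col, right_col) pairs.
    """
    g = np.asarray(gradient_row, dtype=float)
    maxima = [j for j in range(1, g.size - 1)
              if g[j] > g[j - 1] and g[j] >= g[j + 1] and g[j] > 0]
    minima = [j for j in range(1, g.size - 1)
              if g[j] < g[j - 1] and g[j] <= g[j + 1] and g[j] < 0]
    pairs = []
    for jl in maxima:
        following = [jr for jr in minima if jr > jl]
        if not following:
            continue
        jr = following[0]  # nearest following minimum
        if abs((jr - jl) - pixel_width) < threshold:
            pairs.append((jl, jr))
    return pairs
```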
Step 204: and taking the direction of the eigenvector corresponding to the maximum eigenvalue of the autocorrelation matrix of the edge point as the gradient direction of the edge point.
After the edge point pairs are detected, the gradient direction of each edge point is calculated using the idea of Harris corner detection. Suppose the autocorrelation matrix M at a point P needs to be calculated. A window region containing P can be defined and slid in any direction by an offset (u, v); the gray-scale difference before and after sliding the window region is:
E(u, v) = Σ_{x,y} w(x, y) [I(x + u, y + v) − I(x, y)]²
where w (x, y) is a window function and I (x, y) is the gray scale value at (x, y).
The gray difference E(u, v) is expanded in a Taylor series, and taking the leading terms of the expansion gives:

E(u, v) ≈ [u, v] M [u, v]^T

with the autocorrelation matrix

M = Σ_{x,y} w(x, y) [ I_x², I_x I_y ; I_x I_y, I_y² ]

where I_x is the gradient value in the x direction and I_y is the gradient value in the y direction.
The eigenvalues and eigenvectors of the autocorrelation matrix M are calculated, and the direction of the eigenvector corresponding to the largest eigenvalue of M is taken as the gradient direction at point P, yielding a discrete tangent vector field on the road image.
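A sketch of this computation (accumulate the structure matrix over a window, then take the eigenvector of the largest eigenvalue; the function name and arguments are ours):

```python
import numpy as np

def gradient_direction(Ix, Iy, window):
    """Direction of the eigenvector belonging to the largest eigenvalue
    of the autocorrelation (structure) matrix M accumulated over a window.

    Ix, Iy: x/y image gradients over the window (2-D arrays of equal shape).
    window: weights w(x, y), same shape.
    Returns an angle in radians (defined up to a sign flip of the vector).
    """
    w = np.asarray(window, dtype=float)
    m11 = np.sum(w * Ix * Ix)
    m12 = np.sum(w * Ix * Iy)
    m22 = np.sum(w * Iy * Iy)
    M = np.array([[m11, m12], [m12, m22]])
    vals, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    v = vecs[:, -1]                 # eigenvector of the largest eigenvalue
    return np.arctan2(v[1], v[0])
```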
Step 205: and traversing the edge points, and judging whether the traversed current edge points belong to the generated edge point set.
After the gradient directions of the edge points are calculated, all edge points can be grouped. Whether the traversed current edge point has already been grouped is judged, i.e., whether it belongs to a generated edge point set; the differences of the gradient directions of all edge points within a generated edge point set lie within a preset direction range. If the traversed current edge point already belongs to a generated edge point set, the next edge point is traversed; after all edge points are grouped, step 209 may be performed.
Step 206: and if the traversed current edge point does not belong to the generated edge point set, generating the current edge point set, and dividing the current edge point into the current edge point set.
If the current edge point traversed currently is not in the generated edge point set, which indicates that the gradient direction of the edge point is different from that of the generated edge point set and is located on a different edge, a new edge point set can be generated at this time, and the edge point is added into the new edge point set.
Step 207: and judging whether the next edge point exists in a preset direction range of the gradient direction of the current edge point.
Because edge points with similar gradient directions are divided into the same edge point set, after the current edge point is assigned to the current edge point set it can be judged whether a next edge point exists within the preset direction range along the gradient direction of the current edge point. For example, if the gradient direction of the current edge point is 10° from the horizontal and the preset direction range is −5° to +5°, then a next edge point whose gradient direction is between 5° and 15° from the horizontal can be considered to belong to the same edge point set. Alternatively, the next edge point may be required to lie exactly along the gradient direction of the current edge point, i.e., the two gradient directions are the same: if the gradient direction of the current edge point is 10° from the horizontal and that of the next edge point is also 10°, the two points can be considered to belong to the same edge point set.
Step 208: and if the next edge point exists in the preset direction range of the gradient direction of the current edge point, dividing the next edge point into the current edge point set, and taking the next edge point as the current edge point.
If a next edge point exists within the preset direction range along the gradient direction of the current edge point, it is divided into the current edge point set and taken as the new current edge point, and the judgment of step 207 is repeated; if no next edge point exists within the preset direction range, the current edge is complete, and the traversal of step 205 continues with the remaining edge points.
In a specific embodiment, a first edge point can be found by searching from bottom to top and from left to right. The search then proceeds along the tangent vector direction of that edge point, judging whether a next edge point can be found within the set direction range. If one is found, the search continues along the tangent vector direction of the newly found edge point; when no further edge point can be found, one edge is complete. The next edge can be found in the same manner, and already-found edge points can be removed from the search range to increase the search speed.
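A greedy sketch of this grouping procedure (the chaining criterion is simplified to a direction-difference test plus a nearest-neighbour search, and angles are not wrapped; all names are illustrative):

```python
import numpy as np

def group_edges(points, directions, max_angle_diff=np.deg2rad(5)):
    """Walk the edge points in scan order and chain each point to the
    nearest not-yet-grouped point whose gradient direction differs by
    at most max_angle_diff.

    points: list of (row, col); directions: matching list of angles (rad).
    Returns a list of edge point sets (lists of indices into points).
    """
    remaining = set(range(len(points)))
    edges = []
    while remaining:
        i = min(remaining)  # first ungrouped point in scan order
        remaining.discard(i)
        chain = [i]
        current = i
        while True:
            cands = [j for j in remaining
                     if abs(directions[j] - directions[current]) <= max_angle_diff]
            if not cands:
                break  # edge complete
            nxt = min(cands, key=lambda j: (points[j][0] - points[current][0]) ** 2
                                           + (points[j][1] - points[current][1]) ** 2)
            remaining.discard(nxt)
            chain.append(nxt)
            current = nxt
        edges.append(chain)
    return edges
```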
Step 209: a best set of edge points is identified from the set of edge points.
The more edge points a set contains, the more accurately it can express an edge, so the edge point sets containing more edge points can be selected as the optimal edge point sets; the number of optimal edge point sets can be chosen according to the number of lane lines in the current scene.
In a specific embodiment, for a case where one lane line exists in the road image, the edge point set containing the largest number of edge points may be selected as the optimal edge point set.
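For this single-lane case the selection reduces to a one-line sketch (the function name is ours):

```python
def best_edge_point_set(edge_point_sets):
    """For a single lane line: pick the set containing the most edge points."""
    return max(edge_point_sets, key=len)
```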
In another specific embodiment, the optimal edge point sets corresponding to the lane lines can be obtained from the gradient values of the edge points in each set and the number of edge points in each set. Specifically, when there are at least two lane lines, the left edge and right edge corresponding to each lane line may be selected according to the number of edge points per set. Among the edge point sets whose edge points correspond to gradient maxima, the set with the largest number of edge points is selected as the optimal edge point set for a left edge. Among the edge point sets whose edge points correspond to gradient minima, the set that has the largest number of edge points and whose distance from the left edge differs from the lane line width by an amount within a preset width range is selected as the optimal edge point set for the matching right edge; this yields the left and right edges of the first lane line. The left and right edges of the second lane line are then obtained in the same way from the as-yet unmatched edge point sets with the next-largest numbers of edge points, and so on for each lane line. The preset width range contains, and is close to, the width of the lane line.
For example, suppose two lane lines (a left lane line and a right lane line) exist in the road image and there are 5 edge point sets C1 to C5. Set C1 corresponds to a maximum and contains 80 edge points; set C2 corresponds to a minimum and contains 76 edge points; set C3 corresponds to a minimum and contains 67 edge points. The distance between the edges corresponding to C1 and C2 is close to the lane line width, while the distance between the edges corresponding to C1 and C3 differs from the lane line width. Set C4 corresponds to a maximum and contains 90 edge points; set C5 corresponds to a minimum and contains 84 edge points; the distance between the edges corresponding to C4 and C5 is close to the lane line width. The set C4 containing the most edge points is therefore selected first; according to the distances to sets C2, C3, and C5, set C5 is selected as the other edge paired with C4, and from the horizontal positions of the edge points it can be determined whether C4 and C5 correspond to the left edge or the right edge. Then, among the unmatched sets, the set C1 with the next-largest number of edge points is selected, and according to its distances to sets C2 and C3, set C2 is selected as the other edge paired with C1. In this way the optimal edge point sets corresponding to the left lane line and the right lane line are obtained.
In other embodiments, only one of the left edge or the right edge of the lane line may be obtained, for example, only the left edge is used to represent the lane line, and the implementation method is similar to the above method and is not described herein again.
Step 210: and converting the edge points in the optimal edge point set into a ground coordinate system, and processing the converted edge points by using extended Kalman filtering to generate an edge curve of the lane line.
The edge points in the optimal edge point set can be converted into a ground coordinate system to obtain edge points on the ground, and in order to increase the detection stability, the edge points on the ground can be processed by using extended Kalman filtering to limit the curvature of the curve.
In a specific embodiment, the edge curve is a cubic curve. Assuming the lane line is a smooth curve, the function expression of the smooth curve can be expanded in a Taylor series and the terms above third order discarded, giving the following expression for the lane line:
x = a_t + b_t y + c_t y² + d_t y³
where (x, y) are the coordinates of the lane line on the ground, the vertical direction of the image is the y axis, and the horizontal direction of the image is the x axis. The prediction and observation equations of the extended Kalman filter are:
x_t = f(x_{t−1}, u_t)
z_t = g(x_t)
where x_t is the predicted value of the cubic curve parameters at time t, x_t = [a_t, b_t, c_t, d_t]^T; a_t, b_t, c_t and d_t are the cubic curve parameters at time t; u_t is the control parameter; and z_t is the observed value of the edge points on the ground at time t.
The homogeneous coordinates [x, y, 1]^T of a point on the ground and the homogeneous coordinates [u, v, 1]^T of the corresponding point on the lane image satisfy the following relationship:
s · [x, y, 1]^T = H · [u, v, 1]^T  (s is a scale factor)
where H is a transformation matrix and (u, v) are the coordinates of an edge point on the road image. Since the camera is calibrated in advance, the transformation matrix H is known, and the (x, y) solved from the above equation is taken as the observed value of the edge point on the ground at time t + 1, denoted z_{t+1} = [x, y, 1]^T.
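The image-to-ground conversion with a known H can be sketched as follows (normalising by the homogeneous scale factor; the function name is ours):

```python
import numpy as np

def image_to_ground(H, u, v):
    """Map an image point (u, v) to ground coordinates (x, y) with a
    known 3x3 homography H obtained from prior camera calibration.
    """
    p = np.asarray(H, dtype=float) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the scale factor
```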
Assuming the vehicle moves in a uniform straight line with velocity v over a time interval Δt, the matrix form of the cubic curve parameter update is as follows:
[a_{t+1}, b_{t+1}, c_{t+1}, d_{t+1}]^T = [ 1, s, s², s³ ; 0, 1, 2s, 3s² ; 0, 0, 1, 3s ; 0, 0, 0, 1 ] · [a_t, b_t, c_t, d_t]^T, with s = vΔt
That is, from the predicted value x_t = [a_t, b_t, c_t, d_t]^T of the cubic curve parameters at time t, the above formula gives the predicted value x_{t+1} = [a_{t+1}, b_{t+1}, c_{t+1}, d_{t+1}]^T of the cubic curve parameters at time t + 1.
Then, using the predicted value x_{t+1} = [a_{t+1}, b_{t+1}, c_{t+1}, d_{t+1}]^T of the cubic curve parameters at time t + 1 and the functional relationship z_{t+1} = g(x_{t+1}) between the cubic curve parameters and the edge points on the ground, the predicted value z′_{t+1} of the edge points can be calculated.
x_{t+1} is then updated with the following formula to obtain the updated value x′_{t+1} of the cubic curve parameters at time t + 1:

x′_{t+1} = x_{t+1} + K(z_{t+1} − z′_{t+1})
where K is a gain (adjustment) coefficient. By filtering the state value x_t = [a_t, b_t, c_t, d_t]^T from the previous moment with the extended Kalman filter algorithm, the four parameters of the cubic curve are solved and the lane line is fitted.
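A sketch of the prediction and correction steps under these assumptions (the shift matrix follows from substituting y + vΔt into the cubic; the full EKF gain is replaced by a generic gain matrix K for illustration, so this is not the complete filter):

```python
import numpy as np

def predict_params(x_t, v, dt):
    """Shift the cubic-curve parameters [a, b, c, d] forward as the
    vehicle advances s = v*dt along y (uniform straight motion).
    """
    s = v * dt
    A = np.array([[1, s, s**2,  s**3],
                  [0, 1, 2 * s, 3 * s**2],
                  [0, 0, 1,     3 * s],
                  [0, 0, 0,     1]], dtype=float)
    return A @ np.asarray(x_t, dtype=float)

def update_params(x_pred, K, z_obs, z_pred):
    """Innovation-style correction: x' = x + K (z_obs - z_pred)."""
    return np.asarray(x_pred, float) + np.asarray(K, float) @ (
        np.asarray(z_obs, float) - np.asarray(z_pred, float))
```

As a sanity check, shifting x(y) = 1 + 2y + 3y² + 4y³ by s = 1 gives the coefficients of x(y + 1).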
In this embodiment, the edge point pairs of the lane lines are detected first; the gradient directions of the obtained edge points are then calculated to produce a discrete tangent vector field on the road image, and the integral curves of the tangent vector field are computed, the lane lines lying among these integral curves. Finally, an extended Kalman filter algorithm enhances the stability of lane line detection. No Hough line transform is needed to extract straight lines, and lane lines with larger curvature are detected well.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of the lane line detection apparatus provided in the present application, the lane line detection apparatus 30 includes a memory 31 and a processor 32 connected to each other, the memory 31 is used for storing a computer program, and the computer program is used for implementing the lane line detection method in the foregoing embodiment when being executed by the processor 32.
The lane line detection device 30 can calculate the gradient directions of all edge points in the road image, divide the edge points into sets according to gradient direction, and take the edge point set with the most edge points as the optimal edge point set, thereby extracting the lane line without performing a Hough line transform and detecting lane lines with large curvature well.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a computer-readable storage medium 40 provided in the present application, where the computer-readable storage medium 40 is used for storing a computer program 41, and when the computer program 41 is executed by a processor, the computer program is used for implementing the lane line detection method in the foregoing embodiment.
The computer storage medium 40 may be a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical division, and an actual implementation may divide them differently; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above embodiments are merely examples and are not intended to limit the scope of the present application; all equivalent structural or flow modifications made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (10)

1. A method for detecting a lane line, comprising:
acquiring a road image;
extracting a plurality of edge points satisfying a lane line extraction condition from the road image;
calculating the gradient direction of the edge point;
dividing the edge points into different edge point sets according to the gradient direction;
screening the edge point set to obtain an optimal edge point set;
and fitting an edge curve of the lane line according to the optimal edge point set.
2. The method according to claim 1, wherein the step of extracting a plurality of edge points satisfying a lane line extraction condition from the road image includes:
traversing pixel points on the road image along a preset scanning direction;
performing a difference operation between the sum of the pixel values of a predetermined number of forward pixel points located ahead of the traversed current pixel point along the scanning direction and the sum of the pixel values of the predetermined number of backward pixel points located behind the current pixel point, to obtain a gradient value of the current pixel point, wherein the predetermined number is determined by the pixel width of the lane line in the road image;
sequentially selecting, along the scanning direction, the pixel points corresponding to two adjacent peak values of the gradient value as a candidate pair;
and determining the edge points of the lane line according to the candidate pair.
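The scan of claim 2 might be sketched as follows. The window size n (the "predetermined number"), the helper names, and the simple peak test are assumptions for illustration, not the patented implementation; the spacing check in `candidate_pairs` corresponds to the width test described in claim 3.

```python
import numpy as np

def row_gradients(row, n):
    """Gradient at each pixel: the sum of the n pixels ahead of it along
    the scan direction minus the sum of the n pixels behind it.
    (n plays the role of the claim's 'predetermined number'.)"""
    grads = np.zeros(len(row))
    for i in range(n, len(row) - n):
        forward = row[i + 1:i + 1 + n].sum()
        backward = row[i - n:i].sum()
        grads[i] = float(forward - backward)
    return grads

def candidate_pairs(grads, width, threshold):
    """Pair adjacent gradient-magnitude peaks; keep a pair only if its
    spacing is within `threshold` of the expected lane pixel width."""
    peaks = [i for i in range(1, len(grads) - 1)
             if abs(grads[i]) > 0
             and abs(grads[i]) >= abs(grads[i - 1])
             and abs(grads[i]) > abs(grads[i + 1])]
    return [(a, b) for a, b in zip(peaks, peaks[1:])
            if abs((b - a) - width) < threshold]

# A bright 4-pixel lane stripe on a dark road row.
row = np.zeros(30)
row[10:14] = 255.0
g = row_gradients(row, n=2)
pairs = candidate_pairs(g, width=4, threshold=2)
```

On this toy row the rising and falling edges of the stripe produce one candidate pair whose spacing matches the expected width.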
3. The method according to claim 2, wherein the step of determining the edge points of the lane line according to the candidate pair includes:
judging whether the absolute difference between the pixel distance between the two pixel points in the candidate pair and the pixel width is smaller than a preset threshold value;
and if the absolute difference is smaller than the preset threshold value, taking the two pixel points in the candidate pair as edge points of the lane line.
4. The lane line detection method according to claim 2,
the pixel width is obtained by pre-calculating an actual width of the lane line and an imaging parameter of an imaging device for capturing the road image.
5. The method according to claim 1, wherein the step of calculating the gradient direction of the edge point includes:
and taking the direction of the eigenvector corresponding to the maximum eigenvalue of the autocorrelation matrix of the edge point as the gradient direction of the edge point.
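A minimal sketch of claim 5, assuming the autocorrelation matrix is the 2x2 structure tensor of finite-difference gradients accumulated over a small window; the window size and the helper name are illustrative, not taken from the patent.

```python
import numpy as np

def gradient_direction(img, y, x, win=1):
    """Gradient direction at (y, x): the eigenvector belonging to the
    largest eigenvalue of the local autocorrelation (structure tensor)
    matrix, accumulated over a (2*win+1)^2 window."""
    gy, gx = np.gradient(img.astype(float))      # derivatives along rows, cols
    ys = slice(max(y - win, 0), y + win + 1)
    xs = slice(max(x - win, 0), x + win + 1)
    wx, wy = gx[ys, xs], gy[ys, xs]
    M = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                  [np.sum(wx * wy), np.sum(wy * wy)]])
    vals, vecs = np.linalg.eigh(M)               # eigh sorts eigenvalues ascending
    return vecs[:, np.argmax(vals)]              # column of the dominant direction

# Vertical step edge: the gradient direction should lie along the x axis.
img = np.zeros((9, 9))
img[:, 5:] = 1.0
v = gradient_direction(img, 4, 5)
```

The eigenvector sign is arbitrary, so downstream code should compare directions up to sign.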
6. The method according to claim 1, wherein the step of dividing the plurality of edge points into different edge point sets according to the gradient direction includes:
traversing the edge points;
judging whether the traversed current edge point belongs to a generated edge point set;
if the current edge point does not belong to a generated edge point set, generating a current edge point set, and dividing the current edge point into the current edge point set;
judging whether a next edge point exists within a preset direction range of the gradient direction of the current edge point;
if the next edge point exists, dividing the next edge point into the current edge point set, taking the next edge point as the current edge point, and returning to the step of judging whether a next edge point exists within the preset direction range of the gradient direction of the current edge point;
and if the next edge point does not exist, returning to the step of traversing the edge points.
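The traversal of claim 6 could be sketched as follows, under several simplifying assumptions: each edge point carries a precomputed gradient angle, the "preset direction range" is an angle tolerance around the tangent (perpendicular to the gradient), chains only extend in one tangent direction, and angle wrap-around near ±π is ignored. All names and tolerances are illustrative.

```python
import math

def group_edge_points(points, max_dist=5.0, max_angle=math.pi / 8):
    """Chain edge points into sets: starting from an unassigned point,
    repeatedly take the nearest unassigned point that lies roughly along
    the tangent (perpendicular to the gradient) of the current point."""
    assigned = {}
    sets = []
    for start in range(len(points)):
        if start in assigned:
            continue
        current = [start]
        assigned[start] = len(sets)
        i = start
        while True:
            xi, yi, grad_angle = points[i]
            tangent = grad_angle + math.pi / 2   # follow the edge, not the gradient
            best, best_d = None, max_dist
            for j, (xj, yj, _) in enumerate(points):
                if j in assigned:
                    continue
                d = math.hypot(xj - xi, yj - yi)
                heading = math.atan2(yj - yi, xj - xi)
                if d <= best_d and abs(heading - tangent) <= max_angle:
                    best, best_d = j, d
            if best is None:
                break
            assigned[best] = len(sets)
            current.append(best)
            i = best
        sets.append(current)
    return sets

# Two parallel vertical edges; gradient angle 0 rad (+x), tangent +y.
left = [(0.0, float(y), 0.0) for y in range(5)]
right = [(10.0, float(y), 0.0) for y in range(5)]
groups = group_edge_points(left + right)
```

Each vertical edge ends up in its own set, mirroring how the claim separates the two sides of a lane marking.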
7. The method according to claim 1, wherein the step of screening the edge point set to obtain an optimal edge point set comprises:
and obtaining the optimal edge point set corresponding to the lane line according to the gradient values of the edge points in the edge point set and the number of the edge points in the edge point set.
8. The method according to claim 1, wherein the step of fitting an edge curve of the lane line according to the optimal edge point set comprises:
and converting the edge points in the optimal edge point set into a ground coordinate system, and processing the converted edge points by using extended Kalman filtering to generate an edge curve of the lane line.
9. A lane line detection apparatus comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program, and the computer program is configured to implement the lane line detection method according to any one of claims 1 to 8 when executed by the processor.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, is configured to implement the lane line detection method according to any one of claims 1 to 8.
CN201911312555.9A 2019-12-18 2019-12-18 Lane line detection method, lane line detection device and computer-readable storage medium Pending CN111178193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312555.9A CN111178193A (en) 2019-12-18 2019-12-18 Lane line detection method, lane line detection device and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN111178193A true CN111178193A (en) 2020-05-19

Family

ID=70652168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312555.9A Pending CN111178193A (en) 2019-12-18 2019-12-18 Lane line detection method, lane line detection device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111178193A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348837A (en) * 2020-11-10 2021-02-09 中国兵器装备集团自动化研究所 Object edge detection method and system based on point-line detection fusion
CN112801111A (en) * 2020-12-18 2021-05-14 广东工业大学 Image straight line edge point classification method and device based on gradient direction
CN113628232A (en) * 2021-05-11 2021-11-09 深圳市汇川技术股份有限公司 Method for eliminating interference points in fit line, visual recognition equipment and storage medium
CN114581890A (en) * 2022-03-24 2022-06-03 北京百度网讯科技有限公司 Method and device for determining lane line, electronic equipment and storage medium
CN117115242A (en) * 2023-10-17 2023-11-24 湖南视比特机器人有限公司 Identification method of mark point, computer storage medium and terminal equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
US6819779B1 (en) * 2000-11-22 2004-11-16 Cognex Corporation Lane detection system and apparatus
CN1945596A (en) * 2006-11-02 2007-04-11 东南大学 Vehicle lane Robust identifying method for lane deviation warning
CN104077756A (en) * 2014-07-16 2014-10-01 中电海康集团有限公司 Direction filtering method based on lane line confidence
DE102016124879A1 (en) * 2016-08-29 2018-03-01 Neusoft Corporation Method, device and device for determining lane lines
CN107832674A (en) * 2017-10-16 2018-03-23 西安电子科技大学 A kind of method for detecting lane lines
CN109583280A (en) * 2017-09-29 2019-04-05 比亚迪股份有限公司 Lane detection method, apparatus, equipment and storage medium



Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN112837303A (en) Defect detection method, device, equipment and medium for mold monitoring
CN106960449B (en) Heterogeneous registration method based on multi-feature constraint
CN108052904B (en) Method and device for acquiring lane line
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
US11049275B2 (en) Method of predicting depth values of lines, method of outputting three-dimensional (3D) lines, and apparatus thereof
CN111046843A (en) Monocular distance measurement method under intelligent driving environment
CN111144213A (en) Object detection method and related equipment
CN113822352B (en) Infrared dim target detection method based on multi-feature fusion
CN111144377A (en) Dense area early warning method based on crowd counting algorithm
EP2677462B1 (en) Method and apparatus for segmenting object area
CN116279592A (en) Method for dividing travelable area of unmanned logistics vehicle
CN109978916B (en) Vibe moving target detection method based on gray level image feature matching
WO2021051382A1 (en) White balance processing method and device, and mobile platform and camera
CN113221739B (en) Monocular vision-based vehicle distance measuring method
CN108846845B (en) SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering
CN107977608B (en) Method for extracting road area of highway video image
CN113487631A (en) Adjustable large-angle detection sensing and control method based on LEGO-LOAM
CN115330818A (en) Picture segmentation method and computer readable storage medium thereof
CN108805896B (en) Distance image segmentation method applied to urban environment
CN116052120A (en) Excavator night object detection method based on image enhancement and multi-sensor fusion
KR101910256B1 (en) Lane Detection Method and System for Camera-based Road Curvature Estimation
CN115471537A (en) Monocular camera-based moving target distance and height measuring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination