CN107229908A - A lane line detection method - Google Patents

A lane line detection method

Info

Publication number
CN107229908A
CN107229908A (application CN201710343501.3A)
Authority
CN
China
Prior art keywords
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710343501.3A
Other languages
Chinese (zh)
Other versions
CN107229908B (en)
Inventor
顾敏明
李福俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuelaihu Shandong Digital Economy Industrial Park Operation Management Co ltd
Original Assignee
Zhejiang Sci Tech University ZSTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sci Tech University ZSTU
Priority to CN201710343501.3A
Publication of CN107229908A
Application granted
Publication of CN107229908B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

The present invention provides a lane line detection method that acquires images with a forward-looking camera mounted on the vehicle and processes the dynamic images in real time, so as to accurately identify and detect lane lines. The lane line detection method of the present invention has the following advantages: first, better applicability, since it can recognize lane lines at night, in rain and in tunnels; second, a simple and fast algorithm that meets real-time requirements; third, small detection error and high precision.

Description

A lane line detection method
Technical field: The present invention relates to intelligent vehicle technology, and in particular to a lane line detection method for automobiles.
Technical background:
With social progress, rising living standards and the development of transportation, the number of automobiles worldwide has increased sharply. At the same time, traffic safety has become a hot issue of global concern, and advanced driver assistance systems (ADAS) have emerged and attracted wide attention. An ADAS uses various sensors to acquire information inside and outside the vehicle and, after processing, alerts the driver to dangers that may arise, so as to reduce the incidence of traffic accidents.
Because of the limited level of image recognition processing, existing ADAS products often fail when detecting lane lines in actual use and cannot run stably for long periods over long distances, which limits their application in certain environments.
Summary of the invention:
In view of the above problems, the present invention provides a lane line detection method that processes dynamic images in real time, so as to accurately identify and detect lane lines.
A lane line detection method, comprising:
First, acquiring a road image: the road image is captured by a vehicle-mounted camera;
Second, pre-processing the image:
This includes converting the color image to grayscale, cropping the region of useful information, filtering and denoising, grayscale enhancement, edge detection and lane line repair. The lane line repair proceeds in two steps. First, find the required "short segments": each white point is examined, and if it has 4 consecutive white points in some direction it is judged to belong to a "short segment"; all "short segments" are found and their directions recorded. Second, search along the direction of each short segment within 6 pixels; if another short segment with a similar direction is found there, the two are taken to lie on one straight line, and all pixels between the two short segments are assigned the value 1, i.e. the black points in between become white points, thereby connecting the line. After all short segments are processed, the broken lane lines are repaired;
Third, recognizing and tracking the lane lines:
1. Identifying the left and right lane lines:
First, reduce the number of connected points required for line detection and set a minimum length for detected line segments;
Second, when the vehicle travels within the lane, the left and right lane lines are usually distributed on the two sides of the captured road image, and the slopes of the two lane lines fall within certain ranges. By analyzing and computing a large number of road image samples, the polar angle of the pole point and the slope are constrained in turn; the constraint values used in practice vary with the camera mounting position. The constrained polar angle range is -a1° < θ < a2°, i.e. straight lines are searched only within this range, which reduces the data to be processed; the constraint range for the left lane line slope is b1 < k1 < b2, and for the right lane line slope b3 < k2 < b4; straight lines outside these ranges are all deleted from the array;
Third, sort the detected line segments from long to short and take the first 10; if fewer than 10 were detected, take them all. These segments are then divided into three classes: the first class contains lines whose slopes are positive and similar, the second class lines whose slopes are negative and similar, and the third class all other lines. Then, following the two principles that a longer segment carries a larger weight and a lower-lying segment carries a larger weight, all segments of the first class are fitted into the right lane line and all segments of the second class into the left lane line, while segments of the third class are discarded;
2. Dynamic lane line detection:
In two adjacent frames, because the distance traveled by the vehicle is very short, the positions of the two lane lines do not deviate much. In the actual algorithm, the lane line slopes, intercepts and the coordinates of the intersection of the two lane lines detected in the previous frame are saved; in the next frame, the lane line angles should then differ from those of the previous frame by no more than 3°, and the intercept positions by no more than 20 pixels. From this the approximate positions of the lane lines in the next frame are known, which greatly reduces the amount of processing. In addition, the distance between the lane line intersections of two adjacent frames should be within 15 pixels; with this condition, the real lane lines can be filtered out;
Fourth, correcting the lane lines
The lane line perspective image is converted into a top-view image. The general transformation formula of the inverse perspective mapping is

\[
\begin{bmatrix} x' & y' & z' \end{bmatrix} =
\begin{bmatrix} x & y & z \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\tag{4-1}
\]

where [x y z] are the coordinates of a point in the originally captured image, [x' y' z'] are the coordinates of the corresponding point of the image after inverse perspective mapping, and the 3×3 matrix of elements a_ij is the perspective transformation matrix. In the transformation matrix, the submatrix [a11 a12; a21 a22] together with [a31 a32] produces the rigid transformation, including translation, rotation and scaling, while [a13 a23] produces the perspective effect.
Expanding the transformation formula gives
[x' y' z'] = [a11·x + a12·y + a13·z   a21·x + a22·y + a23·z   a31·x + a32·y + a33·z]   (4-2)
Rewriting the transformation formula gives

\[
\begin{cases}
X = \dfrac{x'}{z'} = \dfrac{a_{11}x + a_{12}y + a_{13}z}{a_{31}x + a_{32}y + a_{33}z} \\[2ex]
Y = \dfrac{y'}{z'} = \dfrac{a_{21}x + a_{22}y + a_{23}z}{a_{31}x + a_{32}y + a_{33}z}
\end{cases}
\tag{4-3}
\]

In formula (4-3), because the image is a two-dimensional plane, z = 1; let a33 = 1. There are then 8 unknowns, so only the coordinates of 4 points before the transformation and of the 4 corresponding points after it are needed, giving 8 equations from which the transformation matrix can be solved. Once the transformation matrix is obtained, each pixel of the original image is transformed and mapped to its new position;
In actual processing, only the start and end points of each lane line are transformed, four points in total; the start and end points are then connected to obtain the lane lines after inverse perspective mapping;
Fifth, calculating the lane offset
1. Calculating the offset angle
Assume the deviation angle of the left lane line is θ1, the deviation angle of the right lane line is θ2, and the deviation angle of the lane line from the center line is θ0; the angle between the center line of the two lane lines and the lane line is α, and the angle between the center line of the two lane lines and the positive x-axis is β. Then

\[
\begin{cases} \theta_1 = \theta_0 + \alpha \\ \theta_0 = \alpha + \theta_2 \end{cases}
\]

Eliminating the angle α gives the magnitude of the vehicle body deviation angle θ:

\[
\theta = \theta_0 = \frac{\theta_1 + \theta_2}{2}
\tag{5-1}
\]
In formula (5-1), the larger the angle θ, the larger the vehicle body offset.
When the lane center line is offset to the right in the image, in practice the vehicle has drifted to the left in the lane, so the direction of the body deviation angle is related to the angle β: when 0° < β < 90° the body has drifted left by the angle θ, and when 90° < β < 180° it has drifted right by the angle θ;
2. Calculating the offset distance
Assume the real distance between the two lane lines is x and the real length of the photographed lane line section is y; in the image, the distance between the two lane lines occupies u pixels and the lane line length occupies v pixels. Then u, v in the image coordinate system and x, y in the top-view coordinate system are in fixed proportions. Let the x-axis scale factor be λ and the y-axis scale factor be μ; then

\[
\lambda = \frac{x}{u}, \qquad \mu = \frac{y}{v}
\tag{5-2}
\]
Assume the lower end of the lane line is offset from the center line by d1 pixels and the upper end of the lane line by d2 pixels; assume the distance of the vehicle head position, i.e. the camera position, from the lane center line in the actual coordinate system is D, and the distance from the vehicle head to the lower-end position is L0.
Plotting the relation between the lower-end offset d1 and the angular offset θ in the image coordinate system and applying trigonometric formulas yields formula (5-3).
Combining this with formula (5-2) then yields formula (5-4).
When L > L0, by the similar-triangle relation, the offset distance is
D = μ·d1 - L0·tanθ   (5-5)
When L < L0, by the similar-triangle relation, the offset distance is
D = μ·d1 - L0·tanθ   (5-6)
In formulas (5-5) and (5-6), a positive result indicates that the vehicle has drifted a distance to the left, and a negative result that it has drifted a distance to the right.
The lane line detection method of the present invention has the following advantages: first, better applicability, since it can recognize lane lines at night, in rain and in tunnels; second, a simple and fast algorithm that meets real-time requirements; third, small detection error and high precision.
Brief description of the drawings:
Figure 1 is a flow diagram of the present invention.
Figure 2a is an image before lane line repair.
Figure 2b is the image after lane line repair.
Figure 3a is an original image.
Figure 3b is the top view obtained by applying inverse perspective mapping to Fig. 3a.
Figure 4a is a schematic of the vehicle's situation on a real road.
Figure 4b shows the lane lines of Fig. 4a after inverse perspective mapping into a top view.
Figure 5 is a schematic of the lane line offset.
Figure 6a is relation diagram one of the lower-end offset d1 of the lane line versus the angular offset θ in the image coordinate system.
Figure 6b is relation diagram two of the lower-end offset d1 of the lane line versus the angular offset θ in the image coordinate system.
Figure 6c is relation diagram three of the lower-end offset d1 of the lane line versus the angular offset θ in the image coordinate system.
Embodiments:
The implementation and principle of the present invention are further illustrated below with reference to examples and the drawings.
A lane line detection method, comprising:
First, acquiring a road image: the road image is captured by a vehicle-mounted camera;
Second, pre-processing the image:
This includes converting the color image to grayscale, cropping the region of useful information, filtering and denoising, grayscale enhancement, edge detection and lane line repair. After edge detection the lane line edges are intermittent and lack continuity, so they must be repaired. The processing proceeds in two steps. First, find the required "short segments": each white point is examined, and if it has 4 consecutive white points in some direction it is judged to belong to a "short segment"; all "short segments" are found and their directions recorded. Second, search along the direction of each short segment within 6 pixels; if another short segment with a similar direction is found there, the two are taken to lie on one straight line, and all pixels between the two short segments are assigned the value 1, i.e. the black points in between become white points, thereby connecting the line. After all short segments are processed, the broken lane lines are repaired.
Fig. 2a shows the edges of a road image detected with the Prewitt operator. It can be seen that the lane lines after edge detection are intermittent and lack continuity, which may affect the subsequent lane line identification, so the lane lines need to be repaired. The basic idea of the repair is to judge the distance between two segments: when the distance is less than a specified value, the points between the two segments are assigned the value 1, i.e. the black points in between become white points, thereby connecting the line. The repaired image is shown in Fig. 2b: the originally broken segments are connected into one complete straight line.
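The two-step repair described above can be sketched in code. The following is a minimal illustrative simplification under stated assumptions: it scans rows only (the patent searches along each segment's own direction), it uses the 4-pixel run and 6-pixel gap thresholds from the text, and the function name `repair_dashes` is invented for illustration, not taken from the patent.

```python
import numpy as np

def repair_dashes(mask, seg_len=4, gap=6):
    """Bridge broken dashes in a binary edge mask, row by row.

    A run of at least `seg_len` white pixels counts as a "short segment";
    two short segments on the same row separated by at most `gap` background
    pixels are assumed to lie on one straight line, and the pixels between
    them are set to 1 (the black points in between become white points)."""
    out = mask.copy()
    h, w = mask.shape
    for y in range(h):
        runs, x = [], 0
        while x < w:
            if mask[y, x]:
                x0 = x
                while x < w and mask[y, x]:
                    x += 1
                if x - x0 >= seg_len:        # qualifies as a "short segment"
                    runs.append((x0, x - 1))
            else:
                x += 1
        for (_, e0), (s1, _) in zip(runs, runs[1:]):
            if s1 - e0 - 1 <= gap:           # close enough: connect the dashes
                out[y, e0 + 1:s1] = 1
    return out
```

On a row holding two 4-pixel dashes separated by 3 background pixels the gap is filled in; a 7-pixel gap exceeds the threshold and is left untouched.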
Third, recognizing and tracking the lane lines:
1. Identifying the left and right lane lines:
First, reduce the number of connected points required for line detection and set a minimum length for detected line segments. This increases the fault tolerance of line detection and allows lines that are not perfectly straight to be detected, which helps improve the success rate of detecting dashed lane lines.
Second, when the vehicle travels within the lane, the left and right lane lines are usually distributed on the two sides of the captured road image, and the slopes of the two lane lines fall within certain ranges. By analyzing and computing a large number of road image samples, the polar angle of the pole point and the slope are constrained in turn; the constraint values used in practice vary with the camera mounting position. In this example, the constrained polar angle range is -70° < θ < 70°, i.e. straight lines are searched only within this range, which reduces the data to be processed. The constraint range for the left lane line slope is 0.355 < k1 < 0.73, and for the right lane line slope k2 < 0.4516; straight lines outside these ranges are all deleted from the array. This processing greatly reduces the number of noise segments.
Third, sort the detected line segments from long to short and take the first 10; if fewer than 10 were detected, take them all. These segments are then divided into three classes: the first class contains lines whose slopes are positive and similar, the second class lines whose slopes are negative and similar, and the third class all other lines. Then, following the two principles that a longer segment carries a larger weight and a lower-lying segment carries a larger weight, all segments of the first class are fitted into the right lane line and all segments of the second class into the left lane line, while segments of the third class are discarded, since they are mostly noise segments.
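The selection, classification and weighted fitting above can be sketched as follows. The slope-sign split and the two weighting principles (longer and lower segments weigh more) come from the text; the exact weight formula, the omission of the "similar slope" sub-grouping, and the function names are illustrative assumptions.

```python
import numpy as np

def classify_segments(segments):
    """Split detected segments (x1, y1, x2, y2) into right-lane candidates
    (slope > 0) and left-lane candidates (slope < 0); vertical segments are
    dropped as noise in this sketch."""
    right, left = [], []
    for x1, y1, x2, y2 in segments:
        if x1 == x2:
            continue
        k = (y2 - y1) / (x2 - x1)
        (right if k > 0 else left).append((x1, y1, x2, y2))
    return right, left

def fit_lane(segments, img_h):
    """Weighted least-squares fit y = k*x + b over segment endpoints.
    The weight grows with segment length and with how low the segment sits
    in the image (image y grows downward); the product form of the weight
    is an assumption made for illustration."""
    xs, ys, ws = [], [], []
    for x1, y1, x2, y2 in segments:
        length = float(np.hypot(x2 - x1, y2 - y1))
        lowness = max(y1, y2) / img_h
        w = length * (1.0 + lowness)
        xs += [x1, x2]; ys += [y1, y2]; ws += [w, w]
    k, b = np.polyfit(xs, ys, 1, w=ws)
    return k, b
```

For collinear segments the weighting is immaterial and the fit recovers the line exactly; on noisy real segments the weights bias the fit toward the long, near segments that localize the lane best.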
2. Dynamic lane line detection
In actual, more complex road conditions, the left and right lane lines cannot both be detected at all times, nor is every detected lane line accurate, so further constraints must be added.
In our country, cameras shoot video in PAL mode, generally at 25 frames per second, i.e. 25 pictures per second. In an ADAS with strict real-time requirements, processing is usually done frame by frame, so the interval between two pictures is 0.04 s; even assuming the vehicle travels at 120 km/h, it can only move forward 1.33 m in that time. Therefore, in two adjacent frames, because the distance traveled by the vehicle is very short, the positions of the two lane lines do not deviate much. In the actual algorithm, the lane line slopes, intercepts and the coordinates of the intersection of the two lane lines detected in the previous frame are saved; in the next frame, the lane line angles should then differ from those of the previous frame by no more than 3°, and the intercept positions by no more than 20 pixels. From this the approximate positions of the lane lines in the next frame are known, which greatly reduces the amount of processing. In addition, the distance between the lane line intersections of two adjacent frames should be within 15 pixels; with this condition, the real lane lines can be filtered out. Dynamic detection therefore achieves higher accuracy with less processing time.
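The inter-frame gating described above (angle within 3°, intercept within 20 pixels, intersection within 15 pixels) can be sketched as a single acceptance test. Representing each lane line as a (slope, intercept) pair and the function name are illustrative assumptions.

```python
import math

def consistent_with_previous(prev, cur,
                             max_angle_deg=3.0,
                             max_intercept_px=20.0,
                             max_cross_px=15.0):
    """Accept the current frame's lane pair only if it stays close to the
    previous frame's: each line's angle within 3 degrees, each intercept
    within 20 px, and the two-line intersection within 15 px.

    `prev` and `cur` are ((k_left, b_left), (k_right, b_right)), with each
    lane line written as y = k*x + b."""
    def intersection(pair):
        (k1, b1), (k2, b2) = pair
        x = (b2 - b1) / (k1 - k2)
        return x, k1 * x + b1

    for (kp, bp), (kc, bc) in zip(prev, cur):
        if abs(math.degrees(math.atan(kc)) - math.degrees(math.atan(kp))) > max_angle_deg:
            return False
        if abs(bc - bp) > max_intercept_px:
            return False
    px, py = intersection(prev)
    cx, cy = intersection(cur)
    return math.hypot(cx - px, cy - py) <= max_cross_px
```

Candidates that fail the gate are discarded as noise, and the previous frame's estimate can be carried over for that frame.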
Fourth, correcting the lane lines
When a camera shoots an object at a tilt, the resulting image is deformed: the camera projects the actual three-dimensional scene onto the two-dimensional image plane, and this projection is called the perspective transform. In practice, moreover, the camera is not necessarily mounted at the center and may be shifted left or right for various reasons. Distorted lane lines are unfavorable for the subsequent lane offset calculation, so the perspective image is converted into a top-view image; this process is called inverse perspective mapping.
The lane line perspective image is converted into a top-view image. The general transformation formula of the inverse perspective mapping is

\[
\begin{bmatrix} x' & y' & z' \end{bmatrix} =
\begin{bmatrix} x & y & z \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\tag{4-1}
\]

where [x y z] are the coordinates of a point in the originally captured image, [x' y' z'] are the coordinates of the corresponding point of the image after inverse perspective mapping, and the 3×3 matrix of elements a_ij is the perspective transformation matrix. In the transformation matrix, the submatrix [a11 a12; a21 a22] together with [a31 a32] produces the rigid transformation, including translation, rotation and scaling, while [a13 a23] produces the perspective effect.
Expanding the transformation formula gives
[x' y' z'] = [a11·x + a12·y + a13·z   a21·x + a22·y + a23·z   a31·x + a32·y + a33·z]   (4-2)
Rewriting the transformation formula gives

\[
\begin{cases}
X = \dfrac{x'}{z'} = \dfrac{a_{11}x + a_{12}y + a_{13}z}{a_{31}x + a_{32}y + a_{33}z} \\[2ex]
Y = \dfrac{y'}{z'} = \dfrac{a_{21}x + a_{22}y + a_{23}z}{a_{31}x + a_{32}y + a_{33}z}
\end{cases}
\tag{4-3}
\]

In formula (4-3), because the image is a two-dimensional plane, z = 1; let a33 = 1. There are then 8 unknowns, so only the coordinates of 4 points before the transformation and of the 4 corresponding points after it are needed, giving 8 equations from which the transformation matrix can be solved. Once the transformation matrix is obtained, each pixel of the original image is transformed and mapped to its new position.
The four white points in Fig. 3a are original image coordinates; together with the four desired vertex coordinates after the transformation, the transformation matrix can be computed from formula (4-3). Once the transformation matrix is obtained, each pixel of the original image is transformed and mapped to its new position. In the new image some points may have no corresponding point in the original image, i.e. "holes" appear; the holes can be filled with nearest-neighbor interpolation, bilinear interpolation or bicubic interpolation. The top view obtained by inverse perspective mapping is shown in Fig. 3b: the inverse perspective mapping eliminates the effect of the deformation and restores the top-view appearance.
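Solving for the eight unknowns from the four point pairs, as described above, reduces to a small linear system. The sketch below follows formulas (4-2) and (4-3) with a33 = 1; the function names are illustrative, and a production system would typically call a library routine such as OpenCV's getPerspectiveTransform instead.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 8 unknowns a11..a32 (with a33 = 1) of formula (4-3) from
    four point correspondences. Each pair (x, y) -> (X, Y) yields two linear
    equations, e.g. X*(a31*x + a32*y + 1) = a11*x + a12*y + a13."""
    M, rhs = [], []
    for (x, y), (X, Y) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); rhs.append(X)
        M.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); rhs.append(Y)
    a = np.linalg.solve(np.array(M, float), np.array(rhs, float))
    return np.append(a, 1.0).reshape(3, 3)

def map_point(A, x, y):
    """Map one point through the transform per the expansion (4-2), then
    divide by z' as in (4-3)."""
    xp, yp, zp = A @ np.array([x, y, 1.0])
    return xp / zp, yp / zp
```

As the text notes, only the start and end points of each lane line need to be mapped this way; warping every pixel (and filling holes by interpolation) is needed only when the full top-view image is wanted.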
Fifth, calculating the lane offset
1. Calculating the offset angle
Assume Fig. 4a shows the current situation of the vehicle on the road, with the vehicle offset from the lane line by the angle θ. Ideally, after the lane lines are mapped into the top view by inverse perspective mapping as in Fig. 4b, the two lane lines should be parallel, i.e. the slope of the left lane line equals the slope of the right lane line. In practice, however, the jolting of the vehicle and the grade of the road may leave the two lane lines not exactly parallel, so the center line of the two lane lines is used in the calculation to reduce the error. As in Fig. 5, assume the deviation angle of the left lane line is θ1, the deviation angle of the right lane line is θ2, and the deviation angle of the lane line from the center line is θ0; the angle between the center line of the two lane lines and the lane line is α, and the angle between the center line of the two lane lines and the positive x-axis is β. Then

\[
\begin{cases} \theta_1 = \theta_0 + \alpha \\ \theta_0 = \alpha + \theta_2 \end{cases}
\]
Eliminating the angle α gives the magnitude of the vehicle body deviation angle θ:

\[
\theta = \theta_0 = \frac{\theta_1 + \theta_2}{2}
\tag{5-1}
\]
In formula (5-1), the larger the angle θ, the larger the vehicle body offset.
According to the pinhole imaging principle of the camera, when the lane center line is offset to the right in the image, in practice the vehicle has drifted to the left in the lane, so the direction of the body deviation angle is related to the angle β: when 0° < β < 90° the body has drifted left by the angle θ, and when 90° < β < 180° it has drifted right by the angle θ.
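Formula (5-1) and the β-based direction rule can be sketched together; the function name and the string return value are illustrative assumptions.

```python
def body_offset_angle(theta1_deg, theta2_deg, beta_deg):
    """Vehicle body deviation angle per formula (5-1):
    theta = (theta1 + theta2) / 2.
    The drift direction follows the text: 0° < beta < 90° means the body
    has drifted left, 90° < beta < 180° means it has drifted right."""
    theta = (theta1_deg + theta2_deg) / 2.0
    direction = "left" if beta_deg < 90.0 else "right"
    return theta, direction
```

For example, left and right lane line deviations of 10° and 4° give a body deviation of 7°, leftward when β is below 90°.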
2. Calculating the offset distance
For the top-view lane line image obtained by the transformation, assume the real distance between the two lane lines is x and the real length of the photographed lane line section is y; in the image, the distance between the two lane lines occupies u pixels and the lane line length occupies v pixels. Then u, v in the image coordinate system and x, y in the top-view coordinate system are in fixed proportions. Let the x-axis scale factor be λ and the y-axis scale factor be μ; then

\[
\lambda = \frac{x}{u}, \qquad \mu = \frac{y}{v}
\tag{5-2}
\]

Assume the lower end of the lane line is offset from the center line by d1 pixels and the upper end of the lane line by d2 pixels; assume the distance of the vehicle head position (i.e. the camera position) from the lane center line in the actual coordinate system is D, and the distance from the vehicle head to the lower-end position is L0.
Plotting the relation between the lower-end offset d1 and the angular offset θ in the image coordinate system, as in Fig. 6a, and applying trigonometric formulas yields formula (5-3).
Combining this with formula (5-2) then yields formula (5-4).
When L > L0, as shown in Fig. 6b, by the similar-triangle relation the offset distance is
D = μ·d1 - L0·tanθ   (5-5)
When L < L0, as shown in Fig. 6c, by the similar-triangle relation the offset distance is
D = μ·d1 - L0·tanθ   (5-6)
In formulas (5-5) and (5-6), a positive result indicates that the vehicle has drifted a distance to the left, and a negative result that it has drifted a distance to the right.
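Formulas (5-5) and (5-6) can be sketched directly; the sign convention (positive means drifted left) is from the text, and the function name is an illustrative assumption.

```python
import math

def lateral_offset(d1_px, mu, L0, theta_deg):
    """Offset distance per formulas (5-5)/(5-6): D = mu*d1 - L0*tan(theta).
    mu converts pixels to real-world length; a positive D means the vehicle
    has drifted left, a negative D that it has drifted right."""
    return mu * d1_px - L0 * math.tan(math.radians(theta_deg))
```

With zero angular offset the result reduces to the scaled pixel offset μ·d1; a large θ relative to L0 flips the sign toward a rightward drift.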

Claims (1)

1. A lane line detection method, characterized in that it comprises:
First, acquiring a road image: the road image is captured by a vehicle-mounted camera;
Second, pre-processing the image:
This includes converting the color image to grayscale, cropping the region of useful information, filtering and denoising, grayscale enhancement, edge detection and lane line repair. The lane line repair proceeds in two steps. First, find the required "short segments": each white point is examined, and if it has 4 consecutive white points in some direction it is judged to belong to a "short segment"; all "short segments" are found and their directions recorded. Second, search along the direction of each short segment within 6 pixels; if another short segment with a similar direction is found there, the two are taken to lie on one straight line, and all pixels between the two short segments are assigned the value 1, i.e. the black points in between become white points, thereby connecting the line. After all short segments are processed, the broken lane lines are repaired;
Third, recognizing and tracking the lane lines:
1. Identifying the left and right lane lines:
First, reduce the number of connected points required for line detection and set a minimum length for detected line segments;
Second, when the vehicle travels within the lane, the left and right lane lines are usually distributed on the two sides of the captured road image, and the slopes of the two lane lines fall within certain ranges. By analyzing and computing a large number of road image samples, the polar angle of the pole point and the slope are constrained in turn; the constraint values used in practice vary with the camera mounting position. The constrained polar angle range is -a1° < θ < a2°, i.e. straight lines are searched only within this range, which reduces the data to be processed; the constraint range for the left lane line slope is b1 < k1 < b2, and for the right lane line slope b3 < k2 < b4; straight lines outside these ranges are all deleted from the array;
Third, sort the detected line segments from long to short and take the first 10; if fewer than 10 were detected, take them all. These segments are then divided into three classes: the first class contains lines whose slopes are positive and similar, the second class lines whose slopes are negative and similar, and the third class all other lines. Then, following the two principles that a longer segment carries a larger weight and a lower-lying segment carries a larger weight, all segments of the first class are fitted into the right lane line and all segments of the second class into the left lane line, while segments of the third class are discarded;
2. Dynamic lane line detection:
In two adjacent frames, because the distance traveled by the vehicle is very short, the positions of the two lane lines do not deviate much. In the actual algorithm, the lane line slopes, intercepts and the coordinates of the intersection of the two lane lines detected in the previous frame are saved; in the next frame, the lane line angles should then differ from those of the previous frame by no more than 3°, and the intercept positions by no more than 20 pixels. From this the approximate positions of the lane lines in the next frame are known, which greatly reduces the amount of processing. In addition, the distance between the lane line intersections of two adjacent frames should be within 15 pixels; with this condition, the real lane lines can be filtered out;
Fourth, correcting the lane lines
The lane line perspective image is converted into a top-view image; the general transformation formula of the inverse perspective mapping is
\[
\begin{bmatrix} x' & y' & z' \end{bmatrix} =
\begin{bmatrix} x & y & z \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\tag{4-1}
\]
where [x y z] are the coordinates of a point in the originally captured image, [x' y' z'] are the coordinates of the corresponding point of the image after inverse perspective mapping, and the 3×3 matrix of elements a_ij is the perspective transformation matrix. In the transformation matrix, the submatrix [a11 a12; a21 a22] together with [a31 a32] produces the rigid transformation, including translation, rotation and scaling, while [a13 a23] produces the perspective effect.
Expanding the transformation formula gives
[x' y' z'] = [a11·x + a12·y + a13·z   a21·x + a22·y + a23·z   a31·x + a32·y + a33·z]   (4-2)
Rewriting the transformation formula gives
\[
\begin{cases}
X = \dfrac{x'}{z'} = \dfrac{a_{11}x + a_{12}y + a_{13}z}{a_{31}x + a_{32}y + a_{33}z} \\[2ex]
Y = \dfrac{y'}{z'} = \dfrac{a_{21}x + a_{22}y + a_{23}z}{a_{31}x + a_{32}y + a_{33}z}
\end{cases}
\tag{4-3}
\]
In (4-3) formula, because image is two dimensional surface, z=1 makes a33=1, now there are 8 unknown quantitys, then only need 4 point coordinates before known transform and 4 point coordinates after conversion, totally 8 coordinates can be to ask for transformation matrixObtain after transformation matrix, each pixel in original image is carried out computing and then new position is corresponded to;
In actual processing, only the start point and end point of every lane line are transformed, four points in total; the start and end points are then connected to obtain the lane line after inverse perspective mapping;
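A minimal sketch of this endpoint-only warping (the function name `warp_points` is ours; it evaluates formula (4-3) with z = 1 for each point):

```python
import numpy as np

def warp_points(H, pts):
    """Map (x, y) points through the 3x3 perspective matrix H,
    dividing by z' as in formula (4-3)."""
    out = []
    for x, y in pts:
        xp, yp, zp = H @ np.array([x, y, 1.0])
        out.append((xp / zp, yp / zp))
    return out

# Example: a pure scaling matrix simply doubles each coordinate.
scaled = warp_points(np.diag([2.0, 2.0, 1.0]), [(1.0, 1.0), (3.0, 2.0)])
```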
5th, calculating the lane offset
1st, calculating the offset angle
Assume the deviation angle of the left lane line is $\theta_1$, the deviation angle of the right lane line is $\theta_2$, the deviation angle of the lane centre line is $\theta_0$, the angle between the centre line of the two lane lines and a lane line is $\alpha$, and the angle between the centre line of the two lane lines and the positive direction of the x-axis is $\beta$; then
$$\begin{cases}\theta_1=\theta_0+\alpha\\\theta_0=\alpha+\theta_2\end{cases}$$
Eliminating the angle $\alpha$ gives the formula for the magnitude of the body deviation angle $\theta$:
$$\theta=\theta_0=\frac{\theta_1+\theta_2}{2}\qquad(5\text{-}1)$$
In formula (5-1), the larger the angle $\theta$, the larger the body offset. When the lane centre line is shifted to the right in the image, in practice the vehicle has shifted to the left within the lane, so the direction of the body deviation depends on the angle $\beta$: when $0°<\beta<90°$ the body has deviated to the left by the angle $\theta$, and when $90°<\beta<180°$ the body has deviated to the right by the angle $\theta$;
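As a sketch of formula (5-1) together with the β direction rule above (the function name, degree units, and return convention are our assumptions):

```python
def body_deviation(theta1_deg, theta2_deg, beta_deg):
    """Formula (5-1): the body deviation angle is the mean of the two
    lane-line deviation angles; beta selects the deviation direction."""
    theta = (theta1_deg + theta2_deg) / 2.0
    direction = 'left' if 0.0 < beta_deg < 90.0 else 'right'
    return theta, direction

# Example: lane lines deviated by 10 and 4 degrees, beta = 60 degrees.
theta, direction = body_deviation(10.0, 4.0, 60.0)
```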
2nd, calculating the offset distance
Assume the actual distance between the two lane lines is $x$ and the actual length of the lane line in the photographed section is $y$. In the image, the distance between the two lane lines occupies $u$ pixels and the lane-line length occupies $v$ pixels; then $u$ and $v$ in the image coordinate system have a fixed proportional relationship with $x$ and $y$ in the top-view coordinate system. Assume the x-axis scale factor is $\lambda$ and the y-axis scale factor is $\mu$; then
$$\begin{cases}x=\lambda u\\y=\mu v\end{cases}\qquad(5\text{-}2)$$
Assume the lower end of the lane line is offset from the centre line by $d_1$ pixels and the upper end is offset from the centre line by $d_2$ pixels. Assume the vehicle-front position in the actual coordinate system, i.e. the distance by which the camera position is offset from the lane centre line, is $D$, and the distance from the vehicle front to the lower-end position is $L_0$;
Plotting the relationship between the lower-end offset $d_1$ and the angular deviation $\theta$ in the image coordinate system and applying the trigonometric relation gives
$$l=\frac{d_1}{\tan\theta}\qquad(5\text{-}3)$$
From formula (5-2) it can be seen that
$$\begin{cases}D_1=\mu d_1\\[1ex]L=\dfrac{\mu d_1}{\tan\theta}\end{cases}\qquad(5\text{-}4)$$
When $L>L_0$, by the similar-triangle relationship the offset distance is
D=μ d1-L0tanθ (5-5)
When $L<L_0$, by the similar-triangle relationship the offset distance is
D=μ d1-L0tanθ (5-6)
In formulas (5-5) and (5-6), a positive result indicates that the vehicle has shifted a certain distance to the left, while a negative result indicates that it has shifted a certain distance to the right.
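The distance computation of formulas (5-5)/(5-6) can be sketched as follows (function name and degree units are our assumptions; μ is the y-axis scale factor of formula (5-2)):

```python
import math

def lateral_offset(mu, d1_pixels, L0, theta_deg):
    """D = mu*d1 - L0*tan(theta); positive means the vehicle is shifted
    to the left, negative to the right, per the text above."""
    return mu * d1_pixels - L0 * math.tan(math.radians(theta_deg))

# Example: with no angular deviation the offset is simply mu * d1.
D = lateral_offset(0.05, 100, 5.0, 0.0)
```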
CN201710343501.3A 2017-05-16 2017-05-16 A kind of method for detecting lane lines Active CN107229908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710343501.3A CN107229908B (en) 2017-05-16 2017-05-16 A kind of method for detecting lane lines

Publications (2)

Publication Number Publication Date
CN107229908A true CN107229908A (en) 2017-10-03
CN107229908B CN107229908B (en) 2019-11-29

Family

ID=59933643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710343501.3A Active CN107229908B (en) 2017-05-16 2017-05-16 A kind of method for detecting lane lines

Country Status (1)

Country Link
CN (1) CN107229908B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148856A1 (en) * 2011-12-09 2013-06-13 Yaojie Lu Method and apparatus for detecting road partition
CN103593649A (en) * 2013-10-24 2014-02-19 惠州华阳通用电子有限公司 Lane line detection method for lane departure early warning
CN104008645A (en) * 2014-06-12 2014-08-27 湖南大学 Lane line predicating and early warning method suitable for city road
CN105678287A (en) * 2016-03-02 2016-06-15 江苏大学 Ridge-measure-based lane line detection method
CN106525056A (en) * 2016-11-04 2017-03-22 杭州奥腾电子股份有限公司 Method for lane line detection by gyro sensor

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090425A (en) * 2017-12-06 2018-05-29 海信集团有限公司 A kind of method for detecting lane lines, device and terminal
CN108171225A (en) * 2018-03-14 2018-06-15 海信集团有限公司 Lane detection method, device, terminal and storage medium
CN108171225B (en) * 2018-03-14 2020-12-18 海信集团有限公司 Lane detection method, device, terminal and storage medium
CN110163039A (en) * 2018-03-15 2019-08-23 北京航空航天大学 Determine method, equipment, storage medium and the processor of vehicle running state
CN108805074B (en) * 2018-06-06 2020-10-09 安徽江淮汽车集团股份有限公司 Lane line detection method and device
CN108805074A (en) * 2018-06-06 2018-11-13 安徽江淮汽车集团股份有限公司 A kind of method for detecting lane lines and device
CN110765812A (en) * 2018-07-26 2020-02-07 北京图森未来科技有限公司 Calibration method and device for image data lane line
CN110765812B (en) * 2018-07-26 2021-02-19 北京图森智途科技有限公司 Calibration method and device for image data lane line
CN110770741A (en) * 2018-10-31 2020-02-07 深圳市大疆创新科技有限公司 Lane line identification method and device and vehicle
CN110770741B (en) * 2018-10-31 2024-05-03 深圳市大疆创新科技有限公司 Lane line identification method and device and vehicle
CN111197992A (en) * 2018-11-20 2020-05-26 北京嘀嘀无限科技发展有限公司 Enlarged intersection drawing method and system and computer-readable storage medium
CN109711372A (en) * 2018-12-29 2019-05-03 驭势科技(北京)有限公司 A kind of recognition methods of lane line and system, storage medium, server
CN110147382A (en) * 2019-05-28 2019-08-20 北京百度网讯科技有限公司 Lane line update method, device, equipment, system and readable storage medium storing program for executing
CN110244717A (en) * 2019-06-03 2019-09-17 武汉理工大学 The automatic method for searching of portal crane climbing robot based on existing threedimensional model
CN110196062B (en) * 2019-06-27 2022-03-25 成都圭目机器人有限公司 Navigation method for tracking lane line by single camera
CN110196062A (en) * 2019-06-27 2019-09-03 成都圭目机器人有限公司 A kind of air navigation aid of one camera tracking lane line
CN112441022A (en) * 2019-09-02 2021-03-05 华为技术有限公司 Lane center line determining method and device
WO2021042856A1 (en) * 2019-09-02 2021-03-11 华为技术有限公司 Method and device for determining lane centerline
CN111160086B (en) * 2019-11-21 2023-10-13 芜湖迈驰智行科技有限公司 Lane line identification method, device, equipment and storage medium
CN111160086A (en) * 2019-11-21 2020-05-15 成都旷视金智科技有限公司 Lane line recognition method, lane line recognition device, lane line recognition equipment and storage medium
WO2023279966A1 (en) * 2021-07-08 2023-01-12 中移(上海)信息通信科技有限公司 Multi-lane-line detection method and apparatus, and detection device

Also Published As

Publication number Publication date
CN107229908B (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN107229908A (en) A kind of method for detecting lane lines
CN107284455B (en) A kind of ADAS system based on image procossing
CN104854637B (en) Moving object position attitude angle estimating device and moving object position attitude angle estimating method
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN106462968B (en) Method and device for calibrating a camera system of a motor vehicle
CN105005771B (en) A kind of detection method of the lane line solid line based on light stream locus of points statistics
CN101894271B (en) Visual computing and prewarning method of deviation angle and distance of automobile from lane line
CN105678787A (en) Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN107389026A (en) A kind of monocular vision distance-finding method based on fixing point projective transformation
CN107798724A (en) Automated vehicle 3D road models and lane markings define system
CN104700414A (en) Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN106156723A (en) A kind of crossing fine positioning method of view-based access control model
CN109085823A (en) The inexpensive automatic tracking running method of view-based access control model under a kind of garden scene
CN109064495A (en) A kind of bridge floor vehicle space time information acquisition methods based on Faster R-CNN and video technique
CN106651953A (en) Vehicle position and gesture estimation method based on traffic sign
CN110050278A (en) For the photographic device and method to adapt to the method detection vehicle-periphery region of situation
CN107615201A (en) Self-position estimation unit and self-position method of estimation
CN105593776B (en) Vehicle location posture angle estimating device and vehicle location posture angle estimating method
CN106503636A (en) A kind of road sighting distance detection method of view-based access control model image and device
WO1997025700A1 (en) Traffic congestion measuring method and apparatus and image processing method and apparatus
CN103204104B (en) Monitored control system and method are driven in a kind of full visual angle of vehicle
CN107389084A (en) Planning driving path planing method and storage medium
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN110398979A (en) A kind of unmanned engineer operation equipment tracking method and device that view-based access control model is merged with posture
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201230

Address after: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee after: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.

Address before: 310018 no.928, Baiyang street, Xiasha Higher Education Park, Jianggan District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG SCI-TECH University

Effective date of registration: 20201230

Address after: No.1 xc1001-3, Nanmen Gongnong Road, Chongfu Town, Tongxiang City, Jiaxing City, Zhejiang Province

Patentee after: JIAXING YUNSHIJIAO ELECTRONIC COMMERCE Co.,Ltd.

Address before: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee before: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20211119

Address after: 257091 room 632, building B, No. 59, Fuqian street, Dongying District, Dongying City, Shandong Province

Patentee after: Dongying yuelaihu science and Education Industrial Park Co.,Ltd.

Address before: No.1 xc1001-3, Nanmen Gongnong Road, Chongfu Town, Tongxiang City, Jiaxing City, Zhejiang Province

Patentee before: JIAXING YUNSHIJIAO ELECTRONIC COMMERCE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221227

Address after: Room 403, Building C4, Blue Harbor, No. 45, Dongsi Road, Dongying Development Zone, Shandong 257,091

Patentee after: Yuelaihu (Shandong) Digital Economy Industrial Park Operation Management Co.,Ltd.

Address before: 257091 room 632, building B, No. 59, Fuqian street, Dongying District, Dongying City, Shandong Province

Patentee before: Dongying yuelaihu science and Education Industrial Park Co.,Ltd.
