Background technology
In intelligent transportation systems, lane detection is an important component of the environment perception technology of intelligent vehicles, and is mainly used in intelligent cruise control, lateral vehicle control, lane departure warning, autonomous driving, and so on. Vision is the primary means by which human drivers perceive the environment; most vehicles travel on structured roads, and structured roads are designed for human vision, so visual perception is one of the most effective and important technical means of lane detection.
Existing vision-based lane detection methods for structured roads can be summarized as three components: defining a lane model, extracting lane features, and locating the lane from the extracted features with the lane model as a constraint. The lane models defined in the prior art mainly include straight lines, quadratic polynomials, splines, circular arcs, and so on, but none of these models fully matches the actual lane curve; they can only partially or approximately reflect the lane structure. Consequently, they cannot accurately detect lane boundaries of different shapes, and even when a lane is detected they cannot accurately estimate the position and direction of the vehicle in the lane or the planar geometry of the lane, such as the lane curvature and the rate of change of the lane curvature, although these data are very important to the autonomous driving, intelligent cruise control, and decision-making of the vehicle. For lane features, the prior art has mainly used the edges of lane markings, gradients, and the texture and color of the road surface. Gradient, texture, and color features are vulnerable to shadows, illumination, and weather changes, whereas the edge feature of lane markings is relatively stable and has better anti-interference ability. However, in the existing lane detection technology the edge threshold used for edge extraction is fixed, so when the road illumination changes the lane marking edges may fail to be extracted and the lane cannot be detected. For the extracted edges, existing methods usually fit the edge points with classification, clustering, neural networks, and similar methods to locate the lane boundary. Since other edges are also extracted in the process of extracting the lane marking edges, interference must be filtered out before the edge points are fitted, otherwise the accurate location of the lane boundary is affected; but filtering the interference in turn increases the complexity of the lane detection method to a certain extent, and the prior art has no effective method to solve this problem.
On the other hand, road conditions during driving are complicated and changeable, so lane detection needs to adapt to various lane markings, road illumination, and complex environments. This complexity and variation appear in three aspects. The first is related to the form of the lane markings: the lane alignment may be a straight line, a circular arc, or a clothoid; the markings may be white or yellow; and the marking type may be a single solid line, a double solid line, a dashed line, and so on. The second is related to road illumination, such as the shadows of roadside buildings, branches and leaves, and other vehicles; the brightness of lane markings under different weather and lighting conditions; glare caused by sunlight reflection; and markings blurred by wear, all of which affect the visibility of the lane markings. The third is related to road surface conditions, such as damage and cracks in the road surface, differences in pavement material, and occlusion by other vehicles on the road, all of which interfere with the detection of lane markings. Lane detection under these complex and adverse conditions is very difficult, and the prior art cannot solve it completely.
A search of the prior art literature finds that Y. Zhou et al., in the paper "A robust lane detection and tracking method based on computer vision" published in Measurement Science & Technology, vol. 17, 2006, adopt a projection model of straight and circular-arc lanes and use the gradient features of pixels and a tabu search algorithm to locate the lane boundary. Although the method described in that document can detect lanes for various lane markings and under conditions such as road shadows, road surface damage and cracks, and occlusion by vehicles, it still has shortcomings. First, the method can only detect straight and circular-arc lane boundaries and cannot accurately detect clothoid lane boundaries. Second, it cannot fully detect the planar geometry of the lane. Third, it cannot effectively detect lane boundaries whose markings are blurred by wear. Fourth, it cannot detect lane boundaries under dusk lighting conditions.
Summary of the invention
In order to overcome the deficiencies of the prior art, which cannot detect lane boundaries under complex and adverse conditions and cannot provide the planar geometry of the lane or the position and direction of the vehicle in the lane, the invention provides a method for detecting the boundary of the lane in which the vehicle is located. The method can detect the boundary of the host lane not only in ordinary road environments but also in complex and adverse road environments, such as road shadows, cracks, other vehicles, sunlight reflection, dim light, worn lane markings, and so on, and can calculate the geometric structure parameter values of the lane and the position and direction of the vehicle in the lane.
The technical solution adopted by the present invention to solve its technical problem comprises the following steps:
Step 1: adaptively extract edge points from the part of the road image below the horizon according to the pixel gradient magnitude, and calculate the edge directions;
Step 2: according to the extracted edge points, the edge directions, and the lane boundary projection model, search for and locate the lane boundary in the parameter vector space of the lane boundary projection model using particle swarm optimization;
Step 3: calculate the planar geometry of the host lane and the position and heading deviation angle of the vehicle in the lane according to the located lane boundary projection model parameter values and the parameter calculation formulas.
Described step 1 comprises the following steps:
Step 1.1: determine the position of the horizon in the image by calculating

r_H = r_O - (f_c / d_y) tan α

where r_O denotes the ordinate of the image center point, d_y denotes the physical size of a pixel in the vertical direction, f_c denotes the focal length of the vehicle-mounted camera, and α denotes the pitch angle of the vehicle-mounted camera; the image is divided into an upper part and a lower part with the horizon (j, r_H) as the boundary, where j = 0, 1, ..., N, and N denotes the width of the image in pixels;
Step 1.2: compute the gradient magnitude of each pixel in the part of the image below the horizon

G_m(c, r) = sqrt(G_x(c, r)^2 + G_y(c, r)^2)

where (c, r) denotes the coordinates of a pixel below the horizon in the image, and G_x(c, r) and G_y(c, r) denote the gradient magnitudes of the pixel in the horizontal and vertical directions respectively, computed from the pixel values f(c, r) of the part of the image below the horizon;
Step 1.3: calculate the threshold for edge extraction

G_mth = (w_G / (M·N)) Σ G_m(c, r)

where the sum is taken over the part of the image below the horizon, M and N denote the height and width of the image in pixels respectively, and w_G is a coefficient with the value range 0.1 ≤ w_G ≤ 1.5;
Step 1.4: compare the gradient magnitude of each pixel below the horizon in the image with the edge threshold; if the gradient magnitude is greater than the edge threshold, the pixel is an edge point, and compute the edge direction of the edge point as θ(c, r) = arctan[G_y(c, r) / G_x(c, r)].
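Steps 1.1 to 1.4 can be sketched in code. The following is a minimal NumPy illustration, not the patented implementation itself: the isotropic Sobel kernels, the zero padding, and normalizing the adaptive threshold by the mean gradient magnitude of the sub-image are assumptions chosen to match the description above, and `extract_edges` and `conv2` are illustrative names.

```python
import numpy as np

def conv2(a, k):
    """Minimal 'same'-size 2-D filtering (cross-correlation) for a 3x3 kernel,
    with zero padding at the borders."""
    p = np.pad(a, 1)
    out = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def extract_edges(img, r_h, w_g=0.6):
    """Adaptive edge extraction below the horizon row r_h.

    img : 2-D grayscale array; r_h : horizon row index;
    w_g : threshold coefficient (the source gives 0.1 <= w_g <= 1.5).
    Returns an edge mask, per-pixel edge directions, and the threshold used.
    Row indices in the outputs are relative to the sub-image starting at r_h.
    """
    lower = img[r_h:].astype(float)
    s = np.sqrt(2.0)  # isotropic Sobel uses sqrt(2) in place of 2
    kx = np.array([[-1.0, 0.0, 1.0], [-s, 0.0, s], [-1.0, 0.0, 1.0]])
    ky = kx.T
    gx = conv2(lower, kx)                  # horizontal gradient G_x
    gy = conv2(lower, ky)                  # vertical gradient G_y
    gm = np.hypot(gx, gy)                  # gradient magnitude G_m
    thr = w_g * gm.mean()                  # adaptive threshold G_mth
    mask = gm > thr                        # edge points
    theta = np.arctan2(gy, gx)             # edge direction (arctan2 for stability)
    return mask, theta, thr
```

With `w_g` near the upper end of its range only strong edges survive; near the lower end, faint worn markings are kept at the cost of more noise.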
Described step 2 comprises the following steps:
Step 2.1: the projection curve equations of the left and right lane boundary lines, i.e. the lane boundary curve equations in the image, also called the left and right lane boundary projection models, are respectively

b_1L(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c = 0

b_1R(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c = 0

where c denotes the pixel abscissa and b_1L, b_1R, b_0, b_-1 and b_-2 are the parameters of the lane boundary projection curves. Define B = (b_1L, b_1R, b_0, b_-1, b_-2)^T as the lane boundary projection model parameter vector. Set the feasible region of the lane boundary projection model parameter vector B, (-5, 0, 0, -2000, -3000)^T < B < (0, 5, 300, 2000, 3000)^T, the particle swarm size m, the maximum particle flying speed V_max = (v_1Lmax, v_1Rmax, v_0max, v_-1max, v_-2max)^T, and the maximum number of search iterations Iter_max, where the value range of V_max is (1, 1, 100, 1500, 2000)^T ≤ V_max ≤ (5, 5, 500, 2500, 3000)^T, the value range of m is 10 ≤ m ≤ 60, and the value range of Iter_max is 30 ≤ Iter_max ≤ 100;
Step 2.2: randomly initialize each parameter vector particle position B_i and speed V_i within the feasible region of the lane boundary projection model parameter vector B, where i denotes the index of the parameter vector particle; set the historical best position of each parameter vector particle P_i = B_i; then calculate the lane boundary curve confidence F(B) of each parameter vector particle according to the edge points and edge directions, compare the confidences of all particles, and take the historical best position corresponding to the maximum confidence as the historical best position G of the parameter vector swarm;
Step 2.3: calculate and update each parameter vector particle's speed V_i and position B_i as follows:

V_i(k+1) = w(k) V_i(k) + c_1 r_1 [P_i(k) - B_i(k)] + c_2 r_2 [G(k) - B_i(k)]

B_i(k+1) = B_i(k) + V_i(k+1)

where k is the search iteration number; V_i(k), B_i(k), P_i(k) and G(k) denote the speed, position, and historical best position of the i-th parameter vector particle and the historical best position of the swarm at the k-th iteration, respectively; c_1 and c_2 are constants with the value range (0, 4]; r_1 and r_2 are random numbers on the interval (0, 1); and i = 1, 2, ..., m;
Step 2.4: check the speed and position of each parameter vector particle; limit the speed of any parameter vector particle that exceeds the maximum speed by setting its speed to the maximum speed, and return any parameter vector particle that crosses the boundary by randomly resetting its position within the feasible region;
Step 2.5: calculate the lane boundary curve confidence F(B) of each parameter vector particle according to the edge points and edge directions, and by comparison update the historical best position P_i of each parameter vector particle and the historical best position G of the parameter vector swarm;
Step 2.6: compare the search iteration number k with the maximum number of iterations Iter_max; if k is less than Iter_max, go to step 2.3, otherwise go to step 2.7;
Step 2.7: output the lane boundary curve corresponding to the historical best position G of the parameter vector swarm.
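The particle swarm search of steps 2.2 to 2.7 can be sketched generically, with the confidence F(B) supplied as a fitness function. This is a hedged illustration, not the patented code: the inertia weight is held constant here although the formula allows an iteration-dependent w(k), and all names are assumptions.

```python
import numpy as np

def pso_search(fitness, lo, hi, v_max, m=30, iters=50, w=0.7, c1=2.0, c2=2.0, seed=0):
    """Particle swarm search over the feasible box lo < B < hi.

    fitness : maps a parameter vector B to the confidence F(B) (to maximize)
    lo, hi  : per-dimension feasible-region bounds
    v_max   : per-dimension maximum flying speed
    Returns the swarm's historical best position G.
    """
    rng = np.random.default_rng(seed)
    dim = len(lo)
    B = rng.uniform(lo, hi, size=(m, dim))           # step 2.2: random positions
    V = rng.uniform(-v_max, v_max, size=(m, dim))    # step 2.2: random speeds
    P = B.copy()                                     # per-particle best positions
    Pf = np.array([fitness(b) for b in B])
    G = P[np.argmax(Pf)].copy()                      # swarm best position
    for k in range(iters):                           # step 2.6: iteration limit
        r1, r2 = rng.random((2, m, dim))
        V = w * V + c1 * r1 * (P - B) + c2 * r2 * (G - B)   # step 2.3
        V = np.clip(V, -v_max, v_max)                # step 2.4: speed limit
        B = B + V
        out = (B < lo) | (B > hi)                    # step 2.4: boundary return
        B = np.where(out, rng.uniform(lo, hi, size=(m, dim)), B)
        f = np.array([fitness(b) for b in B])
        better = f > Pf                              # step 2.5: update bests
        P[better], Pf[better] = B[better], f[better]
        G = P[np.argmax(Pf)].copy()
    return G                                         # step 2.7: output best
```

Because the per-particle and swarm bests are only ever replaced by better positions, the returned confidence is monotonically non-decreasing over the iterations.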
The lane boundary curve confidence F(B) is computed as

F(B) = exp[-(b_1R - b_1L - μ_W)^2 / (2σ_W^2)] · Σ_{(c,r)∈U} exp[-D^2(c, r) / (2σ_D^2)] · exp[-Δθ^2(c, r) / (2σ_θ^2)]

where D(c, r) denotes the distance from the edge point (c, r) to the lane boundary curve, Δθ(c, r) denotes the angle between the edge direction of the edge point (c, r) and the lane boundary curve, and U denotes the neighborhood of the lane boundary curve, whose radius has the value range [2, 30]; μ_W and σ_W^2 denote the mean and variance of (b_1R - b_1L), σ_D^2 denotes the variance of D(c, r), and σ_θ^2 denotes the variance of Δθ(c, r), with value ranges determined by the lane width W_lane, the physical sizes d_x and d_y of a pixel in the horizontal and vertical directions respectively, and the height h_c of the vehicle-mounted camera above the ground.
In the calculation of the lane boundary curve confidence F(B), the distance D(c, r) from an edge point (c, r) to the lane boundary curve and the angle Δθ(c, r) between the edge direction of the edge point (c, r) and the lane boundary curve are obtained from the distances to the left and right boundary curves, D(c, r) being the distance to the nearer of the two:

D_L(c, r) = |[b_1L(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c] cos ψ_L|

D_R(c, r) = |[b_1R(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c] cos ψ_R|

where ψ_L and ψ_R are the normal directions of the left and right boundary curves:

ψ_L = arctan[-b_1L + b_-1(r - r_H)^-2 + 2b_-2(r - r_H)^-3]

ψ_R = arctan[-b_1R + b_-1(r - r_H)^-2 + 2b_-2(r - r_H)^-3]

and the angle Δθ(c, r) is taken between the edge direction θ(c, r) and the normal direction of the corresponding boundary curve.
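The point-to-curve distance above can be sketched as follows. Taking D(c, r) as the smaller of D_L and D_R is an assumption made for illustration, since the source gives only the two per-boundary formulas; the function name is illustrative.

```python
import numpy as np

def boundary_distance(c, r, B, r_h):
    """Distance from an edge point (c, r) to the nearer lane-boundary curve.

    B = (b1L, b1R, b0, bm1, bm2) is the projection-model parameter vector,
    r_h the horizon row.  Implements D_L, D_R and the normal directions
    psi_L, psi_R; returning min(D_L, D_R) is an assumption.
    """
    b1l, b1r, b0, bm1, bm2 = B
    dr = r - r_h
    base = b0 + bm1 / dr + bm2 / dr**2 - c            # curve terms shared by both sides
    # normal directions of the two boundary curves at row r
    psi_l = np.arctan(-b1l + bm1 / dr**2 + 2 * bm2 / dr**3)
    psi_r = np.arctan(-b1r + bm1 / dr**2 + 2 * bm2 / dr**3)
    d_l = abs((b1l * dr + base) * np.cos(psi_l))      # horizontal residual x cos(psi)
    d_r = abs((b1r * dr + base) * np.cos(psi_r))
    return min(d_l, d_r)
```

A point lying exactly on one of the two curves yields a distance of zero, so edge points close to either boundary contribute strongly to the confidence F(B).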
Described step 3 comprises the following steps:
calculate the curvature C_0 of the host lane and the rate of change C_1 of the lane curvature according to the historical best position G of the parameter vector swarm and the corresponding calculation formulas;
calculate the heading deviation angle β of the vehicle in the lane and the distances d_L and d_R from the vehicle to the left and right lane boundary lines according to the historical best position G of the parameter vector swarm and the corresponding calculation formulas;
where γ is the roll angle of the vehicle-mounted camera.
The beneficial effects of the invention are:
1) the lane model of the invention conforms to the projection in the image of the lane horizontal alignment specified by structured-road design specifications; it fully considers and exploits the geometric features of lane markings and can more accurately reflect the actual lane boundary curve;
2) the invention extracts only edges and edge directions from the ground image, and through the adaptive adjustment of the edge threshold in steps 1.2 and 1.3 it can adapt to road environments with different illumination and brightness, such as sunny days, cloudy days, night, reflections, worn lane markings, and so on, giving good environmental adaptability;
3) according to distance and direction, the invention evaluates only the edge points that have a certain similarity to the lane markings, which effectively avoids interference from non-lane-marking edges such as shadows, road surface cracks, other vehicles, and so on, strengthens the robustness of lane detection, and gives the method high reliability and strong anti-interference ability.
The invention is further described below with reference to the drawings and embodiments.
Embodiment
In the embodiment of the invention, a camera is installed at the front of the centerline of the vehicle roof, with the lens facing straight ahead of the vehicle. After the camera parameters are calibrated, the vehicle travels along the lane, the vehicle-mounted camera collects images of the road ahead in real time, and the method of the invention is used to detect the boundary of the host lane in the road images.
For lane boundary detection on structured roads, a certain lane boundary model can be adopted. A lane boundary model that matches the actual lane boundary not only improves the accuracy of lane boundary detection, but also allows the planar geometry of the lane to be estimated, such as the lane curvature, the rate of change of the curvature, and the position and direction of the vehicle in the lane. The principle of the lane boundary model of the embodiment of the invention is explained as follows.
According to highway route design specifications, the horizontal alignment of a highway is composed of three kinds of elements: straight lines, circular curves, and clothoids. Because the road surface is essentially a plane within visual range, the lane curvature C can be expressed as a function of the lane length l as follows:

C(l) = C_0 + C_1 l

where C_0 is the curvature of the lane at the viewpoint and C_1 is the rate of change of the lane curvature. Depending on the values of C_0 and C_1, the above relation can represent a clothoid as well as a straight line or a circular arc. As shown in Figure 1, a world coordinate system O_w X_w Y_w Z_w is established, in which the coordinate plane O_w X_w Z_w is parallel to the road surface, the coordinate axis O_w Z_w is parallel to the tangent of the lane marking at the point corresponding to the viewpoint, and the origin O_w is at the same height above the road surface as the origin O_c of the vehicle-mounted camera coordinate system O_c X_c Y_c Z_c, this height being h_c. Considering that the vehicle travels along the lane and the angle between the vehicle direction and the lane direction is small, the lane equation of the left lane boundary line in the world coordinate system can be obtained.
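The curvature relation C(l) = C_0 + C_1 l can be illustrated numerically by integrating the heading along the lane; this sketch uses a simple cumulative-sum integration and illustrative names, and is not part of the patented method.

```python
import numpy as np

def clothoid_points(C0, C1, length, n=100):
    """Sample points of a lane line whose curvature varies as C(l) = C0 + C1*l.

    Heading is the integral of curvature: phi(l) = C0*l + 0.5*C1*l**2.
    With C0 = C1 = 0 the result is a straight line; with C1 = 0 only,
    a circular arc; otherwise a clothoid.  Crude Euler integration.
    """
    l = np.linspace(0.0, length, n)
    phi = C0 * l + 0.5 * C1 * l**2     # heading angle along the lane
    dl = length / (n - 1)
    x = np.cumsum(np.sin(phi)) * dl    # lateral offset
    z = np.cumsum(np.cos(phi)) * dl    # forward distance
    return x, z
```

This makes concrete why a single pair (C_0, C_1) covers all three alignment elements required by the design specifications.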
As shown in Figure 2, a pixel coordinate system is established with O_1 as the origin and O_c as the image center. Let d_x and d_y be the physical sizes of a pixel in the x and y axis directions respectively, let β denote the angle between the vehicle direction and the lane direction, and let d_L denote the distance vector from the projection of the image center point onto the road surface to the left lane boundary line. Considering that the roll angle γ of the vehicle-mounted camera, the lane curvature, and the rate of change of the lane curvature are all small, the curve equation of the left lane boundary line in the pixel coordinate system, also called the left lane boundary projection model, can be obtained from the above formula by coordinate transformation:

b_1L(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c = 0
where the parameters b_1L, b_0, b_-1 and b_-2 are obtained from C_0, C_1, β, d_L, h_c and the camera parameters through the coordinate transformation. In the same way, the curve equation of the right lane boundary line, also called the right lane boundary projection model, is obtained:

b_1R(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c = 0
where d_R denotes the distance vector from the projection of the image center point on the road surface to the right lane boundary line. Define B = (b_1L, b_1R, b_0, b_-1, b_-2)^T as the lane boundary projection model parameter vector; the lane markings in the lane image are then determined by the parameter vector B, whose value range is (-5, 0, 0, -2000, -3000)^T < B < (0, 5, 300, 2000, 3000)^T.
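Given a parameter vector B, either boundary projection curve can be evaluated row by row, for example to draw the detected lane markings in the image. A minimal sketch with illustrative names and parameter values:

```python
import numpy as np

def boundary_curve_cols(B, rows, r_h, side="left"):
    """Column coordinate c(r) of a lane-boundary projection curve.

    Evaluates c = b1*(r - r_h) + b0 + bm1*(r - r_h)**-1 + bm2*(r - r_h)**-2
    for image rows below the horizon r_h, with b1 = b1L or b1R depending
    on `side`.  B = (b1L, b1R, b0, bm1, bm2).
    """
    b1l, b1r, b0, bm1, bm2 = B
    b1 = b1l if side == "left" else b1r
    dr = np.asarray(rows, dtype=float) - r_h
    return b1 * dr + b0 + bm1 / dr + bm2 / dr**2
```

With bm1 = bm2 = 0 the curve degenerates to a straight line in the image, consistent with a straight lane; the inverse-power terms bend the curve near the horizon to represent arcs and clothoids.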
Unlike the lane models used in existing lane detection technology, the lane boundary projection model of the embodiment of the invention is a true description of the actual shape of the lane boundary line. Using it not only improves the accuracy of lane detection and effectively suppresses the interference of non-lane-marking edges, but also allows the lane curvature C_0, the rate of change of the lane curvature C_1, the angle β between the vehicle direction and the lane direction, and the distances d_L and d_R from the vehicle to the left and right lane boundary lines to be estimated from the lane boundary projection model parameter values.
Using the lane boundary projection model described in the embodiment of the invention, the general flowchart of the embodiment is shown in Figure 3, and its implementation is as follows:
Step 1: adaptively extract edge points from the part of the road image below the horizon according to the pixel gradient magnitude, and calculate the edge directions. Specifically, the flowchart is shown in Figure 4, and the implementation and principle are as follows:
Step 1.1: divide the image into an upper part and a lower part with the horizon as the boundary. The visual information of the lane exists only on the road surface, and the horizon in the image can be calculated in advance from the intrinsic and extrinsic parameters of the vehicle-mounted camera; in order to reduce the amount of computation and avoid processing useless information in the image, lane detection needs to process only the information related to the road surface, i.e. the part of the image below the horizon. Specifically, calculate

r_H = r_O - (f_c / d_y) tan α

where r_O denotes the ordinate of the image center point, d_y denotes the physical size of a pixel in the vertical direction, f_c denotes the focal length of the vehicle-mounted camera, and α denotes the pitch angle of the vehicle-mounted camera; the image is divided into the upper and lower parts with the horizon (j, r_H) as the boundary, where j = 0, 1, ..., N, and N denotes the width of the image in pixels;
Step 1.2: compute the gradient magnitude G_m(c, r) of each pixel in the part of the image below the horizon, where (c, r) denotes the coordinates of a pixel below the horizon. Specifically, for the pixels with r ≥ r_H, use the isotropic Sobel operator to calculate the gradient magnitudes G_x(c, r) and G_y(c, r) in the horizontal and vertical directions, and then compute the gradient magnitude

G_m(c, r) = sqrt(G_x(c, r)^2 + G_y(c, r)^2);
Step 1.3, edge calculation threshold value G
MthIn order to extract the traffic lane line edge, edge threshold need be set; Because the variation of weather, sunlight, environment, the traffic lane line edge extracting method of built-in edge threshold value can not adapt to various variations; When threshold value is higher, can't extract wearing and tearing with dim environment under the traffic lane line edge; When threshold value is low, some noises and interference can be mistakened as edge extracting; Therefore, different carriageway images need use different edge threshold, and its concrete computing method are as follows:
Wherein M, N distinguish the pixel value of presentation video height and width, w
GBe coefficient, its span is 0.1≤w
G≤ 1.5, particularly, w
GValue is 0.6;
Step 1.4: compare the gradient magnitude G_m(c, r) of each pixel below the horizon in the image with the edge threshold G_mth; if it is greater than the edge threshold, the pixel is an edge point, and calculate the edge direction θ(c, r) of the edge point; specifically, θ(c, r) = arctan[G_y(c, r) / G_x(c, r)];
Step 2: according to the extracted edge points, the edge directions, and the lane boundary projection model, search for and locate the lane boundary in the parameter vector space of the lane boundary projection model using particle swarm optimization. Specifically, the flowchart is shown in Figure 5, and the implementation is as follows:
Step 2.1: set the feasible region of the lane boundary projection model parameter vector B, the particle swarm size m, the maximum particle flying speed V_max = (v_1Lmax, v_1Rmax, v_0max, v_-1max, v_-2max)^T, and the maximum number of search iterations Iter_max, where the value range of V_max is (1, 1, 100, 1500, 2000)^T ≤ V_max ≤ (5, 5, 500, 2500, 3000)^T, the value range of m is 10 ≤ m ≤ 60, and the value range of Iter_max is 30 ≤ Iter_max ≤ 100; specifically, (-5, 0, 0, -2000, -3000)^T < B < (0, 5, 300, 2000, 3000)^T, V_max = (5, 5, 300, 600, 600)^T, m = 30, and Iter_max = 50;
Step 2.2: randomly initialize each parameter vector particle position B_i and speed V_i within the feasible region of the lane boundary projection model parameter vector, where i denotes the index of the parameter vector particle; set the historical best position of each parameter vector particle P_i = B_i; then calculate the lane boundary curve confidence F(B) of each parameter vector particle according to the edge point distances and edge directions, compare the confidences of all particles, and take the historical best position corresponding to the maximum confidence as the historical best position G of the parameter vector swarm. Specifically, the lane boundary curve confidence F(B) is computed as

F(B) = exp[-(b_1R - b_1L - μ_W)^2 / (2σ_W^2)] · Σ_{(c,r)∈U} exp[-D^2(c, r) / (2σ_D^2)] · exp[-Δθ^2(c, r) / (2σ_θ^2)]

where D(c, r) denotes the distance from the edge point (c, r) to the lane boundary curve, Δθ(c, r) denotes the angle between the edge direction of the edge point (c, r) and the lane boundary curve, and U denotes the neighborhood of the lane boundary curve, whose radius has the value range [2, 30]; specifically, the radius of U is 15. μ_W and σ_W^2 denote the mean and variance of (b_1R - b_1L), σ_D^2 denotes the variance of D(c, r), and σ_θ^2 denotes the variance of Δθ(c, r), with value ranges determined by the lane width W_lane, the physical sizes d_x and d_y of a pixel in the horizontal and vertical directions respectively, and the height h_c of the vehicle-mounted camera above the ground; specifically, μ_W = 1.3.
The principle of the above confidence calculation is as follows. According to highway engineering technical standards, the width of a structured lane is basically consistent and can serve as a constraint for lane detection; since the observed value of the lane width follows a normal distribution, the first factor on the right-hand side of the confidence formula can be constructed. For an edge point (c, r) in the image, whether it belongs to the lane boundary curve depends on two attributes, the distance from (c, r) to the lane marking and the edge direction of (c, r), so the probability that an edge point belongs to the lane marking is proportional to the corresponding exponential terms; summing over the edge points in the neighborhood U of the lane boundary curve yields the second factor of the formula. Therefore the lane boundary curve confidence F(B) is proportional to the similarity between the lane boundary curve of parameter vector B and the lane markings in the image;
In the calculation of the lane boundary curve confidence F(B), the squared distance D^2(c, r) and the angle Δθ(c, r) are as shown in Figure 2; specifically, the distances to the left and right boundary curves are

D_L(c, r) = |[b_1L(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c] cos ψ_L|

D_R(c, r) = |[b_1R(r - r_H) + b_0 + b_-1(r - r_H)^-1 + b_-2(r - r_H)^-2 - c] cos ψ_R|

where ψ is the normal direction of the lane boundary curve, computed as

ψ_L = arctan[-b_1L + b_-1(r - r_H)^-2 + 2b_-2(r - r_H)^-3]

ψ_R = arctan[-b_1R + b_-1(r - r_H)^-2 + 2b_-2(r - r_H)^-3];
Step 2.3: calculate and update each parameter vector particle's speed V_i and position B_i according to

V_i(k+1) = w(k) V_i(k) + c_1 r_1 [P_i(k) - B_i(k)] + c_2 r_2 [G(k) - B_i(k)]

B_i(k+1) = B_i(k) + V_i(k+1)

where k denotes the search iteration number; V_i(k), B_i(k), P_i(k) and G(k) denote the speed, position, and historical best position of the i-th particle and the historical best position of the swarm at the k-th iteration, respectively; c_1 and c_2 are constants with the value range (0, 4], and specifically c_1 and c_2 are both taken as 2; r_1 and r_2 are random numbers on the interval (0, 1); and i = 1, 2, ..., m.
Step 2.4: check the speed and position of each parameter vector particle; limit the speed of any particle that exceeds the maximum speed by setting its speed to the maximum speed, and return any particle that crosses the boundary by randomly resetting its position within the feasible region. The principle is that in the process of self-cognition and social learning a particle's behavior may become excessive, its speed too fast or its position flying out of the feasible region, which reduces the search efficiency of the swarm; the excessive behavior therefore needs to be corrected to improve global search efficiency;
Step 2.5: calculate the lane boundary curve confidence F(B) of each parameter vector particle according to the edge points and edge directions, and by comparison update the historical best position P_i of each particle and the historical best position G of the parameter vector swarm;
Step 2.6: compare the search iteration number k with the maximum number of iterations Iter_max; if k is less than Iter_max, go to step 2.3, otherwise go to step 2.7;
Step 2.7: output the lane boundary curve corresponding to the historical best position of the parameter vector swarm; specifically, draw the lane markings in the detected road image according to the historical best position of the parameter vector swarm and the lane boundary projection model;
Step 3: calculate the planar geometry of the host lane and the position and heading deviation angle of the vehicle in the lane according to the historical best position G of the parameter vector swarm and the calculation formulas for the lane structure parameters and the vehicle heading parameters. Specifically, from the correspondence described above between the lane boundary projection model parameters and C_0, C_1, β, d_L and d_R, and from the historical best position G of the parameter vector swarm, calculate the curvature C_0 of the host lane, the rate of change of the curvature C_1, the distances d_L and d_R from the vehicle to the left and right lane boundaries, and the heading deviation angle β.
Fig. 6 to Fig. 9 show the lane detection results of the lane boundary detection method of the embodiment of the invention under different environmental conditions. It can be seen from the results that the invention can not only adapt to various lane alignments and lane markings as well as weather and illumination changes, but also effectively reduce the influence of shadows, glare, other vehicles, road surface cracks, and worn lane markings on lane boundary detection, accurately locate the lane boundary, and thereby estimate the planar geometry of the lane and the position and direction of the vehicle in the lane.