CN102184535B - Method for detecting boundary of lane where vehicle is - Google Patents
Method for detecting boundary of lane where vehicle is Download PDFInfo
- Publication number
- CN102184535B CN102184535B CN 201110094319 CN201110094319A CN102184535B CN 102184535 B CN102184535 B CN 102184535B CN 201110094319 CN201110094319 CN 201110094319 CN 201110094319 A CN201110094319 A CN 201110094319A CN 102184535 B CN102184535 B CN 102184535B
- Authority
- CN
- China
- Prior art keywords
- lane boundary
- lane
- parameter vector
- vehicle
- track
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Images
Landscapes
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for detecting a boundary of a lane where a vehicle is, which comprises the steps of: first adaptively extracting edge points for the part below a horizon of a road image according to pixel gradient amplitudes and calculating an edge direction, then searching and positioning a lane boundary by utilizing the particle swarm optimization in a parameter vector space of a lane boundary projection model according to the extracted edge points, the edge direction and the lane boundary projection model; finally, calculating a plane geometric structure of the lane wherein the vehicle is, a position of the vehicle in the lane and an angle of deviation according to the searched lane boundary projection model parameter values and a parameter calculation method. The method can reflect an actual lane boundary curve more accurately, has better environment self-adaption capability, enhances the robustness of the lane detection, and has high reliability and strong antijamming capability.
Description
Technical field
The invention belongs to intelligent vehicle environment sensing technical field, relate to a kind of lane boundary detection method.
Background technology
In the intelligent transportation system, the lane detection technology is the important component part of intelligent vehicle environment sensing technology, is mainly used in Vehicular intelligent cruise control, lateral direction of car control, deviation warning, vehicle autonomous driving etc.Vision is that human steering vehicle uses maximum environment sensing modes, most vehicles are all right to be sailed on structured road, and structured road take the form of manward's vision, visually-perceptible is one of the most effective and important techniques means of lane detection.
In existing lane detection technology based on vision, can be summarized as three important component parts at the lane detection method of structured road: definition track model, extract the track feature, serve as that constraint is according to the feature location track of extracting with the track model.Ding Yi track model mainly contains straight line in the prior art, quadratic polynomial, SPL, circular arc etc., but these models all can't be fully be mapped with actual track curve, can only partly or reflect the track structure approx, therefore utilize them can't accurately detect the lane boundary of different shape, even detected the also accurate position of estimating vehicle in the track, track, the geometry on direction and plane, track, as track curvature, the rate of change of track curvature etc., and these data are to the autonomous driving of vehicle, the control of intelligent cruise and decision-making are very important.For the track feature, prior art has mainly been utilized the texture, color on edge, gradient, the road surface of traffic lane line etc.Wherein gradient, texture, color characteristic are vulnerable to the influence of shade, illumination, Changes in weather, and the traffic lane line edge feature is relatively stable, has better anti-interference.But in the existing lane detection technology, the edge threshold that is used for edge extracting is fixed, and road illumination changes, and the traffic lane line edge may can't extract because of the influence that track light changes, thereby can't detect the track.For the edge that extracts, utilize methods such as classification, cluster, neural network that marginal point is carried out match usually in the existing lane detection technology and locate lane boundary.Owing in the process of extracting the traffic lane line edge, have other edge also can be extracted out, therefore before the marginal point match, need filtering interfering, otherwise can influence the accurate location of lane boundary, but filtering interfering can increase the complexity of lane detection method to a certain extent again, and does not also have effective method to address this problem in the prior art.
On the other hand, the vehicle situation of road in the process of moving is complicated and changeable, so lane detection needs to adapt to various traffic lane lines, road illumination and complex environment.These complicacy and variation mainly show three aspects.The one, relevant with the traffic lane line form of expression, as the track is linear straight line, circular arc and clothoid etc. are arranged; Color adularescent and the yellow of traffic lane line; The type of traffic lane line has single solid line, double solid line, dotted line etc.The 2nd, relevant with road illumination, as the shade of roadside buildings thing, branch leaf and other vehicle, the brightness of traffic lane line under different weather and the light conditions, the dazzle that the sunburst reflection causes, traffic lane line is smudgy etc. because of wearing and tearing, and these can have influence on the observability of traffic lane line.The 3rd, relevant with condition of road surface, as breakage and the crackle on road surface, the difference of road surface material, the blocking etc. of other driving vehicle on the road, these can disturb the detection of traffic lane line.Lane detection under these complexity and mal-condition is very difficult, and prior art also can't solve fully.
Find that by prior art documents Y.Zhou etc. are published in " Measurement Science﹠amp; Technology " papers " A robust lane detection and tracking method based on computer vision " of 2006 years 17 volumes, adopt the projection model in straight line and circular arc track, utilize gradient feature and the tabu search algorithm search location lane boundary of pixel.Though the described technical method of the document can be to various traffic lane lines, at road the lane detection of carrying out under the situations such as shade, road surface breakage and crackle, vehicle block is arranged, also exist weak point.The one, this technical method only can detection of straight lines and the lane boundary of circular arc, can't accurately detect the linear lane boundary of rondo; The 2nd, this technical method can't detect the track planar geometry fully; The 3rd, this technical method can't effectively detect traffic lane line because of the ambiguous lane boundary of wearing and tearing; The 4th, the lane boundary that this technical method can't adapt under the dusk light conditions detects.
Summary of the invention
Can't detect lane boundary and track planar geometry information can't be provided under complicated and mal-condition in order to overcome prior art, the position of vehicle in the track and the deficiency of direction, the invention provides a kind of vehicle place lane boundary detection method, this method not only can detect the border in track, vehicle place under the Ordinary Rd environment, and can under complicated and abominable road environment, detect vehicle place lane boundary, as road shade is arranged, crackle, vehicle, sunlight reflected, rather dark, traffic lane line wearing and tearing etc., and can calculate the geometrical structure parameter value in track, position and the direction of vehicle in the track.
The technical solution adopted for the present invention to solve the technical problems may further comprise the steps:
Step 1: according to the pixel gradient amplitude adaptively to the extracting section marginal point below the road image local horizon, and edge calculation direction;
Step 2: according to the marginal point, edge direction and the lane boundary projection model that extract, in the parameter vector space utilization particle group optimizing search location of lane boundary projection model lane boundary;
Step 3: calculate track, vehicle place planar geometry, position and the angle of deviation of vehicle in the track according to the lane boundary projection model parameter value that searches and calculation of parameter formula.
Described step 1 may further comprise the steps:
Step 1.1: determine horizontal position in the image, calculate
R wherein
OThe ordinate of presentation video central point, d
yPhysical size on the expression pixel vertical direction, f
cThe focal length of expression vehicle-mounted vidicon, α represents the pitch angle of vehicle-mounted vidicon, with local horizon (j, r
H) for the boundary is divided into up and down two parts with image, j=0 wherein, 1 ..., N, the pixel value of N presentation video width;
Step 1.2: the local horizon is with the gradient magnitude of each pixel of lower part in the computed image
Wherein (c, the r) coordinate of the following partial pixel point in local horizon in the presentation video, G
x(c, r), G
y(its computing method are respectively for c, the r) gradient magnitude on the following partial pixel point level in local horizon and the vertical direction in the difference presentation video
F (c, r) local horizon following partial pixel point (c, pixel value r) in the presentation video wherein;
Step 1.3: calculate the threshold value of extracting the edge
Wherein M, N distinguish the pixel value of presentation video height and width, w
GBe coefficient, its span is 0.1≤w
G≤ 1.5;
Step 1.4: with local horizon in the image with the gradient magnitude of each pixel of lower part and edge threshold relatively, if gradient magnitude is greater than edge threshold, then this pixel be marginal point and edge calculation point edge direction θ (c, r)=arctan[G
y(c, r)/G
x(c, r)].
Described step 2 may further comprise the steps:
Step 2.1: drop shadow curve's equation of left and right boundary line, track, namely the lane boundary curvilinear equation in the image is also referred to as left and right border, track projection model and is respectively
b
1L(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
b
1R(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
Wherein c represents pixel horizontal ordinate, b
1L, b
1R, b
0, b
-1And b
-2Be the parameter of lane boundary drop shadow curve, and definition B=(b
1L, b
1R, b
0, b
-1, b
-2)
TBe lane boundary projection model parameter vector, the feasible region (5,0,0 ,-2000 ,-3000) of lane boundary projection model parameter vector B is set
T<B<(0,5,300,2000,3000)
T, beta particle group's big or small m, the maximum flying speed V of particle
Max=(v
1Lmax, v
1Rmax, v
0max, v
-1max, v
-2max)
T, and maximum search iterations Iter
Max, V wherein
MaxSpan be (1,1,100,1500,2000)
T≤ V
Max≤ (5,5,500,2500,3000)
T, the span of m is 10≤m≤60, Iter
MaxSpan be 30≤Iter
Max≤ 100;
Step 2.2: each parameter vector particle position B of random initializtion in the feasible region of lane boundary projection model parameter vector B
i, speed V
i, wherein i represents the sequence number of parameter vector particle, and the historical desired positions P of each parameter vector particle is set
i=B
i, calculate the lane boundary curve degree of confidence F (B) of each parameter vector particle then according to marginal point and edge direction, and maximum confidence corresponding historical desired positions G in the degree of confidence size of all particles and then the vectorial population that gets parms relatively;
Step 2.3: be calculated as follows and upgrade each parameter vector particle's velocity Vi and position Bi,
V
i(k+1)=w(k)V
i(k)+c
1r
1[P
i(k)-B
i(k)]+c
2r
2[G(k)-B
i(k)]
B
i(k+1)=B
i(k)+V
i(k+1)
Wherein k is the search iteration number of times, V
i(k), B
i(k), P
i(k) and the historical desired positions of G (k) i parameter vector particle's velocity, position, historical desired positions and population when representing the k time iteration respectively, c
1, c
2The expression constant, its span be (0,4], r
1, r
2Be the random number on (0,1) interval,
I=1,2 ..., m;
Step 2.4: check parameter vector particle's velocity and position, the parameter vector particle that surpasses maximal rate is carried out speed limit, it is maximal rate that its speed is set, and the parameter vector particle that crosses the border is carried out the position return, but its position is set in row space at random;
Step 2.5: calculate the lane boundary curve degree of confidence F (B) of each parameter vector particle according to marginal point and edge direction, and relatively upgrade the historical desired positions P of each parameter vector particle
iHistorical desired positions G with the parameter vector population;
Step 2.6: with search iteration number of times k and maximum iteration time Iter
MaxCompare, if k is less than Iter
MaxForward step 2.3 to, otherwise forward step 2.7 to;
Step 2.7: with the lane boundary curve output of the historical desired positions G correspondence of parameter vector population.
The computing method of described lane boundary curve degree of confidence F (B) are
Wherein D (c, r) the expression marginal point (c, r) to the distance of lane boundary curve,
(U represents the lane boundary neighborhood of a curve to the expression marginal point for c, the angle of edge direction r) and lane boundary curve, and the span of its radius is [2,30], μ
W,
Represent (b respectively
1R-b
1L) average and variance, D (c, variance r),
Variance, its span is respectively
W wherein
LaneThe expression lane width, d
x, d
yRepresent respectively pixel in the horizontal direction with vertical direction on physical size, h
cThe expression vehicle-mounted vidicon is apart from the height on ground.
In the computing method of described lane boundary curve degree of confidence F (B), marginal point (c, r) to the distance B of lane boundary curve (c, r) and marginal point (c, the angle of edge direction r) and lane boundary curve
Computing method be respectively
Wherein
D
L(c,r)=|[b
1L(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c]cosψ
L|
D
R(c,r)=|[b
1R(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c]cosψ
R|
Wherein
ψ
L=arctan[-b
1L+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]
ψ
R=arctan[-b
1R+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]
Described step 3 may further comprise the steps:
Be calculated as follows the curvature C in track, vehicle place respectively according to the historical desired positions G of parameter vector population
0Rate of change C with track curvature
1:
According to the historical desired positions G of parameter vector population be calculated as follows respectively the angle of deviation β of vehicle in the track, vehicle to left and right boundary line, track apart from d
LAnd d
R:
Wherein γ is the roll angle of vehicle-mounted vidicon.
The invention has the beneficial effects as follows:
1) track of the present invention model meets the projection of track horizontal alignment in image of structuring track design specifications regulation, has taken into full account and utilized the geometry feature of lane line, can more accurately reflect actual lane boundary curve;
2) the present invention only carries out edge and edge direction extraction to ground image, regulate by step 1.2 and step 1.3 edge threshold self-adaptation, the road environment that can adapt to different light and light and shade, as fine day, cloudy day, night, reflective, traffic lane line wearing and tearing etc., has good environment self-adaption ability;
3) according to distance and direction, the present invention has only calculated the marginal point that has certain similarity with lane line, has effectively avoided the interference at non-traffic lane line edge, as shade, road surface crackle, other vehicle etc., strengthened the robustness of lane detection, method reliability height, antijamming capability is strong.
The present invention is further described below in conjunction with drawings and Examples.
Description of drawings
Fig. 1 is world coordinate system and the camera coordinate system synoptic diagram of the described lane line of the embodiment of the invention;
Fig. 2 is that image coordinate system and the pixel coordinate of the described lane line of the embodiment of the invention is synoptic diagram;
Fig. 3 is the general flow chart of the described vehicle of embodiment of the invention place lane boundary detection method;
Fig. 4 is the described method flow diagram to the extracting section marginal point below the road image local horizon and edge calculation direction of the embodiment of the invention;
Fig. 5 is that the embodiment of the invention is described according to the marginal point, edge direction and the lane boundary projection model that extract, at the method flow diagram of lane boundary projection model parameter vector space search location lane boundary;
Fig. 6 is the lane boundary testing result of the embodiment of the invention when on the road surface shade being arranged;
Fig. 7 is the lane boundary testing result of the embodiment of the invention when having other vehicle to block in road;
Fig. 8 is the lane boundary testing result of embodiment of the invention when crackle, shade and traffic lane line wearing and tearing are arranged on the road surface;
Fig. 9 is the lane boundary testing result of the embodiment of the invention when backlight and dusk.
Embodiment
In the embodiment of the invention video camera is installed on vehicle roof axis front position, camera lens is over against vehicle front, after the camera parameters demarcation, vehicle is along lanes, the real-time collection vehicle road ahead of vehicle-mounted vidicon image uses the inventive method that this car place lane boundary in the road image is detected.
Lane boundary for structured road detects, and can adopt certain lane boundary model.The degree of accuracy that the lane boundary that not only can improve realistic lane boundary model detects, and can estimate the track planar geometry, as track curvature, curvature variation, and position and the direction etc. of vehicle in the track.The concrete principles illustrated of lane boundary model of the embodiment of the invention is as follows.
According to specification of the highway route design, plane figure of highway should be made up of straight line, circular curve, three kinds of key elements of clothoid.Because the road surface is essentially the plane in visual range, so track curvature C can be expressed as follows with the variation relation of track length l:
C(l)=C
0+C
1l
C in the formula
0Be the curvature of track at the viewpoint place, C
1Rate of change for track curvature.According to C
0And C
1Different values, the above-mentioned relation formula both can be represented clothoid, also can represent straight line and circular arc.As shown in Figure 1, set up world coordinate system O
wX
wY
wZ
w, coordinate plane O wherein
wX
wZ
wBe parallel to the road surface, coordinate axis O
wZ
wBe parallel to lane line at the tangent line of viewpoint corresponding point position, initial point O
wApart from road surface and vehicle-mounted vidicon coordinate system O
cX
cY
cZ
cInitial point O
cWith high, highly be h
cConsider vehicle along lanes, the angle of direction of traffic and track direction is less, for boundary line, the left side, track, following track equation is arranged
As shown in Figure 2, set up pixel coordinate system, O
1Be initial point, O
cBe image center.If d
x, d
yBe respectively the physical size of pixel on x, y direction of principal axis, β represents the angle of direction of traffic and track direction, d
LThe distance vector that project to track left side boundary line of presentation video central point on the road surface.Roll angle γ, track curvature and the track curvature variation of considering vehicle-mounted vidicon are all smaller, by coordinate system transformation, can get boundary line, the left side, track curvilinear equation in the pixel coordinate system by following formula, are also referred to as track left margin projection model:
b
1L(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
Wherein
In like manner can get boundary line, the right, track curvilinear equation, be also referred to as track right margin projection model
b
1R(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
Wherein
D wherein
RThe presentation video central point is at the distance vector that projects to boundary line, the right, track on road surface.Definition B=(b
1L, b
1R, b
0, b
-1, b
-2)
TBe lane boundary projection model parameter vector, then the lane line in the carriageway image can determine that its span is (5,0,0 ,-2000 ,-3000) by parameter vector B
T<B<(0,5,300,2000,3000)
T
Different with the track model that uses in the existing lane detection technology, the lane boundary projection model of the embodiment of the invention is the true description to actual lane boundary wire shaped, use it not only can improve the degree of accuracy of lane detection, effectively suppress the interference of non-lane line, and can estimate track curvature C respectively according to lane boundary projection model parameter value
0, track curvature variation C
1, direction of traffic and track direction angle β, vehicle from boundary line, the left and right sides, track apart from d
LAnd d
R, its computing method are as follows respectively:
Utilize the described lane boundary projection model of the embodiment of the invention, the general flow chart of the embodiment of the invention as shown in Figure 3, its embodiment is as follows:
Step 1, according to the pixel gradient amplitude adaptively to the extracting section marginal point below the road image local horizon, and edge calculation direction; Particularly, its process flow diagram as shown in Figure 4, its embodiment and principle are as described below:
Step 1.1 is that the boundary is divided into two parts up and down with image with the local horizon; Because the visual information in track only exists only in the surface of road, and the local horizon in the image can calculate in advance by the inside and outside parameter of vehicle-mounted vidicon, in order to reduce calculated amount, avoid the processing to garbage in the image, lane detection only need be handled information relevant with the road surface in the road image, i.e. the following part in local horizon in the image; Particularly, calculate
R wherein
OThe ordinate of presentation video central point, d
yPhysical size on the expression pixel vertical direction, f
cThe focal length of expression vehicle-mounted vidicon, α represents the pitch angle of vehicle-mounted vidicon, with local horizon (j, r
H) for the boundary is divided into up and down two parts with image, j=0 wherein, 1 ..., N, the pixel value of N presentation video width;
Step 1.2, the local horizon is with the gradient magnitude G of each pixel of lower part in the computed image
m(c, r), wherein (c, r) coordinate of the following partial pixel point in local horizon in the presentation video; Particularly, to r 〉=r
HPixel use isotropy Sobel operator to calculate the gradient magnitude G of its horizontal direction and vertical direction
x(c, r) and G
y(c, r), compute gradient amplitude G then
m(c, r), its concrete computing method are as follows respectively:
Step 1.3, edge calculation threshold value G
MthIn order to extract the traffic lane line edge, edge threshold need be set; Because the variation of weather, sunlight, environment, the traffic lane line edge extracting method of built-in edge threshold value can not adapt to various variations; When threshold value is higher, can't extract wearing and tearing with dim environment under the traffic lane line edge; When threshold value is low, some noises and interference can be mistakened as edge extracting; Therefore, different carriageway images need use different edge threshold, and its concrete computing method are as follows:
Wherein M, N distinguish the pixel value of presentation video height and width, w
GBe coefficient, its span is 0.1≤w
G≤ 1.5, particularly, w
GValue is 0.6;
Step 1.4 is with the gradient magnitude G of local horizon in the image with each pixel of lower part
m(c is r) with edge threshold G
MthCompare, if greater than edge threshold, then be marginal point and edge calculation point edge direction θ (c, r), particularly, θ (c, r)=arctan[G
y(c, r)/G
x(c, r)];
Step 2 is according to the marginal point, edge direction and the lane boundary projection model that extract, in the parameter vector space utilization particle group optimizing search location of lane boundary projection model lane boundary; Particularly, its process flow diagram as shown in Figure 5, embodiment is as described below:
Step 2.1 arranges feasible region, beta particle group's big or small m, the maximum flying speed V of particle of lane boundary projection model parameter vector B
Max=(v
1Lmax, v
1Rmax, v
0max, v
-1max, v
-2max)
T, and maximum search iterations Iter
Max, V wherein
MaxSpan be (1,1,100,1500,2000)
T≤ V
Max≤ (5,5,500,2500,3000)
T, the span of m is 10≤m≤60, Iter
MaxSpan be 30≤Iter
Max≤ 100, particularly, (5,0,0 ,-2000 ,-3000)
T<B<(0,5,300,2000,3000)
T, V
Max=(5,5,300,600,600)
T, m=30, Iter
Max=50;
Step 2.2, each parameter vector particle position B of random initializtion in the feasible region of lane boundary projection model parameter vector
i, speed V
i, wherein i represents the sequence number of parameter vector particle, and the historical desired positions P of each parameter vector particle is set
i=B
i, calculate the lane boundary curve degree of confidence F (B) of each parameter vector particle then according to marginal point distance, edge direction, and maximum confidence corresponding historical desired positions G in the degree of confidence size of all particles and then the vectorial population that gets parms relatively; Particularly, the computing method of lane boundary curve degree of confidence F (B) are
Wherein D (c, r) the expression marginal point (c, r) to the distance of lane boundary curve,
(U represents the lane boundary neighborhood of a curve to the expression marginal point for c, the angle of edge direction r) and lane boundary curve, and the span of its radius is [2,30], and particularly, the radius of U is 15, μ
W,
Represent (b respectively
1R-b
1L) average and variance, D (c, variance r),
Variance, its span is respectively
W wherein
LaneThe expression lane width, d
x, d
yRepresent respectively pixel in the horizontal direction with vertical direction on physical size, h
cRepresent vehicle-mounted vidicon apart from the height on ground, particularly, μ
W=1.3,
The concrete principle of the computing method of above-mentioned lane boundary curve degree of confidence F (B) is as described below; According to the highway engineering technical manual, the width basically identical in structuring track, its constraint condition as lane detection again because the observed reading of lane width meets normal distribution, therefore can be able to be constructed the first half of lane boundary curve confidence calculations formula equal sign the right formula.For the marginal point in the image (c, r), whether it belongs to the lane boundary curve, has two attributes to consider, marginal point (c, r) to the distance of lane line and marginal point (c, edge direction r), so the marginal point probability that belongs to lane line with
Be directly proportional; Observe the marginal point in the lane boundary neighbourhood of a curve U, can construct the latter half of following formula equal sign the right formula; Therefore, lane boundary curve degree of confidence F (B) is proportional to the similarity of lane line in the lane boundary curve of parameter vector B and the image;
In the calculating of described lane boundary curve degree of confidence F (B), D
2(c, r) and
As shown in Figure 2, particularly, its computing method are respectively
Wherein
Wherein ψ is lane boundary normal to a curve direction, and its computing method are specially
ψ
L=arctan[-b
1L+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]
ψ
R=arctan[-b
1R+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]
Step 2.3 is calculated each parameter vector particle's velocity V of renewal according to following formula
iWith position B
i:
V
i(k+1)=w(k)V
i(k)+c
1r
1[P
i(k)-B
i(k)]+c
2r
2[G(k)-B
i(k)]
B
i(k+1)=B
i(k)+V
i(k+1)
K represents search iteration number of times, V in the formula
i(k), B
i(k), P
i(k) and the historical desired positions of G (k) i particle's velocity, position, historical desired positions and population when representing the k time iteration respectively, c
1, c
2Be constant, its span be (0,4], particularly, c
1, c
2Value is respectively 2, r
1, r
2Be the random number on (0,1) interval,
I=1,2 ..., m.
Step 2.4 checks parameter vector particle's velocity and position, and the parameter vector particle that surpasses maximal rate is carried out speed limit, and it is maximal rate that its speed is set, and the parameter vector particle that crosses the border is carried out the position return, but its position is set in row space at random; Its principle is, because vector may cause behavior excessive in the process of autognosis and colony's study, the too fast or vectorial feasible region that flies out of vector speed this moment, this can reduce the search efficiency of population of vectors, therefore need correct the excessive behavior of vector, to improve global search efficient;
Step 2.5 is calculated the lane boundary curve degree of confidence F (B) of each parameter vector particle according to marginal point and edge direction, and relatively upgrades the historical desired positions P of each parameter vector particle
iHistorical desired positions G with the parameter vector population;
Step 2.6 is with search iteration number of times k and maximum iteration time Iter
MaxCompare, if less than forwarding step 2.3 to, otherwise forward step 2.7 to;
Step 2.7, with the lane boundary curve output of the historical desired positions correspondence of parameter vector population, particularly, according to the historical desired positions of parameter vector population and the lane boundary projection model lane line that in the road image that detects, draws;
Step 3 is calculated track, vehicle place planar geometry, position and the angle of deviation of vehicle in the track according to the historical desired positions G of parameter vector population and track structural parameters and vehicle heading calculation of parameter formula; Particularly, according to foregoing lane boundary projection model parameter and C
0, C
1, β, d
LAnd d
RCorresponding relation and the historical desired positions G of parameter vector population be calculated as follows track, vehicle place curvature C respectively
0With curvature variation C
1, vehicle from border, the left and right sides, track apart from d
LAnd d
RAnd angle of deviation β:
Fig. 6 to Fig. 9 is the lane detection result of the described vehicle of embodiment of the invention place lane boundary detection method under the varying environment condition, the present invention not only can adapt to various tracks linear and traffic lane line and weather and illumination variation as can be seen from the result, and can effectively reduce the influence that shade, dazzle, other vehicle, road surface crackle, traffic lane line wearing and tearing detect lane boundary, and can accurately locate lane boundary, and then can estimate geometry and position and the direction of vehicle in the track on plane, track.
Claims (2)
1. a vehicle place lane boundary detection method is characterized in that comprising the steps:
Step 1: according to the pixel gradient amplitude adaptively to the extracting section marginal point below the road image local horizon, and edge calculation direction;
Step 2: according to the marginal point, edge direction and the lane boundary projection model that extract, in the parameter vector space utilization particle group optimizing search location of lane boundary projection model lane boundary;
Step 3: calculate track, vehicle place planar geometry, position and the angle of deviation of vehicle in the track according to the lane boundary projection model parameter value that searches and calculation of parameter formula;
Described step 1 may further comprise the steps:
Step 1.1: determine horizontal position in the image, calculate
R wherein
OThe ordinate of presentation video central point, d
yPhysical size on the expression pixel vertical direction, f
cThe focal length of expression vehicle-mounted vidicon, α represents the pitch angle of vehicle-mounted vidicon, with local horizon (j, r
H) for the boundary is divided into up and down two parts with image, j=0 wherein, 1 ..., N, the pixel value of N presentation video width;
Step 1.2: the local horizon is with the gradient magnitude of each pixel of lower part in the computed image
Wherein (c, the r) coordinate of the following partial pixel point in local horizon in the presentation video, G
x(c, r), G
y(its computing method are respectively for c, the r) gradient magnitude on the following partial pixel point level in local horizon and the vertical direction in the difference presentation video
F (c, r) local horizon following partial pixel point (c, pixel value r) in the presentation video wherein;
Step 1.3: calculate the threshold value of extracting the edge
Wherein M, N distinguish the pixel value of presentation video height and width, w
GBe coefficient, its span is 0.1≤w
G≤ 1.5;
Step 1.4: with local horizon in the image with the gradient magnitude of each pixel of lower part and edge threshold relatively, if gradient magnitude is greater than edge threshold, then this pixel be marginal point and edge calculation point edge direction θ (c, r)=arctan[G
y(c, r)/G
x(c, r)];
Described step 2 may further comprise the steps:
Step 2.1: left and right border, track projection model is respectively
b
1L(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
b
1R(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c=0
Wherein c represents pixel horizontal ordinate, b
1L, b
1R, b
0, b
-1And b
-2Be the parameter of lane boundary drop shadow curve, and definition B=(b
1L, b
1R, b
0, b
-1, b
-2)
TBe lane boundary projection model parameter vector, feasible region-5<b of lane boundary projection model parameter vector B is set
1L<0,0<b
1R<5,0<b
0<300 ,-2000<b
-1<2000 ,-3000<b
-2<3000, beta particle group's big or small m, the maximum flying speed v of particle
Max=(v
1Lmax, v
1Rmax, v
0max, v
-1max, v
-2max)
T, and maximum search iterations Iter
Max, V wherein
MaxSpan be 1≤v
1Lmax≤ 5,1≤v
1Rmax≤ 5,100≤v
0max≤ 500,1500≤v
-1max≤ 2500,2000≤v
-2maxThe span of≤3000, m is 10≤m≤60, Iter
MaxSpan be 30≤Iter
Max≤ 100;
Step 2.2: each parameter vector particle position B of random initializtion in the feasible region of lane boundary projection model parameter vector B
i, speed V
i, wherein i represents the sequence number of parameter vector particle, and the historical desired positions P of each parameter vector particle is set
i=B
i, calculate the lane boundary curve degree of confidence F (B) of each parameter vector particle then according to marginal point and edge direction, and maximum confidence corresponding historical desired positions G in the degree of confidence size of all particles and then the vectorial population that gets parms relatively;
Step 2.3: be calculated as follows and upgrade each parameter vector particle's velocity V
iWith position B
i,
V
i(k+1)=w(k)V
i(k)+c
1r
1[P
i(k)-B
i(k)]+c
2r
2[G(k)-B
i(k)]
B
i(k+1)=B
i(k)+V
i(k+1)
Wherein k is the search iteration number of times, V
i(k), B
i(k), P
i(k) and the historical desired positions of G (k) i parameter vector particle's velocity, position, historical desired positions and population when representing the k time iteration respectively, c
1, c
2The expression constant, its span be (0,4], r
1, r
2Be the random number on (0,1) interval,
I=1,2 ..., m;
Step 2.4: check parameter vector particle's velocity and position, the parameter vector particle that surpasses maximal rate is carried out speed limit, it is maximal rate that its speed is set, and the parameter vector particle that crosses the border is carried out the position return, but its position is set in row space at random;
Step 2.5: calculate the lane boundary curve degree of confidence F (B) of each parameter vector particle according to marginal point and edge direction, and relatively upgrade the historical desired positions P of each parameter vector particle
iHistorical desired positions G with the parameter vector population;
Step 2.6: with search iteration number of times k and maximum iteration time Iter
MaxCompare, if k is less than Iter
MaxForward step 2.3 to, otherwise forward step 2.7 to;
Step 2.7: with the lane boundary curve output of the historical desired positions G correspondence of parameter vector population;
The computing method of described lane boundary curve degree of confidence F (B) are
Wherein D (c, r) the expression marginal point (c, r) to the distance of lane boundary curve,
(U represents the lane boundary neighborhood of a curve to the expression marginal point for c, the angle of edge direction r) and lane boundary curve, and the span of its radius is [2,30], μ
W,
Represent (b respectively
1R-b
1L) average and variance, D (c, variance r),
Variance, its span is respectively
W wherein
LaneThe expression lane width, d
x, d
yRepresent respectively pixel in the horizontal direction with vertical direction on physical size, h
cThe expression vehicle-mounted vidicon is apart from the height on ground;
Described step 3 may further comprise the steps:
Be calculated as follows the curvature C in track, vehicle place respectively according to the historical desired positions G of parameter vector population
0Rate of change C with track curvature
1:
According to the historical desired positions G of parameter vector population be calculated as follows respectively the angle of deviation β of vehicle in the track, vehicle to left and right boundary line, track apart from d
LAnd d
R:
Wherein γ is the roll angle of vehicle-mounted vidicon.
2. vehicle according to claim 1 place lane boundary detection method, it is characterized in that: in the computing method of described lane boundary curve degree of confidence F (B), marginal point (c, r) to the distance B (c of lane boundary curve, r) and marginal point (c, the angle of edge direction r) and lane boundary curve
Computing method be respectively
Wherein
D
L(c,r)=|[b
1L(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c]cosψ
L|
D
R(c,r)=|[b
1R(r-r
H)+b
0+b
-1(r-r
H)
-1+b
-2(r-r
H)
-2-c]cosψ
R|
Wherein
ψ
L=arctan[-b
1L+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]
ψ
R=arctan[-b
1R+b
-1(r-r
H)
-2+2b
-2(r-r
H)
-3]。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110094319 CN102184535B (en) | 2011-04-14 | 2011-04-14 | Method for detecting boundary of lane where vehicle is |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110094319 CN102184535B (en) | 2011-04-14 | 2011-04-14 | Method for detecting boundary of lane where vehicle is |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102184535A CN102184535A (en) | 2011-09-14 |
CN102184535B true CN102184535B (en) | 2013-08-14 |
Family
ID=44570705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110094319 Expired - Fee Related CN102184535B (en) | 2011-04-14 | 2011-04-14 | Method for detecting boundary of lane where vehicle is |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102184535B (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103196433A (en) * | 2012-01-10 | 2013-07-10 | 株式会社博思科 | Data analysis device, data analysis method and programme |
CN102663744B (en) * | 2012-03-22 | 2015-07-08 | 杭州电子科技大学 | Complex road detection method under gradient point pair constraint |
DE102013103953B4 (en) * | 2012-05-02 | 2020-07-09 | GM Global Technology Operations LLC | Lane detection at full speed using multiple cameras |
JP5792678B2 (en) * | 2012-06-01 | 2015-10-14 | 株式会社日本自動車部品総合研究所 | Lane boundary detection device and program |
CN102930543B (en) * | 2012-11-01 | 2016-04-20 | 南京航空航天大学 | Based on the searching method of the fire monitor jet path of particle cluster algorithm |
CN103854277A (en) * | 2012-12-02 | 2014-06-11 | 西安元朔科技有限公司 | Marrow nucleated cell edge detection algorithm |
JP6046666B2 (en) * | 2014-06-24 | 2016-12-21 | トヨタ自動車株式会社 | Runway boundary estimation device and runway boundary estimation method |
CN104077756B (en) * | 2014-07-16 | 2017-02-08 | 中电海康集团有限公司 | Direction filtering method based on lane line confidence |
JP6500435B2 (en) * | 2014-12-26 | 2019-04-17 | アイシン精機株式会社 | Parking assistance device |
CN104751151B (en) * | 2015-04-28 | 2017-12-26 | 苏州安智汽车零部件有限公司 | A kind of identification of multilane in real time and tracking |
JP6341191B2 (en) * | 2015-12-16 | 2018-06-13 | トヨタ自動車株式会社 | Information processing device |
MX2018011509A (en) * | 2016-03-24 | 2019-01-10 | Nissan Motor | Travel path detection method and travel path detection device. |
CN105702049A (en) * | 2016-03-29 | 2016-06-22 | 成都理工大学 | DSP-based emergency lane monitoring system and realizing method thereof |
CN107798855B (en) * | 2016-09-07 | 2020-05-08 | 高德软件有限公司 | Lane width calculation method and device |
CN107193888B (en) * | 2017-05-02 | 2019-09-20 | 东南大学 | A kind of urban road network model towards lane grade navigator fix |
CN107273935B (en) * | 2017-07-09 | 2020-11-27 | 北京流马锐驰科技有限公司 | Lane sign grouping method based on self-adaptive K-Means |
JP6838522B2 (en) * | 2017-08-10 | 2021-03-03 | トヨタ自動車株式会社 | Image collection systems, image collection methods, image collection devices, and recording media |
US10373003B2 (en) * | 2017-08-22 | 2019-08-06 | TuSimple | Deep module and fitting module system and method for motion-based lane detection with multiple sensors |
US10816354B2 (en) | 2017-08-22 | 2020-10-27 | Tusimple, Inc. | Verification module system and method for motion-based lane detection with multiple sensors |
CN108062512A (en) * | 2017-11-22 | 2018-05-22 | 北京中科慧眼科技有限公司 | A kind of method for detecting lane lines and device |
CN108052880B (en) * | 2017-11-29 | 2021-09-28 | 南京大学 | Virtual and real lane line detection method for traffic monitoring scene |
CN107845264A (en) * | 2017-12-06 | 2018-03-27 | 西安市交通信息中心 | A kind of volume of traffic acquisition system and method based on video monitoring |
CN110969837B (en) * | 2018-09-30 | 2022-03-25 | 毫末智行科技有限公司 | Road information fusion system and method for automatic driving vehicle |
CN111311902B (en) * | 2018-12-12 | 2022-05-24 | 斑马智行网络(香港)有限公司 | Data processing method, device, equipment and machine readable medium |
CN111247525A (en) * | 2019-01-14 | 2020-06-05 | 深圳市大疆创新科技有限公司 | Lane detection method and device, lane detection equipment and mobile platform |
CN110164179A (en) * | 2019-06-26 | 2019-08-23 | 湖北亿咖通科技有限公司 | The lookup method and device of a kind of parking stall of garage free time |
CN111091126A (en) * | 2019-12-12 | 2020-05-01 | 京东数字科技控股有限公司 | Certificate image reflection detection method, device, equipment and storage medium |
CN113885045A (en) * | 2020-07-03 | 2022-01-04 | 华为技术有限公司 | Method and device for detecting lane line |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405783A (en) * | 2006-03-24 | 2009-04-08 | 丰田自动车株式会社 | Road division line detector |
CN101447019A (en) * | 2007-11-29 | 2009-06-03 | 爱信艾达株式会社 | Image recognition apparatuses, methods and programs |
CN101567086A (en) * | 2009-06-03 | 2009-10-28 | 北京中星微电子有限公司 | Method of lane line detection and equipment thereof |
CN101608924A (en) * | 2009-05-20 | 2009-12-23 | 电子科技大学 | A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform |
CN101620732A (en) * | 2009-07-17 | 2010-01-06 | 南京航空航天大学 | Visual detection method of road driving line |
-
2011
- 2011-04-14 CN CN 201110094319 patent/CN102184535B/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405783A (en) * | 2006-03-24 | 2009-04-08 | 丰田自动车株式会社 | Road division line detector |
CN101447019A (en) * | 2007-11-29 | 2009-06-03 | 爱信艾达株式会社 | Image recognition apparatuses, methods and programs |
CN101608924A (en) * | 2009-05-20 | 2009-12-23 | 电子科技大学 | A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform |
CN101567086A (en) * | 2009-06-03 | 2009-10-28 | 北京中星微电子有限公司 | Method of lane line detection and equipment thereof |
CN101620732A (en) * | 2009-07-17 | 2010-01-06 | 南京航空航天大学 | Visual detection method of road driving line |
Also Published As
Publication number | Publication date |
---|---|
CN102184535A (en) | 2011-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102184535B (en) | Method for detecting boundary of lane where vehicle is | |
US11940290B2 (en) | Virtual stop line mapping and navigation | |
CN111551958B (en) | Mining area unmanned high-precision map manufacturing method | |
CN109186625B (en) | Method and system for accurately positioning intelligent vehicle by using hybrid sampling filtering | |
US12106574B2 (en) | Image segmentation | |
US20230334776A1 (en) | Sensor calibration with environment map | |
CN101975951B (en) | Field environment barrier detection method fusing distance and image information | |
CN101469991B (en) | All-day structured road multi-lane line detection method | |
CN106199558A (en) | Barrier method for quick | |
US20220197301A1 (en) | Vehicle Localization Based on Radar Detections | |
CN102509067A (en) | Detection method for lane boundary and main vehicle position | |
CN104077756A (en) | Direction filtering method based on lane line confidence | |
CN102201054A (en) | Method for detecting street lines based on robust statistics | |
CN105426868A (en) | Lane detection method based on adaptive region of interest | |
US20220196829A1 (en) | Radar Reference Map Generation | |
CN103794050A (en) | Real-time transport vehicle detecting and tracking method | |
Wang et al. | A multi-step curved lane detection algorithm based on hyperbola-pair model | |
Sun et al. | Objects detection with 3-D roadside LiDAR under snowy weather | |
US20240208547A1 (en) | Route planning system and method of self-driving vehicle | |
CN103886331A (en) | Method for classifying appearances of vehicles based on multi-feature fusion of surveillance video | |
US12105192B2 (en) | Radar reference map generation | |
CN110531347A (en) | Detection method, device and the computer readable storage medium of laser radar | |
Wu | Data processing algorithms and applications of LiDAR-enhanced connected infrastructure sensing | |
CN109147328B (en) | Traffic flow detection method based on video virtual coil | |
Zhang et al. | Obstacle Detection and Tracking of Unmanned Mine Car Based on 3D Lidar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130814 Termination date: 20150414 |
|
EXPY | Termination of patent right or utility model |