Summary of the invention
The present invention provides an in-depth analysis of the lane information extraction stages in intelligent vehicle navigation based on vehicle-mounted video. The whole lane information extraction system consists of two parts: a hardware section and a software section.
The overall block diagram of the hardware section is shown in Figure 1. It mainly comprises five parts: a camera unit, a wireless sending module, a wireless receiving module, a video capture card, and a PC. The camera unit is mounted at the front-center position of the intelligent vehicle and captures analog road images. The wireless sending module is connected to the camera unit and transmits the analog image information captured by the camera to the PC. The wireless receiving module is connected to the PC and receives the analog image information sent by the wireless sending module. The video capture card is installed in the PC and converts the analog image information into digital image information convenient for the PC to process. The PC performs the data processing.
The software section reads, in real time, the digital images converted by the video capture card and extracts the lane information. The overall flow chart is shown in Figure 2 and mainly comprises four processes: image coordinate system establishment, road image preprocessing, lane model establishment, and lane information extraction. Road image preprocessing comprises weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm. The lane model establishment stage fits a straight-line model to the left and right lane lines, so that the lane information to be extracted is explicitly k_l, b_l, k_r and b_r, where k_l and b_l are the slope and intercept of the left lane line, and k_r and b_r are the slope and intercept of the right lane line. The lane information extraction algorithm is realized with the widely used Hough transform. The specific implementation process is described below.
S1: Image coordinate system establishment: for the original digital image read from the video capture card, the image coordinate system is established with the lower-left corner of the image as the origin, horizontally to the right as the positive x-axis direction, and vertically upward as the positive y-axis direction.
S2: Road image preprocessing: road image preprocessing mainly comprises weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm. The overall flow chart is shown in Figure 3.
S2.1: Weighted-average grayscale conversion: the weighted-average grayscale algorithm assigns different weights to the red, green and blue components of each point of the original digital image read from the video capture card and takes their weighted average, finally obtaining a grayscale image. The specific calculation formula is as follows:

f_1(x, y) = 0.212671 R(x, y) + 0.715160 G(x, y) + 0.072169 B(x, y)

where f_1(x, y) is the gray value of point (x, y) after grayscale conversion, and R(x, y), G(x, y) and B(x, y) are respectively the red, green and blue components of point (x, y) of the original digital image read from the video capture card.
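The weighted-average grayscale formula above can be sketched in plain Python; representing the frame as nested lists of (R, G, B) tuples is an assumption made for illustration, not the capture-card format used by the invention.

```python
def to_gray(rgb_image):
    """Weighted-average grayscale with the BT.709-style weights from the text."""
    return [[0.212671 * r + 0.715160 * g + 0.072169 * b
             for (r, g, b) in row]
            for row in rgb_image]

# A 1x2 test frame: one white pixel, one black pixel.
frame = [[(255, 255, 255), (0, 0, 0)]]
gray = to_gray(frame)
```

Since the three weights sum to 1, a pure white pixel maps to (approximately) 255 and a black pixel to 0.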
S2.2: Median filtering: the grayscale image is filtered by median filtering, implemented with a 3×3 template. The specific calculation formula is as follows:

f_2(x, y) = med{ f_1(x+i, y+j) | -1 ≤ i ≤ 1, -1 ≤ j ≤ 1 },  x > 0, y > 0

where f_2(x, y) is the gray value of point (x, y) after filtering, med takes the median of the values in braces, and f_1(x, y) is the gray value of point (x, y) after grayscale conversion. In addition, for points with x = 0 or y = 0, the gray value f_1(x, y) is used directly as the filtered gray value f_2(x, y).
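A sketch of the 3×3 median filtering step; as in the text, border points (x = 0 or y = 0, and in this sketch also the far edges) are passed through unchanged.

```python
def median_filter_3x3(img):
    """3x3 median filter; border pixels are copied through unchanged,
    matching the x > 0, y > 0 condition in the text."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out
```

A single bright outlier surrounded by uniform pixels is removed, which is the usual motivation for median filtering on road images.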
S2.3: Contrast enhancement: a gray-stretch algorithm is applied to the filtered image, expanding the image gray values from the range min ≤ f_2(x, y) ≤ max to the range 0 ≤ f_3(x, y) ≤ 255. The specific calculation formula is as follows:

f_3(x, y) = 255 × (f_2(x, y) - min) / (max - min)

where f_3(x, y) is the gray value of point (x, y) after contrast enhancement, f_2(x, y) is the gray value of point (x, y) after filtering, and max and min are the maximum and minimum gray values in the filtered image.
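A sketch of the gray-stretch contrast enhancement; the guard for a flat image (max equal to min) is an added assumption, since the formula would otherwise divide by zero.

```python
def stretch_contrast(img):
    """Linearly stretch gray levels from [min, max] to [0, 255]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in img]
    scale = 255.0 / (hi - lo)
    return [[(v - lo) * scale for v in row] for row in img]
```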
S2.4: Adaptive binarization based on the Otsu algorithm: first, based on the position of the horizon ordinate, the contrast-enhanced image is classified as either a far-field road image or a near-field road image; then adaptive binarization based on the Otsu algorithm is applied to the far-field and the near-field road image respectively.

S2.4.1: Far/near field classification: based on the position of the horizon ordinate, the contrast-enhanced image is classified as a near-field or a far-field road image. The horizon extraction method is: scanning the contrast-enhanced image from top to bottom, count in each row the number n of points whose gray value is greater than 200; if n is less than 50, the count ends and the ordinate y of that row is recorded; y is the position of the horizon ordinate. If y is below a set threshold (a fraction of the image height H), the contrast-enhanced image is judged to be a far-field road image; otherwise it is judged to be a near-field road image.
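The horizon scan described above can be sketched as follows; treating row index 0 as the top of the image is an assumption made for the sketch (the invention's coordinate system has its origin at the bottom-left).

```python
def find_horizon(img, bright=200, min_count=50):
    """Scan rows from top to bottom; return the index of the first row with
    fewer than `min_count` pixels brighter than `bright`."""
    for y, row in enumerate(img):
        n = sum(1 for v in row if v > bright)
        if n < min_count:
            return y
    return len(img) - 1  # no such row: horizon at the bottom
```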
S2.4.2: Adaptive binarization of the far-field road image based on the Otsu algorithm: for the far-field road image, the adaptive binarization formula based on the Otsu algorithm is as follows:

f_4(x, y) = 255 if f_3(x, y) > T, and f_4(x, y) = 0 otherwise

where f_4(x, y) is the gray value of point (x, y) after binarization, f_3(x, y) is the gray value of point (x, y) after contrast enhancement, and T is the optimal threshold obtained by the Otsu algorithm. The flow chart of the Otsu algorithm is shown in Figure 4.
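A self-contained sketch of Otsu threshold selection and the binarization step; the comparison direction (values greater than T map to 255) is an assumption, since the binarization formula itself is not legible in the text.

```python
def otsu_threshold(img):
    """Otsu's method on an 8-bit grayscale image: pick the threshold that
    maximizes the between-class variance."""
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[int(v)] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]            # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(img, t):
    """Assumed direction: gray values above t become 255, the rest 0."""
    return [[255 if v > t else 0 for v in row] for row in img]
```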
S2.4.3: Adaptive binarization of the near-field road image based on the Otsu algorithm: first, target region compensation is applied to the near-field road image. Target region compensation is carried out separately on the upper half and the lower half of the image. The target region compensated in the upper half of the road image is:

The target region compensated in the lower half of the road image is:

where W and H are respectively the width and the height of the image. Target region compensation simply sets the gray values inside the target region to 255.

After compensating the target regions, adaptive binarization based on the Otsu algorithm is applied separately to the upper half and the lower half of the image. The binarization region of the upper half is:

and the gray values of the resulting binary image are denoted f_4-up(x, y). The binarization region of the lower half is:

and the gray values of the resulting binary image are denoted f_4-down(x, y). Finally the two results are fused to obtain the final binary image; the fusion formula takes f_4(x, y) from f_4-up(x, y) for points in the upper half of the image and from f_4-down(x, y) for points in the lower half, where f_4(x, y) is the gray value of point (x, y) after binarization, f_4-up(x, y) is the gray value of point (x, y) after binarization of the upper half of the road image, f_4-down(x, y) is the gray value of point (x, y) after binarization of the lower half of the road image, and W and H are respectively the width and the height of the image.
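The fusion of the two half-image binarization results can be sketched as below, under the assumptions that row index 0 is the top of the image and that the split between f_4-up and f_4-down lies at half the image height H/2 (the original fusion formula is not legible).

```python
def fuse_halves(up, down):
    """Take the upper half of the output from `up` (the binarized upper half)
    and the lower half from `down` (the binarized lower half)."""
    half = len(up) // 2
    return [row[:] for row in up[:half]] + [row[:] for row in down[half:]]
```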
S3: Lane model establishment: establishing a lane model makes explicit the lane information that needs to be extracted. The present invention establishes a straight-line lane model for the left and right lane lines of an expressway, as follows:

u = k_l v + b_l (left lane line), u = k_r v + b_r (right lane line)

where v and u are the abscissa and ordinate of a point on the lane line, k_l and k_r are respectively the slopes of the left and right lane lines, and b_l and b_r are respectively the intercepts of the left and right lane lines. Hence the lane information to be extracted is k_l, b_l, k_r and b_r.
S4: Lane information extraction: the purpose of lane information extraction is to obtain the slopes and intercepts of the left and right lane lines.
S4.1: Feature point screening: feature point screening comprises left lane line feature point screening and right lane line feature point screening; S4.1.1 is the left lane line feature point screening and S4.1.2 is the right lane line feature point screening.
S4.1.1: Left lane line feature point screening comprises initial feature point screening and the screening of the other feature points. The concrete screening process is as follows, where steps S4.1.1.1 and S4.1.1.2 determine the initial feature point, and steps S4.1.1.3 and S4.1.1.4 determine the other feature points.
S4.1.1.1: In the region of interest for left lane line initial feature point screening, R_l = {(x, y) | 0 ≤ x < W/2, 0 ≤ y < H}, search from top to bottom and from right to left. When a white point (a point whose gray value is 255; the same below) is found for the first time, add 2 to the search height and continue searching from right to left. The search height is increased by 2 because of the existence of dashed-line endpoints; dashed lane line endpoints are shown in Figure 5.
S4.1.1.2: When a white point is found again, it is tentatively taken as the initial feature point P, and the number of white points found in the following 50 ordinates is used to judge whether P is a suitable initial feature point. The concrete search range is: increase the height by 1 at each step, with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5; if white_last.x - 5 < 0, the search range becomes 0 < x < white_last.x + 5, where white_last.x is the abscissa of the previously found white point. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points in the following 50 ordinates is greater than 20, P is judged to be a suitable initial feature point; otherwise, the search height is set to the ordinate of P plus 1 and step S4.1.1.2 is repeated until a suitable initial feature point is found.
S4.1.1.3: If the lane line was judged to be dashed in step S4.1.1.2, then starting from the ordinate at which the loop in step S4.1.1.2 was exited, increase the height by 1 at each step and search for white points with the abscissa in the corresponding linear range, where W is the width of the image. The search stops when the search height reaches the horizon position, which was obtained in step S2.4.1.
S4.1.1.4: If the lane line was not judged to be dashed in step S4.1.1.2, then starting from the height of the initial feature point plus 50, increase the height by 1 at each step and search for white points from right to left with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Otherwise, the search stops when the search height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.1.2: Right lane line feature point screening comprises initial feature point screening and the screening of the other feature points. The concrete screening process is as follows, where steps S4.1.2.1 and S4.1.2.2 determine the initial feature point, and steps S4.1.2.3 and S4.1.2.4 determine the other feature points.
S4.1.2.1: In the region of interest for right lane line initial feature point screening, R_r = {(x, y) | W/2 ≤ x < W, 0 ≤ y < H}, search from top to bottom and from left to right. When a white point (a point whose gray value is 255; the same below) is found for the first time, add 2 to the search height and continue searching from left to right; the reason is the same as in step S4.1.1.1.
S4.1.2.2: When a white point is found again, it is tentatively taken as the initial feature point P, and the number of white points found in the following 50 ordinates is used to judge whether P is a suitable initial feature point. The concrete search range is: increase the height by 1 at each step, with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5; if white_last.x + 5 > W, the search range becomes white_last.x - 5 < x < W, where white_last.x is the abscissa of the previously found white point. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points in the following 50 ordinates is greater than 20, P is judged to be a suitable initial feature point; otherwise, the search height is set to the ordinate of P plus 1 and step S4.1.2.2 is repeated until a suitable initial feature point is found.
S4.1.2.3: If the lane line was judged to be dashed in step S4.1.2.2, then starting from the ordinate at which the loop in step S4.1.2.2 was exited, increase the height by 1 at each step and search for white points with the abscissa in the corresponding linear range. The search stops when the search height reaches the horizon position.
S4.1.2.4: If the lane line was not judged to be dashed in step S4.1.2.2, then starting from the height of the initial feature point plus 50, increase the height by 1 at each step and search for white points from left to right with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Otherwise, the search stops when the search height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.2: Determining the θ transform range of the left and right lane lines: the physical meaning of θ can be described as follows: drop a perpendicular from the origin O to the lane line, meeting it at a point A; θ is then the angle through which the positive X-axis direction rotates counterclockwise to reach the vector OA. The positional relationship between the left and right lane lines and the image coordinate system is analyzed in Figures 6 to 11. For the left lane line, the transform only needs to cover the range (90, 180]; for the right lane line, it only needs to cover the range [0, 90).
S4.3: Finding the θ_l and ρ_l values of the point with the maximum accumulated count for the left lane line: θ_l is the θ value of the straight line formed by the largest set of collinear points among the screened left lane line feature points, and ρ_l is the ρ value of that line. The feature points of the left lane line found by the search are transformed over the range (90, 180] using the Hough transform formula ρ = x cos θ + y sin θ, where the physical meaning of θ is as described in step S4.2 and ρ is the modulus of the vector OA. Every (θ, ρ) pair is saved together with the number of points passing through (θ, ρ), and finally the maximum accumulated count and the θ_l and ρ_l values of the point with the maximum accumulated count are obtained.
S4.4: Finding the θ_r and ρ_r values of the point with the maximum accumulated count for the right lane line: θ_r is the θ value of the straight line formed by the largest set of collinear points among the screened right lane line feature points, and ρ_r is the ρ value of that line. The feature points of the right lane line found by the search are transformed over the range [0, 90) using the Hough transform formula ρ = x cos θ + y sin θ, where the physical meanings of θ and ρ are as described in step S4.3. Every (θ, ρ) pair is saved together with the number of points passing through (θ, ρ), and finally the maximum accumulated count and the θ_r and ρ_r values of the point with the maximum accumulated count are obtained.
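Steps S4.3 and S4.4 can be sketched with a small Hough accumulator in plain Python; quantizing θ to whole degrees and ρ to integers is an assumption made for the sketch.

```python
import math

def hough_peak(points, theta_range):
    """Accumulate rho = x*cos(theta) + y*sin(theta) for every feature point and
    every theta in theta_range (degrees), and return the (theta, rho) pair
    with the maximum accumulated count."""
    acc = {}
    for theta_deg in theta_range:
        t = math.radians(theta_deg)
        for (x, y) in points:
            rho = round(x * math.cos(t) + y * math.sin(t))
            acc[(theta_deg, rho)] = acc.get((theta_deg, rho), 0) + 1
    return max(acc, key=acc.get)

# Feature points lying on the vertical line x = 7; for the right-lane
# range [0, 90) the peak lands at theta = 0, rho = 7.
pts = [(7, y) for y in range(0, 200, 20)]
theta_r, rho_r = hough_peak(pts, range(0, 90))
```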
S4.5: Obtaining the left lane line parameters k_l and b_l from θ_l and ρ_l, and the right lane line parameters k_r and b_r from θ_r and ρ_r. The steps for obtaining k_l and b_l from θ_l and ρ_l are: if ρ_l = 0, then k_l = tan(θ_l - 90) and b_l = 0; otherwise, further judge whether θ_l equals 180: if it does, k_l is infinite and b_l directly equals ρ_l; if it does not, the left lane line parameters are k_l = tan(θ_l - 90) and b_l = ρ_l / sin θ_l, the sums of the distances from the feature points to the two candidate lines are computed respectively, and the line with the smaller distance sum is the required left lane line. The flow chart is shown in Figure 12. The steps for obtaining k_r and b_r from θ_r and ρ_r are: first judge whether θ_r equals 0: if it does, k_r is infinite and b_r directly equals ρ_r; otherwise, the right lane line parameters are k_r = tan(θ_r + 90) and b_r = ρ_r / sin θ_r. The flow chart is shown in Figure 13.
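The conversion in step S4.5 can be sketched as follows. The slope uses the text's k = tan(θ - 90), which equals -cot θ; the intercept formula b = ρ / sin θ is a reconstruction from ρ = x cos θ + y sin θ and is an assumption, as is reporting vertical lines (θ = 0 or 180) as infinite slope with b set to ρ.

```python
import math

def line_params(theta_deg, rho):
    """Convert a Hough pair (theta in degrees, rho) to slope/intercept (k, b)."""
    if theta_deg % 180 == 0:
        return math.inf, rho                        # vertical line: k infinite
    k = math.tan(math.radians(theta_deg - 90))      # equals -cot(theta)
    b = rho / math.sin(math.radians(theta_deg))
    return k, b

# theta = 45, rho = sqrt(2) describes x + y = 2, i.e. the line y = -x + 2.
k, b = line_params(45, math.sqrt(2))
```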
This completes the implementation of the whole lane information system.
The present invention can obtain the following beneficial effects:
1. Because a gray-stretch contrast enhancement algorithm is adopted, the lane information extraction algorithm of the present invention can adapt to road images under varying illumination.
2. Because the binarization and feature point screening stages fully account for the difference between far-field and near-field road images, the lane information extraction algorithm of the present invention can adapt to both far-field and near-field road images.
3. Because a feature point screening stage is added, the computation and storage required by the Hough transform are effectively reduced, improving the real-time performance of the lane information extraction algorithm of the present invention.
4. Because the added feature point screening stage adopts a search strategy that proceeds from bottom to top and from the center to both sides, the lane information extraction algorithm of the present invention can effectively avoid false detection of dashed lines, and the detected lane line is the inner edge of the lane line, which better matches practical requirements.
Embodiment
The present invention, based on vehicle-mounted video, conducts an in-depth analysis of the lane information extraction stages in intelligent vehicle navigation and improves them as required.
The hardware section of the lane information extraction system is prepared as follows.
(1) A camera is installed at the front-center position of the intelligent vehicle. The camera uses a 1/3-inch SONY CCD with a 12 V supply voltage; once powered on, it captures analog road information in real time as the intelligent vehicle advances.
(2) The wireless sending module is connected to the camera, and the wireless receiving module is connected to the PC. The wireless sending/receiving modules use an FPV 5.8G 200MW transmitter and receiver set with a 12 V supply voltage; once powered on, the wireless sending module transmits the analog road information captured by the camera to the wireless receiving module.
(3) The video capture card is installed in the PC. The capture card is a Tianmin SDK2500 capture card, which converts the analog road information received by the wireless receiving module into digital image information.
(4) VC2008 and OpenCV2.0 are installed on the PC, and code is written to read the digital image information from the video capture card.
The software section of the lane information extraction system is implemented as follows.
S1: Image coordinate system establishment: for the original digital image read from the video capture card, the image coordinate system is established with the lower-left corner of the image as the origin, horizontally to the right as the positive x-axis direction, and vertically upward as the positive y-axis direction. Note that the default image coordinate system of OpenCV takes the upper-left corner of the image as the origin, horizontally to the right as the positive x-axis direction, and vertically downward as the positive y-axis direction. Therefore, when reading the original digital image information and computing lane information with OpenCV, coordinates must be converted from OpenCV's default image coordinate system into the image coordinate system established by the present invention. The conversion is very simple: the abscissa x is unchanged, and the ordinate becomes (H - y), where H is the height of the image.
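The coordinate conversion described above is a one-liner; here is a sketch following the text's y -> H - y rule (whether H or H - 1 is the exact pixel-index offset is left as in the text).

```python
def opencv_to_image_coords(x, y_cv, H):
    """Map OpenCV's top-left-origin pixel coordinates to the bottom-left-origin
    system used by the invention: x is unchanged, the ordinate becomes H - y."""
    return x, H - y_cv
```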
S2: Road image preprocessing.
S2.1: Weighted-average grayscale conversion: the weighted-average grayscale algorithm assigns different weights to the red, green and blue components of each point of the original digital image read from the video capture card and takes their weighted average, finally obtaining a grayscale image. The specific calculation formula is as follows:

f_1(x, y) = 0.212671 R(x, y) + 0.715160 G(x, y) + 0.072169 B(x, y)

where f_1(x, y) is the gray value of point (x, y) after grayscale conversion, and R(x, y), G(x, y) and B(x, y) are respectively the red, green and blue components of point (x, y) of the original digital image read from the video capture card.
S2.2: Median filtering: the grayscale image is filtered by median filtering, implemented with a 3×3 template. The specific calculation formula is as follows:

f_2(x, y) = med{ f_1(x+i, y+j) | -1 ≤ i ≤ 1, -1 ≤ j ≤ 1 },  x > 0, y > 0

where f_2(x, y) is the gray value of point (x, y) after filtering, med takes the median of the gray values in braces, and f_1(x, y) is the gray value of point (x, y) after grayscale conversion. In addition, for points with x = 0 or y = 0, the gray value f_1(x, y) is used directly as the filtered gray value f_2(x, y).
S2.3: Contrast enhancement: a gray-stretch algorithm is applied to the filtered image, expanding the image gray values from the range min ≤ f_2(x, y) ≤ max to the range 0 ≤ f_3(x, y) ≤ 255. The specific calculation formula is as follows:

f_3(x, y) = 255 × (f_2(x, y) - min) / (max - min)

where f_3(x, y) is the gray value of point (x, y) after contrast enhancement, f_2(x, y) is the gray value of point (x, y) after filtering, and max and min are the maximum and minimum gray values in the filtered image.
S2.4: Adaptive binarization based on the Otsu algorithm.
S2.4.1: Far/near field classification: scanning the contrast-enhanced image from top to bottom, count in each row the number n of points whose gray value is greater than 200; if n is less than 50, the count ends and the ordinate y of that row is recorded; y is the position of the horizon ordinate. If y is below a set threshold (a fraction of the image height H), the contrast-enhanced image is judged to be a far-field road image; otherwise it is judged to be a near-field road image.
S2.4.2: Adaptive binarization of the far-field road image based on the Otsu algorithm: for the far-field road image, the adaptive binarization formula based on the Otsu algorithm is as follows:

f_4(x, y) = 255 if f_3(x, y) > T, and f_4(x, y) = 0 otherwise

where f_4(x, y) is the gray value of point (x, y) after binarization, f_3(x, y) is the gray value of point (x, y) after contrast enhancement, and T is the optimal threshold obtained by the Otsu algorithm. The flow chart of the Otsu algorithm is shown in Figure 4.
S2.4.3: Adaptive binarization of the near-field road image based on the Otsu algorithm: first, target region compensation is applied to the near-field road image. Target region compensation is carried out separately on the upper half and the lower half of the image. The target region compensated in the upper half of the road image is:

The target region compensated in the lower half of the road image is:

where W and H are respectively the width and the height of the image. Target region compensation simply sets the gray values inside the target region to 255.

After compensating the target regions, adaptive binarization based on the Otsu algorithm is applied separately to the upper half and the lower half of the image. The binarization region of the upper half is:

and the gray values of the resulting binary image are denoted f_4-up(x, y). The binarization region of the lower half is:

and the gray values of the resulting binary image are denoted f_4-down(x, y). Finally the two results are fused to obtain the final binary image; the fusion formula takes f_4(x, y) from f_4-up(x, y) for points in the upper half of the image and from f_4-down(x, y) for points in the lower half, where f_4(x, y) is the gray value of point (x, y) after binarization, f_4-up(x, y) is the gray value of point (x, y) after binarization of the upper half of the road image, f_4-down(x, y) is the gray value of point (x, y) after binarization of the lower half of the road image, and W and H are respectively the width and the height of the image.
S3: Lane model establishment: the present invention establishes a straight-line lane model for the left and right lane lines of an expressway, as follows:

u = k_l v + b_l (left lane line), u = k_r v + b_r (right lane line)

where v and u are the abscissa and ordinate of a point on the lane line, k_l and k_r are respectively the slopes of the left and right lane lines, and b_l and b_r are respectively the intercepts of the left and right lane lines. Hence the lane information to be extracted is k_l, b_l, k_r and b_r.
S4: Lane information extraction.
S4.1: Feature point screening: feature point screening comprises left lane line feature point screening and right lane line feature point screening; S4.1.1 is the left lane line feature point screening and S4.1.2 is the right lane line feature point screening.
S4.1.1: Left lane line feature point screening comprises initial feature point screening and the screening of the other feature points. The concrete screening process is as follows, where steps S4.1.1.1 and S4.1.1.2 determine the initial feature point, and steps S4.1.1.3 and S4.1.1.4 determine the other feature points.
S4.1.1.1: In the region of interest for left lane line initial feature point screening, R_l = {(x, y) | 0 ≤ x < W/2, 0 ≤ y < H}, search from top to bottom and from right to left. When a white point (a point whose gray value is 255) is found for the first time, add 2 to the search height and continue searching from right to left.
S4.1.1.2: When a white point is found again, it is tentatively taken as the initial feature point P, and the number of white points found in the following 50 ordinates is used to judge whether P is a suitable initial feature point. The concrete search range is: increase the height by 1 at each step, with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5; if white_last.x - 5 < 0, the search range becomes 0 < x < white_last.x + 5, where white_last.x is the abscissa of the previously found white point. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points in the following 50 ordinates is greater than 20, P is judged to be a suitable initial feature point; otherwise, the search height is set to the ordinate of P plus 1 and step S4.1.1.2 is repeated until a suitable initial feature point is found.
S4.1.1.3: If the lane line was judged to be dashed in step S4.1.1.2, then starting from the ordinate at which the loop in step S4.1.1.2 was exited, increase the height by 1 at each step and search for white points with the abscissa in the corresponding linear range, where W is the width of the image. The search stops when the search height reaches the horizon position, which was obtained in step S2.4.1.
S4.1.1.4: If the lane line was not judged to be dashed in step S4.1.1.2, then starting from the height of the initial feature point plus 50, increase the height by 1 at each step and search for white points from right to left with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Otherwise, the search stops when the search height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.1.2: Right lane line feature point screening comprises initial feature point screening and the screening of the other feature points. The concrete screening process is as follows, where steps S4.1.2.1 and S4.1.2.2 determine the initial feature point, and steps S4.1.2.3 and S4.1.2.4 determine the other feature points.
S4.1.2.1: In the region of interest for right lane line initial feature point screening, R_r = {(x, y) | W/2 ≤ x < W, 0 ≤ y < H}, search from top to bottom and from left to right. When a white point (a point whose gray value is 255) is found for the first time, add 2 to the search height and continue searching from left to right.
S4.1.2.2: When a white point is found again, it is tentatively taken as the initial feature point P, and the number of white points found in the following 50 ordinates is used to judge whether P is a suitable initial feature point. The concrete search range is: increase the height by 1 at each step, with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5; if white_last.x + 5 > W, the search range becomes white_last.x - 5 < x < W, where white_last.x is the abscissa of the previously found white point. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points in the following 50 ordinates is greater than 20, P is judged to be a suitable initial feature point; otherwise, the search height is set to the ordinate of P plus 1 and step S4.1.2.2 is repeated until a suitable initial feature point is found.
S4.1.2.3: If the lane line was judged to be dashed in step S4.1.2.2, then starting from the ordinate at which the loop in step S4.1.2.2 was exited, increase the height by 1 at each step and search for white points with the abscissa in the corresponding linear range. The search stops when the search height reaches the horizon position.
S4.1.2.4: If the lane line was not judged to be dashed in step S4.1.2.2, then starting from the height of the initial feature point plus 50, increase the height by 1 at each step and search for white points from left to right with the abscissa in the linear range white_last.x - 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Otherwise, the search stops when the search height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.2: Determining the θ transform range of the left and right lane lines: the physical meaning of θ can be described as follows: drop a perpendicular from the origin O to the lane line, meeting it at a point A; θ is then the angle through which the positive X-axis direction rotates counterclockwise to reach the vector OA. The positional relationship between the left and right lane lines and the image coordinate system is analyzed in Figures 6 to 11. For the left lane line, the transform only needs to cover the range (90, 180]; for the right lane line, it only needs to cover the range [0, 90).
S4.3: Finding the θ_l and ρ_l values of the point with the maximum accumulated count for the left lane line: θ_l is the θ value of the straight line formed by the largest set of collinear points among the screened left lane line feature points, and ρ_l is the ρ value of that line. The feature points of the left lane line found by the search are transformed over the range (90, 180] using the Hough transform formula ρ = x cos θ + y sin θ, where the physical meaning of θ is as described in step S4.2 and ρ is the modulus of the vector OA. Every (θ, ρ) pair is saved together with the number of points passing through (θ, ρ), and finally the maximum accumulated count and the θ_l and ρ_l values of the point with the maximum accumulated count are obtained.
S4.4: Finding the θ_r and ρ_r values of the point with the maximum accumulated count for the right lane line: θ_r is the θ value of the straight line formed by the largest set of collinear points among the screened right lane line feature points, and ρ_r is the ρ value of that line. The feature points of the right lane line found by the search are transformed over the range [0, 90) using the Hough transform formula ρ = x cos θ + y sin θ, where the physical meanings of θ and ρ are as described in step S4.3. Every (θ, ρ) pair is saved together with the number of points passing through (θ, ρ), and finally the maximum accumulated count and the θ_r and ρ_r values of the point with the maximum accumulated count are obtained.
S4.5: Obtaining the left lane line parameters k_l and b_l from θ_l and ρ_l, and the right lane line parameters k_r and b_r from θ_r and ρ_r. The steps for obtaining k_l and b_l from θ_l and ρ_l are: if ρ_l = 0, then k_l = tan(θ_l - 90) and b_l = 0; otherwise, further judge whether θ_l equals 180: if it does, k_l is infinite and b_l directly equals ρ_l; if it does not, the left lane line parameters are k_l = tan(θ_l - 90) and b_l = ρ_l / sin θ_l, the sums of the distances from the feature points to the two candidate lines are computed respectively, and the line with the smaller distance sum is the required left lane line. The steps for obtaining k_r and b_r from θ_r and ρ_r are: first judge whether θ_r equals 0: if it does, k_r is infinite and b_r directly equals ρ_r; otherwise, the right lane line parameters are k_r = tan(θ_r + 90) and b_r = ρ_r / sin θ_r.