CN102663403B - Vision-based system and method for extracting lane information in expressway intelligent vehicle navigation - Google Patents

Vision-based system and method for extracting lane information in expressway intelligent vehicle navigation

Info

Publication number
CN102663403B
CN102663403B (application CN201210128340.3A; also published as CN102663403A)
Authority
CN
China
Prior art keywords: point, lane line, image, white, gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210128340.3A
Other languages
Chinese (zh)
Other versions
CN102663403A (en)
Inventor
闫豪杰
陈阳舟
辛乐
辛丰强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen United Ying Da Technology Co., Ltd.
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201210128340.3A
Publication of CN102663403A
Application granted
Publication of CN102663403B
Status: Expired - Fee Related
Anticipated expiration


Abstract

A vision-based system and method for extracting lane information in expressway intelligent vehicle navigation, belonging to the fields of machine vision and intelligent control, comprises four stages: image coordinate system establishment, road image preprocessing, lane model building, and lane information extraction. Road image preprocessing consists of weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm. The lane model building stage fits straight-line models to the left and right lane lines, so the lane information to be extracted is k_l, b_l, k_r and b_r, where k_l and b_l are the slope and intercept of the left lane line and k_r and b_r are the slope and intercept of the right lane line. The lane information extraction algorithm is implemented with the widely used Hough transform. Because a gray-stretch contrast enhancement algorithm is adopted, the extraction algorithm tolerates a degree of illumination variation in the road images, and it also adapts to road images of different fields of view, namely far-field and near-field images.

Description

Vision-based lane information extraction system and method for expressway intelligent vehicle navigation
Technical field
The invention belongs to the fields of machine vision and intelligent control; it extracts lane information using computer technology, image processing, wireless transmission, and related techniques.
Background technology
With the gradual improvement of highway infrastructure and the growth in private car ownership, road traffic problems are becoming increasingly serious. In recent years, frequent accidents and the loss of life and property they cause have drawn growing public concern. According to statistics, in 2008 there were 265,204 road traffic accidents nationwide, causing 73,484 deaths, 304,919 injuries, and direct property losses of 1.01 billion yuan; in 2009 there were 238,351 accidents nationwide, causing 67,759 deaths, 275,125 injuries, and direct property losses of 0.91 billion yuan; in 2010 there were 3,906,164 accidents nationwide, causing 65,225 deaths, 254,075 injuries, and direct property losses of 0.93 billion yuan. To reduce these losses as far as possible, experts and scholars at home and abroad have spent the past few decades actively researching intelligent transportation systems (ITS).
Intelligent vehicle navigation, one of the key technologies in intelligent transportation systems, has become a research focus. Despite years of research, the complexity of real roads makes intelligent vehicle navigation a very hard problem, and no general algorithm exists to date. Experiments with Germany's VaMP system, the United States' NavLab system, France's Infra system, and Italy's ARGO system show that, even after long-term research, the navigation algorithms of these systems still exhibit shortcomings and defects. Research on intelligent vehicle navigation will therefore remain a process of continuous improvement for a long time to come.
Against this background, the present invention builds on vehicle-mounted video, analyzes in depth each stage of lane information extraction in intelligent vehicle navigation, and improves those stages as needed; experiments show that the overall algorithm meets the requirement of real-time lane information extraction.
Summary of the invention
The present invention analyzes in depth each stage of the lane information extraction algorithm in vehicle-video-based intelligent vehicle navigation. The whole lane information extraction system consists of two parts: hardware and software.
The overall block diagram of the hardware is shown in Fig. 1; it consists of five main parts: a camera, a wireless transmitting module, a wireless receiving module, a video capture card, and a PC. The camera is mounted at the front center of the intelligent vehicle and captures analog road images; the wireless transmitting module is connected to the camera and sends the captured analog image signal toward the PC; the wireless receiving module is connected to the PC and receives the signal sent by the transmitting module; the video capture card, installed in the PC, converts the analog signal into digital images for processing; and the PC performs the data processing.
The software reads, in real time, the digital images produced by the video capture card and extracts the lane information. The overall flow chart is shown in Fig. 2 and comprises four stages: image coordinate system establishment, road image preprocessing, lane model building, and lane information extraction. Road image preprocessing consists of weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm. The lane model building stage fits straight-line models to the left and right lane lines, so the lane information to be extracted is k_l, b_l, k_r and b_r, where k_l and b_l are the slope and intercept of the left lane line and k_r and b_r are the slope and intercept of the right lane line. Lane information extraction is implemented with the widely used Hough transform. The detailed procedure is as follows.
S1: Image coordinate system establishment: for the original digital image read from the video capture card, take the lower-left corner of the image as the origin, the horizontal direction to the right as the positive x-axis, and the vertical direction upward as the positive y-axis.
S2: Road image preprocessing: this stage comprises weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm. The overall flow chart is shown in Fig. 3.
S2.1: Weighted-average grayscale conversion: the red, green, and blue components of each pixel of the original digital image read from the video capture card are averaged with different weights to obtain a grayscale image:

f_1(x, y) = 0.212671·R(x, y) + 0.715160·G(x, y) + 0.072169·B(x, y)

where f_1(x, y) is the gray value of point (x, y) after conversion, and R(x, y), G(x, y), B(x, y) are the red, green, and blue components of point (x, y) in the original image.
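For illustration, a minimal C++ sketch of this weighted conversion; the interleaved 8-bit RGB buffer layout and the function name are assumptions, not from the patent:

```cpp
#include <cstdint>
#include <vector>

// Weighted-average grayscale: f1(x,y) = 0.212671*R + 0.715160*G + 0.072169*B.
// Assumes an interleaved RGB buffer of size W*H*3 (layout is an assumption).
std::vector<uint8_t> toGray(const std::vector<uint8_t>& rgb, int W, int H) {
    std::vector<uint8_t> gray(static_cast<size_t>(W) * H);
    for (size_t p = 0; p < gray.size(); ++p) {
        double g = 0.212671 * rgb[3 * p]       // R
                 + 0.715160 * rgb[3 * p + 1]   // G
                 + 0.072169 * rgb[3 * p + 2];  // B
        gray[p] = static_cast<uint8_t>(g + 0.5);
    }
    return gray;
}
```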
S2.2: Median filtering: the grayscale image is filtered with a 3×3 median filter:

f_2(x, y) = med{ f_1(x−1, y−1), f_1(x−1, y), f_1(x−1, y+1), f_1(x, y−1), f_1(x, y), f_1(x, y+1), f_1(x+1, y−1), f_1(x+1, y), f_1(x+1, y+1) },  x > 0, y > 0

where f_2(x, y) is the gray value of point (x, y) after filtering, med takes the median of the values in braces, and f_1(x, y) is the gray value after grayscale conversion. For points with x = 0 or y = 0, the gray value f_1(x, y) is used directly as the filtered value f_2(x, y).
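A sketch of the 3×3 median filter under the same buffer assumptions; here all image borders keep their original gray value, a slight generalization of the patent's x = 0 / y = 0 rule so the window never leaves the image:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// 3x3 median filter: interior pixels get the median of their 9-neighbourhood,
// border pixels keep f1(x,y) unchanged.
std::vector<uint8_t> median3x3(const std::vector<uint8_t>& f1, int W, int H) {
    std::vector<uint8_t> f2 = f1;  // borders copied unchanged
    for (int y = 1; y + 1 < H; ++y) {
        for (int x = 1; x + 1 < W; ++x) {
            uint8_t win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = f1[(y + dy) * W + (x + dx)];
            std::nth_element(win, win + 4, win + 9);  // 5th smallest = median of 9
            f2[y * W + x] = win[4];
        }
    }
    return f2;
}
```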
S2.3: Contrast enhancement: a gray-stretch algorithm expands the gray values of the filtered image from the range min ≤ f_2(x, y) ≤ max to the range 0 ≤ f_3(x, y) ≤ 255:

f_3(x, y) = 255·(f_2(x, y) − min) / (max − min)

where f_3(x, y) is the gray value of point (x, y) after contrast enhancement, f_2(x, y) is the gray value after filtering, and max and min are the maximum and minimum gray values in the filtered image.
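A corresponding sketch of the linear gray stretch; the guard against a flat image (max equal to min) is an added assumption, since the patent does not treat that case:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Gray stretch: map [min, max] linearly onto [0, 255].
std::vector<uint8_t> grayStretch(const std::vector<uint8_t>& f2) {
    auto mm = std::minmax_element(f2.begin(), f2.end());
    const int lo = *mm.first, hi = *mm.second;
    if (hi == lo) return f2;  // flat image: nothing to stretch (added guard)
    std::vector<uint8_t> f3(f2.size());
    for (size_t i = 0; i < f2.size(); ++i)
        f3[i] = static_cast<uint8_t>(255.0 * (f2[i] - lo) / (hi - lo) + 0.5);
    return f3;
}
```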
S2.4: Adaptive binarization based on the Otsu algorithm: the contrast-enhanced image is first classified, according to the ordinate of the horizon, as a far-field road image or a near-field road image; far-field and near-field images are then adaptively binarized with the Otsu algorithm in different ways.
S2.4.1: Far/near field classification: based on the horizon ordinate, the contrast-enhanced image is classified as a near-field or far-field road image. The horizon is extracted by scanning the rows of the contrast-enhanced image from top to bottom and counting, in each row, the number n of points with gray value greater than 200; the scan stops at the first row with n less than 50, and the ordinate y of that row is recorded as the horizon ordinate. If y is smaller than a fixed fraction of the image height H (the threshold appears as a formula image in the original), the image is judged to be a far-field road image; otherwise it is a near-field road image.
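A sketch of this horizon scan; rows are indexed top-down in buffer order here, and the fallback when no row qualifies is an assumption, since the patent does not specify it:

```cpp
#include <cstdint>
#include <vector>

// Horizon estimate: scan rows from the top and count pixels brighter than 200;
// the first row with fewer than 50 such pixels ends the scan and gives the
// horizon row. In the patent's bottom-left frame the ordinate is H - 1 - row.
int horizonRow(const std::vector<uint8_t>& f3, int W, int H) {
    for (int row = 0; row < H; ++row) {
        int n = 0;
        for (int x = 0; x < W; ++x)
            if (f3[row * W + x] > 200) ++n;
        if (n < 50) return row;
    }
    return H - 1;  // fallback when every row is bright (not specified in the patent)
}
```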
S2.4.2: Adaptive binarization of far-field road images: for a far-field road image, the Otsu-based binarization formula is:

f_4(x, y) = 0 if f_3(x, y) < T; 255 if f_3(x, y) ≥ T

where f_4(x, y) is the gray value of point (x, y) after binarization, f_3(x, y) is the gray value after contrast enhancement, and T is the optimal threshold obtained with the Otsu algorithm, whose flow chart is shown in Fig. 4.
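Fig. 4 is not reproduced here, but the textbook form of Otsu's threshold search that this step relies on can be sketched as follows (the function name is an assumption):

```cpp
#include <cstdint>
#include <vector>

// Otsu's optimal threshold T: choose the gray level that maximizes the
// between-class variance of the two classes induced by the threshold.
int otsuThreshold(const std::vector<uint8_t>& img) {
    int hist[256] = {0};
    for (uint8_t v : img) ++hist[v];
    const double total = static_cast<double>(img.size());
    double sumAll = 0;
    for (int t = 0; t < 256; ++t) sumAll += t * static_cast<double>(hist[t]);

    double sumBg = 0, wBg = 0, bestVar = -1;
    int bestT = 0;
    for (int t = 0; t < 256; ++t) {
        wBg += hist[t];                         // background weight so far
        if (wBg == 0) continue;
        double wFg = total - wBg;               // foreground weight
        if (wFg == 0) break;
        sumBg += t * static_cast<double>(hist[t]);
        double mBg = sumBg / wBg, mFg = (sumAll - sumBg) / wFg;
        double var = wBg * wFg * (mBg - mFg) * (mBg - mFg);
        if (var > bestVar) { bestVar = var; bestT = t; }
    }
    return bestT;
}
```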
S2.4.3: Adaptive binarization of near-field road images: the near-field road image is first given a target-region compensation, applied separately to the upper and lower halves of the image. The compensated target region of the first half of the road image, and that of the latter half, are given as formula images in the original; W and H denote the width and height of the image. Target-region compensation simply sets the gray values inside the target region to 255.
After target-region compensation, the two halves of the image are adaptively binarized with the Otsu algorithm separately. The first half, covering {(x, y) | 0 ≤ x < W, 0 ≤ y < H/2}, yields a binary image with gray values f_4-up(x, y); the latter half, covering {(x, y) | 0 ≤ x < W, H/2 ≤ y < H}, yields a binary image with gray values f_4-down(x, y). The two results are finally fused into the final binary image:
f_4(x, y) = f_4-up(x, y) for 0 ≤ x < W, 0 ≤ y < H/2; f_4(x, y) = f_4-down(x, y) for 0 ≤ x < W, H/2 ≤ y < H
where f_4(x, y) is the gray value of point (x, y) after binarization, f_4-up(x, y) and f_4-down(x, y) are the gray values after binarizing the first and latter halves of the road image, and W and H are the width and height of the image.
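A sketch of the half-and-half binarization and fusion, reusing the otsuThreshold() routine sketched above; the halves are taken in buffer order, and the target-region compensation is omitted because those region formulas survive only as figure references:

```cpp
#include <cstdint>
#include <vector>

int otsuThreshold(const std::vector<uint8_t>& img);  // sketched above

// Near-field binarization: threshold the two halves of the image independently
// with Otsu and merge the results into one binary image.
std::vector<uint8_t> binarizeHalves(const std::vector<uint8_t>& f3, int W, int H) {
    const size_t split = static_cast<size_t>(W) * (H / 2);
    std::vector<uint8_t> up(f3.begin(), f3.begin() + split);
    std::vector<uint8_t> down(f3.begin() + split, f3.end());
    const int tUp = otsuThreshold(up), tDown = otsuThreshold(down);

    std::vector<uint8_t> f4(f3.size());
    for (int y = 0; y < H; ++y) {
        const int t = (y < H / 2) ? tUp : tDown;  // per-half threshold
        for (int x = 0; x < W; ++x)
            f4[y * W + x] = (f3[y * W + x] < t) ? 0 : 255;
    }
    return f4;
}
```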
S3: Lane model building: building a lane model makes explicit which lane information must be extracted. The present invention fits straight-line models to the left and right lane lines of the expressway:

y = k_l·x + b_l (left lane line), y = k_r·x + b_r (right lane line)

where x and y are the abscissa and ordinate of a point on a lane line, k_l and k_r are the slopes of the left and right lane lines, and b_l and b_r are their intercepts. The lane information to be extracted is therefore k_l, b_l, k_r and b_r.
S4: Lane information extraction: the goal is to obtain the slope and intercept of each of the left and right lane lines.
S4.1: Feature point screening: this comprises left-lane-line feature point screening (S4.1.1) and right-lane-line feature point screening (S4.1.2).
S4.1.1: Left-lane-line feature point screening comprises screening an initial feature point and screening the remaining feature points. The procedure is as follows; steps S4.1.1.1 and S4.1.1.2 determine the initial feature point, and steps S4.1.1.3 and S4.1.1.4 determine the remaining feature points.
S4.1.1.1: Within the region of interest for left-lane-line initial feature point screening, R_l = {(x, y) | 0 ≤ x < W/2, 0 ≤ y < H}, search from top to bottom and, within each row, from right to left. When a white point (a point with gray value 255; likewise below) is found for the first time, increase the search height by 2 and continue searching from right to left. The height is increased by 2 because of dashed-line endpoints; a dashed lane line endpoint is shown in Fig. 5.
S4.1.1.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point. Concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x − 5 < 0, the range becomes 0 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point; otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.1.2 is repeated until a suitable initial feature point is found.
S4.1.1.3: If step S4.1.1.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original (W is the width of the image). The search stops when its height reaches the horizon position obtained in step S2.4.1.
S4.1.1.4: If step S4.1.1.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from right to left for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line. A simplified sketch of this chase appears below.
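A much-simplified sketch of the white-point chase shared by steps S4.1.1.2 to S4.1.1.4: it illustrates only the ±5 abscissa window around the last hit and the ten-miss dashed-line exit; the seed search, the 50-row validation, and the horizon cutoff are omitted, and rows follow buffer indexing:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Pt { int x, y; };

// Chase a lane line from a seed white point: each step moves one row and looks
// for a white pixel whose abscissa lies strictly within +/-5 of the last hit
// (i.e. lastX-4 .. lastX+4); ten consecutive misses classify the line as
// dashed and stop the chase early.
std::vector<Pt> chaseLine(const std::vector<uint8_t>& bin, int W, int H,
                          Pt seed, bool& isDashed) {
    std::vector<Pt> pts{seed};
    int lastX = seed.x, misses = 0;
    isDashed = false;
    for (int y = seed.y + 1; y < H; ++y) {
        int found = -1;
        for (int x = std::max(0, lastX - 4); x <= std::min(W - 1, lastX + 4); ++x)
            if (bin[y * W + x] == 255) { found = x; break; }
        if (found < 0) {
            if (++misses >= 10) { isDashed = true; break; }  // dashed lane line
        } else {
            misses = 0;
            lastX = found;
            pts.push_back(Pt{found, y});
        }
    }
    return pts;
}
```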
S4.1.2: Right-lane-line feature point screening likewise comprises screening an initial feature point and screening the remaining feature points. The procedure is as follows; steps S4.1.2.1 and S4.1.2.2 determine the initial feature point, and steps S4.1.2.3 and S4.1.2.4 determine the remaining feature points.
S4.1.2.1: Within the region of interest for right-lane-line initial feature point screening (given as a formula image in the original), search from top to bottom and, within each row, from left to right. When a white point (a point with gray value 255; likewise below) is found for the first time, increase the search height by 2 and continue searching from left to right, for the same reason as in step S4.1.1.1.
S4.1.2.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point. Concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x + 5 > W, the range becomes white_last.x − 5 < x < W. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point; otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.2.2 is repeated until a suitable initial feature point is found.
S4.1.2.3: If step S4.1.2.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original. The search stops when its height reaches the horizon position.
S4.1.2.4: If step S4.1.2.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from left to right for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.2: Determining the θ transform range for the left and right lane lines: the physical meaning of θ is as follows. Drop a perpendicular from the origin O onto the lane line, meeting it at point A; θ is then the angle through which the positive x-axis rotates counterclockwise to reach the vector OA. The positional relationships between the lane lines and the image coordinate system are analyzed in Figs. 6 to 11. For the left lane line the transform only needs to cover the range (90, 180]; for the right lane line it only needs to cover [0, 90).
S4.3: Finding the θ_l and ρ_l of the left lane line's maximum-count accumulator point: θ_l and ρ_l are the θ and ρ values of the line formed by the largest number of collinear points among the screened left-lane-line feature points. The screened feature points are transformed over the range (90, 180] with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ has the physical meaning described in step S4.2 and ρ is the modulus of the vector OA. Each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_l and ρ_l of the maximum-count point are obtained.
S4.4: Finding the θ_r and ρ_r of the right lane line's maximum-count accumulator point: θ_r and ρ_r are the θ and ρ values of the line formed by the largest number of collinear points among the screened right-lane-line feature points. The screened feature points are transformed over the range [0, 90) with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ and ρ have the physical meanings described in step S4.3. Each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_r and ρ_r of the maximum-count point are obtained. A sketch of this accumulation appears after this step.
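A sketch of the restricted-range accumulation used in steps S4.3 and S4.4 (Pt as in the earlier sketch); one-degree θ bins, integer ρ bins, and the rhoMax bound are assumptions:

```cpp
#include <cmath>
#include <vector>

struct Pt { int x, y; };  // same point type as in the chase sketch

struct HoughPeak { int theta; int rho; int votes; };

// Point-wise Hough accumulation: rho = x*cos(theta) + y*sin(theta), with theta
// restricted to the lane's quadrant. Returns the maximum-count (theta, rho).
HoughPeak houghPeak(const std::vector<Pt>& pts, int thetaLo, int thetaHi, int rhoMax) {
    // accumulator[theta][rho + rhoMax] counts points on each candidate line
    std::vector<std::vector<int>> acc(181, std::vector<int>(2 * rhoMax + 1, 0));
    HoughPeak best{0, 0, 0};
    const double deg = 3.14159265358979 / 180.0;
    for (const Pt& p : pts) {
        for (int t = thetaLo; t <= thetaHi; ++t) {
            int rho = static_cast<int>(std::lround(p.x * std::cos(t * deg)
                                                 + p.y * std::sin(t * deg)));
            int& v = acc[t][rho + rhoMax];
            if (++v > best.votes) best = HoughPeak{t, rho, v};
        }
    }
    return best;
}
```

For the left lane line one would call houghPeak(pts, 91, 180, rhoMax) and for the right houghPeak(pts, 0, 89, rhoMax), with rhoMax at least the image diagonal so every |ρ| fits in the accumulator.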
S4.5: Obtaining the left-lane-line parameters k_l and b_l from θ_l and ρ_l, and the right-lane-line parameters k_r and b_r from θ_r and ρ_r. For the left lane line: if ρ_l = 0, then k_l = tan(θ_l − 90) and b_l = 0. Otherwise, test whether θ_l equals 180; if so, k_l is infinite and b_l equals ρ_l directly. If not, the left-lane-line parameters are k_l = tan(θ_l − 90) together with an intercept formula given as a formula image in the original; the sums of distances from the feature points to the two candidate lines are computed, and the line with the smaller distance sum is taken as the required left lane line. The flow chart is shown in Fig. 12. For the right lane line: first test whether θ_r equals 0; if so, k_r is infinite and b_r equals ρ_r directly. Otherwise the right-lane-line parameters are k_r = tan(θ_r + 90) together with an intercept formula given as a formula image in the original. The flow chart is shown in Fig. 13.
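Since the intercept formulas survive only as figure references, it is worth noting that they follow from the transform itself: rewriting x·cosθ + y·sinθ = ρ as y = kx + b gives k = −cosθ/sinθ = tan(θ − 90) and b = ρ/sinθ. A hedged sketch of this conversion (the b formula is derived here, not quoted from the patent):

```cpp
#include <cmath>

// Convert a Hough peak (theta in degrees, rho) to slope/intercept form.
// From x*cos(theta) + y*sin(theta) = rho:
//   k = -cos(theta)/sin(theta) = tan(theta - 90),   b = rho/sin(theta).
// theta == 0 or 180 gives a vertical line, the patent's infinite-slope case.
bool peakToLine(int thetaDeg, double rho, double& k, double& b) {
    const double rad = thetaDeg * 3.14159265358979 / 180.0;
    const double s = std::sin(rad);
    if (std::fabs(s) < 1e-9) return false;  // vertical line: x = const
    k = -std::cos(rad) / s;
    b = rho / s;
    return true;
}
```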
This completes the implementation of the whole lane information extraction system.
The present invention offers the following beneficial effects:
1. Because a gray-stretch contrast enhancement algorithm is adopted, the lane information extraction algorithm adapts to road images with a degree of illumination variation.
2. Because the binarization and feature point screening stages fully account for far-field and near-field road images, the algorithm adapts to both fields of view.
3. The feature point screening stage effectively reduces the computation and storage required by the Hough transform, improving the real-time performance of the algorithm.
4. Because the feature point screening stage searches from bottom to top and from the middle outward, the algorithm effectively avoids false detection of dashed lines, and the detected lane line is the inner edge of the lane marking, which better matches practical needs.
Brief description of the drawings
Fig. 1: overall block diagram of the hardware;
Fig. 2: overall block diagram of the software;
Fig. 3: overall flow chart of road image preprocessing;
Fig. 4: overall flow chart of the Otsu algorithm;
Fig. 5: dashed lane line endpoints;
Fig. 6: positional relationship between the left lane line and the image coordinate system (1);
Fig. 7: positional relationship between the left lane line and the image coordinate system (2);
Fig. 8: positional relationship between the left lane line and the image coordinate system (3);
Fig. 9: positional relationship between the left lane line and the image coordinate system (4);
Fig. 10: positional relationship between the right lane line and the image coordinate system (1);
Fig. 11: positional relationship between the right lane line and the image coordinate system (2);
Fig. 12: flow chart for obtaining the slope and intercept of the left lane line from its maximum-count accumulator point;
Fig. 13: flow chart for obtaining the slope and intercept of the right lane line from its maximum-count accumulator point.
Embodiment
The present invention is based on vehicle-mounted video; each stage of lane information extraction in intelligent vehicle navigation is analyzed in depth and improved as needed.
The hardware of the lane information extraction system is prepared as follows.
(1) The camera is installed at the front center of the intelligent vehicle. It is a 1/3-inch SONY CCD camera with a 12 V supply; once powered, it captures the analog road scene in real time as the intelligent vehicle advances.
(2) The wireless transmitting module is connected to the camera, and the wireless receiving module is connected to the PC. The wireless link uses a 5.8 GHz, 200 mW FPV transmit/receive set with a 12 V supply; once powered, the transmitting module sends the analog road signal captured by the camera to the receiving module.
(3) The video capture card, a Tianmin SDK2500, is installed in the PC; it converts the analog road signal received by the wireless receiving module into digital image information.
(4) VC2008 and OpenCV 2.0 are installed on the PC, and code is written to read the digital image information delivered by the video capture card; a minimal read-loop sketch follows.
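A minimal sketch of the PC-side read loop such code must provide, written against the OpenCV 2.x C++ API; exposing the capture card as video device 0 through cv::VideoCapture is an assumption:

```cpp
#include <opencv2/opencv.hpp>

// Grab digitized frames from the capture card and hand each one to the
// lane-extraction pipeline (the pipeline call itself is elided here).
int main() {
    cv::VideoCapture cap(0);               // capture card as video device 0 (assumed)
    if (!cap.isOpened()) return 1;
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... run the grayscale, filtering, binarization and Hough steps here ...
        if (cv::waitKey(1) == 27) break;   // ESC stops the loop
    }
    return 0;
}
```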
The software of the lane information extraction system is implemented as follows.
S1: Image coordinate system establishment: for the original digital image read from the video capture card, take the lower-left corner of the image as the origin, the horizontal direction to the right as the positive x-axis, and the vertical direction upward as the positive y-axis. Note that the default image coordinate system of OpenCV takes the upper-left corner as the origin, with the positive x-axis to the right and the positive y-axis downward. Therefore, when reading the original digital image and computing lane information with OpenCV, coordinates must be transformed from OpenCV's default image coordinate system into the one established here. The transform is simple: the abscissa x is unchanged, and the ordinate becomes H − y, where H is the height of the image.
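For illustration, the transform as one would apply it to 0-based pixel indices; the patent writes H − y, and with 0-based rows the in-range equivalent is H − 1 − y, used here:

```cpp
// Map a pixel from OpenCV's default frame (origin top-left, y down) into the
// patent's frame (origin bottom-left, y up). The patent states y' = H - y; for
// 0-based row indices the in-range equivalent is y' = H - 1 - y.
inline void toPatentFrame(int H, int x, int y, int& xp, int& yp) {
    xp = x;          // abscissa unchanged
    yp = H - 1 - y;  // ordinate flipped
}
```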
S2: Road image preprocessing.
S2.1: Weighted-average grayscale conversion: the red, green, and blue components of each pixel of the original digital image read from the video capture card are averaged with different weights to obtain a grayscale image:

f_1(x, y) = 0.212671·R(x, y) + 0.715160·G(x, y) + 0.072169·B(x, y)

where f_1(x, y) is the gray value of point (x, y) after conversion, and R(x, y), G(x, y), B(x, y) are the red, green, and blue components of point (x, y) in the original image.
S2.2: Median filtering: the grayscale image is filtered with a 3×3 median filter:

f_2(x, y) = med{ f_1(x−1, y−1), f_1(x−1, y), f_1(x−1, y+1), f_1(x, y−1), f_1(x, y), f_1(x, y+1), f_1(x+1, y−1), f_1(x+1, y), f_1(x+1, y+1) },  x > 0, y > 0

where f_2(x, y) is the gray value of point (x, y) after filtering, med takes the median of the gray values in braces, and f_1(x, y) is the gray value after grayscale conversion. For points with x = 0 or y = 0, the gray value f_1(x, y) is used directly as the filtered value f_2(x, y).
S2.3: Contrast enhancement: a gray-stretch algorithm expands the gray values of the filtered image from the range min ≤ f_2(x, y) ≤ max to the range 0 ≤ f_3(x, y) ≤ 255:

f_3(x, y) = 255·(f_2(x, y) − min) / (max − min)

where f_3(x, y) is the gray value of point (x, y) after contrast enhancement, f_2(x, y) is the gray value after filtering, and max and min are the maximum and minimum gray values in the filtered image.
S2.4: Adaptive binarization based on the Otsu algorithm.
S2.4.1: Far/near field classification: scan the rows of the contrast-enhanced image from top to bottom and count, in each row, the number n of points with gray value greater than 200; the scan stops at the first row with n less than 50, and the ordinate y of that row is recorded as the horizon ordinate. If y is smaller than a fixed fraction of the image height H (the threshold appears as a formula image in the original), the contrast-enhanced image is judged to be a far-field road image; otherwise it is a near-field road image.
S2.4.2: Adaptive binarization of far-field road images: for a far-field road image, the Otsu-based binarization formula is:

f_4(x, y) = 0 if f_3(x, y) < T; 255 if f_3(x, y) ≥ T

where f_4(x, y) is the gray value of point (x, y) after binarization, f_3(x, y) is the gray value after contrast enhancement, and T is the optimal threshold obtained with the Otsu algorithm, whose flow chart is shown in Fig. 4.
S2.4.3: Adaptive binarization of near-field road images: the near-field road image is first given a target-region compensation, applied separately to the upper and lower halves of the image. The compensated target regions of the first and latter halves of the road image are given as formula images in the original; W and H denote the width and height of the image. Target-region compensation simply sets the gray values inside the target region to 255.
After target-region compensation, the two halves of the image are adaptively binarized with the Otsu algorithm separately. The first half, covering {(x, y) | 0 ≤ x < W, 0 ≤ y < H/2}, yields a binary image with gray values f_4-up(x, y); the latter half, covering {(x, y) | 0 ≤ x < W, H/2 ≤ y < H}, yields a binary image with gray values f_4-down(x, y). The two results are finally fused into the final binary image:
f_4(x, y) = f_4-up(x, y) for 0 ≤ x < W, 0 ≤ y < H/2; f_4(x, y) = f_4-down(x, y) for 0 ≤ x < W, H/2 ≤ y < H
where f_4(x, y) is the gray value of point (x, y) after binarization, f_4-up(x, y) and f_4-down(x, y) are the gray values after binarizing the first and latter halves of the road image, and W and H are the width and height of the image.
S3: Lane model building: the present invention fits straight-line models to the left and right lane lines of the expressway:

y = k_l·x + b_l (left lane line), y = k_r·x + b_r (right lane line)

where x and y are the abscissa and ordinate of a point on a lane line, k_l and k_r are the slopes of the left and right lane lines, and b_l and b_r are their intercepts. The lane information to be extracted is therefore k_l, b_l, k_r and b_r.
S4: Lane information extraction.
S4.1: Feature point screening: this comprises left-lane-line feature point screening (S4.1.1) and right-lane-line feature point screening (S4.1.2).
S4.1.1: Left-lane-line feature point screening comprises screening an initial feature point and screening the remaining feature points. The procedure is as follows; steps S4.1.1.1 and S4.1.1.2 determine the initial feature point, and steps S4.1.1.3 and S4.1.1.4 determine the remaining feature points.
S4.1.1.1: Within the region of interest for left-lane-line initial feature point screening, R_l = {(x, y) | 0 ≤ x < W/2, 0 ≤ y < H}, search from top to bottom and, within each row, from right to left. When a white point (a point with gray value 255) is found for the first time, increase the search height by 2 and continue searching from right to left.
S4.1.1.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point. Concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x − 5 < 0, the range becomes 0 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point; otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.1.2 is repeated until a suitable initial feature point is found.
S4.1.1.3: If step S4.1.1.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original (W is the width of the image). The search stops when its height reaches the horizon position obtained in step S2.4.1.
S4.1.1.4: If step S4.1.1.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from right to left for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.1.2: Right-lane-line feature point screening likewise comprises screening an initial feature point and screening the remaining feature points. The procedure is as follows; steps S4.1.2.1 and S4.1.2.2 determine the initial feature point, and steps S4.1.2.3 and S4.1.2.4 determine the remaining feature points.
S4.1.2.1: Within the region of interest for right-lane-line initial feature point screening (given as a formula image in the original), search from top to bottom and, within each row, from left to right. When a white point (a point with gray value 255) is found for the first time, increase the search height by 2 and continue searching from left to right.
S4.1.2.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point. Concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x + 5 > W, the range becomes white_last.x − 5 < x < W. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early. Finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point; otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.2.2 is repeated until a suitable initial feature point is found.
S4.1.2.3: If step S4.1.2.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original. The search stops when its height reaches the horizon position.
S4.1.2.4: If step S4.1.2.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from left to right for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5. If no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line.
S4.2: Determining the θ transform range for the left and right lane lines: the physical meaning of θ is as follows. Drop a perpendicular from the origin O onto the lane line, meeting it at point A; θ is then the angle through which the positive x-axis rotates counterclockwise to reach the vector OA. The positional relationships between the lane lines and the image coordinate system are analyzed in Figs. 6 to 11. For the left lane line the transform only needs to cover the range (90, 180]; for the right lane line it only needs to cover [0, 90).
S4.3: Finding the θ_l and ρ_l of the left lane line's maximum-count accumulator point: θ_l and ρ_l are the θ and ρ values of the line formed by the largest number of collinear points among the screened left-lane-line feature points. The screened feature points are transformed over the range (90, 180] with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ has the physical meaning described in step S4.2 and ρ is the modulus of the vector OA. Each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_l and ρ_l of the maximum-count point are obtained.
S4.4: Finding the θ_r and ρ_r of the right lane line's maximum-count accumulator point: θ_r and ρ_r are the θ and ρ values of the line formed by the largest number of collinear points among the screened right-lane-line feature points. The screened feature points are transformed over the range [0, 90) with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ and ρ have the physical meanings described in step S4.3. Each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_r and ρ_r of the maximum-count point are obtained.
S4.5: Obtaining the left-lane-line parameters k_l and b_l from θ_l and ρ_l, and the right-lane-line parameters k_r and b_r from θ_r and ρ_r. For the left lane line: if ρ_l = 0, then k_l = tan(θ_l − 90) and b_l = 0. Otherwise, test whether θ_l equals 180; if so, k_l is infinite and b_l equals ρ_l directly. If not, the left-lane-line parameters are k_l = tan(θ_l − 90) together with an intercept formula given as a formula image in the original; the sums of distances from the feature points to the two candidate lines are computed, and the line with the smaller distance sum is taken as the required left lane line. For the right lane line: first test whether θ_r equals 0; if so, k_r is infinite and b_r equals ρ_r directly. Otherwise the right-lane-line parameters are k_r = tan(θ_r + 90) together with an intercept formula given as a formula image in the original.

Claims (2)

1. A lane information extraction method for vision-based expressway intelligent vehicle navigation, characterized by comprising four stages: image coordinate system establishment, road image preprocessing, lane model building, and lane information extraction; wherein road image preprocessing comprises weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm; the lane model building stage fits straight-line models to the left and right lane lines, so the lane information to be extracted is k_l, b_l, k_r and b_r, where k_l and b_l are the slope and intercept of the left lane line and k_r and b_r are the slope and intercept of the right lane line; and the lane information extraction algorithm is implemented with the widely used Hough transform; the concrete steps are as follows:
S1: Image coordinate system establishment: for the original digital image read from the video capture card, take the lower-left corner of the image as the origin, the horizontal direction to the right as the positive x-axis, and the vertical direction upward as the positive y-axis;
S2: Road image preprocessing: comprising weighted-average grayscale conversion, median filtering, gray-stretch contrast enhancement, and adaptive binarization based on the Otsu algorithm;
S2.1: Weighted-average grayscale conversion: the red, green, and blue components of each pixel of the original digital image read from the video capture card are averaged with different weights to obtain a grayscale image:

f_1(x, y) = 0.212671·R(x, y) + 0.715160·G(x, y) + 0.072169·B(x, y)

where f_1(x, y) is the gray value of point (x, y) after conversion, and R(x, y), G(x, y), B(x, y) are the red, green, and blue components of point (x, y) in the original image;
S2.2: Median filtering: the grayscale image is filtered with a 3×3 median filter:

f_2(x, y) = med{ f_1(x−1, y−1), f_1(x−1, y), f_1(x−1, y+1), f_1(x, y−1), f_1(x, y), f_1(x, y+1), f_1(x+1, y−1), f_1(x+1, y), f_1(x+1, y+1) },  x > 0, y > 0

where f_2(x, y) is the gray value of point (x, y) after filtering, med takes the median of the values in braces, and f_1(x, y) is the gray value after grayscale conversion; for points with x = 0 or y = 0, the gray value f_1(x, y) is used directly as the filtered value f_2(x, y);
S2.3: Contrast enhancement: a gray-stretch algorithm expands the gray values of the filtered image from the range min ≤ f_2(x, y) ≤ max to the range 0 ≤ f_3(x, y) ≤ 255:

f_3(x, y) = 255·(f_2(x, y) − min) / (max − min)

where f_3(x, y) is the gray value of point (x, y) after contrast enhancement, f_2(x, y) is the gray value after filtering, and max and min are the maximum and minimum gray values in the filtered image;
S2.4: Adaptive binarization based on the Otsu algorithm: the contrast-enhanced image is first classified, according to the ordinate of the horizon, as a far-field or near-field road image; far-field and near-field images are then adaptively binarized with the Otsu algorithm respectively;
S2.4.1: Far/near field classification: based on the horizon ordinate, the contrast-enhanced image is classified as a near-field or far-field road image; the horizon is extracted by scanning the rows of the contrast-enhanced image from top to bottom and counting, in each row, the number n of points with gray value greater than 200; the scan stops at the first row with n less than 50, and the ordinate y of that row is recorded as the horizon ordinate; if y is smaller than a fixed fraction of the image height H (the threshold appears as a formula image in the original), the contrast-enhanced image is judged to be a far-field road image, otherwise a near-field road image;
S2.4.2: Adaptive binarization of far-field road images: for a far-field road image, the Otsu-based binarization formula is:

f_4(x, y) = 0 if f_3(x, y) < T; 255 if f_3(x, y) ≥ T

where f_4(x, y) is the gray value of point (x, y) after binarization, f_3(x, y) is the gray value after contrast enhancement, and T is the optimal threshold obtained with the Otsu algorithm;
S2.4.3: Adaptive binarization of near-field road images: the near-field road image is first given a target-region compensation, applied separately to the upper and lower halves of the image; the compensated target region of the first half of the road image is R_1 = {(x, y) | 0 ≤ x < W, ...} and that of the latter half is R_2 = {(x, y) | 0 ≤ x < W, ...}, with the ordinate ranges given as formula images in the original; W and H are the width and height of the image; target-region compensation simply sets the gray values inside the target region to 255;
After target-region compensation, the two halves of the image are adaptively binarized with the Otsu algorithm separately; the first half, covering {(x, y) | 0 ≤ x < W, 0 ≤ y < H/2}, yields a binary image with gray values f_4-up(x, y); the latter half, covering {(x, y) | 0 ≤ x < W, H/2 ≤ y < H}, yields a binary image with gray values f_4-down(x, y); the two results are finally fused into the final binary image:

f_4(x, y) = f_4-up(x, y) for 0 ≤ x < W, 0 ≤ y < H/2; f_4(x, y) = f_4-down(x, y) for 0 ≤ x < W, H/2 ≤ y < H

where f_4(x, y) is the gray value of point (x, y) after binarization, f_4-up(x, y) and f_4-down(x, y) are the gray values after binarizing the first and latter halves of the road image, and W and H are the width and height of the image;
S3: Lane model building: building a lane model makes explicit which lane information must be extracted; straight-line models are fitted to the left and right lane lines of the expressway:

y = k_l·x + b_l (left lane line), y = k_r·x + b_r (right lane line)

where x and y are the abscissa and ordinate of a point on a lane line, k_l and k_r are the slopes of the left and right lane lines, and b_l and b_r are their intercepts; the lane information to be extracted is therefore k_l, b_l, k_r and b_r;
S4: Lane information extraction: the goal is to obtain the slope and intercept of each of the left and right lane lines;
S4.1: Feature point screening: comprising left-lane-line feature point screening (S4.1.1) and right-lane-line feature point screening (S4.1.2);
S4.1.1: Left-lane-line feature point screening comprises screening an initial feature point and screening the remaining feature points; the procedure is as follows, where steps S4.1.1.1 and S4.1.1.2 determine the initial feature point and steps S4.1.1.3 and S4.1.1.4 determine the remaining feature points;
S4.1.1.1: Within the region of interest for left-lane-line initial feature point screening, R_l = {(x, y) | 0 ≤ x < W/2, 0 ≤ y < H}, search from top to bottom and, within each row, from right to left; when a white point with gray value 255 is found for the first time, increase the search height by 2 and continue searching from right to left; the height is increased by 2 because of dashed-line endpoints;
S4.1.1.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point; concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x − 5 < 0, the range becomes 0 < x < white_last.x + 5; if no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point, otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.1.2 is repeated until a suitable initial feature point is found;
S4.1.1.3: If step S4.1.1.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original (W is the width of the image); the search stops when its height reaches the horizon position obtained in step S2.4.1;
S4.1.1.4: If step S4.1.1.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from right to left for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5; if no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line;
S4.1.2: Right-lane-line feature point screening comprises screening an initial feature point and screening the remaining feature points; the procedure is as follows, where steps S4.1.2.1 and S4.1.2.2 determine the initial feature point and steps S4.1.2.3 and S4.1.2.4 determine the remaining feature points;
S4.1.2.1: Within the region of interest for right-lane-line initial feature point screening (given as a formula image in the original), search from top to bottom and, within each row, from left to right; when a white point is found for the first time, increase the search height by 2 and continue searching from left to right, for the same reason as in step S4.1.1.1;
S4.1.2.2: When a white point is found again, tentatively take it as the initial feature point P, and use the number of white points found over the following 50 ordinates to judge whether P is a suitable initial feature point; concretely, increase the height by 1 at a time and search the abscissa range white_last.x − 5 < x < white_last.x + 5, where white_last.x is the abscissa of the white point found last; if white_last.x + 5 > W, the range becomes white_last.x − 5 < x < W; if no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; finally, if the number of white points within the 50 ordinates exceeds 20, P is judged to be a suitable initial feature point, otherwise the search height restarts at the ordinate of P plus 1 and step S4.1.2.2 is repeated until a suitable initial feature point is found;
S4.1.2.3: If step S4.1.2.2 judged the lane line to be dashed, resume from the ordinate at which that loop exited, increasing the height by 1 at a time and searching for white points over an abscissa range given as a formula image in the original; the search stops when its height reaches the horizon position;
S4.1.2.4: If step S4.1.2.2 did not judge the lane line to be dashed, start from the height of the initial feature point plus 50, increase the height by 1 at a time, and search from left to right for white points in the abscissa range white_last.x − 5 < x < white_last.x + 5; if no white point is found for 10 consecutive ordinates, the current lane line is judged to be a dashed lane line and the search ends early; otherwise the search stops when its height reaches the horizon position, and the current lane line is judged to be a solid line;
S4.2: Determining the θ transform range for the left and right lane lines: the physical meaning of θ is as follows: drop a perpendicular from the origin O onto the lane line, meeting it at point A; θ is the angle through which the positive x-axis rotates counterclockwise to reach the vector OA; the positional relationships between the lane lines and the image coordinate system are analyzed accordingly; for the left lane line the transform only needs to cover the range (90, 180], and for the right lane line only [0, 90);
S4.3: Finding the θ_l and ρ_l of the left lane line's maximum-count accumulator point: θ_l and ρ_l are the θ and ρ values of the line formed by the largest number of collinear points among the screened left-lane-line feature points; the screened feature points are transformed over the range (90, 180] with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ has the physical meaning described in step S4.2 and ρ is the modulus of the vector OA; each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_l and ρ_l of the maximum-count point are obtained;
S4.4: Finding the θ_r and ρ_r of the right lane line's maximum-count accumulator point: θ_r and ρ_r are the θ and ρ values of the line formed by the largest number of collinear points among the screened right-lane-line feature points; the screened feature points are transformed over the range [0, 90) with the Hough transform formula ρ = x·cosθ + y·sinθ, where θ and ρ have the physical meanings described in step S4.3; each (θ, ρ) pair is stored together with the number of feature points passing through (θ, ρ); finally the maximum accumulated count and the θ_r and ρ_r of the maximum-count point are obtained;
S4.5: obtain the left lane line parameters kl and bl from θl and ρl, and the right lane line parameters kr and br from θr and ρr: the steps for determining kl and bl from θl and ρl are: if ρl = 0, then kl = tan(θl - 90°) and bl = 0; otherwise, further judge whether θl equals 180°: if so, kl is infinite and bl directly equals ρl; if not, the left lane line parameters are kl = tan(θl - 90°) and bl = ρl/sin(θl); then compute the sum of the distances from the feature points to each of the two candidate straight lines, and take the line with the smaller distance sum as the required left lane line; the steps for determining kr and br from θr and ρr are: first judge whether θr equals 0°: if so, kr is infinite and br directly equals ρr; otherwise, the right lane line parameters are kr = tan(θr - 90°) and br = ρr/sin(θr).
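Step S4.5 follows from rewriting ρ = x·cosθ + y·sinθ in slope-intercept form, y = tan(θ - 90°)·x + ρ/sinθ. Below is a minimal sketch of the conversion, with the vertical-line special cases handled as the claim describes; the name thetaRhoToKB is illustrative.

```cpp
#include <cmath>
#include <limits>

// (theta, rho) -> (k, b) conversion of step S4.5 (illustrative sketch),
// from rewriting rho = x*cos(theta) + y*sin(theta) as
// y = tan(theta - 90)*x + rho/sin(theta).
const double PI = 3.14159265358979323846;

struct LineKB {
    double k; // slope
    double b; // intercept (taken as rho itself for vertical lines)
};

LineKB thetaRhoToKB(double thetaDeg, double rho) {
    LineKB line;
    if (thetaDeg == 0.0 || thetaDeg == 180.0) {
        // vertical lane line: infinite slope, b directly equals rho
        line.k = std::numeric_limits<double>::infinity();
        line.b = rho;
    } else if (rho == 0.0) {
        // line through the origin: k = tan(theta - 90), b = 0
        line.k = std::tan((thetaDeg - 90.0) * PI / 180.0);
        line.b = 0.0;
    } else {
        line.k = std::tan((thetaDeg - 90.0) * PI / 180.0);
        line.b = rho / std::sin(thetaDeg * PI / 180.0);
    }
    return line;
}
```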
2. The lane information extraction method according to claim 1, characterized in that: the software part is implemented based on VC2008 and OpenCV 2.0.
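For comparison only: OpenCV, which claim 2 names as the implementation platform, provides a general-purpose Hough line transform, cv::HoughLines, that votes over the full θ range rather than the restricted per-lane ranges of steps S4.3 and S4.4. A sketch of the library call follows; the 1 px / 1° resolution and the 80-vote threshold are assumptions.

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Standard Hough line transform as shipped by OpenCV (named in claim 2).
std::vector<cv::Vec2f> detectLines(const cv::Mat& binaryEdges) {
    std::vector<cv::Vec2f> lines; // each entry is (rho, theta)
    cv::HoughLines(binaryEdges, lines, 1, CV_PI / 180, 80);
    return lines;
}
```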
CN201210128340.3A 2012-04-26 2012-04-26 System and method used for extracting lane information in express way intelligent vehicle-navigation and based on vision Expired - Fee Related CN102663403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210128340.3A CN102663403B (en) 2012-04-26 2012-04-26 System and method used for extracting lane information in express way intelligent vehicle-navigation and based on vision

Publications (2)

Publication Number Publication Date
CN102663403A (en) 2012-09-12
CN102663403B (en) 2014-04-16

Family

ID=46772887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210128340.3A Expired - Fee Related CN102663403B (en) 2012-04-26 2012-04-26 System and method used for extracting lane information in express way intelligent vehicle-navigation and based on vision

Country Status (1)

Country Link
CN (1) CN102663403B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870830B (en) * 2014-02-25 2018-06-26 奇瑞汽车股份有限公司 The extracting method and device of lane line Edge Feature Points
CN105069415B (en) * 2015-07-24 2018-09-11 深圳市佳信捷技术股份有限公司 Method for detecting lane lines and device
CN105389557A (en) * 2015-11-10 2016-03-09 佛山科学技术学院 Electronic official document classification method based on multi-region features
CN105511462B (en) * 2015-11-30 2018-04-27 北京卫星制造厂 A kind of AGV air navigation aids of view-based access control model
CN108509858A (en) * 2018-03-09 2018-09-07 北京信息科技大学 A kind of visual tire print Feature Extraction System of the scene of a traffic accident and method
CN109410587B (en) * 2018-12-18 2021-07-02 北京工业大学 Macroscopic traffic flow parameter estimation method for urban expressway
CN111161545B (en) * 2019-12-24 2021-01-05 北京工业大学 Intersection region traffic parameter statistical method based on video
CN110992296B (en) * 2020-03-04 2020-06-09 执鼎医疗科技(杭州)有限公司 Meibomian gland image enhancement method
CN113221861B (en) * 2021-07-08 2021-11-09 中移(上海)信息通信科技有限公司 Multi-lane line detection method, device and detection equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6845172B2 (en) * 2000-09-29 2005-01-18 Nissan Motor Co., Ltd. Road lane marker recognition
CN101135558A (en) * 2007-09-28 2008-03-05 深圳先进技术研究院 Vehicle anti-collision early warning method and apparatus based on machine vision
CN201825037U (en) * 2010-07-08 2011-05-11 长安大学 Lane departure alarm device for vehicles on highway

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wu Chen

Inventor after: Cui Hongjun

Inventor after: Wang Jinlong

Inventor after: Yan Haojie

Inventor before: Yan Haojie

Inventor before: Chen Yangzhou

Inventor before: Xin Le

Inventor before: Xin Fengqiang

TR01 Transfer of patent right

Effective date of registration: 20170510

Address after: 518100, 2nd Floor, Building B, Xingqiang Building, Huada Road, Gaofeng Community, Dalang Street, Longhua New District, Shenzhen, Guangdong Province

Patentee after: Shenzhen United Ying Da Technology Co., Ltd.

Address before: 100124, No. 100, Pingleyuan, Chaoyang District, Beijing

Patentee before: Beijing University of Technology

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140416

Termination date: 20180426