CN104392212B - Vision-based road information detection and front-vehicle recognition method - Google Patents

Vision-based road information detection and front-vehicle recognition method

Info

Publication number
CN104392212B
CN104392212B CN201410647880.1A CN201410647880A
Authority
CN
China
Prior art date
Application number
CN201410647880.1A
Other languages
Chinese (zh)
Other versions
CN104392212A (en)
Inventor
段建民
刘冠宇
Original Assignee
Beijing University of Technology (北京工业大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology (北京工业大学)
Priority to CN201410647880.1A priority Critical patent/CN104392212B/en
Publication of CN104392212A publication Critical patent/CN104392212A/en
Application granted granted Critical
Publication of CN104392212B publication Critical patent/CN104392212B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00791Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes
    • G06K9/00798Recognition of lanes or road borders, e.g. of lane markings, or recognition of driver's driving pattern in relation to lanes perceived from the vehicle; Analysis of car trajectory relative to detected road
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00791Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes
    • G06K9/00825Recognition of vehicle or traffic lights
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The invention belongs to the field of intelligent-vehicle road detection and relates to a vision-based road information detection and front-vehicle recognition method. The method comprises: image preprocessing; lane-line characteristic parameter extraction; region-of-interest (ROI) partitioning; and vehicle-contour recognition. By extracting a region of interest and filtering out the background region, the present invention reduces the processing range and simplifies the computation. A row-by-row retrieval method obtains the result for every frame in a fixed number of operations; unlike the Hough transform, it does not fit a straight line through every bright point, giving a marked advantage in real-time performance. The Robinson direction-template operator is improved for real-time use: intermediate variables are introduced to reduce the number of operations on each pixel. Candidate regions are screened and discriminated using the information entropy of the target area and the symmetry of the vehicle rear, reducing the miss rate and false-detection rate of the algorithm.

Description

Vision-based road information detection and front-vehicle recognition method

Technical field

The invention belongs to the field of intelligent-vehicle road detection, and in particular relates to a vision-based road information detection and front-vehicle recognition method.

Background technology

An intelligent vehicle is a comprehensive system integrating functions such as environment perception, planning and decision-making, and multi-level driving assistance. It draws together computers, modern sensing, information fusion, communication, artificial intelligence, automatic control and other technologies, and is a typical high-tech complex. Current research on intelligent vehicles is mainly devoted to improving the safety and comfort of the automobile and to providing an excellent human-vehicle interface. In recent years the intelligent vehicle has become a focus of world vehicle-engineering research and a new driving force for growth in the automobile industry, and many developed countries have incorporated it into the intelligent transportation systems they each prioritise. Road-information detection has always been the core link of an intelligent vehicle's control system and a key technology of intelligent transportation systems, and the detection and recognition of lane lines and front vehicles is the matter of first importance in realising this technology.

Many techniques have been proposed in this field. The autonomous vehicle ARGO system developed by VisLab uses vision as its main sensor, establishing a two-degree-of-freedom vehicle dynamics model and a preview-follower model and introducing a feedback supervision signal. Because a suitable steering-wheel output can only be obtained through a complicated fitting procedure after the road environment is reconstructed from images, the computational complexity of the method is very high and its consumption of hardware resources very large. Tzomakas and Seelen realised a method of obtaining a road-surface grey threshold, but it cannot cope with changes in road-surface grey level. Marola's research belongs to the knowledge-based methods, whose disadvantage is that the false-detection rate rises substantially in complex environments. Wang et al. proposed a lane-line detection method based on B-splines. Benefiting from the freedom with which spline functions express contours, this method can accurately identify both straight roads and bends and shows a degree of robustness to road-surface shadows; however, the control points of the B-spline contour lie outside the curve, so convergence requires many iterations, adding system complexity. In China, Chen Zhi proposed a vehicle recognition method based on the wavelet transform, but it cannot meet the broad adaptability required for system matching. The current mainstream algorithm uses the Hough transform to find the straight line in the image that best fits the lane features and calibrates against it. The advantage of this algorithm is good real-time performance; its disadvantages are that the results are mainly straight segments, so it is difficult to provide effective parameters while the vehicle is turning, and that the computation is heavy, making real-time operation hard to guarantee.

Summary of the invention

To address the problem that the robustness or real-time performance of the prior art cannot meet the requirements of an early-warning mechanism, the present invention proposes a vision-based road information detection and front-vehicle recognition method. The method first performs adaptive binarisation segmentation on the image; then extracts the region of interest (ROI, Region Of Interest) from the image; screens the feature points on the inner side of the lane markings with a row-by-row retrieval method, obtaining the left and right marking-line parameters of the actual lane so that the road model can be reconstructed; filters out noise points by erosion and dilation; merges the shadow lines and extracts the ROI; screens and discriminates the ROI using the information entropy of the target area and the symmetry of the vehicle rear, reducing the miss and false-detection rates of the algorithm; and extracts the vehicle boundary with an improved Robinson edge-detection operator, achieving good results.

The technical solution adopted by the present invention to achieve the above object is as follows:

A vision-based road information detection and front-vehicle recognition method. The system realising the method comprises: a camera; a measurement-and-control computer fitted with a video capture card; a local area network built with a router; a planning-and-decision host computer; and the intelligent-vehicle BJUT-SHEV experimental platform. The camera is mounted at the front-centre position of the roof of the BJUT-SHEV experimental platform and collects road images in real time. The camera is connected through the video capture card to the measurement-and-control computer by USB, realising the video data acquisition function. The control parameters obtained by the measurement-and-control computer's processing are passed over the local area network built by the router to the planning-and-decision host computer, which parses this information and then controls the BJUT-SHEV experimental platform. The method is characterised in that the following steps are executed on the measurement-and-control computer:

Step 1, image preprocessing.

The preprocessing includes: converting the colour image to greyscale; binarisation segmentation with the single maximum between-class variance (OTSU) method; Sobel-operator edge detection; image thinning; and determination of the road area.

Step 2, lane-line detection and departure warning.

Lane boundary points are obtained by row-by-row retrieval and fitted with the least-squares method to obtain the conic curve describing the lane. The direction in which the road ahead turns is judged, and a warning is raised if the vehicle departs from the lane line.

Step 3, ROI extraction.

The vehicle-bottom shadow is segmented by a method combining the road-area grey level with double OTSU; the segmented image is processed by erosion and dilation to fill gap regions, and the vehicle ROI is obtained from the underbody shadow.

Step 4, vehicle-contour recognition.

The regions are screened by multiple features, with information entropy and symmetry as the main reference criteria. The parts remaining after screening are processed with the improved Robinson operator to obtain the grey-level-change gradient values, and the outer contour of the vehicle is identified with the Hough transform method.

Compared with the prior art, the present invention has the following advantages:

(1) By performing adaptive binarisation segmentation on the image, the present invention achieves the effect of adaptively matching the image.

(2) The following measures improve the real-time performance of the system. The region of interest is extracted from the image and the background region filtered out, reducing the processing range of subsequent algorithms and simplifying the computation. The row-by-row retrieval method obtains the result for every frame in a fixed number of operations; unlike the Hough transform, it does not fit a straight line through every bright point, a marked real-time advantage. The Robinson direction-template operator is improved for real-time use by introducing intermediate variables that reduce the number of operations on each pixel.

(3) Unlike the Hough transform commonly used in the prior art, the present invention screens the feature points on the inner side of the lane markings by row-by-row retrieval, so the detection result fits the lane lines of the real road more closely and is not limited to straight-line features, thus providing the system with more effective information while the vehicle is cornering.

(4) To address the inability of the method of Tzomakas and Seelen to cope with road-surface grey-level changes, the present invention carries out a second OTSU threshold segmentation on top of the adaptive binarisation, extracting the underbody shadow accurately. Noise points are filtered out by erosion and dilation, simplifying and improving the efficiency of shadow-line merging and ROI extraction.

(5) To address the substantially increased false-detection rate of Marola's knowledge-based method in complex environments, the present invention screens and discriminates the ROI using the information entropy of the target area and the symmetry of the vehicle rear, reducing the miss and false-detection rates of the algorithm and improving the feasibility of the system in complex environments.

Brief description of the drawings

Fig. 1 is the hardware system block diagram of the embodiment of the present invention;

Fig. 2 is the main flow chart of the method of the invention;

Fig. 3 is the lane-line image preprocessing flow chart;

Fig. 4 is a schematic diagram of the lane-departure warning principle;

Fig. 5 shows the lane-departure models: (a) departing to the left, (b) departing to the right;

Fig. 6 is the lane-line extraction and warning-method flow chart;

Fig. 7 shows the position relationship between shadow-line length and image coordinates;

Fig. 8 is the ROI extraction flow chart;

Fig. 9 is the vehicle-contour recognition flow chart.

Embodiment

The present invention will be further described below with reference to the accompanying drawings and an embodiment.

The hardware system of the embodiment is composed as shown in Fig. 1 and includes:

Camera: a "Mirror King"-series camera, connected to the measurement-and-control computer with a USB cable. The camera is mounted at the front-centre position of the roof of the intelligent-vehicle BJUT-SHEV experimental platform; as the vehicle advances, real-time information about the road ahead is collected.

Measurement-and-control computer with video capture card: the video capture card is an ST-769 capture card, which converts the analogue road information received by the measurement-and-control computer into digital image information. In addition, VS2010 and OpenCV 2.4.5 are installed on the measurement-and-control computer, the software running environment is configured, and the software program realising the method of the invention is implemented.

Local area network built with a router: the router is a WNR2000 produced by Netgear. The local area network it builds uploads the data information packed by the measurement-and-control computer to the planning-and-decision host computer for its use.

Planning-and-decision host computer: parses the aforementioned data information and derives control commands, so as to control the BJUT-SHEV experimental platform and implement actions such as steering the vehicle, lifting the throttle, or stepping on the brake.

The flow chart of the vision-based road information detection and front-vehicle recognition method is shown in Fig. 2; the method is realised by the software program installed in the measurement-and-control computer and comprises the following steps:

Step 1, image preprocessing; the specific flow is shown in Fig. 3.

Step 1.1, colour-image greyscale conversion.

Let the colour of a pixel in the original colour image be RGB(R, G, B) and the grey value of the pixel after processing be Gray; the greyscale conversion of the colour image can then be expressed as:

Gray = R × 0.299 + G × 0.587 + B × 0.114
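The weighted sum above can be sketched in a few lines of Python; this is only an illustration of the formula (NumPy is assumed), not the patent's own implementation:

```python
import numpy as np

def rgb_to_gray(img):
    """Weighted greyscale conversion: Gray = 0.299 R + 0.587 G + 0.114 B."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B coefficients
    return np.rint(img[..., :3].astype(np.float64) @ weights).astype(np.uint8)

# A pure-white pixel maps to grey 255; pure black stays 0.
pixel = np.array([[[255, 255, 255]]], dtype=np.uint8)
gray = rgb_to_gray(pixel)
```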

Step 1.2, binarisation with the single-OTSU method.

The OTSU method is widely used in pattern recognition; it selects a threshold adaptively to distinguish the background from the target area. The characteristic parameters of the grey image are calculated first:

μ = ω0·μ0 + ω1·μ1

σ²(K) = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²

where ω0 and ω1 are the probabilities of occurrence of the background and target-area pixel grey values respectively, μ0 and μ1 are the mean grey values of the background and target-area pixels, μ is the statistical mean of the grey level of the whole image, and σ²(K) is the between-class variance of the background and target areas for candidate threshold K = 1, 2, 3, …. The K that makes the variance attain its maximum is sought, giving the optimal threshold K.
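As a minimal sketch of the threshold search described above, the following uses the algebraically equivalent form ω0·ω1·(μ0 − μ1)² of the between-class variance and a 256-level histogram (an assumption; the patent does not give code):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive search for the K maximising the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        w0, w1 = prob[:k].sum(), prob[k:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        mu0 = (np.arange(k) * prob[:k]).sum() / w0
        mu1 = (np.arange(k, 256) * prob[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # equivalent between-class variance
        if var > best_var:
            best_k, best_var = k, var
    return best_k

# Bimodal toy image: dark background (20) and bright target (200).
img = np.array([[20] * 8 + [200] * 8], dtype=np.uint8)
t = otsu_threshold(img)
```

For a clean bimodal histogram the returned threshold lands between the two modes, separating background from target.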

Step 1.3, edge detection with the Sobel operator.

Image edges generally present a transition of grey level, and this transition can be described by the differential of the image, so edge-detection methods based on differential operators are among the most common. Most algorithms of this kind use a filter template: the pixel being processed is aligned with the centre of the template, each coefficient is weighted with the corresponding pixel value, and the result serves as the gradient value of that pixel. Moving the filter template over the whole digital image matrix yields a gradient map, which reflects how the pixel grey levels change in the digital image; the image edge is detected from the variation of the gradient in the gradient map. The present invention detects with the Sobel operator, whose principle templates are shown in Table 1.

Table 1. Sobel operator principle templates

After binarisation, let the pixel coordinates be (i, j). The template operation is carried out on the whole image to obtain the gradient values Gx(i,j) and Gy(i,j) along the x and y directions; the point is considered an edge point when the following holds:

| Gx |+| Gy | > nThreshold

where nThreshold is the threshold; this embodiment takes nThreshold = 138.
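The |Gx| + |Gy| > nThreshold test can be sketched as below — a naive, loop-based illustration of the template operation (the 3×3 Sobel masks are standard; the patent's Table 1 is not reproduced here):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray, n_threshold=138):
    """Mark pixel (i, j) as an edge when |Gx| + |Gy| > n_threshold."""
    g = gray.astype(np.int32)
    h, w = g.shape
    edges = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx = int((patch * SOBEL_X).sum())
            gy = int((patch * SOBEL_Y).sum())
            edges[i, j] = abs(gx) + abs(gy) > n_threshold
    return edges

# Vertical step edge: left half 0, right half 255.
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 255
e = sobel_edges(img)
```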

Step 1.4, image thinning.

The edges produced by Sobel detection are thick, which affects the next processing step, so the image is thinned after edge detection. The idea is that a thick edge means the edge pixels have a certain width; only the pixels in the middle of this width are retained while the surrounding pixels are "eroded" away, reducing the width of the edge pixels and refining the edge.

Each detected white pixel is examined: if there are fewer than k white pixels in its eight-neighbourhood (k is taken as 7 in this detection), the point is a stray bright point, an edge pixel within the border width mentioned above, so such pixels are set to 0, completing the thinning of the image.
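One plausible reading of this rule — keep a white pixel only when at least k of its eight neighbours are white — can be sketched as follows (an interpretation of the text, not a verified transcription of the patent's code):

```python
import numpy as np

def thin_edges(binary, k=7):
    """Keep a white pixel only if at least k of its 8 neighbours are white;
    isolated or boundary-width pixels (< k white neighbours) are set to 0."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if binary[i, j]:
                neighbours = binary[i - 1:i + 2, j - 1:j + 2].sum() - binary[i, j]
                if neighbours >= k:
                    out[i, j] = 1
    return out

# A 3-pixel-wide white band thins to its 1-pixel-wide centre line.
band = np.zeros((7, 7), dtype=np.uint8)
band[2:5, :] = 1
thinned = thin_edges(band)
```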

Step 1.5, determining the road area.

Determining the upper bound of the road area: starting from the first pixel of each column of the image, search downwards to find the column's first black pixel and record its row number yr. The upper bound of the processing region is the maximum of the obtained row numbers increased by m pixels. The value of m is determined by experiment; this embodiment uses m = 15.

Determining the left and right boundaries of the road area: the best straight-line boundary should be one of the two straight lines enclosing the whole road area, and a point on the line should lie on the inner boundary of the lane marking. From the image characteristics, the lane markings on the two sides of the lower edge of the image must lie on either side of the image centre. Therefore, starting from the centre of the image and moving leftwards, search upwards column by column from the bottom row; take the first white point found as the first point on the inner road boundary, then construct a line equation with the line slope k as parameter. The slope k of the left boundary ranges over [0.2, 6] in increments of 0.1; for each k the number of white points on the line is computed from the line equation, and the k that yields the most white points is taken as the slope of the fitted boundary line. Once the slope is determined, the line is raised by an increment b in its y values to obtain the fitted left-boundary line. Taking the range of the right-boundary slope k as [−6, −0.5], the fitted right boundary is determined in the same way. The region between the two lines is the road area. Experimental results show that this region is highly effective.
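The slope sweep described above can be sketched like this — a simplified illustration in which the seed point and slope range are supplied by the caller (parameter names and the white-point counting detail are assumptions):

```python
import numpy as np

def fit_boundary_slope(binary, x0, y0, slopes):
    """Pick the slope whose line through (x0, y0) covers the most white
    pixels; y grows downward in image coordinates, binary is indexed [y, x]."""
    h, w = binary.shape
    best_k, best_count = slopes[0], -1
    for k in slopes:
        count = 0
        for x in range(w):
            y = int(round(y0 + k * (x - x0)))
            if 0 <= y < h and binary[y, x]:
                count += 1
        if count > best_count:
            best_k, best_count = k, count
    return best_k

# Synthetic boundary: a line of slope 1 through (0, 0).
img = np.zeros((20, 20), dtype=np.uint8)
for x in range(20):
    img[x, x] = 1
# Sweep k over [0.2, 6.0] in increments of 0.1, as in the text.
k = fit_boundary_slope(img, 0, 0, [round(0.2 + 0.1 * i, 1) for i in range(59)])
```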

Step 2, detection of lane lines and departure warning; the specific flow is shown in Fig. 6.

Step 2.1, determining road edge points.

Search upwards from the last row of the whole image to the upper bound; the vertical span is the image height. White-pixel line segments are retrieved row by row and the length ln of the n-th segment is recorded. Segments whose end-column coordinate does not exceed 3/4 of the image width are classed as left lane markings; segments whose start-column coordinate is not less than 1/4 of the image width are classed as right lane markings. A pixel distance threshold d is set: d = 100 in the lower half of the whole image and d = 30 in the upper half. In each of the left and right marking sequences, the effective row coordinates ij and ij−1 of adjacent segment rows are examined; if their difference exceeds d, the segment belongs to noise and is removed from the sequence. Finally, the segment with the most prominent features is found in each of the left and right marking sequences and its effective coordinates marked: (il, jl) for the left sequence and (ir, jr) for the right.

Step 2.2, fitting the inner boundary of the lane markings.

The least-squares method is used to fit the lane line. Let (x1, y1), (x2, y2), …, (xn, yn) be a group of data given in rectangular coordinates with x1 < x2 < … < xn; this group of data can be regarded as a discrete point set of a function. Let the equation to be fitted be y = f(x) + ε.

f(x) is the ideal function in the noise-free case and ε is the noise; the least-squares method minimises the error sum of squares Q produced by the noise, i.e. Q = Σ (yi − f(xi))².

Fitting with the least-squares method has the advantage of speed: a single traversal is enough to calculate the fitted curve.
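A minimal sketch of the quadratic (conic) least-squares fit mentioned in step 2 follows, using NumPy's normal-equation solver; the degree-2 model and sample points are illustrative assumptions:

```python
import numpy as np

def fit_lane_curve(xs, ys, degree=2):
    """Least-squares fit of a degree-2 lane model, minimising
    Q = sum((y_i - f(x_i))^2)."""
    A = np.vander(xs, degree + 1)  # columns: x^2, x, 1
    coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coeffs

# Noise-free points on y = 2x^2 + 3x + 1 are recovered exactly.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2 * xs ** 2 + 3 * xs + 1
a, b, c = fit_lane_curve(xs, ys)
```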

Step 2.3, road-turn direction judgment and departure warning.

Step 2.3.1, road-turn direction judgment.

Let the intersection point of the left and right boundary straight lines be (x0, y0) and the intersection point of the left and right inner-boundary fitted curves be (x1, y1). If x0 < x1 − δ0, the road is turning right; if x0 > x1 + δ0, the road is turning left; if x1 − δ0 ≤ x0 ≤ x1 + δ0, the road is straight. δ0 is a very small value determined by experiment.
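The three-way comparison can be sketched as a small decision function (the δ0 default is purely illustrative; the patent determines it by experiment):

```python
def road_direction(x0, x1, delta0=5.0):
    """Compare the straight-boundary intersection x0 with the fitted-curve
    intersection x1: right turn, left turn, or straight within tolerance."""
    if x0 < x1 - delta0:
        return "right"
    if x0 > x1 + delta0:
        return "left"
    return "straight"
```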

Step 2.3.2, departure warning.

The purpose of lane-line fitting is to obtain the position of the vehicle, so a mathematical model can be established to describe the vehicle's position information. As shown in Fig. 4, the centre identification line of the left and right lane lines, i.e. the bisector of the angle between the two lane lines, is obtained by the lane-line fitting algorithm. The lane centre line can be represented by a linear equation in two variables; let:

Y=ax+b

From the triangle angle-bisector theorem and trigonometric function relations one obtains:

where k1 and k2 are the slopes of the left and right lane lines respectively.

From the above equation, the angle θ between the vehicle heading direction and the lane-centre direction is obtained, as shown in Fig. 5.

A lane-departure warning model based on the lateral displacement d of the vehicle in the current lane and the heading angle θ is used. The advantage of this model is its independence from the lane width and the travelling speed of the vehicle, giving high real-time performance and accuracy; the model is shown in Fig. 5 and the processing flow in Fig. 6. Left and right departure criteria are established from the vehicle's lateral position in the current lane and its heading angle. First a single frame of the vehicle video is extracted, this image is processed with the above steps, and the lane-line characteristic parameters d and θ are calculated. When d > d0 and θ > θ0, a left departure is judged and a warning signal is sent; when d > d0 and θ < −θ0, a right departure is judged and a warning signal is sent; if neither condition is satisfied, no warning signal is sent. After processing, the next frame is taken.
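The warning rule reduces to a few comparisons; a sketch follows, with the thresholds d0 and θ0 as placeholder values (the patent determines them experimentally):

```python
def departure_warning(d, theta, d0=0.5, theta0=0.05):
    """Lane-departure rule: lateral offset d and heading angle theta
    against thresholds d0, theta0. Returns the departure side or None."""
    if d > d0 and theta > theta0:
        return "left"
    if d > d0 and theta < -theta0:
        return "right"
    return None
```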

Step 3, ROI extraction, as shown in Fig. 8.

Step 3.1, adaptive binarisation.

The image is segmented with the double-OTSU method. First the overall threshold T1 of the image is calculated by the OTSU method; all pixels in the image are traversed and classified with T1, those above T1 being classed as background. The background region is filtered out, and the OTSU method is applied again to all pixels whose grey value in the original image is below T1, giving a new threshold T2. The image is binarised again with T2 as segmentation threshold: pixels above T2 are classed as background and their value set to 255; pixels below T2 are set as target pixels with value 0. On the grey image, m1 road-surface regions of n1 × n1 pixels are chosen, m1 and n1 being determined by the video resolution. The video resolution of this embodiment is 640×480; m1 is taken as 5 and n1 as 25. The mean grey value μi and standard deviation σi of the m1 road-surface regions are counted, and regions with μi above μ0 or σi above σ0 are removed; μ0 and σ0 are determined by experiment and are taken as 180 and 90 respectively in this embodiment, which excludes the cases where a road-surface window falls on a zebra crossing or guide line. Let the number of remaining road-surface regions be N. The mean grey value and mean variance of these N regions can then be calculated, from which the optimal threshold is obtained:

If N is too small, or T < 0, the local grey-value computation cannot be carried out, and the algorithm instead selects the threshold by the following formula; this choice of method lets the system take robustness and real-time performance into account at the same time.
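The two-pass thresholding of this step can be sketched as follows; the inner OTSU routine is a standard histogram search and the three-mode toy image is illustrative (the road-surface-window refinement is omitted for brevity):

```python
import numpy as np

def double_otsu(gray):
    """Two-pass OTSU: T1 separates off the bright background; a second OTSU
    over the sub-T1 pixels gives T2, which isolates the dark underbody
    shadow."""
    def otsu(vals):
        hist = np.bincount(vals, minlength=256).astype(np.float64)
        p = hist / hist.sum()
        best_k, best_v = 0, -1.0
        for k in range(1, 256):
            w0, w1 = p[:k].sum(), p[k:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(k) * p[:k]).sum() / w0
            mu1 = (np.arange(k, 256) * p[k:]).sum() / w1
            v = w0 * w1 * (mu0 - mu1) ** 2
            if v > best_v:
                best_k, best_v = k, v
        return best_k

    t1 = otsu(gray.ravel())
    dark = gray.ravel()[gray.ravel() < t1]
    t2 = otsu(dark) if dark.size else t1
    return t1, t2

# Three-mode image: shadow (10), road surface (100), sky (230).
img = np.array([[10] * 4 + [100] * 6 + [230] * 6], dtype=np.uint8)
t1, t2 = double_otsu(img)
```

Here the first pass separates sky from the rest, and the second pass finds a threshold between shadow and road surface.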

Step 3.2, erosion and dilation: in processing multiple frames it is sometimes found that some acquired shadow regions have breaches or adhere to the surrounding environment; such situations can be handled with morphological methods such as dilation and erosion. Erosion and dilation are applied to the binary image segmented above; the processed image has the impurities and noise of the original filtered out and possesses better shape features.

Step 3.3, shadow region extraction and merging.

First the start and end positions of the shadow lines are searched row by row, from top to bottom and from left to right, so as to determine their length and position. The start point xstart and end point xend of a shadow line are considered found when the respective conditions below are satisfied.

The length of the vehicle-bottom shadow line in the image should lie within a certain range. From the calibration of the camera position and parameters, this range varies with the row in which the shadow line lies. Exploiting this feature, the system selects a threshold for each row; if the detected shadow-line length length = xend − xstart differs too much from this threshold, the shadow line is filtered out as interference. The corresponding relation is as follows:

where w is the length of the shadow line in the image (pixels); wp is the real width of the vehicle (metres); H is the height of the camera optical axis above the ground, taken as 1.6 m; y is the row number of the target along the image y-axis (pixels); and height is the image height (pixels). When the following holds, the shadow line is considered a vehicle-bottom shadow:

0.75 × w < length = xend − xstart < 1.25 × w
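The width gate itself is a one-line comparison; a sketch follows, with the row-dependent expected width passed in by the caller since the camera-geometry formula depends on the calibration:

```python
def is_vehicle_shadow(x_start, x_end, expected_w):
    """Accept a shadow segment only if its measured length
    length = x_end - x_start lies within [0.75, 1.25] times the
    row-dependent expected width expected_w (pixels)."""
    length = x_end - x_start
    return 0.75 * expected_w < length < 1.25 * expected_w
```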

Step 3.4, ROI extraction.

Define the rectangularity SQ as the ratio of the region's inner area to the area of its bounding rectangle; the larger SQ is, the closer the region is to a rectangle. Let the quadrangle measure QM measure the aspect ratio of the shadow region; when QM = 1 the quadrangle can be considered approximately square. The shadow regions detected above are screened with SQ and QM as follows:

When extracting the ROI it is necessary to take into account the uncertainty of the height, width, position and so on of the shadow region, so a ROI with a larger range is selected first. During the preliminary extraction the vehicle must be contained entirely in the region; moreover, changes in illumination intensity and angle can make the underbody shadow present different proportions relative to the car body. The specific method is:

where (Rv_x, Rv_y) is the lower-left corner point of the ROI, (Rs_x, Rs_y) is the lower-left corner point of the shadow region, Rv_width and Rv_height are respectively the width and height of the ROI, Rs_width is the width of the shadow region, and the parameters are λ = 1.2 and δ = 50.

Step 4, vehicle-contour extraction, as shown in Fig. 9.

Step 4.1, information-entropy screening.

The entropy of the image increases significantly in a candidate region containing a vehicle, whereas the grey values of a road-surface region are relatively uniform, so the pixel information it contains is low; regions with small entropy can therefore be filtered out by this property. In the extracted ROI of height h, let the entropy of each row be H(a); its mean is then:

In the experiment 80 different images were processed, and the threshold finally selected was T = 3.1. When the mean row entropy exceeds T, the ROI is considered to contain vehicle information; otherwise the ROI is filtered out.
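The row-entropy average can be sketched as below, using the standard Shannon entropy of each row's grey-level distribution (log base 2 is an assumption; the patent does not state the base):

```python
import numpy as np

def mean_row_entropy(roi):
    """Shannon entropy H(a) of each row's grey-level distribution,
    averaged over the rows of the ROI."""
    entropies = []
    for row in roi:
        hist = np.bincount(row, minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]  # 0 * log(0) contributes nothing
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

# A flat road patch has zero entropy; a varied patch does not.
flat = np.full((4, 16), 128, dtype=np.uint8)
varied = np.tile(np.arange(16, dtype=np.uint8) * 16, (4, 1))
```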

Step 4.2, symmetry filtering.

By a mathematical principle, if R(x) is a once-continuous function on the ROI, it can be decomposed into an odd function Ro(x) and an even function Re(x); the symmetry of the function can therefore be determined from the proportions carried by the odd and even parts separated from it. For the extracted ROI R, let the region size be w × h and the symmetry axis be xs = w/2; then for row y of the image, the expressions of the odd and even functions are respectively:

The algorithm needs to correct the even function so that its corrected mean, like that of the odd function, approaches zero, allowing the energy functions of the two to be compared. After correction:

The energy functions of the odd and even functions are thus respectively:

The symmetry of the pixels of row y can then be measured as:

Then have:

The threshold finally selected is 0.15; when S > 0.15 the region is judged to be a vehicle ROI, otherwise the region is deleted.
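One plausible reading of this measure can be sketched as follows: each row is split about the vertical centre axis into even and odd parts, the even part is mean-corrected as the text describes, and symmetry is scored as corrected-even energy over total energy (the exact score normalisation is an assumption, since the patent's formulas are not reproduced here):

```python
import numpy as np

def symmetry_score(roi):
    """Even/odd energy decomposition about the vertical centre axis.
    Returns 1.0 for a perfectly mirror-symmetric patch, lower otherwise."""
    r = roi.astype(np.float64)
    mirrored = r[:, ::-1]
    even = (r + mirrored) / 2.0
    odd = (r - mirrored) / 2.0
    even_c = even - even.mean(axis=1, keepdims=True)  # zero-mean correction
    e_even = (even_c ** 2).sum()
    e_odd = (odd ** 2).sum()
    total = e_even + e_odd
    return 1.0 if total == 0 else e_even / total

# A mirror-symmetric row scores 1; a one-sided gradient scores 0.
sym = np.array([[1, 5, 9, 5, 1]], dtype=np.uint8)
asym = np.array([[0, 2, 4, 6, 8]], dtype=np.uint8)
```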

Step 4.3, edge detection with the improved Robinson detection operator.

After the above steps are completed, an accurate vehicle ROI has been obtained. Within this region the Robinson direction-template operator is used for edge detection to obtain the grey-level gradient of each pixel. Each point in the region is detected with the eight templates of Table 2, and the maximum output value and its direction obtained are taken as the grey value and direction of the point after processing. In the table, M0(↑), M1(↗), M2(→), M3(↘), M4(↓), M5(↙), M6(←) and M7(↖) denote the template operators for up, upper-right, right, lower-right, down, lower-left, left and upper-left respectively.

Table 1 Robinson detection operators

When the Robinson operator is applied to the pixels in the ROI, the operation between any pixel and the M0(↑) template is as follows:

e0=p0+2×p1+p2-p5-2×p6-p7

where e0 to e7 are the results of the edge-detection operation between the target pixel and the templates M0(↑) to M7(↖), and p0 to p7 are the gray values of the pixels located at the upper-left, top, upper-right, left, right, lower-left, bottom and lower-right of the target pixel, respectively.

For any pixel, eight such operations must be completed to obtain an accurate result. The detection of a single point therefore requires 16 multiplications and 40 additions/subtractions, which greatly slows down the system. An improved method is therefore proposed, introducing the following variables:

x0 = p0 − p4, x1 = p1 − p5, x2 = p2 − p6, x3 = p3 − p7
y0 = x0 + x1, y1 = x1 + x2, y2 = x2 + x3, y3 = x3 − x0

Optimizing the computation with these variables gives:

e0 = y0 + y1, e1 = y1 + y2, e2 = y2 + y3, e3 = y3 − y0
e4 = −e0, e5 = −e1, e6 = −e2, e7 = −e3

After the improvement, only 12 additions/subtractions are needed to complete the detection of one pixel. Finally, the transverse and longitudinal detection information is extracted separately.
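A sketch of the 12-operation factorisation follows. Note that the identities x_i = p_i − p_{i+4} require the eight neighbours to be indexed circularly (clockwise from the upper-left), so that p_i and p_{i+4} are opposite neighbours; that ordering is an assumption made here to keep the factorisation consistent:

```python
def robinson_all(p):
    """All eight Robinson responses e0..e7 for one pixel.

    p: gray values of the 8 neighbours in CLOCKWISE order starting at the
    upper-left: [UL, up, UR, right, LR, down, LL, left], so that p[i] and
    p[i+4] are opposite neighbours (this ordering is assumed here).
    """
    x0, x1, x2, x3 = p[0] - p[4], p[1] - p[5], p[2] - p[6], p[3] - p[7]  # 4 ops
    y0, y1, y2, y3 = x0 + x1, x1 + x2, x2 + x3, x3 - x0                  # 4 ops
    e0, e1, e2, e3 = y0 + y1, y1 + y2, y2 + y3, y3 - y0                  # 4 ops
    return [e0, e1, e2, e3, -e0, -e1, -e2, -e3]

def robinson_gradient(p):
    """Gradient value and direction index: the maximum of the eight responses."""
    e = robinson_all(p)
    k = max(range(8), key=lambda i: e[i])
    return e[k], k
```

For a bright-above/dark-below neighbourhood the maximum lands on e0 (the ↑ template), and e0 equals the direct template sum p0 + 2·p1 + p2 minus the opposite triple, as expected.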

Step 4.4, extracting the vehicle boundary lines using the Hough transform.

The Hough transform is a common and mature method for extracting straight lines; its drawback is its large computational load. Here the Hough transform is applied only within the ROI rather than to the whole image, which greatly reduces the computation. The present invention also restricts the line angle to further speed up the computation: when extracting transverse edges the angle is limited to −5° < θ < 5°, and when extracting longitudinal edges to 60° < θ < 120°. The left (right) longitudinal boundary line takes the leftmost (rightmost) point as reference, with the slope value tending toward infinity; the rectangular outer boundary of the vehicle is thus obtained.
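An angle-restricted Hough vote might look like the sketch below. It uses the standard normal parameterisation ρ = x·cosθ + y·sinθ, under which a near-horizontal line has θ near 90°, so the line-direction bands quoted in the text map to bands shifted by 90°; the accumulator layout, bin sizes and names are all illustrative assumptions:

```python
import numpy as np
from collections import Counter

def hough_best_line(points, theta_min_deg, theta_max_deg, n_theta=11, rho_res=1.0):
    """Hough vote restricted to a band of (normal) angles; returns the best
    (rho, theta_degrees, vote_count) found in the band."""
    thetas = np.deg2rad(np.linspace(theta_min_deg, theta_max_deg, n_theta))
    votes = Counter()
    for x, y in points:
        for i, t in enumerate(thetas):
            rho_bin = int(round((x * np.cos(t) + y * np.sin(t)) / rho_res))
            votes[(rho_bin, i)] += 1   # each point votes once per theta bin
    (rho_bin, i), count = votes.most_common(1)[0]
    return rho_bin * rho_res, float(np.degrees(thetas[i])), count
```

Restricting `thetas` to a narrow band is exactly what keeps the accumulator small compared with a full 0°–180° sweep.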

Claims (9)

1. A vision-based road information detection and front vehicle recognition method, wherein the system implementing the method comprises: a camera, a measurement-and-control computer with an internal video capture card, a local area network built with a router, a program-decision host computer, and an intelligent vehicle experimental platform; the camera is mounted at the front center of the ceiling of the intelligent vehicle experimental platform and is connected via USB to the video capture card in the measurement-and-control computer, capturing road images in real time; the control parameters obtained by the measurement-and-control computer from processing the images are passed to the program-decision host computer over the LAN built with the router; after parsing, the program-decision host computer controls the experimental platform; characterized in that the method performs the following steps in the measurement-and-control computer:
Step 1, image preprocessing: the color image is converted to grayscale, binarization segmentation is performed using the single maximum between-class variance (OTSU) method, Sobel operator edge detection is applied, and image thinning determines the road area;
Step 2, lane detection and deviation early warning;
Lane boundary points are determined by row-by-row retrieval, and the boundary points are fitted by least squares to obtain a quadratic curve describing the lane; the direction of the road turn ahead of the vehicle is judged, and a warning is issued if the vehicle deviates from the lane line;
Step 3, ROI region is split;
The vehicle bottom shadow is segmented by combining the road-area gray level with double OTSU; erosion and dilation are applied to the segmented image to fill gap regions, and the vehicle ROI is obtained from the under-vehicle shadow;
Step 4, vehicle's contour is recognized;
Regions are screened by multiple features with information entropy and symmetry as the main references; the parts remaining after screening are processed with the improved Robinson operator to obtain the gray-level gradient, and the vehicle outer contour is recognized with the Hough transform method;
The method of binarization segmentation using the single maximum between-class variance OTSU method in step 1 is as follows:
Calculate the characteristic parameter of gray level image:
μ = ω0μ0 + ω1μ1
σ²(K) = ω0(μ0 − μ)² + ω1(μ1 − μ)²
wherein ω0, ω1 are the probabilities of occurrence of the background and target-region pixel gray values respectively, μ0, μ1 are the mean gray values of the background and target-region pixels respectively, μ is the statistical mean of the whole image gray level, and σ²(K) is the between-class variance of the background and target regions; for K = 1, 2, 3, …, the K that maximizes the variance is found, giving the optimal threshold K;
The method of Sobel operator edge detections is as follows described in step 1:
Let the pixel coordinates of the binarized image be (i, j); template operations are performed on the whole image to obtain the gradient values Gx(i,j) and Gy(i,j) of each pixel along the x and y directions; a point is considered an edge point when the following formula is satisfied:
|Gx|+|Gy|>nThreshold
wherein nThreshold is a threshold value;
The method of determining the road area in step 1 is as follows:
Determining the upper bound of the road area: starting from the first pixel of each column of the image, search downward to find the first black pixel in that column and record its row number y_r; the maximum of the row numbers obtained by adding m pixels to y_r is the upper bound of the processing region; the value of m is determined by experiment;
Determining the left and right boundaries of the road area: from the center of the image toward the left, search row by row upward starting from the bottom row; take the first white point found as the first point on the inner road boundary, then build a line equation with the line slope k as parameter; count the number of white points on the line according to the line equation, and take the k value giving the maximum number of white points as the slope of the fitted line for this boundary; after the slope is determined, raise the line by an increment b in the y value to obtain the fitted left-boundary line; the right-boundary fitted line is determined in the same way; the region between the two lines is the road area.
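The single-threshold OTSU step of claim 1 amounts to an exhaustive search for the K maximising σ²(K) = ω0(μ0 − μ)² + ω1(μ1 − μ)². A straightforward, unoptimised sketch (not the claimed implementation):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level K maximising the between-class variance
    sigma^2(K) = w0*(mu0 - mu)^2 + w1*(mu1 - mu)^2."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    levels = np.arange(256)
    mu = float(levels @ p)                 # global mean gray value
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        w0 = float(p[:k].sum())            # background probability
        w1 = 1.0 - w0                      # target probability
        if w0 == 0.0 or w1 == 0.0:
            continue                       # one class empty: variance undefined
        mu0 = float(levels[:k] @ p[:k]) / w0
        mu1 = float(levels[k:] @ p[k:]) / w1
        var = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
        if var > best_var:
            best_k, best_var = k, var
    return best_k
```

On a cleanly bimodal image the returned K falls between the two modes, which is what the subsequent binarization relies on.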
2. The vision-based road information detection and front vehicle recognition method according to claim 1, characterized in that the method of determining the lane boundary points in step 2 is as follows:
Search upward from the last row of the whole image up to the upper bound, the longitudinal span being the image height; retrieve line segments of white pixels row by row and record the length l_n of the n-th segment; segments whose ending column coordinate is no more than 3/4 of the image columns are classified as the left road line, and segments whose starting column coordinate is no less than 1/4 of the image columns are classified as the right road line; in the left and right road-line sequences respectively, search the effective row coordinates i_j and i_{j−1} between adjacent rows of segments; if the difference between them is greater than the pixel-distance threshold d, the segment belongs to noise and is removed from the sequence; finally, the segment with the most obvious features is found in each of the left and right road-line sequences, and the effective coordinates of that segment are recorded, (i_l, j_l) for the left sequence and (i_r, j_r) for the right.
3. The vision-based road information detection and front vehicle recognition method according to claim 1, characterized in that the method of judging the direction of the road turn ahead of the vehicle in step 2 is as follows:
Let the intersection of the left and right boundary lines be (x0, y0) and the intersection of the fitted curves of the left and right inner road boundaries be (x1, y1); if x0 < x1 − δ0, the road is turning right; if x0 > x1 + δ0, the road is turning left; if x1 − δ0 ≤ x0 ≤ x1 + δ0, the road is straight; δ0 is a very small value determined by experiment.
4. The vision-based road information detection and front vehicle recognition method according to claim 1, 2 or 3, characterized in that the method of warning the vehicle of lane-line departure in step 2 is as follows:
The center identification line of the left and right lane lines, i.e. the bisector of the angle between the two lane lines, is obtained from the lane-line fitting algorithm; the lane center line is expressed by a linear equation in two variables:
Y=ax+b
<mrow> <mi>a</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mi>k</mi> <mn>1</mn> </msub> <msub> <mi>k</mi> <mn>2</mn> </msub> <mo>+</mo> <msqrt> <mrow> <mn>1</mn> <mo>+</mo> <msubsup> <mi>k</mi> <mn>1</mn> <mn>2</mn> </msubsup> <msubsup> <mi>k</mi> <mn>2</mn> <mn>2</mn> </msubsup> <mo>+</mo> <msubsup> <mi>k</mi> <mn>1</mn> <mn>2</mn> </msubsup> <mo>+</mo> <msubsup> <mi>k</mi> <mn>2</mn> <mn>2</mn> </msubsup> </mrow> </msqrt> <mo>-</mo> <mn>1</mn> </mrow> <mrow> <msub> <mi>k</mi> <mn>1</mn> </msub> <mo>+</mo> <msub> <mi>k</mi> <mn>2</mn> </msub> </mrow> </mfrac> </mrow>
<mrow> <mi>b</mi> <mo>=</mo> <mfrac> <mrow> <mo>(</mo> <msub> <mi>b</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>b</mi> <mn>2</mn> </msub> <mo>)</mo> <mo>(</mo> <msqrt> <mrow> <mn>1</mn> <mo>+</mo> <msubsup> <mi>k</mi> <mn>1</mn> <mn>2</mn> </msubsup> <msubsup> <mi>k</mi> <mn>2</mn> <mn>2</mn> </msubsup> <mo>+</mo> <msubsup> <mi>k</mi> <mn>1</mn> <mn>2</mn> </msubsup> <mo>+</mo> <msubsup> <mi>k</mi> <mn>2</mn> <mn>2</mn> </msubsup> </mrow> </msqrt> <mo>-</mo> <mn>1</mn> <mo>)</mo> <mo>+</mo> <msubsup> <mi>a</mi> <mn>1</mn> <mn>2</mn> </msubsup> <msub> <mi>b</mi> <mn>2</mn> </msub> <mo>-</mo> <msubsup> <mi>a</mi> <mn>2</mn> <mn>2</mn> </msubsup> <msub> <mi>b</mi> <mn>1</mn> </msub> </mrow> <mrow> <msubsup> <mi>k</mi> <mn>1</mn> <mn>2</mn> </msubsup> <mo>-</mo> <msubsup> <mi>k</mi> <mn>2</mn> <mn>2</mn> </msubsup> </mrow> </mfrac> </mrow>
wherein k1, k2 are the slopes of the left and right lane lines respectively;
The angle θ between the vehicle heading direction and the lane center line direction is obtained;
Let the lateral displacement of the vehicle in the current lane be d; when d > d0 and θ > θ0, a left departure is judged and a warning signal is sent; when d > d0 and θ < −θ0, a right departure is judged and a warning signal is sent; no warning signal is sent if neither condition is satisfied.
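The departure decision of claim 4, together with the claim's angle-bisector slope a for the lane center line, can be sketched as below; d0 and θ0 are illustrative placeholders for the experimentally determined thresholds:

```python
import math

def bisector_slope(k1, k2):
    """Slope a of the bisector of two lines with slopes k1, k2 (k1 + k2 != 0),
    following the formula given in the claim."""
    return (k1 * k2 + math.sqrt(1 + k1*k1*k2*k2 + k1*k1 + k2*k2) - 1) / (k1 + k2)

def departure_warning(d, theta, d0=0.3, theta0=5.0):
    """Lane-departure decision from lateral offset d and heading angle theta
    (degrees); d0 and theta0 are assumed values, not the patent's."""
    if d > d0 and theta > theta0:
        return "left-departure warning"
    if d > d0 and theta < -theta0:
        return "right-departure warning"
    return "no warning"
```

As a sanity check, the bisector of a 0° line and a 45° line (k1 = 0, k2 = 1) has slope tan 22.5° = √2 − 1, which the formula reproduces.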
5. The vision-based road information detection and front vehicle recognition method according to claim 1, characterized in that the method of segmenting the vehicle bottom shadow in step 3 is as follows:
First, the overall threshold T1 of the image is calculated by the OTSU method; all pixels in the image are traversed and classified with threshold T1, those greater than T1 being classified as background; the background area is filtered out, and the OTSU method is reapplied to all pixels whose gray value in the original image is less than T1, giving a new threshold T2; the image is binarized again with T2 as the segmentation threshold, pixels greater than T2 being classified as background with pixel value set to 255, and pixels less than T2 being set as object pixels with pixel value 0; m1 road-surface regions of n1 pixels in length and width are chosen on the gray image, m1 and n1 being determined by the video image resolution; the mean gray value μi and standard deviation σi of the m1 road-surface regions are computed, and regions whose μi exceeds μ0 or whose σi exceeds σ0 are removed, μ0 and σ0 being determined by experiment, which excludes cases where a road-surface window falls on a zebra crossing or guide line; let the number of remaining road-surface regions be N; the mean gray value μ̄ and mean variance σ̄ of these N regions are calculated, giving the optimal threshold:
<mrow> <mi>T</mi> <mo>=</mo> <mover> <mi>&amp;mu;</mi> <mo>&amp;OverBar;</mo> </mover> <mo>-</mo> <mn>4</mn> <mover> <mi>&amp;sigma;</mi> <mo>&amp;OverBar;</mo> </mover> </mrow>
If N is too small or T < 0, the local gray-value computation cannot be carried out, and the algorithm is selected according to the following formula:
6. The vision-based road information detection and front vehicle recognition method according to claim 1 or 5, characterized in that the method of obtaining the vehicle ROI based on the under-vehicle shadow in step 3 is as follows:
The start and end positions of the shadow lines are searched row by row, from top to bottom and from left to right, to determine their length and position; the start point xstart and end point xend of a shadow line are considered found when the following formulas are respectively satisfied:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>-</mo> <mn>1</mn> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>-</mo> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>=</mo> <mn>255</mn> </mtd> </mtr> <mtr> <mtd> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>-</mo> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mtd> </mtr> </mtable> </mfenced>
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>-</mo> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>-</mo> <mn>1</mn> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>-</mo> <mi>f</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> <mo>=</mo> <mn>255</mn> </mtd> </mtr> </mtable> </mfenced>
A threshold is chosen for each row; when the following formula is satisfied, the shadow line is the vehicle bottom shadow:
0.75*w &lt; Length = xend − xstart &lt; 1.25*w
<mrow> <mi>w</mi> <mo>=</mo> <mfrac> <msub> <mi>w</mi> <mi>p</mi> </msub> <mi>H</mi> </mfrac> <mrow> <mo>(</mo> <mi>y</mi> <mo>-</mo> <mfrac> <mrow> <mi>h</mi> <mi>e</mi> <mi>i</mi> <mi>g</mi> <mi>h</mi> <mi>t</mi> </mrow> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> </mrow>
wherein w is the length of the shadow line in the image, in pixels; wp is the real width of the vehicle; H is the height of the camera optical axis above the road surface; y is the row number of the target along the y direction of the image, in pixels; height is the height of the image, in pixels;
The rectangularity SQ is defined as the ratio of the region's area to the area of its bounding rectangle; the larger SQ, the more rectangular the region; the quadrangle measure QM measures the aspect ratio of the shadow region, and when QM = 1 the quadrangle is considered approximately equilateral; the detected shadow regions are screened using SQ and QM as follows:
When extracting the ROI, considering the uncertainty in the height, width and position of the shadow region, a relatively large ROI is chosen first so that the vehicle is fully contained in the region at preliminary extraction; and considering that changes in illumination intensity and angle cause the under-vehicle shadow to present different proportional relations with the vehicle body, the specific method is:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mi>v</mi> </msub> <mo>_</mo> <mi>x</mi> <mo>=</mo> <msub> <mi>R</mi> <mi>s</mi> </msub> <mo>_</mo> <mi>x</mi> <mo>-</mo> <mfrac> <mi>&amp;delta;</mi> <mn>2</mn> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mi>v</mi> </msub> <mo>_</mo> <mi>y</mi> <mo>=</mo> <msub> <mi>R</mi> <mi>s</mi> </msub> <mo>_</mo> <mi>y</mi> <mo>-</mo> <msub> <mi>R</mi> <mi>v</mi> </msub> <mo>_</mo> <mi>h</mi> <mi>e</mi> <mi>i</mi> <mi>g</mi> <mi>h</mi> <mi>t</mi> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mi>v</mi> </msub> <mo>_</mo> <mi>w</mi> <mi>i</mi> <mi>d</mi> <mi>t</mi> <mi>h</mi> <mo>=</mo> <msub> <mi>R</mi> <mi>s</mi> </msub> <mo>_</mo> <mi>w</mi> <mi>i</mi> <mi>d</mi> <mi>t</mi> <mi>h</mi> <mo>+</mo> <mi>&amp;delta;</mi> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mi>v</mi> </msub> <mo>_</mo> <mi>h</mi> <mi>e</mi> <mi>i</mi> <mi>g</mi> <mi>h</mi> <mi>t</mi> <mo>=</mo> <mi>&amp;lambda;</mi> <mo>*</mo> <msub> <mi>R</mi> <mi>s</mi> </msub> <mo>_</mo> <mi>w</mi> <mi>i</mi> <mi>d</mi> <mi>t</mi> <mi>h</mi> </mrow> </mtd> </mtr> </mtable> </mfenced>
wherein (Rv_x, Rv_y) is the lower-left corner coordinate of the ROI, (Rs_x, Rs_y) is the lower-left corner coordinate of the shadow region, Rv_width and Rv_height are respectively the width and height of the ROI, Rs_width is the width of the shadow region, and the parameters are λ = 1.2 and δ = 50.
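The ROI expansion of claim 6 maps directly to code; this sketch applies the four relations verbatim with the given λ = 1.2 and δ = 50 (the function name is an assumption):

```python
def roi_from_shadow(rs_x, rs_y, rs_width, lam=1.2, delta=50):
    """Expand a detected under-vehicle shadow rectangle into a vehicle ROI:
    widen it by delta, set the ROI height to lam * shadow width, and move
    the lower-left corner up by that height."""
    rv_height = lam * rs_width          # R_v_height = lambda * R_s_width
    rv_x = rs_x - delta / 2             # R_v_x = R_s_x - delta/2
    rv_y = rs_y - rv_height            # R_v_y = R_s_y - R_v_height
    rv_width = rs_width + delta         # R_v_width = R_s_width + delta
    return rv_x, rv_y, rv_width, rv_height
```

For a 100-pixel-wide shadow with lower-left corner (100, 400), the ROI becomes 150 pixels wide and 120 pixels tall, anchored at (75, 280).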
7. The vision-based road information detection and front vehicle recognition method according to claim 1, characterized in that the method in step 4 of screening regions by multiple features, with information entropy and symmetry as the main references, is as follows:
(1) Information-entropy screening
The entropy of the image increases significantly in candidate regions containing a vehicle; the road-surface region has relatively uniform gray values and thus carries little pixel information; regions with low entropy are filtered out by this property; let the extracted ROI have height h and the entropy of each row be H(a); its mean is then:
<mrow> <mover> <mi>H</mi> <mo>&amp;OverBar;</mo> </mover> <mrow> <mo>(</mo> <mi>a</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mi>h</mi> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>h</mi> </munderover> <mi>H</mi> <mrow> <mo>(</mo> <msub> <mi>a</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> </mrow>
When H̄(a) > T, the ROI is considered to contain vehicle information; otherwise the ROI is filtered out; T is a threshold determined by experiment;
(2) Symmetry filtering
If R(x) is a continuous function on the ROI, it is split into an odd function Ro(x) and an even function Re(x); for the extracted ROI R, let the region size be w × h and the symmetry axis be x_s; then for row y of the image, the expressions of the odd and even parts are respectively:
<mrow> <msub> <mi>R</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <mo>&amp;lsqb;</mo> <mi>R</mi> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>s</mi> </msub> <mo>+</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>R</mi> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>s</mi> </msub> <mo>-</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> </mrow>
<mrow> <msub> <mi>R</mi> <mi>e</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <mo>&amp;lsqb;</mo> <mi>R</mi> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>s</mi> </msub> <mo>+</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>R</mi> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>s</mi> </msub> <mo>-</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> </mrow>
The even function is corrected so that its mean, like that of the odd function, approaches zero, so that the relation between the two can be contrasted through their energy functions; after correction:
<mrow> <msubsup> <mi>R</mi> <mi>e</mi> <mo>&amp;prime;</mo> </msubsup> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>R</mi> <mi>e</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>-</mo> <mfrac> <mn>1</mn> <mi>w</mi> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>v</mi> <mo>=</mo> <mn>0</mn> </mrow> <mi>w</mi> </munderover> <msub> <mi>R</mi> <mi>e</mi> </msub> <mrow> <mo>(</mo> <mi>v</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> </mrow>
The energy functions of the odd and even functions are thus obtained as:
<mrow> <mi>E</mi> <mo>&amp;lsqb;</mo> <msub> <mi>R</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>u</mi> <mo>=</mo> <mo>-</mo> <mi>w</mi> <mo>/</mo> <mn>2</mn> </mrow> <mrow> <mi>w</mi> <mo>/</mo> <mn>2</mn> </mrow> </munderover> <msubsup> <mi>R</mi> <mi>o</mi> <mn>2</mn> </msubsup> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>y</mi> </mrow> <mo>)</mo> </mrow>
<mrow> <mi>E</mi> <mo>&amp;lsqb;</mo> <msubsup> <mi>R</mi> <mi>e</mi> <mo>&amp;prime;</mo> </msubsup> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>u</mi> <mo>=</mo> <mo>-</mo> <mi>w</mi> <mo>/</mo> <mn>2</mn> </mrow> <mrow> <mi>w</mi> <mo>/</mo> <mn>2</mn> </mrow> </munderover> <msup> <mrow> <mo>&amp;lsqb;</mo> <msub> <mi>R</mi> <mi>e</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>-</mo> <mfrac> <mn>1</mn> <mi>w</mi> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>v</mi> <mo>=</mo> <mn>0</mn> </mrow> <mi>w</mi> </munderover> <msub> <mi>R</mi> <mi>e</mi> </msub> <mrow> <mo>(</mo> <mi>v</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> </mrow> <mn>2</mn> </msup> </mrow>
The symmetry of the pixels in row y is then measured as:
<mrow> <mi>S</mi> <mo>&amp;lsqb;</mo> <mi>R</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> <mo>=</mo> <mfrac> <mrow> <mi>E</mi> <mrow> <mo>&amp;lsqb;</mo> <mrow> <msubsup> <mi>R</mi> <mi>e</mi> <mo>&amp;prime;</mo> </msubsup> <mrow> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>y</mi> </mrow> <mo>)</mo> </mrow> </mrow> <mo>&amp;rsqb;</mo> </mrow> <mo>-</mo> <mi>E</mi> <mrow> <mo>&amp;lsqb;</mo> <mrow> <msub> <mi>R</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>y</mi> </mrow> <mo>)</mo> </mrow> </mrow> <mo>&amp;rsqb;</mo> </mrow> </mrow> <mrow> <mi>E</mi> <mrow> <mo>&amp;lsqb;</mo> <mrow> <msubsup> <mi>R</mi> <mi>e</mi> <mo>&amp;prime;</mo> </msubsup> <mrow> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>y</mi> </mrow> <mo>)</mo> </mrow> </mrow> <mo>&amp;rsqb;</mo> </mrow> <mo>+</mo> <mi>E</mi> <mrow> <mo>&amp;lsqb;</mo> <mrow> <msub> <mi>R</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>y</mi> </mrow> <mo>)</mo> </mrow> </mrow> <mo>&amp;rsqb;</mo> </mrow> </mrow> </mfrac> </mrow>
Then have:
When S &gt; S0, the region is judged to be a vehicle ROI; otherwise the region is deleted; S0 is a threshold determined by experiment.
8. The vision-based road information detection and front vehicle recognition method according to claim 1, characterized in that the method of improving the Robinson operator in step 4 is as follows:
When the Robinson operator is applied to the pixels in the ROI, the operation between any pixel and the M0(↑) template is as follows:
e0=p0+2×p1+p2-p5-2×p6-p7
wherein e0~e7 are the results of the edge-detection operation between the target pixel and templates M0~M7 respectively; p0~p7 are the gray values of the pixels located at the upper-left, top, upper-right, left, right, lower-left, bottom and lower-right of the target pixel respectively;
The improved method introduces the following variables:
<mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>=</mo> <msub> <mi>p</mi> <mn>0</mn> </msub> <mo>-</mo> <msub> <mi>p</mi> <mn>4</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>p</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>p</mi> <mn>5</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>p</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>p</mi> <mn>6</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mo>=</mo> <msub> <mi>p</mi> <mn>3</mn> </msub> <mo>-</mo> <msub> <mi>p</mi> <mn>7</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>,</mo> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>y</mi> <mn>0</mn> </msub> <mo>=</mo> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>+</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>y</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>+</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>+</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>y</mi> <mn>3</mn> </msub> <mo>=</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow>
Arithmetic speed optimization processing is carried out, can obtain result is:
<mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>0</mn> </msub> <mo>=</mo> <msub> <mi>y</mi> <mn>0</mn> </msub> <mo>+</mo> <msub> <mi>y</mi> <mn>1</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>y</mi> <mn>1</mn> </msub> <mo>+</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>+</mo> <msub> <mi>y</mi> <mn>3</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>3</mn> </msub> <mo>=</mo> <msub> <mi>y</mi> <mn>3</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mn>0</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>,</mo> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>4</mn> </msub> <mo>=</mo> <mo>-</mo> <msub> <mi>e</mi> <mn>0</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>5</mn> </msub> <mo>=</mo> <mo>-</mo> <msub> <mi>e</mi> <mn>1</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>6</mn> </msub> <mo>=</mo> <mo>-</mo> <msub> <mi>e</mi> <mn>2</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>e</mi> <mn>7</mn> </msub> <mo>=</mo> <mo>-</mo> <msub> <mi>e</mi> <mn>3</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow>
Finally, the transverse and longitudinal detection information is extracted separately.
9. The vision-based road information detection and front vehicle recognition method according to claim 1 or 8, characterized in that the method of recognizing the vehicle outer contour with the Hough transform in step 4 is carried out within the ROI, to reduce computation; the line angle is restricted to further speed up the computation: when extracting transverse edges the angle is limited to −5°&lt;θ&lt;5°, and when extracting longitudinal edges to 60°&lt;θ&lt;120°; the left and right longitudinal boundary lines take the leftmost and rightmost points as reference respectively, with the slope tending toward infinity, giving the rectangular outer boundary of the vehicle.
CN201410647880.1A 2014-11-14 2014-11-14 The road information detection and front vehicles recognition methods of a kind of view-based access control model CN104392212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410647880.1A CN104392212B (en) 2014-11-14 2014-11-14 The road information detection and front vehicles recognition methods of a kind of view-based access control model


Publications (2)

Publication Number Publication Date
CN104392212A CN104392212A (en) 2015-03-04
CN104392212B true CN104392212B (en) 2017-09-01

Family

ID=52610113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410647880.1A CN104392212B (en) 2014-11-14 2014-11-14 The road information detection and front vehicles recognition methods of a kind of view-based access control model

Country Status (1)

Country Link
CN (1) CN104392212B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104988818B (en) * 2015-05-26 2017-04-12 浙江工业大学 Intersection multi-lane calibration method based on perspective transformation
CN104915647B (en) * 2015-06-02 2018-08-03 长安大学 A kind of detection method of highway front vehicles
CN105160309B (en) * 2015-08-24 2018-12-07 北京工业大学 Three lanes detection method based on morphological image segmentation and region growing
CN105389561B (en) * 2015-11-13 2018-06-26 深圳华中科技大学研究院 A kind of bus zone detection method based on video
CN105426868B (en) * 2015-12-10 2018-09-28 山东大学 A kind of lane detection method based on adaptive area-of-interest
CN105631414B (en) * 2015-12-23 2019-04-05 上海理工大学 A kind of vehicle-mounted multi-obstacle avoidance sorter and method based on Bayes classifier
CN105488492B (en) * 2015-12-25 2019-09-13 北京大学深圳研究生院 A kind of color image preprocess method, roads recognition method and relevant apparatus
CN105957182B (en) * 2016-04-21 2018-08-03 深圳市元征科技股份有限公司 A kind of method and device of rectilinear direction that correcting instruction vehicle traveling
CN105922994B (en) * 2016-04-21 2018-08-03 深圳市元征科技股份有限公司 A kind of method and device generating instruction straight line travel direction
CN106529530A (en) * 2016-10-28 2017-03-22 上海大学 Monocular vision-based ahead vehicle detection method
CN107122734A (en) * 2017-04-25 2017-09-01 武汉理工大学 A kind of moving vehicle detection algorithm based on machine vision and machine learning
CN107122756A (en) * 2017-05-11 2017-09-01 南宁市正祥科技有限公司 A kind of complete non-structural road edge detection method
CN107133596A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 Front truck moving vehicle detection method based on underbody shade
CN107909007B (en) * 2017-10-27 2019-12-13 上海识加电子科技有限公司 lane line detection method and device
CN109774711A (en) * 2017-11-15 2019-05-21 财团法人车辆研究测试中心 Can weight modulation lane model vehicle lateral control system and method
TWI645999B (en) * 2017-11-15 2019-01-01 財團法人車輛研究測試中心 Weighting modulation may change lane model of lateral vehicle control system and method
CN107972574A (en) * 2017-11-26 2018-05-01 南通尚力机电工程设备有限公司 A kind of turn signal callback method
CN107963012A (en) * 2017-11-26 2018-04-27 南通尚力机电工程设备有限公司 A kind of turn signal adjusts back system
CN107958226B (en) * 2017-12-15 2020-05-22 海信集团有限公司 Road curve detection method, device and terminal
CN108109156B (en) * 2017-12-25 2019-10-11 西安电子科技大学 SAR image Approach for road detection based on ratio feature
CN108909833A (en) * 2018-06-11 2018-11-30 中国科学院自动化研究所 Intelligent automobile rotating direction control method based on Policy iteration
CN109398356A (en) * 2018-11-23 2019-03-01 奇瑞汽车股份有限公司 Lane Keeping System and method
CN109635737A (en) * 2018-12-12 2019-04-16 中国地质大学(武汉) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN109708158A (en) * 2019-01-03 2019-05-03 佛山市顺德区美的洗涤电器制造有限公司 For the control method of gas-cooker, device, gas-cooker and integrated kitchen range

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5922036A (en) * 1996-05-28 1999-07-13 Matsushita Electric Industrial Co., Ltd. Lane detection sensor and navigation system employing the same
CN104008645A (en) * 2014-06-12 2014-08-27 湖南大学 Lane line prediction and early-warning method for urban roads

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5922036A (en) * 1996-05-28 1999-07-13 Matsushita Electric Industrial Co., Ltd. Lane detection sensor and navigation system employing the same
CN104008645A (en) * 2014-06-12 2014-08-27 湖南大学 Lane line prediction and early-warning method for urban roads

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zehang Sun et al., "Quantized wavelet features and support vector machines for on-road vehicle detection," 7th International Conference on Control, Automation, Robotics and Vision, Oct. 14, 2003, pp. 1-6 *
Jin Zheliang et al., "Lane line detection method based on optimal threshold and Hough transform," Instrumentation and Detection Technology, Dec. 2009, vol. 28, no. 11, pp. 88-91 *
Yu Tianhong et al., "Survey of machine-vision-based methods for recognizing road boundaries and lane markings ahead of intelligent vehicles," Journal of Highway and Transportation Research and Development, Jan. 2006, vol. 23, no. 1, pp. 139-142 *
Mo Chen, "Vision-based detection and tracking of moving vehicles ahead on the road," China Master's Theses Full-text Database, Engineering Science and Technology II, Dec. 15, 2013, no. S2, C034-737 *

Also Published As

Publication number Publication date
CN104392212A (en) 2015-03-04

Similar Documents

Publication Publication Date Title
JP6259928B2 (en) Lane data processing method, apparatus, storage medium and equipment
CN104766058B (en) Method and apparatus for obtaining lane lines
Serna et al. Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning
CN104036323B (en) Vehicle detection method based on convolutional neural networks
US9652980B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
CN103020623B (en) Traffic sign detection method and road traffic sign detection equipment
Yenikaya et al. Keeping the vehicle on the road: A survey on on-road lane detection systems
Siogkas et al. Traffic Lights Detection in Adverse Conditions using Color, Symmetry and Spatiotemporal Information.
Zakeri et al. Image based techniques for crack detection, classification and quantification in asphalt pavement: a review
Thorpe et al. Toward autonomous driving: The CMU Navlab. Part I: Perception
US10108867B1 (en) Image-based pedestrian detection
US8487991B2 (en) Clear path detection using a vanishing point
Rotaru et al. Color image segmentation in HSI space for automotive applications
CN101408942B (en) Method for locating license plate under a complicated background
JP2018523875A (en) Lane recognition modeling method, apparatus, storage medium and device, and lane recognition method, apparatus, storage medium and apparatus
CN104318258B (en) Lane detection method based on time-domain fuzzy logic and Kalman filtering
Kong et al. General road detection from a single image
US8890951B2 (en) Clear path detection with patch smoothing approach
CN102708356B (en) Automatic license plate localization and recognition method under complex backgrounds
CN105550665B (en) Binocular-vision-based traversable area detection method for driverless vehicles
Zheng et al. A novel vehicle detection method with high resolution highway aerial image
Van de Voorde et al. Improving pixel-based VHR land-cover classifications of urban areas with post-classification techniques
CN101950350B (en) Clear path detection using a hierarchical approach
CN104050450A (en) Vehicle license plate recognition method based on video
JP2917661B2 (en) Traffic flow measurement processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant