CN106682586A - Method for real-time lane line detection based on vision under complex lighting conditions - Google Patents

Method for real-time lane line detection based on vision under complex lighting conditions

Info

Publication number
CN106682586A
CN106682586A · Application CN201611098387.4A
Authority
CN
China
Prior art keywords
illumination
image
value
lane line
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611098387.4A
Other languages
Chinese (zh)
Inventor
刘宏哲
袁家政
唐正
李超
赵小艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201611098387.4A priority Critical patent/CN106682586A/en
Publication of CN106682586A publication Critical patent/CN106682586A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vision-based method for real-time lane line detection under complex lighting conditions, belonging to the fields of computer vision and unmanned intelligent driving. During image preprocessing, illumination estimation and illuminant color correction are performed on images captured under different lighting so that they are restored to standard white light; noise introduced during image acquisition is removed by Gaussian filtering, after which the image is binarized and edges are extracted; the original image is partitioned into regions during extraction. An improved Hough transform yields candidate lane lines and a dynamic region of interest (ROI) is built; through the Hough transform based on the dynamic ROI together with Kalman filtering, the lane line is tracked in real time, constraining and updating the lane line model. A lane-detection failure discrimination module is added to the algorithm to improve detection reliability. The method is fast and robust, achieves good lane line detection under complex lighting conditions, improves a vehicle's ability to recognize lane lines dynamically, and increases the safety of unmanned driving.

Description

A vision-based method for real-time lane line detection under complex illumination conditions
Technical field
The present invention relates to a vision-based method for real-time lane line detection under complex illumination conditions, and belongs to the technical fields of autonomous vehicle driving and computer-aided driving.
Background art
In recent years, with the continuous growth of highway mileage and the automobile industry, traffic safety problems have become increasingly serious: there are more and more vehicles on the road, accidents increase year by year, and the casualties and property losses they cause are startling. To reduce traffic accidents, ensuring driving safety with technical means such as computer-aided driving systems has become a trend. The primary key problem such systems face is to detect lane lines rapidly and accurately from vehicle video images, which allows the vehicle to travel in exact accordance with real-time road conditions and protects the safety of vehicles and pedestrians.
At present, lane recognition methods fall into two main classes: image-feature methods and model-matching methods.
1. The basic idea of image-feature methods is to detect lane boundaries or markings from the differences between them and their surroundings in image features such as shape, texture, continuity, gray level and contrast. Donald et al. used the geometric information of lane lines to constrain the Hough transform parameters for lane detection at high speed; Lee proposed a departure warning system that predicts the lane direction by estimating changes in an edge-distribution function and in the direction of vehicle motion; Mastorakis filtered out the most probable marking lines using the linear features of lane lines; Wang and Hu proposed to recognize lane lines using, respectively, the opposite gradient directions on the two sides of a lane line and the color features of the lane line region. Such methods borrow techniques such as image segmentation and thresholding and are relatively simple, but shadow occlusion, lighting changes, noise, and discontinuous lane boundaries or markings can all make the lane unrecognizable.
2. Model-matching methods mainly target the strong geometric structure of structured roads, modeling lane lines with two- or three-dimensional curves; common two-dimensional lane models are the straight-line model and the parabola model. The B-Snake lane model, after initial localization, converts the lane detection problem into determining the control points required by a spline curve through the road model. Other work combines the Hough transform with a parabola model to detect lane lines, first obtaining preliminary parameters of the road markings with a straight-line model and then detecting the lane line with a hyperbolic model on that basis, achieving good detection results. Mechat modeled the lane line with an SVM-based method and used a standard Kalman filter for estimation and tracking. On the basis of an established road parameter model, such methods analyze target information in the image to determine the model parameters and are not disturbed by road-surface conditions, but their computational complexity is high and the time overhead of the algorithms is large.
Practical studies therefore combine image-feature methods with road-model matching to address the lane recognition problem.
Summary of the invention
Existing lane detection techniques under complex lighting have a low lane line recognition rate, perform no fine image preprocessing to restore distorted images to standard white light, and the original methods are comparatively complicated, inefficient and poor in real-time performance. Against these shortcomings the present invention proposes a vision-based method for real-time lane line detection under complex illumination conditions: the image is illumination-processed and restored to standard white light, and the information of lane line pixels is used to judge lane detection and lane trend; the algorithm has good real-time performance and detects lane lines efficiently.
To achieve the above objective, the inventors provide an illumination preprocessing and lane detection method comprising the following steps. During image preprocessing, illumination estimation and illuminant color correction are performed on images captured under different lighting so that they are restored to standard white light. Gaussian filtering removes the noise introduced during image acquisition; the image is then binarized and edges are extracted, the original image being partitioned into regions during extraction. An improved Hough transform yields candidate lane lines and a dynamic region of interest (ROI) is built; the Hough transform based on the dynamic ROI constrains and updates the lane line model, and a Kalman filter tracks the lane line in real time. A lane-detection failure discrimination module is added to the algorithm to improve detection reliability.
On a structured highway the lane line information is concentrated mainly in the middle and lower part of the image; different camera installations must be considered, and the vehicle hood may appear in the picture.
The method proceeds as follows. The image is downsampled and a region of interest (ROI) is set: adjacent images in a video are strongly correlated and most image information is useless for lane detection, so finding the ROI useful for lane detection both reduces the computational load of the algorithm and simplifies lane line recognition. On a structured highway the useful lane line information concentrated in the middle and lower part of the image is the ROI; allowing for different camera installations, the hood may occupy the bottom of the image (0 to 0.1·H_image). W_image denotes the width of the image and H_image its height, so the range of the effective detection region can be narrowed.
For lane detection, the region-of-interest image is preprocessed with color correction as follows: the region-of-interest image ψ is first obtained from an image acquisition device such as a camera, and color correction is applied to ψ to give the corrected image ψ1.
The concrete steps are as follows:
The purpose of image illumination estimation is to correct an image captured under unknown illumination to the image under standard white light. Briefly, the illuminant color at imaging time is estimated first, and the image is then mapped to standard white light with the Von Kries model; this also yields a better white-balance effect. The process divides into the following steps:
(1) Sample-block extraction: sample blocks are first extracted from the image, and for each image sample block the effective illumination falling on that block is estimated.
(2) Illumination estimation is performed with an existing single-illuminant estimation algorithm; the Grey-Edge color constancy framework systematically produces multiple different color constancy feature extraction methods by varying its parameters.
(3) Clustering of the sample-block illumination estimates: image blocks under the same illumination are clustered together to form one large image block and produce a more accurate illumination estimate; blocks under the same illuminant cluster easily into the same group. All illumination estimates are therefore clustered into M classes (M is the number of illuminants in the scene).
(4) Backward mapping of the clustering result: after the block-based illumination estimates are clustered into M classes, the clusters are mapped back onto the original image one by one; pixels belonging to the same sample block belong to the same cluster, which yields the irradiated position of every illuminant. This produces an illumination map in which each pixel belongs to one of the M illuminants. Through the backward mapping, the illumination estimate of each pixel and the cluster-centre value of the illuminant class containing that pixel are obtained.
(5) For regions where illuminants overlap, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.
(6) Color correction: with the per-pixel illumination estimates, the input image is corrected to standard illumination, giving the output image under standard illumination and eliminating the influence of scene illumination. Currently the most common correction is the diagonal model.
In this color correction method each image sample block is assumed to be 5 × 5 pixels, and the illumination falling on the block is assumed to be uniformly distributed, i.e. only one illuminant color falls on the block. All selected sample blocks are the same size and satisfy the condition that a 5 × 5 block contains enough illuminant color information to estimate exactly the illumination falling on it.
The Grey-Edge color constancy framework generates estimators through the parameters n, q and σ, where n is the order of the image derivative, q is the Minkowski norm and σ is the kernel size of the Gaussian filter; f(x) denotes the illumination value at point x in space, normalized to [0, 1] so that 0 represents no reflection and 1 total reflection. An illuminant estimate can be written as

e^{n,q,σ} ∝ ( ∫ | ∂^n f_σ(x) / ∂x^n |^q dx )^{1/q}

and varying (n, q, σ) systematically produces multiple different color constancy feature extraction methods.
Under this framework the image is segmented to obtain many sample blocks. Each sample block is assumed to be 5 × 5 pixels with uniformly distributed illumination, and within each block the illumination value is estimated with a conventional single-illuminant color constancy algorithm.
Five representative methods are considered: the candidate color constancy set Γ = {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}, i.e. Grey-World, White-Patch, smoothed White-Patch, and first- and second-order Grey-Edge. Each sample block is characterized by the illumination estimate produced by the selected color constancy algorithm.
The feature vector of a sample block can be written F′ = [R, G, B], where R, G and B are the color channels of the image. Using the normalized illumination estimate r = R/(R+G+B), g = G/(R+G+B), the feature vector of the block converts to the 1 × 2 vector F = [r, g].
In the chromaticity space formed by the illumination estimates, after the estimates of all sample blocks have been clustered, the distance from the illumination estimate of the j-th sample block to the i-th cluster centre can be computed as a Euclidean distance d_i; d_k denotes the distance to the k-th cluster centre, k ∈ [0, M], and Z is the total number of sample blocks. The probability p_{j,i} that the j-th sample block lies in the i-th illumination region is computed from these distances.
The coverage probability of the i-th illuminant is obtained by accumulating p_{j,i}, the probability that the j-th block is irradiated by the i-th illuminant, over all sample blocks of the input image.
To obtain a smooth, continuous illumination pattern, the illumination coverage probability map is filtered. Two kinds of filter are used, Gaussian and median: the Gaussian filter takes spatial position into account and computes per pixel the probability of each estimated illumination range, while the median filter has the advantage of preserving edge information well, which suits scenes with sharp illumination changes.
The illumination estimate of each image pixel is computed as

I_e(x) = Σ_{i=1}^{M} m_i(x) · I_{e,i}

where I_e is the illumination estimate in the scene, I_{e,i} is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z denotes the total number of sample blocks. If m_i(x) is large, the influence of the i-th illuminant on this pixel is large; in particular m_i(x) = 1 means the pixel lies completely under the irradiation of the i-th illuminant. The illumination coverage probability map is the same size as the input image.
After the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model. Let f_u(x) be the pixel value at x under the unknown illumination and f_c(x) the corrected pixel value it presents under standard illumination; Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x:

f_c(x) = Λ_{u,c}(x) f_u(x)

The diagonal correction model is Λ_{u,c}(x) = diag(R_c/R_u(x), G_c/G_u(x), B_c/B_u(x)), where x denotes a point in image space; R_u(x), G_u(x) and B_u(x) are the illumination values of the R, G and B channels estimated at x; R_c, G_c and B_c are the measured illumination values of the corresponding channels under the standard illuminant; each diagonal entry is the ratio of the measured illumination value to the estimated one.
After color correction the region-of-interest image is converted to gray scale, as in the formula Gray = R*0.299 + G*0.587 + B*0.114, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel. To preserve more of the white and yellow information on lane lines, the proportion of the B channel component is reduced while the lane line extraction error stays within bounds, and the conversion formula becomes Gray = R*0.5 + G*0.5.
Choice of the lane line model: most road sections are straight, and using a straight-line model as the lane model introduces a calculated error of only 3 mm. This method therefore adopts the straight-line model as the model of the lane line.
Lane line edge extraction from the gray image: in a real road environment lane lines are usually brighter than the surrounding road surface, so after gray conversion the gray value of the lane line is higher. Scanning the gray map row by row shows that the values of the lane line portion are higher than those on either side, forming a peak that, from left to right, first rises and then falls. Using these characteristics, the edges of the lane line are judged by computing the change between adjacent image pixels.
Lane detection based on the improved Hough transform: the Hough transform is robust to noise when detecting straight lines and can join broken edges, making it very suitable for detecting discontinuous lane markings. By the duality between image space and Hough parameter space, each feature point in the image is mapped into several cells of the accumulator array of the parameter space; the cell counts are tallied to detect extrema, decide whether a line exists, and obtain the line parameters.
The classical Hough transform maps every point of image space into polar coordinates and takes a vote: the finer the quantization of ρ and θ_p, the higher the detection precision, while coarse quantization makes the result inaccurate. To avoid the infinite slope of vertical lines, the Hough transform is usually carried out with the line-polar equation ρ = x cos θ_p + y sin θ_p. To reduce computational complexity and improve efficiency, corresponding conditional constraints are imposed here on the classical Hough transform so that it better suits lane detection.
The detected lane lines must also be constrained by an inter-frame association constraint. In actual acquisition systems and most intelligent vehicle systems the on-board camera directly delivers a video stream, and adjacent frames in the stream are highly redundant. Vehicle motion is continuous in both time and space: because the camera sampling rate is high (around 100 fps), the vehicle advances only a very short distance within one frame period, the road scene changes very little, and the lane line position changes slowly from frame to frame, so the previous frame provides strong lane-position information for the next. To improve the stability and accuracy of the lane recognition algorithm, the inter-frame association constraint is introduced here.
The steps are as follows. Suppose m_l lane lines are detected in the current frame, denoted by the set L_l = {L_1, L_2, …, L_m}; n_l lane lines are stored from the history frames, denoted E_l = {E_1, E_2, …, E_n}; the inter-frame association constraint filter is denoted K_l = {K_1, K_2, …, K_n}.
First an m_l × n_l matrix C_l is built whose element c_ij is the distance Δd_ij between the i-th line L_i of the current frame and the j-th line E_j of the history frames; Δd_ij is computed from the endpoints A and B of the lines L_i and E_j, with T_l denoting the matrix transpose.
Then, in matrix C_l, the number e_i of entries in row i with Δd_ij < T is counted. If e_i < 1, the current lane line has no associated lane line in the previous frame, so it is taken as a brand-new lane line and used to update the history-frame information for the next frame's inter-frame association constraint.
If e_i = 1, the current-frame lane line L_i and the history-frame lane line E_j are regarded as the same lane line across consecutive frames. When e_i > 1, a vector V_i records the lane line positions in row i of the current frame that satisfy the condition; among all elements V_j of the columns j where V_i is non-zero, the minimum element is found: (Δd_ij)_min = min{V_j} (V_j ≠ 0).
When this minimum is attained at E_j, the current-frame lane line L_i and the history-frame lane line E_j are the same lane line across consecutive frames. A lane line detected in the current frame that satisfies the inter-frame association constraint is regarded as the same line as in the preceding frame and its current position is displayed; otherwise the currently detected lane line is discarded. If the accumulated number of inter-frame association constraints exceeds T_α (T_α = 3), the parameters of the history-frame lane line are updated.
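For illustration, the association can be sketched as below. The endpoint-based distance used for Δd_ij is an assumption: the patent computes Δd_ij from the endpoints A and B of L_i and E_j, but the exact formula appears only as an image, and the threshold value T is likewise a placeholder.

```python
import numpy as np

def associate_lines(current, history, T=20.0):
    """Match each current-frame line L_i to a history-frame line E_j.

    current: m x 4 array of endpoints (x1, y1, x2, y2); history: n x 4.
    Returns one history index per current line, or -1 for a brand-new line.
    """
    current = np.asarray(current, dtype=float)
    history = np.asarray(history, dtype=float)
    m, n = len(current), len(history)
    C = np.zeros((m, n))                          # matrix C_l of distances Δd_ij
    for i in range(m):
        for j in range(n):
            d1 = np.hypot(*(current[i, :2] - history[j, :2]))  # endpoint A
            d2 = np.hypot(*(current[i, 2:] - history[j, 2:]))  # endpoint B
            C[i, j] = 0.5 * (d1 + d2)             # assumed form of Δd_ij
    matches = []
    for i in range(m):
        close = np.flatnonzero(C[i] < T)          # e_i = number of Δd_ij < T
        if close.size == 0:
            matches.append(-1)                    # e_i < 1: brand-new lane line
        else:                                     # e_i >= 1: keep the minimum Δd_ij
            matches.append(int(close[np.argmin(C[i, close])]))
    return matches
```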
Lane line tracking is based on a Kalman filter: for structured roads the lane line positions in two consecutive frames differ little, so the correlation of lane positions between adjacent frames can be used, and the information obtained from the previous frame guides the detection of the lane line in the next frame to realize real-time tracking.
Failure discrimination: when interference is heavy, for instance when vehicles or other objects occlude the lane markings or during turns or lane changes, the algorithm can produce large errors or even fail. A failure discrimination mechanism is therefore added to the detection so that, once the algorithm fails, correct recognition of the road markings can be restored in time.
Description of the drawings
Fig. 1 is the flow chart of the lane detection method of the specific embodiment of the invention;
Fig. 2 is the flow chart of the method of correcting image color using illumination estimation of the specific embodiment;
Fig. 3 is the lane line model of the specific embodiment;
Fig. 4 is the region of interest of the specific embodiment;
Fig. 5 is the edge detection graph of the specific embodiment;
Fig. 6 is the lane line filtering effect of the specific embodiment;
Fig. 7 shows lane line test results of the specific embodiment: stained road surface;
Fig. 8 shows lane line test results: oncoming vehicle with headlights on in fog;
Fig. 9 shows lane line test results: interference from ordinary pavement markings;
Fig. 10 shows lane line test results: driving at dusk.
Specific embodiment
To describe in detail the technical contents, structural features, objectives and effects of the technical scheme, a detailed explanation follows in combination with specific examples and the accompanying drawings.
1. General idea
To improve the real-time performance and reliability of lane line recognition, a vision-based real-time lane detection algorithm for complex illumination conditions is proposed. The original image is partitioned into regions during extraction; in preprocessing, images under different lighting receive illumination estimation and illuminant color correction so that they are restored to standard white light. Gaussian filtering removes the noise introduced during image acquisition, the image is binarized and edges are extracted, an improved Hough transform yields candidate lane lines, a dynamic ROI is built, and the Hough transform based on the dynamic ROI constrains and updates the lane line model. A lane-detection failure discrimination module is added to the algorithm to improve detection reliability, as shown in Fig. 1.
2. Determining the region of interest
Because adjacent images in a video are strongly correlated and most of the image information is useless for lane detection, finding the region of interest useful for lane detection both reduces the computational load of the algorithm and simplifies lane line recognition, as shown in Fig. 4.
On a structured highway the useful lane line information concentrated in the middle and lower part of the image is the region of interest; different camera installations are allowed for, and the vehicle hood may appear in the picture. W_image denotes the width of the image and H_image its height, so the range of the effective detection region can be narrowed, as illustrated by the sketch below.
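For illustration, a minimal ROI crop might look as follows; the exact fractions of H_image to keep are assumptions, since the text only states that the middle and lower part of the image is retained and that the hood may occupy roughly the bottom 0.1·H_image.

```python
import numpy as np

def extract_roi(frame: np.ndarray, top: float = 0.5, bottom: float = 0.9) -> np.ndarray:
    """Keep the middle-to-lower band of the frame where lane lines appear.

    top/bottom are illustrative fractions of H_image: the upper part is sky
    and background, and the bottom ~0.1*H_image may show the vehicle hood,
    so both are discarded.
    """
    h = frame.shape[0]
    return frame[int(h * top):int(h * bottom), :]
```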
3. Preprocessing the region-of-interest image with color correction. The steps are as follows: the region-of-interest image ψ is first obtained from an image acquisition device such as a camera, and color correction is applied to ψ to give the corrected image ψ1; as shown in Fig. 2, the concrete steps are the following.
The purpose of image illumination estimation is to correct an image captured under unknown illumination to the image under standard white light. Briefly, the illuminant color at imaging time is estimated first, and the image is then mapped to standard white light with the Von Kries model; this also yields a better white-balance effect. The process divides into the following steps (a code sketch of steps 3 to 5 follows the list):
(1) Sample-block extraction: sample blocks are first extracted from the image, and for each image sample block the effective illumination falling on that block is estimated.
(2) Illumination estimation is performed with an existing single-illuminant estimation algorithm; the Grey-Edge color constancy framework systematically produces multiple different color constancy feature extraction methods by varying its parameters.
(3) Clustering of the sample-block illumination estimates: image blocks under the same illumination are clustered together to form one large image block and produce a more accurate illumination estimate; blocks under the same illuminant cluster easily into the same group. All illumination estimates are therefore clustered into M classes (M is the number of illuminants in the scene).
(4) Backward mapping of the clustering result: after the block-based illumination estimates are clustered into M classes, the clusters are mapped back onto the original image one by one; pixels belonging to the same sample block belong to the same cluster, which yields the irradiated position of every illuminant. This produces an illumination map in which each pixel belongs to one of the M illuminants. Through the backward mapping, the illumination estimate of each pixel and the cluster-centre value of the illuminant class containing that pixel are obtained.
(5) For regions where illuminants overlap, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.
(6) Color correction: with the per-pixel illumination estimates, the input image is corrected to standard illumination, giving the output image under standard illumination and eliminating the influence of scene illumination. Currently the most common correction is the diagonal model.
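Steps 3 to 5 can be sketched as follows. Using k-means for the clustering and a Gaussian blur of per-illuminant indicator maps to obtain the soft per-pixel weights m_i(x) are plausible readings of the text, not details fixed by the patent.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.ndimage import gaussian_filter

def cluster_block_estimates(chroma: np.ndarray, M: int = 2):
    """Step 3: cluster per-block illuminant chromaticities [r, g] into M classes.

    chroma: Z x 2 array, one row per sample block. Returns the M cluster
    centres and the label of each block; the backward mapping of step 4 then
    paints each block's label onto its pixels.
    """
    centres, labels = kmeans2(chroma, M, minit='++')
    return centres, labels

def soften_illumination_map(label_map: np.ndarray, M: int, sigma: float = 15.0):
    """Step 5: Gaussian-filter the hard per-pixel labels into weights m_i(x)
    so that regions where illuminants overlap blend smoothly."""
    w = np.stack([gaussian_filter((label_map == i).astype(float), sigma)
                  for i in range(M)], axis=-1)
    return w / np.clip(w.sum(axis=-1, keepdims=True), 1e-8, None)
```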
In this color correction method each image sample block is assumed to be 5 × 5 pixels, and the illumination falling on the block is assumed to be uniformly distributed, i.e. only one illuminant color falls on the block. All selected sample blocks are the same size and satisfy the condition that a 5 × 5 block contains enough illuminant color information to estimate exactly the property of the illumination falling on it.
The Grey-Edge color constancy framework generates estimators through the parameters n, q and σ, where n is the order of the image derivative, q is the Minkowski norm and σ is the kernel size of the Gaussian filter; f(x) denotes the illumination value at point x in space, normalized to [0, 1] so that 0 represents no reflection and 1 total reflection. An illuminant estimate can be written as

e^{n,q,σ} ∝ ( ∫ | ∂^n f_σ(x) / ∂x^n |^q dx )^{1/q}

and varying (n, q, σ) systematically produces multiple different color constancy feature extraction methods.
Under this framework the image is segmented to obtain many sample blocks. Each sample block is assumed to be 5 × 5 pixels with uniformly distributed illumination, and within each block the illumination value is estimated with a conventional single-illuminant color constancy algorithm.
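A minimal per-block estimator under the standard Grey-Edge formulation is sketched below; the patent names the framework and its parameters (n, q, σ) but publishes no code, so the implementation details are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_edge_estimate(block: np.ndarray, n: int = 1, q: float = 1.0,
                       sigma: float = 1.0) -> np.ndarray:
    """Illuminant estimate e^{n,q,sigma} for one sample block (HxWx3 float).

    n: derivative order (0 = pixel values), q: Minkowski norm, sigma: Gaussian
    scale. (0,1,0) = Grey-World, (0,inf,0) = White-Patch, (1,1,1) = first-order
    Grey-Edge; derivative orders n > 0 require sigma > 0.
    """
    e = np.zeros(3)
    for c in range(3):
        f = block[..., c].astype(float)
        if n > 0:
            fx = gaussian_filter(f, sigma, order=(0, n))   # n-th derivative in x
            fy = gaussian_filter(f, sigma, order=(n, 0))   # n-th derivative in y
            d = np.hypot(fx, fy)
        else:
            d = gaussian_filter(f, sigma) if sigma > 0 else f
        e[c] = np.abs(d).max() if np.isinf(q) else (np.abs(d) ** q).mean() ** (1 / q)
    nrm = np.linalg.norm(e)
    return e / nrm if nrm > 0 else e
```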
Five representative methods are considered: the candidate color constancy set Γ = {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}, i.e. Grey-World, White-Patch, smoothed White-Patch, and first- and second-order Grey-Edge. Each sample block is characterized by the illumination estimate produced by the selected color constancy algorithm.
The feature vector of a sample block can be written F′ = [R, G, B], where R, G and B are the color channels of the image. Using the normalized illumination estimate r = R/(R+G+B), g = G/(R+G+B), the feature vector of the block converts to the 1 × 2 vector F = [r, g].
In the chromaticity space formed by the illumination estimates, after the estimates of all sample blocks have been clustered, the distance from the illumination estimate of the j-th sample block to the i-th cluster centre can be computed as a Euclidean distance d_i; d_k denotes the distance to the k-th cluster centre, k ∈ [0, M], and Z is the total number of sample blocks. The probability p_{j,i} that the j-th sample block lies in the i-th illumination region is computed from these distances.
The coverage probability of the i-th illuminant is obtained by accumulating p_{j,i}, the probability that the j-th block is irradiated by the i-th illuminant, over all sample blocks of the input image.
To obtain a smooth, continuous illumination pattern, the illumination coverage probability map is filtered. Two kinds of filter are used, Gaussian and median: the Gaussian filter takes spatial position into account and computes per pixel the probability of each estimated illumination range, while the median filter has the advantage of preserving edge information well, which suits scenes with sharp illumination changes.
The illumination estimate of each image pixel is computed as

I_e(x) = Σ_{i=1}^{M} m_i(x) · I_{e,i}

where I_e is the illumination estimate in the scene, I_{e,i} is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z denotes the total number of sample blocks. If m_i(x) is large, the influence of the i-th illuminant on this pixel is large; in particular m_i(x) = 1 means the pixel lies completely under the irradiation of the i-th illuminant. The illumination coverage probability map is the same size as the input image.
After the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model. Let f_u(x) be the pixel value at x under the unknown illumination and f_c(x) the corrected pixel value it presents under standard illumination; Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x:

f_c(x) = Λ_{u,c}(x) f_u(x)

The diagonal correction model is Λ_{u,c}(x) = diag(R_c/R_u(x), G_c/G_u(x), B_c/B_u(x)), where x denotes a point in image space; R_u(x), G_u(x) and B_u(x) are the illumination values of the R, G and B channels estimated at x; R_c, G_c and B_c are the measured illumination values of the corresponding channels under the standard illuminant; each diagonal entry is the ratio of the measured illumination value to the estimated one.
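In code, the per-pixel diagonal correction is an element-wise channel ratio; taking unit-norm white as the canonical illuminant is an assumption.

```python
import numpy as np

def diagonal_correct(image: np.ndarray, illum_map: np.ndarray, white=None):
    """Von Kries correction f_c(x) = Λ_u,c(x) f_u(x), with Λ diagonal.

    image:     HxWx3 input under unknown illumination, float in [0, 1].
    illum_map: HxWx3 per-pixel illuminant estimate I_e(x).
    """
    if white is None:
        white = np.ones(3) / np.sqrt(3)             # unit-norm standard white (assumed)
    scale = white / np.clip(illum_map, 1e-6, None)  # diagonal entries of Λ_u,c(x)
    return np.clip(image * scale, 0.0, 1.0)
```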
4. Gray conversion of the region-of-interest image after color correction
The standard conversion is Gray = R*0.299 + G*0.587 + B*0.114, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel. To preserve more of the white and yellow information on lane lines while the lane line extraction error stays within bounds, the proportion of the B channel component is reduced and the conversion formula becomes Gray = R*0.5 + G*0.5.
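In code (RGB channel order assumed) the modified conversion is a one-liner; the blue channel is dropped so white and yellow markings, both strong in R and G, keep maximal contrast.

```python
import numpy as np

def lane_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Gray = 0.5*R + 0.5*G, the lane-oriented variant of the standard
    Gray = 0.299*R + 0.587*G + 0.114*B conversion."""
    return 0.5 * rgb[..., 0].astype(float) + 0.5 * rgb[..., 1].astype(float)
```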
5. Lane line model
The lane line model, shown in Fig. 3, is chosen as follows: most road sections are straight, and using a straight-line model as the lane model introduces a calculated error of only 3 mm, so this method adopts the straight-line model as the model of the lane line.
Here (x1, y1), (x2, y2), (x3, y3), (x4, y4) are coordinates on the lane lines, p is the lateral offset of the line position from the centre vertical line, and d is the distance of the line's end point from the lower edge. The slope of the lane line is k = (y2 − y1)/(x2 − x1), the angle is θ = arctan k, and the intercept is b_τ = y − kx.
Lane line edge extraction from the gray image: in a real road environment lane lines are usually brighter than the surrounding road surface, so after gray conversion the gray value of the lane line is higher. Scanning the gray map row by row shows that the values of the lane line portion are higher than those on either side, forming a peak that, from left to right, first rises and then falls. Using these characteristics, the edges of the lane line are judged by computing the change between adjacent image pixels.
The concrete steps are as follows (a row-scan sketch follows the steps). Let a point be (x, y) with y ∈ [0, H_image) and x ∈ [2, W_image − 2), where x and y are the column and row of the pixel, W_image is the width of the image and H_image its height.
Step 1: compute the average gray value on the horizontal line near point (x, y) over t neighbouring pixels, t ∈ {1, 3, 5, 7, …}; t = 5 gives a good result.
Step 2: compute the edge extraction threshold T.
Step 3: compute the rising edge e_p and falling edge e_v of the lane line, e_p satisfying f(x+2, y) − f(x, y) > T and e_v satisfying f(x+2, y) − f(x, y) < −T.
Step 4: the rising and falling edges of a lane line appear in pairs in the image, with a certain distance between them. The width between rising and falling edge, Δw = e_p(x) − e_v(x), is compared and points that do not satisfy the constraint are rejected: if Δw > W_max the candidate cannot be a lane line and is discarded. Here e_p(x) and e_v(x) are the column coordinates of the rising and falling edges, and W_max is the maximum number of pixels a lane line occupies in the image.
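A row-scan sketch of Steps 3 and 4 follows. The fixed threshold T is an assumption: the patent derives T from the local mean over t = 5 neighbours, but that formula appears only as an image.

```python
import numpy as np

def lane_edges_row(row: np.ndarray, T: float = 20.0, w_max: int = 40):
    """Scan one gray-image row for paired rising/falling lane-line edges.

    Rising edges satisfy f(x+2) - f(x) > T and falling edges f(x+2) - f(x) < -T;
    a rising edge is kept only if a falling edge follows within w_max pixels
    (the width constraint Δw <= W_max of Step 4).
    """
    row = row.astype(float)
    diff = row[2:] - row[:-2]                    # f(x+2, y) - f(x, y)
    rising = np.flatnonzero(diff > T)            # candidate e_p positions
    falling = np.flatnonzero(diff < -T)          # candidate e_v positions
    pairs = []
    for ep in rising:
        later = falling[falling > ep]
        if later.size and later[0] - ep <= w_max:
            pairs.append((int(ep), int(later[0])))
    return pairs
```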
6. Edge extraction and line detection with the improved Hough transform
The Hough transform is robust to noise when detecting straight lines and can join broken edges, making it very suitable for detecting discontinuous lane markings. By the duality between image space and Hough parameter space, each feature point in the image is mapped into several cells of the accumulator array of the parameter space; the cell counts are tallied to detect extrema, decide whether a line exists, and obtain the line parameters.
The classical Hough transform maps every point of image space into polar coordinates and takes a vote: the finer the quantization of ρ and θ_p, the higher the detection precision, while coarse quantization makes the result inaccurate. To avoid the infinite slope of vertical lines, the Hough transform is usually carried out with the line-polar equation ρ = x cos θ_p + y sin θ_p. To reduce computational complexity and improve efficiency, corresponding conditional constraints are imposed here on the classical Hough transform so that it better suits lane detection, as shown in Fig. 5.
Given the distance error limit d_h of the approximate region containing a line, the Hough transform parameter series and the mean-error threshold ε_h, the improved Hough transform proceeds as follows (a code sketch follows the steps):
Step 1. Under the given parameters, apply the probabilistic Hough transform to the lane line features to obtain lines.
Step 2. For each line obtained by the Hough detection, find in the full feature point set S the feature points whose distance to the line is at most d_h; they form the set E_h.
Step 3. Determine the regression line parameters k_h and b_h of the set E_h by least squares, together with the mean square error e_h.
Step 4. For any feature point (x_i, y_i) in E_h, the points satisfying k_h·x_i + b_h > y_i form the subset E_pos, and the points satisfying k_h·x_i + b_h < y_i form the subset E_neg.
Step 5. In E_pos and E_neg find the points with the largest error, P_p and P_n, where d_h(P) denotes the distance from point P to the regression line.
Step 6. Remove P_p and P_n, update E_pos, E_neg and E_h, and repeat from Step 3 until the error e_h is below ε_h.
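Steps 3 to 6 amount to an iteratively trimmed least-squares fit, sketched below with np.polyfit standing in for the least-squares step and eps playing the role of ε_h.

```python
import numpy as np

def trimmed_least_squares(points: np.ndarray, eps: float = 2.0, max_iter: int = 50):
    """Iteratively refit y = k*x + b to the point set E_h (Steps 3 to 6):
    fit by least squares, then drop the worst point above (P_p) and below
    (P_n) the regression line until the RMS error e_h falls under eps.
    """
    pts = np.asarray(points, dtype=float)
    k = b = 0.0
    for _ in range(max_iter):
        k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)       # regression line k_h, b_h
        resid = pts[:, 1] - (k * pts[:, 0] + b)
        if np.sqrt(np.mean(resid ** 2)) < eps or len(pts) <= 3:
            break
        keep = np.ones(len(pts), dtype=bool)
        keep[np.argmax(resid)] = False                   # remove P_p (above the line)
        keep[np.argmin(resid)] = False                   # remove P_n (below the line)
        pts = pts[keep]
    return k, b
```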
7. Constraining the lane lines: the inter-frame association constraint
In actual acquisition systems and most intelligent vehicle systems the on-board camera directly delivers a video stream, and adjacent frames in the stream are highly redundant. Vehicle motion is continuous in both time and space: because the camera sampling rate is high (around 100 fps), the vehicle advances only a very short distance within one frame period, the road scene changes very little, and the lane line position changes slowly from frame to frame, so the previous frame provides strong lane-position information for the next. To improve the stability and accuracy of the lane recognition algorithm, the inter-frame association constraint is introduced here.
An inter-frame smoothing model is designed as Line = Σ_i ω_i · l_i, where Line is the accepted detection result of the current frame, the weights ω_i take values in (0, 1), l_i is the in-frame detection result of the i-th frame, and z is the number of associated frames. The accepted detection result of the current frame is obtained by weighting the in-frame results of the current frame and the preceding z frames; a change-detection algorithm follows from this model.
An inter-frame buffer is set up: with buffer size z, the buffer holds the in-frame detection results of the current frame and the z − 1 preceding frames. As z grows, the detection accuracy of the current frame rises and the false-detection and missed-detection rates fall; but when z is too large the accepted detection no longer represents the real information of the current frame, detection fails, the algorithm aborts and the program is restarted. The size of z therefore directly affects the accuracy of the current frame's lane detection.
When z = 1 the detection equals the in-frame result and inter-frame smoothing loses its meaning. When z = 15 the road conditions of the 14 preceding frames influence the current result, and the larger buffer slows the algorithm through the inter-frame smoothing computation. Measurement shows the CPU takes about 40 ms per image, i.e. 25 frames per second, so some value z ∈ [1, 25] optimizes the detection effect; this parameter is set adaptively and is related to the weights ω_i of the smoothing model and the noise threshold R_th. The weight setting satisfies ω_{−z+1} ≤ ω_{−z+2} ≤ … ≤ ω_{−1} ≤ ω_0.
The discrimination criterion for the noise threshold R_th is that the total weighted sum of the t-th lane line feature over the z in-frame results, as a fraction of the total number of frames, must exceed R_th; otherwise the line is regarded as a noise lane line. R_th is computed from a correction factor c with 0.2 < c < 0.3 (chosen to preserve sharp edges and image detail), the number of image pixels N_c, and the noise variance η.
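A sketch of the buffer and the weighted smoothing follows; the linear weight profile is an assumption, since the text only requires ω_i ∈ (0, 1) and the non-decreasing ordering.

```python
from collections import deque
import numpy as np

class InterFrameSmoother:
    """Weighted inter-frame smoothing: Line = sum_i w_i * l_i over the last
    z in-frame results, with w_{-z+1} <= ... <= w_0 so the newest frame
    dominates; the linear weight profile below is an assumption.
    """
    def __init__(self, z: int = 5):
        self.buffer = deque(maxlen=z)            # in-frame results l_i

    def update(self, line_params) -> np.ndarray:
        self.buffer.append(np.asarray(line_params, dtype=float))
        w = np.arange(1, len(self.buffer) + 1, dtype=float)
        w /= w.sum()                             # normalized, non-decreasing weights
        return w @ np.stack(self.buffer)         # accepted result Line of this frame
```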
8. Lane line tracking based on a Kalman filter
For structured roads the lane line positions in two consecutive frames differ little, so the correlation of lane positions between adjacent frames can be used: the information obtained from the previous frame guides the detection of the lane line in the next frame to realize real-time tracking, as shown in Fig. 6.
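A minimal constant-model Kalman filter on the line parameters (k, b) is sketched below; the covariance values are illustrative, as the patent specifies a standard Kalman filter without publishing them.

```python
import numpy as np

class LaneKalman:
    """Kalman filter on the lane line parameters [k, b] (slope, intercept),
    with identity transition and observation models: between frames the lane
    position changes little, so F = H = I.
    """
    def __init__(self, k0: float, b0: float):
        self.x = np.array([k0, b0], dtype=float)     # state estimate
        self.P = np.eye(2)                           # state covariance
        self.Q = np.eye(2) * 1e-3                    # process noise (assumed)
        self.R = np.eye(2) * 1e-1                    # measurement noise (assumed)

    def predict(self) -> np.ndarray:
        self.P = self.P + self.Q
        return self.x                                # centre of the next dynamic ROI

    def update(self, z) -> np.ndarray:
        K = self.P @ np.linalg.inv(self.P + self.R)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x
```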
Failure discrimination: when interference is heavy, for instance when vehicles or other objects occlude the lane markings or during turns or lane changes, the algorithm can produce large errors or even fail, so a failure discrimination mechanism is added to the detection; once the algorithm fails, correct recognition of the road markings can be restored in time. If the detected lane line parameters meet any of the following situations, the algorithm is judged to have failed, the program is interrupted and then re-executed:
(1) In the dynamic region of interest, the number of lines detected by the Hough transform is zero.
(2) The number of frames that fail the lane line constraint exceeds T_β (T_β = 5).
(3) The lane line parameters mutate relative to the previous frame, i.e. the slope of the line changes by more than 10 degrees or the intercept changes by more than 15 pixels.
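The three conditions translate directly into a guard function; the change quantities are the frame-to-frame differences of the line parameters.

```python
def detection_failed(n_hough_lines: int, n_bad_frames: int,
                     d_slope_deg: float, d_intercept_px: float,
                     T_beta: int = 5) -> bool:
    """Failure discrimination: restart the detector when any condition holds.

    d_slope_deg / d_intercept_px are the frame-to-frame changes of the line
    slope (in degrees) and intercept (in pixels).
    """
    return (n_hough_lines == 0                       # (1) no line in the dynamic ROI
            or n_bad_frames > T_beta                 # (2) constraint violated too long
            or d_slope_deg > 10 or d_intercept_px > 15)  # (3) parameter mutation
```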
Figs. 6 to 10 show lane line detection results.

Claims (10)

1. A vision-based method for real-time lane line detection under complex illumination conditions, characterized in that the method comprises the following steps: determining the region to be detected from the camera image and detecting lane markings within it; downsampling the image and setting the region of interest; preprocessing the image; establishing the lane line model; obtaining candidate lane lines by Hough transform; Kalman filtering; and a discrimination module;
(1) Preprocessing the region-of-interest image with color correction:
Step 1: sample blocks are first extracted from the image ψ, and for each image sample block the effective illumination falling on the block is estimated;
Step 2: illumination estimation is performed with an existing single-illuminant estimation algorithm; the Grey-Edge color constancy framework produces multiple different color constancy feature extraction methods by varying its parameters;
Step 3: the sample-block illumination estimates are clustered so that image blocks under the same illumination cluster together to form one large image block and produce a more accurate illumination estimate, blocks under the same illuminant clustering easily into the same group; all illumination estimates are clustered into M classes, where M is the number of illuminants in the scene;
Step 4: backward mapping of the clustering result: after the block-based estimates are clustered into M classes, the clusters are mapped back onto the original image one by one, i.e. pixels belonging to the same sample block belong to the same cluster, which yields the irradiated position of every illuminant; this produces an illumination map in which each pixel belongs to one of the M illuminants; through the backward mapping, the illumination estimate of each pixel and the cluster-centre value of the illuminant class containing the pixel are obtained;
Step 5: for regions where illuminants overlap, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates;
Step 6: color correction: with the per-pixel illumination estimates, the input image is corrected to standard illumination, giving the output image under standard illumination;
(2) Gray conversion after color correction, shown in the following formula, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel: Gray = R*0.5 + G*0.5;
(3) The improved Hough transform applied after lane line edge extraction from the gray image, with the following concrete steps:
Step 1. Under the given parameters, apply the probabilistic Hough transform to the lane line features to obtain lines;
Step 2. For each line obtained by the Hough detection, find in the full feature point set S the feature points whose distance to the line is at most d_h; they form the set E_h;
Step 3. Determine the regression line parameters k_h and b_h of the set E_h by least squares, where k_h is the slope and b_h the intercept of the line, together with the mean square error e_h;
Step 4. For any feature point (x_i, y_i) in E_h, the points satisfying k_h·x_i + b_h > y_i form the subset E_pos, and the points satisfying k_h·x_i + b_h < y_i form the subset E_neg;
Step 5. In E_pos and E_neg find the points with the largest error, P_p and P_n;
Step 6. Remove P_p and P_n, update E_pos, E_neg and E_h, and repeat from Step 3 until the error e_h is below ε_h;
(4) Lane line detection and tracking based on a Kalman filter;
(5) The lane line inter-frame association relation;
(6) If the detected lane line parameters meet any of the following situations, the algorithm is judged to have failed, the program is interrupted and re-executed from the start:
1) in the dynamic region of interest, the number of lines detected by the Hough transform is zero;
2) the number of frames that fail the lane line constraint exceeds T_β, T_β = 5;
3) the lane line parameters mutate relative to the previous frame, i.e. the slope of the line changes by more than 10 degrees or the intercept changes by more than 15 pixels.
2. The method according to claim 1 for correcting image color using image illumination estimation, wherein the selected sample blocks are all the same size and satisfy the following condition: a 5 × 5 sample block contains enough illuminant color information to estimate exactly the property of the illumination falling on the block.
3. The method according to claim 1, characterized in that five candidate color constancy computations form the set Γ = {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}; each sample block is characterized by the illumination estimate of the selected color constancy algorithm.
4. The method according to claim 1, characterized in that the feature vector of a sample block is described as F′ = [R, G, B], with R, G and B the color channels of the image; using the normalized illumination estimate, the feature vector of the block converts to the 1 × 2 vector F = [r, g].
5. The method according to claim 1, characterized in that, in the chromaticity space formed by the illumination estimates, after the estimates of all sample blocks have been clustered, the distance from the illumination estimate of the j-th sample block to the i-th cluster centre is computed as a Euclidean distance d_i; d_k denotes the distance to the k-th cluster centre, k ∈ [0, M], and Z is the total number of sample blocks; the probability p_{j,i} that the sample block lies in the i-th illumination region is computed from these distances; the coverage probability of the i-th illuminant is obtained from the p_{j,i}, where p_{j,i} is the probability that the j-th block is irradiated by the i-th illuminant and Z is the total number of sample blocks in the input image.
6. The method according to claim 1, characterized in that the illumination estimate of each image pixel is computed as I_e(x) = Σ_i m_i(x)·I_{e,i}, where I_e is the illumination estimate in the scene, I_{e,i} is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z denotes the total number of sample blocks.
7. The method according to claim 1, characterized in that after the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illumination and f_c(x) is the corrected pixel value it presents under standard illumination; Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x, as in the formula: f_c(x) = Λ_{u,c}(x) f_u(x).
8. The method according to claim 1, characterized in that the diagonal correction model is Λ_{u,c}(x) = diag(R_c/R_u(x), G_c/G_u(x), B_c/B_u(x)), where x denotes a point in image space; R_u(x), G_u(x) and B_u(x) are the illumination values of the R, G and B channels estimated at x; R_c, G_c and B_c are the measured illumination values of the corresponding channels; each diagonal entry is the ratio of the measured illumination value to the estimated one; and Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x.
9. The method according to claim 1, characterized in that for lane line edge extraction from the gray image, a point (x, y) satisfies y ∈ [0, H_image) and x ∈ [2, W_image − 2), with x and y the column and row of the pixel and W_image and H_image the width and height of the image;
Step 1: compute the average on the horizontal line near point (x, y), with t = 5;
Step 2: compute the edge extraction threshold T;
Step 3: compute the rising edge e_p and falling edge e_v:
e_p ∈ {f(x+2, y) − f(x, y) > T}
e_v ∈ {f(x+2, y) − f(x, y) < −T}
Step 4: the rising and falling edges of a lane line appear in pairs in the image, with a certain distance between them; the width between rising and falling edge, Δw = e_p(x) − e_v(x), is compared and points that do not satisfy the constraint are rejected;
if Δw > W_max the candidate cannot be a lane line and is discarded, where e_p(x) and e_v(x) are the column coordinates of the rising and falling edges and W_max is the maximum number of pixels a lane line occupies in the image.
10. The method according to claim 1, characterized in that the designed inter-frame smoothing model is Line = Σ_i ω_i·l_i, where Line is the accepted detection result of the current frame, the weights ω_i take values in (0, 1), l_i is the in-frame detection result of the i-th frame, and z is the number of associated frames; the accepted detection result of the current frame is obtained by weighting the in-frame results of the current frame and the preceding z frames; a change-detection algorithm is obtained from the model; an inter-frame buffer is set, and with buffer size z the buffer holds the in-frame results of the current frame and the z − 1 preceding frames; as z is increased the detection accuracy of the current frame rises and the false-detection and missed-detection rates fall; z ∈ [1, 25] and the weight setting satisfies ω_{−z+1} ≤ ω_{−z+2} ≤ … ≤ ω_{−1} ≤ ω_0; the discrimination criterion for the noise threshold R_th is that the total weighted sum of the t-th lane line feature over the z in-frame results, as a fraction of the total number of frames, must exceed R_th, otherwise the line is regarded as a noise lane line; R_th is computed from a correction factor c with 0.2 < c < 0.3, chosen to preserve sharp edges and image detail, the number of image pixels N_c, and the noise variance η.
CN201611098387.4A 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions Pending CN106682586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611098387.4A CN106682586A (en) 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions

Publications (1)

Publication Number Publication Date
CN106682586A true CN106682586A (en) 2017-05-17

Family

ID=58867368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611098387.4A Pending CN106682586A (en) 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions

Country Status (1)

Country Link
CN (1) CN106682586A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839264A (en) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Detection method of lane line
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN104866823A (en) * 2015-05-11 2015-08-26 重庆邮电大学 Vehicle detection and tracking method based on monocular vision
CN105260713A (en) * 2015-10-09 2016-01-20 东方网力科技股份有限公司 Method and device for detecting lane line
CN105678791A (en) * 2016-02-24 2016-06-15 西安交通大学 Lane line detection and tracking method based on parameter non-uniqueness property
CN105893949A (en) * 2016-03-29 2016-08-24 西南交通大学 Lane line detection method under complex road condition scene
CN105966314A (en) * 2016-06-15 2016-09-28 北京联合大学 Lane departure pre-warning method based on double low-cost cameras

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARJAN GIJSENIJ et al.: "Color Constancy for Multiple Light Sources", IEEE Transactions on Image Processing *
YANG Xining et al.: "Lane Line Detection Technology Based on Improved Hough Transform", Computer Measurement & Control *
DONG Junpeng: "Research on Color Constancy Algorithms Based on Illumination Analysis", China Master's Theses Full-text Database, Information Science and Technology *
GUO Siyu et al.: "Line Detection Combining Hough Transform and Improved Least Squares", Computer Science *
LU Zihui: "Research on Vision-Based All-Day Vehicle-Exterior Safety Detection Algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002745A (en) * 2017-06-06 2018-12-14 武汉小狮科技有限公司 A kind of lane line real-time detection method based on deep learning and tracking technique
CN107451585B (en) * 2017-06-21 2023-04-18 浙江大学 Potato image recognition device and method based on laser imaging
CN107451585A (en) * 2017-06-21 2017-12-08 浙江大学 Potato pattern recognition device and method based on laser imaging
CN107578012A (en) * 2017-09-05 2018-01-12 大连海事大学 A kind of drive assist system based on clustering algorithm selection sensitizing range
CN107578012B (en) * 2017-09-05 2020-10-27 大连海事大学 Driving assistance system for selecting sensitive area based on clustering algorithm
CN107909007A (en) * 2017-10-27 2018-04-13 上海识加电子科技有限公司 Method for detecting lane lines and device
CN107909007B (en) * 2017-10-27 2019-12-13 上海识加电子科技有限公司 lane line detection method and device
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN108537224A (en) * 2018-04-23 2018-09-14 北京小米移动软件有限公司 Image detecting method and device
CN109272536A (en) * 2018-09-21 2019-01-25 浙江工商大学 A kind of diatom vanishing point tracking based on Kalman filter
CN109272536B (en) * 2018-09-21 2021-11-09 浙江工商大学 Lane line vanishing point tracking method based on Kalman filtering
CN111126109B (en) * 2018-10-31 2023-09-05 沈阳美行科技股份有限公司 Lane line identification method and device and electronic equipment
CN111126109A (en) * 2018-10-31 2020-05-08 沈阳美行科技有限公司 Lane line identification method and device and electronic equipment
CN109740550A (en) * 2019-01-08 2019-05-10 哈尔滨理工大学 A kind of lane detection and tracking method based on monocular vision
CN109858438B (en) * 2019-01-30 2022-09-30 泉州装备制造研究所 Lane line detection method based on model fitting
CN109858438A (en) * 2019-01-30 2019-06-07 泉州装备制造研究所 A kind of method for detecting lane lines based on models fitting
CN110084190B (en) * 2019-04-25 2024-02-06 南开大学 Real-time unstructured road detection method under severe illumination environment based on ANN
CN110084190A (en) * 2019-04-25 2019-08-02 南开大学 Unstructured road detection method in real time under a kind of violent light environment based on ANN
CN110765890B (en) * 2019-09-30 2022-09-02 河海大学常州校区 Lane and lane mark detection method based on capsule network deep learning architecture
CN110765890A (en) * 2019-09-30 2020-02-07 河海大学常州校区 Lane and lane mark detection method based on capsule network deep learning architecture
CN111580500A (en) * 2020-05-11 2020-08-25 吉林大学 Evaluation method for safety of automatic driving automobile
CN111580500B (en) * 2020-05-11 2022-04-12 吉林大学 Evaluation method for safety of automatic driving automobile
CN111753749A (en) * 2020-06-28 2020-10-09 华东师范大学 Lane line detection method based on feature matching
CN112115784B (en) * 2020-08-13 2021-09-28 北京嘀嘀无限科技发展有限公司 Lane line identification method and device, readable storage medium and electronic equipment
CN112115784A (en) * 2020-08-13 2020-12-22 北京嘀嘀无限科技发展有限公司 Lane line identification method and device, readable storage medium and electronic equipment
CN112101163A (en) * 2020-09-04 2020-12-18 淮阴工学院 Lane line detection method
CN112767359A (en) * 2021-01-21 2021-05-07 中南大学 Steel plate corner detection method and system under complex background
CN112767359B (en) * 2021-01-21 2023-10-24 中南大学 Method and system for detecting corner points of steel plate under complex background
CN113200052B (en) * 2021-05-06 2021-11-16 上海伯镭智能科技有限公司 Intelligent road condition identification method for unmanned driving
EP4047317A3 (en) * 2021-07-13 2023-05-31 Beijing Baidu Netcom Science Technology Co., Ltd. Map updating method and apparatus, device, server, and storage medium
CN115806202A (en) * 2023-02-02 2023-03-17 山东新普锐智能科技有限公司 Self-adaptive learning-based weighing hydraulic unloading device and turnover control system thereof
CN115806202B (en) * 2023-02-02 2023-08-25 山东新普锐智能科技有限公司 Hydraulic unloading device capable of weighing based on self-adaptive learning and overturning control system thereof
CN116029947A (en) * 2023-03-30 2023-04-28 之江实验室 Complex optical image enhancement method, device and medium for severe environment

Similar Documents

Publication Publication Date Title
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
Kong et al. General road detection from a single image
Keller et al. The benefits of dense stereo for pedestrian detection
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
Gerónimo et al. 2D–3D-based on-board pedestrian detection system
US20090309966A1 (en) Method of detecting moving objects
CN108596129A (en) A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN102915433B (en) Character combination-based license plate positioning and identifying method
Derpanis et al. Classification of traffic video based on a spatiotemporal orientation analysis
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN106127137A (en) A kind of target detection recognizer based on 3D trajectory analysis
CN103455820A (en) Method and system for detecting and tracking vehicle based on machine vision technology
KR20070027768A (en) Method for traffic sign detection
Fernández et al. Road curb and lanes detection for autonomous driving on urban scenarios
CN107563310A (en) A kind of lane change detection method violating the regulations
CN103680145B Automatic person-vehicle identification method based on local image features
CN107862341A (en) A kind of vehicle checking method
Yang et al. PDNet: Improved YOLOv5 nondeformable disease detection network for asphalt pavement
CN104966064A (en) Pedestrian ahead distance measurement method based on visual sense
CN110826468B (en) Driving-assisted vehicle detection distance measurement method based on lane line detection
Amini et al. New approach to road detection in challenging outdoor environment for autonomous vehicle
Chen et al. Research on vehicle detection and tracking algorithm for intelligent driving
Kanitkar et al. Vision based preceding vehicle detection using self shadows and structural edge features
Lin et al. Incorporating appearance and edge features for vehicle detection in the blind-spot area
Fu et al. Vision-based preceding vehicle detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170517