CN109190523B - Vehicle detection tracking early warning method based on vision - Google Patents

Vehicle detection tracking early warning method based on vision

Info

Publication number
CN109190523B
CN109190523B CN201810940833.4A
Authority
CN
China
Prior art keywords
vehicle
area
region
pixel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810940833.4A
Other languages
Chinese (zh)
Other versions
CN109190523A (en)
Inventor
肖进胜
申梦瑶
眭海刚
王文
雷俊锋
周永强
赵博强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201810940833.4A priority Critical patent/CN109190523B/en
Publication of CN109190523A publication Critical patent/CN109190523A/en
Application granted granted Critical
Publication of CN109190523B publication Critical patent/CN109190523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148: Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G08G1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vision-based vehicle detection, tracking and early warning method. Images are collected and the road vanishing line is calibrated; the vehicle detection area is divided off and converted to gray scale, the illumination intensity is classified from the collected images, and the detection-area image is gray-stretched according to that classification. Training sample images are constructed and manually labeled as positive and negative sample images, and their haar and LBP features are extracted to train an Adaboost cascade classifier. The vehicle detection area is divided into different domains, vehicles are detected with the trained Adaboost cascade classifier, and a secondary vehicle judgment is made according to the illumination intensity. Once a vehicle is detected, it is tracked with the KCF target tracking method; the distance from the vehicle ahead to the host vehicle is calculated with a position-based distance estimation method, and the collision time is calculated from the host vehicle's speed and that distance to issue an early warning. The invention reduces computational complexity and improves vehicle detection accuracy.

Description

Vehicle detection tracking early warning method based on vision
Technical Field
The invention relates to intelligent transportation, automotive electronics and vision-based vehicle detection, tracking and collision early warning, and in particular to a vision-based vehicle detection, tracking and early warning method.
Background
With rising automobile sales, road traffic accidents are also increasing. A joint announcement issued on 19 December 2017 by the State Administration of Work Safety and the Ministry of Transport indicates that, although road traffic accidents in China have fallen markedly in recent years, the numbers remain high: 8.643 million traffic accident reports were received in 2016, an increase of 659,000 reports year on year and a rise of 2.9% over the same period; 63,093 people died and 226,430 were injured in the same period, and the fatality rate per 10,000 vehicles was as high as 2.14. On 7 March 2017 the Ministry of Transport promulgated the standard "Operating Bus Safety and Technical Conditions" (JT/T 1094-2016), which took effect on 1 April 2017 and requires passenger coaches longer than 9 meters to be fitted with driver-assistance functions such as forward collision warning and lane departure warning. This shows that the country has begun issuing laws and regulations to promote the adoption of active safe-driving technology and so reduce the incidence of traffic accidents.
The penetration rate of driver-assistance systems in China is only about 2%, and they are concentrated in high-end vehicle models at high prices; with the number of vehicles continuing to grow, it is very necessary to develop a cost-effective vehicle collision early warning system.
Vehicle detection is one of the most important modules in a collision warning system. Current vision-based vehicle detection methods fall roughly into the following five categories. The first is template-based detection, which requires building a large number of templates and parameters and updating them on line; in practice, the shape differences between vehicle types and sizes and the deformation that occurs while driving make the set of required templates effectively endless, so template-based detection is not suited to moving, in-vehicle use. The second is the optical-flow-based method, which obtains vehicle motion parameters from the gray-level changes of pixels across an image sequence and thereby derives information such as the vehicle's position. The third is the feature-based method, which detects the vehicle from prior knowledge such as the shadow under the vehicle, the vehicle contour, edges, tail corner points and lamps. The fourth is traditional machine learning, which is fast and accurate but whose models generalize poorly. The fifth is deep-learning-based object detection, which has strong recognition capability and high accuracy but a large computational load and high hardware requirements.
Vehicle detection algorithms with good detection performance require large computing resources, which makes the hardware expensive; an optimized vehicle detection algorithm can raise detection speed, reduce the dependence on hardware, and improve the cost-effectiveness of a vehicle driver-assistance system.
Disclosure of Invention
In order to solve the technical problem, the invention provides a vision-based vehicle detection, tracking and early warning method. The system acquires images in real time through the vehicle-mounted camera, obtains the position, distance and collision time information of a front vehicle relative to the vehicle through a vehicle detection algorithm, and performs early warning when a potential danger occurs.
The technical scheme of the invention is a vision-based vehicle detection, tracking and early warning method, which specifically comprises the following steps:
step 1: collecting images and calibrating a road vanishing line, dividing a vehicle detection area according to the road vanishing line, graying the vehicle detection area, classifying the illumination intensity according to the collected images, and performing gray stretching on the vehicle detection area images according to the classification of the illumination intensity;
step 2: constructing a training sample image, manually marking the training sample image as a positive sample image and a negative sample image, and extracting haar characteristics and LBP characteristics of the positive sample image and the negative sample image to train an Adaboost cascade classifier;
step 3: dividing a vehicle detection area into a near domain, a middle domain and a far domain, detecting a vehicle by using the trained Adaboost cascade classifier, and carrying out a secondary vehicle judgment according to the illumination intensity;
step 4: tracking the vehicle by using a KCF target tracking method after the vehicle is detected;
step 5: after tracking the vehicle, calculating the distance from the vehicle ahead to the host vehicle by a position-based distance estimation method, and calculating the collision time according to the speed of the host vehicle and that distance to issue an early warning.
Preferably, in the step 1, the width of the acquired image is u, the height of the acquired image is v, and a coordinate system is established by taking the upper left corner of the image as an origin;
the calibration of the road vanishing line in the step 1 is as follows:
After the camera is fixed at the vehicle rearview mirror position, the camera is first rotated to calibrate the road vanishing line, so that the horizontal line in the image whose ordinate is y and whose midpoint is (x, y) coincides with the horizon line at the far end of the road; the rectangular region whose vertices are given by the equation image (not reproduced here) is then taken as the vehicle detection area, where W is the width of the vehicle detection area and H is its height;
in the step 1, graying of the vehicle detection area is realized by adopting a weighted average method:
f(i,j)=0.3R(i,j)+0.59G(i,j)+0.11B(i,j)
[equation image not reproduced: index ranges of i and j over the vehicle detection area]
f (i, j) is the gray value of the pixel after graying, and R (i, j), G (i, j) and B (i, j) are the R value, G value and B value of each pixel in the vehicle detection area respectively;
the illumination intensity in step 1 is classified as:
[equation image not reproduced: definition of the sky average light-field intensity I_s(λ)]
[equation image not reproduced: definition of the road average light-field intensity I_R(λ)]
[equation image not reproduced: combination I(λ) of I_s(λ) and I_R(λ) with coefficient λ]
where I_s(λ) is the sky average light-field intensity, I_R(λ) is the road average light-field intensity, I(λ) is the combined sky/road average light-field intensity, and λ ≤ 0.5 is a proportionality coefficient; S_l(x_s, y_s) is the pixel gray value in the left sky sampling region of the captured frame and S_r(x_s, y_s) is the pixel gray value in the right sky sampling region, with x_s ∈ [0.1u, 0.9u] and y_s ∈ [0, 0.05v];
R_l(x_r, y_r) is the pixel gray value in the left road sampling region of the captured frame and R_r(x_r, y_r) is the pixel gray value in the right road sampling region, with x_r ∈ [0.1u, 0.9u] and y_r ∈ [0.95v, v]; m is the number of pixels in the left sampling region, n is the number of pixels in the right sampling region, and M is the maximum pixel gray level within the sampling regions;
According to the light-field intensity value, the scene is weak light when I(λ) < 95, normal light when 95 < I(λ) < 180, and strong light when I(λ) > 180;
in the step 1, gray stretching is carried out on the vehicle detection area image according to the illumination intensity classification:
If the scene is a strong-light scene:
[equation image not reproduced: gray-stretching curve for strong light]
If the scene is a normal-illumination scene:
[equation image not reproduced: gray-stretching curve for normal illumination]
If the scene is a weak-light scene:
[equation image not reproduced: gray-stretching curve for weak light]
Y (i, j) is a gray value after gray stretching, and f (i, j) is a gray value before gray stretching of the gray map;
preferably, the training samples in step 2 are M k × k sample images;
The M sample images are manually labeled as M_1 positive sample images containing a vehicle and M_2 negative sample images not containing a vehicle; the haar feature values of the positive and negative sample images are then calculated from the integral image of each sample:
A(x, y) = Σ_{x′≤x, y′≤y} Y(x′, y′)
H(i) = A(x−1, y−1) + A(x+w−1, y+h−1) − A(x−1, y+h−1) − A(x+w−1, y−1)
where A(x, y) is the integral image of the gray-stretched sample, H(i), i ∈ [0, M], is the haar feature value of the positive and negative sample images, w is the width of the sliding window used when computing the haar feature value, and h is the height of the sliding window;
The LBP features of the M_1 positive sample images containing a vehicle and the M_2 negative sample images not containing a vehicle are calculated as follows:
After the gray stretching of step 1, a k×k neighborhood is taken around the central pixel of the stretched image Y(i, j); the neighborhood contains k² pixel values. The central pixel value i_c is used as a threshold, and the other (k×k−1) pixel values are compared with it: a position is marked 1 if its value is greater than the threshold and 0 otherwise. This produces a (k×k−1)-bit binary number for the region, and the decimal value of that binary number is the LBP value L(i) of the central pixel, i ∈ [0, M];
The haar features H(i) and LBP features L(i) of the positive and negative sample images are used to train an Adaboost cascade classifier with the Adaboost algorithm;
Preferably, step 3 divides the vehicle detection area (region of interest) of step 1 into a far domain R_f, a middle domain R_m and a near domain R_n, where the near domain is further divided into a near-left region R_nl, a near-middle region R_nm and a near-right region R_nr; the sliding-window size range used when computing haar features in each region is as follows:
far domain R_f: w ∈ [20, 30], h ∈ [20, 30];
middle domain R_m: [sliding-window size range given by an equation image, not reproduced];
near domain R_n: [sliding-window size range given by an equation image, not reproduced];
near-left region R_nl: [sliding-window size range given by an equation image, not reproduced];
near-middle region R_nm: [sliding-window size range given by an equation image, not reproduced];
near-right region R_nr: [sliding-window size range given by an equation image, not reproduced];
w and h are the width and height of the sliding window used when computing the haar feature value, and H is the height of the vehicle detection area from step 1;
The haar features and LBP features are computed in the different vehicle regions of interest as in step 2 and fed into the Adaboost cascade classifier trained in step 2, which judges whether a vehicle is present in each region;
If a vehicle is judged to be present, the illumination intensity is classified as in step 1; in a strong-light or normal-illumination scene, tail corner-point features and straight-line features of the vehicle region of interest are extracted to judge a second time whether the region is a vehicle region:
FAST corner features of the vehicle tail are extracted in the vehicle region of interest with the FAST corner detection algorithm, and the number of tail FAST corner feature points in the region is counted;
straight-line features of the vehicle region of interest are extracted with the Hough transform, and the number of parallel straight lines at the vehicle tail in the region is counted;
The relation between the number of tail FAST corner feature points, the number of tail parallel straight lines and the vehicle size is expressed as the average number of the two features per unit length:
V_score = (λ·n_c + n_l) / V_width
where V_score is the fused feature value of the tail FAST corner points and the tail parallel straight lines, λ > 1 is a proportionality coefficient, n_c is the number of tail FAST corner feature points, n_l is the number of tail parallel straight lines, and V_width is the pixel width of the detected vehicle;
if V_score ≥ 0.5 the region is a vehicle region, otherwise it is a non-vehicle region;
If the scene is a weak-light scene, the vehicle taillights in the extracted vehicle region of interest are used to judge a second time whether the region is a vehicle region:
the vehicle region of interest is separated into its RGB color channels, giving the three single-channel gray images Mat_R, Mat_G and Mat_B;
Mat_G is subtracted from Mat_R to obtain the gray map Diff_RG;
the gray map Diff_RG is binarized to obtain the red-halo binary map Thresh_RG;
the taillight highlight area is extracted from the RGB three-channel image by taking R ≥ 200, G ≥ 200 and B ≥ 200, giving the taillight binary image Mat_bright;
taillight contours A_i and their circumscribed rectangles R_i are extracted in Mat_bright with the Canny algorithm; the area of A_i is A_i.area and the area of R_i is R_i.area; contours whose area is smaller than L pixels are deleted, and the area difference S_i = R_i.area − A_i.area is then computed;
within the circumscribed rectangle R_i of each taillight contour in the red-halo binary map Thresh_RG, the area of pixels with value 1 is computed and denoted T_i;
when T_i < 0.1·S_i, the pixels of the region corresponding to the taillight contour A_i in Mat_bright are set to 0, leaving the screened taillight contours (symbol given by an equation image, not reproduced);
For the vehicle region of interest, the above procedure gives the left taillight contour A_l, the right taillight contour A_r, the circumscribed rectangle R_l of the left taillight contour and the circumscribed rectangle R_r of the right taillight contour; the area of the left taillight contour is S_l = A_l.area and the area of the right taillight contour is S_r = A_r.area; the horizontal angle of the center line of R_l is α_l and the horizontal angle of the center line of R_r is α_r; the length of R_l is L_l, the length of R_r is L_r, the width of R_l is W_l and the width of R_r is W_r; and d is the distance between the centroid of the left taillight contour A_l and the centroid of the right taillight contour A_r;
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
α_l − α_r < 200
[equation image not reproduced: taillight pairing condition]
if the conditions are met, the area is a vehicle area, otherwise, the area is a non-vehicle area;
preferably, in the step 5, after the vehicle is tracked in the steps 1 to 4, the distance from the vehicle ahead to the host vehicle is calculated by a position-based distance estimation method:
the distance from the head of the vehicle to the tail of the front vehicle is d:
[equation image not reproduced: closed-form expression for the distance d]
where H is the mounting height of the camera, α is the camera's field-of-view angle, θ_c is the angle between the mounted camera's optical axis and the vertical direction, h_i is the pixel height of the image formed by the camera, d_p is the pixel distance from the tail of the leading vehicle to the top of the image, f is the focal length of the camera, d_1 is the horizontal distance from the camera to the host vehicle's front end, d_2 is the horizontal distance from the camera to the tail of the leading vehicle, and θ_v is the angle between the ray entering the camera from below the tail of the leading vehicle and the vertical direction;
The host-vehicle speed v is collected in real time from the on-board GPS module while driving, and the relative collision time is t = d/v;
if t < β, a warning is issued.
Compared with the prior art, the invention has the following advantages:
The vehicle detection, tracking and early-warning method provided by the invention is suited to all-weather operation and adapts to daytime, night-time, rainy and similar scenes;
the invention provides a method for automatically acquiring a detection domain according to a road vanishing point, and the detection domain is divided into multiple regions according to the characteristic of vehicle space distribution, so that the calculated amount of a vehicle detection algorithm is greatly reduced, and a better detection effect can be obtained by using less calculation resources;
the invention combines haar characteristic, LBP characteristic, angular point characteristic, parallel straight line characteristic and tail lamp to detect the vehicle, thus improving the accuracy of vehicle detection algorithm;
the invention carries out early warning according to the collision time and the lane line, and improves the accuracy of vehicle early warning according to the lane line.
Drawings
FIG. 1: the method of the invention is schematically shown in the flow chart;
FIG. 2: the invention relates to an early warning result graph.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are only for the purpose of illustration and explanation, and are not to be construed as limiting the present invention.
The following describes an embodiment of the present invention with reference to fig. 1 to 2, and specifically includes the following steps:
step 1: collecting images and calibrating a road vanishing line, dividing a vehicle detection area according to the road vanishing line, graying the vehicle detection area, classifying the illumination intensity according to the collected images, and performing gray stretching on the vehicle detection area images according to the classification of the illumination intensity;
in the step 1, the width of the collected image is u, the height is v, and a coordinate system is established by taking the upper left corner of the image as an origin;
the calibration of the road vanishing line in the step 1 is as follows:
After the camera is fixed at the vehicle rearview mirror position, the camera is first rotated to calibrate the road vanishing line, so that the horizontal line in the image whose ordinate is y and whose midpoint is (x, y) coincides with the horizon line at the far end of the road; the rectangular region whose vertices are given by the equation image (not reproduced here) is then taken as the vehicle detection area, where W is the width of the vehicle detection area and H is its height;
in the step 1, graying of the vehicle detection area is realized by adopting a weighted average method:
f(i,j)=0.3R(i,j)+0.59G(i,j)+0.11B(i,j)
[equation image not reproduced: index ranges of i and j over the vehicle detection area]
f (i, j) is the gray value of the pixel after graying, and R (i, j), G (i, j) and B (i, j) are the R value, G value and B value of each pixel in the vehicle detection area respectively;
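For reference, a minimal NumPy sketch of this weighted-average graying of the detection-area image follows; it assumes the ROI has already been cropped from the frame using the calibrated detection area and that the frame uses OpenCV's BGR channel order.

import numpy as np

def gray_detection_area(bgr_roi):
    """Weighted-average graying f = 0.3R + 0.59G + 0.11B of the detection-area ROI.

    bgr_roi: H x W x 3 uint8 array in OpenCV BGR channel order (assumed).
    """
    b = bgr_roi[:, :, 0].astype(np.float32)
    g = bgr_roi[:, :, 1].astype(np.float32)
    r = bgr_roi[:, :, 2].astype(np.float32)
    f = 0.3 * r + 0.59 * g + 0.11 * b
    return np.clip(f, 0, 255).astype(np.uint8)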
the illumination intensity in step 1 is classified as:
[equation image not reproduced: definition of the sky average light-field intensity I_s(λ)]
[equation image not reproduced: definition of the road average light-field intensity I_R(λ)]
[equation image not reproduced: combination I(λ) of I_s(λ) and I_R(λ) with coefficient λ]
where I_s(λ) is the sky average light-field intensity, I_R(λ) is the road average light-field intensity, I(λ) is the combined sky/road average light-field intensity, and λ ≤ 0.5 is a proportionality coefficient; S_l(x_s, y_s) is the pixel gray value in the left sky sampling region of the captured frame and S_r(x_s, y_s) is the pixel gray value in the right sky sampling region, with x_s ∈ [0.1u, 0.9u] and y_s ∈ [0, 0.05v];
R_l(x_r, y_r) is the pixel gray value in the left road sampling region of the captured frame and R_r(x_r, y_r) is the pixel gray value in the right road sampling region, with x_r ∈ [0.1u, 0.9u] and y_r ∈ [0.95v, v]; m is the number of pixels in the left sampling region, n is the number of pixels in the right sampling region, and M is the maximum pixel gray level within the sampling regions;
According to the light-field intensity value, the scene is weak light when I(λ) < 95, normal light when 95 < I(λ) < 180, and strong light when I(λ) > 180;
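The sampling layout and scene thresholds above can be prototyped as in the sketch below. The exact averaging formulas are given by equation images that are not reproduced here, so a plain mean over the sky and road strips is used as an assumption, with the coefficient λ fixed at 0.5.

import numpy as np

def classify_illumination(gray_frame, lam=0.5):
    """Classify the scene as weak/normal/strong light from sky and road strips.

    gray_frame: full grayscale frame with v rows and u columns.
    Sky strip:  x in [0.1u, 0.9u], y in [0, 0.05v]
    Road strip: x in [0.1u, 0.9u], y in [0.95v, v]
    """
    v, u = gray_frame.shape[:2]
    sky = gray_frame[0:int(0.05 * v), int(0.1 * u):int(0.9 * u)]
    road = gray_frame[int(0.95 * v):v, int(0.1 * u):int(0.9 * u)]
    i_s = float(sky.mean())          # sky average intensity (assumed form of I_s)
    i_r = float(road.mean())         # road average intensity (assumed form of I_R)
    i_total = lam * i_s + (1.0 - lam) * i_r
    if i_total < 95:
        return "weak", i_total
    if i_total > 180:
        return "strong", i_total
    return "normal", i_total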
in the step 1, gray stretching is carried out on the vehicle detection area image according to the illumination intensity classification:
If the scene is a strong-light scene:
[equation image not reproduced: gray-stretching curve for strong light]
If the scene is a normal-illumination scene:
[equation image not reproduced: gray-stretching curve for normal illumination]
If the scene is a weak-light scene:
[equation image not reproduced: gray-stretching curve for weak light]
Y (i, j) is a gray value after gray stretching, and f (i, j) is a gray value before gray stretching of the gray map;
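The three stretching curves themselves are given by equation images that are not reproduced in this text; the sketch below therefore substitutes a generic piecewise-linear stretch, and the per-scene breakpoints (lo, hi) are assumed values only, not the patented curves.

import numpy as np

def gray_stretch(f, scene):
    """Piecewise-linear gray stretch; the (lo, hi) breakpoints per scene are assumptions."""
    f = f.astype(np.float32)
    if scene == "strong":            # compress highlights, recover mid/dark detail
        lo, hi = 50.0, 230.0
    elif scene == "weak":            # expand the dark end of the histogram
        lo, hi = 20.0, 150.0
    else:                            # normal illumination
        lo, hi = 35.0, 200.0
    y = (f - lo) * 255.0 / max(hi - lo, 1.0)
    return np.clip(y, 0, 255).astype(np.uint8)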
step 2: constructing a training sample image, manually marking the training sample image as a positive sample image and a negative sample image, and extracting haar characteristics and LBP characteristics of the positive sample image and the negative sample image to train an Adaboost cascade classifier;
the training samples in the step 2 are M k x k sample images;
The M sample images are manually labeled as M_1 positive sample images containing a vehicle and M_2 negative sample images not containing a vehicle, with M_1 : M_2 = 1 : 3; the haar feature values of the positive and negative sample images are then calculated from the integral image of each sample:
A(x, y) = Σ_{x′≤x, y′≤y} Y(x′, y′)
H(i) = A(x−1, y−1) + A(x+w−1, y+h−1) − A(x−1, y+h−1) − A(x+w−1, y−1)
where A(x, y) is the integral image of the gray-stretched sample, H(i), i ∈ [0, M], is the haar feature value of the positive and negative sample images, w is the width of the sliding window used when computing the haar feature value, and h is the height of the sliding window;
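A small sketch of the integral image and the rectangle sum behind the haar feature value; cv2.integral returns a padded (rows+1)×(cols+1) integral image, which avoids the x−1 / y−1 border handling written out in the formula above. The two-rectangle feature shown is only an illustrative choice.

import cv2

def rect_sum(integral, x, y, w, h):
    """Sum of pixels inside the w x h rectangle with top-left corner (x, y).

    `integral` is the padded integral image from cv2.integral, shape (rows+1, cols+1),
    so integral[y, x] is already the sum of everything above and to the left of (x, y).
    """
    return float(integral[y + h, x + w] + integral[y, x]
                 - integral[y, x + w] - integral[y + h, x])

def two_rect_haar(gray_patch, x, y, w, h):
    """Illustrative haar-like response: left half minus right half of a w x h window."""
    ii = cv2.integral(gray_patch)
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right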
The LBP features of the M_1 positive sample images containing a vehicle and the M_2 negative sample images not containing a vehicle are calculated as follows:
After the gray stretching of step 1, a k×k neighborhood is taken around the central pixel of the stretched image Y(i, j); the neighborhood contains k² pixel values. The central pixel value i_c is used as a threshold, and the other (k×k−1) pixel values are compared with it: a position is marked 1 if its value is greater than the threshold and 0 otherwise. This produces a (k×k−1)-bit binary number for the region, and the decimal value of that binary number is the LBP value L(i) of the central pixel, i ∈ [0, M];
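A sketch of the LBP computation described above for a single k×k patch (k = 3 gives the classic 8-bit LBP); the scan order of the neighbors is an assumption, since the text does not fix it.

def lbp_value(patch):
    """LBP of the center pixel of a k x k patch: neighbors > center map to 1, else 0."""
    k = patch.shape[0]
    c = patch[k // 2, k // 2]
    bits = []
    for i in range(k):
        for j in range(k):
            if i == k // 2 and j == k // 2:
                continue                         # skip the center pixel itself
            bits.append(1 if patch[i, j] > c else 0)
    value = 0
    for b in bits:                               # (k*k - 1)-bit binary number -> decimal
        value = (value << 1) | b
    return value

Applied at every pixel of the gray-stretched sample, this yields the LBP values L(i) that are fed into the cascade training together with the haar values H(i).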
The haar features H(i) and LBP features L(i) of the positive and negative sample images are used to train an Adaboost cascade classifier with the Adaboost algorithm;
step 3: dividing the vehicle detection area into a near domain, a middle domain and a far domain, detecting vehicles with the trained Adaboost cascade classifier, and carrying out a secondary vehicle judgment according to the illumination intensity;
Step 3 divides the vehicle detection area (region of interest) of step 1 into a far domain R_f, a middle domain R_m and a near domain R_n, where the near domain is further divided into a near-left region R_nl, a near-middle region R_nm and a near-right region R_nr; the sliding-window size range used when computing haar features in each region is as follows:
far domain R_f: w ∈ [20, 30], h ∈ [20, 30];
middle domain R_m: [sliding-window size range given by an equation image, not reproduced];
near domain R_n: [sliding-window size range given by an equation image, not reproduced];
near-left region R_nl: [sliding-window size range given by an equation image, not reproduced];
near-middle region R_nm: [sliding-window size range given by an equation image, not reproduced];
near-right region R_nr: [sliding-window size range given by an equation image, not reproduced];
w and h are the width and height of the sliding window used when computing the haar feature value, and H is the height of the vehicle detection area from step 1;
The haar features and LBP features are computed in the different vehicle regions of interest as in step 2 and fed into the Adaboost cascade classifier trained in step 2, which judges whether a vehicle is present in each region;
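Region-wise detection with a trained cascade maps naturally onto OpenCV's CascadeClassifier, with the per-region window-size range passed through minSize/maxSize. The model file name and the example size bounds below are placeholders; the middle- and near-domain bounds are given only as equation images above.

import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")   # hypothetical trained cascade file

def detect_in_region(gray, region, size_range):
    """Run the cascade inside one sub-region with a region-specific window-size range.

    region:     (x, y, w, h) of the sub-region inside the detection area.
    size_range: ((min_w, min_h), (max_w, max_h)) sliding-window bounds for this region.
    """
    x, y, w, h = region
    roi = gray[y:y + h, x:x + w]
    boxes = cascade.detectMultiScale(roi,
                                     scaleFactor=1.1,
                                     minNeighbors=3,
                                     minSize=size_range[0],
                                     maxSize=size_range[1])
    # shift detections back into detection-area coordinates
    return [(x + bx, y + by, bw, bh) for (bx, by, bw, bh) in boxes]

# e.g. the far domain uses the 20-30 pixel window range stated above:
# far_boxes = detect_in_region(gray, far_region, ((20, 20), (30, 30)))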
in step 3, the secondary vehicle judgment according to the illumination intensity is as follows:
If a vehicle is judged to be present, the illumination intensity is classified as in step 1; in a strong-light or normal-illumination scene, tail corner-point features and straight-line features of the vehicle region of interest are extracted to judge a second time whether the region is a vehicle region:
FAST corner features of the vehicle tail are extracted in the vehicle region of interest with the FAST corner detection algorithm, and the number of tail FAST corner feature points in the region is counted;
straight-line features of the vehicle region of interest are extracted with the Hough transform, and the number of parallel straight lines at the vehicle tail in the region is counted;
The relation between the number of tail FAST corner feature points, the number of tail parallel straight lines and the vehicle size is expressed as the average number of the two features per unit length:
V_score = (λ·n_c + n_l) / V_width
where V_score is the fused feature value of the tail FAST corner points and the tail parallel straight lines, λ > 1 is a proportionality coefficient, n_c is the number of tail FAST corner feature points, n_l is the number of tail parallel straight lines, and V_width is the pixel width of the detected vehicle;
if V_score ≥ 0.5 the region is a vehicle region, otherwise it is a non-vehicle region;
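A sketch of this V_score secondary check using OpenCV's FAST detector and probabilistic Hough transform; the FAST threshold, Canny thresholds, Hough parameters, the near-horizontal criterion for counting "parallel" tail lines, and λ = 2 are all assumptions.

import cv2
import numpy as np

def v_score(roi_gray, lam=2.0):
    """Fuse FAST corner count and tail parallel-line count per unit vehicle pixel width."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    n_c = len(fast.detect(roi_gray, None))               # tail FAST corner feature points

    edges = cv2.Canny(roi_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=25,
                            minLineLength=roi_gray.shape[1] // 3, maxLineGap=5)
    n_l = 0
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle < 10 or angle > 170:                 # near-horizontal = "parallel" tail line
                n_l += 1
    v_width = roi_gray.shape[1]                           # detected vehicle pixel width
    score = (lam * n_c + n_l) / float(v_width)
    return score, score >= 0.5                            # vehicle region if V_score >= 0.5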
If the scene is a weak-light scene, the vehicle taillights in the extracted vehicle region of interest are used to judge a second time whether the region is a vehicle region:
the vehicle region of interest is separated into its RGB color channels, giving the three single-channel gray images Mat_R, Mat_G and Mat_B;
Mat_G is subtracted from Mat_R to obtain the gray map Diff_RG;
the gray map Diff_RG is binarized to obtain the red-halo binary map Thresh_RG;
the taillight highlight area is extracted from the RGB three-channel image by taking R ≥ 200, G ≥ 200 and B ≥ 200, giving the taillight binary image Mat_bright;
taillight contours A_i and their circumscribed rectangles R_i are extracted in Mat_bright with the Canny algorithm; the area of A_i is A_i.area and the area of R_i is R_i.area; contours whose area is smaller than L pixels are deleted, and the area difference S_i = R_i.area − A_i.area is then computed;
within the circumscribed rectangle R_i of each taillight contour in the red-halo binary map Thresh_RG, the area of pixels with value 1 is computed and denoted T_i;
when T_i < 0.1·S_i, the pixels of the region corresponding to the taillight contour A_i in Mat_bright are set to 0, leaving the screened taillight contours (symbol given by an equation image, not reproduced);
For the vehicle region of interest, the above procedure gives the left taillight contour A_l, the right taillight contour A_r, the circumscribed rectangle R_l of the left taillight contour and the circumscribed rectangle R_r of the right taillight contour; the area of the left taillight contour is S_l = A_l.area and the area of the right taillight contour is S_r = A_r.area; the horizontal angle of the center line of R_l is α_l and the horizontal angle of the center line of R_r is α_r; the length of R_l is L_l, the length of R_r is L_r, the width of R_l is W_l and the width of R_r is W_r; and d is the distance between the centroid of the left taillight contour A_l and the centroid of the right taillight contour A_r;
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
α_l − α_r < 200
[equation image not reproduced: taillight pairing condition]
if the conditions are met, the area is a vehicle area, otherwise, the area is a non-vehicle area;
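The weak-light taillight screening can be prototyped as below. The R, G, B ≥ 200 highlight rule, the minimum contour area L, the S_i and T_i definitions and the 0.1·S_i test follow the text; the red-halo binarization threshold is an assumption, and the final left/right pairing conditions are only partially recoverable from the equation images, so only the contour extraction and halo check are sketched.

import cv2
import numpy as np

def taillight_candidates(bgr_roi, min_area=20):
    """Extract candidate taillight contours in a weak-light vehicle region of interest."""
    mat_b, mat_g, mat_r = cv2.split(bgr_roi)

    # red halo: Mat_R minus Mat_G, then binarize (threshold value is an assumption)
    diff_rg = cv2.subtract(mat_r, mat_g)
    _, thresh_rg = cv2.threshold(diff_rg, 60, 255, cv2.THRESH_BINARY)

    # highlight area: R, G and B all >= 200
    mat_bright = ((mat_r >= 200) & (mat_g >= 200) & (mat_b >= 200)).astype(np.uint8) * 255

    edges = cv2.Canny(mat_bright, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x

    kept = []
    for cnt in contours:
        a_area = cv2.contourArea(cnt)
        if a_area < min_area:                    # drop contours smaller than L pixels
            continue
        x, y, w, h = cv2.boundingRect(cnt)       # circumscribed rectangle R_i
        s_i = w * h - a_area                     # S_i = R_i.area - A_i.area
        t_i = cv2.countNonZero(thresh_rg[y:y + h, x:x + w])   # red-halo pixels inside R_i
        if t_i >= 0.1 * s_i:                     # contours without enough red halo are discarded
            kept.append((cnt, (x, y, w, h)))
    return kept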
step 4: tracking the vehicle by using a KCF target tracking method after the vehicle is detected;
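KCF tracking of a detected vehicle box can use OpenCV's built-in tracker (the contrib modules are required); depending on the OpenCV build the factory function is cv2.TrackerKCF_create or cv2.legacy.TrackerKCF_create.

import cv2

def track_vehicle(cap, first_frame, init_box):
    """Track one detected vehicle box with KCF over the following frames.

    init_box: (x, y, w, h) produced by the Adaboost detector on first_frame.
    """
    try:
        tracker = cv2.TrackerKCF_create()
    except AttributeError:
        tracker = cv2.legacy.TrackerKCF_create()          # newer opencv-contrib builds
    tracker.init(first_frame, init_box)

    boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if not found:
            break                                         # target lost: fall back to detection
        boxes.append(tuple(int(v) for v in box))
    return boxes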
step 5: after tracking the vehicle, calculating the distance from the vehicle ahead to the host vehicle by a position-based distance estimation method, and calculating the collision time according to the speed of the host vehicle and that distance to issue an early warning.
In the step 5, after the vehicle is tracked through the steps 1 to 4, the distance from the front vehicle to the host vehicle is calculated by using a position-based distance estimation method:
the distance from the head of the vehicle to the tail of the front vehicle is d:
[equation image not reproduced: closed-form expression for the distance d]
where H is the mounting height of the camera, α is the camera's field-of-view angle, θ_c is the angle between the mounted camera's optical axis and the vertical direction, h_i is the pixel height of the image formed by the camera, d_p is the pixel distance from the tail of the leading vehicle to the top of the image, f is the focal length of the camera, d_1 is the horizontal distance from the camera to the host vehicle's front end, d_2 is the horizontal distance from the camera to the tail of the leading vehicle, and θ_v is the angle between the ray entering the camera from below the tail of the leading vehicle and the vertical direction;
The host-vehicle speed v is collected in real time from the on-board GPS module while driving, and the relative collision time is t = d/v;
if t < β, with β = 2.7 s, a warning is issued.
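The closed-form distance expression is an equation image not reproduced here; the sketch below uses a standard flat-ground pinhole model with the same quantities (camera height H, pitch angle θ_c from the vertical, focal length in pixels, image row of the leading vehicle's tail) as an assumed stand-in, followed by the t = d/v collision-time test with β = 2.7 s.

import math

def time_to_collision(cam_h, theta_c_deg, f_px, cy, tail_row, v_mps, d1=1.5, beta=2.7):
    """Monocular flat-ground range plus collision-time check (assumed model and numbers).

    cam_h      : camera mounting height above the road, meters
    theta_c_deg: angle between the camera optical axis and the vertical direction, degrees
    f_px       : focal length in pixels
    cy         : image row of the principal point
    tail_row   : image row of the leading vehicle's tail (bottom of its bounding box)
    v_mps      : host-vehicle speed from the GPS module, m/s
    d1         : horizontal distance from the camera to the host vehicle's front end, meters
    """
    # angle between the ray through the tail point and the vertical direction (theta_v)
    theta_v = math.radians(theta_c_deg) - math.atan2(tail_row - cy, f_px)
    d2 = cam_h * math.tan(theta_v)         # camera to lead-vehicle tail along the ground
    d = max(d2 - d1, 0.0)                  # host front end to lead-vehicle tail
    t = d / v_mps if v_mps > 0 else float("inf")
    return d, t, t < beta                  # warn when the relative collision time t < beta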
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity of understanding and is not to be construed as limiting the invention; those skilled in the art may make modifications and substitutions without departing from the scope of the invention as defined by the appended claims.

Claims (4)

1. A vision-based vehicle detection, tracking and early warning method is characterized by comprising the following steps:
step 1: collecting images and calibrating a road vanishing line, dividing a vehicle detection area according to the road vanishing line, graying the vehicle detection area, classifying illumination intensity according to the collected images, and stretching the gray level of the vehicle detection area images according to the illumination intensity classification;
step 2: constructing a training sample image, manually labeling the training sample image as a positive sample image and a negative sample image, and extracting haar characteristics and LBP characteristics of the positive sample image and the negative sample image to train an Adaboost cascade classifier;
step 3: dividing a vehicle detection area into a near domain, a middle domain and a far domain, detecting a vehicle by using the trained Adaboost cascade classifier, and carrying out a secondary vehicle judgment according to the illumination intensity;
step 4: tracking the vehicle by using a KCF target tracking method after the vehicle is detected;
step 5: after tracking the vehicle, calculating the distance from the vehicle ahead to the host vehicle by a position-based distance estimation method, and calculating the collision time according to the speed of the host vehicle and that distance to issue an early warning;
In step 3, the vehicle detection area is divided into a near domain, a middle domain and a far domain; the far domain is denoted R_f, the middle domain R_m and the near domain R_n, and the near domain is further divided into a near-left region R_nl, a near-middle region R_nm and a near-right region R_nr; the sliding-window size range used when computing haar features in each region is as follows:
far domain R_f: w ∈ [20, 30], h ∈ [20, 30];
middle domain R_m: [sliding-window size range given by an equation image, not reproduced];
near domain R_n: [sliding-window size range given by an equation image, not reproduced];
near-left region R_nl: [sliding-window size range given by an equation image, not reproduced];
near-middle region R_nm: [sliding-window size range given by an equation image, not reproduced];
near-right region R_nr: [sliding-window size range given by an equation image, not reproduced];
w and h are the width and height of the sliding window used when computing the haar feature value, and H is the height of the vehicle detection area from step 1;
the haar features and LBP features are computed in the different vehicle regions of interest as in step 2 and fed into the Adaboost cascade classifier trained in step 2, which judges whether a vehicle is present in each region;
if a vehicle is judged to be present, the illumination intensity is classified as in step 1; in a strong-light or normal-illumination scene, tail corner-point features and straight-line features of the vehicle region of interest are extracted to judge a second time whether the region is a vehicle region:
FAST corner features of the vehicle tail are extracted in the vehicle region of interest with the FAST corner detection algorithm, and the number of tail FAST corner feature points in the region is counted;
straight-line features of the vehicle region of interest are extracted with the Hough transform, and the number of parallel straight lines at the vehicle tail in the region is counted;
the relation between the number of tail FAST corner feature points, the number of tail parallel straight lines and the vehicle size is expressed as the average number of the two features per unit length:
V_score = (λ·n_c + n_l) / V_width
where V_score is the fused feature value of the tail FAST corner points and the tail parallel straight lines, λ > 1 is a proportionality coefficient, n_c is the number of tail FAST corner feature points, n_l is the number of tail parallel straight lines, and V_width is the pixel width of the detected vehicle;
if V_score ≥ 0.5 the region is a vehicle region, otherwise it is a non-vehicle region;
if the scene is a weak-light scene, the vehicle taillights in the extracted vehicle region of interest are used to judge a second time whether the region is a vehicle region:
the vehicle region of interest is separated into its RGB color channels, giving the three single-channel gray images Mat_R, Mat_G and Mat_B;
Mat_G is subtracted from Mat_R to obtain the gray map Diff_RG;
the gray map Diff_RG is binarized to obtain the red-halo binary map Thresh_RG;
the taillight highlight area is extracted from the RGB three-channel image by taking R ≥ 200, G ≥ 200 and B ≥ 200, giving the taillight binary image Mat_bright;
taillight contours A_i and their circumscribed rectangles R_i are extracted in Mat_bright with the Canny algorithm; the area of A_i is A_i.area and the area of R_i is R_i.area; contours whose area is smaller than L pixels are deleted, and the area difference S_i = R_i.area − A_i.area is then computed;
within the circumscribed rectangle R_i of each taillight contour in the red-halo binary map Thresh_RG, the area of pixels with value 1 is computed and denoted T_i;
when T_i < 0.1·S_i, the pixels of the region corresponding to the taillight contour A_i in Mat_bright are set to 0, leaving the screened taillight contours (symbol given by an equation image, not reproduced);
for the vehicle region of interest, the above procedure gives the left taillight contour A_l, the right taillight contour A_r, the circumscribed rectangle R_l of the left taillight contour and the circumscribed rectangle R_r of the right taillight contour; the area of the left taillight contour is S_l = A_l.area and the area of the right taillight contour is S_r = A_r.area; the horizontal angle of the center line of R_l is α_l and the horizontal angle of the center line of R_r is α_r; the length of R_l is L_l, the length of R_r is L_r, the width of R_l is W_l and the width of R_r is W_r; and d is the distance between the centroid of the left taillight contour A_l and the centroid of the right taillight contour A_r;
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
[equation image not reproduced: taillight pairing condition]
α_l − α_r < 200
[equation image not reproduced: taillight pairing condition]
if the above conditions are all satisfied, the area is a vehicle area, otherwise, the area is a non-vehicle area.
2. The vision-based vehicle detection, tracking and early warning method of claim 1, wherein: in the step 1, the width of the collected image is u, the height is v, and a coordinate system is established by taking the upper left corner of the image as an origin;
the calibration of the road vanishing line in the step 1 is as follows:
after the camera is fixed at the vehicle rearview mirror, the camera is rotated to calibrate the road vanishing line, so that the horizontal line in the image whose ordinate is y and whose midpoint is (x, y) coincides with the horizon line at the far end of the road; the rectangular region whose vertices are given by the equation image (not reproduced here) is taken as the vehicle detection area, where W is the width of the vehicle detection area and H is its height;
in the step 1, graying of the vehicle detection area is realized by adopting a weighted average method:
f(i,j)=0.3R(i,j)+0.59G(i,j)+0.11B(i,j)
[equation image not reproduced: range of i over the vehicle detection area], j ∈ [y, y + H]
f (i, j) is the gray value of the pixel after graying, and R (i, j), G (i, j) and B (i, j) are the R value, G value and B value of each pixel in the vehicle detection area respectively;
the illumination intensity in step 1 is classified as:
[equation image not reproduced: definition of the sky average light-field intensity I_s(λ)]
[equation image not reproduced: definition of the road average light-field intensity I_R(λ)]
[equation image not reproduced: combination I(λ) of I_s(λ) and I_R(λ) with coefficient λ]
where I_s(λ) is the sky average light-field intensity, I_R(λ) is the road average light-field intensity, I(λ) is the combined sky/road average light-field intensity, and λ ≤ 0.5 is a proportionality coefficient; S_l(x_s, y_s) is the pixel gray value in the left sky sampling region of the captured frame and S_r(x_s, y_s) is the pixel gray value in the right sky sampling region, with x_s ∈ [0.1u, 0.9u] and y_s ∈ [0, 0.05v];
R_l(x_r, y_r) is the pixel gray value in the left road sampling region of the captured frame and R_r(x_r, y_r) is the pixel gray value in the right road sampling region, with x_r ∈ [0.1u, 0.9u] and y_r ∈ [0.95v, v]; m is the number of pixels in the left sky sampling region, n is the number of pixels in the right sampling region, and M is the maximum pixel gray level within the sampling regions;
according to the light-field intensity value, the scene is a weak-light scene when I(λ) < 95, a normal-illumination scene when 95 < I(λ) < 180, and a strong-light scene when I(λ) > 180;
in the step 1, gray stretching is carried out on the vehicle detection area image according to the illumination intensity classification:
if the scene is a strong-light scene:
[equation image not reproduced: gray-stretching curve for strong light]
if the scene is a normal-illumination scene:
[equation image not reproduced: gray-stretching curve for normal illumination]
if the scene is a weak-light scene:
[equation image not reproduced: gray-stretching curve for weak light]
Y (i, j) is the gray value after gray stretching, and f (i, j) is the gray value before gray stretching of the gray map.
3. The vision-based vehicle detection, tracking and early warning method of claim 1, wherein: the training samples in the step 2 are M k x k sample images;
the M sample images are manually labeled as M_1 positive sample images containing a vehicle and M_2 negative sample images not containing a vehicle, and the haar feature values of the positive and negative sample images are calculated from the integral image of each sample:
A(x, y) = Σ_{x′≤x, y′≤y} Y(x′, y′)
H(i) = A(x−1, y−1) + A(x+w−1, y+h−1) − A(x−1, y+h−1) − A(x+w−1, y−1)
where A(x, y) is the integral image of the gray-stretched sample, H(i), i ∈ [0, M], is the haar feature value of the positive and negative sample images, w is the width of the sliding window used when computing the haar feature value, and h is the height of the sliding window;
the LBP features of the M_1 positive sample images containing a vehicle and the M_2 negative sample images not containing a vehicle are calculated as follows:
after the gray stretching of step 1, a k×k neighborhood is taken around the central pixel of the stretched image Y(i, j); the neighborhood contains k² pixel values. The central pixel value i_c is used as a threshold, and the other (k×k−1) pixel values are compared with it: a position is marked 1 if its value is greater than the threshold and 0 otherwise. This produces a (k×k−1)-bit binary number for the region, and the decimal value of that binary number is the LBP value L(i) of the central pixel, i ∈ [0, M];
the haar features H(i) and LBP features L(i) of the positive and negative sample images are used to train an Adaboost cascade classifier with the Adaboost algorithm.
4. The vision-based vehicle detection, tracking and early warning method of claim 1, wherein: in the step 5, after the vehicle is tracked through the steps 1 to 4, the distance from the front vehicle to the host vehicle is calculated by using a position-based distance estimation method:
the distance from the head of the vehicle to the tail of the front vehicle is d:
[equation image not reproduced: closed-form expression for the distance d]
where H is the mounting height of the camera, α is the camera's field-of-view angle, θ_c is the angle between the mounted camera's optical axis and the vertical direction, h_i is the pixel height of the image formed by the camera, and d_p is the pixel distance from the tail of the leading vehicle to the top of the image;
the host-vehicle speed v is collected in real time from the on-board GPS module while driving, and the relative collision time is t = d/v;
if t < β, a warning is issued.
CN201810940833.4A 2018-08-17 2018-08-17 Vehicle detection tracking early warning method based on vision Active CN109190523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810940833.4A CN109190523B (en) 2018-08-17 2018-08-17 Vehicle detection tracking early warning method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810940833.4A CN109190523B (en) 2018-08-17 2018-08-17 Vehicle detection tracking early warning method based on vision

Publications (2)

Publication Number Publication Date
CN109190523A CN109190523A (en) 2019-01-11
CN109190523B true CN109190523B (en) 2021-06-04

Family

ID=64918282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810940833.4A Active CN109190523B (en) 2018-08-17 2018-08-17 Vehicle detection tracking early warning method based on vision

Country Status (1)

Country Link
CN (1) CN109190523B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887281B (en) * 2019-03-01 2021-03-26 北京云星宇交通科技股份有限公司 Method and system for monitoring traffic incident
CN109948582B (en) * 2019-03-28 2021-03-02 湖南大学 Intelligent vehicle reverse running detection method based on tracking trajectory analysis
CN110415561B (en) * 2019-06-14 2021-07-02 青岛科技大学 Non-conflict meeting situation analysis method for ship cluster situation
EP4062361A1 (en) 2019-11-18 2022-09-28 Boston Scientific Scimed, Inc. Systems and methods for processing electronic medical images to determine enhanced electronic medical images
CN111091061B (en) * 2019-11-20 2022-02-15 浙江工业大学 Vehicle scratch detection method based on video analysis
CN111414857B (en) * 2020-03-20 2023-04-18 辽宁工业大学 Front vehicle detection method based on vision multi-feature fusion
CN111422190B (en) * 2020-04-03 2021-08-31 北京四维智联科技有限公司 Forward collision early warning method and system for rear car loader
CN111914627A (en) * 2020-06-18 2020-11-10 广州杰赛科技股份有限公司 Vehicle identification and tracking method and device
CN111879360B (en) * 2020-08-05 2021-04-23 吉林大学 Automatic driving auxiliary safety early warning system in dark scene and early warning method thereof
CN112818736A (en) * 2020-12-10 2021-05-18 西南交通大学 Emergency command big data supporting platform
CN113643325B (en) * 2021-06-02 2022-08-16 范加利 Method and system for warning collision of carrier-based aircraft on aircraft carrier surface
CN114066968A (en) * 2021-11-05 2022-02-18 郑州高识智能科技有限公司 Vehicle speed measuring method based on visual image processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379926B2 (en) * 2007-12-13 2013-02-19 Clemson University Vision based real time traffic monitoring
CN103455820A (en) * 2013-07-09 2013-12-18 河海大学 Method and system for detecting and tracking vehicle based on machine vision technology
CN107704833A (en) * 2017-10-13 2018-02-16 杭州电子科技大学 A kind of front vehicles detection and tracking based on machine learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379926B2 (en) * 2007-12-13 2013-02-19 Clemson University Vision based real time traffic monitoring
CN103455820A (en) * 2013-07-09 2013-12-18 河海大学 Method and system for detecting and tracking vehicle based on machine vision technology
CN107704833A (en) * 2017-10-13 2018-02-16 杭州电子科技大学 A kind of front vehicles detection and tracking based on machine learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vehicle detection combining gradient analysis and AdaBoost classification; A. Khammari et al.; 2005 IEEE Intelligent Transportation Systems; 2005-09-16; pp. 66-71 *
Design of the early-warning system in a machine-vision-based driver assistance system; 毛河; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15; I140-1236 *

Also Published As

Publication number Publication date
CN109190523A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
EP1671216B1 (en) Moving object detection using low illumination depth capable computer vision
US9384401B2 (en) Method for fog detection
Siogkas et al. Traffic lights detection in adverse conditions using color, symmetry and spatiotemporal information
CN105206109B (en) A kind of vehicle greasy weather identification early warning system and method based on infrared CCD
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN107891808B (en) Driving reminding method and device and vehicle
CN103984950B (en) A kind of moving vehicle brake light status recognition methods for adapting to detection on daytime
CN107886034B (en) Driving reminding method and device and vehicle
KR101240499B1 (en) Device and method for real time lane recogniton and car detection
CN103034843B (en) Method for detecting vehicle at night based on monocular vision
CN105488453A (en) Detection identification method of no-seat-belt-fastening behavior of driver based on image processing
CN109948552B (en) Method for detecting lane line in complex traffic environment
CN104881661B (en) Vehicle checking method based on structural similarity
CN102509098A (en) Fisheye image vehicle identification method
Fernández et al. Real-time vision-based blind spot warning system: Experiments with motorcycles in daytime/nighttime conditions
Andreone et al. Vehicle detection and localization in infra-red images
Siogkas et al. Random-walker monocular road detection in adverse conditions using automated spatiotemporal seed selection
CN111783666A (en) Rapid lane line detection method based on continuous video frame corner feature matching
Lin et al. Adaptive IPM-based lane filtering for night forward vehicle detection
Jiang et al. Target detection algorithm based on MMW radar and camera fusion
CN103927548A (en) Novel vehicle collision avoiding brake behavior detection method
CN107220632B (en) Road surface image segmentation method based on normal characteristic
Coronado et al. Detection and classification of road signs for automatic inventory systems using computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant