CN103886760B - Real-time vehicle detecting system based on traffic video - Google Patents

Real-time vehicle detecting system based on traffic video

Info

Publication number
CN103886760B
CN103886760B (application CN201410142327.2A)
Authority
CN
China
Prior art keywords
gradient
vehicle
image
template
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410142327.2A
Other languages
Chinese (zh)
Other versions
CN103886760A (en)
Inventor
李涛 (Li Tao)
叶茂 (Ye Mao)
向涛 (Xiang Tao)
李冬梅 (Li Dongmei)
朱晓珺 (Zhu Xiaojun)
张栋梁 (Zhang Dongliang)
包志均 (Bao Zhijun)
唐红强 (Tang Hongqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Chantu Intelligent Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201410142327.2A
Publication of CN103886760A
Application granted
Publication of CN103886760B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time vehicle type detection system based on traffic video, which uses key regions of the vehicle contour as salient regions to characterize the vehicle and thereby judge its type. A gradient map is first built by sampling densely in the salient region and sparsely in the non-salient region, and templates are established from it; fast parallel look-up-table matching is then realized through spread-gradient binary coding, cosine-similarity-based pre-stored gradient response tables, linearized-memory techniques and a hierarchical template library built by k-means clustering. The invention provides a simple and fast vehicle type detection system based on traffic video with good robustness.

Description

Real-time vehicle detecting system based on traffic video
Technical field
The present invention relates to the technical field of traffic vehicle type detection, and in particular to a real-time vehicle type detection system based on traffic video.
Background art
As a fast and convenient means of transport, the automobile is widely used, but the rapid growth in the number of vehicles in recent years has put immense pressure on urban traffic and made the work of the corresponding management personnel increasingly heavy. With the rapid development of computer vision technology and hardware products, Intelligent Transportation Systems (ITS) have emerged to address these increasingly serious traffic problems. Vehicle type recognition is an important component of an intelligent transportation system, and some existing techniques related to it have been widely applied. These techniques perform well for specific vehicle types in scenes with steady conditions, for example buses and three-box (sedan) cars.
Current vehicle type recognition concentrates mainly on vehicle feature description and vehicle template contour matching. Feature-description approaches mainly obtain a feature description of the vehicle from the video images collected by monitoring equipment, thereby characterizing the vehicle type and achieving vehicle type detection in video. The features currently used to describe vehicles concentrate on single features such as Harris corner features, HOG features, Gabor features and SIFT features, or on joint features formed by combining single features.
Template-description approaches are mainly concerned with how to build a standard vehicle template library and a template search mechanism: a target is obtained from the video image collected by the monitoring equipment and matched against the corresponding templates in the vehicle template library to determine the relevant vehicle type. Vehicles are mostly located in unconstrained open environments that are complex and changeable, with illumination variation, viewpoint changes and so on, so an accurate, real-time vehicle type detection method with high adaptability to complex scenes is needed.
A first piece of retrieved prior art: the invention "Vehicle automatic identification method based on vehicle frontal image and template matching" by inventors at South China University of Technology including Lin Zhenze, publication number CN103324920A.
This invention discloses a vehicle automatic identification method based on the vehicle frontal image and template matching: the vehicle region is determined from the license plate on the grayscale image and a vehicle template of unified size is established; the relevant gradients are computed within the template, and after normalization the gradient values are fed into a neural network for training, which outputs results for eight vehicle type classes. The flow of this vehicle type algorithm is shown in Fig. 1.
In this prior art, the collected vehicle frontal image is first converted to grayscale and the horizontal gradient map of the grayscale image is computed; because of the shape and placement characteristics of the license plate, the plate position and width are easily obtained from the horizontal gradient map.
In this prior art, because the width and position of the license plate are largely fixed, the position located from the horizontal gradient map, together with an appropriate scaling, yields a vehicle region roughly centered at that position; the obtained region is then scaled to a unified feature-extraction template.
In this prior art, the template gradient values obtained are normalized and the resulting data are used as the features for vehicle type judgment; they are input to a neural network, a neural network model is obtained by training, and this model is then used to output the vehicle type information of the detected data.
The disadvantage of this prior art is that it uses the gradient information of the vehicle to obtain the license plate width and position and thereby determine the vehicle template, but using gradient information alone without considering the gradients around each reference point makes the vehicle feature description incomplete and can cause false results to some extent. In addition, neural network training converges slowly, suffers from shortcomings such as local extrema, and its result depends heavily on the chosen vehicle samples.
A second piece of retrieved prior art: the invention "A dynamic vehicle type recognition method in an intelligent transportation system" by inventors at China University of Petroleum (East China) including Liu Yujie, publication number CN103258213A.
This invention discloses a dynamic vehicle type recognition method in an intelligent transportation system. In the training stage, HOG features and GIST features describing the overall texture are extracted from the normalized images and used as inputs to train two classifiers with SVM. At detection time, the outputs of the two classifiers are fused by Dempster-Shafer (D-S) evidence theory, the maximum probability is taken, and vehicle type recognition is thereby completed. The concrete process of this algorithm is shown in Fig. 2.
In this prior art, the HOG feature and the global GIST descriptor are introduced in both the training and testing stages, overcoming the limited information of a single feature and fusing global and local features.
In this prior art, after the two features are obtained, two discrimination models are trained with SVM in the training stage, one per feature; at test time the HOG and GIST features of the detected vehicle are input to the trained discrimination models to obtain the corresponding outputs, forming the basis of a cascaded judgment.
In this prior art, the probabilities output by the two SVM models for the detected vehicle are fused by D-S theory to obtain the most probable value; the vehicle class corresponding to this maximum probability is the class of the vehicle to be identified, completing the cascaded judgment of vehicle type and yielding the final detection result.
The disadvantage of this prior art is that, although using the edge-describing HOG feature together with the global GIST feature enhances the robustness of the vehicle representation, and the detection results of the separately trained discrimination models are fused in a cascaded judgment, the accuracy of training and detection depends heavily on the sample set, and the sample set cannot contain vehicles under all environmental conditions, so the accuracy of vehicle type detection cannot be guaranteed in practical engineering applications.
Summary of the invention
It is an object of the present invention to provide a real-time vehicle type detection technique based on traffic video that can be applied in intelligent transportation systems.
To achieve the above object, the present invention adopts the following technical scheme: a real-time vehicle type detection system based on traffic video, comprising an offline training part and an online matching part.
The offline training part comprises the following steps: (1) compute Harris corners and obtain the salient region; then sample densely in the salient region and sparsely in the non-salient region; (2) on the sampled image, compute the corresponding spread gradients to form the vehicle type template map, binary-code the template map, and precompute and store the gradient response tables according to cosine similarity, completing the parallel computation design; (3) finally, build different subspaces with k-means clustering according to the differences between vehicle feature descriptions, establish a hierarchical vehicle template index, and record the template information.
The online matching part comprises the following steps: (1) obtain the vehicle image to be identified from the traffic scene; (2) compute the salient and non-salient regions of the vehicle image and obtain the gradient map by non-uniform sampling; (3) spread the gradient points and binary-code them; (4) obtain the corresponding gradient response maps through cosine similarity; (5) perform fast look-up-table matching in a parallel manner; (6) obtain the vehicle matching result, judge the vehicle type, and complete the vehicle type detection.
In step (1) of the offline training part, the detailed steps for computing the gradient map of the non-uniformly sampled salient and non-salient regions are:
(1.1) first obtain the Harris corners on the vehicle contour;
(1.2) draw circles centered at the Harris corners with a pixel-neighborhood radius R=6 on a blank image of the same size as the vehicle template map, then find the connected domains on this image, thereby locating the salient region of the vehicle image;
(1.3) sample densely in the salient region and sparsely in the non-salient region; compute the image gradients of the three RGB channels of the non-uniformly sampled image, and for each gradient point take the maximum gradient magnitude over the three channels; then keep only the gradient points with larger magnitudes by thresholding; quantize the obtained gradients into N (e.g. N=5) gradient directions, and take the most frequent gradient direction within each gradient point's neighborhood as the direction of that point;
(1.4) binary-code the quantized gradient directions, representing each direction with a binary string of length N=5, to form a gradient map in binary representation.
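As an illustration of steps (1.3)-(1.4), the following sketch computes per-channel RGB gradients, keeps the strongest channel per pixel, quantizes the orientation into N=5 bins and encodes each kept point as a 5-bit one-hot code. It is only a minimal approximation under stated assumptions: the magnitude threshold, the bin layout and the omission of the neighborhood vote are illustrative choices, not values taken from the patent.

```python
import numpy as np
import cv2

N_BINS = 5          # number of quantized gradient directions (N=5 in the scheme)
MAG_THRESHOLD = 30  # assumed magnitude threshold; the patent does not give a value

def quantized_gradient_map(bgr_image):
    """Return (direction bin in 1..5, 0 if discarded) and a one-hot binary code per pixel."""
    img = bgr_image.astype(np.float32)
    # Sobel gradients for each of the three colour channels.
    gx = np.stack([cv2.Sobel(img[:, :, c], cv2.CV_32F, 1, 0, ksize=3) for c in range(3)], axis=-1)
    gy = np.stack([cv2.Sobel(img[:, :, c], cv2.CV_32F, 0, 1, ksize=3) for c in range(3)], axis=-1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # For each pixel keep the channel with the largest gradient magnitude.
    best = np.argmax(mag, axis=-1)
    rows, cols = np.indices(best.shape)
    gx_best, gy_best = gx[rows, cols, best], gy[rows, cols, best]
    mag_best = mag[rows, cols, best]
    # Quantize the orientation (ignoring sign) into N_BINS bins over 180 degrees.
    angle = np.mod(np.degrees(np.arctan2(gy_best, gx_best)), 180.0)
    bins = np.floor(angle / (180.0 / N_BINS)).astype(np.int32) + 1   # 1..5
    bins[mag_best < MAG_THRESHOLD] = 0                               # discard weak gradients
    # Binary (one-hot) code: bit k set <=> quantized direction k+1 is present at this pixel.
    code = np.zeros_like(bins, dtype=np.uint8)
    valid = bins > 0
    code[valid] = np.left_shift(1, bins[valid] - 1)
    return bins, code
```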
Step (2) of the offline training part also includes obtaining the gradient feature information of the template and pre-storing the response tables, and comprises the following steps:
(2.1) gradient point spreading operates on the binarized image gradient map: each gradient point is spread within its T × T neighborhood (e.g. T=3) by a bitwise OR operation, so that each point contains the gradient directions occurring within a radius of T/2, thereby obtaining the spread binary code map;
(2.2) after the spread gradient image is obtained, the similarity for template matching is computed as a cosine similarity; during matching, among all the gradient directions within the T × T neighborhood of a gradient point, the direction whose cosine response with the currently matched direction is maximal is regarded as the best-matching direction; because the gradients are quantized into N=5 levels, N=5 gradient response maps are obtained, one per gradient direction; for each gradient direction, the maximum cosine response with the set of neighborhood gradient directions represented by a binary code is precomputed and stored in memory as a gradient response table, so that the maximum cosine response corresponding to a code can be looked up directly.
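A minimal sketch of the spreading and table precomputation described in (2.1)-(2.2), assuming the 5-bit one-hot codes from the previous sketch and evenly spaced direction-bin centres over 180 degrees (an assumption; the patent does not state the bin centres).

```python
import numpy as np

N_BINS, T = 5, 3
BIN_ANGLES = (np.arange(N_BINS) + 0.5) * (180.0 / N_BINS)   # assumed bin centres in degrees

def spread_codes(code_map, t=T):
    """Bitwise-OR each pixel's code over its t x t neighbourhood (gradient spreading)."""
    h, w = code_map.shape
    spread = np.zeros_like(code_map)
    r = t // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.zeros_like(code_map)
            ys, xs = slice(max(0, dy), h + min(0, dy)), slice(max(0, dx), w + min(0, dx))
            yd, xd = slice(max(0, -dy), h + min(0, -dy)), slice(max(0, -dx), w + min(0, -dx))
            shifted[yd, xd] = code_map[ys, xs]
            spread |= shifted
    return spread

def precompute_response_tables():
    """tau[i][code] = max |cos(angle_i - angle_l)| over the directions l present in `code`."""
    tables = np.zeros((N_BINS, 1 << N_BINS), dtype=np.float32)
    for i in range(N_BINS):
        for code in range(1, 1 << N_BINS):
            best = 0.0
            for l in range(N_BINS):
                if code & (1 << l):
                    diff = np.radians(BIN_ANGLES[i] - BIN_ANGLES[l])
                    best = max(best, abs(np.cos(diff)))
            tables[i, code] = best
    return tables
```

At matching time the spread code at a pixel indexes directly into the row of the table for the template's quantized direction, which is what makes the later look-up-table scoring possible.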
In step (3) of the offline training part, k-means clustering determines the vehicle type subspaces and a hierarchical index is established, comprising the following steps:
(3.1) to speed up the search and reduce the number of vehicle type templates used in each match, the template library images are coarsely clustered by appearance with the k-means clustering method, forming different vehicle type space distributions;
(3.2) on the basis of the vehicle type space distribution, the vehicle template library is divided into two layers and a hierarchical index is established: the first layer contains the coarse vehicle-class templates and the second layer contains the concrete vehicle type templates.
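A sketch of this coarse clustering step, assuming each template is summarized by a fixed-length feature vector (for example its flattened, downsampled gradient map) and using scikit-learn's KMeans; the feature choice and the default of three coarse classes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_two_level_index(template_features, template_ids, n_coarse_classes=3):
    """First layer: one coarse-class centroid per cluster; second layer: the templates in each class."""
    feats = np.asarray(template_features, dtype=np.float32)
    km = KMeans(n_clusters=n_coarse_classes, n_init=10, random_state=0).fit(feats)
    index = {c: [] for c in range(n_coarse_classes)}
    for tid, label in zip(template_ids, km.labels_):
        index[label].append(tid)          # second layer: concrete templates of that coarse class
    return km.cluster_centers_, index     # first layer: one representative per coarse class
```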
In step (1) of the online matching part, the vehicle detection image to be identified is first obtained by means of a Gaussian mixture model and adaptive threshold adjustment; in this step, as much unnecessary foreground as possible is removed to reduce the computation range of the subsequent matching algorithm and improve detection efficiency.
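A minimal sketch of this foreground-extraction step, using OpenCV's MOG2 background subtractor as a stand-in for the Gaussian mixture model described here; the history length, variance threshold and minimum blob area are illustrative assumptions.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

def extract_vehicle_candidates(frame, min_area=2000):
    """Return bounding boxes of foreground blobs large enough to be vehicles."""
    fg = subtractor.apply(frame)                                   # mixture-of-Gaussians foreground mask
    _, fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)         # drop shadow pixels (value 127)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)              # remove small noise blobs
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```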
Step (2) of the online matching part computes the gradient map of the non-uniformly sampled points of the vehicle to be identified and comprises the following steps:
(2.1) first obtain the Harris corners on the vehicle contour;
(2.2) draw circles centered at the Harris corners with a radius of R=6 pixels on a blank image of the same size as the vehicle template map, then find the connected domains on this image, thereby locating the salient region of the vehicle image;
(2.3) sample densely in the obtained salient region and sparsely in the non-salient region; compute the image gradients of the three RGB channels of the non-uniformly sampled image, and for each gradient point take the maximum gradient magnitude over the three channels; then keep only the gradient points with larger magnitudes by thresholding; quantize the obtained gradients into N (N=5 in this scheme) gradient directions, and take the most frequent gradient direction within each gradient point's neighborhood as the direction of that point;
(2.4) binary-code the quantized gradient directions, representing each direction with a binary string of length N=5, to form a gradient map in binary representation.
In step (3) of the online matching part, the gradient points of the vehicle to be detected are spread and binary-coded: gradient point spreading operates on the binarized image gradient map, spreading each gradient point within its T × T neighborhood (T=3 in this scheme) by a bitwise OR operation so that each point contains the gradient directions occurring within a radius of T/2, thereby obtaining the spread binary code map.
Step (4) of the online matching part computes the gradient response maps: among all the gradient directions within the T × T (T=3) neighborhood of a gradient point, the direction whose cosine response with the currently matched direction is maximal is regarded as the best-matching direction.
Step (5) of the online matching part computes the matching and comprises the following steps:
(5.1) to further improve the speed of the algorithm, the gradient response maps are computed in parallel: the gradient response maps are first linearized, forming the linearized memory of cell*cell (cell=2 here) gradient response maps, each of the 5 gradient response maps being linearized into cell*cell=4 row vectors;
(5.2) parallel computation is realized through the linear memories, so the template matching similarities of multiple windows can be computed at the same time; during matching, matching proceeds through the hierarchical templates of the template library: the gradient direction of each gradient point in the template image selects the linear memory of its corresponding gradient response map, and the position of that gradient point within the cell*cell region determines its offset in that linear memory (a row vector);
(5.3) finally all the row vectors are aligned by their offsets and the cosine response values at corresponding positions are summed; each element of the summed row vector is the template similarity of one detection window, and the coordinate position corresponding to the maximum is the location of the target.
In this scheme, N, T and cell are natural numbers greater than 0, preferably N=5, T=3 and cell=2.
The beneficial effects of the invention are as follows: the vehicle type detection of the present scheme is applicable to the detection of nearly all vehicle types. To address the high complexity of template-based vehicle type detection and the slow speed of searching the template library, this scheme samples densely in the salient region and sparsely in the non-salient region of the size-normalized vehicle image to obtain the sample points, then spreads the gradients at the sample points and binary-codes them to build vehicle type templates suited to parallel computation; it coarsely clusters vehicle appearances with k-means to build subspaces for different vehicle types and a multi-level vehicle type search; in the concrete matching process it uses the gradient response tables precomputed offline and parallel in-memory computation at matching time, computing fast look-up-table responses through cosine similarity. Compared with previous schemes, it achieves better real-time performance and detection accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of prior art one;
Fig. 2 is the flow chart of prior art two;
Fig. 3 is the vehicle type detection flow chart of the present invention;
Fig. 4 shows the flow of obtaining non-uniform sample points based on the salient and non-salient regions;
Fig. 5 shows gradient quantization and the corresponding binary coding;
Fig. 6 shows the gradient spreading of the image and the process of forming the binary-coded gradient map;
Fig. 7 shows the precomputation of the gradient response tables;
Fig. 8 shows the computation of the gradient response maps;
Fig. 9 shows the linearization of the gradient response maps;
Fig. 10 shows the computation of the vehicle template matching similarity map;
Fig. 11 is an example of building part of the vehicle template library index;
Fig. 12 shows the vehicle type matching process;
Fig. 13 shows image size normalization.
Detailed description of the invention
The invention will be further described below in conjunction with the accompanying drawings.
Embodiment 1: a real-time vehicle type detection system based on traffic video. The present invention uses an improved template matching method. Sample points are obtained by a non-uniform sampling scheme; on the basis of the sample points, the gradients are quantized, and the quantized gradient directions are spread and binary-coded to obtain the corresponding spread gradient map, from which the templates are built; a hierarchical template library is then established by coarse k-means clustering. The spread gradient map of the vehicle to be detected is obtained in the same way, and template matching is performed through hierarchical retrieval. In the first stage of template building, the uniform sampling of the traditional approach is abandoned in favor of non-uniform sampling based on the vehicle salient region, which greatly reduces the number of gradient points involved in template matching and the amount of response-map computation during matching. In the second stage, in which the template library is formed, response tables are established from the binary code values of the gradient map obtained after quantization and spreading, and the results are stored so that they can be looked up quickly during matching. In the template matching stage, a parallel fast matching scheme improves the matching speed. In addition, template matching is very sensitive to image size, so image sizes are normalized: the sizes of the template images in the template library are normalized, and the size of the extracted vehicle image is normalized at the same time, which both reduces the number of templates in the library and improves matching speed and accuracy. The specific embodiments of the present invention are further described below.
1) Non-uniform sampling stage
Unlike conventional vehicle type methods based on uniform contour sampling, this method collects sample points by sampling densely in the vehicle salient region and sparsely in the non-salient region, as shown in Fig. 4. The specific flow is described as follows:
(1.1) Compute the Harris corners to obtain the corresponding points. A Harris corner is a point with large variation in both the horizontal and vertical directions.
(1.2) After the corners are obtained, draw a circle of radius R=6 pixels around each corner on a blank image of the same size as the vehicle template image, then find the connected domains on this image.
(1.3) Transfer the locations found on the equally sized blank image to the corresponding positions of the vehicle image to obtain the salient region; everything outside the salient region is defined as the non-salient region.
(1.4) Sample densely in the salient region of the vehicle and sparsely in the non-salient region.
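A sketch of steps (1.1)-(1.4) above, assuming OpenCV; the Harris quality threshold and the dense/sparse sampling strides are illustrative assumptions that the patent does not specify.

```python
import numpy as np
import cv2

def salient_region_mask(gray, radius=6, harris_quality=0.01):
    """Harris corners -> circles of radius R=6 on a blank image -> connected domains = salient mask."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=harris_quality,
                                      minDistance=3, useHarrisDetector=True, k=0.04)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    if corners is not None:
        for x, y in corners.reshape(-1, 2):
            cv2.circle(mask, (int(x), int(y)), radius, 255, thickness=-1)
    # Connected domains of the circle image give the salient region.
    _, labels = cv2.connectedComponents(mask)
    return (labels > 0).astype(np.uint8)

def nonuniform_sample_points(mask, dense_stride=2, sparse_stride=8):
    """Sample densely inside the salient region and sparsely outside it."""
    ys, xs = np.nonzero(mask)
    dense = [(x, y) for x, y in zip(xs, ys) if x % dense_stride == 0 and y % dense_stride == 0]
    ys2, xs2 = np.nonzero(mask == 0)
    sparse = [(x, y) for x, y in zip(xs2, ys2) if x % sparse_stride == 0 and y % sparse_stride == 0]
    return dense + sparse
```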
2) Build the gradient templates and realize parallel matching
After the non-uniform sampling of the vehicle is completed, the template is built further: the gradients are quantized and binary-coded accordingly to realize the corresponding fast parallel matching. The concrete implementation is as follows:
(2.1) Compute the image gradients of the three RGB channels on the obtained sample-point image (the RGB color model is an industry color standard in which a wide range of colors is obtained by varying and superimposing the red (R), green (G) and blue (B) channels; it covers almost all colors perceivable by human vision and is one of the most widely used color systems at present). For each gradient point, take the maximum gradient magnitude over the three channels. To strengthen robustness against noise and illumination variation, the gradient image is filtered by a threshold so that only the gradient points with larger magnitudes are kept.
(2.2) For the retained points, quantize the gradient direction into 5 directions; the specific quantization standard is shown in Fig. 5. The most frequent gradient direction in each gradient point's neighborhood is taken as the direction of that point, and each direction is converted into a binary code.
(2.3) To further increase the noise resistance of the image features, the surrounding neighborhood points are introduced and the gradient points of the image are spread to incorporate the surrounding context. Specifically, the neighborhood is divided by 3*3 regions; the gradient change of each point occurring in the region is the superposition of the gradient directions occurring in its neighborhood; all directions are then represented by the binary coding scheme and combined by a bitwise OR operation, obtaining the new binary gradient representation. The spreading process is shown in Fig. 6.
(2.4) After the gradient image is obtained, the matching quality is measured by cosine similarity: the closer the gradient direction at a gradient point of the template is to the gradient direction at the corresponding gradient point of the detected image, the larger the computed cosine response and the higher the similarity. The concrete formula is described as follows:
S(Image, Template, c) = Σ_{t ∈ Z} max_{i ∈ M(c+t)} |cos(ori(Template, t) − ori(Image, i))|
Here S(Image, Template, c) denotes the similarity of the template match in the current region; c is the offset of the current region; Z is the template region; M is the region corresponding to the template window in the detected image; and i and t are the gradient index values of the detected image and of the template image, respectively.
During matching, among all the gradient directions in the neighborhood of a gradient point, the direction whose cosine response with the currently matched direction is maximal is regarded as the best-matching direction. Through the above process, the N=5 gradient response maps corresponding to the current detected image are computed, one response map per gradient direction. The formula for precomputing the gradient response tables is:
τ_i[ζ] = max_{l ∈ ζ} |cos(ori(i) − ori(l))|
Here ζ is the binary code formed by the set of gradient directions in the neighborhood, i is the quantized gradient direction (ranging from 1 to N=5), and ori(i) and ori(l) are the angles of the quantized directions i and l. The concrete process of computing the N=5 gradient response tables T1, T2, T3, T4, T5 is shown in Fig. 7.
(2.5) The N=5 gradient response maps corresponding to the current detected image are constructed by looking up the gradient response tables; the concrete process of computing the N=5 gradient response maps M1, M2, M3, M4, M5 is shown in Fig. 8.
(2.6) Parallel computation is realized during matching. To compute the gradient responses in parallel, the gradient response maps are first linearized, forming the linearized memory of Cell*Cell (Cell=2 in this scheme) gradient response maps. The detailed linearization process is shown in Fig. 9: each of the N=5 gradient response maps is linearized into Cell*Cell=4 row vectors of linear memory.
(2.7) Parallel computation is realized through the linear memories, so the template matching similarities of multiple windows can be computed at the same time. During matching, the gradient direction of each gradient point in the template image selects the linear memory of its corresponding gradient response map, and the position of the gradient point within the cell*cell region then determines its offset in that linear memory (a row vector):
Offset=(Y/cell) * (Width/cell)+(X/cell)
Here (X, Y) is the coordinate position of the gradient point in the template image, and Width is the width of the current detected image. The linear memories (row vectors) of the gradient response maps corresponding to all the gradient points in the template image can thus be found and their offsets computed; finally, all the row vectors are aligned by their offsets and the cosine response values at corresponding positions are summed. Each element of the summed row vector is the similarity of the template in one detection window, and the coordinate position corresponding to its maximum is the location of the target. This parallel computation design greatly increases the matching speed.
The concrete process of computing the vehicle template matching similarity map is shown in Fig. 10.
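A sketch of the linearization and look-up scoring of steps (2.6)-(2.7) with cell=2. The layout follows the description above (one row vector per response map and per position inside the cell*cell block, with the offset formula given there), but the exact memory organization is an assumption for illustration.

```python
import numpy as np

CELL = 2  # cell = 2 in this scheme

def linearize_response_map(resp, cell=CELL):
    """Split one H x W response map into cell*cell row vectors, one per position inside the cell block."""
    h, w = resp.shape
    h, w = h - h % cell, w - w % cell                        # crop to a multiple of cell
    vectors = {(oy, ox): resp[oy:h:cell, ox:w:cell].reshape(-1)
               for oy in range(cell) for ox in range(cell)}  # row-major row vectors
    return vectors, w // cell                                # row length = Width / cell

def match_template(linearized, width_cells, template_points, n_windows):
    """Sum the looked-up responses for n_windows consecutive anchor positions on the cell grid.

    linearized[d] is the row-vector dict for quantized direction d (from linearize_response_map);
    template_points is a list of (x, y, d) gradient points of the template.
    Assumes offset + n_windows never runs past the end of a row vector."""
    scores = np.zeros(n_windows, dtype=np.float32)
    for x, y, d in template_points:
        row = linearized[d][(y % CELL, x % CELL)]
        offset = (y // CELL) * width_cells + (x // CELL)     # Offset = (Y/cell)*(Width/cell) + (X/cell)
        scores += row[offset:offset + n_windows]             # align by offset and accumulate
    return scores, int(np.argmax(scores))                    # argmax = best-matching window
```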
3) Build the different vehicle type subspaces by k-means clustering and establish the hierarchical index
Establish the vehicle template library index. To reduce the number of vehicle type templates used in each match and meet the requirement of real-time vehicle type recognition, this scheme builds an index for the vehicle template library. The k-means clustering method is used to coarsely cluster the template images, yielding a two-layer index: the first layer contains the coarse vehicle-class templates and the second layer contains the concrete vehicle type templates. During matching, the coarse-class templates are matched first and the class with the highest matching rate is selected; a second match is then performed with the concrete vehicle type templates of that class, thereby matching the concrete vehicle type. An example of building part of the vehicle template library index is shown in Fig. 11.
4) Normalization of image size
The template matching algorithm is very sensitive to image size, so image sizes must be normalized. Specifically, the template images in the template library and the vehicle image to be identified are normalized to unified sizes by scaling according to the actual aspect ratio of the image; the concrete scaling formula is as follows:
H2 = (W2 / W1) × H1
Here W2 and H2 are the width and height of the image after scaling, and W1 and H1 are the width and height before scaling. The following was found by analyzing matching experiments between vehicle templates of various sizes and the extracted vehicle images to be identified.
(4.1) For the extracted vehicle image to be identified, the scaled image width W2 is taken as 160 pixels, because experiments show that at this size the number of gradient feature points obtained meets the dual requirements of efficiency and effect.
(4.2) For the vehicle template images, the scaled template widths W2 are taken as 155, 145, 135 and 125 pixels. The extracted vehicle image to be identified is not guaranteed to be exactly the whole vehicle and has a small amount of non-vehicle region around it, but experimental analysis shows that this non-vehicle region is largely confined to a certain range; the four sizes W2 = 155, 145, 135 and 125 pixels were therefore chosen by experimental analysis, as shown in Fig. 12.
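A sketch of this aspect-ratio-preserving normalization, H2 = (W2 / W1) × H1, with the widths given above; the interpolation mode is an illustrative choice.

```python
import cv2

DETECTION_WIDTH = 160                      # W2 for the extracted vehicle image to be identified
TEMPLATE_WIDTHS = (155, 145, 135, 125)     # W2 candidates for the template images

def normalize_width(image, target_width):
    """Scale to the target width while keeping the aspect ratio: H2 = (W2 / W1) * H1."""
    h1, w1 = image.shape[:2]
    h2 = int(round(target_width / w1 * h1))
    return cv2.resize(image, (target_width, h2), interpolation=cv2.INTER_AREA)
```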
5) Vehicle type matching process
The concrete vehicle type matching process is as follows:
(5.1) Normalize the size of the extracted vehicle image (image size normalization).
(5.2) Perform parallel matching with the index templates and, from the three coarse-class templates, select the class with the highest matching similarity.
(5.3) Perform a second match with the concrete vehicle type templates of that class, thereby matching the concrete vehicle type.
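A sketch of the two-stage matching in steps (5.1)-(5.3), assuming a match_score(image, template) function that returns the look-up-table similarity described earlier and the two-level index from the clustering sketch above; both names are illustrative assumptions.

```python
def hierarchical_match(image, coarse_templates, index, concrete_templates, match_score):
    """Match the coarse-class templates first, then only the concrete templates of the best class."""
    # Stage 1: pick the coarse class whose representative template matches best.
    best_class = max(coarse_templates, key=lambda c: match_score(image, coarse_templates[c]))
    # Stage 2: match only the concrete vehicle-type templates belonging to that class.
    best_type = max(index[best_class], key=lambda tid: match_score(image, concrete_templates[tid]))
    return best_class, best_type
```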
As shown in figure 13, the vehicle type matching process is illustrated taking a "van" as an example.
In the present technical scheme, without affecting the matching effect, the matching efficiency on the normalized image is improved by a non-uniform sampling scheme; the binary coding formed after gradient spreading not only increases the robustness of the vehicle representation but also lays the foundation for the subsequent parallel fast look-up-table computation; and the second-stage matching over the hierarchical index built by coarse clustering further increases the matching speed. This scheme thus establishes an efficient and fast vehicle type detection.

Claims (4)

1. A real-time vehicle type detection system based on traffic video, characterized by comprising an offline training part and an online matching part;
the offline training part comprises the following steps: (1) compute Harris corners and obtain the salient region; then sample densely in the salient region and sparsely in the non-salient region; (2) on the sampled image, compute the corresponding spread gradients to form the vehicle type template map, binary-code the template map, and precompute and store the gradient response maps according to cosine similarity, completing the parallel computation design; (3) finally, build different subspaces with k-means clustering according to the differences between vehicle feature descriptions, establish a hierarchical vehicle template index, and record the vehicle template information;
step (2) of the offline training part includes obtaining the gradient feature information of the vehicle template and pre-storing the gradient response maps, and comprises the following steps: (2.1) gradient point spreading operates on the binarized image gradient map, spreading each gradient point within its T × T neighborhood, thereby obtaining the spread binary code map;
(2.2) after the spread gradient image is obtained, the similarity for template matching is computed as a cosine similarity; during matching, among all the gradient directions within the T × T neighborhood of a gradient point, the direction whose cosine response with the currently matched direction is maximal is regarded as the best-matching direction; because the gradients are quantized into N levels, N gradient response maps are obtained, one per gradient direction; for each gradient response map, the maximum cosine response with the set of neighborhood gradient directions represented by a binary code is precomputed and stored in memory, so that the maximum cosine response corresponding to a binary code can be looked up;
the online matching part comprises the following steps: (1) obtain the vehicle image to be identified from the traffic scene; (2) compute the salient and non-salient regions of the vehicle image and obtain the gradient map by non-uniform sampling; (3) spread the gradient points and binary-code them, specifically: divide the neighborhood by 3*3 regions, the gradient change of each gradient point occurring in the region being the superposition of the gradient directions occurring in its neighborhood, then represent all directions by binary coding and combine them by a bitwise OR operation to obtain the new binary gradient representation; (4) obtain the corresponding gradient response maps through cosine similarity, specifically: compute the gradient response maps, where among all the gradient directions in the T × T neighborhood of a gradient point, the direction whose cosine response with the currently matched direction is maximal is regarded as the best-matching direction, each gradient direction corresponding to one gradient response map; (5) perform fast look-up-table matching in a parallel manner; (6) obtain the vehicle matching result, judge the vehicle type, and complete the vehicle type detection;
step (2) of the online matching part computes the gradient map of the non-uniformly sampled points of the vehicle to be identified and comprises the following steps:
(2.1) first obtain the Harris corners on the vehicle contour;
(2.2) draw circles centered at the Harris corners with a radius of R=6 pixels on a blank image of the same size as the vehicle template map, then find the connected domains on this image, thereby locating the salient region of the vehicle image;
(2.3) sample densely in the obtained salient region and sparsely in the non-salient region; compute the image gradients of the three RGB channels of the non-uniformly sampled image, and for each gradient point take the maximum gradient magnitude over the three channels; then keep only the gradient points with larger magnitudes by thresholding; quantize the obtained gradients into N gradient directions, and take the most frequent gradient direction within each gradient point's neighborhood as the direction of that point;
(2.4) binary-code the quantized gradient directions, representing each direction with a binary string of length N, to form a gradient map in binary representation;
step (5) of the online matching part comprises the following steps:
(5.1) to further improve the speed of the algorithm, the gradient response maps are computed in parallel: the gradient response maps are first linearized, forming the linear memories of cell*cell gradient response maps, each of the 5 gradient response maps being linearized into 4 row vectors;
(5.2) parallel computation is realized through the linear memories, so the template matching similarities of multiple detection windows can be computed at the same time; during matching, matching proceeds through the hierarchical templates of the vehicle template library: the gradient direction of each gradient point in the vehicle template image selects the linear memory of its corresponding gradient response map, and the position of that gradient point within the cell*cell region determines its offset in that linear memory;
(5.3) finally, all the row vectors are aligned by their offsets and the cosine response values at corresponding positions are summed; each element of the summed row vector is the template matching similarity of one detection window, and the coordinate position corresponding to its maximum is the location of the target;
said N, T and cell being natural numbers greater than 0.
2. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (1) of the offline training part, the salient region and the non-salient region are obtained and non-uniformly sampled, and the detailed steps of computing the gradient map from the sample points are:
(1.1) first obtain the Harris corners on the vehicle contour;
(1.2) draw circles centered at the Harris corners with a pixel-neighborhood radius R=6 on a blank image of the same size as the vehicle template map, then find the connected domains on this image, thereby locating the salient region of the vehicle image;
(1.3) sample densely in the obtained salient region and sparsely in the non-salient region; compute the image gradients of the three RGB channels of the non-uniformly sampled image, and for each gradient point take the maximum gradient magnitude over the three channels; then keep only the gradient points with larger magnitudes by thresholding; quantize the obtained gradients into N gradient directions, and take the most frequent gradient direction within each gradient point's neighborhood as the direction of that point;
(1.4) binary-code the quantized gradient directions, representing each direction with a binary string of length N, to form a gradient map in binary representation.
3. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (3) of the offline training part, k-means clustering determines the vehicle type subspaces and a hierarchical index is established, comprising the following steps:
(3.1) to speed up the search and reduce the number of vehicle type templates used in each match, the vehicle template library images are coarsely clustered by appearance with the k-means clustering method, forming different vehicle type space distributions;
(3.2) on the basis of the vehicle type space distribution, the vehicle template library is divided into two layers and a hierarchical index is established: the first layer contains the coarse vehicle-class templates and the second layer contains the concrete vehicle type templates.
4. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (1) of the online matching part, the vehicle image to be identified is first obtained by means of a Gaussian mixture model and adaptive threshold adjustment; in this step, as much unnecessary foreground as possible is removed to reduce the computation range of the subsequent matching algorithm and improve detection efficiency.
CN201410142327.2A 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video Expired - Fee Related CN103886760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410142327.2A CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410142327.2A CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Publications (2)

Publication Number Publication Date
CN103886760A CN103886760A (en) 2014-06-25
CN103886760B true CN103886760B (en) 2016-09-21

Family

ID=50955627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410142327.2A Expired - Fee Related CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Country Status (1)

Country Link
CN (1) CN103886760B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112122A (en) * 2014-07-07 2014-10-22 叶茂 Vehicle logo automatic identification method based on traffic video
CN104268573B (en) * 2014-09-24 2017-12-26 深圳市华尊科技股份有限公司 Vehicle checking method and device
CN104765768B (en) * 2015-03-09 2018-11-02 深圳云天励飞技术有限公司 The quick and precisely search method of magnanimity face database
CN105574944A (en) * 2015-12-15 2016-05-11 重庆凯泽科技有限公司 Highway intelligent toll collection system based on vehicle identification and method thereof
CN107025459A (en) * 2016-01-29 2017-08-08 中兴通讯股份有限公司 A kind of model recognizing method and device
CN108256566A (en) * 2018-01-10 2018-07-06 广东工业大学 A kind of adaptive masterplate matching process and device based on cosine similarity
CN109388727A (en) * 2018-09-12 2019-02-26 中国人民解放军国防科技大学 BGP face rapid retrieval method based on clustering
CN109212605A (en) * 2018-09-28 2019-01-15 中国科学院地质与地球物理研究所 pseudo-differential operator storage method and device
CN109194952B (en) * 2018-10-31 2020-09-22 清华大学 Head-mounted eye movement tracking device and eye movement tracking method thereof
CN112016393A (en) * 2020-07-21 2020-12-01 华人运通(上海)自动驾驶科技有限公司 Vehicle parameter acquisition method, device, equipment and storage medium
CN113705576B (en) * 2021-11-01 2022-03-25 江西中业智能科技有限公司 Text recognition method and device, readable storage medium and equipment
CN117316373B (en) * 2023-10-08 2024-04-12 医顺通信息科技(常州)有限公司 HIS-based medicine whole-flow supervision system and method thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3360163B2 (en) * 1997-05-09 2002-12-24 株式会社日立製作所 Traffic flow monitoring device
KR100918837B1 (en) * 2009-07-10 2009-09-28 완전정보통신(주) System for hybrid detection vehicles and method thereof
CN101976341B (en) * 2010-08-27 2013-08-07 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images
CN102044151B (en) * 2010-10-14 2012-10-17 吉林大学 Night vehicle video detection method based on illumination visibility identification
CN103258213B (en) * 2013-04-22 2016-04-27 中国石油大学(华东) A kind of for the dynamic vehicle model recognizing method in intelligent transportation system
CN103295003B (en) * 2013-06-07 2016-08-10 北京博思廷科技有限公司 A kind of vehicle checking method based on multi-feature fusion
CN103324920A (en) * 2013-06-27 2013-09-25 华南理工大学 Method for automatically identifying vehicle type based on vehicle frontal image and template matching

Also Published As

Publication number Publication date
CN103886760A (en) 2014-06-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Tao, Gao Dawei, Tang Hongqiang, Li Dongmei, Qiao Pizhe, Zhu Xiaojun, Zhang Dongliang, Qu Hao, Zou Xiangling, Guo Hangyu, Liu Yong

Inventor before: Li Tao, Ye Mao, Xiang Tao, Li Dongmei, Zhu Xiaojun, Zhang Dongliang, Bao Zhijun, Tang Hongqiang

COR Change of bibliographic data
TR01 Transfer of patent right

Effective date of registration: 20170728

Address after: 450048, Henan economic and Technological Development Zone, Zhengzhou Second Avenue West, South all the way Xinghua science and Technology Industrial Park, No. 2, building 9, room 908, -37

Patentee after: ZHENGZHOU CHANTU INTELLIGENT TECHNOLOGY CO.,LTD.

Address before: Yuelu District City, Hunan province 410000 Changsha Lushan Road No. 932

Patentee before: Li Tao

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160921

CF01 Termination of patent right due to non-payment of annual fee