CN102142090A - Vehicle detection method and system - Google Patents
- Publication number
- CN102142090A (application CN201110062681A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides a vehicle detection method and system. The method comprises the following steps: obtaining feature maps of an input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map; calculating the feature value corresponding to each pixel in the feature maps, and combining the feature values to obtain a saliency value corresponding to each pixel; obtaining a salient region according to the position of the pixel corresponding to the maximum saliency value, and merging the salient regions to obtain a vehicle candidate region; and detecting vehicles in the vehicle candidate region with a classifier trained in advance on samples. On the premise of meeting real-time requirements, the vehicle detection method and system obtain vehicle candidate regions through analysis of color, orientation and motion features, and achieve a high detection rate and a low false alarm rate.
Description
Technical field
The present invention relates to the field of intelligent transportation technology, and in particular to a vehicle detection method and system.
Background art
Urban traffic monitoring systems, as an effective technology for reducing traffic accidents and congestion, are widely used in major cities. A low-altitude urban traffic monitoring system mainly uses an unmanned aerial vehicle carrying a camera to capture urban road traffic imagery, and then processes the captured video sequence to detect the vehicles in the video images. Detecting the vehicles on the road, and their corresponding information, from a low-altitude platform has become a key technology of great interest to both the research community and industry.
Researchers at home and abroad have achieved many results in vehicle detection. A vehicle detection method needs to satisfy the following requirements: 1) real-time requirement: the detection speed must be faster than the capture rate of the video; 2) detection-rate requirement: as many of the vehicles in the video stream as possible must be detected; 3) false-alarm-rate requirement: non-vehicle objects falsely reported as vehicles must be reduced as far as possible.
Existing vehicle detection methods usually adopt the frame difference method or the background subtraction method. The frame difference method extracts vehicle regions from an image by pixel-wise temporal differencing and thresholding between two or three consecutive frames of an image sequence. The background subtraction method first selects a background image from the video frames, then subtracts the background image from the current video frame to cancel the background; if the number of remaining pixels is greater than a certain threshold, a moving object is judged to exist in the monitored scene and is further detected as a vehicle. However, the frame difference method is strongly affected by illumination and environment, and the background subtraction method has difficulty obtaining an accurate background for video captured from a low-altitude platform. Both methods therefore have large errors in vehicle detection, resulting in a low detection rate and a high false alarm rate.
Summary of the invention
In view of this, the present invention provides a vehicle detection method to solve the technical problems of low detection rate and high false alarm rate in prior-art vehicle detection, while also satisfying the real-time requirement.
The present invention also provides a vehicle detection system, in order to guarantee the realization and application of the above method in practice.
To achieve the above objects, the present invention provides the following technical solutions:
A vehicle detection method, the method comprising:
obtaining feature maps of an input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map;
calculating the feature value corresponding to each pixel in the feature maps, and combining the feature values to obtain a saliency value corresponding to each pixel;
obtaining a salient region according to the position of the pixel corresponding to the maximum saliency value, and merging the salient regions to obtain a vehicle candidate region; and
detecting vehicles in the vehicle candidate region with a classifier trained in advance on samples.
Preferably, obtaining the feature maps of the input video image comprises:
obtaining the color features of the current input frame to form color feature maps, the color feature maps comprising the red r, green g and blue b primary-color feature maps and the grayscale feature map I=(r+g+b)/3;
filtering the grayscale feature map I with a filter to obtain orientation feature maps, the orientation feature maps comprising feature maps for the four orientations {0°, 45°, 90°, 135°}; and
obtaining motion feature maps from images several frames apart from the current frame by an image differencing algorithm, the motion feature maps comprising those computed by differencing the current frame with the images 3, 4 and 5 frames away, respectively.
Preferably, calculating the feature value corresponding to each pixel in the feature maps, and combining the feature values to obtain the saliency value corresponding to each pixel, comprises:
calculating the difference between each pixel and its surrounding pixels in a feature map, and taking the mean of all the differences as the feature value of that pixel;
combining the feature values according to a first saliency calculation formula to obtain a first initial saliency value for each pixel; and
combining the first initial saliency values according to a second saliency calculation formula to obtain the saliency value corresponding to each pixel.
Preferably, combining the first initial saliency values according to the second saliency calculation formula to obtain the saliency value corresponding to each pixel comprises:
combining the first initial saliency values according to the second saliency calculation formula to obtain a second initial saliency value corresponding to each pixel; and
combining the inhibition-of-return value and position enhancement value corresponding to each pixel with the second initial saliency value to obtain the saliency value.
Preferably, obtaining the salient region according to the position of the pixel corresponding to the maximum saliency value, and merging the salient regions to obtain the vehicle candidate region, specifically comprises:
determining the central pixel of the region formed by the pixel corresponding to the maximum saliency value together with the pixels whose saliency values differ from the maximum by no more than a certain range;
obtaining the salient region with a rectangle operator formula according to the first initial saliency values of the pixels and the position of the central pixel; and
merging the salient regions to obtain the vehicle candidate region.
Preferably, the classifier is a cascade classifier, and detecting vehicles in the vehicle candidate region with the classifier trained in advance on samples comprises:
obtaining the best vehicle candidate region with the first three single-stage classifiers of the cascade through a sliding-window scan;
detecting with the remaining single-stage classifiers of the cascade whether the best vehicle candidate region is a vehicle; and
marking the detected vehicles in the originally input video image.
The present invention also provides a vehicle detection system, the system comprising:
a feature map acquisition unit, configured to obtain feature maps of an input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map;
a calculation unit, configured to calculate the feature value corresponding to each pixel in the feature maps, and combine the feature values to obtain a saliency value corresponding to each pixel;
a region acquisition unit, configured to obtain a salient region according to the position of the pixel corresponding to the maximum saliency value, and merge the salient regions to obtain a vehicle candidate region; and
a vehicle detection unit, configured to detect vehicles in the vehicle candidate region with a classifier trained in advance on samples.
Preferably, the feature map acquisition unit comprises:
a color feature map acquisition unit, configured to obtain the color features of the current input frame to form color feature maps, the color feature maps comprising the red r, green g and blue b primary-color feature maps and the grayscale feature map I=(r+g+b)/3;
an orientation feature map acquisition unit, configured to filter the grayscale feature map I with a filter to obtain orientation feature maps, the orientation feature maps comprising feature maps for the four orientations {0°, 45°, 90°, 135°}; and
a motion feature map acquisition unit, configured to obtain motion feature maps from images several frames apart from the current frame by an image differencing algorithm, the motion feature maps comprising those computed by differencing the current frame with the images 3, 4 and 5 frames away, respectively.
Preferably, the calculation unit comprises:
a feature value calculation unit, configured to calculate the difference between each pixel and its surrounding pixels in a feature map, and take the mean of all the differences as the feature value of that pixel;
a first initial saliency value calculation unit, configured to combine the feature values according to the first saliency calculation formula to obtain a first initial saliency value for each pixel; and
a saliency value calculation unit, configured to combine the first initial saliency values according to the second saliency calculation formula to obtain the saliency value corresponding to each pixel.
Preferably, the saliency value calculation unit comprises:
a second initial saliency value calculation unit, configured to combine the first initial saliency values according to the second saliency calculation formula to obtain a second initial saliency value corresponding to each pixel; and
a saliency value calculation subunit, configured to combine the inhibition-of-return value and position enhancement value corresponding to each pixel with the second initial saliency value to obtain the saliency value.
Preferably, the region acquisition unit comprises:
a central point determination unit, configured to determine the central pixel of the region formed by the pixel corresponding to the maximum saliency value together with the pixels whose saliency values differ from the maximum by no more than a certain range;
a salient region acquisition unit, configured to obtain the salient region with a rectangle operator formula according to the first initial saliency values and the position of the central pixel; and
a vehicle candidate region acquisition unit, configured to merge the salient regions to obtain the vehicle candidate region.
Preferably, the vehicle detection unit comprises:
a best region acquisition unit, configured to obtain the best vehicle candidate region with the first three single-stage classifiers of the cascade classifier through a sliding-window scan;
a vehicle detection subunit, configured to detect with the remaining single-stage classifiers of the cascade whether the best vehicle candidate region is a vehicle; and
a marking unit, configured to mark the detected vehicles in the originally input video image.
As can be seen from the above technical solutions, compared with the prior art, the present invention provides a vehicle detection method and system that detect vehicles against a complex urban traffic background. Feature maps corresponding to the color, orientation and motion of the input video image are obtained, the feature value of each pixel in the feature maps is calculated, and the saliency value corresponding to each pixel is finally combined from them; a salient region is obtained by a corresponding algorithm from the position of the pixel corresponding to the maximum saliency value, the salient regions are merged into a vehicle candidate region, and a trained classifier then accurately judges whether the vehicle candidate region is a vehicle. On the premise of meeting the real-time requirement, the present invention obtains vehicle candidate regions from the three features of color, orientation and motion, achieving a high detection rate and a low false alarm rate, while avoiding the computational complexity and unnecessary computing cost brought by too many features, which would slow down the processing speed of the system.
Description of drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of a vehicle detection method of the present invention;
Fig. 2 is a structural diagram of an embodiment of a vehicle detection system of the present invention;
Fig. 3 is a structural diagram of the calculation unit of a vehicle detection system of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The embodiments of the invention disclose a vehicle detection method and system that detect vehicles against a complex urban traffic background. Feature maps corresponding to the color, orientation and motion features of the input video image are extracted, the saliency value corresponding to each pixel is obtained through corresponding calculations, vehicle candidate regions are obtained from the saliency values, and a trained classifier then accurately judges whether each vehicle candidate region is a vehicle. On the premise of meeting the real-time requirement, the present invention obtains vehicle candidate regions from the three features of color, orientation and motion, achieving a high detection rate and a low false alarm rate.
Referring to Fig. 1, which shows a flowchart of an embodiment of a vehicle detection method of the present invention, the method may comprise the following steps:
Step 101: obtain feature maps of the input video image.
A feature map is an image containing the feature information of a video frame; the feature information includes color, orientation, motion, etc. A feature map may be a concrete image, such as a color histogram, or an abstract intermediate within a specific algorithm that does not correspond to an actual picture.
Since a single image feature is hardly sufficient to distinguish vehicle from non-vehicle regions, while using too many features brings higher computational complexity and unnecessary computing cost, slowing the system down and harming real-time detection, the present invention adopts the three low-complexity, easy-to-obtain features of color, orientation and motion as the basis for extracting vehicle candidate regions.
Obtaining the feature maps of the input video image specifically comprises:
First, the color features of the current input frame are obtained to form the color feature maps.
The color features comprise the red r, green g and blue b primary-color features, where r, g and b respectively represent the red, green and blue channels of the input image. In the input video image the primary-color channels are coupled: at brightly lit pixels all three primaries tend to be high, and at dark pixels all three tend to be low. Therefore, when the primaries are introduced as color features, four new color channels I=(r+g+b)/3, R=r−(g+b)/2, G=g−(r+b)/2 and B=b−(r+g)/2 are also introduced to compensate for this coupling, yielding seven color feature maps in total, where I is the grayscale feature of the image. In addition, the hue of pixels in shadow cannot be perceived by the visual system, and such pixels will not become part of a vehicle candidate region; therefore the new color features are computed only at pixels whose gray value is greater than 1/10 of the maximum gray value of the image, and are uniformly set to 0 elsewhere.
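The per-pixel color channels above can be sketched as follows; this is a minimal illustration of the stated formulas, not the patented implementation, and zeroing only the coupling channels (R, G, B) in shadow is an assumption about which "new features" are suppressed.

```python
def color_channels(r, g, b, max_gray):
    """Compute the grayscale I and coupling-compensated R, G, B channels
    for one pixel; R, G, B are set to 0 where the gray value is not
    greater than 1/10 of the image's maximum gray value (shadow)."""
    I = (r + g + b) / 3.0
    if I <= max_gray / 10.0:
        return I, 0.0, 0.0, 0.0   # hue unreliable in shadow
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    return I, R, G, B
```

Applied over a whole frame, this yields the I, R, G, B maps that, together with the r, g, b maps, make up the seven color feature maps.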
Then, the grayscale feature map I is filtered with a filter to obtain the orientation feature maps.
Because a vehicle, unlike the road and roadside buildings, has obvious directionality, the orientation features of the video image can be adopted as a basis for obtaining vehicle candidate regions. The Gabor filter G(σ, θ, f), being sensitive to the orientation of a local region, is a common technique for extracting orientation features. According to the size of vehicles in the image, specific σ and f are selected, and the grayscale feature map I is filtered with Gabor filters in the four orientations {0°, 45°, 90°, 135°}, yielding 4 orientation feature maps for the directions {0°, 45°, 90°, 135°}. The present invention extracts the orientation features only from the grayscale feature map I among the color feature maps.
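A Gabor filter bank for the four orientations can be sketched as below. The patent does not give the exact kernel expression, so this uses the generic textbook real Gabor form; the kernel size, σ and f values are placeholder assumptions that would in practice be tuned to the vehicle size in the image.

```python
import math

def gabor_kernel(size, sigma, theta_deg, f):
    """Real Gabor kernel G(sigma, theta, f) sampled on a size x size grid:
    a Gaussian envelope times a cosine carrier along orientation theta."""
    theta = math.radians(theta_deg)
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * f * xr))
        kernel.append(row)
    return kernel

# one kernel per orientation used in the embodiment
bank = {th: gabor_kernel(9, 2.0, th, 0.25) for th in (0, 45, 90, 135)}
```

Convolving the grayscale map I with each kernel in `bank` gives the 4 orientation feature maps.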
Next, motion feature maps are obtained by applying an image differencing algorithm to images several frames apart from the current frame.
So that the image differencing algorithm can better capture the difference between two frames, the images several frames apart may specifically be images at intervals of 3 to 6 frames, depending on the fps (frames per second) of the original video.
A vehicle has an obvious displacement relative to its surroundings, and different time intervals produce different saliency responses to this motion characteristic. The embodiment of the invention therefore computes motion features from the images 3, 4 and 5 frames away from the current frame, respectively, yielding three motion feature maps. It should be noted that the numbers of frames apart, and the number of images selected, may also take other concrete values; this embodiment imposes no specific limitation.
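The image differencing step can be sketched as absolute per-pixel differences, which is a common reading of the differencing algorithm named above (the patent does not fix the exact variant):

```python
def frame_difference(cur, prev):
    """Absolute per-pixel difference of two equal-size grayscale frames,
    each given as a list of rows."""
    return [[abs(c - p) for c, p in zip(cr, pr)]
            for cr, pr in zip(cur, prev)]

def motion_feature_maps(frames, t, gaps=(3, 4, 5)):
    """Motion maps for frame t against the frames `gap` frames earlier,
    with the gaps of 3, 4 and 5 used in the embodiment."""
    return [frame_difference(frames[t], frames[t - gap]) for gap in gaps]
```

For a real video the gaps would be chosen according to the fps, as noted above.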
In this embodiment, 14 feature maps of the video image are obtained in total, comprising 7 color feature maps, 4 orientation feature maps and 3 motion feature maps. These specific numbers follow from the actual conditions and experimental data; in practical applications the feature maps of the present invention are not limited to 14.
Step 102: calculate the feature value corresponding to each pixel in the feature maps, and combine the feature values to obtain the saliency value corresponding to each pixel.
In this embodiment, three categories of feature maps have been obtained in step 101, each category containing several maps. Calculating the feature value corresponding to each pixel in the feature maps specifically means calculating, in every feature map, the difference between each pixel and its surrounding pixels, and taking the mean of all the differences as the feature value of that pixel; each pixel of every feature map thus obtains one feature value. These feature values are then combined.
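The center-surround feature value can be sketched as below. The patent does not fix the neighbourhood size, so the 8-neighbourhood here is an assumption.

```python
def feature_value(fmap, x, y):
    """Feature value of pixel (x, y): mean absolute difference between
    the pixel and each of its existing 8-neighbours (borders have fewer)."""
    h, w = len(fmap), len(fmap[0])
    diffs = [abs(fmap[y][x] - fmap[ny][nx])
             for ny in range(max(0, y - 1), min(h, y + 2))
             for nx in range(max(0, x - 1), min(w, x + 2))
             if (nx, ny) != (x, y)]
    return sum(diffs) / len(diffs)
```

Run over every pixel of every feature map, this produces one feature value per pixel per map.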
First, the feature values are combined according to the first saliency calculation formula to obtain the first initial saliency value of each pixel.
Here the first saliency calculation formula adopts the normalization operator N(·), which can be expressed as follows:
a. normalize all pixels of the entire map to a fixed range [0, M];
b. find the global maximum M of the map and the mean m̄ of all its other local maxima;
c. multiply the whole map by (M − m̄)².
The first saliency calculation formula is specifically:
S_i(x, y) = (1/num_i) · Σ_j N(S_ij(x, y))
where i ∈ {0, 1, 2} respectively represents the color, orientation and motion features, j ∈ {0, 1, 2, ...} is the index of a feature map, S_ij denotes the new image formed by taking the feature values of the j-th feature map of feature i as pixel values, and num_i is the number of feature maps corresponding to the i-th feature.
Since the feature maps are extracted from the original video image, each pixel occupies the same position in every feature map. The formula means that, for each category of feature, the feature-value pixels at corresponding positions of the feature maps are first passed through the N(·) operator, then summed and averaged, and the mean is taken as the first initial saliency value of the pixel. In this embodiment, since three categories of features are extracted from the video image, each pixel has three different first initial saliency values, corresponding respectively to the color, orientation and motion features.
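The operator N(·) can be sketched on a list-of-lists map as below. Step (c), weighting by (M − m̄)², follows the classic saliency literature, since the patent text itself stops after step (b); treating plateau pixels as local maxima is a further simplification of this sketch.

```python
def normalize_N(fmap, M=1.0):
    """Normalization operator N(.): scale the map to [0, M], then weight
    the whole map by (M - mbar)^2, where mbar is the mean of the local
    maxima other than the global maximum."""
    lo = min(min(row) for row in fmap)
    hi = max(max(row) for row in fmap)
    scale = M / (hi - lo) if hi > lo else 0.0
    norm = [[(v - lo) * scale for v in row] for row in fmap]

    h, w = len(norm), len(norm[0])
    maxima = []
    for y in range(h):
        for x in range(w):
            neigh = [norm[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (nx, ny) != (x, y)]
            if norm[y][x] >= max(neigh):       # local maximum (plateaus count)
                maxima.append(norm[y][x])
    others = [m for m in maxima if m < M]
    mbar = sum(others) / len(others) if others else 0.0
    wgt = (M - mbar) ** 2
    return [[v * wgt for v in row] for row in norm]
```

A map with one dominant peak keeps its full weight, while a map with many comparable peaks is suppressed, which is what lets the three feature channels be combined fairly.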
In order to obtain the most salient position in the video image, the first initial saliency values are combined according to the second saliency calculation formula to obtain the saliency value corresponding to each pixel.
Here the second saliency calculation formula still adopts the N(·) operator, described above. The second saliency calculation formula is:
S(x, y) = (1/3) · Σ_{i=0}^{2} N(S_i(x, y))
where S_i(x, y) is the first initial saliency value given by the first saliency calculation formula above, and i ∈ {0, 1, 2} respectively represents the color, orientation and motion features.
The formula means that the three first initial saliency values of each pixel are first passed through the N(·) operator, then summed and averaged, and the mean is taken as the saliency value of the pixel. Each pixel thus corresponds to one saliency value.
Based on the final combined saliency values, the subsequent selection of vehicle candidate regions can be carried out.
It should be noted that, for the saliency values calculated with the first and second saliency calculation formulas in this embodiment, when the color, orientation and motion features extracted from the original video image each have only one corresponding feature map, the first saliency calculation formula becomes identical to the second, and the calculated feature value is also the first initial saliency value. The present invention does not limit the number of feature maps, so deriving the saliency values from the feature values is not limited to the process described in step 102, which is only one embodiment; it suffices that the final saliency value is derived from the feature values by a corresponding algorithmic formula.
Step 103: obtain a salient region according to the position of the pixel corresponding to the maximum saliency value, and merge the salient regions to obtain the vehicle candidate region.
From the saliency values calculated in step 102, find the pixel corresponding to the maximum saliency value, then find, by a corresponding algorithm, the nearby pixels whose saliency values differ from the maximum by no more than a certain range, forming a central region; the central point of this region, i.e. the central pixel, is then obtained.
The corresponding algorithm may be the flood fill method, and the certain range may specifically be 10%, or of course another concrete range according to actual conditions; no specific restriction is made here.
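The flood-fill growth of the central region and the choice of its central pixel can be sketched as below; 4-connectivity and the centroid as "central point" are assumptions of this sketch, and the 10% tolerance follows the embodiment.

```python
from collections import deque

def salient_center(sal):
    """Grow a region by flood fill from the maximum-saliency pixel over
    4-connected neighbours whose saliency is within 10% of the maximum,
    and return the (rounded) centroid of that region as (cx, cy)."""
    h, w = len(sal), len(sal[0])
    my, mx = max(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: sal[p[0]][p[1]])
    thresh = sal[my][mx] * 0.9
    seen, queue, region = {(my, mx)}, deque([(my, mx)]), []
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and sal[ny][nx] >= thresh:
                seen.add((ny, nx))
                queue.append((ny, nx))
    cy = sum(y for y, _ in region) / len(region)
    cx = sum(x for _, x in region) / len(region)
    return round(cx), round(cy)
```

The returned pixel is the central pixel from which the rectangle operator is then evaluated.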
After the central pixel is found, a rectangle operator formula is adopted to obtain the salient region according to the position of this central pixel. The rectangle operator defines the difference between a central area and its surrounding area, and is specifically:

LSA_i(w_c, h_c) = |2·Sum_i(w_c, h_c) − Sum_i(w_s, h_s)|

In this formula, Sum_i denotes the sum of the first initial saliency values of all pixels in the area, (w_c, h_c) is the size of the central area, (w_s, h_s) is the size of the surrounding area, and i ∈ {0, 1, 2} respectively represents the color, orientation and motion features. In this embodiment, (w_s, h_s) is defined as (4/3)·(w_c, h_c), and the size maximizing the calculated LSA_i(w_c, h_c, w_s, h_s) gives the extent of the salient region. As can be seen from the formula, three salient regions are acquired, corresponding to the color, orientation and motion features. The salient regions need to be merged to obtain the vehicle candidate region; the merging formula still adopts the N(·) operator, specifically:

FLSA(w_c, h_c) = Σ_{i=0}^{2} N(LSA_i(w_c, h_c))

The maximum of the resulting FLSA gives the extent of the vehicle candidate region.
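The rectangle operator can be sketched with direct summation as below (in practice an integral image would accelerate the box sums); clipping the box to the map borders is an assumption of this sketch.

```python
def box_sum(fmap, cx, cy, w, h):
    """Sum of first initial saliency values in the w x h box centred at
    (cx, cy), clipped to the map; a direct-summation Sum_i."""
    total = 0.0
    for y in range(max(0, cy - h // 2), min(len(fmap), cy + h // 2 + 1)):
        for x in range(max(0, cx - w // 2), min(len(fmap[0]), cx + w // 2 + 1)):
            total += fmap[y][x]
    return total

def lsa(fmap, cx, cy, wc, hc):
    """Rectangle operator LSA_i = |2*Sum(w_c, h_c) - Sum(w_s, h_s)| with
    the surround fixed at 4/3 of the central size, as in the embodiment."""
    return abs(2 * box_sum(fmap, cx, cy, wc, hc)
               - box_sum(fmap, cx, cy, 4 * wc // 3, 4 * hc // 3))
```

Scanning `lsa` over candidate sizes (w_c, h_c) and keeping the maximizing size yields the extent of the salient region for that feature channel.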
To accelerate the computation, preferably, the embodiment of the present invention adopts the integral image method to rapidly extract the sum of first initial saliency values in the central area. The salient region computation process is then as follows:

LSA_i(w_c, h_c) = |2·Sum_i(w_c, h_c) − Sum_i(4w_c/3, 4h_c/3)|

Sum_i(w_c, h_c) = INT_i(CenterX + w_c/2, CenterY + h_c/2)
− INT_i(CenterX − w_c/2, CenterY + h_c/2)
− INT_i(CenterX + w_c/2, CenterY − h_c/2)
+ INT_i(CenterX − w_c/2, CenterY − h_c/2)

where INT_i is the integral image of the first initial saliency values for feature i, and (CenterX, CenterY) is the center of the salient region obtained in each iteration of the computation.
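The integral image and the four-corner box sum can be sketched as follows; the extra zero row and column (so that every lookup stays in bounds) are a standard convention, not mandated by the patent.

```python
def integral_image(fmap):
    """INT with one extra row and column of zeros:
    INT[y][x] = sum of fmap over all rows < y and columns < x."""
    h, w = len(fmap), len(fmap[0])
    INT = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            INT[y + 1][x + 1] = (fmap[y][x] + INT[y][x + 1]
                                 + INT[y + 1][x] - INT[y][x])
    return INT

def box_sum_fast(INT, x0, y0, x1, y1):
    """Sum over the half-open box [x0, x1) x [y0, y1) in O(1), the
    four-corner combination used for Sum_i above."""
    return INT[y1][x1] - INT[y0][x1] - INT[y1][x0] + INT[y0][x0]
```

Once INT_i is built (one pass per feature map), every Sum_i in the size search costs only four lookups, which is what makes the iterative region search real-time.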
In addition, as another preferred embodiment of the present invention: since during candidate selection each vehicle candidate region can be selected only once, while vehicles may also exist around it, the value given by the second saliency calculation formula in step 102 above is here taken as a second initial saliency value, and the inhibition-of-return value and position enhancement value corresponding to each pixel are combined with it to serve as the final saliency value of that pixel.
The inhibition-of-return value can be defined by a formula that suppresses the saliency of pixels inside already-selected regions, where (CenterX_k, CenterY_k) is the center of the initial salient region obtained in the k-th iteration — computed by the method of step 103 from the position of the pixel corresponding to the maximum second initial saliency value together with the first initial saliency values — and (RangeX_k, RangeY_k) is the extent of that initial salient region.
The position enhancement value is defined as follows:

E_k(x, y) = α_k · exp(−((x − CenterX_k)² + (y − CenterY_k)²) / (2σ_k²))

This formula enhances the region centered on (CenterX_k, CenterY_k): the closer to the center, the more obvious the enhancement, where α_k denotes the Gaussian enhancement coefficient and σ_k is the variance of the Gaussian enhancement function.
According to the obtained inhibition-of-return value, position enhancement value and second initial saliency value, the final saliency value is obtained by multiplying them together. The maximum is then selected from the saliency values, a salient region is obtained according to the position of the pixel corresponding to this maximum, and the salient regions are afterwards merged to obtain the vehicle candidate region; the concrete manner of obtaining them is as described in step 103 and is not repeated here.
Step 104: detect vehicles in the vehicle candidate region with a classifier trained in advance on samples.
A classifier is a machine learning program that, after training, can automatically classify given data.
The samples comprise positive samples and negative samples: positive samples are vehicle pictures, and negative samples are non-vehicle pictures. The classifier is obtained offline by training on the positive and negative samples, and the quality and quantity of the samples directly influence the classification results of the classifier.
After the vehicle candidate region is acquired, the pre-trained classifier is read in, including the feature set of the trained samples, the number of layers of the classifier, and the Haar features and Haar feature values used by each layer. A Haar feature is a class of image features used for object recognition, defined as the sums and differences of pixels within several rectangular windows of arbitrary position and scale in the original image; each Haar feature is uniquely determined by the coordinates of its upper-left corner, its window size and the kind of Haar feature. The classifier is trained by pattern recognition methods on a large number of object images with clearly evident Haar features.
The detailed process of detecting whether the vehicle candidate region contains a vehicle can be as follows: scan the vehicle candidate region with a moving window of the same size as the training samples, compute the Haar feature values inside each moving window, and compare them with the Haar feature thresholds of the training samples to decide whether the candidate region is a vehicle; if the corresponding vehicle candidate region is judged to be a positive sample, it is taken to be a vehicle.
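The moving-window scan described above can be sketched as follows; the 24×24 window and step of 4 are illustrative assumptions, since the patent only requires the window to match the training-sample size.

```python
def sliding_windows(region_h, region_w, win_h=24, win_w=24, step=4):
    # Top-left corners of every win_h x win_w moving window that fits
    # inside a region_h x region_w candidate region.  Each window is
    # then evaluated by the classifier.
    return [(y, x)
            for y in range(0, region_h - win_h + 1, step)
            for x in range(0, region_w - win_w + 1, step)]
```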
The classifier can be, among others, a stacked classifier or a tree classifier.
As a preferred embodiment, when the classifier is a stacked classifier, the concrete steps of detecting whether the vehicle candidate region is a vehicle are as follows:
First, the single classifiers of the first three layers of the stacked classifier are used to obtain the best vehicle candidate region by the moving-window scanning technique.
In a practical application of this embodiment, the number of moving windows can be chosen as 8, and the selection process is as follows:
a. Compute 8 bounding rectangles R_k (k ∈ [1, 8]) of the vehicle candidate region. These rectangles satisfy two conditions: first, the angle between an edge of the bounding rectangle and an edge of the vehicle candidate region is k × 20°; second, the four vertices of the salient region lie on the four sides of the bounding rectangle.
b. Rotate the 8 bounding rectangles so obtained to the horizontal, consistent with the orientation of the rectangle representing the vehicle candidate region.
c. Select a constant Z and initialize N to 0, with Z^N denoting the zoom factor. Scan the 9 rectangles (1 vehicle candidate region rectangle and 8 bounding rectangles), scaled by the zoom factor, with the moving window.
d. Every moving window in the vehicle candidate region has a corresponding window in each of the 8 bounding rectangles; these windows agree in size and center point, but after the rotation their orientations in the original image differ.
e. Use the first three layers of the stacked classifier to judge each of the 9 moving windows obtained; the window with the highest decision value is set as the best window, its orientation is the vehicle orientation, and this best window is the best vehicle candidate region.
Here the decision value is the mean of the outputs of the per-layer classifiers.
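Step e, picking the window whose mean output over the first three layers is highest, can be sketched as follows; the layout of `decision_values` (one list of three layer outputs per window, index 0 the unrotated rectangle, indices 1..8 the rectangles rotated by k × 20°) is an assumption.

```python
def best_window(decision_values):
    # decision_values[k] holds the first-three-layer classifier
    # outputs for window k.  The decision value of a window is the
    # mean of its per-layer outputs; the window with the highest
    # mean is the best window, and its index k gives the vehicle
    # orientation k * 20 degrees.
    means = [sum(v) / len(v) for v in decision_values]
    k = max(range(len(means)), key=means.__getitem__)
    return k, means[k]
```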
Second, after the vehicle orientation is obtained, the remaining single classifiers of the stacked classifier are used to detect whether the best window is a vehicle.
In the stacked classifier, each single classifier tests the candidate region with the threshold of each of its Haar features, obtaining the comparison result f_i(x) for the corresponding feature: 0 denotes non-vehicle, 1 denotes vehicle. The weighted sum of the T features is then computed as f(x) = Σ_{i=1..T} w_i f_i(x). If f(x) ≥ θ, where θ denotes the sample decision threshold, the vehicle candidate region is judged to be a vehicle; in that case the next layer of single classifiers is entered and the corresponding judgment is carried out, otherwise the region is deleted from the candidate regions. A vehicle candidate region that passes all the classifiers is finally determined to be a vehicle detected by the present invention.
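The per-layer decision f(x) = Σ_{i=1..T} w_i f_i(x) ≥ θ and the pass-all-layers rule can be sketched as follows; the data layout (one list of binary feature results, weights and a threshold per layer) is an assumption.

```python
def layer_decision(f, weights, theta):
    # One layer of the stacked classifier: f holds the binary Haar
    # comparison results f_i(x) in {0, 1}, weights the trained w_i.
    # The window passes the layer iff f(x) = sum_i w_i * f_i(x) >= theta.
    fx = sum(w * fi for w, fi in zip(weights, f))
    return fx >= theta

def cascade(results_per_layer, weights_per_layer, thetas):
    # A candidate region is declared a vehicle only if it passes every
    # layer; otherwise it is deleted from the candidate set.
    return all(layer_decision(f, w, t)
               for f, w, t in zip(results_per_layer, weights_per_layer, thetas))
```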
It should be noted that this embodiment adopts the sliding-window technique with the number of windows selected as 8 according to the best results; those skilled in the art will appreciate that the number of windows can be set according to the actual situation and can be 2, 4 or more, and is not limited to 8.
When a vehicle candidate region is determined to be a vehicle, the region can also be marked in the original video image so that it is easier to observe which objects are vehicles.
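Marking the detected region in the original frame can be as simple as drawing a rectangle outline; a minimal NumPy sketch follows (the green color and 1-pixel border are assumptions, and an H×W×3 uint8 frame is assumed).

```python
import numpy as np

def mark_region(img, y, x, h, w, color=(0, 255, 0)):
    # Draw a 1-pixel rectangle outline on a copy of an H x W x 3
    # image so the detected vehicle is visible in the original frame.
    img = img.copy()
    img[y, x:x + w] = color              # top edge
    img[y + h - 1, x:x + w] = color      # bottom edge
    img[y:y + h, x] = color              # left edge
    img[y:y + h, x + w - 1] = color      # right edge
    return img
```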
In the embodiment of the invention, the feature maps corresponding to the color, orientation and motion features in the input video image are extracted, the saliency value corresponding to each pixel is obtained by the corresponding calculations, the vehicle candidate region is obtained from these saliency values, and the trained classifier then accurately judges whether the vehicle candidate region is a vehicle. On the premise of guaranteeing the real-time requirement, the invention obtains the vehicle candidate region through the analysis of the three kinds of features (color, orientation and motion), achieving a high detection rate and a low false alarm rate while avoiding the computational complexity of using too many features, which would cause unnecessary computing overhead and affect the processing speed of the system.
By combining the visual attention mechanism with the classification algorithm, the invention can effectively detect vehicles in video captured by a moving platform with a camera height of 60-90 meters; experimental data show that the detection rate can reach 89% and the false alarm rate is below 3%.
Referring to Fig. 2, a structural diagram of a specific embodiment of a vehicle detection system of the present invention is shown; it can comprise:
A feature map acquiring unit 201, used to obtain the feature maps of the input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map.
Since a single image feature can hardly distinguish vehicle regions from non-vehicle regions, while too many features bring higher computational complexity, cause unnecessary computing overhead, slow the system down and impair real-time detection, the present invention adopts the three features of color, orientation and motion, which have low computational complexity and are easy to obtain, as the basis for extracting the vehicle candidate region.
The feature map acquiring unit can specifically comprise:
A color feature map acquiring unit, used to obtain the color features of the input current frame image to form the color feature maps; the color feature maps comprise the red r, green g and blue b primary color feature maps and the gray-scale feature map I = (r+g+b)/3.
In this embodiment, the color feature maps comprise the red r, green g and blue b primary color feature maps together with the four new color channels I = (r+g+b)/3, R = r-(g+b)/2, G = g-(r+b)/2 and B = b-(r+g)/2 as the color feature maps of the coupled relation.
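The four color channels above can be computed directly per pixel; a minimal NumPy sketch follows (a float H×W×3 frame in r, g, b order is an assumption).

```python
import numpy as np

def color_feature_maps(frame):
    # Split an H x W x 3 float frame (channels r, g, b) into the
    # channels named in the text: gray-scale I = (r+g+b)/3 and the
    # broadly tuned channels R, G, B.
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    I = (r + g + b) / 3.0
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    return I, R, G, B
```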
An orientation feature map acquiring unit, used to filter the gray-scale map I with a filter to obtain the orientation feature maps, which comprise the feature maps of the four orientations {0°, 45°, 90°, 135°}.
The filter can be chosen as a Gabor filter G(σ, θ, f); selecting a specific σ and f yields the 4 orientation feature maps for the directions {0°, 45°, 90°, 135°}.
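A hand-rolled sketch of the Gabor filtering for the four orientations follows (Python/NumPy; the 9×9 kernel, σ = 2.0 and f = 0.25 are illustrative values, since the patent only says that a specific σ and f are selected).

```python
import numpy as np

def gabor_kernel(sigma, theta, freq, size=9):
    # Real part of a Gabor filter G(sigma, theta, f): a cosine
    # grating at orientation theta under a Gaussian envelope.
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    return (np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * freq * xr))

def filter2d(img, kernel):
    # Plain correlation with edge padding -- enough for a sketch.
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def orientation_maps(I, sigma=2.0, freq=0.25):
    # One map per orientation in {0, 45, 90, 135} degrees.
    return {deg: np.abs(filter2d(I, gabor_kernel(sigma, np.deg2rad(deg), freq)))
            for deg in (0, 45, 90, 135)}
```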
A motion feature map acquiring unit, which obtains the motion feature maps from the images several frames apart from the current frame image by an image-difference algorithm; the motion feature maps comprise the maps computed by the image-difference algorithm between the current frame and the images 3, 4 and 5 frames apart.
A vehicle has obvious motion characteristics relative to its surroundings, and different time intervals produce different saliency values for the motion characteristics; the present invention therefore uses the images 3, 4 and 5 frames apart from the current frame to compute the motion features, obtaining three motion feature maps. It should be noted that the number of frames apart and the number of images selected can also take other concrete values; this embodiment makes no specific limitation in this respect.
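The three frame-difference motion maps can be produced from a short frame history; a sketch follows (NumPy; using the absolute difference as the image-difference operation is an assumption).

```python
import numpy as np
from collections import deque

class MotionMaps:
    # Keeps a short history of gray frames and returns the motion
    # feature maps |I_t - I_{t-k}| for the gaps k in (3, 4, 5).
    def __init__(self, gaps=(3, 4, 5)):
        self.gaps = gaps
        self.history = deque(maxlen=max(gaps) + 1)

    def update(self, gray):
        self.history.append(gray)
        maps = []
        for k in self.gaps:
            if len(self.history) > k:  # enough history for this gap
                maps.append(np.abs(self.history[-1] - self.history[-1 - k]))
        return maps
```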
The feature values can specifically be integrated into saliency values by the saliency calculation formulas, which comprise the first saliency calculation formula and the second saliency calculation formula; the concrete formulas are described in the method embodiment.
Accordingly, referring to Fig. 3, the structural diagram of the computing unit is shown; the computing unit can specifically comprise:
A feature value computing unit 301, used to compute the difference between each pixel in a feature map and its surrounding pixels and to take the mean of all the differences as the feature value of that pixel.
A first initial saliency value computing unit 302, used to integrate the feature values according to the first saliency calculation formula into the first initial saliency value of each pixel.
The first saliency calculation formula adopts the normalization operator N(*), described in the method embodiment. In this embodiment, since three kinds of features are extracted from the video image, each pixel corresponds to three different first initial saliency values, representing respectively those of the color, orientation and motion features.
A saliency value computing unit 303, used to integrate the first initial saliency values according to the second saliency calculation formula into the saliency value corresponding to each pixel.
The second saliency calculation formula still adopts the N(*) operator: the three first initial saliency values of each pixel are first passed through the N(*) operator, then summed and averaged, and the mean is taken as the saliency value of the pixel. Each pixel corresponds to exactly one saliency value.
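The patent defines N(*) in its method embodiment, which is not reproduced in this section; as an explicit assumption, the sketch below uses the classic Itti-Koch normalization for N(*) and then averages the three first initial saliency maps into one saliency value per pixel, as the text describes.

```python
import numpy as np

def normalize_N(m, M=1.0):
    # Itti-style normalization operator N(.): rescale to [0, M], then
    # promote maps with a single dominant peak by (M - m_bar)^2,
    # where m_bar estimates the mean of the other local maxima.
    # (The patent's exact N(*) is defined in its method embodiment;
    # this classic form is an assumption.)
    m = m - m.min()
    if m.max() > 0:
        m = m * (M / m.max())
    gmax = m.max()
    others = m[(m > M / 2) & (m < gmax)]  # crude local-maxima estimate
    m_bar = others.mean() if others.size else 0.0
    return m * (gmax - m_bar) ** 2

def saliency(first_initial_maps):
    # Pass each first initial saliency map (color, orientation,
    # motion) through N(.), then sum and average per pixel.
    return sum(normalize_N(m) for m in first_initial_maps) / len(first_initial_maps)
```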
In addition, since all vehicle candidate regions can only be selected once when the vehicle candidate region is selected, and there may also be vehicles around a selected region, the saliency value computed by the second saliency calculation formula needs to be enhanced; therefore the saliency value computing unit specifically comprises:
A second initial saliency value computing unit 3031, used to integrate the first initial saliency values according to the second saliency calculation formula into the second initial saliency value corresponding to each pixel.
A saliency value computing subunit 3032, used to integrate the return-inhibition value, the position-enhancement value and the second initial saliency value corresponding to a pixel into the saliency value.
The region acquiring unit specifically comprises:
A central point determining unit, used to determine the central pixel of the region composed of the pixel corresponding to the maximum saliency value and the other pixels whose saliency values differ from this maximum within a certain range.
The certain range can specifically be 10%.
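Determining the central pixel from the maximum and the pixels within 10% of it can be sketched as follows (NumPy; taking the centroid of the collected pixels as the central pixel is an assumption).

```python
import numpy as np

def region_center(sal, tol=0.10):
    # Collect the pixel at the global maximum together with all pixels
    # whose saliency is within tol (10%) of it, and return the
    # centroid of that region as the central pixel.
    peak = sal.max()
    ys, xs = np.nonzero(sal >= (1.0 - tol) * peak)
    return int(round(ys.mean())), int(round(xs.mean()))
```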
A salient region acquiring unit, used to obtain the salient regions with the rectangle operator formula from the first initial saliency values and the position corresponding to the central pixel.
The three salient regions corresponding respectively to the color, orientation and motion features can be obtained by the rectangle operator formula, which is described in the method embodiment.
A vehicle candidate region acquiring unit, used to integrate the salient regions into the vehicle candidate region.
The three salient regions obtained are integrated into the vehicle candidate region; the integration formula still adopts the N(*) operator.
The classifier can specifically be a stacked classifier, a tree classifier, etc.; when the classifier is a stacked classifier, the vehicle detection unit can specifically comprise:
A best region acquiring unit, used to obtain the best vehicle candidate region with the single classifiers of the first three layers of the stacked classifier by the moving-window scanning technique.
A vehicle detection subunit, used to detect with the remaining single classifiers of the stacked classifier whether the best vehicle candidate region is a vehicle.
A marking unit, used to mark the detected vehicles in the original input video image.
In the embodiment of the invention, the feature maps corresponding to the color, orientation and motion features in the input video image are extracted, the saliency value corresponding to each pixel is obtained by the corresponding calculations, the vehicle candidate region is obtained from these saliency values, and the trained classifier then accurately judges whether the vehicle candidate region is a vehicle. On the premise of guaranteeing the real-time requirement, the invention obtains the vehicle candidate region through the analysis of the three kinds of features (color, orientation and motion), achieving a high detection rate and a low false alarm rate while avoiding the computational complexity of using too many features, which would cause unnecessary computing overhead and affect the processing speed of the system.
The embodiments in this specification are described in a progressive manner; each embodiment stresses its differences from the others, and the identical or similar parts among the embodiments can be referred to mutually. Since the devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively simple, and the relevant parts can be referred to the description of the methods.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements comprises not only those elements but also other elements not explicitly listed, or also comprises elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The above description of the disclosed embodiments enables those skilled in the art to realize or use the present invention. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not restricted to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A vehicle detection method, characterized in that the method comprises:
obtaining feature maps of an input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map;
calculating the feature value corresponding to each pixel in the feature maps, and integrating the feature values to obtain the saliency value corresponding to each pixel;
obtaining salient regions according to the position of the pixel corresponding to the maximum of the saliency values, and integrating the salient regions to obtain a vehicle candidate region;
detecting vehicles in the vehicle candidate region by a classifier trained in advance on samples.
2. The method according to claim 1, characterized in that obtaining the feature maps of the input video image comprises:
obtaining the color features of the input current frame image to form the color feature maps, the color feature maps comprising the red r, green g and blue b primary color feature maps and the gray-scale feature map I = (r+g+b)/3;
filtering the gray-scale map I with a filter to obtain the orientation feature maps, the orientation feature maps comprising the feature maps of the four orientations {0°, 45°, 90°, 135°};
obtaining the motion feature maps from the images several frames apart from the current frame image by an image-difference algorithm, the motion feature maps comprising the maps computed by the image-difference algorithm between the current frame and the images 3, 4 and 5 frames apart respectively.
3. The method according to claim 1, characterized in that calculating the feature value corresponding to each pixel in the feature maps and integrating the feature values to obtain the saliency value corresponding to each pixel comprises:
calculating the difference between each pixel in a feature map and its surrounding pixels, and taking the mean of all the differences as the feature value of that pixel;
integrating the feature values according to the first saliency calculation formula into the first initial saliency value of each pixel;
integrating the first initial saliency values according to the second saliency calculation formula into the saliency value corresponding to each pixel.
4. The method according to claim 3, characterized in that integrating the first initial saliency values according to the second saliency calculation formula into the saliency value corresponding to each pixel comprises:
integrating the first initial saliency values according to the second saliency calculation formula into the second initial saliency value corresponding to each pixel;
integrating the return-inhibition value, the position-enhancement value and the second initial saliency value corresponding to a pixel into the saliency value.
5. The method according to claim 3 or 4, characterized in that obtaining the salient regions according to the position of the pixel corresponding to the maximum of the saliency values and integrating the salient regions to obtain the vehicle candidate region is specifically:
determining the central pixel of the region composed of the pixel corresponding to the maximum of the saliency values and the other pixels whose saliency values differ from this maximum within a certain range;
obtaining the salient regions with the rectangle operator formula according to the first initial saliency values corresponding to the pixels and the position corresponding to the central pixel;
integrating the salient regions into the vehicle candidate region.
6. The method according to claim 1, characterized in that the classifier is a stacked classifier, and detecting vehicles in the vehicle candidate region by the classifier trained in advance on samples comprises:
obtaining the best vehicle candidate region with the single classifiers of the first three layers of the stacked classifier by the moving-window scanning technique;
detecting with the remaining single classifiers of the stacked classifier whether the best vehicle candidate region is a vehicle;
marking the detected vehicles in the original input video image.
7. A vehicle detection system, characterized in that the system comprises:
a feature map acquiring unit, used to obtain the feature maps of an input video image, the feature maps comprising a color feature map, an orientation feature map and a motion feature map;
a computing unit, used to calculate the feature value corresponding to each pixel in the feature maps and to integrate the feature values into the saliency value corresponding to each pixel;
a region acquiring unit, used to obtain salient regions according to the position of the pixel corresponding to the maximum of the saliency values and to integrate the salient regions into a vehicle candidate region;
a vehicle detection unit, used to detect vehicles in the vehicle candidate region by a classifier trained in advance on samples.
8. The system according to claim 7, characterized in that the feature map acquiring unit comprises:
a color feature map acquiring unit, used to obtain the color features of the input current frame image to form the color feature maps, the color feature maps comprising the red r, green g and blue b primary color feature maps and the gray-scale feature map I = (r+g+b)/3;
an orientation feature map acquiring unit, used to filter the gray-scale map I with a filter to obtain the orientation feature maps, the orientation feature maps comprising the feature maps of the four orientations {0°, 45°, 90°, 135°};
a motion feature map acquiring unit, which obtains the motion feature maps from the images several frames apart from the current frame image by an image-difference algorithm, the motion feature maps comprising the maps computed by the image-difference algorithm between the current frame and the images 3, 4 and 5 frames apart.
9. The system according to claim 7, characterized in that the computing unit comprises:
a feature value computing unit, used to compute the difference between each pixel in a feature map and its surrounding pixels and to take the mean of all the differences as the feature value of that pixel;
a first initial saliency value computing unit, used to integrate the feature values according to the first saliency calculation formula into the first initial saliency value of each pixel;
a saliency value computing unit, used to integrate the first initial saliency values according to the second saliency calculation formula into the saliency value corresponding to each pixel.
10. The system according to claim 9, characterized in that the saliency value computing unit comprises:
a second initial saliency value computing unit, used to integrate the first initial saliency values according to the second saliency calculation formula into the second initial saliency value corresponding to each pixel;
a saliency value computing subunit, used to integrate the return-inhibition value, the position-enhancement value and the second initial saliency value corresponding to a pixel into the saliency value.
11. The system according to claim 9 or 10, characterized in that the region acquiring unit comprises:
a central point determining unit, used to determine the central pixel of the region composed of the pixel corresponding to the maximum of the saliency values and the other pixels whose saliency values differ from this maximum within a certain range;
a salient region acquiring unit, used to obtain the salient regions with the rectangle operator formula according to the first initial saliency values and the position corresponding to the central pixel;
a vehicle candidate region acquiring unit, used to integrate the salient regions into the vehicle candidate region.
12. The system according to claim 7, characterized in that the vehicle detection unit comprises:
a best region acquiring unit, used to obtain the best vehicle candidate region with the single classifiers of the first three layers of the stacked classifier by the moving-window scanning technique;
a vehicle detection subunit, used to detect with the remaining single classifiers of the stacked classifier whether the best vehicle candidate region is a vehicle;
a marking unit, used to mark the detected vehicles in the original input video image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110062681 CN102142090B (en) | 2011-03-15 | 2011-03-15 | Vehicle detection method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110062681 CN102142090B (en) | 2011-03-15 | 2011-03-15 | Vehicle detection method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102142090A true CN102142090A (en) | 2011-08-03 |
CN102142090B CN102142090B (en) | 2013-03-13 |
Family
ID=44409587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110062681 Expired - Fee Related CN102142090B (en) | 2011-03-15 | 2011-03-15 | Vehicle detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102142090B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722700A (en) * | 2012-05-17 | 2012-10-10 | 浙江工商大学 | Method and system for detecting abandoned object in video monitoring |
CN103473566A (en) * | 2013-08-27 | 2013-12-25 | 东莞中国科学院云计算产业技术创新与育成中心 | Multi-scale-model-based vehicle detection method |
CN104252630A (en) * | 2013-06-26 | 2014-12-31 | 现代摩比斯株式会社 | Object recognition system |
CN104809437A (en) * | 2015-04-28 | 2015-07-29 | 无锡赛睿科技有限公司 | Real-time video based vehicle detecting and tracking method |
CN106548170A (en) * | 2016-11-04 | 2017-03-29 | 湖南科技学院 | A kind of vehicles peccancy image processing method and system |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104050477B (en) * | 2014-06-27 | 2017-04-12 | 西北工业大学 | Infrared image vehicle detection method based on auxiliary road information and significance detection |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101577052A (en) * | 2009-05-14 | 2009-11-11 | 中国科学技术大学 | Device and method for detecting vehicles by overlooking |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101577052A (en) * | 2009-05-14 | 2009-11-11 | 中国科学技术大学 | Device and method for detecting vehicles by overlooking |
Non-Patent Citations (2)
Title |
---|
Renjun Lin et al., "Airborne Moving Vehicle Detection for Video Surveillance of Urban Traffic", IEEE, 2009. *
Changxia Wu et al., "Registration-based Moving Vehicle Detection for Low-altitude Urban Traffic Surveillance", Proceedings of the 8th World Congress on Intelligent Control and Automation, 2010. *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722700A (en) * | 2012-05-17 | 2012-10-10 | 浙江工商大学 | Method and system for detecting abandoned object in video monitoring |
CN104252630A (en) * | 2013-06-26 | 2014-12-31 | 现代摩比斯株式会社 | Object recognition system |
CN104252630B (en) * | 2013-06-26 | 2018-03-20 | 现代摩比斯株式会社 | Object identification system |
CN103473566A (en) * | 2013-08-27 | 2013-12-25 | 东莞中国科学院云计算产业技术创新与育成中心 | Multi-scale-model-based vehicle detection method |
CN103473566B (en) * | 2013-08-27 | 2016-09-14 | 东莞中国科学院云计算产业技术创新与育成中心 | A kind of vehicle checking method based on multiple dimensioned model |
CN104809437A (en) * | 2015-04-28 | 2015-07-29 | 无锡赛睿科技有限公司 | Real-time video based vehicle detecting and tracking method |
CN104809437B (en) * | 2015-04-28 | 2018-04-13 | 无锡赛睿科技有限公司 | A kind of moving vehicles detection and tracking method based on real-time video |
CN106548170A (en) * | 2016-11-04 | 2017-03-29 | 湖南科技学院 | A kind of vehicles peccancy image processing method and system |
CN106548170B (en) * | 2016-11-04 | 2019-11-19 | 湖南科技学院 | A kind of violation vehicle image processing method and system |
Also Published As
Publication number | Publication date |
---|---|
CN102142090B (en) | 2013-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Son et al. | Real-time illumination invariant lane detection for lane departure warning system | |
US8750567B2 (en) | Road structure detection and tracking | |
Kong et al. | General road detection from a single image | |
CN101916383B (en) | Vehicle detecting, tracking and identifying system based on multi-camera | |
CN104951784B (en) | A kind of vehicle is unlicensed and license plate shading real-time detection method | |
Jin et al. | Vehicle detection from high-resolution satellite imagery using morphological shared-weight neural networks | |
CN103824081B (en) | Method for detecting rapid robustness traffic signs on outdoor bad illumination condition | |
CN102142090B (en) | Vehicle detection method and system | |
CN102915433B (en) | Character combination-based license plate positioning and identifying method | |
Andrey et al. | Automatic detection and recognition of traffic signs using geometric structure analysis | |
CN107679508A (en) | Road traffic sign detection recognition methods, apparatus and system | |
CN107798335A (en) | A kind of automobile logo identification method for merging sliding window and Faster R CNN convolutional neural networks | |
CN103544484A (en) | Traffic sign identification method and system based on SURF | |
CN105205489A (en) | License plate detection method based on color texture analyzer and machine learning | |
CN102663357A (en) | Color characteristic-based detection algorithm for stall at parking lot | |
CN103903018A (en) | Method and system for positioning license plate in complex scene | |
CN103761529A (en) | Open fire detection method and system based on multicolor models and rectangular features | |
CN102968646A (en) | Plate number detecting method based on machine learning | |
CN102999753A (en) | License plate locating method | |
CN106778633B (en) | Pedestrian identification method based on region segmentation | |
CN108090459B (en) | Traffic sign detection and identification method suitable for vehicle-mounted vision system | |
CN102163278B (en) | Illegal vehicle intruding detection method for bus lane | |
CN102880863A (en) | Method for positioning license number and face of driver on basis of deformable part model | |
CN107292933A (en) | A kind of vehicle color identification method based on BP neural network | |
CN101369312B (en) | Method and equipment for detecting intersection in image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130313 Termination date: 20160315 |
CF01 | Termination of patent right due to non-payment of annual fee |