Summary of the invention
In order to solve the problem of accurately monitoring and discriminating road traffic conditions, the present invention proposes a method for automatically discriminating the road traffic state based on video monitoring images, comprising the following steps:
A. Acquiring a video monitoring image;
B. For the video monitoring images within a first time period, establishing a spatio-temporal background model of the video monitoring image based on a mixture-of-Gaussians model;
C. Computing the similarity between the current image and the spatio-temporal background model, and extracting foreground information based on that similarity;
D. Extracting the feature information of each connected region in the foreground information by the connected-domain (Blob) analysis method;
E. Performing vehicle identification using the distances between connected domains and their area information; if a vehicle is present, saving its feature information and going to step F; if no vehicle is present, updating the background model with the current image, outputting "clear", and going to step J;
F. Updating the background model with the background information from which the vehicle information has been removed;
G. Within the first time period, establishing a feature identification matrix from the matching results between the existing target features and the target features of the current image, and judging the motion state of each target: if a new target is found, creating new target information and tracking the target by predicting its centroid; if a target is in the ideal tracking state, tracking it by predicting its centroid; if a target disappears, deleting its target information;
H. Computing the average speed of each target within the first time period, and taking the mean of all vehicle speeds in the image as the speed parameter of the current road;
I. Judging the road traffic state against a configured speed threshold: outputting "congested" if the speed parameter is below the threshold, and "clear" if it is at or above the threshold;
J. Counting the congestion signals output, and judging whether the second-cycle time has elapsed; if so, going to step K, otherwise returning to step A;
K. Within the second cycle, counting the number of congestion outputs; if the count exceeds a predetermined value, indicating that the monitored road section is in a congested state.
Specifically, step D comprises:
D1. Removing the influence of noise in the foreground information by morphological operations;
D2. Transforming the target from the pixel level to the connected-component level, using a dilation operator to fill small holes in the target region, and then mapping the result back onto the initial foreground point set to recover the intrinsic edges of the foreground image;
D3. Counting the number of connected domains in the image and labeling each connected component;
D4. Extracting the area, perimeter, centroid position, and bounding-rectangle information of each connected domain.
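Sub-steps D3 and D4 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the 4-connectivity BFS labeling, the dict field names, and the binary-mask input format are all assumptions, and perimeter extraction is omitted for brevity.

```python
from collections import deque

def blob_features(mask):
    """Label 4-connected foreground regions in a binary mask (D3) and return,
    for each blob, its area, centroid, and bounding rectangle (D4)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                pixels, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:  # breadth-first flood fill of one connected domain
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                xs = [p[0] for p in pixels]
                ys = [p[1] for p in pixels]
                blobs.append({
                    "area": len(pixels),  # A(.) = number of pixels in the region
                    "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                    "bbox": (min(xs), min(ys), max(xs), max(ys)),
                })
    return blobs
```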
Specifically, establishing the spatio-temporal background model of the video monitoring image based on the mixture-of-Gaussians model in step B comprises:
B1. Establishing a temporal background model for each pixel by the mixture-of-Gaussians background modeling method;
B2. Adaptively selecting the number of Gaussian components, which specifically comprises:
B21. At initialization, setting only one Gaussian component in the mixture model of each pixel of the scene;
B22. When the scene changes and the mixture model of a pixel cannot match the current pixel value: if the number of Gaussian components in that pixel's mixture model has not reached a set maximum, automatically adding an initial Gaussian component whose mean is the current value; otherwise, replacing the last Gaussian component in the pixel's mixture model with a new component whose mean is the current pixel value;
B23. After the model update is complete, judging whether the last Gaussian component in each pixel's mixture model has expired, and deleting it if so;
B3. Learning the scene with the temporal background model to obtain a set of samples representing the background, and directly computing the spatial distribution of these background samples as the spatial background model of the pixel.
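Sub-steps B1 and B2 might look roughly like the following per-pixel update. The parameter values (alpha, delta, sigma_init, w_init, max_k) and the single-channel grayscale simplification are illustrative assumptions, not values fixed by the invention.

```python
def update_pixel_gmm(components, x, alpha=0.05, delta=2.5,
                     sigma_init=15.0, w_init=0.05, max_k=5):
    """One update step of a per-pixel mixture-of-Gaussians background model.
    components: list of dicts {w, mu, sigma}; x: current grayscale value."""
    matched = None
    for c in components:  # match test: |x - mu| < delta * sigma
        if abs(x - c["mu"]) < delta * c["sigma"]:
            matched = c
            break
    for c in components:  # weight update with match indicator M
        m = 1.0 if c is matched else 0.0
        c["w"] = (1 - alpha) * c["w"] + alpha * m
    if matched is not None:  # update the matched component's mean and variance
        matched["mu"] = (1 - alpha) * matched["mu"] + alpha * x
        var = (1 - alpha) * matched["sigma"] ** 2 + alpha * (x - matched["mu"]) ** 2
        matched["sigma"] = var ** 0.5
    elif len(components) < max_k:  # B22: add a new initial component
        components.append({"w": w_init, "mu": float(x), "sigma": sigma_init})
    else:  # B22: replace the last-ranked component
        components.sort(key=lambda c: c["w"] / c["sigma"], reverse=True)
        components[-1] = {"w": w_init, "mu": float(x), "sigma": sigma_init}
    total = sum(c["w"] for c in components)
    for c in components:  # renormalize so the weights sum to 1
        c["w"] /= total
    return components
```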
Specifically, step E comprises:
E1. Pre-screening the connected domains in the search region, placing any Blob whose area is below a given threshold outside the scope of consideration;
E2. Performing region clustering based on the centroid positions and the upper/lower boundary conditions of the bounding rectangles, aggregating multiple connected domains with similarity into a single vehicle;
E3. Obtaining the area, the bounding-rectangle characteristics, and the histogram information of each aggregated vehicle;
E4. Saving the obtained vehicle feature information as target information.
Step G further comprises the following steps:
G11. Judging whether a target split has occurred; if so, proceeding to G12, otherwise exiting the split observation;
G12. Maintaining tracking, placing the split sub-targets into an alternate target information list, and continuing to update and predict them;
G13. If the split persists for a predetermined time, confirming the split, deleting the original target, placing the sub-targets into the target information list, assigning them new labels, and tracking them.
Step G further comprises the following steps:
G21. Judging whether targets have merged; if so, saving the image information of the targets before the merge, then treating the merged region as a single new target and maintaining tracking during the merge;
G22. When the merge duration exceeds a certain threshold, confirming the merge, generating a new target from the merged region as a single moving body, and deleting the templates and the pre-merge sub-targets.
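Steps G21-G22 can be sketched as a small frame-counting state holder. The function names, the frame-based confirmation threshold, and the string sub-target identifiers are hypothetical illustrations, not the patent's data structures.

```python
def merge_tracker():
    """State for merge handling (G21/G22): pre-merge templates, the number of
    frames the merged region has been observed, and a confirmation flag."""
    return {"templates": [], "merged_frames": 0, "confirmed": False}

def observe_merge(state, sub_targets, confirm_after=10):
    """Observe the merged region for one frame."""
    if not state["templates"]:                  # G21: save pre-merge information
        state["templates"] = list(sub_targets)
    state["merged_frames"] += 1
    if state["merged_frames"] > confirm_after:  # G22: confirm, drop templates
        state["confirmed"] = True
        state["templates"] = []
    return state
```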
The above and other objects, features, and advantages of the present invention will become clear to those skilled in the art after reading the following detailed description in conjunction with the accompanying drawings, which show and describe specific embodiments of the invention.
Embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The method of the invention can be implemented as software embedded in a traffic monitoring system; its overall flow is shown in Figure 1.
Step 0: Acquire a video monitoring image.
Step 1: Establish a spatio-temporal background model of the video monitoring image based on the adaptive mixture-of-Gaussians model.
Step 2: Extract foreground information using the target detection method based on decision fusion.
Step 3: Extract the feature information of each Blob by the Blob analysis method.
Step 4: Perform vehicle identification using the distances between Blobs and their area information; if a vehicle is present, save its feature information and go to Step 5; if not, go to Step 9.
Step 5: Update the background model with the background information from which the vehicle information has been removed.
Step 6: Establish the feature identification matrix to judge the motion state of each target, then apply the tracking strategy corresponding to that motion state.
Step 7: Compute vehicle speeds from the vehicle information saved during tracking, and from them compute the road speed parameter.
Step 8: Judge the road traffic state against the configured speed threshold: output "congested" below the threshold, and "clear" at or above it.
Step 9: Update the background model with the current image and output "clear".
The above flow is executed in a loop with a one-second cycle, outputting one traffic-state decision value per second. On the basis of these per-second outputs, the method outputs a definitive traffic-state signal every 30 seconds: if the congestion-state value has accumulated more than 15 times within the 30 seconds, a traffic-congestion signal is output, indicating that the observation point was congested during that period; otherwise a clear-traffic signal is output.
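The 30-second aggregation described above reduces to a simple count. A minimal sketch, in which the state labels and the parameter name are illustrative:

```python
def cycle_state(per_second_states, min_congested=15):
    """Aggregate per-second outputs over one 30 s cycle: signal congestion
    only if the congestion value accumulated more than min_congested times."""
    congested = sum(1 for s in per_second_states if s == "congested")
    return "congested" if congested > min_congested else "clear"
```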
The embodiment of each step is described in further detail below.
In step 1, spatio-temporal background modeling based on the adaptive mixture-of-Gaussians model is adopted to model the background of the video monitoring image.
In this approach, after the temporal background model of each pixel has been learned with the mixture-of-Gaussians model, a pixel-based spatial background model is constructed by non-parametric density estimation, thereby effectively fusing the distribution information of each pixel over the spatio-temporal domain. At the same time, an adaptive selection strategy for the number of Gaussian components of the mixture model improves the efficiency of the spatio-temporal background modeling.
(1) Mixture-of-Gaussians model
The mixture-of-Gaussians background modeling method represents each pixel as a mixture of several Gaussian models. If K Gaussian distributions are used to describe the color distribution of each pixel, the probability distribution model of the pixel over the temporal domain can be represented by a mixture of K Gaussian components. Taking pixel j in the image as an example, the probability that it takes the value \chi_j at time t is:

P(\chi_j) = \sum_{i=1}^{K} \omega_{j,t}^{i}\,\eta(\chi_j, \mu_{j,t}^{i}, \Sigma_{j,t}^{i})

where \omega_{j,t}^{i} is the weight of the i-th Gaussian component in the mixture model of pixel j at time t, \mu_{j,t}^{i} is the mean of the i-th Gaussian component, and \Sigma_{j,t}^{i} = (\sigma_{j,t}^{i})^{2} I is the covariance of the i-th Gaussian component, with \sigma the standard deviation of the i-th component and I the identity matrix; \eta is the Gaussian probability density function:

\eta(\chi_j, \mu, \Sigma) = (2\pi)^{-d/2}\,|\Sigma|^{-1/2}\exp\!\left(-\tfrac{1}{2}(\chi_j-\mu)^{\mathrm{T}}\Sigma^{-1}(\chi_j-\mu)\right)

where d is the dimension of \chi_j.
When the scene in the video changes, the mixture model of each pixel is continuously updated through learning. The specific update procedure is as follows: first, the K Gaussian components in each pixel's mixture model are sorted in descending order of \omega_{j,t}^{i}/\sigma_{j,t}^{i}; the current pixel value \chi_j is then compared with the K components one by one. If the difference between \chi_j and the mean \mu_{j,t}^{i} of the i-th Gaussian component is less than \delta times the standard deviation \sigma_{j,t}^{i} of that component (\delta is usually set to 2.5–3.5), that component is updated by \chi_j; otherwise it remains unchanged. The update equations are preferably:

\omega_{j,t+1}^{i} = (1-\alpha)\,\omega_{j,t}^{i} + \alpha\,M_{j,t}^{i}
\mu_{j,t+1}^{i} = (1-\alpha)\,\mu_{j,t}^{i} + \alpha\,\chi_j
(\sigma_{j,t+1}^{i})^{2} = (1-\alpha)\,(\sigma_{j,t}^{i})^{2} + \alpha\,(\chi_j-\mu_{j,t+1}^{i})^{\mathrm{T}}(\chi_j-\mu_{j,t+1}^{i})

where \alpha is the learning rate of the model; M_{j,t}^{i} is 1 when the i-th Gaussian component matches \chi_j, and 0 otherwise. If \chi_j matches none of the K components in the mixture model of pixel j, the last-ranked Gaussian component in that pixel's mixture model is replaced by a new Gaussian component whose mean is \chi_j and whose initial standard deviation and weight are set to \sigma_{init} and \omega_{init}. After the update is completed, the weights of the Gaussian components are normalized so that \sum_{i=1}^{K}\omega_{j,t+1}^{i} = 1.
When distinguishing background from foreground, the Gaussian components of each pixel are sorted in descending order of \omega_{j,t}^{i}/\sigma_{j,t}^{i}, and the first B_j components are taken as the background distribution. B_j is computed as:

B_j = \arg\min_b \left( \sum_{i=1}^{b} \omega_{j,t}^{i} > T \right)

The threshold T measures the minimum proportion of the pixel's overall probability distribution that the background Gaussian components must account for.
(2) Adaptive selection of the number of Gaussian components
In real scenes, the number of background states differs between regions, and as the scene changes, the number of states of a given region also changes. If every pixel maintained a fixed number of Gaussian components, substantial computational resources would therefore be wasted.
From the update equations of the mixture model it can be seen that the weight of a Gaussian component that matches the scene over a long period keeps growing, while the weights of unmatched components become smaller and smaller, so that they gradually fall into the part of the ordering that represents the foreground. When the weight \omega_{j,t}^{i} of some Gaussian component is less than the initial weight \omega_{init}, and its ratio \omega_{j,t}^{i}/\sigma_{j,t}^{i} is less than the initial \omega_{init}/\sigma_{init}, the component will, after sorting, be ranked behind a freshly initialized component. If such a component were retained, then when a scene matching it reappeared, relearning the scene with it would take longer than learning the scene with a new Gaussian component; such a component is therefore called an "expired" Gaussian component and should be deleted. The discrimination formula for an expired Gaussian component is:

\omega_{j,t}^{i} < \omega_{init} \quad \text{and} \quad \omega_{j,t}^{i}/\sigma_{j,t}^{i} < \omega_{init}/\sigma_{init}
On the basis of the above analysis, the following adaptive selection strategy for the number of Gaussian components can be given:
a. At initialization, the mixture model of each pixel of the scene is given only one Gaussian component;
b. When the scene changes and the mixture model of some pixel cannot match the current pixel value: if the number of Gaussian components in that pixel's mixture model has not reached the set maximum (usually 3–5), a new initial component whose mean is the current value is added automatically; otherwise the last component in the pixel's mixture model is replaced by a new component whose mean is the current pixel value;
c. After the model update is complete, the last Gaussian component in each pixel's mixture model is checked for expiry and deleted if expired.
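Strategy step c — deleting an expired last-ranked component — can be sketched as follows, reusing a dict-based component representation for a per-pixel mixture model; the field names and default parameter values are assumptions.

```python
def prune_expired(components, w_init=0.05, sigma_init=15.0):
    """Delete the last-ranked Gaussian component if it has 'expired', i.e.
    both its weight and its weight/sigma ratio fell below the initial values."""
    components.sort(key=lambda c: c["w"] / c["sigma"], reverse=True)
    last = components[-1]
    if len(components) > 1 and last["w"] < w_init and \
            last["w"] / last["sigma"] < w_init / sigma_init:
        components.pop()  # expired component: remove it
    return components
```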
(3) Spatio-temporal background model
After the temporal background model has learned the scene, the first B_j Gaussian components representing the background are obtained in the mixture model of each pixel. The means of these components correspond to background states, and their weights represent the relative frequencies with which those background states occur. That is, the learning of the scene by the temporal background model yields a set of samples representing the background, so the spatial distribution of these samples can be computed directly as the spatial background model of the pixel.
Because the background Gaussian components occur with different frequencies over time, the corresponding samples must be weighted by the weights of the background Gaussian components when computing the pixel's spatial background model.
A color histogram is used here to count the distribution of the background Gaussian components in the neighborhood of each pixel. The color histogram is a simple non-parametric probability density estimation method with rotation and translation invariance; used to count the background distribution within a pixel neighborhood, it can largely overcome interference from local background motion. Let the spatial background model of pixel j, represented as a color histogram, be:

q(x_j) = \{ q_v(x_j) \}, \quad v = 1, \dots, m

with

q_v(x_j) = C_q \sum_{\iota \in x_j^{N}} \sum_{i=1}^{B_\iota} \omega_{\iota,t}^{i}\,\delta\!\left( b(\mu_{\iota,t}^{i}) - v \right)

where m is the number of histogram bins; x_j^{N} is the N × N neighborhood centered on pixel j; B_\iota is the number of background Gaussian components in the mixture model of pixel \iota in this neighborhood; \omega_{\iota,t}^{i} is the weight of the i-th Gaussian component of pixel \iota; b(\mu_{\iota,t}^{i}) is the histogram color bin corresponding to the Gaussian component with mean \mu_{\iota,t}^{i}; v is the corresponding color bin; \delta is the Kronecker function; and C_q is a normalization coefficient. Because \omega_{\iota,t}^{i} represents the relative frequency with which the corresponding background state occurs over time, the spatial background model simultaneously reflects the distribution information of the temporal domain; a single histogram can thus express a background that changes dynamically over time, and the model is accordingly called the spatio-temporal background model.

Because the pixel values obtained from the image are usually subject to noise, directly using the current pixel values as samples for the color histogram of each neighborhood would make the histogram noise-sensitive. Therefore, when the current value \chi_\iota of pixel \iota matches some Gaussian component of its mixture model, that component is used as the statistical sample of the current spatial color histogram; otherwise the current value of the pixel itself is used as the sample. This gives the histogram of the current scene:

p_v(x_j) = C_p \sum_{\iota \in x_j^{N}} \left[ \sum_{i=1}^{K_\iota} M_{\iota,t}^{i}\,\delta\!\left( b(\mu_{\iota,t}^{i}) - v \right) + \Big( 1 - \sum_{i=1}^{K_\iota} M_{\iota,t}^{i} \Big)\,\delta\!\left( b(\chi_\iota) - v \right) \right]

where K_\iota is the number of Gaussian components in the mixture model of pixel \iota; C_p is a normalization coefficient; and M_{\iota,t}^{i} is 1 when the i-th component matches \chi_\iota, and 0 otherwise.

When performing background subtraction, it is necessary to judge whether the spatial distribution of a pixel in the current frame is similar to its spatio-temporal background model, i.e., to judge the similarity between two histograms. The histogram intersection method is adopted here, computing the common part of the two histograms. The similarity \rho between the spatio-temporal background model of pixel j and the current scene is expressed as:

\rho = \sum_{v=1}^{m} \min\!\left( q_v(x_j),\; p_v(x_j) \right)
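The histogram-intersection similarity can be sketched in a few lines; the list-of-bins representation of a histogram is an assumption.

```python
def normalize(hist):
    """Normalize a histogram (list of bin counts) so its bins sum to 1."""
    total = sum(hist)
    return [h / total for h in hist] if total else hist

def hist_intersection(q, p):
    """rho = sum over bins v of min(q_v, p_v), for two normalized histograms;
    rho = 1 means identical distributions, rho = 0 means disjoint ones."""
    return sum(min(qv, pv) for qv, pv in zip(q, p))
```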
After the spatio-temporal background model has been established, step 2 of the present embodiment extracts foreground information and obtains the foreground target contour with the following target detection method based on decision fusion. Its main idea is to make a coarse-scale judgment with the spatio-temporal background model first, and then a fine-scale judgment with the temporal background model.
When the neighborhood of a pixel is very similar to its background model, i.e., \rho > \tau_1 (where \tau_1 is the lower bound of the similarity measure for judging the current scene to be background), the pixel neighborhood can be considered background. When the current neighborhood of the pixel is very dissimilar to its spatio-temporal background model, i.e., \rho < \tau_2 (where \tau_2 is the upper bound of the similarity measure for judging the current scene to be foreground), the pixel neighborhood can be considered foreground. When the spatio-temporal background model cannot accurately judge the pixel's class, i.e., \tau_2 \le \rho \le \tau_1, the temporal background model is used to make a fine-scale judgment on the pixel: if the mixture model of the pixel contains a background Gaussian component matching the current value, the pixel is background, otherwise it is foreground. The decision formula of the whole flow is as follows:

if \tau_2 \le \rho \le \tau_1: D(\chi_j) = 0 if some background Gaussian component of pixel j matches \chi_j, else D(\chi_j) = 1
otherwise: D(\chi_j) = 0 if \rho > \tau_1, else D(\chi_j) = 1

where D(\chi_j) = 0 indicates that pixel j is background, and D(\chi_j) = 1 indicates that pixel j is foreground.
The entire foreground extraction process detects each pixel according to its distribution over both the temporal and spatial domains, thereby eliminating the sensitivity of traditional background models to non-stationary changes while preserving the foreground target contour well.
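The coarse-to-fine decision above might be expressed per pixel as follows; the default threshold values, and the assumed ordering tau_2 < tau_1, are illustrative.

```python
def classify_pixel(rho, matches_background, tau1=0.8, tau2=0.4):
    """Coarse-to-fine decision fusion: returns D = 0 (background) or 1 (foreground).
    rho: histogram similarity of the neighborhood to the spatio-temporal model;
    matches_background: whether the pixel's current value matches a background
    Gaussian component of its temporal model."""
    if rho > tau1:                          # coarse scale: clearly background
        return 0
    if rho < tau2:                          # coarse scale: clearly foreground
        return 1
    return 0 if matches_background else 1   # fine scale: temporal model decides
```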
Step 3 of the present embodiment applies Blob analysis to extract the relevant information of the foreground targets from the foreground information. Blob analysis performs feature analysis on connected domains of like pixels in an image; such a connected domain is called a Blob. Here, the Blob analysis technique is applied to the foreground targets extracted on the basis of the spatio-temporal background model; the main operational steps are as follows:
(1) Morphological operations
The purpose of the morphological operations is to remove the influence of noise points; an erosion operator can be used to remove isolated noise foreground points. However, this processing also affects the edges and shape of a target; in particular, when the target itself is small, its edge details are easily destroyed by the denoising.
(2) Connectivity analysis
The target is transformed from the pixel level to the connected-component level, and a dilation operator is used to fill small holes in the target region. Dilation compensates for part of the information destroyed by denoising. The connectivity-detection result after denoising is then mapped back onto the initial foreground point set to recover the intrinsic edges. This segmentation algorithm both preserves the integrity of the target and avoids the influence of noise foreground points, while retaining the edge details of the target.
(3) Blob statistics
The Blob statistics stage counts the number of Blobs in the image that satisfy the conditions and labels each Blob in the image.
(4) Blob feature extraction
This stage extracts all the required information of each Blob and is the most time-consuming part. A Blob line-processing method is adopted here; it can effectively improve computational efficiency and meets the requirement of real-time processing. The line-processing method obtains the geometric features of a connected region during the scan of the region, including its area, perimeter, centroid position, and bounding rectangle.
When measuring the size of a target region, the area parameter A(·) can be used as a measurement scale; for a region R(x, y), A(·) is defined as the number of pixels in the region.
The centroid is a very important parameter in the target tracking stage. For a region R(x, y), its centroid (x_0, y_0) is computed as:

x_0 = M_{10}(R(x, y)) / M_{00}(R(x, y))
y_0 = M_{01}(R(x, y)) / M_{00}(R(x, y))

where the moments are

M_{pq}(R(x, y)) = \sum_{(x, y) \in R} x^{p} y^{q}

From this formula not only the centroid coordinates but also, if required, higher-order moments can be obtained.
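The moment and centroid formulas can be sketched directly; the pixel-list representation of a region is an assumption.

```python
def moment(pixels, p, q):
    """Geometric moment M_pq = sum over region pixels of x^p * y^q."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

def centroid(pixels):
    """Centroid (x0, y0) = (M10 / M00, M01 / M00) of a pixel region."""
    m00 = moment(pixels, 0, 0)  # M00 is simply the area (pixel count)
    return moment(pixels, 1, 0) / m00, moment(pixels, 0, 1) / m00
```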
Besides the above information, the boundary information of each Blob can also be extracted, including the contour of its boundary points and the positions of its upper, lower, left, and right extreme points; the extreme points of the boundary can further determine the bounding rectangle of the Blob. The extreme points are determined by taking the minimum and maximum x and y coordinates over the boundary points of the Blob. All of this information is stored in the information structure of each Blob.
In step 4, after the Blob information in the image has been obtained, the Blobs belonging to the same vehicle must be aggregated according to their characteristics so as to complete the extraction of whole-vehicle information, realizing the conversion from Blob information to vehicle information.
The main basis for region merging is the distance between Blobs and their area information. In the merging process, the Blobs in the search region are first pre-screened: any Blob whose area is below a given threshold is placed outside the scope of consideration, filtering out the interference of these noise Blobs; this step is called Blob area filtering. Region clustering is then performed on the basis of conditions including the centroid positions and the upper/lower boundaries of the bounding rectangles; multiple similar Blobs that satisfy these conditions are aggregated into one vehicle, and at the same time the vehicle information — the area, the bounding-rectangle characteristics, the vehicle histogram, and so on — is obtained from the Blob information. Finally, the vehicle features formed from this information are stored in the target information structure.
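Area filtering and a greedy version of the region clustering might look as follows. The gap threshold, the purely vertical adjacency test, and the dict field names are simplifying assumptions, not the patent's exact clustering conditions.

```python
def area_filter(blobs, min_area=20):
    """Blob area filtering: drop noise blobs below the area threshold."""
    return [b for b in blobs if b["area"] >= min_area]

def cluster_blobs(blobs, max_gap=10):
    """Greedy region clustering: merge blobs whose bounding rectangles
    (x0, y0, x1, y1) lie within max_gap pixels of a cluster's lower edge,
    aggregating them into one vehicle region."""
    vehicles = []
    for b in sorted(blobs, key=lambda b: b["bbox"][1]):  # top-to-bottom scan
        for v in vehicles:
            if b["bbox"][1] - v["bbox"][3] <= max_gap:   # close to cluster below
                x0, y0, x1, y1 = v["bbox"]
                bx0, by0, bx1, by1 = b["bbox"]
                v["bbox"] = (min(x0, bx0), min(y0, by0),
                             max(x1, bx1), max(y1, by1))
                v["area"] += b["area"]
                break
        else:
            vehicles.append(dict(b))                     # start a new vehicle
    return vehicles
```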
Step 6 establishes the feature identification matrix to judge the motion state of each target, and then applies the tracking strategy corresponding to that motion state.
(1) Establishing the feature identification matrix
The feature identification matrix is the matrix built from the matching results between the existing target features and the target features of the current image. Suppose the number of targets in frame k is N and the number of targets in frame k+1 is P; let the target set of frame k be X = {X_i | i = 1, 2, …, N} and the target set of frame k+1 be Y = {Y_j | j = 1, 2, …, P}. The matching result of several features of X_i and Y_j is then one element of the feature identification matrix, called the identification element m_{ij}; the N × P matrix composed of the identification elements m_{ij} is the feature identification matrix M.
The motion states of multiple targets fall mainly into the following five classes: new-target generation, ideal tracking, target splitting, target merging, and target disappearance. In a multi-target tracking scene, targets also transition between these five states. Let the state space of a target be S = {S_i | i = 0, 1, 2, 3, 4}, representing the above five states respectively.
Three relatively stable features — the target area parameter A, the centroid position parameter C, and the distance parameter D between the target and the image border — are adopted here to build the feature identification matrix, and the motion state of each target is inferred by analyzing the rows and columns of the identification matrix.
m_{ij} is the input parameter of the motion-state recognition algorithm and represents the result of matching these three geometric features; its value is closely related to the three feature-matching functions f_A(i, j), f_C(i, j), and f_D(i, j) of the two target sets. These functions represent the matching results of the two target sets and are embodied as follows:

f_A(i, j) = { \lambda_A | if |A(X_i) - A(Y_j)| \le H_A, \lambda_A = 1, else \lambda_A = 0 }
f_C(i, j) = { \lambda_C | if d \le H_C, \lambda_C = 1, else \lambda_C = 0 }
f_D(i, j) = { \lambda_D | if |D(X_i)| \le H_D, \lambda_D = 1, else \lambda_D = 0 }

where A(·), C(·), C'(·), and D(·) respectively denote the area, the centroid, the predicted centroid, and the distance of the target's bounding rectangle from the image border; H_A, H_C, and H_D are the respective matching thresholds of the features — in general, H_A is taken as one tenth of the smaller of the two target areas, H_C as half the width of the target's bounding rectangle, and H_D as 5; and d = |C'(X_i) - C(Y_j)| denotes the distance between the predicted centroid of the previous-frame target and the centroid of the current-frame target — the smaller d is, the higher the degree of overlap.
These three features produce four meaningful matching cases, from which m_{ij} is computed; once all m_{ij} have been computed, the feature identification matrix M can be established.
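Building the identification matrix from the three matching functions can be sketched as follows. Representing m_ij as the raw triple of match flags (rather than an encoded state value), and the dict field names, are assumptions.

```python
def match_element(prev, curr, H_A=None, H_C=None, H_D=5):
    """Identification element m_ij as the triple (f_A, f_C, f_D).
    prev: previous-frame target with area 'A', predicted centroid 'C',
    and border distance 'D'; curr: current-frame target with area 'A',
    centroid 'C', and bounding-rectangle width 'W'."""
    if H_A is None:
        H_A = min(prev["A"], curr["A"]) / 10.0  # tenth of the smaller area
    if H_C is None:
        H_C = curr["W"] / 2.0                   # half the bounding-box width
    fA = 1 if abs(prev["A"] - curr["A"]) <= H_A else 0
    d = ((prev["C"][0] - curr["C"][0]) ** 2 +
         (prev["C"][1] - curr["C"][1]) ** 2) ** 0.5
    fC = 1 if d <= H_C else 0
    fD = 1 if abs(prev["D"]) <= H_D else 0
    return (fA, fC, fD)

def identification_matrix(X, Y):
    """N x P feature identification matrix M with elements m_ij."""
    return [[match_element(xi, yj) for yj in Y] for xi in X]
```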
(2) Classifying the moving-target states
For the target state space S, the feature identification algorithm is as follows:
New-target generation state S_0: the generation of a new target means that a currently extracted target does not coincide with the features of any existing target. The recognizer corresponding to S_0 is therefore: M contains a column j_0 in which no element indicates a match with any existing target, for i = 1, 2, …, N.
Target splitting state S_2: a target split has two possibilities — a genuine split of the target, or an apparent split caused by the target being partially occluded by the background. Both cause multiple current-frame targets to correspond to the same previous-frame target, so the recognizer corresponding to S_2 is: M contains columns j_t, t = 1, 2, …, h (where h is the number of current-frame targets matching the same previous-frame target), all of which indicate a match with the same row i_0, with i = 1, 2, …, N, i ≠ i_0 and j = 1, 2, …, P, j ≠ j_t.
Target disappearance state S_4: target disappearance is similar to target generation — no target in the scene matches the previous-frame target. The recognizer corresponding to S_4 is therefore: M contains a row i_0 in which no element indicates a match with any current-frame target, for j = 1, 2, …, P.
Complex occlusion — the case in which n targets in the current frame correspond to m targets in the previous frame — is converted into one of the above five cases by computing the degree of centroid matching, thereby avoiding ambiguity and better judging the motion state of each target.
(3) Multi-target tracking based on motion-state analysis
The tracking of a moving target is closely related to its current motion state; only by applying the tracking and prediction strategy corresponding to each motion state can multi-target tracking in the scene be maintained reliably.
When a target is in the new-target generation state, it is necessary to judge whether it is at a position in the background where vehicles can enter or leave. If so, it is considered a new target: a new target information structure (A, C, R, D) is generated — respectively representing the area, the centroid, the bounding rectangle, and the border-distance feature of the vehicle — the target's prediction information is initialized, and the target is placed in the target information list. When this target has been tracked stably for more than T frames, it is confirmed and given a new label. If the position of the target does not fit, it is judged to be a partial segment of a previously appearing target occluded by the background or by other targets in the scene, or else noise.
Ideal tracking is the most common state; once this state is recognized, the previous-frame target features are updated with the current target features. Taking the area parameter as an example, the update formula is:

A_{i_0,k+1} = (1-\alpha)\,\hat{A}_{i_0,k} + \alpha\,A'_{i_0,k+1}

where A_{i_0,k+1}, \hat{A}_{i_0,k}, and A'_{i_0,k+1} respectively denote the updated area of target i_0 at frame k+1, its predicted area at frame k, and its corresponding observed area at frame k+1. \alpha is the update factor, controlling the speed of the update: when m_{ij} = 0, \alpha can take a slightly larger value; when m_{ij} = 1, its value should be reduced to slow down the update. The mean-change method is adopted for area prediction, while the centroid and the border distance are predicted with a Kalman filter.
The target splitting state requires further discrimination between its two sub-cases. In a genuine target split, the tendency to separate strengthens gradually, the area and the bounding rectangle change significantly, and the split is continuous and stable. In a split caused by background occlusion, the separating tendency does not keep growing, the area and the bounding rectangle do not increase noticeably, and the split is unstable. To discriminate, the multiple targets of frame k+1 are first merged, and the area and bounding rectangle of the merged region are computed. If the area is smaller than the target area of frame k and the bounding rectangle has not expanded noticeably, the split is due to background occlusion; in this case the target information is updated with the features of the merged region, using the same update and prediction methods as in the ideal tracking state. If the bounding rectangle after merging expands noticeably and the inter-frame distance between the target centroids grows, the target may be splitting; in this case tracking is maintained while the split sub-targets are placed into the alternate target information list and kept updated and predicted. When the split persists for a certain time, or the separation distance exceeds a certain threshold, the split is confirmed: the original target is deleted, the sub-targets are placed into the target information list, and they are given new labels.
The target merging state requires templates to be built for the sub-targets before the merge, i.e., the image information of the targets before merging is saved; the merged region is then treated as a single new target and tracked throughout the merge. After the merge ends, the template information is compared with each target's pre-merge information so that the targets before and after the merge can be associated. When the merge duration exceeds a certain threshold, the merge is confirmed: the merged region is generated as a new target — a single moving body — and the templates and the pre-merge sub-targets are deleted.
The target disappearance state also has two sub-cases. When m_{ij} = 2, the target is at the image border or at a position where targets can enter and leave, so the disappearance is normal; its information and last observed position are retained, and when the disappearance time exceeds a certain threshold, the disappearance is confirmed and the target is deleted from the target information list. When m_{ij} = 3, the disappearance is abnormal — the target should have been completely occluded by the background; its information is retained and its predicted motion speed is slowed, and when the target reappears, its label is restored and its information updated. If the target remains abnormally absent for a long time, its information is recorded and its tracking is interrupted.
In step 7, on the basis of the target tracking, the motion of a vehicle over a given period can be derived from the information recorded for the same vehicle target, and the speed of the vehicle can then be computed. The mean of the vehicle speeds is the speed parameter of the current road in the image; once a threshold is set, congestion and free flow can be distinguished, and this threshold is adjustable.
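Step 7's speed computation can be sketched from tracked centroid positions. The frame rate, the pixel-to-meter scale, and the endpoint-based displacement are illustrative assumptions; a real deployment would calibrate the camera geometry.

```python
def road_speed_parameter(tracks, fps=25.0, meters_per_pixel=0.1):
    """Speed parameter of the road: mean speed (m/s) of all tracked vehicles.
    Each track is a list of per-frame centroid positions in pixels."""
    speeds = []
    for track in tracks:
        if len(track) < 2:
            continue
        (x0, y0), (x1, y1) = track[0], track[-1]
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * meters_per_pixel
        elapsed = (len(track) - 1) / fps        # seconds between endpoints
        speeds.append(dist / elapsed)
    return sum(speeds) / len(speeds) if speeds else None

def traffic_state(speed, threshold):
    """Congested below the speed threshold, clear at or above it."""
    return "congested" if speed < threshold else "clear"
```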
With the method of the present invention, the traffic condition of a monitored road section can be discriminated automatically from video monitoring images acquired in real time. Using the existing video monitoring resources of traffic administration departments, and starting from the technical layers of background modeling, foreground extraction, vehicle identification, and multi-target tracking, the surveillance video is subjected to detailed classification and analysis; by setting the relevant parameter values, the road traffic state wherever a monitoring camera is installed can then be discriminated. The discrimination accuracy of the road traffic state of the present invention is above 90%. Once congestion is determined, vehicles can be guided to detour, thereby reducing the congestion time and helping to improve the urban traffic environment. The methods and techniques adopted in the present invention can all easily be realized in software under ordinary operating conditions, and are easy to popularize in medium and large cities nationwide.