CN101729872B - Video monitoring image based method for automatically distinguishing traffic states of roads - Google Patents

Video monitoring image based method for automatically distinguishing traffic states of roads

Info

Publication number
CN101729872B
CN101729872B CN2009103112262A CN200910311226A
Authority
CN
China
Prior art keywords
target
information
pixel
time
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009103112262A
Other languages
Chinese (zh)
Other versions
CN101729872A (en)
Inventor
储浩
李建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Beidou Chengji On-line Information Technology Co., Ltd.
Original Assignee
NANJING INTERCITY ONLINE INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NANJING INTERCITY ONLINE INFORMATION TECHNOLOGY Co Ltd filed Critical NANJING INTERCITY ONLINE INFORMATION TECHNOLOGY Co Ltd
Priority to CN2009103112262A priority Critical patent/CN101729872B/en
Publication of CN101729872A publication Critical patent/CN101729872A/en
Application granted granted Critical
Publication of CN101729872B publication Critical patent/CN101729872B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for automatically distinguishing road traffic states based on video monitoring images. Starting from the technical levels of background modeling, foreground extraction, vehicle identification, multi-target tracking and the like, the method classifies and analyzes video monitoring footage in detail, and finally distinguishes the traffic state of roads equipped with monitoring cameras by setting appropriate parameter values. The invention greatly improves the accuracy of automatic traffic-state identification.

Description

Method for automatically distinguishing road traffic states based on video monitoring images
Technical field
The present invention relates to the application of video detection technology in the traffic field, and specifically to a method for discriminating road traffic conditions in real time based on video monitoring images.
Background technology
Road traffic state discrimination is the basis on which an urban intelligent traffic system issues traffic information and performs traffic guidance. At present, road traffic states are mainly discriminated by analyzing and processing floating-car data, supplemented and corrected by manual observation of the city's numerous video monitoring resources. However, the accuracy of a traffic-state discrimination algorithm based on floating-car data is directly tied to the number and operating state of the floating cars, so very high accuracy cannot be reached, while manual observation of video monitoring places high demands on the observer, suffers from overly subjective judgment rules, and has a high miss rate. Therefore, to make more efficient use of existing video monitoring resources, it is necessary to apply advanced video detection technology to China's traffic field in a reasonable way.
Video detection technology, also called digital image processing technology, is an emerging technology that combines video imaging with computerized pattern recognition. Applied to the traffic field, it uses cameras and computers to imitate the human eye: continuous analog images are converted into discrete digital images, which are then analyzed by processing algorithms built on mature physical and mathematical models, so that the traffic state of the road can be determined.
The application of video detection technology in the traffic field does not have a long history. In 1984, the University of Minnesota in the United States first carried out research on applying computer vision to advanced traffic management. From 1984 to 1989 the university conducted further experimental studies with the support of the Minnesota vehicle supervision department, and meanwhile Image Sensing Systems, Inc. (ISS) was founded to specialize in developing traffic video technology. In 1987 ISS designed its first prototype, which verified for the first time the application of video detection technology in the traffic field, and in 1989 it developed a second-generation product. In the following years a series of traffic video detection products were released under the registered trademark Autoscope™, forming a fairly successful commercial system. The Autoscope system can detect traffic parameters in real time and is one of the most competitive video detection systems for traffic information collection in the world. It can be installed in a cabinet and receives the video signals transmitted by multiple roadside cameras. The user draws virtual "vehicle detectors" with the mouse on the displayed road traffic image; once the detectors are configured, a detection signal is produced whenever a vehicle passes any virtual detector. The Autoscope processor analyzes the input video and produces the required traffic data, such as flow, speed, occupancy, headway, vehicle queue length and vehicle type. ISS has continuously developed, improved and perfected the Autoscope technology, and its products have become the industry bellwether. Other foreign companies are also engaged in application research and development in this area and have released mature commercial systems of their own, such as the VideoTrak-900 of the American company PEEK and the Monitor series released by the Belgian company Traficon.
Domestic application research on video detection technology started late. With the development of transportation, the demand from traffic management and control keeps growing, and many domestic companies have made considerable efforts in this area, such as the "intelligent transportation video image processing system" released by Hunan Tianyi Information Technology Co., Ltd., the VS3001 video detection system developed by Tsinghua Unisplendour, and the Headsun SmartViewer-II video traffic detector developed by Xiamen Hengshen Intelligent Software System Co., Ltd. However, most of these products remain at the prototype stage or offer functions that are far from complete; their effect in actual deployment is not obvious, they fall well short of practical application requirements, and a considerable gap remains compared with foreign products.
In summary, road traffic state discrimination based on video monitoring still has many shortcomings. Chief among them is that current video detection systems not only have limited detection accuracy but also remain at the level of traffic parameter detection: discriminating the road traffic state still requires further analysis and processing of those traffic parameters, so the full potential of video monitoring is not brought into play.
Summary of the invention
To solve the problem of more accurately discriminating road traffic conditions from monitoring, the present invention proposes a method for automatically distinguishing road traffic states based on video monitoring images, comprising the following steps:
A. Obtain video monitoring images;
B. For the video monitoring images within a first time period, establish a time-space domain background model of the video monitoring image based on a mixture-of-Gaussians model;
C. Calculate the similarity between the current image and the time-space domain background model, and extract foreground information based on that similarity;
D. Use connected-domain (Blob) analysis to extract the characteristic information of each connected region in the foreground information;
E. Perform vehicle identification using the distance and area information of the connected domains; if vehicles are present, save the vehicle feature information and go to step F; if no vehicle is present, update the background model with the current image, output "unimpeded", and go to step J;
F. Update the background model with the background information from which vehicle information has been removed;
G. Within the first time period, establish a feature identification matrix from the matching results between existing target features and the target features of the current image, and judge the motion state of each target: if a fresh target is recognized, create its target information and track it by predicting its centroid; if a target is found to be in the ideal tracking state, continue tracking it by predicting its centroid; if a target disappears, delete its target information;
H. Calculate the average speed of the different targets within the first time period, and take the mean of all vehicle speeds in the image as the speed parameter of the current road;
I. Judge the road traffic state against a configured speed threshold: output "congested" if the speed parameter is below the threshold, and "unimpeded" if it is greater than or equal to the threshold;
J. Count the congestion signals output, and judge whether a second time period has ended; if it has, go to step K, otherwise return to step A;
K. Count the number of congestion outputs within the second time period; if it exceeds a predetermined value, report that the monitored road section is congested;
wherein step D specifically comprises:
D1. Remove the influence of noise in the foreground information by morphological operations;
D2. Transform the targets from the pixel level to the connected-component level, use the dilation operator to fill small holes in the target regions, and then map the result back onto the initial foreground point set to recover the intrinsic edges of the foreground image;
D3. Count the number of connected domains in the image and label each connected component;
D4. Extract the area, perimeter, centroid position and bounding rectangle information of each connected domain.
Wherein step B, establishing the time-space domain background model of the video monitoring image based on the mixture-of-Gaussians model, specifically comprises:
B1. Establish the time-domain background model of each pixel with the mixture-of-Gaussians background modeling method;
B2. Adaptively select the number of Gaussian components, specifically:
B21. At initialization, set only one Gaussian component in the mixture model of each pixel of the scene;
B22. When the scene changes and the mixture model of a pixel cannot match the current pixel value, if the number of Gaussian components in that pixel's mixture model has not reached the set maximum, automatically add an initial Gaussian component whose mean is the current value; otherwise replace the last-ranked Gaussian component in the pixel's mixture model with a new component whose mean is the current pixel value;
B23. After the model update is finished, judge whether the last Gaussian component in each pixel's mixture model has expired, and delete it if so;
B3. Through the learning of the scene by the time-domain background model, obtain a group of samples representing the background, and directly count the spatial distribution of these background samples as the spatial-domain background model of the pixel.
Wherein step E specifically comprises:
E1. Preliminarily screen the connected domains in the search region: Blobs whose area is below a certain threshold are placed outside consideration;
E2. Perform region clustering: aggregate multiple connected domains with similarity into one vehicle based on centroid position and the upper/lower boundary conditions of the bounding rectangles;
E3. Obtain the area, bounding rectangle characteristics and histogram information of the aggregated vehicle;
E4. Save the obtained vehicle feature information as target information.
Wherein step G further comprises the steps of:
G11. Judge whether target splitting occurs; if so, proceed to G12, otherwise exit split observation;
G12. Maintain tracking, put the split sub-targets into an alternate target information list, and keep updating and predicting them;
G13. If the split persists for a predetermined time, confirm the split, delete the former target, put the split sub-targets into the target information list, assign them new labels, and track them.
Wherein step G further comprises the steps of:
G21. Judge whether targets merge; if so, save the image information of the targets before merging, then treat the merged region as a new target and maintain tracking during the merge;
G22. When the merge duration exceeds a certain threshold, confirm the merge, generate a fresh target for the merged region as a common moving body, and delete the templates and the sub-targets from before the merge.
The above and other objects, features and advantages of the present invention will become clear to those skilled in the art after reading the following detailed description in conjunction with the accompanying drawings, which show and describe specific embodiments of the invention.
Description of drawings
Fig. 1 is the overall flow chart of the method of the embodiment of the invention;
Fig. 2 is the flow chart of establishing the time-space domain background model of the video monitoring image based on the mixture-of-Gaussians model;
Fig. 3 is the target feature matching flow chart;
Fig. 4 is the multi-target tracking processing flow chart;
Fig. 5 is the processing flow chart for tracking in the target splitting state;
Fig. 6 is the processing flow chart for tracking in the target merging state.
Embodiment
The specific embodiments of the present invention are described in detail below in conjunction with the accompanying drawings.
The method of the invention can be realized as software embedded in a traffic monitoring system; its concrete flow is shown in Fig. 1.
Step 0: obtain video monitoring images.
Step 1: establish the time-space domain background model of the video monitoring image based on the adaptive mixture-of-Gaussians model.
Step 2: extract foreground information with a decision-fusion target detection method.
Step 3: use Blob analysis to extract the characteristic information of each Blob.
Step 4: perform vehicle identification using the distance and area information of the Blobs; if vehicles are present, save the vehicle feature information and go to Step 5; if no vehicle is present, go to Step 9.
Step 5: update the background model with the background information from which vehicle information has been removed.
Step 6: establish the feature identification matrix, judge the motion state of each target, and adopt a corresponding tracking strategy for each motion state.
Step 7: calculate vehicle speeds from the vehicle information saved during tracking, and then calculate the road speed parameter.
Step 8: judge the road traffic state against the configured speed threshold; output "congested" below the threshold and "unimpeded" at or above it.
Step 9: update the background model with the current image and output "unimpeded".
The above flow is executed in a loop with a one-second cycle, outputting one traffic-state decision value each second. On the basis of these per-second outputs, the method outputs one definite traffic-state signal every 30 seconds: if more than 15 congestion values accumulate in the background within 30 seconds, a traffic congestion signal is output, indicating that the observation point was congested during those 30 seconds; otherwise an unimpeded signal is output.
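As an illustration only (not part of the patent text), this two-level decision can be sketched in a few lines of Python; the function names, the 20 km/h threshold and the per-second speed input are assumptions:

```python
# Minimal sketch of the two-level congestion decision described above.
# The per-second speed estimate stands in for Steps 0-7; threshold values
# are illustrative assumptions, not figures from the patent.
def classify_second(speed_kmh: float, speed_threshold: float = 20.0) -> bool:
    """Return True for a 'congested' decision in one one-second cycle."""
    return speed_kmh < speed_threshold

def classify_window(per_second_flags, min_congested: int = 15) -> str:
    """Aggregate 30 one-second decisions into one traffic-state signal."""
    congested = sum(1 for flag in per_second_flags if flag)
    return "congested" if congested > min_congested else "unimpeded"

# Example: 18 of 30 seconds judged congested -> the section is reported congested.
flags = [True] * 18 + [False] * 12
print(classify_window(flags))  # -> congested
```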
The embodiments of each step are described in further detail below.
In step 1, time-space background modeling based on the adaptive mixture-of-Gaussians model is used to model the background of the video monitoring image.
In this time-space background modeling, after the time-domain background model of each pixel has been learned with the mixture-of-Gaussians model, a pixel-based spatial-domain background model is constructed with a non-parametric density estimation method, effectively fusing the distribution information of each pixel over the time-space domain; at the same time, an adaptive selection strategy for the number of Gaussian components in the mixture model improves the efficiency of the time-space background modeling.
(1) Mixture-of-Gaussians model
The mixture-of-Gaussians background modeling method represents each pixel as a mixture of several Gaussian models. If K Gaussian distributions are used to describe the color distribution of each pixel, the mixture model of K Gaussian components expresses the probability distribution of the pixel over the time domain. Taking pixel j in the image as an example, the probability that it takes the value x_j at time t is:

$$P(x_j) = \sum_{i=1}^{K} \omega_{j,t}^{i}\,\eta\!\left(x_j;\ \mu_{j,t}^{i},\ \Sigma_{j,t}^{i}\right)$$

where ω_{j,t}^i is the weight of the i-th Gaussian component in the mixture model of pixel j at time t, μ_{j,t}^i is the mean of the i-th component, and Σ_{j,t}^i = (σ_{j,t}^i)² I is its covariance, with σ_{j,t}^i the standard deviation of the i-th component and I the identity matrix; η is the Gaussian probability density function:

$$\eta\!\left(x_j;\ \mu_{j,t}^{i},\ \Sigma_{j,t}^{i}\right) = \frac{1}{(2\pi)^{d/2}\left|\Sigma_{j,t}^{i}\right|^{1/2}}\exp\!\left[-\frac{1}{2}\left(x_j-\mu_{j,t}^{i}\right)^{T}\left(\Sigma_{j,t}^{i}\right)^{-1}\left(x_j-\mu_{j,t}^{i}\right)\right]$$

where d is the dimension of x_j.
When the scene in the video changes, the mixture model of every pixel is continuously updated through learning. The concrete update procedure is as follows: first, the K Gaussian components in each pixel's mixture model are sorted in descending order of ω_{j,t}^i / σ_{j,t}^i; the current pixel value x_j is then compared with the K components one by one. If the difference between x_j and the mean μ_{j,t}^i of the i-th component is less than δ times that component's standard deviation σ_{j,t}^i (δ is usually set to 2.5–3.5), that component is updated by x_j; otherwise it remains unchanged. The update equations are preferably as follows:

$$\omega_{j,t+1}^{i} = (1-\alpha)\,\omega_{j,t}^{i} + \alpha\,M_{j,t}^{i}$$

$$\mu_{j,t+1}^{i} = (1-\rho)\,\mu_{j,t}^{i} + \rho\,x_j$$

$$\left(\sigma_{j,t+1}^{i}\right)^{2} = (1-\rho)\left(\sigma_{j,t}^{i}\right)^{2} + \rho\left(x_j-\mu_{j,t}^{i}\right)^{T}\left(x_j-\mu_{j,t}^{i}\right)$$

$$\rho = \alpha / \omega_{j,t}^{i}$$

where α is the learning rate of the model; M_{j,t}^i is 1 when the i-th Gaussian component matches x_j, and 0 otherwise. If x_j matches none of the K components of pixel j's mixture model, the last-ranked Gaussian component of that pixel is replaced by a new component whose mean is x_j and whose initial standard deviation and weight are set to σ_init and ω_init. After the update, the weights of the components are normalized to guarantee

$$\sum_{i=1}^{K} \omega_{j,t+1}^{i} = 1$$
When distinguishing background from foreground, the Gaussian components are sorted in descending order of ω_{j,t}^i / σ_{j,t}^i, and the first B_j components are taken as the background distribution. B_j is computed as:

$$B_j = \arg\min_{b}\left(\sum_{i=1}^{b}\omega_{j,t+1}^{i} > T\right)$$

The threshold T measures the minimum proportion of the pixel's whole probability distribution that the background Gaussian components must account for.
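For illustration, a minimal Python sketch of the per-pixel update above, assuming a grayscale pixel (d = 1) and reading the ρ equation as ρ = α/ω; all parameter values are illustrative:

```python
import numpy as np

ALPHA, DELTA, T = 0.01, 2.5, 0.7          # learning rate, match width, bg proportion
SIGMA_INIT, OMEGA_INIT = 15.0, 0.05       # initial std-dev and weight (assumed)

def update_pixel(x, weights, means, sigmas):
    """One update step for the K Gaussian components of a single pixel."""
    order = np.argsort(-(weights / sigmas))      # rank by omega/sigma, descending
    weights, means, sigmas = weights[order], means[order], sigmas[order]
    match = np.abs(x - means) < DELTA * sigmas   # |x - mu| < delta * sigma
    m = np.zeros_like(weights)
    if match.any():
        i = int(np.argmax(match))                # best-ranked matching component
        m[i] = 1.0
        rho = ALPHA / weights[i]                 # reading rho = alpha / omega
        diff = x - means[i]
        means[i] += rho * diff
        sigmas[i] = np.sqrt((1 - rho) * sigmas[i] ** 2 + rho * diff ** 2)
    else:
        # no match: replace the last-ranked component with a fresh one
        weights[-1], means[-1], sigmas[-1] = OMEGA_INIT, x, SIGMA_INIT
    weights = (1 - ALPHA) * weights + ALPHA * m
    weights /= weights.sum()                     # renormalise the weights
    # the first B_j components whose cumulative weight exceeds T are background
    b = int(np.searchsorted(np.cumsum(weights), T)) + 1
    return weights, means, sigmas, b
```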
(2) Adaptive selection of the number of Gaussian components
In a real scene, different regions of the background have different numbers of states, and as the scene changes the number of states of the same region also changes; keeping a fixed number of Gaussian components for every pixel therefore tends to waste a great deal of computing resources.
From the update equations of the mixture model it can be seen that the weight of a Gaussian component that matches the scene over a long time grows larger and larger, while the weights of unmatched components grow smaller and smaller, gradually falling into the part of the mixture that represents the foreground. When the weight ω_{j,t}^i of some component drops below the initial weight ω_init, and its ratio ω_{j,t}^i / σ_{j,t}^i drops below the initial ratio ω_init / σ_init, the component will, after sorting, be ranked behind a freshly initialized component. If such a component were kept, then when a scene matching it reappears, learning that scene with it would take longer than learning it with a new component; such a component can therefore be called an "expired" Gaussian component and should be deleted. The criterion for an expired Gaussian component is as follows:
$$\omega_{j,t}^{i} < \omega_{\mathrm{init}} \quad\text{and}\quad \frac{\omega_{j,t}^{i}}{\sigma_{j,t}^{i}} < \frac{\omega_{\mathrm{init}}}{\sigma_{\mathrm{init}}}$$
On the basis of the above analysis, the following adaptive selection strategy for the number of Gaussian components can be given (a sketch follows this list):
a. At initialization, the mixture model of each pixel of the scene is given only one Gaussian component;
b. When the scene changes and the mixture model of a pixel cannot match the current pixel value, if the number of Gaussian components in that pixel's mixture model has not reached the set maximum (usually 3–5), a new initial Gaussian component with the current value as its mean is added automatically; otherwise the last-ranked component of the pixel's mixture model is replaced by a new component whose mean is the current pixel value;
c. After the model update is finished, the last Gaussian component of each pixel's mixture model is checked for expiry and deleted if expired.
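A minimal sketch of strategy a–c, with the same illustrative parameters as above; K_MAX and the growth/pruning bookkeeping are simplifying assumptions:

```python
import numpy as np

K_MAX = 5                                  # maximum component count (assumed)
SIGMA_INIT, OMEGA_INIT = 15.0, 0.05

def adapt_components(x, weights, means, sigmas, matched):
    """Grow on mismatch up to K_MAX, then prune an 'expired' last component."""
    if not matched:
        if len(weights) < K_MAX:
            # grow: add a new component centred on the current pixel value
            weights = np.append(weights, OMEGA_INIT)
            means = np.append(means, x)
            sigmas = np.append(sigmas, SIGMA_INIT)
        else:
            # at capacity: replace the last-ranked component instead
            weights[-1], means[-1], sigmas[-1] = OMEGA_INIT, x, SIGMA_INIT
    # prune the last component if it satisfies the expiry criterion above
    if len(weights) > 1:
        expired = (weights[-1] < OMEGA_INIT and
                   weights[-1] / sigmas[-1] < OMEGA_INIT / SIGMA_INIT)
        if expired:
            weights, means, sigmas = weights[:-1], means[:-1], sigmas[:-1]
    return weights / weights.sum(), means, sigmas
```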
(3) Time-space domain background model
After the time-domain background model has learned the scene, the first B_j Gaussian components representing the background are obtained in each pixel's mixture model; the means of these components correspond to the states of the background, and their weights represent the relative frequencies with which those background states appear. In other words, through the learning of the scene by the time-domain background model, a group of samples representing the background is obtained, so the spatial distribution of these background samples can be counted directly as the spatial-domain background model of the pixel.
Because each background Gaussian component appears with a different frequency over time, when counting the spatial-domain background model of a pixel, the corresponding samples must be weighted by the weights of the background Gaussian components.
A color histogram is used here to count the distribution of the background Gaussian components in each pixel's neighborhood. The color histogram is a simple non-parametric probability density estimation method with rotation and translation invariance; using it to count the background distribution within a pixel's neighborhood can well overcome the interference of local background motion. Let the spatial-domain background model of pixel j, expressed as a color histogram, be:
$$q(x_j) = \{q_v(x_j)\}_{v=1,\ldots,m}$$

$$q_v(x_j) = C_q \sum_{l \in x_j^{N}} \sum_{i=1}^{B_l} \omega_{l,t}^{i}\,\delta\!\left[b\!\left(\mu_{l,t}^{i}\right)-v\right]$$
where m is the number of histogram bins; x_j^N is the N × N neighborhood centered on pixel j; B_l is the number of background Gaussian components in the mixture model of pixel l in this neighborhood; ω_{l,t}^i is the weight of the i-th Gaussian component of pixel l; b(μ_{l,t}^i) is the histogram color bin corresponding to the Gaussian component with mean μ_{l,t}^i; v indexes the color bins; δ is the Kronecker function; and C_q is a normalization coefficient. Because ω_{l,t}^i represents the relative frequency with which the corresponding background state appears over time, the spatial-domain background model simultaneously reflects the distribution information of the time domain, so a single histogram can express the background as it changes dynamically over time; it is therefore called the time-space domain background model.
Because the pixel values obtained from the image are usually disturbed by noise, directly using the current pixel values as the samples for the color histogram of each pixel's neighborhood would make the histogram susceptible to noise. Therefore, when the current value x_l of pixel l matches some Gaussian component of its mixture model, that Gaussian component is taken as the statistical sample of the current spatial color histogram; otherwise the current value of the pixel is taken as the statistical sample. This gives:
$$p_v(x_j) = C_p \sum_{l \in x_j^{N}} \sum_{i=1}^{K_l} \delta\!\left[b\!\left(\mu_{l,t}^{i} M_{l,t}^{i}\right)-v\right]$$
where K_l is the number of Gaussian components in the mixture model of pixel l; C_p is a normalization coefficient; and M_{l,t}^i is 1 when the i-th Gaussian component matches x_l, and 0 otherwise.
When performing background subtraction, it is necessary to judge whether the spatial distribution of a pixel in the current frame is similar to its time-space domain background model, i.e., to judge the similarity between the two histograms. The histogram intersection method is adopted here to compute the common part of the two histograms. The similarity ρ between the time-space domain background model of pixel j and the current scene is expressed as:
$$\rho(x_j) = \sum_{v=1}^{m}\min(p_v, q_v)$$
After the time-space domain background model has been established, step 2 of this embodiment extracts foreground information with the following decision-fusion target detection method to obtain the foreground target contours. Its main idea is to first make a coarse-scale judgment with the time-space domain background model, and then a fine-scale judgment with the time-domain background model.
When a pixel's neighborhood is very similar to its background model, i.e., ρ > τ₁ (τ₁ being the lower bound of the similarity measure for judging the current scene to be background), the neighborhood can be considered background; when the pixel's current neighborhood is very dissimilar to its time-space domain background model, i.e., ρ < τ₂ (τ₂ being the upper bound of the similarity measure for judging the current scene to be foreground), the neighborhood can be considered foreground. When the time-space domain background model cannot accurately decide the pixel's attribution, i.e., τ₂ ≤ ρ ≤ τ₁, the time-domain background model is used to make a fine-scale judgment on the pixel: if the pixel's mixture model contains a background Gaussian component matching the current value, the pixel is background, otherwise it is foreground. The decision formula of the whole flow is as follows:
If τ₂ ≤ ρ ≤ τ₁:

$$D(x_j) = \begin{cases} 0, & \text{a background Gaussian component matching } x_j \text{ exists} \\ 1, & \text{otherwise} \end{cases}$$

otherwise:

$$D(x_j) = \begin{cases} 0, & \rho > \tau_1 \\ 1, & \rho < \tau_2 \end{cases}$$

where D(x_j) = 0 denotes that pixel j is background, and D(x_j) = 1 denotes that pixel j is foreground.
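A compact sketch of this coarse-to-fine decision fusion, assuming p and q are the m-bin histograms defined above and that τ₁, τ₂ and δ take illustrative values:

```python
import numpy as np

TAU1, TAU2, DELTA = 0.8, 0.4, 2.5    # illustrative thresholds (assumed)

def similarity(p, q):
    """Histogram intersection: rho = sum_v min(p_v, q_v)."""
    return float(np.minimum(p, q).sum())

def is_foreground(p, q, x, bg_means, bg_sigmas):
    """Coarse scale via histogram similarity, fine scale via the Gaussians."""
    rho = similarity(p, q)
    if rho > TAU1:       # neighbourhood matches the background model
        return False
    if rho < TAU2:       # neighbourhood clearly differs from it
        return True
    # ambiguous band: fine-scale check against the background components
    matched = np.any(np.abs(x - bg_means) < DELTA * bg_sigmas)
    return not matched
```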
The whole foreground extraction process detects each pixel according to its distribution over both the time domain and the spatial domain, which eliminates the sensitivity of traditional background models to non-stationary changes while preserving the foreground target contours well.
Step 3 of this embodiment uses Blob analysis to extract the relevant information of the foreground targets from the foreground information. Blob analysis performs feature analysis on connected domains of like pixels in the image; such a connected domain is called a Blob. Here, on the basis of the time-space domain background model, the Blob analysis technique is applied to the extracted foreground targets. The main operational steps are as follows:
(1) Morphological operations
The purpose of the morphological operations is to remove the influence of noise points; the erosion operator can be used to remove isolated foreground noise points. However, this processing also affects the scale, edges and shape of the targets; in particular, when a target is itself small, its edge details are easily destroyed by the denoising.
(2) Connectivity analysis
The targets are transformed from the pixel level to the connected-component level, and the dilation operator is used to fill small holes in the target regions. Dilation compensates for the partial information destroyed by denoising. The connectivity detection result after denoising is then mapped back onto the initial foreground point set to recover the intrinsic edges. This segmentation algorithm preserves the integrity of the targets, avoids the influence of foreground noise points, and retains the edge details of the targets.
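A sketch of this denoise-then-restore sequence using OpenCV morphology; the kernel size and iteration counts are assumptions:

```python
import cv2
import numpy as np

def clean_foreground(fg_mask: np.ndarray) -> np.ndarray:
    """Erode to drop isolated noise points, dilate to close small holes,
    then mask with the original foreground to restore intrinsic edges."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    eroded = cv2.erode(fg_mask, kernel)
    dilated = cv2.dilate(eroded, kernel, iterations=2)
    # keep only pixels that belonged to the initial foreground point set
    return cv2.bitwise_and(dilated, fg_mask)
```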
(3) Blob statistics
The Blob statistics stage counts the number of Blobs in the image that satisfy the conditions, and labels each Blob in the picture.
(4) Blob feature extraction
This stage extracts all the information required for each Blob and is also the most time-consuming part. A Blob line-scan processing method is adopted here; it effectively improves computational efficiency and meets the requirement of real-time processing. The line-scan method obtains the geometric features of a connected region during the scan of that region, including its area, perimeter, centroid position and bounding rectangle.
When measuring the size of a target region, the area parameter A(·) can serve as a measurement scale; for a region R(x, y), A(·) is defined as the number of pixels in that region.
The centroid is a very important parameter in the target tracking stage. For a region R(x, y), its centroid (x₀, y₀) is computed as:

$$x_0 = M_{10}(R(x,y)) / M_{00}(R(x,y))$$

$$y_0 = M_{01}(R(x,y)) / M_{00}(R(x,y))$$

where the moments are

$$M_{pq}(R(x,y)) = \sum_{(x,y)\in R(x,y)} f(x,y)\,x^{p} y^{q}$$

From this formula not only the centroid coordinates but also, if needed, higher-order moments can be obtained.
Besides the above information, the boundary information of a Blob can also be extracted, including the contour of the boundary points and the positions of the extreme points on each side; the extreme points of the boundary can further determine the Blob's bounding rectangle.
The extreme points are determined by:

$$\mathrm{top} = \min_{y}\{(x,y)\mid (x,y)\in R(x,y)\}$$

$$\mathrm{bottom} = \max_{y}\{(x,y)\mid (x,y)\in R(x,y)\}$$

$$\mathrm{left} = \min_{x}\{(x,y)\mid (x,y)\in R(x,y)\}$$

$$\mathrm{right} = \max_{x}\{(x,y)\mid (x,y)\in R(x,y)\}$$
All of this information is stored in the information structure of each Blob.
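For illustration, the same Blob features can be collected with OpenCV's connected-component statistics (assuming OpenCV ≥ 4) in place of the line-scan method described in the text:

```python
import cv2
import numpy as np

def extract_blobs(fg_mask: np.ndarray):
    """Label connected regions and read area, perimeter, centroid and
    bounding rectangle per Blob from the component statistics."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_mask)
    blobs = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        contours, _ = cv2.findContours((labels == i).astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea)
        blobs.append({
            "area": int(area),
            "perimeter": cv2.arcLength(contour, True),
            "centroid": tuple(centroids[i]),   # (x0, y0) from the moments
            "bbox": (x, y, w, h),              # left/top extremes plus size
        })
    return blobs
```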
In step 4, after the Blob information in the image has been obtained, the Blobs belonging to the same vehicle must be aggregated according to the characteristics of each Blob, completing the extraction of whole-vehicle information and thus the conversion from Blob information to vehicle information.
The main basis for region merging is the distance and area information of the Blobs. In the merging process, the Blobs in the search region are first preliminarily screened: Blobs whose area is below a certain threshold are placed outside consideration, filtering out the interference of these noise Blobs; this step is called Blob area filtering. Region clustering is then performed, based on conditions including the centroid position and the upper/lower boundaries of the bounding rectangles. Multiple Blobs with similarity that satisfy these conditions are aggregated into one vehicle, while the vehicle information, such as area, bounding rectangle characteristics and the vehicle's histogram, is obtained from the Blob information. The vehicle features formed from this information are finally stored in the target information structure.
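A simplified sketch of the area filter plus greedy region clustering, assuming the Blob dictionaries produced by the sketch above; the thresholds are illustrative:

```python
MIN_AREA, MAX_DIST = 50, 40.0     # area filter and centroid distance (assumed)

def cluster_vehicles(blobs):
    """Discard tiny Blobs, then merge Blobs whose centroids are close and
    whose bounding rectangles overlap vertically into one vehicle group."""
    blobs = [b for b in blobs if b["area"] >= MIN_AREA]   # Blob area filter
    vehicles, used = [], set()
    for i, b in enumerate(blobs):
        if i in used:
            continue
        group, used = [b], used | {i}
        for j in range(i + 1, len(blobs)):
            if j in used:
                continue
            c = blobs[j]
            dx = b["centroid"][0] - c["centroid"][0]
            dy = b["centroid"][1] - c["centroid"][1]
            top_b, bot_b = b["bbox"][1], b["bbox"][1] + b["bbox"][3]
            top_c, bot_c = c["bbox"][1], c["bbox"][1] + c["bbox"][3]
            overlap = min(bot_b, bot_c) - max(top_b, top_c)
            if (dx * dx + dy * dy) ** 0.5 < MAX_DIST and overlap > 0:
                group.append(c)
                used.add(j)
        vehicles.append(group)    # one group of Blobs per candidate vehicle
    return vehicles
```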
Step 6 establishes the feature identification matrix to judge the motion state of each target, and then adopts a corresponding tracking strategy for each motion state.
1) Establishing the feature identification matrix
The feature identification matrix is built from the matching results between the existing target features and the target features of the current image. Suppose frame k has N targets and frame k+1 has P targets; let the target set of frame k be X = {X_i | i = 1, 2, …, N} and that of frame k+1 be Y = {Y_j | j = 1, 2, …, P}. The matching result of several features of X_i and Y_j is one element of the feature identification matrix, called the identification element m_ij. The N × P matrix composed of the identification elements m_ij is the feature identification matrix M.
The motion states of multiple targets fall mainly into the following five classes: fresh target appearance, ideal tracking, target splitting, target merging, and target disappearance. In a multi-target tracking scene, a target's motion state also converts among these five states. Let the state space of a target be S = {S_i | i = 0, 1, 2, 3, 4}, representing the five states respectively.
Three relatively stable features are adopted here to build the feature identification matrix: the target area parameter A, the centroid position parameter C, and the distance parameter D between the target and the image border; the motion state of each target is inferred by analyzing the rows and columns of the identification matrix.
m_ij, the input parameter of the motion-state recognition algorithm, represents the result of matching these three geometric features; its value is closely related to the three feature matching functions f_A(i, j), f_C(i, j) and f_D(i, j) of the two target sets. These three matching functions represent the matching results of the two target sets and are embodied as follows:
$$f_A(i,j)=\{\lambda_A \mid \text{if } |A(X_i)-A(Y_j)|\le H_A,\ \lambda_A=1,\ \text{else } \lambda_A=0\}$$

$$f_C(i,j)=\{\lambda_C \mid \text{if } |\hat{C}(X_i)-C(Y_j)|\le H_C,\ \lambda_C=1,\ \text{else } \lambda_C=0\}$$

$$f_D(i,j)=\{\lambda_D \mid \text{if } |D(X_i)|\le H_D,\ \lambda_D=1,\ \text{else } \lambda_D=0\}$$

where A(·), C(·), Ĉ(·) and D(·) denote respectively the target's area, centroid, predicted centroid, and the distance of the target's bounding rectangle from the image border; H_A, H_C and H_D are the limited matching thresholds of the respective features. In general, H_A is taken as one tenth of the smaller of the two target areas, H_C as half the width of the target's bounding rectangle, and H_D as 5. d denotes the distance between the previous frame's predicted target centroid and the current frame's target centroid; the smaller d is, the higher the degree of overlap.
These three features produce four meaningful matching cases, as follows:
$$m_{ij} = \begin{cases} 0, & \lambda_A = 1 \cap \lambda_C > 0 \\ 1, & \lambda_A = 0 \cap \lambda_C > 0 \\ 2, & \lambda_C = 0 \cap \lambda_D = 1 \\ 3, & \lambda_C = 0 \cap \lambda_D = 0 \end{cases}$$
After m_ij has been computed by the above formula, the feature identification matrix M can be established.
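A sketch of building M, assuming each target/region carries area, centroid, predicted-centroid and border-distance fields (the field names are illustrative):

```python
import numpy as np

def identification_matrix(prev, curr):
    """Encode the four m_ij cases above for N previous targets and P
    current regions; thresholds follow the text (H_A a tenth of the
    smaller area, H_C half the rectangle width, H_D = 5)."""
    M = np.zeros((len(prev), len(curr)), dtype=int)
    for i, t in enumerate(prev):
        for j, r in enumerate(curr):
            h_a = 0.1 * min(t["area"], r["area"])
            h_c = 0.5 * r["bbox"][2]
            lam_a = abs(t["area"] - r["area"]) <= h_a
            dist = np.hypot(t["pred_centroid"][0] - r["centroid"][0],
                            t["pred_centroid"][1] - r["centroid"][1])
            lam_c = dist <= h_c
            lam_d = t["border_dist"] <= 5      # target near an entry/exit zone
            if lam_c:
                M[i, j] = 0 if lam_a else 1    # 0: ideal, 1: centroid only
            else:
                M[i, j] = 2 if lam_d else 3    # 2: no match near border,
                                               # 3: no match in the interior
    return M
```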
(2) Dividing the moving-target states
For the target state space S, the feature recognition algorithm is as follows:
Fresh target appearance state S₀: a fresh target means that a currently extracted target does not coincide with any existing target feature. The recognizer corresponding to S₀ is: M satisfies, for some i₀ and j₀,

$$\left(m_{i_0 j_0}=0\right)\cap\left(m_{i_0 j}\neq 0\right)\cap\left(m_{i j_0}\neq 0\right)\quad\text{or}\quad\left(m_{i_0 j_0}=1\right)\cap\left(m_{i_0 j}\in\{2,3\}\right)\cap\left(m_{i j_0}\in\{2,3\}\right)$$

where i = 1, 2, …, N, i ≠ i₀ and j = 1, 2, …, P, j ≠ j₀.
Target splitting state S₂: target splitting has two possibilities, one being that the target really splits, the other being that the target is partially occluded by the background; both kinds of splitting cause several targets in the current frame to correspond to the same target of the previous frame. The recognizer corresponding to S₂ is: M satisfies, for some i₀ and j_t,

$$\left(m_{i_0 j_t}=1\right)\cap\left(m_{i_0 j}\in\{2,3\}\right)\cap\left(m_{i j_t}\in\{2,3\}\right)$$

where t = 1, 2, …, h, h being the number of current-frame targets matching the previous-frame target, i = 1, 2, …, N, i ≠ i₀, and j = 1, 2, …, P, j ≠ j_t.
Target disappearance state S₄: target disappearance is similar to target appearance, in that no target in the scene matches the target of the previous frame. The recognizer corresponding to S₄ is: M satisfies, for some i₀,

$$m_{i_0 j}\in\{2,3\}\quad\text{for all } j = 1, 2, \ldots, P.$$
Complex occlusion, i.e., the situation where n targets of one frame correspond to m targets of the previous frame, is converted into the five cases above by computing the degree of centroid matching, thereby avoiding ambiguity and judging the motion states of the targets better.
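A simplified reading of these matrix-based recognizers; it treats m_ij ∈ {0, 1} as a match and ignores the complex-occlusion refinement:

```python
import numpy as np

def classify_states(M: np.ndarray):
    """Read target states off the identification matrix M: a target row
    with several matches suggests a split (S2), a row with none a
    disappearance (S4), an unmatched region column a fresh target (S0)."""
    N, P = M.shape
    states = {"new": [], "split": [], "vanished": []}
    matched = M <= 1                       # m_ij in {0, 1} means a match
    for i in range(N):
        hits = int(matched[i].sum())
        if hits == 0:
            states["vanished"].append(i)   # S4: no region matches target i
        elif hits > 1:
            states["split"].append(i)      # S2: several regions match target i
    for j in range(P):
        if not matched[:, j].any():
            states["new"].append(j)        # S0: region j matches no target
    return states
```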
(3) Multi-target tracking based on motion-state analysis
The tracking of a moving target is closely related to its current motion state: different motion states correspond to different tracking processes and prediction strategies, which is the only way to maintain reliable tracking of multiple targets in the scene.
When a target is in the fresh-target state, it is necessary to judge whether it is at a position in the background where vehicles can enter and exit. If so, it is considered a new target: a new target information structure (A, C, R, D) is generated, representing respectively the vehicle's area, centroid, bounding rectangle, and its position feature relative to the border; the prediction information of this target is then initialized, and the target is put into the target information list. When the target has been tracked stably for more than T frames, it is confirmed and given a new label. If the target's position does not fit, it is judged to be a partial segmentation piece of a previously appearing target occluded by the background or by other surrounding targets in the scene, or else to be noise.
Ideal tracking is the most common state; once this state is determined, the current target features can be used to update the previous frame's target features. Taking the area parameter as an example, its update formula is:
$$A_{i_0}^{k+1} = (1-\alpha)\,\hat{A}_{i_0}^{k} + \alpha\,\tilde{A}_{i_0}^{k+1}$$

where A_{i₀}^{k+1}, Â_{i₀}^k and Ã_{i₀}^{k+1} denote respectively the updated area of target i₀ at frame k+1, its predicted area at frame k, and its corresponding measured area at frame k+1. α is the update factor and controls the update speed: when m_ij = 0, α can take a slightly larger value; when m_ij = 1, its value must be reduced to slow the update. The mean-change method is adopted for predicting the area, while the Kalman filter method is adopted for predicting the centroid and the border distance.
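A sketch of the constant-velocity Kalman prediction of the centroid with cv2.KalmanFilter; the noise covariances are illustrative assumptions:

```python
import cv2
import numpy as np

def make_centroid_filter(x0: float, y0: float) -> cv2.KalmanFilter:
    """Constant-velocity Kalman filter over the state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    kf.statePost = np.array([[x0], [y0], [0], [0]], dtype=np.float32)
    return kf

# per frame: predicted = kf.predict()[:2]; after a successful match:
# kf.correct(np.array([[cx], [cy]], dtype=np.float32))
```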
The target splitting state requires further discrimination between its two sub-cases. In a real target split, the separating tendency gradually strengthens, the area and bounding rectangle change significantly, and the split is continuous and steady. In a split caused by background occlusion, the tendency does not keep growing, the area and bounding rectangle do not increase noticeably, and the split is unstable. To discriminate, the multiple targets of frame k+1 are first merged, and the area and bounding rectangle of the merged region are computed; if the area is smaller than the target's area in frame k and the bounding rectangle has not expanded noticeably, the split is due to background occlusion, in which case the features of the merged region are used to update the target information, with updating and prediction the same as in the ideal tracking state. If the merged bounding rectangle expands obviously and the inter-frame distance between the target centroids grows, the target may be splitting; in that case tracking is maintained on the one hand, while on the other the split sub-targets are put into the alternate target information list and kept updated and predicted. When the split persists for a certain time, or the separation distance exceeds a certain threshold, the split is confirmed: the former target is deleted, and the split sub-targets are put into the target information list and given new labels.
The target merging state requires templates to be set up for the sub-targets before the merge, i.e., the image information of the targets before merging is saved; the merged region is then treated as a new target, and tracking is maintained during the merge. After the merge ends, the template information is compared with each target's information from before the merge, so that the targets before and after the merge are put in correspondence. When the merge duration exceeds a certain threshold, the merge is confirmed: the merged region is treated as a common moving body, a fresh target is generated for it, and the templates and the pre-merge sub-targets are deleted.
The target disappearance state also has two cases. When m_ij = 2, the target is at the image border or at another position where targets can enter and exit, so the disappearance is normal; its information and last observed position are kept, and when the disappearance time exceeds a certain threshold the disappearance is confirmed and the target is deleted from the target information list. When m_ij = 3, the disappearance is abnormal and the target should be completely occluded by the background; its information is kept and the motion prediction speed is slowed, and when the target reappears its label is restored and its information updated. If the target disappears abnormally for a long time, its information is recorded and its tracking is interrupted.
In step 7, on the basis of the target tracking, the motion of each vehicle over a certain period can be derived from the information recorded for the same vehicle target, from which that vehicle's speed can be calculated. The mean of the vehicle speeds is then the speed parameter of the current road in the image; after a threshold is set, congestion versus free flow can be judged, and this threshold is adjustable.
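A sketch of this speed computation, assuming an image-to-ground calibration (METERS_PER_PIXEL) and a frame rate that the patent does not specify:

```python
import numpy as np

METERS_PER_PIXEL, FPS = 0.05, 25.0    # calibration assumptions

def vehicle_speed_kmh(trajectory) -> float:
    """trajectory: list of per-frame (x, y) centroids for one vehicle."""
    pts = np.asarray(trajectory, dtype=float)
    if len(pts) < 2:
        return 0.0
    dist_px = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    seconds = (len(pts) - 1) / FPS
    return dist_px * METERS_PER_PIXEL / seconds * 3.6   # m/s -> km/h

def road_speed_parameter(trajectories) -> float:
    """Mean of all vehicle speeds: the road's speed parameter."""
    return float(np.mean([vehicle_speed_kmh(t) for t in trajectories]))
```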
With the method of the present invention, the traffic condition of a monitored road section can be automatically discriminated in real time from the video monitoring images obtained, making use of the traffic authority's existing video monitoring resources. Starting from the technical levels of background modeling, foreground extraction, vehicle identification and multi-target tracking, the surveillance video is classified and analyzed in detail, and finally the traffic state of roads equipped with monitoring cameras can be discriminated by setting the relevant parameter values. The discrimination accuracy of the present invention for road traffic states is above 90%. Once congestion is determined, vehicles can be guided to detour, reducing the duration of road congestion and helping to improve the urban traffic environment. The methods and techniques adopted in the present invention can all be readily realized in software, satisfy the conditions of use, and are easy to popularize in medium and large cities nationwide.

Claims (5)

1. A method for automatically distinguishing road traffic states based on video monitoring images, comprising the steps of:
A. obtaining video monitoring images;
B. for the video monitoring images within a first time period, establishing a time-space domain background model of the video monitoring image based on a mixture-of-Gaussians model;
C. calculating the similarity between the current image and the time-space domain background model, and extracting foreground information based on that similarity;
D. using connected-domain (Blob) analysis to extract the characteristic information of each connected region in the foreground information;
E. performing vehicle identification using the distance and area information of the connected domains; if vehicles are present, saving the vehicle feature information and going to step F; if no vehicle is present, updating the background model with the current image, outputting "unimpeded", and going to step J;
F. updating the background model with the background information from which vehicle information has been removed;
G. within the first time period, establishing a feature identification matrix from the matching results between existing target features and the target features of the current image, and judging the motion state of each target: if a fresh target is recognized, creating its target information and tracking it by predicting its centroid; if a target is found to be in the ideal tracking state, continuing to track it by predicting its centroid; if a target disappears, deleting its target information;
H. calculating the average speed of the different targets within the first time period, and taking the mean of the average speeds of all targets in the image as the speed parameter of the current road;
I. judging the road traffic state against a configured speed threshold: outputting "congested" if the speed parameter is below the threshold, and "unimpeded" if it is greater than or equal to the threshold;
J. counting the congestion signals output, and judging whether a second time period has ended; if it has, going to step K, otherwise returning to step A;
K. counting the number of congestion outputs within the second time period; if the number exceeds a predetermined value, reporting that the monitored road section is congested;
wherein step D specifically comprises:
D1. removing the influence of noise in the foreground information by morphological operations;
D2. transforming the targets from the pixel level to the connected-component level, using the dilation operator to fill small holes in the target regions, and then mapping the result back onto the initial foreground point set to recover the intrinsic edges of the foreground image;
D3. counting the number of connected domains in the image and labeling each connected component;
D4. extracting the area, perimeter, centroid position and bounding rectangle information of each connected domain.
2. The method of claim 1, wherein step B, establishing the time-space domain background model of the video monitoring image based on the mixture-of-Gaussians model, specifically comprises:
B1. establishing the time-domain background model of each pixel with the mixture-of-Gaussians background modeling method;
B2. adaptively selecting the number of Gaussian components, specifically comprising:
B21. at initialization, setting only one Gaussian component in the mixture model of each pixel of the video monitoring image scene;
B22. when the video monitoring image scene changes and the mixture model of a pixel cannot match the current pixel value, if the number of Gaussian components in that pixel's mixture model has not reached the set maximum, automatically adding an initial Gaussian component whose mean is the current value, and otherwise replacing the last-ranked Gaussian component in the pixel's mixture model with a new component whose mean is the current pixel value;
B23. after the model update is finished, judging whether the last Gaussian component in each pixel's mixture model has expired, and deleting it if so.
3. The method of claim 2, wherein step E specifically comprises:
E1. preliminarily screening the connected domains in the search region: Blobs whose area is below a certain threshold are placed outside consideration;
E2. performing region clustering: aggregating multiple connected domains with similarity into one vehicle based on centroid position and the upper/lower boundary conditions of the bounding rectangles;
E3. obtaining the area, bounding rectangle characteristics and histogram information of the aggregated vehicle;
E4. saving the obtained vehicle feature information as target information.
4. The method of claim 3, wherein step G further comprises the steps of:
G11. judging whether target splitting occurs; if so, proceeding to G12, otherwise exiting split observation;
G12. maintaining tracking, putting the split sub-targets into an alternate target information list, and keeping them updated and predicted;
G13. if the split persists for a predetermined time, confirming the split, deleting the former target, putting the split sub-targets into the target information list, assigning them new labels, and tracking them.
5. The method of claim 3, wherein step G further comprises the steps of:
G21. judging whether targets merge; if so, saving the image information of the targets before merging, then treating the merged region as a new target and maintaining tracking during the merge;
G22. when the merge duration exceeds a certain threshold, confirming the merge, generating a fresh target for the merged region as a common moving body, and deleting the templates and the sub-targets from before the merge.
CN2009103112262A 2009-12-11 2009-12-11 Video monitoring image based method for automatically distinguishing traffic states of roads Expired - Fee Related CN101729872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009103112262A CN101729872B (en) 2009-12-11 2009-12-11 Video monitoring image based method for automatically distinguishing traffic states of roads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009103112262A CN101729872B (en) 2009-12-11 2009-12-11 Video monitoring image based method for automatically distinguishing traffic states of roads

Publications (2)

Publication Number Publication Date
CN101729872A CN101729872A (en) 2010-06-09
CN101729872B true CN101729872B (en) 2011-03-23

Family

ID=42449950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009103112262A Expired - Fee Related CN101729872B (en) 2009-12-11 2009-12-11 Video monitoring image based method for automatically distinguishing traffic states of roads

Country Status (1)

Country Link
CN (1) CN101729872B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964113A (en) * 2010-10-02 2011-02-02 上海交通大学 Method for detecting moving target in illuminance abrupt variation scene
CN102073851B (en) * 2011-01-13 2013-01-02 北京科技大学 Method and system for automatically identifying urban traffic accident
CN102314691A (en) * 2011-06-30 2012-01-11 北京平安视讯科技有限公司 Background model based on multiple information integration
CN102409599B (en) * 2011-09-22 2013-09-04 中国科学院深圳先进技术研究院 Road surface detection method and system
CN102592125A (en) * 2011-12-20 2012-07-18 福建省华大数码科技有限公司 Moving object detection method based on standard deviation characteristic
DE102012204542A1 (en) * 2012-03-21 2013-09-26 Bayerische Motoren Werke Aktiengesellschaft Method and device for determining a traffic condition
CN102999918B (en) * 2012-04-19 2015-04-22 浙江工业大学 Multi-target object tracking system of panorama video sequence image
CN102768801B (en) * 2012-07-12 2014-08-06 复旦大学 Method for detecting motor vehicle green light follow-up traffic violation based on video
CN103049738B (en) * 2012-12-07 2016-01-20 北京中邮致鼎科技有限公司 Many Method of Vehicle Segmentations that in video, shade connects
CN103559498A (en) * 2013-09-24 2014-02-05 北京环境特性研究所 Rapid man and vehicle target classification method based on multi-feature fusion
CN103491351A (en) * 2013-09-29 2014-01-01 东南大学 Intelligent video monitoring method for illegal buildings
CN103546726B (en) * 2013-10-28 2017-02-08 东南大学 Method for automatically discovering illegal land use
CN105809956B (en) * 2014-12-31 2019-07-12 大唐电信科技股份有限公司 The method and apparatus for obtaining vehicle queue length
WO2017028012A1 (en) * 2015-08-14 2017-02-23 富士通株式会社 Traffic jam condition detection device and method
CN105405127B (en) * 2015-10-30 2018-06-01 长安大学 A kind of highway minibus speed of service Forecasting Methodology
CN105632171A (en) * 2015-12-29 2016-06-01 安徽海兴泰瑞智能科技有限公司 Traffic road condition video monitoring method
CN105654064A (en) * 2016-01-25 2016-06-08 北京中科慧眼科技有限公司 Lane line detection method and device as well as advanced driver assistance system
CN106203451A (en) * 2016-07-14 2016-12-07 深圳市唯特视科技有限公司 A kind of image area characteristics extracts and the method for characteristic matching
WO2018068311A1 (en) * 2016-10-14 2018-04-19 富士通株式会社 Background model extraction device, and method and device for detecting traffic congestion
CN107220983B (en) * 2017-04-13 2019-09-24 中国农业大学 A kind of live pig detection method and system based on video
CN108108664A (en) * 2017-11-30 2018-06-01 江西洪都航空工业集团有限责任公司 A kind of city management monitoring system based on multi-target detection with classification
CN108806282B (en) * 2018-06-01 2020-09-04 浙江大学 Lane group maximum queuing length estimation method based on sample travel time information
CN111489545B (en) * 2019-01-28 2023-03-31 阿里巴巴集团控股有限公司 Road monitoring method, device and equipment and storage medium
CN111613052B (en) * 2019-02-25 2022-03-04 北京嘀嘀无限科技发展有限公司 Traffic condition determining method and device, electronic equipment and storage medium
CN110084115A (en) * 2019-03-22 2019-08-02 江苏现代工程检测有限公司 Pavement detection method based on multidimensional information probabilistic model
CN111833376A (en) * 2019-04-23 2020-10-27 上海富瀚微电子股份有限公司 Target tracking system and method
CN110619645B (en) * 2019-09-25 2022-11-25 上海海瞩智能科技有限公司 Automatic identification and positioning device and method for container towing bracket under bridge crane
CN111754790B (en) * 2020-06-04 2021-11-26 南京慧尔视智能科技有限公司 Ramp entrance traffic control system and method based on radar
CN113408432B (en) * 2021-06-22 2022-08-16 讯飞智元信息科技有限公司 Image-based traffic jam identification method, device and equipment
CN114119674B (en) * 2022-01-28 2022-04-26 深圳佑驾创新科技有限公司 Static target tracking method and device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1466110A (en) * 2002-07-04 2004-01-07 深圳市哈工大交通电子技术有限公司 Video traffic dynamic information collecting equipment
CN1804927A (en) * 2005-12-28 2006-07-19 浙江工业大学 Omnibearing visual sensor based road monitoring apparatus
CN101004860A (en) * 2006-11-30 2007-07-25 复旦大学 Video method for collecting information of vehicle flowrate on road in real time
CN101587646A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Method and system of traffic flow detection based on video identification technology


Also Published As

Publication number Publication date
CN101729872A (en) 2010-06-09

Similar Documents

Publication Publication Date Title
CN101729872B (en) Video monitoring image based method for automatically distinguishing traffic states of roads
CN101794382B (en) Method for counting passenger flow of buses in real time
He et al. Obstacle detection of rail transit based on deep learning
CN104978567B (en) Vehicle checking method based on scene classification
CN102156983A (en) Pattern recognition and target tracking based method for detecting abnormal pedestrian positions
CN102592138B (en) Object tracking method for intensive scene based on multi-module sparse projection
CN111462488A (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN105957356B (en) A kind of traffic control system and method based on pedestrian&#39;s quantity
CN108229256B (en) Road construction detection method and device
CN104134068B (en) Monitoring vehicle characteristics based on sparse coding represent and sorting technique
CN105469425A (en) Video condensation method
CN101447082A (en) Detection method of moving target on a real-time basis
Chetouane et al. Vision‐based vehicle detection for road traffic congestion classification
CN105844229A (en) Method and system for calculating passenger crowdedness degree
CN104680542A (en) Online learning based detection method for change of remote-sensing image
CN103136534A (en) Method and device of self-adapting regional pedestrian counting
CN103679214A (en) Vehicle detection method based on online area estimation and multi-feature decision fusion
CN102867183A (en) Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system
Rabbouch et al. A vision-based statistical methodology for automatically modeling continuous urban traffic flows
CN107452212B (en) Crossing signal lamp control method and system
Bao A multi-index fusion clustering strategy for traffic flow state identification
Lin et al. A deep learning framework for video-based vehicle counting
CN104077788B (en) Moving object detection method fusing color and texture information for performing block background modeling
Shi et al. Learning for an aesthetic model for estimating the traffic state in the traffic video
Muniruzzaman et al. Deterministic algorithm for traffic detection in free-flow and congestion using video sensor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: 210001, 16 floor, Tong Cheng Building, 501 South Zhongshan Road, Qinhuai District, Jiangsu, Nanjing

Patentee after: Nanjing Beidou Chengji On-line Information Technology Co., Ltd.

Address before: Building No. 20-1 Yuhuatai road flora read City District of Nanjing City, Jiangsu province 210012 1 floor

Patentee before: NANJING INTERCITY ONLINE INFORMATION TECHNOLOGY CO., LTD.

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Video monitoring image based method for automatically distinguishing traffic states of roads

Effective date of registration: 20191115

Granted publication date: 20110323

Pledgee: Bank of China Limited Nanjing Gulou Branch

Pledgor: Nanjing Beidou Chengji On-line Information Technology Co., Ltd.

Registration number: Y2019320000284

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110323

Termination date: 20201211