CN101470809B - Moving object detection method based on an extended Gaussian mixture model - Google Patents

Moving object detection method based on an extended Gaussian mixture model

Info

Publication number
CN101470809B
CN101470809B (application CN2007103042222A)
Authority
CN
China
Prior art keywords
model
moving target
foreground
background
extended
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2007103042222A
Other languages
Chinese (zh)
Other versions
CN101470809A (en)
Inventor
Tan Tieniu (谭铁牛)
Huang Kaiqi (黄凯奇)
Liu Zhou (刘舟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN2007103042222A
Publication of CN101470809A
Application granted
Publication of CN101470809B
Legal status: Expired - Fee Related

Abstract

The invention relates to a moving object detection method based on an extended Gaussian mixture model. The method comprises the following steps: a first-level modeling module constructs probability density functions for shadow, background, and foreground based on the extended Gaussian mixture model; a second-level modeling module constructs probability density functions for moving and non-moving targets from those three models; a classification module labels pixels by applying a MAP-MRF (Maximum a Posteriori - Markov Random Field) method; and feedback information from tracking further refines the foreground model. By merging spatial information into the Gaussian mixture model, the method overcomes the false foreground detections caused by background motion; by fusing background modeling, foreground detection, and shadow removal in one probabilistic framework, it overcomes the adverse influence of shadows, thereby improving the detection of moving objects.

Description

A moving object detection method based on an extended Gaussian mixture model
Technical field
The invention belongs to the field of pattern recognition and relates to techniques such as image processing and computer vision, in particular to moving object detection in video.
Background technology
In computer vision, one of the most fundamental problems is how to obtain high-level semantic understanding from low-level raw video data. Current research on intelligent video surveillance, both domestic and international, concentrates mainly on camera calibration, multi-camera fusion, and the visual analysis of moving objects. Among these, the visual analysis of moving objects is one of the most active research topics in computer vision. Its core is to use computer vision techniques to detect, track, and recognize moving objects (such as people and vehicles) in image sequences and to understand and describe their behavior; it has broad application prospects in fields such as virtual reality, video surveillance, and perceptual interfaces. A visual analysis system for moving objects generally comprises the following four processing stages, as shown in Figure 1: 1) motion detection; 2) object classification; 3) object tracking; 4) behavior understanding and description.
The detection and extraction of moving objects and the tracking of target objects are two core, low-level problems in the visual analysis of moving objects. They form the basis of various subsequent high-level processing stages and applications, such as object classification and behavior recognition, event detection, behavior analysis, video compression coding, and semantic indexing of video; they are also the key to making video surveillance automated and real-time. At the same time, they are focal points of current image-technology research and application.
Background modeling is widely used in object detection because it provides more information about the moving target than other methods (optical flow, frame differencing). In real application scenarios, however, dynamic backgrounds and shadows both produce unwanted false detections and thus cause moving object detection to fail.
The most common background modeling method at present is the Gaussian mixture model (GMM). However, this method is pixel-based and does not merge spatial information, so when the background moves violently it produces many false detections, as shown in Fig. 2(b), the detection result of the conventional GMM method. Because of the strenuous background motion, the method produces many foreground false detections (the white points in Fig. 2(b)), caused by the motion of the neighboring background. Moreover, the number of Gaussians in the model must be fixed in advance, which limits further application of the method. Many shadow removal algorithms exist at present, but they are all applied to object detection as independent modules: for example, after the GMM detects the foreground targets, a shadow removal algorithm is applied to remove the shadows from the foreground, as in the flowchart of Figure 3. In this structure the output of "background modeling" directly determines the result of "shadow removal"; if the background modeling result is poor, the shadow removal result necessarily degrades as well. The present invention is based on an extended Gaussian mixture model: by constructing the probability density functions of background, foreground, and shadow, it fuses dynamic background modeling, foreground detection, and shadow removal in one probabilistic framework, as shown in Figure 4. This framework overcomes the shortcomings, illustrated in Figure 3, of treating "background modeling" and "shadow removal" as independent modules. At the same time, the extended Gaussian mixture model merges spatial information, and the number of its Gaussian components can be determined dynamically at run time, thereby overcoming the foreground false detections that the traditional Gaussian mixture model causes by not merging spatial information.
Summary of the invention
Existing object detection techniques usually treat background modeling, foreground detection, and shadow removal as independent modules, and therefore find it difficult to achieve a good classification result. The purpose of this invention is to provide a moving object detection method based on an extended Gaussian mixture model: by constructing the probability density functions of background, foreground, and shadow, the method fuses dynamic background modeling, foreground detection, and shadow removal in one probabilistic framework. At the same time, the extended Gaussian mixture model merges spatial information, and the number of its Gaussian components can be determined dynamically at run time.
To achieve these goals, the moving object detection method based on the extended Gaussian mixture model provided by the invention comprises the following steps:
A first-level modeling module constructs the probability density functions of shadow, background, and foreground based on the extended Gaussian mixture model;
A second-level modeling module constructs the probability density functions of moving and non-moving targets based on the above three models;
A classification module classifies pixels by applying the MAP-MRF (Maximum a Posteriori - Markov Random Field) method;
Feedback information from tracking is applied to further refine the foreground model.
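The second-level decision implied by these steps (made explicit later in the description as f(x|m) = f(x|f) and f(x|nm) = max(f(x|sh), f(x|b))) can be sketched as follows; this is our own illustrative code, not the patent's implementation, and the full method additionally applies the MAP-MRF spatial prior on top of this pointwise comparison:

```python
# Minimal sketch of the second-level decision: the three first-level
# likelihoods f(x|b), f(x|f), f(x|sh) collapse into a two-class
# moving / non-moving comparison. Function names are illustrative.

def classify(p_background, p_foreground, p_shadow):
    """Collapse the three-class problem into moving vs. non-moving."""
    p_moving = p_foreground                     # f(x|m)  = f(x|f)
    p_nonmoving = max(p_shadow, p_background)   # f(x|nm) = max(f(x|sh), f(x|b))
    return "moving" if p_moving > p_nonmoving else "non-moving"
```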
Further, the step of constructing the probability density functions of shadow, background, and foreground comprises:
Building the background model, based on the extended Gaussian mixture model, under the assumption that any given grid cell is covered by background most of the time;
Building the foreground model, based on the extended Gaussian mixture model, under the assumption that if a grid cell detects a foreground sample at some moment, then at the next moment the probability that the grid cell along the velocity direction detects a foreground sample of similar color increases;
Building the shadow model, based on the extended Gaussian mixture model, under the assumption that, for the same grid cell, the shadow features caused by different moving targets are similar.
Further, the step of constructing the probability density functions of moving and non-moving targets comprises:
Building the non-moving target model on the basis that non-moving targets comprise the features of shadow and background;
Building the moving target model on the basis that foreground comprises the features of moving targets.
Further, background modeling, moving object detection, and shadow removal are handled simultaneously in one probabilistic framework.
Further, the extended Gaussian mixture model incorporates spatial information, and the number of Gaussian components in the model is determined dynamically.
The beneficial effects of the invention are: by merging spatial information into the Gaussian mixture model, the foreground false detections caused by background motion can be overcome; by fusing background modeling, foreground detection, and shadow removal in one probabilistic framework, the adverse effect of shadows can be overcome, thereby improving the moving object detection result. The improved moving target information can then be better applied in subsequent stages of visual surveillance such as classification, tracking, and image compression.
Description of drawings
Fig. 1 illustrates the general processing stages of a visual analysis system;
Fig. 2(a) is an original image;
Fig. 2(b) is the detection result of the traditional Gaussian mixture model method;
Fig. 2(c) is a detection result of the present invention;
Fig. 3 is the object detection flowchart of the classic method;
Fig. 4 is the system flowchart of the present invention;
Fig. 5(a) is the learning result of the traditional Gaussian mixture model with the number of Gaussians set to 4;
Fig. 5(b) is the learning result of the traditional Gaussian mixture model with the number of Gaussians set to 5;
Fig. 5(c) is a learning result of the present invention;
Fig. 6(a) is an original image;
Fig. 6(b) is a moving object detection result of the present invention;
Fig. 7(a) shows gray-level histograms of different grid cells in an outdoor environment;
Fig. 7(b) shows gray-level histograms of different grid cells in an indoor environment.
Embodiment
The detailed problems involved in the technical solution of the present invention are described below in conjunction with the accompanying drawings. Note that the described embodiments are intended only to facilitate understanding of the invention and do not limit it in any way.
Moving object detection plays a very important role for subsequent stages of surveillance, such as tracking and recognition. Based on an extended Gaussian mixture model, the present invention realizes a moving object detection method that effectively handles the influence of dynamic scenes and shadows. Figure 4 shows the system flowchart of the method. The steps are: based on the extended Gaussian mixture model, construct the probability density functions (models) of background, foreground, and shadow; then, by constructing the probability density functions (models) of moving and non-moving targets, convert the three-class classification problem into a two-class one; then carry out moving object segmentation with the MAP-MRF (Maximum a Posteriori - Markov Random Field) method; finally, use the information from moving target tracking to further refine the foreground probability density function (model).
The minimum hardware configuration required by the method of the present invention is: a computer with a P4 3.0 GHz CPU and 512 MB of memory; a surveillance camera with a minimum resolution of 320 x 240; and a video capture card with a frame rate of 25 frames per second. On hardware of this level, implemented in C++, the method achieves real-time detection.
The concrete implementation of the present invention is as follows:
1. Constructing the background model based on the extended Gaussian mixture model:
The background model is constructed under the following assumption: any given grid cell is covered by background most of the time. In this section we derive the extended Gaussian mixture model, which differs from the traditional Gaussian mixture model in the following ways: the extended model merges spatial information, and the number of Gaussian components in the model can be determined dynamically during application. The derivation is as follows:
To merge spatial information, we first construct a probability density function by kernel density estimation:

f(x|b) = n^{-1} \sum_{i=1}^{n} K_H(x - y_i); \quad K_H(x) = |H|^{-1/2} K(H^{-1/2} x)   (1)

where y_1, y_2, ..., y_n are the image samples before time t. They are five-dimensional vectors: the first two dimensions are the sample coordinates and the last three are the RGB values of the sample. K is a five-dimensional kernel function and H is a positive-definite symmetric matrix. Assuming that the coordinates and the RGB values of a sample are independent of each other:

f(x|b) = n^{-1} \sum_{i=1}^{n} K_{H_s}(s - s_i) K_{H_c}(c - c_i)   (2)

where s_i and c_i are the coordinate component and RGB component of sample y_i. Formula (2) can obviously be rewritten as

f(x|b) = n^{-1} \sum_{i=1}^{CN} K_{H_c}(c - c_i) \Big( \sum_{j=1}^{SN_i} K_{H_s}(s - s_j) \Big)   (3)

where CN is the size of the RGB value range and SN_i is the number of samples whose RGB component equals c_i. We then simplify the coordinate component of the samples by a method similar to the binned kernel density estimator (document [1], P. Hall and M. P. Wand, "On the accuracy of binned kernel density estimators," J. Multivariate Analysis, 1995), obtaining

f(x|b) \approx n^{-1} \sum_{i=1}^{CN} K_{H_c}(c - c_i) \Big( \sum_{j=1}^{BN} N_{ij} K_{H_s}(s - g_j) \Big); \quad N_{ij} = \sum_{a=1}^{SN_i} \omega_j(s_a, \delta)   (4)

where g_j is the center coordinate of the j-th grid cell, BN is the number of grid cells in the frame, δ is the grid width, and ω_j(s_a, δ) is the contribution of sample s_a to the j-th grid cell. Rearranging equation (4), we obtain:

f(x|b) \approx \sum_{j=1}^{BN} \frac{N_j}{n} K_{H_s}(s - g_j) \Big( \sum_{i=1}^{CN} \frac{N_{ij}}{N_j} K_{H_c}(c - c_i) \Big)   (5)

where N_j = \sum_{i=1}^{CN} N_{ij}. We then take

\omega_j(s_a, \delta) = 1, if ||s_a - g_j||_\infty < \delta/2; \quad 0, otherwise   (6)

where || · ||_∞ is the infinity norm. Clearly N_j is then the number of samples falling into the j-th grid cell, of which N_{ij} samples have RGB value equal to c_i. We can rewrite equation (5) as:

f(x|b) \approx \sum_{j=1}^{BN} c_b K_{H_s}(s - g_j) \Big( \frac{1}{N_j} \sum_{z=1}^{N_j} K_{H_c}(c - c_{jz}) \Big)   (7)

where c_{jz} is the RGB component of the z-th sample in the j-th grid cell and c_b is a constant whose value equals N_j / n. Obviously, the expression

\frac{1}{N_j} \sum_{z=1}^{N_j} K_{H_c}(c - c_{jz})

is the marginal distribution of the RGB components of the samples falling into the j-th grid cell. At this point we assume that the RGB components of the samples in the j-th grid cell follow a Gaussian mixture distribution, so:

f(x|b) \approx \sum_{j=1}^{BN} c_b K_{H_s}(s - g_j) \Big( \sum_{i=1}^{M_j} \omega_{ji} G_{\sigma_{ji}}(c - \mu_{ji}) \Big)   (8)

where M_j is the number of Gaussian components in the mixture describing the j-th grid cell, G_σ(·) is a Gaussian function with variance σ, and ω_{ji} is the weight of the i-th Gaussian in the mixture. From formula (8) we can see that spatial information is merged in two ways: 1) the Gaussian mixture is based on grid cells rather than on pixels; 2) K_{H_s}(s - g_j) links a sample to its neighboring grid cells. Fig. 7(a) and Fig. 7(b) show gray-level histograms of grid cells in different environments and positions. Fig. 7(a) is an outdoor scene: histograms a, b, and c show the gray-value distribution curves over time of the pixels in grid cells A, B, and C of the original image. Fig. 7(b) is an indoor scene: histograms d, e, and f show the gray-value distribution curves over time of the pixels in grid cells D, E, and F of the original image. The horizontal axis of each histogram is the pixel gray value and the vertical axis is the number of pixels. From Fig. 7(a) and Fig. 7(b) it can be seen that these distributions can be described by Gaussian mixtures.
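As a concrete illustration of equation (8), the sketch below evaluates the background likelihood of an observation x = (s, c) from per-cell colour mixtures. All function and variable names are our own, the spatial kernel is an unnormalised Gaussian, and a real implementation would restrict the sum to the few cells near s:

```python
import math

def gaussian3(c, mu, sigma):
    """Isotropic 3-D Gaussian density G_sigma(c - mu) over an RGB vector."""
    d2 = sum((a - b) ** 2 for a, b in zip(c, mu))
    return math.exp(-d2 / (2.0 * sigma ** 2)) / ((2.0 * math.pi * sigma ** 2) ** 1.5)

def background_likelihood(s, c, grid_models, h_s, c_b=1.0):
    """f(x|b) per equation (8): a spatial Gaussian kernel K_Hs over the
    grid centres g_j weights each cell's colour mixture.
    grid_models maps centre (gx, gy) -> [(w_ji, mu_ji, sigma_ji), ...]."""
    total = 0.0
    for g, mixture in grid_models.items():
        ds2 = (s[0] - g[0]) ** 2 + (s[1] - g[1]) ** 2
        k_spatial = math.exp(-ds2 / (2.0 * h_s ** 2))  # unnormalised K_Hs
        colour = sum(w * gaussian3(c, mu, sg) for w, mu, sg in mixture)
        total += c_b * k_spatial * colour
    return total
```

A colour close to a nearby cell's mixture mean scores much higher than an unseen colour, which is exactly the property the foreground/background comparison relies on.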
From Fig. 7(a) and Fig. 7(b) we can also see that, even within the same scene, the number of Gaussians required to describe the distribution differs with the position of the grid cell. Below we determine the number of Gaussians in the mixture dynamically, using rules for merging and removing Gaussians.
When a Gaussian in the mixture model is updated, we check whether any other Gaussian in the same model needs to be merged with it.
Considering that in background modeling each Gaussian has equal variance along every component, we construct two one-dimensional Gaussians to replace the original three-dimensional (R, G, B) Gaussians; each newly constructed Gaussian keeps the variance, weight, and center distance of the original. When the two newly constructed (weighted) Gaussians satisfy either of the following conditions, we merge the original two (weighted) Gaussians:
1. The two (weighted) Gaussians have no intersection point. This means one (weighted) Gaussian is entirely covered by the other, so a merge is needed.
2. The two (weighted) Gaussians have an intersection point, and the distance from the intersection point to either Gaussian's center is less than a specified threshold; in that case a merge is needed.
When two (weighted) Gaussians satisfy the merging condition, we merge them by the following formulas:

\mu_{new} = \frac{\sum_{i=1}^{N} x_i + \sum_{i=1}^{M} y_i}{M + N} = \frac{(N/P)\mu_1 + (M/P)\mu_2}{N/P + M/P} \approx \frac{1}{\omega_1 + \omega_2}(\omega_1 \mu_1 + \omega_2 \mu_2)

\Sigma_{new} = \frac{\sum_{i=1}^{N} x_i x_i^T + \sum_{i=1}^{M} y_i y_i^T}{N + M} - \mu_{new}\mu_{new}^T \approx \frac{\omega_1 \Sigma_1}{\omega_1 + \omega_2} + \frac{\omega_2 \Sigma_2}{\omega_1 + \omega_2} + \frac{\omega_1 \omega_2 (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T}{(\omega_1 + \omega_2)^2}

where the merged Gaussian is denoted by (\mu_{new}, \Sigma_{new}) and the original Gaussians by (\mu_1, \Sigma_1) and (\mu_2, \Sigma_2). Considering that the merged Gaussian should also have equal variance along all directions, we use

\sigma_{new}^2 = \frac{\omega_1 \sigma_1^2}{\omega_1 + \omega_2} + \frac{\omega_2 \sigma_2^2}{\omega_1 + \omega_2} + \frac{\omega_1 \omega_2 (\mu_1 - \mu_2)^T(\mu_1 - \mu_2)}{(\omega_1 + \omega_2)^2}

to approximate the variance of the merged Gaussian.
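The merging formulas above amount to moment matching of two weighted Gaussians. A minimal sketch, with our own naming and the isotropic (scalar) variance used in the text:

```python
def merge_gaussians(w1, mu1, var1, w2, mu2, var2):
    """Moment-matching merge of two weighted isotropic Gaussians,
    following mu_new and sigma_new^2 above. mu is a tuple (e.g. RGB),
    var is the scalar variance shared by all components."""
    w = w1 + w2
    # mu_new: weight-averaged mean
    mu = tuple((w1 * a + w2 * b) / w for a, b in zip(mu1, mu2))
    # (mu1 - mu2)^T (mu1 - mu2)
    d2 = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    # sigma_new^2: within-component variance plus between-means spread
    var = (w1 * var1 + w2 * var2) / w + (w1 * w2 * d2) / (w * w)
    return w, mu, var
```

Merging two identical Gaussians leaves the parameters unchanged, while merging Gaussians with different means inflates the variance by the spread term, as the formula requires.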
The update of the Gaussian parameters is consistent with (document [2], C. Stauffer and W. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. Pattern Analysis and Machine Intelligence, 22:747-757, 2000). When the weight of a Gaussian falls below a threshold, we delete that Gaussian from the model. Fig. 5(a)-Fig. 5(c) compare the learning method that uses the Gaussian merging and deletion strategy with the traditional Gaussian mixture model method. Fig. 5(a) is the learning result of the traditional mixture with the number of Gaussians manually set to 4; Fig. 5(b) is the result with the number manually set to 5; Fig. 5(c) is a learning result of the present invention. From Fig. 5(a)-(c) it can be seen that our method is more stable and its effect is better. From the preceding derivation, the background model is:

f(x|b) \approx \sum_{j=1}^{BN} c_b K_{H_s}(s - g_j) \Big( \frac{1}{N_j} \sum_{z=1}^{N_j} K_{H_c}(c - c_{jz}) \Big)
2. Constructing the foreground model based on the extended Gaussian mixture model:
Under normal conditions, a moving target does not disappear immediately after being detected. The present invention uses this characteristic to build a foreground model and thereby raise the moving target detection rate. The foreground model is built on the following assumption: if a grid cell detects a foreground sample at some moment, then at the next moment the probability that the grid cell along the velocity direction detects a foreground sample of similar color increases. Considering that, before any foreground sample is detected, the probabilities of a grid cell detecting foreground of any color are equal, we build the foreground model as follows:

f(x|f) = \sum_{j=1}^{BN} c_f K_{H_s}(s - g_j) [\omega_{fj} \gamma + (1 - \omega_{fj}) \psi_j]; \quad \psi_j = \sum_{i=1}^{M_j} \omega_{ji} G_{\sigma_{ji}}(c - \mu_{ji})

where ω_{fj} is the weight of the mixture, γ is a uniformly distributed random variable, and c_f = c_b. The expression ω_{fj} γ + (1 - ω_{fj}) ψ_j describes the color distribution of the foreground samples falling into the j-th grid cell.
To use the tracking information, the foreground model is updated as follows: if a grid cell detects foreground, the detected foreground samples are used to update the marginal distribution describing the foreground color values of another grid cell, namely the cell at the position where the currently detected foreground samples are predicted to appear at the next moment.
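The colour term of the foreground model, ω_fj γ + (1 − ω_fj) ψ_j, can be sketched as follows. Names are illustrative, and γ is realised as a uniform density over the RGB cube:

```python
import math

def gaussian3(c, mu, sigma):
    """Isotropic 3-D Gaussian density over an RGB vector."""
    d2 = sum((a - b) ** 2 for a, b in zip(c, mu))
    return math.exp(-d2 / (2.0 * sigma ** 2)) / ((2.0 * math.pi * sigma ** 2) ** 1.5)

def foreground_colour_density(c, w_f, mixture, colour_volume=256 ** 3):
    """Colour term of f(x|f): a mixture of a uniform density gamma (any
    colour equally likely before evidence accrues) and the Gaussian
    mixture psi_j learned from recently detected foreground samples."""
    gamma = 1.0 / colour_volume
    psi = sum(w * gaussian3(c, mu, sg) for w, mu, sg in mixture)
    return w_f * gamma + (1.0 - w_f) * psi
```

With w_f = 1 (no foreground evidence yet) every colour is equally likely; as foreground samples accumulate, w_f shrinks and colours near the learned mixture dominate.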
3. Constructing the shadow model based on the extended Gaussian mixture model:
If only the background and foreground models are used, the detected foreground contains not only moving targets but also shadows. Shadows cause consequences such as multiple moving targets being merged and target contours being distorted, all of which affect the subsequent tracking and recognition stages. To remove shadows while incorporating shadow removal into the general framework, we build a shadow model. The shadow model is built on the following assumption: for the same grid cell, the features of the shadows cast by different moving targets are similar. The shadow model is:

f(x|sh) = \sum_{j=1}^{BN} c_{sh} K_{H_s}(s - g_j) \Big( \sum_{i=1}^{M_j} \omega_{ji} G_{\sigma_{ji}}(c - \mu_{ji}) \Big)

Its form is similar to that of the background model. The expression

\sum_{i=1}^{M_j} \omega_{ji} G_{\sigma_{ji}}(c - \mu_{ji})

describes the distribution of the shadow RGB values of the j-th grid cell. Its update process is similar to the background's, but a sample used for updating must satisfy the following conditions (T_S and T_H denote thresholds):

0 < \frac{I_k^V(x, y)}{B_k^V(x, y)} < 1; \quad |I_k^S(x, y) - B_k^S(x, y)| < T_S; \quad |I_k^H(x, y) - B_k^H(x, y)| < T_H
where I_k^V(x, y), I_k^S(x, y), and I_k^H(x, y) are the V, S, and H color components of the pixel at coordinate (x, y) in the current frame, and B_k^V(x, y), B_k^S(x, y), and B_k^H(x, y) are the V, S, and H color components of the pixel at coordinate (x, y) in the background reference image. The meaning of these formulas is: a shadow lowers the gray value of the background, but it cannot change the S and H components of the background too much. It follows that we need to build a background reference image. The reference image is built by the following formula:
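The shadow-sample condition can be sketched as an HSV test. The thresholds below are illustrative values, not taken from the patent, and a production version would also handle hue wraparound:

```python
import colorsys

def is_shadow_sample(pixel_rgb, ref_rgb, t_s=0.1, t_h=0.1):
    """HSV shadow test per the conditions above: a shadow lowers V
    (ratio strictly between 0 and 1) while leaving S and H nearly
    unchanged. t_s and t_h are assumed threshold values."""
    h_i, s_i, v_i = colorsys.rgb_to_hsv(*(x / 255.0 for x in pixel_rgb))
    h_b, s_b, v_b = colorsys.rgb_to_hsv(*(x / 255.0 for x in ref_rgb))
    if v_b == 0.0:
        return False  # undefined ratio on a black reference pixel
    ratio = v_i / v_b
    return 0.0 < ratio < 1.0 and abs(s_i - s_b) < t_s and abs(h_i - h_b) < t_h
```

A uniformly darkened pixel passes the test, while a pixel that is brighter than the reference or changes saturation (e.g. a red object over a gray background) is rejected.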
B_{t+1}(x, y) = I_0(x, y), if t = 0;
B_{t+1}(x, y) = (1 - \beta_1) B_t(x, y) + \beta_1 I_t(x, y), else if f(x|b) < f(x|f);
B_{t+1}(x, y) = (1 - \beta_2) B_t(x, y) + \beta_2 I_t(x, y), otherwise;

where B_{t+1}(x, y) is the background color value of pixel (x, y) at time t+1, B_t(x, y) is the background color value at time t, I_t(x, y) is the color value of the image at time t, β_1 = 0.0001, and β_2 = 0.1.
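The reference-image update can be sketched per pixel as follows (gray-scale values in flat lists, illustrative names); the foreground mask stands for the condition f(x|b) < f(x|f):

```python
def update_reference(b_prev, frame, foreground_mask, beta1=0.0001, beta2=0.1):
    """Running-average update of the background reference image per the
    formula above: a pixel judged likely foreground is blended in very
    slowly (beta1), a background pixel faster (beta2)."""
    out = []
    for b, i, fg in zip(b_prev, frame, foreground_mask):
        beta = beta1 if fg else beta2  # slow blend where foreground is likely
        out.append((1.0 - beta) * b + beta * i)
    return out
```

The asymmetric rates keep moving objects from being absorbed into the reference while still letting gradual illumination changes through.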
4. Constructing the moving-target and non-moving-target models, and performing moving object segmentation:
In real applications we care about which part is a moving target, not about which part is background and which is shadow. In this section we build the models of moving and non-moving targets. Clearly, non-moving targets comprise background and shadow, while the foreground comprises moving targets. Their models are as follows:

f(x|m) = f(x|f)
f(x|nm) = max(f(x|sh), f(x|b))

where f(x|m) is the model of the moving target and f(x|nm) is the model of the non-moving target.
After building the moving-target and non-moving-target models, we perform moving object segmentation with the MAP-MRF (Maximum a Posteriori - Markov Random Field) method described in (document [3], Y. Sheikh and M. Shah, "Bayesian modeling of dynamic scenes for object detection," IEEE Trans. Pattern Analysis and Machine Intelligence, pages 1778-1792, 2005). The segmentation result is shown in Fig. 6(b), where the white region is the final moving target region.
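The patent delegates segmentation to the cited MAP-MRF method. As a simplified stand-in (our own simplification, not the Sheikh-Shah algorithm itself), the sketch below initialises labels from the likelihood ratio and then smooths them by iterated conditional modes (ICM) under an Ising prior:

```python
import math

def segment_map_mrf(p_m, p_nm, w, h, lam=1.0, iters=5):
    """Toy MAP-MRF segmentation: initialise each pixel from the
    likelihood ratio f(x|m) vs f(x|nm), then run ICM with an Ising
    smoothness prior of strength lam over 4-neighbourhoods."""
    labels = [1 if p_m[i] > p_nm[i] else 0 for i in range(w * h)]

    def neighbours(i):
        x, y = i % w, i // w
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                yield ny * w + nx

    eps = 1e-300  # guard against log(0)
    for _ in range(iters):
        changed = False
        for i in range(w * h):
            best_lab, best_e = labels[i], None
            for lab in (0, 1):
                data = -math.log((p_m[i] if lab else p_nm[i]) + eps)
                smooth = lam * sum(1 for j in neighbours(i) if labels[j] != lab)
                e = data + smooth
                if best_e is None or e < best_e:
                    best_lab, best_e = lab, e
            if best_lab != labels[i]:
                labels[i] = best_lab
                changed = True
        if not changed:
            break
    return labels
```

An isolated pixel with weak foreground evidence is flipped to agree with its neighbours, which is the effect the MRF prior contributes on top of the pointwise likelihood comparison.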
From the experimental results we can see that, having merged spatial information and the shadow model, the method handles dynamic backgrounds and shadows well.
The above is only an embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art could conceive within the technical scope disclosed by the present invention should be covered by the present invention. Therefore, the scope of protection of the present invention should be determined by the scope of protection of the claims.

Claims (3)

1. A moving object detection method based on an extended Gaussian mixture model, characterized in that it comprises the following steps:
detecting, tracking, and recognizing moving objects from camera image pixels using computer vision techniques; constructing, by a first-level modeling module of a computer, the probability density functions of shadow, background, and foreground based on the extended Gaussian mixture model, which merges spatial information;
constructing, by a second-level modeling module, the probability density functions of moving and non-moving targets based on the above three models, the three models being the shadow model, the background model, and the foreground model;
classifying, by a classification module, using the MAP-MRF (Maximum a Posteriori - Markov Random Field) method, converting the three-class classification into a two-class classification to obtain non-moving targets and moving targets;
using feedback information from tracking the moving targets to further refine the foreground model;
wherein the step of constructing the probability density functions of shadow, background, and foreground comprises:
building the background model, based on the extended Gaussian mixture model, under the assumption that any given grid cell is covered by background most of the time;
building the foreground model, based on the extended Gaussian mixture model, under the assumption that if a grid cell detects a foreground sample at some moment, then at the next moment the probability that the grid cell along the velocity direction detects a foreground sample of similar color increases;
building the shadow model, based on the extended Gaussian mixture model, under the assumption that, for the same grid cell, the shadow features caused by different moving targets are similar;
and wherein the step of constructing the probability density functions of moving and non-moving targets comprises:
building the non-moving target model on the basis that non-moving targets comprise the features of shadow and background;
building the moving target model on the basis that foreground comprises the features of moving targets.
2. The moving object detection method of claim 1, characterized in that background modeling, moving object detection, and shadow removal are handled simultaneously in one probabilistic framework.
3. The moving object detection method of claim 1, characterized in that the extended Gaussian mixture model incorporates spatial information, and the number of Gaussian components in the model is determined dynamically.
CN2007103042222A 2007-12-26 2007-12-26 Moving object detection method based on expansion mixed gauss model Expired - Fee Related CN101470809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007103042222A CN101470809B (en) 2007-12-26 2007-12-26 Moving object detection method based on expansion mixed gauss model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2007103042222A CN101470809B (en) 2007-12-26 2007-12-26 Moving object detection method based on expansion mixed gauss model

Publications (2)

Publication Number Publication Date
CN101470809A CN101470809A (en) 2009-07-01
CN101470809B true CN101470809B (en) 2011-07-20

Family

ID=40828268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007103042222A Expired - Fee Related CN101470809B (en) 2007-12-26 2007-12-26 Moving object detection method based on expansion mixed gauss model

Country Status (1)

Country Link
CN (1) CN101470809B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883261B (en) * 2010-05-26 2012-12-12 中国科学院自动化研究所 Method and system for abnormal target detection and relay tracking under large-range monitoring scene

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833760A (en) * 2010-03-29 2010-09-15 中山大学 Background modeling method and device based on image blocks
CN102129688B (en) * 2011-02-24 2012-09-05 哈尔滨工业大学 Moving target detection method aiming at complex background
CN102270346B (en) * 2011-07-27 2013-05-01 宁波大学 Method for extracting target object from interactive video
CN104320625A (en) * 2014-11-04 2015-01-28 无锡港湾网络科技有限公司 Intelligent video monitoring method and system for safe village
CN105354791B (en) * 2015-08-21 2019-01-11 华南农业大学 A kind of improved ADAPTIVE MIXED Gauss foreground detection method
CN106488257A (en) * 2015-08-27 2017-03-08 阿里巴巴集团控股有限公司 A kind of generation method of video file index information and equipment
CN108885469B (en) * 2016-09-27 2022-04-26 深圳市大疆创新科技有限公司 System and method for initializing a target object in a tracking system
CN106875423A (en) * 2017-01-13 2017-06-20 吉林工商学院 Moving Object Detection and tracking in a kind of stream for massive video
CN107871315B (en) * 2017-10-09 2020-08-14 中国电子科技集团公司第二十八研究所 Video image motion detection method and device
CN108564597B (en) * 2018-03-05 2022-03-29 华南理工大学 Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method
CN110060209B (en) * 2019-04-28 2021-09-24 北京理工大学 MAP-MRF super-resolution image reconstruction method based on attitude information constraint
CN111539444B (en) * 2020-02-12 2023-10-31 湖南理工学院 Gaussian mixture model method for correction type pattern recognition and statistical modeling
CN112597806A (en) * 2020-11-30 2021-04-02 北京影谱科技股份有限公司 Vehicle counting method and device based on sample background subtraction and shadow detection
CN113344874B (en) * 2021-06-04 2024-02-09 温州大学 Pedestrian boundary crossing detection method based on Gaussian mixture modeling

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1529506A * 2003-09-29 2004-09-15 上海交通大学 Video target dividing method based on motion detection
CN101017573A (en) * 2007-02-09 2007-08-15 南京大学 Method for detecting and identifying moving target based on video monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Xin. Research on moving target detection and tracking technology in intelligent video surveillance. China Doctoral Dissertations Full-text Database, 2007, (No. 02, 2007). *

Also Published As

Publication number Publication date
CN101470809A (en) 2009-07-01

Similar Documents

Publication Publication Date Title
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN108830252B Convolutional neural network human action recognition method fusing global spatio-temporal features
Boult et al. Into the woods: Visual surveillance of noncooperative and camouflaged targets in complex outdoor settings
CN103295016B (en) Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN103093198B Crowd density monitoring method and device
CN109145766A Model training method and device, recognition method, electronic device, and storage medium
CN102378992A (en) Articulated region detection device and method for same
CN107633226A Human action tracking and recognition method and system
CN110298297A (en) Flame identification method and device
Chetverikov et al. Dynamic texture as foreground and background
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Neiswanger et al. The dependent Dirichlet process mixture of objects for detection-free tracking and object modeling
CN111340881B (en) Direct method visual positioning method based on semantic segmentation in dynamic scene
CN103955682A (en) Behavior recognition method and device based on SURF interest points
CN102314591B (en) Method and equipment for detecting static foreground object
Zhang et al. A novel framework for background subtraction and foreground detection
US8428369B2 (en) Information processing apparatus, information processing method, and program
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN103077383B Human motion identification method based on spatio-temporal gradient features of divided regions
CN103020631B (en) Human movement identification method based on star model
CN110008834B (en) Steering wheel intervention detection and statistics method based on vision
Bojkovic et al. Face detection approach in neural network based method for video surveillance
ELBAŞI et al. Control charts approach for scenario recognition in video sequences
Wang et al. Unusual events detection based on multi-dictionary sparse representation using Kinect
CN104732558B (en) moving object detection device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110720

Termination date: 20171226