CN100382600C - Detection method of moving object under dynamic scene - Google Patents

Detection method of moving object under dynamic scene

Info

Publication number
CN100382600C
CN100382600C CNB2004100178570A CN200410017857A
Authority
CN
China
Prior art keywords
moving object
sample
pixel
value
density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100178570A
Other languages
Chinese (zh)
Other versions
CN1564600A (en)
Inventor
毛燕芬
施鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2004100178570A priority Critical patent/CN100382600C/en
Publication of CN1564600A publication Critical patent/CN1564600A/en
Application granted granted Critical
Publication of CN100382600C publication Critical patent/CN100382600C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a method for detecting moving objects in a dynamic scene. For scenes that are not completely static and contain small-scale motion, the pixel process is modeled with a kernel density estimation (KDE) function, and the grey-level distribution of the video image pixels is computed using nonparametric probability density estimation. A diverse temporal sample set for model training is extracted from the original training sequence, so the original training data need not be stored or reused during background extraction and moving object detection. This saves storage space, avoids time-consuming repeated computation, and yields the real-time position and shape of the moving object. The method is simple and effective to implement, and offers good generality and specificity.

Description

Moving object detection method in a dynamic scene
Technical field
The present invention relates to a method for detecting moving objects in a dynamic scene, mainly used in higher-level video analysis such as the classification and tracking of moving objects in video surveillance systems, and belongs to the technical field of video processing.
Background technology
Moving object detection is a key problem in video analysis systems such as video surveillance, human-computer interaction, and traffic monitoring, and its results are generally used for higher-level analysis and processing such as target tracking and classification. The validity and robustness of the detection method are crucial to the whole video system. As an effective motion detection approach, background subtraction deducts the scene background from the current image to obtain the moving foreground, and has the advantages of accurate localization and not enlarging the moving region. It is usually assumed that the background is completely static, or that a so-called empty background containing no moving objects can be captured. In practice, however, in real application systems such as traffic monitoring, a completely static background image free of moving objects cannot be obtained. Background subtraction therefore first needs to extract a background model dynamically from a complex scene sequence that contains moving objects, and this model must be robust to environmental change while remaining highly sensitive to moving objects.
Traditional parametric models assume in advance that some characteristic of the background to be estimated obeys a particular statistical model, such as a Gaussian or mixture-of-Gaussians distribution, and then compute the model parameters to obtain the background model. This involves problems of parameter estimation and optimization, which usually require computationally complex expectation-maximization algorithms. Moreover, because real scenes are complex, with factors such as swaying tree branches and leaves, the true background distribution is unknown and cannot be assumed in advance, and no prior knowledge of the background pixel process is available. Parametric methods are therefore ill-suited to video surveillance systems. In recent years, the nonparametric method proposed by Elgammal et al. (Elgammal A., Harwood D., and Davis L., Non-parametric model for background subtraction, The 6th European Conference on Computer Vision, Dublin, Ireland, 2000, pages 751-767) estimates the unknown density function directly from the data, avoiding assumptions on the model form and the optimization of distribution parameters. However, Elgammal's nonparametric method uses all the data in the sample set for model estimation, so all sample data must be kept during detection. It also treats different samples identically, regarding each sample as contributing equally to the density estimate and using uniform weights in the density computation, which forces many identical or similar samples to be computed repeatedly during background extraction.
Summary of the invention
The object of the present invention is to address the deficiencies of the above techniques and the practical needs of video surveillance systems by providing a method for detecting moving objects in a dynamic scene. The method does not require assuming the distribution form of the background in advance, and avoids information redundancy and repeated computation in background density estimation. The multi-modal model built from the diverse sample set by nonparametric estimation can handle complex scenes that are not completely static, laying a solid technical foundation for higher-level video analysis such as tracking and classification.
To this end, for scenes that are not completely static and contain small motions, the present invention first applies a diversity principle to extract, from the original training sequence, samples that both occur with high frequency in the sample set and differ strongly from one another, retaining the important information of the training image sequence. Then, following nonparametric probability density estimation theory, a kernel density estimation (KDE) function is used to model the pixel process and estimate the grey-level distribution of the video image. Finally, thresholding yields the binary mask of the moving object, giving its position and shape.
The proposed dynamic-scene moving object detection method comprises four main parts: extraction of the diverse sample set, kernel bandwidth computation, kernel density estimation, and computation of the moving-object binary mask. The concrete steps are as follows:
1) Selection of the diverse sample set. Collect a group of continuously captured video sequence images containing moving objects (N frames) as the original training sample set. From the temporal grey-level histogram of each pixel over the N frames, alternately choose the sample with the highest occurrence frequency and the sample that differs most, in Euclidean distance, from those already selected, to form a new sample set. At the same time, count the original samples falling in a small grey-level interval centered on each new sample to obtain the different weights of the new samples used in the kernel density estimation;
2) Kernel bandwidth computation. After the new sample set and its weights are obtained, the kernel bandwidth of each pixel is still needed. In background model estimation, the bandwidth should mainly reflect local variation of the pixel grey level caused by image blur and similar effects, rather than jumps in grey level. Using the median of the absolute differences (MAD) between consecutive frames of each pixel in the original sample set, the relation between the pixel's bandwidth and the sample median absolute difference is derived, giving the bandwidth value of each pixel.
3) Kernel density estimation. With the diverse samples, weights, and bandwidths obtained above, kernel density estimation can be carried out on the current frame. Substituting the grey value of each pixel of the current image into the kernel density estimation function yields the estimated density of each current-image pixel.
4) Computation of the moving-object binary mask. For different image sequences, a threshold chosen by experiment is applied to the densities estimated in step 3. When the estimated density exceeds the threshold, the corresponding pixel is regarded as a background point and assigned 0; otherwise it is regarded as a foreground moving-object point and assigned 1. The resulting binary mask characterizes well the position and shape of the moving object at the current time.
The method of the invention requires no prior assumption about the form of the background, avoiding complex parameter computation and optimization. The full original training data no longer need to be stored or used during background extraction and moving object detection, which saves memory and avoids time-consuming repeated computation. The method is simple and effective to implement, with good generality and specificity.
Description of drawings
Fig. 1 is the flow diagram of the moving object detection method in a dynamic scene of the present invention.
Fig. 2 is the flow diagram of the diverse sample set extraction of the present invention.
Fig. 3 is the original traffic-scene image used in the embodiment of the invention.
Fig. 4 is the moving object detection result obtained by the embodiment of the invention from the original image.
Embodiment
For a better understanding of the technical scheme of the present invention, further details are given below in conjunction with the drawings and an embodiment.
Fig. 1 is the flow diagram of the method. To build the dynamic background model for moving object detection, an N-frame continuous image sequence is used as training samples. For a pixel (x, y), a new diverse sample set (M_{x,y} samples) is extracted, and the bandwidth σ_{x,y} is obtained at the same time. Kernel density estimation is then performed on the current frame, the estimate is thresholded, and the moving object detection result is finally obtained. Fig. 2 is the flow diagram of the diverse sample set extraction in Fig. 1. From the histogram of the original training samples, the grey value with the highest occurrence frequency is obtained first; then, from the remaining samples of the original set, the grey value farthest in Euclidean distance from the selected one is chosen; next, the highest-frequency grey value among those not yet chosen is taken, followed by the grey value farthest from the samples chosen so far; and so on repeatedly until the required number of samples is obtained.
The original image of a traffic scene at the current time used by the embodiment is shown in Fig. 3. Moving object detection is carried out according to the following steps:
1) extraction of diversity sample set
Collect a group of continuously captured video sequence images containing moving objects (N frames) as the original training sample set. For a pixel (x, y), the grey values over the N frames form S_1 = {y_1, y_2, ..., y_N}. Because S_1 contains similar or even identical values, it can be represented by M values from S_1 that have the highest occurrence frequencies and the greatest mutual variation. The concrete procedure is: first, compute the grey value g_1 with the highest occurrence frequency in S_1:

$$g_1 = y_{q_1} = \arg\max_{q_1}\,(n_{y_1}, n_{y_2}, \cdots, n_{y_P})$$

where n_{y_i} denotes the number of samples with grey value y_i, and P is the number of distinct grey values among the N samples. Second, choose the grey value g_2 that differs most from g_1 in Euclidean distance:

$$g_2 = y_{q_2} = \arg\max_{q_2} |g_1 - y_k|, \quad k = 1, 2, \cdots, P$$

Then obtain the highest-frequency grey value g_3 among the values of S_1 not yet selected:

$$g_3 = y_{q_3} = \arg\max_{q_3 \neq q_1, q_2}\,(n_{y_1}, n_{y_2}, \cdots, n_{y_P})$$

Next choose the value g_4 farthest from the new samples {g_1, g_2, g_3} already obtained:

$$g_4 = y_{q_4} = \arg\max_{q_4 \neq q_1, q_2, q_3}\Bigl(\min_{l = q_1, q_2, q_3} |y_k - g_l|\Bigr), \quad k = 1, 2, \cdots, P$$

Repeating in this way, alternately choosing the highest-frequency unselected value and the value differing most from those already selected, yields the new sample set S_2 = {g_1, ..., g_{M_{x,y}}} of M_{x,y} samples. Obviously, when M_{x,y} = N, all grey values of S_1 are chosen.
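The alternating selection above can be sketched in Python (a minimal sketch assuming integer grey values; the function name and tie-breaking order are illustrative, not specified by the patent):

```python
import numpy as np

def select_diverse_samples(values, m):
    """Alternately pick the most frequent remaining grey value and the
    value farthest (in Euclidean distance) from those already chosen,
    until m samples are selected."""
    vals, counts = np.unique(np.asarray(values), return_counts=True)
    chosen, remaining = [], list(range(len(vals)))
    pick_frequent = True
    while remaining and len(chosen) < m:
        if pick_frequent:
            # highest occurrence frequency among values not yet chosen
            idx = max(remaining, key=lambda k: counts[k])
        else:
            # maximise the minimum distance to the already chosen values
            idx = max(remaining,
                      key=lambda k: min(abs(int(vals[k]) - int(vals[c]))
                                        for c in chosen))
        chosen.append(idx)
        remaining.remove(idx)
        pick_frequent = not pick_frequent
    return [int(vals[i]) for i in chosen]
```

When fewer than m distinct grey values exist (the case M_{x,y} = P), the loop simply stops early with all of them selected.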
For each new sample g_i, its weight α_i is computed as

$$\alpha_i = \frac{N_i}{N}, \quad i = 1, \cdots, M_{x,y}$$

where N_i is the number of original samples falling in [g_i − Δg, g_i + Δg]. M_{x,y} is determined by the number P of distinct grey values of pixel (x, y) over the N frames:

$$M_{x,y} = \begin{cases} P, & 1 \le P \le K_1 \\ \left\lfloor P/(2\Delta g + 1) \right\rfloor, & K_1 < P \le K_2 \\ M_{\max}, & P > K_2 \end{cases}$$

where ⌊P/(2Δg + 1)⌋ is the greatest integer not exceeding P/(2Δg + 1), K_1 and K_2 are parameters set by experiment, and M_max is the maximum sample number of the new sample set.
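The weight rule and the piecewise rule for M_{x,y} can be sketched as follows (hypothetical helper names; the bracket in the original is read here as the floor function, which is an assumption given the garbled source text):

```python
import numpy as np

def sample_weights(original_values, new_samples, delta_g):
    """alpha_i = N_i / N: N_i counts original samples within +/- delta_g
    of g_i, and N is the total number of original samples."""
    v = np.asarray(original_values, dtype=float)
    n = len(v)
    return [float(np.count_nonzero(np.abs(v - g) <= delta_g)) / n
            for g in new_samples]

def num_new_samples(p, delta_g, k1, k2, m_max):
    """Piecewise rule for M_{x,y} given P distinct grey values."""
    if p <= k1:
        return p
    if p <= k2:
        return p // (2 * delta_g + 1)  # bracket read as floor
    return m_max
```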
2) Kernel bandwidth computation
In the kernel estimation of the background density, the bandwidth σ should mainly reflect local variation of the pixel grey level caused by image blur and similar effects, rather than jumps in grey level. Temporally adjacent pixel samples (y_i, y_{i+1}) usually come from the same local distribution, and only rarely from a cross-distribution. Assuming the local distribution is Gaussian N(μ, σ²), the difference (y_i − y_{i+1}) is distributed as N(0, 2σ²). By the symmetry of the Gaussian distribution and the definition of the sample median, the median m of the absolute differences |y_i − y_{i+1}| satisfies

$$\int_m^{\infty} \frac{1}{\sqrt{2\pi \cdot 2\sigma^2}}\, e^{-\frac{u^2}{2 \cdot 2\sigma^2}}\, du = 0.25$$

From the standard normal distribution table, the upper 0.25 quantile u_{0.25} is 0.68, so

$$m = 0 + u_{0.25} \cdot \sqrt{2}\,\sigma = 0.68\sqrt{2}\,\sigma$$

The bandwidth σ_{x,y} is therefore obtained from the sample median m_{x,y} as σ_{x,y} = m_{x,y} / (0.68√2).
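The bandwidth computation can be sketched as follows (an illustrative function name; it applies the relation σ = m/(0.68·√2), with m the median absolute difference of temporally adjacent samples of one pixel):

```python
import numpy as np

def kernel_bandwidth(pixel_sequence):
    """Bandwidth sigma = m / (0.68 * sqrt(2)), where m is the median of
    the absolute differences between temporally adjacent samples."""
    seq = np.asarray(pixel_sequence, dtype=float)
    m = np.median(np.abs(np.diff(seq)))  # MAD over consecutive frames
    return m / (0.68 * np.sqrt(2.0))
```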
3) Kernel density estimation
Kernel density estimation estimates an unknown density as a weighted average of local functions centered at the sample values. Steps 1) and 2) give the diverse new sample set {g_1, g_2, ..., g_{M_{x,y}}} of the pixel feature space, the weights α_i, and the bandwidth σ_{x,y}. The estimated density p(y_t) of a pixel (x, y) with grey value y_t in the current image (Fig. 3) is

$$p(y_t) = \sum_{i=1}^{M_{x,y}} \alpha_i\, K_{\sigma_{x,y}}(y_t - g_i)$$

where K_σ is a kernel function of bandwidth σ satisfying K_σ(x) = (1/σ) K(x/σ), and the α_i are normalized weight coefficients with Σ_{i=1}^{M_{x,y}} α_i = 1. If the standard normal kernel is adopted, the grey-level distribution of the image is

$$p(y_t) = \sum_{i=1}^{M_{x,y}} \alpha_i \frac{1}{\sqrt{2\pi \sigma_{x,y}^2}}\, e^{-\frac{(y_t - g_i)^2}{2\sigma_{x,y}^2}}$$
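Evaluating the Gaussian-kernel density for one pixel can be sketched as follows (illustrative names; the inputs are the per-pixel samples, weights, and bandwidth from the previous steps):

```python
import numpy as np

def kde_density(y, samples, weights, sigma):
    """Weighted Gaussian-kernel density estimate of grey value y."""
    g = np.asarray(samples, dtype=float)
    a = np.asarray(weights, dtype=float)
    # Gaussian kernel of bandwidth sigma evaluated at each (y - g_i)
    kern = (np.exp(-(y - g) ** 2 / (2.0 * sigma ** 2))
            / np.sqrt(2.0 * np.pi * sigma ** 2))
    return float(np.sum(a * kern))
```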
4) Computation of the moving-object binary mask
After the density estimate of each pixel in Fig. 3 is computed, thresholding yields the binary mask that gives the position and shape of the moving object. For a pixel with grey value y_t, if the kernel density estimate is below a threshold th, the pixel is classified as a foreground point; otherwise it is classified as a background point. The moving object detection result is represented by the binary mask

$$M_t = \begin{cases} 1, & p(y_t) < th \\ 0, & \text{otherwise} \end{cases}$$
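The thresholding step can be sketched as follows (illustrative name; `density_map` holds the per-pixel densities p(y_t)):

```python
import numpy as np

def foreground_mask(density_map, th):
    """Binary mask M_t: 1 (foreground) where p(y_t) < th, else 0."""
    return (np.asarray(density_map) < th).astype(np.uint8)
```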
Fig. 4 shows the moving object detection result obtained from Fig. 3. Although the contrast of the image under the overpass is very low, the moving vehicles are still detected quite well. Although the pedestrian on the right of Fig. 3 is occluded by trees and the leaves exhibit small-scale motion, the pedestrian's position and shape are also well detected. The noise in the detection result arises mainly because some background states are not included in the background model, and it can be removed by noise-filtering techniques to obtain a better result.

Claims (1)

1. A moving object detection method in a dynamic scene, characterized by comprising the steps of:
1) selection of the diverse sample set: collecting a group of continuously captured video sequence images containing moving objects as the original training sample set; from the temporal grey-level histogram of each pixel, alternately choosing the sample with the highest occurrence frequency and the sample differing most, in Euclidean distance, from those already selected, to form a new sample set; and counting the original samples in a grey-level interval centered on each new sample to obtain the different weights of the new samples used in the kernel density estimation;
2) kernel bandwidth computation: using the median absolute difference between consecutive frames of each pixel in the original sample set to derive the relation between that pixel's bandwidth and the sample median absolute difference, thereby obtaining the bandwidth value of each pixel;
3) kernel density estimation: using the diverse samples, weights, and bandwidths obtained, performing kernel density estimation on the current frame by substituting the grey value of each pixel of the current image into the kernel density estimation function to compute the estimated density of each current-image pixel;
4) computation of the moving-object binary mask: thresholding the estimated density; when the estimated density exceeds a selected threshold, the corresponding pixel is regarded as a background point and assigned 0, otherwise as a foreground moving-object point and assigned 1; the binary mask thus obtained characterizes the position and shape of the moving object at the current time.
CNB2004100178570A 2004-04-22 2004-04-22 Detection method of moving object under dynamic scene Expired - Fee Related CN100382600C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100178570A CN100382600C (en) 2004-04-22 2004-04-22 Detection method of moving object under dynamic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100178570A CN100382600C (en) 2004-04-22 2004-04-22 Detection method of moving object under dynamic scene

Publications (2)

Publication Number Publication Date
CN1564600A CN1564600A (en) 2005-01-12
CN100382600C true CN100382600C (en) 2008-04-16

Family

ID=34479196

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100178570A Expired - Fee Related CN100382600C (en) 2004-04-22 2004-04-22 Detection method of moving object under dynamic scene

Country Status (1)

Country Link
CN (1) CN100382600C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727573B (en) * 2008-10-13 2013-02-20 汉王科技股份有限公司 Method and device for estimating crowd density in video image

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100337482C (en) * 2005-06-09 2007-09-12 上海交通大学 Fast motion assessment method based on object edge shape
CN100531405C (en) * 2005-12-31 2009-08-19 中国科学院计算技术研究所 Target tracking method of sports video
CN101405763B (en) * 2006-03-01 2011-05-04 新加坡科技研究局 Method and system for acquiring multiple views of real-time video output object
CN101184235B (en) * 2007-06-21 2010-07-28 腾讯科技(深圳)有限公司 Method and apparatus for implementing background image extraction from moving image
CN101141633B (en) * 2007-08-28 2011-01-05 湖南大学 Moving object detecting and tracing method in complex scene
CN101437113B (en) * 2007-11-14 2010-07-28 汉王科技股份有限公司 Apparatus and method for detecting self-adapting inner core density estimation movement
CN101448151B (en) * 2007-11-28 2011-08-17 汉王科技股份有限公司 Motion detecting device for estimating self-adapting inner core density and method therefor
CN101247479B (en) * 2008-03-26 2010-07-07 北京中星微电子有限公司 Automatic exposure method based on objective area in image
CN101567088B (en) * 2008-04-22 2012-01-04 华为技术有限公司 Method and device for detecting moving object
CN101832756B (en) * 2009-03-10 2014-12-10 深圳迈瑞生物医疗电子股份有限公司 Method and device for measuring displacement of targets in images and carrying out strain and strain rate imaging
CN101719219B (en) * 2009-11-20 2012-01-04 山东大学 Method for extracting shape features of statistics correlated with relative chord lengths
CN101957997B (en) * 2009-12-22 2012-02-22 北京航空航天大学 Regional average value kernel density estimation-based moving target detecting method in dynamic scene
CN104331874B (en) * 2014-08-11 2017-02-22 阔地教育科技有限公司 Background image extraction method and background image extraction system
CN104820774B (en) * 2015-04-16 2016-08-03 同济大学 A kind of map sheet sampling approach based on space complexity
CN105070061B (en) * 2015-08-19 2017-09-29 王恩琦 Vehicle peccancy evidence obtaining checking method and its system
CN107203755B (en) * 2017-05-31 2021-08-03 中国科学院遥感与数字地球研究所 Method, device and system for automatically adding new time sequence mark samples of remote sensing images
CN111598189B (en) * 2020-07-20 2020-10-30 北京瑞莱智慧科技有限公司 Generative model training method, data generation method, device, medium, and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000276602A (en) * 1999-03-23 2000-10-06 Nec Corp Device and method for detecting object and recording medium recording object detection program
US6317517B1 (en) * 1998-11-30 2001-11-13 Regents Of The University Of California Statistical pattern recognition
EP1265195A2 (en) * 2001-06-04 2002-12-11 The University Of Washington Video object tracking by estimating and subtracting background
WO2003036557A1 (en) * 2001-10-22 2003-05-01 Intel Zao Method and apparatus for background segmentation based on motion localization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317517B1 (en) * 1998-11-30 2001-11-13 Regents Of The University Of California Statistical pattern recognition
JP2000276602A (en) * 1999-03-23 2000-10-06 Nec Corp Device and method for detecting object and recording medium recording object detection program
EP1265195A2 (en) * 2001-06-04 2002-12-11 The University Of Washington Video object tracking by estimating and subtracting background
WO2003036557A1 (en) * 2001-10-22 2003-05-01 Intel Zao Method and apparatus for background segmentation based on motion localization

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A real-time moving target tracking algorithm based on histogram patterns. Hu Minghao, Ren Mingwu, Yang Jingyu. Computer Engineering and Applications. 2004 *
A moving target detection and tracking algorithm based on background models. Liu Ya, Ai Haizhou, Xu Guangyou. Information and Control, Vol. 31, No. 4. 2002 *
A comparison of background model construction and maintenance methods. Li Hua, Song Xiaohong, Zhang Ning. Computer and Information Technology. 2004 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727573B (en) * 2008-10-13 2013-02-20 汉王科技股份有限公司 Method and device for estimating crowd density in video image

Also Published As

Publication number Publication date
CN1564600A (en) 2005-01-12

Similar Documents

Publication Publication Date Title
CN100382600C (en) Detection method of moving object under dynamic scene
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN101567043B (en) Face tracking method based on classification and identification
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN106295564B (en) A kind of action identification method of neighborhood Gaussian structures and video features fusion
CN104978567B (en) Vehicle checking method based on scene classification
CN112800876B (en) Super-spherical feature embedding method and system for re-identification
CN112837344B (en) Target tracking method for generating twin network based on condition countermeasure
CN107480585B (en) Target detection method based on DPM algorithm
CN105957356B (en) A kind of traffic control system and method based on pedestrian's quantity
CN106650617A (en) Pedestrian abnormity identification method based on probabilistic latent semantic analysis
CN106780727B (en) Vehicle head detection model reconstruction method and device
CN108985204A (en) Pedestrian detection tracking and device
CN104050685A (en) Moving target detection method based on particle filtering visual attention model
CN111709300A (en) Crowd counting method based on video image
CN116030396B (en) Accurate segmentation method for video structured extraction
Song et al. Feature extraction and target recognition of moving image sequences
CN104282027A (en) Circle detecting method based on Hough transformation
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN103325123A (en) Image edge detection method based on self-adaptive neural fuzzy inference systems
CN114842507A (en) Reinforced pedestrian attribute identification method based on group optimization reward
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN107452212A (en) Crossing signals lamp control method and its system
Tang et al. Salient moving object detection using stochastic approach filtering
CN108764311A (en) A kind of shelter target detection method, electronic equipment, storage medium and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080416

Termination date: 20110422