CN101645171A - Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning - Google Patents


Info

Publication number
CN101645171A
CN101645171A CN200910177528A
Authority
CN
China
Prior art keywords
video
background
space
background modeling
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910177528A
Other languages
Chinese (zh)
Inventor
朱松纯
赵友东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Original Assignee
HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE filed Critical HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Priority to CN200910177528A priority Critical patent/CN101645171A/en
Publication of CN101645171A publication Critical patent/CN101645171A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the video field, and in particular to video content analysis and object detection. The purpose of the invention is to solve the problem that moving-object segmentation in video surveillance applications is easily affected by illumination changes, such as abrupt changes of sunlight during the day or automobile headlights at night, for which traditional methods generate a great number of false alarms. Two key technologies are used to achieve this purpose. The first is to take the space-time video block as the basic processing unit, so that spatial appearance information and temporal motion information are used simultaneously for background modeling and foreground detection and segmentation. The second is to learn the background model effectively with an online subspace learning method. The method can be used in any video content processing and analysis system that needs background modeling and foreground detection, such as a video surveillance system.

Description

Background modeling method (video moving-object segmentation method) based on space-time video blocks and online subspace learning
Technical field
The present invention relates to the video field, and specifically to video content analysis and object detection. The objective of the invention is to solve the problem that moving-target segmentation in video surveillance applications is susceptible to illumination variation, such as sudden changes of sunlight during the day or car headlights at night, for which classic methods produce a large number of false alarms. The present invention addresses this problem well.
Background art
Background modeling for video from a fixed camera viewpoint refers to the technique of building, through mathematical models and algorithms, a mathematical model of the static background in a continuous image sequence. With this background model, the image regions of moving targets in the video sequence can be automatically segmented from the background. The technique can be used in many applications such as intelligent video analysis, video coding, and human-computer interaction.
The simplest background modeling algorithms assume a completely stationary background and distinguish moving targets from the background by computing differences between two or even many frames. Regions with small inter-frame difference are background; regions with large difference are foreground. This approach has the advantage of simplicity and directness, and works fairly well under ideal conditions. In practice, however, surveillance scenes are often complex, and many scenes never yield an ideal, purely background image. Adding the noise produced in the camera imaging process and the variations from automatic gain control and white balance, this kind of moving-target detection often performs poorly and is hard to use in practice.
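The two-frame differencing just described can be sketched in a few lines. This is an illustrative example, not code from the patent; the threshold value and frame sizes are arbitrary assumptions:

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Label pixels whose absolute inter-frame difference exceeds a
    threshold as foreground (1) and the rest as background (0)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a static background with one "moving" bright square.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200          # the moving object appears here
mask = frame_difference_mask(prev, curr)
```

As the text notes, any noise or gain change above the threshold is also flagged, which is why this baseline is fragile in real scenes.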
The current mainstream background modeling method is the Gaussian mixture model and its many variants. The multi-Gaussian background modeling method uses pixel color features in the video sequence. It mainly considers the statistical distributions of background and foreground pixel values and describes them with several Gaussian models, thereby establishing the background model; a decision rule then classifies each current pixel as a background pixel or a moving-target pixel, i.e., performs foreground segmentation. After background and foreground are separated, the statistical distribution parameters of the background pixel values are updated by certain rules: the distribution of the foreground is usually updated quite slowly, while the distribution of the background is updated at the normal rate. The paper "Adaptive background mixture models for real-time tracking" in the 1998 Conference on Computer Vision and Pattern Recognition is a typical representative of this method. The method effectively overcomes the noise sensitivity of frame-difference methods and can absorb into the background motions that are subjectively unimportant, such as swaying flowers and plants. At the same time, these methods do not need an ideal background model in advance, and are therefore practical. However, because in many situations a Gaussian distribution describes the gray-level distribution of non-moving-target pixels inaccurately, the method mis-segments under illumination variation, shadows, and similar conditions. And because it models the background as a set of independent per-pixel processes, it cannot handle global changes in the video scene.
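The per-pixel mixture update can be illustrated as follows. This is a simplified sketch in the spirit of the cited Stauffer-Grimson approach, not the patented method; the parameter values (3 components, the learning rate, the match threshold, the background weight ratio) are illustrative assumptions:

```python
import numpy as np

class PixelMixtureModel:
    """Per-pixel mixture of K Gaussians over scalar gray values, in the
    spirit of the CVPR 1998 paper cited in the text (simplified)."""

    def __init__(self, k=3, alpha=0.05, match_sigma=2.5, bg_ratio=0.7):
        self.alpha = alpha              # learning rate
        self.match_sigma = match_sigma  # match threshold, in std. devs.
        self.bg_ratio = bg_ratio        # weight mass considered background
        self.weights = np.full(k, 1.0 / k)
        self.means = np.linspace(0.0, 255.0, k)
        self.vars = np.full(k, 400.0)

    def update(self, x):
        """Update the mixture with gray value x; return True if x is background."""
        d = np.abs(x - self.means) / np.sqrt(self.vars)
        matched = int(np.argmin(d)) if d.min() < self.match_sigma else None
        if matched is None:
            # Replace the lowest-weight component with one centred at x.
            j = int(np.argmin(self.weights))
            self.means[j], self.vars[j], self.weights[j] = x, 400.0, 0.05
        else:
            j = matched
            self.means[j] += self.alpha * (x - self.means[j])
            self.vars[j] += self.alpha * ((x - self.means[j]) ** 2 - self.vars[j])
            self.weights *= 1.0 - self.alpha
            self.weights[j] += self.alpha
        self.weights /= self.weights.sum()
        # Components with the largest weight/sigma ratio form the background.
        order = np.argsort(-self.weights / np.sqrt(self.vars))
        background, cum = set(), 0.0
        for idx in order:
            background.add(int(idx))
            cum += self.weights[idx]
            if cum > self.bg_ratio:
                break
        return matched is not None and matched in background

m = PixelMixtureModel()
for _ in range(200):
    steady_is_bg = m.update(100.0)   # a stable gray value
outlier_is_bg = m.update(250.0)      # a sudden foreground-like value
```

The independent per-pixel state in this sketch is exactly what the text criticizes: nothing couples neighbouring pixels, so a global illumination change perturbs every mixture at once.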
Recently, using the neighborhood information around a pixel to improve the background modeling effect has become a trend, including spatial neighborhood information (e.g., block-based methods such as the Local Binary Pattern (LBP) histogram feature) or temporal neighborhood information (e.g., optical-flow-based methods). Background modeling with local neighborhood information follows the same idea as the pixel-color-based modeling above, except that the color feature is replaced by a local neighborhood statistic. A local neighborhood statistic considers the information in a small neighborhood of the current pixel and is little affected by random brightness fluctuations of that pixel, so it is more robust than color information; but for smooth image regions (such as walls), noisy regions, and violent illumination changes it still misjudges easily, producing many misses and false alarms.
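As a concrete illustration of the LBP feature mentioned above, a minimal 8-neighbour LBP code for a single 3x3 patch might look like this; the bit ordering is one of several conventions, and the function name is ours, not from any cited work:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour Local Binary Pattern code of the centre pixel of a 3x3
    patch: each neighbour >= centre contributes one bit."""
    center = patch[1, 1]
    # Clockwise starting from the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << i
    return code

flat_code = lbp_code(np.full((3, 3), 5))   # uniform patch: every bit set
peak = np.zeros((3, 3), dtype=int)
peak[1, 1] = 9
peak_code = lbp_code(peak)                 # centre above all neighbours
```

The uniform-patch case also shows the weakness the text points out: on a smooth region the code is dominated by noise in the ">=" comparisons, so flat areas like walls are easily misjudged.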
Intuitively, spatial neighborhood information and temporal neighborhood information complement each other. For example, in dim illumination or low-contrast environments, the motion of the foreground provides the main information for our visual perception; when there is large illumination variation in the scene, the appearance of the foreground provides the main perceptual information. Some researchers have also used spatio-temporal information for background modeling, but the background model they build is based on the responses of spatio-temporal derivative filters centered at each pixel of the surveillance video sequence, and the problem they mainly address is modeling scenes that contain consistently moving background, such as leaves waving in the wind. Such methods do not work well under violent illumination changes in the scene.
In view of this situation, to accurately model the background of complex surveillance scenes and successfully distinguish moving-target pixels, the spatio-temporal neighborhood information around each pixel must be fully considered, while the algorithm design guarantees real-time performance and generality.
Summary of the invention
The primary purpose of the present invention is to solve the background modeling problem for outdoor scenes at night. In outdoor night scenes, dim illumination, low signal-to-noise ratio, low contrast, violent illumination variation, and similar factors all make background modeling difficult. The invention provides a novel background modeling method based on space-time video blocks. Unlike methods based on spatial blocks or on optical flow, this method simultaneously uses spatial appearance and temporal motion information to improve modeling performance. In the new method, the space-time video block is the basic processing unit. Based on space-time video blocks, the background model is learned by an online subspace learning method. Using the learned background model, subsequent space-time video blocks are classified as background blocks or foreground blocks. At the same time, the background blocks are used to update the background model according to the classification results. The method is not limited to night outdoor scenes; it applies equally to other scenes, such as daytime scenes, and is a generally applicable background modeling method.
Problem formalization. For the input video stream, when a new frame I_n (n = 1, 2, ...) of size W × H arrives, we divide it into N = (W × H)/(h × h) image blocks {P_{i,n}}_{i=1}^{N}, where i is the index of the image block and h is the width and height of each block (assuming W and H are divisible by h). For each image block P_{i,n}, we combine it with the corresponding blocks of the preceding t−1 frames (e.g., t = 5) to form a video block of size h × h × t (as shown in Fig. 2). As the video stream is read in continuously, we obtain a set of video block sequences {B_i}_{i=1}^{N}, B_i = {B_{i,1}, B_{i,2}, ..., B_{i,n}, ...}.
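The partitioning into space-time blocks can be sketched as follows; the function and variable names are illustrative, and the mean removal anticipates the vectorization x_{i,n} described in the text:

```python
import numpy as np

def video_blocks(frames, h):
    """Split a stack of t grayscale frames (t, H, W) into space-time video
    blocks of size h*h*t, one per h x h spatial tile, returned as
    zero-mean vectors of dimension D = h*h*t."""
    t, H, W = frames.shape
    assert H % h == 0 and W % h == 0, "W and H must be divisible by h"
    blocks = []
    for r in range(0, H, h):
        for c in range(0, W, h):
            v = frames[:, r:r + h, c:c + h].reshape(-1).astype(float)
            blocks.append(v - v.mean())   # remove the mean, as in the text
    return np.stack(blocks)               # shape (N, h*h*t)

# Example: W = H = 8, h = 4, t = 5 gives N = 4 blocks of dimension D = 80.
frames = np.arange(5 * 8 * 8, dtype=float).reshape(5, 8, 8)
X = video_blocks(frames, 4)
```

Each row of `X` corresponds to one x_{i,n}; one such sequence is maintained per spatial tile as the stream advances.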
For the i-th video block sequence there are mainly two kinds of variation, namely illumination change and foreground occlusion. Video blocks that contain only illumination change (including normal background video blocks without significant change) are background video blocks; they lie in a low-dimensional subspace S, which can be learned by an online subspace learning method (e.g., the CCIPCA algorithm). Based on this low-dimensional background subspace, we can classify background blocks and foreground blocks, and use the background blocks to update the background subspace in real time. For convenience of expression and computation, we convert each 3-dimensional video block into an easy-to-handle vector, i.e., B_{i,n} is converted into a D-dimensional vector x_{i,n} (D = h × h × t, with the mean removed). Because we build and maintain a background subspace for every video block sequence in the same way, for simplicity of expression the subscript index i of the video block is dropped in the algorithm description below, so a video block sequence is written B = {x_1, x_2, ..., x_n, ...}, where x_n is the n-th video block vector.
The algorithm of the invention runs at about 15 frames per second on a standard PC (Intel Pentium 2.8 GHz with 1 GB RAM). With some small improvement strategies, such as updating only part of the background model in turn at each new frame, the frame rate can be raised above 40 frames per second with very little effect on the experimental results.
The invention provides a background modeling method for video sequences, comprising the following steps:
1. Model initialization (optional)
2. Background model matching and moving object detection
3. Background model update
Description of drawings
Fig. 1: The basic units used by background modeling methods at three levels.
Fig. 2: Subspace distribution analysis of space-time video blocks. (a) and (b) show the eigenvalue distribution curves of the background subspace and the foreground subspace; from them it is easy to see that background video blocks (including blocks with illumination change) indeed lie in a low-dimensional subspace (3-4 dimensions), while video blocks occluded by foreground are distributed in a high-dimensional space. (c), (d), (e), and (f) show reconstruction-error curves of video blocks on different subspaces and the corresponding ROC curves. The curves in (c) use a background subspace trained on pure background video blocks; those in (d) use a background subspace trained on video blocks containing only illumination change. Red denotes normal background video blocks, green denotes video blocks with illumination change, and blue denotes video blocks occluded by foreground targets. Evidently, normal background video blocks and blocks with illumination change lie in the same subspace, while blocks occluded by foreground lie in a different subspace and can be well distinguished.
Fig. 3 algorithm flow chart
Fig. 4: Comparison of results without temporal information (4 × 4 × 1, second column) and with it (4 × 4 × 5, third column): it is easy to see serious misses in the spatial-block-level results, because under low scene contrast the spatial block lacks the temporal motion information of the space-time block.
Fig. 5: Comparison with two commonly used classical methods: it can be seen that GMM performs worst when the illumination change in the scene is large. Although LBP can overcome the influence of illumination change to some extent, under very violent illumination change it still produces large-area false alarms from car headlights on the road surface. The method we propose performs well even under violent headlight illumination changes.
Fig. 6: Comparison on another extreme illumination scene: for the LBP method (second column), it is easy to see that many misses and false alarms coexist in the results, while our method (third column) not only tolerates violent illumination change but also remains sensitive to the changes produced by foreground motion when the scene contrast is low.
Embodiment
The specific implementation of the present invention is as follows:
1. model initialization:
Given a video block sequence B = {x_1, x_2, ..., x_n, ...}, we first run the traditional batch principal component analysis (batch PCA) algorithm on some of the initial video blocks (e.g., the first 200) to obtain a low-dimensional subspace (e.g., d = 8 dimensions). This subspace is used as the initial subspace for the online subspace learning update, after which normal background maintenance and foreground detection proceed. This initialization is optional: if computational resources do not allow it, for example in embedded systems with little memory, the initialization procedure can be omitted and model updating and detection can start directly. Without initialization, the background modeling result is poor for a short initial period and requires some time of learning.
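The optional batch-PCA initialization might be sketched like this. `init_subspace` is a hypothetical helper name, and the synthetic data merely stands in for real video-block vectors; d = 2 here rather than the d = 8 of the text, so the test can check exact recovery:

```python
import numpy as np

def init_subspace(X, d):
    """Batch PCA over the first video-block vectors X (n, D): return the
    mean and the top-d principal directions (rows of Q), which seed the
    online subspace learner."""
    mu = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are unit principal directions.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

# Synthetic stand-in for 200 background blocks that truly occupy a
# 2-dimensional subspace of a D = 80 dimensional block space, plus noise.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 80))
X = rng.standard_normal((200, 2)) @ basis + 0.01 * rng.standard_normal((200, 80))
mu, Q = init_subspace(X, d=2)
```

The rows of `Q` play the role of the initial unit principal vectors q_k that the online update then refines.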
2. background model is mated and moving object detection:
For a new video block x_n, the distance L between it and the estimated background subspace model S can be computed by the following iteration:
$$x_{k+1,n} = x_{k,n} - \langle x_{k,n},\, q_{k,n-1}\rangle\, q_{k,n-1}, \qquad k = 1, 2, \ldots, d \tag{1}$$

$$L(x_n, S) = \|x_{d+1,n}\|, \tag{2}$$
Here, q_{k,n-1} = v_{k,n-1}/||v_{k,n-1}|| is the k-th unit principal vector of the background subspace S, and the iteration starts from x_{1,n} = x_n. If the distance L is less than a threshold T, x_n is a background video block and can be used to update the background model S; otherwise it is judged to be a video block occluded by foreground and is not used to update the model. It should be noted that the distance L defined in formula (2) is exactly the residual of x_n after representation by the background subspace, but it cannot be computed by the traditional reconstruction-residual formula, i.e.,
$$L(x_n, S) = \left\| x_n - \sum_{k=1}^{d} q_{k,n-1}\, \langle x_n,\, q_{k,n-1}\rangle \right\| \tag{3}$$
because the eigenvectors {q_{k,n-1}}_{k=1}^{d} estimated by the online iteration are usually not strictly orthogonal to each other.
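The iterative residual distance of equations (1)-(2) can be sketched as follows; names are illustrative, and `Q` holds the online-estimated unit vectors as rows:

```python
import numpy as np

def subspace_distance(x, Q):
    """Iterative residual distance of eqs. (1)-(2): deflate x against each
    (not necessarily orthogonal) unit principal vector q_k in turn and
    return the norm of what remains."""
    r = x.astype(float).copy()            # x_{1,n} = x_n
    for q in Q:
        r = r - np.dot(r, q) * q          # eq. (1): remove the q_k component
    return np.linalg.norm(r)              # eq. (2)

# With exactly orthonormal rows this agrees with the classical residual of
# eq. (3); online-estimated vectors generally lack that property, which is
# why the sequential deflation is used instead.
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x = np.array([3.0, 4.0, 12.0])
dist = subspace_distance(x, Q)            # residual is (0, 0, 12)
```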
The threshold value T here can have following formula according to the adaptive adjusting of scene brightness:
$$T = \frac{\|\mu_n\|}{10} + \frac{1}{80}\sum_{i=1}^{D}\bigl(x_n(i) - \mu_n(i)\bigr) \tag{4}$$
The denominators 10 and 80 here can be fine-tuned according to the scene to improve results. This threshold setting embodies a basic consideration: if the new video block is "brighter" than the average video block, the threshold is correspondingly higher; when the new block is "darker" than the average, the threshold is correspondingly lowered somewhat.
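Formula (4) is straightforward to express in code; this is a direct sketch of the formula as written, with the two divisors exposed as the scene-tunable parameters:

```python
import numpy as np

def adaptive_threshold(x, mu, a=10.0, b=80.0):
    """Brightness-adaptive threshold of formula (4); a and b are the
    scene-tunable divisors (10 and 80 in the text)."""
    return np.linalg.norm(mu) / a + np.sum(x - mu) / b

mu = np.full(80, 5.0)                        # mean video-block vector
t_mid = adaptive_threshold(mu, mu)           # block equal to the mean
t_bright = adaptive_threshold(mu + 1.0, mu)  # "brighter" block -> higher T
t_dark = adaptive_threshold(mu - 1.0, mu)    # "darker" block -> lower T
```

The sum term is signed, which is exactly what raises T for brighter-than-average blocks and lowers it for darker ones.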
3. Background model update:
Given a video block sequence B = {x_1, x_2, ..., x_n, ...}, we estimate the first d principal eigenvectors by iterating the following two equations:
$$v_{k,n} = (1-\alpha)\, v_{k,n-1} + \alpha\, x_{k,n}\, \left\langle x_{k,n},\, \frac{v_{k,n-1}}{\|v_{k,n-1}\|} \right\rangle \tag{5}$$

$$x_{k+1,n} = x_{k,n} - \left\langle x_{k,n},\, \frac{v_{k,n}}{\|v_{k,n}\|} \right\rangle \frac{v_{k,n}}{\|v_{k,n}\|}, \tag{6}$$
Here, ⟨·, ·⟩ denotes the inner product, v_{k,n} is the k-th principal component after updating with the n-th video block (1 ≤ k ≤ d), x_{1,n} = x_n, and x_{k+1,n} is the residual video block after projecting x_n onto the first k already-estimated principal components; α is the learning rate, e.g., α = 0.005. The authors of the CCIPCA algorithm prove that for the above iterative update, as n → ∞, v_{k,n} → ±λ_k q_k, where λ_k is the eigenvalue corresponding to the k-th principal component of the covariance matrix of B and q_k is the corresponding unit eigenvector. In addition, the mean μ_n of the subspace is updated by the following formula:
$$\mu_n = (1-\alpha)\,\mu_{n-1} + \alpha\, x_n. \tag{7}$$
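The update of equations (5)-(7) can be sketched as one CCIPCA-style step per incoming block. This is an illustrative reimplementation under our own names, not the patented code, and it omits the amnesic parameter of the original CCIPCA paper; the synthetic loop checks that the first learned vector aligns with the dominant data direction:

```python
import numpy as np

def ccipca_update(V, x, alpha=0.005):
    """One online update of the d stored (unnormalised) principal vectors
    V (d rows) with a new zero-mean block vector x, following eqs. (5)-(6)."""
    V = V.copy()
    r = x.astype(float).copy()                               # x_{1,n} = x_n
    for k in range(V.shape[0]):
        u = V[k] / np.linalg.norm(V[k])
        V[k] = (1 - alpha) * V[k] + alpha * r * np.dot(r, u)  # eq. (5)
        u = V[k] / np.linalg.norm(V[k])
        r = r - np.dot(r, u) * u                              # eq. (6): deflate
    return V

def update_mean(mu, x, alpha=0.005):
    """Eq. (7): exponential moving average of the block mean."""
    return (1 - alpha) * mu + alpha * x

# Synthetic check: data dominated by the first coordinate direction, so
# the first learned vector should align with it.
rng = np.random.default_rng(1)
V = rng.standard_normal((2, 5)) * 0.1
mu = np.zeros(5)
for _ in range(4000):
    x = 0.05 * rng.standard_normal(5)
    x[0] += rng.standard_normal() * 3.0      # strong first coordinate
    mu = update_mean(mu, x, alpha=0.01)
    V = ccipca_update(V, x - mu, alpha=0.01)
cos1 = abs(V[0, 0]) / np.linalg.norm(V[0])
```

As the text notes, only blocks classified as background (distance below T) would feed this update in the full method.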

Claims (4)

1. A background modeling (or moving-object segmentation) method for surveillance video, whose flow is as shown in Fig. 3 and whose steps are:
a) Model initialization
b) Background model matching and moving object detection
c) Background model update
2. The video background modeling method as claimed in claim 1, characterized in that a space-time video block is used as the basic processing unit.
3. The video background modeling method as claimed in claim 1, characterized in that a subspace learning method is used to find the low-dimensional background subspace within the high-dimensional video-block space, which is then used for foreground target detection and segmentation.
4. The video background modeling method as claimed in claim 1, characterized in that an adaptive threshold-setting strategy related to the appearance of different scene regions is used for target detection and segmentation.
CN200910177528A 2009-09-15 2009-09-15 Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning Pending CN101645171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910177528A CN101645171A (en) 2009-09-15 2009-09-15 Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910177528A CN101645171A (en) 2009-09-15 2009-09-15 Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning

Publications (1)

Publication Number Publication Date
CN101645171A true CN101645171A (en) 2010-02-10

Family

ID=41657048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910177528A Pending CN101645171A (en) 2009-09-15 2009-09-15 Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning

Country Status (1)

Country Link
CN (1) CN101645171A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080112642A1 (en) * 2006-11-14 2008-05-15 Microsoft Corporation Video Completion By Motion Field Transfer
CN101216943A (en) * 2008-01-16 2008-07-09 湖北莲花山计算机视觉和信息科学研究院 A method for video moving object subdivision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUYANG WENG等: "Candid Covariance-free Incremental Principal Component Analysis", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
肖春霞 (XIAO Chunxia) et al.: "Video Completion Based on Spatio-temporal Global Optimization", 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996410A (en) * 2010-12-07 2011-03-30 北京交通大学 Method and system of detecting moving object under dynamic background
CN102542571A (en) * 2010-12-17 2012-07-04 中国移动通信集团广东有限公司 Moving target detecting method and device
CN102542571B (en) * 2010-12-17 2014-11-05 中国移动通信集团广东有限公司 Moving target detecting method and device
CN102956032A (en) * 2011-08-22 2013-03-06 天津市亚安科技股份有限公司 Target template updating method
CN103136742A (en) * 2011-11-28 2013-06-05 财团法人工业技术研究院 Foreground detection device and method
CN103034997A (en) * 2012-11-30 2013-04-10 杭州易尊数字科技有限公司 Foreground detection method for separation of foreground and background of surveillance video
CN103034997B (en) * 2012-11-30 2017-04-19 北京博创天盛科技有限公司 Foreground detection method for separation of foreground and background of surveillance video
CN104660954A (en) * 2013-11-18 2015-05-27 深圳中兴力维技术有限公司 Method and device for improving image brightness based on background modeling under low-illuminance scene
CN105631405A (en) * 2015-12-17 2016-06-01 谢寒 Multistage blocking-based intelligent traffic video recognition background modeling method
CN105631405B (en) * 2015-12-17 2018-12-07 谢寒 Traffic video intelligent recognition background modeling method based on Multilevel Block
CN110140147A (en) * 2016-11-14 2019-08-16 谷歌有限责任公司 Video frame synthesis with deep learning
CN110140147B (en) * 2016-11-14 2023-10-10 谷歌有限责任公司 Video frame synthesis with deep learning

Similar Documents

Publication Publication Date Title
CN101645171A (en) Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning
CN110929578B (en) Anti-shielding pedestrian detection method based on attention mechanism
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
US10088600B2 (en) Weather recognition method and device based on image information detection
CN101957997B (en) Regional average value kernel density estimation-based moving target detecting method in dynamic scene
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN103136763B (en) Electronic installation and its method for detecting the abnormal paragraph of video sequence
CN103258193B (en) A kind of group abnormality Activity recognition method based on KOD energy feature
CN105354791A (en) Improved adaptive Gaussian mixture foreground detection method
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN103488993A (en) Crowd abnormal behavior identification method based on FAST
CN111259783A (en) Video behavior detection method and system, highlight video playback system and storage medium
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN103530640A (en) Unlicensed vehicle detection method based on AdaBoost and SVM (support vector machine)
CN105138987A (en) Vehicle detection method based on aggregation channel characteristic and motion estimation
CN109871778B (en) Lane keeping control method based on transfer learning
Ramirez-Alonso et al. Temporal weighted learning model for background estimation with an automatic re-initialization stage and adaptive parameters update
CN110084201A (en) A kind of human motion recognition method of convolutional neural networks based on specific objective tracking under monitoring scene
Niknejad et al. Occlusion handling using discriminative model of trained part templates and conditional random field
CN105469054A (en) Model construction method of normal behaviors and detection method of abnormal behaviors
Charouh et al. Improved background subtraction-based moving vehicle detection by optimizing morphological operations using machine learning
CN102314591B (en) Method and equipment for detecting static foreground object
CN104036526A (en) Gray target tracking method based on self-adaptive window
CN116597424A (en) Fatigue driving detection system based on face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100210