CN105741319A - Improved visual background extraction method based on blind updating strategy and foreground model - Google Patents

Improved visual background extraction method based on blind updating strategy and foreground model

Info

Publication number
CN105741319A
CN105741319A (application CN201610045316.1A; granted as CN105741319B)
Authority
CN
China
Prior art keywords
model
foreground
point
background
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610045316.1A
Other languages
Chinese (zh)
Other versions
CN105741319B (en)
Inventor
王海霞
石丽
梁荣华
毛帅龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201610045316.1A priority Critical patent/CN105741319B/en
Publication of CN105741319A publication Critical patent/CN105741319A/en
Application granted granted Critical
Publication of CN105741319B publication Critical patent/CN105741319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

An improved visual background extraction method based on a blind updating strategy and a foreground model comprises the following steps: S1, reading the first frame of a video and initializing the background model; S2, initializing the foreground model; S3, reading a new video frame, judging according to the background model whether each point is classified as a background point, and, if so, making a further classification judgment on the point with the foreground model; S4, processing the binary foreground image with morphological filtering; S5, updating the foreground model and the background model; and S6, reading a new video frame and repeating S3 to S5, detecting the moving foreground region in the video sequence in real time.

Description

Improved visual background extraction method based on a blind updating strategy and a foreground model
Technical field
The present invention relates to a foreground detection method for moving targets in video.
Background technology
With the development of modern security surveillance systems, video surveillance technology has been widely applied in commercial venues, tourist attractions, traffic monitoring and other fields. Its main techniques include moving target detection, moving target recognition, target tracking, behavior analysis and people counting. Among these, moving target detection is the foundation of all subsequent analysis and directly determines its quality. Driven by the needs of surveillance technology, many foreground detection methods have been developed, such as the frame difference method, background subtraction methods (GMM, CodeBook, SOBS, ViBe), optical flow methods (sparse and dense optical flow) and temporal entropy. Each of these methods suits particular conditions and can achieve good detection results, but because of the complexity of dynamic backgrounds in video application scenes, no single general detection method has yet been found that copes with every scene challenge.
In recent years, the demand for real-time performance in video surveillance has grown steadily. In particular, for short video sequences it is often desired to detect moving targets within only a few frames, so that real-time monitoring and analysis can be achieved quickly. The initialization of traditional foreground detection methods requires a video sequence of a certain length, which usually consumes several seconds, severely affects the real-time performance of detection, and is unsuitable for handheld cameras capturing in real time. The ViBe (Visual Background Extractor) method is a popular pixel-level video background modeling method for foreground detection; see Barnich O, Van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences [J]. IEEE Trans Image Process, 2011, 20(6): 1709-1724. It performs fast initialization from a single frame and updates its model with a conservative update mechanism; it has few parameters, a simple model, a small memory footprint and good noise resistance. However, when ghost regions appear or the scene contains a dynamic background, the method turns many pixels into false detections.
The traditional ViBe background model can be updated in two ways: with a conservative update mechanism or with a blind update mechanism. The conservative mechanism only allows pixels judged as background to update the model and never inserts foreground pixels into the background model, which guarantees the purity of the background model. However, if a moving object is present in the first frame, the model of that region is initialized from the moving object's pixels, causing a deadlock and producing the ghost phenomenon. The blind update mechanism, in contrast, uses both foreground and background points to update the background model and is insensitive to deadlock, but its drawback is that slowly moving objects in the scene are absorbed into the background model and can no longer be detected. Although the ViBe method eliminates ghosts through diffusive updating of neighborhood background models and foreground counting, this still consumes a certain number of frames and impairs the accuracy of foreground detection. The present invention proposes a new improved ViBe method that adopts a blind updating strategy to avoid producing ghost regions and establishes a foreground model to improve the accuracy of foreground detection.
Summary of the invention
To overcome the difficulty of the prior art in detecting the moving foreground of slowly moving or briefly staying objects, the present invention adopts a ViBe method based on a blind update mechanism, avoiding the production of ghost regions. At the same time, a new foreground model is established to perform a secondary classification of pixels that the background model classifies as background; a first-in-first-out update strategy is adopted, and the fact that neighboring pixels share similar spatio-temporal characteristics is exploited to update neighborhood pixels at random. This model substantially improves the accuracy of foreground detection and ensures that slowly moving or briefly staying objects are detected as foreground.
The improved visual background extraction method of the present invention, based on a blind updating strategy and a foreground model, comprises the following steps:
Step 1: read the first frame of the video and perform single-frame initialization of the background model. The background model is initialized from a single video frame. Conventional detection methods need to learn from a video sequence of a certain length, which affects the real-time performance of detection; moreover, when the scene illumination changes suddenly, a long period is needed to relearn the background model. The present invention retains the single-frame initialization of the original ViBe method, shortening the background modeling process.
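The single-frame initialization can be sketched as follows. This is a minimal illustration in Python/NumPy assuming grayscale frames; the function and variable names are our own, not from the patent, and N_b = 20 is the experimental value given later in the embodiment.

```python
import numpy as np

def init_background_model(frame, n_samples=20, rng=None):
    """Single-frame ViBe-style initialization: every sample of a pixel's
    background model is drawn at random from that pixel's 3x3 neighbourhood
    in the first frame (borders handled by edge replication)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")       # replicate borders
    # Random neighbourhood offsets, one per (pixel, sample) pair.
    dy = rng.integers(-1, 2, size=(h, w, n_samples))
    dx = rng.integers(-1, 2, size=(h, w, n_samples))
    ys = np.arange(h)[:, None, None] + 1 + dy    # +1 accounts for padding
    xs = np.arange(w)[None, :, None] + 1 + dx
    return padded[ys, xs]                        # shape (h, w, n_samples)
```

Because each sample is taken from the 3x3 neighbourhood, the model is filled from a single frame while still reflecting local spatial variation.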
Step 2: establishment of the foreground model. The foreground model should be established at the same time as the background model. Some researchers have attempted to model over more video frames to solve the problem of slowly moving objects, but at the cost of increased computational complexity. Since moving objects are not persistently present, most sample points in the foreground model are in a state with no foreground pixel. In this case, the present invention directly assigns a large value L (L > 350) to a pixel judged as background, indicating that the pixel does not possess any foreground characteristics. Since every pixel of the first frame is regarded as a background point, all initial sample values of the foreground model can be set to L. Let M_F(x) be the foreground model of a pixel x; then the foreground model sample set of this point is expressed as formula (1), where ω_i is a model sample point:
M_F(x) = {ω_1, ω_2, ..., ω_N}    (1)
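The initialization of formula (1) is then trivial: every sample ω_i starts at the out-of-range value L. A sketch under the same assumptions as above, with N_f = 10 and L > 350 as stated in the embodiment:

```python
import numpy as np

L = 351   # any value > 350 satisfies the description; 351 is chosen here

def init_foreground_model(height, width, n_samples=10, fill=L):
    """Formula (1): M_F(x) = {w_1, ..., w_N} with every sample set to L,
    since the first frame is assumed to contain no foreground."""
    return np.full((height, width, n_samples), fill, dtype=np.int32)
```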
Step 3: read a new video frame and first judge, according to the background model, whether each point is classified as a background point; if it is, the point is further classified with its foreground model. The purpose of this secondary judgment of pixels classified as background is to detect, as far as possible, foreground points that have been misclassified as background. Since a new pixel is classified against the background model by the same principle as in the original ViBe, the present invention analyzes only the foreground model here.
If a new pixel x can find at least a fixed number #min_f = 5 of foreground model sample points within the circular range of radius R_f in Euclidean space, the pixel is a foreground point; otherwise, the pixel remains a background point, as shown in formula (2):
#{S_Rf(υ(x)) ∩ {ω_1, ω_2, ..., ω_N}} ≥ #min_f  ⟹  x ∈ P_F
#{S_Rf(υ(x)) ∩ {ω_1, ω_2, ..., ω_N}} < #min_f  ⟹  x ∈ P_B    (2)
where S_Rf(υ(x)) denotes the spherical region of radius R_f centered on the pixel value υ(x) of pixel x, P_B denotes the background point set, and P_F denotes the foreground point set.
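A per-pixel sketch of the two-stage test: the standard ViBe background check first, then formula (2) against the foreground model. The matching radii r_b and r_f are treated as tunable assumptions here; the patent fixes only #min_b = 2 and #min_f = 5.

```python
import numpy as np

def classify_pixel(value, bg_samples, fg_samples,
                   r_b=20, min_b=2, r_f=20, min_f=5):
    """Stage 1: ViBe background test (>= min_b close background samples).
    Stage 2 (formula (2)): a pixel that passed stage 1 is still declared
    foreground if it matches >= min_f foreground samples within r_f."""
    if np.count_nonzero(np.abs(bg_samples.astype(int) - int(value)) < r_b) < min_b:
        return "foreground"                  # rejected by background model
    if np.count_nonzero(np.abs(fg_samples.astype(int) - int(value)) < r_f) >= min_f:
        return "foreground"                  # misclassified point recovered
    return "background"
```

The second test is what recovers foreground points that the background model alone would miss, e.g. pixels of a slowly moving object already absorbed into the background samples.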
Step 4: post-process the foreground regions of the detected binary image using morphological principles. The main criterion is whether the central pixel agrees with the majority value of its 5*5 neighborhood: if consistent, the original value of the point is retained; if inconsistent, the value of the point is set to the majority value of the neighborhood.
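The 5x5 majority rule can be written directly. This is a naive O(h*w*25) sketch (a real implementation would use a box filter), and the strict-majority threshold of more than 12 of the 25 window entries is our reading of the "majority value" criterion:

```python
import numpy as np

def majority_filter_5x5(mask):
    """Set each pixel of a binary (0/1) mask to the majority value of its
    5x5 neighbourhood; borders are handled by edge replication."""
    h, w = mask.shape
    padded = np.pad(mask.astype(np.int32), 2, mode="edge")
    out = np.empty_like(mask)
    for y in range(h):
        for x in range(w):
            # Foreground iff more than half of the 25 window entries are 1.
            out[y, x] = 1 if padded[y:y + 5, x:x + 5].sum() > 12 else 0
    return out
```

This removes isolated noise points and fills small holes in foreground blobs in a single pass.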
Step 5: update the models. The models are updated so that the background model can continually adapt to changes in the background when illumination varies or background objects change. The deficiency of the conservative update strategy is caused by misclassified pixels: when a pixel is mistakenly identified as background, real background pixels may in turn be regarded as foreground, and since such pixels are never used to update the background model, ghosts are produced. The present invention adopts the blind update mechanism for the background model: whether a pixel is classified as background or foreground, it can be used to update the background model. Misclassified pixels then no longer corrupt the background model, which still adapts to environmental changes while avoiding the production of ghost regions. Each new pixel has a probability of 1/φ of updating its own background model sample set, and simultaneously a probability of 1/φ of updating a neighborhood background model sample set, where φ is the time sampling factor.
For the foreground model, its samples are transient: as the moving target moves, the sample values of the foreground model quickly become invalid. Therefore, when a pixel is classified as a foreground point, the foreground model is updated with the pixel value of that foreground point; when a pixel is classified as a background point, the corresponding foreground model sample point is directly replaced with the large value L. Meanwhile, a first-in-first-out (FIFO) update strategy is adopted: the sample points that entered the model earliest are replaced first.
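The blind background update together with the FIFO foreground replacement might look like the following. The patent does not state a numeric value for the time sampling factor φ, so the standard ViBe default of 16 is assumed; the fg_age array is an assumed helper that tracks the oldest (first-in) sample index per pixel.

```python
import numpy as np

PHI = 16   # time sampling factor; standard ViBe default, assumed here
L = 351    # large foreground-model fill value (L > 350)

def update_models(y, x, value, is_foreground, bg_model, fg_model, fg_age,
                  rng=None):
    """Blind update: regardless of classification, the pixel updates a random
    sample of its own background model with probability 1/PHI and a random
    sample of a random neighbour's model with probability 1/PHI.  The
    foreground model is updated FIFO: the oldest sample is replaced by the
    pixel value (if foreground) or by L (if background)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, n_b = bg_model.shape
    if rng.random() < 1.0 / PHI:                      # own sample set
        bg_model[y, x, rng.integers(n_b)] = value
    if rng.random() < 1.0 / PHI:                      # a neighbour's set
        ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        bg_model[ny, nx, rng.integers(n_b)] = value
    oldest = int(fg_age[y, x])                        # first-in sample index
    fg_model[y, x, oldest] = value if is_foreground else L
    fg_age[y, x] = (oldest + 1) % fg_model.shape[2]
```

Note that the foreground model is updated on every frame, so a background classification quickly flushes stale foreground samples back to L.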
Step 6: read a new video frame and repeat steps 3 to 5, detecting the moving foreground region in the video sequence in real time.
The advantages of the invention are as follows: by establishing a foreground model, foreground points misclassified as background are reclassified as foreground as far as possible, improving the accuracy of foreground detection; and by adopting the blind updating strategy, the production of the ghost phenomenon is avoided.
Accompanying drawing explanation
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 shows the effect of the foreground model on foreground points, where Fig. 2a is an input video frame, Fig. 2b is the foreground image obtained using only the background model, Fig. 2c shows the foreground pixels added by applying the foreground model, and Fig. 2d is the foreground detection result after applying the foreground model;
Fig. 3 compares the effect of the present invention with the original ViBe method; the experimental test videos include Pedestrians, canoe and sofa.
Detailed description of the invention
The invention is further described below with reference to the drawings and embodiments.
The present invention is a foreground detection method improved on the basis of the ViBe method. By adopting a blind updating strategy and establishing a foreground model, it solves the problem that slowly moving objects are difficult to detect, and effectively increases the correct detection rate of foreground points while preserving the real-time performance of the method. The flow chart of the method is shown in Fig. 1, and its detailed embodiment is as follows:
Step 1: input the video sequence and read the first frame. A single frame is used to initialize the background model: exploiting the spatio-temporal similarity of neighborhood pixels, each pixel randomly selects N_b (experimentally, N_b = 20) pixels as the sample points of its model.
Step 2: establishment of the foreground model, which should be established at the same time as the background model. Single-frame initialization is adopted: the values of the N_f (experimentally, N_f = 10) model sample points are set to a large value L (L > 350), indicating that the first frame of the video sequence contains no foreground points.
Step 3: read the next frame of the video and perform foreground/background classification. When a new pixel arrives, its value is compared with the background model sample set; if at least #min_b = 2 close sample points can be found, the pixel is judged as a background point; otherwise, it is a foreground point. Then, pixels classified as background undergo a secondary classification against the foreground model: if a pixel can find at least #min_f = 5 close sample points in the foreground model, the point is a foreground point; otherwise, it remains a background point. As shown in Fig. 2, the foreground model effectively increases the correct detection rate of foreground points.
Step 4: apply morphological processing to the binary image produced by the preceding steps, filtering out noise points and filling the foreground blobs of moving targets. As can be observed in Fig. 3, the foreground detection method of the present invention can effectively detect the foreground of slowly moving or briefly staying objects.
Step 5: update of the foreground and background models. For the background model update strategy, the present invention adopts the blind update mechanism, avoiding the ghost phenomenon. Besides updating its own model sample values, each pixel also has a certain probability of updating the neighborhood background model sample values. For the foreground model, since its samples are transient, when a pixel is classified as a foreground point, the foreground model is updated with the pixel value of that foreground point; when a pixel is classified as a background point, the corresponding foreground model sample point is directly replaced with the large value L. Meanwhile, a FIFO update strategy is adopted: the sample points that entered the model earliest are replaced first.
Step 6: continue reading video frames, repeating steps 3 to 5 for continuous video foreground detection until all video frames have been read.
The invention provides an approach to foreground detection based on an improved ViBe; there are many specific methods and ways to implement this technical scheme, and the above is only a preferred embodiment of the invention. It should be pointed out that those skilled in the art may make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the invention.

Claims (1)

1. An improved visual background extraction method based on a blind updating strategy and a foreground model, comprising the following steps:
Step 1: read the first frame of the video, and perform single-frame initialization and establishment of the background model, adopting the single-frame initialization mode of the original ViBe algorithm;
Step 2: perform single-frame initialization of the foreground model; all initial sample values of the foreground model are set to L (L > 350), indicating that the pixel does not possess any foreground characteristics; let M_F(x) be the foreground model of a pixel x; then the sample set of this point is expressed as formula (1), where ω_i is a model sample point:
M_F(x) = {ω_1, ω_2, ..., ω_N}    (1)
Step 3: read a new video frame and first judge, according to the background model, whether each point is classified as a background point; if so, the point is further classified with its foreground model; if a new pixel x can find at least a fixed number #min_f = 5 of sample values within the circular range of radius R_f in Euclidean space, the pixel is a foreground point, i.e., it matches the foreground model; otherwise, the pixel remains a background point, as shown in formula (2):
#{S_Rf(υ(x)) ∩ {ω_1, ω_2, ..., ω_N}} ≥ #min_f  ⟹  x ∈ P_F
#{S_Rf(υ(x)) ∩ {ω_1, ω_2, ..., ω_N}} < #min_f  ⟹  x ∈ P_B    (2)
where S_Rf(υ(x)) denotes the spherical region of radius R_f centered on the pixel value υ(x) of pixel x, P_B denotes the background point set, and P_F denotes the foreground point set;
Step 4: post-process the foreground regions of the detected binary image using morphological principles, the main criterion being whether the central pixel agrees with the majority value of its 5*5 neighborhood: if consistent, the original value of the point is retained; if inconsistent, the value of the point is set to the majority value of the neighborhood;
Step 5: update the models; the blind updating strategy is adopted for the background model, so that a pixel classified as either background or foreground can be used to update the background model; each new pixel has a probability of 1/φ of updating its own background model sample set, and simultaneously a probability of 1/φ of updating a neighborhood background model sample set, where φ is the time sampling factor;
for the foreground model, when a pixel is classified as a foreground point, the foreground model is updated with the pixel value of that foreground point; when a pixel is classified as a background point, the corresponding foreground model sample point is directly replaced with the value L; a FIFO update strategy is adopted, whereby the sample points that entered the model earliest are replaced first;
Step 6: read a new video frame and repeat steps 3 to 5, detecting the moving foreground region in the video sequence in real time.
CN201610045316.1A 2016-01-22 2016-01-22 Improved visual background extraction method based on a blind updating strategy and a foreground model Active CN105741319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610045316.1A CN105741319B (en) 2016-01-22 2016-01-22 Improved visual background extraction method based on a blind updating strategy and a foreground model

Publications (2)

Publication Number Publication Date
CN105741319A true CN105741319A (en) 2016-07-06
CN105741319B CN105741319B (en) 2018-05-08

Family

ID=56246412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610045316.1A Active CN105741319B (en) 2016-01-22 2016-01-22 Improved visual background extraction method based on a blind updating strategy and a foreground model

Country Status (1)

Country Link
CN (1) CN105741319B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874864A * 2017-02-09 2017-06-20 广州中国科学院软件应用技术研究所 Real-time outdoor pedestrian detection method
CN107025652A * 2017-05-05 2017-08-08 太原理工大学 Flame detection method based on motion characteristics and color spatio-temporal information
CN107169997A * 2017-05-31 2017-09-15 上海大学 Background subtraction algorithm for night environments
CN109978916A * 2019-03-11 2019-07-05 西安电子科技大学 ViBe moving target detection method based on gray-level image feature matching
CN110428394A * 2019-06-14 2019-11-08 北京迈格威科技有限公司 Method, apparatus and computer storage medium for target movement detection
CN111666881A * 2020-06-08 2020-09-15 成都大熊猫繁育研究基地 Tracking and analysis method for giant panda pacing, bamboo-eating and oestrus behaviors

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2015252A1 (en) * 2007-07-08 2009-01-14 Université de Liège Visual background extractor
CN104077776A (en) * 2014-06-27 2014-10-01 深圳市赛为智能股份有限公司 Visual background extracting algorithm based on color space self-adapting updating

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Olivier Barnich et al.: "ViBe: a universal background subtraction algorithm for video sequences", IEEE Transactions on Image Processing *
陈星明 et al.: "Foreground detection based on improved visual background extraction under dynamic backgrounds", Optics and Precision Engineering *

Also Published As

Publication number Publication date
CN105741319B (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN105741319A (en) Improved visual background extraction method based on blind updating strategy and foreground model
CN105608456B Multi-directional text detection method based on a fully convolutional network
CN111814621A Multi-scale vehicle and pedestrian detection method and device based on an attention mechanism
CN111582201A Lane line detection system based on geometric attention perception
CN103530893B Foreground detection method based on background subtraction and motion information under camera-shake scenes
CN101470809B Moving object detection method based on an expanded Gaussian mixture model
CN103093198B Crowd density monitoring method and device
CN113936256A Image target detection method, apparatus, device and storage medium
CN112633149B Domain-adaptive foggy-day image target detection method and device
CN104463903A Real-time pedestrian image detection method based on target behavior analysis
CN103810703B Tunnel video moving object detection method based on image processing
CN104657724A Method for detecting pedestrians in traffic videos
CN111274942A Traffic cone identification method and device based on a cascade network
CN103258332A Moving object detection method resistant to illumination variation
CN115620212B Behavior identification method and system based on surveillance video
KR20210097782A Indicator light detection method, apparatus, device and computer-readable recording medium
CN112163544B Method and system for judging randomly parked non-motor vehicles
CN104182983A Highway surveillance video definition detection method based on corner features
CN104268563A Video abstraction method based on abnormal behavior detection
CN103605960B Traffic state identification method based on the fusion of video images with different focal lengths
CN105469054A Model construction method for normal behaviors and detection method for abnormal behaviors
CN105205834A Target detection and extraction method based on Gaussian mixture and shadow detection models
CN111160100A Lightweight deep-model aerial vehicle detection method based on sample generation
KR20210060938A Method for augmenting pedestrian image data based on deep learning
CN111160274B Pedestrian detection method based on binarized Faster RCNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant