CN107301655A - Video moving target detection method based on background modeling - Google Patents

Video moving target detection method based on background modeling (Download PDF)

Info

Publication number
CN107301655A
CN107301655A
Authority
CN
China
Prior art keywords
pixel
background
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710456321.6A
Other languages
Chinese (zh)
Other versions
CN107301655B (en)
Inventor
黄建华
俞启尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Far Chau Nuclear Software Polytron Technologies Inc
Original Assignee
Shanghai Far Chau Nuclear Software Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Far Chau Nuclear Software Polytron Technologies Inc filed Critical Shanghai Far Chau Nuclear Software Polytron Technologies Inc
Priority to CN201710456321.6A priority Critical patent/CN107301655B/en
Publication of CN107301655A publication Critical patent/CN107301655A/en
Application granted granted Critical
Publication of CN107301655B publication Critical patent/CN107301655B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Abstract

The invention discloses a video moving target detection method based on background modeling, comprising the following steps: A. establish three background sub-models; B. compare the input image against BM1's single Gaussian model to generate a binary foreground map; C. further filter the foreground map obtained in step B; D. for the foreground map produced in step C, remove isolated noise regions and hole regions; E. for each foreground connected domain, compute the TLBP feature of the region and classify it; F. update the single Gaussian background model in BM1 according to the foreground map obtained in step E and the current frame; G. perform fast image segmentation on the background image of BM1's single Gaussian model; H. finally, update the TLB map. The invention remedies the deficiencies of the prior art: it has low sensitivity to complex dynamic backgrounds while maintaining a high detection rate for genuinely moving objects.

Description

Video moving target detection method based on background modeling
Technical field
The present invention relates to the field of computer vision and video analysis technology, and in particular to a video moving target detection method based on background modeling.
Background technology
Video surveillance systems are widely used in applications such as security, traffic, and criminal investigation. In most video surveillance systems the camera is fixed and static. For analysing video from such a static, fixed camera, building a background model to extract moving targets is the most fundamental and most widely used technique. In real environments, however, complex dynamic backgrounds, such as swaying leaves, illumination changes, shadows, water ripples, raindrops, and camera shake caused by strong wind, all interfere heavily with background modeling and moving-target extraction. Over the past decade and more, a large number of background-model algorithms have been proposed to meet these challenges, but in practical use their performance is still not good enough. Simpler methods based on the temporal statistics of individual pixels, such as "Adaptive background mixture models for real-time tracking" (Stauffer, C.; Grimson, W.E.L. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23-25 June 1999; pp. 1585-1594) and "ViBe: A universal background subtraction algorithm for video sequences" (Barnich, O.; Van Droogenbroeck, M. IEEE Trans. Image Process. 2011, 20, 1709-1724), do not account for the spatial correlation between pixels, so subtle texture motion (e.g. loosely periodic leaf swaying or water-ripple fluctuation) produces large numbers of false detections.
More complex neighborhood-based background models, such as "Fast background subtraction based on a multilayer codebook model for moving object detection" (Guo, J.-M.; Hsia, C.-H.; Liu, Y.-F.; Shih, M.-H.; Chang, C.-H.; Wu, J.-Y. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1809-1821) and "A texture-based method for modeling the background and detecting moving objects" (Heikkilä, M.; Pietikäinen, M. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 657-662), are too computationally expensive to be applied well in real-time video surveillance.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video moving target detection method based on background modeling that remedies the deficiencies of the prior art: low sensitivity to complex dynamic backgrounds combined with a high detection rate for genuinely moving objects.
In order to solve the above technical problem, the technical solution adopted by the present invention is as follows.
A video moving target detection method based on background modeling, characterised by comprising the following steps:
A. Establish three background sub-models: the pixel-based single Gaussian model is denoted BM1, the region-based model is denoted BM2, and the LBP model is denoted BM3.
B. Compare the input image against BM1's single Gaussian model to generate a binary foreground map; in the foreground map, pixels with value 0 are background and pixels with value 1 are foreground.
C. Further filter the foreground map obtained in step B. BM2 maintains, for each super-pixel, a model of 20 samples, each sample being a historical pixel value from that region; to judge whether a pixel is background, its value is compared with the 20 samples of the super-pixel it belongs to, and when the number of similar samples exceeds a threshold the pixel is judged as background.
D. For the foreground map produced in step C, apply morphological opening and closing to remove isolated noise regions and hole regions; then perform connected-domain analysis to obtain one or more connected regions, each of which ideally corresponds to one moving object.
E. For each foreground connected domain, compute the TLBP feature of the region and classify it with a trained classifier; if the classification result is background, erase the entire connected domain from the foreground map.
F. Update BM1's single Gaussian background model according to the foreground map obtained in step E and the current frame; all foreground pixel positions in the foreground map are excluded from the update.
G. Perform fast image segmentation on the background image of BM1's single Gaussian model to refresh the super-pixel representation of the image; then update the model in BM2 according to the foreground map, the current frame, and the super-pixel segmentation. All foreground pixel positions in the foreground map are excluded from the update; background pixels are inserted, with a predetermined probability, into the 20 background samples of the super-pixel they belong to.
H. Finally, update the TLB map. The TLB map maintains a 32-bit storage for each pixel, each bit holding the comparison result of the corresponding pair of adjacent frames; the maximum history retained is 32 frames.
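Step D's connected-domain analysis can be illustrated with a plain breadth-first 4-connectivity labeling pass. This is a minimal sketch in Python (nested lists standing in for image buffers), not the patent's implementation:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground regions in a binary mask.
    Returns a list of regions, each a list of (y, x) pixel coordinates."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and not seen[y][x]:
                queue, region = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    # Visit the four edge-adjacent neighbours.
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

Each returned region can then be handed to the TLBP classification of step E; a production implementation would typically use an optimized two-pass or union-find labeler instead.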
Preferably, in step B,
the background in BM1's single Gaussian model is subtracted from the input image and the difference is thresholded, generating the binary foreground map:
M1(x, y) = 1 if |I(x, y, t) - μ(x, y, t-1)| > 3σ(x, y, t-1), and 0 otherwise,
where I(x, y, t) is the value of pixel (x, y) in the current frame (time t), μ(x, y, t-1) is the temporal mean of pixel (x, y) up to the previous frame, and σ(x, y, t-1) is the temporal standard deviation of pixel (x, y) up to the previous frame; together they constitute BM1's single Gaussian model.
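A minimal sketch of this per-pixel test (pure Python, nested lists in place of image arrays; the function name is illustrative, not from the patent):

```python
def gaussian_foreground_mask(frame, mu, sigma, k=3.0):
    """M1(x, y) = 1 if |I(x, y, t) - mu(x, y, t-1)| > k * sigma(x, y, t-1), else 0."""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - mu[y][x]) > k * sigma[y][x] else 0
             for x in range(w)] for y in range(h)]
```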
Preferably, in step C,
according to the frame image and the BM2 model, the foreground map obtained in step B is refined as follows:
M2(x, y) = 1 if M1(x, y) = 1 and Σ_{i=1}^{20} δ(I(x, y, t), N_i) ≥ 3, and 0 otherwise,
where N_i is the i-th sample of the super-pixel containing (x, y), and δ is defined as:
δ(x, y) = 1 if |x - y| < 16, and 0 otherwise.
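The BM2 consensus test of step C can be sketched as follows; the similarity threshold 16 and the match count 3 are taken from the formulas above, and the function name is illustrative:

```python
def bm2_is_background(pixel, samples, sim_thresh=16, min_matches=3):
    """Background if at least min_matches of the super-pixel's stored
    samples lie within sim_thresh of the current pixel value (the delta test)."""
    matches = sum(1 for s in samples if abs(pixel - s) < sim_thresh)
    return matches >= min_matches
```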
Preferably, in step E,
the TLBP feature is a 512-dimensional histogram feature; each dimension represents one 0/1 combination pattern of a 3x3 neighborhood, and its value is the number of occurrences of that pattern within the connected region over the past 32 frames.
Preferably, in step E, a linear SVM classifier is used.
Preferably, in step F,
the update formulas of the single Gaussian background model are as follows:
μ(x, y, t) = αμ(x, y, t-1) + (1 - α)I(x, y, t)
σ²(x, y, t) = ασ²(x, y, t-1) + (1 - α)(I(x, y, t) - μ(x, y, t))²
where α is the decay coefficient, representing the weight given to historical information.
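The two update formulas translate directly into code; note that the patent does not specify a value for α, so the default below is only an assumed example:

```python
def update_gaussian(mu, var, pixel, alpha=0.95):
    """Exponential running update of the per-pixel single Gaussian (step F):
    mu_t = a*mu_{t-1} + (1-a)*I_t;  var_t = a*var_{t-1} + (1-a)*(I_t - mu_t)**2."""
    mu_t = alpha * mu + (1 - alpha) * pixel
    var_t = alpha * var + (1 - alpha) * (pixel - mu_t) ** 2
    return mu_t, var_t
```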
Preferably, step G comprises the following steps:
1) all foreground pixel positions in the foreground map are excluded from the update;
2) for each pixel whose background model needs updating, one sample value of the sample set of the pixel's super-pixel is replaced at random by the current frame's pixel value;
3) when a pixel is judged as background, it updates the background model with probability 1/rate, where rate is the temporal sub-sampling factor, here set to 16;
4) for a pixel to be updated, the background model of one randomly chosen neighbouring pixel is also updated with the new pixel value.
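Rules 2) and 3) amount to a ViBe-style randomized update. A sketch under the assumption that each super-pixel's 20 samples are kept in a dict keyed by super-pixel id (all names are illustrative):

```python
import random

def maybe_update_bm2(samples_by_sp, sp_id, pixel_value, rate=16, rng=random):
    """With probability 1/rate, overwrite one randomly chosen sample of the
    pixel's super-pixel with the current pixel value (step G, rules 2 and 3)."""
    if rng.random() < 1.0 / rate:
        samples = samples_by_sp[sp_id]
        samples[rng.randrange(len(samples))] = pixel_value
        return True
    return False
```

Rule 4's neighbour diffusion would invoke the same routine for the super-pixel of a randomly chosen neighbouring pixel.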
Preferably, in step H,
the TLB map update formula is:
T(x, y, t) = (T(x, y, t-1) << 1) + S(I(x, y, t) - I(x, y, t-1))
where S(x) = 1 when x > 0, and 0 otherwise.
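The TLB update is a one-line bit operation per pixel; the explicit 32-bit mask below reflects the fixed 32-slot storage described in step H (the formula itself leaves the truncation implicit):

```python
def update_tlb(t_prev, curr_pixel, prev_pixel):
    """T_t = ((T_{t-1} << 1) + S(I_t - I_{t-1})) kept to 32 bits,
    where S(x) = 1 if x > 0 else 0."""
    s = 1 if curr_pixel - prev_pixel > 0 else 0
    return ((t_prev << 1) | s) & 0xFFFFFFFF
```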
The beneficial effects of the above technical solution are as follows. The background model of the present invention is composed of three sub-models. One is a pixel-based single Gaussian model describing the static background; the dynamic background is then simulated by an original region-based model. The two complement each other and jointly accomplish the description of the background and its adaptive update. The third sub-model is a special LBP (local binary pattern) model used to further filter out certain dynamic backgrounds. Unlike prior-art neighborhood-based background models, the region-based model here is not defined on regularly shaped neighborhoods but on the result of image segmentation, i.e. on irregularly shaped regions (super-pixels) of varying size, which is advantageous for describing background areas of differing complexity and texture. The present invention uses the LBP operator to describe the temporal variation of the pixel values in a neighborhood, unlike the prior art, which uses LBP to describe texture. Also unlike existing methods, this TLBP model is used to filter the extracted moving-foreground connected domains, not for per-pixel moving/static decisions. Therefore, although the LBP computation itself is not small, it is invoked at a low frequency and has no significant impact on the processing speed of the overall moving-object extraction process.
The present invention has good generality and high speed, and is applicable to scenes containing various dynamic backgrounds such as swaying leaves, illumination changes, shadows, water ripples, raindrops, and camera shake caused by strong wind.
Brief description of the drawings
Fig. 1 is the flow chart of an embodiment of the invention.
Embodiment
Referring to Fig. 1, an embodiment of the invention comprises the following steps:
A. Establish three background sub-models: the pixel-based single Gaussian model is denoted BM1, the region-based model is denoted BM2, and the LBP model is denoted BM3.
B. Compare the input image against BM1's single Gaussian model to generate a binary foreground map; in the foreground map, pixels with value 0 are background and pixels with value 1 are foreground.
C. Further filter the foreground map obtained in step B. BM2 maintains, for each super-pixel, a model of 20 samples, each sample being a historical pixel value from that region; to judge whether a pixel is background, its value is compared with the 20 samples of the super-pixel it belongs to, and when the number of similar samples exceeds a threshold the pixel is judged as background.
D. For the foreground map produced in step C, apply morphological opening and closing to remove isolated noise regions and hole regions; then perform connected-domain analysis to obtain one or more connected regions, each of which ideally corresponds to one moving object.
E. For each foreground connected domain, compute the TLBP feature of the region and classify it with a trained classifier; if the classification result is background, erase the entire connected domain from the foreground map.
F. Update BM1's single Gaussian background model according to the foreground map obtained in step E and the current frame; all foreground pixel positions in the foreground map are excluded from the update.
G. Perform fast image segmentation on the background image of BM1's single Gaussian model to refresh the super-pixel representation of the image; then update the model in BM2 according to the foreground map, the current frame, and the super-pixel segmentation. All foreground pixel positions in the foreground map are excluded from the update; background pixels are inserted, with a predetermined probability, into the 20 background samples of the super-pixel they belong to.
H. Finally, update the TLB map. The TLB map maintains a 32-bit storage for each pixel, each bit holding the comparison result of the corresponding pair of adjacent frames; the maximum history retained is 32 frames.
In step B,
the background in BM1's single Gaussian model is subtracted from the input image and the difference is thresholded, generating the binary foreground map:
M1(x, y) = 1 if |I(x, y, t) - μ(x, y, t-1)| > 3σ(x, y, t-1), and 0 otherwise,
where I(x, y, t) is the value of pixel (x, y) in the current frame (time t), μ(x, y, t-1) is the temporal mean of pixel (x, y) up to the previous frame, and σ(x, y, t-1) is the temporal standard deviation of pixel (x, y) up to the previous frame; together they constitute BM1's single Gaussian model.
In step C,
according to the frame image and the BM2 model, the foreground map obtained in step B is refined as follows:
M2(x, y) = 1 if M1(x, y) = 1 and Σ_{i=1}^{20} δ(I(x, y, t), N_i) ≥ 3, and 0 otherwise,
where N_i is the i-th sample of the super-pixel containing (x, y), and δ is defined as:
δ(x, y) = 1 if |x - y| < 16, and 0 otherwise.
In step E,
the TLBP feature is a 512-dimensional histogram feature; each dimension represents one 0/1 combination pattern of a 3x3 neighborhood, and its value is the number of occurrences of that pattern within the connected region over the past 32 frames. Unlike ordinary LBP, the centre position of the 3x3 neighborhood is also included in the pattern; a 0 indicates that the current frame's value is less than or equal to the previous frame's, and a 1 indicates that it is greater.
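Following the description above (nine positions of the 3x3 neighborhood, centre included, each contributing one temporal greater-than bit, hence 2^9 = 512 patterns), the TLBP pattern and region histogram can be sketched as:

```python
def tlbp_pattern(curr, prev, y, x):
    """9-bit temporal pattern at (y, x): each 3x3 position (centre included)
    contributes 1 if its value increased since the previous frame, else 0."""
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            bits = (bits << 1) | (1 if curr[y + dy][x + dx] > prev[y + dy][x + dx] else 0)
    return bits  # 0..511

def tlbp_histogram(frame_pairs, region):
    """512-bin histogram of patterns over a region's pixels across
    consecutive frame pairs (up to the 32 retained by the TLB map)."""
    hist = [0] * 512
    for curr, prev in frame_pairs:
        for y, x in region:
            hist[tlbp_pattern(curr, prev, y, x)] += 1
    return hist
```

In the method, this histogram accumulated over the past 32 frames of a connected domain is what the linear SVM of step E classifies.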
In step E, a linear SVM classifier is used.
In step F,
the update formulas of the single Gaussian background model are as follows:
μ(x, y, t) = αμ(x, y, t-1) + (1 - α)I(x, y, t)
σ²(x, y, t) = ασ²(x, y, t-1) + (1 - α)(I(x, y, t) - μ(x, y, t))²
where α is the decay coefficient, representing the weight given to historical information.
Step G comprises the following steps:
1) all foreground pixel positions in the foreground map are excluded from the update;
2) for each pixel whose background model needs updating, one sample value of the sample set of the pixel's super-pixel is replaced at random by the current frame's pixel value;
3) when a pixel is judged as background, it updates the background model with probability 1/rate, where rate is the temporal sub-sampling factor, here set to 16;
4) for a pixel to be updated, the background model of one randomly chosen neighbouring pixel is also updated with the new pixel value.
In step H,
the TLB map update formula is:
T(x, y, t) = (T(x, y, t-1) << 1) + S(I(x, y, t) - I(x, y, t-1))
where S(x) = 1 when x > 0, and 0 otherwise.
The present embodiment has been applied in multiple video-analysis projects such as video synopsis, video information structuring, and intrusion detection, with satisfactory results. On the two groups "dynamic background" and "camera jitter" (10 videos in total) of the test set of "changedetection.net: A new change detection benchmark dataset" (Goyette, N.; Jodoin, P.M.; Porikli, F.; Konrad, J.; Ishwar, P. In Proc. IEEE Workshop on Change Detection (CDW-2012) at CVPR-2012, Providence, RI, 16-21 June 2012), compared with the prior art with the best falsely detected target count, "ViBe: A universal background subtraction algorithm for video sequences" (Barnich, O.; Van Droogenbroeck, M. IEEE Trans. Image Process. 2011, 20, 1709-1724), the method reduces false detections by more than 86%, while the detection rate of correct targets remains unchanged.
The general principles, principal features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and description merely illustrate the principle of the invention. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A video moving target detection method based on background modeling, characterised by comprising the following steps:
A. establishing three background sub-models: a pixel-based single Gaussian model denoted BM1, a region-based model denoted BM2, and an LBP model denoted BM3;
B. comparing the input image against BM1's single Gaussian model to generate a binary foreground map, in which pixels with value 0 are background and pixels with value 1 are foreground;
C. further filtering the foreground map obtained in step B, wherein BM2 maintains for each super-pixel a model of 20 samples, each sample being a historical pixel value from that region; to judge whether a pixel is background, its value is compared with the 20 samples of the super-pixel it belongs to, and when the number of similar samples exceeds a threshold the pixel is judged as background;
D. applying morphological opening and closing to the foreground map produced in step C to remove isolated noise regions and hole regions, then performing connected-domain analysis to obtain one or more connected regions, each of which ideally corresponds to one moving object;
E. for each foreground connected domain, computing the TLBP feature of the region and classifying it with a trained classifier; if the classification result is background, erasing the entire connected domain from the foreground map;
F. updating BM1's single Gaussian background model according to the foreground map obtained in step E and the current frame, wherein all foreground pixel positions in the foreground map are excluded from the update;
G. performing fast image segmentation on the background image of BM1's single Gaussian model to refresh the super-pixel representation of the image, then updating the model in BM2 according to the foreground map, the current frame, and the super-pixel segmentation, wherein all foreground pixel positions in the foreground map are excluded from the update and background pixels are inserted, with a predetermined probability, into the 20 background samples of the super-pixel they belong to;
H. finally, updating the TLB map, wherein the TLB map maintains a 32-bit storage for each pixel, each bit holding the comparison result of the corresponding pair of adjacent frames, and the maximum history retained is 32 frames.
2. The video moving target detection method based on background modeling according to claim 1, characterised in that, in step B,
the background in BM1's single Gaussian model is subtracted from the input image and the difference is thresholded, generating the binary foreground map:
M1(x, y) = 1 if |I(x, y, t) - μ(x, y, t-1)| > 3σ(x, y, t-1), and 0 otherwise,
where I(x, y, t) is the value of pixel (x, y) in the current frame (time t), μ(x, y, t-1) is the temporal mean of pixel (x, y) up to the previous frame, and σ(x, y, t-1) is the temporal standard deviation of pixel (x, y) up to the previous frame; together they constitute BM1's single Gaussian model.
3. The video moving target detection method based on background modeling according to claim 2, characterised in that, in step C,
according to the frame image and the BM2 model, the foreground map obtained in step B is refined as follows:
M2(x, y) = 1 if M1(x, y) = 1 and Σ_{i=1}^{20} δ(I(x, y, t), N_i) ≥ 3, and 0 otherwise,
where N_i is the i-th sample of the super-pixel containing (x, y), and δ is defined as:
δ(x, y) = 1 if |x - y| < 16, and 0 otherwise.
4. The video moving target detection method based on background modeling according to claim 1, characterised in that, in step E,
the TLBP feature is a 512-dimensional histogram feature; each dimension represents one 0/1 combination pattern of a 3x3 neighborhood, and its value is the number of occurrences of that pattern within the connected region over the past 32 frames.
5. The video moving target detection method based on background modeling according to claim 4, characterised in that, in step E, a linear SVM classifier is used.
6. The video moving target detection method based on background modeling according to claim 3, characterised in that, in step F,
the update formulas of the single Gaussian background model are as follows:
μ(x, y, t) = αμ(x, y, t-1) + (1 - α)I(x, y, t)
σ²(x, y, t) = ασ²(x, y, t-1) + (1 - α)(I(x, y, t) - μ(x, y, t))²
where α is the decay coefficient, representing the weight given to historical information.
7. The video moving target detection method based on background modeling according to claim 1, characterised in that step G comprises the following steps:
1) all foreground pixel positions in the foreground map are excluded from the update;
2) for each pixel whose background model needs updating, one sample value of the sample set of the pixel's super-pixel is replaced at random by the current frame's pixel value;
3) when a pixel is judged as background, it updates the background model with probability 1/rate, where rate is the temporal sub-sampling factor, here set to 16;
4) for a pixel to be updated, the background model of one randomly chosen neighbouring pixel is also updated with the new pixel value.
8. The video moving target detection method based on background modeling according to claim 1, characterised in that, in step H,
the TLB map update formula is:
T(x, y, t) = (T(x, y, t-1) << 1) + S(I(x, y, t) - I(x, y, t-1))
where S(x) = 1 when x > 0, and 0 otherwise.
CN201710456321.6A 2017-06-16 2017-06-16 Video moving target detection method based on background modeling Expired - Fee Related CN107301655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710456321.6A CN107301655B (en) 2017-06-16 2017-06-16 Video moving target detection method based on background modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710456321.6A CN107301655B (en) 2017-06-16 2017-06-16 Video moving target detection method based on background modeling

Publications (2)

Publication Number Publication Date
CN107301655A true CN107301655A (en) 2017-10-27
CN107301655B CN107301655B (en) 2020-10-02

Family

ID=60135257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710456321.6A Expired - Fee Related CN107301655B (en) 2017-06-16 2017-06-16 Video moving target detection method based on background modeling

Country Status (1)

Country Link
CN (1) CN107301655B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993778A * 2019-04-11 2019-07-09 浙江立元通信技术股份有限公司 Method and device for determining a target position
CN110428394A * 2019-06-14 2019-11-08 北京迈格威科技有限公司 Method, apparatus and computer storage medium for target motion detection
CN110866937A (en) * 2019-10-25 2020-03-06 深圳市瑞图生物技术有限公司 Sperm movement track reconstruction and classification method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957997A * 2009-12-22 2011-01-26 Beihang University Moving target detection method in dynamic scenes based on regional-mean kernel density estimation
CN101777180A * 2009-12-23 2010-07-14 Institute of Automation, Chinese Academy of Sciences Real-time complex-background replacement method based on background modeling and energy minimization
CN102314691A * 2011-06-30 2012-01-11 Beijing Pingan Shixun Technology Co., Ltd. Background model based on multi-information integration
CN103810722A * 2014-02-27 2014-05-21 Yunnan University Moving target detection method combining improved LBP (Local Binary Pattern) texture and chrominance information
KR20160112788A * 2015-03-20 2016-09-28 Mokpo National Maritime University Industry-Academic Cooperation Foundation Method and apparatus for adaptive background modeling based on multiple Gaussian distributions
CN106570885A * 2016-11-10 2017-04-19 Hohai University Background modeling method based on brightness and texture fusion threshold
CN106557760A * 2016-11-28 2017-04-05 Jiangsu Hongxin System Integration Co., Ltd. Image-frame retrieval and filtering monitoring system based on video recognition technology

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BARNICH, O. et al.: "ViBe: A Universal Background Subtraction Algorithm for Video Sequences", IEEE Transactions on Image Processing *
HUI LI et al.: "Multitarget Tracking of Pedestrians in Video Sequences Based on Particle Filters", Advances in Multimedia *
XIAOCHUN LIU et al.: "Robust compositional method for background subtraction", 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV) *
YONG XU et al.: "Background modeling methods in video analysis: A review and comparative evaluation", CAAI Transactions on Intelligence Technology *
SONG Tao et al.: "Superpixel Moving Target Detection Method Based on Spatio-Temporal Multi-Cue Fusion", Journal of Electronics & Information Technology *
CHEN Haiyong et al.: "Visual Background Extraction Algorithm Based on Superpixel Information Feedback", Acta Optica Sinica *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993778A * 2019-04-11 2019-07-09 Zhejiang Liyuan Communication Technology Co., Ltd. Method and device for determining a target position
CN110428394A * 2019-06-14 2019-11-08 Beijing Megvii Technology Co., Ltd. Method, apparatus and computer storage medium for target movement detection
CN110428394B * 2019-06-14 2022-04-26 Beijing Megvii Technology Co., Ltd. Method, apparatus and computer storage medium for target movement detection
CN110866937A * 2019-10-25 2020-03-06 Shenzhen Ruitu Biotechnology Co., Ltd. Sperm movement trajectory reconstruction and classification method

Also Published As

Publication number Publication date
CN107301655B (en) 2020-10-02

Similar Documents

Publication Publication Date Title
Lu A multiscale spatio-temporal background model for motion detection
CN107256225B (en) Method and device for generating heat map based on video analysis
CN110598558B (en) Crowd density estimation method, device, electronic equipment and medium
Van Droogenbroeck et al. ViBe: A disruptive method for background subtraction
EP1881454A1 (en) Image processing for change detection
Cuevas et al. Efficient moving object detection for lightweight applications on smart cameras
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN110400294B (en) Infrared target detection system and detection method
CN112364865B (en) Method for detecting small moving target in complex scene
CN107301655A (en) A kind of video movement target method for detecting based on background modeling
CN112417955B (en) Method and device for processing tour inspection video stream
Kryjak et al. Real-time implementation of the ViBe foreground object segmentation algorithm
Patil et al. Motion saliency based generative adversarial network for underwater moving object segmentation
Zhang et al. Counting vehicles in urban traffic scenes using foreground time-spatial images
Pawaskar et al. Detection of moving object based on background subtraction
CN109102026A Vehicle image detection method, apparatus and system
Verma et al. Analysis of moving object detection and tracking in video surveillance system
CN109978855A Remote sensing image change detection method and device
CN113379789B (en) Moving target tracking method in complex environment
CN108961293B (en) Background subtraction method, device, equipment and storage medium
CN108776972B (en) Object tracking method and device
Singh et al. Object motion detection methods for real-time video surveillance: A survey with empirical evaluation
Li et al. Spatiotemporal tree filtering for enhancing image change detection
CN110858392A (en) Monitoring target positioning method based on fusion background model
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
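The similar documents above share the same background-subtraction core: maintain a per-pixel background model and flag pixels that deviate from it as foreground. The following is a minimal illustrative sketch of that core, not the patented method; the single running-average model and the `alpha` and `threshold` values are assumptions chosen for demonstration.

```python
# Minimal background-subtraction sketch: a per-pixel running-average
# background model plus an absolute-difference foreground test.

def update_background(background, frame, alpha=0.05):
    """Exponential running average: B <- (1 - alpha) * B + alpha * F."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Mark a pixel as foreground (1) when |F - B| exceeds the threshold."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

if __name__ == "__main__":
    background = [[100.0, 100.0], [100.0, 100.0]]  # learned background (2x2 gray image)
    frame      = [[101.0, 100.0], [100.0, 200.0]]  # one pixel changes sharply
    mask = foreground_mask(background, frame)      # only the changed pixel is flagged
    print(mask)
    background = update_background(background, frame)  # adapt model toward new frame
```

Practical detectors (mixture-of-Gaussians, ViBe-style sample sets, or the superpixel-feedback variants cited above) replace the single running average with a richer per-pixel model and add shadow and noise suppression, but they keep the same detect-then-update loop.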

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201002

Termination date: 20210616