CN101777186A - Multimodality automatic updating and replacing background modeling method - Google Patents



Publication number
CN101777186A
Authority
CN
China
Prior art keywords
background
main
auxiliary
frame
modeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201010013590A
Other languages
Chinese (zh)
Other versions
CN101777186B (en)
Inventor
朱虹
马文庆
王栋
孟凡星
邢楠
刘薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN2010100135903A
Publication of CN101777186A
Application granted
Publication of CN101777186B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a multimodality, automatically updating and replacing background modeling method comprising the following steps. First, a main background and auxiliary backgrounds are modeled: the main background is modeled by initializing the main background model, revising it, and updating its threshold; the auxiliary backgrounds are modeled by establishing an alternate auxiliary background sequence, collecting classification statistics, and updating the thresholds of the auxiliary background models. Then the background to be updated is computed, and the background that replaces an auxiliary background is determined from the frequency with which the background to be updated occurs. The method adopts a multimodality update-and-replace design: several modalities together form a vector that models the background, and continual updating and replacement among the modalities lets the background adapt to changes in ambient lighting. The background modeling method is suitable for intelligent monitoring systems and uses the background-difference method to detect moving targets.

Description

A multimodality, automatically updating and replacing background modeling method
Technical field
The invention belongs to the technical field of video monitoring and relates to a background modeling method for video moving-target detection, in particular to a multimodality, automatically updating and replacing background modeling method.
Background technology
Moving-target detection is the most critical link in an intelligent monitoring system for analyzing target behavior. Because the direction and speed of moving targets in the monitored field of view are unpredictable, the background-difference method is usually adopted for moving-target detection. The effectiveness of moving-target detection by background difference, however, depends on whether the background model is valid. Even when the scenery in the background does not change, the illumination of the environment can vary unpredictably: besides slow changes in natural lighting, there are also sudden changes such as passing clouds and wind-shaken leaves. The background model therefore needs a certain adaptability. Although several methods with some adaptability already exist, such as the mixture-of-Gaussians model, many problems remain in practical applications.
Summary of the invention
The purpose of the invention is to provide a multimodality, automatically updating and replacing background modeling method, which solves the problem that existing background modeling methods adapt poorly to sudden changes in the environment.
The technical solution adopted by the invention is a multimodality, automatically updating and replacing background modeling method, implemented according to the following steps:
Step 1: modeling of the main background and the auxiliary backgrounds.
The main background is modeled as follows:
a. initialize the main background model;
b. revise the main background model;
c. update the main background model threshold.
The auxiliary backgrounds are modeled as follows:
a. establish the alternate auxiliary background sequence;
b. collect classification statistics;
c. update the thresholds of the auxiliary background models.
Step 2: calculation of the background to be updated.
The background is updated according to the following formula:

    o_k(i,j) = 1, if |f_k(i,j) − b(i,j)| > th
    o_k(i,j) = 0, otherwise,

where F_k = [f_k(i,j)]_{m×n} is the current frame of the monitoring video, B = [b(i,j)]_{m×n} is the background model, O_k = [o_k(i,j)]_{m×n} is the target detection result for the current frame, and th is the background judgment threshold.
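As a sketch of this detection formula (a hypothetical NumPy implementation; the function and variable names are not from the patent), the current frame is compared pixelwise against one background model:

```python
import numpy as np

def detect_moving_target(frame, background, th):
    """Background-difference detection: o_k(i,j) = 1 where
    |f_k(i,j) - b(i,j)| > th, else 0."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return (diff > th).astype(np.uint8)
```

Pixels whose difference from the background exceeds the judgment threshold are marked as moving-target pixels.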
Further features of the invention are as follows.
In step 1a, the main background model is initialized as follows:
Take N consecutive video frames for background learning, N being large enough that, in the frame sequence, each pixel is unoccluded by moving targets in at least 98% of the frames. A single-Gaussian treatment yields the initial main background B_main = [b_main(i,j)]_{m×n}, where m and n are the number of rows and columns of the video frame image, and b_main(i,j) = μ(i,j), i = 1,2,…,m, j = 1,2,…,n, where μ(i,j) is the mean of the N video frames at point (i,j), i.e. μ(i,j) = (1/N) Σ_{k=1..N} f_k(i,j), and f_k(i,j) is the k-th video frame. The standard deviation σ(i,j) of these N frames is also computed:

    σ(i,j) = sqrt( (1/N) Σ_{k=1..N} ( f_k(i,j) − μ(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n.
In step 1b, the main background model is revised as follows:
For the video sequence from frame N+1 to frame 2N, compute the revised main background model B_main = [b_main(i,j)]_{m×n} by

    b_main(i,j) = b_main(i,j),                   if |b_main(i,j) − f_k(i,j)| ≥ th_1(i,j)
    b_main(i,j) = (b_main(i,j) + f_k(i,j)) / 2,  if |b_main(i,j) − f_k(i,j)| < th_1(i,j),   k = N+1,…,2N,

where the threshold is th_1(i,j) = 2σ(i,j) and σ(i,j) is the standard deviation of the first N video frames computed in step 1a.
In step 1c, the main background model threshold is updated as follows:
Let σ_old(i,j) = th_1(i,j)/2, and set

    σ(i,j) = σ_old(i,j),                                                   if |b_main(i,j) − f_k(i,j)| ≥ th_1(i,j)
    σ(i,j) = sqrt( α·σ_old(i,j)² + (1 − α)·( f_k(i,j) − b_main(i,j) )² ),  if |b_main(i,j) − f_k(i,j)| < th_1(i,j),

where i = 1,2,…,m, j = 1,2,…,n, k = N+1, N+2,…,2N, th_1(i,j) = 2σ(i,j) for i = 1,2,…,m, j = 1,2,…,n, and α is the update rate.
In auxiliary-background step a of step 1, the alternate auxiliary background sequence is established as follows:
For the video sequence from frame 2N+1 to frame 3N, set the threshold series th_2(i,j) = th_1(i,j) + σ(i,j), th_3(i,j) = th_2(i,j) + σ(i,j), …, th_{k+1}(i,j) = th_k(i,j) + σ(i,j), where k is a positive integer.
If |f_k(i,j) − b_main(i,j)| > th_1(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_2(i,j), the pixel value of this frame at (i,j) is assigned to class C_1(i,j);
likewise, if |f_k(i,j) − b_main(i,j)| > th_m(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_{m+1}(i,j), the pixel value of this frame at (i,j) is assigned to class C_m(i,j).
In auxiliary-background step b of step 1, classification statistics are collected as follows:
Count the number of pixels in each class C_k(i,j), k = 1,2,…,m, denoted N_{C_k}(i,j).
If L auxiliary backgrounds are provided, select from the m classes the L classes with the most pixels as auxiliary backgrounds and delete the unselected classes; compute the mean of each of the L classes by

    μ_k(i,j) = (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} f_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.

Then compute the pixel distribution standard deviation within each selected class:

    σ_k(i,j) = sqrt( (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} ( f_k(i,j) − μ_k(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.
In auxiliary-background step c of step 1, the thresholds of the auxiliary background models are updated as follows:
Using the pixel distribution standard deviations obtained in auxiliary-background step b, update the judgment threshold of each auxiliary background by

    th_k(i,j) = 2σ_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.
The calculation of the background to be updated in step 2 is carried out as follows:
When the background changes slowly, update according to

    b_s(i,j) = b_s(i,j),                  if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    b_s(i,j) = (b_s(i,j) + f_k(i,j)) / 2, if |b_s(i,j) − f_k(i,j)| < th_s(i,j),

where b_s(i,j) ranges over the one main background and the L auxiliary backgrounds, and th_s(i,j) is the judgment threshold of the corresponding background.
At the same time, count the pixel distribution number of each background model:

    N_s(i,j) = N_s(i,j),     if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    N_s(i,j) = N_s(i,j) + 1, if |b_s(i,j) − f_k(i,j)| < th_s(i,j).

An upper limit N_limit is set; when N_s(i,j) ≥ N_limit, force N_s(i,j) = N_limit.
First, update the standard deviation of each background model:

    σ_s(i,j) = σ_s(i,j),                                             if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    σ_s(i,j) = sqrt( 0.9·σ_s(i,j)² + 0.1·( f_k(i,j) − b_s(i,j) )² ), if |b_s(i,j) − f_k(i,j)| < th_s(i,j).

Then update the judgment threshold of each background:

    th_s(i,j) = 2σ_s(i,j),

where σ_s(i,j) is the standard deviation of the main background or of one of the L auxiliary background models.
When some points in the background jump: if the differences between the background to be updated and the previously established main background b_main(i,j) and all L auxiliary backgrounds exceed the set thresholds, the point is judged a foreground point and its value is copied into the background to be updated:

    b_renew(i,j) = f_k(i,j).

At the same time, the threshold th_new(i,j) = min{ th_1(i,j), th_s(i,j) (s = 1,2,…,L) } is set, where th_1(i,j) is the judgment threshold of the main background and th_s(i,j) is the judgment threshold of the s-th auxiliary background. The background to be updated is tracked over the following P frames; if among those P frames there are Q frames, Q no less than a preset lower limit, satisfying

    |b_renew(i,j) − f_j(i,j)| ≤ th_new,  j = k+1,…,P,

the point is judged to belong to the background to be updated, i.e. the pixel number of the background to be updated is

    N_renew(i,j) = Q.

The mean of these Q frames is then stored in the background to be updated, and the judgment threshold of the model to be updated is refreshed; otherwise the point is judged foreground, deleted from the background to be updated, and awaits the next judgment. If N_renew(i,j) is greater than the pixel distribution number of some auxiliary background among the L auxiliary backgrounds, the background to be updated replaces the auxiliary background with the smaller statistic and becomes a new auxiliary background; the replaced auxiliary background is deleted, the background to be updated is likewise cleared, and the next judgment is awaited.
The beneficial effect of the invention is that, during background training, learning and updating, several modalities are established to describe the background; according to the current state, a suitable modality is selected for the background description, and the established modalities are updated and replaced while moving targets are detected, thereby maximizing the adaptation to the lighting environment during moving-target detection.
Embodiment
The invention is described in detail below with reference to an embodiment.
The multimodality, automatically updating and replacing background modeling method of the invention is implemented according to the following steps:
Step 1: the modeling of main background and auxiliary background
Main background modeling is specifically implemented according to following steps:
A. Initialization of the main background model
When the system starts, it first enters the background learning phase: take N consecutive video frames for background learning. The size of N depends on the speed of the moving targets in these frames; it must be large enough that, in the frame sequence, each pixel is unoccluded by moving targets in at least 98% of the frames. In that case moving targets need not be rejected explicitly; a single-Gaussian treatment suffices to obtain the initial main background B_main = [b_main(i,j)]_{m×n} (m and n are the number of rows and columns of the video frame image):

    b_main(i,j) = μ(i,j),  i = 1,2,…,m, j = 1,2,…,n,   (1)

where μ(i,j) is the mean of the N video frames at point (i,j), i.e. μ(i,j) = (1/N) Σ_{k=1..N} f_k(i,j), and f_k(i,j) is the k-th video frame.
The standard deviation σ(i,j) of these N frames is also computed:

    σ(i,j) = sqrt( (1/N) Σ_{k=1..N} ( f_k(i,j) − μ(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n.   (2)
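The initialization of formulas (1) and (2) can be sketched as follows (a hypothetical NumPy implementation; the function name and array layout are assumptions, not from the patent):

```python
import numpy as np

def init_main_background(frames):
    """Single-Gaussian initialization over N training frames:
    b_main = per-pixel mean mu(i,j) (formula (1)),
    sigma  = per-pixel standard deviation (formula (2)).
    frames is an N x m x n array of grayscale frames."""
    frames = np.asarray(frames, dtype=np.float64)
    b_main = frames.mean(axis=0)   # mu(i, j)
    sigma = frames.std(axis=0)     # population std: 1/N inside the square root
    return b_main, sigma
```

Note that formula (2) divides by N (not N−1), which matches NumPy's default `ddof=0`.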
B. Revision of the main background model
For the video sequence from frame N+1 to frame 2N, compute the revised main background model B_main = [b_main(i,j)]_{m×n} by

    b_main(i,j) = b_main(i,j),                   if |b_main(i,j) − f_k(i,j)| ≥ th_0(i,j)
    b_main(i,j) = (b_main(i,j) + f_k(i,j)) / 2,  if |b_main(i,j) − f_k(i,j)| < th_0(i,j),   k = N+1,…,2N,   (3)

where the threshold is th_0(i,j) = 2σ(i,j) and σ(i,j) is the standard deviation of the first N video frames computed by formula (2).
C. Updating the threshold th_0(i,j)
While formula (3) revises the background model, the threshold th_0(i,j) is also updated so that it keeps adapting to the current ambient lighting.
Let σ_old(i,j) = th_0(i,j)/2, and set

    σ(i,j) = σ_old(i,j),                                                   if |b_main(i,j) − f_k(i,j)| ≥ th_0(i,j)
    σ(i,j) = sqrt( α·σ_old(i,j)² + (1 − α)·( f_k(i,j) − b_main(i,j) )² ),  if |b_main(i,j) − f_k(i,j)| < th_0(i,j),
    i = 1,2,…,m, j = 1,2,…,n, k = N+1, N+2,…,2N,   (4)

    th_0(i,j) = 2σ(i,j),  i = 1,2,…,m, j = 1,2,…,n,   (5)

where α is the update rate, α ∈ [0,1].
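One revision pass over formulas (3)-(5) can be sketched as below. This is a hypothetical NumPy rendering: the patent leaves implicit whether formula (4) uses the background value before or after the revision of formula (3); here the revised value is used, and `alpha` is an assumed update-rate value.

```python
import numpy as np

def revise_main_background(b_main, sigma, frame, alpha=0.5):
    """One step of formulas (3)-(5): pixels within th_0 = 2*sigma of the
    background are averaged in, and sigma is blended with rate alpha;
    pixels outside the threshold are left untouched."""
    b_main = b_main.astype(np.float64).copy()
    sigma = sigma.astype(np.float64).copy()
    th0 = 2.0 * sigma
    close = np.abs(b_main - frame) < th0          # |b_main - f_k| < th_0
    b_main[close] = (b_main[close] + frame[close]) / 2.0   # formula (3)
    sigma[close] = np.sqrt(alpha * sigma[close] ** 2                # formula (4),
                           + (1.0 - alpha) * (frame[close] - b_main[close]) ** 2)
    return b_main, sigma, 2.0 * sigma             # formula (5): th_0 = 2*sigma
```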
The auxiliary backgrounds are modeled as follows:
For the video sequence from frame 2N+1 to frame 3N, set the threshold series th_1(i,j) = th_0(i,j) + σ(i,j), th_2(i,j) = th_1(i,j) + σ(i,j), …, th_{k+1}(i,j) = th_k(i,j) + σ(i,j), where k is a positive integer.
A. Establishing the alternate auxiliary background sequence
If |f_k(i,j) − b_main(i,j)| > th_0(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_1(i,j), the pixel value of this frame at (i,j) is assigned to class C_1(i,j).
Likewise, if |f_k(i,j) − b_main(i,j)| > th_{m−1}(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_m(i,j), the pixel value of this frame at (i,j) is assigned to class C_m(i,j).
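The banding rule above can be sketched per pixel as follows (a hypothetical helper; names are not from the patent). The threshold series is passed as an increasing list [th_0, th_1, …]:

```python
def classify_pixel(f_val, b_main_val, th_series):
    """Assign a pixel that fails the main-background test to class C_m:
    C_m holds values with th_{m-1} < |f - b_main| <= th_m.
    Returns the 1-based class index m, None if the pixel is within th_0
    (i.e. consistent with the main background), or len(th_series) if the
    difference exceeds the last threshold in the series."""
    d = abs(float(f_val) - float(b_main_val))
    if d <= th_series[0]:
        return None                 # matches the main background
    for m in range(1, len(th_series)):
        if d <= th_series[m]:
            return m                # class C_m
    return len(th_series)           # beyond the last threshold
```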
B. Collecting classification statistics
Count the number of pixels in each class C_k(i,j), k = 1,2,…,m, denoted N_{C_k}(i,j).
If L auxiliary backgrounds are provided, select from the m classes above the L classes with the most pixels as auxiliary backgrounds (note: if the number of classes obtained above satisfies m ≤ L, no selection is needed, all classes become auxiliary backgrounds, and L is set to m); the unselected classes are deleted. The mean of each of the L classes is computed by formula (6):

    μ_k(i,j) = (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} f_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.   (6)

Then the pixel distribution standard deviation within each selected class is computed:

    σ_k(i,j) = sqrt( (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} ( f_k(i,j) − μ_k(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.   (7)
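The selection and statistics of formulas (6) and (7), applied at a single pixel location, can be sketched as follows (a hypothetical helper; the list-of-samples representation is an assumption for illustration):

```python
def build_auxiliary_backgrounds(class_samples, L):
    """From per-class pixel samples at one location, keep the L most
    populated classes as auxiliary backgrounds; compute each class's
    mean (formula (6)) and population standard deviation (formula (7)).
    class_samples: list of lists of pixel values, one list per class C_k.
    If fewer than L non-empty classes exist, all of them are kept."""
    ranked = sorted((s for s in class_samples if s), key=len, reverse=True)[:L]
    counts = [len(s) for s in ranked]                      # N_{C_k}
    means = [sum(s) / len(s) for s in ranked]              # mu_k
    stds = [(sum((v - mu) ** 2 for v in s) / len(s)) ** 0.5
            for s, mu in zip(ranked, means)]               # sigma_k
    return means, stds, counts
```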
C. Updating the thresholds of the auxiliary background models
Using the standard deviations computed by formula (7), update the judgment threshold of each of the L auxiliary backgrounds:

    th_k(i,j) = 2σ_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.   (8)
Step 2: calculation of the background to be updated
Once the main background and the auxiliary backgrounds have been modeled, the threshold parameters obtained can be substituted into the following formula to perform target detection:

    o_k(i,j) = 1, if |f_k(i,j) − b(i,j)| > th
    o_k(i,j) = 0, otherwise,   (9)

where F_k = [f_k(i,j)]_{m×n} is the current frame of the monitoring video, B = [b(i,j)]_{m×n} is the background model, O_k = [o_k(i,j)]_{m×n} is the target detection result for the current frame, and th is the background judgment threshold, th ∈ {th_0, th_1, …, th_L}.
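One reading of formula (9) across all modalities, consistent with the jump-point criterion later in the text, is that a pixel is foreground only if it differs from every background model by more than that model's threshold. A hypothetical NumPy sketch (names assumed):

```python
import numpy as np

def detect_multimodal(frame, backgrounds, thresholds):
    """Multimodal background difference: a pixel is foreground (1) only if
    |f_k - b_s| > th_s for the main background and every auxiliary
    background. backgrounds and thresholds are parallel lists of
    m x n arrays."""
    frame = frame.astype(np.float64)
    fg = np.ones(frame.shape, dtype=bool)
    for b, th in zip(backgrounds, thresholds):
        fg &= np.abs(frame - b) > th   # must exceed every modality's threshold
    return fg.astype(np.uint8)
```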
Moving targets are detected by the background-difference method. During detection, the background is updated continually to adapt to changes in the lighting environment; the update method this patent adopts differs with the kind of background change.
(1) The background changes slowly
This is the case where the illumination of the environment does not change sharply; the background is then updated by

    b_s(i,j) = b_s(i,j),                  if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    b_s(i,j) = (b_s(i,j) + f_k(i,j)) / 2, if |b_s(i,j) − f_k(i,j)| < th_s(i,j),   (10)

where b_s(i,j) ranges over the one main background and the L auxiliary backgrounds, and th_s(i,j) is the judgment threshold of the corresponding background.
At the same time, the pixel distribution number of each background model is counted:

    N_s(i,j) = N_s(i,j),     if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    N_s(i,j) = N_s(i,j) + 1, if |b_s(i,j) − f_k(i,j)| < th_s(i,j).   (11)

To prevent the accumulated count N_s(i,j) from overflowing during long-term monitoring, the safeguard adopted here is to set an upper limit N_limit: when N_s(i,j) ≥ N_limit, force N_s(i,j) = N_limit.
Afterwards, the update thresholds for the next moment are computed.
First, the standard deviation of each background model is updated:

    σ_s(i,j) = σ_s(i,j),                                             if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    σ_s(i,j) = sqrt( 0.9·σ_s(i,j)² + 0.1·( f_k(i,j) − b_s(i,j) )² ), if |b_s(i,j) − f_k(i,j)| < th_s(i,j).   (12)

Then the judgment threshold of each background is updated:

    th_s(i,j) = 2σ_s(i,j),   (13)

where σ_s(i,j) is the standard deviation of the main background or of one of the L auxiliary background models.
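The slow-change update of formulas (10)-(13) for a single modality can be sketched as follows. This is a hypothetical NumPy rendering: the revised background value is used in formula (12) (the patent leaves the ordering implicit), and `n_limit` stands for the unspecified upper limit N_limit.

```python
import numpy as np

def slow_update(b_s, th_s, n_s, sigma_s, frame, n_limit=255):
    """Slow-change update for one background modality: matching pixels are
    averaged into the model (formula (10)), their hit count n_s is
    incremented and capped at n_limit (formula (11)), sigma is blended
    0.9/0.1 (formula (12)), and the threshold refreshed as 2*sigma
    (formula (13))."""
    b_s = b_s.astype(np.float64).copy()
    n_s = n_s.copy()
    sigma_s = sigma_s.astype(np.float64).copy()
    match = np.abs(b_s - frame) < th_s
    b_s[match] = (b_s[match] + frame[match]) / 2.0            # (10)
    n_s[match] = np.minimum(n_s[match] + 1, n_limit)          # (11) + cap
    sigma_s[match] = np.sqrt(0.9 * sigma_s[match] ** 2        # (12)
                             + 0.1 * (frame[match] - b_s[match]) ** 2)
    return b_s, 2.0 * sigma_s, n_s, sigma_s                   # (13): th = 2*sigma
```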
(2) Some points in the background jump
Points in the background that jump permanently, for causes such as wind-blown leaves, are generally few and relatively scattered. At first this variation may not affect the extraction of moving targets, but if it is left without updating for a long time the affected points accumulate and can eventually interfere with target detection, causing many false detections. The jumping points in the background must therefore also be updated in real time.
The approach of this patent is as follows. If the criterion finds that the differences between a point and the previously established main background b_main(i,j) and all L auxiliary backgrounds exceed the set thresholds, the point is judged a foreground point; at the same time, because it might be a jumped background, its value is copied into the background to be updated:

    b_renew(i,j) = f_k(i,j).   (14)

At the same time the threshold th_new(i,j) = min{ th_0(i,j), th_s(i,j) (s = 1,2,…,L) } is set, where th_0(i,j) is the judgment threshold of the main background and th_s(i,j) is the judgment threshold of the s-th auxiliary background. The background to be updated is then tracked over the following P frames; if among those P frames there are Q frames (Q no less than a preset lower limit, an empirical value) satisfying

    |b_renew(i,j) − f_j(i,j)| ≤ th_new,  j = k+1,…,P,   (15)

the point is judged to belong to the background to be updated, i.e. the pixel number of the background to be updated is

    N_renew(i,j) = Q.   (16)

The mean of these Q frames (computed by formula (6)) is then stored in the background to be updated, and the judgment threshold of the model to be updated is refreshed by formulas (7) and (8). Otherwise the point is judged foreground, deleted from the background to be updated, and awaits the next judgment.
If N_renew(i,j) is greater than the pixel distribution number (computed by formula (11)) of some auxiliary background among the L auxiliary backgrounds, the background to be updated replaces the auxiliary background with the smaller statistic and becomes a new auxiliary background; the replaced auxiliary background is deleted, the background to be updated is likewise cleared, and the next judgment is awaited.
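The Q-of-P confirmation test of formulas (14)-(16) can be sketched per pixel as follows (a hypothetical helper; `q_min` stands for the empirically chosen lower limit mentioned above):

```python
def confirm_pending_background(pending_value, later_values, th_new, q_min):
    """Among the frames that follow a jump point, count those within th_new
    of the stored candidate value b_renew (formula (15)). If at least q_min
    frames agree, the candidate is confirmed and its support count Q
    (formula (16)) is returned; otherwise it is rejected as foreground and
    None is returned."""
    q = sum(1 for v in later_values if abs(v - pending_value) <= th_new)
    return q if q >= q_min else None
```

A confirmed candidate whose count exceeds an auxiliary background's count would then replace that auxiliary background, as described above.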

Claims (8)

1. A multimodality, automatically updating and replacing background modeling method, characterized in that it is implemented according to the following steps:
Step 1: modeling of the main background and the auxiliary backgrounds.
The main background is modeled as follows:
a. initialize the main background model;
b. revise the main background model;
c. update the main background model threshold.
The auxiliary backgrounds are modeled as follows:
a. establish the alternate auxiliary background sequence;
b. collect classification statistics;
c. update the thresholds of the auxiliary background models.
Step 2: calculation of the background to be updated.
The background is updated according to the following formula:

    o_k(i,j) = 1, if |f_k(i,j) − b(i,j)| > th
    o_k(i,j) = 0, otherwise,

where F_k = [f_k(i,j)]_{m×n} is the current frame of the monitoring video, B = [b(i,j)]_{m×n} is the background model, O_k = [o_k(i,j)]_{m×n} is the target detection result for the current frame, and th is the background judgment threshold.
2. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in step 1a the main background model is initialized as follows:
Take N consecutive video frames for background learning, N being large enough that, in the frame sequence, each pixel is unoccluded by moving targets in at least 98% of the frames; a single-Gaussian treatment yields the initial main background B_main = [b_main(i,j)]_{m×n}, where m and n are the number of rows and columns of the video frame image, and b_main(i,j) = μ(i,j), i = 1,2,…,m, j = 1,2,…,n, where μ(i,j) is the mean of the N video frames at point (i,j), i.e. μ(i,j) = (1/N) Σ_{k=1..N} f_k(i,j), f_k(i,j) is the k-th video frame, and the standard deviation σ(i,j) of these N frames is computed as

    σ(i,j) = sqrt( (1/N) Σ_{k=1..N} ( f_k(i,j) − μ(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n.
3. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in step 1b the main background model is revised as follows:
For the video sequence from frame N+1 to frame 2N, compute the revised main background model B_main = [b_main(i,j)]_{m×n} by

    b_main(i,j) = b_main(i,j),                   if |b_main(i,j) − f_k(i,j)| ≥ th_1(i,j)
    b_main(i,j) = (b_main(i,j) + f_k(i,j)) / 2,  if |b_main(i,j) − f_k(i,j)| < th_1(i,j),   k = N+1, N+2,…,2N,

where the threshold is th_1(i,j) = 2σ(i,j) and σ(i,j) is the standard deviation of the first N video frames computed in step 1a.
4. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in step 1c the main background model threshold is updated as follows:
Let σ_old(i,j) = th_1(i,j)/2, and set

    σ(i,j) = σ_old(i,j),                                                   if |b_main(i,j) − f_k(i,j)| ≥ th_1(i,j)
    σ(i,j) = sqrt( α·σ_old(i,j)² + (1 − α)·( f_k(i,j) − b_main(i,j) )² ),  if |b_main(i,j) − f_k(i,j)| < th_1(i,j),

where i = 1,2,…,m, j = 1,2,…,n, k = N+1, N+2,…,2N, th_1(i,j) = 2σ(i,j) for i = 1,2,…,m, j = 1,2,…,n, and α is the update rate.
5. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in auxiliary-background step a of step 1 the alternate auxiliary background sequence is established as follows:
For the video sequence from frame 2N+1 to frame 3N, set the threshold series th_2(i,j) = th_1(i,j) + σ(i,j), th_3(i,j) = th_2(i,j) + σ(i,j), …, th_{k+1}(i,j) = th_k(i,j) + σ(i,j), where k is a positive integer.
If |f_k(i,j) − b_main(i,j)| > th_1(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_2(i,j), the pixel value of this frame at (i,j) is assigned to class C_1(i,j);
likewise, if |f_k(i,j) − b_main(i,j)| > th_m(i,j) and |f_k(i,j) − b_main(i,j)| ≤ th_{m+1}(i,j), the pixel value of this frame at (i,j) is assigned to class C_m(i,j).
6. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in auxiliary-background step b of step 1 the classification statistics are collected as follows:
Count the number of pixels in each class C_k(i,j), k = 1,2,…,m, denoted N_{C_k}(i,j), k = 1,2,…,m. If L auxiliary backgrounds are provided, select from the m classes the L classes with the most pixels as auxiliary backgrounds and delete the unselected classes; compute the mean of each of the L classes by

    μ_k(i,j) = (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} f_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.

Then compute the pixel distribution standard deviation within each selected class:

    σ_k(i,j) = sqrt( (1/N_{C_k}) Σ_{f_k(i,j)∈C_k} ( f_k(i,j) − μ_k(i,j) )² ),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.
7. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that in auxiliary-background step c of step 1 the thresholds of the auxiliary background models are updated as follows:
Using the pixel distribution standard deviations obtained in auxiliary-background step b, update the judgment threshold of each auxiliary background by

    th_k(i,j) = 2σ_k(i,j),  i = 1,2,…,m, j = 1,2,…,n, k = 1,2,…,L.
8. The multimodality, automatically updating and replacing background modeling method according to claim 1, characterized in that the calculation of the background to be updated in step 2 is carried out as follows:
When the background changes slowly, update according to

    b_s(i,j) = b_s(i,j),                  if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    b_s(i,j) = (b_s(i,j) + f_k(i,j)) / 2, if |b_s(i,j) − f_k(i,j)| < th_s(i,j),

where b_s(i,j) ranges over the one main background and the L auxiliary backgrounds, and th_s(i,j) is the judgment threshold of the corresponding background.
At the same time, count the pixel distribution number of each background model:

    N_s(i,j) = N_s(i,j),     if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    N_s(i,j) = N_s(i,j) + 1, if |b_s(i,j) − f_k(i,j)| < th_s(i,j).

An upper limit N_limit is set; when N_s(i,j) ≥ N_limit, force N_s(i,j) = N_limit. First, update the standard deviation of each background model:

    σ_s(i,j) = σ_s(i,j),                                             if |b_s(i,j) − f_k(i,j)| ≥ th_s(i,j)
    σ_s(i,j) = sqrt( 0.9·σ_s(i,j)² + 0.1·( f_k(i,j) − b_s(i,j) )² ), if |b_s(i,j) − f_k(i,j)| < th_s(i,j).

Then update the judgment threshold of each background:

    th_s(i,j) = 2σ_s(i,j),

where σ_s(i,j) is the standard deviation of the main background or of one of the L auxiliary background models.
When some points in the background jump: if the differences between the background to be updated and the previously established main background b_main(i,j) and all L auxiliary backgrounds exceed the set thresholds, the point is judged a foreground point and its value is copied into the background to be updated:

    b_renew(i,j) = f_k(i,j).

At the same time, the threshold th_new(i,j) = min{ th_1(i,j), th_s(i,j) (s = 1,2,…,L) } is set, where th_1(i,j) is the judgment threshold of the main background and th_s(i,j) is the judgment threshold of the s-th auxiliary background. The background to be updated is tracked over the following P frames; if among those P frames there are Q frames, Q no less than a preset lower limit, satisfying

    |b_renew(i,j) − f_j(i,j)| ≤ th_new,  j = k+1,…,P,

the point is judged to belong to the background to be updated, i.e. the pixel number of the background to be updated is

    N_renew(i,j) = Q.

The mean of these Q frames is then stored in the background to be updated, and the judgment threshold of the model to be updated is refreshed; otherwise the point is judged foreground, deleted from the background to be updated, and awaits the next judgment. If N_renew(i,j) is greater than the pixel distribution number of some auxiliary background among the L auxiliary backgrounds, the background to be updated replaces the auxiliary background with the smaller statistic and becomes a new auxiliary background; the replaced auxiliary background is deleted, the background to be updated is likewise cleared, and the next judgment is awaited.
CN2010100135903A 2010-01-13 2010-01-13 Multimodality automatic updating and replacing background modeling method Expired - Fee Related CN101777186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010100135903A CN101777186B (en) 2010-01-13 2010-01-13 Multimodality automatic updating and replacing background modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010100135903A CN101777186B (en) 2010-01-13 2010-01-13 Multimodality automatic updating and replacing background modeling method

Publications (2)

Publication Number Publication Date
CN101777186A true CN101777186A (en) 2010-07-14
CN101777186B CN101777186B (en) 2011-12-14

Family

ID=42513641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010100135903A Expired - Fee Related CN101777186B (en) 2010-01-13 2010-01-13 Multimodality automatic updating and replacing background modeling method

Country Status (1)

Country Link
CN (1) CN101777186B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996410A (en) * 2010-12-07 2011-03-30 北京交通大学 Method and system of detecting moving object under dynamic background
CN102222340A (en) * 2011-06-30 2011-10-19 东软集团股份有限公司 Method and system for detecting prospect
CN104680521A (en) * 2015-02-06 2015-06-03 哈尔滨工业大学深圳研究生院 Improved background modeling and foreground detecting method
CN104751484A (en) * 2015-03-20 2015-07-01 西安理工大学 Moving target detection method and detection system for achieving same
CN105023248A (en) * 2015-06-25 2015-11-04 西安理工大学 Low-SNR (signal to noise ratio) video motion target extraction method
CN106157318A (en) * 2016-07-26 2016-11-23 电子科技大学 Monitor video background image modeling method
CN107333031A * 2017-07-27 2017-11-07 李静雯 Multi-channel video automatic editing method suitable for campus football match
CN109479120A * 2016-10-14 2019-03-15 富士通株式会社 Background model extraction device, and traffic congestion detection method and device
CN111028245A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-mode composite high-definition high-speed video background modeling method
CN113011216A (en) * 2019-12-19 2021-06-22 合肥君正科技有限公司 Multi-classification threshold self-adaptive occlusion detection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0425937D0 (en) * 2004-11-25 2004-12-29 British Telecomm Method and system for initialising a background model
US7574043B2 (en) * 2005-06-27 2009-08-11 Mitsubishi Electric Research Laboratories, Inc. Method for modeling cast shadows in videos
CN101221663A * 2008-01-18 2008-07-16 电子科技大学中山学院 Intelligent monitoring and alarming method based on moving object detection

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996410A (en) * 2010-12-07 2011-03-30 北京交通大学 Method and system of detecting moving object under dynamic background
CN102222340A * 2011-06-30 2011-10-19 东软集团股份有限公司 Method and system for detecting foreground
CN102222340B * 2011-06-30 2013-04-10 东软集团股份有限公司 Method and system for detecting foreground
CN104680521A (en) * 2015-02-06 2015-06-03 哈尔滨工业大学深圳研究生院 Improved background modeling and foreground detecting method
CN104680521B * 2015-02-06 2018-04-06 哈尔滨工业大学深圳研究生院 Improved background modeling and foreground detection method
CN104751484B * 2015-03-20 2017-08-25 西安理工大学 Moving target detection method and detection system for realizing the same
CN104751484A * 2015-03-20 2015-07-01 西安理工大学 Moving target detection method and detection system for achieving same
CN105023248B * 2015-06-25 2017-11-03 西安理工大学 Low-SNR (signal to noise ratio) video motion target extraction method
CN105023248A (en) * 2015-06-25 2015-11-04 西安理工大学 Low-SNR (signal to noise ratio) video motion target extraction method
CN106157318A (en) * 2016-07-26 2016-11-23 电子科技大学 Monitor video background image modeling method
CN106157318B (en) * 2016-07-26 2018-10-16 电子科技大学 Monitor video background image modeling method
CN109479120A * 2016-10-14 2019-03-15 富士通株式会社 Background model extraction device, and traffic congestion detection method and device
CN107333031A * 2017-07-27 2017-11-07 李静雯 Multi-channel video automatic editing method suitable for campus football match
CN107333031B (en) * 2017-07-27 2020-09-01 李静雯 Multi-channel video automatic editing method suitable for campus football match
CN111028245A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-mode composite high-definition high-speed video background modeling method
CN113011216A (en) * 2019-12-19 2021-06-22 合肥君正科技有限公司 Multi-classification threshold self-adaptive occlusion detection method
CN113011216B * 2019-12-19 2024-04-02 合肥君正科技有限公司 Multi-classification threshold self-adaptive occlusion detection method

Also Published As

Publication number Publication date
CN101777186B (en) 2011-12-14

Similar Documents

Publication Publication Date Title
CN101777186B (en) Multimodality automatic updating and replacing background modeling method
CN101777180B (en) Complex background real-time replacement method based on background modeling and energy minimization
CN102542571B (en) Moving target detecting method and device
CN105608456A (en) Multi-directional text detection method based on full convolution network
CN105261037A (en) Moving object detection method capable of automatically adapting to complex scenes
CN105354791A (en) Improved adaptive Gaussian mixture foreground detection method
CN105023256B (en) A kind of image defogging method and system
CN104616290A (en) Target detection algorithm in combination of statistical matrix model and adaptive threshold
CN101420536B (en) Background image modeling method for video stream
CN101286231A (en) Contrast enhancement method for uniformly distributing image brightness
CN102663362B (en) Moving target detection method based on gray features
CN106204586A (en) Moving target detection method in complex scenes based on tracking
CN102750712B (en) Moving object segmenting method based on local space-time manifold learning
CN105046683A (en) Object detection method based on adaptive-parameter-adjustment Gaussian mixture model
CN103700065A (en) Structure sparsity propagation image repairing method adopting characteristic classified learning
CN107563299A (en) Pedestrian detection method using ReCNN integrating context information
CN101883209A (en) Method by integrating background model and three-frame difference to detect video background
CN101908214B (en) Moving object detection method with background reconstruction based on neighborhood correlation
CN102509095B (en) Number plate image preprocessing method
CN104021527A (en) Rain and snow removal method in image
CN104376580B (en) Method for processing regions of non-interest in video summarization
CN104658009A (en) Moving-target detection method based on video images
CN105405153A (en) Intelligent mobile terminal anti-noise interference motion target extraction method
CN103209321B (en) Fast video background updating method
CN102592125A (en) Moving object detection method based on standard deviation characteristic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111214

Termination date: 20160113

CF01 Termination of patent right due to non-payment of annual fee