CN102800105A - Target detection method based on motion vector - Google Patents


Info

Publication number
CN102800105A
CN102800105A (application CN2012102186152A / CN201210218615A)
Authority
CN
China
Prior art keywords
frame
target
pixel
motion
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102186152A
Other languages
Chinese (zh)
Other versions
CN102800105B (en)
Inventor
吴炜
黄鹏
罗霁
宋彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201210218615.2A priority Critical patent/CN102800105B/en
Publication of CN102800105A publication Critical patent/CN102800105A/en
Application granted granted Critical
Publication of CN102800105B publication Critical patent/CN102800105B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a target detection method based on motion vectors. It mainly addresses the problems of the prior art, namely excessive runtime, poor real-time performance, and the inability to detect an object quickly and accurately when the foreground turns into background or the background turns into foreground. The method comprises the following steps: (1) obtain the motion vector information of each frame through video decoding; (2) define the target states; (3) initialize the background model with the K-means algorithm; (4) read the motion vector information of the next frame and process the motion vectors; (5) judge the target state; and (6) update the background model with a new method chosen according to the target state, thereby detecting the target. The method greatly reduces target detection time, and in both the foreground-to-background and the background-to-foreground case it notices the change quickly and detects the moving target accurately. The method can be used for military reconnaissance, security inspection, video surveillance and traffic management.

Description

Target detection method based on motion vectors
Technical field
The invention belongs to the technical field of video processing, and in particular relates to a moving-target detection method that can be used for military reconnaissance, security inspection, video surveillance and traffic management.
Background technology
The purpose of moving object detection is to segment the moving regions, i.e. the foreground targets in the usual sense, out of a sequence of images. Whether the foreground target can be detected effectively has a great influence on post-processing such as target tracking and target recognition. In a video sequence, however, dynamic change is not confined to the region where the target is located; the background image may change dynamically as well. Moreover, for outdoor scenes there are many factors that cause the background to change, for example leaves and flags, while indoor scenes have relatively few. Moving object detection is therefore quite difficult work.
The moving-target detection method most commonly used at present is background subtraction. Background subtraction detects the moving target by differencing the current image against a background image. It generally yields complete target features, but it is rather sensitive to dynamic changes in the background, such as illumination changes or swaying branches. The mixture-of-Gaussians model MOG proposed by Stauffer and Grimson has become the classic background-subtraction algorithm. The MOG algorithm represents the variation of each pixel with several Gaussian models, so it can correctly recognize dynamic changes in the background instead of misjudging them as foreground targets. Moreover, the MOG model is updated online: whenever the information of a frame is read, every model of every pixel is updated, which guarantees that the model adapts to changes in the background. Yet the MOG algorithm has problems of its own. First, detection is too slow and its real-time performance is poor. Second, after a foreground target stops moving, i.e. when the foreground turns into background, the MOG algorithm cannot notice the stop in time; for a period after the target stops, the algorithm still erroneously judges the target to be foreground. Third, when a stopped target starts moving again, the MOG algorithm cannot correctly identify the foreground target within a short time, but erroneously judges it to be background; only after considerable time can the target be distinguished.
In "Experiential sampling based foreground/background segmentation for video surveillance", Atrey P.K. et al. propose a target-detection algorithm that reduces detection time: experiential sampling is used to obtain a region of interest, and target detection is performed only within that region to obtain the moving target. However, this algorithm must sample the video frames, which increases computational complexity, and it cannot be used for target detection in the foreground-to-background and background-to-foreground situations. In "A re-evaluation of mixture-of-Gaussian background modeling", Wang et al., and in "An adaptive mixture Gaussian background model with online background reconstruction and adjustable foreground mergence time for motion segmentation", Zhang et al., propose a kind of counter that records the number of times a pixel has been judged to be foreground; when that number exceeds a threshold, the pixel is judged to be background, which puts a time limit on how long a pixel may be judged to be static foreground. However, this method cannot quickly determine the foreground-to-background situation. It can be seen that a new method is needed, one that can quickly detect the foreground target while it is moving, and that can quickly notice the change and still detect the target accurately in both the foreground-to-background and the background-to-foreground situation.
Summary of the invention
The objective of the invention is to remedy the deficiencies of the above algorithms by proposing, on the basis of the MOG model, a target detection method based on motion vectors, so as to reduce computational complexity and achieve fast detection in both the foreground-to-background and the background-to-foreground case.
The basic idea of the invention is as follows. A fast background-model update method is proposed. The method is placed at the video decoding end, and the motion vectors obtained during decoding are used to distinguish the regions belonging to the background from those belonging to the foreground. Different background-model update methods are then applied to them: regions belonging to the background are updated once every L frames, while regions belonging to the foreground are updated every frame, thereby achieving the goal of increasing target detection speed. Several motion states are defined, and the motion state is judged with decision criteria. When the target stops moving, the change of motion state can be detected; once the target is completely static, the region containing the target is set directly as background, achieving a fast and accurate background-model update in the foreground-to-background case. When the target moves again, background replacement is used: the background of the target region from just before the target stopped is taken directly as the background after motion resumes, i.e. the true background of the target region during motion is recovered, thereby achieving fast and accurate target detection in the background-to-foreground case.
The technical scheme comprises the following steps:
(1) use video decoding to obtain the motion vector information of each frame;
(2) define three target states, moving, quasi-stationary and stationary, where the moving state means that the foreground target has displacement between two consecutive frames; the stationary state means that the target produces no displacement between two consecutive frames; and the quasi-stationary state represents the transition between the moving state and the stationary state, preventing a moving foreground target from being misjudged as a static one;
(3) when the video sequence begins, adopt the mixture-of-Gaussians model as the background model, and initialize the background model with the K-means algorithm;
(4) read the motion vector information of the next frame and process the motion vectors:
4.1) record the coordinates of all pixels in the foreground targets; choose among them the pixels with the minimum and maximum abscissa and the pixels with the minimum and maximum ordinate; determine a rectangular region from these four pixels such that the four pixels lie on its boundary; and obtain the displacement of the rectangular region from the change of the coordinates of these four pixels;
4.2) accumulate the values V_{i,m,n} of all 4 × 4 blocks in the rectangular region to obtain the number of 4 × 4 blocks with non-zero motion vectors contained in the region,
where V_{i,m,n} indicates whether the motion vector of the 4 × 4 block at position (m, n) in frame i is zero, with 1 ≤ m ≤ M and 1 ≤ n ≤ N, M and N being the numbers of 4 × 4 blocks across the image width and height respectively; V_{i,m,n} = 0 when the block's motion vector is zero, and V_{i,m,n} = 1 when it is non-zero;
(5) judge the target state:
5.1) when the target is in the moving state, first judge from the number of non-zero motion-vector blocks in the rectangular region whether the target enters the quasi-stationary state: if the number of non-zero motion-vector blocks is smaller than a given threshold T for three consecutive frames, the target is judged to enter the quasi-stationary state; otherwise the target is judged still to be in the moving state, and step (8) is executed; then judge from the number of non-zero motion-vector blocks and the displacement of the rectangular region whether the target enters the stationary state from the quasi-stationary state: if within the following 20 frames there are no three consecutive frames in which the number of non-zero motion-vector blocks exceeds T, and at the same time the displacement of the rectangular region is smaller than a given displacement threshold D, the target is judged to enter the stationary state and step (6) is executed; otherwise the target is judged to return to the moving state and step (8) is executed, where T = 55 and D = 6;
5.2) when the target is in the stationary state, judge from the number of non-zero motion-vector blocks in the rectangular region whether the target moves again: if the number of non-zero motion-vector blocks exceeds the given threshold T for three consecutive frames, the target is judged to enter the moving state and step (7) is executed; otherwise the target is judged still to be in the stationary state, and step (8) is executed;
(6) for the first frame entering the stationary state, update the background models of all pixels inside the rectangular region with the new update method, i.e. directly take the image of the rectangular region in this frame as the background, achieving a fast background-model update; for all pixels outside the rectangular region, execute step (8); from the second frame of the stationary state onward, execute step (8);
(7) for the first frame after motion resumes, update the background models of all pixels inside the rectangular region with the new update method, i.e. take the background of the rectangular region containing the target in the last frame of the previous moving state directly as the background of that region in the current frame, achieving a fast background-model update and thus accurate detection of the target; for all pixels outside the rectangular region, execute step (8); from the second frame after motion resumes onward, execute step (8);
(8) every L frames, update all pixels of the whole frame; in the other frames, skip pixels belonging to blocks whose motion vector is zero, and update pixels belonging to blocks with non-zero motion vectors with the mixture-of-Gaussians MOG update method.
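The selective update schedule of step (8) can be sketched as follows. This is a minimal illustration under the assumption of a per-block 0/1 motion-vector map v_{i,m,n} and 4 × 4 blocks; the function and parameter names (`pixels_to_update`, `v_blocks`) are chosen here for illustration only:

```python
import numpy as np

def pixels_to_update(frame_idx, v_blocks, L=10, block=4):
    """Step (8) sketch: return a boolean per-pixel update mask.
    Every L-th frame updates every pixel; other frames update only
    pixels inside blocks whose motion vector is non-zero."""
    if frame_idx % L == 0:
        shape = tuple(s * block for s in v_blocks.shape)
        return np.ones(shape, dtype=bool)
    # expand each block flag to its block x block pixel footprint
    return np.kron(v_blocks, np.ones((block, block), dtype=int)).astype(bool)
```

Pixels left out of the mask keep their Gaussian models untouched, which is where the claimed speed-up over a plain per-frame MOG update comes from.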
The invention uses the motion vectors obtained from video decoding, defines a set of target states, judges the target state from the motion vectors, and applies new background-model update methods according to the target state, thereby detecting the target. Compared with existing methods it has the following advantages:
(a) the computational complexity is lower, so a large amount of time is saved and moving targets are detected faster;
(b) in the foreground-to-background case, moving targets can be detected quickly and accurately;
(c) in the background-to-foreground case, moving targets can be detected quickly and accurately.
Experimental results show that the invention is effective in the above three respects.
Description of drawings
Fig. 1 is the flow chart of target detection according to the invention;
Fig. 2 compares the foreground targets detected by the invention and by the existing mixture-of-Gaussians MOG method while the target is in the moving state;
Fig. 3 compares the foreground targets detected by the invention and by the existing mixture-of-Gaussians MOG method in the foreground-to-background case;
Fig. 4 compares the foreground targets detected by the invention and by the existing mixture-of-Gaussians MOG method in the background-to-foreground case.
Embodiment
Embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation and concrete operating process are given, but the scope of protection of the invention is not limited to the following embodiment.
With reference to Fig. 1, the concrete steps of the invention are as follows:
Step 1: use video decoding to obtain the motion vector information of each frame.
At the decoding end, a Joint Video Team (JVT) decoder is used to decode the stream and obtain the motion vectors of all 4 × 4 blocks in each frame.
Step 2: define the target states.
Define three target states, moving, quasi-stationary and stationary, where the moving state means that the foreground target has displacement between two consecutive frames; the stationary state means that the target produces no displacement between two consecutive frames; and the quasi-stationary state represents the transition between the moving state and the stationary state, preventing a moving foreground target from being misjudged as a static one.
Step 3: when the video sequence begins, adopt the mixture-of-Gaussians model as the background model, and initialize the background model with the K-means algorithm.
3.1) Establish 10 classes at each pixel; compute the distance between each pixel's gray value and the classes; add 1 to the pixel count of the nearest class and update that class's mean at the same time;
3.2) use K Gaussian models to model the value of each pixel: assign the mean of the class containing the most pixels to the first Gaussian model and set the means of the remaining K-1 models to 0; then assign a weight of 0.8 to the first model and a weight of 0.2/(K-1) to each of the remaining K-1 models, where K is the number of Gaussian models.
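Steps 3.1) and 3.2) can be sketched as below. This is a non-authoritative illustration which assumes the 10 classes are accumulated over a stack of initial gray-level frames with class centers seeded uniformly over [0, 255]; the name `init_background_model` and the single-pass clustering loop are assumptions made here for illustration:

```python
import numpy as np

def init_background_model(frames, K=5, n_classes=10):
    """Steps 3.1-3.2 sketch: cluster each pixel's gray values over a
    stack of initial frames into n_classes classes, then seed a
    K-Gaussian mixture from the dominant class."""
    F, H, W = frames.shape
    # 3.1) class centers per pixel, seeded uniformly over [0, 255]
    centers = np.tile(np.linspace(0.0, 255.0, n_classes), (H, W, 1))
    counts = np.zeros((H, W, n_classes))
    sums = np.zeros((H, W, n_classes))
    for f in range(F):
        g = frames[f][..., None]                      # (H, W, 1)
        nearest = np.abs(g - centers).argmin(axis=2)  # nearest class index
        one_hot = np.eye(n_classes, dtype=bool)[nearest]
        counts += one_hot                             # class pixel count + 1
        sums += one_hot * g
        centers = np.where(counts > 0, sums / np.maximum(counts, 1), centers)
    # 3.2) dominant class mean -> first Gaussian, remaining means zero
    dominant = counts.argmax(axis=2)[..., None]
    means = np.zeros((H, W, K))
    means[..., 0] = np.take_along_axis(centers, dominant, axis=2)[..., 0]
    weights = np.full((H, W, K), 0.2 / (K - 1))       # remaining K-1 models
    weights[..., 0] = 0.8                             # first model
    return means, weights
```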
Step 4: read the motion vector information of the next frame and process the motion vectors.
4.1) Determine the rectangular region containing the moving target:
4.1.1) record the coordinates of all pixels in the moving target; choose the pixels with the minimum and maximum abscissa and the pixels with the minimum and maximum ordinate; determine a rectangular region from these four pixels such that the four pixels lie on its boundary;
4.1.2) compute the displacement of the rectangular region:
Let a be the frame number of the first frame entering the quasi-stationary state; the four boundaries of the rectangular region are then the left boundary l_a, the right boundary r_a, the lower boundary d_a and the upper boundary u_a. The target is kept in the quasi-stationary state for 20 frames; after those 20 frames the four boundaries become l_{a+20}, r_{a+20}, d_{a+20} and u_{a+20}. The horizontal displacement of the region is then the mean of |l_{a+20} - l_a| and |r_{a+20} - r_a|, and the vertical displacement is the mean of |d_{a+20} - d_a| and |u_{a+20} - u_a|.
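Steps 4.1.1) and 4.1.2) amount to a bounding box over the foreground pixels and a boundary-averaged displacement. A minimal sketch, assuming a boolean foreground mask as input (the helper names are chosen here for illustration):

```python
import numpy as np

def bounding_box(fg_mask):
    """4.1.1) Rectangular region from the extreme foreground pixels;
    returns (l, r, d, u) = (min x, max x, min y, max y)."""
    ys, xs = np.nonzero(fg_mask)
    return int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())

def region_displacement(box_a, box_a20):
    """4.1.2) Horizontal/vertical displacement between the region at
    the first quasi-stationary frame a and the region 20 frames later,
    averaged over the two opposite boundaries."""
    l0, r0, d0, u0 = box_a
    l1, r1, d1, u1 = box_a20
    dx = (abs(l1 - l0) + abs(r1 - r0)) / 2.0
    dy = (abs(d1 - d0) + abs(u1 - u0)) / 2.0
    return dx, dy
```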
4.2) Obtain the number of 4 × 4 blocks with non-zero motion vectors in the rectangular region:
4.2.1) from the horizontal component x_{i,m,n} and the vertical component y_{i,m,n} of the motion vector of the 4 × 4 block at position (m, n) in frame i, determine the value v_{i,m,n} of the current block:
if x_{i,m,n} = 0 and y_{i,m,n} = 0, set v_{i,m,n} = 0; otherwise set v_{i,m,n} = 1;
4.2.2) accumulate the values v_{i,m,n} of all 4 × 4 blocks in the rectangular region to obtain the number of 4 × 4 blocks with non-zero motion vectors contained in the region: Z_i = Σ v_{i,m,n}.
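Step 4.2) can be sketched as follows, assuming the per-block motion-vector components are available as two arrays in block coordinates (the function name and the box convention are illustrative only):

```python
import numpy as np

def nonzero_mv_blocks(mv_x, mv_y, box):
    """4.2) Count blocks with a non-zero motion vector inside the
    rectangular region; mv_x/mv_y are per-block MV components and
    box = (l, r, d, u) in block coordinates."""
    l, r, d, u = box
    # 4.2.1) v[m, n] = 0 when both components are zero, else 1
    v = ((mv_x != 0) | (mv_y != 0)).astype(int)
    # 4.2.2) Z_i = sum of v over the region
    return int(v[d:u + 1, l:r + 1].sum())
```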
Step 5: judge the target state.
5.1) Judging that the target enters the quasi-stationary state from the moving state.
When the target is in the moving state, judge from the number of non-zero motion-vector blocks in the rectangular region whether the target enters the quasi-stationary state. If the number of non-zero motion-vector blocks is smaller than the given threshold T for three consecutive frames, i.e. the following are satisfied:
0 < Z_{i-2} < T
0 < Z_{i-1} < T
0 < Z_i < T,
the target is judged to enter the quasi-stationary state; otherwise the target is judged still to be in the moving state, and Step 8 is executed,
where Z_{i-1} is the number of 4 × 4 blocks with non-zero motion vectors contained in the rectangular region of frame i-1, and Z_{i-2} is the corresponding number for frame i-2.
5.2) Judging that the target enters the stationary state from the quasi-stationary state.
Judge from the number of non-zero motion-vector blocks and the displacement of the rectangular region whether the target enters the stationary state. If within the following 20 frames there are no three consecutive frames in which the number of non-zero motion-vector blocks exceeds T, i.e. no three consecutive frames satisfy Z_{i-2} > T, Z_{i-1} > T and Z_i > T, and at the same time the displacement of the rectangular region is smaller than the given displacement threshold D, i.e. the following are satisfied:
|d_{a+20} - d_a| < D
|u_{a+20} - u_a| < D
|l_{a+20} - l_a| < D
|r_{a+20} - r_a| < D,
the target is judged to enter the stationary state and Step 6 is executed; otherwise the target is judged to return to the moving state and Step 8 is executed.
5.3) When the target is in the stationary state, judge from the number of non-zero motion-vector blocks in the rectangular region whether the target moves again. If the number of non-zero motion-vector blocks exceeds the given threshold T for three consecutive frames, i.e. the following are satisfied:
Z_{i-2} > T
Z_{i-1} > T
Z_i > T,
the target is judged to enter the moving state and Step 7 is executed; otherwise the target is judged still to be in the stationary state and Step 8 is executed, where T = 55 and D = 6.
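The three-way decision logic of Step 5 can be written as a small state machine. The sketch below uses the thresholds T = 55, D = 6 and the 20-frame window from the text, but the class layout, the way Z_i and the displacement are fed in, and all names are assumptions made here for illustration:

```python
MOVING, QUASI, STATIONARY = "moving", "quasi-stationary", "stationary"

class TargetStateMachine:
    """Sketch of Step 5: moving <-> quasi-stationary <-> stationary."""

    def __init__(self, T=55, D=6, window=20):
        self.T, self.D, self.window = T, D, window
        self.state = MOVING
        self.z_hist = []          # recent Z_i values
        self.quasi_age = 0        # frames spent in quasi-stationary

    def _last3_below(self):
        h = self.z_hist[-3:]
        return len(h) == 3 and all(0 < z < self.T for z in h)

    def _last3_above(self):
        h = self.z_hist[-3:]
        return len(h) == 3 and all(z > self.T for z in h)

    def step(self, z_i, displacement=None):
        """Feed Z_i (and, at the end of the window, the (dx, dy)
        region displacement) and return the updated state."""
        self.z_hist.append(z_i)
        if self.state == MOVING and self._last3_below():
            self.state, self.quasi_age = QUASI, 0        # 5.1
        elif self.state == QUASI:
            self.quasi_age += 1
            if self._last3_above():
                self.state = MOVING                      # 5.2: fell back
            elif self.quasi_age >= self.window and displacement is not None:
                if max(displacement) < self.D:
                    self.state = STATIONARY              # 5.2: settled
                else:
                    self.state = MOVING
        elif self.state == STATIONARY and self._last3_above():
            self.state = MOVING                          # 5.3: resumed
        return self.state
```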
Step 6: background-model update when the target enters the stationary state from the quasi-stationary state.
For the first frame entering the stationary state, let its frame number be e. The background models of all pixels inside the rectangular region are updated with the new method: the gray value f_{e,p,q} of the pixel at coordinate (p, q) in frame e is assigned to the mean μ_{e,p,q,1} of the first mixture-of-Gaussians model of the pixel at (p, q) in frame e, achieving a fast background-model update.
For all pixels outside the rectangular region, execute Step 8.
From the second frame of the stationary state onward, execute Step 8.
Step 7: background-model update when the target enters the moving state from the stationary state.
For the first frame after motion resumes, let its frame number be d. The background models of all pixels inside the rectangular region are updated with the new method: the mean μ_{c,p,q,1} of the first mixture-of-Gaussians model of the pixel at (p, q) in the last frame c of the previous moving state is taken as the mean μ_{d,p,q,1} of the first mixture-of-Gaussians model of the pixel at (p, q) in frame d, achieving a fast background-model update in the background-to-foreground case.
For all pixels outside the rectangular region, execute Step 8.
From the second frame after motion resumes onward, execute Step 8.
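The fast updates of Steps 6 and 7 are both direct assignments into the first-Gaussian means of the region. A minimal sketch, assuming the means are stored as an (H, W, K) array and the box is (l, r, d, u) in pixel coordinates (function names are illustrative):

```python
import numpy as np

def fast_update_on_stop(means, frame, box):
    """Step 6 sketch: when the target becomes stationary, absorb the
    region's current image into the first-Gaussian means."""
    l, r, d, u = box
    means[d:u + 1, l:r + 1, 0] = frame[d:u + 1, l:r + 1]
    return means

def fast_update_on_restart(means, means_before_stop, box):
    """Step 7 sketch: when the target moves again, restore the region's
    first-Gaussian means saved from the last frame of the previous
    moving state, recovering the true background."""
    l, r, d, u = box
    means[d:u + 1, l:r + 1, 0] = means_before_stop[d:u + 1, l:r + 1, 0]
    return means
```

The design point is that both transitions replace the model in a single assignment rather than waiting for the slow exponential MOG update to converge.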
Step 8: background-model update when neither the foreground-to-background nor the background-to-foreground case occurs and, when either case does occur, for the pixels outside the rectangular region.
8.1) Every L frames, use the mixture-of-Gaussians MOG method to update all pixels of the frame, i.e. execute step 8.3), where L = 10.
8.2) In the other frames, skip pixels belonging to blocks whose motion vector is zero; for pixels belonging to blocks with non-zero motion vectors, execute step 8.3).
That is, check the value v_{i,m,n} of the 4 × 4 block at position (m, n) in each frame: if v_{i,m,n} = 0, skip the update; otherwise execute step 8.3).
8.3) The MOG update:
8.3.1) match each pixel against its K Gaussian models; a pixel matches model k if it satisfies
μ_{i,p,q,k} - 2.5·Σ_{i,p,q,k} ≤ X_{i,p,q} ≤ μ_{i,p,q,k} + 2.5·Σ_{i,p,q,k},
where X_{i,p,q} is the value of the pixel at coordinate (p, q) in frame i, μ_{i,p,q,k} is the mean of the k-th mixture-of-Gaussians model in frame i, and Σ_{i,p,q,k} is the covariance of the k-th mixture-of-Gaussians model in frame i.
8.3.2) For a matched pixel, update the weight w_{i,p,q,k}, the mean μ_{i,p,q,k} and the standard deviation σ_{i,p,q,k} of the matched model as follows:
w_{i,p,q,k} = (1 - α)·w_{i-1,p,q,k} + α,
μ_{i,p,q,k} = (1 - ρ)·μ_{i-1,p,q,k} + ρ·X_{i,p,q},
σ²_{i,p,q,k} = (1 - ρ)·σ²_{i-1,p,q,k} + ρ·(X_{i,p,q} - μ_{i,p,q,k})ᵀ(X_{i,p,q} - μ_{i,p,q,k}),
where w_{i-1,p,q,k} and w_{i,p,q,k} are the weights of the k-th mixture-of-Gaussians model of the pixel at (p, q) before and after the update, μ_{i-1,p,q,k} and μ_{i,p,q,k} are its means before and after the update, σ_{i-1,p,q,k} and σ_{i,p,q,k} are its standard deviations before and after the update, α is a learning rate with value 0.02, and ρ is another learning rate with value 0.005.
8.3.3) For pixels that match no model, keep the means μ_{i,p,q,k} and the standard deviations σ_{i,p,q,k} unchanged and update the weights as
w_{i,p,q,k} = (1 - α)·w_{i-1,p,q,k}.
Through the updates of the weights, means and standard deviations of the pixels' background models, the characteristic information of the background model is obtained, thereby achieving the purpose of target detection.
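The per-pixel MOG update of step 8.3) can be sketched for a scalar gray value as follows, using the matching test of 8.3.1) and the update formulas of 8.3.2) and 8.3.3) with α = 0.02 and ρ = 0.005; the function name and the convention of matching only the first satisfying model are assumptions made here for illustration:

```python
import math

def mog_update_pixel(x, weights, means, sigmas, alpha=0.02, rho=0.005):
    """Step 8.3 sketch: one online MOG update for a single gray
    pixel value x against its K Gaussians."""
    K = len(weights)
    matched = None
    for k in range(K):                       # 8.3.1) matching test
        if abs(x - means[k]) <= 2.5 * sigmas[k]:
            matched = k
            break
    for k in range(K):
        if k == matched:                     # 8.3.2) matched model
            weights[k] = (1 - alpha) * weights[k] + alpha
            means[k] = (1 - rho) * means[k] + rho * x
            var = (1 - rho) * sigmas[k] ** 2 + rho * (x - means[k]) ** 2
            sigmas[k] = math.sqrt(var)
        else:                                # 8.3.3) unmatched models
            weights[k] = (1 - alpha) * weights[k]
    return weights, means, sigmas
```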
The above steps describe a preferred embodiment of the invention. Researchers in this field can obviously make various modifications and substitutions to the invention with reference to the preferred embodiment and the accompanying drawings, and such modifications and substitutions shall all fall within the scope of protection of the invention.
The effect of the invention can be further illustrated by the following experiments:
1) Experimental conditions
Experimental software: Matlab, version 7.0.0.19920 (R14);
Experimental data: one indoor video sequence was captured, in which a person enters an indoor scene, moves, then becomes still, and then moves again. The video has 490 frames in total; in frames 1 to 100 there is only the scene, with no target; the target enters the scene at frame 101 and has completely left the scene by frame 430.
Resolution: 176 × 144;
2) Experimental content and results
Experiment 1: while the target is moving, foreground targets are detected with the invention and with the existing mixture-of-Gaussians MOG method; the results are shown in Fig. 2, where Fig. 2(a) is the foreground detected by the existing method at frame 50, Fig. 2(b) the foreground detected by the invention at frame 50, Fig. 2(c) the foreground detected by the existing method at frame 110, Fig. 2(d) the foreground detected by the invention at frame 110, Fig. 2(e) the foreground detected by the existing method at frame 170, and Fig. 2(f) the foreground detected by the invention at frame 170.
Experiment 2: in the foreground-to-background case, foreground targets are detected with the invention and with the existing mixture-of-Gaussians MOG method; the results are shown in Fig. 3, where Fig. 3(a) is the foreground detected by the existing method at frame 200, Fig. 3(b) the background detected by the existing method at frame 200, Fig. 3(c) the foreground detected by the invention at frame 200, and Fig. 3(d) the background detected by the invention at frame 200.
Experiment 3: in the background-to-foreground case, foreground targets are detected with the invention and with the existing mixture-of-Gaussians MOG method; the results are shown in Fig. 4, where Fig. 4(a) is the foreground detected by the existing method at frame 368, Fig. 4(b) the background detected by the existing method at frame 368, Fig. 4(c) the foreground detected by the invention at frame 368, and Fig. 4(d) the background detected by the invention at frame 368.
From Fig. 2(a) and Fig. 2(b) it can be seen that the target detected by the invention at frame 50 is essentially identical to that detected by the existing mixture-of-Gaussians MOG method; from Fig. 2(c) and Fig. 2(d), the same holds at frame 110; and from Fig. 2(e) and Fig. 2(f), the same holds at frame 170. This proves that the invention has essentially no adverse effect on detection accuracy.
From Fig. 3(c) and Fig. 3(d) it can be seen that the invention obtains the correct background image at frame 200, while Fig. 3(a) and Fig. 3(b) show that the existing mixture-of-Gaussians MOG method still cannot detect the background correctly at frame 200. This proves that the invention is effective.
From Fig. 4(c) and Fig. 4(d) it can be seen that the invention obtains the correct foreground and background images at frame 368, while Fig. 4(a) and Fig. 4(b) show that the existing mixture-of-Gaussians MOG method cannot correctly detect the foreground and background at frame 368. The above experiments show that although the invention cannot detect the correct foreground and background at the very instant the target starts moving, it does so within a very short time, which again demonstrates that the invention is effective.
The processing time of the video in Experiment 1 was measured; the results are given in Table 1:
Table 1: time taken by the invention and the existing mixture-of-Gaussians MOG method to detect the target
It can be seen from Table 1 that, compared with the existing mixture-of-Gaussians MOG method, the time the invention spends on target detection is reduced by two thirds, achieving the goal of increasing target detection speed.

Claims (4)

1. A target detection method based on motion vectors, comprising the steps of:
(1) utilizes video decode, obtain each frame motion vector information;
(2) set motion, accurate static and static these 3 dbjective states, wherein motion state representes that foreground target has displacement between the two continuous frames; Stationary state representes that target does not produce displacement between the two continuous frames; Accurate stationary state is represented the transition between motion state and the stationary state, prevents the foreground target of motion is judged to static target by error;
Adopt mixed Gauss model model as a setting when (3) video sequence begins, and utilize the K-means algorithm to carry out the initialization of background model;
(4) read the motion vector information of next frame, motion vector is handled;
4.1) write down the coordinate of pixel in all foreground targets; And choose minimum with the maximum pixel of horizontal ordinate wherein; And minimum with the maximum pixel of ordinate; Confirm a square region with these four pixels, make these four pixels on the border of this square region, obtain the displacement of square region through the variation of these four pixel coordinates;
4.2) with 4 * 4 V all in the square region I, m, nValue adds up, the number that the non-zero motion vectors that obtains comprising in this square region is 4 * 4;
V wherein I, m, nRepresent that the i framing bit is changed to that (whether m, 4 * 4 motion vector n) are zero, and 1≤m≤M wherein, 1≤n≤N, M and N are respectively the number of 4 * 4 in figure image width and senior middle school; When this block motion vector is zero, V I, m, n=0, when this block motion vector is non-vanishing, V I, m, n=1;
(5) judge the target state:
5.1) when the target is in the motion state, first judge from the number of nonzero-motion-vector blocks in the rectangular region whether the target enters the quasi-stationary state: if that number is less than a given threshold T for three consecutive frames, the target is judged to enter the quasi-stationary state; otherwise the target is still in the motion state, and step (8) is executed; then judge from both the number of nonzero-motion-vector blocks and the displacement of the rectangular region whether the target enters the stationary state from the quasi-stationary state: if, within the following 20 frames, there is no run of three consecutive frames in which that number exceeds T, and at the same time the displacement of the rectangular region is less than a given displacement threshold D, the target is judged to enter the stationary state and step (6) is executed; otherwise the target is judged to have returned to the motion state and step (8) is executed, where T = 55 and D = 6;
5.2) when the target is in the stationary state, judge from the number of nonzero-motion-vector blocks in the rectangular region whether the target has started moving again: if that number exceeds the given threshold T for three consecutive frames, the target is judged to enter the motion state and step (7) is executed; otherwise the target is still in the stationary state, and step (8) is executed;
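The three-way state machine of step (5) can be sketched as below; the state names, argument names, and the way the quasi-stationary dwell time is passed in are illustrative, not from the patent text:

```python
T = 55   # nonzero-block count threshold from step 5.1)
D = 6    # displacement threshold in pixels from step 5.1)

MOVING, QUASI_STATIC, STATIC = "moving", "quasi-static", "static"

def update_state(state, counts, displacement, frames_quasi):
    """One-frame sketch of the step (5) decisions.

    counts: nonzero-block counts of recent frames (last three are used);
    displacement: movement of the rectangular region;
    frames_quasi: how many frames the target has been quasi-static.
    """
    if state == MOVING:
        if all(c < T for c in counts[-3:]):
            return QUASI_STATIC
        return MOVING
    if state == QUASI_STATIC:
        if all(c > T for c in counts[-3:]):
            return MOVING          # bounced back into motion, step (8)
        if frames_quasi >= 20 and displacement < D:
            return STATIC          # settled: fast update of step (6)
        return QUASI_STATIC
    # state == STATIC
    if all(c > T for c in counts[-3:]):
        return MOVING              # moved again: fast update of step (7)
    return STATIC
```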
(6) for the first frame entering the stationary state, the background models of all pixels inside the rectangular region are updated with the new update method, namely the image of the rectangular region in this frame is used directly as the background, achieving a fast background-model update; for all pixels outside this region, step (8) is executed; from the second frame of the stationary state onward, step (8) is executed;
(7) for the first frame after the target starts moving again, the background models of all pixels inside the rectangular region are updated with the new update method, namely the background of the rectangular region where the target is located in the last frame of the preceding motion state is used directly as the background of that region in the current frame, achieving a fast background-model update and thereby an accurate detection of the target; for all pixels outside the rectangular region, step (8) is executed; from the second frame after motion resumes onward, step (8) is executed;
(8) every L frames, all pixels of the whole frame are updated; in the remaining frames, pixels of blocks whose motion vectors are zero are skipped without updating, while pixels of blocks whose motion vectors are nonzero are updated with the mixture-of-Gaussians (MOG) update method.
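The update schedule of step (8) can be sketched as below; the fast updates of steps (6) and (7) then reduce to a direct region copy, background[region] = source[region]. The period L = 30 and all names are assumed for illustration (the claim leaves L unspecified):

```python
import numpy as np

def update_backgrounds(frame_idx, frame, V_blocks, background,
                       block_update, L=30):
    """Sketch of the step (8) schedule.

    Every L frames the whole frame goes through the MOG update; in the
    other frames, 4x4 blocks with zero motion vectors are skipped and
    only blocks with nonzero motion vectors are updated.
    V_blocks: (R, C) 0/1 array of per-block nonzero-motion-vector flags.
    block_update: callable applying the MOG update in place.
    """
    R, C = V_blocks.shape
    if frame_idx % L == 0:
        block_update(background, frame)          # full-frame update
        return
    for r in range(R):
        for c in range(C):
            if V_blocks[r, c]:                   # nonzero motion vector
                sl = (slice(4 * r, 4 * r + 4), slice(4 * c, 4 * c + 4))
                block_update(background[sl], frame[sl])
            # zero-MV blocks: skipped, background left untouched
```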
2. The motion-vector-based target detection method according to claim 1, wherein obtaining the motion vector information of each frame by video decoding as described in step (1) means decoding at the decoder side with a Joint Video Team (JVT) decoder, obtaining the motion vectors of all 4×4 blocks in each frame image.
3. The motion-vector-based target detection method according to claim 1, wherein adopting the mixture-of-Gaussians model as the background model at the start of the video sequence and initializing the background model with the K-means algorithm, as described in step (3), is carried out as follows:
3a) establish 10 classes at each pixel; compute the distance between each pixel's gray value and the classes; increment by 1 the pixel count of the nearest class, and update that class's mean at the same time;
3b) model the value of each pixel with K Gaussian models: assign the mean of the class containing the most pixels to the first Gaussian model, and set the means of the remaining K−1 models to 0; then assign a weight of 0.8 to the first model and a weight of 0.2/(K−1) to each of the remaining K−1 models, where K denotes the number of Gaussian models.
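A per-pixel sketch of the claim 3 initialization; K = 5, the seeding of the 10 class means, and the variable names are illustrative choices, not fixed by the patent:

```python
import numpy as np

def init_pixel_model(gray_history, K=5, n_classes=10):
    """K-means-style background initialization for one pixel.

    gray_history: 1-D array of the gray values observed at this pixel
    over the initial frames.
    """
    # 3a) 10 classes: assign each sample to the nearest class mean,
    # keeping a running count and running mean per class.
    means = np.linspace(gray_history.min(), gray_history.max(), n_classes)
    counts = np.zeros(n_classes)
    for g in gray_history:
        k = np.argmin(np.abs(means - g))
        counts[k] += 1
        means[k] += (g - means[k]) / counts[k]   # incremental mean
    # 3b) first Gaussian takes the mean of the most populated class
    # and weight 0.8; the other K-1 get mean 0 and weight 0.2/(K-1).
    mu = np.zeros(K)
    mu[0] = means[np.argmax(counts)]
    w = np.full(K, 0.2 / (K - 1))
    w[0] = 0.8
    return mu, w
```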
4. The motion-vector-based target detection method according to claim 1, wherein the mixture-of-Gaussians (MOG) update method described in step (8) is carried out as follows:
8a) perform a matching test between each pixel and the K Gaussian models; a pixel matches the k-th model if it satisfies:
μ i,p,q,k − 2.5·Σ i,p,q,k ≤ X i,p,q ≤ μ i,p,q,k + 2.5·Σ i,p,q,k
where X i,p,q denotes the value of the pixel at coordinates (p, q) in frame i, μ i,p,q,k denotes the mean of the k-th Gaussian model in frame i, and Σ i,p,q,k denotes the covariance matrix of the k-th Gaussian model in frame i;
8b) for a successfully matched pixel, update its weight w i,p,q,k, mean μ i,p,q,k, and standard deviation σ i,p,q,k as follows:
w i,p,q,k = (1 − α)·w i−1,p,q,k + α,
μ i,p,q,k = (1 − ρ)·μ i−1,p,q,k + ρ·X i,p,q,
σ² i,p,q,k = (1 − ρ)·σ² i−1,p,q,k + ρ·(X i,p,q − μ i,p,q,k)ᵀ(X i,p,q − μ i,p,q,k),
where w i−1,p,q,k and w i,p,q,k denote the weight of the k-th Gaussian model of the pixel at (p, q) before and after the update, μ i−1,p,q,k and μ i,p,q,k the corresponding mean, and σ i−1,p,q,k and σ i,p,q,k the corresponding standard deviation; α is a learning rate with value 0.02, and ρ is another learning rate with value 0.005;
8c) for pixels that do not match, keep their means μ i,p,q,k and standard deviations σ i,p,q,k unchanged, and update the weights by:
w i,p,q,k = (1 − α)·w i−1,p,q,k.
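A sketch of the claim 4 update for one gray-scale pixel, writing the covariance test in its scalar form |x − μ| ≤ 2.5σ; the function and variable names are illustrative:

```python
import numpy as np

ALPHA, RHO = 0.02, 0.005   # learning rates alpha and rho from the claim

def mog_update_pixel(x, mu, sigma, w):
    """Update K Gaussians for one pixel value x (arrays have length K).

    Matched models get the full weight/mean/variance update of 8b);
    unmatched models keep mean and std and only decay their weight, 8c).
    """
    matched = np.abs(x - mu) <= 2.5 * sigma            # 8a) match test
    w_new = (1 - ALPHA) * w                            # 8c) weight decay
    w_new[matched] += ALPHA                            # 8b) matched weight
    mu_new = np.where(matched, (1 - RHO) * mu + RHO * x, mu)
    var_new = np.where(matched,
                       (1 - RHO) * sigma ** 2 + RHO * (x - mu_new) ** 2,
                       sigma ** 2)
    return mu_new, np.sqrt(var_new), w_new
```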
CN201210218615.2A 2012-06-28 2012-06-28 Target detection method based on motion vector Expired - Fee Related CN102800105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210218615.2A CN102800105B (en) 2012-06-28 2012-06-28 Target detection method based on motion vector


Publications (2)

Publication Number Publication Date
CN102800105A true CN102800105A (en) 2012-11-28
CN102800105B CN102800105B (en) 2014-09-17

Family

ID=47199202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210218615.2A Expired - Fee Related CN102800105B (en) 2012-06-28 2012-06-28 Target detection method based on motion vector

Country Status (1)

Country Link
CN (1) CN102800105B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673404A (en) * 2009-10-19 2010-03-17 北京中星微电子有限公司 Target detection method and device
US20110243451A1 (en) * 2010-03-30 2011-10-06 Hideki Oyaizu Image processing apparatus and method, and program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, YAN et al.: "An Adaptive Background Updating Model for Moving Object Detection", Journal of Computer-Aided Design & Computer Graphics, vol. 20, no. 10, 31 October 2008 (2008-10-31), pages 1316-1324 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658009A (en) * 2015-01-09 2015-05-27 北京环境特性研究所 Moving-target detection method based on video images
CN104966305A (en) * 2015-06-12 2015-10-07 上海交通大学 Foreground detection method based on motion vector division
CN104966305B (en) * 2015-06-12 2017-12-15 上海交通大学 Foreground detection method based on motion vector division
CN107808388A (en) * 2017-10-19 2018-03-16 中科创达软件股份有限公司 Image processing method, device and electronic equipment comprising moving target
CN107808388B (en) * 2017-10-19 2021-10-12 中科创达软件股份有限公司 Image processing method and device containing moving object and electronic equipment
CN110927726A (en) * 2019-11-14 2020-03-27 广东奥迪威传感科技股份有限公司 Approach detection method and module
WO2021120866A1 (en) * 2019-12-18 2021-06-24 深圳云天励飞技术股份有限公司 Target object tracking method and apparatus, and terminal device

Also Published As

Publication number Publication date
CN102800105B (en) 2014-09-17

Similar Documents

Publication Publication Date Title
CN101447082B (en) Detection method of moving target on a real-time basis
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN106778712B (en) Multi-target detection and tracking method
CN103886325B (en) Cyclic matrix video tracking method with partition
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN101882217B (en) Target classification method of video image and device
CN104318263A (en) Real-time high-precision people stream counting method
CN102800105B (en) Target detection method based on motion vector
CN103106659A (en) Open area target detection and tracking method based on binocular vision sparse point matching
CN101883209B (en) Method for integrating background model and three-frame difference to detect video background
CN104299243A (en) Target tracking method based on Hough forests
CN106204586A (en) A kind of based on the moving target detecting method under the complex scene followed the tracks of
CN100531405C (en) Target tracking method of sports video
CN102831580A (en) Method for restoring image shot by cell phone based on motion detection
CN103488993A (en) Crowd abnormal behavior identification method based on FAST
CN104616006A (en) Surveillance video oriented bearded face detection method
CN105512618A (en) Video tracking method
CN104751466A (en) Deform able object tracking algorithm based on visual salience and system thereof
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
CN108764338A (en) A kind of pedestrian tracking algorithm applied to video analysis
CN105321188A (en) Foreground probability based target tracking method
CN104168444A (en) Target tracking method of tracking ball machine and tracking ball machine
CN104700384B (en) Display systems and methods of exhibiting based on augmented reality
CN102314591A (en) Method and equipment for detecting static foreground object

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140917

Termination date: 20190628

CF01 Termination of patent right due to non-payment of annual fee