CN103617632A - Moving target detection method with adjacent frame difference method and Gaussian mixture models combined - Google Patents

Moving target detection method with adjacent frame difference method and Gaussian mixture models combined

Info

Publication number
CN103617632A
Authority
CN
China
Prior art keywords
gauss model
value
frame
region
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310586151.5A
Other languages
Chinese (zh)
Other versions
CN103617632B (en)
Inventor
宦若虹
潘赟
於正强
王楚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Gaohang Intellectual Property Operation Co ltd
Zhejiang Haining Warp Knitting Industrial Park Development Co.,Ltd.
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201310586151.5A priority Critical patent/CN103617632B/en
Publication of CN103617632A publication Critical patent/CN103617632A/en
Application granted granted Critical
Publication of CN103617632B publication Critical patent/CN103617632B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a moving target detection method combining the adjacent frame difference method and Gaussian mixture models. The method comprises the following steps: (1) an image sequence is obtained, comprising a current-frame image and a previous-frame image; (2) the current-frame image is divided into a temporary moving area and a temporary background area using the improved adjacent frame difference method; (3) the two areas generated in step (2) are matched against the Gaussian mixture models and are subdivided into different areas according to the matching results; (4) the Gaussian mixture models in the different areas are updated differently; (5) the final moving target areas are determined from the areas generated in step (3). Gradient comparison and median filtering are added to the improved adjacent frame difference method, highlighting moving target boundaries and improving resistance to noise interference. Varying the update rate of the Gaussian mixture models improves their adaptability to conversions between background and foreground. The detection result of the method alleviates the hole problem produced by the adjacent frame difference method and eliminates the "shadow" problem produced when a background object suddenly becomes a moving object.

Description

Moving target detection method combining the adjacent frame difference method and a Gaussian mixture model
Technical field
The present invention relates to the field of video image processing, and in particular to a video moving object detection method.
Background art
Video surveillance is a highly effective means of protecting personal and public safety. It processes the video data captured by cameras so that the relevant personnel can monitor important places and regions in real time. Intelligent video surveillance generally comprises moving object detection, target tracking, target classification and recognition, behaviour analysis, and so on. Moving object detection is therefore the foundation of intelligent video surveillance, and the quality of the moving object detection result directly affects the subsequent processing and analysis.
General moving target detection techniques are based on detecting changes in image pixels. Common moving target detection methods include the optical flow method, the background subtraction method and the adjacent frame difference method. The optical flow method is very sensitive to noise and computationally complex, making real-time processing difficult. The background subtraction method is sensitive to changes in external conditions such as illumination and weather. Background subtraction based on a Gaussian mixture model establishes several Gaussian models for each pixel and adapts better to environmental change, but it converges slowly and, for objects that change suddenly, leaves "shadows" behind moving objects. The adjacent frame difference method detects targets from the difference between adjacent frames; the algorithm is simple and runs in real time, but it produces holes inside objects.
Summary of the invention
To overcome the shortcomings of the prior art, the present invention proposes a moving target detection method combining the adjacent frame difference method and a Gaussian mixture model. By combining the advantages of the two methods, it both alleviates the hole problem produced by the adjacent frame difference method and eliminates the "shadow" problem produced when a background object suddenly becomes a moving object, so that accurate, real-time moving object detection can be carried out in the monitored region.
A moving target detection method combining the adjacent frame difference method and a Gaussian mixture model comprises the following steps:
1) obtain an image sequence comprising the current frame and the previous frame image;
2) divide the current frame into a temporary moving region and a temporary background region using the improved adjacent frame difference method;
3) match the two regions produced in 2) against the Gaussian mixture model separately and subdivide them into different regions according to the matching results;
4) update the Gaussian models in the different regions differently;
5) determine the final moving target region from the regions produced in 3);
Step 2) comprises:
a) Subtract the previous frame from the current frame and take the absolute value. Let f_k(x, y) and f_{k+1}(x, y) denote the pixel values at coordinate (x, y) in the previous frame and the current frame respectively, and compute their difference d_k(x, y):
d_k(x, y) = |f_{k+1}(x, y) - f_k(x, y)|
b) Compute the gradient value Grad_k(x, y) of each pixel of the current frame within an M × M window; for M = 5:
Grad_k(x, y) = Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i, y+j+1) - f_k(x+i, y+j)| + Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i+1, y+j) - f_k(x+i, y+j)|
c) Compare the gradients of the pixels at the same coordinate in the two adjacent frames; the larger the gradient difference, the larger the weight coefficient applied when computing the frame difference. Let r denote the gradient factor and Fd_k(x, y) the frame difference with the gradient factor introduced, and compute Fd_k(x, y):
Fd_k(x, y) = d_k(x, y) × r
The value of r is related to the gradient difference of the pixels at the same coordinate in the two adjacent frames; the larger the gradient difference, the larger the value of r;
d) Sort Fd_k(x, y) together with the frame difference data of the previous J frames and take the median as the new frame difference Fd_k(x, y);
e) Compare the new frame difference Fd_k(x, y) with a threshold T; if it is greater than the threshold, mark the point as 1, otherwise mark it as 0;
f) Perform connected component processing on the thresholded binary image; after connected component processing, the image is divided into a temporary moving region a_fg and a temporary background region a_bg, where a_fg contains the moving objects computed by the adjacent frame difference method together with the hole regions inside them; a minimal sketch of steps a) and e) follows below;
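As a minimal illustration, assuming 8-bit greyscale frames and an arbitrary example threshold (neither the data type nor the threshold value is fixed by the invention, and the function name is chosen only for this sketch), steps a) and e) can be written as:

    import numpy as np

    def temporary_motion_mask(frame_prev, frame_cur, threshold=25):
        # Step a): absolute difference between the current and previous frame.
        d = np.abs(frame_cur.astype(np.int16) - frame_prev.astype(np.int16))
        # Step e): thresholding to a binary mask (1 = candidate moving pixel).
        # The gradient factor, median filtering and connected component steps
        # are illustrated in the detailed description below.
        return (d > threshold).astype(np.uint8)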
Step 3) comprises:
g) The Gaussian mixture model contains N Gaussian models; for the i-th Gaussian model, its parameters at time t comprise the mean μ_{i,t}, the variance σ²_{i,t} and the weight w_{i,t}; the parameters are initialized at the first frame image;
h) For a pixel with value X_t, the condition for matching a Gaussian model is:
|X_t - μ_{i,t-1}| < 2.5 σ_{i,t-1}
If this condition is satisfied, the Gaussian model matches the pixel; otherwise it does not match. The Gaussian models are sorted in descending order of the ratio of weight to standard deviation, w_{i,t}/σ_{i,t}, and the first B models in the ranking are taken as the background. The pixels of the a_fg and a_bg regions are matched against the Gaussian mixture model separately, and the image is divided into 4 regions according to the matching results: 1. the part of a_fg that matches, a_fgm; 2. the part of a_fg that does not match, a_fgu; 3. the part of a_bg that matches, a_bgm; 4. the part of a_bg that does not match, a_bgu;
Step 4) comprises:
i) Update of the Gaussian models in region a_fgm:
w_{i,t} = (1 - α) w_{i,t-1} + α M_{i,t}
μ_{i,t} = (1 - ρ) μ_{i,t-1} + M_{i,t} ρ X_t + ρ μ_{i,t-1} (1 - M_{i,t})
σ²_{i,t} = (1 - ρ) σ²_{i,t-1} + M_{i,t} ρ (X_t - μ_{i,t})^T (X_t - μ_{i,t}) + (1 - M_{i,t}) ρ σ²_{i,t-1}
where α is the parameter update rate, here taking a relatively large value (for example 0.1); w_{i,t}, μ_{i,t} and σ²_{i,t} are respectively the weight, mean and variance of the i-th Gaussian model at time t; ρ is the update rate of the mean and variance; and M_{i,t} indicates whether the i-th Gaussian model matches the pixel at time t, taking the value 1 if it matches and 0 otherwise;
j) The Gaussian models in region a_fgu are not updated, and no new Gaussian model is established;
k) The Gaussian models in region a_bgm are updated as in i), except that the parameter update rate α takes a more moderate value (for example 0.01);
l) In region a_bgu, all Gaussian models except the one with the smallest weight are updated according to the formulas in i), with the parameter update rate α around 0.01; a new Gaussian model is built to replace the model with the smallest weight, and its parameters are set as follows: the mean μ_t takes the current pixel value X_t, the variance σ_t takes a relatively small value, and the weight w_t takes a relatively large value so that, after normalization, the new model ranks high;
Step 5) comprises:
m) Combine the moving pixels computed in f) with the region a_fgu, and determine the final moving target region after labelling connected components again.
The advantages of the present invention are as follows. Gradient comparison and median filtering are added to the adjacent frame difference method, highlighting moving object boundaries and improving resistance to noise interference. By combining the detection results of the adjacent frame difference method and the Gaussian mixture model, the image is divided into 4 regions, and a different parameter update rate is used for the Gaussian mixture model in each region. Compared with the ordinary Gaussian mixture model, this reduces the interference of moving targets with the background model and also adapts well when objects change between moving and stationary. The final moving object decision likewise combines the detection results of the two methods, which both alleviates the hole problem produced by the adjacent frame difference method and eliminates the "shadow" problem produced when a background object suddenly becomes a moving object.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the method of the invention
Fig. 2 shows overlapping labelled rectangular boxes
Fig. 3 shows the moving object detection results on the test videos
Detailed description of the embodiments
The moving target detection method combining the adjacent frame difference method and the Gaussian mixture model proposed by the present invention has the implementation flow shown in Fig. 1. From the two adjacent frames obtained, the frame difference and the gradient difference are computed, and the gradient difference result is incorporated into the frame difference; the frame difference of the current frame is sorted together with the frame differences of the previous 4 frames, the median is taken as the new frame difference, and the frame difference is converted into a binary value of 0 or 1 by threshold comparison. The binary image is processed by connected component analysis and divided into a foreground region and a background region; these two regions are matched against the Gaussian mixture model separately, and the image is subdivided into 4 regions according to the matching results; the Gaussian models are updated differently depending on the region; and the moving target region is obtained by combining the adjacent frame difference method and the Gaussian mixture model and performing connected component processing again.
The method of the invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is the implementation flowchart of the method of the invention. First, 2 adjacent frame images are obtained; "adjacent" here can mean either 2 consecutive frames or 2 frames separated by a certain number of frames. This is determined by the actual situation: if the video frame rate is high, or the monitoring requirements are not very demanding, a frame can be taken every few frames instead of using consecutive frames.
The previous frame is subtracted from the current frame and the absolute value is taken. Let f_k(x, y) and f_{k+1}(x, y) denote the pixel values at coordinate (x, y) in the previous frame and the current frame respectively; their difference d_k(x, y) is computed as:
d_k(x, y) = |f_{k+1}(x, y) - f_k(x, y)|
The gradient value Grad_k(x, y) of the pixel at coordinate (x, y) in frame k is computed within an M × M window; for M = 5:
Grad_k(x, y) = Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i, y+j+1) - f_k(x+i, y+j)| + Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i+1, y+j) - f_k(x+i, y+j)|
If the gradient difference of the pixels at the same coordinate in the two adjacent frames is larger, the pixel is more likely to be a moving point. Therefore, the larger the gradient difference, the larger the gradient weight r. One way of determining r is as follows:
r = 1,   if Grad_max ≤ (3/2) Grad_min
r = 3/2, if (3/2) Grad_min < Grad_max ≤ (5/2) Grad_min
r = 2,   if Grad_max > (5/2) Grad_min
where Grad_min is the minimum of Grad_k and Grad_{k-1}, and Grad_max is the maximum of Grad_k and Grad_{k-1}.
Let Fd_k(x, y) denote the frame difference with the gradient factor introduced; Fd_k(x, y) is computed as:
Fd_k(x, y) = d_k(x, y) × r
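As an illustration, the following Python/NumPy sketch computes the 5 × 5 window gradient, the gradient factor r and the gradient-weighted frame difference for a pair of greyscale frames. The function names and the handling of image borders (the window is simply truncated) are choices of this example and are not specified by the invention.

    import numpy as np

    def gradient_map(frame, half=2):
        # Grad_k(x, y): sum of the absolute horizontal and vertical neighbour
        # differences inside a (2*half+1) x (2*half+1) window centred on (x, y).
        f = frame.astype(np.float64)
        gy = np.abs(np.diff(f, axis=0))   # |f(x+i+1, y+j) - f(x+i, y+j)|
        gx = np.abs(np.diff(f, axis=1))   # |f(x+i, y+j+1) - f(x+i, y+j)|
        h, w = f.shape
        grad = np.zeros_like(f)
        for x in range(h):
            for y in range(w):
                xs = slice(max(0, x - half), x + half + 1)
                ys = slice(max(0, y - half), y + half + 1)
                grad[x, y] = gx[xs, ys].sum() + gy[xs, ys].sum()
        return grad

    def gradient_factor(grad_prev, grad_cur):
        # Piecewise weight r from the ratio of the larger to the smaller gradient.
        g_min = np.minimum(grad_prev, grad_cur)
        g_max = np.maximum(grad_prev, grad_cur)
        r = np.ones_like(g_max)
        r[g_max > 1.5 * g_min] = 1.5
        r[g_max > 2.5 * g_min] = 2.0
        return r

    def weighted_frame_difference(frame_prev, frame_cur):
        # Fd_k = |f_{k+1} - f_k| * r
        d = np.abs(frame_cur.astype(np.float64) - frame_prev.astype(np.float64))
        r = gradient_factor(gradient_map(frame_prev), gradient_map(frame_cur))
        return d * r

The explicit per-pixel loop is kept for clarity; in practice the window sums could be vectorized with a box filter.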
Introducing the gradient difference highlights moving object boundaries and internal edges and emphasizes moving pixels, which is convenient for the subsequent connected component processing. However, because the captured image contains random noise, the gradient factor also amplifies the effect of noise. To suppress the noise, the frame difference is also passed through a median filter: the current frame difference Fd_k(x, y) is sorted together with the frame difference data of the previous J frames, and the median is output as the new frame difference Fd_k(x, y). The value of J is chosen case by case, for example 5.
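A minimal sketch of this temporal median step, assuming the weighted frame differences are kept in a fixed-length buffer holding the current difference plus the previous J differences (the class name is illustrative):

    from collections import deque
    import numpy as np

    class TemporalMedian:
        # Keeps the current frame difference plus the previous J frame differences
        # and returns the pixel-wise median of the buffer.
        def __init__(self, J=5):
            self.buffer = deque(maxlen=J + 1)

        def filter(self, fd):
            self.buffer.append(fd)
            return np.median(np.stack(list(self.buffer)), axis=0)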
The result after thresholding is still rough; to obtain a more accurate contour and size of the moving object, the thresholded image data must also be labelled into connected regions. 8-neighbourhood connected region extraction is used: if the current point is 1, its value is changed to 0 and its 8 neighbouring pixels are searched; any neighbouring point whose value is 1 has its value changed to 0 and is recorded, and the search then continues from each of these recorded points. This process is repeated until all points are 0. Each labelled connected component is represented by a rectangular box, and the pixels inside the box are those whose frame difference exceeds the threshold. Although the connected components themselves do not overlap, the labelled rectangular boxes may overlap, as shown in Fig. 2, so overlapping connected components must be merged to obtain a more accurate object shape. After connected component processing, the image is divided into a temporary moving region a_fg and a temporary background region a_bg, where a_fg contains the moving objects computed by the adjacent frame difference method together with the hole regions inside them.
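A sketch of this labelling and merging step; for brevity it uses scipy's 8-connected labelling instead of the explicit neighbourhood search described above, and represents each component by its bounding rectangle (function names are illustrative):

    import numpy as np
    from scipy import ndimage

    def boxes_overlap(a, b):
        # Boxes are (row0, col0, row1, col1) with inclusive corners.
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def merge_boxes(a, b):
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def moving_regions(binary_mask):
        # 8-connected components of the thresholded frame difference.
        labels, n = ndimage.label(binary_mask, structure=np.ones((3, 3)))
        boxes = [(sl[0].start, sl[1].start, sl[0].stop - 1, sl[1].stop - 1)
                 for sl in ndimage.find_objects(labels)]
        # Merge overlapping bounding boxes until no pair overlaps.
        merged = True
        while merged:
            merged = False
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    if boxes_overlap(boxes[i], boxes[j]):
                        boxes[i] = merge_boxes(boxes[i], boxes[j])
                        del boxes[j]
                        merged = True
                        break
                if merged:
                    break
        return boxes  # rectangles forming the temporary moving region a_fg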
The temporary moving region a_fg and the temporary background region a_bg are each matched against the Gaussian mixture model and subdivided into different regions according to the matching results. The number of models N in the Gaussian mixture model is generally 5 to 7 and can be adjusted according to the memory capacity and computing power of the computer. For the i-th Gaussian model, its parameters at time t comprise the mean μ_{i,t}, the variance σ²_{i,t} and the weight w_{i,t}; the parameters are initialized at the first frame image. When the Gaussian mixture model parameters are initialized, the mean takes the value X_1 of the corresponding pixel of the first frame image, and the variance takes as large a value as possible.
For a pixel with value X_t, the condition for matching a Gaussian model is:
|X_t - μ_{i,t-1}| < 2.5 σ_{i,t-1}
If this condition is satisfied, the Gaussian model matches the pixel; otherwise it does not match. The Gaussian models are sorted in descending order of the ratio of weight to standard deviation, w_{i,t}/σ_{i,t}, and the first B models in the ranking are taken as the background, where B is the smallest number of models whose weights w_{i,t} sum to more than T, with T between 0 and 1. If T is large, several Gaussian distributions describe the background, which handles repetitive disturbances such as swaying trees better, but moving objects may also be mistaken for background. If T is small, only one Gaussian distribution describes the background.
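As an illustration, a per-pixel sketch of the matching test and background-model selection for greyscale pixels follows. The class name, the equal initial weights and the example values of N, the initial variance and T are assumptions of this sketch, not values specified by the invention.

    import numpy as np

    class PixelGMM:
        # N Gaussian models for one pixel: means, variances and weights.
        def __init__(self, first_value, N=5, init_var=900.0):
            self.mu = np.full(N, float(first_value))  # means initialised to the first frame's pixel value
            self.var = np.full(N, init_var)           # variances initialised to a large value
            self.w = np.full(N, 1.0 / N)              # equal initial weights (assumption)

        def match(self, x):
            # |X_t - mu| < 2.5 * sigma; returns the index of the first matching model, or None.
            hit = np.abs(x - self.mu) < 2.5 * np.sqrt(self.var)
            idx = np.flatnonzero(hit)
            return int(idx[0]) if idx.size else None

        def background_models(self, T=0.7):
            # Rank models by w / sigma and keep the first B whose weights sum past T.
            order = np.argsort(-self.w / np.sqrt(self.var))
            csum = np.cumsum(self.w[order])
            B = int(np.argmax(csum > T)) + 1
            return order[:B]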
The pixels of the a_fg and a_bg regions are matched against the Gaussian mixture model separately, and the image is divided into 4 regions according to the matching results: 1. the part of a_fg that matches, a_fgm; 2. the part of a_fg that does not match, a_fgu; 3. the part of a_bg that matches, a_bgm; 4. the part of a_bg that does not match, a_bgu.
The Gaussian models are updated differently in the different regions:
Update of the Gaussian models in region a_fgm:
w_{i,t} = (1 - α) w_{i,t-1} + α M_{i,t}
μ_{i,t} = (1 - ρ) μ_{i,t-1} + M_{i,t} ρ X_t + ρ μ_{i,t-1} (1 - M_{i,t})
σ²_{i,t} = (1 - ρ) σ²_{i,t-1} + M_{i,t} ρ (X_t - μ_{i,t})^T (X_t - μ_{i,t}) + (1 - M_{i,t}) ρ σ²_{i,t-1}
where α is the parameter update rate, here taking a relatively large value (for example 0.1); w_{i,t}, μ_{i,t} and σ²_{i,t} are respectively the weight, mean and variance of the i-th Gaussian model at time t; ρ is the update rate of the mean and variance; and M_{i,t} indicates whether the i-th Gaussian model matches the pixel at time t, taking the value 1 if it matches and 0 otherwise.
The Gaussian models in region a_fgu are not updated, and no new Gaussian model is established.
The Gaussian models in region a_bgm are updated in the same way as in region a_fgm, except that the parameter update rate α takes a more moderate value (for example 0.01).
In region a_bgu, all Gaussian models except the one with the smallest weight are updated according to the formulas used for region a_fgm, with the parameter update rate α around 0.01; a new Gaussian model is built to replace the model with the smallest weight, and its parameters are set as follows: the mean μ_t takes the current pixel value X_t, the variance σ_t takes a relatively small value, and the weight w_t takes a relatively large value so that, after normalization, the new model ranks high.
Although the Gaussian mixture model can adapt to changes in the environmental background to some extent, its parameter update rate is fixed: if the update rate is too small, the background updates too slowly and responds slowly to sudden background changes, while if it is too large, the model loses its ability to describe a complex background. In this algorithm the parameter update rate α is therefore adjusted dynamically according to the result of the adjacent frame difference method: the Gaussian models of pixels in region a_fgm are given a larger update rate (0.1); in region a_fgu the results of the two methods agree, so no new Gaussian distribution is built, reducing the influence of moving targets on the background; in region a_bgm a more moderate update rate (0.01) is used, which suppresses noise while still updating the background appropriately; and pixels in region a_bgu need a new Gaussian model, which is assigned a higher weight w_t (for example 0.3) and a smaller variance σ_t.
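As an illustration, these region-dependent updates can be written per pixel as follows. The choice ρ = α, the variance assigned to the newly built model and the per-model match indicator used for M_{i,t} are assumptions of this sketch.

    import numpy as np

    def update_pixel_gmm(x, mu, var, w, region):
        # Region-dependent update of one pixel's Gaussian mixture (illustrative).
        # mu, var and w are 1-D arrays of length N; x is the current pixel value.
        if region == 'a_fgu':
            return mu, var, w                          # no update, no new model
        alpha = 0.1 if region == 'a_fgm' else 0.01     # larger update rate only in a_fgm
        rho = alpha                                    # assumption: rho taken equal to alpha
        M = (np.abs(x - mu) < 2.5 * np.sqrt(var)).astype(float)  # M_{i,t}: 1 if model i matches
        upd = np.ones_like(w, dtype=bool)
        if region == 'a_bgu':
            k = int(np.argmin(w))                      # replace the smallest-weight model
            mu[k], var[k], w[k] = float(x), 100.0, 0.3 # small variance (assumed value), weight 0.3
            upd[k] = False
        w[upd] = (1 - alpha) * w[upd] + alpha * M[upd]
        mu[upd] = (1 - rho) * mu[upd] + rho * (M[upd] * x + (1 - M[upd]) * mu[upd])
        var[upd] = (1 - rho) * var[upd] + rho * (M[upd] * (x - mu[upd]) ** 2 + (1 - M[upd]) * var[upd])
        w /= w.sum()                                   # normalise the weights
        return mu, var, w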
Finally, the moving pixels computed by the adjacent frame difference method and the region a_fgu are combined, and the final moving target region is determined after labelling connected components again.
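For completeness, a sketch of this final combination step, reading "combined" as the union of the two binary masks and relabelling with 8-connectivity as before:

    import numpy as np
    from scipy import ndimage

    def final_target_regions(frame_diff_mask, a_fgu_mask):
        # Union of the frame-difference moving pixels and the unmatched foreground
        # region a_fgu, relabelled with 8-connectivity to give the final targets.
        combined = np.logical_or(frame_diff_mask, a_fgu_mask)
        labels, num = ndimage.label(combined, structure=np.ones((3, 3)))
        return labels, ndimage.find_objects(labels)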
The advantage of the method of the invention is illustrated below with an example. Fig. 3 uses 5 test video sequences from the Wallflower dataset as the test source and shows the moving object detection result of one frame from each after processing with the method of the invention, compared with the ground-truth result. The ground truth is obtained manually and represents the moving objects more accurately. The 5 test videos represent different problem types and reflect the overall processing capability of the method. In the video Bootstrap, several objects are moving all the time. In the video Camouflage, a person walks in front of a computer and covers it, and the colour of the person's clothes is very close to the colour of the computer. In the video LightSwitch, the indoor brightness changes by a large factor, and a chair changes from stationary to moving. In the video MovedObject, a person enters the room, moves a phone and a chair, and then leaves; the moved objects should merge into the background. In the video TimeOfDay, the room brightness increases suddenly, and a person comes in and sits on the sofa. As can be seen from Fig. 3, the method of the invention alleviates the hole problem of the adjacent frame difference method well (see the detection result for the video Camouflage) and also handles the sudden conversion between background and moving regions well (see the detection results for the videos LightSwitch and MovedObject). Table 1 compares the number of detection errors of the method of the invention with other methods, where False neg is the number of points marked as moving in the ground truth but not in the detection result, and False pos is the number of points that are not moving in the ground truth but are marked as moving in the detection result. As can be seen from Table 1, the detection error rate of the method of the invention is lower than that of the other methods, and moving objects can be detected accurately.
Table 1 Comparison of the number of detection errors of the method of the invention and other methods
Obviously, many variations of the invention described here are possible without departing from its true spirit and scope. Therefore, all changes that are apparent to those skilled in the art are intended to be included within the scope of the claims. The scope of protection of the invention is limited only by the appended claims.

Claims (5)

1. A moving target detection method combining the adjacent frame difference method and a Gaussian mixture model, characterized by comprising the following steps:
1) obtaining an image sequence comprising the current frame and the previous frame image;
2) dividing the current frame into a temporary moving region and a temporary background region using the improved adjacent frame difference method;
3) matching the two regions produced in 2) against the Gaussian mixture model separately and subdividing them into different regions according to the matching results;
4) updating the Gaussian models in the different regions differently;
5) determining the final moving target region from the regions produced in 3).
2. The moving target detection method according to claim 1, characterized in that step 2) comprises:
a) subtracting the previous frame from the current frame and taking the absolute value: let f_k(x, y) and f_{k+1}(x, y) denote the pixel values at coordinate (x, y) in the previous frame and the current frame respectively, and compute their difference d_k(x, y):
d_k(x, y) = |f_{k+1}(x, y) - f_k(x, y)|
b) computing the gradient value Grad_k(x, y) of each pixel of the current frame within an M × M window; for M = 5:
Grad_k(x, y) = Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i, y+j+1) - f_k(x+i, y+j)| + Σ_{i=-2}^{2} Σ_{j=-2}^{2} |f_k(x+i+1, y+j) - f_k(x+i, y+j)|
c) comparing the gradients of the pixels at the same coordinate in the two adjacent frames, the larger the gradient difference the larger the weight coefficient applied when computing the frame difference; let r denote the gradient factor and Fd_k(x, y) the frame difference with the gradient factor introduced, and compute Fd_k(x, y):
Fd_k(x, y) = d_k(x, y) × r
where the value of r is related to the gradient difference of the pixels at the same coordinate in the two adjacent frames, and the larger the gradient difference, the larger the value of r;
d) sorting Fd_k(x, y) together with the frame difference data of the previous J frames and taking the median as the new frame difference Fd_k(x, y);
e) comparing the new frame difference Fd_k(x, y) with a threshold T; if it is greater than the threshold the point is marked as 1, otherwise it is marked as 0;
f) performing connected component processing on the thresholded binary image; after connected component processing, the image is divided into a temporary moving region a_fg and a temporary background region a_bg, where a_fg contains the moving objects computed by the adjacent frame difference method together with the hole regions inside them.
3. The moving target detection method according to claim 1, characterized in that step 3) comprises:
g) the Gaussian mixture model contains N Gaussian models; for the i-th Gaussian model, its parameters at time t comprise the mean μ_{i,t}, the variance σ²_{i,t} and the weight w_{i,t}; the parameters are initialized at the first frame image;
h) for a pixel with value X_t, the condition for matching a Gaussian model is:
|X_t - μ_{i,t-1}| < 2.5 σ_{i,t-1}
if this condition is satisfied, the Gaussian model matches the pixel, otherwise it does not match; the Gaussian models are sorted in descending order of the ratio of weight to standard deviation, w_{i,t}/σ_{i,t}, and the first B models in the ranking are taken as the background; the pixels of the a_fg and a_bg regions are matched against the Gaussian mixture model separately, and the image is divided into 4 regions according to the matching results: 1. the part of a_fg that matches, a_fgm; 2. the part of a_fg that does not match, a_fgu; 3. the part of a_bg that matches, a_bgm; 4. the part of a_bg that does not match, a_bgu.
4. The moving target detection method according to claim 1, characterized in that step 4) comprises:
i) update of the Gaussian models in region a_fgm:
w_{i,t} = (1 - α) w_{i,t-1} + α M_{i,t}
μ_{i,t} = (1 - ρ) μ_{i,t-1} + M_{i,t} ρ X_t + ρ μ_{i,t-1} (1 - M_{i,t})
σ²_{i,t} = (1 - ρ) σ²_{i,t-1} + M_{i,t} ρ (X_t - μ_{i,t})^T (X_t - μ_{i,t}) + (1 - M_{i,t}) ρ σ²_{i,t-1}
where α is the parameter update rate, here taking a relatively large value (for example 0.1); w_{i,t}, μ_{i,t} and σ²_{i,t} are respectively the weight, mean and variance of the i-th Gaussian model at time t; ρ is the update rate of the mean and variance; and M_{i,t} indicates whether the i-th Gaussian model matches the pixel at time t, taking the value 1 if it matches and 0 otherwise;
j) the Gaussian models in region a_fgu are not updated, and no new Gaussian model is established;
k) the Gaussian models in region a_bgm are updated as in i), except that the parameter update rate α takes a more moderate value (for example 0.01);
l) in region a_bgu, all Gaussian models except the one with the smallest weight are updated according to the formulas in i), with the parameter update rate α around 0.01; a new Gaussian model is built to replace the model with the smallest weight, and its parameters are set as follows: the mean μ_t takes the current pixel value X_t, the variance σ_t takes a relatively small value, and the weight w_t takes a relatively large value so that, after normalization, the new model ranks high.
5. The moving target detection method according to claim 1, characterized in that step 5) comprises:
m) combining the moving pixels computed in f) with the region a_fgu, and determining the final moving target region after labelling connected components again.
CN201310586151.5A 2013-11-19 2013-11-19 Moving target detection method combining the adjacent frame difference method and a Gaussian mixture model Active CN103617632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310586151.5A CN103617632B (en) 2013-11-19 2013-11-19 Moving target detection method combining the adjacent frame difference method and a Gaussian mixture model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310586151.5A CN103617632B (en) 2013-11-19 2013-11-19 Moving target detection method combining the adjacent frame difference method and a Gaussian mixture model

Publications (2)

Publication Number Publication Date
CN103617632A true CN103617632A (en) 2014-03-05
CN103617632B CN103617632B (en) 2017-06-13

Family

ID=50168336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310586151.5A Active CN103617632B (en) 2013-11-19 2013-11-19 Moving target detection method combining the adjacent frame difference method and a Gaussian mixture model

Country Status (1)

Country Link
CN (1) CN103617632B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335717A (en) * 2015-10-29 2016-02-17 宁波大学 Intelligent mobile terminal video jitter analysis-based face recognition system
CN105374051A (en) * 2015-10-29 2016-03-02 宁波大学 Lens jitter prevention video movement target detection method for intelligent mobile terminal
CN105913068A (en) * 2016-04-27 2016-08-31 北京以萨技术股份有限公司 Multidimensional direction gradient representation method for image characteristic description
CN106157272A (en) * 2016-06-17 2016-11-23 奇瑞汽车股份有限公司 The method and apparatus setting up background image
CN106204636A (en) * 2016-06-27 2016-12-07 北京大学深圳研究生院 Video foreground extracting method based on monitor video
CN107450583A (en) * 2017-08-23 2017-12-08 浙江工业大学 Unmanned plane motion tracking system based on the valiant imperial processor of high pass
CN107507221A (en) * 2017-07-28 2017-12-22 天津大学 With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN108122243A (en) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 For the method for robot detection moving object
CN108230369A (en) * 2016-12-14 2018-06-29 贵港市瑞成科技有限公司 A kind of improved neighbor frame difference method
CN108510527A (en) * 2017-12-07 2018-09-07 上海悠络客电子科技股份有限公司 A kind of moving target detecting method clustered based on frame difference method and motor point
CN109165600A (en) * 2018-08-27 2019-01-08 浙江大丰实业股份有限公司 Stage performance personnel's intelligent search platform
CN109993767A (en) * 2017-12-28 2019-07-09 北京京东尚科信息技术有限公司 Image processing method and system
CN111523385A (en) * 2020-03-20 2020-08-11 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN111553931A (en) * 2020-04-03 2020-08-18 中国地质大学(武汉) ViBe-ID foreground detection method for indoor real-time monitoring
CN112347899A (en) * 2020-11-03 2021-02-09 广州杰赛科技股份有限公司 Moving target image extraction method, device, equipment and storage medium
CN113435237A (en) * 2020-03-23 2021-09-24 丰田自动车株式会社 Object state recognition device, recognition method, recognition program, and control device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024146A (en) * 2010-12-08 2011-04-20 江苏大学 Method for extracting foreground in piggery monitoring video
CN102568005A (en) * 2011-12-28 2012-07-11 江苏大学 Moving object detection method based on Gaussian mixture model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024146A (en) * 2010-12-08 2011-04-20 江苏大学 Method for extracting foreground in piggery monitoring video
CN102568005A (en) * 2011-12-28 2012-07-11 江苏大学 Moving object detection method based on Gaussian mixture model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RUI YAN et al.: "Moving Object Detection Based on an Improved Gaussian Mixture Background Model", 2009 ISECS International Colloquium on Computing, Communication, Control, and Management, 9 August 2009 (2009-08-09), pages 12-15, XP031532809 *
CHEN MIN: "Research on moving target detection algorithms in video surveillance", China Master's Theses Full-text Database, no. 04, 15 April 2012 (2012-04-15), pages 20-30 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105374051B (en) * 2015-10-29 2018-04-24 宁波大学 The anti-camera lens shake video moving object detection method of intelligent mobile terminal
CN105374051A (en) * 2015-10-29 2016-03-02 宁波大学 Lens jitter prevention video movement target detection method for intelligent mobile terminal
CN105335717B (en) * 2015-10-29 2019-03-05 宁波大学 Face identification system based on the analysis of intelligent mobile terminal video jitter
CN105335717A (en) * 2015-10-29 2016-02-17 宁波大学 Intelligent mobile terminal video jitter analysis-based face recognition system
CN105913068A (en) * 2016-04-27 2016-08-31 北京以萨技术股份有限公司 Multidimensional direction gradient representation method for image characteristic description
CN106157272A (en) * 2016-06-17 2016-11-23 奇瑞汽车股份有限公司 The method and apparatus setting up background image
CN106157272B (en) * 2016-06-17 2019-01-01 奇瑞汽车股份有限公司 The method and apparatus for establishing background image
CN106204636A (en) * 2016-06-27 2016-12-07 北京大学深圳研究生院 Video foreground extracting method based on monitor video
CN106204636B (en) * 2016-06-27 2019-03-22 北京大学深圳研究生院 Video foreground extracting method based on monitor video
CN108122243A (en) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 For the method for robot detection moving object
CN108122243B (en) * 2016-11-26 2021-05-28 沈阳新松机器人自动化股份有限公司 Method for robot to detect moving object
CN108230369A (en) * 2016-12-14 2018-06-29 贵港市瑞成科技有限公司 A kind of improved neighbor frame difference method
CN107507221A (en) * 2017-07-28 2017-12-22 天津大学 With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN107450583A (en) * 2017-08-23 2017-12-08 浙江工业大学 Unmanned plane motion tracking system based on the valiant imperial processor of high pass
CN108510527A (en) * 2017-12-07 2018-09-07 上海悠络客电子科技股份有限公司 A kind of moving target detecting method clustered based on frame difference method and motor point
CN108510527B (en) * 2017-12-07 2024-05-03 上海悠络客电子科技股份有限公司 Moving object detection method based on frame difference method and moving point clustering
CN109993767A (en) * 2017-12-28 2019-07-09 北京京东尚科信息技术有限公司 Image processing method and system
CN109165600A (en) * 2018-08-27 2019-01-08 浙江大丰实业股份有限公司 Stage performance personnel's intelligent search platform
CN109165600B (en) * 2018-08-27 2021-11-26 浙江大丰实业股份有限公司 Intelligent search platform for stage performance personnel
CN111523385B (en) * 2020-03-20 2022-11-04 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN111523385A (en) * 2020-03-20 2020-08-11 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN113435237A (en) * 2020-03-23 2021-09-24 丰田自动车株式会社 Object state recognition device, recognition method, recognition program, and control device
CN113435237B (en) * 2020-03-23 2023-12-26 丰田自动车株式会社 Object state recognition device, recognition method, and computer-readable recording medium, and control device
CN111553931A (en) * 2020-04-03 2020-08-18 中国地质大学(武汉) ViBe-ID foreground detection method for indoor real-time monitoring
CN112347899A (en) * 2020-11-03 2021-02-09 广州杰赛科技股份有限公司 Moving target image extraction method, device, equipment and storage medium
CN112347899B (en) * 2020-11-03 2023-09-19 广州杰赛科技股份有限公司 Moving object image extraction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN103617632B (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN103617632A (en) Moving target detection method with adjacent frame difference method and Gaussian mixture models combined
US9311534B2 (en) Method and apparatus for tracking object
CN103729858B (en) A kind of video monitoring system is left over the detection method of article
CN106600625A (en) Image processing method and device for detecting small-sized living thing
CN103455991A (en) Multi-focus image fusion method
CN102147861A (en) Moving target detection method for carrying out Bayes judgment based on color-texture dual characteristic vectors
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN103996045B (en) A kind of smog recognition methods of the various features fusion based on video
CN103150738A (en) Detection method of moving objects of distributed multisensor
Arseneau et al. Real-time image segmentation for action recognition
CN104599290A (en) Video sensing node-oriented target detection method
CN103955682A (en) Behavior recognition method and device based on SURF interest points
CN102663775A (en) Target tracking method oriented to video with low frame rate
CN104463869A (en) Video flame image composite recognition method
CN103927519A (en) Real-time face detection and filtration method
Hsu et al. Industrial smoke detection and visualization
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN106254723B (en) A kind of method of real-time monitoring video noise interference
CN103886324A (en) Scale adaptive target tracking method based on log likelihood image
Yang et al. Digital video intrusion intelligent detection method based on narrowband Internet of Things and its application
CN105096343B (en) A kind of method for tracking moving target and device
CN106815567A (en) A kind of flame detecting method and device based on video
CN111899200B (en) Infrared image enhancement method based on 3D filtering
CN103578112B (en) A kind of aerator working state detecting method based on video image characteristic
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191226

Address after: 314400 No.2, Fengshou Avenue, Haining warp knitting industrial park, Jiaxing City, Zhejiang Province

Patentee after: Zhejiang Haining Warp Knitting Industrial Park Development Co.,Ltd.

Address before: 510000 unit 2414-2416, building, No. five, No. 371, Tianhe District, Guangdong, China

Patentee before: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.

Effective date of registration: 20191226

Address after: 510000 unit 2414-2416, building, No. five, No. 371, Tianhe District, Guangdong, China

Patentee after: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.

Address before: 310014 No. 18 Chaowang Road, Xiacheng District, Hangzhou, Zhejiang

Patentee before: Zhejiang University of Technology

TR01 Transfer of patent right