CN103700116A - Background modeling method for movement target detection - Google Patents

Background modeling method for movement target detection

Info

Publication number
CN103700116A
Authority
CN
China
Prior art keywords
background
image
formula
pixel
object detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210370484.XA
Other languages
Chinese (zh)
Other versions
CN103700116B (en)
Inventor
朱敏
朱振福
柴智
石春雷
刘峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
No207 Institute Of No2 Research Institute Of Avic
Original Assignee
No207 Institute Of No2 Research Institute Of Avic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by No207 Institute Of No2 Research Institute Of Avic filed Critical No207 Institute Of No2 Research Institute Of Avic
Priority to CN201210370484.XA
Publication of CN103700116A
Application granted
Publication of CN103700116B
Status: Active
Anticipated expiration


Abstract

The invention relates to a background modeling method for moving target detection, and belongs to the technical field of photoelectric image processing. To establish an accurate background model in a complex scene, and in view of the problems that a background model is difficult to obtain for the background subtraction method and that the method is easily affected by external interference, the invention provides background statistical modeling methods that exploit inter-frame difference information, namely a median method and an averaging method, which remove the influence of foreground targets so that the established background model is more accurate and highly resistant to interference. The median method and the averaging method are simple in principle, have good noise immunity, and are easy to implement. In addition, the method differences adjacent frames of the video sequence and selects an appropriate threshold, thereby guaranteeing an accurate background model.

Description

A background modeling method for moving object detection
Technical field
The present invention relates to the technical field of photoelectric image processing, and specifically to a background modeling method for moving object detection.
Background art
The background subtraction method is one of the most commonly used methods in current moving object detection. It provides relatively complete feature information, has low complexity, and offers good real-time performance, but in practical applications a complete background image is generally difficult to obtain in advance, and the background subtraction method is particularly sensitive to changes in dynamic scenes such as gradual illumination changes, disturbances of the external environment, and noise. How to establish an accurate background model according to scene changes while updating the background in real time is therefore the basis for all subsequent motion analysis and behavior understanding.
In view of the above, how to remove the influence of the foreground image so that the established background model image is more accurate is a direction that those skilled in the art are actively pursuing.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to establish an accurate background model in a complex scene.
(2) Technical solution
To solve the above technical problem, the present invention provides a background modeling method for moving object detection. The method comprises:
Step S1: read in M consecutive frames of images, ensuring that the positions of the moving target in the first frame and in the last frame do not overlap;
Step S2: subtract pairs of frames separated by a certain number of frames to obtain difference images;
Step S3: preset a dynamic segmentation threshold, and use the dynamic segmentation threshold to binarize the difference image to obtain a binary difference image, so as to distinguish the background region from the target region in the difference image;
Step S4: apply morphological filtering and region connectivity methods to remove noise and fill small holes in the target region;
Step S5: exclude the target region, compute the median or mean of the gray value of each remaining background pixel along the time axis, and obtain the complete background image.
In step S1, M is an integer greater than 10.
In step S2, the two frames separated by a certain number of frames may be two frames separated by multiple frames.
Alternatively, let I(x, y, m) be the pixel value of the m-th frame at coordinate (x, y), D(x, y, m) the binary difference image, and B(x, y) the background image to be formed; in step S2, the two frames separated by a certain number of frames are two directly adjacent frames;
the difference image of adjacent frames is then calculated according to formula (1):
A(x,y,m) = |I(x,y,m+1) - I(x,y,m)|, 1 ≤ m < M    (1).
In step S3, the dynamic segmentation threshold is denoted T and is taken as a quantile of the gray values of the difference image; the difference image is binarized according to formula (2):
D(x,y,m) = 1 if A(x,y,m) ≥ T, and D(x,y,m) = 0 if A(x,y,m) < T    (2).
In step S3, the dynamic segmentation threshold T is taken as the 95% quantile of the gray values of the difference image.
In step S5, the process of obtaining the complete background image by computing the median of the gray value of each remaining background pixel along the time axis is as follows:
Step S501: for each pixel (x, y) of the binary image sequence, form an index set representing the frames in which the difference is not less than the threshold T; the index set is obtained according to formula (3):
S_{x,y}(m) = {m : D(x,y,m) = 1, 1 ≤ m < M}    (3);
and denote by f_{x,y} and e_{x,y} the first and last frame indices for which the pixel value D(x, y, m) equals 1; f_{x,y} and e_{x,y} are obtained according to formula (4):
f_{x,y} = min{m : m ∈ S_{x,y}(m)}
e_{x,y} = max{m : m ∈ S_{x,y}(m)}    (4);
Step S502: the background image is obtained from the index set S_{x,y}(m) in combination with formula (5);
in formula (5), median(·) denotes taking the median value.
Alternatively, in step S5, the process of obtaining the complete background image by computing the mean of the gray value of each remaining background pixel along the time axis is as follows:
Step S501': first apply erosion and dilation to the binary image to remove small noise blobs, compute the minimum bounding rectangle of each larger binary blob, set the weight w of pixels inside the rectangle to 0 and the weight w of pixels outside the rectangle to 1; the weight matrix of the m-th frame is denoted W(x, y, m);
Step S502': for each pixel (x, y) of the binary image sequence, obtain the index set T_{x,y}(m) according to formula (6):
T_{x,y}(m) = {m : W(x,y,m) = 1, 1 < m ≤ M}    (6);
Step S503': the background image B(x, y) is obtained from the index set T_{x,y}(m) in combination with formula (7):
B(x,y) = mean{ I(x,y,m) : m ∈ T_{x,y}(m) }    (7);
In formula (7), mean(·) denotes taking the average.
If the scene changes relatively quickly, the method further comprises step S6: updating the background in real time.
Step S6 comprises:
updating the background in real time according to formula (8):
B(x,y,m) = B(x,y,m-1) if D(x,y,m) = 1, and B(x,y,m) = αI(x,y,m) + (1-α)B(x,y,m-1) if D(x,y,m) = 0    (8)
where α is the update factor and D(x, y, m) is given by:
D(x,y,m) = 1 if |I(x,y,m) - I(x,y,m-1)| ≥ T, and D(x,y,m) = 0 if |I(x,y,m) - I(x,y,m-1)| < T    (9)
where the threshold T is chosen by the quantile threshold method.
(3) Beneficial effects
Compared with the prior art, the technical solution of the present invention has the following beneficial effects:
(1) The influence of the foreground image is removed by the median method and the averaging method, making the established background model image more accurate; both methods are simple in principle, easy to implement, and have good noise immunity.
(2) By differencing adjacent video frames, choosing a suitable threshold, and using image processing to remove the influence of foreground targets, an accurate background model is obtained.
(3) Whereas the background subtraction method has difficulty obtaining a background model and is easily affected by external interference, the background statistical modeling method proposed by the present invention, which exploits inter-frame difference information, guarantees an accurate background image under a complex background and has good anti-interference performance.
Brief description of the drawings
Fig. 1 is a flowchart of the background modeling method for moving object detection provided by the technical solution of the present invention.
Fig. 2(a) and Fig. 2(c) are difference images of two adjacent frames used in the implementation of the invention.
Fig. 2(b) and Fig. 2(d) are the gray-level histograms of the difference images in Fig. 2(a) and Fig. 2(c), respectively.
Fig. 3(a)-Fig. 3(h) show the background images obtained after modeling with the technical solution of the present invention.
Fig. 3(i) shows the background modeling result of the median method.
Fig. 3(j) shows the background modeling result of the averaging method.
Fig. 4(a) and Fig. 4(b) are the earlier and later frames from which Fig. 2(a) is derived.
Fig. 4(c) and Fig. 4(d) are the earlier and later frames from which Fig. 2(c) is derived.
Detailed description of the embodiments
To make the purpose, content, and advantages of the present invention clearer, the specific embodiments of the present invention are described in further detail below with reference to the drawings and examples.
To establish an accurate background model in a complex scene, and in view of the facts that the background subtraction method has difficulty obtaining a background model and is easily affected by external interference, the present invention proposes two background statistical modeling methods that exploit inter-frame difference information, namely a median method and an averaging method, which remove the influence of foreground targets and make the established background model more accurate. Based on these two methods, the present invention provides a background modeling method for moving object detection. As shown in Fig. 1, the method comprises:
Step S1: read in M consecutive frames, ensuring that the positions of the moving target in the first frame and in the last frame do not overlap, where M is an integer greater than 10;
Step S2: subtract pairs of frames separated by a certain number of frames to obtain difference images. When the target moves quickly, the two frames separated by a certain number of frames can be chosen as two directly adjacent frames; when the target moves slowly, the number of frames M to be read in can be increased, and instead of differencing adjacent frames, frames separated by n frames are differenced, which effectively reduces the amount of computation; in that case the two frames separated by a certain number of frames are two frames separated by multiple frames.
Step S3: preset a dynamic segmentation threshold, and use it to binarize the difference image to obtain a binary difference image, so as to distinguish the background region from the target region in the difference image. If a static threshold were used, the method would not adapt well to complex conditions such as illumination changes and noise; for this reason the present invention uses a quantile of the gray values of the difference image, preferably the 95% quantile, as the dynamic segmentation threshold. The gray-level histogram of the difference image is computed, as shown in Fig. 2: Fig. 2(a) and Fig. 2(c) are difference images of two adjacent frames for rigid and non-rigid motion, respectively. The source frames of Fig. 2(a) are Fig. 4(a) and Fig. 4(b); both are first converted to grayscale, and the grayscale Fig. 4(a) is subtracted from the grayscale Fig. 4(b) to obtain Fig. 2(a). The source frames of Fig. 2(c) are Fig. 4(c) and Fig. 4(d); both are likewise converted to grayscale, and the grayscale Fig. 4(c) is subtracted from the grayscale Fig. 4(d) to obtain Fig. 2(c). Fig. 2(b) and Fig. 2(d) are the corresponding gray-level histograms of the difference images. It can be seen that most gray values of the difference image are concentrated near 0, which is mainly caused by external environmental interference, and only a minority of values are significantly greater than 0, which can be attributed to target motion. A suitable quantile threshold can therefore be chosen adaptively according to the actual conditions, effectively reducing the probability of misclassification;
Step S4: apply morphological filtering and region connectivity methods to remove noise and fill small holes in the target region. This step uses conventional techniques and is not described in detail here; an illustrative sketch is given after step S5;
Step S5: exclude the target region, compute the median or mean of the gray value of each remaining background pixel along the time axis, and obtain the complete background image.
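For step S4, which the description treats as conventional, the following minimal Python sketch (not taken from the patent; OpenCV and NumPy are assumed to be available, and the function name clean_mask, the 3x3 structuring element, and the min_area parameter are illustrative choices) shows one common way to combine morphological filtering with a region-connectivity pass on a binary difference image:

import cv2
import numpy as np

def clean_mask(binary, min_area=20):
    # binary: uint8 mask with values {0, 1}; returns a cleaned {0, 1} mask
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    m = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # opening removes isolated noise pixels
    m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)       # closing fills small holes inside the target area
    n, labels, stats, _ = cv2.connectedComponentsWithStats(m, connectivity=8)
    out = np.zeros_like(m)
    for i in range(1, n):                                  # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:         # region connectivity: keep only sufficiently large regions
            out[labels == i] = 1
    return out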
(1) For the median method, the above steps are detailed as follows:
First, let I(x, y, m) be the pixel value of the m-th frame at coordinate (x, y), D(x, y, m) the binary difference image, and B(x, y) the background image to be formed; in step S2, the two frames separated by a certain number of frames are two directly adjacent frames.
The difference image of adjacent frames is calculated according to formula (1):
A(x,y,m) = |I(x,y,m+1) - I(x,y,m)|, 1 ≤ m < M    (1)
In step S3, the dynamic segmentation threshold is denoted T and is taken as a quantile of the gray values of the difference image; the difference image is binarized according to formula (2):
D(x,y,m) = 1 if A(x,y,m) ≥ T, and D(x,y,m) = 0 if A(x,y,m) < T    (2).
In step S3, the dynamic segmentation threshold T is taken as the 95% quantile of the gray values of the difference image.
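For illustration, the following minimal Python sketch implements formulas (1) and (2) with a dynamic 95% quantile threshold (a sketch under stated assumptions, not code from the patent: the helper name binary_difference_images and the choice to compute the quantile separately for each frame pair are assumptions):

import numpy as np

def binary_difference_images(frames, q=95.0):
    # frames: array of shape (M, H, W), grayscale; returns A and D of shape (M-1, H, W)
    frames = frames.astype(np.int16)
    A = np.abs(frames[1:] - frames[:-1]).astype(np.uint8)   # formula (1)
    D = np.zeros_like(A)
    for m in range(A.shape[0]):
        T = np.percentile(A[m], q)                           # dynamic segmentation threshold (95% quantile)
        D[m] = (A[m] >= T).astype(np.uint8)                  # formula (2)
    return A, D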
In step S5, the process of obtaining the complete background image by computing the median of the gray value of each remaining background pixel along the time axis is as follows:
Step S501: for each pixel (x, y) of the binary image sequence, form an index set representing the frames in which the difference is not less than the threshold T; the index set is obtained according to formula (3):
S_{x,y}(m) = {m : D(x,y,m) = 1, 1 ≤ m < M}    (3);
and denote by f_{x,y} and e_{x,y} the first and last frame indices for which the pixel value D(x, y, m) equals 1; f_{x,y} and e_{x,y} are obtained according to formula (4):
f_{x,y} = min{m : m ∈ S_{x,y}(m)}
e_{x,y} = max{m : m ∈ S_{x,y}(m)}    (4);
Step S502: the background image is obtained from the index set S_{x,y}(m) in combination with formula (5):
[Formula (5), reproduced as an image in the original publication]
In formula (5), median(·) denotes taking the median value.
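As a hedged illustration of steps S501 and S502 only: because formula (5) is reproduced as an image in the original publication, the exact frame range over which the median is taken is not fully recoverable from the text; the Python sketch below assumes the median is taken over the frames outside the interval [f_{x,y}, e_{x,y}] in which the pixel is marked as moving, and the function name median_background is illustrative:

import numpy as np

def median_background(frames, D):
    # frames: (M, H, W) grayscale sequence; D: (M-1, H, W) binary difference images
    M, H, W = frames.shape
    B = np.zeros((H, W), dtype=frames.dtype)
    for y in range(H):
        for x in range(W):
            idx = np.nonzero(D[:, y, x] == 1)[0]       # index set S_{x,y} (formula (3))
            if idx.size == 0:                          # pixel never covered by a moving target
                B[y, x] = np.median(frames[:, y, x])
                continue
            f, e = idx.min(), idx.max()                # formula (4)
            keep = np.r_[0:f + 1, e + 1:M]             # assumed background frames (outside [f, e])
            B[y, x] = np.median(frames[keep, y, x])
    return B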
(2) For the averaging method, the above steps are detailed as follows:
This method can be regarded as a combination of the multi-frame averaging method and the Surendra update method, in which steps S1-S4 are identical to those of the median method above. For step S5,
the process of obtaining the complete background image by computing the mean of the gray value of each remaining background pixel along the time axis is as follows:
Step S501': first apply erosion and dilation to the binary image to remove small noise blobs, compute the minimum bounding rectangle of each larger binary blob, set the weight w of pixels inside the rectangle to 0 and the weight w of pixels outside the rectangle to 1; the weight matrix of the m-th frame is denoted W(x, y, m);
Step S502': for each pixel (x, y) of the binary image sequence, obtain the index set T_{x,y}(m) according to formula (6):
T_{x,y}(m) = {m : W(x,y,m) = 1, 1 < m ≤ M}    (6);
Step S503': the background image B(x, y) is obtained from the index set T_{x,y}(m) in combination with formula (7):
B(x,y) = mean{ I(x,y,m) : m ∈ T_{x,y}(m) }    (7)
In formula (7), mean(·) denotes taking the average.
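The following Python sketch illustrates steps S501' to S503' under stated assumptions (it is not code from the patent): the 3x3 structuring element, the min_area threshold used to decide which blobs count as larger blobs, and the pairing of each weight matrix with the earlier frame of its difference pair are all illustrative choices.

import cv2
import numpy as np

def mean_background(frames, D, min_area=20):
    # frames: (M, H, W) grayscale sequence; D: (M-1, H, W) binary difference images
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    W_mask = np.ones_like(frames[:-1])                       # weight matrices W(x, y, m)
    for m in range(D.shape[0]):
        d = cv2.dilate(cv2.erode(D[m], kernel), kernel)      # erosion then dilation removes small noise blobs
        n, labels, stats, _ = cv2.connectedComponentsWithStats(d, connectivity=8)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] >= min_area:       # larger binary blobs only
                x0, y0 = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
                w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
                W_mask[m, y0:y0 + h, x0:x0 + w] = 0          # weight 0 inside the minimum bounding rectangle
    num = (frames[:-1] * W_mask).sum(axis=0)                 # per-pixel sum over frames with weight 1
    cnt = W_mask.sum(axis=0)
    B = np.where(cnt > 0, num / np.maximum(cnt, 1), frames[:-1].mean(axis=0))  # temporal mean (formula (7))
    return B.astype(frames.dtype)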
In summary, the median method and the averaging method remove the influence of the foreground image and make the established background model image more accurate. Both methods are simple in principle, easy to implement, and have good noise immunity.
In addition, regarding the updating of the background model: if the scene changes slowly, the background can be updated periodically with the median method or averaging method described above; if the scene changes relatively quickly, the method further comprises step S6: updating the background in real time.
Step S6 comprises:
updating the background in real time according to formula (8):
B(x,y,m) = B(x,y,m-1) if D(x,y,m) = 1, and B(x,y,m) = αI(x,y,m) + (1-α)B(x,y,m-1) if D(x,y,m) = 0    (8)
where α is the update factor and D(x, y, m) is given by:
D(x,y,m) = 1 if |I(x,y,m) - I(x,y,m-1)| ≥ T, and D(x,y,m) = 0 if |I(x,y,m) - I(x,y,m-1)| < T    (9)
where the threshold T is chosen by the quantile threshold method.
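A minimal Python sketch of the real-time update of formulas (8) and (9) follows (for illustration only; the update factor alpha = 0.05 and the use of the 95% quantile for T are assumed values, not prescribed by the patent):

import numpy as np

def update_background(B, I_prev, I_curr, alpha=0.05, q=95.0):
    # B: current background estimate; I_prev, I_curr: consecutive grayscale frames
    diff = np.abs(I_curr.astype(np.float32) - I_prev.astype(np.float32))
    T = np.percentile(diff, q)                               # quantile threshold method
    D = diff >= T                                            # formula (9): true where motion is detected
    B = B.astype(np.float32)
    updated = alpha * I_curr.astype(np.float32) + (1.0 - alpha) * B
    return np.where(D, B, updated)                           # formula (8)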
To verify the effectiveness of the technical solution of the present invention, background modeling comparison experiments were carried out for both rigid and non-rigid motion using the technical solution of the present invention and the traditional Surendra algorithm; the comparison results are shown in Fig. 3(a)-Fig. 3(j). Fig. 3(a)-Fig. 3(e) concern the rigid-motion case: Fig. 3(a) is the 1114th frame of the image sequence, Fig. 3(b) is the 1149th frame, Fig. 3(c) is the background modeling result of the traditional Surendra algorithm, Fig. 3(d) is the result of the median method, and Fig. 3(e) is the result of the averaging method. As can be seen from the figures, all three methods perform well for rigid motion, but the backgrounds recovered by the statistics-based median and averaging methods are cleaner than that of the Surendra algorithm. This is because the median and averaging methods consider all frames participating in background modeling globally: the moving region detected in any frame is removed in a reasonable way without preventing the background in that region from being recovered, and even if the inter-frame difference does not segment the moving region accurately, the error can to some extent be corrected by taking the median or the mean. The background update of the Surendra algorithm, by contrast, must depend on the moving-region detection result of the current frame; if a target is present in the current frame but is not detected by the inter-frame difference, the background update in that target region is adversely affected, and this limitation of the Surendra algorithm is more pronounced for small targets.
Fig. 3(f)-Fig. 3(j) concern the non-rigid-motion case: Fig. 3(f) is the 1st frame of the image sequence, Fig. 3(g) is the 50th frame, Fig. 3(h) is the background modeling result of the traditional Surendra algorithm, Fig. 3(i) is the result of the median method, and Fig. 3(j) is the result of the averaging method. As can be seen from the figures, for non-rigid motion the Surendra algorithm does not perform as well as the other two methods. This is because the Surendra algorithm must rely on the moving-region detection result of the next frame to update the background; when a local region of the moving target remains still for a period of time, that region is mistaken for background, causing the background update to go wrong. The balanced consideration of all image frames in the median and averaging methods reduces this effect as far as possible and makes the background image more accurate.
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make improvements and modifications without departing from the technical principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A background modeling method for moving object detection, characterized in that the method comprises:
Step S1: reading in M consecutive frames of images, ensuring that the positions of the moving target in the first frame and in the last frame do not overlap;
Step S2: subtracting pairs of frames separated by a certain number of frames to obtain difference images;
Step S3: presetting a dynamic segmentation threshold, and using the dynamic segmentation threshold to binarize the difference image to obtain a binary difference image, so as to distinguish the background region from the target region in the difference image;
Step S4: applying morphological filtering and region connectivity methods to remove noise and fill small holes in the target region;
Step S5: excluding the target region, computing the median or mean of the gray value of each remaining background pixel along the time axis, and obtaining the complete background image.
2. The background modeling method for moving object detection according to claim 1, characterized in that in step S1, M is an integer greater than 10.
3. The background modeling method for moving object detection according to claim 1, characterized in that in step S2, the two frames separated by a certain number of frames are two frames separated by multiple frames.
4. The background modeling method for moving object detection according to claim 1, characterized in that, letting I(x, y, m) be the pixel value of the m-th frame at coordinate (x, y), D(x, y, m) the binary difference image, and B(x, y) the background image to be formed, in step S2 the two frames separated by a certain number of frames are two directly adjacent frames;
and the difference image of adjacent frames is calculated according to formula (1):
A(x,y,m) = |I(x,y,m+1) - I(x,y,m)|, 1 ≤ m < M    (1).
5. The background modeling method for moving object detection according to claim 4, characterized in that in step S3, the dynamic segmentation threshold is denoted T and is taken as a quantile of the gray values of the difference image, and the difference image is binarized according to formula (2):
D(x,y,m) = 1 if A(x,y,m) ≥ T, and D(x,y,m) = 0 if A(x,y,m) < T    (2).
6. The background modeling method for moving object detection according to claim 5, characterized in that in step S3, the dynamic segmentation threshold T is the 95% quantile of the gray values of the difference image.
7. The background modeling method for moving object detection according to claim 5, characterized in that in step S5, the process of obtaining the complete background image by computing the median of the gray value of each remaining background pixel along the time axis is as follows:
Step S501: for each pixel (x, y) of the binary image sequence, forming an index set representing the frames in which the difference is not less than the threshold T, the index set being obtained according to formula (3):
S_{x,y}(m) = {m : D(x,y,m) = 1, 1 ≤ m < M}    (3);
and denoting by f_{x,y} and e_{x,y} the first and last frame indices for which the pixel value D(x, y, m) equals 1, f_{x,y} and e_{x,y} being obtained according to formula (4):
f_{x,y} = min{m : m ∈ S_{x,y}(m)}
e_{x,y} = max{m : m ∈ S_{x,y}(m)}    (4);
Step S502: obtaining the background image from the index set S_{x,y}(m) in combination with formula (5):
[Formula (5), reproduced as an image in the original publication]
in formula (5), median(·) denotes taking the median value.
8. The background modeling method for moving object detection according to claim 5, characterized in that in step S5, the process of obtaining the complete background image by computing the mean of the gray value of each remaining background pixel along the time axis is as follows:
Step S501': first applying erosion and dilation to the binary image to remove small noise blobs, computing the minimum bounding rectangle of each larger binary blob, and setting the weight w of pixels inside the rectangle to 0 and the weight w of pixels outside the rectangle to 1, the weight matrix of the m-th frame being denoted W(x, y, m);
Step S502': for each pixel (x, y) of the binary image sequence, obtaining the index set T_{x,y}(m) according to formula (6):
T_{x,y}(m) = {m : W(x,y,m) = 1, 1 < m ≤ M}    (6);
Step S503': obtaining the background image B(x, y) from the index set T_{x,y}(m) in combination with formula (7);
in formula (7), mean(·) denotes taking the average.
9. The background modeling method for moving object detection according to claim 5, characterized in that if the scene changes relatively quickly, the method further comprises step S6: updating the background in real time.
10. The background modeling method for moving object detection according to claim 9, characterized in that step S6 comprises:
updating the background in real time according to formula (8):
B(x,y,m) = B(x,y,m-1) if D(x,y,m) = 1, and B(x,y,m) = αI(x,y,m) + (1-α)B(x,y,m-1) if D(x,y,m) = 0    (8)
where α is the update factor and D(x, y, m) is given by:
D(x,y,m) = 1 if |I(x,y,m) - I(x,y,m-1)| ≥ T, and D(x,y,m) = 0 if |I(x,y,m) - I(x,y,m-1)| < T    (9)
where the threshold T is chosen by the quantile threshold method.
CN201210370484.XA 2012-09-27 2012-09-27 Background modeling method for movement target detection Active CN103700116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210370484.XA CN103700116B (en) 2012-09-27 2012-09-27 Background modeling method for movement target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210370484.XA CN103700116B (en) 2012-09-27 2012-09-27 Background modeling method for movement target detection

Publications (2)

Publication Number Publication Date
CN103700116A true CN103700116A (en) 2014-04-02
CN103700116B CN103700116B (en) 2017-02-22

Family

ID=50361634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210370484.XA Active CN103700116B (en) 2012-09-27 2012-09-27 Background modeling method for movement target detection

Country Status (1)

Country Link
CN (1) CN103700116B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715446A (en) * 2015-02-28 2015-06-17 深圳市中兴移动通信有限公司 Mobile terminal and method and device for removing moving target in camera shooting for same
CN104954893A (en) * 2015-06-25 2015-09-30 西安理工大学 Falsely-detected target chain deleting method for video abstract generation
CN105279771A (en) * 2015-10-23 2016-01-27 中国科学院自动化研究所 Method for detecting moving object on basis of online dynamic background modeling in video
CN105894531A (en) * 2014-12-24 2016-08-24 北京明景科技有限公司 Moving object extraction method under low illumination
CN106023114A (en) * 2016-05-27 2016-10-12 北京小米移动软件有限公司 Image processing method and apparatus
CN106157303A (en) * 2016-06-24 2016-11-23 浙江工商大学 A kind of method based on machine vision to Surface testing
CN106327520A (en) * 2016-08-19 2017-01-11 苏州大学 Moving object detection method and system
CN106488133A (en) * 2016-11-17 2017-03-08 维沃移动通信有限公司 A kind of detection method of Moving Objects and mobile terminal
CN107133624A (en) * 2017-05-26 2017-09-05 四川九洲电器集团有限责任公司 A kind of object detection method and equipment
CN107169985A (en) * 2017-05-23 2017-09-15 南京邮电大学 A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN107194953A (en) * 2017-05-18 2017-09-22 中国科学院长春光学精密机械与物理研究所 The detection method and device of moving target under a kind of dynamic background
CN107301388A (en) * 2017-06-16 2017-10-27 重庆交通大学 A kind of automatic vehicle identification method and device
CN108010065A (en) * 2017-11-07 2018-05-08 西安天和防务技术股份有限公司 Low target quick determination method and device, storage medium and electric terminal
CN108765451A (en) * 2018-05-07 2018-11-06 南京邮电大学 A kind of movement of traffic object detection method of adaptive RTS threshold adjustment
CN109064743A (en) * 2018-08-09 2018-12-21 李艳芹 Traffic management information automatic collection platform
CN109472808A (en) * 2018-11-23 2019-03-15 东北大学 The detection method of moving target in a kind of acquisition video
CN110555863A (en) * 2019-09-11 2019-12-10 湖南德雅坤创科技有限公司 moving object detection method and device and computer readable storage medium
CN110634152A (en) * 2019-08-08 2019-12-31 西安电子科技大学 Target detection method based on background modeling and multi-frame confirmation
CN111047595A (en) * 2019-11-21 2020-04-21 深圳市若雅方舟科技有限公司 Real-time sea wave segmentation method and device based on self-adaptive threshold frame difference method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221663A (en) * 2008-01-18 2008-07-16 电子科技大学中山学院 Intelligent monitoring and alarming method based on movement object detection
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101894381A (en) * 2010-08-05 2010-11-24 上海交通大学 Multi-target tracking system in dynamic video sequence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221663A (en) * 2008-01-18 2008-07-16 电子科技大学中山学院 Intelligent monitoring and alarming method based on movement object detection
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101894381A (en) * 2010-08-05 2010-11-24 上海交通大学 Multi-target tracking system in dynamic video sequence

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894531A (en) * 2014-12-24 2016-08-24 北京明景科技有限公司 Moving object extraction method under low illumination
CN104715446A (en) * 2015-02-28 2015-06-17 深圳市中兴移动通信有限公司 Mobile terminal and method and device for removing moving target in camera shooting for same
CN104715446B (en) * 2015-02-28 2019-08-16 努比亚技术有限公司 A kind of mobile terminal and its by the method and apparatus of the object removal moved in camera shooting
CN104954893B (en) * 2015-06-25 2017-11-28 西安理工大学 A kind of flase drop target chain delet method of video frequency abstract generation
CN104954893A (en) * 2015-06-25 2015-09-30 西安理工大学 Falsely-detected target chain deleting method for video abstract generation
CN105279771A (en) * 2015-10-23 2016-01-27 中国科学院自动化研究所 Method for detecting moving object on basis of online dynamic background modeling in video
CN105279771B (en) * 2015-10-23 2018-04-10 中国科学院自动化研究所 A kind of moving target detecting method based on the modeling of online dynamic background in video
CN106023114A (en) * 2016-05-27 2016-10-12 北京小米移动软件有限公司 Image processing method and apparatus
CN106023114B (en) * 2016-05-27 2019-02-12 北京小米移动软件有限公司 Image processing method and device
CN106157303A (en) * 2016-06-24 2016-11-23 浙江工商大学 A kind of method based on machine vision to Surface testing
US10410361B2 (en) 2016-08-19 2019-09-10 Soochow University Moving object detection method and system
CN106327520B (en) * 2016-08-19 2020-04-07 苏州大学 Moving target detection method and system
WO2018032660A1 (en) * 2016-08-19 2018-02-22 苏州大学 Moving target detection method and system
CN106327520A (en) * 2016-08-19 2017-01-11 苏州大学 Moving object detection method and system
CN106488133A (en) * 2016-11-17 2017-03-08 维沃移动通信有限公司 A kind of detection method of Moving Objects and mobile terminal
CN107194953A (en) * 2017-05-18 2017-09-22 中国科学院长春光学精密机械与物理研究所 The detection method and device of moving target under a kind of dynamic background
CN107169985A (en) * 2017-05-23 2017-09-15 南京邮电大学 A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN107133624A (en) * 2017-05-26 2017-09-05 四川九洲电器集团有限责任公司 A kind of object detection method and equipment
CN107301388A (en) * 2017-06-16 2017-10-27 重庆交通大学 A kind of automatic vehicle identification method and device
CN108010065A (en) * 2017-11-07 2018-05-08 西安天和防务技术股份有限公司 Low target quick determination method and device, storage medium and electric terminal
CN108765451A (en) * 2018-05-07 2018-11-06 南京邮电大学 A kind of movement of traffic object detection method of adaptive RTS threshold adjustment
CN109064743A (en) * 2018-08-09 2018-12-21 李艳芹 Traffic management information automatic collection platform
CN109472808A (en) * 2018-11-23 2019-03-15 东北大学 The detection method of moving target in a kind of acquisition video
CN109472808B (en) * 2018-11-23 2022-03-04 东北大学 Detection method for obtaining moving target in video
CN110634152A (en) * 2019-08-08 2019-12-31 西安电子科技大学 Target detection method based on background modeling and multi-frame confirmation
CN110634152B (en) * 2019-08-08 2023-07-04 西安电子科技大学 Target detection method based on background modeling and multi-frame confirmation
CN110555863A (en) * 2019-09-11 2019-12-10 湖南德雅坤创科技有限公司 moving object detection method and device and computer readable storage medium
CN111047595A (en) * 2019-11-21 2020-04-21 深圳市若雅方舟科技有限公司 Real-time sea wave segmentation method and device based on self-adaptive threshold frame difference method

Also Published As

Publication number Publication date
CN103700116B (en) 2017-02-22

Similar Documents

Publication Publication Date Title
CN103700116A (en) Background modeling method for movement target detection
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
CN103871076A (en) Moving object extraction method based on optical flow method and superpixel division
CN106022231A (en) Multi-feature-fusion-based technical method for rapid detection of pedestrian
CN103559237A (en) Semi-automatic image annotation sample generating method based on target tracking
Cheung et al. A local region based approach to lip tracking
CN103077521A (en) Area-of-interest extracting method used for video monitoring
Gupta et al. Automatic trimap generation for image matting
CN104537688A (en) Moving object detecting method based on background subtraction and HOG features
CN104268520A (en) Human motion recognition method based on depth movement trail
CN107844772A (en) A kind of motor vehicle automatic testing method based on movable object tracking
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN106447656B (en) Rendering flaw image detecting method based on image recognition
CN103473547A (en) Vehicle target recognizing algorithm used for intelligent traffic detecting system
CN107247967B (en) Vehicle window annual inspection mark detection method based on R-CNN
CN108010050B (en) Foreground detection method based on adaptive background updating and selective background updating
CN105975911A (en) Energy perception motion significance target detection algorithm based on filter
CN108009480A (en) A kind of image human body behavioral value method of feature based identification
CN102129687B (en) Self-adapting target tracking method based on local background subtraction under dynamic scene
CN111414938A (en) Target detection method for bubbles in plate heat exchanger
CN104732558B (en) moving object detection device
CN111862152A (en) Moving target detection method based on interframe difference and super-pixel segmentation
Yang et al. Dual frame differences based background extraction algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant