CN102903120A - Time-space condition information based moving object detection method - Google Patents

Time-space condition information based moving object detection method

Info

Publication number
CN102903120A
CN102903120A (application number CN201210251354A / CN2012102513544A)
Authority
CN
China
Prior art keywords
image
background
space
image block
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102513544A
Other languages
Chinese (zh)
Inventor
包卫东
熊志辉
王斌
谭树人
刘煜
王炜
徐玮
陈立栋
张茂军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUNAN VISION SPLEND PHOTOELECTRIC TECHNOLOGY Co.,Ltd.
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN2012102513544A priority Critical patent/CN102903120A/en
Publication of CN102903120A publication Critical patent/CN102903120A/en
Pending legal-status Critical Current

Abstract

The invention discloses a moving object detection method based on time-space (spatio-temporal) condition information. The method comprises the following steps: building a time-space domain model for target detection by taking account of the saliency of human vision in the time and space domains; calculating the conditional probability that a pixel of the image under detection belongs to the time-space domain reference background; applying a nonlinear transformation to this conditional probability with a negative-logarithm kernel to extract the time-space condition information; performing a weighted summation of the condition information over each pixel's neighbourhood, exploiting the local consistency of image features; and using the result as a feature for object detection with a linear classifier. The conditional probability is computed rapidly from a colour histogram, and image blocks rather than single pixels are used for modelling and detection, which reduces the algorithmic complexity and the storage requirement; combined with an image-block difference pre-detection mechanism, the detection speed is further increased. The disclosed method has low algorithmic complexity, a small storage footprint and good real-time performance; it effectively suppresses background-perturbation interference and isolated-noise effects, achieves real-time detection of moving objects on existing computers, and is therefore also suitable for embedded intelligent camera platforms.

Description

Moving target detecting method based on space-time condition information
Technical field:
The present invention relates to video moving object segmentation in computer vision, and in particular to moving object detection in video surveillance systems.
Background technology:
Moving object detection in video is one of the fundamental problems of computer vision. It underpins higher-level applications such as target tracking, target recognition, human-machine interfaces, action recognition and behaviour understanding, plays an important role in concrete applications such as video surveillance and video retrieval, and will play an ever greater role in fields such as defence, transportation, security and entertainment.
An intelligent video surveillance system can free people from the burden of continuous video monitoring, reduce manual intervention and lighten the operator's workload: it automatically finds the moving targets in the monitored environment, automatically recognises and tracks them, and automatically discovers suspicious events in the scene and extracts the information of interest. All of these intelligent analysis functions depend on the moving object detection algorithm, which separates the moving targets in the video from the background in order to extract them. It is the basic algorithm of an intelligent video surveillance system and the foundation of subsequent target tracking, recognition and suspicious-event detection.
The mainstream moving object detection methods at present are background subtraction and optical flow. The computational complexity of optical flow makes it difficult to apply in practice. Background subtraction is currently the most common and most effective approach; its core idea is to describe the scene with a suitable model and to detect moving targets from the changes of the scene relative to that model. Commonly used background subtraction methods include the Gaussian Mixture Model (GMM), the nonparametric model (Kernel Density Estimation, KDE) and the codebook model (Code Book); they detect moving targets by modelling pixel intensity in the time domain. The challenge of moving object detection lies in overcoming the effects of environmental variation (illumination changes, swaying leaves, rain and snow, water surface fluctuation, etc.) and of the imaging equipment (electronic noise, camera shake, etc.). Background subtraction methods based on time-domain modelling detect moving targets from the temporal variation of image features (colour, gradient, texture, edges, etc.).
However, the image features at different pixel positions are not isolated from one another; dependencies exist between them, so methods that rely only on temporal change have difficulty handling background perturbation in the scene, and even a multimodal model such as the GMM has difficulty suppressing environmental noise. Methods based on image segmentation (such as random-field-based moving object segmentation) can suppress isolated noise, but they depend on the initial detection result: when the initial detection is seriously wrong, an accurate segmentation is still hard to obtain, and the real-time performance of such algorithms is poor. Methods based on time-space domain models take full account of the spatio-temporal consistency of the image colour distribution and perform joint modelling in the time-space domain; they show good performance on dynamic scenes with background perturbation. However, because they must process a large amount of time-space domain data, their computational complexity is high, their memory requirement is large and their real-time performance is poor; moreover, because of isolated-noise interference, the raw detection result still needs post-processing such as morphological filtering or image segmentation before an acceptable result is obtained.
As video surveillance systems evolve from the analogue era to the network era, cameras are also becoming intelligent: more and more intelligent video processing algorithms, including moving object detection, need to be ported to smart cameras for embedded implementation. However, the existing detection algorithms that can cope with environmental noise in dynamic scenes have high computational complexity and large memory requirements, and are therefore difficult to deploy on embedded smart camera platforms. Aiming at the practical application of intelligent video surveillance, and at the problem of environmental-noise interference with moving object detection in dynamic scenes, we therefore propose a dynamic-scene moving object detection method based on space-time condition information. The method suppresses environmental-noise interference in dynamic scenes and detects moving targets effectively; it uses an image-block strategy to accelerate the detection, which reduces the algorithmic complexity, improves the real-time performance and reduces the memory requirement. As a result, the method not only achieves real-time detection of moving targets in dynamic scenes on an ordinary PC platform, but is also suitable for embedded smart camera platforms.
Moving object detection is in essence a two-class classification problem: taking the background sequence as the reference condition, each pixel of the current observed image is classified as foreground (also called target in the present invention) or background. For reasons of computational complexity, most existing detection algorithms use a linear classifier on the image pixels to segment the foreground of the current image. In a dynamic scene, however (such as a fluctuating water surface or swaying leaves), the perturbed background (the fluctuating water surface, the swaying leaves) and the foreground are often not linearly separable. Taking the detection of a floating object on a fluctuating water surface as an example (b in Fig. 1), one finds that background subtraction, the Gaussian mixture model and the nonparametric model all suffer, to some extent, from this lack of linear separability between background and foreground.
Moving object detection by background subtraction: the input image and the reference background image are subtracted to obtain a difference image; this is used as the classification feature, and a binarisation operation (the simplest two-class classifier) detects the moving target. As shown in Fig. 1, the difference between the current input image (b in Fig. 1) and the reference background (a in Fig. 1) yields the background difference image (c in Fig. 1). The colour histograms of the background region and of the foreground region of this difference image are then computed separately; the degree of separation between these two histograms reflects the linear separability of background and foreground. The histograms are obtained as follows: the target region in each frame is marked by hand in advance, giving a moving-target mask template (d in Fig. 1); the foreground histogram is computed from the difference values inside the mask region, and the background histogram from the difference values outside it (b0 in Fig. 2); in the same way, the difference-image histograms of the target and background regions over the whole video are obtained, as shown at e0 in Fig. 2. From the local enlargements of the lower parts of b0 and e0 (c0 and f0 in Fig. 2) it can be seen that the difference-image histograms of the target region and the background region overlap over a wide range, i.e. the separability of background and foreground is low. Using the background difference image as the detection feature therefore makes linear classification difficult: with this feature, target and background in a dynamic scene are linearly inseparable.
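As a minimal sketch of the background subtraction with binarisation just described (function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def background_subtraction(frame, background, threshold=30):
    """Background subtraction: absolute difference against the reference
    background, then binarisation (the simplest two-class classifier).
    Pixels whose difference exceeds the threshold are foreground (1)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# toy example: flat grey background, one bright 2x2 "moving object"
bg = np.full((4, 4), 100, dtype=np.uint8)
cur = bg.copy()
cur[1:3, 1:3] = 200
mask = background_subtraction(cur, bg)
```

On a real dynamic scene the water-surface perturbation also crosses the threshold, which is exactly the linear-inseparability problem the patent discusses.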
The Gaussian mixture model and the nonparametric model are two typical detection algorithms that model the probability distribution of image colour; both use the conditional probability that a pixel belongs to the background as the classification feature of a linear classifier. Since the nonparametric model can represent an arbitrary probability distribution, we take it as the example for examining the linear separability of foreground and background in methods based on colour-distribution modelling. As shown in Fig. 2, the nonparametric estimate of the conditional probability that the current input image (b in Fig. 1) belongs to the background b gives the feature image a1 in Fig. 2; the histograms of the target and background regions of this feature image, obtained as before, are shown at b1 in Fig. 2, and the whole-video histograms at e1 in Fig. 2. The local enlargements of the lower parts of b1 and e1 (c1 and f1 in Fig. 2) show that, compared with c0 and f0 in Fig. 2, the histogram overlap between target and background is reduced and the linear separability is increased; but the linear margin between target and background is narrow, so the choice of segmentation threshold is easily affected by noise, harming the robustness of the algorithm.
Applying a nonlinear transformation to the conditional probability p(x|b) of the nonparametric model yields the feature image shown at a2 in Fig. 2 and, by the same procedure, the corresponding target and background histograms (b2 and e2 in Fig. 2). The lower-part enlargements (c2 and f2 in Fig. 2) show that the linear margin between target and background is widened: the nonlinear transformation strengthens the linear separability of foreground and background.
The distribution of image features has local consistency: a pixel is not isolated, but related to the pixels within its neighbourhood. The image feature of the current pixel x is influenced by the features of the pixels in its neighbourhood; therefore, taking the weighted sum of the nonlinearly transformed features over the neighbourhood further suppresses isolated noise, widens the linear margin between target and background (b3, c3, e3, f3 in Fig. 2), increases the classification robustness and reduces classification errors.
In Fig. 2, d0 is the detection result of the background difference algorithm, d1 that of the nonparametric model, d2 that of the nonlinearly transformed conditional probability, and d3 that of the neighbourhood-weighted sum of the transformed feature. The results show that applying a nonlinear transformation to the conditional probability and then taking its weighted sum over the neighbourhood suppresses background-perturbation interference in the dynamic scene, reduces isolated-noise pollution and yields a good detection result. The present invention therefore adopts a nonlinear transformation of the image colour probability distribution to strengthen the linear separability of foreground and background in dynamic scenes, and thereby to improve the precision of moving object detection.
Summary of the invention:
Aiming at moving object detection for dynamic scenes in computer vision applications, in particular intelligent video surveillance, where environmental noise such as background perturbation easily causes detection errors, the present invention proposes a moving object detection method based on space-time condition information, intended to suppress the interference of the perturbed background in a dynamic scene and to detect moving targets accurately.
The solution that the present invention proposes is:
1. Build a time-space domain model that takes the spatio-temporal saliency of human vision into account. Using this model and nonparametric probability density estimation, estimate the conditional probability p(x|b) that a pixel x of the image under detection belongs to the reference background sequence b; apply a negative-logarithm kernel to p(x|b) to obtain the space-time condition information I(x|b) of x; taking the influence of the neighbouring pixels on x into account, form the weighted sum of the condition information over the neighbourhood of x, and use it as the feature with which a linear classifier separates target and background;
2. In the time-space domain model, use the colour histogram of the reference background region of pixel x in the current image as the reference background probability distribution when computing the condition information;
3. Optimise the preceding method with an image-block strategy: use image blocks (Image Block, abbreviated IB) instead of single pixels for background modelling and detection;
4. Use the colour histogram of each image block as its background model, replacing the buffered background image sequence of the time-space domain model and thereby reducing the data storage requirement;
5. Use the reference background colour histogram of an image block as the background model shared by all pixels of that block; compute the condition information, take the weighted sum, and detect block by block;
6. Use an image-block difference pre-detection mechanism: the image blocks that have changed are pre-detected as candidate regions, reducing the amount of data the condition-information block detector must process;
7. When an image block is detected as background, update its reference background colour histogram with the colour histogram of the current frame's block, and in addition randomly select one image block in its neighbourhood and update it by the same method; when a block is detected as target, do not update.
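Step 7's update policy can be sketched as follows. The running-average blend with factor `alpha` is our assumption (the patent only specifies that a background block's histogram is refreshed from the current frame, that one random neighbouring block is updated the same way, and that foreground blocks are left untouched); all names are illustrative:

```python
import random
import numpy as np

def update_block_models(models, block_hists, labels, alpha=0.05, rng=random):
    """Sketch of the block-wise model update (step 7).

    models, block_hists : (rows, cols, bins) arrays of reference and
                          current-frame block colour histograms.
    labels              : (rows, cols) array, 1 = block detected as target.
    For every background block, blend the current histogram into its model
    and also refresh one randomly chosen neighbouring block."""
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:        # target block: model is not updated
                continue
            models[r, c] = (1 - alpha) * models[r, c] + alpha * block_hists[r, c]
            # randomly propagate the update to one neighbouring block
            nr = r + rng.choice([-1, 0, 1])
            nc = c + rng.choice([-1, 0, 1])
            if 0 <= nr < rows and 0 <= nc < cols:
                models[nr, nc] = (1 - alpha) * models[nr, nc] \
                                 + alpha * block_hists[nr, nc]
    return models

# toy demo: all-foreground labels leave the models unchanged,
# all-background labels with alpha=1 replace them outright
models = np.ones((2, 2, 4))
hists = np.zeros((2, 2, 4))
kept = update_block_models(models.copy(), hists, np.ones((2, 2), dtype=int))
replaced = update_block_models(models.copy(), hists,
                               np.zeros((2, 2), dtype=int), alpha=1.0)
```

The random neighbour propagation is what lets the model adapt around a stationary foreground object without absorbing the object itself.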
The moving object detection method based on space-time condition information proposed by the present invention has the following main advantages:
1. The negative-logarithm nonlinear transformation of the conditional probability widens the linear classification margin when a linear classifier is used for target detection, strengthens the linear separability of target and background, and improves the robustness of the detection.
2. The negative-logarithm transformation of the conditional probability has a clear physical meaning: it is the condition information I(x|y), the uncertainty of the variable x given the condition y. In video, with the reference background b as the condition, the condition information I(x|b) of the current observation x measures how well the reference background determines the observation. In a dynamic scene, the reference background b fully determines the unchanged regions, partly determines the regions changed by background perturbation, and hardly determines the regions changed by target motion. Condition information is therefore a classification feature with which the perturbed background and the moving targets can be separated linearly.
3. The weighted summation of the condition information suppresses the influence of isolated noise, strengthens the resistance to background-perturbation interference, further improves the linear separability of target and background, and reduces the classification error.
4. The time-space domain model is built with the spatio-temporal saliency of human vision in mind; it matches the characteristics of human visual psychology and readily extracts the motion information of interest.
5. Using image blocks instead of single pixels for detection reduces the algorithmic complexity and the memory requirement. The image-block difference pre-detection mechanism filters out the unchanged regions of the image in advance with a simple algorithm, which reduces the computation of the subsequent target detection and speeds up the algorithm.
6. The adopted model update method adapts effectively to changes of scene illumination without letting target information leak into the reference background, and it avoids the missed detection of the trailing part of a slowly moving target that a sliding-window update method can produce.
7. In summary, the present invention not only achieves effective detection of moving targets in dynamic scenes, but also overcomes the shortcomings of existing dynamic-scene detection methods, namely high algorithmic complexity, poor real-time performance, large memory requirement and unsuitability for embedded implementation. It achieves real-time detection of moving targets in dynamic scenes on existing computer platforms and is suitable for embedded smart camera platforms.
Description of drawings:
Fig. 1 is a schematic diagram of the background subtraction algorithm;
wherein a is the background image;
b is the input image;
c is the difference image;
d is the target mask template; the darker central region is the foreground and the remaining lighter region is the background.
Fig. 2 compares the linear separability of foreground and background in the moving object detection algorithms;
wherein a0-a3 are, respectively, the background difference feature image, the conditional probability density feature image, the condition information feature image and the weighted condition information feature image;
b0-b3 are the feature distribution histograms of a0-a3, computed with the target mask template d of Fig. 1 as reference;
c0-c3 are local enlargements of the bottom parts of b0-b3;
d0-d3 are the linear classification results of the feature images a0-a3;
e0-e3 are the foreground and background feature distribution histograms of the four feature images over the whole video;
f0-f3 are local enlargements of the bottom parts of e0-e3.
Fig. 3 is the centre-surround visual saliency model;
wherein a is an image of an object floating on water, 1 being the central region and 2 the surrounding reference region;
b is the Difference-of-Gaussians model;
c is the visual saliency map of a extracted under the Difference-of-Gaussians model b; brighter areas of the image indicate higher saliency.
Fig. 4 is the visual-saliency time-space domain model;
wherein 1 is the central region, corresponding to the central region 1 shown at a in Fig. 3;
2 is the surrounding reference region, corresponding to the surrounding reference region 2 shown at a in Fig. 3;
3 is a pixel of the image;
4 is the reference background region of pixel 3.
Fig. 5 is a schematic diagram of image-block detection;
wherein 3 is a pixel of the image;
4 is the reference background region corresponding to the image block, and corresponds to 4 in Fig. 4;
5 is an image block;
6 is the whole image.
In the above drawings:
1 - central region; 2 - surrounding reference region; 3 - pixel; 4 - reference background region; 5 - image block; 6 - image
Embodiment:
The present invention proposes: build a time-space domain model that takes the spatio-temporal saliency of human vision into account; with this model and nonparametric probability density estimation, estimate the conditional probability p(x|b) that a pixel x of the image under detection belongs to the reference background sequence b; apply a negative-logarithm kernel to p(x|b) to obtain the space-time condition information I(x|b) of x; taking the influence of neighbouring pixels on x into account, form the weighted sum of the condition information over the neighbourhood of x; and use this as the feature with which a linear classifier separates target and background, completing the moving object detection. To reduce the algorithmic complexity, increase the speed and reduce the memory requirement, we optimise the method with an image-block strategy. The optimised method reaches 26 fps (frames per second) on a dynamic-scene video of 640*480 resolution on a computer with a dual-core Intel Pentium Dual CPU E2180 at 2.0 GHz and 1 GB of RAM, satisfying the real-time application requirement.
We build the time-space domain model with the spatio-temporal saliency of human vision in mind, modelling the reference background for estimating its colour distribution and for detecting targets in the input image. The saliency of the human visual system appears in both the spatial and the temporal domain. Spatial saliency is reflected in the fact that, when observing an image, the eye attends to highly salient regions and ignores regions of low saliency. The receptive field of the retinal ganglion cells behaves as a centre-surround model, i.e. a Difference-of-Gaussians model (Difference of Gaussians; b in Fig. 3). Under this model, the more pronounced the difference between centre and surround, the larger the receptive-field response and the higher the visual saliency of the corresponding image region. As shown in Fig. 3, the image of an object floating on water (a in Fig. 3) yields, under the Difference-of-Gaussians model (b in Fig. 3), the spatial saliency map shown at c in Fig. 3. It can be seen from c that the plastic bottle floating on the water differs markedly from its surroundings (the water surface), so its response under the Difference-of-Gaussians model is large and its saliency high, whereas the fluctuating water surface differs little from its surroundings (also water), so its response is small and its saliency low. Temporal saliency is reflected in the fact that, while observing, the eye easily ignores changes that occur periodically (such as swaying leaves or a fluctuating water surface) and pays particular attention to novel (sudden) changes, such as a moving target against a perturbed background. A video is an image sequence in temporal order; it therefore exhibits both spatial saliency (each single image has its own) and temporal saliency (the change of image content over time). In a video, frequently recurring change has low saliency and newly appearing change has high saliency. Compared with the perturbed background (frequently recurring change), a moving target usually appears as new change and has higher saliency. Taking the temporal and spatial saliency of human vision into account in the detection task therefore effectively suppresses background perturbation and improves the detection of moving objects in dynamic scenes.
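As an illustration of the centre-surround mechanism (this is our sketch, not code from the patent; the sigma values are arbitrary), a Difference-of-Gaussians saliency map responds strongly where a region differs from its surroundings and weakly on uniform areas:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along rows then columns."""
    k = gaussian_kernel_1d(sigma, int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_saliency(img, sigma_center=1.0, sigma_surround=3.0):
    """Centre-surround (Difference-of-Gaussians) saliency map: the absolute
    difference between a narrow (centre) and a wide (surround) blur."""
    return np.abs(blur(img, sigma_center) - blur(img, sigma_surround))

# a uniform image has (near-)zero interior saliency; an isolated bright
# spot, like the bottle against the water, responds strongly
flat = np.full((30, 30), 50.0)
spot = np.zeros((30, 30))
spot[15, 15] = 100.0
sal_flat = dog_saliency(flat)
sal_spot = dog_saliency(spot)
```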
As shown in Fig. 4, the neighbourhood 1 of pixel 3 in the input image (CurImg) is taken as the central region of the centre-surround visual attention model, the surrounding region 2 of neighbourhood 1 as the surround of that model, and the outer boundary of this surround as the spatial extent 4 of the reference background sequence. The N-frame background sequence (BckSeq; marked as the diagonally hatched region 4 in Fig. 4) serves as the time-space domain reference background of pixel 3, i.e. as the reference condition for computing the space-time condition information of pixel 3.
Computing the space-time condition information requires the conditional probability p(x|b) that the pixel value x belongs to the reference background. We compute this conditional probability with the Kernel Density Estimation (KDE) method of the nonparametric model, whose general form is given by Formula 1.
p(x|S) = (1/|S|) Σ_{s∈S} K(s − x)    (Formula 1)
Here K is a kernel function satisfying ∫K(x)dx = 1, K(−x) = K(x), ∫xK(x)dx = 0 and ∫x xᵀK(x)dx = I; x is the observed datum, S is the reference data set and |S| is the normalisation factor, the number of data items contained in the reference set S. Instead of the commonly used Gaussian kernel we adopt the kernel δ(s − x) for the density estimation, as shown in Formula 2.
p(x|S) = (1/|S|) Σ_{s∈S} δ(s − x)    (Formula 2)
The kernel δ(s − x) can be evaluated rapidly with a statistical histogram; the conditional probability p(x|b) that pixel x belongs to the reference background b can therefore be computed quickly from the colour histogram of the reference background, as shown in Formula 3. H is the reference background colour histogram, H(x) is the value of pixel x in the colour histogram H, and normalising H(x) gives the conditional probability p(x|b) that pixel x belongs to the background; |H| is the normalisation factor, obtained by summing all values of the histogram H.
p(x|b) = H(x) / |H|    (Formula 3)
To avoid the distortion of the probability density estimate caused by histogram sawtooth, we smooth the histogram with a Gaussian convolution kernel g (see Formula 4, where H is the reference background colour histogram), which improves the accuracy of the density estimate.
H = H * g    (Formula 4)
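Formulas 2-4 can be sketched in a few lines; the bin count, the example pixel values and the smoothing sigma below are illustrative choices of ours:

```python
import numpy as np

def smooth_histogram(hist, sigma=1.0):
    """Formula 4 sketch: convolve the reference histogram with a Gaussian
    kernel g to avoid a jagged (sawtooth) probability-density estimate."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    return np.convolve(hist, g, mode="same")

def conditional_probability(x, hist):
    """Formula 3: p(x|b) = H(x) / |H|, |H| being the sum over all bins."""
    return hist[x] / hist.sum()

# reference background pixels accumulated into a 256-bin colour histogram
# (formula 2: the delta kernel reduces KDE to histogram counting)
bg_pixels = np.array([100] * 90 + [101] * 10)
H, _ = np.histogram(bg_pixels, bins=256, range=(0, 256))
H = smooth_histogram(H.astype(float))

p_background_colour = conditional_probability(100, H)  # common colour: high
p_novel_colour = conditional_probability(200, H)       # unseen colour: zero
```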
Applying a nonlinear transformation to the conditional probability p(x|b) of the pixels of the image under detection widens the linear classification margin between target and background in the dynamic scene and strengthens the robustness of the algorithm. Possible nonlinear transformations include the exponential, triangular and negative-logarithm transformations. The purpose of the transformation is to widen the classification margin between foreground and background: with reference to c1 in Fig. 2, the low-value interval on the left of the histogram must be stretched nonlinearly and the high-value interval on the right compressed nonlinearly. The negative-logarithm transformation has exactly this behaviour, stretching the low-value interval and compressing the high-value interval, so we adopt a negative-logarithm kernel for the transformation of p(x|b). In information theory, taking the negative logarithm of the conditional probability p(x|b) yields the condition information I(x|b) of the variable x under the condition b, which has a clear physical meaning: the uncertainty of x given b. In video, with the reference background b as the condition, the condition information I(x|b) of the current observation x measures how well the reference background determines the observation. In a dynamic scene, the reference background b fully determines the unchanged regions, partly determines the regions changed by background perturbation, and hardly determines the regions changed by target motion. Condition information is therefore a very effective classification feature for dynamic-scene detection, allowing the perturbed background and the moving targets to be separated linearly.
Applying the negative logarithmic kernel to the conditional probability p(x|b) yields a new image feature I(x|b), as given in Formula 5. Image features are locally coherent: a pixel is not isolated but correlated with the pixels in its neighborhood, so the feature at pixel x is affected by the features of the neighboring pixels. As in Formula 6, we take the weighted sum of the conditional information I(x_kl|b) of all pixels x_kl in the neighborhood of x as the final conditional information I′(x|b) of pixel x, and then apply the linear classifier of Formula 7, separating foreground from background with threshold τ (generally 5). In Formula 6, α_kl are the weights; one may use uniform weights α_kl = 1/(BL×BL) (BL is the neighborhood width), a Gaussian kernel, or the proportion of x_kl in the neighborhood color distribution as the weight. In the present invention we use uniform weights.
I(x|b) = −log p(x|b)    (Formula 5)
I′(x|b) = −Σ_{k=1..BL} Σ_{l=1..BL} α_kl · log p(x_kl|b)    (Formula 6)
x = { 1, if I′(x|b) > τ; 0, otherwise }    (Formula 7)
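As an illustration, Formulas 5 through 7 can be sketched as follows. The dictionary histogram, the uniform weights, the floor value `eps`, and the threshold value are assumptions made for the example, not the patent's exact implementation.

```python
import math

def conditional_information(value, background_hist, eps=1e-6):
    """Formula 5: I(x|b) = -log p(x|b). Low for colors the reference
    background explains well, high for colors it cannot explain."""
    p = background_hist.get(value, 0.0)
    return -math.log(max(p, eps))  # eps guards against log(0)

def weighted_classify(neighborhood, background_hist, tau=5.0):
    """Formulas 6-7 with uniform weights alpha_kl = 1/(BL*BL):
    average the conditional information over the neighborhood,
    then threshold with the linear classifier."""
    info = sum(conditional_information(v, background_hist)
               for v in neighborhood) / len(neighborhood)
    return 1 if info > tau else 0

# A reference background whose colors cluster around two gray levels:
hist = {100: 0.5, 101: 0.5}
print(weighted_classify([100, 101, 100, 101], hist))  # background -> 0
print(weighted_classify([200, 210, 205, 199], hist))  # unexplained -> 1
```

The averaging step is what suppresses isolated noise: a single outlier pixel cannot push the neighborhood mean past τ on its own.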
The object detection method based on space-time conditional information described above must compute a reference-background color histogram for every pixel and take a weighted sum of the conditional information over every pixel neighborhood; its computational complexity is high and its real-time performance poor. Below, we model and accelerate detection in units of image blocks instead of single pixels, reducing algorithmic complexity, reducing storage requirements, and improving speed.
A moving target in video is locally coherent in both the time domain and the spatial domain, and an image block embodies this local coherence better than a single pixel. Modeling the reference background and detecting moving objects in units of image blocks therefore not only reduces algorithmic complexity, computation, and storage requirements, but also suppresses isolated noise better, without degrading detection accuracy.
Unlike the preceding method, which builds a reference background domain for each pixel, we partition the image into blocks and let the background color distribution of each block serve as the shared reference distribution of all pixels inside it, reducing the number of color histograms and the complexity of the algorithm. As shown in Fig. 5, all pixels in image block 5 share reference background domain 4: the conditional information of each pixel is computed against the shared background color distribution of domain 4, the conditional information of all pixels in block 5 is summed with weights to form the classification feature of block 5, and the block is classified with the linear classifier, using the same classification method as before (Formula 7).
The time-space domain model described above (Fig. 4) must cache an N-frame background image sequence and recompute the reference-background color histogram at every detection. Caching N frames requires considerable storage and is difficult to use on embedded platforms with limited memory. However, the reference background color distribution does not change at every frame, so there is no need to recompute the histogram from the N cached frames at every detection. Instead, for each image block we compute the color histogram over the N cached frames and store it as the block's background data in place of the cached frames themselves, reducing storage requirements; illumination changes in the scene are handled simply by updating the histogram. At detection time, summing the color histograms over the reference background region 4 corresponding to image block 5 directly yields the reference background distribution of block 5.
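A minimal sketch of this block-level background model: each block stores a quantized color histogram accumulated over the cached frames, and a block's reference distribution is obtained by summing the stored histograms of its reference region. The bin count, quantization scheme, and data layout here are illustrative assumptions.

```python
def block_histogram(frame, top, left, bl, bins=8, levels=256):
    """Quantized color histogram of one BL x BL block of a grayscale frame."""
    hist = [0] * bins
    for r in range(top, top + bl):
        for c in range(left, left + bl):
            hist[frame[r][c] * bins // levels] += 1
    return hist

def reference_distribution(region_hists):
    """Sum the stored per-block histograms of the reference region and
    normalize; this replaces re-scanning the N cached background frames."""
    total = [sum(bin_vals) for bin_vals in zip(*region_hists)]
    n = sum(total)
    return [v / n for v in total]

frame = [[0, 0], [255, 255]]          # one 2x2 block, two gray levels
h = block_histogram(frame, 0, 0, 2)   # counts fall in bins 0 and 7
p = reference_distribution([h, h])    # two cached frames, same content
```

Storing one small histogram per block instead of N full frames is the storage saving the text describes: memory grows with the number of blocks and bins, not with N.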
In practical video surveillance applications, most regions of the image are static scenes most of the time, i.e. the image there is stable and changes little, as with building surfaces, the ground, and other motionless objects. For the large static regions present in the image, there is no need to apply a detection method designed for dynamic scenes; the simplest background subtraction can decide whether such a region has changed, and reducing the amount of data processed by the conditional-information method improves the speed of the practical application.
Image differencing is the simplest method for detecting changed regions. Differencing can pre-select the changed regions of the image as candidate regions, which the conditional-information method then processes further to decide whether they are perturbed background or a real moving target. Image-block differencing accounts for the space-time local coherence of target motion; compared with per-pixel differencing, it suppresses isolated noise better and combines easily with the block conditional-information detection method. Block-difference pre-detection quickly finds the changed blocks in the image, which serve as the candidate detection regions, reducing the amount of data the conditional-information detection must process. The image-block difference is the SAD (Sum of Absolute Differences) of each block: as in Formula 8, sum the absolute differences between the input image and the background image over the block at the corresponding position, where BL is the block width and m, n denote the block's position in the image. Binarizing the block-difference result (Formula 9, where T is the binarization threshold) pre-detects the changed regions of the image as candidate detection regions.
SAD(m, n) = Σ_{x=1..BL} Σ_{y=1..BL} | I(m·BL + x, n·BL + y) − B(m·BL + x, n·BL + y) |    (Formula 8)
IB = { 1, if SAD > T; 0, otherwise }    (Formula 9)
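Formulas 8 and 9 can be sketched directly (0-based indexing instead of the patent's 1-based indices; grayscale images as nested lists; the threshold value is a hypothetical parameter):

```python
def block_sad(img, bg, m, n, bl):
    """Formula 8: sum of absolute differences between input image I and
    background image B over block (m, n) of width BL."""
    return sum(abs(img[m * bl + x][n * bl + y] - bg[m * bl + x][n * bl + y])
               for x in range(bl) for y in range(bl))

def predetect(img, bg, bl, threshold):
    """Formula 9: binarize the per-block SAD; 1 marks a candidate block
    that the conditional-information stage must re-examine."""
    rows, cols = len(img) // bl, len(img[0]) // bl
    return [[1 if block_sad(img, bg, m, n, bl) > threshold else 0
             for n in range(cols)] for m in range(rows)]

bg = [[0] * 4 for _ in range(4)]
img = [row[:] for row in bg]
img[0][0] = img[0][1] = img[1][0] = img[1][1] = 100  # change in top-left block
mask = predetect(img, bg, bl=2, threshold=50)
# Only the changed block becomes a candidate: [[1, 0], [0, 0]]
```

Static blocks drop out here, so the costlier conditional-information stage runs only on the few candidate blocks.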
To adapt to scene illumination changes, the background model must be updated. The update comprises two parts: updating the block background color histograms used by the conditional-information detection method, and updating the reference background used by the block-difference pre-detection algorithm. For the color histograms, the concrete update strategy is: perform the histogram update (Formula 10) only when the current detection region is background, and when updating the background color histogram of the current block, also selectively update the background color histograms of the blocks in its neighborhood. Specifically, with a certain probability, randomly select one image block in the neighborhood of the current block and update the selected block's background color histogram from the current frame with the same method (Formula 10). Updating the model only when the current detection region is background adapts to illumination changes without mistakenly folding moving-target information into the background, and effectively avoids the missed detections caused by sliding-time-window updates that incorporate target information into the reference background. Selectively updating the neighborhood background when the current region is updated overcomes the false detections caused by targets moving in and out when only background regions are updated and foreground regions are not: a region detected as foreground is updated via its background neighbors, so after some accumulated time the background of the regions a target has entered or left is fully refreshed.
Formula 10 takes the weighted sum of a block's background color histogram H_mn and the color histogram H_mnc of the same block in the current frame as the new reference background distribution H_mn, where β0 is the weight; the larger its value, the faster the update.
H_mn = H_mn · (1 − β0) + H_mnc · β0    (Formula 10)
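The histogram update of Formula 10, together with the randomized neighborhood update described above, might be sketched as follows; the neighbor-selection probability and the rate value are assumed parameters for illustration.

```python
import random

def update_histogram(h_bg, h_cur, beta0=0.05):
    """Formula 10: blend the stored background histogram with the current
    frame's block histogram; a larger beta0 means a faster update."""
    return [b * (1 - beta0) + c * beta0 for b, c in zip(h_bg, h_cur)]

def update_with_neighbors(h_bg, h_cur, neighbor_hists, prob=0.25, beta0=0.05):
    """Update the current block and, with probability `prob`, one randomly
    chosen neighbouring block (the selective neighbourhood update)."""
    h_new = update_histogram(h_bg, h_cur, beta0)
    if neighbor_hists and random.random() < prob:
        i = random.randrange(len(neighbor_hists))
        neighbor_hists[i] = update_histogram(neighbor_hists[i], h_cur, beta0)
    return h_new
```

The exponential blend means the histogram tracks gradual illumination drift while old background evidence decays geometrically.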
Unlike the histogram update in the conditional-information method, the reference background of the block-difference pre-detection is updated with a strategy of fast background update and slow foreground update. Based on the final target detection result, Formula 11 is applied: a block detected as foreground is updated slowly at rate β1, and a block detected as background is updated quickly at rate β2, where β1 < β2. Because the block difference is only a pre-detection, we update the foreground as well, not only the background, so as to adapt better to scene changes; the foreground update rate is simply very slow. Practical tests show that this method does not cause missed targets, and that it adapts well to scene changes and handles targets moving in and out.
b = { x · β1 + b · (1 − β1), if x is detected as foreground; x · β2 + b · (1 − β2), otherwise }    (Formula 11)
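Following the same convention as Formula 10 (a larger rate means a faster update), the dual-rate reference update can be sketched per pixel; the rate values below are illustrative assumptions.

```python
def update_reference(bg_val, cur_val, is_foreground, beta1=0.01, beta2=0.10):
    """Formula 11 sketch: blend the current pixel into the reference
    background slowly (beta1) where the final detection says foreground,
    quickly (beta2) where it says background; beta1 < beta2."""
    beta = beta1 if is_foreground else beta2
    return cur_val * beta + bg_val * (1 - beta)

# A pixel covered by a target creeps into the reference slowly,
# while an exposed background pixel is absorbed quickly.
slow = update_reference(0.0, 100.0, is_foreground=True)   # about 1.0
fast = update_reference(0.0, 100.0, is_foreground=False)  # about 10.0
```

The slow foreground rate is what eventually absorbs a target that stops or leaves, without letting a moving one corrupt the reference.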
The basic procedure of the dynamic-scene moving target detection method based on space-time conditional information is as follows:
1. Model initialization
Extract a background image as the reference background for block-difference pre-detection. Partition the image into blocks and, on the reference background image, compute the color histogram of each block IB_mn in the background image as that block's initial background color histogram H_mn.
2. Block-difference pre-detection
Compute the sum of absolute differences of each block with Formula 8, then threshold it with Formula 9 to pre-detect the changed blocks of the image as candidate detection regions.
3. Secondary detection of candidate blocks
On the candidate detection regions produced by the block-difference pre-detection of step 2, perform secondary detection using conditional information. First compute the reference background histogram H of each candidate block, then compute the conditional information of all pixels in the block and take its weighted sum, and finally binarize with Formula 7 to obtain the target detection result image (BinImg).
4. Model update
Update the model according to the target detection result (BinImg), including the reference background and the block color histograms. Update the reference background according to Formula 11. To update the block color histograms, first compute the color distribution histogram H_mnc of each block in the current frame, then update the block background color histograms with the update method described above.

Claims (8)

1. A moving target detection method based on space-time conditional information, characterized in that the conditional probability p(x|b) that a pixel (3) in the detected image belongs to the reference background is nonlinearly transformed and used as the classification feature for moving object detection, and a linear classifier separates foreground from background in the dynamic scene.
2. The moving target detection method based on space-time conditional information according to claim 1, characterized in that the negative logarithm is applied to the conditional probability p(x|b), yielding the conditional information I(x|b) of the pixel (3) under the reference background condition as the image classification feature.
3. The moving target detection method based on space-time conditional information according to claim 2, characterized in that the weighted conditional information I(x|b) of the pixels in the neighborhood of pixel (3) is used as the final classification feature for detecting pixel (3).
4. The moving target detection method based on space-time conditional information according to claim 1 or 3, characterized in that a center-surround time-space domain model is used to compute the conditional probability p(x|b) and, from it, the weighted conditional information of pixel (3). The model builds a central region (1) around the current detection pixel (3) and determines the reference background from the surrounding region (2) corresponding to that central region (1): all N−1 frames of the background sequence image BckSeq (4) within the outer boundary of the surrounding region (2), together with the surrounding region (2) of the current detection image CurImg, serve as the reference background b for computing p(x|b). The central region (1) serves as the neighborhood of pixel (3) for computing the weighted conditional information.
5. The moving target detection method based on space-time conditional information according to claim 4, characterized in that the image is partitioned into blocks and detection is accelerated by operating on image blocks (5) instead of single pixels (3), with the weighted sum of the conditional information of the pixels x in a block used as that block's feature for detection and classification.
6. The moving object detection algorithm based on space-time conditional information according to claim 5, characterized in that the block color distribution histogram directly models the reference background probability distribution: the color distribution histogram H of the image block over the reference background sequence serves as the probability distribution p(b) of reference background b, and the conditional probability p(x|b) is computed directly from the histogram H.
7. The moving object detection algorithm based on space-time conditional information according to claim 5, characterized in that block-difference pre-detection is used to pre-detect the changed blocks of the image as candidates, which are then subjected to secondary detection using conditional information.
8. The moving target detection method based on space-time conditional information according to claim 6, characterized in that when an image block is detected as background, its reference background color histogram is updated, and an image block in its neighborhood is selected at random and the selected block's color histogram is updated as well.
CN2012102513544A 2012-07-19 2012-07-19 Time-space condition information based moving object detection method Pending CN102903120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102513544A CN102903120A (en) 2012-07-19 2012-07-19 Time-space condition information based moving object detection method


Publications (1)

Publication Number Publication Date
CN102903120A true CN102903120A (en) 2013-01-30

Family

ID=47575333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102513544A Pending CN102903120A (en) 2012-07-19 2012-07-19 Time-space condition information based moving object detection method

Country Status (1)

Country Link
CN (1) CN102903120A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0737100A (en) * 1993-07-15 1995-02-07 Tokyo Electric Power Co Inc:The Moving object detection and judgement device
CN101482923A (en) * 2009-01-19 2009-07-15 刘云 Human body target detection and sexuality recognition method in video monitoring
CN102254394A (en) * 2011-05-31 2011-11-23 西安工程大学 Antitheft monitoring method for poles and towers in power transmission line based on video difference analysis


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YASER SHEIKH et al.: "Bayesian Modeling of Dynamic Scenes for Object Detection", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 27, no. 11, 30 November 2005 (2005-11-30), pages 1778-1792, XP001512568, DOI: 10.1109/TPAMI.2005.213 *
SHAN Yong: "Video Moving Target Detection and Tracking under Complex Conditions" (复杂条件下视频运动目标检测和跟踪), China Doctoral Dissertations Full-text Database, no. 06, 15 December 2007 (2007-12-15), pages 17-18 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI493160B (en) * 2013-05-13 2015-07-21 Global Fiberoptics Inc Method for measuring the color uniformity of a light spot and apparatus for measuring the same
CN104168405A (en) * 2013-05-20 2014-11-26 聚晶半导体股份有限公司 Noise reduction method and image processing device
CN104168405B (en) * 2013-05-20 2017-09-01 聚晶半导体股份有限公司 Noise suppressing method and its image processing apparatus
CN104408742B (en) * 2014-10-29 2017-04-05 河海大学 A kind of moving target detecting method based on space time frequency spectrum Conjoint Analysis
CN104408742A (en) * 2014-10-29 2015-03-11 河海大学 Moving object detection method based on space-time frequency spectrum combined analysis
CN104616323A (en) * 2015-02-28 2015-05-13 苏州大学 Space-time significance detecting method based on slow characteristic analysis
CN104616323B (en) * 2015-02-28 2018-02-13 苏州大学 A kind of time and space significance detection method based on slow signature analysis
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Motion-feature-fused space-time significance detection method
CN105551014A (en) * 2015-11-27 2016-05-04 江南大学 Image sequence change detection method based on belief propagation algorithm with time-space joint information
CN105631898A (en) * 2015-12-28 2016-06-01 西北工业大学 Infrared motion object detection method based on spatio-temporal saliency fusion
CN105631898B (en) * 2015-12-28 2019-04-19 西北工业大学 The infrared motion target detection method that conspicuousness merges when based on sky
CN106447656B (en) * 2016-09-22 2019-02-15 江苏赞奇科技股份有限公司 Rendering flaw image detecting method based on image recognition
CN106447656A (en) * 2016-09-22 2017-02-22 江苏赞奇科技股份有限公司 Rendering flawed image detection method based on image recognition
CN108133488A (en) * 2017-12-29 2018-06-08 安徽慧视金瞳科技有限公司 A kind of infrared image foreground detection method and equipment
CN109886132B (en) * 2019-01-25 2020-12-15 北京市遥感信息研究所 Method, device and system for detecting target of cloud sea background airplane
CN109886132A (en) * 2019-01-25 2019-06-14 北京市遥感信息研究所 A kind of sea of clouds background Aircraft Targets detection method, apparatus and system
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN111476815A (en) * 2020-04-03 2020-07-31 浙江大学 Moving target detection method based on color probability of moving area
CN112101148A (en) * 2020-08-28 2020-12-18 普联国际有限公司 Moving target detection method and device, storage medium and terminal equipment
CN113542588A (en) * 2021-05-28 2021-10-22 上海第二工业大学 Anti-interference electronic image stabilization method based on visual saliency
CN115200544A (en) * 2022-07-06 2022-10-18 中国电子科技集团公司第三十八研究所 Method and device for tracking target of maneuvering measurement and control station
CN115359085A (en) * 2022-08-10 2022-11-18 哈尔滨工业大学 Dense clutter suppression method based on detection point space-time density discrimination
CN115359085B (en) * 2022-08-10 2023-04-04 哈尔滨工业大学 Dense clutter suppression method based on detection point space-time density discrimination

Similar Documents

Publication Publication Date Title
CN102903120A (en) Time-space condition information based moving object detection method
Modava et al. Coastline extraction from SAR images using spatial fuzzy clustering and the active contour method
US9767570B2 (en) Systems and methods for computer vision background estimation using foreground-aware statistical models
Fu et al. Centroid weighted Kalman filter for visual object tracking
Chen et al. A hierarchical model incorporating segmented regions and pixel descriptors for video background subtraction
Zhang et al. A vehicle detection algorithm based on three-frame differencing and background subtraction
WO2018032660A1 (en) Moving target detection method and system
Wang et al. Ship detection in SAR images via local contrast of Fisher vectors
Qu et al. A pedestrian detection method based on yolov3 model and image enhanced by retinex
CN104835179A (en) Improved ViBe background modeling algorithm based on dynamic background self-adaption
CN111723644A (en) Method and system for detecting occlusion of surveillance video
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN110889843B (en) SAR image ship target detection method based on maximum stable extremal region
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
Liu et al. Moving detection research of background frame difference based on Gaussian model
Bloisi et al. Parallel multi-modal background modeling
CN111008585A (en) Ship target detection method based on self-adaptive layered high-resolution SAR image
Deng et al. Small target detection based on weighted self-information map
García-González et al. Foreground detection by probabilistic modeling of the features discovered by stacked denoising autoencoders in noisy video sequences
Zhou et al. Foreground detection based on co-occurrence background model with hypothesis on degradation modification in dynamic scenes
Cheng et al. A background model re-initialization method based on sudden luminance change detection
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN105469054A (en) Model construction method of normal behaviors and detection method of abnormal behaviors
Maity et al. Background modeling and foreground extraction in video data using spatio-temporal region persistence features
CN102930541B (en) Background extracting and updating method of video images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHANXI GREEN ELECTRO-OPTIC INDUSTRY TECHNOLOGY INS

Free format text: FORMER OWNER: DEFENSIVE SCIENTIFIC AND TECHNOLOGICAL UNIV., PLA

Effective date: 20130514

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 410073 CHANGSHA, HUNAN PROVINCE TO: 033300 LVLIANG, SHAANXI PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20130514

Address after: 033300 Shanxi city of Lvliang province Liulin County Li Jia Wan Xiang Ge duo Cun Bei River No. 1

Applicant after: SHANXI GREEN OPTOELECTRONIC INDUSTRY SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE (CO., LTD.)

Address before: Zheng Jie, Kaifu District, Hunan province 410073 Changsha inkstone wachi No. 47

Applicant before: National University of Defense Technology of People's Liberation Army of China

ASS Succession or assignment of patent right

Owner name: HUNAN VISIONSPLEND OPTOELECTRONIC TECHNOLOGY CO.,

Free format text: FORMER OWNER: SHANXI GREEN ELECTRO-OPTIC INDUSTRY TECHNOLOGY INSTITUTE (CO., LTD.)

Effective date: 20140110

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 033300 LVLIANG, SHAANXI PROVINCE TO: 410073 CHANGSHA, HUNAN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20140110

Address after: 410073 Hunan province Changsha Kaifu District, 31 Road No. 303 Building 5 floor A Di Shang Yong

Applicant after: HUNAN VISION SPLEND PHOTOELECTRIC TECHNOLOGY Co.,Ltd.

Address before: 033300 Shanxi city of Lvliang province Liulin County Li Jia Wan Xiang Ge duo Cun Bei River No. 1

Applicant before: SHANXI GREEN OPTOELECTRONIC INDUSTRY SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE (CO., LTD.)

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130130