Moving target detection method based on space-time conditional information
Technical field:
The present invention relates to video moving object segmentation in computer vision, and in particular to moving object detection in video surveillance systems.
Background technology:
Video moving target detection is one of the fundamental problems of computer vision applications. It is the foundation of higher-level applications such as moving target tracking, moving target recognition, human-machine interfaces, action recognition, and behavior understanding; it plays an important role in concrete applications such as video surveillance and video retrieval, and will play an ever greater role in fields such as the military, traffic, security, and entertainment.
An intelligent video surveillance system can free people from the heavy burden of video monitoring, reduce manual intervention, relieve the workload of monitoring staff, automatically find moving targets in the monitored environment, automatically recognize and track moving targets, and automatically discover suspicious events in the monitored scene and extract information of interest. All of the intelligent analysis functions of such a system depend on the video moving target detection algorithm. Video moving target detection separates the moving targets in a video from the background in order to extract the moving targets; it is the basic algorithm of an intelligent video surveillance system and the algorithmic foundation of subsequent target tracking, recognition, and suspicious-event detection.
The mainstream moving target detection methods at present are background subtraction and optical flow. The computational complexity of optical flow makes it difficult to apply in practice. Background subtraction is currently the most commonly used and most effective moving target detection method; its core idea is to describe the scene with a suitable model and to detect moving targets by judging changes of the scene against that model. Commonly used background subtraction methods include the Gaussian mixture model (Gaussian Mixture Model, GMM), the nonparametric model (Kernel Density Estimation, KDE), and the codebook model (Code Book); they detect moving targets by modeling pixel intensity in the time domain. The challenge of moving object detection lies in overcoming the influence of environmental changes (illumination changes, swaying leaves, rain and snow, water surface fluctuation, etc.) and of the imaging equipment (electronic noise, camera shake, etc.). The commonly used background subtraction methods based on time-domain modeling detect and locate moving targets from changes of image features (color, gradient, texture, edges, etc.) over time. However, the image features at individual pixel positions in an image are not isolated; there are relationships between them, so relying on temporal changes alone makes it difficult to handle background perturbation in the scene, and even multi-modal models such as the Gaussian mixture model (GMM) have difficulty suppressing the influence of environmental noise. Methods based on image segmentation (moving object segmentation based on random fields) can suppress isolated noise, but segmentation-based methods depend on the initial detection result: when the initial detection is seriously wrong it is still difficult to obtain an accurate segmentation, and the real-time performance of such algorithms is poor. Methods based on space-time domain models fully consider the space-time consistency of the image color distribution and perform moving object detection by joint modeling in the space-time domain, showing good performance when handling background perturbation in dynamic scenes. However, because space-time domain algorithms need to process a large amount of space-time data, their computational complexity is high, their memory requirements are large, and their real-time performance is poor; moreover, because of the influence of isolated noise, the final detection result still needs post-processing such as morphological filtering or image segmentation before a good result is obtained.
As video surveillance systems evolve from the analog era to the network era, cameras are also developing in an intelligent direction, and more and more intelligent video processing algorithms, including moving object detection algorithms, need to be ported to intelligent cameras and implemented in embedded form. However, the existing video moving target detection algorithms that can handle environmental noise in dynamic scenes not only have high computational complexity but also very large memory requirements, and are difficult to use on embedded intelligent camera platforms. For this reason, oriented toward the practical application of intelligent video surveillance systems and toward the problem that moving object detection in dynamic scenes is disturbed by environmental noise, we propose a dynamic-scene moving target detection method based on space-time conditional information. The method can suppress environmental noise interference in dynamic scenes and effectively detect moving targets, and it adopts an image-block strategy to accelerate target detection, reducing algorithm complexity, increasing real-time performance, and reducing memory requirements, so that the moving target detection method based on space-time conditional information can not only detect moving targets in dynamic scenes in real time on existing PC platforms but is also suitable for use on embedded intelligent camera platforms.
In essence, moving object detection is a two-class classification problem: taking the background sequence as the reference condition, the pixels in the currently observed image are classified as foreground (also referred to as target in the present invention) or background. Out of consideration for algorithm complexity, existing moving object detection algorithms mostly adopt linear classifiers to classify the image pixels and segment the foreground of the current image. However, in a dynamic scene (such as a scene with water surface fluctuation or swaying leaves), the perturbed background (the fluctuating water surface, the swaying leaves) and the foreground are often not linearly separable. Taking floating-object detection in a water surface fluctuation scene as an example (b in Fig. 1), it can be found that background subtraction, the Gaussian mixture model, and the nonparametric model all suffer, to some degree, from the problem that background and foreground are not linearly separable.
Background-difference moving target detection: the input image and the reference background image are subtracted to obtain a difference image, which is used as the classification feature, and a binarization operation (the simplest two-class classifier) is applied to detect moving targets. As shown in Fig. 1, the current input image (b in Fig. 1) and the reference background (a in Fig. 1) are differenced to obtain the background difference image (c in Fig. 1); then the background-region color histogram and the foreground-region color histogram of this difference image are computed separately, and the degree of separability between these two histograms reflects the linear separability of background and foreground. The background and foreground region histograms are computed as follows: the target region in each frame is manually labeled in advance to obtain the moving target mask template (d in Fig. 1); then the foreground-region histogram is computed from the difference values of the pixels inside the target mask region, and the background-region histogram is computed from the difference values of the pixels outside the mask region (b0 in Fig. 2). In the same way, the difference-image histograms of the target region and the background region over the whole video are obtained, as shown in e0 in Fig. 2. From the local enlargements of the lower portions of b0 and e0 (c0 and f0 in Fig. 2) it can be seen that the difference-image histograms of the target region and the background region overlap over a large range, that is, the separability of background and foreground is low. Therefore, when the background difference image is used as the moving object detection feature, it is difficult to perform linear classification with a linear classifier; in other words, for moving object detection in dynamic scenes based on background-difference features, target and background are not linearly separable.
The Gaussian mixture model and the nonparametric model are two typical video moving target detection algorithms that model the probability distribution of image color; both use the conditional probability that the image pixel to be detected belongs to the background as the classification feature and apply a linear classifier for detection. Because the nonparametric model can represent an arbitrary probability distribution, we take the nonparametric model as an example and examine the linear separability of foreground and background in background subtraction methods based on color probability distribution modeling. As shown in Fig. 2, the feature image of the conditional probability that the current input image (b in Fig. 1) belongs to the background b, estimated with the nonparametric model, is a1 in Fig. 2; the histograms of the target and background regions on this feature image, obtained with the method described above, are shown in b1 in Fig. 2, and the histograms of the target and background regions over the whole video are shown in e1 in Fig. 2. From the local enlargements of the lower portions of b1 and e1 (c1 and f1 in Fig. 2), compared with c0 and f0 in Fig. 2, the overlap of the target and background histograms has decreased and the linear separability has increased, but the linear margin between target and background is narrow, so the choice of segmentation threshold is easily affected by noise, which harms the robustness of the algorithm.
Applying a nonlinear transform to the conditional probability p(x|b) of the nonparametric model yields the feature image shown in a2 in Fig. 2, and the corresponding target and background region histograms (b2 and e2 in Fig. 2) are obtained in the same way. From the corresponding local enlargements of the lower portions (c2 and f2 in Fig. 2) it can be found that the linear margin between target and background is widened. That is, this nonlinear transform strengthens the linear separability of foreground and background.
The distribution of image features has local coherence; that is, an image pixel is not isolated but is related to the pixels in its neighborhood. The image feature of the current pixel x is influenced by the image features of the pixels in its neighborhood. Therefore, computing a weighted sum of the nonlinearly transformed features within the neighborhood of x can further suppress isolated noise, widen the linear margin between target and background (b3, c3, e3, f3 in Fig. 2), increase the classification robustness, and reduce classification errors.
As shown in Fig. 2, d0 is the detection result of the background difference algorithm, d1 is the detection result of the nonparametric model, d2 is the detection result after the nonlinear transform of the conditional probability, and d3 is the detection result of the weighted sum, within each neighborhood, of the feature image obtained after the nonlinear transform of the conditional probability. From the detection results it can be seen that applying a nonlinear transform to the conditional probability and computing its weighted sum within the neighborhood can suppress background perturbation interference in dynamic scenes, reduce isolated-noise pollution, and obtain good target detection results. Therefore, the present invention applies a nonlinear transform to the image color probability distribution to strengthen the linear separability of foreground and background in dynamic scenes, so as to improve the precision of moving object detection in dynamic scenes.
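As a purely numerical illustration of this effect (the probability values below are hypothetical and chosen only to show the behaviour of the transform): with I(x|b) = −log p(x|b), p = 0.90 gives I ≈ 0.11, p = 0.50 gives I ≈ 0.69, p = 0.01 gives I ≈ 4.61, and p = 0.001 gives I ≈ 6.91. Low-probability (foreground-like) values are thus spread over a wide range while high-probability (background-like) values are compressed near zero, which is exactly the widening of the linear margin described above.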
Summary of the invention:
For computer-vision applications in intelligent video surveillance systems, and especially for moving object detection in dynamic scenes, which is easily disturbed by environmental noise such as background perturbation and therefore produces detection errors, the present invention proposes a moving target detection method based on space-time conditional information oriented toward dynamic scenes, in order to suppress the interference of the perturbed background in dynamic scenes and accurately detect moving targets.
The solution that the present invention proposes is:
1. Considering visual space-time saliency, a space-time domain model is constructed. With this space-time domain model and the nonparametric probability density estimation method, the conditional probability p(x|b) that a pixel x of the image to be detected belongs to the reference background sequence b is estimated; a negative logarithmic kernel function is used to apply a nonlinear transform to the conditional probability p(x|b), giving the space-time conditional information I(x|b) of x; considering the influence of the pixels in the neighborhood of x, a weighted sum of the space-time conditional information of the pixels in the neighborhood of x is computed and used as the feature with which a linear classifier classifies target and background;
2. In the space-time domain model, the color histogram of the reference background region of pixel x in the current image is used as the reference background probability distribution for computing the conditional information;
3. The image-block method is adopted to optimize the foregoing method, using image blocks (Image Block, abbreviated IB) instead of single pixels for background modeling and detection;
4. The color histogram of an image block is used as its background model, replacing the cached background image sequence of the space-time domain model and reducing the data storage requirement;
5. The reference background color histogram of an image block is used as the reference background model shared by all pixels in that image block; the conditional information is computed and the weighted sum is taken to perform image block detection;
6. An image-block difference pre-detection mechanism is adopted to pre-detect the changed image blocks in the image as candidate detection regions, reducing the amount of data processed by the conditional-information-based image block detection method;
7. When an image block is detected as background, the reference background color histogram of that image block is updated with the color histogram of the current-frame image block, and one image block in its neighborhood is randomly selected and updated by the same method; when an image block is detected as target, no update is performed.
The moving target detection method based on space-time conditional information oriented toward dynamic scenes proposed by the present invention mainly has the following advantages:
1. Applying the negative-logarithm nonlinear transform to the conditional probability widens the linear classification margin when a linear classifier is used for target detection, strengthens the linear separability of target and background, and improves the robustness of target detection.
2. The negative-logarithm nonlinear transform of the conditional probability has a clear physical meaning: it is the conditional information I(x|y), the uncertainty of the variable x given the condition y. In video, taking the reference background b as the condition, the conditional information I(x|b) of the current observation x measures how well the reference background determines the observation x. In a dynamic scene, the reference background b can fully determine the unchanged regions, can partly determine the changed regions caused by background perturbation, and can hardly determine the changed regions caused by target motion. Therefore, using the conditional information as the classification feature for moving object detection in dynamic scenes makes it possible to linearly classify the perturbed background and the moving targets.
3. The weighted summation of the conditional information suppresses the influence of isolated noise, strengthens the ability to resist perturbed-background interference, further strengthens the linear separability of target and background, and reduces classification errors.
4. The space-time domain model constructed by considering visual space-time saliency conforms to the characteristics of human visual psychology and makes it easy to extract the motion information of interest.
5. Using image blocks instead of single pixels for detection reduces algorithm complexity and memory requirements. The image-block difference pre-detection mechanism filters out the unchanged regions of the image in advance with a simple algorithm, reducing the amount of computation of subsequent target detection and speeding up the algorithm.
6. The adopted model update method can effectively adapt to scene illumination changes without updating target information into the reference background, and it effectively avoids the missed detection of the target tail that a sliding-window update method produces when the target moves slowly.
7. In general, the present invention not only achieves effective detection of moving targets in dynamic scenes, but also overcomes the shortcomings of existing moving target detection methods for dynamic scenes, namely high algorithm complexity, poor real-time performance, large memory requirements, and unsuitability for embedded implementation; it can detect moving targets in dynamic scenes in real time on existing computer platforms and is suitable for use on embedded intelligent camera platforms.
Description of drawings:
Fig. 1 is a schematic diagram of the background subtraction algorithm;
Wherein a is the background image;
b is the input image;
c is the difference image;
d is the target mask template, in which the darker region in the middle is the foreground and the remaining lighter region is the background.
Fig. 2 is a schematic comparison of the linear separability of foreground and background in the moving object detection algorithms;
Wherein a0-a3 are, respectively, the background difference feature image, the conditional probability density feature image, the conditional information feature image, and the weighted conditional information feature image;
b0-b3 are, respectively, the image feature distribution histograms of a0-a3, computed with the target mask template of Fig. 1 as reference;
c0-c3 are, respectively, local enlargements of the lower portions of b0-b3;
d0-d3 are, respectively, the linear classification results of the feature images a0-a3;
e0-e3 are, respectively, the foreground and background feature distribution histograms of the background difference feature image, the conditional probability density feature image, the conditional information feature image, and the weighted conditional information feature image over the whole video;
f0-f3 are, respectively, local enlargements of the lower portions of e0-e3.
Fig. 3 is the center-surround visual saliency model;
Wherein a is an image of an object floating on water, in which 1 is the central field and 2 is the surround field;
b is the difference-of-Gaussians model;
c is the visual saliency map extracted from a under the action of the difference-of-Gaussians model b; brighter regions of the image indicate higher saliency.
Fig. 4 is the visual saliency space-time domain model;
Wherein 1 is the central field, corresponding to the central region 1 shown in a of Fig. 3;
2 is the surround field, corresponding to the surround field 2 shown in a of Fig. 3;
3 is a pixel in the image;
4 is the reference background region of pixel 3.
Fig. 5 is a schematic diagram of image block detection;
Wherein 3 is a pixel in the image;
4 is the reference background region corresponding to the image block, corresponding to 4 in Fig. 4;
5 is an image block;
6 is the entire image.
In the above drawings:
1 - central field; 2 - surround field; 3 - pixel; 4 - reference background region; 5 - image block; 6 - image.
Embodiment:
The present invention proposes the following: considering visual space-time saliency, a space-time domain model is constructed; with this space-time domain model and the nonparametric probability density estimation method, the conditional probability p(x|b) that a pixel x of the image to be detected belongs to the reference background sequence b is estimated; a negative logarithmic kernel function is used to apply a nonlinear transform to p(x|b), giving the space-time conditional information I(x|b) of x; considering the influence of the pixels in the neighborhood of x, the weighted sum of the space-time conditional information of the pixels in the neighborhood of x is used as the feature with which a linear classifier classifies target and background, completing moving object detection. In order to reduce algorithm complexity, improve algorithm speed, and reduce memory requirements, we adopt an image-block strategy to optimize the foregoing method. After optimization, the method reaches 26 fps (frames per second) when detecting 640*480-resolution dynamic-scene video on a computer with a dual-core Intel Pentium Dual CPU E2180 2.0 GHz and 1 GB RAM, satisfying real-time application requirements.
We construct a space-time domain model by considering visual space-time saliency, model the reference background with it, and use it to estimate the reference background color distribution probability and to detect targets in the input image. The visual saliency of the human visual system manifests itself as spatial-domain saliency and temporal-domain saliency. Spatial-domain visual saliency is reflected in the fact that when the human eye observes an image, it attends to highly salient regions and ignores regions of low saliency. The receptive field of retinal ganglion cells behaves as a center-surround model, i.e., a difference-of-Gaussians model (Difference of Gaussians, shown as b in Fig. 3). Under the difference-of-Gaussians model, the more obvious the difference between center and surround, the larger the response of the receptive field, and the higher the visual saliency of the corresponding image region. Specifically, as shown in Fig. 3, the image of an object floating on water (a in Fig. 3), under the action of the difference-of-Gaussians model (b in Fig. 3), yields the spatial-domain visual saliency map (c in Fig. 3). From c in Fig. 3 it can be seen that, because the plastic bottle floating on the water surface differs clearly from its surrounding region (the water surface), the response of that region under the difference-of-Gaussians model is large and its visual saliency is high, while the fluctuating water surface itself does not differ clearly from its surrounding region (also the water surface), so the response of that position under the difference-of-Gaussians model is small and its visual saliency is low. Temporal-domain visual saliency is reflected in the fact that, when observing, the human eye easily ignores changes that occur periodically (such as swaying leaves or a fluctuating water surface) and pays special attention to novel (sudden) changes, such as a moving target against a perturbed background. A video is an image sequence arranged in temporal order; therefore, in video, spatial-domain visual saliency (a single image itself has spatial-domain visual saliency) and temporal-domain visual saliency (changes of image content over time manifest as temporal-domain visual saliency) exist simultaneously. In video, image changes that occur frequently have low visual saliency, while newly appearing image changes have high visual saliency. Compared with the perturbed background (frequently occurring image changes), a moving target usually manifests as a newly appearing image change and has higher visual saliency. Therefore, in the video moving target detection task, considering the temporal-domain and spatial-domain visual saliency of human vision can effectively suppress background perturbation and improve the moving object detection effect in dynamic scenes.
As shown in Fig. 4, the neighborhood 1 of pixel 3 in the input image (CurImg) is taken as the central field of the center-surround visual attention model, the peripheral region 2 corresponding to neighborhood 1 is taken as the surround field of the center-surround attention model, the spatial extent 4 of the reference background sequence is set by the outer boundary of this surround field, and the N-frame background sequence (BckSeq, marked as the diagonally hatched region 4 in Fig. 4) within this extent is taken as the space-time reference background of pixel 3; the space-time conditional information of pixel 3 is computed with this as the reference condition.
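For illustration only, the following Python sketch shows one way the space-time reference samples of a pixel could be gathered from the N cached background frames; the neighborhood sizes, the function name, and whether the central field is excluded from region 4 are assumptions of the sketch rather than values fixed by the method.

```python
import numpy as np

def reference_background_samples(bck_seq, r, c, inner=4, outer=12, exclude_center=False):
    """Gather the space-time reference background b of pixel (r, c).

    bck_seq        : (N, H, W) array of the N cached grayscale background frames.
    inner, outer   : assumed half-widths of the central field (region 1) and of the
                     surround field / reference extent (regions 2 and 4 in Fig. 4).
    exclude_center : whether region 4 excludes the central field; the exact extent
                     is read off Fig. 4 and treated as an assumption here.
    Returns a 1-D array of all reference samples across space and time.
    """
    n, h, w = bck_seq.shape
    r0, r1 = max(r - outer, 0), min(r + outer + 1, h)
    c0, c1 = max(c - outer, 0), min(c + outer + 1, w)
    patch = bck_seq[:, r0:r1, c0:c1]          # outer spatial window, over all N frames
    if not exclude_center:
        return patch.ravel()
    rr, cc = np.meshgrid(np.arange(r0, r1), np.arange(c0, c1), indexing="ij")
    ring = (np.abs(rr - r) > inner) | (np.abs(cc - c) > inner)   # drop the central field
    return patch[:, ring].ravel()
```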
Computing the space-time conditional information requires computing the conditional probability p(x|b) that the pixel value x belongs to the reference background. We compute this conditional probability with the kernel density estimation (KDE) method of the nonparametric model, whose general form is given by Formula 1.
p(x|b) = (1/|S|) Σ_{s∈S} K(s − x)    (Formula 1)
where K is the kernel function, satisfying ∫K(x)dx = 1, K(x) = K(−x), ∫xK(x)dx = 0, and ∫xx^T K(x)dx = I; x is the observed data, S is the reference data set, and |S| is the normalization factor, i.e., the number of data items contained in the reference data set S. We adopt the δ(s − x) kernel function in place of the commonly used Gaussian kernel to perform the kernel density estimation, as shown in Formula 2.
p(x|b) = (1/|S|) Σ_{s∈S} δ(s − x)    (Formula 2)
The δ(s − x) kernel function can be computed quickly with a statistical histogram; therefore, the conditional probability p(x|b) that pixel x belongs to the reference background b can be computed quickly from the color histogram of the reference background, as shown in Formula 3, where H is the reference background color histogram, H(x) denotes the value of pixel x in the color histogram H, and normalizing H(x) gives the conditional probability p(x|b) that pixel x belongs to the background; |H| is the normalization factor, obtained by summing all the values of the histogram H.
p(x|b) = H(x) / |H|    (Formula 3)
To avoid distortion of the probability density estimate caused by the sawtooth shape of the histogram, we smooth the histogram with a Gaussian convolution kernel g (see Formula 4, where H is the reference background color histogram), so as to improve the accuracy of the probability density estimate.
H = H * g    (Formula 4)
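For illustration only, the following Python sketch implements Formulas 3 and 4 under assumed parameters (256 histogram bins, a Gaussian kernel of width sigma = 2); the function name and default values are not prescribed by the method.

```python
import numpy as np

def background_probability(samples, bins=256, sigma=2.0):
    """Build the reference-background color histogram H from the sample set,
    smooth it with a Gaussian kernel g (Formula 4), and return a lookup table
    giving p(x|b) = H(x) / |H| for every possible pixel value (Formula 3)."""
    hist, _ = np.histogram(samples, bins=bins, range=(0, bins))
    # Gaussian convolution kernel g
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    g /= g.sum()
    hist = np.convolve(hist.astype(float), g, mode="same")   # H = H * g
    return hist / hist.sum()                                  # normalize by |H|

# usage (illustrative): p_xb = background_probability(samples)[pixel_value]
```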
Applying a nonlinear transform to the conditional probability p(x|b) that a pixel x of the image to be detected belongs to the background can widen the linear classification margin between target and background in dynamic scenes and strengthen the robustness of the algorithm. Possible nonlinear transforms include the exponential transform, the triangular transform, the negative logarithmic transform, and so on. The purpose of the nonlinear transform is to widen the classification margin between foreground and background: with respect to c1 in Fig. 2, the low-value interval on the left of the histogram needs to be stretched nonlinearly, while the high-value interval on the right needs to be compressed nonlinearly. The negative logarithmic transform has exactly this ability to stretch the low-value interval and compress the high-value interval nonlinearly, so we adopt the negative logarithmic kernel to apply the nonlinear transform to the conditional probability p(x|b). In information theory, taking the negative logarithm of the conditional probability p(x|b) is exactly computing the conditional information I(x|b) of the variable x under the condition b; the conditional information has a clear physical meaning, representing the uncertainty of x under the condition b. In video, taking the reference background b as the condition, the conditional information I(x|b) of the current observation x measures how well the reference background determines the observation x. In a dynamic scene, the reference background b can fully determine the unchanged regions, can partly determine the changed regions caused by background perturbation, and can hardly determine the changed regions caused by target motion. Therefore, the conditional information is a very effective classification feature for moving object detection in dynamic scenes, and it can be used to linearly classify the perturbed background and the moving targets.
Applying the negative logarithmic kernel to the conditional probability density p(x|b) gives the new image feature I(x|b), as shown in Formula 5. The distribution of image features has local coherence; that is, an image pixel is not isolated but is related to the pixels in its neighborhood, and the image feature of pixel x is influenced by the image features of the pixels in its neighborhood. Therefore, as shown in Formula 6, the conditional information I(x_kl|b) of all pixels x_kl in the neighborhood of x is weighted and summed to give the conditional information I'(x|b) of pixel x, and a linear classifier (Formula 7) with threshold τ (generally taken as 5) is then used to classify foreground and background. In Formula 6, α_kl is the weighting weight, which can be chosen as the uniform weight α_kl = 1/(BL*BL) (where BL is the neighborhood width), as a Gaussian kernel weight, or as the proportion of pixel x_kl in the neighborhood color distribution; in the present invention we adopt the uniform weight.
I(x|b) = −log p(x|b)    (Formula 5)
I'(x|b) = Σ_{kl} α_kl · I(x_kl|b)    (Formula 6)
F(x) = 1 (foreground) if I'(x|b) > τ;  F(x) = 0 (background) otherwise    (Formula 7)
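For illustration only, a minimal Python sketch of Formulas 5-7 with the uniform weight and the suggested threshold τ = 5 follows; the function name and the guard against log(0) are assumptions of the sketch.

```python
import numpy as np

def detect_block(pixels, p_lookup, tau=5.0):
    """Classify a pixel neighborhood / image block (Formulas 5-7).

    pixels   : integer array of pixel values inside the neighborhood or block.
    p_lookup : p(x|b) lookup table built from the reference-background histogram.
    tau      : linear-classifier threshold (the text suggests tau = 5).
    Returns True if classified as foreground (target), False if background."""
    eps = 1e-6                                   # guard against log(0), an assumption
    p = np.clip(p_lookup[pixels], eps, 1.0)
    info = -np.log(p)                            # Formula 5: I(x|b) = -log p(x|b)
    alpha = 1.0 / info.size                      # uniform weight 1/(BL*BL)
    info_weighted = np.sum(alpha * info)         # Formula 6: weighted sum in the neighborhood
    return info_weighted > tau                   # Formula 7: linear classification
```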
The foregoing target detection method based on space-time conditional information needs to compute the reference background color histogram corresponding to every pixel and to compute the weighted sum of the conditional information in every pixel neighborhood; its computational complexity is high and its real-time performance is relatively poor. Below, we use image blocks instead of single pixels as the unit of modeling and acceleration, in order to reduce algorithm complexity, reduce memory requirements, and improve algorithm speed.
Moving targets in video exhibit local coherence in both the temporal and spatial domains, and compared with a single pixel, an image block embodies this local coherence better. Therefore, modeling the reference background with image blocks and performing moving object detection with the image block as the unit not only reduces algorithm complexity, computation, and memory requirements, but also suppresses isolated noise better without harming the target detection precision.
Unlike the foregoing method, which builds one reference background region for each pixel, we partition the image into blocks and use the background color distribution of the image block as the reference distribution shared by the pixels in that block, which reduces the number of color histograms and the algorithm complexity. As shown in Fig. 5, all pixels in image block 5 share one reference background region 4; the conditional information is computed from the shared background color distribution of this reference background region 4, the conditional information of all pixels in image block 5 is weighted and summed as the classification feature of image block 5, and a linear classifier classifies the image block; the classification and detection method is as before (Formula 7).
In the foregoing space-time domain model (Fig. 4), an N-frame background sequence must be cached, and a new reference background color histogram must be computed for every detection. Caching an N-frame image sequence requires large storage space and is difficult to use on embedded platforms with limited memory. The reference background color distribution does not change at every frame, so there is no need to recompute the reference background color histogram from the N cached frames at every detection. We therefore take the image block as the unit, compute the color histogram of the N cached frames for each block, and keep this as the background data of the block in place of the original N cached frames, which reduces memory requirements, while updating the color histogram allows adaptation to scene illumination changes. During detection, the reference background distribution of image block 5 is obtained directly by summing the color histograms within its corresponding reference background region 4.
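For illustration only, the following Python sketch shows one way to keep a color histogram per image block instead of the N cached frames and to obtain the reference distribution of a block by summing the histograms of the blocks in its reference region 4; the block size, histogram bin count, and region radius are assumed values.

```python
import numpy as np

def block_histograms(frames, bl=16, bins=256):
    """Per-block background model: one color histogram per image block,
    accumulated over the N cached initialization frames, so the frames
    themselves no longer need to be stored."""
    n, h, w = frames.shape
    mb, nb = h // bl, w // bl
    hists = np.zeros((mb, nb, bins))
    for i in range(mb):
        for j in range(nb):
            block = frames[:, i*bl:(i+1)*bl, j*bl:(j+1)*bl]
            hists[i, j], _ = np.histogram(block, bins=bins, range=(0, bins))
    return hists

def block_reference_histogram(hists, i, j, radius=1):
    """Reference-background distribution of block (i, j): the sum of the stored
    histograms of the blocks lying in its reference region 4 (radius assumed)."""
    mb, nb, _ = hists.shape
    i0, i1 = max(i - radius, 0), min(i + radius + 1, mb)
    j0, j1 = max(j - radius, 0), min(j + radius + 1, nb)
    return hists[i0:i1, j0:j1].sum(axis=(0, 1))
```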
In practical video surveillance applications, most regions of the image are static scenes most of the time, i.e., the image in such a region is stable and changes little, such as building surfaces, the ground, and other motionless objects. For the large static regions that exist in the image, there is no need to apply the target detection method oriented toward dynamic scenes; instead, the simplest background subtraction can determine whether such a region has changed. Reducing the amount of data processed by the conditional-information-based method in this way improves the speed of the practical application.
Image differencing is the simplest image change detection method. The changed regions of the image can be pre-determined by image differencing and used as candidate regions, which are then further processed with the conditional-information-based method to decide whether they are perturbed background or real moving targets. The image block difference takes into account the space-time local coherence of target motion; compared with single-pixel differencing it suppresses isolated noise better and is easily combined with the image block conditional information detection method. Image block difference pre-detection can quickly detect the changed image blocks in the image as candidate detection regions, reducing the amount of data processed by the image block conditional information detection. The image block difference computes the SAD (Sum of Absolute Differences) of each block: as shown in Formula 8, the sum of the absolute differences between the image blocks at corresponding positions of the input image and the background image is computed, where BL is the image block width and m, n denote the position of the image block in the image. Binarizing the image block difference result (Formula 9, where T is the binarization threshold) pre-detects the changed image regions as candidate detection regions.
SAD(m, n) = Σ_{i=0}^{BL−1} Σ_{j=0}^{BL−1} |CurImg(m·BL + i, n·BL + j) − BckImg(m·BL + i, n·BL + j)|    (Formula 8)
D(m, n) = 1 if SAD(m, n) > T;  D(m, n) = 0 otherwise    (Formula 9)
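For illustration only, a Python sketch of the image block difference pre-detection (Formulas 8 and 9) follows; the block width and the binarization threshold are assumed values, since the description does not fix T.

```python
import numpy as np

def block_sad_predetect(cur, bck, bl=16, T=None):
    """Image block difference pre-detection (Formulas 8-9).

    cur, bck : current frame and reference background image (H, W), same size.
    bl       : image block width BL.
    T        : binarization threshold; a value proportional to the block area
               is assumed here because the description does not specify one.
    Returns a boolean map of candidate (changed) blocks."""
    if T is None:
        T = 15.0 * bl * bl                      # assumed threshold
    h, w = cur.shape
    mb, nb = h // bl, w // bl
    diff = np.abs(cur.astype(float) - bck.astype(float))
    sad = diff[:mb*bl, :nb*bl].reshape(mb, bl, nb, bl).sum(axis=(1, 3))   # Formula 8
    return sad > T                                                        # Formula 9
```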
To adapt to scene illumination changes, the background model needs to be updated. Model updating comprises two parts: updating the image block background color histograms of the conditional information detection method, and updating the reference background of the image block difference pre-detection algorithm. For the color histogram update, the concrete strategy is: the histogram is updated (Formula 10) only when the current detection region is background; while the background color histogram of the current image block is updated, the background color histogram of an image block in its neighborhood is also selectively updated. Specifically, with a certain probability one image block in the neighborhood of the current image block is randomly selected, and the background color histogram of the selected image block is updated with the same method as that used for the current image block (Formula 10), using the color histogram in the current frame. Updating the model only when the current detection region is background allows adaptation to scene illumination changes without erroneously updating moving target information into the background, and effectively overcomes the missed-detection problem caused when a temporal sliding window update incorporates target information into the reference background. Selectively updating the neighborhood background while updating the current region overcomes the erroneous detections, when targets move in or out, that would be caused by updating only the background regions and never the foreground regions: a region detected as foreground is updated via its neighboring background blocks, so after a period of accumulation the background of the regions that targets move into and out of is eventually updated.
Formula 10 is in fact a weighted sum of the image block background color histogram H_mn and the color histogram H_mnc of that image block in the current frame, the result serving as the new reference background distribution H_mn, where β_0 is the weighting weight: the larger its value, the faster the update.
H_mn = H_mn * (1 − β_0) + H_mnc * β_0    (Formula 10)
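For illustration only, a Python sketch of the selective histogram update (Formula 10), including the random update of one neighboring block, is given below; β_0, the neighbor-update probability, and the choice of updating each block with its own current-frame histogram are assumptions of the sketch.

```python
import numpy as np

def update_block_histograms(hists, cur_hists, i, j, beta0=0.05, p_neigh=0.5, rng=np.random):
    """Selective background-histogram update (Formula 10), applied only when
    block (i, j) was detected as background.  beta0 and p_neigh are assumed values."""
    hists[i, j] = (1.0 - beta0) * hists[i, j] + beta0 * cur_hists[i, j]   # Formula 10
    if rng.rand() < p_neigh:
        # Randomly pick one block in the neighborhood and update it the same way,
        # so that regions uncovered by moving targets also get refreshed.
        mb, nb, _ = hists.shape
        di, dj = rng.randint(-1, 2), rng.randint(-1, 2)
        ni = min(max(i + di, 0), mb - 1)
        nj = min(max(j + dj, 0), nb - 1)
        hists[ni, nj] = (1.0 - beta0) * hists[ni, nj] + beta0 * cur_hists[ni, nj]
```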
Unlike the histogram update method used in the conditional information method, we update the reference background of the image block difference pre-detection with a strategy of fast update for background and slow update for foreground. Taking the final target detection result as the basis, the update is performed with Formula 11: when an image block is detected as foreground it is updated slowly at rate β_1, and when it is detected as background it is updated quickly at rate β_2, where β_1 < β_2. Because the image block difference is only a pre-detection, in order to adapt better to scene changes we do not update only the background here; the foreground is also updated, only at a very slow rate. Practical application tests show that this method does not cause missed targets, and that it adapts better to scene changes and handles the problem of targets moving into and out of the scene.
B_mn = B_mn * (1 − β) + I_mn * β, where β = β_1 when block (m, n) is detected as foreground and β = β_2 when it is detected as background    (Formula 11)
where B_mn is the reference background image block used by the pre-detection and I_mn is the corresponding image block of the current frame.
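For illustration only, a Python sketch of the fast-background / slow-foreground update of the pre-detection reference background (Formula 11) follows; the concrete rates β_1 and β_2 are assumed values, and the reference background is assumed to be stored as a floating-point image.

```python
import numpy as np

def update_reference_background(bck, cur, fg_blocks, bl=16, beta1=0.001, beta2=0.05):
    """Update the reference background used by the SAD pre-detection (Formula 11).
    Foreground blocks are updated slowly (beta1), background blocks quickly (beta2);
    both rates are assumed values with beta1 < beta2.  bck is a float image."""
    mb, nb = fg_blocks.shape
    for i in range(mb):
        for j in range(nb):
            beta = beta1 if fg_blocks[i, j] else beta2
            sl = (slice(i*bl, (i+1)*bl), slice(j*bl, (j+1)*bl))
            bck[sl] = (1.0 - beta) * bck[sl] + beta * cur[sl]
    return bck
```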
The basic flow of the dynamic-scene moving target detection method based on space-time conditional information is as follows:
1. Model initialization
A background image is extracted as the reference background for the image block difference pre-detection. The image is partitioned into blocks, and on the reference background image the color histogram of each image block IB_mn is computed as the initial background color histogram H_mn of that image block.
2. Image block difference pre-detection
The sum of absolute differences of each image block is computed with Formula 8, and thresholding with Formula 9 pre-detects the changed image blocks of the image as candidate detection regions.
3. Secondary detection of candidate image blocks
On the candidate detection regions obtained by the image block difference pre-detection of step 2, secondary detection is performed using the conditional information. First, the reference background histogram H of the candidate image block is computed; then the conditional information of all pixels of this image block is computed and the weighted sum is taken; finally, binarization with Formula 7 gives the target detection result image (BinImg).
4. Model update
The model is updated according to the target detection result (BinImg), which includes updating the reference background and the image block color histograms. The reference background is updated according to Formula 11. When updating the image block color histograms, the color distribution histogram H_mnc of each image block in the current frame is first computed, and then the image block background color histograms are updated with the update method described above (Formula 10).
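Purely as an illustration of how steps 1-4 fit together, the following sketch chains the helper functions sketched earlier in this section (it assumes those helpers are in scope; the block size, threshold, and learning rates remain assumed values, not part of the method specification).

```python
import numpy as np

def detect_frame(cur, bck, hists, bl=16, tau=5.0):
    """One detection cycle: pre-detection (step 2), conditional-information
    secondary detection of the candidate blocks (step 3), model update (step 4).
    cur   : current grayscale frame; bck : pre-detection reference background (float image);
    hists : per-block background color histograms produced at model initialization."""
    candidates = block_sad_predetect(cur, bck, bl)                 # step 2: Formulas 8-9
    fg = np.zeros_like(candidates)
    cur_hists = block_histograms(cur[np.newaxis, :, :], bl)        # current-frame block histograms
    for i, j in zip(*np.nonzero(candidates)):                      # step 3
        ref_hist = block_reference_histogram(hists, i, j)
        p_lookup = ref_hist / max(ref_hist.sum(), 1.0)             # Formula 3 (smoothing omitted)
        block = cur[i*bl:(i+1)*bl, j*bl:(j+1)*bl].astype(int)
        fg[i, j] = detect_block(block.ravel(), p_lookup, tau)      # Formulas 5-7
    mb, nb = fg.shape
    for i in range(mb):                                            # step 4: histogram update
        for j in range(nb):
            if not fg[i, j]:
                update_block_histograms(hists, cur_hists, i, j)    # Formula 10
    bck = update_reference_background(bck, cur, fg, bl)            # Formula 11
    return fg, bck
```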