CN102982558A - Method and device for detecting moving target - Google Patents

Method and device for detecting moving target

Info

Publication number
CN102982558A
CN102982558A (application CN201210495320XA)
Authority
CN
China
Prior art keywords
model
moving target
max
foreground image
shade
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210495320XA
Other languages
Chinese (zh)
Inventor
高峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Original Assignee
WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd filed Critical WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Priority to CN201210495320XA priority Critical patent/CN102982558A/en
Publication of CN102982558A publication Critical patent/CN102982558A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a method and device for detecting a moving target. The method comprises the following steps: acquiring a key monitoring region selected by a user from the monitoring scene; establishing a background model for the key monitoring region; and extracting a foreground image of the moving target from the image frames of the monitoring scene with reference to the background model, thereby achieving the monitoring effect. Because interference factors outside the key monitoring region need not be considered, the algorithm is simple and the data processing load is small. A device implementing the method is accordingly low in cost and fast in operation.

Description

Moving target detecting method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a moving target detection method and device.
Background art
At present, intelligent video surveillance plays an increasingly important role in the public security field because of its advantages such as low reliance on human operators, high safety, and low missed-alarm/false-alarm rates. The detection, tracking and recognition of moving targets in the video picture are core techniques of intelligent video surveillance.
Nowadays the background subtraction method is the most widely used moving target detection method. It establishes and continuously updates a background model and compares and differences the current image frame with the background model, thereby obtaining the foreground image of the moving target. In a real monitoring environment, however, the monitored scene is usually very complex: swaying leaves, changing light levels, unavoidable noise (such as shadows), and overlapping or occluding moving targets (such as dense crowds) all interfere with extraction of the moving target image, so that the detection algorithm becomes complicated, the data processing load is large, and the moving target detection device is costly and slow.
Summary of the invention
The embodiments of the invention provide a moving target detection method whose algorithm is simple and whose data processing load is small; a device adopting the moving target detection method is low in cost and fast in operation.
A moving target detection method comprises: obtaining a key monitoring region chosen by a user from a monitoring scene; establishing a background model for the key monitoring region; and, with reference to the background model, extracting a foreground image of the moving target from the image frames of the monitoring scene.
The step of establishing a background model for the key monitoring region comprises establishing the background model for the key monitoring region by single-Gaussian background modeling, which is divided into two steps:
Step A. Capture a video of the fixed background of the key monitoring region and estimate the video sequence B_0, i.e. compute the mean μ_0 and variance δ_0² of the brightness of each pixel:
B_0 = [μ_0, δ_0²]
μ_0(x, y) = (1/T) Σ_{i=0}^{T−1} f_i(x, y)
δ_0²(x, y) = (1/T) Σ_{i=0}^{T−1} [f_i(x, y) − μ_0(x, y)]²
where T is the time span of the video sequence and (x, y) indexes the video pixels; this establishes the initial background model.
Step B. Update the model with each input video frame:
B_t = [μ_t, δ_t²]
μ_t = (1 − α) μ_{t−1} + ρ f_t
δ_t² = (1 − α) δ_{t−1}² + ρ (f_t − μ_t)²
ρ = K_0 · (1 / (√(2π) δ_{t−1})) · exp{−(μ_{t−1} − f_t)² / 2}
where α is the update rate and K_0 is a number between 0 and 1.
The step of extracting, with reference to the background model, the foreground image of the moving target from the image frames of the monitoring scene comprises: obtaining the image frame of the key monitoring region from the image frame of the monitoring scene; finding, in the image frame of the key monitoring region, the region image whose pixels differ from the background model; and taking the region image with differing pixels as the foreground image of the moving target.
After the step of extracting, with reference to the background model, the foreground image of the moving target from the image frames of the monitoring scene, the method further comprises performing morphological filtering on the foreground image of the moving target.
After the step of extracting, with reference to the background model, the foreground image of the moving target from the image frames of the monitoring scene, the method also comprises: when a shadow is detected in the foreground image, eliminating the shadow in the foreground image using a shadow elimination algorithm based on the Hue-Saturation-Value (HSV) color model. The HSV shadow elimination algorithm is divided into the following steps:
Step C. Let f(x, y) be the value of a pixel in the current moving region and g(x, y) the value of the corresponding background model pixel;
Step D. Convert the RGB color model to the HSV color model:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
This yields the brightness V(f(x, y)), hue H(f(x, y)) and saturation S(f(x, y)) of the current moving region, and the brightness V(g(x, y)), hue H(g(x, y)) and saturation S(g(x, y)) of the background model;
Step E. Set a threshold U; if |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is defined as a shadow point and its pixel value is removed from the current moving region, thereby removing the shadow point;
Step F. Set the pixels of the moving target's own shadow to white.
Correspondingly, the invention also provides a device for implementing the moving target detection method, comprising: an acquisition module for obtaining the key monitoring region chosen by the user from the monitoring scene; a model building module for establishing a background model for the key monitoring region; and an extraction module for extracting, with reference to the background model, the foreground image of the moving target from the image frames of the monitoring scene.
The model building module establishes the background model for the key monitoring region by single-Gaussian background modeling. The extraction module comprises: a monitored-region image acquisition unit for obtaining the image frame of the key monitoring region from the image frame of the monitoring scene; a search unit for finding, in the image frame of the key monitoring region, the region image whose pixels differ from the background model; and a determining unit for taking the region image with differing pixels as the foreground image of the moving target.
The moving target detection device further comprises a filtering module for performing morphological filtering on the foreground image of the moving target.
The moving target detection device further comprises a shadow removal module, which, when a shadow is detected in the foreground image, removes the shadow in the foreground image using the shadow elimination algorithm based on the HSV color model.
The beneficial effects of the invention are as follows: the moving target detection method obtains the key monitoring region chosen by the user from the monitoring scene, establishes a background model for the key monitoring region, and, with reference to the background model, extracts the foreground image of the moving target from the image frames of the monitoring scene, thereby achieving the monitoring effect. Because interference factors outside the key monitoring region need not be considered, the algorithm is simple and the data processing load is small. Since the algorithm of the moving target detection method is simple and its data processing load is small, the device implementing the method is low in cost and fast in operation.
Description of drawings
To describe the technical solutions of the embodiments of the invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the first embodiment of the moving target detection method of the present invention.
Fig. 2 is a flow diagram of the second embodiment of the moving target detection method of the present invention.
Fig. 3 is a structural diagram of the first embodiment of the moving target detection device of the present invention.
Fig. 4 is a structural diagram of an embodiment of the extraction module of the present invention.
Fig. 5 is a structural diagram of the second embodiment of the moving target detection device of the present invention.
Embodiment
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the invention. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the invention without creative effort fall within the protection scope of the invention.
Please refer to Fig. 1, a flow diagram of the first embodiment of the moving target detection method of the present invention. The method comprises:
Step S11: obtain the key monitoring region chosen by the user from the monitoring scene.
Generally speaking, moving targets appear in a certain sub-region of the monitoring scene rather than across the whole monitored region, so that sub-region can be classified as the key monitoring region. Step S11 prompts the user to mark the key monitoring region with a dashed box on the monitoring picture of the monitoring scene, thereby obtaining the key monitoring region chosen by the user.
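For illustration only, the following minimal sketch (not part of the patent) shows how a user-selected rectangular key monitoring region could be cropped out of each frame before any further processing; the crop_key_region name and the roi tuple format are assumptions, with the rectangle supplied by whatever dashed-box selection tool the monitoring software provides.

```python
import numpy as np

def crop_key_region(frame: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop the user-selected key monitoring region from a frame.

    frame: an H x W x C image array; roi: an (x, y, width, height) rectangle
    corresponding to the dashed box the user drew on the monitoring picture.
    """
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]

# Example: restrict all later processing to a 200 x 150 region near a doorway.
# roi = (320, 180, 200, 150)
# key_frame = crop_key_region(frame, roi)
```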
Step S12: establish a background model for the key monitoring region.
There are two broad classes of methods for establishing a background model. The first takes a captured monitoring picture that contains no moving target (i.e. no foreground) as the background model; it is suitable only for indoor monitoring, because the background of a real monitoring scene is constantly changing (illumination, wind-blown leaves and so on), which introduces serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental changes in the monitoring scene. Gaussian background modeling is further divided into single-Gaussian background modeling and mixture-of-Gaussians background modeling. Single-Gaussian modeling suits backgrounds whose pixels change little, i.e. with little noise and concentrated colors, whereas mixture-of-Gaussians modeling is more robust to dynamic scene changes and suits target detection in complex scenes.
Step S13: with reference to the background model, extract the foreground image of the moving target from the image frames of the monitoring scene.
By obtaining the key monitoring region chosen by the user from the monitoring scene, this embodiment greatly reduces the monitored range; the background model is built only for the key monitoring region rather than the whole monitored region, so interference such as wind, rain, snow and swaying leaves is relatively contained. The algorithm of this embodiment is therefore simple and its data processing load is small.
Please refer to Fig. 2, a flow diagram of the second embodiment of the moving target detection method of the present invention. The method comprises:
Step S21: obtain the key monitoring region chosen by the user from the monitoring scene.
Generally speaking, moving targets appear in a certain sub-region of the monitoring scene rather than across the whole monitored region, so that sub-region can be classified as the key monitoring region. Step S21 prompts the user to mark the key monitoring region with a dashed box on the monitoring picture of the monitoring scene, thereby obtaining the key monitoring region chosen by the user.
Step S22: establish a background model for the key monitoring region by single-Gaussian background modeling.
There are two broad classes of methods for establishing a background model. The first takes a captured monitoring picture that contains no moving target (i.e. no foreground) as the background model; it is suitable only for indoor monitoring, because the background of a real monitoring scene is constantly changing (illumination, wind-blown leaves and so on), which introduces serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental changes in the monitoring scene. Gaussian background modeling is further divided into single-Gaussian background modeling and mixture-of-Gaussians background modeling. Single-Gaussian modeling suits backgrounds whose pixels change little, i.e. with little noise and concentrated colors, whereas mixture-of-Gaussians modeling is more robust to dynamic scene changes and suits target detection in complex scenes. Because the user's choice of the key monitoring region in step S21 has already excluded a considerable part of the noise, step S22 preferably uses single-Gaussian background modeling.
Single-Gaussian background modeling is divided into two steps:
Step A. Estimate the background image. Capture a video of the fixed background of the key monitoring region and estimate the video sequence B_0, i.e. compute the mean μ_0 and variance δ_0² of the brightness of each pixel. Expressed mathematically:
B_0 = [μ_0, δ_0²]
where:
μ_0(x, y) = (1/T) Σ_{i=0}^{T−1} f_i(x, y)
δ_0²(x, y) = (1/T) Σ_{i=0}^{T−1} [f_i(x, y) − μ_0(x, y)]²
T is the time span of the video sequence and (x, y) indexes the video pixels; this establishes the initial background model.
Step B. Update the model with each input video frame. Starting from the initial background model obtained in step A, the model is updated with every input video frame so as to adapt to changes in the environment. Expressed mathematically:
B_t = [μ_t, δ_t²]
where:
μ_t = (1 − α) μ_{t−1} + ρ f_t
δ_t² = (1 − α) δ_{t−1}² + ρ (f_t − μ_t)²
ρ = K_0 · (1 / (√(2π) δ_{t−1})) · exp{−(μ_{t−1} − f_t)² / 2}
where α is the update rate and K_0 is a number between 0 and 1.
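For illustration, the sketch below implements steps A and B as written above (including the exp{−(μ_{t−1} − f_t)²/2} term), using NumPy for the per-pixel means and variances; the class name, the assumption that frames are grayscale floats normalized to [0, 1], and the default values for α and K_0 are illustrative choices, not taken from the patent.

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel single-Gaussian background model following steps A and B above.
    Frames are grayscale float arrays, assumed normalized to [0, 1]."""

    def __init__(self, init_frames: np.ndarray, alpha: float = 0.05, k0: float = 0.1):
        # Step A: estimate the background from a T x H x W stack of fixed-background frames.
        self.mu = init_frames.mean(axis=0)          # per-pixel mean brightness mu_0
        self.var = init_frames.var(axis=0) + 1e-6   # per-pixel variance delta_0^2 (epsilon avoids /0)
        self.alpha = alpha                          # update rate alpha
        self.k0 = k0                                # K_0, a number between 0 and 1

    def update(self, frame: np.ndarray) -> None:
        # Step B: rho = K_0 * (1 / (sqrt(2*pi) * delta_{t-1})) * exp(-(mu_{t-1} - f_t)^2 / 2)
        sigma = np.sqrt(self.var)
        rho = self.k0 / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-0.5 * (self.mu - frame) ** 2)
        # mu_t = (1 - alpha) * mu_{t-1} + rho * f_t
        self.mu = (1.0 - self.alpha) * self.mu + rho * frame
        # delta_t^2 = (1 - alpha) * delta_{t-1}^2 + rho * (f_t - mu_t)^2
        self.var = (1.0 - self.alpha) * self.var + rho * (frame - self.mu) ** 2
```

Practical single-Gaussian implementations often normalize the exponent by 2δ²_{t−1} rather than 2; the sketch keeps the form stated in step B.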
Step S23: obtain the image frame of the key monitoring region from the image frame of the monitoring scene.
Because step S22 establishes the background model only for the key monitoring region, the image of the key monitoring region must be obtained from the image frame of the monitoring scene so that the two can be compared.
Step S24: find, in the image frame of the key monitoring region, the region image whose pixels differ from the background model.
Comparing the image frame of the key region with the background model pixel by pixel locates the region image whose pixels differ from the background model.
Step S25: take the region image with differing pixels as the foreground image of the moving target.
After step S25, the foreground image of the moving target can be filled white and the remaining regions of the key-region image frame, whose pixels match the model, filled black, producing a binary picture for subsequent image processing.
Step S26: perform morphological filtering on the foreground image of the moving target. The edges of the foreground image determined in step S25 are usually not smooth and its interior contains holes; morphological filtering smooths the edges and eliminates the holes.
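A possible realization of steps S23 to S26 is sketched below; the patent does not fix a decision rule for which pixels "differ" from the model, so the n_sigmas threshold against the model's standard deviation and the choice of SciPy morphological operators are assumptions made for the example (it reuses the SingleGaussianBackground sketch above).

```python
import numpy as np
from scipy import ndimage

def extract_foreground(key_frame: np.ndarray, model: "SingleGaussianBackground",
                       n_sigmas: float = 2.5) -> np.ndarray:
    """Steps S23-S26: compare the key-region frame with the background model pixel
    by pixel, output a binary picture (foreground white, rest black), then apply
    morphological filtering to smooth the edges and remove holes."""
    diff = np.abs(key_frame - model.mu)
    # Treat a pixel as "different" when it lies far from the model mean relative
    # to the model's standard deviation (the threshold itself is an assumption).
    mask = diff > n_sigmas * np.sqrt(model.var)
    # Opening removes isolated noise pixels; closing fills small interior holes (step S26).
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    return mask.astype(np.uint8) * 255   # binary picture: 255 marks the moving target
```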
Step S27: when a shadow is detected in the foreground image, remove the shadow in the foreground image using a shadow elimination algorithm based on the HSV color model.
In real monitoring scenes, weather or occlusion between moving targets causes uneven illumination, so shadows inevitably appear in the foreground image. If the shadows are not removed, the monitoring device will automatically identify them as foreground.
There are currently two general methods of shadow removal. The first builds a statistical model of shadows from their characteristics and uses it to decide whether each pixel belongs to a shadow region. The second is the shadow elimination algorithm based on the HSV color model, which generally uses image characteristics such as brightness, hue and saturation directly to decide whether a pixel is shadow. Extensive observation shows that after a background pixel is covered by shadow its hue and saturation change very little (the saturation decreases slightly), whereas the effect of a target on background brightness and hue is more random and depends on the target's texture and color. Shadow can therefore be distinguished from the moving target by how the brightness and hue change. Because shadows are diverse, a statistical shadow model is difficult to build and perfect; in practice an engineer often has to build a specific shadow statistical model for each monitoring scene, which is inconvenient. The HSV-based shadow elimination algorithm exploits the common characteristics of shadows and adapts to various monitoring scenes, so it is preferred in this embodiment.
The shadow elimination algorithm based on the HSV color model is divided into the following steps:
Step 1: let f(x, y) be the value of a pixel in the current moving region and g(x, y) the value of the corresponding background model pixel.
Step 2: convert the RGB color model to HSV. The conversion from the RGB color model to the HSV color model is as follows:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
This conversion yields the brightness V(f(x, y)), hue H(f(x, y)) and saturation S(f(x, y)) of the current moving region, and the brightness V(g(x, y)), hue H(g(x, y)) and saturation S(g(x, y)) of the background model.
Step 3: set a threshold U and compute the brightness difference |V(f(x, y)) − V(g(x, y))|. If |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is a shadow point; its pixel value is removed from the current moving region, which removes the shadow point.
After the shadow points are removed, many holes appear in the moving target itself, because shadow also falls on the target and the corresponding target pixels are mistakenly eliminated as shadow. The target's own shadowed pixels therefore have to be restored.
Step 4: restore the holes in the moving target, i.e. set the pixels of the target's own shadow back to white.
With the above four steps the shadow is largely removed and a binary picture of the complete moving target is output.
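The sketch below follows steps 1 to 4 with a vectorised form of the RGB-to-HSV conversion given above and the luminance threshold U; the default value of U and the use of binary hole filling for step 4 are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def rgb_to_hsv(img: np.ndarray):
    """Vectorised form of the RGB -> HSV conversion in step 2 (channels in [0, 255])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    span = np.where(mx > mn, mx - mn, 1.0)           # avoid division by zero
    h = np.where(mx == r, (g - b) / span,
        np.where(mx == g, 2.0 + (b - r) / span, 4.0 + (r - g) / span)) * 60.0
    h = np.where(h < 0.0, h + 360.0, h)
    s = np.where(mx > 0.0, (mx - mn) / np.where(mx > 0.0, mx, 1.0), 0.0)
    v = mx                                           # V = max(R, G, B)
    return h, s, v

def remove_shadows(frame_rgb: np.ndarray, background_rgb: np.ndarray,
                   fg_mask: np.ndarray, u: float = 20.0) -> np.ndarray:
    """Steps 3-4: foreground pixels whose brightness is close to the background
    brightness (|V_f - V_b| < U) are treated as shadow and cleared; holes punched
    into the target by its own shadow are then filled back in."""
    _, _, v_f = rgb_to_hsv(frame_rgb.astype(float))
    _, _, v_b = rgb_to_hsv(background_rgb.astype(float))
    shadow = (np.abs(v_f - v_b) < u) & (fg_mask > 0)
    cleaned = (fg_mask > 0) & ~shadow
    cleaned = ndimage.binary_fill_holes(cleaned)     # step 4: restore the target's own shadowed pixels
    return cleaned.astype(np.uint8) * 255            # binary picture of the complete moving target
```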
By obtaining the key monitoring region chosen by the user from the monitoring scene and building a single-Gaussian background model only for that region, this embodiment reduces the monitored range, largely avoids interference from noise such as swaying leaves, reduces the data processing load, and lowers cost.
Figs. 1 and 2 have described the moving target detection method of the present invention in detail; the device corresponding to the method is described further below.
Please refer to Fig. 3, a structural diagram of the first embodiment of the moving target detection device of the present invention. The moving target detection device 100 comprises an acquisition module 110 for obtaining the key monitoring region chosen by the user from the monitoring scene.
Generally speaking, moving targets appear in a certain sub-region of the monitoring scene rather than across the whole monitored region, so that sub-region can be classified as the key monitoring region. The acquisition module 110 can prompt the user to mark the key monitoring region with a dashed box on the monitoring picture of the monitoring scene, thereby obtaining the key monitoring region chosen by the user.
The model building module 120 establishes a background model for the key monitoring region. There are two broad classes of methods for establishing a background model. The first takes a captured monitoring picture that contains no moving target (i.e. no foreground) as the background model; it is suitable only for indoor monitoring, because the background of a real monitoring scene is constantly changing (illumination, wind-blown leaves and so on), which introduces serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental changes in the monitoring scene. Gaussian background modeling is further divided into single-Gaussian background modeling and mixture-of-Gaussians background modeling. Single-Gaussian modeling suits backgrounds whose pixels change little, i.e. with little noise and concentrated colors, whereas mixture-of-Gaussians modeling is more robust to dynamic scene changes and suits target detection in complex scenes. This embodiment uses the mixture-of-Gaussians background modeling method to handle target detection in complex scenes.
The extraction module 130 extracts the foreground image of the moving target from the image frames of the monitoring scene with reference to the background model.
By obtaining the key monitoring region chosen by the user from the monitoring scene and building the background model only for that region, this embodiment reduces the monitored range, largely avoids interference from noise such as swaying leaves, reduces the data processing load, and lowers cost.
Please refer to Fig. 4, a structural diagram of an embodiment of the extraction module of the present invention. The extraction module 130 comprises:
A monitored-region image acquisition unit 131 for obtaining the image frame of the key monitoring region from the image frame of the monitoring scene. Because the model building module 120 establishes the background model only for the key monitoring region, the image of the key monitoring region must be obtained from the image frame of the monitoring scene so that the two can be compared.
A search unit 132 for finding, in the image frame of the key monitoring region, the region image whose pixels differ from the background model. Comparing the image frame of the key region with the background model pixel by pixel locates the region image whose pixels differ from the background model.
A determining unit 133 for taking the region image with differing pixels as the foreground image of the moving target. After determining the foreground image of the moving target, the determining unit 133 can fill the foreground image white and fill the remaining matching regions of the key-region image frame black, producing a binary picture for subsequent image processing.
Please refer to Fig. 5, a structural diagram of the second embodiment of the moving target detection device of the present invention. The moving target detection device 100 comprises:
An acquisition module 110 for obtaining the key monitoring region chosen by the user from the monitoring scene. Generally speaking, moving targets appear in a certain sub-region of the monitoring scene rather than across the whole monitored region, so that sub-region can be classified as the key monitoring region. The acquisition module 110 can prompt the user to mark the key monitoring region with a dashed box on the monitoring picture of the monitoring scene, thereby obtaining the key monitoring region chosen by the user.
A model building module 120 for establishing a background model for the key monitoring region by single-Gaussian background modeling. There are two broad classes of methods for establishing a background model. The first takes a captured monitoring picture that contains no moving target (i.e. no foreground) as the background model; it is suitable only for indoor monitoring, because the background of a real monitoring scene is constantly changing (illumination, wind-blown leaves and so on), which introduces serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental changes in the monitoring scene. Gaussian background modeling is further divided into single-Gaussian background modeling and mixture-of-Gaussians background modeling. Single-Gaussian modeling suits backgrounds whose pixels change little, i.e. with little noise and concentrated colors, whereas mixture-of-Gaussians modeling is more robust to dynamic scene changes and suits target detection in complex scenes. Because the acquisition module 110 has already excluded a considerable part of the noise through the user's choice of the key monitoring region, the model building module 120 preferably uses single-Gaussian background modeling.
The model building module 120 using the single-Gaussian background modeling algorithm specifically comprises:
A background image estimation unit 121 for estimating the background image: capture a video of the fixed background and estimate the video sequence, i.e. compute the mean μ_0 and variance δ_0² of the brightness of each pixel. Expressed mathematically:
B_0 = [μ_0, δ_0²]
where:
μ_0(x, y) = (1/T) Σ_{i=0}^{T−1} f_i(x, y)
δ_0²(x, y) = (1/T) Σ_{i=0}^{T−1} [f_i(x, y) − μ_0(x, y)]²
T is the time span of the video sequence and (x, y) indexes the video pixels; this establishes and initializes the background model.
A background image updating unit 122 for updating the background image: after the initial background model has been obtained, the model is updated with every input video frame so as to adapt to the brightness changes brought by changes in the environment. Expressed mathematically:
B_t = [μ_t, δ_t²]
where:
μ_t = (1 − α) μ_{t−1} + ρ f_t
δ_t² = (1 − α) δ_{t−1}² + ρ (f_t − μ_t)²
ρ = K_0 · (1 / (√(2π) δ_{t−1})) · exp{−(μ_{t−1} − f_t)² / 2}
where α is the update rate and K_0 is a number between 0 and 1.
An extraction module 130 for extracting the foreground image of the moving target from the image frames of the monitoring scene with reference to the background model.
A filtering module 140 for performing morphological filtering on the foreground image of the moving target. The edges of the foreground image of the moving target are usually not smooth and its interior contains holes; morphological filtering smooths the edges and eliminates the holes.
A shadow removal module 150 which, when a shadow is detected in the foreground image, removes the shadow in the foreground image using the shadow elimination algorithm based on the HSV color model. In real monitoring scenes, weather or occlusion between moving targets causes uneven illumination, so shadows inevitably appear in the foreground image; if they are not removed, the monitoring device will automatically identify them as foreground. There are currently two general methods of shadow removal. The first builds a statistical model of shadows from their characteristics and uses it to decide whether each pixel belongs to a shadow region. The second is the shadow elimination algorithm based on the HSV color model, which generally uses image characteristics such as brightness, hue and saturation directly to decide whether a pixel is shadow. Extensive observation shows that after a background pixel is covered by shadow its hue and saturation change very little (the saturation decreases slightly), whereas the effect of a target on background brightness and hue is more random and depends on the target's texture and color. Shadow can therefore be distinguished from the moving target by how the brightness and hue change. Because shadows are diverse, a statistical shadow model is difficult to build and perfect; in practice an engineer often has to build a specific shadow statistical model for each monitoring scene, which is inconvenient. The HSV-based shadow elimination algorithm exploits the common characteristics of shadows and adapts to various monitoring scenes, so it is preferred in this embodiment.
The shadow removal module 150, which uses the shadow elimination algorithm based on the HSV color model, specifically comprises:
A value unit 151 for letting f(x, y) be the value of a pixel in the current moving region and g(x, y) the value of the corresponding background model pixel.
A conversion unit 152 for converting the RGB color model to the HSV color model. The conversion from the RGB color model to the HSV color model is as follows:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
This conversion yields the brightness V(f(x, y)), hue H(f(x, y)) and saturation S(f(x, y)) of the current moving region, and the brightness V(g(x, y)), hue H(g(x, y)) and saturation S(g(x, y)) of the background model.
A computing unit 153 which sets a threshold U and computes the brightness difference |V(f(x, y)) − V(g(x, y))|; if |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is a shadow point and its pixel value is removed from the current moving region.
After the shadow points are removed, many holes appear in the moving target itself, because shadow also falls on the target and the corresponding target pixels are mistakenly eliminated as shadow. The target's own shadowed pixels therefore have to be restored.
A processing unit 154 for restoring the shadowed parts of the moving target itself, i.e. setting the pixels of the target's own shadow back to white, and outputting a binary picture of the complete moving target.
By obtaining the key monitoring region chosen by the user from the monitoring scene and building the background model only for that region, this embodiment reduces the monitored range, largely avoids interference from noise such as swaying leaves, reduces the data processing load, and lowers cost.
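Purely as an illustration of how the modules could be chained per frame (acquisition, model building and updating, extraction and filtering, shadow removal), the sketch below reuses the helpers from the earlier examples; the number of initialization frames, the grayscale conversion and all parameter defaults are assumed values.

```python
import numpy as np

def to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Luminance conversion, normalized to [0, 1] to match the model sketch above."""
    return img_rgb.astype(float) @ np.array([0.299, 0.587, 0.114]) / 255.0

def detect_moving_targets(video_frames, roi, n_init=30, alpha=0.05, k0=0.1, shadow_u=20.0):
    """Per-frame pipeline: crop to the key region -> single-Gaussian model building
    and updating -> foreground extraction with morphological filtering -> HSV-based
    shadow removal.  video_frames is a sequence of H x W x 3 frames; yields one
    binary mask per processed frame."""
    key_init = [crop_key_region(f, roi) for f in video_frames[:n_init]]
    bg_rgb = np.mean(np.stack(key_init).astype(float), axis=0)          # RGB background estimate for shadow removal
    model = SingleGaussianBackground(np.stack([to_gray(k) for k in key_init]),
                                     alpha=alpha, k0=k0)
    for frame in video_frames[n_init:]:
        key = crop_key_region(frame, roi)                               # acquisition module
        fg = extract_foreground(to_gray(key), model)                    # extraction + filtering modules
        fg = remove_shadows(key.astype(float), bg_rgb, fg, u=shadow_u)  # shadow removal module
        model.update(to_gray(key))                                      # keep the background model current
        yield fg
```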
A person of ordinary skill in the art will appreciate that all or part of the flows of the above embodiment methods can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the flows of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM) or the like.
The above discloses only a preferred embodiment of the present invention, which certainly cannot limit the scope of rights of the invention. A person of ordinary skill in the art will appreciate that all or part of the flows implementing the above embodiments, and equivalent variations made according to the claims of the present invention, still fall within the scope covered by the invention.

Claims (10)

1. A moving target detection method, characterized by comprising:
obtaining a key monitoring region chosen by a user from a monitoring scene;
establishing a background model for the key monitoring region;
with reference to the background model, extracting a foreground image of a moving target from an image frame of the monitoring scene.
2. The method of claim 1, characterized in that the step of establishing a background model for the key monitoring region comprises:
establishing the background model for the key monitoring region by single-Gaussian background modeling, the single-Gaussian background modeling being divided into two steps:
Step A. capturing a video of the fixed background of the key monitoring region and estimating the video sequence B_0, i.e. computing the mean μ_0 and variance δ_0² of the brightness of each pixel:
B_0 = [μ_0, δ_0²]
μ_0(x, y) = (1/T) Σ_{i=0}^{T−1} f_i(x, y)
δ_0²(x, y) = (1/T) Σ_{i=0}^{T−1} [f_i(x, y) − μ_0(x, y)]²
where T is the time span of the video sequence and (x, y) indexes the video pixels, thereby establishing an initial background model;
Step B. updating the model with each input video frame:
B_t = [μ_t, δ_t²]
μ_t = (1 − α) μ_{t−1} + ρ f_t
δ_t² = (1 − α) δ_{t−1}² + ρ (f_t − μ_t)²
ρ = K_0 · (1 / (√(2π) δ_{t−1})) · exp{−(μ_{t−1} − f_t)² / 2}
where α is the update rate and K_0 is a number between 0 and 1.
3. The method of claim 1, characterized in that the step of extracting, with reference to the background model, the foreground image of the moving target from the image frame of the monitoring scene comprises:
obtaining the image frame of the key monitoring region from the image frame of the monitoring scene;
finding, in the image frame of the key monitoring region, a region image whose pixels differ from the background model;
taking the region image with differing pixels as the foreground image of the moving target.
4. The method of claim 1, characterized in that, after the step of extracting, with reference to the background model, the foreground image of the moving target from the image frame of the monitoring scene, the method further comprises:
performing morphological filtering on the foreground image of the moving target.
5. The method of claim 1, characterized in that, after the step of extracting, with reference to the background model, the foreground image of the moving target from the image frame of the monitoring scene, the method further comprises:
when a shadow is detected in the foreground image, eliminating the shadow in the foreground image using a shadow elimination algorithm based on the Hue-Saturation-Value (HSV) color model, the HSV shadow elimination algorithm being divided into the following steps:
Step C. letting f(x, y) be the value of a pixel in the current moving region and g(x, y) the value of the corresponding background model pixel;
Step D. converting the RGB color model to the HSV color model:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
obtaining the brightness V(f(x, y)), hue H(f(x, y)) and saturation S(f(x, y)) of the current moving region, and the brightness V(g(x, y)), hue H(g(x, y)) and saturation S(g(x, y)) of the background model;
Step E. setting a threshold U; if |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is defined as a shadow point and its pixel value is removed from the current moving region;
Step F. setting the pixels of the moving target's own shadow to white.
6. A moving target detection device, characterized by comprising:
an acquisition module for obtaining a key monitoring region chosen by a user from a monitoring scene;
a model building module for establishing a background model for the key monitoring region;
an extraction module for extracting, with reference to the background model, a foreground image of a moving target from an image frame of the monitoring scene.
7. The device of claim 6, characterized in that the model building module establishes the background model for the key monitoring region by single-Gaussian background modeling.
8. The device of claim 6, characterized in that the extraction module comprises:
a monitored-region image acquisition unit for obtaining the image frame of the key monitoring region from the image frame of the monitoring scene;
a search unit for finding, in the image frame of the key monitoring region, a region image whose pixels differ from the background model;
a determining unit for taking the region image with differing pixels as the foreground image of the moving target.
9. The device of claim 6, characterized by further comprising:
a filtering module for performing morphological filtering on the foreground image of the moving target.
10. The device of claim 6, characterized by further comprising:
a shadow removal module for removing, when a shadow is detected in the foreground image, the shadow in the foreground image using a shadow elimination algorithm based on the HSV color model.
CN201210495320XA 2012-11-28 2012-11-28 Method and device for detecting moving target Pending CN102982558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210495320XA CN102982558A (en) 2012-11-28 2012-11-28 Method and device for detecting moving target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210495320XA CN102982558A (en) 2012-11-28 2012-11-28 Method and device for detecting moving target

Publications (1)

Publication Number Publication Date
CN102982558A true CN102982558A (en) 2013-03-20

Family

ID=47856499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210495320XA Pending CN102982558A (en) 2012-11-28 2012-11-28 Method and device for detecting moving target

Country Status (1)

Country Link
CN (1) CN102982558A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214293A (en) * 2018-08-07 2019-01-15 电子科技大学 A kind of oil field operation region personnel wearing behavioral value method and system
CN110209063A (en) * 2019-05-23 2019-09-06 成都世纪光合作用科技有限公司 A kind of smart machine control method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246547A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for detecting moving objects in video according to scene variation characteristic
CN102073863A (en) * 2010-11-24 2011-05-25 中国科学院半导体研究所 Method for acquiring characteristic size of remote video monitored target on basis of depth fingerprint
CN101996410A (en) * 2010-12-07 2011-03-30 北京交通大学 Method and system of detecting moving object under dynamic background

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘泉志: "基于视频的运动目标检测与跟踪算法研究", 《中国优秀硕士学位论文全文数据库》 *
宋杨: "基于高斯混合模型的运动目标检测算法研究", 《中国优秀硕士学位论文全文数据》 *
陈瑜: "智能视频监控系统中运动目标检测与跟踪算法的研究", 《中国优秀硕士学位论文全文数据库》 *

Similar Documents

Publication Publication Date Title
Xu et al. Background modeling methods in video analysis: A review and comparative evaluation
Huang et al. A real-time object detecting and tracking system for outdoor night surveillance
Chen et al. A self-adaptive Gaussian mixture model
CN103136766B (en) A kind of object conspicuousness detection method based on color contrast and color distribution
Choi et al. Robust moving object detection against fast illumination change
WO2022027931A1 (en) Video image-based foreground detection method for vehicle in motion
CN104063885A (en) Improved movement target detecting and tracking method
CN103729858B (en) A kind of video monitoring system is left over the detection method of article
CN102663405B (en) Prominence and Gaussian mixture model-based method for extracting foreground of surveillance video
CN103116985A (en) Detection method and device of parking against rules
WO2007126525A3 (en) Video segmentation using statistical pixel modeling
CN104601965A (en) Camera shielding detection method
CN105354791A (en) Improved adaptive Gaussian mixture foreground detection method
CN109711256B (en) Low-altitude complex background unmanned aerial vehicle target detection method
CN103700087A (en) Motion detection method and device
CN104268520A (en) Human motion recognition method based on depth movement trail
CN104200197A (en) Three-dimensional human body behavior recognition method and device
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
Armanfard et al. TED: A texture-edge descriptor for pedestrian detection in video sequences
CN104156941A (en) Method and system for determining geometric outline area on image
CN106204586A (en) A kind of based on the moving target detecting method under the complex scene followed the tracks of
CN103473547A (en) Vehicle target recognizing algorithm used for intelligent traffic detecting system
Kar et al. Moving cast shadow detection and removal from Video based on HSV color space
CN103020980A (en) Moving target detection method based on improved double-layer code book model
CN111652033A (en) Lane line detection method based on OpenCV

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20130320