Summary of the invention
In view of the above, it is necessary to provide an object monitoring system that can, against a busy, crowded, or cluttered background, when the shooting angle or zoom changes an object's contour, or under changing lighting conditions, simultaneously and effectively monitor a guarded region to determine whether an object has been removed from it or has entered it, the entering object being identified by means of feature point description vectors.
There is also a need to provide an object monitoring method that can, against a busy, crowded, or cluttered background, when the shooting angle or zoom changes an object's contour, or under changing lighting conditions, simultaneously and effectively monitor a guarded region to determine whether an object has been removed from it or has entered it, the entering object being identified by means of feature point description vectors.
An object monitoring system runs in an image server and comprises: a foreground object detecting unit for detecting foreground objects in the images captured by a monitoring device using a two-layer background model, the two-layer background model comprising an existing background model and a temporary background model; an object and region determination unit for, when a detected foreground object is still judged to be a foreground object after a set time interval or longer, marking each pixel of the foreground object moved into the temporary background model as a pixel of interest, searching the temporary background model for pixels in the neighborhood of the pixels of interest whose pixel values equal those of the pixels of interest and treating them as pixels of interest as well, thereby obtaining a pixel set b, and, when the area of set b is greater than a set range, acquiring the pixels corresponding to set b from the existing background model, thereby obtaining a pixel set a; the object and region determination unit further applying a feature point algorithm to sets a and b respectively to find the feature points in each pixel set and their description vectors, then performing image segmentation with the feature points in set a as seeds to obtain a block A, and performing image segmentation with the feature points at the corresponding positions in set b as seeds to obtain a block B; and an article identification unit for judging that an object has been removed from the guarded region when the area of block B is greater than the area of block A, and judging that an object has entered the guarded region when the area of block B is less than the area of block A.
An object monitoring method comprises the steps of: detecting foreground objects in the images captured by a monitoring device using a two-layer background model, the two-layer background model comprising an existing background model and a temporary background model; if a detected foreground object is still judged to be a foreground object after a set time interval or longer, marking each pixel of the foreground object moved into the temporary background model as a pixel of interest; searching the temporary background model for pixels in the neighborhood of the pixels of interest whose pixel values equal those of the pixels of interest and treating them as pixels of interest as well, thereby obtaining a pixel set b; when the area of set b is greater than a set range, acquiring the pixels corresponding to set b from the existing background model, thereby obtaining a pixel set a; applying a feature point algorithm to sets a and b respectively to find the feature points in each pixel set and their description vectors; performing image segmentation with the feature points in set a as seeds to obtain a block A, and performing image segmentation with the feature points at the corresponding positions in set b as seeds to obtain a block B; judging that an object has been removed from the guarded region when the area of block B is greater than the area of block A; and judging that an object has entered the guarded region when the area of block B is less than the area of block A.
Compared with the prior art, the described object monitoring system and method build the background models from color pixels to distinguish foreground objects from background objects, and therefore judge more reliably than common monitoring systems and methods that use only grayscale pixels. They can not only identify objects removed from or entering a guarded region against a busy, crowded, or cluttered background, but can also, when the shooting angle or zoom changes an object's contour or when the lighting changes, simultaneously and effectively monitor the guarded region to determine removed and entering objects, the entering objects being identified by feature point description vectors.
Embodiment
Fig. 1 shows the running environment of a preferred embodiment of the object monitoring system of the present invention. The object monitoring system 10 is installed and runs in an image server 1. The image server 1 is connected through a network to at least one monitoring device 2 and a feature point database 3. In the present embodiment, the monitoring device 2 may be a web camera or another type of electronic device with a monitoring function. The feature point database 3 stores pre-trained feature point description vector models of a plurality of objects (including people).
Fig. 2 is a functional unit diagram of the preferred embodiment of the object monitoring system 10. Besides running the object monitoring system 10, the image server 1 also comprises a storage device 20, a processor 30, and a display device 40.
The storage device 20 stores the computerized program code of the object monitoring system 10 as well as the color images captured by the monitoring device 2. In other embodiments, the storage device 20 may be an external memory of the image server 1.
The processor 30 executes the computerized program code of the object monitoring system 10; that is, it performs foreground object detection on the images captured by the monitoring device 2, determines the pixels and regions of interest in the images, judges whether an object has been removed from or has entered the guarded region, identifies what kind of object the entering object is, and sends an alarm.
The display device 40 displays the color images captured by the monitoring device 2 and the pictures produced while the processor 30 executes the object monitoring system 10, such as the segmentation pictures of the background region and the foreground object, a schematic of which is shown in Fig. 8.
The object monitoring system 10 comprises a foreground object detecting unit 100, an object and region determination unit 102, and an article identification unit 104. The functions of the object monitoring system 10 are described in detail with reference to Fig. 3 through Fig. 8.
The foreground object detecting unit 100 comprises, as shown in Fig. 3, a model building module 1000, a pixel separation module 1002, a storage module 1004, a temporary background model monitoring module 1006, and a background model update module 1008. The foreground object detecting unit 100 uses the two-layer background model to detect foreground objects in the images captured by the monitoring device 2; the concrete method is described in detail with reference to Fig. 5. The two-layer background model comprises an existing background model and a temporary background model, the existing background model being the background model generated by detecting the images preceding the current image.
When a detected foreground object is still judged to be a foreground object after a set time interval or longer, the object and region determination unit 102 marks the pixels forming the foreground object as pixels of interest when they are moved into the temporary background model. The object and region determination unit 102 then searches the temporary background model for pixels in the neighborhood of the pixels of interest that are identical to them, treats the pixels so found as pixels of interest as well, and thereby obtains a pixel set b. In the present embodiment, identical pixels are pixels whose pixel values equal those of the pixels of interest.
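As a minimal sketch, the neighborhood search that builds pixel set b can be expressed as a breadth-first flood fill that absorbs adjacent pixels with identical values. The grid representation (a list of rows of scalar values) and the use of 4-connectivity are assumptions, since the embodiment does not fix either:

```python
from collections import deque

def grow_interest_set(image, seeds):
    """Grow the set of pixels of interest: starting from the seed pixels
    marked in the temporary background model, repeatedly absorb 4-connected
    neighbours whose pixel value equals that of an adjacent pixel of
    interest, yielding the pixel set b."""
    h, w = len(image), len(image[0])
    interest = set(seeds)
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in interest:
                if image[ny][nx] == image[y][x]:  # identical pixel value
                    interest.add((ny, nx))
                    queue.append((ny, nx))
    return interest
```

On a toy 4×4 grid with a single marked pixel at (0, 0), the call `grow_interest_set(img, [(0, 0)])` returns the connected patch of equal-valued pixels around that seed.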
When the area of set b is greater than a set range, for example when set b is larger than 50 pixels × 50 pixels, the object and region determination unit 102 acquires the pixels corresponding to set b from the existing background model, thereby obtaining a pixel set a. The set range may be chosen by the user; for example, when the user only wants to detect larger objects, the set range can be set to a larger value so that the objects of most concern are subsequently filtered out of the image for monitoring.
The object and region determination unit 102 also applies a feature point algorithm to sets a and b respectively to find the feature points in each pixel set and their description vectors. In the present embodiment, the feature point algorithm is the scale-invariant feature transform (SIFT) algorithm, the SURF algorithm, or another algorithm capable of detecting and describing local image features. The feature points extracted by the SIFT algorithm are points of interest based on local appearance on the object and are invariant to image scale and rotation. The black dots in Fig. 8(b2) are the feature points found in set b, and the black dots in Fig. 8(a2) are the feature points found in set a.
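The SIFT/SURF extraction itself is not reproduced here; assuming the description vectors have already been computed, one common way to compare the description vectors of the two sets is a nearest-neighbour match with a distance-ratio test (a standard matching heuristic, not mandated by the embodiment). The vector format and the ratio value are assumptions:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature point description vectors between two pixel sets.
    A match (i, j) is accepted when descriptor i of set a has a nearest
    neighbour j in set b that is clearly closer than the second-nearest
    candidate (distance-ratio test)."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        if len(ranked) > 1:
            best, second = ranked[0], ranked[1]
            if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
                matches.append((i, best))
        elif ranked:
            matches.append((i, ranked[0]))
    return matches
```

With two-dimensional toy descriptors, unambiguous pairs are matched while ambiguous candidates are rejected by the ratio test.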
The object and region determination unit 102 then uses a seeded region growing algorithm to perform image segmentation with the feature points in set a as seeds, obtaining a block A as shown in Fig. 8(a3), and performs image segmentation with the feature points at the corresponding positions in set b as seeds, obtaining a block B as shown in Fig. 8(b3).
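A minimal sketch of seeded region growing, as used to cut blocks A and B: each feature point serves as a seed, and 4-connected neighbours join the block while their value stays within a tolerance of the seed value. The scalar grid, 4-connectivity, and the tolerance parameter are assumptions not fixed by the embodiment:

```python
from collections import deque

def seeded_region_grow(image, seeds, tolerance=10):
    """Grow a block from the given seed points: a neighbour joins the
    block when its value differs from the seed's value by at most
    `tolerance`; the block's area is the number of pixels it covers."""
    h, w = len(image), len(image[0])
    block = set()
    for sy, sx in seeds:
        seed_val = image[sy][sx]
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            if (y, x) in block:
                continue
            block.add((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in block
                        and abs(image[ny][nx] - seed_val) <= tolerance):
                    queue.append((ny, nx))
    return block
```

The areas compared in the next step are then simply `len(block_a)` and `len(block_b)`.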
The article identification unit 104 judges whether the area of block B is greater than or less than the area of block A. When the area of block B is less than the area of block A, the article identification unit 104 judges that an object has entered the guarded region; when the area of block B is greater than the area of block A, the article identification unit 104 judges that an object has been removed from the guarded region.
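The decision rule of the article identification unit can be stated compactly; recall that block A is cut from the existing background model (set a) and block B from the temporary background model (set b):

```python
def judge_region(area_a, area_b):
    """Decision rule of the article identification unit: compare the
    area of block B (temporary model) with that of block A (existing
    model)."""
    if area_b > area_a:
        return "removed"    # an object was removed from the guarded region
    if area_b < area_a:
        return "entered"    # an object entered the guarded region
    return "unchanged"      # equal areas: nothing entered or was removed
```

The "unchanged" branch mirrors step S406, where equal areas mean that nothing has entered or been removed.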
The article identification unit 104 also filters the judged entering object by size, color, and entry time, and uses a general machine learning algorithm, such as neural networks or support vector machines (SVM), to compare the feature points and description vectors of the filtered entering object with the feature point description vector models of the objects stored in the feature point database 3, so as to identify the entering object. It further judges whether a removed object was removed within a specified time period.
The filtering refers to filtering the objects formed by the pixels of interest so that the size, the color, and the time of entry into the guarded region of the finally judged object all meet the user's requirements; for example, the filtered object must be the size of a car, objects with the color of a city taxi are ignored, and the entry time must fall within a specified (for example, unattended) time period.
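A hedged sketch of the size/color/entry-time filter follows. The object representation (a dict with hypothetical keys `size`, `color`, and `entry_time`), the units, and the non-wrapping time window are illustrative assumptions, not part of the embodiment:

```python
def passes_filter(obj, min_size, allowed_colors, time_window):
    """Return True when an entering object satisfies all three user
    requirements: minimum size, permitted color, and an entry time
    inside the (lo, hi) window. Keys of `obj` are hypothetical."""
    lo, hi = time_window
    return (obj["size"] >= min_size
            and obj["color"] in allowed_colors
            and lo <= obj["entry_time"] <= hi)
```

Only objects passing all three tests would proceed to feature point comparison against the database 3.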
Fig. 4 is an operational flowchart of the preferred embodiment of the object monitoring method of the present invention.
In step S400, the foreground object detecting unit 100 uses the two-layer background model to detect the foreground objects in the images captured by the monitoring device 2, as described in detail with reference to Fig. 5. The two-layer background model comprises an existing background model and a temporary background model.
If a detected foreground object is still judged to be a foreground object after a set time interval or longer, then in step S402 the object and region determination unit 102 marks the pixels of the foreground object as pixels of interest when they are moved into the temporary background model, searches the temporary background model for pixels in the neighborhood of the pixels of interest whose pixel values equal those of the pixels of interest, judges those pixels to be pixels of interest as well, and thereby obtains a pixel set b (such as the pixel set forming Fig. 8(b1)). When the area of the pixel set b is greater than a set range, the object and region determination unit 102 captures the pixels corresponding to the pixel set b from the existing background model and thereby obtains a pixel set a (such as the pixel set forming the five-pointed star in Fig. 8(a1)).
In step S404, the object and region determination unit 102 applies a feature point algorithm to pixel sets a and b respectively to find the feature points in each pixel set (the black dots in Fig. 8(a2) and Fig. 8(b2)) and their description vectors, then uses a seeded region growing algorithm to perform image segmentation with the feature points in set a as seeds, obtaining a block A (the black part in Fig. 8(a3)), and performs image segmentation with the feature points at the corresponding positions in set b as seeds, obtaining a block B (the black part in Fig. 8(b3)).
In step S406, the article identification unit 104 judges whether the area of block B is greater than or less than the area of block A. If the area of block B is greater than the area of block A, the flow enters step S408; if the area of block B is less than the area of block A, the flow enters step S414. In the present embodiment, if the area of block B equals the area of block A, no object has entered or been removed from the guarded region.
In step S408, the article identification unit 104 judges that an object has been removed from the guarded region.
In step S410, the article identification unit 104 judges whether the removed object was removed within a specified time period. If it was, the flow enters step S412; if it was not, the flow ends.
In step S412, the article identification unit 104 sends an alarm to notify security personnel that the guarded region is threatened, and the flow ends.
In step S414, the article identification unit 104 judges that an object has entered the guarded region.
In step S416, the article identification unit 104 filters the entering object by size, color, and entry time and identifies the filtered entering object; the flow then enters step S412. Specifically, the article identification unit 104 analyzes whether the size, the color, and the entry time of the entering object(s) fall within the ranges set by the user, and identifies the entering objects that meet the requirements, for example by using a machine learning algorithm such as neural networks or support vector machines to compare the feature points and description vectors of the filtered entering object with the feature point description vector models of the objects stored in the feature point database 3, so as to identify what kind of object the entering object is.
Fig. 5 is a detailed flowchart of the foreground object detection of step S400 in Fig. 4. The flow is described only for the detection of foreground objects in a certain pair of the N color images; foreground object detection in the other images proceeds by the same method.
In step S500, the model building module 1000 sets up an empty background model and receives the first of the N color images; that is, the empty background model stores the first image. In the present embodiment, foreground detection of the 2nd through Nth images does not require the empty background model to be re-established.
In step S502, each of the N images is taken in turn as the current image, and the background model generated by detecting the images preceding the current image is taken as the existing background model.
In step S504, the pixel separation module 1002 compares each pixel of the current image with the corresponding pixel in the existing background model to determine the difference in pixel value and the difference in luminance between the corresponding pixels. In the present embodiment, for the second image the existing background model is the first image stored in the empty background model; after the second image is processed, the third image is taken out and processed, its existing background model being the background model generated by detecting the first and second images, and so on until all images are processed. For example, as shown in Fig. 6, the existing background model of the Nth image is the background model A0 obtained by detecting the 1st through (N-1)th images, and the existing background model of the (N+1)th image is background model A.
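The frame loop of steps S502 and S504 can be sketched as follows; `classify` is a hypothetical callback standing in for the per-pixel separation and model-generation work of the later steps, and the list-of-rows image format is an assumption:

```python
def detect_sequence(images, classify):
    """Sketch of the frame loop: the first image initializes the
    (formerly empty) background model, and each later image is compared
    against the background model generated from all images before it.
    `classify(current, existing)` returns the updated background model."""
    existing = images[0]          # empty model filled with image 1
    for current in images[1:]:
        existing = classify(current, existing)
    return existing
```

With a trivial averaging `classify`, the model after three one-pixel frames is the running blend of the sequence, illustrating how each frame sees only the model built from its predecessors.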
In step S506, the pixel separation module 1002 judges whether the determined pixel value difference and luminance difference are both less than or equal to the preset thresholds.
If the pixel value difference and the luminance difference between the pixel and the corresponding pixel in the existing background model are both less than or equal to the preset thresholds, then in step S508 the pixel separation module 1002 judges the pixel to be a background pixel, and the storage module 1004 adds the pixel to the existing background model, thereby generating a new background model; the flow then enters step S518. In the present embodiment, the object formed by all background pixels is called the background object. For example, suppose no external object (such as a person or a car) enters the guarded region and only the light changes slightly; if the changed light does not make the pixels of the current image differ too much from the existing background model, the pixel separation module 1002 still judges the pixels of the current image to be background pixels, and the storage module 1004 adds them to the existing background model to generate a new background model.
Otherwise, if the pixel value difference or the luminance difference between the pixel and the corresponding pixel in the existing background model exceeds the preset thresholds, then in step S510 the pixel separation module 1002 judges the pixel to be a foreground pixel; the object formed by all foreground pixels is called the foreground object in the present embodiment. As shown in Fig. 6 and Fig. 7, suppose the background model A0 formed from the 1st through (N-1)th color images consists of the trees and road standing in the guarded region; if a vehicle enters the guarded region in the Nth image, the detection process of step S506 judges the pixels forming the vehicle to be a foreground object.
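The per-pixel test of steps S506 through S510 on color pixels can be sketched as below. The (R, G, B) tuple format, the sum-of-absolute-differences value measure, the Rec. 601 luma weights, and the threshold values are all illustrative assumptions:

```python
def separate_pixel(pixel, bg_pixel, value_threshold=30, luma_threshold=20):
    """Classify a colour pixel against the existing background model: it
    is background when both its pixel value difference and its luminance
    difference stay within the preset thresholds, otherwise foreground."""
    def luma(p):
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b  # assumed luma weights
    value_diff = sum(abs(a - b) for a, b in zip(pixel, bg_pixel))
    luma_diff = abs(luma(pixel) - luma(bg_pixel))
    return "background" if (value_diff <= value_threshold
                            and luma_diff <= luma_threshold) else "foreground"
```

A slight lighting shift keeps a pixel in the background, while a strong color change (such as a vehicle arriving) pushes it to the foreground.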
In step S512, the storage module 1004 temporarily stores the pixels of the foreground object of step S510 together with the existing background model, obtaining the temporary background model B.
In step S514, the temporary background model monitoring module 1006 monitors in real time whether the pixel values and luminance values of the pixels in the temporary background model B change within a preset time interval. If they change within the interval, the changed temporary background model being denoted B`, the temporary background model monitoring module 1006 repeats step S514 to judge whether the temporary background model B` changes within the preset time interval. Otherwise, if the pixel values and luminance values of the pixels in the temporary background model B (or B`) do not change within the interval, the flow enters step S516.
In step S516, the background model update module 1008 updates the existing background model with the temporary background model B or B`, thereby generating a new background model; for example, as shown in Fig. 7, the background model update module 1008 updates the existing background model with the temporary background model B to obtain a new background model (background model A). For an image after the Nth image, such as the (N+1)th image in Fig. 6, after the pixel separation module 1002 detects a foreground object and temporarily stores it in the temporary background model B`, if the temporary background model monitoring module 1006 finds that B` does not change within the preset time interval, the background model update module 1008 updates background model A with B` to obtain background model A`, and so on, so that the background model is continuously updated. This method of real-time background updating avoids interference from image shaking, light changes, and periodic objects, and detects foreground objects in the images more accurately, so as to monitor the guarded region effectively. In addition, the method automatically treats an object that stays in the guarded region for some time as background.
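The stability check and update of steps S514 and S516 can be sketched as follows. Representing "no change over the preset interval" as two consecutive identical snapshots of the temporary model is a simplification of the real-time monitoring described above:

```python
def update_background(existing, temp_snapshots):
    """Steps S514/S516 in miniature: `temp_snapshots` holds the states
    the temporary background model went through during the interval.
    When the last two states are identical (the model has stopped
    changing), the temporary model replaces the existing background
    model; otherwise monitoring continues and the existing model is
    kept."""
    if len(temp_snapshots) >= 2 and temp_snapshots[-1] == temp_snapshots[-2]:
        return temp_snapshots[-1]  # temporary model B/B` becomes the new model
    return existing                # still changing: keep the existing model
```

A settled temporary model thus becomes the new background, which is how a long-stationary object is eventually absorbed into the background.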
In step S518, the pixel separation module 1002 checks the received color images to judge whether any image remains undetected, that is, whether the pixels corresponding to the foreground object and the background object of any remaining color image have yet to be separated. If not, the flow ends directly. If so, the flow returns to step S504 with an undetected image as the current image and the background model generated from the images detected before it as the existing background model, and steps S504 through S516 are performed in turn.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present invention.