CN102496164A - Event detection method and event detection system - Google Patents

Event detection method and event detection system

Info

Publication number
CN102496164A
CN102496164A CN2011103594332A CN201110359433A
Authority
CN
China
Prior art keywords
image
pixel
threshold value
foreground
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103594332A
Other languages
Chinese (zh)
Other versions
CN102496164B (en)
Inventor
安国成
李洪研
罗志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY Co Ltd
Original Assignee
BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY Co Ltd filed Critical BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY Co Ltd
Priority to CN2011103594332A priority Critical patent/CN102496164B/en
Publication of CN102496164A publication Critical patent/CN102496164A/en
Application granted granted Critical
Publication of CN102496164B publication Critical patent/CN102496164B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses an event detection method and an event detection system. The event detection method comprises the following steps: modeling an initial frame image to obtain a background image; obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each of them; obtaining the small-threshold foreground image corresponding to each of them; fusing the image blocks in each motion history image and binarizing the fused motion history image to obtain a binarized foreground image; when the total pixel count of any image block in the binarized foreground image of the current frame is greater than the pixel count of the image block at the corresponding neighboring position in the binarized foreground image of the previous frame, counting the number of changed gray levels; and determining that a violent act has occurred when that number is greater than a preset number of gray levels. Because the method and the system are not based on body part recognition, trajectory analysis or color features, their generality is improved, and the monitoring accuracy of an intelligent video analysis system is improved as well.

Description

Event detection method and system
Technical field
The present application relates to the technical field of image processing, and in particular to an event detection method and system.
Background technology
An intelligent video analysis system has an intelligent analysis function: it can extract and record, in real time, events of interest to the user that occur in a video, and thus raise an alarm in time. For example, it can detect whether pedestrians or vehicles intrude into a prohibited area, whether they loiter or stay in the prohibited area for a long time, or whether a violent event occurs in the video.
To detect whether a violent event occurs in a video, the intelligent video analysis system may adopt any of several violence detection methods. For example, "Person-on-Person Violence Detection in Video Data" by Ankur Datta et al., ICPR (International Conference on Pattern Recognition), 2002, pp. 433-438, comprises processing steps such as human detection, human silhouette extraction, limb identification and head tracking, and uses motion trajectory information to detect behaviors such as punching, kicking and collisions. Alternatively, "Real-Time Recognition of Violent Acts in Monocular Colour Video Sequences" by Alessandro Mecocci et al., in Signal Processing Applications for Public Security and Forensics, 2007, partitions the clothing colors of the violence participants into blocks and uses the block-wise clothing color information to detect violent events.
However, the clothing colors, body types and violent postures of the participants in a violent event are diverse. This diversity makes event detection methods based on human body recognition, trajectory analysis or color features poorly generalizable, and when an intelligent video analysis system monitors violent events with such a method, the poor generality of the method in turn reduces the monitoring accuracy of the system.
Summary of the invention
In view of this, the embodiments of the application disclose an event detection method and system that improve the generality of the detection method and, further, improve the monitoring accuracy of an intelligent video analysis system that uses the disclosed method. The technical solution is as follows:
According to one aspect of the application, an event detection method is disclosed, which comprises modeling the background of an initial frame image to obtain a background image of the initial frame image, and further comprises:
obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each of them;
detecting each frame image with a small-threshold foreground detection method, in combination with said background image, to obtain the small-threshold foreground image corresponding to that frame;
for any motion history image, fusing the image blocks in said motion history image according to its corresponding small-threshold foreground image, and binarizing the fused motion history image to obtain a binarized foreground image;
when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, counting the number of changed gray levels, wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image; and
determining that a violent event has occurred when said number of changed gray levels is greater than a preset number of gray levels.
Preferably, for any motion history image, fusing the image blocks in said motion history image according to its corresponding small-threshold foreground image comprises:
fusing with a formula (reproduced as an image in the original publication), wherein (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
Preferably, after the binarized foreground image is obtained, the method further comprises:
filtering out, according to a predetermined threshold, the small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image; and
merging, using eight-connectivity, the image blocks in the filtered binarized foreground images of the current frame image and the previous frame image, and taking the processed binarized foreground images as the respective binarized foreground images.
Preferably, after it is determined that an alarm event has occurred, the method further comprises:
detecting the current frame image with a large-threshold foreground detection method, in combination with said background image, to obtain the corresponding large-threshold foreground image;
counting each pixel in the small-threshold foreground image of said current frame image; and
updating said background image with a formula (reproduced as an image in the original publication) and taking the updated background image as the background image, wherein a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel at (x, y) in said background image, I(x, y, t) is the gray value of the pixel at (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
Preferably, a Gaussian mixture model is used to model the background of said initial frame image.
Preferably, the total pixel count of any image block in the binarized foreground image of said current frame image is the product of the actual pixel count of this image block and a preset value.
According to another aspect of the application, an event detection system is also disclosed, which comprises a background image acquisition module configured to model the background of an initial frame image and obtain a background image of said initial frame image, and further comprises:
a motion history image acquisition module, configured to obtain a current frame image and a previous frame image and to obtain the motion history image corresponding to each of them;
a small-threshold foreground image acquisition module, configured to detect each frame image with a small-threshold foreground detection method, in combination with said background image, to obtain the corresponding small-threshold foreground image;
a fusion module, configured to fuse, for any motion history image, the image blocks in said motion history image according to its corresponding small-threshold foreground image;
a binarized foreground image acquisition module, configured to binarize the fused motion history image to obtain a binarized foreground image;
a counter, configured to count the number of changed gray levels when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image; and
an event determination module, configured to determine that a violent event has occurred when said number of changed gray levels is greater than a preset number of gray levels.
Preferably, said fusion module is specifically configured to perform the fusion with a formula (reproduced as an image in the original publication), wherein (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
Preferably, the system further comprises:
a filtering module, configured to filter out, according to a predetermined threshold, the small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image; and
a merging module, configured to merge, using eight-connectivity, the image blocks in the filtered binarized foreground images of the current frame image and the previous frame image, and to take the processed binarized foreground images as the respective binarized foreground images.
Preferably, the system further comprises:
a large-threshold foreground image acquisition module, configured to detect the current frame image with a large-threshold foreground detection method, in combination with said background image, to obtain the corresponding large-threshold foreground image;
a counter, configured to count each pixel in the small-threshold foreground image of said current frame image; and
an update module, configured to update said background image with a formula (reproduced as an image in the original publication) and to take the updated background image as the background image, wherein a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel at (x, y) in said background image, I(x, y, t) is the gray value of the pixel at (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, the total number of gray levels whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image is counted and taken as the number of changed gray levels. When said number of changed gray levels is greater than a preset number of gray levels, it is determined that a violent event has occurred. Compared with the prior art, the event detection method disclosed in the application is not based on human body recognition, trajectory analysis or color features, which improves its generality and, further, improves the monitoring accuracy of an intelligent video analysis system that uses this event detection method.
Description of drawings
In order to describe the technical solutions in the embodiments of the application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an event detection method disclosed in an embodiment of the application;
Fig. 2 is an initial frame image;
Fig. 3 is the background image of the initial frame image shown in Fig. 2;
Fig. 4 is a current frame image;
Fig. 5 is the motion history image corresponding to the current frame image shown in Fig. 4;
Fig. 6 is the small-threshold foreground image corresponding to the current frame image shown in Fig. 4;
Fig. 7 is the fused motion history image of the motion history image shown in Fig. 5;
Fig. 8 is the binarized foreground image of the fused motion history image shown in Fig. 7;
Fig. 9 is another flowchart of the event detection method disclosed in an embodiment of the application;
Fig. 10 is the large-threshold foreground image corresponding to the current frame image shown in Fig. 4;
Fig. 11 is a schematic structural diagram of an event detection system disclosed in an embodiment of the application;
Fig. 12 is another schematic structural diagram of the event detection system disclosed in an embodiment of the application.
Embodiment
Through research, the applicant has found that existing event detection methods all detect violent events in video on the basis of human body recognition, trajectory analysis or color features. In practical applications, however, the diversity of the participants' clothing colors, body types and violent postures reduces the generality of such methods, which in turn reduces the monitoring accuracy of an intelligent video analysis system that uses them.
To solve this problem, the applicant studied violent events and summarized a highly abstract definition of a violent event: when a violent event occurs, the motion in the current frame image is intense, and an image block in the current frame image expands rapidly with respect to the image block at the neighboring position in the previous frame image. A person skilled in the art can therefore confirm a violent event as follows: first judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the neighboring position in the binarized foreground image of the previous frame image; if so, count the total number of gray levels whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image, and take this total as the number of changed gray levels; when said number of changed gray levels is greater than a preset number of gray levels, determine that a violent event has occurred.
To make the above objects, features and advantages of the application more comprehensible, the application is described in further detail below with reference to the drawings and the embodiments. In this specification, the images at the current moment, the previous moment, the moment before that, and the initial moment are called the current frame image, the previous frame image, the frame image before the previous one, and the initial frame image, respectively.
Referring to Fig. 1, which is a flowchart of an event detection method disclosed in an embodiment of the application, the method may comprise the following steps:
S101: Model the background of the initial frame image to obtain the background image of the initial frame image.
The background of the initial frame image can be modeled with an existing image modeling method, for example with a Gaussian mixture model.
Concretely, the modeling adopts K Gaussian functions and computes the probability density that the pixel X_t at coordinate (x, y) in the image belongs to the background as:
$$\Pr(x_t) = \frac{1}{K}\sum_{i=1}^{K}\prod_{j=1}^{d}\frac{1}{\sqrt{2\pi\sigma_j^2}}\,\exp\!\left(-\frac{(x_{t,j}-x_{i,j})^2}{2\sigma_j^2}\right)$$
where d is the dimension of the color space used (d equals 3 for a three-channel RGB color space and 1 for a single-channel gray image), σ_j is the standard deviation of channel j, x_{t,j} is the gray value of the pixel X_t in channel j, and x_{i,j} is the gray value in channel j of the i-th Gaussian function.
Modeling the initial frame image shown in Fig. 2 with the above Gaussian mixture model yields the background image shown in Fig. 3.
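As a concrete illustration of this probability density, the following sketch evaluates the formula above for a single pixel location; the per-pixel set of K background samples x_i and the per-channel standard deviations are assumed inputs maintained by the background model, and all names are illustrative only.

```python
import numpy as np

def background_probability(pixel, samples, sigma):
    """Probability density that `pixel` belongs to the background, using one
    Gaussian kernel per background sample, as in the formula of S101.

    pixel   : (d,)   current pixel value over d channels
    samples : (K, d) background samples x_i kept for this pixel location
    sigma   : (d,)   per-channel standard deviation
    """
    diff = pixel[None, :] - samples                          # (K, d)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma ** 2)           # (d,)
    kernels = norm * np.exp(-0.5 * diff ** 2 / sigma ** 2)   # (K, d) per-channel kernels
    return float(np.mean(np.prod(kernels, axis=1)))          # product over channels, mean over K
```

For an RGB pixel, d = 3; for a gray image, d = 1, and `pixel`, `samples` and `sigma` shrink accordingly.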
S102: Obtain the current frame image and the previous frame image, and obtain the motion history image corresponding to each of them.
The motion history image of any frame image can be obtained by comparing that frame image with its previous frame image, as follows:
Suppose any frame image is the current frame image I(x, y, t) and its previous frame image is I(x, y, t-1). The two consecutive frames are compared with the formula
$$D(x, y, t) = \begin{cases} 1, & |I(x, y, t) - I(x, y, t-1)| > C \\ 0, & |I(x, y, t) - I(x, y, t-1)| \le C \end{cases}$$
to obtain the binary difference image of the current frame image, where C controls the sensitivity of the binary difference image to changes against the background image and can be set according to the specific situation. For example, to strengthen the anti-interference capability of the event detection method disclosed in this embodiment, C should be set to a larger value.
Then the binary difference image is processed with a formula (reproduced as an image in the original publication) to obtain the motion history image corresponding to the current frame image, whose gray values range from 0 to 255. Here (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value that can be set according to the specific situation. For example, when the motion in the monitored video is intense, the value of τ is reduced; otherwise, it is increased.
In this embodiment, Fig. 4 is a current frame image of a certain video and Fig. 5 is its corresponding motion history image. As can be seen from Fig. 5, the motion history image is a gray image whose brightness ranges from black to white, and the direction from dark to bright indicates the direction of motion.
It should be noted that the motion history image of the initial frame image is set by the user, with the gray values of all its pixels set to 0.
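The two operations of S102 can be sketched as follows. The patent's own motion-history update rule is reproduced only as an image, so the second function uses the conventional motion-history-image recursion as an assumption: pixels flagged by the binary difference are set to τ, and the remaining pixels decay towards 0.

```python
import numpy as np

def binary_difference(curr, prev, C):
    """Binary difference image D(x, y, t): 1 where |I(t) - I(t-1)| > C, else 0."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > C).astype(np.uint8)

def update_mhi(prev_mhi, diff, tau, decay=1):
    """Assumed motion-history update: set changed pixels to tau (0..255),
    decay the unchanged ones so that old motion fades out."""
    decayed = np.maximum(prev_mhi.astype(np.int16) - decay, 0)
    return np.where(diff == 1, tau, decayed).astype(np.uint8)
```

For the initial frame, `prev_mhi` is an all-zero image, matching the note above.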
S103: Detect each frame image with the small-threshold foreground detection method, in combination with the background image, to obtain the corresponding small-threshold foreground image.
The small-threshold foreground detection method sets a small threshold and compares the probability density of every pixel in the current frame image with this small threshold:
$$S(x, y) = \begin{cases} 1, & \Pr(x_t) < R \\ 0, & \Pr(x_t) \ge R \end{cases}$$
where R is the small threshold, whose value can be set according to the specific situation.
It can be seen from the above formula that the gray value of a pixel is set to 1 when its probability density is less than the small threshold, and to 0 when its probability density is greater than or equal to the small threshold. The small-threshold foreground image can be seen in Fig. 6, which is the small-threshold foreground image corresponding to the current frame image shown in Fig. 4. The small-threshold foreground image removes the influence of illumination and shadow on the image, but if the small threshold is too small, the resulting small-threshold foreground image may contain holes and/or breaks.
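Given a per-pixel probability map produced by the background model, the thresholding itself is a one-line mask; the sketch below assumes such a map as input, and the same pattern with the large threshold W yields the large-threshold foreground image L(x, y) used later in S811.

```python
import numpy as np

def threshold_foreground(prob_map, threshold):
    """Foreground mask: 1 where Pr(x_t) < threshold, else 0.  With the small
    threshold R this is S(x, y); with the large threshold W it is L(x, y)."""
    return (prob_map < threshold).astype(np.uint8)
```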
S104: For any motion history image, fuse the image blocks in said motion history image according to its corresponding small-threshold foreground image, and binarize the fused motion history image to obtain the binarized foreground image.
As can be seen from Fig. 5, although the motion history image records the motion history of the foreground, the membership of each image block is unclear, and as the motion amplitude of the target to be detected gradually decreases, the detected motion also disappears. For example, the person wearing dark clothes (the foreground) in Fig. 4 corresponds to several image blocks in the motion history image, and whether these image blocks belong to the same foreground object cannot be told from Fig. 5. This embodiment therefore processes the motion history image so that the membership of its image blocks becomes clear.
Different processing can be adopted for the motion history image in different application scenes. In this embodiment, the image blocks are fused according to the small-threshold foreground image corresponding to the motion history image, specifically:
The fusion uses a formula (reproduced as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value.
For the same frame image, the value of τ used when fusing the motion history image is the same as the value of τ used when obtaining the motion history image of that frame, and it can be set according to the specific situation. For example, when the motion in the monitored video is intense, the value of τ is reduced; otherwise, it is increased.
The fused motion history image and the binarized foreground image are shown in Fig. 7 and Fig. 8, respectively: Fig. 7 is the fused motion history image of the motion history image shown in Fig. 5, and Fig. 8 is the binarized foreground image of the fused motion history image shown in Fig. 7. In the fused motion history image the membership of the image blocks is clear and there are no holes and/or breaks; likewise, the binarized foreground image has no holes and/or breaks.
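Since the exact fusion formula is reproduced only as an image, the sketch below encodes one plausible reading of the variables defined above, stated as an assumption: inside the small-threshold foreground the current motion history is kept, elsewhere the previous frame's motion history is carried over, and the binarized foreground image is simply the nonzero support of the fused result.

```python
import numpy as np

def fuse_mhi(mhi_curr, mhi_prev, fg_small):
    """Assumed fusion: keep H_tau(x, y, t) where S(x, y, t) = 1, otherwise
    fall back to H_tau(x, y, t-1), so that the blocks of one foreground object
    share a continuous history."""
    return np.where(fg_small == 1, mhi_curr, mhi_prev).astype(np.uint8)

def binarize(fused_mhi):
    """Binarized foreground image: any pixel with nonzero motion history -> 1."""
    return (fused_mhi > 0).astype(np.uint8)
```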
S105: Judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image; if so, execute step S106; if not, execute step S109.
Here the total pixel count of any image block in the binarized foreground image of the current frame image is the product of the actual pixel count of this image block and a preset value; the preset value can be set for the specific scene and is usually greater than 2.
In this embodiment, the image block at the neighboring position corresponding to any image block can be determined by nearest-neighbor matching between the binarized foreground image of the current frame image and the binarized foreground image of the previous frame image. When the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, this image block has expanded rapidly with respect to its neighboring image block and a violent event may have occurred in the current frame image; when it is not greater, the image block has expanded only slowly with respect to its neighboring image block and no violent event occurs in the current frame image.
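A simple way to realize the matching and the expansion test of S105 is sketched below with OpenCV connected components; the centroid-based nearest-neighbor pairing and the use of the preset value as an expansion factor are assumptions made for illustration, since the text does not fix those details.

```python
import numpy as np
import cv2

def match_blocks(fg_curr, fg_prev):
    """Label the blocks of both binarized foreground images and pair every
    current block with the nearest previous block by centroid distance.
    Returns (curr_label, prev_label, curr_area, prev_area) tuples and the
    two label maps."""
    n_c, lab_c, st_c, cen_c = cv2.connectedComponentsWithStats(fg_curr)
    n_p, lab_p, st_p, cen_p = cv2.connectedComponentsWithStats(fg_prev)
    pairs = []
    if n_p > 1:                                   # at least one previous block
        for i in range(1, n_c):                   # label 0 is the background
            d = np.linalg.norm(cen_p[1:] - cen_c[i], axis=1)
            j = int(np.argmin(d)) + 1
            pairs.append((i, j, int(st_c[i, cv2.CC_STAT_AREA]),
                          int(st_p[j, cv2.CC_STAT_AREA])))
    return pairs, lab_c, lab_p

def expands_rapidly(area_curr, area_prev, factor=2.0):
    """Expansion test, interpreting the preset value as an expansion factor."""
    return area_curr > factor * area_prev
```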
S106: Count the number of changed gray levels.
Here the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image.
S107: Judge whether the number of changed gray levels is greater than the preset number of gray levels; if so, execute step S108; if not, execute step S109.
The preset number of gray levels can be set for different application scenes; for example, when different motion intensities are used to confirm a violent event, the preset number of gray levels differs as well.
S108: Determine that a violent event has occurred.
S109: Determine that no violent event has occurred.
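Steps S106 to S108 amount to comparing two gray-level histograms taken over the matched blocks of the two fused motion history images; the sketch below assumes boolean block masks (for example, taken from the label maps above) and illustrative parameter names.

```python
import numpy as np

def count_changed_gray_levels(fused_curr, fused_prev, block_curr, block_prev):
    """S106: number of gray levels whose pixel count inside the current block
    exceeds the count of the same gray level inside the matched previous block."""
    hist_curr = np.bincount(fused_curr[block_curr], minlength=256)
    hist_prev = np.bincount(fused_prev[block_prev], minlength=256)
    return int(np.sum(hist_curr > hist_prev))

def is_violent(changed_levels, preset_levels):
    """S107/S108: report a violent event when the number of changed gray
    levels exceeds the preset number of gray levels."""
    return changed_levels > preset_levels
```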
After it has been determined whether a violent event occurred, the next frame image is obtained as the new current frame image, the current frame image of this detection pass is taken as the previous frame image, and the next detection pass begins.
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, the total number of gray levels whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image is counted and taken as the number of changed gray levels; when said number of changed gray levels is greater than a preset number of gray levels, it is determined that a violent event has occurred. Compared with the prior art, the event detection method disclosed in the application is not based on human body recognition, trajectory analysis or color features, which improves its generality and, further, improves the monitoring accuracy of an intelligent video analysis system that uses this event detection method.
Referring to Fig. 9, which shows another flowchart of the event detection method disclosed in an embodiment of the application, this embodiment can be understood as a concrete example of applying the event detection method of the application in practice, and may comprise the following steps:
S801: Model the background of the initial frame image to obtain the background image of the initial frame image.
S802: Obtain the current frame image and the previous frame image, and obtain the motion history image corresponding to each of them.
S803: Detect each frame image with the small-threshold foreground detection method, in combination with the background image, to obtain the corresponding small-threshold foreground image.
S804: For any motion history image, fuse the image blocks in said motion history image according to its corresponding small-threshold foreground image, and binarize the fused motion history image to obtain the binarized foreground image.
S805: Process the binarized foreground image. Specifically: according to a predetermined threshold, filter out the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image; then merge, using eight-connectivity, the image blocks in the filtered binarized foreground images of the current frame image and the previous frame image, and take the processed binarized foreground images as the respective binarized foreground images. This reduces the number of image blocks, thereby shortening the detection time and improving detection efficiency.
The predetermined threshold can be set to different values for different application scenes.
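S805 can be sketched as an area filter followed by an 8-connected merge; the small morphological closing used to join nearly touching blocks, and its kernel size, are assumptions added for illustration.

```python
import numpy as np
import cv2

def clean_foreground(fg, min_area, close_size=5):
    """S805 sketch: drop blocks smaller than `min_area` (the predetermined
    threshold), then merge the surviving blocks with 8-connectivity after a
    small morphological closing."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg, connectivity=8)
    kept = np.zeros_like(fg)
    for i in range(1, n):                                  # label 0 = background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            kept[labels == i] = 1
    kernel = np.ones((close_size, close_size), np.uint8)
    return cv2.morphologyEx(kept, cv2.MORPH_CLOSE, kernel)
```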
S806: Judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image; if so, execute step S807; if not, execute step S810.
S807: Count the number of changed gray levels.
Here the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image.
S808: Judge whether the number of changed gray levels is greater than the preset number of gray levels; if so, execute step S809; if not, execute step S810.
S809: Determine that a violent event has occurred.
S810: Determine that no violent event has occurred.
S811: Detect the current frame image with the large-threshold foreground detection method, in combination with said background image, to obtain the corresponding large-threshold foreground image.
The large-threshold foreground detection method sets a large threshold and compares the probability density of every pixel in the current frame image with this large threshold:
$$L(x, y) = \begin{cases} 1, & \Pr(x_t) < W \\ 0, & \Pr(x_t) \ge W \end{cases}$$
where W is the large threshold, whose value can be set according to the specific situation.
It can be seen from the above formula that the gray value of a pixel is set to 1 when its probability density is less than the large threshold, and to 0 when its probability density is greater than or equal to the large threshold. The large-threshold foreground image can be seen in Fig. 10, which is the large-threshold foreground image corresponding to the current frame image shown in Fig. 4. The small-threshold foreground image removes the influence of illumination and shadow on the image, whereas the large-threshold foreground image may contain false alarms if the large threshold is set too large.
The large-threshold foreground image contains two types of regions: regions of the current frame image that change significantly, such as the moving foreground against the background, and regions that change insignificantly, such as shadow regions and regions where the illumination changes slowly. The small-threshold foreground image contains only the regions of the current frame image that change significantly, such as the moving foreground against the background; therefore, the large-threshold foreground image contains the small-threshold foreground image.
S812: Count each pixel in the small-threshold foreground image of said current frame image.
S813: Update the background image with a formula (reproduced as an image in the original publication), and take the updated background image as the background image of the current frame image, where a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is the preset count, B(x, y, t) is the gray value of the pixel at (x, y) in the background image of the current frame image, I(x, y, t) is the gray value of the pixel at (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
When U(x, y) > Y, the pixel at (x, y) in the small-threshold foreground image has not changed its position for a certain period of time, like a trash bin in the image, and is therefore treated as a pixel of the background image; when U(x, y) < Y, the pixel at (x, y) in the small-threshold foreground image has changed its position within that period and is therefore treated as a pixel of the foreground image.
In this embodiment, the background image of the current frame image is updated by combining the large-threshold foreground image and the small-threshold foreground image. This update scheme effectively alleviates the problem of setting the large and small thresholds and improves the detection of the foreground in the current frame image; it avoids mixing foreground pixels with background pixels, that is, it avoids updating the background image with the foreground image, thereby improving the accuracy of the background image and, in turn, the detection accuracy. Here the foreground image is the foreground region of the image, such as a person, and a foreground pixel is a pixel of the foreground region, such as a pixel of the person in the image; correspondingly, the background image is the background region of the image, such as a tree, and a background pixel is a pixel of the background region, such as a pixel of the tree.
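Because the update formula itself is reproduced only as an image, the sketch below encodes the behavior described in the surrounding text as an assumption: pixels outside the large-threshold foreground are blended into the background at rate a; small-threshold foreground pixels whose counter U exceeds the preset count Y (long-static objects such as the trash bin above) are blended in as well; all other foreground pixels leave the background untouched. The parameter defaults are illustrative.

```python
import numpy as np

def update_background(bg, frame, fg_small, fg_large, counts, a=0.05, Y=50):
    """Assumed background update for S811-S813.

    bg, frame          : float32 gray images
    fg_small, fg_large : uint8 0/1 masks S(x, y) and L(x, y)
    counts             : per-pixel counter U(x, y)
    """
    counts = np.where(fg_small == 1, counts + 1, 0)   # count consecutive foreground hits
    stable = (fg_small == 1) & (counts > Y)           # static "foreground" -> background
    blend = (fg_large == 0) | stable
    bg = np.where(blend, a * frame + (1.0 - a) * bg, bg)
    return bg, counts
```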
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, the total number of gray levels whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image is counted and taken as the number of changed gray levels; when said number of changed gray levels is greater than a preset number of gray levels, it is determined that a violent event has occurred. Compared with the prior art, the event detection method disclosed in the application is not based on human body recognition, trajectory analysis or color features, which improves its generality and, further, improves the monitoring accuracy of an intelligent video analysis system that uses this event detection method.
Further, the background image of the current frame image is updated by combining the large-threshold foreground image and the small-threshold foreground image. This update scheme effectively alleviates the problem of setting the large and small thresholds, improves the detection of the foreground in the current frame image, and avoids mixing foreground pixels with background pixels, that is, it avoids updating the background image with the foreground image, thereby improving the accuracy of the background image and, in turn, the detection accuracy.
Corresponding to the above method embodiments, an embodiment of the application also discloses an event detection system, whose structure is shown in Fig. 11 and which comprises: a background image acquisition module 11, a motion history image acquisition module 12, a small-threshold foreground image acquisition module 13, a fusion module 14, a binarized foreground image acquisition module 15, a counter 16 and an event determination module 17. Specifically:
The background image acquisition module 11 is configured to model the background of the initial frame image and obtain the background image of said initial frame image. The background of the initial frame image can be modeled with an existing image modeling method, for example with a Gaussian mixture model.
Concretely, the modeling adopts K Gaussian functions and computes the probability density that the pixel X_t at coordinate (x, y) in the image belongs to the background as:
$$\Pr(x_t) = \frac{1}{K}\sum_{i=1}^{K}\prod_{j=1}^{d}\frac{1}{\sqrt{2\pi\sigma_j^2}}\,\exp\!\left(-\frac{(x_{t,j}-x_{i,j})^2}{2\sigma_j^2}\right)$$
where d is the dimension of the color space used (d equals 3 for a three-channel RGB color space and 1 for a single-channel gray image), σ_j is the standard deviation of channel j, x_{t,j} is the gray value of the pixel X_t in channel j, and x_{i,j} is the gray value in channel j of the i-th Gaussian function.
The motion history image acquisition module 12 is configured to obtain the current frame image and the previous frame image and to obtain the motion history image corresponding to each of them.
In this embodiment, the motion history image acquisition module 12 can obtain the motion history image corresponding to each frame image as follows. First, the two consecutive frames are compared with the formula
$$D(x, y, t) = \begin{cases} 1, & |I(x, y, t) - I(x, y, t-1)| > C \\ 0, & |I(x, y, t) - I(x, y, t-1)| \le C \end{cases}$$
to obtain the binary difference image of the current frame image, where I(x, y, t) is the current frame image, I(x, y, t-1) is its previous frame image, and C controls the sensitivity of the binary difference image to changes against the background image and can be set according to the specific situation. For example, to strengthen the anti-interference capability of the event detection method disclosed in this embodiment, C should be set to a larger value.
Then the binary difference image is processed with a formula (reproduced as an image in the original publication) to obtain the motion history image corresponding to the current frame image, whose gray values range from 0 to 255. Here (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value that can be set according to the specific situation. For example, when the motion in the monitored video is intense, the value of τ is reduced; otherwise, it is increased.
The small-threshold foreground image acquisition module 13 is configured to detect each frame image with the small-threshold foreground detection method, in combination with said background image, to obtain the corresponding small-threshold foreground image. The small-threshold foreground detection method sets a small threshold and compares the probability density of every pixel in the current frame image with this small threshold:
$$S(x, y) = \begin{cases} 1, & \Pr(x_t) < R \\ 0, & \Pr(x_t) \ge R \end{cases}$$
where R is the small threshold, whose value can be set according to the specific situation.
The fusion module 14 is configured to fuse, for any motion history image, the image blocks in said motion history image according to its corresponding small-threshold foreground image. Specifically, the fusion module uses a formula (reproduced as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
For the same frame image, the value of τ used when fusing the motion history image is the same as the value of τ used when obtaining the motion history image of that frame, and it can be set according to the specific situation. For example, when the motion in the monitored video is intense, the value of τ is reduced; otherwise, it is increased.
The binarized foreground image acquisition module 15 is configured to binarize the fused motion history image to obtain the binarized foreground image.
The counter 16 is configured to count the number of changed gray levels when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, where the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image.
Here the total pixel count of any image block in the binarized foreground image of the current frame image is the product of the actual pixel count of this image block and a preset value; the preset value can be set for the specific scene and is usually greater than 2.
In this embodiment, the image block at the neighboring position corresponding to any image block can be determined by nearest-neighbor matching between the binarized foreground image of the current frame image and the binarized foreground image of the previous frame image. When the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, this image block has expanded rapidly with respect to its neighboring image block and a violent event may have occurred in the current frame image; when it is not greater, the image block has expanded only slowly with respect to its neighboring image block and no violent event occurs in the current frame image.
The event determination module 17 is configured to determine that a violent event has occurred when said number of changed gray levels is greater than the preset number of gray levels.
In this embodiment, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding neighboring position in the binarized foreground image of said previous frame image, the total number of gray levels whose pixel count inside this image block of the fused motion history image of said current frame image is greater than the pixel count of the same gray level inside the image block at the corresponding neighboring position of the fused motion history image of said previous frame image is counted and taken as the number of changed gray levels; when said number of changed gray levels is greater than a preset number of gray levels, it is determined that a violent event has occurred. Compared with the prior art, the disclosed scheme is not based on human body recognition, trajectory analysis or color features, which improves its generality and, further, improves the monitoring accuracy of an intelligent video analysis system that uses this event detection method.
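To show how the modules of Fig. 11 fit together on one frame, the following sketch chains the helper functions from the method embodiment above; all names are taken from those earlier sketches rather than from the patent, and `prob_map` stands for the per-pixel probability density supplied by the background image acquisition module 11.

```python
def process_frame(frame, prev_frame, prob_map, state, p):
    """One detection pass through the modules of Fig. 11 (illustrative only)."""
    diff = binary_difference(frame, prev_frame, p["C"])           # module 12
    mhi = update_mhi(state["mhi"], diff, p["tau"])
    fg_s = threshold_foreground(prob_map, p["R"])                 # module 13
    fused = fuse_mhi(mhi, state["mhi"], fg_s)                     # module 14
    fg_bin = binarize(fused)                                      # module 15
    violent = False
    pairs, lab_c, lab_p = match_blocks(fg_bin, state["fg_bin"])   # module 16
    for i, j, area_c, area_p in pairs:
        if expands_rapidly(area_c, area_p, p["factor"]):
            n = count_changed_gray_levels(fused, state["fused"],
                                          lab_c == i, lab_p == j)
            if is_violent(n, p["levels"]):                        # module 17
                violent = True
    state.update(mhi=mhi, fused=fused, fg_bin=fg_bin)
    return violent
```

Here `state` starts with all-zero images for `mhi`, `fused` and `fg_bin`, matching the note that the motion history image of the initial frame is all zeros; `p` collects the thresholds C and R, the gray value τ, the expansion factor and the preset number of gray levels.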
Referring to Fig. 12, which builds on Fig. 11 and shows another structure of the event detection system disclosed in an embodiment of the application, this embodiment can be understood as a concrete example of applying the event detection method of the application in practice. It may comprise: a background image acquisition module 11, a motion history image acquisition module 12, a small-threshold foreground image acquisition module 13, a fusion module 14, a binarized foreground image acquisition module 15, a counter 16, an event determination module 17, a filtering module 18, a merging module 19, a large-threshold foreground image acquisition module 20, a counter 21 and an update module 22.
The functions of the background image acquisition module 11, the motion history image acquisition module 12, the small-threshold foreground image acquisition module 13, the fusion module 14, the binarized foreground image acquisition module 15, the counter 16 and the event determination module 17 are the same as those of the corresponding modules in the event detection system shown in Fig. 11 and are not described again here.
The filtering module 18 is configured to filter out, according to a predetermined threshold, the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image. The predetermined threshold can be set to different values for different application scenes.
The merging module 19 is configured to merge, using eight-connectivity, the image blocks in the filtered binarized foreground images of the current frame image and the previous frame image, and to take the processed binarized foreground images as the respective binarized foreground images.
In this embodiment, the binarized foreground image is processed by the filtering module 18 and the merging module 19, which reduces the number of image blocks in it, thereby shortening the detection time and improving detection efficiency.
The large-threshold foreground image acquisition module 20 is configured to detect the current frame image with the large-threshold foreground detection method, in combination with said background image, to obtain the corresponding large-threshold foreground image.
The large-threshold foreground detection method sets a large threshold and compares the probability density of every pixel in the current frame image with this large threshold:
$$L(x, y) = \begin{cases} 1, & \Pr(x_t) < W \\ 0, & \Pr(x_t) \ge W \end{cases}$$
Wherein, W is big threshold value, and its numerical value can be provided with according to concrete condition.
Counter 21 is used for little each pixel of threshold value foreground image of current frame image is counted.
Update module 22 is used to adopt formula
Figure BDA0000108271810000172
upgrades said background image, and with the image as a setting of the background image after upgrading, wherein, a is a renewal speed; (x y) is pixel coordinate, and t is a present frame, and t-1 is a previous frame; Y is preset counting, and (x, y are that coordinate is (x in the said background image t) to B; The gray-scale value of pixel y), (x, y are (x for coordinate in said prior image frame t) to I; The gray-scale value of pixel y), (x, y are that coordinate is (x in the big threshold value foreground image of current frame image t) to L; The gray-scale value of pixel y), (x, y are that coordinate is (x in the little threshold value foreground image of current frame image t) to S; The gray-scale value of pixel y), (x is that coordinate is (x, the counting of pixel y) in the little threshold value foreground image y) to U.
U (x, y)>during Y, show in the little threshold value foreground image coordinate for (x, pixel y) its position does not within a certain period of time change, and like dustbin in the image, then it is thought the pixel in the background image; U (x, y)<during Y, show that coordinate is for (x, pixel y) its position within a certain period of time changes, and then it is thought the pixel in the foreground image in the little threshold value foreground image.
In the present embodiment, the background image of the current frame image is updated by combining the large threshold foreground image and the small threshold foreground image. This update scheme effectively alleviates the difficulty of setting the large and small thresholds and improves the detection of the foreground in the current frame image; it also avoids merging foreground pixels into the background model, i.e. avoids updating the background image with foreground pixels, which improves the accuracy of the background image and in turn the detection accuracy. Here, the foreground image is the foreground region of the image, such as a person, and a foreground pixel is a pixel of that foreground region, such as a pixel of the person; correspondingly, the background image is the background region of the image, such as a tree, and a background pixel is a pixel of that background region, such as a pixel of the tree.
The present embodiment is not based on human body part recognition, trajectory analysis or color features, which improves its universality and further improves the monitoring accuracy of an intelligent video analysis system that uses this event detection method.
Further, as noted above, updating the background image by combining the large threshold and small threshold foreground images avoids blending foreground pixels into the background model, improving the accuracy of the background image and hence of the detection.
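Finally, and again only as an illustrative sketch under assumed data layouts, the gray level change statistic computed by counter 16 and the decision made by event determination module 17 can be expressed as a per-block histogram comparison of the fused motion history images. The block pairing via the binarized foreground images is assumed to have been done by the caller, the blocks are assumed to be uint8 gray-level images, and preset_levels is an assumed placeholder for the preset number of gray levels.

```python
import numpy as np

def changed_gray_levels(mhi_block_cur, mhi_block_prev):
    """Count the gray levels whose pixel count in the current-frame block
    of the fused motion history image exceeds the pixel count of the same
    gray level in the corresponding previous-frame block."""
    hist_cur = np.bincount(mhi_block_cur.ravel(), minlength=256)
    hist_prev = np.bincount(mhi_block_prev.ravel(), minlength=256)
    return int(np.sum(hist_cur > hist_prev))

def violence_event(mhi_block_cur, mhi_block_prev, preset_levels=10):
    """Declare a violence event for this block pair when the number of
    changed gray levels exceeds the preset number of gray levels."""
    return changed_gray_levels(mhi_block_cur, mhi_block_prev) > preset_levels
```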
It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises said element.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. A person of ordinary skill in the art can understand and implement them without creative effort.
The above is only an embodiment of the application. It should be pointed out that, for those skilled in the art, several improvements and refinements can be made without departing from the principle of the application, and these improvements and refinements should also be regarded as falling within the protection scope of the application.

Claims (10)

1. An event detection method, comprising modeling the background of an initial frame image to obtain a background image of said initial frame image, characterized in that the method further comprises:
obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each;
detecting each frame image by a small threshold foreground detection method in combination with said background image, to obtain its corresponding small threshold foreground image;
for each motion history image, fusing the image blocks in said motion history image according to its corresponding small threshold foreground image, and performing binarization on the fused motion history image to obtain a binarized foreground image;
when the total number of pixels of any image block in the binarized foreground image of the current frame image is greater than the total number of pixels of the image block at the corresponding adjacent position in the binarized foreground image of said previous frame image, counting the number of changed gray levels; wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level for which the number of pixels of that gray level in this image block of the fused motion history image corresponding to said current frame image is greater than the number of pixels of the same gray level in the image block at the corresponding adjacent position of the fused motion history image corresponding to said previous frame image;
when said number of changed gray levels is greater than a preset number of gray levels, determining that a violence event has occurred.
2. The event detection method according to claim 1, characterized in that, for each motion history image, fusing the image blocks in said motion history image according to its corresponding small threshold foreground image comprises:
performing the fusion using the formula
Figure FDA0000108271800000011
where (x, y) is the pixel coordinate; t is the current frame and t-1 is the previous frame; τ is a preset gray-scale value; M(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the fused motion history image of the current frame image; S(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the small threshold foreground image of the current frame image; H_τ(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the motion history image of the current frame image; and H_τ(x, y, t-1) is the gray-scale value of the pixel at coordinate (x, y) in the motion history image of the previous frame image.
3. The event detection method according to claim 2, characterized in that, after obtaining the binarized foreground images, the method further comprises:
filtering out, according to a predetermined threshold value, small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image;
performing image block merging, using eight-connectivity, on the filtered binarized foreground images corresponding to the current frame image and the previous frame image respectively, and taking the processed binarized foreground images as the respective binarized foreground images.
4. The event detection method according to any one of claims 1 to 3, characterized in that, after it is determined that an alert event has occurred, the method further comprises:
detecting the current frame image by a large threshold foreground detection method in combination with said background image, to obtain the corresponding large threshold foreground image;
counting each pixel in the small threshold foreground image of said current frame image;
updating said background image by adopting the formula
Figure FDA0000108271800000021
and taking the updated background image as the background image, where a is the update rate; (x, y) is the pixel coordinate; t is the current frame and t-1 is the previous frame; Y is a preset count; B(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in said background image; I(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the current frame image; L(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the large threshold foreground image of the current frame image; S(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the small threshold foreground image of the current frame image; and U(x, y) is the count of the pixel at coordinate (x, y) in the small threshold foreground image.
5. The event detection method according to any one of claims 1 to 3, characterized in that the background of said initial frame image is modeled using a Gaussian mixture model.
6. The event detection method according to any one of claims 1 to 3, characterized in that the total number of pixels of any image block in the binarized foreground image of said current frame image is the product of the actual number of pixels of this image block and a preset value.
7. An event detection system, comprising a background image acquisition module used for modeling the background of an initial frame image to obtain a background image of said initial frame image, characterized in that the system further comprises:
a motion history image acquisition module, used for obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each;
a small threshold foreground image acquisition module, used for detecting each frame image by a small threshold foreground detection method in combination with said background image, to obtain its corresponding small threshold foreground image;
a fusion module, used for, for each motion history image, fusing the image blocks in said motion history image according to its corresponding small threshold foreground image;
a binarized foreground image acquisition module, used for performing binarization on the fused motion history image to obtain a binarized foreground image;
a counter, used for counting the number of changed gray levels when the total number of pixels of any image block in the binarized foreground image of the current frame image is greater than the total number of pixels of the image block at the corresponding adjacent position in the binarized foreground image of said previous frame image; wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level for which the number of pixels of that gray level in this image block of the fused motion history image corresponding to said current frame image is greater than the number of pixels of the same gray level in the image block at the corresponding adjacent position of the fused motion history image corresponding to said previous frame image;
an event determination module, used for determining that a violence event has occurred when said number of changed gray levels is greater than a preset number of gray levels.
8. The event detection system according to claim 7, characterized in that said fusion module is specifically used for performing the fusion using the formula
Figure FDA0000108271800000031
where (x, y) is the pixel coordinate; t is the current frame and t-1 is the previous frame; τ is a preset gray-scale value; M(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the fused motion history image of the current frame image; S(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the small threshold foreground image of the current frame image; H_τ(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the motion history image of the current frame image; and H_τ(x, y, t-1) is the gray-scale value of the pixel at coordinate (x, y) in the motion history image of the previous frame image.
9. The event detection system according to claim 8, characterized in that it further comprises:
a filtering module, used for filtering out, according to a predetermined threshold value, small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image;
a merging module, used for performing image block merging, using eight-connectivity, on the filtered binarized foreground images corresponding to the current frame image and the previous frame image respectively, and taking the processed binarized foreground images as the respective binarized foreground images.
10. The event detection system according to any one of claims 7 to 9, characterized in that it further comprises:
a large threshold foreground image acquisition module, used for detecting the current frame image by a large threshold foreground detection method in combination with said background image, to obtain the corresponding large threshold foreground image;
a counter, used for counting each pixel in the small threshold foreground image of said current frame image;
an update module, used for updating said background image by adopting the formula, and taking the updated background image as the background image, where a is the update rate; (x, y) is the pixel coordinate; t is the current frame and t-1 is the previous frame; Y is a preset count; B(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in said background image; I(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the current frame image; L(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the large threshold foreground image of the current frame image; S(x, y, t) is the gray-scale value of the pixel at coordinate (x, y) in the small threshold foreground image of the current frame image; and U(x, y) is the count of the pixel at coordinate (x, y) in the small threshold foreground image.
CN2011103594332A 2011-11-14 2011-11-14 Event detection method and event detection system Active CN102496164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103594332A CN102496164B (en) 2011-11-14 2011-11-14 Event detection method and event detection system

Publications (2)

Publication Number Publication Date
CN102496164A true CN102496164A (en) 2012-06-13
CN102496164B CN102496164B (en) 2013-12-11

Family

ID=46187986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103594332A Active CN102496164B (en) 2011-11-14 2011-11-14 Event detection method and event detection system

Country Status (1)

Country Link
CN (1) CN102496164B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212510B1 (en) * 1998-01-30 2001-04-03 Mitsubishi Electric Research Laboratories, Inc. Method for minimizing entropy in hidden Markov models of physical signals
CN101251927A (en) * 2008-04-01 2008-08-27 东南大学 Vehicle detecting and tracing method based on video technique
CN101303727A (en) * 2008-07-08 2008-11-12 北京中星微电子有限公司 Intelligent management method based on video human number Stat. and system thereof
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101872244A (en) * 2010-06-25 2010-10-27 中国科学院软件研究所 Method for human-computer interaction based on hand movement and color information of user

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Junxue et al.: "Real-time tracking of multiple moving objects based on improved motion history images", Journal of Computer Applications *
XU Shan: "Abnormal crowd event detection based on video analysis", China Excellent Master's Theses Electronic Journal Network *
CAI Hui: "Research on motion detection and object tracking methods for image sequences", China Excellent Master's Theses Electronic Journal Network *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106612385A (en) * 2015-10-22 2017-05-03 株式会社理光 Video detection method and video detection device
CN106612385B (en) * 2015-10-22 2019-09-06 株式会社理光 Video detecting method and video detecting device
CN107491731A (en) * 2017-07-17 2017-12-19 南京航空航天大学 A kind of Ground moving target detection and recognition methods towards precision strike
CN110119653A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Image processing method, device and computer-readable medium
CN108764028A (en) * 2018-04-13 2018-11-06 北京航天自动控制研究所 A kind of method of filtering mode processing frame difference method On-Screen Identification label
CN108764028B (en) * 2018-04-13 2020-07-14 北京航天自动控制研究所 Method for processing screen identification label by frame difference method in filtering mode
CN110879948A (en) * 2018-09-06 2020-03-13 华为技术有限公司 Image processing method, device and storage medium
CN110879948B (en) * 2018-09-06 2022-10-18 华为技术有限公司 Image processing method, device and storage medium
CN110379050A (en) * 2019-06-06 2019-10-25 上海学印教育科技有限公司 A kind of gate control method, apparatus and system
CN112330618A (en) * 2020-10-29 2021-02-05 浙江大华技术股份有限公司 Image offset detection method, device and storage medium
CN112330618B (en) * 2020-10-29 2023-09-01 浙江大华技术股份有限公司 Image offset detection method, device and storage medium
CN112966556A (en) * 2021-02-02 2021-06-15 豪威芯仑传感器(上海)有限公司 Moving object detection method and system

Also Published As

Publication number Publication date
CN102496164B (en) 2013-12-11

Similar Documents

Publication Publication Date Title
CN102496164B (en) Event detection method and event detection system
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
CN103826102B (en) A kind of recognition methods of moving target, device
CN103971521B (en) Road traffic anomalous event real-time detection method and device
CN105744232A (en) Method for preventing power transmission line from being externally broken through video based on behaviour analysis technology
CN101799968B (en) Detection method and device for oil well intrusion based on video image intelligent analysis
Danescu et al. Detection and classification of painted road objects for intersection assistance applications
CN103605967A (en) Subway fare evasion prevention system and working method thereof based on image recognition
CN106067003A (en) Road vectors tag line extraction method in a kind of Vehicle-borne Laser Scanning point cloud
CN103281477A (en) Multi-level characteristic data association-based multi-target visual tracking method
CN104662585B (en) The method and the event monitoring device using the method for event rules are set
CN106022278A (en) Method and system for detecting people wearing burka in video images
Martínez-Martín et al. Robust motion detection in real-life scenarios
CN103049749A (en) Method for re-recognizing human body under grid shielding
CN106529404A (en) Imaging principle-based recognition method for pilotless automobile to recognize road marker line
Zhang et al. Counting vehicles in urban traffic scenes using foreground time‐spatial images
CN101950352A (en) Target detection method capable of removing illumination influence and device thereof
CN107729811B (en) Night flame detection method based on scene modeling
CN105118072A (en) Method and device for tracking multiple moving targets
CN101877135A (en) Moving target detecting method based on background reconstruction
Płaczek A real time vehicle detection algorithm for vision-based sensors
Garg et al. Vehicle Lane Detection for Accident Prevention and Smart Autodrive Using OpenCV
Bourja et al. Movits: Moroccan video intelligent transport system
Kim et al. Robust lane detection for video-based navigation systems
CN105512653A (en) Method for detecting vehicle in urban traffic scene based on vehicle symmetry feature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: 100070 Beijing Fengtai District Branch Road No. 9 room 113

Applicant after: CRSC Communication &Information Corporation

Address before: 100071 No. 11 East Fengtai Road, Beijing, Fengtai District

Applicant before: Beijing China Railway Huachen Communication Information Technology Co., Ltd.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY CO.,LTD. TO: TONGHAO COMMUNICATION INFORMATION GROUP CO., LTD.

C14 Grant of patent or utility model
GR01 Patent grant