CN102496164B - Event detection method and event detection system - Google Patents

Event detection method and event detection system

Publication number: CN102496164B (granted); earlier publication CN102496164A
Application number: CN2011103594332A
Authority: CN (China)
Inventors: 安国成, 李洪研, 罗志强
Assignee: CRSC Communication and Information Group Co Ltd (CRSCIC)
Legal status: Active (granted)
Prior art keywords: image, pixel, current frame, frame image, foreground
Abstract

The invention discloses an event detection method and an event detection system. The method comprises: modeling the background of an initial frame image to obtain a background image; obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each; obtaining the small-threshold foreground image corresponding to each; fusing the image blocks in each motion history image and binarizing the fused motion history image to obtain a binarized foreground image; when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than that of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, counting the number of changed gray levels; and determining that a violent event has occurred when that number exceeds a preset number of gray levels. Because the method and system are not based on body-part recognition, trajectory analysis or color features, their generality is improved, and the monitoring accuracy of an intelligent video analysis system that uses them is improved as well.

Description

Event detection method and system
Technical field
The present application relates to the field of image processing, and in particular to an event detection method and system.
Background art
An intelligent video analysis system analyzes video automatically: it extracts and records, in real time, events of interest to the user that occur in the video, and raises an alarm accordingly. Typical tasks include detecting whether pedestrians or vehicles intrude into a prohibited area, loiter or stop there for a long time, or whether a violent event occurs in the video.
To detect violent events in video, such a system can use a variety of detection methods. For example, Ankur Datta et al., "Person-on-Person Violence Detection in Video Data", ICPR (International Conference on Pattern Recognition) 2002, pp. 433-438, describes a pipeline of human detection, silhouette extraction, limb identification and head tracking, and uses motion trajectory information to detect behaviors such as punching, kicking and striking. Alessandro Mecocci et al., "Real-Time Recognition of Violent Acts in Monocular Colour Video Sequences", Signal Processing Applications for Public Security and Forensics, 2007, partitions the clothing colors of the participants into blocks and uses the per-block clothing color information to detect violent events.
However, the clothing colors, body builds and fighting postures of the participants in a violent event are highly diverse. This diversity makes event detection methods based on human-body recognition, trajectory analysis or color features poorly generalizable, and an intelligent video analysis system that relies on such a method therefore monitors violent events with reduced accuracy.
Summary of the invention
In view of this, embodiments of the present application disclose an event detection method and system that improve the generality of detection and, further, improve the monitoring accuracy of an intelligent video analysis system that uses the disclosed method. The technical solution is as follows:
In one aspect of the present application, an event detection method is disclosed. The method comprises modeling the background of an initial frame image to obtain a background image of the initial frame image, and further comprises:
obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each;
detecting each frame image with a small-threshold foreground detection method, in combination with the background image, to obtain the small-threshold foreground image corresponding to each;
for each motion history image, fusing the image blocks in the motion history image according to its corresponding small-threshold foreground image, and binarizing the fused motion history image to obtain a binarized foreground image;
when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, counting the number of changed gray levels, where the number of changed gray levels is the total count of changed gray levels, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image; and
determining that a violent event has occurred when the number of changed gray levels is greater than a preset number of gray levels.
Preferably, fusing the image blocks in the motion history image according to its corresponding small-threshold foreground image comprises performing the fusion with a fusion formula (rendered as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
Preferably, after the binarized foreground image is obtained, the method further comprises:
filtering out, according to a preset threshold, the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image; and
merging the image blocks of the filtered binarized foreground images of the current frame image and the previous frame image using eight-connectivity, the processed binarized foreground images serving as the respective binarized foreground images.
Preferably, after it is determined that an alarm event has occurred, the method further comprises:
detecting the current frame image with a large-threshold foreground detection method, in combination with the background image, to obtain its large-threshold foreground image;
counting each pixel in the small-threshold foreground image of the current frame image; and
updating the background image with an update formula (rendered as an image in the original publication) and using the updated background image as the background image, where a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel at (x, y) in the background image, I(x, y, t) is the gray value of the pixel at (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
Preferably, a Gaussian mixture model is used to model the background of the initial frame image.
Preferably, the total pixel count of any image block in the binarized foreground image of the current frame image is the product of the actual number of pixels of that image block and a preset value.
In another aspect of the present application, an event detection system is also disclosed. The system comprises a background image acquisition module configured to model the background of an initial frame image and obtain a background image of the initial frame image, and further comprises:
a motion history image acquisition module, configured to obtain a current frame image and a previous frame image and to obtain the motion history image corresponding to each;
a small-threshold foreground image acquisition module, configured to detect each frame image with a small-threshold foreground detection method, in combination with the background image, and obtain the small-threshold foreground image corresponding to each;
a fusion module, configured to fuse, for each motion history image, the image blocks in the motion history image according to its corresponding small-threshold foreground image;
a binarized foreground image acquisition module, configured to binarize the fused motion history image and obtain a binarized foreground image;
a counter, configured to count the number of changed gray levels when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, where the number of changed gray levels is the total count of changed gray levels, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image; and
an event determination module, configured to determine that a violent event has occurred when the number of changed gray levels is greater than a preset number of gray levels.
Preferably, the fusion module is specifically configured to perform the fusion with a fusion formula (rendered as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
Preferably, the system further comprises:
a filtering module, configured to filter out, according to a preset threshold, the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image; and
a merging module, configured to merge the image blocks of the filtered binarized foreground images of the current frame image and the previous frame image using eight-connectivity, the processed binarized foreground images serving as the respective binarized foreground images.
Preferably, the system further comprises:
a large-threshold foreground image acquisition module, configured to detect the current frame image with a large-threshold foreground detection method, in combination with the background image, and obtain its large-threshold foreground image;
a counter, configured to count each pixel in the small-threshold foreground image of the current frame image; and
an update module, configured to update the background image with an update formula (rendered as an image in the original publication) and use the updated background image as the background image, where a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel at (x, y) in the background image, I(x, y, t) is the gray value of the pixel at (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, the total count of gray levels whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image is taken as the number of changed gray levels. When this number is greater than a preset number of gray levels, it is determined that a violent event has occurred. Compared with the prior art, the event detection method disclosed in the present application is not based on human-body recognition, trajectory analysis or color features, so its generality is improved, and the monitoring accuracy of an intelligent video analysis system using this method is improved as well.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present application; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an event detection method disclosed in an embodiment of the present application;
Fig. 2 is an initial frame image;
Fig. 3 is the background image of the initial frame image shown in Fig. 2;
Fig. 4 is a current frame image;
Fig. 5 is the motion history image corresponding to the current frame image shown in Fig. 4;
Fig. 6 is the small-threshold foreground image corresponding to the current frame image shown in Fig. 4;
Fig. 7 is the fused motion history image of the motion history image shown in Fig. 5;
Fig. 8 is the binarized foreground image of the fused motion history image shown in Fig. 7;
Fig. 9 is another flowchart of the event detection method disclosed in an embodiment of the present application;
Fig. 10 is the large-threshold foreground image corresponding to the current frame image shown in Fig. 4;
Fig. 11 is a structural diagram of an event detection system disclosed in an embodiment of the present application;
Fig. 12 is another structural diagram of the event detection system disclosed in an embodiment of the present application.
Detailed description of the embodiments
Through research, the applicant found that existing event detection methods detect violent events in video based on human-body recognition, trajectory analysis or color features. In practice, the diversity of the participants' clothing colors, body builds and fighting postures in violent events reduces the generality of such methods, which in turn reduces the monitoring accuracy of an intelligent video analysis system that uses them.
To address this problem, the applicant studied violent events and summarized a highly abstract characterization: when a violent event occurs, the motion in the current frame image is intense, and an image block in the current frame image expands rapidly relative to the image block at the adjacent position in the previous frame image. A violent event can therefore be determined as follows: first judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the adjacent position in the binarized foreground image of the previous frame image; if so, count the gray levels whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image, and take this total as the number of changed gray levels. When the number of changed gray levels is greater than a preset number of gray levels, it is determined that a violent event has occurred.
To make the above objects, features and advantages of the present application clearer, the application is described in further detail below with reference to the drawings and specific embodiments. In this specification, the images at the current moment, the previous moment, the moment before that, and the initial moment are called the current frame image, the previous frame image, the frame image before the previous frame, and the initial frame image, respectively.
Referring to Fig. 1, which is a flowchart of an event detection method disclosed in an embodiment of the present application, the method can comprise the following steps:
S101: model the background of the initial frame image and obtain the background image of the initial frame image.
Background modeling of the initial frame image can use an existing image modeling method, for example a Gaussian mixture model.
Concretely, the Gaussian mixture model uses K Gaussian functions to compute the probability density that the pixel x_t at coordinate (x, y) in the image belongs to the background:

Pr(x_t) = (1/K) Σ_{i=1}^{K} Π_{j=1}^{d} (1/√(2πσ_j²)) · exp(−(x_t^j − x_i^j)² / (2σ_j²))

where d is the dimension of the color space used (d = 3 for the three-channel RGB color space, d = 1 for a single-channel gray-level image), σ_j is the standard deviation of channel j, x_t^j is the gray value of pixel x_t in channel j, and x_i^j is the gray value of pixel x_t in channel j of the i-th Gaussian function.
Modeling the background of the initial frame image shown in Fig. 2 with this Gaussian mixture model yields the background image shown in Fig. 3.
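As an illustration only, the following Python sketch evaluates this mixture-of-Gaussians probability density for a single pixel. The array layout (one row per Gaussian, one column per channel) and the example values are assumptions made for the sketch, not part of the patent.

```python
import numpy as np

def background_probability(pixel, means, sigmas):
    """Mixture-of-Gaussians density Pr(x_t) for one pixel.

    pixel  : array of shape (d,)    -- gray values of the d channels
    means  : array of shape (K, d)  -- per-Gaussian, per-channel values x_i^j
    sigmas : array of shape (d,)    -- per-channel standard deviations sigma_j
    """
    var = sigmas ** 2
    norm = 1.0 / np.sqrt(2.0 * np.pi * var)                # shape (d,)
    expo = np.exp(-((pixel - means) ** 2) / (2.0 * var))   # shape (K, d)
    per_component = np.prod(norm * expo, axis=1)           # product over channels j
    return per_component.mean()                            # average over the K Gaussians

# Example: single-channel gray image (d = 1), K = 3 Gaussians.
pr = background_probability(np.array([120.0]),
                            means=np.array([[118.0], [60.0], [200.0]]),
                            sigmas=np.array([10.0]))
print(pr)
```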
S102: obtain the current frame image and the previous frame image, and obtain the motion history image corresponding to each.
The motion history image of any frame image can be obtained by comparing that frame with its previous frame, as follows:
Suppose the frame is the current frame image I(x, y, t) and its previous frame image is I(x, y, t-1). The two consecutive frames are compared with the formula

D(x, y, t) = 1, if |I(x, y, t) − I(x, y, t−1)| > C; 0, if |I(x, y, t) − I(x, y, t−1)| ≤ C

to obtain the binary difference image of the current frame image, where C controls the sensitivity of the binary difference image to changes relative to the background image and can be set according to the concrete situation. For example, to strengthen the anti-interference capability of the event detection method disclosed in this embodiment, C should be set to a larger value.
The binary difference image is then processed with a motion-history update formula (rendered as an image in the original publication) to obtain the motion history image of the current frame image, in which gray values range from 0 to 255. Here (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value that can be set according to the concrete situation: when the motion in the monitored video is intense, reduce τ; otherwise, increase it.
In this embodiment, Fig. 4 is the current frame image of a certain video and Fig. 5 is its corresponding motion history image. As can be seen from Fig. 5, the motion history image is a gray-level image whose brightness runs from black to white, and the dark-to-bright direction indicates the direction of motion.
Note that the motion history image of the initial frame image is set by the user, with the gray values of all its pixels set to 0.
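The patent's own motion-history update formula appears only as an image, so the sketch below uses the standard motion-history-image update (set a pixel to τ where the binary difference is 1, otherwise decay it toward 0) as a stand-in consistent with the surrounding definitions; the decay step of 1 per frame is an assumption made for the example.

```python
import numpy as np

def binary_difference(curr, prev, C=15):
    """D(x, y, t): 1 where the two frames differ by more than C, else 0."""
    return (np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > C).astype(np.uint8)

def update_motion_history(mhi_prev, diff, tau=255, decay=1):
    """Standard MHI update: refresh moving pixels to tau, decay the rest toward 0."""
    mhi = np.where(diff == 1, tau, np.maximum(mhi_prev.astype(np.int16) - decay, 0))
    return mhi.astype(np.uint8)

# Example on two random 4x4 "frames"; the initial MHI is all zeros, as in S102.
prev = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
curr = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mhi = np.zeros((4, 4), dtype=np.uint8)
mhi = update_motion_history(mhi, binary_difference(curr, prev))
print(mhi)
```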
S103: detect each frame image with a small-threshold foreground detection method, in combination with the background image, and obtain the small-threshold foreground image corresponding to each.
The small-threshold foreground detection method sets a small threshold and compares the probability density of every pixel in the current frame image against it:

S(x, y) = 1, if Pr(x_t) < R; 0, if Pr(x_t) ≥ R

where R is the small threshold, whose value can be set according to the concrete situation.
As the formula shows, when the probability density of a pixel is less than the small threshold, its gray value is set to 1; when the probability density is greater than or equal to the small threshold, its gray value is set to 0. Fig. 6 shows the small-threshold foreground image corresponding to the current frame image in Fig. 4. A small-threshold foreground image suppresses the influence of illumination and shadows on the image, but if the small threshold is too small the resulting foreground image can contain holes and/or breaks.
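A minimal sketch of this thresholding step (and of the large-threshold variant used later in S811), assuming the per-pixel probability map has already been produced by the background model; the threshold values 0.05 and 0.30 are arbitrary example values, not values from the patent.

```python
import numpy as np

def threshold_foreground(prob_map, threshold):
    """Foreground mask: 1 where Pr(x_t) < threshold, 0 otherwise.

    With a small threshold R this gives the small-threshold foreground image S;
    the same routine with a large threshold W gives the large-threshold image L.
    """
    return (prob_map < threshold).astype(np.uint8)

# Example: prob_map would come from the Gaussian mixture background model.
prob_map = np.random.rand(4, 4)
S = threshold_foreground(prob_map, threshold=0.05)   # small threshold R (example value)
L = threshold_foreground(prob_map, threshold=0.30)   # large threshold W (example value)
print(S, L, sep="\n")
```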
S104: for each motion history image, fuse the image blocks in the motion history image according to its corresponding small-threshold foreground image, binarize the fused motion history image, and obtain the binarized foreground image.
As can be seen from Fig. 5, although the motion history image records the motion history of the foreground, the membership of its image blocks is unclear, and as the motion amplitude of a detected target gradually decreases, the detected motion fades away. For example, the person wearing dark clothes (a foreground object) in Fig. 4 corresponds to several image blocks in the motion history image, and Fig. 5 does not make clear whether those blocks belong to the same foreground object. This embodiment therefore processes the motion history image so that the membership of its image blocks becomes clear.
Different application scenes can process the motion history image in different ways. In this embodiment, the image blocks are fused according to the small-threshold foreground image corresponding to the motion history image, specifically by applying a fusion formula (rendered as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value.
For the same frame image, the value of τ used when fusing the motion history image is the same as the value of τ used when obtaining that frame's motion history image, and it can be set according to the concrete situation: when the motion in the monitored video is intense, reduce τ; otherwise, increase it.
The fused motion history image and the binarized foreground image are shown in Fig. 7 and Fig. 8, respectively: Fig. 7 is the fused motion history image of the motion history image in Fig. 5, and Fig. 8 is the binarized foreground image of the fused motion history image in Fig. 7. In the fused motion history image the membership of the image blocks is clear and there are no holes and/or breaks; likewise, the binarized foreground image has no holes and/or breaks.
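Because the exact fusion formula is printed as an image in the original, the sketch below shows one plausible reading consistent with the surrounding definitions (raise the motion-history value to τ wherever the small-threshold foreground marks the pixel as foreground, and keep the current motion-history value elsewhere), followed by the binarization step. Treat it as an assumption, not the patented formula.

```python
import numpy as np

def fuse_motion_history(mhi_curr, small_fg, tau=255):
    """One plausible fusion M(x, y, t): foreground pixels are filled to tau so that
    blocks belonging to the same foreground object become connected."""
    return np.where(small_fg == 1, tau, mhi_curr).astype(np.uint8)

def binarize(fused_mhi):
    """Binarized foreground image: 1 wherever the fused MHI is non-zero."""
    return (fused_mhi > 0).astype(np.uint8)

# Example with arrays shaped like those produced by the earlier sketches.
mhi = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
small_fg = np.random.randint(0, 2, (4, 4), dtype=np.uint8)
binary_fg = binarize(fuse_motion_history(mhi, small_fg))
print(binary_fg)
```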
S105: judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image; if so, execute step S106; if not, execute step S109.
Here the total pixel count of an image block in the binarized foreground image of the current frame image is the product of the actual number of pixels of that block and a preset value; the preset value can be set according to the concrete scene and is usually greater than 2.
In this embodiment, the image block at the adjacent position corresponding to a given block can be determined by nearest-neighbor matching between the binarized foreground image of the current frame image and that of the previous frame image. If the total pixel count of a block in the current frame's binarized foreground image is greater than the total pixel count of the corresponding adjacent block in the previous frame's binarized foreground image, the block is expanding rapidly relative to its adjacent block and a violent event may be occurring in the current frame image; if it is not greater, the block is expanding slowly and no violent event is occurring in the current frame image.
S106: count the number of changed gray levels.
The number of changed gray levels is the total count of changed gray levels, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image.
S107: judge whether the number of changed gray levels is greater than the preset number of gray levels; if so, execute step S108; if not, execute step S109.
The preset number of gray levels can be set according to the application scene; for example, when different degrees of motion intensity are used to define a violent event, the preset number of gray levels differs accordingly.
S108: determine that a violent event has occurred.
S109: determine that no violent event has occurred.
After determining whether a violent event has occurred, the next frame image is taken as the current frame image, the current frame image of this detection pass becomes the previous frame image, and the next detection pass begins.
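A sketch of the per-block decision in steps S105 to S107, assuming each matched block pair is available as boolean masks over the fused motion history images; the helper names, the 256-bin histogram comparison, and the example parameter values are illustrative assumptions.

```python
import numpy as np

def changed_gray_levels(fused_curr, fused_prev, block_curr, block_prev):
    """Count gray levels whose pixel count in the current block exceeds the count
    of the same gray level in the adjacent block of the previous frame."""
    hist_curr = np.bincount(fused_curr[block_curr].ravel(), minlength=256)
    hist_prev = np.bincount(fused_prev[block_prev].ravel(), minlength=256)
    return int(np.sum(hist_curr > hist_prev))

def violent_event(fused_curr, fused_prev, block_curr, block_prev,
                  preset_levels=30, preset_factor=3):
    """S105-S107: expansion test on pixel counts, then the changed-gray-level test."""
    # Per the text, the current block's total pixel count is its actual pixel
    # count multiplied by a preset value (here preset_factor).
    if block_curr.sum() * preset_factor <= block_prev.sum():
        return False                      # not expanding fast enough (S109)
    n_changed = changed_gray_levels(fused_curr, fused_prev, block_curr, block_prev)
    return n_changed > preset_levels      # S107 -> S108 / S109

# Example: block masks over 8x8 fused motion history images.
curr = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
prev = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
print(violent_event(curr, prev, mask, mask))
```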
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, the number of changed gray levels is counted as described above, and a violent event is determined to have occurred when that number exceeds the preset number of gray levels. Compared with the prior art, the event detection method disclosed in the present application is not based on human-body recognition, trajectory analysis or color features, so its generality is improved, and the monitoring accuracy of an intelligent video analysis system using this method is improved as well.
Referring to Fig. 9, which shows another flowchart of the event detection method disclosed in an embodiment of the present application, this embodiment can be understood as a concrete example of applying the event detection method of the present application in practice, and can comprise the following steps:
S801: model the background of the initial frame image and obtain the background image of the initial frame image.
S802: obtain the current frame image and the previous frame image, and obtain the motion history image corresponding to each.
S803: detect each frame image with the small-threshold foreground detection method, in combination with the background image, and obtain the small-threshold foreground image corresponding to each.
S804: for each motion history image, fuse the image blocks in the motion history image according to its corresponding small-threshold foreground image, binarize the fused motion history image, and obtain the binarized foreground image.
S805: process the binarized foreground image. Specifically: according to a preset threshold, filter out the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image; then merge the image blocks of the filtered binarized foreground images of the current frame image and the previous frame image using eight-connectivity, the processed binarized foreground images serving as the respective binarized foreground images. This reduces the number of image blocks, shortens the detection time, and improves detection efficiency.
The preset threshold can be set to different values for different application scenes.
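A sketch of the S805 post-processing, assuming OpenCV's connected-components routine is used for the eight-connectivity labeling; the area threshold of 50 pixels is an arbitrary example value, not a value from the patent.

```python
import cv2
import numpy as np

def clean_binary_foreground(binary_fg, min_area=50):
    """Filter out small blocks, then group pixels into blocks by 8-connectivity."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        binary_fg.astype(np.uint8), connectivity=8)
    cleaned = np.zeros_like(binary_fg, dtype=np.uint8)
    for label in range(1, num):                       # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 1              # keep sufficiently large blocks
    return cleaned
```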
S806: judge whether the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image; if so, execute step S807; if not, execute step S810.
S807: count the number of changed gray levels.
The number of changed gray levels is the total count of changed gray levels, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image.
S808: judge whether the number of changed gray levels is greater than the preset number of gray levels; if so, execute step S809; if not, execute step S810.
S809: determine that a violent event has occurred.
S810: determine that no violent event has occurred.
S811: detect the current frame image with the large-threshold foreground detection method, in combination with the background image, and obtain its large-threshold foreground image.
The large-threshold foreground detection method sets a large threshold and compares the probability density of every pixel in the current frame image against it:

L(x, y) = 1, if Pr(x_t) < W; 0, if Pr(x_t) ≥ W

where W is the large threshold, whose value can be set according to the concrete situation.
As the formula shows, when the probability density of a pixel is less than the large threshold, its gray value is set to 1; when the probability density is greater than or equal to the large threshold, its gray value is set to 0. Fig. 10 shows the large-threshold foreground image corresponding to the current frame image in Fig. 4. The small-threshold foreground image suppresses the influence of illumination and shadows, but the corresponding large-threshold foreground image can contain false alarms if the large threshold is too large.
The large-threshold foreground image contains two kinds of regions: regions of the current frame image that change markedly, such as the moving foreground against the background, and regions that change only slightly, such as shadow regions and regions where the illumination varies slowly. The small-threshold foreground image contains only the markedly changing regions of the current frame image, such as the moving foreground against the background; the large-threshold foreground image therefore contains the small-threshold foreground image.
S812: count each pixel in the small-threshold foreground image of the current frame image.
S813: update the background image with an update formula (rendered as an image in the original publication), the updated background image serving as the background image of the current frame image. Here a is the update rate, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel at (x, y) in the background image of the current frame image, I(x, y, t) is the gray value of the pixel at (x, y) in the binary difference image of the current frame image, L(x, y, t) is the gray value of the pixel at (x, y) in the large-threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, and U(x, y) is the count of the pixel at (x, y) in the small-threshold foreground image.
When U(x, y) > Y, the pixel at (x, y) in the small-threshold foreground image has not changed its position for a certain time (for example, a trash bin in the image) and is treated as a pixel of the background image; when U(x, y) < Y, the pixel has changed its position within that time and is treated as a pixel of the foreground image.
In this embodiment, the background image of the current frame image is updated by combining the large-threshold foreground image and the small-threshold foreground image. This update scheme effectively eases the problem of choosing the large and small thresholds and improves the detection of the foreground in the current frame image; it avoids mixing foreground pixels into background pixels and avoids updating the foreground into the background image, which improves the accuracy of the background image and, in turn, the detection accuracy. Here the foreground is the foreground region of the image (for example, a person) and a foreground pixel is a pixel of the foreground region (for example, a pixel of the person); correspondingly, the background is the background region of the image (for example, a tree) and a background pixel is a pixel of the background region (for example, a pixel of the tree).
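The background-update formula itself is printed as an image in the original publication, so the sketch below shows one common running-average style update consistent with the surrounding description: a pixel is updated only where it is judged to be background (outside the large-threshold foreground, or a small-threshold foreground pixel whose counter U exceeds Y). Read it as an assumption, not the patented formula; all names and parameter values are illustrative.

```python
import numpy as np

def update_counts(counts, small_fg):
    """S812: increment the per-pixel counter where the small-threshold foreground is set."""
    return counts + (small_fg == 1)

def update_background(background, frame, large_fg, small_fg, counts, a=0.05, Y=50):
    """Running-average background update gated by the two foreground masks."""
    # Treat a pixel as static background if it lies outside the large-threshold
    # foreground, or if it is in the small-threshold foreground but its counter
    # U(x, y) says it has not moved for a long time (U > Y).
    static = (large_fg == 0) | ((small_fg == 1) & (counts > Y))
    updated = background.astype(np.float32)
    updated[static] = (1.0 - a) * updated[static] + a * frame[static].astype(np.float32)
    return updated.astype(np.uint8)
```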
With the above technical solution, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, the number of changed gray levels is counted as described above, and a violent event is determined to have occurred when that number exceeds the preset number of gray levels. Compared with the prior art, the event detection method disclosed in the present application is not based on human-body recognition, trajectory analysis or color features, so its generality is improved, and the monitoring accuracy of an intelligent video analysis system using this method is improved as well.
Further, the background image of the current frame image is updated by combining the large-threshold foreground image and the small-threshold foreground image. This update scheme effectively eases the problem of choosing the large and small thresholds, improves the detection of the foreground in the current frame image, avoids mixing foreground pixels into background pixels, and avoids updating the foreground into the background image, thereby improving the accuracy of the background image and, in turn, the detection accuracy.
Corresponding to the above method embodiments, an embodiment of the present application also discloses an event detection system whose structure is shown in Fig. 11, comprising: a background image acquisition module 11, a motion history image acquisition module 12, a small-threshold foreground image acquisition module 13, a fusion module 14, a binarized foreground image acquisition module 15, a counter 16, and an event determination module 17, where:
the background image acquisition module 11 is configured to model the background of the initial frame image and obtain the background image of the initial frame image. Background modeling of the initial frame image can use an existing image modeling method, for example a Gaussian mixture model.
Concretely, the Gaussian mixture model uses K Gaussian functions to compute the probability density that the pixel x_t at coordinate (x, y) in the image belongs to the background:

Pr(x_t) = (1/K) Σ_{i=1}^{K} Π_{j=1}^{d} (1/√(2πσ_j²)) · exp(−(x_t^j − x_i^j)² / (2σ_j²))

where d is the dimension of the color space used (d = 3 for the three-channel RGB color space, d = 1 for a single-channel gray-level image), σ_j is the standard deviation of channel j, x_t^j is the gray value of pixel x_t in channel j, and x_i^j is the gray value of pixel x_t in channel j of the i-th Gaussian function.
The motion history image acquisition module 12 is configured to obtain the current frame image and the previous frame image, and to obtain the motion history image corresponding to each.
In this embodiment, the module 12 obtains the motion history image corresponding to each frame image as follows. First, the two consecutive frames are compared with the formula

D(x, y, t) = 1, if |I(x, y, t) − I(x, y, t−1)| > C; 0, if |I(x, y, t) − I(x, y, t−1)| ≤ C

to obtain the binary difference image of the current frame image, where C controls the sensitivity of the binary difference image to changes relative to the background image and can be set according to the concrete situation (for example, to strengthen the anti-interference capability of the disclosed method, C should be set larger), I(x, y, t) is the current frame image, and I(x, y, t-1) is its previous frame image.
The binary difference image is then processed with the motion-history update formula (rendered as an image in the original publication) to obtain the motion history image of the current frame image, in which gray values range from 0 to 255. Here (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and τ is a preset gray value that can be set according to the concrete situation: when the motion in the monitored video is intense, reduce τ; otherwise, increase it.
The small-threshold foreground image acquisition module 13 is configured to detect each frame image with the small-threshold foreground detection method, in combination with the background image, and obtain the small-threshold foreground image corresponding to each. The small-threshold foreground detection method sets a small threshold and compares the probability density of every pixel in the current frame image against it:

S(x, y) = 1, if Pr(x_t) < R; 0, if Pr(x_t) ≥ R

where R is the small threshold, whose value can be set according to the concrete situation.
The fusion module 14 is configured to fuse, for each motion history image, the image blocks in the motion history image according to its corresponding small-threshold foreground image, specifically by applying the fusion formula (rendered as an image in the original publication), where (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel at (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel at (x, y) in the small-threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel at (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel at (x, y) in the motion history image of the previous frame image.
For the same frame image, the value of τ used when fusing the motion history image is the same as the value of τ used when obtaining that frame's motion history image, and it can be set according to the concrete situation: when the motion in the monitored video is intense, reduce τ; otherwise, increase it.
The binarized foreground image acquisition module 15 is configured to binarize the fused motion history image and obtain the binarized foreground image.
The counter 16 is configured to count the number of changed gray levels when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, where the number of changed gray levels is the total count of changed gray levels, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image of the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image of the previous frame image.
Here the total pixel count of an image block in the binarized foreground image of the current frame image is the product of the actual number of pixels of that block and a preset value; the preset value can be set according to the concrete scene and is usually greater than 2.
In this embodiment, the image block at the adjacent position corresponding to a given block can be determined by nearest-neighbor matching between the binarized foreground image of the current frame image and that of the previous frame image. If the total pixel count of a block in the current frame's binarized foreground image is greater than the total pixel count of the corresponding adjacent block in the previous frame's binarized foreground image, the block is expanding rapidly relative to its adjacent block and a violent event may be occurring in the current frame image; if it is not greater, the block is expanding slowly and no violent event is occurring in the current frame image.
The event determination module 17 is configured to determine that a violent event has occurred when the number of changed gray levels is greater than the preset number of gray levels.
In this embodiment, when the total pixel count of any image block in the binarized foreground image of the current frame image is greater than the total pixel count of the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, the number of changed gray levels is counted as described above, and a violent event is determined to have occurred when that number exceeds the preset number of gray levels. Compared with the prior art, the disclosed method is not based on human-body recognition, trajectory analysis or color features, so its generality is improved, and the monitoring accuracy of an intelligent video analysis system using it is improved as well.
Referring to Fig. 12, which builds on Fig. 11 and shows another structure of the event detection system disclosed in an embodiment of the present application, this embodiment can be understood as a concrete example of applying the event detection method of the present application in practice, and can comprise: a background image acquisition module 11, a motion history image acquisition module 12, a small-threshold foreground image acquisition module 13, a fusion module 14, a binarized foreground image acquisition module 15, a counter 16, an event determination module 17, a filtering module 18, a merging module 19, a large-threshold foreground image acquisition module 20, a counter 21, and an update module 22.
The functions of the background image acquisition module 11, motion history image acquisition module 12, small-threshold foreground image acquisition module 13, fusion module 14, binarized foreground image acquisition module 15, counter 16, and event determination module 17 are the same as those of the corresponding modules in the event detection system shown in Fig. 11 and are not described again here.
The filtering module 18 is configured to filter out, according to a preset threshold, the small image blocks in the binarized foreground images corresponding to the current frame image and the previous frame image. The preset threshold can be set to different values for different application scenes.
The merging module 19 is configured to merge the image blocks of the filtered binarized foreground images of the current frame image and the previous frame image using eight-connectivity, the processed binarized foreground images serving as the respective binarized foreground images.
In this embodiment, after the binarized foreground image is processed by the filtering module 18 and the merging module 19, the number of image blocks in it is reduced, which shortens the detection time and improves detection efficiency.
The large-threshold foreground image acquisition module 20 is configured to detect the current frame image with the large-threshold foreground detection method, in combination with the background image, and obtain its large-threshold foreground image.
The large-threshold foreground detection method sets a large threshold and compares the probability density of every pixel in the current frame image against it:

L(x, y) = 1, if Pr(x_t) < W; 0, if Pr(x_t) ≥ W

where W is the large threshold, whose value can be set according to the concrete situation.
Counter 21, counted for little each pixel of threshold value foreground image to current frame image.
Update module 22, for adopting formula
described background image is upgraded, by the image as a setting of the background image after upgrading, wherein, a is renewal speed, (x, y) be pixel coordinate, t is present frame, t-1 is previous frame, Y is default counting, B (x, y, t) be that in described background image, coordinate is (x, the gray-scale value of pixel y), I (x, y, t) for coordinate in described prior image frame, be (x, the gray-scale value of pixel y), L (x, y, t) be that in the large threshold value foreground image of current frame image, coordinate is (x, the gray-scale value of pixel y), S (x, y, t) be that in the little threshold value foreground image of current frame image, coordinate is (x, the gray-scale value of pixel y), U (x, y) be that in little threshold value foreground image, coordinate is (x, the counting of pixel y).
When U (x, y)>Y, show that the pixel that in little threshold value foreground image, coordinate is (x, y) does not change its position within a certain period of time, as dustbin in image, thinks it pixel in background image; When U (x, y)<Y, show that the pixel that in little threshold value foreground image, coordinate is (x, y) changes its position within a certain period of time, thinks it pixel in foreground image.
In the present embodiment, the background image of the current frame image is updated by combining the large threshold foreground image and the small threshold foreground image. This update mode effectively alleviates the problem of setting the large threshold and the small threshold, and improves the detection performance for the foreground in the current frame image; it avoids mixing foreground pixels with background pixels and avoids updating the foreground image into the background image, which improves the accuracy of the background image and hence the detection accuracy. Here, the foreground image is the foreground area of the image, such as a person, and a foreground pixel is a pixel of the foreground area, such as a pixel of the person in the image. Correspondingly, the background image is the background area of the image, such as a tree, and a background pixel is a pixel of the background area, such as a pixel of the tree.
The present embodiment is not based on human body-part recognition, trajectory analysis or color features, which improves its versatility and, further, improves the monitoring accuracy of an intelligent video analysis system that applies this event detection method.
Further, as described above, updating the background image of the current frame image by combining the large threshold foreground image and the small threshold foreground image improves the accuracy of the background image and therefore the detection accuracy.
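For completeness, the block-level decision performed by the counter 16 and the event determination module 17 (spelled out in claims 1 and 6) can be sketched as follows. This is an illustration only: the block partitioning, the nearest-neighbor matching of blocks, the exclusion of gray level 0 and the default preset number of gray levels are assumptions made for the sketch, and the comparison uses raw pixel counts rather than the preset-value product recited in the claims.

```python
import numpy as np

def changed_gray_levels(mhi_block_cur, mhi_block_prev):
    # Histogram both matched blocks of the fused motion history images over
    # the 256 gray levels and count the levels (level 0 excluded here, taken
    # as "no motion") whose pixel count in the current block exceeds the count
    # of the same level in the previous block.
    hist_cur = np.bincount(mhi_block_cur.ravel(), minlength=256)
    hist_prev = np.bincount(mhi_block_prev.ravel(), minlength=256)
    return int(np.sum(hist_cur[1:] > hist_prev[1:]))

def violence_in_block(fg_block_cur, fg_block_prev,
                      mhi_block_cur, mhi_block_prev, preset_levels=10):
    # Only blocks whose binarized foreground grew relative to the matched
    # block of the previous frame are examined; an incident of violence is
    # flagged when the number of changed gray levels exceeds the preset number.
    if np.count_nonzero(fg_block_cur) <= np.count_nonzero(fg_block_prev):
        return False
    return changed_gray_levels(mhi_block_cur, mhi_block_prev) > preset_levels
```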
It should be noted that, in this document, the terms "comprise", "comprising" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that comprises a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises that element.
Each embodiment in this specification is described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
The above is only an embodiment of the present application. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (8)

1. An event detection method, comprising modeling the background of an initial frame image and obtaining a background image of the initial frame image, characterized in that the method further comprises:
obtaining a current frame image and a previous frame image, and obtaining the motion history image corresponding to each of them;
detecting either frame image by a small-threshold foreground detection method in combination with the background image, and obtaining its corresponding small threshold foreground image;
for either motion history image, fusing the image blocks in the motion history image according to the corresponding small threshold foreground image, binarizing the fused motion history image, and obtaining a binarized foreground image;
filtering out, according to a predetermined threshold, small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image;
merging, using eight-connectivity, the image blocks in the filtered binarized foreground images corresponding to the current frame image and to the previous frame image respectively, the processed images serving as the respective binarized foreground images;
when the total number of pixels in any image block of the binarized foreground image of the current frame image is greater than the total number of pixels in the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, counting the number of changed gray levels, wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image corresponding to the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image corresponding to the previous frame image;
the total number of pixels of any image block in the binarized foreground image of the current frame image being the product of the actual pixel number of that image block and a preset value;
the image block at the corresponding adjacent position being obtained by nearest-neighbor matching;
and determining that an incident of violence occurs when the number of changed gray levels is greater than a preset number of gray levels.
2. The event detection method according to claim 1, characterized in that, for either motion history image, fusing the image blocks in the motion history image according to the corresponding small threshold foreground image comprises:
using the formula
Figure FDA0000401378420000021
to perform the fusion, wherein (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel with coordinates (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel with coordinates (x, y) in the small threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel with coordinates (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel with coordinates (x, y) in the motion history image of the previous frame image.
3. The event detection method according to any one of claims 1 to 2, characterized in that, after determining that an incident of violence occurs, or after determining that no incident of violence occurs, the method further comprises:
detecting the current frame image by a large-threshold foreground detection method in combination with the background image, and obtaining the corresponding large threshold foreground image;
counting each pixel in the small threshold foreground image of the current frame image;
and using the formula
Figure FDA0000401378420000022
to update the background image, taking the updated background image as the background image, wherein a is the update speed, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel with coordinates (x, y) in the background image, I(x, y, t) is the gray value of the pixel with coordinates (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel with coordinates (x, y) in the large threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel with coordinates (x, y) in the small threshold foreground image of the current frame image, and U(x, y) is the count of the pixel with coordinates (x, y) in the small threshold foreground image.
4. The event detection method according to any one of claims 1 to 2, characterized in that a Gaussian mixture model is used to model the background of the initial frame image.
5. The event detection method according to any one of claims 1 to 2, characterized in that the total number of pixels of any image block in the binarized foreground image of the current frame image is the product of the actual pixel number of that image block and a preset value.
6. An event detection system, comprising a background image acquisition module configured to model the background of an initial frame image and obtain a background image of the initial frame image, characterized in that the system further comprises:
a motion history image acquisition module, configured to obtain a current frame image and a previous frame image, and to obtain the motion history image corresponding to each of them;
a small threshold foreground image acquisition module, configured to detect either frame image by a small-threshold foreground detection method in combination with the background image, and to obtain its corresponding small threshold foreground image;
a fusion module, configured to fuse, for either motion history image, the image blocks in the motion history image according to the corresponding small threshold foreground image;
a binarized foreground image acquisition module, configured to binarize the fused motion history image and obtain a binarized foreground image;
a filtering module, configured to filter out, according to a predetermined threshold, small image blocks in the binarized foreground image corresponding to the current frame image and in the binarized foreground image corresponding to the previous frame image;
a merging module, configured to merge, using eight-connectivity, the image blocks in the filtered binarized foreground images corresponding to the current frame image and to the previous frame image respectively, the processed images serving as the respective binarized foreground images;
a counter, configured to count the number of changed gray levels when the total number of pixels in any image block of the binarized foreground image of the current frame image is greater than the total number of pixels in the image block at the corresponding adjacent position in the binarized foreground image of the previous frame image, wherein the number of changed gray levels is the total number of gray levels that have changed, and a changed gray level is a gray level whose pixel count in that image block of the fused motion history image corresponding to the current frame image is greater than the pixel count of the same gray level in the image block at the corresponding adjacent position of the fused motion history image corresponding to the previous frame image;
the total number of pixels of any image block in the binarized foreground image of the current frame image being the product of the actual pixel number of that image block and a preset value;
the image block at the corresponding adjacent position being obtained by nearest-neighbor matching;
and an event determination module, configured to determine that an incident of violence occurs when the number of changed gray levels is greater than a preset number of gray levels.
7. The event detection system according to claim 6, characterized in that the fusion module is specifically configured to use the formula
Figure FDA0000401378420000041
to perform the fusion, wherein (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, τ is a preset gray value, M(x, y, t) is the gray value of the pixel with coordinates (x, y) in the fused motion history image of the current frame image, S(x, y, t) is the gray value of the pixel with coordinates (x, y) in the small threshold foreground image of the current frame image, H_τ(x, y, t) is the gray value of the pixel with coordinates (x, y) in the motion history image of the current frame image, and H_τ(x, y, t-1) is the gray value of the pixel with coordinates (x, y) in the motion history image of the previous frame image.
8. The event detection system according to any one of claims 6 to 7, characterized in that it further comprises:
a large threshold foreground image acquisition module, configured to detect the current frame image by a large-threshold foreground detection method in combination with the background image, and to obtain the corresponding large threshold foreground image;
a counter, configured to count each pixel in the small threshold foreground image of the current frame image;
and an update module, configured to use the formula
Figure FDA0000401378420000051
to update the background image and to take the updated background image as the background image, wherein a is the update speed, (x, y) is the pixel coordinate, t is the current frame, t-1 is the previous frame, Y is a preset count, B(x, y, t) is the gray value of the pixel with coordinates (x, y) in the background image, I(x, y, t) is the gray value of the pixel with coordinates (x, y) in the current frame image, L(x, y, t) is the gray value of the pixel with coordinates (x, y) in the large threshold foreground image of the current frame image, S(x, y, t) is the gray value of the pixel with coordinates (x, y) in the small threshold foreground image of the current frame image, and U(x, y) is the count of the pixel with coordinates (x, y) in the small threshold foreground image.
CN2011103594332A 2011-11-14 2011-11-14 Event detection method and event detection system Active CN102496164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103594332A CN102496164B (en) 2011-11-14 2011-11-14 Event detection method and event detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103594332A CN102496164B (en) 2011-11-14 2011-11-14 Event detection method and event detection system

Publications (2)

Publication Number Publication Date
CN102496164A CN102496164A (en) 2012-06-13
CN102496164B true CN102496164B (en) 2013-12-11

Family

ID=46187986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103594332A Active CN102496164B (en) 2011-11-14 2011-11-14 Event detection method and event detection system

Country Status (1)

Country Link
CN (1) CN102496164B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106612385B (en) * 2015-10-22 2019-09-06 株式会社理光 Video detecting method and video detecting device
CN107491731B (en) * 2017-07-17 2019-12-20 南京航空航天大学 Ground moving target detection and identification method for accurate striking
CN110119653A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Image processing method, device and computer-readable medium
CN108764028B (en) * 2018-04-13 2020-07-14 北京航天自动控制研究所 Method for processing screen identification label by frame difference method in filtering mode
CN110879948B (en) * 2018-09-06 2022-10-18 华为技术有限公司 Image processing method, device and storage medium
CN110379050A (en) * 2019-06-06 2019-10-25 上海学印教育科技有限公司 A kind of gate control method, apparatus and system
CN112330618B (en) * 2020-10-29 2023-09-01 浙江大华技术股份有限公司 Image offset detection method, device and storage medium
CN112966556B (en) * 2021-02-02 2022-06-10 豪威芯仑传感器(上海)有限公司 Moving object detection method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212510B1 (en) * 1998-01-30 2001-04-03 Mitsubishi Electric Research Laboratories, Inc. Method for minimizing entropy in hidden Markov models of physical signals
CN100595792C (en) * 2008-04-01 2010-03-24 东南大学 Vehicle detecting and tracing method based on video technique
CN101303727B (en) * 2008-07-08 2011-11-23 北京中星微电子有限公司 Intelligent management method based on video human number Stat. and system thereof
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
CN101872244B (en) * 2010-06-25 2011-12-21 中国科学院软件研究所 Method for human-computer interaction based on hand movement and color information of user

Also Published As

Publication number Publication date
CN102496164A (en) 2012-06-13

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent for invention or patent application
CB02 Change of applicant information

Address after: 100070 Beijing Fengtai District Branch Road No. 9 room 113

Applicant after: CRSC Communication &Information Corporation

Address before: 100071 No. 11 East Fengtai Road, Beijing, Fengtai District

Applicant before: Beijing China Railway Huachen Communication Information Technology Co., Ltd.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: BEIJING CHINA RAILWAY HUACHEN COMMUNICATION INFORMATION TECHNOLOGY CO.,LTD. TO: TONGHAO COMMUNICATION INFORMATION GROUP CO., LTD.

C14 Grant of patent or utility model
GR01 Patent grant