CN103810691B - Video-based automatic teller machine monitoring scene detection method and apparatus - Google Patents
Video-based automatic teller machine monitoring scene detection method and apparatus
- Publication number
- CN103810691B (application CN201210444071.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- value
- monitoring
- monitoring image
- Prior art date
- Legal status
- Active
Landscapes
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a video-based automatic teller machine (ATM) monitoring scene detection method and apparatus. The method comprises the following steps: a background model of the ATM monitoring scene is established, in which a background image and the predetermined parameters corresponding to each pixel in the background image are determined; after modeling is complete, each time a frame of monitoring image X is obtained, the following steps are carried out: a binary foreground image of monitoring image X is generated according to the background model; edge texture information of monitoring image X and of the background image is obtained, and the edge similarity of monitoring image X and the background image is determined from the obtained edge texture information; and whether someone is present in monitoring image X is determined according to the generated binary foreground image and the determined edge similarity.
Description
Technical field
The present invention relates to video technology, and in particular to a video-based automatic teller machine (ATM, Automatic Teller Machine) monitoring scene detection method and apparatus.
Background technology
In the prior art, physical sensors are used to detect whether someone is present in an ATM monitoring scene; a common choice is an infrared emitter-receiver pair. However, infrared beams are easily disturbed by foreign objects: once an interfering object appears in the beam path, the sensor keeps falsely reporting that someone is present, which reduces the accuracy of the detection result.
Summary of the invention
In view of this, the invention provides a video-based ATM monitoring scene detection method and apparatus, which can improve the accuracy of the detection result.
To achieve the above purpose, the technical scheme of the invention is realized as follows:
A video-based ATM monitoring scene detection method, comprising:
establishing a background model of the ATM monitoring scene, including determining a background image and the predetermined parameters corresponding to each pixel in the background image;
after modeling is complete, each time a frame of monitoring image X is obtained, carrying out the following processing:
generating a binary foreground image of monitoring image X according to the background model;
obtaining edge texture information of monitoring image X and of the background image, and determining the edge similarity of monitoring image X and the background image according to the obtained edge texture information; and
determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
A video-based ATM monitoring scene detection apparatus, comprising:
a modeling module, configured to establish a background model of the ATM monitoring scene, including determining a background image and the predetermined parameters corresponding to each pixel in the background image, and to send the established background model to a detection module; and
the detection module, configured to, after modeling is complete, each time a frame of monitoring image X is obtained, carry out the following processing: generating a binary foreground image of monitoring image X according to the background model; obtaining edge texture information of monitoring image X and of the background image, and determining the edge similarity of monitoring image X and the background image according to the obtained edge texture information; and determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
It can be seen that, with the scheme of the present invention, the brightness foreground of the image is combined with edge texture information to determine whether someone is present in the ATM monitoring scene, which improves the accuracy of the detection result. Moreover, the scheme of the present invention is applicable to various ATM monitoring scenes, has broad applicability, and is easy to popularize and promote.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the video-based ATM monitoring scene detection method of the present invention.
Fig. 2 is a schematic diagram of the conventional Sobel operator.
Specific embodiment
To address the problems in the prior art, the present invention proposes a video-based ATM monitoring scene detection scheme that can improve the accuracy of the detection result.
The monitoring images in the scheme of the present invention are captured by an ATM monitoring camera, which must be able to cover the activity area of people depositing or withdrawing money.
To make the technical scheme of the present invention clearer, the scheme is described in further detail below with reference to the drawings and embodiments.
Fig. 1 is a flowchart of an embodiment of the video-based ATM monitoring scene detection method of the present invention. As shown in Fig. 1, the method includes:
Step 11: Establish a background model of the ATM monitoring scene, including determining a background image and the predetermined parameters corresponding to each pixel in the background image.
Because the environment of an ATM monitoring scene is relatively simple, single-Gaussian background modeling can be used; single-Gaussian modeling is suitable for backgrounds with a unimodal distribution.
The scheme of the present invention models only the gray value of each pixel, and the predetermined parameters corresponding to each pixel include a mean μ and a variance σ.
A specific implementation of this step may include:
A. Obtain one frame of monitoring image and use it as the background image;
for each pixel in this background image, take the gray value of the pixel as the mean corresponding to that pixel, and take the variance of the gray value of the pixel as the variance corresponding to that pixel.
B. Determine whether the number of monitoring images obtained equals M, M being a positive integer greater than 1. If so, take the most recently obtained background image as the final background image and finish modeling; if not, obtain a new frame of monitoring image and execute step C.
C. Determine the updated background image B_new(x, y):
B_new(x, y) = (1 - ρ)·B_old(x, y) + ρ·I(x, y); (1)
where ρ denotes the update rate, whose value equals 1/N, N denotes the number of monitoring images obtained so far, I(x, y) denotes the newly obtained monitoring image, and B_old(x, y) denotes the background image before the update.
For each pixel in B_new(x, y), take the gray value of the pixel as the mean corresponding to that pixel, and take (1 - ρ)·σ_old + ρ·d as the variance σ_new corresponding to that pixel, that is:
σ_new = (1 - ρ)·σ_old + ρ·d; (2)
where σ_old denotes the variance corresponding to the pixel at the same coordinate position in B_old(x, y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x, y) and the mean corresponding to the pixel at the same coordinate position in B_old(x, y).
Afterwards, repeat step B.
The specific value of M can be decided according to actual requirements, for example 100.
For example:
Suppose the value of M is 100. For ease of description, number the 100 frames of monitoring images in order of acquisition as monitoring image 1 to monitoring image 100.
First, an initial background model is established from monitoring image 1: monitoring image 1 is taken as the background image, and the mean and variance corresponding to each pixel in this background image are determined.
Then, according to formulas (1) and (2), the latest background model is updated using monitoring image 2, including determining the updated background image and the mean and variance corresponding to each pixel in it, with ρ equal to 1/2.
Next, the latest background model is updated using monitoring image 3 in the same way, with ρ equal to 1/3.
The processing of monitoring images 4 to 99 is analogous and is not repeated here.
Finally, the latest background model is updated using monitoring image 100, with ρ equal to 1/100, and the resulting background image together with the mean and variance corresponding to each pixel serves as the final background model.
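The modeling steps A to C above can be sketched as follows. This is a minimal illustration assuming grayscale frames stored as NumPy arrays; the function name, the use of an absolute difference for d, and the initial deviation value are illustrative assumptions rather than details stated in the patent:

```python
import numpy as np

def build_background_model(frames, init_sigma=10.0):
    """Single-Gaussian background modeling over M grayscale frames.

    frames: list of 2-D float arrays (grayscale monitoring images).
    init_sigma: illustrative initial per-pixel deviation; the patent
    initializes the variance from the first frame without giving a value.
    """
    # Step A: the first frame becomes the background image; its gray
    # values become the per-pixel means.
    background = frames[0].astype(np.float64)
    sigma = np.full_like(background, init_sigma)

    # Steps B/C: fold each new frame in with rate rho = 1/N, where N is
    # the number of monitoring images obtained so far.
    for n, frame in enumerate(frames[1:], start=2):
        rho = 1.0 / n
        d = np.abs(frame - background)                    # |I - B_old| (abs is an assumption)
        background = (1 - rho) * background + rho * frame  # eq. (1)
        sigma = (1 - rho) * sigma + rho * d                # eq. (2)
    return background, sigma
```

With two constant frames of gray value 100 and 110, ρ = 1/2 gives a background of 105, matching the worked example's update schedule.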
Step 12: After modeling is complete, each time a frame of monitoring image X is obtained, carry out the following processing: generate a binary foreground image of monitoring image X according to the background model; obtain edge texture information of monitoring image X and of the background image, and determine the edge similarity of monitoring image X and the background image according to the obtained edge texture information; determine whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
For ease of description, in the scheme of the present invention, monitoring image X denotes any monitoring image that needs to undergo presence detection.
In practical applications, the ATM monitoring scene may change; therefore, the background model established in step 11 can be updated continuously to guarantee the accuracy of subsequent presence detection. Specifically, each time after determining from the generated binary foreground image and the determined edge similarity whether someone is present in monitoring image X, the original background model can be updated using monitoring image X.
Correspondingly, for any monitoring image X, step 12 can be realized as follows: generate the binary foreground image of monitoring image X according to the latest background model (i.e., the background model updated with the most recent monitoring image obtained before monitoring image X); obtain edge texture information of monitoring image X and of the latest background image, and determine the edge similarity of monitoring image X and the latest background image according to the obtained edge texture information; determine whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
The above operations are described in detail below.
One) Updating the original background model using monitoring image X
A specific implementation may include:
Determine the updated background image B_new(x, y):
B_new(x, y) = (1 - ρ)·B_old(x, y) + ρ·I(x, y); (1)
where I(x, y) denotes monitoring image X, i.e., the most recently obtained monitoring image, and B_old(x, y) denotes the background image before the update.
For each pixel in B_new(x, y), take the gray value of the pixel as the mean corresponding to that pixel, and take (1 - ρ)·σ_old + ρ·d as the variance corresponding to that pixel; where σ_old denotes the variance corresponding to the pixel at the same coordinate position in B_old(x, y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x, y) and the mean corresponding to the pixel at the same coordinate position in B_old(x, y).
Here ρ denotes the update rate, and its value may be set as follows:
1) When it is determined that no one is present in I(x, y), i.e., no one is present in monitoring image X, ρ is set to 0.01, so that the background model is updated continuously and adapts to slow scene changes such as illumination;
2) When it is determined that someone is present in I(x, y), ρ is set to 0; i.e., when someone is present in the ATM monitoring scene, updating of the background model stops;
3) When it is determined that someone has been present in the ATM monitoring scene throughout the period from T - t to T, and the ATM monitoring scene has remained completely still throughout that period, ρ is set to 1, where T denotes the moment at which I(x, y) is obtained and t > 0.
Case 3) prevents a sudden change of the ATM monitoring scene (for example, the scene being modified) from causing someone to be reported indefinitely: when someone is present in the ATM monitoring scene and the still time exceeds a predetermined threshold, for example 2 minutes (i.e., the value of t is 2 minutes), ρ = 1, which resets the background by taking the current image I(x, y) as the background image.
Whether the ATM monitoring scene has remained completely still throughout the period from T - t to T can be determined as follows:
For any two frames I_1(x, y) and I_2(x, y) obtained within the period from T - t to T, carry out the following processing:
Calculate Dif(x, y) = I_1(x, y) - I_2(x, y); (3)
where Dif(x, y) denotes the frame difference image, and I_1(x, y) is obtained before I_2(x, y);
For each pixel in Dif(x, y), determine whether the gray value of the pixel is greater than a predetermined threshold T1; if so, set the value of the pixel to 1, otherwise set it to 0, thereby obtaining the binary frame difference image Dif_Fg(x, y) of Dif(x, y);
Count the number Dif_Num of pixels whose value is 1 in Dif_Fg(x, y), and determine whether Dif_Num is less than a predetermined threshold T2; if so, determine that the scene is completely still between I_1(x, y) and I_2(x, y).
If the scene is completely still between every two frames obtained within the period from T - t to T, it can be determined that the ATM monitoring scene has remained completely still throughout the period from T - t to T.
The specific values of T1 and T2 can both be decided according to actual requirements; for example, T1 can be 10 and T2 can be 50.
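The stillness test of formula (3) and the three update-rate cases can be sketched as below. The function names are illustrative; the absolute difference in the binary frame-difference image is an assumption (the text compares the gray value of Dif(x, y) against T1 without stating a sign convention), and T1 = 10, T2 = 50 follow the example values above:

```python
import numpy as np

def is_still(frame1, frame2, t1=10, t2=50):
    """Frame-difference stillness test between two grayscale frames.

    A pixel whose absolute gray-level difference exceeds T1 is marked 1
    in the binary frame-difference image Dif_Fg; the pair counts as
    completely still when fewer than T2 pixels are marked.
    """
    dif = np.abs(frame1.astype(np.float64) - frame2.astype(np.float64))
    dif_fg = dif > t1                 # binary frame-difference image
    return int(dif_fg.sum()) < t2    # Dif_Num < T2

def update_rate(person_present, still_beyond_threshold):
    """Pick the update rate rho per the three cases in the text."""
    if person_present and still_beyond_threshold:
        return 1.0   # scene changed and froze: reset the background
    if person_present:
        return 0.0   # someone present: stop updating the background
    return 0.01      # empty scene: track slow changes such as lighting
```

The full stillness decision would apply `is_still` to every pair of frames obtained within the period from T - t to T.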
Two) Generating the binary foreground image of monitoring image X according to the latest background model
For each pixel in monitoring image X, the following processing can be carried out:
Calculate the difference d between the gray value of the pixel and the mean corresponding to the pixel at the same coordinate position in the latest background image;
Calculate d²/σ², where σ denotes the variance corresponding to the pixel at the same coordinate position in the latest background image;
Determine whether the result d²/σ² is greater than a predetermined threshold T0; if so, set the value of the pixel to 1, otherwise set it to 0, thereby generating the binary foreground image of monitoring image X.
The specific value of T0 can be decided according to actual requirements, for example 9.
After the binary foreground image of monitoring image X is generated, dilation and erosion operations can also be applied to it in turn, to remove isolated points formed by noise interference and thereby guarantee the accuracy of subsequent presence detection.
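A sketch of the foreground test d²/σ² > T0 together with the dilation-then-erosion cleanup follows. The names are illustrative, the small eps guard against zero variance is an added assumption, and the 3×3 structuring element is assumed since the text does not specify one:

```python
import numpy as np

def foreground_mask(frame, background, sigma, t0=9.0):
    """Mark a pixel as foreground when d^2 / sigma^2 exceeds T0
    (T0 = 9 corresponds to a 3-sigma rule)."""
    d = frame.astype(np.float64) - background
    eps = 1e-6  # guard against zero variance (assumption, not in the text)
    return (d * d) / (sigma * sigma + eps) > t0

def _shift_stack(mask):
    # Stack of the 9 shifted copies of the zero-padded mask, i.e. the
    # 3x3 neighborhood of every pixel, used by dilate/erode below.
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def dilate(mask):
    return _shift_stack(mask).any(axis=0)

def erode(mask):
    return _shift_stack(mask).all(axis=0)

def clean_mask(mask):
    """Dilation followed by erosion (a morphological closing), per the
    ordering stated in the text."""
    return erode(dilate(mask))
```

Note that the stated ordering is a closing; an opening (erosion first, then dilation) is the operation conventionally used to suppress isolated 1-pixels.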
Three) Obtaining edge texture information of monitoring image X and of the latest background image, and determining the edge similarity of monitoring image X and the latest background image according to the obtained edge texture information
A specific implementation may include:
1) Obtain the horizontal edge image and the vertical edge image of monitoring image X, and obtain the horizontal edge image and the vertical edge image of the latest background image.
In practical applications, the Sobel operator can be used to obtain the horizontal and vertical edge images of monitoring image X and of the latest background image; how to obtain them is prior art.
Fig. 2 is a schematic diagram of the conventional Sobel operator. As shown in Fig. 2, the Sobel operator on the left can be used to obtain the horizontal edge images of monitoring image X and the latest background image, and the Sobel operator on the right can be used to obtain their vertical edge images.
2) According to the horizontal edge image and the vertical edge image of monitoring image X, calculate the gradient magnitude I_gxy of each pixel in monitoring image X:
I_gxy = |I_gx| + |I_gy|; (4)
where I_gx denotes the horizontal gradient value of the pixel, I_gy denotes the vertical gradient value of the pixel, and | | denotes the absolute value;
According to the horizontal edge image and the vertical edge image of the latest background image, calculate the gradient magnitude B_gxy of each pixel in the latest background image:
B_gxy = |B_gx| + |B_gy|; (5)
where B_gx denotes the horizontal gradient value of the pixel and B_gy denotes the vertical gradient value of the pixel.
3) Calculate the edge similarity ESIM of monitoring image X and the latest background image per formula (6), where x ranges from 1 to E and y ranges from 1 to F; E denotes the number of pixels in the horizontal direction of monitoring image X, and F denotes the number of pixels in the vertical direction of monitoring image X.
Four) Determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity
A specific implementation may include:
1) Count the number Fg_num of pixels whose value is 1 in the binary foreground image of monitoring image X.
2) Determine whether the following condition is met:
Flag = (Fg_num / Area > T3) ∩ (ESIM < T4); (7)
where Area denotes the product of the number of pixels in the horizontal direction and the number of pixels in the vertical direction of monitoring image X, T3 and T4 both denote predetermined thresholds, and ∩ denotes logical AND.
If the condition is met, the value of Flag is 1, i.e., someone is present in monitoring image X; otherwise no one is present.
The specific values of T3 and T4 can both be decided according to actual requirements; for example, T3 can be 0.6 and T4 can be 0.8.
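The decision rule of formula (7) can be sketched as follows; the function name is illustrative, and the defaults T3 = 0.6 and T4 = 0.8 follow the example values above:

```python
import numpy as np

def someone_present(fg_mask, esim, t3=0.6, t4=0.8):
    """Eq. (7): Flag = (Fg_num / Area > T3) AND (ESIM < T4).

    fg_mask: binary foreground image of monitoring image X.
    esim: edge similarity between monitoring image X and the background.
    """
    area = fg_mask.size              # E * F pixels
    fg_num = int(fg_mask.sum())      # number of foreground (1) pixels
    return (fg_num / area > t3) and (esim < t4)
```

A large foreground alone is not enough: if the edges of X still match the background (ESIM at or above T4), the foreground is attributed to lighting change rather than a person.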
Considering that the brightness foreground is sensitive to interference such as lighting changes, relying on the brightness foreground alone to detect presence is likely to cause misjudgment; therefore, the brightness foreground and the edge texture information are combined to judge whether someone is present in monitoring image X, which improves the accuracy of the detection result.
This completes the introduction of the method embodiment of the present invention.
Based on the above introduction, the present invention also discloses a video-based ATM monitoring scene detection apparatus, including:
a modeling module, configured to establish a background model of the ATM monitoring scene, including determining a background image and the predetermined parameters corresponding to each pixel in the background image, and to send the established background model to a detection module; and
the detection module, configured to, after modeling is complete, each time a frame of monitoring image X is obtained, carry out the following processing: generating a binary foreground image of monitoring image X according to the background model; obtaining edge texture information of monitoring image X and of the background image, and determining the edge similarity of monitoring image X and the background image according to the obtained edge texture information; and determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
The modeling module may include:
a first processing unit, configured to obtain M frames of monitoring images in turn, M being a positive integer greater than 1, and to send each obtained frame of monitoring image to a second processing unit; and
the second processing unit, configured to take the first received frame of monitoring image as the background image and, for each pixel in this background image, take the gray value of the pixel as the mean corresponding to that pixel and the variance of the gray value of the pixel as the variance corresponding to that pixel;
and afterwards, each time a frame of monitoring image is received, to carry out the following processing:
determine the updated background image B_new(x, y):
B_new(x, y) = (1 - ρ)·B_old(x, y) + ρ·I(x, y); (1)
where ρ denotes the update rate, whose value equals 1/N, N denotes the number of monitoring images received, I(x, y) denotes the newly received monitoring image, and B_old(x, y) denotes the background image before the update;
for each pixel in B_new(x, y), take the gray value of the pixel as the mean corresponding to that pixel, and take (1 - ρ)·σ_old + ρ·d as the variance corresponding to that pixel; where σ_old denotes the variance corresponding to the pixel at the same coordinate position in B_old(x, y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x, y) and the mean corresponding to the pixel at the same coordinate position in B_old(x, y).
The detection module may include:
a third processing unit, configured to obtain each frame of monitoring image in turn and send each obtained frame to a fourth processing unit; and
the fourth processing unit, configured to, each time a frame of monitoring image X is received, carry out the following processing: generating a binary foreground image of monitoring image X according to the background model; obtaining edge texture information of monitoring image X and of the background image, and determining the edge similarity of monitoring image X and the background image according to the obtained edge texture information; and determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
The detection module may further include:
a fifth processing unit, configured to, after the fourth processing unit determines whether someone is present in monitoring image X, update the original background model using monitoring image X.
Correspondingly, the fourth processing unit generates the binary foreground image of monitoring image X according to the latest background model, obtains edge texture information of monitoring image X and of the latest background image, and determines the edge similarity of monitoring image X and the latest background image according to the obtained edge texture information.
Specifically,
the fifth processing unit determines the updated background image B_new(x, y):
B_new(x, y) = (1 - ρ)·B_old(x, y) + ρ·I(x, y); (1)
where I(x, y) denotes monitoring image X and B_old(x, y) denotes the background image before the update;
for each pixel in B_new(x, y), it takes the gray value of the pixel as the mean corresponding to that pixel, and takes (1 - ρ)·σ_old + ρ·d as the variance corresponding to that pixel; where σ_old denotes the variance corresponding to the pixel at the same coordinate position in B_old(x, y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x, y) and the mean corresponding to the pixel at the same coordinate position in B_old(x, y);
where ρ denotes the update rate:
when it is determined that no one is present in I(x, y), ρ is set to 0.01;
when it is determined that someone is present in I(x, y), ρ is set to 0;
when it is determined that someone has been present in the ATM monitoring scene throughout the period from T - t to T and the ATM monitoring scene has remained completely still, ρ is set to 1, where T denotes the moment at which I(x, y) is obtained and t > 0.
For any two frames I_1(x, y) and I_2(x, y) obtained within the period from T - t to T, the fifth processing unit carries out the following processing:
calculate Dif(x, y) = I_1(x, y) - I_2(x, y); (3)
where Dif(x, y) denotes the frame difference image, and I_1(x, y) is obtained before I_2(x, y);
for each pixel in Dif(x, y), determine whether the gray value of the pixel is greater than a predetermined threshold T1; if so, set the value of the pixel to 1, otherwise set it to 0, obtaining the binary frame difference image Dif_Fg(x, y) of Dif(x, y);
count the number Dif_Num of pixels whose value is 1 in Dif_Fg(x, y), and determine whether Dif_Num is less than a predetermined threshold T2; if so, determine that the scene is completely still between I_1(x, y) and I_2(x, y);
if the scene is completely still between every two frames obtained within the period from T - t to T, determine that the ATM monitoring scene has remained completely still throughout the period from T - t to T.
The fourth processing unit may in turn include:
a foreground detection subunit, configured to generate the binary foreground image of monitoring image X according to the latest background model and send the generated binary foreground image to an analysis subunit;
an edge similarity determination subunit, configured to obtain edge texture information of monitoring image X and of the latest background image, determine the edge similarity of monitoring image X and the latest background image according to the obtained edge texture information, and send the determined edge similarity to the analysis subunit; and
the analysis subunit, configured to determine whether someone is present in monitoring image X according to the received binary foreground image and edge similarity.
The foreground detection subunit carries out the following processing for each pixel in monitoring image X:
calculate the difference d between the gray value of the pixel and the mean corresponding to the pixel at the same coordinate position in the latest background image;
calculate d²/σ², where σ denotes the variance corresponding to the pixel at the same coordinate position in the latest background image;
determine whether d²/σ² is greater than a predetermined threshold T0; if so, set the value of the pixel to 1, otherwise set it to 0.
The foreground detection subunit can further be configured to, after generating the binary foreground image of monitoring image X, apply dilation and erosion operations to it in turn and send the resulting binary foreground image to the analysis subunit.
The edge similarity determination subunit obtains the horizontal edge image and the vertical edge image of monitoring image X, and the horizontal edge image and the vertical edge image of the latest background image;
according to the horizontal edge image and the vertical edge image of monitoring image X, it calculates the gradient magnitude I_gxy of each pixel in monitoring image X:
I_gxy = |I_gx| + |I_gy|; (4)
where I_gx denotes the horizontal gradient value of the pixel, I_gy denotes the vertical gradient value of the pixel, and | | denotes the absolute value;
according to the horizontal edge image and the vertical edge image of the latest background image, it calculates the gradient magnitude B_gxy of each pixel in the latest background image:
B_gxy = |B_gx| + |B_gy|; (5)
where B_gx denotes the horizontal gradient value of the pixel and B_gy denotes the vertical gradient value of the pixel;
it then calculates the edge similarity ESIM of monitoring image X and the latest background image per formula (6), where x ranges from 1 to E and y ranges from 1 to F; E denotes the number of pixels in the horizontal direction of monitoring image X, and F denotes the number of pixels in the vertical direction of monitoring image X.
The analysis subunit counts the number Fg_num of pixels whose value is 1 in the binary foreground image of monitoring image X, and determines whether the following condition is met:
Flag = (Fg_num / Area > T3) ∩ (ESIM < T4); (7)
where Area denotes the product of the number of pixels in the horizontal direction and the number of pixels in the vertical direction of monitoring image X, and T3 and T4 both denote predetermined thresholds;
if the condition is met, it determines that someone is present in monitoring image X; otherwise no one is present.
For the specific workflow of the above apparatus embodiment, refer to the corresponding descriptions in the foregoing method embodiment, which are not repeated here.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (20)
1. A video-based automatic teller machine (ATM) monitoring scene detection method, characterized by comprising:
establishing a background model of the ATM monitoring scene, including determining a background image and the predetermined parameters corresponding to each pixel in the background image;
after modeling is complete, each time a frame of monitoring image X is obtained, carrying out the following processing:
generating a binary foreground image of monitoring image X according to the background model;
obtaining edge texture information of monitoring image X and of the background image, and determining the edge similarity of monitoring image X and the background image according to the obtained edge texture information; and
determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
2. The method according to claim 1, characterized in that said determining a background image and a predetermined parameter corresponding to each pixel in the background image comprises:
A. obtaining a frame of monitoring image and using it as the background image; for each pixel in the background image, taking the gray value of that pixel as the mean corresponding to that pixel, and taking the variance of the gray value of that pixel as the variance corresponding to that pixel;
B. determining whether the number N of monitoring images obtained equals M, where M is a positive integer greater than 1; if so, taking the latest background image as the final background image and completing the modeling; if not, obtaining a new frame of monitoring image and executing step C;
C. determining the updated background image Bnew(x,y): Bnew(x,y) = (1-ρ)Bold(x,y) + ρI(x,y); where ρ denotes the update rate, whose value equals 1/N, I(x,y) denotes the most recently obtained monitoring image, and Bold(x,y) denotes the background image before updating;
for each pixel in Bnew(x,y), taking the gray value of that pixel as the mean corresponding to that pixel, and taking (1-ρ)σold + ρd as the variance corresponding to that pixel; where σold denotes the variance corresponding to the pixel at the same coordinate position in Bold(x,y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x,y) and the mean corresponding to the pixel at the same coordinate position in Bold(x,y); repeating step B.
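The running-average modeling of claim 2 above can be sketched in NumPy as follows. This is an illustrative sketch, not the patent's implementation: the class name, float32 dtype, the initial variance of 1.0, and the use of the absolute difference for d are assumptions not specified by the claims.

```python
import numpy as np

class BackgroundModel:
    """Running-average background model of per-pixel means and variances (claim 2 sketch)."""

    def __init__(self, first_frame, m=100):
        self.m = m                                  # M: number of frames used for modeling
        self.n = 1                                  # N: monitoring images obtained so far
        self.mean = first_frame.astype(np.float32)  # per-pixel mean = gray value (step A)
        self.var = np.ones_like(self.mean)          # per-pixel variance (assumed initial value)

    def update(self, frame):
        """Step C: Bnew = (1 - rho) * Bold + rho * I, with rho = 1/N."""
        self.n += 1
        rho = 1.0 / self.n
        frame = frame.astype(np.float32)
        d = np.abs(frame - self.mean)               # |I - old mean| per pixel (abs is assumed)
        self.mean = (1 - rho) * self.mean + rho * frame
        self.var = (1 - rho) * self.var + rho * d   # (1 - rho) * sigma_old + rho * d
        return self.n >= self.m                     # True once modeling is complete (step B)
```

After M frames, `mean` holds the final background image and `var` its per-pixel variances; the later updates of claim 4 reuse the same two formulas with a fixed ρ.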
3. The method according to claim 1, characterized in that:
after said determining whether someone is present in monitoring image X, the method further comprises: updating the original background model using monitoring image X;
said generating the binary foreground image of monitoring image X according to the background model comprises: generating the binary foreground image of monitoring image X according to the latest background model obtained;
said obtaining edge texture information of monitoring image X and of the background image, respectively, and determining the edge similarity between monitoring image X and the background image according to the obtained edge texture information comprises: obtaining edge texture information of monitoring image X and of the latest background image obtained, respectively, and determining the edge similarity between monitoring image X and the latest background image obtained according to the obtained edge texture information.
4. The method according to claim 3, characterized in that said updating the original background model using monitoring image X comprises:
determining the updated background image Bnew(x,y): Bnew(x,y) = (1-ρ)Bold(x,y) + ρI(x,y); where I(x,y) denotes monitoring image X and Bold(x,y) denotes the background image before updating;
for each pixel in Bnew(x,y), taking the gray value of that pixel as the mean corresponding to that pixel, and taking (1-ρ)σold + ρd as the variance corresponding to that pixel; where σold denotes the variance corresponding to the pixel at the same coordinate position in Bold(x,y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x,y) and the mean corresponding to the pixel at the same coordinate position in Bold(x,y);
where ρ denotes the update rate;
when it is determined that no one is present in I(x,y), the value of ρ is set to 0.01;
when it is determined that someone is present in I(x,y), the value of ρ is set to 0;
when it is determined that someone has been present in the ATM monitoring scene throughout the time period from T-t to T, and the ATM monitoring scene has remained stationary throughout that period, the value of ρ is set to 1; where T denotes the moment at which I(x,y) is obtained, and t > 0.
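The three-way choice of the update rate ρ in claim 4 can be sketched as follows. The function name and boolean inputs are illustrative assumptions; the ρ = 1 case is checked first because it is the special case of "someone present" that should override ρ = 0.

```python
def select_update_rate(person_now, person_throughout, static_throughout):
    """Choose rho per claim 4.

    person_now: someone is present in the current frame I(x, y).
    person_throughout / static_throughout: whether someone has been present,
    and the scene has stayed stationary, over the whole window from T-t to T.
    """
    if person_throughout and static_throughout:
        return 1.0    # absorb a long-static "occupied" scene into the background
    if person_now:
        return 0.0    # freeze the background while someone is present
    return 0.01       # slow adaptation while the scene is unoccupied
```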
5. The method according to claim 4, characterized in that said determining that the ATM monitoring scene has remained stationary throughout the time period from T-t to T comprises:
for any two frames of images I1(x,y) and I2(x,y) obtained within the time period from T-t to T, performing the following processing:
calculating Dif(x,y) = I1(x,y) - I2(x,y); where Dif(x,y) denotes the frame difference image, and I1(x,y) is obtained before I2(x,y);
for each pixel in Dif(x,y), determining whether the gray value of that pixel is greater than a predetermined threshold T1; if so, setting the value of that pixel to 1, otherwise setting it to 0, thereby obtaining the frame-difference binary image Dif_Fg(x,y) of Dif(x,y);
counting the number Dif_Num of pixels whose value is 1 in Dif_Fg(x,y), and determining whether Dif_Num is less than a predetermined threshold T2; if so, determining that the scene is stationary between I1(x,y) and I2(x,y);
if the scene is stationary between any two frames of images obtained within the time period from T-t to T, determining that the ATM monitoring scene has remained stationary throughout the time period from T-t to T.
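The pairwise frame-difference test of claim 5 can be sketched as follows. The threshold values for T1 and T2 are illustrative assumptions, as is taking the absolute frame difference before thresholding (the claim states only a signed subtraction).

```python
import numpy as np
from itertools import combinations

def scene_is_static(frames, t1=15, t2=50):
    """Stationarity test per claim 5.

    frames: gray images captured within the window from T-t to T, in time order.
    t1, t2: thresholds T1 and T2 (values assumed for illustration).
    """
    for f1, f2 in combinations(frames, 2):       # every pair, earlier frame first
        dif = np.abs(f1.astype(np.int32) - f2.astype(np.int32))  # frame difference
        dif_fg = dif > t1                        # frame-difference binary image Dif_Fg
        dif_num = int(dif_fg.sum())              # number of pixels set to 1
        if dif_num >= t2:                        # this pair is not stationary
            return False
    return True                                  # stationary between every pair
```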
6. The method according to any one of claims 3, 4 or 5, characterized in that said generating the binary foreground image of monitoring image X according to the latest background model obtained comprises:
for each pixel in monitoring image X, performing the following processing:
calculating the difference d between the gray value of that pixel and the mean corresponding to the pixel at the same coordinate position in the latest background image obtained;
calculating |d|/σ; where σ denotes the variance corresponding to the pixel at the same coordinate position in the latest background image obtained;
determining whether the calculated result of |d|/σ is greater than a predetermined threshold T0; if so, setting the value of that pixel to 1, otherwise setting it to 0.
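The per-pixel foreground test of claim 6 can be sketched as follows. The claim's statistic compares d and σ against threshold T0 but its exact formula is not reproduced in this text, so the ratio |d|/σ is an assumed form; the T0 value and the eps guard against division by zero are also illustrative assumptions.

```python
import numpy as np

def binary_foreground(frame, mean, var, t0=2.5, eps=1e-6):
    """Binary foreground image per claim 6 (|d| / sigma form assumed)."""
    d = np.abs(frame.astype(np.float32) - mean)   # distance from the background mean
    score = d / (var + eps)                       # assumed |d| / sigma statistic
    return (score > t0).astype(np.uint8)          # 1 = foreground, 0 = background
```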
7. The method according to any one of claims 3, 4 or 5, characterized in that after said generating the binary foreground image of monitoring image X, the method further comprises:
performing dilation and erosion operations in turn on the binary foreground image of monitoring image X.
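Dilation followed by erosion, as in claim 7, is morphological closing, which fills small holes and gaps in the foreground mask. The sketch below uses an assumed 3x3 structuring element (the claim does not specify one) and plain NumPy; in practice a library routine such as OpenCV's `morphologyEx` would typically be used.

```python
import numpy as np

def dilate3(img):
    """3x3 binary dilation: a pixel becomes 1 if any 8-neighbor (or itself) is 1."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
    return out

def erode3(img):
    """3x3 binary erosion: a pixel stays 1 only if its whole 3x3 neighborhood is 1."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
    return out

def close_foreground(fg):
    """Dilation then erosion in turn, per claim 7."""
    return erode3(dilate3(fg))
```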
8. The method according to claim 6, characterized in that said obtaining edge texture information of monitoring image X and of the latest background image obtained, respectively, and determining the edge similarity between monitoring image X and the latest background image obtained according to the obtained edge texture information comprises:
obtaining the horizontal edge image and the vertical edge image of monitoring image X, and obtaining the horizontal edge image and the vertical edge image of the latest background image obtained, respectively;
according to the horizontal edge image and the vertical edge image of monitoring image X, calculating for each pixel in monitoring image X its gradient magnitude Igxy: Igxy = |Igx| + |Igy|; where Igx denotes the horizontal gradient value of that pixel, Igy denotes the vertical gradient value of that pixel, and | | denotes taking the absolute value;
according to the horizontal edge image and the vertical edge image of the latest background image obtained, calculating for each pixel in the latest background image obtained its gradient magnitude Bgxy: Bgxy = |Bgx| + |Bgy|; where Bgx denotes the horizontal gradient value of that pixel, and Bgy denotes the vertical gradient value of that pixel;
calculating the edge similarity ESIM between monitoring image X and the latest background image obtained from the gradient magnitudes Igxy and Bgxy; where the value of x ranges from 1 to E, the value of y ranges from 1 to F, E denotes the number of pixels in the horizontal direction of monitoring image X, and F denotes the number of pixels in the vertical direction of monitoring image X.
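The gradient magnitudes of claim 8 and an edge-similarity score can be sketched as follows. Two assumptions are made loudly here: the claim does not name the edge operator, so simple forward differences stand in for the horizontal/vertical edge images (a Sobel filter would also fit), and the patent's exact ESIM formula is not reproduced in this text, so the normalized min/max ratio below is an assumed form that matches its role (high when edges agree, low when they differ).

```python
import numpy as np

def gradient_magnitude(img):
    """Igxy = |Igx| + |Igy| per pixel (simple difference filters assumed)."""
    img = img.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]   # horizontal gradient Igx
    gy[1:, :] = img[1:, :] - img[:-1, :]   # vertical gradient Igy
    return np.abs(gx) + np.abs(gy)

def edge_similarity(frame, background, eps=1e-6):
    """Assumed ESIM: sum of per-pixel min over sum of per-pixel max of gradients."""
    igxy = gradient_magnitude(frame)
    bgxy = gradient_magnitude(background)
    num = np.minimum(igxy, bgxy).sum()     # agreement of edge strength
    den = np.maximum(igxy, bgxy).sum()
    return float(num / (den + eps))        # near 1 for matching edges, near 0 otherwise
```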
9. The method according to claim 8, characterized in that said determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity comprises:
counting the number Fgnum of pixels whose value is 1 in the binary foreground image of monitoring image X;
determining whether the following condition is met: Flag = (Fgnum/Area > T3) ∩ (ESIM < T4); where Area denotes the product of the number of pixels in the horizontal direction of monitoring image X and the number of pixels in the vertical direction, and T3 and T4 both denote predetermined thresholds;
if the above condition is met, determining that someone is present in monitoring image X; otherwise, no one is present.
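The final decision rule of claim 9 combines both cues: enough foreground pixels and low edge similarity to the background. A minimal sketch, with threshold values T3 and T4 assumed for illustration:

```python
def person_present(fg_num, width, height, esim, t3=0.05, t4=0.8):
    """Flag = (Fgnum / Area > T3) AND (ESIM < T4), per claim 9."""
    area = width * height            # total pixel count of monitoring image X
    return (fg_num / area > t3) and (esim < t4)
```

Requiring both conditions guards against false alarms: a lighting change alone raises Fgnum but leaves the edges similar, while a person both raises Fgnum and disturbs the edge structure.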
10. A video-based automatic teller machine (ATM) monitoring scene detection apparatus, characterized by comprising:
a modeling module, configured to establish a background model of the ATM monitoring scene, including determining a background image and a predetermined parameter corresponding to each pixel in the background image, and to send the established background model to a detection module;
the detection module, configured to, after modeling is completed, perform the following processing each time a frame of monitoring image X is obtained: generating a binary foreground image of monitoring image X according to the background model; obtaining edge texture information of monitoring image X and of the background image, respectively, and determining the edge similarity between monitoring image X and the background image according to the obtained edge texture information; and determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
11. The apparatus according to claim 10, characterized in that the modeling module comprises:
a first processing unit, configured to obtain M frames of monitoring images in turn, where M is a positive integer greater than 1, and to send each obtained frame of monitoring image to a second processing unit;
the second processing unit, configured to use the first received frame of monitoring image as the background image, and for each pixel in the background image, take the gray value of that pixel as the mean corresponding to that pixel and the variance of the gray value of that pixel as the variance corresponding to that pixel;
and thereafter, each time a frame of monitoring image is received, to perform the following processing:
determining the updated background image Bnew(x,y): Bnew(x,y) = (1-ρ)Bold(x,y) + ρI(x,y); where ρ denotes the update rate, whose value equals 1/N, N denotes the number of monitoring images received so far, I(x,y) denotes the most recently received monitoring image, and Bold(x,y) denotes the background image before updating;
for each pixel in Bnew(x,y), taking the gray value of that pixel as the mean corresponding to that pixel, and taking (1-ρ)σold + ρd as the variance corresponding to that pixel; where σold denotes the variance corresponding to the pixel at the same coordinate position in Bold(x,y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x,y) and the mean corresponding to the pixel at the same coordinate position in Bold(x,y).
12. The apparatus according to claim 10, characterized in that the detection module comprises:
a third processing unit, configured to obtain each frame of monitoring image in turn and to send each obtained frame of monitoring image to a fourth processing unit;
the fourth processing unit, configured to perform the following processing each time a frame of monitoring image X is received: generating the binary foreground image of monitoring image X according to the background model; obtaining edge texture information of monitoring image X and of the background image, respectively, and determining the edge similarity between monitoring image X and the background image according to the obtained edge texture information; and determining whether someone is present in monitoring image X according to the generated binary foreground image and the determined edge similarity.
13. The apparatus according to claim 12, characterized in that the detection module further comprises:
a fifth processing unit, configured to update the original background model using monitoring image X after the fourth processing unit determines whether someone is present in monitoring image X;
wherein the fourth processing unit generates the binary foreground image of monitoring image X according to the latest background model obtained; obtains edge texture information of monitoring image X and of the latest background image obtained, respectively; and determines the edge similarity between monitoring image X and the latest background image obtained according to the obtained edge texture information.
14. The apparatus according to claim 13, characterized in that:
the fifth processing unit determines the updated background image Bnew(x,y): Bnew(x,y) = (1-ρ)Bold(x,y) + ρI(x,y); where I(x,y) denotes monitoring image X, and Bold(x,y) denotes the background image before updating;
for each pixel in Bnew(x,y), it takes the gray value of that pixel as the mean corresponding to that pixel, and takes (1-ρ)σold + ρd as the variance corresponding to that pixel; where σold denotes the variance corresponding to the pixel at the same coordinate position in Bold(x,y), and d denotes the difference between the gray value of the pixel at the same coordinate position in I(x,y) and the mean corresponding to the pixel at the same coordinate position in Bold(x,y);
where ρ denotes the update rate;
when it is determined that no one is present in I(x,y), the value of ρ is set to 0.01;
when it is determined that someone is present in I(x,y), the value of ρ is set to 0;
when it is determined that someone has been present in the ATM monitoring scene throughout the time period from T-t to T, and the ATM monitoring scene has remained stationary throughout that period, the value of ρ is set to 1; where T denotes the moment at which I(x,y) is obtained, and t > 0.
15. The apparatus according to claim 14, characterized in that:
for any two frames of images I1(x,y) and I2(x,y) obtained within the time period from T-t to T, the fifth processing unit performs the following processing:
calculating Dif(x,y) = I1(x,y) - I2(x,y); where Dif(x,y) denotes the frame difference image, and I1(x,y) is obtained before I2(x,y);
for each pixel in Dif(x,y), determining whether the gray value of that pixel is greater than a predetermined threshold T1; if so, setting the value of that pixel to 1, otherwise setting it to 0, thereby obtaining the frame-difference binary image Dif_Fg(x,y) of Dif(x,y);
counting the number Dif_Num of pixels whose value is 1 in Dif_Fg(x,y), and determining whether Dif_Num is less than a predetermined threshold T2; if so, determining that the scene is stationary between I1(x,y) and I2(x,y);
if the scene is stationary between any two frames of images obtained within the time period from T-t to T, determining that the ATM monitoring scene has remained stationary throughout the time period from T-t to T.
16. The apparatus according to any one of claims 13, 14 or 15, characterized in that the fourth processing unit comprises:
a foreground detection subunit, configured to generate the binary foreground image of monitoring image X according to the latest background model obtained, and to send the generated binary foreground image to an analysis subunit;
an edge similarity determination subunit, configured to obtain edge texture information of monitoring image X and of the latest background image obtained, respectively, to determine the edge similarity between monitoring image X and the latest background image obtained according to the obtained edge texture information, and to send the determined edge similarity to the analysis subunit;
the analysis subunit, configured to determine whether someone is present in monitoring image X according to the received binary foreground image and edge similarity.
17. The apparatus according to claim 16, characterized in that:
the foreground detection subunit performs the following processing for each pixel in monitoring image X:
calculating the difference d between the gray value of that pixel and the mean corresponding to the pixel at the same coordinate position in the latest background image obtained;
calculating |d|/σ; where σ denotes the variance corresponding to the pixel at the same coordinate position in the latest background image obtained;
determining whether the calculated result of |d|/σ is greater than a predetermined threshold T0; if so, setting the value of that pixel to 1, otherwise setting it to 0.
18. The apparatus according to claim 16, characterized in that:
the foreground detection subunit is further configured to, after generating the binary foreground image of monitoring image X, perform dilation and erosion operations in turn on the binary foreground image, and to send the binary foreground image resulting from the dilation and erosion operations to the analysis subunit.
19. The apparatus according to claim 17, characterized in that:
the edge similarity determination subunit obtains the horizontal edge image and the vertical edge image of monitoring image X, and obtains the horizontal edge image and the vertical edge image of the latest background image obtained, respectively;
according to the horizontal edge image and the vertical edge image of monitoring image X, it calculates for each pixel in monitoring image X the gradient magnitude Igxy: Igxy = |Igx| + |Igy|; where Igx denotes the horizontal gradient value of that pixel, Igy denotes the vertical gradient value of that pixel, and | | denotes taking the absolute value;
according to the horizontal edge image and the vertical edge image of the latest background image obtained, it calculates for each pixel in the latest background image obtained the gradient magnitude Bgxy: Bgxy = |Bgx| + |Bgy|; where Bgx denotes the horizontal gradient value of that pixel, and Bgy denotes the vertical gradient value of that pixel;
it then calculates the edge similarity ESIM between monitoring image X and the latest background image obtained from the gradient magnitudes Igxy and Bgxy; where the value of x ranges from 1 to E, the value of y ranges from 1 to F, E denotes the number of pixels in the horizontal direction of monitoring image X, and F denotes the number of pixels in the vertical direction of monitoring image X.
20. The apparatus according to claim 19, characterized in that:
the analysis subunit counts the number Fgnum of pixels whose value is 1 in the binary foreground image of monitoring image X;
determines whether the following condition is met: Flag = (Fgnum/Area > T3) ∩ (ESIM < T4); where Area denotes the product of the number of pixels in the horizontal direction of monitoring image X and the number of pixels in the vertical direction, and T3 and T4 both denote predetermined thresholds;
and, if the above condition is met, determines that someone is present in monitoring image X; otherwise, no one is present.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210444071.1A CN103810691B (en) | 2012-11-08 | 2012-11-08 | Video-based automatic teller machine monitoring scene detection method and apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103810691A CN103810691A (en) | 2014-05-21 |
CN103810691B true CN103810691B (en) | 2017-02-22 |
Family
ID=50707412
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210444071.1A Active CN103810691B (en) | 2012-11-08 | 2012-11-08 | Video-based automatic teller machine monitoring scene detection method and apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103810691B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104657997B (en) * | 2015-02-28 | 2018-01-09 | 北京格灵深瞳信息技术有限公司 | A kind of lens shift detection method and device |
CN107588857A (en) * | 2016-07-06 | 2018-01-16 | 众智光电科技股份有限公司 | Infrared ray position sensing apparatus |
CN108090916B (en) * | 2017-12-21 | 2019-05-07 | 百度在线网络技术(北京)有限公司 | Method and apparatus for tracking the targeted graphical in video |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3496563B2 (en) * | 1999-03-23 | 2004-02-16 | 日本電気株式会社 | Object detection device, object detection method, and recording medium recording object detection program |
US6917721B2 (en) * | 2001-07-16 | 2005-07-12 | Hewlett-Packard Development Company, L.P. | Method and apparatus for sub-pixel edge detection |
CN101276499B (en) * | 2008-04-18 | 2010-09-01 | 浙江工业大学 | Intelligent monitoring apparatus of ATM equipment based on all-directional computer vision |
CN101404060B (en) * | 2008-11-10 | 2010-06-30 | 北京航空航天大学 | Human face recognition method based on visible light and near-infrared Gabor information amalgamation |
CN101950448B (en) * | 2010-05-31 | 2012-08-22 | 北京智安邦科技有限公司 | Detection method and system for masquerade and peep behaviors before ATM (Automatic Teller Machine) |
CN102236902B (en) * | 2011-06-21 | 2013-01-09 | 杭州海康威视数字技术股份有限公司 | Method and device for detecting targets |
- 2012-11-08: CN CN201210444071.1A patent CN103810691B (Active)
Non-Patent Citations (4)
Title |
---|
An Edge-Texture based Moving Object Detection for Video Content Based Application; Taskeed Jabid et al.; Proceedings of the 14th International Conference on Computer and Information Technology (ICCIT 2011); 2011-12-24; pp. 1-5 *
Difference of Gaussian Edge-Texture Based Background Modeling for Dynamic Traffic Conditions; Amit Satpathy et al.; Lecture Notes in Computer Science; 2008-12-31; Vol. 5358, pp. 406-417 *
A Foreground Detection Algorithm Based on a Multi-layer Background Model; Yang Tao et al.; Journal of Image and Graphics; 2008-07-31; Vol. 13, No. 7, pp. 1303-1308 *
Application of Dynamic Image Understanding Technology in Intelligent ATM Monitoring; Tang Yiping et al.; Computer Measurement & Control; 2009-12-31; Vol. 17, No. 6, pp. 1110-1112, 1119 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |