CN103516956A - PTZ camera invasion monitoring detection method - Google Patents
PTZ camera invasion monitoring detection method
- Publication number
- CN103516956A, CN201210211920.9A, CN201210211920A
- Authority
- CN
- China
- Prior art keywords
- background
- pan
- image
- camera
- video camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
A PTZ camera intrusion monitoring detection method comprises the following steps. In a first step, background frames are acquired and an index to them is established: the camera is first reset, and then, starting from an initial angle, the motion parameters between each pair of adjacent image frames are computed as frames are acquired; the Pan and Tilt rotation changes of the camera are then estimated from the x-, y- and z-direction translational components obtained from the motion parameters, and when the accumulated Pan and Tilt rotation changes of the camera exceed preset values, a background frame is recorded and marked with the current motion parameters. In a second step, the camera is reset again and the recorded background frames are used as the current background; after the camera starts working it begins to rotate, and each time an image frame is acquired the Pan and Tilt rotation data between the current image and the current background are estimated, the background frame and the current frame are registered according to the obtained Pan and Tilt rotation data, and intruding objects are detected by means of an image background elimination method.
Description
Technical field
The present invention relates to a camera state detection method capable of detecting intruding objects in images acquired by a camera.
Background technology
During camera surveillance, background subtraction is commonly used to judge whether an object has intruded. Selecting which frame to use as the current background frame requires knowing the current state of the camera. The camera state is represented by a vector with two components: the horizontal rotation angle and the vertical tilt angle (Pan, Tilt).
Although the current state of the camera can be obtained from the pan-tilt controller, obtaining an accurate state value requires stopping the pan-tilt rotation, because while the pan-tilt head is moving, mechanical error in the control structure and communication delay mean that the reported state value is not the latest value.
The establishment and maintenance of the background model are very important to intrusion detection methods based on background subtraction, and in recent years many background models and maintenance algorithms have been proposed. These algorithms, however, all target the case of a fixed camera. The background maintenance problem in the present invention is more complicated: besides the general background maintenance problem, the problems caused by image registration must also be considered.
Summary of the invention
The object of the invention is a simple camera state computation method that can be used for background frame selection.
To achieve the above object, the present invention adopts the following technical solution:
A PTZ camera intrusion monitoring detection method, comprising step 1 (acquiring background frames and building their index) and step 2 (detecting intruding objects), as set out in the claims.
In step 2, the rotation angle after the camera starts working is such that the overlapping region of two adjacent background images is no less than two-thirds of the total image size.
In step 2, multiresolution layered images are used to register the background frame and the current frame, and the multiresolution layered images are generated by Gaussian pyramid decomposition.
In step 2, the image background elimination method refers to: establishing a single-Gaussian background model and, combined with neighborhood-relevance detection, eliminating the false detections caused by image registration and background motion.
By adopting the above technical solution, the present invention can detect intruding objects quickly and accurately while the camera is moving, and the algorithm requires no camera calibration, greatly improving detection efficiency.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the flow chart of the background acquisition algorithm of the present invention.
Fig. 3 is the flow chart of the background tracking algorithm of the present invention.
Fig. 4 shows a group of background frames captured along the camera pan path.
Fig. 5 shows moving-object detection examples at different times.
Fig. 6 shows the detection results obtained after background subtraction, elimination of registration residual, and suppression of falsely changed pixels, where Fig. 6(1) is the background frame; Fig. 6(2) the current frame; Fig. 6(3) the current frame after registration against the background frame; Fig. 6(4) the image after direct background subtraction; Fig. 6(5) the result of threshold binarization applied to (4); and Fig. 6(6) the changed pixels obtained after applying the false-change suppression algorithm.
Embodiment
Intrusion detection with a rotating PTZ camera is divided into two stages. The first stage is a preprocessing stage, in which background frames are captured and an index to them is established. The second stage is the formal detection stage, in which a background frame is chosen according to the current camera position and the background elimination method is used to find changed pixels. As shown in Fig. 1, the method specifically comprises the following steps.
Let (X, Y, Z) denote a three-dimensional point in the initial camera coordinate system and (u, v) its imaging point in the two-dimensional image coordinate system. By the pinhole imaging principle they satisfy

u = f·X/Z, v = f·Y/Z, (1)

which can be expressed in homogeneous-coordinate form as

s·[u, v, 1]ᵀ = diag(f, f, 1)·[X, Y, Z]ᵀ, (2)

where f is the lens focal length and s a scale factor.

If the camera rotates around the coordinate axes x, y, z by the angles α, β, γ respectively, the point (X, Y, Z) takes the new coordinates (X′, Y′, Z′) in the new camera coordinate system, and the relation between the two is

[X′, Y′, Z′]ᵀ = R(α, β, γ)·[X, Y, Z]ᵀ. (3)

When the camera motion is small, i.e. when α, β, γ are very small, the trigonometric approximations sin x ≈ x and cos x ≈ 1 reduce the rotation matrix to

R ≈ [ 1 −γ β ; γ 1 −α ; −β α 1 ]. (4)

Let the new image coordinates be (x′, y′); their relation to (x, y) follows from (2) and (4). Adopting a 4-parameter model, this relation can be written as

x′ = m4·x − m3·y + m1, y′ = m3·x + m4·y + m2, (5)

where m1 and m2 absorb the translation-like image shifts induced by the small rotations (scaled by the focal length f), and m3 and m4 the in-plane rotation terms, with m4 ≈ 1.
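As an illustrative sketch (not part of the patent text), assuming the common 4-parameter similarity form x′ = m4·x − m3·y + m1, y′ = m3·x + m4·y + m2, the model can be applied to an image point as:

```python
def warp_point(p, m):
    """Apply the 4-parameter motion model to an image point.

    p = (x, y) image coordinates; m = (m1, m2, m3, m4), where m1, m2
    act as translations induced by the small pan/tilt rotations and
    m3, m4 encode the (small) in-plane rotation, with m4 close to 1.
    """
    x, y = p
    m1, m2, m3, m4 = m
    return (m4 * x - m3 * y + m1, m3 * x + m4 * y + m2)
```

For example, with m3 = 0 and m4 = 1 the model degenerates to a pure translation by (m1, m2), which is the dominant effect of small pan/tilt rotations.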
Obviously the variations of α, β, γ are reflected in m1, m2, m3 respectively. If the focal length of the camera remains unchanged during monitoring, the values of α, β, γ can be approximately estimated from the values of m1, m2, m3; that is, the accumulated values of m1, m2, m3, i.e. (Pan, Tilt, Roll), can be used as an approximate index of the background frames. Because the Roll motion of the camera is not considered, the Roll value is approximately 0.
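The accumulation of the estimated per-frame parameter changes into a (Pan, Tilt) index can be sketched as follows (illustrative only; in practice each delta comes from the image-based motion estimation, and Roll/m3 is ignored as stated above):

```python
class PanTiltIndex:
    """Accumulate frame-to-frame model deltas (dm1, dm2) as an
    approximate (Pan, Tilt) index for background frames.
    Roll is not tracked, as it is assumed to be ~0."""

    def __init__(self):
        self.pan = 0.0   # accumulated m1
        self.tilt = 0.0  # accumulated m2

    def update(self, dm1, dm2):
        """Add one frame-to-frame change and return the running index."""
        self.pan += dm1
        self.tilt += dm2
        return (self.pan, self.tilt)
```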
Specifically, after the camera resets, the camera is started; each time the camera rotates through a certain angle it generates a background image. The rotation angle is determined such that the overlapping region of two adjacent background pictures is no less than two-thirds of the total image size.
When Tilt takes a fixed value, the background-frame acquisition algorithm is as follows:
Step 1: Send a reset command to the pan-tilt controller and wait until the pan-tilt head has reset.
Step 2: Initialize the motion parameters: t = 0, ρ(t) = (m1, m2, m3, m4) = (0, 0, 0, 0); initial pan value S = 0, old pan value S_old = 0. Capture a frame I(t) and save the motion parameters ρ(t) and the current image I(t) as the first background frame.
Step 3: Capture the next frame I(t+1).
Step 4: Compute the motion-parameter change δρ = −(HᵀH)⁻¹Hᵀ(I(t+1) − I(t)); S = S + δm1.
Step 5: Update the motion parameters: ρ(t+1) = ρ(t) + δρ.
Step 6: If |S − S_old| ≥ w_size/3, where w_size is the picture frame width (so that adjacent backgrounds overlap by at least two-thirds of the frame), save the motion parameters ρ(t+1) and the current image I(t+1) as a background frame, and set S_old = S.
Step 7: If sign(δm1) = sign(S), where sign is the sign-detection function, go to Step 3; otherwise finish.
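The control flow of Steps 2-7 can be simulated on precomputed δm1 values (a sketch under two assumptions: the image-based estimation of Step 4 is replaced by given deltas, and the Step 6 saving condition is taken to be |S − S_old| ≥ w_size/3, which yields the two-thirds overlap between adjacent backgrounds):

```python
def acquire_background_indices(deltas, w_size):
    """Simulate the saving/termination logic of the background
    acquisition algorithm.

    deltas: per-frame pan changes (the estimated dm1 values).
    w_size: image frame width in pixels.
    Returns the frame indices saved as background frames
    (frame 0 is always the first background).
    """
    S, S_old = 0.0, 0.0
    saved = [0]
    for i, dm1 in enumerate(deltas, start=1):
        S += dm1                          # Step 4: accumulate pan
        if abs(S - S_old) >= w_size / 3:  # Step 6: enough new area seen
            saved.append(i)
            S_old = S
        if dm1 * S < 0:                   # Step 7: pan direction reversed
            break
    return saved
```

With a constant pan speed the backgrounds are saved at regular intervals; a reversal of the pan direction terminates the scan.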
The background tracking algorithm computes the camera's current rotation and tilt angles (pan, tilt) and uses them to select a suitable background picture for background subtraction. When Tilt is fixed as above, the intrusion object detection algorithm is as follows:
Step 1: Send a reset command to the pan-tilt controller, wait until the pan-tilt head has reset, and capture an image I(t).
Step 2: Initialize the camera motion ρ(t=0) = (m1, m2, m3, m4) = (0, 0, 0, 0); read the first background frame I_b; initial pan value S = 0.
Step 3: Capture the next frame I(t+1).
Step 4: Compute the camera motion change δρ = −(HᵀH)⁻¹Hᵀ(I(t+1) − I(t)).
Step 5: Update the motion parameter S = S + δm1.
Step 6: Retrieve the most suitable background frame according to S and judge whether the background must change; if so, read the new background into I_b, otherwise go to Step 8.
Step 7: Use multiresolution registration to estimate the initial motion parameters ρ0 = estimate(I(t+1), I_b); go to Step 10.
Step 8: Estimate the motion parameter change δρ0 = −(HᵀH)⁻¹Hᵀ(I(t+1) − warp(ρ0, I_b)).
Step 9: Update the motion parameters ρ0(t+1) = ρ0(t) + δρ0.
Step 10: Use background subtraction to detect moving objects; go to Step 3.
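Step 6's retrieval of the "most suitable" background frame can be sketched as a nearest-neighbor lookup over the pan values saved with each background frame (an assumption for illustration; the patent does not spell out the retrieval rule):

```python
def nearest_background(S, pan_index):
    """Return the index of the stored background frame whose saved
    pan value is closest to the current accumulated pan S.

    pan_index: list of pan values, one per stored background frame.
    """
    return min(range(len(pan_index)), key=lambda i: abs(pan_index[i] - S))
```

A background change is then needed whenever the returned index differs from the index of the background currently held in I_b.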
Because the number of prestored background frames is limited, the motion between the current frame and the background frame may be large. Solving the equation directly then not only requires a very long computation time but may also yield only a locally optimal solution. Multiresolution layered image registration is generally used to solve this problem. The multiresolution layered images can be generated by wavelet decomposition or by Gaussian pyramid decomposition; considering algorithm speed and implementation complexity, the present invention adopts Gaussian pyramid decomposition. The registration operation starts at the lowest resolution, and the motion parameters obtained at a low resolution serve as the initial parameters of the next, higher-resolution level, where new and more accurate motion parameters are solved, until the highest-resolution layer is reached.
In fact, at the same resolution level a single registration operation is often not enough to obtain the optimal solution; several registration cycles are generally needed. Two adjacent registrations approximate a consecutive-image motion problem, so the same motion parameter model as in the layered registration can be used. In each repeated registration at the same resolution level, the motion parameters of the previous step are used to transform the current frame into the coordinate system of the background frame; in the next cycle it is registered against the background frame again and the motion parameters are updated, and so on, until the motion parameters no longer change significantly.
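The coarse-to-fine loop can be sketched for the simplest case, pure integer translation, with a box-filter (mean) pyramid standing in for the Gaussian pyramid and wrap-around indexing to avoid border handling (an illustration only; the patent's full 4-parameter model and iterative solver are replaced by a local exhaustive search, and all names are illustrative):

```python
def downsample(img):
    """One pyramid level: 2x2 block averaging (a box-filter stand-in
    for Gaussian smoothing followed by subsampling)."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)] for y in range(h // 2)]

def ssd(a, b, dx, dy):
    """Sum of squared differences between a shifted by (dx, dy)
    (with wrap-around) and b."""
    h, w = len(a), len(a[0])
    return sum((a[(y + dy) % h][(x + dx) % w] - b[y][x]) ** 2
               for y in range(h) for x in range(w))

def coarse_to_fine_shift(a, b, levels):
    """Estimate the integer translation (dx, dy) such that
    a[(y+dy) % h][(x+dx) % w] matches b[y][x].

    Registration starts at the lowest resolution; each level's result,
    doubled, seeds a +/-1 local search at the next finer level."""
    pyr_a, pyr_b = [a], [b]
    for _ in range(levels - 1):
        pyr_a.append(downsample(pyr_a[-1]))
        pyr_b.append(downsample(pyr_b[-1]))
    dx = dy = 0
    for la, lb in zip(reversed(pyr_a), reversed(pyr_b)):
        dx, dy = 2 * dx, 2 * dy
        _, ex, ey = min((ssd(la, lb, dx + ex, dy + ey), ex, ey)
                        for ex in (-1, 0, 1) for ey in (-1, 0, 1))
        dx, dy = dx + ex, dy + ey
    return dx, dy
```

Each level only refines the doubled coarse estimate by one pixel, which is exactly why a large inter-frame motion that would trap a single-resolution solver in a local optimum can still be recovered.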
In step 2, the image background elimination method refers to: establishing a single-Gaussian background model and, combined with neighborhood-relevance detection, eliminating the false detections caused by image registration and background motion.
The idea of neighborhood-relevance detection is as follows. For a pixel (x, y) preliminarily judged to be a motion pixel, further check whether (x, y) fits the background model of its neighborhood pixels: if there exists a neighbor (x′, y′) ∈ N(x, y) such that |I(x, y) − I_b(x′, y′)| < T(x′, y′), the change at (x, y) is considered to be caused by registration error or background motion, and (x, y) is no longer regarded as a moving-object pixel. Here M holds the flags of pixels judged to be motion pixels, N(x, y) denotes the set of neighborhood pixels of (x, y), I(x, y) is the intensity of the pixel at position (x, y) in the current frame, I_b(x, y) is the intensity of the pixel at position (x, y) in the background frame, and T(x, y) is the change decision threshold. The detection neighborhood is generally set to a circular area 3 to 5 pixels in diameter.
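The neighborhood check can be sketched as follows (a minimal illustration on intensity grids; the circular 3-5 pixel neighborhood is approximated by a square window of radius 1, a single scalar threshold stands in for the per-pixel T(x, y), and all names are illustrative):

```python
def suppress_false_detections(mask, cur, bg, T, radius=1):
    """Re-examine pixels flagged as motion: if the current intensity
    at (x, y) matches the background value of any pixel in its
    neighborhood, the change is attributed to registration error or
    background motion and the flag is cleared."""
    h, w = len(cur), len(cur[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    if abs(cur[y][x] - bg[ny][nx]) < T:
                        out[y][x] = 0  # explained by a neighbor's background
                        break
                else:
                    continue
                break
    return out
```

In the typical one-pixel-misregistration case, a bright background feature appears shifted in the current frame; both the pixel it left and the pixel it moved to find a matching background value in their neighborhood and are cleared, while a genuinely new intensity survives.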
Update of the background model: if (x, y) is judged to be a motion pixel, the background model remains unchanged; otherwise the background model and decision threshold are updated, with the background mean updated as I_b(x, y) ← (1 − λ)·I_b(x, y) + λ·I(x, y) and the threshold T(x, y) updated analogously, where λ ∈ [0, 1] is the update rate.
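The per-pixel update rule can be sketched as follows (the blend I_b ← (1 − λ)·I_b + λ·I is the standard running-average update for a single-Gaussian model and is assumed here, since the patent's formula image is not reproduced in the text):

```python
def update_background_pixel(bg_val, cur_val, is_motion, lam):
    """Single-Gaussian mean update: motion pixels leave the model
    untouched; background pixels blend toward the current frame
    with update rate lam in [0, 1]."""
    if is_motion:
        return bg_val
    return (1.0 - lam) * bg_val + lam * cur_val
```

A small lam makes the model adapt slowly (robust to brief occlusions); a large lam absorbs illumination changes quickly but risks absorbing slow intruders into the background.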
Intrusion detection experiments under camera motion: Fig. 4 shows a group of background frames captured along the camera pan path, Fig. 5 shows a group of intrusion detection examples, and Fig. 6 shows the detection results obtained after background subtraction, elimination of registration residual, and suppression of falsely changed pixels.
Claims (4)
1. A PTZ camera intrusion monitoring detection method, characterized in that it comprises the following steps:
Step 1, extracting background frames and establishing their index: first reset the camera, then control the camera from its initial angle; each time one image frame is acquired, compute the motion parameters between adjacent image frames, the motion parameters referring to the Pan and Tilt values representing the camera attitude; then estimate the change of the camera's Pan and Tilt rotation from the x-, y- and z-direction translational components obtained from the motion parameters; when the accumulated change of the camera's Pan and Tilt rotation exceeds a set value, capture a background frame and mark this background frame with the current motion parameters;
Step 2, detecting objects intruding into the camera view: reset the camera again and take the obtained background frame as the current background; after the camera starts working it begins to rotate, and each time one image frame is acquired, estimate the Pan and Tilt rotation data between the current image and the current background, register the background frame and the current frame according to the obtained Pan and Tilt rotation data, and detect intruding objects using the image background elimination method; if no intruding object is detected, change the background frame.
2. The PTZ camera intrusion monitoring detection method according to claim 1, characterized in that in said step 2 the rotation angle after the camera starts working is such that the overlapping region of two adjacent background images is no less than two-thirds of the total image size.
3. The PTZ camera intrusion monitoring detection method according to claim 1, characterized in that in said step 2 multiresolution layered images are used to register the background frame and the current frame, and the multiresolution layered images are generated by Gaussian pyramid decomposition.
4. The PTZ camera intrusion monitoring detection method according to claim 1, characterized in that in said step 2 the image background elimination method refers to: establishing a single-Gaussian background model and, combined with neighborhood-relevance detection, eliminating the false detections caused by image registration and background motion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210211920.9A CN103516956B (en) | 2012-06-26 | 2012-06-26 | Pan/Tilt/Zoom camera monitoring intrusion detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103516956A true CN103516956A (en) | 2014-01-15 |
CN103516956B CN103516956B (en) | 2016-12-21 |
Family
ID=49898923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210211920.9A Expired - Fee Related CN103516956B (en) | 2012-06-26 | 2012-06-26 | Pan/Tilt/Zoom camera monitoring intrusion detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103516956B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107404419A (en) * | 2017-08-01 | 2017-11-28 | 南京华苏科技有限公司 | Based on the anti-false survey method and device of the network covering property of picture or video test |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060192857A1 (en) * | 2004-02-13 | 2006-08-31 | Sony Corporation | Image processing device, image processing method, and program |
JP2008217521A (en) * | 2007-03-06 | 2008-09-18 | Nippon Telegr & Teleph Corp <Ntt> | Parameter estimation device, parameter estimation method, program with this method loaded, and recording medium with this program recorded |
CN101739686A (en) * | 2009-02-11 | 2010-06-16 | 北京智安邦科技有限公司 | Moving object tracking method and system thereof |
CN102006461A (en) * | 2010-11-18 | 2011-04-06 | 无锡中星微电子有限公司 | Joint tracking detection system for cameras |
Also Published As
Publication number | Publication date |
---|---|
CN103516956B (en) | 2016-12-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20161221; Termination date: 20180626 ||