CN105979211B - Three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems - Google Patents

Three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems

Info

Publication number
CN105979211B
CN105979211B (application CN201610397382.5A / CN201610397382A)
Authority
CN
China
Prior art keywords
camera
dimensional
covering
rectangular body
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610397382.5A
Other languages
Chinese (zh)
Other versions
CN105979211A (en)
Inventor
熊永华
伍成静
赖旭芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN201610397382.5A priority Critical patent/CN105979211B/en
Publication of CN105979211A publication Critical patent/CN105979211A/en
Application granted granted Critical
Publication of CN105979211B publication Critical patent/CN105979211B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems. A three-dimensional camera model is established from the camera parameters; according to the monitoring radius, it is judged whether a camera can effectively cover the coverage object, which is modeled as a cuboid; if effective coverage is possible, the camera and the cuboid coverage object are projected onto the horizontal plane, and the three-dimensional coverage angle formed by the camera over the cuboid coverage object is calculated; the coverage angles of multiple cameras are combined by taking their union, which yields the three-dimensional coverage rate of a group of cameras over the coverage object. In the invention both the camera and the coverage object are three-dimensional models, which matches the actual situation; the inputs of the calculation method are the given parameters of the coverage object and the cameras, so the method is simple, computationally light, algorithmically efficient and easy to use. The method is also general and universally applicable.

Description

Three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems
Technical field
The invention belongs to the field of multimedia sensor networks, and in particular relates to a three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems.
Background art
Video monitoring systems have attracted more and more attention and have become a hot topic of current academic research. Most current research assumes that a coverage object can be covered by a single monitoring node, but this assumption is not general in practical application scenarios. When the coverage object is so large that a single monitoring node can only cover part of it, multiple nodes are needed to complete the monitoring of that object, i.e. multi-viewpoint monitoring. In order to reduce the energy consumed in monitoring the coverage object, it is necessary to find the monitoring node set with the fewest nodes or the least redundancy, and the prerequisite for selecting such nodes is to know the coverage rate achieved over the coverage object by a single monitoring node and by a set of monitoring nodes.
Existing coverage rate calculation methods fall mainly into three classes: target coverage, area coverage and angle coverage. Target coverage treats the monitored object as a point; the coverage rate is calculated as the ratio of the number of covered points to the total number of points. In area coverage the coverage object is a region, and the coverage rate is calculated as the ratio of the covered area to the total area. Neither of these monitored-object models and coverage calculations suits practical application scenarios. Multi-viewpoint video monitoring belongs to angle coverage, but current angle-coverage calculation methods have many limitations: the camera model and the coverage object are not both three-dimensional, the monitoring direction is treated as a special case, and coverage in the vertical direction is not discussed.
In summary, existing coverage rate calculation methods either do not match the actual situation or lack universality, and are therefore difficult to apply to multi-viewpoint video monitoring systems.
Summary of the invention
In view of the shortcomings of the prior art, the present invention provides a three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems, in order to assess the coverage of a monitored object by a group of monitoring nodes. In this method both the camera and the coverage object are three-dimensional models, which matches the actual situation; the input of the calculation method is the set of given parameters of the coverage object and the cameras, so the method is simple, computationally light, algorithmically efficient and easy to use.
The technical solution adopted by the present invention to achieve the above purpose is as follows:
A three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems, comprising the following steps:
1) Establish a three-dimensional coordinate system, convert the monitored object into a cuboid model, and establish a three-dimensional camera model according to the camera parameters and the installation position;
2) According to the monitoring radius, judge whether the camera can effectively cover the cuboid-modeled coverage object;
3) If effective coverage is possible, project the camera and the cuboid coverage object onto the horizontal plane, and calculate the three-dimensional coverage angle formed by the camera over the cuboid coverage object;
4) Take the union of the coverage angles of multiple cameras, and thereby solve for the three-dimensional coverage rate of the group of cameras over the coverage object.
In step 1) the three-dimensional camera model is defined as a five-tuple C_i(n_i, α_i, β_i, λ_i, γ_i), where n_i = (x_i, y_i, z_i) is the spatial coordinate of monitoring node i in the three-dimensional coordinate system; α_i is the angle between the monitoring direction vector of node i and the Z axis; β_i is the angle between the projection of the monitoring direction vector on the X-Y plane and the X axis; λ_i and γ_i are the field-of-view angles of the monitoring direction in the vertical and horizontal directions, respectively.
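For illustration, the five-tuple camera model and the cuboid coverage object can be written as plain data structures, as in the minimal Python sketch below; the field names (lam for λ_i, and so on) and the use of degrees for all angles are illustrative assumptions, not part of the patent text.

from dataclasses import dataclass

@dataclass
class Camera:
    # Five-tuple C_i(n_i, alpha_i, beta_i, lambda_i, gamma_i)
    x: float      # n_i: X coordinate of monitoring node i
    y: float      # n_i: Y coordinate
    z: float      # n_i: Z coordinate (mounting height)
    alpha: float  # angle between the monitoring direction and the Z axis, degrees
    beta: float   # angle between the X-Y projection of the direction and the X axis, degrees
    lam: float    # vertical field-of-view angle lambda_i, degrees
    gamma: float  # horizontal field-of-view angle gamma_i, degrees

@dataclass
class Cuboid:
    # Coverage object: a 2a x 2b x H cuboid; the projection O' of its
    # top-surface center onto the X-Y plane is taken as the origin.
    a: float  # half of the length along X
    b: float  # half of the width along Y
    H: float  # height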
In step 2), for the camera to effectively cover the cuboid-modeled coverage object, the following two conditions must be satisfied: first, the entire profile height of the monitored object must be covered; second, the geometric center of the upper surface of the cuboid model must lie within the monitoring range. If both conditions are satisfied, the coverage is judged to be effective.
In step 3), the three-dimensional coverage angle is calculated as follows:
a: Take the projection onto the X-Y plane of the geometric center of the upper surface of the cuboid model as the coordinate origin, establish a rectangular coordinate system, and then mark the coordinates of the other points accordingly;
b: From the coordinates of the camera and the slopes of the lines along the left and right boundaries of the camera's field of view, determine the line equations of the lines along the left and right boundaries of the camera's field of view;
c: Calculate the coordinates of the intersection points of the left and right boundaries of the camera's field of view with the upper surface of the cuboid model;
d: Calculate the three-dimensional coverage angle according to the law of cosines.
With the continuous development of video monitoring systems, the coverage rate used to assess coverage effect has become a key factor in characterizing coverage quality. At the same time, calculating the coverage rate of a group of nodes also provides an evaluation criterion for subsequent node scheduling, so that resources can be used effectively. The present invention therefore proposes a three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems. Compared with the prior art, the beneficial effects of the invention are as follows: both the camera and the coverage object are three-dimensional models, which matches the actual situation. In addition, the inputs of the method are intrinsic parameters and installation parameters. The intrinsic parameters comprise the monitoring direction and the three-dimensional field-of-view angles of the camera, together with the size of the coverage object; these parameters can be obtained directly from the given camera and the type of coverage object. The installation parameters comprise the coordinates of the camera and of the coverage object and their relative distance; these parameters can be obtained when the equipment is installed. The method is therefore simple, computationally light, algorithmically efficient and easy to use. Moreover, the method does not treat the monitoring direction as a special case, so it is general, and it is also applicable to scenarios with heterogeneous cameras, so it is universal.
Description of the drawings
Fig. 1 is a schematic diagram of the three-dimensional directional camera model;
Fig. 2 is a schematic diagram of three-dimensional monitoring;
Fig. 3 is a schematic diagram of monitoring in the X-Y plane.
Specific embodiment
The present invention is described in detail below with reference to specific embodiments, but the protection scope of the present invention is not limited to the following embodiments.
In this embodiment the monitored object can, according to the actual situation (for example a shopping-mall building), be generalized to a cuboid of length 2a, width 2b and height H. The three-dimensional directional camera model is defined as a five-tuple C_i(n_i, α_i, β_i, λ_i, γ_i), where n_i = (x_i, y_i, z_i) is the spatial coordinate of monitoring node i; α_i is the angle (also called the pitch angle) between the monitoring direction vector and the Z axis; β_i is the angle between the projection of the direction vector on the X-Y plane and the X axis; λ_i and γ_i are the field-of-view angles of the monitoring direction in the vertical and horizontal directions respectively. The three-dimensional directional camera model is shown in Fig. 1, in which R_i is the effective monitoring radius of monitoring node i.
The monitored object is a cuboid of height H, so the boundaries of the camera's vertical field of view may intersect the four vertical side faces of the cuboid; the minimum monitoring radius R_i1 and the maximum monitoring radius R_i2 can then be expressed as formula (1) and formula (2).
h is the height of the point at which the boundary of the camera's vertical field of view intersects a side face of the cuboid. When h = 0, the boundary of the vertical field of view intersects the ground of the X-Y plane and has no intersection with the cuboid. When h = H, the boundary of the vertical field of view intersects the upper surface of the cuboid. When 0 < h < H, the boundary of the field of view and a side face of the cuboid intersect at some point P(x_p, y_p, h), with a corresponding monitoring radius. Clearly h satisfies the relation h = f(x_p, y_p), i.e. it changes as the horizontal coordinates of the intersection point change, so the monitored height |h2 - h1| of the same side face by the same camera also differs from place to place. It is therefore very difficult to assess the coverage of a region in terms of the monitored height, unless the entire vertical height is simultaneously covered by the camera.
Because in a multi-viewpoint monitoring scene the entire height of a cuboid side face, from bottom to top, must be covered simultaneously by one camera, all cameras are considered to be deployed at positions whose height is greater than H, i.e. z_i ≥ H. For a cuboid to be covered by a camera, R_i1 and R_i2 must first satisfy formula (3) and formula (4) respectively:
where Q(x_q, y_q, 0) is any point on the lower surface of the cuboid and O(x_o, y_o, H) is the geometric center of the upper surface of the cuboid. When the projection O' of the point O(x_o, y_o, H) on the X-Y plane is taken as the coordinate origin, formula (4) can be simplified to formula (5).
In addition, for a cuboid to be covered by a camera, the geometric center O of the upper surface must also lie within the horizontal field of view, i.e. formula (6), in order to guarantee that the center O of the upper surface is covered.
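A simplified Python sketch of the effective-coverage test of step 2) follows. The monitoring-radius bounds of formulas (3)-(5) are given as images in the original publication and are therefore passed in here as assumed inputs r_min and r_max; only the mounting-height requirement z_i ≥ H and the horizontal-field-of-view containment of formula (6) are computed explicitly, and the function name is illustrative.

import math

def effectively_covers(cam, cuboid, r_min, r_max):
    # Cameras are assumed to be mounted above the cuboid top, z_i >= H.
    if cam.z < cuboid.H:
        return False

    # Stand-in for formulas (3)-(5): the horizontal distance from the camera
    # projection n_i' to the top-center projection O' (the origin) must lie
    # within the monitoring-radius bounds supplied by the caller.
    if not (r_min <= math.hypot(cam.x, cam.y) <= r_max):
        return False

    # Formula (6): the center O must lie within the horizontal field of view,
    # i.e. the angle between the projected viewing direction and the direction
    # from n_i' towards O' must not exceed gamma_i / 2.
    to_center = math.atan2(-cam.y, -cam.x)
    view_dir = math.radians(cam.beta)
    diff = math.atan2(math.sin(to_center - view_dir), math.cos(to_center - view_dir))
    return abs(math.degrees(diff)) <= cam.gamma / 2.0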
The 3D coverage of the coverage object by camera i is shown in Fig. 2. The projection of the coordinate n_i of the monitoring node onto the X-Y plane is denoted n_i', and the projection onto the X-Y plane of the intersection of the node's monitoring direction vector with the cuboid is denoted C'. The left and right boundaries of the field of view intersect the cuboid at points A and B respectively, whose projections onto the X-Y plane are denoted A' and B'. The entire 3D monitoring scene, after projection onto the X-Y plane, is shown in Fig. 3.
Because one camera simultaneously covers part of the upper surface (including the upper-surface center O) and the corresponding side-face regions, it is easy to see that when the upper surface is covered over the full angle, all parts of the four side faces are covered as well, so the entire surface of the cuboid is covered (the lower surface rests on the ground and does not need to be covered); that is, full-angle coverage of the cuboid is achieved. The three-dimensional coverage problem can therefore be converted into the perimeter-coverage problem of the upper surface. For a camera that satisfies formulas (3), (4) and (6), its coverage angle θ_i^XY over the upper surface can serve as the three-dimensional coverage angle θ_i of that camera node over the cuboid.
First, the projection onto the X-Y plane of the geometric center of the cuboid's upper surface is taken as the coordinate origin and written O'(0, 0). Since the length, width and height of the cuboid are known, the coordinates of every vertex of the cuboid are also easily determined. Once the camera parameters are given, n_i, α_i, β_i, λ_i and γ_i are all known, so θ_i^XY can be calculated.
θ_i^XY is exactly ∠A'O'B', and ∠A'O'B' can be calculated as soon as the coordinates of points A' and B' are known. Since A and B are the intersections of the left and right field-of-view boundaries with the cuboid's upper surface, and A' and B' are their projections onto the X-Y plane, the points A' and B' can be computed as the intersections of line n_i'A' and line n_i'B' with the sides of the rectangular upper surface that those lines cross. Lines n_i'A' and n_i'B' are in turn determined by the point n_i'(x_i, y_i) and the slopes of the lines n_i'A' and n_i'B' through that point, and these slopes can be solved from the angular relationship between β_i and λ_i.
The entire calculation procedure for the coverage angle is as follows.
Step a: Establish a rectangular coordinate system with O' as the coordinate origin, then mark the coordinates of the other points accordingly.
Step b: The point n_i', together with the two lines through it with slopes tan(β_i ± 0.5λ_i), determines line n_i'A' and line n_i'B', i.e. the lines along the left and right boundaries of the camera's field of view. Let point A' lie on the line with slope tan(β_i - 0.5λ_i) and point B' on the line with slope tan(β_i + 0.5λ_i); the slope expressions are shown in formula (7).
Step c: Because points A' and B' lie on the boundary of the cuboid top, one coordinate of each point can be determined from the positions of A' and B' respectively; substituting the known coordinate into formula (7) then yields the complete coordinates (x_A', y_A') and (x_B', y_B') of points A' and B'.
Step d: According to the law of cosines, cos θ_i^XY = (A'O'² + B'O'² - A'B'²) / (2 · A'O' · B'O'), i.e. θ_i^XY = arccos[(A'O'² + B'O'² - A'B'²) / (2 · A'O' · B'O')].
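Steps a-d can be sketched in Python as follows. The sketch assumes the camera projection n_i' lies outside the projected top rectangle [-a, a] x [-b, b], takes A' and B' as the first crossings of the field-of-view boundary rays with the rectangle's edges, and follows formula (7) in using the half-angle 0.5·λ_i; the helper names are illustrative.

import math

def _ray_hits_rectangle_edge(px, py, theta, a, b):
    # First intersection of the ray from (px, py) in direction theta (radians)
    # with the boundary of the rectangle [-a, a] x [-b, b] centered at O'.
    dx, dy = math.cos(theta), math.sin(theta)
    best = None
    for x_edge in (-a, a):          # edges x = -a and x = +a
        if abs(dx) > 1e-12:
            t = (x_edge - px) / dx
            y = py + t * dy
            if t > 0 and -b <= y <= b and (best is None or t < best[0]):
                best = (t, (x_edge, y))
    for y_edge in (-b, b):          # edges y = -b and y = +b
        if abs(dy) > 1e-12:
            t = (y_edge - py) / dy
            x = px + t * dx
            if t > 0 and -a <= x <= a and (best is None or t < best[0]):
                best = (t, (x, y_edge))
    return None if best is None else best[1]

def coverage_angle_xy(cam, cuboid):
    # Step a: O' is the origin; the camera projects to n_i' = (x_i, y_i).
    px, py = cam.x, cam.y
    # Step b: boundary rays with slopes tan(beta_i -/+ 0.5 * lambda_i), formula (7).
    left = math.radians(cam.beta - 0.5 * cam.lam)
    right = math.radians(cam.beta + 0.5 * cam.lam)
    # Step c: intersection points A' and B' with the edges of the top rectangle.
    A = _ray_hits_rectangle_edge(px, py, left, cuboid.a, cuboid.b)
    B = _ray_hits_rectangle_edge(px, py, right, cuboid.a, cuboid.b)
    if A is None or B is None:
        return 0.0
    # Step d: law of cosines at O' = (0, 0).
    ao = math.hypot(A[0], A[1])
    bo = math.hypot(B[0], B[1])
    ab = math.hypot(A[0] - B[0], A[1] - B[1])
    cos_theta = (ao * ao + bo * bo - ab * ab) / (2.0 * ao * bo)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))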
After the three-dimensional coverage angle of a single camera has been calculated by the above steps, the three-dimensional coverage rate of that single camera is R_i = θ_i^XY / 360°. Note that the three-dimensional coverage angle can also be expressed in range form as θ_i^XY = [s_i, t_i] ⊆ [0°, 360°], where tan s_i = x_B'/y_B' and tan t_i = x_A'/y_A'. The coverage angle of a group of monitoring nodes over the monitored region is θ_K^XY = ∪_{i∈K}[s_i, t_i], where K is a set of monitoring nodes. The coverage rate of monitoring node set K over the monitored region is R_K = θ_K^XY / [0°, 360°], i.e. the total angular measure of the union divided by 360°.
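The union in step 4) amounts to merging angular intervals. A sketch is given below, assuming each camera's coverage has already been expressed as an interval [s_i, t_i] in degrees; intervals that wrap past 360° are split before merging, and the function (whose name is illustrative) returns the coverage rate of the node set.

def group_coverage_rate(intervals):
    # Coverage rate of a node set K from per-camera intervals [s_i, t_i] in degrees.
    spans = []
    for s, t in intervals:
        s, t = s % 360.0, t % 360.0
        if t >= s:
            spans.append((s, t))
        else:                         # interval wraps past 0/360 degrees
            spans.append((s, 360.0))
            spans.append((0.0, t))
    spans.sort()
    covered, cur_s, cur_t = 0.0, None, None
    for s, t in spans:                # merge overlapping spans
        if cur_s is None:
            cur_s, cur_t = s, t
        elif s <= cur_t:
            cur_t = max(cur_t, t)
        else:
            covered += cur_t - cur_s
            cur_s, cur_t = s, t
    if cur_s is not None:
        covered += cur_t - cur_s
    return covered / 360.0            # R_K = |theta_K^XY| / 360 degrees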
The three-dimensional coverage rate calculation method of the invention can compute a concrete three-dimensional coverage value from the given parameters of the cameras and the coverage object.
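Purely as a usage illustration, and assuming the sketches above are in scope, the pieces can be combined as follows; the numeric camera parameters and the radius bounds are made-up example values, not taken from the patent.

import math

cams = [
    Camera(x=30.0, y=0.0, z=12.0, alpha=110.0, beta=180.0, lam=40.0, gamma=60.0),
    Camera(x=0.0, y=-25.0, z=15.0, alpha=115.0, beta=90.0, lam=40.0, gamma=60.0),
]
building = Cuboid(a=10.0, b=8.0, H=10.0)

intervals = []
for cam in cams:
    if not effectively_covers(cam, building, r_min=5.0, r_max=60.0):
        continue
    theta = coverage_angle_xy(cam, building)
    print(f"camera at ({cam.x:.0f}, {cam.y:.0f}): theta = {theta:.1f} deg, "
          f"R_i = {theta / 360.0:.2%}")
    # Range form [s_i, t_i]: angular positions of B' and A' around O',
    # recomputed from the same boundary rays used in coverage_angle_xy.
    left = math.radians(cam.beta - 0.5 * cam.lam)
    right = math.radians(cam.beta + 0.5 * cam.lam)
    A = _ray_hits_rectangle_edge(cam.x, cam.y, left, building.a, building.b)
    B = _ray_hits_rectangle_edge(cam.x, cam.y, right, building.a, building.b)
    s = math.degrees(math.atan2(B[1], B[0])) % 360.0
    t = math.degrees(math.atan2(A[1], A[0])) % 360.0
    if abs((t - s) % 360.0 - theta) > abs((s - t) % 360.0 - theta):
        s, t = t, s  # pick the end-point order whose arc length matches theta
    intervals.append((s, t))

print("group coverage rate:", group_coverage_rate(intervals))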
With the continuous development of video monitoring systems, the coverage rate used to assess coverage effect has become a key factor in characterizing coverage quality. At the same time, calculating the coverage rate of a group of nodes also provides an evaluation criterion for subsequent node scheduling, so that resources can be used effectively. This method has strong universality: compared with other angle-coverage calculation methods, its coverage-object model is a cuboid, which better matches practical application scenarios, and the camera monitoring direction is not treated as a special case, so the method applies in the general case. In addition, the inputs of the method are intrinsic parameters and installation parameters. The intrinsic parameters comprise the monitoring direction and the three-dimensional field-of-view angles of the camera, together with the size of the coverage object; these parameters can be obtained directly from the given camera and the type of coverage object. The installation parameters comprise the coordinates of the camera and of the coverage object and their relative distance; these parameters can be obtained when the equipment is installed. The method is simple, effective and easy to implement.

Claims (2)

1. A three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems, characterized in that the method comprises the following steps:
1) Establish a three-dimensional coordinate system, convert the monitored object into a cuboid model, and establish a three-dimensional camera model according to the camera parameters and the installation position;
2) According to the monitoring radius, judge whether the camera can effectively cover the cuboid-modeled coverage object;
for the camera to effectively cover the cuboid-modeled coverage object, the following two conditions must be satisfied: first, the entire profile height of the monitored object must be covered; second, the geometric center of the upper surface of the cuboid model must lie within the monitoring range; if both conditions are satisfied, the coverage is judged to be effective;
3) If effective coverage is possible, project the camera and the cuboid coverage object onto the horizontal plane, and calculate the three-dimensional coverage angle formed by the camera over the cuboid coverage object;
the three-dimensional coverage angle is calculated as follows:
a: Take the projection onto the X-Y plane of the geometric center of the upper surface of the cuboid model as the coordinate origin, establish a rectangular coordinate system, and then mark the coordinates of the other points accordingly;
b: From the coordinates of the camera and the slopes of the lines along the left and right boundaries of the camera's field of view, determine the line equations of the lines along the left and right boundaries of the camera's field of view;
c: Calculate the coordinates of the intersection points of the left and right boundaries of the camera's field of view with the upper surface of the cuboid model;
d: Calculate the three-dimensional coverage angle according to the law of cosines;
4) Take the union of the coverage angles of multiple cameras, and thereby solve for the three-dimensional coverage rate of the group of cameras over the coverage object.
2. The three-dimensional coverage rate calculation method suitable for multi-viewpoint video monitoring systems according to claim 1, characterized in that: in step 1) the three-dimensional camera model is defined as a five-tuple C_i(n_i, α_i, β_i, λ_i, γ_i), where n_i = (x_i, y_i, z_i) is the spatial coordinate of monitoring node i in the three-dimensional coordinate system; α_i is the angle between the monitoring direction vector of node i and the Z axis; β_i is the angle between the projection of the monitoring direction vector on the X-Y plane and the X axis; and λ_i and γ_i are the field-of-view angles of the monitoring direction in the vertical and horizontal directions respectively.
CN201610397382.5A 2016-06-07 2016-06-07 A kind of three-dimensional coverage rate calculation method suitable for multi-view point video monitoring system Expired - Fee Related CN105979211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610397382.5A CN105979211B (en) 2016-06-07 2016-06-07 A kind of three-dimensional coverage rate calculation method suitable for multi-view point video monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610397382.5A CN105979211B (en) 2016-06-07 2016-06-07 A kind of three-dimensional coverage rate calculation method suitable for multi-view point video monitoring system

Publications (2)

Publication Number Publication Date
CN105979211A CN105979211A (en) 2016-09-28
CN105979211B true CN105979211B (en) 2019-01-22

Family

ID=57011551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610397382.5A Expired - Fee Related CN105979211B (en) 2016-06-07 2016-06-07 A kind of three-dimensional coverage rate calculation method suitable for multi-view point video monitoring system

Country Status (1)

Country Link
CN (1) CN105979211B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274449B (en) * 2017-05-22 2020-11-13 陕西科技大学 Space positioning system and method for object by optical photo
CN108537374B (en) * 2018-03-30 2022-09-16 深圳市行健自动化股份有限公司 Method for processing coverage rate of fire and gas system
CN113259624A (en) * 2021-03-24 2021-08-13 北京潞电电气设备有限公司 Monitoring equipment and method thereof
CN113923406B (en) * 2021-09-29 2023-05-12 四川警察学院 Method, device, equipment and storage medium for adjusting video monitoring coverage area

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9876953B2 (en) * 2010-10-29 2018-01-23 Ecole Polytechnique Federale De Lausanne (Epfl) Omnidirectional sensor array system
US10235338B2 (en) * 2014-09-04 2019-03-19 Nvidia Corporation Short stack traversal of tree data structures

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262285A (en) * 2008-04-10 2008-09-10 杭州电子科技大学 3-D wireless sensor network coverage method based on probability
CN102291724A (en) * 2011-07-14 2011-12-21 南京邮电大学 Three-dimensional-scene-oriented wireless sensor network node deterministic deployment method
CN102867086A (en) * 2012-09-10 2013-01-09 安科智慧城市技术(中国)有限公司 Automatic deploying method for monitoring camera, system and electronic equipment
CN103824277A (en) * 2013-11-29 2014-05-28 广东电网公司电力科学研究院 Substation three-dimensional live-action monitoring stationing method based on nonlinear parameter optimization calibration

Also Published As

Publication number Publication date
CN105979211A (en) 2016-09-28

Similar Documents

Publication Publication Date Title
CN105979211B (en) A kind of three-dimensional coverage rate calculation method suitable for multi-view point video monitoring system
CN103646394B (en) A kind of mixing vision system calibration method based on Kinect video camera
CN103033132B (en) Plane survey method and device based on monocular vision
JP6860180B2 (en) Positioning device for solar panel cleaning robot and its positioning method
JP6747292B2 (en) Image processing apparatus, image processing method, and program
CN107621226A (en) The 3-D scanning method and system of multi-view stereo vision
CN106228579B (en) A kind of video image dynamic water table information extracting method based on geographical space-time scene
CN106210643A (en) A kind of video camera viewing area call method
CN105072414A (en) Method and system for detecting and tracking target
CN109166153A (en) Tower crane high altitude operation 3-D positioning method and positioning system based on binocular vision
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN104318604A (en) 3D image stitching method and apparatus
CN107967712A (en) Mountain fire is accurately positioned and algorithm of the mountain fire edge far from overhead transmission line vertical range
CN109191533B (en) Tower crane high-altitude construction method based on fabricated building
CN107392944A (en) Full-view image and the method for registering and device for putting cloud
CN103795935B (en) A kind of camera shooting type multi-target orientation method and device based on image rectification
WO2015078107A1 (en) Method for locating spill area of liquefied petroleum gas tank
CN106600549A (en) Method and device for correcting fisheye image
CN109949367A (en) A kind of visual light imaging localization method based on circular projection
CN108668108A (en) A kind of method, apparatus and electronic equipment of video monitoring
CN108180888A (en) A kind of distance detection method based on rotating pick-up head
CN109085789A (en) The intelligent management system positioned based on ultra wide band and iBeacon high-precision three-dimensional
CN111062986A (en) Monocular vision-based auxiliary positioning method and device for shared bicycle
CN206740128U (en) Big visual angle 3D vision systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190122

Termination date: 20190607

CF01 Termination of patent right due to non-payment of annual fee