GB2611030A - Method of estimating an orientation adjustment and surveillance apparatus - Google Patents

Method of estimating an orientation adjustment and surveillance apparatus

Info

Publication number
GB2611030A
GB2611030A (application GB2113349.1A)
Authority
GB
United Kingdom
Prior art keywords
camera
reference object
surveillance
surveillance apparatus
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB2113349.1A
Other versions
GB2611030B (en)
Inventor
Baldacci Alberto
Reche Alex
Pauc Lionel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marss Ventures Ltd
Original Assignee
Marss Ventures Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marss Ventures Ltd filed Critical Marss Ventures Ltd
Priority to GB2113349.1A
Publication of GB2611030A
Application granted
Publication of GB2611030B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0014Image feed-back for automatic industrial control, e.g. robot with camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19617Surveillance camera constructional details
    • G08B13/1963Arrangements allowing camera rotation to change view, e.g. pivoting camera, pan-tilt and zoom [PTZ]
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19641Multiple cameras having overlapping views on a single scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method of estimating an orientation adjustment of a camera comprising: obtaining a first position of the camera 310 (Figure 3); obtaining a second position of a reference object 316 (Figure 3); directing the field of view of the camera towards the reference object 404; capturing an image comprising the object 408; identifying the object in the image 410; and determining offsets of the object with respect to a predetermined reference in the image 414. The positions of the camera and the object may be determined with respect to a common reference frame of coordinates, for example a GNSS frame of reference. The offsets may comprise horizontal, vertical and angular offsets. The distance between the camera and the object may also be calculated. The pan, tilt and zoom 418 of the camera may be adjusted in response to the determined offsets, or the frame of reference may be altered to compensate for the offsets. The reference object may be another camera, and the method may further comprise selecting two cameras from a plurality of cameras.

Description

METHOD OF ESTIMATING AN ORIENTATION ADJUSTMENT AND SURVEILLANCE APPARATUS
[0001] The present invention relates to a method of estimating an orientation adjustment, the method being of the type that, for example, estimates an orientation between a camera of a surveillance apparatus and a reference object. The present invention also relates to a surveillance apparatus of the type that, for example, is configured to estimate an orientation adjustment between a camera thereof and a reference object.
[0002] In known surveillance systems, a number of fixed surveillance units are deployed along, for example, a perimeter of an area to be monitored. Typically, each fixed surveillance unit comprises one or more sensors mounted atop a streamlined pole in order to provide a vantage point for the surveillance unit unhindered by obstructions to the view of the surveillance unit, for example natural obstructions caused by vegetation or uneven terrain, or man-made obstructions, such as buildings.
[0003] In order to gather information generated by the surveillance units, the surveillance units are usually connected to a centralised command and control system, which gathers and manages the information produced by each of the surveillance units, via an underground network of communications cables or via a wireless communication system. Similarly, in order to power the surveillance units, the underground network of communications cables is typically accompanied by a power supply network to which each surveillance unit is connected, or a dedicated power supply accompanies each surveillance unit.
[0004] Data generated by the sensor or sensors of the surveillance unit are analysed using signal processing algorithms to detect and track objects. Each surveillance unit is capable of monitoring a predetermined surveillance area, which is a volume of space in a "scene", for example a landscape, where detection and tracking of objects in the scene can be performed within normal operating parameters of the sensor or sensors of the surveillance unit. As the surveillance area of a single surveillance unit is limited, multiple surveillance units are typically deployed to surveil the scene, such that the combination of the individual surveillance areas of the respective individual surveillance units covers the entire area of interest, i.e. the scene. Usually, the individual surveillance areas overlap partially.
[0005] As such, when an object moves in a first surveillance area covered by a first surveillance unit, the object is detected and tracked by the sensor or sensors of the first surveillance unit until it reaches an edge of the first surveillance area. In a region where the first surveillance area overlaps with a second, neighbouring, surveillance area of a second surveillance unit, the object is detected and tracked by both the first and second surveillance units. It is therefore possible to associate the tracks generated by the first and second surveillance units to the same object and join them in order to create a single, extended track spanning across the first and second surveillance areas of the first and second surveillance units, respectively.
[0006] In order to achieve satisfactory geo-referencing of track data collected by sensors distributed geographically throughout the surveillance system, and to tag track data in a common frame of reference, it is necessary for each surveillance unit in the system to have knowledge of both the position of its sensor, for example a camera, in the common frame of reference, for example using a spherical geographic coordinate system (such as latitude, longitude, elevation) or a Cartesian geographic coordinate system (such as x, y, z), and the orientation of the frame of reference of the sensor with respect to the common frame of reference. Whilst sensor positions are typically known with relatively good accuracy, orientations are not known so accurately and can be difficult to measure.
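By way of editorial illustration only (this does not form part of the patent text), the sketch below converts (latitude, longitude, elevation) triples into a local East-North-Up Cartesian frame centred on a reference point, using a spherical-Earth approximation that is adequate for the short baselines between neighbouring surveillance units; the function name and signature are assumptions.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; spherical approximation

def geodetic_to_enu(lat_deg, lon_deg, elev_m,
                    ref_lat_deg, ref_lon_deg, ref_elev_m):
    """Convert a (latitude, longitude, elevation) position into local
    East-North-Up metres relative to a reference point. Small-angle
    approximation: fine for baselines of a few kilometres."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat_deg))
    north = d_lat * EARTH_RADIUS_M
    up = elev_m - ref_elev_m
    return east, north, up
```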
[0007] As mentioned above, the sensor or sensors are mounted atop the pole of the surveillance unit. When mounted, a given sensor has, generally speaking, an expected orientation with respect to the reference coordinate system. The orientation of the sensor with respect to the reference coordinate system can be expressed as the combination of horizontal levelling and a pointing direction. However, levelling and pointing direction inaccuracies can occur owing to tolerances inherent to the installation of each surveillance unit, for example the ability to mount the pole as close to the vertical as possible, bending of the pole, and/or the quality of the mechanical interface between the pole and the sensor or sensors. The orientation of the sensors may only be known with limited precision due to the limited accuracy of measurement instruments, for example a compass, used to set the pointing direction of the sensors and a spirit level to measure the levelling. Indeed, mismatches between coordinates associated with a first track of the first surveillance unit and coordinates associated with a second track of the second neighbouring surveillance unit using the common reference system mentioned above can arise even from small levelling and direction errors. So-called handover of surveillance of an object between neighbouring surveillance units is therefore complicated by such mismatches.
[0008] It is therefore desirable to estimate, as precisely as possible, and apply correction for, the orientation of one or more sensors of one or more respective surveillance units relative to a common frame of reference.
[0009] It is known to compensate for orientation errors between neighbouring surveillance units using a number of different techniques. For example, signal processing techniques are known to be employed to compensate for orientation errors. However, such compensation attracts a data processing overhead owing to the complexity of the analysis required. It is also known to employ physical (non-software) tools to measure the orientation of the sensors of surveillance apparatus, but these tools are typically expensive, high-precision instruments. Trial and error techniques are also known to be employed, where orientation is estimated or compensated iteratively by using an in-situ mechanical adjustment device. However, such an approach is time consuming.
[0010] According to a first aspect of the invention, there is provided a method of estimating an orientation adjustment of a camera of a surveillance apparatus with respect to a reference object, the method comprising: obtaining a first three-dimensional geospatial position of the camera; obtaining a second three-dimensional geospatial position of the reference object; directing a field of view of the camera towards the reference object using the first and second three-dimensional positions; using the camera to capture an image comprising the reference object; identifying the reference object in the image; and analysing the image and determining offsets of the reference object with respect to a predetermined reference within the image.
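Expressed as code, the first aspect reduces to a short pipeline. The sketch below is a minimal editorial rendering under assumed interfaces: the PTZCamera protocol, its method names and the injected identify and measure_offsets callables are illustrative and do not come from the patent.

```python
from typing import Callable, Protocol, Tuple

Position = Tuple[float, float, float]  # e.g. latitude, longitude, elevation

class PTZCamera(Protocol):
    """Hypothetical driver interface; the method names are assumptions."""
    def point_at(self, own: Position, target: Position) -> None: ...
    def capture_image(self): ...

def estimate_orientation_adjustment(camera: PTZCamera,
                                    own: Position,
                                    reference: Position,
                                    identify: Callable,
                                    measure_offsets: Callable):
    """One pass of the claimed method: direct the field of view using the
    two geospatial positions, capture an image, identify the reference
    object, and measure its offsets against a predetermined reference."""
    camera.point_at(own, reference)           # direct field of view (404)
    image = camera.capture_image()            # capture image (408)
    detection = identify(image)               # identify reference object (410)
    return measure_offsets(image, detection)  # offsets vs. image centre (414)
```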
[0011] The method may further comprise: determining the first and second geospatial positions in a first geospatial frame of reference.
[0012] The method may further comprise: determining the first and second geospatial positions in a GNSS frame of reference.
[0013] The offsets may comprise a first offset and a second offset; the first offset may be relative to a first axis of the image and the second offset may be relative to a second axis of the image, and the first and second axes may intersect.
[0014] The offsets may comprise a third offset; the third offset may be an angular offset relative to one of the first axis and the second axis.
[0015] The method may further comprise: calculating a distance between the camera and the reference object using the first and second three-dimensional positions; and setting a magnification ratio of the camera using the calculated distance between the camera and the reference object.
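Once both positions are expressed in a shared local Cartesian frame (such as the East-North-Up frame sketched earlier), the camera-to-object distance is simply the Euclidean norm of the difference; a minimal sketch:

```python
import math

def camera_to_object_distance(p_cam, p_ref):
    """Straight-line distance in metres between the two three-dimensional
    positions, both given in the same local Cartesian frame."""
    return math.dist(p_cam, p_ref)

# e.g. camera at the ENU origin, reference object 300 m north and 2 m lower:
# camera_to_object_distance((0.0, 0.0, 0.0), (0.0, 300.0, -2.0)) -> ~300.007
```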
[0016] The method may further comprise: panning the camera remotely to direct the field of view of the camera towards the reference object.
[0017] The method may further comprise: tilting the field of view of the camera remotely to direct the camera towards the reference object.
[0018] According to a second aspect of the invention, there is provided a method of automated adjustment of orientation of a camera of a surveillance apparatus, the method comprising: estimating the orientation adjustment in respect of the camera and with respect to the reference object as set forth above in relation to the first aspect of the invention; and adjusting an orientation of the field of view of the camera in response to the orientation adjustment estimated.
[0019] According to a third aspect of the invention, there is provided a method of automated compensation of images of a camera of a surveillance apparatus, the method comprising: estimating the orientation adjustment in respect of the camera and with respect to the reference object as set forth above in relation to the first aspect of the invention; and determining a compensation to apply to the first geospatial frame of reference using the orientation adjustment estimated.
[0020] The reference object may be another surveillance apparatus comprising another camera.
[0021] According to a fourth aspect of the invention, there is provided a method of automated adjustment of orientation of surveillance apparatus in a surveillance system, the method comprising: selecting a first surveillance apparatus and a second surveillance apparatus from a plurality of surveillance apparatus in the surveillance system; and estimating an orientation adjustment using the method of estimating the orientation adjustment as set forth above in relation to the first aspect of the invention; wherein the first surveillance apparatus comprises the camera; and the second surveillance apparatus comprises another camera constituting the reference object.
[0022] According to a fifth aspect of the invention, there is provided a surveillance apparatus comprising: an elongate mounting having a proximal end configured to be anchored and a distal end operably coupled to a sensor module, the sensor module comprising a camera; a camera pan mechanism configured to pan the camera; a camera tilt mechanism configured to tilt the camera; a processing resource configured to obtain a first three-dimensional geospatial position of the camera and a second three-dimensional geospatial position of a reference object; wherein the camera pan mechanism and the camera tilt mechanism are configured to direct a field of view of the camera towards the reference object; the camera is configured to capture an image comprising the reference object; the processing resource is configured to recognise the reference object and to determine offsets of the reference object with respect to a predetermined reference within the image; and the processing resource is further configured to determine an orientation adjustment in respect of the camera.
[0023] It is thus possible to provide a method of estimating an orientation and a surveillance apparatus that facilitates estimation of the orientation for each camera of respective surveillance apparatus in a surveillance system. The processing overhead required to determine the offset is relatively low as compared to known signal processing techniques used to compensate for levelling and pointing direction errors. Similarly, expensive tools are not required. Furthermore, the method is less time consuming than other known, for example iterative, techniques. The calculation of the offset permits operation of the surveillance system with greater accuracy, for example with respect to exchange of geo-referenced data between surveillance apparatus, despite inaccuracies associated with erecting an elongate mounting carrying the sensor module. The apparatus and method are fully automatic and determination of the offset can therefore be repeated without the need for human intervention, for example each time the camera orientation changes, whether owing to slight changes to the elongate mounting or to drift in mechanical parts of the camera. However, the apparatus and method can alternatively be used only once, for example when system installation takes place, thereby reducing the processing overhead further.
[0024] At least one embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of a surveillance system;
Figure 2 is a schematic diagram of fields of view of a first surveillance unit and a second surveillance unit of the surveillance system of Figure 1;
Figure 3 is a schematic diagram of the first and second surveillance units of Figure 2 constituting an embodiment of the invention;
Figure 4 is a flow diagram of a method of estimating and correcting a misalignment between a camera of a surveillance unit and a reference object constituting another embodiment of the invention;
Figure 5 is a schematic diagram of an image of the second surveillance unit captured by the first surveillance unit of Figure 3 prior to estimating the orientation offset and compensating therefor; and
Figure 6 is a schematic diagram of an image of the second surveillance unit captured by the first surveillance unit of Figure 3 after estimating the orientation offset and compensating therefor.
[0025] Throughout the following description identical reference numerals will be used to identify like parts.
[0026] Referring to Figure 1, a surveillance system 100 is provided to monitor a region 102 in front of a perimeter 104 of a property. In other examples, the surveillance system 100 can be employed to monitor any other suitable environment, for example a border between lands or an asset, such as a wall. The surveillance system 100 comprises a plurality of surveillance apparatus, for example a first surveillance apparatus 106 located at a first site along the perimeter 104, a second surveillance apparatus 108 located at a second site along the perimeter 104, a third surveillance apparatus 110 located at a third site along the perimeter 104, and a fourth surveillance apparatus 112 located at a fourth site along the perimeter 104.
[0027] In this example, the first surveillance apparatus 106 has a first lateral field of view 114, the second surveillance apparatus 108 has a second lateral field of view 116, the third surveillance apparatus 110 has a third lateral field of view 118 and the fourth surveillance apparatus 112 has a fourth lateral field of view 120. The first, second, third and fourth fields of view are spaced apart along the perimeter 104. The first, second, third and fourth surveillance apparatus 106, 108, 110, 112 are each capable of communicating with a central control unit 122, for example wirelessly.
[0028] Turning to Figure 2, and taking the first and second surveillance apparatus 106, 108 to exemplify tracking, an object 200 follows a path that passes through the first field of view 114 of the first surveillance apparatus 106, where the object 200 is initially detected, and into the second field of view 116 of the second surveillance apparatus 108. In this regard, the path of the object 200 has a track 202 associated therewith. For the object 200 to be properly tracked as it passes from the first field of view 114 to the second field of view 116, it is necessary to associate the detection of the object 200 (and the path followed by the object 200) by the first surveillance apparatus 106 with a subsequent detection of the object 200 (and path followed by the object 200) by the second surveillance apparatus 108 when the object enters the second field of view 116. In order to achieve optimum association of detections between fields of view of neighbouring surveillance apparatus, it is necessary to align the fields of view, for example the first and second fields of view 114, 116, and possibly have overlap 204 between the fields of view 114, 116.
[0029] In order to exemplify the system 100 further, the first and second surveillance apparatus 106 and 108 are considered (Figure 3). Reference to the third and fourth surveillance apparatus 110, 112 is omitted for the sake of clarity and conciseness of description; they are, however, of like construction to the first and second surveillance apparatus 106, 108.
[0030] Referring to the first surveillance apparatus 106, the first surveillance apparatus 106 comprises an elongate mounting, for example a mounting that is cylindrical in shape, such as a first pole 300, anchored in the ground (not shown) at a site of choice for locating the first surveillance apparatus 106. The first pole 300 is anchored in the ground in any manner suitable for the environment in which the first surveillance apparatus 106 is to be located; the manner of anchoring is typically decided following a site survey and can comprise, for example, setting a lowermost portion (when deployed, not shown) at a proximal end of the first pole 300 in a concrete foundation (also not shown).
[0031] A coupling assembly (not shown) has a first end thereof fixedly mounted at a distal, free, end of the first pole 300 constituting an uppermost region (relative to when deployed). A first sensor module 302 is operably coupled to a second end of the coupling assembly. The coupling assembly can be approximately adjusted during installation with respect to the first pole 300 in a first degree of freedom 304, for example about a local x-axis, and a second degree of freedom 306, for example about a local y-axis. In this example, the coupling assembly is also adjustable in a third degree of freedom 308, for example about a local z-axis. The first and second degrees of freedom 304, 306 define the levelling of the first sensor module 302 with respect to the horizontal plane. The third degree of freedom 308 defines the sensor pointing direction when in the home position, i.e. not pointing in any specific target direction.
[0032] In this example, the first sensor module 302 mentioned above comprises a housing within which a ranging system is disposed, for example a radar-based detection and tracking system, comprising signal processing and driving circuitry (not shown) operably coupled to a plurality of radar antennae (not shown) constituting a plurality of fixed beam projectors. The plurality of radar antennae is angularly spaced with respect to the longitudinal axis of the first pole 300. The radar antennae provide, when in use, by way of beamforming, the overlapping (non-optical) fields of view in order to detect and track objects constituting potential targets. In addition to the detection and tracking system, the housing comprises a first camera device 310, for example a video camera. The first camera device 310 is rotatably mounted and, in this example, can pan by up to 360 degrees about a longitudinal axis of the first pole 300 and tilt over an elevation angle of about ±45 degrees with respect to a central horizontal level. The amount of rotation of the respective camera devices of each of the surveillance apparatus 106, 108, 110, 112 can be set in accordance with the first, second, third and fourth lateral fields of view 114, 116, 118, 120. Similarly, a field of view associated with the plurality of radar antennae of each of the surveillance apparatus 106, 108, 110, 112 can be set in accordance with the first, second, third and fourth lateral fields of view 114, 116, 118, 120. However, in other examples, the radar-based detection and tracking system does not have to be employed or can be provided separate to the housing containing the first camera device 310.
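A pointing command can be normalised against these mechanical limits before being sent to the servomotors; a minimal sketch assuming the 360-degree pan range and ±45-degree tilt range described above, with an illustrative helper name:

```python
def normalise_pointing(pan_deg: float, tilt_deg: float,
                       tilt_limit_deg: float = 45.0):
    """Wrap pan into [0, 360) and clamp tilt to the supported elevation
    range; the helper name and signature are editorial assumptions."""
    pan = pan_deg % 360.0
    tilt = max(-tilt_limit_deg, min(tilt_limit_deg, tilt_deg))
    return pan, tilt

# normalise_pointing(370.0, -60.0) -> (10.0, -45.0)
```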
[0033] In this example, the second surveillance apparatus 108 is of a like construction to the first surveillance apparatus 106. The second surveillance apparatus 108 comprises a second sensor module 312 mounted atop a second pole 314, the second sensor module 312 comprising a second camera 316.
[0034] In this example, the first and second cameras 310, 316 are located at respective first and second positions, identified using a first frame of reference 324 common to the first and second cameras 310, 316, for example a GNSS frame of reference. However, the skilled person will appreciate that any suitable absolute frame of reference can be employed, for example a Cartesian frame of reference.
In this example, the location of each of the first and second cameras 310, 316 is recorded and thus known in terms of latitude, longitude and elevation. The respective locations of the first and second cameras 310, 316 are known to an acceptable degree of accuracy, i.e. the positional errors are small compared to the distance between the locations of the cameras. It is also assumed that a clear line of visual sight exists at least between neighbouring cameras.
[0035] The respective orientations of the first and second cameras 310, 316 with respect to the common frame of reference 324 can be expressed, as explained above, in terms of angles around the three main axes 304, 306, 308. In this example, it is assumed that, for the first and second cameras 310, 316, the orientation of each camera with respect to the horizontal plane (levelling) and the pointing direction are known with limited accuracy.
[0036] In this example, each of the first and second cameras 310, 316 is motorised and so comprises servomotors in order to provide pan and tilt adjustments for each of the first and second cameras 310, 316. Each of the first and second cameras 310, 316 optionally also comprises an adjustable magnification ratio (zoom).
[0037] A central controller 326 is provided and capable of communicating with the first and second surveillance apparatus 106, 108. The central controller 326 comprises a processing unit 328 operably coupled to a communications unit 330 and a storage device 332, for example a non-volatile memory. The storage device 332 stores, in this example, the respective positions of the cameras of the surveillance system 100, recorded in the common frame of reference 324 mentioned above.
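The contents of the storage device 332 can be modelled as a mapping from an apparatus identifier to a recorded position in the common frame of reference; the record layout, keys and coordinate values below are purely illustrative:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class CameraPosition:
    latitude: float   # degrees
    longitude: float  # degrees
    elevation: float  # metres

# Keyed by surveillance-apparatus identifier; values are example data only.
position_store: Dict[str, CameraPosition] = {
    "apparatus-106": CameraPosition(51.5001, -0.1420, 12.3),
    "apparatus-108": CameraPosition(51.5004, -0.1411, 11.9),
}
```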
[0038] In operation (Figure 4), the central controller 326 selects a surveillance apparatus and one other surveillance apparatus to use in order to orient one of the selected surveillance apparatus by way of camera adjustment. For example, the central controller 326 selects the first surveillance apparatus 106 and the second surveillance apparatus 108. Of course, the skilled person will appreciate that other pairs of surveillance apparatus can be selected. Once the orientation of the first surveillance apparatus 106 has been correctly adjusted, the central controller 326 selects other surveillance apparatus for which the orientation needs to be adjusted.
[0039] A method of automated adjustment of orientation of a surveillance apparatus in the surveillance system 100 therefore comprises the selection, for example systematically, of pairs of surveillance apparatus for use in performing a method of estimating an orientation adjustment between, for example, the cameras of the pair of surveillance apparatus selected.
[0040] It is assumed that the central controller 326 has selected the first and second surveillance apparatus 106, 108 as a first pair of surveillance apparatus. In this regard, the central controller 326 accesses (Step 400) the storage device 332 to retrieve the respective positions of the first and second surveillance apparatus 106, 108 of the surveillance system 100, expressed in the common frame of reference 324 of Figure 3. Of course, in other examples, the respective positions of the cameras 310, 316 can be stored locally by each of the respective surveillance apparatus, and the central controller 326 can interrogate each surveillance apparatus it selects in order to obtain the locally stored positions.
[0041] The method of estimating the orientation adjustment, for example between the first and second cameras 310, 316, is based upon the principle that a priori knowledge of the positions of the first and second cameras 310, 316, and an approximate orientation of the first camera 310 with respect to the common frame of reference 324, enables calculation of a relative position of one camera in another camera's local frame of reference. In this example, a spherical system of coordinates is employed as a second, local, frame of reference.
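In such a local spherical frame, the relative position reduces to an azimuth, an elevation and a range. A sketch of the conversion, assuming both camera positions have already been expressed in a shared East-North-Up frame in metres:

```python
import math

def relative_spherical(p_cam, p_ref):
    """Position of the reference camera in the observing camera's local
    spherical frame: (azimuth clockwise from north, elevation above the
    horizontal, range), angles in degrees, range in metres."""
    de, dn, du = (r - c for r, c in zip(p_ref, p_cam))
    horizontal = math.hypot(de, dn)
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    elevation = math.degrees(math.atan2(du, horizontal))
    return azimuth, elevation, math.hypot(horizontal, du)
```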
[0042] The processing unit 328 of the central controller 326 therefore uses the first and second position data for the first and second cameras 310, 316, in order to calculate (Step 402) a relative position of the second camera 316 with respect to the first camera 310 in terms of the second frame of reference of the first camera 310. Once the relative position of the second camera 316 with respect to the first camera 310 has been calculated, the first camera 310 uses this information to adjust its orientation (Step 404) so as to direct its field of view towards the second camera 316. In this regard, actuation of the servo motors described above implements the adjustment of the orientation.
[0043] Optionally, using the radial distance calculated, the first camera 310 adjusts (Step 406) a magnification ratio thereof in order to ensure that the second camera 316 is readily identifiable, with enough spatial resolution, in the field of view of the first camera 310.
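One plausible way to set the magnification ratio (Step 406) from the radial distance is to aim for a fixed pixel footprint for the reference object. The linear pixels-per-degree model and all parameter names below are assumptions, not the patent's method:

```python
import math

def magnification_ratio(range_m: float, target_extent_m: float,
                        wide_hfov_deg: float, image_width_px: int,
                        desired_px: int = 100) -> float:
    """Zoom factor that makes an object of physical size `target_extent_m`,
    at distance `range_m`, span roughly `desired_px` pixels."""
    angular_deg = 2.0 * math.degrees(math.atan2(target_extent_m / 2.0, range_m))
    px_at_wide = image_width_px * angular_deg / wide_hfov_deg  # pixels at 1x
    return max(1.0, desired_px / px_at_wide)

# e.g. a 0.5 m camera housing at 300 m, 60-degree lens, 1920 px wide sensor:
# magnification_ratio(300.0, 0.5, 60.0, 1920) -> ~33x
```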
[0044] Referring to Figure 5, the first camera 310 then captures (Step 408) a first image 500 that comprises an image of the second camera 316. In the event that the orientation of the first camera 310 with respect to the common frame of reference 324 is sufficiently accurately aligned therewith such that the field of view of the first camera 310 is in registry with the field of view of the second camera 316, then the image of the second camera 316 in the first image 500 captured by the first camera 310 appears in the centre of the first image 500. However, as shown in Figure 5, in the event that the orientation of the first camera 310 is not sufficiently accurately aligned with the common frame of reference 324, the image of the second camera 316 appears off-centre in the first image 500 captured by the first camera 310. In this example, a centre point 502 of the first image 500 constitutes a predetermined reference, the centre point 502 being defined by an intersection of a horizontal central axis 504 and a vertical central axis 506 of the field of view of the first camera 310.
[0045] Referring back to Figure 4, the processing resource of the first camera 310 employs an image recognition processing technique, for example a machine-learning based image recognition technique, to recognise (Step 410) the second camera 316 in the first image 500. Once the second camera 316 has been identified, the processing unit 328 calculates a centre of the recognised image of the second camera 316 and then calculates (Step 412) offsets of the centre of the image of the second camera 316 relative to the centre point 502 of the first image 500, the determined offsets comprising a horizontal offset 508 and a vertical offset 510 (Figure 5). The processing unit 328 also calculates an angular offset 512 of the second camera 316 in the first image 500 with respect to the vertical axis 506.
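Given a detection of the second camera, for example an axis-aligned bounding box plus an estimated roll of its vertical axis, the three offsets follow directly; a sketch with illustratively named inputs:

```python
def centre_offsets(image_w: int, image_h: int,
                   bbox: tuple, axis_roll_deg: float = 0.0):
    """Offsets of the recognised object relative to the image centre point:
    horizontal (cf. 508) and vertical (cf. 510) in pixels, and angular
    (cf. 512) in degrees with respect to the vertical central axis.
    `bbox` is (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x_min, y_min, x_max, y_max = bbox
    cx = (x_min + x_max) / 2.0
    cy = (y_min + y_max) / 2.0
    horizontal = cx - image_w / 2.0  # +ve: object right of centre
    vertical = cy - image_h / 2.0    # +ve: object below centre (rows grow downwards)
    return horizontal, vertical, axis_roll_deg
```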
[0046] Using the offsets 508, 510, 512, calculated from the first image 500 and the known relative position between the two cameras, 310, 316, the processing unit 328 calculates the mechanical offsets required to compensate for the offsets 508, 510, 512 calculated from the first image 500, and thus an actual orientation of the first camera 310 with respect to the second camera 316 in terms of horizontal levelling and pointing direction. Once such adjustment has been performed, and referring to Figure 6, a second image 514 captured by the first camera 310 of the second camera 316 should show the second camera 316 centred on the centre point 502.
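Converting the pixel offsets into pan and tilt corrections needs only a pinhole camera model: the focal length in pixels follows from the horizontal field of view. The sketch below ignores lens distortion and assumes square pixels:

```python
import math

def pixel_offsets_to_angles(horizontal_px: float, vertical_px: float,
                            hfov_deg: float, image_width_px: int):
    """Pan/tilt corrections (degrees) that would centre the reference
    object, under a pinhole model."""
    f_px = (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    d_pan = math.degrees(math.atan2(horizontal_px, f_px))
    d_tilt = -math.degrees(math.atan2(vertical_px, f_px))  # image y grows downwards
    return d_pan, d_tilt
```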
[0047] The offsets 508, 510, 512 calculated can therefore be used in, for example, two possible ways. In one example, the angles derived from the calculated offsets 508, 510, 512 can be used to adjust (Step 416) the orientation of the first camera 310, and thus the field of view thereof, for example by using the servomotors described above. In another example, the calculated orientation of the first camera 310 can be used to compensate future calculations (Step 418) concerning the orientation of the first camera 310 when tracking the object 200, for example by applying a compensation to the common frame of reference.
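For the second use, the reported track coordinates can be rotated instead of moving the camera. A minimal sketch for the pointing-direction (yaw) component alone; correcting levelling errors as well would require rotation about the radial axis, as in the next sketch:

```python
import math

def compensate_track_point(east_m: float, north_m: float,
                           yaw_error_deg: float):
    """Undo an estimated pointing-direction error by rotating a reported
    track point about the vertical axis of the common frame of reference."""
    a = math.radians(yaw_error_deg)
    return (east_m * math.cos(a) - north_m * math.sin(a),
            east_m * math.sin(a) + north_m * math.cos(a))
```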
[0048] In the above example, calculation of adjustments to the orientation of the first camera 310, and indeed of other cameras whose orientation needs to be adjusted, is performed by determining axes in respect of the first camera 310 about which a rotation can be performed in order to adjust the orientation of the first camera 310. In this example, the axes are the vertical axis and the radial line extending from the first camera 310 to the second camera 316.
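Rotation about an arbitrary axis such as that radial line is conveniently expressed with Rodrigues' formula; a short NumPy sketch:

```python
import numpy as np

def rotate_about_axis(v: np.ndarray, axis: np.ndarray,
                      angle_deg: float) -> np.ndarray:
    """Rodrigues' rotation: rotate vector `v` by `angle_deg` about the unit
    direction of `axis`, e.g. the vertical axis or the radial line from the
    first camera to the second."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    v = np.asarray(v, dtype=float)
    a = np.radians(angle_deg)
    return (v * np.cos(a)
            + np.cross(k, v) * np.sin(a)
            + k * np.dot(k, v) * (1.0 - np.cos(a)))
```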
[0049] The skilled person should appreciate that the above-described implementations are merely examples of the various implementations that are conceivable within the scope of the appended claims. Indeed, in the above examples, the first camera 310 captures images of the second camera 316 as a means of calculating the orientation adjustment. However, in other examples, any suitable reference object can be used in place of the second camera 316 when determining the offsets, provided the position and vertical orientation of the reference object is known accurately in the common frame of reference 324 and is recognisable using an image processing technique. Indeed, in some examples, the reference object can be a landmark recognisable by an image processing technique. Although in the above examples, the processing resource of the first camera 310 is used to perform image recognition processing, such processing can, in other examples, be centralised, for example performed by the central processing unit 328.

Claims (13)

  1. A method of estimating an orientation adjustment of a camera of a surveillance apparatus with respect to a reference object, the method comprising: obtaining a first three-dimensional geospatial position of the camera; obtaining a second three-dimensional geospatial position of the reference object; directing a field of view of the camera towards the reference object using the first and second three-dimensional positions; using the camera to capture an image comprising the reference object; identifying the reference object in the image; and analysing the image and determining offsets of the reference object with respect to a predetermined reference within the image.
  2. A method as claimed in Claim 1, further comprising: determining the first and second geospatial positions in a first geospatial frame of reference.
  3. A method as claimed in Claim 1 or Claim 2, further comprising: determining the first and second geospatial positions in a GNSS frame of reference.
  4. A method as claimed in Claim 1 or Claim 2 or Claim 3, wherein the offsets comprise a first offset and a second offset, the first offset being relative to a first axis of the image and the second offset being relative to a second axis of the image, and the first and second axes intersect.
  5. A method as claimed in Claim 4, wherein the offsets comprise a third offset, the third offset being an angular offset relative to one of the first axis and the second axis.
  6. A method as claimed in any one of the preceding claims, further comprising: calculating a distance between the camera and the reference object using the first and second three-dimensional positions; and setting a magnification ratio of the camera using the calculated distance between the camera and the reference object.
  7. A method as claimed in any one of the preceding claims, further comprising: panning the camera remotely to direct the field of view of the camera towards the reference object.
  8. A method as claimed in any one of the preceding claims, further comprising: tilting the field of view of the camera remotely to direct the camera towards the reference object.
  9. A method of automated adjustment of orientation of a camera of a surveillance apparatus, the method comprising: estimating the orientation adjustment in respect of the camera and with respect to the reference object as claimed in any one of Claims 1 to 8; and adjusting an orientation of the field of view of the camera in response to the orientation adjustment estimated.
  10. A method of automated compensation of images of a camera of a surveillance apparatus, the method comprising: estimating the orientation adjustment in respect of the camera and with respect to the reference object as claimed in any one of Claims 1 to 8, when dependent upon Claim 2; and determining a compensation to apply to the first geospatial frame of reference using the orientation adjustment estimated.
  11. A method as claimed in any one of the preceding claims, wherein the reference object is another surveillance apparatus comprising another camera.
  12. A method of automated adjustment of orientation of surveillance apparatus in a surveillance system, the method comprising: selecting a first surveillance apparatus and a second surveillance apparatus from a plurality of surveillance apparatus in the surveillance system; and estimating an orientation adjustment using the method of estimating the orientation adjustment as claimed in any one of Claims 1 to 8; wherein the first surveillance apparatus comprises the camera; and the second surveillance apparatus comprises another camera constituting the reference object.
  13. A surveillance apparatus comprising: an elongate mounting having a proximal end configured to be anchored and a distal end operably coupled to a sensor module, the sensor module comprising a camera; a camera pan mechanism configured to pan the camera; a camera tilt mechanism configured to tilt the camera; a processing resource configured to obtain a first three-dimensional geospatial position of the camera and a second three-dimensional geospatial position of a reference object; wherein the camera pan mechanism and the camera tilt mechanism are configured to direct a field of view of the camera towards the reference object; the camera is configured to capture an image comprising the reference object; the processing resource is configured to recognise the reference object and to determine offsets of the reference object with respect to a predetermined reference within the image; and the processing resource is further configured to determine an orientation adjustment in respect of the camera.
GB2113349.1A 2021-09-17 2021-09-17 Method of estimating an orientation adjustment and surveillance apparatus Active GB2611030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2113349.1A GB2611030B (en) 2021-09-17 2021-09-17 Method of estimating an orientation adjustment and surveillance apparatus

Publications (2)

Publication Number Publication Date
GB2611030A 2023-03-29
GB2611030B GB2611030B (en) 2024-04-03

Family

ID=85384709

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2113349.1A Active GB2611030B (en) 2021-09-17 2021-09-17 Method of estimating an orientation adjustment and surveillance apparatus

Country Status (1)

Country Link
GB (1) GB2611030B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070116458A1 (en) * 2005-11-18 2007-05-24 Mccormack Kenneth Methods and systems for operating a pan tilt zoom camera
US20130271604A1 (en) * 2010-12-23 2013-10-17 Alcatel Lucent Integrated method for camera planning and positioning
WO2017215295A1 (en) * 2016-06-14 2017-12-21 华为技术有限公司 Camera parameter adjusting method, robotic camera, and system
US20190073550A1 (en) * 2017-09-07 2019-03-07 Symbol Technologies, Llc Imaging-based sensor calibration
EP3793184A1 (en) * 2019-09-11 2021-03-17 EVS Broadcast Equipment SA Method for operating a robotic camera and automatic camera system

Also Published As

Publication number Publication date
GB2611030B (en) 2024-04-03

Similar Documents

Publication Title
EP2342580B1 (en) Method and system involving controlling a video camera to track a movable target object
US9322652B2 (en) Stereo photogrammetry from a single station using a surveying instrument with an eccentric camera
US7978128B2 (en) Land survey system
KR101187909B1 (en) Surveillance camera system
EP3021078B1 (en) Geodetic surveying system with virtual camera
US7742176B2 (en) Method and system for determining the spatial position of a hand-held measuring appliance
US9025033B2 Surveillance camera and method for calibrating the surveillance camera using a calibration tool
EP3062066A1 (en) Determination of object data by template-based UAV control
US20150156423A1 (en) System for following an object marked by a tag device with a camera
CN105184776A (en) Target tracking method
EP3911968B1 (en) Locating system
EP2240740A1 (en) Localization of a surveying instrument in relation to a ground mark
CN115760999A (en) Monocular camera calibration and target geographic position extraction method based on GIS assistance
US7768631B1 (en) Method and system for providing a known reference point for an airborne imaging platform
GB2611030A (en) Method of estimating an orientation adjustment and surveillance apparatus
US10750132B2 (en) System and method for audio source localization using multiple audio sensors
JP2009086471A (en) Overhead wire photographing system and method
GB2610856A (en) Surveillance apparatus
Hrabar et al. PTZ camera pose estimation by tracking a 3D target
EP3283898A1 (en) Method for readjusting a parallactic or azimuthal mounting
EP3786578B1 (en) Surveying instrument
Mohammadi et al. Mounting Calibration of a Multi-View Camera System on a Uav Platform
Jankovic et al. Calibrating an active omnidirectional vision system
TWM545254U (en) Image range-finding and positioning device
Hasler et al. Image-based Orientation Determination of Mobile Sensor Platforms