GB2384128A - Schematic mapping of surveillance area - Google Patents

Schematic mapping of surveillance area

Info

Publication number
GB2384128A
Authority
GB
United Kingdom
Prior art keywords
camera
image
view
area
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0129812A
Other versions
GB0129812D0 (en)
Inventor
Derek Frank Bond
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INVIDEO Ltd
Original Assignee
INVIDEO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INVIDEO Ltd filed Critical INVIDEO Ltd
Priority to GB0129812A priority Critical patent/GB2384128A/en
Publication of GB0129812D0 publication Critical patent/GB0129812D0/en
Priority to PCT/GB2002/005657 priority patent/WO2003051059A1/en
Priority to AU2002350957A priority patent/AU2002350957A1/en
Publication of GB2384128A publication Critical patent/GB2384128A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678 User interface
    • G08B13/19689 Remote control of cameras, e.g. remote orientation or image zooming control for a PTZ camera
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639 Details of the system layout
    • G08B13/19645 Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

A surveillance system and related method arranged to survey an area which is digitally mapped into the system to create an image map of the area, onto which the location of a number of cameras is mapped, together with the field of view of at least one of those cameras.

Description

Image Mapping

The present invention relates to Image Mapping.
Video cameras are often employed in surveillance systems to view an area, such as town centre streets, railway stations, office buildings and the like. A number of video cameras are located in various places within the area being surveyed, and the images from them are displayed on a bank of video displays in a control room. This means that control centre operators must survey a large number of video screens in order to look out for events such as theft and vandalism. Because every camera has its own screen, a large number of video screens must be viewed, which limits the number of cameras which can be installed to survey the area. More screens can be used, but then more operators must be employed to view the area, and the control centre must be larger in order to house the extra screens, all of which is expensive. In addition, when an operator within the control centre sees something of importance, for example a theft, he or she must watch the person carrying out the theft as he moves around the area, operating cameras to zoom, pan and tilt as appropriate, if the camera is able to carry out such functions. Since the operator within the control centre is so distant from the camera, delays often occur in making the necessary pan, tilt and zoom adjustments to the camera, with the effect that the suspected thief is likely to escape. Also, the operator must be able to view any suspect when he moves out of sight of the original camera, in order to pick up that person as he enters the field of view of another camera. Therefore, the operator must learn where different cameras are so that, as a suspect walks out of the field of view of a camera, he knows which screen to look at to see the suspect next. As a result of moving from screen to screen, the control centre operator will often lose the suspect, particularly if the operator is not experienced. In order to reduce the number of screens, it is possible to arrange the screens to switch between cameras, but this makes the area even more difficult to view.
The present invention seeks to reduce or overcome some of the disadvantages of current surveillance systems. According to a first aspect of the invention, a surveillance system is established in which an area to be surveyed is digitally mapped to create an image map of the area, onto which the location of a number of cameras is mapped, together with the field of view of at least one of those cameras. In this way, the images from the cameras can be mapped into the system. The image which a user will see is built up in layers, starting with the image map, which forms the base layer, onto which a camera layer is added which, as explained above, includes the location of the cameras together with the field of view of those cameras. One advantage of this invention is that, because a large bank of screens is not required, more cameras can be arranged to view the area. Also, if the operator in the control centre is watching a person as they move about the area, and as they move out of the field of view of one camera into the field of view of another, it is very simple to select the appropriate camera. This may be done in a number of different ways, for example by using a mouse or other controller to click on a camera in the map which is most appropriate for viewing the object; by clicking on the position on the map at which the object is located; by clicking on the object within the image of the camera currently being viewed; or by using a joystick or other controller in order to get the impression of moving around the area, as if in a video game. As the viewer moves about the area and out of the field of view of one camera, the system could automatically cut to the view from a different camera as appropriate.
According to a second aspect of the invention, a method of imaging an area comprises the digital mapping of the area to create an image map of the area, and mapping onto the image map layer the positions and fields of view of cameras within the area. It is preferred that the images from at least one of the cameras are mapped onto the image map layer in the form of a camera image layer. This can allow improved imaging of the area, and in particular reduces the number of screens which are needed to view the area. Ultimately, a single screen can be used to view a large area incorporating a large number of cameras.
In all, this system will make it very much easier to view the events within an area covered by multiple cameras, using fewer screens and in a much more effective way. The operator is very much less likely to lose any person or object which is being tracked, as a consequence of mapping the camera image into the image map. It is very much easier for the operator to see the relationships between the images from different cameras. The cameras can be fixed cameras, or can be cameras which include pan, tilt and zoom features.
The area map may be created in any one of a number of different ways, or even in a combination of more than one way. For example, the map may be created simply from an aerial view of the area or from existing street or building plans. However, there are advantages to the map being created from photographs, both aerial photographs in order to get the two dimensional ground plan, and photographs taken from the ground in order to build up a three dimensional image space with blocks, which is vectored so that a 3D virtual map is created, similar to the creation of 3D image space in flight simulator mapping. The more photographs that are included in the image, and from as many different views as possible, the better the mapping will be. Of course, the more accurate the mapping, the more processing power will be required to use it in this system. For example, the buildings in a downtown area can be defined by their corners, which form points which are vectored to define blocks. When displayed, these blocks are "skinned" to give the appearance of a solid mass. A minimal sketch of such a block structure is given below.
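By way of illustration only (the patent does not prescribe any data format), the vectored blocks described above might be represented as corner vertices joined into faces, each face carrying a texture ("skin"). All class names and texture identifiers here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    """A corner point of a building in map (Cartesian) space."""
    x: float
    y: float
    z: float

@dataclass
class Face:
    """A planar face of a block, referenced by vertex indices and
    'skinned' with a texture to give the appearance of a solid mass."""
    vertex_indices: list              # indices into Block.vertices
    texture_id: str = "default"      # hypothetical skin reference

@dataclass
class Block:
    """A vectored building block: corner points plus skinned faces."""
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)

# A simple rectangular building, 20 m x 10 m footprint, 15 m tall.
building = Block(
    vertices=[
        Vertex(0, 0, 0), Vertex(20, 0, 0), Vertex(20, 10, 0), Vertex(0, 10, 0),
        Vertex(0, 0, 15), Vertex(20, 0, 15), Vertex(20, 10, 15), Vertex(0, 10, 15),
    ],
    faces=[
        Face([0, 1, 5, 4], "brick_wall"),   # south wall
        Face([4, 5, 6, 7], "flat_roof"),    # roof
        # ... remaining walls are defined in the same way
    ],
)
```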
The better the map that is created, the easier it is for a control centre operator to track an object moving around the area. There is not a great need for the area map to have enormously high resolution, since it is used principally as a schematic to give the user both a visual and location reference, just like a road map. Some processing can be reduced by showing some less important parts of the map in very low resolution. On the map, it is preferred that the location points of the cameras are specified as icon objects in the image space so that they may be selected. It is expected that selection of a particular camera icon will cause the image view from that camera to appear on the screen. The images from a selected camera, or even more than one selected camera, may be called up onto a screen, and may be windowed so that it is still possible to view the map outside of the image window. Alternatively, the map might be overlaid on the video image.
Above it is explained that an image map layer is created onto which a camera layer is overlaid. In addition, it is advantageous to overlay the images from the cameras onto the map. Therefore, when viewing the area, most of the area that is viewed will be shown in the form of the map layer, but overlaid onto this are the images from the cameras. Because a three dimensional image space has been created in the map layer, the images from the cameras can be overlaid onto the map layer with different parts of the camera image correctly positioned in the map space. This is an important feature because it is possible to place the images from the cameras into the mapped image space without having to mathematically transform those images into the map. It will be appreciated here that the map is made up of points having Cartesian co-ordinates, whereas the images from the camera are made up of points having polar co-ordinates. Transforming the two together involves considerable processing power, and whilst this is possible, the overlaying of the images from the camera onto the map layer involves considerably less computing power whilst giving a very good result. A sketch of this overlaying appears below.
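By way of illustration only, the overlaying just described can be sketched as follows, assuming numpy and assuming that each camera's on-screen window rectangle has been computed once from the three dimensional map; the function and parameter names are hypothetical.

```python
import numpy as np

def composite_view(map_render, camera_frame, window_rect):
    """Overlay a camera image onto the rendered map view.

    map_render   -- H x W x 3 array: the map layer as drawn on screen
    camera_frame -- h x w x 3 array: live video from the selected camera
    window_rect  -- (top, left, height, width): where the camera view sits
                    in the on-screen map, precomputed once per camera pose

    The camera image is simply pasted into its precomputed screen
    rectangle rather than re-projected pixel by pixel, mirroring the
    point above that overlaying avoids a costly polar-to-Cartesian
    transform of every frame.
    """
    top, left, h, w = window_rect
    out = map_render.copy()
    # Resize by nearest-neighbour sampling to keep the sketch dependency-free.
    ys = np.arange(h) * camera_frame.shape[0] // h
    xs = np.arange(w) * camera_frame.shape[1] // w
    out[top:top + h, left:left + w] = camera_frame[np.ix_(ys, xs)]
    return out
```

Pasting into a precomputed rectangle is a one-off lookup per camera pose, which is why overlaying needs far less computing power than a full per-pixel transform of each frame.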
Another advantage of using a three dimensional image space map is that not all of the image shown to the control centre operator will contain real video imagery. Some of the peripheral regions viewed by the operator might be from the map, whereas real video imagery may only be used for the central region of each view, such as where people are moving about. If an operator is viewing a street, it will normally be unimportant to view the walls of the buildings and the sky, whereas imaging the street with real video imagery will be more important, and this will help reduce the amount of image processing that is needed.
In addition to the creation of the area map, the cameras must be mapped onto the area map, together with the field of view of those cameras. The field of view of each camera is defined by its polar co-ordinates, which may be fixed or variable depending on whether or not the camera has a zoom, or can pan or tilt. A sketch of one possible representation of a camera's field of view follows.
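As an illustrative sketch only, a camera's mapped location and polar field of view might be recorded and queried as follows; the class and parameter names are hypothetical, and occlusion by buildings (which a full system would test against the 3D map) is ignored.

```python
import math

class CameraView:
    """Hypothetical field-of-view record for one mapped camera: a
    bearing (pan direction) plus an angular width and useful range,
    any of which may be variable if the camera can pan, tilt or zoom."""

    def __init__(self, x, y, bearing_deg, fov_deg, max_range_m):
        self.x, self.y = x, y          # camera location in map space
        self.bearing = bearing_deg     # direction the camera faces
        self.fov = fov_deg             # angular width of the view
        self.max_range = max_range_m   # useful viewing distance

    def can_see(self, px, py):
        """True if map point (px, py) falls within this camera's view."""
        dx, dy = px - self.x, py - self.y
        if math.hypot(dx, dy) > self.max_range:
            return False
        angle_to_point = math.degrees(math.atan2(dy, dx))
        # Wrap the angular offset into the range -180..180 degrees.
        offset = (angle_to_point - self.bearing + 180) % 360 - 180
        return abs(offset) <= self.fov / 2
```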
The camera image is a three dimensional Cartesian data array from the camera, some of which data is static and some dynamic. In some cases, it may be advantageous for the static or still data to be presented, even when the camera is off line, and this can be used as a reference image within the area map.
During filming, it is expected that real time moving images are used in the image layer, that is the image from the camera which is viewed by the operator, and which may be used to overlay the area map and camera map layers.
It is preferred that the system includes an image server which is driven directly by the operator from his computer, and which switches the real time data from the cameras into the image layer, handling the scaling of the map and image data and creating the operator's on-screen images. It receives the commands from the operator in order to create the image in the control centre that the operator wishes to see.
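A minimal sketch of how such an image server might be organised, reusing the hypothetical composite_view function from the earlier sketch; the command set, the camera-source interface (read_frame, set_zoom, window_rect) and all names are illustrative assumptions, not the patent's specification.

```python
class ImageServer:
    """Sketch of the image server described above: it accepts operator
    commands, switches live camera data into the image layer, and
    composes the operator's on-screen image. A real system would
    stream video asynchronously rather than polling per frame."""

    def __init__(self, map_layer, cameras):
        self.map_layer = map_layer   # rendered area map + camera-icon layer
        self.cameras = cameras       # dict: camera_id -> video source
        self.selected = None

    def handle_command(self, command, **args):
        if command == "select_camera":   # operator clicks a camera icon
            self.selected = args["camera_id"]
        elif command == "zoom":          # operator zooms within the image
            self.cameras[self.selected].set_zoom(args["factor"])

    def render_frame(self):
        """Compose one on-screen image: the map base layer, then the
        live feed of the selected camera pasted into its window."""
        frame = self.map_layer.copy()
        if self.selected is not None:
            cam = self.cameras[self.selected]
            frame = composite_view(frame, cam.read_frame(), cam.window_rect)
        return frame
```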
Since it is very complex to mechanically control cameras remotely, there being delays in panning, tilting and zooming cameras mechanically, it is also envisaged that the cameras take advantage of high resolution sensors which are able to resolve images to a very high resolution. Although most of the time the high resolution is not needed and is not used, if the camera is to be zoomed to see an image more closely, instead of using a zoom lens, the image from the camera can be zoomed by viewing only a part of the image collected on the image sensor. The use of a high resolution sensor means that resolution will not be lost until a very high degree of zoom is used. In addition, whereas an image would be distorted by the use of a zoom lens, since the focal length of the camera is changed thereby distorting the perspective of the image, distortion is reduced or eliminated by the use of this non-mechanical zooming. In addition, panning and tilting can be achieved merely by viewing a different part of the image collected on the camera sensor. This may be achieved by the camera forwarding only the part of the image data from the sensor relating to the part of the image which the operator wishes to see, or can be done in the computer that the operator is using, whereby the computer receives all of the image but only selects part of it in order to effect the zoom, pan and tilt. Fixed still image shots may be mapped directly into the image server memory space, and their images expanded, scrolled or compressed by the image server as appropriate. A sketch of this crop-based zooming, panning and tilting is given below.

The system may include GPS whereby objects can be tracked across the surface of the area.
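By way of illustration of the crop-based zooming, panning and tilting referred to above, the following minimal sketch assumes the full sensor image is available as a numpy array; the window parameterisation is a hypothetical choice, not a prescription of the patent.

```python
import numpy as np

def digital_ptz(sensor_image, pan, tilt, zoom):
    """Return a view window from a high-resolution sensor image.

    pan, tilt -- offsets of the window centre from the image centre,
                 as fractions of the frame (-0.5 .. 0.5)
    zoom      -- magnification factor >= 1; zoom 2 shows half the
                 width and half the height of the full frame

    No lens or motor is involved: only the selected part of the sensor
    data is forwarded or displayed, so focus and perspective are
    unchanged, as the description notes.
    """
    full_h, full_w = sensor_image.shape[:2]
    win_h, win_w = int(full_h / zoom), int(full_w / zoom)
    centre_y = int(full_h * (0.5 + tilt))
    centre_x = int(full_w * (0.5 + pan))
    # Clamp so the window always stays on the sensor.
    top = min(max(centre_y - win_h // 2, 0), full_h - win_h)
    left = min(max(centre_x - win_w // 2, 0), full_w - win_w)
    return sensor_image[top:top + win_h, left:left + win_w]

# Example: 3x zoom on a region up and to the right of centre,
# using a hypothetical 24-megapixel sensor frame.
frame = np.zeros((4000, 6000, 3), dtype=np.uint8)
view = digital_ptz(frame, pan=0.2, tilt=-0.1, zoom=3)
```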
The present invention will now be described by way of example only and with reference to Figure 1, which is a map of an area under surveillance by a number of cameras. By way of example, a street plan is shown in Figure 1, in which a number of cameras are located in various positions throughout the plan. Of course, the street plan could be significantly larger than this, and the number of cameras used could be much larger.
Alternatively, instead of use in a town centre, this system could be used in other environments, such as, but not limited to, within a suite of offices, or within a railway station or airport. It could be used to view other types of objects besides people, for example vehicles on a road, or animals on a farm or in a zoo. The image mapping process is described later in this specification.
In Figure 1, a number of streets are linked together in a town centre, and within the plan there is a square. At the top of the plan, a first street 1 enters the area and leads into a second street 2. The second street 2 runs across the plan, and is open at each end. A third street 3 has a junction on the second street 2, on the opposite side of the street to the first street 1 and offset from it. At the other end of the third street 3 is a town square 7, which has fourth, fifth and sixth streets 4, 5, 6 also running off it and leading out of the area represented in the map. Five cameras are positioned around the area in order to view most of the area. More cameras can be added in order to ensure that all parts of the area are viewed, or to allow certain areas to be viewed from more directions.
For example, more cameras could be added in the town square 7.
A first camera 8 is shown at the left hand end of the second street 2, pointing down the street to the right past the ends of the third street 3 and the first street 1. The passage of an object into or from the third street 3 or the first street 1 would be seen by this camera. A second camera 9 is located in the second street 2 opposite the junction with the third street 3, so that it views the third street 3 and its junction with the second street 2. A third camera 10 is also located in the second street 2, opposite the junction with the first street 1, so as to view street 1.

A fourth camera 11 is located in the town square 7 opposite the third street 3, so that it may view objects in street 3, as well as objects in a large proportion of the town square 7. Finally, a fifth camera 12 is located in the town square 7, which is able to view the majority of the town square 7, including the fourth street 4, the fifth street 5, and the corner of the town square 7 which opens into the sixth street 6.
In Figure 1, each camera includes a pair of lines showing the field of view of each camera. The cameras may be fixed in the direction in which they can view the streets, or can be movable to pan, tilt and zoom. In the case of the fourth camera 11, two different fields of view are shown, which correspond with two different configurations: a wide field of view, which might be the maximum field of view and which allows the camera to view a large proportion of the town square; and a narrower field of view, which might correspond to a fully or partially zoomed field of view.

The fifth camera 12 is shown with a very wide field of view. This might be achieved by a wide angle lens, or alternatively might be achieved by the camera being able to pan around the town square. Either is envisaged for this camera, or for any of the other cameras. Therefore, it will be appreciated that a wide variety of configurations can be achieved depending on whether any of the cameras are able to zoom, tilt or pan.
In use, an operator may spot a person known to them as a potential thief in the first street 1 from the view from the third camera 10. If that person is walking down the first street 1 towards the third camera 10, there will come a point where he will leave the field of view of the third camera. The path of the person is indicated by a line 13 on the map. Various points on the map are indicated with a small cross; these are points which will be described in the following description.

The person can be tracked as he walks through the streets. This can be done in any one of several ways. First of all, the map of the area shown in Figure 1 will be placed on a computer screen. The positions of the cameras will form icons which can be clicked on using a pointing device such as a mouse. Therefore, once the person enters the second street 2, the operator can click on the first camera 8 in order to view the person in the second street 2, where the person would leave the field of view of the third camera 10.
The operator will view the images from the camera on a screen, which may be the same screen on which the map is displayed, or a different screen. Alternatively, the map and the video image from the selected camera can be windowed onto the same screen. Therefore, as the person enters the second street 2, he will exit from the field of view of the third camera 10 at point 14, by which time the operator should have selected the first camera 8.
As the person walks into the third street 3, at some point between point 15 and point 16 in the path 13, the operator should select the second camera 9 so that the person's passage down the third street 3 can be viewed. As the person winds his way down the third street 3, he can be viewed either by the second camera 9 or by the fourth camera 11. This is particularly useful where the person might pass behind a large object such as a tree, or become obscured by a crowd of people. One camera may give a clearer view than the other.
As the person walks into the town square 7, at some point between points 17 and 18, the operator should switch from the fourth camera 11 to the fifth camera 12. The fifth camera 12 will view the person as they leave the area via street 4.
One of the real advantages of this invention is that the cameras are mapped into the area, so that it is much easier for the camera operator to observe something moving through the area. An even better arrangement will be described as follows. Instead of the operator having to manually click on the appropriate camera, this can be done automatically. For example, as the person walks from the second street 2 into the third street 3, he is within the field of view of two cameras, and possibly even the fourth camera 11. The operator can then click on the object in the image that is being viewed, and the system will automatically change the camera, in this case from the first camera 8 to the second camera 9. All the operator needs to do is to watch the image and click on the object moving about the area from time to time, so that the most appropriate camera is selected.
An even greater enhancement would allow the operator to mark the object as it enters the area so that, as it proceeds through the area, the computer system observes its passage through the streets, automatically selecting the correct camera and thereby requiring no intervention by the operator. In much of the area, the object is viewed by more than one camera, which allows the computer system to identify exactly where the object is on the map, and to choose the correct camera as appropriate. This is achieved by the system mapping the object into the virtual map held in its memory. A sketch of such automatic camera selection is given below.
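Purely as an illustration of this automatic hand-over, and reusing the hypothetical CameraView sketch above: given the object's current position in the map, the system can pick the covering camera nearest to it.

```python
import math

def best_camera(cameras, px, py):
    """Pick the most appropriate camera for a tracked object at map
    position (px, py): of the cameras whose field of view contains the
    point, prefer the nearest one. Returns None if no camera covers the
    point, in which case the display falls back to the map layer alone."""
    covering = [cam for cam in cameras if cam.can_see(px, py)]
    if not covering:
        return None
    return min(covering, key=lambda cam: math.hypot(px - cam.x, py - cam.y))
```

Calling best_camera on each position update yields the automatic cut from camera to camera that the description envisages; a real system would also weigh zoom level and occlusion when more than one camera covers the object.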
The system can even be arranged such that the images from the cameras are mapped into a virtual three dimensional image map of the complete area, and the area can then be displayed as a "three dimensional" view of the area; the operator can move a controller such as a joystick so that the image viewed by the operator gives the impression of travelling through virtual space around the area, as in a video game. In such a case, not all of the image seen by the operator is real video imagery: most is created electronically, similar to that of a video game. The image of the person walking through the street, and the part of the area around them, will normally be real video imagery grafted onto the virtual three dimensional image. The view that the operator sees will be made up of layers, with the bottom layer being the 3D map, the next layer being the cameras and their fields of view, and the third being the video images from the cameras. The operator can effectively operate "within" the virtual space.
The map of the area is created from a selection of previously recorded aerial and ground photographs which are tiled and vectored to form a 3D image space, in a similar manner to the mapping used in flight simulators. As previously explained, the locations of the cameras are specified as objects in this image space, in the form of icons which can be selected by the operator. Most of the map area does not need to have a high resolution, since it is principally there as a schematic to give the user a visual and location reference, just like a road map. The operator could choose a high view point, in which case the map would appear as a typical street map of the area.
This area map has added to it a camera map layer, in which the location and field of view of each camera, defined by its polar co-ordinates, are added. These may be fixed or variable depending on whether or not the camera has zoom facilities, or can pan or tilt. The camera map is then a layer with each camera represented by an icon, which varies in complexity depending on the user's range and direction with respect to the camera view. In the near field, it may be necessary to perform a geometric projection whereby the map is based upon real location imagery. The camera map layer is overlaid electronically onto the area map.
The third layer is an image layer. The camera images are three dimensional Cartesian data arrays from the camera, some of the data being static and some dynamic. The static or still data may optionally be presented when the camera is off line, and used as a reference image within the area map. If the camera is filming, then the real time moving images are used in the image layer. The image layers are electronically overlaid onto the area map and camera map layers.
An image server can be driven directly by the operator via his computer screen view. It switches the real time data from the cameras into the image layer, handles scaling of the map and image data and creates the on-screen images. It also accepts commands from the operator in order to view what the operator wants to see. In this case, the operator might start with an aerial view of the area which he can navigate by computer joystick. As he approaches a locality, the positions of cameras and their coverage are shown in iconic form. If the user clicks on an icon, then a window pops up showing the view from that camera. He can click on other cameras to cover the area. He can pan or zoom within one of the camera images to get a closer view, and open or close views at will. With suitable software and multiple cameras, this system would allow an operator to navigate by joystick throughout the whole area without physically controlling pan and zoom cameras, or having to watch multiple video screens or switch between them. The system software and hardware would do this directly.
This system is particularly useful for real time tracking. Where movement is constrained, for example down roads, rails, corridors and the like, once an object is located in the scene, it is possible to follow it with the joystick. This process can compute the speed and direction of the object via the map area, which joins all regions not completely covered by the cameras. A remote operator could therefore "drive" through the virtual space following an object, as one might do in a video game, with the exception that all regions containing cameras would contain real images. If the object within the area has GPS or another tracker system installed, then it will be possible to track it continuously and automatically through both the map and image space over a large geographic area. The cost and complexity of such a system would depend on the image resolution required, and on the number and resolution of the cameras used, but the system is scalable: large areas can be navigated simultaneously with far more cameras than could be viewed in a conventional control room with a bank of screens.
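A minimal sketch of the speed and direction computation mentioned above, assuming the tracked object's successive map positions are timestamped; the function and names are hypothetical.

```python
import math

def speed_and_direction(p0, p1, t0, t1):
    """Estimate an object's speed and heading from two successive map
    positions p0 = (x0, y0) at time t0 and p1 = (x1, y1) at time t1.

    Returns (speed in map units per second, bearing in degrees), which
    can be used to predict where the object re-enters camera coverage
    after crossing a region covered only by the map layer.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dt = t1 - t0
    speed = math.hypot(dx, dy) / dt
    bearing = math.degrees(math.atan2(dy, dx))
    return speed, bearing

# Example: the object moved 12 m east and 5 m north in 4 seconds.
print(speed_and_direction((0, 0), (12, 5), 0.0, 4.0))   # (3.25, ~22.6 deg)
```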
One of the problems associated with known surveillance systems is that, as an object moves from the field of view of a first camera into the field of view of a second camera, if that second camera is one which can pan, tilt or zoom, it will normally be inappropriately configured when it is needed, either pointing the wrong way or zooming in on some object which is irrelevant. This means that the operator must waste time reconfiguring the second camera. One way in which an automated system which automatically tracks an object through the area can work is to make sure that cameras are correctly orientated long before the object travelling through the area reaches the point where it enters the field of view. Alternatively, and preferably, a camera is used which, rather than mechanically zooming, panning and tilting, can do any one or more of these operations without mechanical movement. Such a camera will now be described.

Improved imaging is achieved by an imaging system including a high resolution digital camera where the zoom is achieved by enlarging the digital image so as to view only a very small part of it, such as on a screen. Provided that the resolution of the camera is high enough, a much higher magnification of a view can be achieved than from the human eye. In conventional zoom cameras, the image is distorted by the changing of focal length, and re-focusing is normally required. By using a high resolution digital camera and achieving the zoom through displaying on a screen a small part of the image enlarged to fill the screen, very much greater resolution can be achieved; because the camera is focused on the far field, and because the lenses are not adjusted to effect the zoom, refocusing of the camera is not needed during zooming.
The image from the camera can be panned, tilted and zoomed without making any changes whatsoever to the camera itself. This is achieved by a camera positioned to view a wide area, with the image being formed on a high resolution sensor. Rather than moving the position of the camera or moving the zoom lens of the camera, panning, tilting and zooming can be achieved by viewing on a screen only a part of the overall image. Thus, if one wants to see the wide area, the whole image on the sensor is displayed on the monitor, whereas if a particular object at the centre of the image is to be viewed, a zooming effect can be achieved by viewing only the part of the image on the sensor in which the particular object in which the user is interested is located. If a very high resolution sensor is used, the zooming effect can be achieved without significant deterioration of the image. Also, because the focal length of the lens of the camera is not being changed, the image is not distorted by the zooming.
Likewise, if the object being viewed now moves away from the centre of the image formed on the sensor, the object can be followed left, right, up or down merely by changing the part of the overall image which is being viewed. What is more, the zooming, panning and tilting can be achieved solely in software which is used to select the appropriate part of the image, and no changes have to be made to the camera at all.
The image sensor can be divided into regions, each of which is arranged to be appropriate for the different functions to be viewed. For example, the very central part of the area may be very high resolution.
By using cameras of this sort, the full field of view is always received on the sensor, and so is always available without having to wait for a mechanical operation to move the camera in some way. For example, a reset button or icon might be selected which automatically zooms back to show the whole image viewed by the camera. Of course, what is more preferred is that, during automatic operation, the camera will automatically select the correct part of the picture.

Claims (39)

Claims
1. A surveillance system arranged to survey an area which is digitally mapped into the system to create an image map of the area, onto which the location of a number of cameras is mapped, together with the field of view of at least one of those cameras.
2. A surveillance system according to claim 1 in which an image from at least one of the cameras is mapped onto the image map.
3. A surveillance system according to claim 1 or 2, further comprising a display device arranged to display a display image built up in layers starting with the image map which forms a base layer onto which a camera layer is added which includes the location of the cameras together with the field of view of those cameras.
4. A surveillance system according to claim 3, further comprising a controller for selecting objects or positions on the display device.
5. A surveillance system according to claim 4, wherein an object moving within the surveyed area can be tracked as it moves out of the field of view of one camera into
the field of view of another camera by using the controller to select a camera in the
display image which is most appropriate for viewing the object.
6. A surveillance system according to claim 5, wherein the cameras are shown on the display image as icons.
7. A surveillance system according to any one of claims 4 to 6, wherein an object moving within the surveyed area can be tracked as it moves out of the field of view of
one camera into the field of view of another camera by using the controller to point at
the position on the display image at which the object is located.
8. A surveillance system according to any one of claims 4 to 7, wherein an object moving within the surveyed area can be tracked as it moves out of the field of view of
one camera into the field of view of another camera by using the controller to point at
the position on the image of the camera currently being viewed.
9. A surveillance system according to any one of claims 4 to 8, wherein an object moving within the surveyed area can be tracked as it moves out of the field of view of
one camera into the field of view of another camera by using the controller to point at
the object such that the system automatically cuts to the view from a different camera as appropriate.
10. A surveillance system according to any one of claims 4 to 9, wherein the controller is one of a mouse and a joystick.
11. A surveillance system according to any one of claims 4 to 10, wherein the images from a selected camera, or from more than one selected camera, may be called up onto the display device in a windowed arrangement.
12. A surveillance system according to any one of the preceding claims, wherein the cameras are fixed cameras.
13. A surveillance system according to any one of the preceding claims, wherein the cameras include pan, tilt and zoom features.
14. A surveillance system according to any one of the preceding claims, wherein the image map of the area is a three dimensional virtual map.
15. A surveillance system according to claim 14, wherein the virtual map is built up from images of the area being surveyed.
16. A surveillance system according to any one of claims 3 to 15, further comprising an image server which is driveable directly by an operator, and which places the real time data from the cameras into the display image.
17. A surveillance system according to any one of the preceding claims, wherein one or more of the cameras include high resolution sensors which are able to resolve images to a very high resolution.
18. A surveillance system according to claim 17, wherein any one or more of panning, tilting and zooming is achieved through selecting which part of the image falling on the sensor to display.
19. A surveillance system according to any one of the preceding claims, further including GPS whereby objects can be tracked across the surface of the area.
20. A method of imaging an area comprising: digitally mapping the area to create an image map of the area, and mapping onto the image map layer the positions and fields of view of cameras
within the area.
21. A method according to claim 20, further comprising mapping the images from at least one of the cameras in the form of a camera image layer onto the image map.
22. A method according to claim 20 or 21, further comprising tracking an object moving within the surveyed area as it moves out of the field of view of one camera into
the field of view of another camera by selecting a camera in the display image which is most appropriate for viewing the object.
23. A method according to claim 22, wherein the cameras are shown on the display image as icons.
24. A method according to any one of claims 20 to 23, further comprising tracking an object moving within the surveyed area as it moves out of the field of view of one
camera into the field of view of another camera by pointing at the position on the
display image at which the object is located.
25. A method according to any one of claims 20 to 24, further comprising tracking an object moving within the surveyed area as it moves out of the field of view of one
camera into the field of view of another camera by pointing at the position on the image
of the camera currently being viewed.
26. A method according to any one of claims 20 to 25, further comprising tracking an object moving within the surveyed area as it moves out of the field of view of one
camera into the field of view of another camera by pointing at the object, and
automatically cutting to the view from a different camera as appropriate.
27. A method according to any one of claims 20 to 26, including calling up images from a selected camera, or from more than one selected camera onto a display device in a windowed arrangement.
28. A method according to any one of claims 20 to 27, wherein the image map of the area is a three dimensional virtual map.
29. A method according to claim 28, wherein the virtual map is built up from images of the area being surveyed.
30. A method according to claim 28 or claim 29, wherein the map is created from a view of the area and from street or building plans.
31. A method according to any one of claims 28 to 30, wherein the virtual map is constructed in image space with blocks, and is vectored.
32. A method according to claim 31, wherein the vectors are skinned.
33. A method according to any one of claims 20 to 32, including storing the image map in memory.
33. A method according to any one of claims 20 to 32, further comprising displaying an image of the area in which not all of the image is real.
34. A method according to claim 33, comprising displaying some of the peripheral regions from the image map.
35. A method according to claim 34, comprising displaying real video imagery for the central region.
36. A method according to any one of claims 20 to 35, including placing the real time data from the cameras into the display image.
37. A method according to claim 36, further including scaling the map and image data to create the display image.
38. A method according to any one of the preceding claims, further comprising tracking objects across the surface of the area using GPS.
39. A surveillance system constructed and arranged substantially as herein described with reference to Figure 1.
GB0129812A 2001-12-13 2001-12-13 Schematic mapping of surveillance area Withdrawn GB2384128A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0129812A GB2384128A (en) 2001-12-13 2001-12-13 Schematic mapping of surveillance area
PCT/GB2002/005657 WO2003051059A1 (en) 2001-12-13 2002-12-13 Image mapping
AU2002350957A AU2002350957A1 (en) 2001-12-13 2002-12-13 Image mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0129812A GB2384128A (en) 2001-12-13 2001-12-13 Schematic mapping of surveillance area

Publications (2)

Publication Number Publication Date
GB0129812D0 GB0129812D0 (en) 2002-01-30
GB2384128A true GB2384128A (en) 2003-07-16

Family

ID=9927544

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0129812A Withdrawn GB2384128A (en) 2001-12-13 2001-12-13 Schematic mapping of surveillance area

Country Status (3)

Country Link
AU (1) AU2002350957A1 (en)
GB (1) GB2384128A (en)
WO (1) WO2003051059A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2424784A (en) * 2005-03-31 2006-10-04 Avermedia Tech Inc Interactive surveillance system
GB2457707A (en) * 2008-02-22 2009-08-26 Crockford Christopher Neil Joh Integration of video information
CN103517035A (en) * 2012-06-28 2014-01-15 南京中兴力维软件有限公司 Intelligent park security panorama monitoring system and method
EP2854397A4 (en) * 2012-05-23 2016-01-13 Sony Corp Surveillance camera administration device, surveillance camera administration method, and program

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10358017A1 (en) * 2003-12-11 2005-07-21 Siemens Ag 3D camera control
US20050225634A1 (en) * 2004-04-05 2005-10-13 Sam Brunetti Closed circuit TV security system
US7720257B2 (en) * 2005-06-16 2010-05-18 Honeywell International Inc. Object tracking system
DE102006002602A1 (en) 2006-01-13 2007-07-19 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Calibration method and calibration system
ITMI20071016A1 (en) 2007-05-19 2008-11-20 Videotec Spa METHOD AND SYSTEM FOR SURPRISING AN ENVIRONMENT
US8531522B2 (en) 2008-05-30 2013-09-10 Verint Systems Ltd. Systems and methods for video monitoring using linked devices
US9245183B2 (en) 2014-06-26 2016-01-26 International Business Machines Corporation Geographical area condition determination

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19531593A1 (en) * 1995-08-28 1997-03-06 Siemens Ag Camera cross-connection control for monitoring arrangement
WO1997023096A1 (en) * 1995-12-15 1997-06-26 Bell Communications Research, Inc. Systems and methods employing video combining for intelligent transportation applications
EP0884909A2 (en) * 1997-06-10 1998-12-16 Canon Kabushiki Kaisha Camera control system
EP0967584A2 (en) * 1998-04-30 1999-12-29 Texas Instruments Incorporated Automatic video monitoring system
US20010022615A1 (en) * 1998-03-19 2001-09-20 Fernandez Dennis Sunga Integrated network for monitoring remote objects

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2057961C (en) * 1991-05-06 2000-06-13 Robert Paff Graphical workstation for integrated security system
US6002995A (en) * 1995-12-19 1999-12-14 Canon Kabushiki Kaisha Apparatus and method for displaying control information of cameras connected to a network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19531593A1 (en) * 1995-08-28 1997-03-06 Siemens Ag Camera cross-connection control for monitoring arrangement
WO1997023096A1 (en) * 1995-12-15 1997-06-26 Bell Communications Research, Inc. Systems and methods employing video combining for intelligent transportation applications
EP0884909A2 (en) * 1997-06-10 1998-12-16 Canon Kabushiki Kaisha Camera control system
US20010022615A1 (en) * 1998-03-19 2001-09-20 Fernandez Dennis Sunga Integrated network for monitoring remote objects
EP0967584A2 (en) * 1998-04-30 1999-12-29 Texas Instruments Incorporated Automatic video monitoring system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2424784A (en) * 2005-03-31 2006-10-04 Avermedia Tech Inc Interactive surveillance system
GB2457707A (en) * 2008-02-22 2009-08-26 Crockford Christopher Neil Joh Integration of video information
EP2093999A1 (en) 2008-02-22 2009-08-26 Christopher Neil John Crockford Integration of video information
EP2854397A4 (en) * 2012-05-23 2016-01-13 Sony Corp Surveillance camera administration device, surveillance camera administration method, and program
US9948897B2 (en) 2012-05-23 2018-04-17 Sony Corporation Surveillance camera management device, surveillance camera management method, and program
CN103517035A (en) * 2012-06-28 2014-01-15 南京中兴力维软件有限公司 Intelligent park security panorama monitoring system and method

Also Published As

Publication number Publication date
AU2002350957A1 (en) 2003-06-23
GB0129812D0 (en) 2002-01-30
WO2003051059A1 (en) 2003-06-19

Similar Documents

Publication Publication Date Title
US10237478B2 (en) System and method for correlating camera views
US8711218B2 (en) Continuous geospatial tracking system and method
US7750936B2 (en) Immersive surveillance system interface
JP4928670B2 (en) Pointing device for digital camera display
EP2553924B1 (en) Effortless navigation across cameras and cooperative control of cameras
US20060028550A1 (en) Surveillance system and method
Ciampa Pictometry Digital Video Mapping
US20080291279A1 (en) Method and System for Performing Video Flashlight
US20060114251A1 (en) Methods for simulating movement of a computer user through a remote environment
US20090040302A1 (en) Automated surveillance system
US20080239102A1 (en) Camera Controller and Zoom Ratio Control Method For the Camera Controller
GB2384128A (en) Schematic mapping of surveillance area
US20120188333A1 (en) Spherical view point controller and method for navigating a network of sensors
CN107957772A (en) The method that the processing method of VR images is gathered in reality scene and realizes VR experience
CN109120901A (en) A kind of method of screen switching between video camera
KR101297294B1 (en) Map gui system for camera control
WO2006017402A2 (en) Surveillance system and method
US20080129818A1 (en) Methods for practically simulatnig compact 3d environments for display in a web browser
GB2457707A (en) Integration of video information
Velagapudi et al. Synchronous vs. asynchronous video in multi-robot search
CN114390245A (en) Display device for video monitoring system, video monitoring system and method
Velagapudi et al. Scaling effects for streaming video vs. static panorama in multirobot search
US20240020927A1 (en) Method and system for optimum positioning of cameras for accurate rendering of a virtual scene
EP1040450A1 (en) Acquisition and animation of surface detail images
Molinet The Digital Enhancement Database

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)