WO2009122416A2 - Object content navigation - Google Patents

Object content navigation

Info

Publication number
WO2009122416A2
WO2009122416A2 (PCT/IL2009/000373)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
image
video
surveillance
area
Prior art date
Application number
PCT/IL2009/000373
Other languages
French (fr)
Other versions
WO2009122416A3 (en)
Inventor
David Keidar
Eran Bauberg
Original Assignee
Evt Technologies Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL190584A external-priority patent/IL190584A0/en
Priority claimed from US12/061,035 external-priority patent/US9398266B2/en
Application filed by Evt Technologies Ltd. filed Critical Evt Technologies Ltd.
Publication of WO2009122416A2 publication Critical patent/WO2009122416A2/en
Publication of WO2009122416A3 publication Critical patent/WO2009122416A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639 Details of the system layout
    • G08B13/19645 Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over

Definitions

  • the computerized medium 500 may enable operating said application 300 and receiving and transmitting data from and to the cameras 10 through the controller 20.
  • the application 300 may be operatively associated with the cameras 10 through the controller 20 and the computerized medium 500, enabling it to operate and control the cameras 10, which receive operative commands transmitted by the application 300 via the computerized medium 500.
  • the cameras 10 may enable transmitting data and receiving operational commands directly from the computerized medium 500 (through any kind of communication network known in the art).
  • Each camera 10 of the system 1000 may be associated with locating areas in video-image-displays (the display of the cameras' 10 video streams) in the computerized medium 500 and/or a remote terminal 80, as illustrated in Fig. 2.
  • For example, the coordinates of the area of the video-image-display that shows the door 210 may be associated as a hyperlink with one camera 10 that is physically located at the inner side of the room 200 and with another camera 10 that is physically located at the outer side of the room 200.
  • Upon selection of such an area, the system 1000 may automatically shift to the first camera 10 associated with this area (showing its video stream) and likewise to the second camera 10 associated with it.
  • the system 1000 may comprise predefined hyperlinks associating each camera 10 of the system 1000 with coordinates of areas in the related video-image-displays.
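As a concrete illustration of the hyperlink association described above (all identifiers and coordinates below are invented for this sketch, not taken from the patent), each camera's video-image-display can carry a table of rectangular areas whose coordinates link to other cameras, so that a click inside such an area resolves to the camera to shift to:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LinkedArea:
    # rectangle in the video-image-display, in pixel coordinates
    x1: int
    y1: int
    x2: int
    y2: int
    camera_id: str  # camera hyperlinked to this area

# Assumed layout: the display of the camera inside the room shows a door
# whose pixels link to the corridor camera, and vice versa.
HYPERLINKS = {
    "room-inner": [LinkedArea(400, 120, 520, 360, "corridor-cam")],
    "corridor-cam": [LinkedArea(60, 100, 180, 340, "room-inner")],
}

def camera_for_click(current_camera: str, x: int, y: int) -> Optional[str]:
    """Return the camera hyperlinked to the clicked display coordinates."""
    for area in HYPERLINKS.get(current_camera, []):
        if area.x1 <= x <= area.x2 and area.y1 <= y <= area.y2:
            return area.camera_id
    return None  # the click fell outside every linked area
```

Clicking inside the assumed door area of `room-inner` would then shift the display to `corridor-cam`.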
  • the system 1000 may allow the user to select at least one virtual-object 50 that is filmed by at least one of the cameras 10 and monitor this selected virtual-object 50 by automatically operating and displaying video streams of an associated camera 10 (that is associated with the selected virtual-object's 50 area in the video-image- display of the video stream).
  • the virtual-object 50 may be an area comprising the image of the real surveillance-target 99 or the image of the surveillance-target 99 as shown in the video-image-display displaying online video streams that comprise the image of the surveillance-target 99.
  • the application 300 may comprise a graphical user interface 100, a database 310 and an object navigator 350, as illustrated in Fig. 2.
  • the graphical user interface 100 may allow the user to view the video- image-display of at least one camera 10 and to graphically select a virtual-object 50.
  • The database 310 may store the cameras' 10 installation-locations 200 and enable associating the cameras 10 with coordinate areas in the video-image-display of each camera 10, allowing creation of the hyperlinks between the virtual-object 50 selected by the user and the associated camera 10; this allows the user to view the virtual-object 50 by identifying the coordinates of the object 50 in the image-display.
  • the navigator 350 may enable processing the data of the video streams arriving from the cameras 10 according to a selected object's 50 location 200.
  • the object navigator 350 may comprise a selected area module 351 and a selected image module 352.
  • the selected area module 351 allows monitoring a virtual-object 50 defined as a selected area, while the selected image module 352 allows monitoring a moving-image virtual-object 50.
  • the application 300 may enable a user to view recorded video streams filmed by at least some of the cameras 10.
  • the system 1000 may be used to view video-image-displays that are either filmed online in real time or video-image-displays of offline recorded streams that were filmed by the system's 1000 cameras in the past.
  • the object navigator 350 may additionally comprise an architectural module 353, as illustrated in Fig. 2.
  • the architectural module 353 may comprise an architectural plan 2100 of the surveillance area 2000 with at least some of its locations 200.
  • Fig. 3 is a schematic illustration of an architectural plan 2100 of a surveillance area 2000, according to some embodiments of the invention.
  • the architectural plan 2100 may comprise a drawing of the locations 200 of the surveillance area 2000 and the location of access points 290 such as egresses enabling access to each location 200.
  • the architectural module 353 may enable linking of at least one associated camera 10 with the virtual-object 50 by using the architectural plan 2100 of the surveillance area 2000; the architectural module 353 may enable identifying the camera 10 that is associated with the selected virtual-object 50 by identifying the access point(s) 290 in the virtual-object 50 and linking the identified access point(s) 290 with their associated camera(s) 10.
  • the camera 10 in the next location 200 may be automatically associated with that access point 290 and linked as the camera 10 from which filmed video streams will be displayed as video-image-displays.
  • each access point 290 may be associated with at least one camera 10 where the association may be stored in the database 310.
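The access-point association stored in the database 310 can be sketched as a small lookup table; every identifier below is an assumption made for illustration, not a name from the patent:

```python
# Each access point of the architectural plan connects two locations and
# names the camera covering the far side, so a camera hand-over when the
# target crosses the access point is a single lookup.
ACCESS_POINTS = {
    # access point id: (from location, to location, camera in next location)
    "door-210": ("room-200", "corridor", "cam-corridor-1"),
    "window-220": ("room-200", "yard", "cam-yard-3"),
}

def handover_camera(access_point_id: str) -> str:
    """Camera whose stream should be displayed once the target crosses."""
    _from_loc, _to_loc, camera = ACCESS_POINTS[access_point_id]
    return camera
```

A real system would presumably also record the reverse direction, so the same door maps back to the room camera when crossed the other way.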
  • the GUI 100 may comprise retrieving options allowing the user to select the specific cameras 10 or locations 200 filmed in selected time-intervals in order to view specific past-filmed video streams and locations 200. This may enable users to follow an object 50 that has been in the surveillance area 2000 retrospectively.
  • the tracking module 360 may enable a user to retrieve recorded video streams from a predefined past time-interval and select a virtual-object 50, where the system 1000 allows viewing the video-image-displays corresponding to the probable passage-tracks 250 calculated by the tracking module 360.
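The retrospective-retrieval option described above can be sketched as an interval query over an index of recordings; the record structure and field names here are assumptions for illustration only:

```python
# Hypothetical index of recorded streams: one entry per (camera, interval).
RECORDINGS = [
    {"camera": "cam-1", "start": 100, "end": 200},
    {"camera": "cam-1", "start": 250, "end": 300},
    {"camera": "cam-2", "start": 120, "end": 260},
]

def retrieve(camera: str, t_from: int, t_to: int) -> list:
    """All recordings of `camera` overlapping the requested time-interval."""
    return [r for r in RECORDINGS
            if r["camera"] == camera and r["start"] < t_to and r["end"] > t_from]
```

The overlap test (`start < t_to and end > t_from`) returns every clip that intersects the interval, which is what a user following a target back in time would need.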
  • the graphical user interface 100 may comprise: at least one display window 110 enabling a user to view the video streams filmed by the system's 1000 cameras 10 as a video-image-display, where each video stream that arrives from the currently associated camera 10 is viewed through the display window 110; and at least one selecting tool 112 (e.g. an arrow or any other movable icon that can be moved across the screen, where upon moving and clicking a mouse of a computer (medium 500) an area can be graphically defined), where this tool 112 enables the user to graphically select and define a virtual-object 50 from the display window 110.
  • the graphical user interface 100 may allow the user to navigate through the locations 200 of the surveillance area 2000 to monitor the virtual-object 50 by selecting the virtual-object 50, using the selecting tool 112, and viewing the selected virtual-object 50 through the display window 110.
  • Fig. 4 schematically illustrates a graphical user interface 100 allowing navigation through a selected area, which is the virtual-object 50, according to some embodiments of the invention.
  • the graphical user interface 100 may allow the user to graphically select an area as the virtual-object 50 comprising the image of the surveillance-target 99 by, for example, graphically defining the coordinates of the area (e.g. by drawing a polygon to define the area), using the selecting tool 112, where the application 300 identifies the camera 10 that is associated (hyper-linked) to the coordinates defining the selected area that is defined as the virtual-object 50.
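Deciding whether display coordinates fall inside a user-drawn polygon, as in the area selection just described, is a standard point-in-polygon problem; a ray-casting sketch (not taken from the patent, which does not specify the algorithm) looks like this:

```python
def point_in_polygon(x: float, y: float, polygon: list) -> bool:
    """Ray-casting test; `polygon` is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # only edges that straddle the horizontal ray through (x, y) count
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

With this test, the application could check which hyperlinked coordinates, or which parts of the video-image-display, lie inside the selected virtual-object 50.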
  • FIG. 5 schematically illustrates a graphical user interface 100 allowing navigation through selected moving image as the virtual-object 50, which is the image of the surveillance-target 99.
  • the graphical user interface 100 may allow the user to graphically select a virtual-object 50 by, for example, graphically marking or pointing the image of the surveillance-target 99 (e.g. a person or a vehicle), using the selecting tool 112.
  • the application 300 may identify the virtual-object 50 in the video-image-display by, for instance, identifying the moving elements in the image in relation to the coordinates of the marker (selecting tool 112).
  • the application 300 may identify the camera 10 that is associated (hyper-linked) with the coordinates defining the positioning of the virtual-object 50.
  • the selected image module 352 of the application 300 may comprise algorithms that identify the coordinates of the object (and therefore the associated camera 10) at any given moment or time-interval.
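The patent does not name a specific algorithm for finding the moving element near the marker, so the following is only a minimal frame-differencing sketch: frames are plain 2-D lists of grey values, and the changed pixel closest to the click stands in for the detected object.

```python
from typing import Optional, Tuple

def locate_moving_object(prev_frame: list, cur_frame: list,
                         click_xy: Tuple[int, int],
                         threshold: int = 20) -> Optional[Tuple[int, int]]:
    """Return coordinates of the changed pixel nearest the marker, or None."""
    cx, cy = click_xy
    best, best_dist = None, float("inf")
    for y, row in enumerate(cur_frame):
        for x, value in enumerate(row):
            # pixel counts as "moving" if it changed more than the threshold
            if abs(value - prev_frame[y][x]) > threshold:
                dist = (x - cx) ** 2 + (y - cy) ** 2
                if dist < best_dist:
                    best, best_dist = (x, y), dist
    return best
```

A production system would use proper background subtraction on real camera frames; the point of the sketch is only the "moving elements relative to the marker coordinates" idea from the paragraph above.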
  • the graphical user interface 100 further comprises activators 150, which are areas in said graphical user interface 100 that allow the user to define features of display, record video streams data and save settings.
  • the interface 100 may also comprise other links 160 enabling, for example, to select cameras 10 and define cameras 10 settings.
  • Some of the activators 150 may allow the user to select options out of predefined lists, using, for example, scrollbars 155.
  • one of the activators 150 may be a "Shift to planar view" button or link, enabling the user to view a map of the surveillance area 2000 or at least one of its locations 200 with icons showing the cameras 10 and/or video-images-display of the video streams located along the map in locations that are proportional to the cameras' 10 real physical locations.
  • the map and camera 10 icons may be the ones described in patent application number WO2008001345, which is incorporated by reference into this application (see Background).
  • the planar view may enable the user to see all the locations 200 of the surveillance area 2000 by dividing the display window 110 into several windows showing a number of locations 200 video-images-display simultaneously to help the user to find and select the virtual-object 50 according to the surveillance- target 99 he/she wishes to monitor.
  • the graphical user interface 100 may also comprise a "Previous" button 118A and a "Next" button 118B allowing the user to move back and forth between previously viewed video-image-displays.
  • Fig. 6 is a flowchart schematically illustrating a process for monitoring of surveillance-targets 99, according to some embodiments of the invention.
  • FIG. 7 is a flowchart schematically illustrating a process for object monitoring with two optional types of objects, according to some embodiments of the invention.
  • selecting an object-type 63, where the user may select the type of the virtual-object 50 he/she wishes to monitor (e.g. a moving image, such as a person's image, or an area virtual-object 50 defining the area in the video-image-display he/she wishes to view); • selecting an object 64, where the user graphically selects the virtual-object 50 from the video-image-display in the display window 110 according to the selected object type (e.g. a moving-image virtual-object 50 may be selected by marking the virtual-object 50 with the selecting tool 112 and double-clicking the mouse, whereas an area virtual-object 50 may be selected by defining an area in the display window 110 that comprises an object the user wishes to monitor, e.g. by using the selecting tool 112 to define a closed shape such as a polygon comprising the virtual-object 50).
  • the process may comprise:
  • the user may select another area 70, for example, to monitor a moving image by repeatedly selecting the areas in which he/she spots the image.
  • As illustrated in Fig. 7, if the user selects the moving image 64 as a virtual-object 50, the process may comprise: • defining the image 71 (e.g. by selecting it or by marking the contours of the image's figure);
  • identifying the image's location 72 by, for example, identifying the coordinates in which the object's contours are (or were at the moment the virtual-object 50 has been selected);
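The two object-type branches of Figs. 6 and 7 can be summarized as a dispatch on the selected object type; the function and field names below are illustrative assumptions, not terms from the patent:

```python
def select_object(object_type: str, selection) -> dict:
    """Build a virtual-object description for either supported object type."""
    if object_type == "area":
        # `selection` is the closed shape (e.g. polygon) drawn with the
        # selecting tool; monitoring then follows the hyperlinked area.
        return {"type": "area", "polygon": selection}
    if object_type == "moving-image":
        # `selection` is the marked image contour or click position;
        # monitoring then follows the identified moving element.
        return {"type": "moving-image", "contour": selection}
    raise ValueError(f"unknown object type: {object_type}")
```

Either branch ends with the same step: resolving the object's current coordinates to the associated camera and displaying that camera's stream.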
  • the system 1000 may additionally comprise movement-sensors that sense movements of selected virtual-objects 50 and transmit the coordinates of the physical location of the virtual-object 50 in the location 200 of the surveillance area 2000, where the application 300 may enable translating the coordinates of the object's 50 physical location into coordinates of the object's 50 location in the video-image-display.
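A minimal sketch of the coordinate translation just described, assuming a simple per-camera scale-and-offset calibration (a real deployment would likely calibrate a full homography per camera; every name and number here is an illustrative assumption):

```python
from typing import Tuple

def physical_to_display(px: float, py: float,
                        calibration: Tuple[float, float, float, float]
                        ) -> Tuple[int, int]:
    """Map a physical position (metres) to display coordinates (pixels)."""
    sx, sy, ox, oy = calibration  # scale x/y (px per m), offset x/y (px)
    return (round(px * sx + ox), round(py * sy + oy))

# e.g. a camera calibrated at 100 px/m with the room origin at (50, 40) px:
CAL = (100.0, 100.0, 50.0, 40.0)
```

With such a mapping, a sensor report of the target's position in the room can be converted into the display coordinates used by the hyperlink lookup.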
  • the application 300 may enable storing (recording) and processing of the video stream data arriving from the cameras 10 to allow playback of recorded video streams, reaching the correct moment in time at which one camera's 10 stream is shifted to another's.

Abstract

A surveillance system for monitoring a selected object that comprises a multiplicity of video cameras, a software application and at least one computerized medium. The computerized medium enables operating the application, and the application is operatively associated with the cameras, which are installed in different locations in a surveillance area. The system allows a user to select at least one object, such as an image or an area, that is filmed by at least one of said cameras and to monitor said object by automatically operating cameras associated with the object's coordinate location, where the associated camera films the selected object.

Description

Object Content Navigation
FIELD OF THE INVENTION
[0001] The present invention relates generally to the field of surveillance monitoring systems and more particularly, to closed circuit television surveillance systems based on video streams arriving from a multiplicity of video cameras.
BACKGROUND OF THE INVENTION
[0002] Monitoring and surveillance systems often require a multiplicity of cameras, showing a user that supervises a predefined location, facility, property or any other predefined area (e.g. a supervisor, a guard, etc.) video streams arriving from the cameras over a multiplicity of monitors (e.g. screens and/or computers). To supervise an area in which a multiplicity of cameras are installed in different locations, the user has to simultaneously observe a number of screens (or a number of windows in a single screen).
[0003] Patent application number WO2008001345, which is incorporated herein by reference, provides a system for monitoring a closed circuit television center using an enhanced graphic interface, wherein the graphic interface enables intuitive control over multiple video streams located on a two-dimensional map according to their relative physical positions. Icons of the cameras providing the video streams are laid out on the two-dimensional map in proportion to the real physical locations of the cameras in the area being monitored by the system. The user can use the graphical interface to shift from one scale of the map to another, whereby the display can shift from viewing camera icons at their proportional locations to windows showing the actual video streams filmed by the cameras. [0004] To monitor a single moving figure or object image (e.g. a person or a vehicle) that is being filmed in the area, the user usually has to locate the specific camera(s) covering that part of the area and shift to the associated camera in order to view the image or object of his/her desire. This may become an extremely cumbersome task, requiring the user to juggle between the cameras while guessing the location of a moving image.
[0005] The navigation process by which the user moves between cameras is usually carried out by guessing which screen or window belongs to which location, which is often far from intuitive for the user.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0006] The subject matter regarded as the invention will become more clearly understood in light of the ensuing description of embodiments herein, given by way of example and for purposes of illustrative discussion of the present invention only, with reference to the accompanying drawings, wherein
Fig. 1 is a schematic illustration of cameras-installation layout over a location in a predefined surveillance area of the image content navigation system, according to some embodiments of the invention;
Fig. 2 is a schematic illustration of a surveillance system, according to some embodiments of the invention;
Fig. 3 is a schematic illustration of an architectural plan of a surveillance area, according to some embodiments of the invention;
Fig. 4 is a schematic illustration of a graphical user interface of the surveillance system, according to some embodiments of the invention;
Fig. 5 is a schematic illustration of a graphical user interface of the surveillance system, according to other embodiments of the invention;
Fig. 6 is a flowchart schematically illustrating a process for object monitoring, according to some embodiments of the invention; and
Fig. 7 is a flowchart schematically illustrating a process for object monitoring with two optional types of objects, according to some embodiments of the invention.
[0007] The drawings together with the description make apparent to those skilled in the art how the invention may be embodied in practice.
DETAILED DESCRIPTIONS OF SOME EMBODIMENTS OF THE INVENTION
[0008] An embodiment is an example or implementation of the invention. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments. Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment. [0009] While the description below contains many specifics, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of the preferred embodiments. Those skilled in the art will envision other possible variations that are within its scope. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their legal equivalents.
[0010] Reference in the specification to "one embodiment", "an embodiment", "some embodiments" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment, but not necessarily all embodiments, of the invention. It is understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only.
[0011] The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples. It is to be understood that the details set forth herein are not to be construed as a limitation on the application of the invention. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description below.
[0012] It is to be understood that the terms "including", "comprising", "consisting" and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers. The phrase "consisting essentially of", and grammatical variants thereof, when used herein is not to be construed as excluding additional components, steps, features, integers or groups thereof but rather that the additional features, integers, steps, components or groups thereof do not materially alter the basic and novel characteristics of the claimed composition, device or method.
[0013] If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element. It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element. It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included.
[0014] Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. [0015] Methods of the present invention may be implemented by performing or completing, manually, automatically, or a combination thereof, selected steps or tasks. The term "method" refers to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by, practitioners of the art to which the invention belongs. The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
[0016] Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention can be implemented in testing or practice with methods and materials equivalent or similar to those described herein. [0017] Any publications, including patents, patent applications and articles, referenced or mentioned in this specification are herein incorporated in their entirety into the specification, to the same extent as if each individual publication was specifically and individually indicated to be incorporated herein. In addition, citation or identification of any reference in the description of some embodiments of the invention shall not be construed as an admission that such reference is available as prior art to the present invention.
[0018] The present invention, in some embodiments thereof, discloses a surveillance system 1000 and a method for monitoring at least one surveillance-target 99, which may be a real person, a group of people, or an article such as a vehicle, using a multiplicity of surveillance cameras 10 producing real-time video streams. The system 1000 may be used to navigate through a predefined surveillance area 2000 and locations 200 inside that surveillance area 2000, to follow and monitor the surveillance-target 99, filmed by the cameras 10 of the surveillance system 1000, in an intuitive manner allowing a user (e.g. an authorized user such as an operator, a supervisor, a guard or any other type of user as known in the art) to quickly and/or automatically shift from one camera 10 to another in order to follow the moving surveillance-target 99.
[0019] Fig. 1 schematically illustrates a layout of cameras 10 in a location 200, which is one part of the surveillance area 2000. For example, the surveillance area 2000, which the system 1000 may monitor, may be a building with many rooms and/or apartments, where each room or apartment may be defined as a location 200. In the illustrative example of Fig. 1, the location 200 is a room with a window 220 and a door 210, where one camera 10 has a filming field of view covering the area of the window 220 and another camera 10 has a filming field of view covering the area of the door 210. The surveillance-target 99 is shown in the location 200, where at least two cameras 10 (according to this exemplary drawing) can film the surveillance-target 99.
[0020] According to some embodiments of the invention, as illustrated in Fig. 2, the surveillance system 1000 may comprise: a multiplicity of video cameras 10 (e.g. digital cameras 10 enabled to transmit online video streams as digital data); a software application 300; at least one computerized medium 500 (e.g. a server-computer, a laptop or any other computerized machine that can digitally execute programs and transmit, receive and analyze data, as known in the art); and at least one controller 20, which is a hardware unit.
[0021] According to some embodiments of the invention, the computerized medium 500 may enable operating said application 300 and receiving and transmitting data from and to the cameras 10 through the controller 20. The application 300 may be operatively associated with the cameras 10 through the controller 20 and the computerized medium 500, enabling it to operate and control the cameras 10, which receive operative commands transmitted by the application 300 and the computerized medium 500.
[0022] Alternatively, the cameras 10 may enable transmitting data and receiving operational commands directly from the computerized medium 500 (through any kind of communication network known in the art).
[0023] Each camera 10 of the system 1000 may be associated with locating areas in video-image-displays (the displays of the cameras' 10 video streams) in the computerized medium 500 and/or a remote terminal 80, as illustrated in Fig. 2. For example, the coordinates of the area of the video-image-display that shows the door 210 may be associated, as a hyperlink, with one camera 10 that is physically located at the inner side of the room 200 and with another camera that is physically located at the outer side of the room 200. When the user is viewing the video-image-display of the inner room and marks (e.g. clicks) at least a part of the area that displays the door 210, the system 1000 may automatically shift to the first camera 10 (showing its video stream) associated with this area and to the second camera 10 (showing its video stream) associated with this area.
[0024] To associate cameras with areas in the video-image-displays, the system 1000 may comprise predefined hyperlinks, associating each camera 10 of the system 1000 with coordinates of areas in the related video-image-displays.
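The hyperlink association described above can be expressed as a simple lookup table. The following is a minimal illustrative sketch, not part of the patent's disclosure: the camera names, pixel coordinates, and function names are all invented for the example.

```python
# Hypothetical hyperlink table: (camera id, rectangular display area) -> linked cameras.
# The rectangles are (x1, y1, x2, y2) pixel coordinates in that camera's display.
HYPERLINKS = {
    # inner-room camera: the door area links to the camera outside the room
    ("cam_inner", (400, 100, 520, 360)): ["cam_outer"],
    # outer camera: the same door, seen from outside, links back inside
    ("cam_outer", (40, 120, 160, 380)): ["cam_inner"],
}

def cameras_for_click(camera_id, x, y):
    """Return the camera(s) hyperlinked to the display area the user clicked."""
    linked = []
    for (cam, (x1, y1, x2, y2)), targets in HYPERLINKS.items():
        if cam == camera_id and x1 <= x <= x2 and y1 <= y <= y2:
            linked.extend(targets)
    return linked
```

A click at (450, 200) on the inner camera's display falls inside the door rectangle, so the sketch would shift the display to `cam_outer`; a click outside any predefined area returns no linked camera.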
[0025] The system 1000 may allow the user to select at least one virtual-object 50 that is filmed by at least one of the cameras 10 and monitor this selected virtual-object 50 by automatically operating and displaying video streams of an associated camera 10 (that is associated with the selected virtual-object's 50 area in the video-image-display of the video stream).
[0026] The virtual-object 50 may be an area comprising the image of the real surveillance-target 99, or it may be the image of the surveillance-target 99 itself, as shown in the video-image-display displaying online video streams that comprise the image of the surveillance-target 99.
[0027] According to some embodiments of the invention, the application 300 may comprise a graphical user interface 100, a database 310 and an object navigator 350, as illustrated in Fig. 2.
[0028] The graphical user interface 100 may allow the user to view the video-image-display of at least one camera 10 and to graphically select a virtual-object 50.
[0029] The database 310 may store the cameras' 10 installation-locations 200 and enable associating the cameras 10 with coordinate-location areas in the video-image-display of each camera 10, allowing creation of the hyperlinks between the virtual-object 50 selected by the user and the associated camera 10 that allows the user to view the virtual-object 50, by identifying the coordinates of the objects 50 in the image-display.
[0030] The navigator 350 may enable processing the data of the video streams arriving from the cameras 10 according to a selected object's 50 location 200.
[0031] According to some embodiments of the invention, as illustrated in Fig. 2, the object navigator 350 may comprise a selected area module 351 and a selected image module 352. The selected area module 351 allows monitoring a virtual-object 50, which is an area in the video image that has been graphically selected by the user. The selected image module 352 allows monitoring a moving image virtual-object 50 (e.g. a moving person that is filmed by the system's 1000 cameras 10) by selecting an image as a virtual-object 50.
[0032] According to some embodiments of the invention, the application 300 may enable a user to view recorded video streams filmed by at least some of the cameras 10 of the system 1000.
[0033] The system 1000 may be used to view video-image-displays that are either filmed online in real time or video-image-displays of offline recorded streams that were filmed by the system's 1000 cameras in the past.
[0034] According to some embodiments of the invention, the object navigator 350 may additionally comprise an architectural module 353, as illustrated in Fig. 2. The architectural module 353 may comprise an architectural plan 2100 of the surveillance area 2000 with at least some of its locations 200.
[0035] Fig. 3 is a schematic illustration of an architectural plan 2100 of a surveillance area 2000, according to some embodiments of the invention.
[0036] According to these embodiments, the architectural plan 2100 may comprise a drawing of the locations 200 of the surveillance area 2000 and the location of access points 290 such as egresses enabling access to each location 200.
[0037] According to some embodiments of the invention, the architectural module
353 may enable linking the associated camera(s) 10 that is (are) relevant to the selected virtual-object 50 by using an architectural plan 2100 of the surveillance area 2000.
[0038] Fig. 3 schematically illustrates an architectural plan 2100 of a surveillance area 2000, according to some embodiments of the invention. The architectural plan 2100 may depict at least some of the locations 200 of the surveillance area 2000 and access points 290 enabling access to these locations 200. The architectural module 353 may enable linking of at least one associated camera 10 with the virtual-object 50 by using the architectural plan 2100 of the surveillance area 2000; the architectural module 353 may enable identifying the camera 10 that is associated with the selected virtual-object 50 by identifying the access point(s) 290 in the virtual-object 50 and linking the identified access point(s) 290 with their associated camera(s) 10.
[0039] For example, if the user has selected an area as the virtual-object 50 in which a door access point 290 is predefined in the architectural module 353 according to the architectural plan 2100, the camera 10 in the next location 200 may be automatically associated with that access point 290 and linked as the camera 10 whose filmed video streams will be displayed as video-image-displays.
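The access-point linking of the architectural module might be sketched as follows. This is an illustrative assumption about one possible implementation, not the patent's own code; the access-point names, positions, and camera identifiers are hypothetical.

```python
# Hypothetical access-point table derived from an architectural plan:
# (access point name, (x, y) position in the display, camera covering the next location).
ACCESS_POINTS = [
    ("door_210", (460, 230), "cam_corridor"),
    ("window_220", (120, 80), "cam_yard"),
]

def linked_cameras(selected_area):
    """Return cameras associated with every access point that lies inside
    the user-selected virtual-object area (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = selected_area
    return [cam for _name, (x, y), cam in ACCESS_POINTS
            if x1 <= x <= x2 and y1 <= y <= y2]
```

Selecting an area that contains the door access point would thus link the corridor camera as the next stream to display.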
[0040J According to some embodiments of the invention, the each access point 290 may be associated with at least one camera 10 where the association may be stored in the database 310.
[0041] Additionally or alternatively, the GUI 100 may comprise retrieving options allowing the user to select specific cameras 10 or locations 200 filmed in selected time-intervals in order to view specific past-filmed video streams and locations 200. This may enable users to follow an object 50 that has been in the surveillance area 2000 retrospectively.
[0042] According to some embodiments of the invention, the tracking module 360 may enable a user to retrieve recorded video streams from a predefined past time-interval and select a virtual-object 50, where the system 1000 allows viewing the video-image-displays corresponding to the probable passage-tracks 250 calculated by the tracking module 360.
[0043] According to some embodiments of the invention, as illustrated in Fig. 4 and Fig. 5, the graphical user interface 100 may comprise: at least one display window 110 enabling a user to view the video streams that are filmed by the system's 1000 cameras 10 as a video-image-display, where each produced video stream that arrives from the currently associated camera 10 is viewed through the display window 110; and at least one selecting tool 112 (e.g. an arrow or any other movable icon that can be moved across the screen, where, upon moving and clicking a mouse of a computer (medium 500), an area can be graphically defined), where this tool 112 enables the user to graphically select and define a virtual-object 50 from said display window 110.
[0044] The graphical user interface 100 may allow the user to navigate through the locations 200 of the surveillance area 2000 to monitor the virtual-object 50 by selecting the virtual-object 50, using the selecting tool 112, and viewing the selected virtual-object 50 through the display window 110.
[0045] Fig. 4 schematically illustrates a graphical user interface 100 allowing navigation through a selected area, which is the virtual-object 50, according to some embodiments of the invention. The graphical user interface 100 may allow the user to graphically select an area as the virtual-object 50 comprising the image of the surveillance-target 99 by, for example, graphically defining the coordinates of the area (e.g. by drawing a polygon to define the area) using the selecting tool 112, where the application 300 identifies the camera 10 that is associated (hyperlinked) with the coordinates defining the selected area that is defined as the virtual-object 50.
[0046] Fig. 5 schematically illustrates a graphical user interface 100 allowing navigation through a selected moving image as the virtual-object 50, which is the image of the surveillance-target 99. The graphical user interface 100 may allow the user to graphically select a virtual-object 50 by, for example, graphically marking or pointing at the image of the surveillance-target 99 (e.g. a person or a vehicle) using the selecting tool 112. The application 300 may identify the virtual-object 50 in the video-image-display by, for instance, identifying the moving elements in the image in relation to the coordinates of the marker (selecting tool 112). The application 300 may identify the camera 10 that is associated (hyperlinked) with the coordinates defining the positioning of the virtual-object 50 (e.g. an article or a human image) at each moment or each predefined time-interval. To identify the virtual-object 50 once it is marked, the application's 300 selected image module 352 may comprise algorithms enabling it to identify the coordinates of the object (and therefore the associated camera 10) at any given moment/time-interval.
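One plausible way to resolve which moving element the user marked, as paragraph [0046] describes, is to pick the detected moving element nearest the marker's coordinates. The sketch below assumes a list of detections (e.g. from image processing) is available; the detection format and names are illustrative, not taken from the patent.

```python
def select_moving_object(marker, detections):
    """Pick the moving element closest to the user's marker.

    marker:     (x, y) coordinates of the selecting tool at click time.
    detections: list of (object_id, (x, y)) centroids of moving elements
                found in the current video-image-display frame.
    Returns the id of the nearest detection.
    """
    def squared_distance(det):
        (_oid, (x, y)) = det
        return (x - marker[0]) ** 2 + (y - marker[1]) ** 2

    return min(detections, key=squared_distance)[0]
```

A click at (110, 90) with detections at (100, 100) and (300, 300) selects the former; the selected id can then be tracked frame-to-frame to keep resolving its coordinates and, through the hyperlink table, its associated camera.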
[0047] According to some embodiments of the invention, as illustrated in Fig. 4 and Fig. 5, the graphical user interface 100 further comprises activators 150, which are areas in said graphical user interface 100 that allow the user to define features of display, record video streams data and save settings. The interface 100 may also comprise other links 160 enabling, for example, selecting cameras 10 and defining camera 10 settings. Some of the activators 150 may allow the user to select options out of predefined lists, using, for example, scrollbars 155.
[0048] According to some embodiments of the invention, one of the activators 150 may be a "Shift to planar view" button or link, enabling the user to view a map of the surveillance area 2000, or of at least one of its locations 200, with icons showing the cameras 10 and/or video-image-displays of the video streams located along the map in positions that are proportional to the cameras' 10 real physical locations. This viewing technique may be the one described in patent application number WO2008001345, which is incorporated by reference into this application (see Background).
[0049] Alternatively, the planar view may enable the user to see all the locations 200 of the surveillance area 2000 by dividing the display window 110 into several windows showing a number of locations' 200 video-image-displays simultaneously, to help the user find and select the virtual-object 50 according to the surveillance-target 99 he/she wishes to monitor.
[0050] Additionally, as illustrated in Fig. 4 and Fig. 5, the graphical user interface 100 may also comprise a "Previous" button 118A and a "Next" button 118B, allowing the user to go back and/or forward to view previous and later video-image-displays.
[0051] Fig. 6 is a flowchart schematically illustrating a process for monitoring of surveillance-targets 99, according to some embodiments of the invention. This process may comprise:
• selecting the planar view 51 in the user interface's 100 display window 110, where the user may view more than one of the surveillance area's 2000 locations 200;
• selecting a location 52;
• finding an object 53 that he/she wishes to monitor;
• selecting an object 54, where the user graphically selects the virtual-object 50 from the video-image-display in the display window 110;
• identifying at least one camera 10 that is associated with the area in the video-image-display 55 of the selected virtual-object 50 and that is able to film said selected virtual-object 50, according to the coordinates defining the location of the selected virtual-object 50;
• linking the area of the selected object 56 in the video-image-display to the identified associated camera 10 (e.g. by creating a hyperlink between the area and the associated camera 10); and
• displaying the video-image-display of the video streams arriving from the associated camera 57.
[0052] The user may choose to select another object 57 and repeat steps 51-57 or steps 54-57.
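The core of the Fig. 6 flow (steps 55-57: identify, link, display) might be condensed into a few lines. This sketch assumes the hyperlink lookup of earlier paragraphs is available as a dictionary; every name here is a hypothetical placeholder, not the patent's implementation.

```python
def monitor(selected_area, hyperlinks, streams):
    """Given a selected virtual-object area, identify the associated camera
    (step 55), link it (step 56), and return its video stream for display
    (step 57)."""
    camera = hyperlinks.get(selected_area)   # step 55: identify associated camera
    if camera is None:
        raise LookupError("no camera associated with the selected area")
    return camera, streams[camera]           # steps 56-57: link and display
```

Selecting a different object simply re-runs this lookup with the new area, mirroring the "repeat steps 54-57" loop of the flowchart.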
[0053] Fig. 7 is a flowchart schematically illustrating a process for object monitoring with two optional types of objects, according to some embodiments of the invention. This process may comprise:
• selecting the planar view 61 in the user interface's 100 display window 110, where the user may view more than one of the surveillance area's 2000 locations 200;
• selecting a location 62;
• selecting an object-type 63, where the user may select the type of virtual-object 50 he/she wishes to monitor (e.g. a moving image, such as a person's image, or an area virtual-object 50 defining the area in the video-image-display he/she wishes to view);
• selecting an object 64, where the user graphically selects the virtual-object 50 from the video-image-display in the display window 110 according to the selected object type (e.g. a moving image virtual-object 50 may be selected by marking the virtual-object 50 with the selecting tool 112 and double-clicking the mouse, whereas the area virtual-object 50 may be selected by defining an area in the display window 110 that comprises an object the user wishes to monitor, e.g. by using the selecting tool 112 to define a closed shape such as a polygon comprising the virtual-object 50).
[0054] As illustrated in Fig. 7, if the user selects the area 64 as a virtual-object 50, the process may comprise:
• enabling a user to define the area 65 (e.g. by marking or coloring a closed shape such as a polygon or a circle that comprises an image or article the user wishes to view);
• identifying the area object's location 66 by, for example, identifying the coordinates encircling the shape defining the area;
• identifying the associated camera 10 which is associated with the identified location 67;
• linking the area or the location of the area object 68 with the associated camera 10;
• displaying the video-image-display of the associated camera 69 in the display window 110.
[0055] The user may select another area 70, for example, to monitor a moving image by repeatedly selecting the areas in which he/she spots the image.
[0056] As illustrated in Fig. 7, if the user selects the moving image 64 as a virtual-object 50, the process may comprise:
• defining the image 71 (e.g. by selecting it or by marking the contours of the image's figure);
• identifying the image's location 72 by, for example, identifying the coordinates in which the object's contours are (or were at the moment the virtual-object 50 has been selected);
• identifying the associated camera 10 which is associated with the identified location 73;
• linking the area or the location of the image virtual-object 50 with the associated camera 74;
• displaying the video-image-display of the associated camera 75 in the display window 110.
[0057] Since the image can move, steps 72-75 may automatically repeat according to a predefined program (algorithm) in which the application's 300 selected image module 352 may enable identifying the image virtual-object 50 and its changing location, and therefore its changing associated cameras 10 (e.g. by image processing).
[0058] Alternatively or additionally, to detect the location of the moving or non-moving virtual-object 50, and therefore the associated camera 10, the system 1000 may additionally comprise movement-sensors enabled to sense movements of selected virtual-objects 50 and transmit the coordinates of the physical location of the virtual-object 50 in the location 200 of the surveillance area 2000, where the application 300 may enable translating the coordinates of the object's 50 physical location into coordinates of the object's 50 location in the video-image-display.
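The physical-to-display translation mentioned in paragraph [0058] could, in the simplest case, be a per-camera linear calibration. The affine model below is an assumption for illustration (the patent does not specify the mapping); the scale and offset values are invented.

```python
def physical_to_display(physical_xy, scale, offset):
    """Map sensed physical floor coordinates (e.g. metres) to pixel
    coordinates in a camera's video-image-display, using a per-camera
    linear calibration: pixel = physical * scale + offset (per axis)."""
    px, py = physical_xy
    return (px * scale[0] + offset[0], py * scale[1] + offset[1])
```

For example, with a calibration of 100 pixels per metre and a (50, 20) pixel origin offset, a target sensed at (2.0 m, 3.0 m) maps to display coordinates (250, 320), which can then be matched against the hyperlinked display areas. A real deployment would more likely use a full homography per camera, but the principle is the same.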
[0059] According to some embodiments of the invention, to shift smoothly from one camera 10 to another (either when using the moving image type and its related selected image module 352, or the area virtual-object 50 and its related selected area module 351), the application 300 (and its modules, e.g. 351, 352) may enable storing (recording) and processing of the video stream data arriving from the cameras 10, to allow playback of recorded filmed video streams so as to reach the correct moment in time at which one camera's 10 stream is shifted to another's.
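The time-aligned shift of paragraph [0059] implies per-camera buffering of timestamped frames, so that when the display switches cameras, playback can resume at the matching instant. The ring-buffer sketch below is one hypothetical way to do this; the class and its API are illustrative, not the patent's design.

```python
from collections import deque

class StreamBuffer:
    """Per-camera buffer of recent (timestamp, frame) pairs, so a shift to
    this camera can start from the frame matching the shift moment."""

    def __init__(self, maxlen=300):
        self.frames = deque(maxlen=maxlen)  # oldest entries drop automatically

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def frame_at(self, timestamp):
        """Most recent buffered frame not later than `timestamp`
        (None if nothing that old is buffered)."""
        best = None
        for t, f in self.frames:
            if t <= timestamp:
                best = f
        return best
```

On a shift at time t, the application would display `buffer[next_camera].frame_at(t)` and play forward from there, so the hand-over between streams lands on the same moment in both cameras.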
[0060] While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Those skilled in the art will envision other possible variations, modifications, and applications that are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.

Claims

What is claimed is:
1. A surveillance system for monitoring at least one surveillance-target, said system comprising a multiplicity of video cameras, a software application and at least one computerized medium, wherein said computerized medium enables operating said application, and wherein said application is operatively associated with said cameras, which are installed in different locations of at least one surveillance area and produce video streams that are displayed to the user through video-image-displays; and wherein said system allows a user to select at least one virtual-object 50 in said video-image-display that is filmed by at least one of said cameras and monitor said surveillance-target by automatically identifying at least one camera that is associated with said virtual-object, linking said virtual-object location in the video-image-display with the associated camera and displaying a video-image-display of said associated camera.
2. The surveillance system of claim 1 further allowing online viewing of video streams filmed in real time as said video-image-displays.
3. The surveillance system of claim 2 further allows offline viewing of video streams filmed in a selected past time-interval as said video-image-displays.
4. The surveillance system of claim 1 further comprising at least one controller operatively associated with said cameras of said system, which is a hardware unit enabling operation and control of said cameras according to operative commands transmitted by said application.
5. The surveillance system of claim 1 wherein said application comprises a graphical user interface, wherein said interface comprises: at least one display window displaying the video-image-displays of video streams that are filmed by at least one of the system's cameras; and at least one selecting tool 112, which enables the user to graphically select a virtual-object from said display window; wherein said graphical user interface allows said user to navigate through the locations of said at least one area to monitor said virtual-object by selecting said virtual-object, which acts as a hyperlink to at least one pre-assigned camera of said system, using said selecting tool and viewing said virtual-object through said display window.
6. The surveillance system of claim 5 wherein said application further comprises an object navigator, which processes the video streams data arriving from said cameras according to the location coordinates of said selected virtual-object.
7. The surveillance system of claim 6 wherein said object navigator comprises a selected area module enabling the user to define the virtual-object by selecting an area in said video-image-display on said display window, using said selecting tool by marking said area with said tool, wherein upon selecting said area, said system automatically operates at least one camera of said system that is associated with the selected area.
8. The surveillance system of claim 7 wherein said object navigator further comprises a selected image module enabling the user to define the virtual-object by selecting the moving image of the surveillance-target on said display window, using said selecting tool, by marking said image virtual-object with said tool, wherein upon selecting said image virtual-object said application automatically links at least one camera of said system that is associated with the location of the selected image virtual-object allowing the user to monitor said surveillance-target by monitoring the image virtual-object.
9. The surveillance system of claim 8 wherein said object navigator further comprises an architectural module enabling linking of at least one associated camera with the virtual-object by using an architectural plan of the surveillance area depicting at least some of the locations of the surveillance area and access points enabling access to said locations, wherein each access point may be associated with at least one camera; wherein said architectural module enables linking the camera that is associated with the selected virtual-object by identifying the access point in said object and identifying the camera that is associated with said identified access point.
10. The surveillance system of claim 8 wherein said graphical user interface allows the user to select the virtual-object type, wherein said virtual-object type is one of: image virtual-object and area virtual-object.
11. The surveillance system of claim 6 wherein said object navigator comprises a selected image module enabling the user to define the virtual-object by selecting the moving image of said surveillance-target on said display window, using said selecting tool, by marking said image as the selected virtual-object with said tool, wherein upon selecting said image virtual-object said application automatically links to at least one camera of said system that is associated with the location of the selected image virtual-object, allowing the user to monitor the surveillance-target by monitoring the moving image virtual-object.
12. The surveillance system of claim 5 wherein said graphical user interface further comprises activators, which are areas in said graphical user interface that allow the user to define features of display, record video streams data, save settings, select cameras and define cameras settings.
13. The surveillance system of claim 12 wherein at least some of said activators further enable the user to shift to a planar view, in which the display window shows a layout of at least some of the system's cameras over a map showing at least a part of the surveillance area.
14. A software application comprising: a graphical user interface; and an object navigator; wherein said graphical user interface allows a user to view video-image-displays of online video streams arriving from at least one of a multiplicity of cameras and to navigate through said cameras to monitor at least one surveillance-target 99 by selecting a virtual-object; wherein said navigation is carried out through said object navigator, which processes the video streams arriving from said cameras according to said selected virtual-object's location on the video-image-display, to link the area in the video-image-display in which the virtual-object is located with an associated camera 10 and to display the video-image-display of said associated camera.
15. A method for monitoring at least one surveillance-target by navigating through a surveillance area's locations, using online video-image-displays of video streams arriving from a multiplicity of video cameras, said method comprising:
• enabling a user to select a virtual-object, which comprises an image of said surveillance-target, wherein said user graphically selects said virtual-object from said video-image-display;
• identifying at least one camera that is associated with the area in said video-image-display of said selected virtual-object and that is able to film said selected virtual-object according to the location coordinates of said selected object;
• linking said object's area in said video-image-display to said identified associated camera; and
• displaying the video-image-display of the video streams of said associated camera.
16. The method of claim 15 further comprising online transmitting of video streams according to which the location of said selected virtual-object 50 is identified, wherein said cameras continuously transmit said online video streams as digital data.
17. The method of claim 15 further comprising selecting a virtual-object type whereby said virtual-object type is one of: an area virtual-object type, wherein said virtual-object is an area in said video-image-display comprising an image of the surveillance-target; and an image virtual-object type, wherein said virtual-object is the moving image of the surveillance-target.
PCT/IL2009/000373 2008-04-02 2009-04-05 Object content navigation WO2009122416A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IL190584 2008-04-02
IL190584A IL190584A0 (en) 2008-04-02 2008-04-02 Object content navigation
US12/061,035 2008-04-02
US12/061,035 US9398266B2 (en) 2008-04-02 2008-04-02 Object content navigation

Publications (2)

Publication Number Publication Date
WO2009122416A2 true WO2009122416A2 (en) 2009-10-08
WO2009122416A3 WO2009122416A3 (en) 2010-03-18

Family

ID=41136011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2009/000373 WO2009122416A2 (en) 2008-04-02 2009-04-05 Object content navigation

Country Status (1)

Country Link
WO (1) WO2009122416A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2811465A1 (en) * 2013-06-06 2014-12-10 Thales Video surveillance system
CN104284147A (en) * 2013-07-11 2015-01-14 松下电器产业株式会社 Tracking assistance device, a tracking assistance system and a tracking assistance method
CN111083349A (en) * 2018-10-19 2020-04-28 韩国斯诺有限公司 System including camera application program and camera function control method
CN115225963A (en) * 2022-06-24 2022-10-21 浪潮通信技术有限公司 Indoor positioning monitoring method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040100563A1 (en) * 2002-11-27 2004-05-27 Sezai Sablak Video tracking system and method
US20050265582A1 (en) * 2002-11-12 2005-12-01 Buehler Christopher J Method and system for tracking and behavioral monitoring of multiple objects moving through multiple fields-of-view
US20060197839A1 (en) * 2005-03-07 2006-09-07 Senior Andrew W Automatic multiscale image acquisition from a steerable camera
US20060222209A1 (en) * 2005-04-05 2006-10-05 Objectvideo, Inc. Wide-area site-based video surveillance system
US20060274828A1 (en) * 2001-11-01 2006-12-07 A4S Security, Inc. High capacity surveillance system with fast search capability

US11151730B2 (en) System and method for tracking moving objects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09728151

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09728151

Country of ref document: EP

Kind code of ref document: A2