WO2012091537A1 - System and method for navigation and visualization - Google Patents
- Publication number
- WO2012091537A1 (PCT/MY2011/000129)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- surveillance
- display screen
- orientation
- navigation unit
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
Definitions
- the navigation unit 140 may be a handheld device, such as a tablet computer, that is able to communicate with the processing unit 110.
- the handheld device may further comprise an orientation sensor, such that the orientation of the handheld device may similarly be used for controlling the orientation of the 3D model 132.
- the display screen may allow the user 101 to view the 3D model 132 thereon directly and provide the necessary controls.
- the touch screen may further be used to select the surveillance camera of interest.
- FIG. 2A exemplifies a 3D model 200 of an area under surveillance that is shown on the display screen 130 of FIG. 1.
- the 3D model 200 comprises a building 210 with markings 212 representing the exact locations of surveillance cameras.
- the 3D model 200 is pre-generated for simulating the actual area under surveillance.
- the exact location of each surveillance camera is mapped on the 3D model.
- a wireframe rectangular block 250 is further superimposed on the 3D model 200 as a reference to the navigation unit 140 of FIG. 1.
- the 3D model 200 changes its orientation accordingly so that the user 101 is able to view the 3D model 200 at different orientations.
- the rectangular block 250 would not be shown in the actual display as it is provided for reference and illustrations only.
- FIG. 2B exemplifies a screen shot showing the 3D model 200 of FIG. 2A when the 3D model 200 and the surveillance cameras' points of view are shown in thumbnails 235. Each of the thumbnails 235 is referenced to the respective surveillance camera through a line callout for easy reference.
- FIG. 2C illustrates a screen shot of FIG. 2B with one of the thumbnails 235 selected.
- the video is presented in a larger frame size for better viewing.
- FIGs. 3A-3C illustrate schematically movements of a rectangular block in relation to the change in orientation of the 3D CG model.
- FIG. 3A shows a top surface 302 of the rectangular block.
- FIG. 3B shows the top surface 302 where there is a foreshortening on the top surface, indicating that the rectangular block has tilted slightly, which can be detected easily with imaging methods that are widely known in the art.
- FIG. 3C shows three separate orientations of the rectangular block (on the left), wherein the 3D CG model of the landscape of the area under surveillance that is denoted by the rectangular block is oriented accordingly on the display screen.
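The foreshortening cue of FIG. 3B can be reduced to a single tilt angle: for a planar board rotating about one axis, the projected height shrinks by the cosine of the tilt. A minimal Python sketch (the function name and pixel values are illustrative, not part of the disclosure):

```python
import math

def tilt_from_foreshortening(apparent_height: float, true_height: float) -> float:
    """Estimate the board's tilt (degrees) from its foreshortened height.

    For a planar board rotating about a horizontal axis, the projected
    height scales with cos(tilt), so tilt = acos(apparent / true).
    """
    ratio = max(0.0, min(1.0, apparent_height / true_height))
    return math.degrees(math.acos(ratio))

# A board 100 px tall appearing as 50 px is tilted about 60 degrees.
print(round(tilt_from_foreshortening(50, 100)))  # 60
```

A real implementation would first locate the board in the video frame with standard image processing before measuring the apparent height.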
- FIG. 4 illustrates a flow diagram of controlling and navigating surveillance system in accordance with one embodiment of the present invention.
- the method comprises mapping a three dimensional (3D) computer generated (CG) model with camera locations at step 402; receiving input from multiple surveillance cameras at step 404; receiving input from the navigation unit at step 406; moving the CG model according to the orientation information provided by the navigation unit at step 408; determining the visible camera view points and short-listing the visible camera IDs at step 410; calculating the placeholders for the short-listed cameras at step 412; and displaying the short-listed real-time videos on the predefined video placeholders at step 414.
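The steps above can be sketched as one refresh pass of the display. The class and method names below are hypothetical stand-ins for the disclosed units, and step 410 is stubbed to return every camera:

```python
class Model3D:
    """Minimal stand-in for the pre-generated 3D CG model (illustrative only)."""
    def __init__(self):
        self.camera_ids = []
        self.yaw = 0.0
    def mark_cameras(self, ids):            # step 402: mark camera locations
        self.camera_ids = list(ids)
    def set_orientation(self, yaw):         # step 408: move the CG model
        self.yaw = yaw
    def visible_cameras(self):              # step 410 (stub: treat all as visible)
        return self.camera_ids
    def thumbnail_slots(self, ids):         # step 412: stack slots down the right edge
        return {cid: (0.9, 0.1 + 0.2 * i) for i, cid in enumerate(ids)}

def pipeline_pass(model, frames, nav_yaw):
    """Steps 406-414 of FIG. 4 for one refresh: orient, shortlist, place, display."""
    model.set_orientation(nav_yaw)                 # steps 406/408
    shortlisted = model.visible_cameras()          # step 410
    slots = model.thumbnail_slots(shortlisted)     # step 412
    # step 414: pair each placeholder with that camera's latest frame
    return {cid: (slots[cid], frames[cid]) for cid in shortlisted}

m = Model3D()
m.mark_cameras(["cam1", "cam2"])                   # step 402
out = pipeline_pass(m, {"cam1": "frame1", "cam2": "frame2"}, nav_yaw=30.0)
```

Steps 404 (camera input) and 414 (rendering) are represented here only by the `frames` dictionary and the returned placements.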
- the 3D CG model is pre-generated for simulating the area under surveillance.
- the surveillance camera locations are mapped on the 3D CG model.
- video inputs from the actual surveillance cameras in the area under surveillance are incorporated into the 3D CG model.
- the navigation input is received and provided.
- the navigation input controls the orientation or the position of the 3D CG model.
- the navigation input may be provided through a detection unit and/or navigation unit depending on the configuration of the navigation system.
- the step 406 further comprises processing the videos taken from the standard camera to determine the orientation of the marker board, and sending the determined orientation information to change the orientation of the 3D CG model.
- the step 406 further comprises detecting the orientation of the handheld device through the orientation sensor, and sending the determined orientation information to change the orientation of the 3D CG model.
- the navigation unit is handheld by a user as a controller for controlling orientations of the 3D CG model.
- the navigation unit's orientations are determined, and in turn the orientation information is sent to orient the 3D CG model correspondingly on the display screen.
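Orienting the model from the navigation unit's readings amounts to applying a rotation to each model vertex. A sketch under the assumption that the unit reports yaw and pitch in degrees (the axis conventions are illustrative):

```python
import math

def rotate_yaw_pitch(point, yaw_deg, pitch_deg):
    """Rotate a model vertex by the yaw/pitch reported by the navigation unit,
    so the on-screen 3D CG model mirrors the handheld unit in real time."""
    x, y, z = point
    cy, sy = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    x, z = cy * x + sy * z, -sy * x + cy * z      # yaw about the vertical axis
    cp, sp = math.cos(math.radians(pitch_deg)), math.sin(math.radians(pitch_deg))
    y, z = cp * y - sp * z, sp * y + cp * z       # pitch about the horizontal axis
    return (x, y, z)

# A 90-degree yaw swings a point on the +x axis onto the -z axis.
print(rotate_yaw_pitch((1.0, 0.0, 0.0), 90.0, 0.0))
```

In practice every vertex of the model (or a single model-space transform) would be updated each time new orientation information arrives.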
- the method further determines the surveillance cameras that are visible at the current orientation of the 3D CG model.
- the visible surveillance cameras are short-listed with their respective identities (IDs).
- the short-listing process filters out the surveillance cameras that are not to be shown on the display screen.
- surveillance cameras that are positioned behind the scene of the 3D CG model need not be shown on the display screen.
- the viewpoints of the surveillance cameras are also determined through the surface information of the 3D CG model for short-listing the appropriate visible camera IDs.
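The short-listing can be approximated with a facing test: a camera marking whose surface normal points toward the viewer is kept, while one behind the scene is filtered out. A hedged sketch (the camera IDs and normals are invented for illustration):

```python
def shortlist_visible(cameras, view_dir):
    """Return IDs of camera markings whose surface normal faces the viewer,
    i.e. whose normal has a negative dot product with the viewing direction.
    `cameras` maps camera IDs to unit surface normals in model space."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    return [cid for cid, normal in cameras.items() if dot(normal, view_dir) < 0]

cams = {
    "front-door": (0.0, 0.0, -1.0),   # faces the viewer at this orientation
    "rear-yard":  (0.0, 0.0, 1.0),    # behind the scene, filtered out
}
print(shortlist_visible(cams, view_dir=(0.0, 0.0, 1.0)))  # ['front-door']
```

A fuller test would also consult the surface (depth) information of the model, as the visible point map of FIG. 5B does, to handle occlusion by other geometry.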
- placements of video thumbnails of the short-listed surveillance cameras on the display screen are calculated according to the orientation of the 3D CG model on the display screen.
- the placements of the video thumbnails change dynamically as the orientation of the 3D CG model changes.
- the system determines the closest point from the surveillance camera location point and places the thumbnails to the bottom right of the display screen by default.
- when the frame size (i.e. width and height) of a thumbnail appears to be smaller than the intended size, it signifies that the thumbnail is placed partly out of the display screen, and accordingly, the thumbnail will be relocated to another location.
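The relocation rule can be sketched as a clipping check: if the default bottom-right placement would extend past the screen edge (shrinking the visible frame below the intended size), flip the thumbnail to the other side of its anchor. The pixel dimensions below are illustrative:

```python
def place_thumbnail(anchor_px, thumb_w, thumb_h, screen_w, screen_h):
    """Place a thumbnail at the bottom right of its camera's screen anchor;
    if that placement would clip it at a screen edge, relocate it to the
    opposite side of the anchor so the full frame stays visible."""
    x, y = anchor_px
    if x + thumb_w > screen_w:        # clipped on the right -> flip left
        x = x - thumb_w
    if y + thumb_h > screen_h:        # clipped at the bottom -> flip up
        y = y - thumb_h
    return (x, y)

print(place_thumbnail((100, 100), 160, 90, 1920, 1080))   # (100, 100): fits as-is
print(place_thumbnail((1900, 100), 160, 90, 1920, 1080))  # (1740, 100): flipped left
```

Re-running this check whenever the model's orientation changes gives the dynamic repositioning described above.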
- the video thumbnails are displayed according to the assigned placements on the display screen with real-time video captured by the surveillance cameras.
- each surveillance camera in the 3D CG model is assigned a 3D location denoted by x, y and z, or a 3D coordinate.
- the 3D location of the surveillance camera can be used as a factor for the short-listing process.
- the surveillance camera with a 3D location that faces the user will be short-listed.
- it is known that the surveillance cameras are fixed at their actual locations and the field of view is also fixed within a range; as the 3D CG model changes its orientation shown on the display screen, the 3D location of each surveillance camera changes accordingly.
- the markings of surveillance cameras that are not visible at the current orientation of the 3D CG model will not be shown on the display screen.
- FIG. 5A illustrates a 3D CG model 500 of FIG. 2A.
- FIG. 5B illustrates a visible point map 505 of the 3D CG model 500. The visible point map 505 is generated to determine the surveillance cameras to be shown on the display screen as the 3D CG model changes its orientations.
- the visible point map 505 is formed by all visible points in two-dimensional (2D) view, i.e. all points that are visible on the display screen. These visible points are surface points on the 3D CG model 500 having corresponding 3D coordinates in the 3D CG model 500.
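The visible point map can be built with a simple depth buffer: every projected surface point competes per screen pixel, and only the nearest survives, giving each visible 2D point its corresponding 3D coordinate. A sketch with invented point data:

```python
def build_visible_point_map(surface_points, width, height):
    """Build a 2D visible point map in the spirit of FIG. 5B.

    `surface_points` is a list of (screen_x, screen_y, depth, (X, Y, Z))
    tuples; for each on-screen pixel only the nearest surface point's 3D
    coordinate is kept, so occluded points never enter the map."""
    vmap = {}
    for sx, sy, depth, coord in surface_points:
        if 0 <= sx < width and 0 <= sy < height:
            key = (sx, sy)
            if key not in vmap or depth < vmap[key][0]:
                vmap[key] = (depth, coord)
    return {key: coord for key, (depth, coord) in vmap.items()}

points = [
    (10, 10, 5.0, (1.0, 2.0, 3.0)),   # nearer surface point
    (10, 10, 9.0, (4.0, 5.0, 6.0)),   # occluded behind it at the same pixel
]
print(build_visible_point_map(points, 640, 480))  # {(10, 10): (1.0, 2.0, 3.0)}
```

A camera marking is then shown only if its 3D location matches a coordinate present in the map, which is one way to realize the short-listing of step 410.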
- the visible point map 505 is provided in FIG. 5B.
Abstract
The present invention provides a surveillance system for an area under surveillance, the surveillance system having a plurality of surveillance cameras installed within the area. The surveillance system comprises a three dimensional (3D) computer generated (CG) model modeling the area under surveillance with locations of all the plurality of surveillance cameras marked therein, a navigation unit for controlling orientations of the 3D CG model for showing on a display screen; a detection unit for acquiring orientation information from the navigation unit; and a processing unit for receiving the orientation information from the detection unit, wherein the orientation information is processed and associated to the orientations of the 3D CG model in a manner that the orientation of the 3D CG model shown on the display screen is directly controlled through the navigation unit in real-time. A method for controlling the surveillance system is also provided herewith.
Description
System and Method for Navigation and Visualization
Field of the Invention
[0001] The present invention relates to video surveillance systems. In particular, the present invention relates to a system and method for navigating and visualizing a video surveillance system within a given area.
Background
[0002] Video surveillance systems are being widely used to ensure the security of an area. To minimize blind spots on the video surveillance systems, generally, many surveillance video cameras are required to cover as many regions and angles as possible. However, the large number of video cameras can be a challenge for security personnel to monitor all the required screens at all times. Further, it requires very well-trained personnel to navigate and control the large number of video cameras in an intense situation.
[0003] Suspicious behaviors can be detected using video analytics and video processing methods. When an event is detected, there can be limited lead time between the event being detected and the security personnel reaching the location where the event occurred. During this period, the person who commits the crime (the event) could have gone away. It is difficult to track the person through a large number of cameras.
[0004] US 7,511,736 entitled "Augmented Reality Navigation System" discloses a navigation system utilizing augmented reality that uses captured images to generate orientation information, wherein the pitch, yaw and roll are periodically derived from a sensor. In between the periodic updates, the pitch, roll and yaw information is derived by capturing an image of the observed scene, identifying reference image components (RICs) in the images and comparing those RICs with subsequently captured images to derive orientation information.
Summary
[0005] In one aspect of the present invention, there is provided a surveillance system for an area under surveillance, the surveillance system having a plurality of surveillance cameras installed within the area. The surveillance system comprises a three dimensional (3D) computer generated (CG) model modeling the area under surveillance with locations of all the plurality of surveillance cameras marked therein, a navigation unit for controlling orientations of the 3D CG model for showing on a display screen; a detection unit for acquiring orientation information from the navigation unit; and a processing unit for receiving the orientation information from the detection unit, wherein the orientation information is processed and associated to the orientations of the 3D CG model in a manner that the orientation of the 3D CG model shown on the display screen is directly controlled through the navigation unit in real-time.
[0006] In one embodiment, the navigation unit includes a marker board and the detection unit includes a video camera, wherein the video camera is operable to record orientations of the marker board, and wherein the recorded video is processed by the processing unit to generate the orientation information.
[0007] In another embodiment, the navigation unit includes an orientation sensor and the detection unit includes a transmission unit to transmit the signals from the orientation sensor to the processing unit to generate the orientation information.
[0008] Yet in a further embodiment, the navigation unit includes a keyboard or computer mouse. Live videos captured by the plurality of surveillance cameras may be displayed with reference to the respective surveillance camera on the display screen together with the 3D CG model. It is possible that the live videos are displayed on the display screen in thumbnails alongside the 3D CG model. The thumbnails may be positioned dynamically alongside the 3D CG model. Alternatively, the live videos of only the surveillance cameras displaying at the forefront of the 3D CG model are displayed on the display screen.
[0009] In another aspect of the present invention, there is provided a method for controlling a surveillance system for an area under surveillance with a plurality of surveillance cameras installed within the area. The method comprises creating a 3D CG model by modeling the area under surveillance; marking locations of all the plurality of surveillance cameras on the 3D CG model; displaying the 3D CG model on a display screen; controlling the orientation of the 3D CG model through a navigation unit; acquiring orientation information through the navigation unit; processing the orientation information; and associating the orientation information to the orientations of the 3D CG model in a manner that the orientation of the 3D CG model shown on the display screen is directly controlled through the navigation unit in real-time.
[0010] In one embodiment, the method may further comprise recording videos of the navigation unit to detect the orientations of the navigation unit to generate
orientation information, and detecting orientations of the navigation unit through an orientation sensor residing therein to generate orientation information.
[0011] In a further embodiment, the method may further comprise displaying live videos captured by the plurality of the surveillance cameras with reference to the respective surveillance camera on the display screen together with the 3D CG model. The live videos may be displayed on the display screen in thumbnails alongside the 3D CG model. Preferably, the thumbnails are positioned dynamically alongside the 3D CG model. More preferably, the live videos of only the surveillance cameras displaying at the forefront of the 3D CG model are displayed on the display screen.
Brief Description of the Drawings
[0012] This invention will be described by way of non-limiting embodiments of the present invention, with reference to the accompanying drawings, in which:
[0013] FIG. 1 illustrates schematically a surveillance system in accordance with one embodiment of the present invention; [0014] FIG. 2A exemplifies a 3D model of an area under surveillance that is shown on the display screen of FIG. 1;
[0015] FIG. 2B exemplifies a screen shot showing the 3D model of FIG. 2A when the 3D model and the surveillance cameras' points of view are shown in thumbnails; [0016] FIG. 2C illustrates a screen shot of FIG. 2B with one of the thumbnails selected;
[0017] FIG. 3A shows a top surface of the rectangular block;
[0018] FIG. 3B shows the top surface where there is a foreshortening on the top surface;
[0019] FIG. 3C shows three separate orientations of the rectangular block, wherein the 3D CG model of the landscape of the area under surveillance that is denoted by the rectangular block is oriented accordingly on the display screen;
[0020] FIG. 4 illustrates a flow diagram of controlling and navigating a surveillance system in accordance with one embodiment of the present invention;
[0021] FIG. 5A illustrates a 3D CG model of FIG. 2A; and [0022] FIG. 5B illustrates a visible point map of the 3D CG model.
Detailed Description
[0023] In line with the above summary, the following description of a number of specific and alternative embodiments is provided to understand the inventive features of the present invention. It shall be apparent to one skilled in the art, however, that this invention may be practiced without such specific details. Some of the details may not be described at length so as not to obscure the invention. For ease of reference, common reference numerals will be used throughout the figures when referring to the same or similar features common to the figures.
[0024] The present invention provides a system and method that create situational awareness between a large number of cameras and their corresponding physical locations, using a computer-generated (CG) model of the area under surveillance as a medium to navigate and visualize the area of interest and create total situational awareness. The CG model comprises a 3D (three dimensional) environment that includes the interior and exterior of the building, if any, where the cameras and their respective views are mapped on the CG model. Each camera view is mapped onto the 3D model as a virtual camera and can be displayed on one single display screen. The cameras installed can be shown on the exterior model for easy selection. An intuitive navigation system for controlling orientations of the CG model can be made through an orientation sensor and augmented reality input. The navigation system helps to establish and associate the cameras on the display screen with their respective physical locations.
[0025] in the current system and method, it is possible to provide a single video view displaying only the camera of interest based on the orientations of the CG model. There are various means adaptable for controlling the orientation of the CG model. These means may include any pointing devices such as computer mouse, keyboard, or other orientation sensing devices. In one embodiment, it is possible to use a marker board acting as ground of the area under surveillance as a trigger to move the camera view. The present system and method provides a real-time link lo locate the cameras at physical area through the navigation means. [0026] FIG. 1 illustrates schematically a surveillance system 100 in accordance with one embodiment of the present invention. I he surveillance system 1.00 comprises a processing unit 110. a controller 120, a display screen 130, a navigation unit 140 and a detection unit 150. The controller 120 can be any established surveillance system in
the art that comprises a surveillance system 121 and a plurality of surveil lance cameras 122 connected thereto. The plurality of surveillance cameras 122 are installed in various locations of an area under surveillance. Usually, a large number of surveillance cameras 122 are required to ensure that (he area can be closely watched with minimal or no blink spot. The surveillance cameras 122 may be installed within a building of the area or outside the building. The processing unit 1.10 is adapted to acquire videos from the surveillance controller 121 and process accordingly. In the processing unit 110, there is provided a pre-rendered three-dimensional (3D) model 132 of the area. The 3D model 132 is a computerized 3D graphic that modeling area under surveillance with the plurality of surveillance cameras 122 mapped out on the 3D model 132. The 3D model 132 is displayed on the display screen 130 connected directly to the center controller 110. A user 101 controlling the center controller 110 may gain an interactive vi realization of the 3D model 132 through the navigation unit 140. In the present embodiment, the navigation unit 140 incorporates with the detection unit 150 to form an interactive visual interface allowing user 10.1 to view the area under surveillance through the 3D model 132 at will. Specifically, the navigation unit 140 is a handheld marker board representing the landscape of the area under surveillance. The handheld marker board can be a rectangular flat panel without any shape or configuration form on the panel as the landscape of the area is modeled and shown on the display screen 130. The detection unit 150 is a video camera for capturing videos of the navigation unit 140. The videos are processed by the center controller 110 to determine the orientation of the navigation unit 140. As the orientation of the navigation unit 140 changes, the 3D model 132 displayed on the display scrccd 130 changes its orientation
correspondingly, which allows the user 101 to control and view the area through the 3D model 132 at an angle and orientation that the user 101 desires.
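The link between the navigation unit's orientation and the on-screen model can be illustrated with a short sketch. This is not the patent's implementation — the function name and the yaw-only rotation are illustrative assumptions — but it shows how orientation information derived from the navigation unit 140 could be applied to the 3D model's points before rendering:

```python
import math

def rotate_model_yaw(points, yaw_deg):
    """Rotate the 3D model's points about the vertical (y) axis so the
    on-screen view follows the navigation unit's orientation.

    Minimal sketch: points are (x, y, z) tuples; a full system would also
    apply the pitch and roll reported by the navigation unit.
    """
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    # Standard rotation about the y-axis, applied point by point.
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]
```

For example, a 90-degree yaw carries a point on the +x axis onto the -z axis, i.e. the model turns in place as the marker board is turned.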
[0027] Through the display screen 130, it is desired that the videos taken by the respective surveillance cameras can be displayed in thumbnails 135 alongside the 3D model 132 and further referenced to the respective video cameras. More desirably, the video thumbnails 135 are shown with live videos. Accordingly, the user 101 may have full control of the viewing orientations and angles of the entire landscape of the area under surveillance through the navigation unit 140. With the intuitive controls of the orientations and angles, the user 101 may easily select any one of the surveillance cameras and zoom into the video for a better view.
[0028] In the embodiment described above, the surveillance system 100 provides an intuitive navigation and visualization of the area under surveillance. The 3D model 132 facilitates an interactive means to control the surveillance cameras at the physical location of the area to create real-time situational awareness and shortlist the camera of interest.
[0029] In the above embodiment, the detection unit 150 may be a standard video camera adapted to capture videos of the navigation unit 140 for observing its orientation. Once the orientation information is obtained by the processing unit 110, the 3D model is oriented accordingly and shown on the display screen 130. [0030] In one specific embodiment, the detection unit 150 (i.e. the video camera) can be worn by the user 101 in a manner that the detection unit 150 is viewing in the same direction as the user would.
[0031] In the above embodiment, the navigation unit 140 can be a bare panel without any electronic components provided therein because the detection unit 150, being an imaging device, is able to capture videos of the navigation unit 140 to determine the orientations and angles through image processing techniques that are known in the art. In an alternative embodiment, the navigation unit 140 can be adapted with a communication unit and an orientation sensor therein as a stand-alone controlling unit. In such a configuration, the detection unit 150 can be a receiver connected to the processing unit 110 for allowing communication between the processing unit 110 and the navigation unit 140 through the communication unit. The orientation sensor may include an accelerometer or any gesture detection sensor for detecting the orientation of the navigation unit 140, wherein the communication unit may include any wireless communication module, such as a Bluetooth module, for sending the orientation information to the processing unit 110.
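For the sensor-based variant, the orientation of the navigation unit can be derived from a static accelerometer reading before it is sent over the wireless link. The formulas below are the standard gravity-vector tilt equations, not anything specified by the patent; the function name and axis convention are assumptions:

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Derive pitch and roll (in degrees) from a static accelerometer
    reading inside the navigation unit.

    Standard gravity-vector tilt formulas; assumes the unit is held still
    (no linear acceleration) and that az points out of the panel when flat.
    """
    # Pitch: rotation about the panel's lateral axis.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Roll: rotation about the panel's longitudinal axis.
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

A flat, stationary panel reads roughly (0, 0, 1 g) and yields zero pitch and roll; tilting the panel shifts gravity onto the other axes, and the resulting angles would be packaged by the communication unit and transmitted to the processing unit 110.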
[0032] In a further embodiment, the navigation unit 140 may be a handheld device, such as a tablet computer, that is able to communicate with the processing unit 110. The handheld device may further comprise an orientation sensor, such that the orientation of the handheld device may similarly be used for controlling the orientation of the 3D model 132. As the handheld device has an integrated display screen thereon, the display screen may allow the user 101 to view the 3D model 132 thereon directly and provide the necessary control. The touch screen may further be used to select the surveillance camera of interest.
[0033] FIG. 2A exemplifies a 3D model 200 of an area under surveillance that is shown on the display screen 130 of FIG. 1. The 3D model 200 comprises a building 210
with markings 212 representing the exact locations of surveillance cameras. The 3D model 200 is pre-generated for simulating the actual area under surveillance. The exact location of each surveillance camera is mapped on the 3D model. A wireframe rectangular block 250 is further superimposed on the 3D model 200 as a reference to the navigation unit 140 of FIG. 1. As the orientation of the rectangular block 250 changes, the 3D model 200 changes its orientation accordingly so that the user 101 is able to view the 3D model 200 at different orientations. For avoidance of doubt, the rectangular block 250 would not be shown in the actual display as it is provided for reference and illustration only. [0034] FIG. 2B exemplifies a screen shot showing the 3D model 200 of FIG.
2A when the 3D model 200 and the surveillance cameras' points of view are shown in thumbnails 255. Each of the thumbnails 255 is referenced to the respective surveillance camera through a line callout for easy reference.
[0035] FIG. 2C illustrates a screen shot of FIG. 2B with one of the thumbnails 255 selected. When a thumbnail 255 is selected, the video is presented in a larger frame size for better viewing.
[0036] FIGs. 3A-3C illustrate schematically movements of a rectangular block in relation to the change in orientation of the 3D CG model. FIG. 3A shows a top surface 302 of the rectangular block. FIG. 3B shows the top surface 302 where there is a foreshortening on the top surface indicating that the rectangular block has tilted slightly, which can be detected easily with imaging methods that are widely known in the art. FIG. 3C shows three separate orientations of the rectangular block (on the left),
wherein the 3D CG model of the landscape of the area under surveillance that is denoted by the rectangular block is oriented accordingly on the display screen.
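The foreshortening cue of FIG. 3B can be quantified with simple geometry: when the board tilts away from the camera, the apparent length of its side edge shrinks by roughly the cosine of the tilt angle. The sketch below is an assumption-laden illustration, not the patent's detection method — it presumes a square board seen under an approximately affine view and a known corner ordering:

```python
import math

def board_orientation(corners):
    """Estimate yaw (in-plane rotation) and pitch (tilt) of a square marker
    board from its four detected image corners.

    corners: [(x, y)] ordered top-left, top-right, bottom-right, bottom-left.
    Returns (yaw_deg, pitch_deg). Sketch only: assumes an affine view of a
    square board, so foreshortening ratio = cos(pitch).
    """
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = corners
    # Yaw: angle of the top edge relative to the image x-axis.
    yaw = math.degrees(math.atan2(try_ - tly, trx - tlx))
    # Pitch: foreshortening of the side edges relative to the top edge.
    top = math.hypot(trx - tlx, try_ - tly)
    side = (math.hypot(blx - tlx, bly - tly) +
            math.hypot(brx - trx, bry - try_)) / 2
    ratio = min(side / top, 1.0)
    pitch = math.degrees(math.acos(ratio))
    return yaw, pitch
```

For instance, a square board whose side edges appear half as long as its top edge (a ratio of 0.5) corresponds to a tilt of 60 degrees, since cos(60°) = 0.5.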
[0037] FIG. 4 illustrates a flow diagram of controlling and navigating a surveillance system in accordance with one embodiment of the present invention. Briefly, the method comprises mapping a three-dimensional (3D) computer generated (CG) model with camera locations at step 402; receiving input from multiple surveillance cameras at step 404; receiving input from the navigation unit at step 406; moving the CG model according to the orientation information provided by the navigation unit at step 408; determining the visible camera view points and short-listing the visible camera IDs at step 410; calculating the placeholders for the shortlisted cameras at step 412; and displaying the shortlisted real-time videos on the predefined video placeholders at step 414.
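The orientation-to-shortlist portion of this flow (steps 408 and 410, plus the thumbnail anchors of step 412) can be sketched as one cycle. The orthographic projection, the screen dimensions, and the "depth at or above zero means facing the viewer" rule are simplifying assumptions for illustration, not the patent's actual rendering pipeline:

```python
import math

def project(point, yaw_deg, screen_w=800, screen_h=600, scale=100):
    """Rotate a 3D model point by the navigation yaw and project it onto
    the screen orthographically. Returns (sx, sy, depth); depth >= 0 is
    taken to mean the point faces the viewer."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = point
    rx, rz = c * x + s * z, -s * x + c * z
    return (screen_w / 2 + scale * rx, screen_h / 2 - scale * y, rz)

def navigation_cycle(cameras, yaw_deg):
    """One pass of the FIG. 4 loop (hypothetical helper): orient the model
    (step 408), shortlist cameras facing the viewer (step 410), and record
    a screen anchor for each thumbnail (step 412).

    cameras: dict mapping camera ID -> (x, y, z) model location.
    Returns dict mapping visible camera ID -> (sx, sy) anchor point.
    """
    visible = {}
    for cam_id, loc in cameras.items():
        sx, sy, depth = project(loc, yaw_deg)
        if depth >= 0:                 # facing the viewer: keep this ID
            visible[cam_id] = (sx, sy)  # anchor for the video placeholder
    return visible
```

Turning the model by 180 degrees swaps which cameras are shortlisted, mirroring how cameras behind the scene drop off the display as the orientation changes.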
[0038] Still referring to FIG. 4, at the step 402, the 3D CG model is pre-generated for simulating the area under surveillance. With the 3D CG model, the surveillance camera locations are mapped on the 3D CG model. In a very wide area, it is possible that the area is sectioned into smaller areas, and therefore, a plurality of pre-generated 3D CG models are stored for selection. In accordance with another embodiment, there may be one 3D CG model generated for the exterior or outdoor of the area and another 3D CG model generated for the interior or indoor of the area. [0039] At the step 404, video inputs from the actual surveillance cameras of the area under surveillance are incorporated into the 3D CG model. It is understood that most surveillance systems that are known in the art are able to acquire and handle multiple streams of video. It is also understood that these systems are capable of pre-
and post-processing the video streams before sending them to the navigation system. The video pre- and post-processing may include pattern recognition, machine learning, neural network processing or more. These video acquisition and processing techniques are well known in the art and any of these systems and methods can be adopted, and therefore they are not described herewith for simplicity.
[0040] At the step 406, the navigation input is received and provided. The navigation input controls the orientation or the position of the 3D CG model. As discussed above, the navigation input may be provided through a detection unit and/or navigation unit depending on the configuration of the navigation system. In one embodiment, when a standard camera and a marker board are used as a navigation means, the step 406 further comprises processing the videos taken from the standard camera to determine the orientation of the marker board, and sending the determined orientation information to change the orientation of the 3D CG model. In another embodiment, when a handheld device having an orientation sensor is adapted in the navigation system, the step 406 further comprises detecting the orientation of the handheld device through the orientation sensor, and sending the determined orientation information to change the orientation of the 3D CG model. Typically, the navigation unit is handheld by a user as a controller for controlling orientations of the 3D CG model. [0041] At the step 408, the navigation unit's orientations are determined, and in turn the orientation information is sent to cause the 3D CG model to orient in a corresponding orientation on the display screen. At the step 410, as the orientation of the 3D CG model is changing, the method further determines the
surveillance cameras that are visible at the current orientation of the 3D CG model. The visible surveillance cameras are short-listed with their respective identities (IDs). The short-listing process filters out the surveillance cameras that are not to be shown on the display screen. Generally, surveillance cameras that are positioned behind the scene of the 3D CG model need not be shown on the display screen. The viewpoints of the surveillance cameras are also determined through the surface information of the 3D CG model for short-listing the appropriate visible camera IDs.
[0042] At the step 412, placements of the video thumbnails of the shortlisted surveillance cameras on the display screen are calculated according to the orientation of the 3D CG model on the display screen. The placements of the video thumbnails change dynamically as the orientation of the 3D CG model changes. In this process, the system determines a closest point from the surveillance camera location point and places the thumbnail to the bottom right of that point on the display screen by default. When the frame size (i.e. width and height) of a thumbnail appears to be smaller than that of the intended size, it signifies that the thumbnail is placed out of the display screen, and accordingly, the thumbnail will be relocated to another location.
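The default-and-relocate rule of this paragraph can be expressed as a small placement helper. The screen and thumbnail dimensions and the "flip to the other side of the anchor" relocation strategy are illustrative assumptions; the patent specifies only the bottom-right default and relocation when the thumbnail would fall outside the display screen:

```python
def place_thumbnail(anchor, screen_w=1920, screen_h=1080,
                    thumb_w=160, thumb_h=120):
    """Place a camera thumbnail at the bottom-right of its anchor point;
    if it would extend past the screen edge, flip it to the other side of
    the anchor so the full frame stays visible.

    anchor: (x, y) screen position nearest the camera's model location.
    Returns the (x, y) top-left corner of the thumbnail.
    """
    x, y = anchor
    # Default: grow the thumbnail rightward and downward from the anchor.
    tx = x if x + thumb_w <= screen_w else x - thumb_w
    ty = y if y + thumb_h <= screen_h else y - thumb_h
    return (tx, ty)
```

An anchor near the center keeps the default placement, while an anchor near the bottom-right corner pushes the thumbnail up and to the left of the anchor instead.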
[0043] At the step 414, the video thumbnails are displayed according to the assigned placements on the display screen with real-time video captured by the surveillance cameras. [0044] Referring back to the step 410, each surveillance camera marked in the
3D CG model is assigned with a 3D location denoted by x, y and z, or a 3D coordinate. The 3D location of the surveillance camera can be used as a factor for the short-listing process. In one embodiment, the surveillance camera with a 3D location that faces the
user will be short-listed. It is known that the surveillance cameras are fixed at their actual locations and the field of view is also fixed within a range. As the 3D CG model changes its orientation shown on the display screen, the 3D location of each surveillance camera changes accordingly. When the viewpoint of the surveillance camera changes and becomes not visible to the user, the marking of the surveillance camera in the 3D CG model shown on the display screen will not be shown. An AR method may be adapted where a plane of the model is tracked and used to control the model through the marker's orientation and location. These points are tabulated and will be used to determine if they are in the user's visible area. For a given 3D CG model, this information is predefined. Whenever the CG model moves, the surface points of the 3D CG model are compared with the camera location point table to list out those camera location points that are visible to the user. [0045] FIG. 5A illustrates a 3D CG model 500 of FIG. 2A and FIG. 5B illustrates a visible point map 505 of the 3D CG model 500. The visible point map 505 is generated to determine the surveillance cameras to be shown on the display screen as the 3D CG model changes its orientations. The visible point map 505 is formed by all visible points in a two-dimensional (2D) view, i.e. all points that are visible on the display screen. These visible points are surface points on the 3D CG model 500 having corresponding 3D coordinates in the 3D CG model 500.
[0046] When the 3D CG model 500 is oriented as shown in FIG. 5A, the visible points of the 3D CG model 500 are determined and tabulated as provided in the step 410 (FIG. 4) above in one embodiment. During this step, the entire 3D CG model as shown in FIG. 5A is scanned and all the surface points that are visible on the display screen
are tabulated with their corresponding coordinates. Then, the coordinates of the surveillance cameras' locations, which are pre-stored in the system, are checked against the tabulated visible points' coordinates, and the surveillance cameras of the matched coordinates will be displayed on the display screen. Those surveillance cameras with coordinates that are not matched in the table (i.e. not visible) shall be hidden until the 3D CG model is oriented with visible points that match the coordinates of the surveillance cameras.
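The table-matching of paragraphs [0044] to [0046] amounts to a depth-buffer lookup: for each screen position, keep only the surface point nearest the viewer, then show a camera only when its coordinates survive that test. The sketch below assumes points have already been projected so that (x, y) is the screen position and z the depth toward the viewer; the function names and data shapes are illustrative:

```python
def build_point_map(surface_points):
    """Scan the model's surface points and keep, for each screen position
    (x, y), only the point nearest the viewer (smallest depth z) — the
    visible point table of paragraph [0046]."""
    nearest = {}
    for x, y, z in surface_points:
        if (x, y) not in nearest or z < nearest[(x, y)]:
            nearest[(x, y)] = z
    return nearest

def visible_cameras(surface_points, cameras, tol=1e-6):
    """Check each camera's pre-stored coordinates against the visible
    point table: a camera at (x, y, z) is shown only if z matches the
    nearest surface depth at (x, y); otherwise it is occluded or off-screen
    and stays hidden."""
    point_map = build_point_map(surface_points)
    return [cam_id for cam_id, (x, y, z) in cameras.items()
            if (x, y) in point_map and z <= point_map[(x, y)] + tol]
```

A camera sitting behind a nearer surface point at the same screen position is filtered out, as is a camera whose screen position does not appear in the table at all.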
[0047] For avoidance of doubt, the visible point map 505 is provided in FIG. 5B for illustration purposes only. It need not be actually shown on the display screen.
[0048] While specific embodiments have been described and illustrated, it is understood that many changes, modifications, variations and combinations thereof could be made to the present invention without departing from the scope of the invention.
Claims
1. A surveillance system for an area under surveillance, the surveillance system having a plurality of surveillance cameras installed within the area, the surveillance system comprising:
a three dimensional (3D) computer generated (CG) model modeling the area under surveillance with locations of all the plurality of surveillance cameras marked therein;
a navigation unit for controlling orientations of the 3D CG model for showing on a display screen;
a detection unit for acquiring orientation information from the navigation unit; and
a processing unit for receiving the orientation information from the detection unit, wherein the orientation information is processed and associated to the orientations of the 3D CG model in a manner that the orientation of the 3D CG model shown on the display screen is directly controlled through the navigation unit in real-time.
2. A surveillance system according to claim 1, wherein the navigation unit includes a marker board and the detection unit includes a video camera, wherein the video camera is operable to record orientations of the marker board, wherein the recorded video is processed by the processing unit to generate the orientation information.
3. A surveillance system according to claim 1, wherein the navigation unit includes an orientation sensor and the detection unit includes a transmission unit to transmit the signals from the orientation sensor to the processing unit to generate the orientation information.
4. A surveillance system according to claim 1 , wherein the navigation unit includes a keyboard or computer mouse.
5. A surveillance system according to claim 1, wherein live videos captured by the plurality of the surveillance cameras are displayed with reference to the respective surveillance camera on the display screen together with the 3D CG model.
6. A surveillance system according to claim 5, wherein the live videos are displayed on the display screen in thumbnails alongside the 3D CG model.
7. A surveillance system according to claim 6, wherein the thumbnails are positioned dynamically alongside the 3D CG model.
8. A surveillance system according to claim 5, wherein the live videos of only the surveillance cameras displayed at the forefront of the 3D CG model are displayed on the display screen.
9. A method for controlling a surveillance system for an area under surveillance with a plurality of surveillance cameras installed within the area, the method comprising: creating a 3D CG model by modeling the area under surveillance; marking locations of all the plurality of surveillance cameras on the 3D CG model; displaying the 3D CG model on a display screen; controlling the orientation of the 3D CG model through a navigation unit; acquiring orientation information through the navigation unit; processing the orientation information; and associating the orientation information to the orientations of the 3D CG model in a manner that the orientation of the 3D CG model shown on the display screen is directly controlled through the navigation unit in real-time.
10. A method according to claim 9, further comprising recording videos of the navigation unit to detect the orientations of the navigation unit to generate orientation information.
11. A method according to claim 9, further comprising detecting orientations of the navigation unit through an orientation sensor residing therein to generate orientation information.
12. A method according to claim 9, further comprising displaying live videos captured by the plurality of the surveillance cameras with reference to the respective surveillance camera on the display screen together with the 3D CG model.
13. A method according to claim 12, wherein the live videos are displayed on the display screen in thumbnails alongside the 3D CG model.
14. A method according to claim 13, wherein the thumbnails are positioned dynamically alongside the 3D CG model.
15. A method according to claim 12, wherein the live videos of only the surveillance cameras displayed at the forefront of the 3D CG model are displayed on the display screen.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MYPI2010006238A MY171846A (en) | 2010-12-27 | 2010-12-27 | System and method for navigation and visualization |
MYPI20100006238 | 2010-12-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012091537A1 true WO2012091537A1 (en) | 2012-07-05 |
Family
ID=46383347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/MY2011/000129 WO2012091537A1 (en) | 2010-12-27 | 2011-06-22 | System and method for navigation and visualization |
Country Status (2)
Country | Link |
---|---|
MY (1) | MY171846A (en) |
WO (1) | WO2012091537A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150085112A1 (en) * | 2013-09-26 | 2015-03-26 | The Boeing Company | System and Method for Graphically Entering Views of Terrain and Other Features for Surveillance |
CN109640037A (en) * | 2018-11-26 | 2019-04-16 | 安徽吉露科技有限公司 | A kind of remote monitoring and positioning system that supervision effect is good |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030085992A1 (en) * | 2000-03-07 | 2003-05-08 | Sarnoff Corporation | Method and apparatus for providing immersive surveillance |
GB2450235A (en) * | 2007-06-11 | 2008-12-17 | Honeywell Int Inc | 3D display of multiple video feeds |
US20090225164A1 (en) * | 2006-09-13 | 2009-09-10 | Renkis Martin A | Wireless smart camera system and method for 3-D visualization of surveillance |
- 2010-12-27 MY MYPI2010006238A patent/MY171846A/en unknown
- 2011-06-22 WO PCT/MY2011/000129 patent/WO2012091537A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030085992A1 (en) * | 2000-03-07 | 2003-05-08 | Sarnoff Corporation | Method and apparatus for providing immersive surveillance |
US20090225164A1 (en) * | 2006-09-13 | 2009-09-10 | Renkis Martin A | Wireless smart camera system and method for 3-D visualization of surveillance |
GB2450235A (en) * | 2007-06-11 | 2008-12-17 | Honeywell Int Inc | 3D display of multiple video feeds |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150085112A1 (en) * | 2013-09-26 | 2015-03-26 | The Boeing Company | System and Method for Graphically Entering Views of Terrain and Other Features for Surveillance |
US9860489B2 (en) * | 2013-09-26 | 2018-01-02 | The Boeing Company | System and method for graphically entering views of terrain and other features for surveillance |
CN109640037A (en) * | 2018-11-26 | 2019-04-16 | 安徽吉露科技有限公司 | A kind of remote monitoring and positioning system that supervision effect is good |
Also Published As
Publication number | Publication date |
---|---|
MY171846A (en) | 2019-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107836012B (en) | Projection image generation method and device, and mapping method between image pixel and depth value | |
KR100869447B1 (en) | Apparatus and method for indicating a target by image processing without three-dimensional modeling | |
JP4956626B2 (en) | Augmented reality based system and method providing unmanned vehicle status and control | |
EP2553924B1 (en) | Effortless navigation across cameras and cooperative control of cameras | |
WO2018163804A1 (en) | Information processing system, information processing device, information processing method, and program for causing computer to execute information processing method | |
JP5566281B2 (en) | Apparatus and method for specifying installation condition of swivel camera, and camera control system provided with the apparatus for specifying installation condition | |
US20030085992A1 (en) | Method and apparatus for providing immersive surveillance | |
US20080316203A1 (en) | Information processing method and apparatus for specifying point in three-dimensional space | |
US20070035436A1 (en) | Method to Provide Graphical Representation of Sense Through The Wall (STTW) Targets | |
JP7423683B2 (en) | image display system | |
CA2673908A1 (en) | Cv tag video image display device provided with layer generating and selection functions | |
JP2013105253A5 (en) | ||
JP6174968B2 (en) | Imaging simulation device | |
CN104204848B (en) | There is the search equipment of range finding camera | |
JP2015079444A5 (en) | ||
CN108377361B (en) | Display control method and device for monitoring video | |
JP6310149B2 (en) | Image generation apparatus, image generation system, and image generation method | |
KR101073432B1 (en) | Devices and methods for constructing city management system integrated 3 dimensional space information | |
JP5714960B2 (en) | Monitoring range detector | |
WO2012091537A1 (en) | System and method for navigation and visualization | |
WO2016048960A1 (en) | Three dimensional targeting structure for augmented reality applications | |
JP5213883B2 (en) | Composite display device | |
JP6214653B2 (en) | Remote monitoring system and monitoring method | |
JP6358996B2 (en) | Security simulation device | |
JP5960472B2 (en) | Image monitoring device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11853635 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11853635 Country of ref document: EP Kind code of ref document: A1 |