WO2006128124A2 - Total awareness surveillance system - Google Patents

Total awareness surveillance system

Info

Publication number
WO2006128124A2
Authority
WO
WIPO (PCT)
Prior art keywords
surveillance
module
area
information
model
Application number
PCT/US2006/020795
Other languages
French (fr)
Other versions
WO2006128124A3 (en)
Inventor
Philippe Van Nedervelde
Roger C. Barry
Original Assignee
Panoptic Systems, Inc.
Application filed by Panoptic Systems, Inc.
Publication of WO2006128124A2
Publication of WO2006128124A3


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678 - User interface
    • G08B13/19691 - Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/14 - Central alarm receiver or annunciator arrangements

Definitions

  • the present invention relates to surveillance systems, specifically to computer enhanced surveillance systems.
  • Nonlimiting examples of video background subtraction are hereby incorporated by reference herein and may be found at:
  • Non-limiting examples of database programs/systems include SQL, MYSQL, and any other data storage modules known in the art.
  • Nonlimiting examples of programs used to graphically model an area are listed below and are incorporated by reference herein:
  • each camera provides a single limited perspective and may not properly correlate to perspective characteristics (such as view angle, aspect ratio, image size) of other cameras. Accordingly, such differences may cause an operator to be deceived as to actual characteristics of people, objects, and/or places within a facility.
  • the present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available surveillance systems and methods. Accordingly, the present invention has been developed to provide a surveillance system and/or method.
  • a surveillance system including one or more of: a first surveillance device module that may be in information communication with an area and/or having a first perspective of a portion of the area; and/or a display module that may be in communication with the first surveillance device module and/or showing a graphical model of the area.
  • Information regarding the area from the first surveillance device module may be visually displayed in association with the graphical model, disposed on a representation of the portion, and/or substantially oriented according to the first perspective.
  • the graphical model may include a translucent model of a visual barrier through which may be seen information from the surveillance device module.
  • the translucent model may be color-coded according to a security characteristic. For example, a wall of a security sensitive room may be displayed as a red translucent plane.
  • the area may include a vehicle that may include a second surveillance device module.
  • the display module may further display a translucent graphical vehicle model corresponding to the vehicle and information from the second surveillance device module may be visually displayed in association with the translucent graphical vehicle model.
  • the data processing module may remove background information from information from the first surveillance module, thereby providing object information corresponding to an object. In one non-limiting example, image data of a wall behind a person in a room may be removed, leaving only image data of the person, the object.
  • the data storage module may be in communication with a data processing module.
  • the data storage module may store object information.
  • the data storage module may index object information according to an object identifier and/or time such that object information may be retrieved and displayed according to time.
  • object information corresponding to a person traveling through a facility may be stored and indexed according to an object identifier indicating the person and by time, such that the information may be displayed on the graphical display according to object and time.
  • the visual barrier may be one or more of walls, ceilings, floors, desks, chairs, mirrors, dividers, and/or doors.
  • a control module, such as but not limited to a computer, such as but not limited to a server, that may be in communication with the display module and/or the first surveillance device module.
  • the control module may receive orientation and location commands from a user and alter a point of view displayed by the display module.
  • the control module may receive a command from a user and accordingly effect an alteration of a surveillance characteristic of the first surveillance device module.
  • An object may be displayed in association with a visible tag corresponding to a characteristic.
  • an article of manufacture comprising a program storage medium readable by a processor and embodying one or more instructions executable by the processor to perform a method for surveilling an area, the method comprising one or more of the following steps: receiving surveillance data from a surveillance device disposed at the area; and/or displaying the surveillance data in association with a graphical model representing the area.
  • the graphical model may further include a translucent image representing a visual barrier of the area.
  • the method may include processing the surveillance data to remove background images, thereby leaving object information corresponding to one or more objects in the area.
  • the method may include altering a perspective of the display of the graphical model.
  • the method may include receiving surveillance data from a plurality of surveillance devices disposed at the area; and/or displaying the surveillance data from each of the plurality of surveillance devices in association with a graphical model representing the area in substantial correlation with a perspective of each surveillance device.
  • the method may include storing object information indexed according to object and time.
  • the method may include displaying object information together with the graphical model in association with a visual tag corresponding to a characteristic of an object.
  • the characteristic of the object may be a characteristic from the group of characteristics including threat status, authority status, and name.
  • Figure 1 illustrates a prior art surveillance system
  • Figure 2 illustrates a surveillance system display station according to one embodiment of the invention
  • Figure 3 illustrates an external see-through display of an airport facility according to one embodiment of the invention
  • Figure 4 illustrates an external see-through display of a facility according to one embodiment of the invention
  • Figure 5 illustrates an internal see-through display of a facility according to one embodiment of the invention
  • Figure 6 illustrates an external see-through display of multiple buildings according to one embodiment of the invention
  • Figure 7 illustrates a zoomed external see-through display of a facility according to one embodiment of the invention
  • Figure 8 illustrates an internal see-through display of a facility, or area, wherein an object includes a visual tag 800 according to one embodiment of the invention
  • Figure 9 shows a block diagram of modules in a security system according to one embodiment of the invention.
  • Figure 10 illustrates a rectangular room including a plurality of surveillance device modules according to one embodiment of the invention.
  • modules may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • a module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
  • Modules may also be implemented in software for execution by various types of processors.
  • An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
  • a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices.
  • operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
  • a system and/or a method may include a 3D user- interface system.
  • a facilities' security monitoring system that may be protecting governmental and/or private business venues; such monitoring may be made dramatically more efficient.
  • security agents may be empowered, even those with limited training, to better monitor several orders of magnitude more camera feeds and other security related data-feeds.
  • the system and/or method may enable one or more surveillance agent(s), using a single high-resolution display (the display may be auto-stereoscopic), to at-a-glance remotely monitor the security situation of an arbitrarily large number of locations, such as surveillance locations and/or distinct geographic locations.
  • surveillance agents may be empowered to have a comprehensive overview of a location.
  • Surveillance agents may be enabled to see, hear and transport their focused viewpoint through walls; floors and ceilings.
  • Surveillance agents may be able to zoom into and monitor a specific location.
  • an agent may be empowered to effectively have eyes and ears in hundreds if not thousands of places at the same time.
  • a security surveillance system may permit surveillance of a large number of security devices while the embodiment still neither overwhelms nor tires the agent with the massive amount of information being processed and displayed.
  • a surveillance agent's mental load is radically optimized towards monitoring only what is most critical to security: presence and movement of (groups of) people and vehicles inside a campus/site and its building(s) as well as security-sensitive events such as fires, smoke, badge access, assemblies, gun-shots, people running etc.
  • there may be an automated root-cause analysis wherein security-critical events and alerts may be adapted into one system and correlated against each other based upon defined security policy and rules.
  • an automated root-cause analysis, such as that previously described, may enable disparate real-time security data to be combined into an integrated panoptic security command center (over)view.
  • Any information streams and/or elements that may detract from an overarching goal of transit-site security surveillance may be dynamically subtracted and not displayed unless desired.
  • An agent may exclusively monitor those portions of information that may be considered to be most important from a point of view of surveillance and/or transit-site security.
  • a facial recognition system may be included in an embodiment of the invention.
  • the facial recognition system may be configured such that it is not limited by a need for a "static read" on a suspect's face or limited lighting.
  • There may be a video-over- IP camera-phone that may be configured to be utilized by an on- site security agent to better communicate with a central control and/or with another security agent or similar individual or device.
  • the invention may be configured to meet needs of the military.
  • there may be a digital flight recorder that may record digital information such as audio input as well as video input.
  • a digital flight recorder may be configured to be used to retrace actions of an identified security risk.
  • a system or method may be configured to provide for particular needs of a facility such as but not limited to one or more hotels, casinos, entertainment venues, federal buildings, courthouses, hospitals, retail operations such as department stores, and airports.
  • There may be a capability to display only information considered to be most important. This capability may result from a combination and/or integration of multiple technologies and methods.
  • the walls of buildings and vehicles may be displayed as transparent thereby enabling at-a-glance total situational awareness of security relevant people, objects and vehicles in motion.
  • Such total situational awareness may facilitate a major effectiveness and/or efficiency boost of security surveillance personnel performance (for example, but not limited to performance effectiveness and/or efficiency as measured in terms of more sq. ft., more people, more luggage, better observed).
  • There may be one or more integrated "drill-down" functions, such as but not limited to facial recognition with an automatic 17 sec. full background check.
  • there may be panoptic modes for single-screen, video-only panoptic surveillance leveraging the latent capacity of the human brain for massively parallel processing of defocused viewing combined with peripheral field-of-view imagery.
  • There may be integrated ongoing or on-demand identification through facial recognition against a customer-definable database of security sensitive persons.
  • There may be integration and correlation of multiple events from physical security monitoring systems.
  • There may be a suspicious activity detector (using pattern recognition and/or one or more neural nets).
  • a suspicious activity detector may detect suspicious activity and/or signals for analysis of the suspicious activity. Suspicious activity may be analyzed by a human operator or by a device configured to analyze suspicious activity.
  • the security or safety indicative events may include but are not limited to assembly of groups larger than X people; loitering; running; stressed shouting; suspect clothing (balaclavas, squad-type clothing, bullet-proof vests) or suspect items hidden under clothing; nervousness of individuals and groups; fires; smoke and gasses; explosions etc.
  • a camera having a 360 degree viewing area. The viewing area may be a 360 degree horizontal by 60 degree vertical viewing area.
  • There may be a camera with enhanced resolution. There may be a camera with selectively enhanceable resolution.
  • There may be selectable compression of portions of data such as compression of one portion of a screen capture or video feed.
  • Figure 1 illustrates a prior art surveillance system.
  • one or more displays including one or more portions of screen dedicated to one or more cameras that may be observing portions of a facility.
  • a computer that may permit some control over what is displayed.
  • a display may be configured from the computer, thereby determining which camera is to be displayed on which display.
  • FIG. 2 illustrates a surveillance system display station according to one embodiment of the invention.
  • a display is defined as a device that enables a human or device to have access to data. Displaying is the act of enabling access to data.
  • There may be a user interface, such as a keyboard or a mouse, that may enable a user to control a display. What is displayed on a particular display may be determined by a chosen point of view instead of by a chosen camera or other surveillance device.
  • a display may include a model of a displayed area wherein data from a surveillance device, such as a video camera, may be superimposed over the model.
  • a user may examine data from one or more surveillance devices simultaneously, wherein the display may present the examined data in an integrated format, such as being superimposed over a model of a facility.
  • information from individual surveillance devices may be integrated into a display of the model and viewed according to a relationship, such as a geometrical relationship.
  • a user may have a more complete understanding of information from a location than that which may be provided by disjointed views of each individual surveillance device.
  • a user may have control over a point of view.
  • a user may be able to zoom to view in greater detail a location.
  • a user may be able to choose a new viewing angle, wherein the system may automatically display information from a different set of devices or may control one or more devices intended to adjust a point of view.
  • a user may be oblivious to one or more details of the process of providing a chosen point of view without impacting an ability of the system to successfully display a chosen point of view. For example, a user may not know details regarding the number of cameras installed in a system, the orientations of the cameras, unique identifiers of the cameras, and/or other information that may assist in creating a display from a chosen point of view.
  • Figure 3 illustrates an external see-through display of an airport facility according to one embodiment of the invention.
  • a model of the facility including models of levels of a building, models of vehicles such as cars and airplanes, and models of terrain such as the ground.
  • images superimposed on the model such as images of people.
  • the images may include, but are not limited to, video images, portions of video images, selected portions of video images, information from surveillance devices, and/or sprites generated to represent known locations of objects such as individuals.
  • a user may be enabled to view a display showing a large amount of data, wherein data portions may be displayed in relation to one another by locating display elements in determined portions of the model. Thereby a user may be able to observe an area without being restricted by walls or other obstacles to data while still observing the data in a context that may be similar to the context from which the data is derived.
  • a model may include one or more models independent of one or more other models or model portions. Thereby a display may be configured to display relations between portions of data in a way that may be similar to a changeable context.
  • an airplane may include a surveillance device that may communicate with a surveillance system.
  • a model of the airplane may be included in the model of the facility. Further, a location and orientation of the airplane may be detected, observed, and/or communicated. Thereby, as the airplane moves, data from the airplane may be positioned correctly on a display.
  • Figure 4 illustrates an external see-through display of a facility according to one embodiment of the invention.
  • There may be a wire-frame model of a building.
  • There may be images of vehicles.
  • There may be images of individuals.
  • There may be shaded areas.
  • Images, such as those of vehicles and of people, may be superimposed on a wire-frame grid representative of the facility from which the images are gathered.
  • Images may be real images.
  • Images may be computer enhanced images.
  • Images may be sprites representing a subject of a surveillance device.
  • Images may be direct images extracted from a larger image.
  • image recognition tools may define an area of an image that may be a person or a car and may enable cropping of the image to show only the recognized image, thereby eliminating an amount of superfluous information.
  • a user may model a facility and may give the model a graphical representation.
  • a user may provide sufficient information about a surveillance device such that a computer may sufficiently accurately depict images representative of or of subjects of surveillance from the surveillance device wherein the accurate depiction may include sufficiently accurate placement of the image within the graphical representation of the model of the facility.
  • a video camera in room 233 of the second floor may feed visual information to a computer that may have sufficient information about the location of the video camera and from other devices, such as another video camera in the same room, that the computer may extract images of moving objects from the video camera feed and may display those images or representations of those images in sufficiently correct locations in the graphical representation of the model of the facility.
  • Sufficiency is defined as adequate for the needs of a user.
  • Figure 5 illustrates an internal see-through display of a facility according to one embodiment of the invention.
  • There may be a graphical representation of a model of a facility.
  • There may be images of objects that may be considered more important than other objects. For example, there may be images of people and images of vehicles, but not images of desks and not images of wall decorations. Images of walls may be implied, such as with wire-frame images, or may be translucent.
  • There may be an ability to move a point of view to various positions. One or more of the points of view may include points of view from inside a facility.
  • a user may be able to shift from a first point of view to a second point of view.
  • a user may be able to see through walls of the model.
  • a user may be able to know where walls are yet have unobstructed vision of important objects.
  • a user may be able to smoothly transition from one viewpoint to another, as if the user were a floating, flying eye able to see through and/or move through walls.
  • a computer may generate portions of an image of an object. For example, a computer may generate a back portion of a person by extrapolation from shapes and colors of a front of a person. In another example, a computer may generate a 360 degree view ability of a person through combining information from multiple cameras.
  • a computer may generate a 360 degree view ability of a person by recording information about portions of an object as the object travels and rotates within view.
  • an image may simply be presented as cropped in a proper location, but the facing of the image may be allowed to rotate, and thereby not be an actual facing.
  • a computer in one embodiment of the invention may track objects as each object travels within view of at least one surveillance device and may gather information about the object. Also, information may be imputed to an object and/or may be derived from multiple surveillance devices. For example, status information may be graphically displayed with an image of an object and/or with the image of a model. For example, should an object be a suspicious object, a user may impute such information by causing a computer to attach an image to the image representation of the suspect object. For example, there may be a flashing red polyhedron shading displayed about the suspect object.
  • a security officer present in a facility may carry a badge that may be detectable and attributable to the image of the security officer, thereby an image representing the status of the security officer may accompany the image of the security officer.
  • the image of the security officer may have a blue translucent shaded region displayed about the security officer.
  • Activities detected by a surveillance device may be used to tag objects as well. For example, successful security screening by an object may tag the object as an employee. The image of the employee may then be accompanied by an image representing such status. For example, the image of the employee may be outlined in green.
  • Information may be included in a model. Portions of a facility may be colored or depicted differently to clearly establish a different status. For example, translucent red walls may represent especially secure areas.
  • a computer may automatically attach a flashing red translucent image to an image of an object that enters a secure area without having previously been tagged as having permission to enter such a location.
  • Figure 6 illustrates an external see-through display of multiple buildings according to one embodiment of the invention.
  • the buildings are displayed to represent relationships between each building.
  • the graphical representation of each building as displayed may represent actual physical distances between each building.
  • the actual buildings may be very far apart, but may have some other relationship wherein there may be an advantage to displaying the buildings in a proximity.
  • the buildings may be owned, managed, or controlled by a single party or entity or may all be of interest to a single entity.
  • the buildings may serve similar function, and therefore it may be useful to view them adjacently. For example, similar behavior by individual objects or groups of objects may be expected, and when the buildings are viewed together, differences may be more easily discovered.
  • each building may be part of a process.
  • Surveillance of such buildings may be for the purpose of controlling a process instead of or simultaneous to security surveillance. Thereby it may be useful to give a user an overview of objects and object behaviors as compared across multiple buildings that may even be on different sides of the Earth.
  • a user may be enabled to configure displayed relationships of models, thereby allowing configurable viewing of multiple sites.
  • a user may determine desired relational viewing and may instruct a system to display a desired set of facilities and/or models.
  • Figure 7 illustrates a zoomed external see-through display of a facility according to one embodiment of the invention.
  • a graphical image of a model of a facility may include cropped images of objects.
  • the graphical image of a model of a facility may be displayed in a way that may communicate structure but may permit a level of visibility within the structure.
  • floors and/or walls may be translucent or transparent. Perimeters may be solid lines. Inside walls may be translucent and/or transparent.
  • Images may be displayed in a position and perspective sufficient to communicate a three-dimensional location within the model of the facility representing an approximation of an actual location of an object.
  • images of objects may be sized according to distance from a point of view and/or may be distorted to depict a viewing angle relative to an actual viewing angle of a surveillance device.
  • a surveillance device may view an object from a 45 degree angle and a user point of view may be at a 65 degree angle
  • portions of the image extracted from the surveillance device may be distorted to give an extrapolated depiction simulating how an object should appear when viewed from a 65 degree angle instead of a 45 degree angle.
  • a single surveillance device may enable display of an object at many different angles and distances.
  • in addition to wire-frame models, any models may be used, and properties of each model may be determined and implemented according to needs and desires of users and owners. For example, there may be graphical representations of areas that may not correspond to actual physical sizes and/or locations. In one embodiment a very long hallway may be depicted as a cube including a number representing a number of objects included therein.
  • objects may be models and models may be objects.
  • Objects may become models after sufficient information is gathered.
  • a vehicle may be initially an object, but after sufficient information may be gathered by one or more surveillance devices, the object may be transformed into a model.
  • a vehicle may be imaged as an object until it may be recognized as a particular make and model and then the image of the object may be replaced by a predefined model designed to simulate the appearance of the vehicle.
  • the occupants of the vehicle may then be detected, for example by an infrared camera and may then be represented within a translucent model of the vehicle as sprites or images in various positions in the vehicle. Thereby a user may be able to see inside structures that may not even be the property and/or not be under the control of the user and/or owner.
  • Figure 8 illustrates an internal see-through display of a facility, or area, wherein an object includes a visual tag 800 according to one embodiment of the invention. In particular, there is shown a red, or darkened, box that is an exemplary visual tag 800 in association with an object, in this case a person.
  • the tag 800 may signal that the object is an identified security risk, or an authorized security agent, or may signal any other determined characteristic of importance.
  • tags include: flashing icons near an object, color filtering of an object image, causing an object image to have an altered or altering characteristic, such as but not limited to flashing the object image off and on to attract attention thereto.
  • FIG. 9 shows a block diagram of modules in a security system according to one embodiment of the invention.
  • a display module 910 in communication with a control module 920 in communication with a data storage module 930 and a data processing module 940.
  • the data processing module 940 is in communication with one or more surveillance device modules 950. These modules may be in wired and/or wireless communication. Communication may be encrypted.
  • a display module 910 may include one or more display devices such as LCD or CRT monitors that are commonly known and used in the art.
  • a control module 920 may include one or more computer-type devices such as a server. Such may include a CPU and/or other electronic devices for receiving, processing, and/or relaying commands.
  • a data storage module 930 may include one or more database programs such as SQL and may include one or more modules for interface and/or control of such data storage.
  • There may be storage devices such as RAM, hard drives, magnetic tape devices, flash memory, etc.
  • a data processing module 940 may include one or more modules for altering data.
  • a non-limiting example includes a computer program for carrying out steps of a background subtraction method, such as those described herein.
  • a security device module 950 may include one or more security devices such as but not limited to cameras, 180 degree cameras, microphones, chemical detectors, voltage detectors, sound triangulation systems, pressure detectors, etc. Such may include one or more transducers for converting a physical phenomenon to an electrical and/or digital signal.
  • Figure 10 shows a rectangular room including a plurality of surveillance device modules according to one embodiment of the invention.
  • a rectangular shaped room 1001 in the diagram having proportions of 10 (length) x 5 (width) x 3 (height).
  • Eight imaging sensors 1010 using 180-degree lenses to capture a full visual hemisphere each are placed on the walls and ceilings so as to maximize multi-angle visual coverage of the interior of the room.
  • the optimal field of coverage of a single 180-degree lens in this exemplary case is considered to have a diameter of 5 meters at the base plane of the hemisphere, with the 180-degree lens in the center of the circle at the base plane of the hemisphere. Therefore placement of a lens on a 5 meters square ceiling would be in the geometric middle of such a ceiling.
  • the unit should ideally be placed at a distance 1011 off the floor corresponding to an average eye-height of a human being so as to minimize optical distortions for the purposes of facial recognition.
  • units should not be spaced out at greater distances than 5 meters from each other; a rough coverage calculation under this spacing rule is sketched after this list.
  • an embodiment may be designed and/or configured to provide security, to manage a control process, to gather information about one or more objects, to evaluate one or more facilities, to analyze one or more facilities, to compare and/or contrast one or more facilities, etc..
  • an embodiment may be designed and/or configured to satisfy needs of a manufacturing facility, an airport, a mass transit system, an entertainment venue, a highway system, a freeway system, a city, a city center, a sewer system, a bay, a harbor, a lake, a ski area, an outdoor recreation facility, a hotel, a motel, a resort, a shopping center, an office building, a campus, an environment, a game preserve, a military facility, a military training area, a system of caves, a cave, a flood zone, a spacecraft, a space station, a mine, an industrial facility, a library, a garden, a historic site, a political capital, etc.
  • the components of the device may be constructed of a variety of materials. There may be multiple computer systems using multiple protocols. There may be a variety of surveillance devices that may be hidden or observable. There may be surveillance devices that detect sound, light, vibration, temperature, distance, motion, changes in any detectable factor, etc.
  • the invention may be produced using electronics and/or microelectronics as well as computers and/or computer programs for carrying out instruction sets.
  • the invention may be exploited in commerce to protect assets.
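The two placement constraints above (a useful coverage circle roughly 5 meters across per 180-degree lens, and no more than 5 meters between units) can be turned into a rough sensor-count estimate for the exemplary 10 x 5 meter room. The sketch below is illustrative only: it assumes a simple ceiling grid and does not reproduce the eight-sensor, multi-angle wall-and-ceiling arrangement of Figure 10.

```python
# Rough, illustrative coverage estimate for the Figure 10 example room (10 m x 5 m).
# A simple ceiling grid with 5 m spacing is assumed; the embodiment itself places
# eight units on walls and ceiling to maximize multi-angle coverage and facial capture.
import math

ROOM_LENGTH, ROOM_WIDTH = 10.0, 5.0   # metres
MAX_SPACING = 5.0                     # units no more than 5 metres apart

def grid_sensor_count(length: float, width: float, spacing: float = MAX_SPACING) -> int:
    cols = max(1, math.ceil(length / spacing))
    rows = max(1, math.ceil(width / spacing))
    return rows * cols

print(grid_sensor_count(ROOM_LENGTH, ROOM_WIDTH))  # -> 2 ceiling units under this simple grid rule
```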

Abstract

A surveillance system. There is a first surveillance device module in information communication with an area and having a first perspective of a portion of the area; and a display module in communication with the first surveillance device module and showing a graphical model of the area, wherein information regarding the area from the first surveillance device module is visually displayed in association with the graphical model, disposed in association with a representation of the portion, and substantially oriented according to the first perspective, wherein the graphical model includes a translucent model of a visual barrier through which may be seen information from the surveillance device module.

Description

TOTAL AWARENESS SURVEILLANCE SYSTEM
BACKGROUND OF THE INVENTION TECHNICAL FIELD
[0001] The present invention relates to surveillance systems, specifically to computer enhanced surveillance systems.
BACKGROUND ART
[0002] Since the terrorist attacks of 9/11, there is a heightened need for counter-terrorism/security technologies. Current technologies enable display of data, but fail to present data within a context configured to facilitate and/or enhance the value of the data.
[0003] The following are examples of security technologies, which are hereby incorporated by reference herein:
[0004] US Patent Nos.:
[0005] Monitoring apparatus in game hall/ 5,278,643 / Takemoto, et al.
[0006] Expandable home automation system / 5,086,385 / Launey, et al.
[0007] Multiple security video display / 5,258,837 / Gormley
[0008] Security system user interface / 7,046,142 / Hershkovitz, et al.
[0009] Casino video security system/ 6,908,385 / Green
[0010] The following are examples of cameras and are incorporated by reference herein:
[0011] High Definition, Multiview, EPTZ, High Dynamic Range, Day / Night Video Camera /CoVi Technologies/ EVQ-1000/ 6300 Bridgepoint Parkway, Suite 300, Building II, Austin, Texas 78730
[0012] 1/4 type EXview HAD™ CCD color camera, remote pan/tilt/zoom operation, pan/tilt range 18x optical 200m/ Sony/ EVID70/ Sony Corporation of America, 550 Madison Avenue New York, NY 10022-3211
[0013] Digital CCD Color Camera, 1/4-Inch, High Resolution, 16X Zoom Lens / Pelco/ CC1400HZ16-2/ Worldwide Headquarters, 3500 Pelco Way, Clovis, Ca 93612-5699
[0014] OV-FEL-NTSC, OV-FEL-NR, and OV-FEL-SHR Fish Eye Lens Cameras (180 or 360 degree camera), Physical Optics Corporation, 20600 Gramercy Place, Building 100, Torrance CA, 90501-1821 (USA).
[0015] Nonlimiting examples of video background subtraction are hereby incorporated by reference herein and may be found at:
[0016] Dar-Shyang Lee, "Effective Gaussian Mixture Learning for Video Background Subtraction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 827-832, May, 2005.
[0017] K. Kim, T. H. Chalidabhongse, D. Harwood and L. Davis, "Realtime Foreground-Background Segmentation using Codebook Model", Volume 11, Issue 3, Pages 167-256 (June 2005)
[0018] A. Elgammal, R. Duraiswami, D. Harwood and L. S. Davis "Background and Foreground Modeling using Non-parametric Kernel Density Estimation for Visual Surveillance", Proceedings of the IEEE, July 2002.
[0019] A. Elgammal, D. Harwood, L. S. Davis, "Non-parametric Model for Background Subtraction", 6th European Conference on Computer Vision. Dublin, Ireland, June/July 2000.
[0020] A. Elgammal, D. Harwood, L. S. Davis, "Non-parametric Model for Background Subtraction" IEEE ICCV99 Frame Rate Workshop. IEEE 7th International Conference on Computer Vision. Kerkyra, Greece, September 1999.
[0021] The following are examples of virtual graphical models and modeling modules and are incorporated by reference here:
[0022] US6731279 B2, Computer graphics data generating apparatus, computer graphics animation editing apparatus, and animation path generating apparatus Suzuki, Kaori
[0023] US20010047251 Al CAD system which designs 3D Models Kemp, William H.
[0024] US20030001843 Al Computer graphics data generating apparatus Suzuki, Kaori
[0025] US20050068314 Al Image display apparatus and method Aso, Takashi
[0026] Non-limiting examples of database programs/systems include SQL, MYSQL, and any other data storage modules known in the art. [0027] Nonlimiting examples of programs used to graphically model an area are listed below and are incorporated by reference herein:
[0028] The Q3A software by id Software, Inc. 3819 Town Crossing, Mesquite Texas, USA.
[0029] The Source Engine and/or SDK by Valve Software of Bellevue Washington USA.
[0030] In some existing security systems, there are a plurality of screens or monitors each assigned to one or more cameras. Camera views are then displayed on these monitors, sometimes with multiple views per physical monitor and/or cycling views. Accordingly, it is difficult to develop a comprehensive real-time awareness of a complete facility as the views must be integrated in the mind of the operator. Accordingly, a surveillance operator requires substantial training and experience with a facility before any kind of integrated awareness may be achieved. Should a user change facilities or should there be a substantial change in a facility, such as the moving of a camera, a user may need to be retrained and security may be compromised in the meantime.
[0031] Further, the ability of a surveillance operator to have "big picture" awareness is substantially limited. Each camera provides a single limited perspective and may not properly correlate to perspective characteristics (such as view angle, aspect ratio, image size) of other cameras. Accordingly, such differences may cause an operator to be deceived as to actual characteristics of people, objects, and/or places within a facility.
[0032] What is needed is a surveillance system and/or method that solves one or more of the problems described herein and/or one or more problems that may come to the attention of one skilled in the art upon becoming familiar with this specification.
DISCLOSURE OF THE INVENTION
[0033] The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available surveillance systems and methods. Accordingly, the present invention has been developed to provide a surveillance system and/or method.
[0034] In one embodiment, there is a surveillance system, including one or more of: a first surveillance device module that may be in information communication with an area and/or having a first perspective of a portion of the area; and/or a display module that may be in communication with the first surveillance device module and/or showing a graphical model of the area. Information regarding the area from the first surveillance device module may be visually displayed in association with the graphical model, disposed on a representation of the portion, and/or substantially oriented according to the first perspective.
[0035] The graphical model may include a translucent model of a visual barrier through which may be seen information from the surveillance device module. The translucent model may be color-coded according to a security characteristic. For example, a wall of a security sensitive room may be displayed as a red translucent plane. The area may include a vehicle that may include a second surveillance device module. The display module may further display a translucent graphical vehicle model corresponding to the vehicle and information from the second surveillance device module may be visually displayed in association with the translucent graphical vehicle model.
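One non-limiting way to represent such color-coded translucent barriers is sketched below; the security levels, RGBA values, and class structure are illustrative assumptions rather than requirements of any embodiment, shown only to indicate how an alpha value below 1.0 lets surveillance imagery remain visible behind a wall plane.

```python
# Illustrative sketch only: visual barriers as translucent, color-coded planes.
# Security labels and RGBA values below are assumptions, not fixed by the disclosure.
from dataclasses import dataclass
from typing import List, Tuple

SECURITY_COLORS = {
    "standard":  (0.7, 0.7, 0.7, 0.25),  # grey, mostly transparent
    "sensitive": (1.0, 0.0, 0.0, 0.35),  # red translucent plane for a security sensitive room
}

@dataclass
class BarrierPlane:
    corners: List[Tuple[float, float, float]]  # wall corners in model coordinates
    security: str = "standard"

    def rgba(self) -> Tuple[float, float, float, float]:
        # An alpha below 1.0 keeps objects behind the barrier visible in the display.
        return SECURITY_COLORS.get(self.security, SECURITY_COLORS["standard"])
```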
[0036] There may be a data processing module that may be in communication with the first surveillance device module and/or with the display module. The data processing module may remove background information from information from the first surveillance module, thereby providing object information corresponding to an object. In one non-limiting example, image data of a wall behind a person in a room may be removed, leaving only image data of the person, the object.
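By way of non-limiting illustration, such a background-subtraction step might be sketched as follows, assuming an OpenCV-based implementation with the MOG2 Gaussian-mixture subtractor (one of several methods comparable to those cited in the background art); the camera index, thresholds, and window name are placeholders.

```python
# Minimal sketch: background subtraction on a surveillance feed (assumed OpenCV-based).
# MOG2 stands in for the Gaussian-mixture approaches cited in the background art.
import cv2

capture = cv2.VideoCapture(0)                      # camera index 0 is a placeholder
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # white where motion, black where background
    objects_only = cv2.bitwise_and(frame, frame, mask=mask)  # e.g. the person without the wall behind
    cv2.imshow("object information", objects_only)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```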
[0037] There may be a data storage module that may be in communication with a data processing module. The data storage module may store object information. The data storage module may index object information according to an object identifier and/or time such that object information may be retrieved and displayed according to time. In one non-limiting example, object information corresponding to a person traveling through a facility may be stored and indexed according to an object identifier indicating the person and by time, such that the information may be displayed on the graphical display according to object and time.
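A non-limiting sketch of such an object-information store follows; the SQLite backend, table name, and columns are illustrative assumptions chosen only to show indexing and retrieval by object identifier and time.

```python
# Hypothetical sketch of the object-information store indexed by object id and time.
# Table and column names are illustrative, not taken from the disclosure.
import sqlite3, time

db = sqlite3.connect("surveillance.db")
db.execute("""CREATE TABLE IF NOT EXISTS object_info (
                  object_id TEXT,
                  observed_at REAL,
                  x REAL, y REAL, z REAL,          -- position within the graphical model
                  image BLOB                       -- cropped object image / sprite
              )""")
db.execute("CREATE INDEX IF NOT EXISTS idx_object_time ON object_info (object_id, observed_at)")

def store(object_id, position, image_bytes):
    x, y, z = position
    db.execute("INSERT INTO object_info VALUES (?, ?, ?, ?, ?, ?)",
               (object_id, time.time(), x, y, z, image_bytes))
    db.commit()

def track(object_id, start, end):
    """Retrieve a person's path through the facility between two timestamps."""
    return db.execute("""SELECT observed_at, x, y, z FROM object_info
                         WHERE object_id = ? AND observed_at BETWEEN ? AND ?
                         ORDER BY observed_at""", (object_id, start, end)).fetchall()
```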
[0038] In non-limiting examples, the visual barrier may be one or more of walls, ceilings, floors, desks, chairs, mirrors, dividers, and/or doors.
[0039] There may be a control module, such as but not limited to a computer, such as but not limited to a server, that may be in communication with the display module and/or the first surveillance device module. The control module may receive orientation and location commands from a user and alter a point of view displayed by the display module. The control module may receive a command from a user and accordingly effect an alteration of a surveillance characteristic of the first surveillance device module.
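The division of labor between user commands and the control module might be sketched as below; the class, method names, and device interface are hypothetical and serve only to illustrate the two kinds of commands described above (moving the displayed point of view, and altering a surveillance characteristic such as zoom).

```python
# Rough, illustrative sketch of control-module behavior; all names are hypothetical.
class ControlModule:
    def __init__(self, display, devices):
        self.display = display            # display module rendering the graphical model
        self.devices = devices            # dict of surveillance device modules keyed by id

    def handle_view_command(self, location, orientation):
        """Move the user's point of view within the graphical model."""
        self.display.set_viewpoint(location=location, orientation=orientation)

    def handle_device_command(self, device_id, **characteristics):
        """Alter a surveillance characteristic, e.g. pan, tilt, or zoom of a camera."""
        device = self.devices[device_id]
        device.configure(**characteristics)   # e.g. device.configure(zoom=2.0)
```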
[0040] There may be a plurality of surveillance device modules in communication with the display module. Information from each of the plurality of surveillance device modules may be displayed together on the graphical model.
[0041] An object may be displayed in association with a visible tag corresponding to a characteristic.
[0042] In another embodiment of the invention, there may be an article of manufacture comprising a program storage medium readable by a processor and embodying one or more instructions executable by the processor to perform a method for surveilling an area, the method comprising one or more of the following steps: receiving surveillance data from a surveillance device disposed at the area; and/or displaying the surveillance data in association with a graphical model representing the area.
[0043] The graphical model may further include a translucent image representing a visual barrier of the area. The method may include processing the surveillance data to remove background images, thereby leaving object information corresponding to one or more objects in the area.
[0044] The method may include altering a perspective of the display of the graphical model. The method may include receiving surveillance data from a plurality of surveillance devices disposed at the area; and/or displaying the surveillance data from each of the plurality of surveillance devices in association with a graphical model representing the area in substantial correlation with a perspective of each surveillance device. The method may include storing object information indexed according to object and time. The method may include displaying object information together with the graphical model in association with a visual tag corresponding to a characteristic of an object. The characteristic of the object may be a characteristic from the group of characteristics including threat status, authority status, and name.
[0045] Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
[0046] Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
[0047] These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0048] In order for the advantages of the invention to be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
[0049] Figure 1 illustrates a prior art surveillance system;
[0050] Figure 2 illustrates a surveillance system display station according to one embodiment of the invention;
[0051] Figure 3 illustrates an external see-through display of an airport facility according to one embodiment of the invention;
[0052] Figure 4 illustrates an external see-through display of a facility according to one embodiment of the invention;
[0053] Figure 5 illustrates an internal see-through display of a facility according to one embodiment of the invention;
[0054] Figure 6 illustrates an external see-through display of multiple buildings according to one embodiment of the invention;
[0055] Figure 7 illustrates a zoomed external see-through display of a facility according to one embodiment of the invention;
[0056] Figure 8 illustrates an internal see-through display of a facility, or area, wherein an object includes a visual tag 800 according to one embodiment of the invention;
[0057] Figure 9 shows a block diagram of modules in a security system according to one embodiment of the invention; and
[0058] Figure 10 illustrates a rectangular room including a plurality of surveillance device modules according to one embodiment of the invention.
[0059] MODE(S) FOR CARRYING OUT THE INVENTION
[0060] For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the exemplary embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Any alterations and further modifications of the inventive features illustrated herein, and any additional applications of the principles of the invention as illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the invention.
[0061] Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "one embodiment," "an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, different embodiments, or component parts of the same or different illustrated invention. Additionally, reference to the wording "an embodiment," or the like, for two or more features, elements, etc. does not mean that the features are related, dissimilar, the same, etc. The use of the term "an embodiment," or similar wording, is merely a convenient phrase to indicate optional features, which may or may not be part of the invention as claimed.
[0062] Each statement of an embodiment is to be considered independent of any other statement of an embodiment despite any use of similar or identical language characterizing each embodiment. Therefore, where one embodiment is identified as "another embodiment," the identified embodiment is independent of any other embodiments characterized by the language "another embodiment." The independent embodiments are considered to be able to be combined in whole or in part one with another as the claims and/or art may direct, either directly or indirectly, implicitly or explicitly.
[0063] Finally, the fact that the wording "an embodiment," or the like, does not appear at the beginning of every sentence in the specification, such as is the practice of some practitioners, is merely a convenience for the reader's clarity. However, it is the intention of this application to incorporate by reference the phrasing "an embodiment," and the like, at the beginning of every sentence herein where logically possible and appropriate.
[0064] Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
[0065] Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
[0066] Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
[0067] There may be a system and/or a method that may include a 3D user-interface system. Thereby, a facilities' security monitoring system that may be protecting governmental and/or private business venues may be made dramatically more efficient. In one embodiment security agents may be empowered, even those with limited training, to better monitor several orders of magnitude more camera feeds and other security related data-feeds. The system and/or method may enable one or more surveillance agent(s), using a single high-resolution display (the display may be auto-stereoscopic), to at-a-glance remotely monitor the security situation of an arbitrarily large number of locations, such as surveillance locations and/or distinct geographic locations.
[0068] In one embodiment, surveillance agents may be empowered to have a comprehensive overview of a location. Surveillance agents may be enabled to see, hear and transport their focused viewpoint through walls; floors and ceilings. Surveillance agents may be able to zoom into and monitor a specific location. In one embodiment, an agent may be empowered to effectively have eyes and ears in hundreds if not thousands of places at the same time.
[0069] A security surveillance system according to one embodiment may permit surveillance of a large number of security devices while the embodiment still neither overwhelms nor tires the agent with the massive amount of information being processed and displayed. In one embodiment, a surveillance agent's mental load is radically optimized towards monitoring only what is most critical to security: presence and movement of (groups of) people and vehicles inside a campus/site and its building(s) as well as security-sensitive events such as fires, smoke, badge access, assemblies, gun-shots, people running etc.
[0070] There may be an automated root-cause analysis wherein security-critical events and alerts may be adopted into one system and correlated against each other based upon a defined security policy and rules. An automated root-cause analysis, such as that previously described, may enable disparate real-time security data to be combined into an integrated panoptic security command center (over)view.
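To make the event-correlation idea concrete, the following is a minimal sketch in Python, assuming hypothetical event fields (sensor_id, kind, zone, time) and a single illustrative policy rule that escalates when a forced-door event and a motion event occur in the same zone within a short window; none of these names or thresholds come from the specification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SecurityEvent:
    sensor_id: str       # hypothetical identifier of the reporting device
    kind: str            # e.g. "door_forced", "motion", "gunshot"
    zone: str            # logical zone within the surveilled area
    time: datetime

def correlate(events, window=timedelta(seconds=30)):
    """Tiny rule engine: flag a zone when a forced door is followed by
    motion in the same zone within the time window."""
    alerts = []
    doors = [e for e in events if e.kind == "door_forced"]
    motions = [e for e in events if e.kind == "motion"]
    for d in doors:
        for m in motions:
            if m.zone == d.zone and timedelta(0) <= m.time - d.time <= window:
                alerts.append((d.zone, d.time, "possible intrusion"))
    return alerts
```

A real rule set would be driven by the defined security policy rather than hard-coded, but the principle of joining disparate alerts on shared location and time is the same.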
[0071] Any information streams and/or elements that may detract from an overarching goal of transit-site security surveillance may be dynamically subtracted and not displayed unless desired. An agent may exclusively monitor those portions of information that may be considered to be most important from a point of view of surveillance and/or transit-site security.
[0072] There may be included in an embodiment of the invention a facial recognition system. The facial recognition system may be configured such that it is not limited by a need for a "static read" on a suspect's face or by limited lighting. There may be a video-over-IP camera-phone that may be configured to be utilized by an on-site security agent to better communicate with a central control and/or with another security agent or similar individual or device. In one embodiment the invention may be configured to meet needs of the military. [0073] In one embodiment, there may be a digital flight recorder that may record digital information such as audio input as well as video input. In one embodiment, a digital flight recorder may be configured to be used to retrace actions of an identified security risk.
[0074] In one embodiment a system or method may be configured to provide for particular needs of a facility such as but not limited to one or more hotels, casinos, entertainment venues, federal buildings, courthouses, hospitals, retail operations such as department stores, and airports.
[0075] There may be a capability to display only information considered to be most important. This capability may result from a combination and/or integration of multiple technologies and methods. In one embodiment of the invention there may be included fusing real-world video imagery with volumetric models in a real time 3D display to help observers comprehend multiple streams of temporal data and imagery including from arbitrary views of the scene. In one embodiment of the invention, there may be one or more 360° panoramic cameras that may be low cost. There may be dynamic background subtraction.
[0076] In one embodiment the walls of buildings and vehicles may be displayed as transparent, thereby enabling at-a-glance total situational awareness of security-relevant people, objects and vehicles in motion. Such total situational awareness may facilitate a major effectiveness and/or efficiency boost of security surveillance personnel performance (for example, but not limited to, performance effectiveness and/or efficiency as measured in terms of more sq. ft., more people, and more luggage, better observed). There may be one or more integrated "drill-down" functions, such as but not limited to facial recognition with an automatic 17 sec. full background check. There may be intuitive integrated 3C functions. There may be integrated VCR functionality and/or black-box functionality.
[0077] There may be stereoscopic display of video-sprites of people and vehicles inside and among user-navigable "transparent" 3D wire frames of building(s), building complexes or battlefield areas. There may be usage of panoramic video imaging systems that may be used instead of classic cameras. There may be rapid production of a 3D model of a site/campus through reverse "volumetric painting" using a DV camcorder-based capture system. There may be real-time dynamic fusion of panoramic video streams into a 3D model of a site/campus/battlefield. There may be real-time extraction of video imagery of moving and semi-static objects (people, vehicles, animals etc.) with live transposition into 3D scenes. There may be live video-based 3D tracking of moving objects within designated security-sensitive areas.
[0078] There may be an auto-stereoscopic display that may provide added accuracy of depth perception by a surveillance agent. There may be an intuitive 3D user-interface for empowered analysis and management of security threats or breaches, or battlefield situations (including 3C activities). There may be optimized detection and remediation of surveillance coverage blind spots. There may be real-time dynamic fusion of video data from mobile cameras (on vehicles (e.g. security patrols or battlefield reconnaissance vehicles), helmet mounted, on micro-UAVs or small helium blimps). There may be an integrated display of results from a triangulation-based gunshot locator into the 3D model.
[0079] There may be one or more alternative panoptic modes for single-screen, video-only panoptic surveillance leveraging the latent capacity of the human brain for massively parallel processing of defocused viewing combined with peripheral field-of-view imagery. There may be integrated ongoing or on-demand identification through facial recognition against a customer-definable database of security-sensitive persons. There may be integration and correlation of multiple events from physical security monitoring systems. There may be a suspicious activity detector (using pattern recognition and/or one or more neural nets). A suspicious activity detector may detect suspicious activity and/or signals for analysis of the suspicious activity. Suspicious activity may be analyzed by a human operator or by a device configured to analyze suspicious activity.
[0080] There may be one or more security or safety indicative events detectable by an embodiment of the invention. The security or safety indicative events may include but are not limited to assembly of groups larger than X people; loitering; running; stressed shouting; suspect clothing (balaclavas, squad-type clothing, bullet-proof vests) or suspect items hidden under clothing; nervousness of individuals and groups; fires; smoke and gases; explosions etc.
[0081] There may be a camera configured with a non-standard viewing area. For example, there may be a camera having a 360 degree viewing area. The viewing area may be a 360 degree horizontal by 60 degree vertical viewing area. There may be a panoramic angular mirror. There may be a fisheye lens. There may be a camera with enhanced resolution. There may be a camera with selectively enhanceable resolution. There may be a camera with selectable portions of enhanced resolution. There may be a camera configured to interface with a personal computer. There may be a camera configured to work with another camera that may be of a different type, such as but not limited to an IR camera or a still camera.
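As one way such panoramic optics might be handled in software, the following is a minimal sketch of unwrapping the circular image produced by a fisheye lens or angular mirror into a rectangular panorama via a polar-to-rectangular remap; the center coordinates, radii and output size are hypothetical parameters, and the specification does not prescribe this particular method.

```python
import numpy as np

def unwrap_panorama(img, cx, cy, r_inner, r_outer, out_w=1024, out_h=256):
    """Map a ring-shaped panoramic image (e.g. from an angular mirror or
    fisheye lens centered at (cx, cy)) to a flat panorama.
    img is an HxWx3 uint8 array; r_inner/r_outer bound the useful ring."""
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    # Grid of source coordinates for every output pixel.
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    src_x = (cx + rr * np.cos(tt)).astype(int).clip(0, img.shape[1] - 1)
    src_y = (cy + rr * np.sin(tt)).astype(int).clip(0, img.shape[0] - 1)
    return img[src_y, src_x]  # nearest-neighbour sample of the ring
```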
[0082] There may be information compression. There may be selective information compression. There may be selectable compression of portions of data, such as compression of one portion of a screen capture or video feed. There may be image recognition and/or motion recognition that may be interactive with compression technology and/or choices. There may be random block scanning wherein an enhanced high resolution block area may randomly move across a data field, such as a video screen, randomly enhancing portions of a data field. There may be sequential block scanning. There may be object detection and/or object scanning.
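The random block scanning idea described above could be sketched as follows; the block size, field dimensions, and the notion of "enhancing" a block (here, simply emitting its coordinates for full-resolution capture or lighter compression) are illustrative assumptions rather than details taken from the specification.

```python
import random

def random_block_schedule(field_w, field_h, block_w, block_h, steps):
    """Yield a sequence of randomly placed block positions within a data
    field (e.g. a video frame); each block would be captured or encoded at
    enhanced resolution while the rest of the field stays compressed."""
    for _ in range(steps):
        x = random.randrange(0, max(1, field_w - block_w))
        y = random.randrange(0, max(1, field_h - block_h))
        yield (x, y, block_w, block_h)

# Example: schedule ten enhanced-resolution blocks over a 1920x1080 frame.
for block in random_block_schedule(1920, 1080, 256, 256, 10):
    print("enhance region", block)
```

A sequential variant would simply step the block across the field in raster order instead of drawing positions at random, and an object-detection variant would place the block where motion or a recognized object was found.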
[0083] There may be a three-dimensional display that may be configured to display information from one or more surveillance devices and/or one or more models of a location. There may be an ability to zoom into an area. There may be an ability to zoom into a selectable area. There may be a user interface that may permit a user to view greater detail in an area without requiring that the user know details of the configuration of the surveillance system, for example, where a user may desire zooming into a selected viewable area, a user may simply designate the zoom focus and not need to know which camera is providing the video information.
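One plausible way the interface could hide camera details from the user is to pick, for a designated zoom focus, whichever camera sees that point best; the scoring below (a field-of-view check plus distance) is a hypothetical heuristic introduced for illustration, not a method stated in the specification.

```python
import math

def pick_camera(cameras, focus):
    """cameras: list of dicts with 'pos' (x, y), 'heading' (radians) and
    'half_fov' (radians). focus: (x, y) point the user designated.
    Returns the camera whose view contains the focus and is closest to it."""
    best, best_dist = None, float("inf")
    for cam in cameras:
        dx, dy = focus[0] - cam["pos"][0], focus[1] - cam["pos"][1]
        bearing = math.atan2(dy, dx)
        # Signed angular difference between the camera heading and the focus.
        off_axis = abs((bearing - cam["heading"] + math.pi) % (2 * math.pi) - math.pi)
        dist = math.hypot(dx, dy)
        if off_axis <= cam["half_fov"] and dist < best_dist:
            best, best_dist = cam, dist
    return best
```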
[0084] Turning now to the figures, Figure 1 illustrates a prior art surveillance system. There are illustrated one or more displays, each including one or more portions of the screen dedicated to one or more cameras that may be observing portions of a facility. There is shown a computer that may permit some control over what is displayed. For example, a display may be configured from the computer, thereby determining which camera is to be displayed on which display.
[0085] Figure 2 illustrates a surveillance system display station according to one embodiment of the invention. There may be one or more displays. A display is defined as a device that enables a human or device to have access to data. Displaying is the act of enabling access to data. There may be a user interface, such as a keyboard or a mouse, that may enable a user to control a display. What is displayed on a particular display may be determined by a chosen point of view, instead of being determined by a chosen camera or other surveillance device. Also, a display may include a model of a displayed area wherein data from a surveillance device, such as a video camera, may be superimposed over the model. [0086] In operation, a user may examine data from one or more surveillance devices simultaneously, wherein the display may present the examined data in an integrated format, such as being superimposed over a model of a facility. Thereby, information from individual surveillance devices may be integrated into a display of the model and viewed according to a relationship, such as a geometrical relationship. Thereby, a user may have a more complete understanding of information from a location than that which may be provided by disjointed views of each individual surveillance device.
[0087] A user may have control over a point of view. A user may be able to zoom to view in greater detail a location. A user may be able to choose a new viewing angle, wherein the system may automatically display information from a different set of devices or may control one or more devices intended to adjust a point of view. A user may be oblivious to one or more details of the process of providing a chosen point of view without impacting an ability of the system to successfully display a chosen point of view. For example, a user may not know details regarding the number of cameras installed in a system, the orientations of the cameras, unique identifiers of the cameras, and/or other information that may assist in creating a display from a chosen point of view.
[0088] Figure 3 illustrates an external see-through display of an airport facility according to one embodiment of the invention. There is shown a model of the facility, including models of levels of a building, models of vehicles such as cars and airplanes, and models of terrain such as the ground. There are images superimposed on the model, such as images of people. The images may include, but are not limited to, video images, portions of video images, selected portions of video images, information from surveillance devices, and/or sprites generated to represent known locations of objects such as individuals.
[0089] In operation, a user may be enabled to view a display showing a large amount of data, wherein data portions may be displayed in relation to one another by locating display elements in determined portions of the model. Thereby a user may be able to observe an area without being restricted by walls or other obstacles to data while still observing the data in a context that may be similar to the context from which the data is derived. Further, a model may include one or more models independent of one or more other models or model portions. Thereby a display may be configured to display relations between portions of data in a way that may be similar to a changeable context. For example, wherein an airplane may include a surveillance device that may communicate with a surveillance system, a model of the airplane may be included in the model of the facility. Further, a location and orientation of the airplane may be detected, observed, and/or communicated. Thereby, as the airplane moves, data from the airplane may be positioned correctly on a display.
[0090] Figure 4 illustrates an external see-through display of a facility according to one embodiment of the invention. There may be a wire-frame model of a building. There may be images of vehicles. There may be images of individuals. There may be shaded areas. Images, such as those of vehicles and of people, may be superimposed on a wire-frame grid representative of the facility from which the images are gathered. Images may be real images. Images may be computer enhanced images. Images may be sprites representing a subject of a surveillance device. Images may be direct images extracted from a larger image. For example, image recognition tools may define an area of an image that may be a person or a car and may enable cropping of the image to show only the recognized image, thereby eliminating an amount of superfluous information.
[0091] In operation, a user may model a facility and may give the model a graphical representation. A user may provide sufficient information about a surveillance device such that a computer may sufficiently accurately depict images representative of, or of, subjects of surveillance from the surveillance device, wherein the accurate depiction may include sufficiently accurate placement of the image within the graphical representation of the model of the facility. For example, a video camera in room 233 of the second floor may feed visual information to a computer that may have sufficient information about the location of the video camera, and from other devices such as another video camera in the same room, that the computer may extract images of moving objects from the video camera feed and may display those images or representations of those images in sufficiently correct locations in the graphical representation of the model of the facility. Sufficiency is defined as adequate for the needs of a user.
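As an illustration of how an extracted image might be placed in the model, the sketch below maps the bottom-center pixel of a detected object to floor coordinates using a precomputed camera-to-floor homography; the homography matrix, the detection format, and the "room 233" usage are assumptions introduced here for clarity, not details given in the specification.

```python
import numpy as np

def place_in_model(detection_box, floor_homography):
    """detection_box: (x, y, w, h) pixel rectangle around a moving object.
    floor_homography: 3x3 matrix mapping image pixels on the floor plane to
    model (east, north) coordinates, obtained once per camera during setup.
    Returns the approximate model-space position of the object's feet."""
    x, y, w, h = detection_box
    foot = np.array([x + w / 2.0, y + h, 1.0])       # bottom-center pixel
    world = floor_homography @ foot
    return world[:2] / world[2]                       # homogeneous divide

# Hypothetical usage: a camera in room 233 with a calibrated homography H.
H = np.array([[0.02, 0.0, -5.0], [0.0, 0.02, -3.0], [0.0, 0.0, 1.0]])
print(place_in_model((400, 220, 60, 180), H))
```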
[0092] Figure 5 illustrates an internal see-through display of a facility according to one embodiment of the invention. There may be a graphical representation of a model of a facility. There may be images of objects that may be considered more important than other objects. For example, there may be images of people and images of vehicles, but not images of desks and not images of wall decorations. Images of walls may be implied, such as with wire-frame images, or may be translucent. There may be an ability to move a point of view to various positions. One or more of the points of view may include points of view from inside a facility.
[0093] In operation, a user may be able to shift from a first point of view to a second point of view. A user may be able to see through walls of the model. A user may be able to know where walls are yet have unobstructed vision of important objects. A user may be able to smoothly transition from one viewpoint to another, as if the user were a floating, flying eye able to see through and/or move through walls. A computer may generate portions of an image of an object. For example, a computer may generate a back portion of a person by extrapolation from shapes and colors of a front of a person. In another example, a computer may generate a 360 degree view of a person by combining information from multiple cameras. In still another example, a computer may generate a 360 degree view of a person by recording information about portions of an object as the object travels and rotates within view. In still another example, an image may simply be presented as cropped in a proper location, but the facing of the image may be allowed to rotate, and thereby not be an actual facing.
[0094] A computer in one embodiment of the invention may track objects as each object travels within view of at least one surveillance device and may gather information about the object. Also, information may be imputed to an object and/or may be derived from multiple surveillance devices. For example, status information may be graphically displayed with an image of an object and/or with the image of a model. For example, should an object be a suspicious object, a user may impute such information by causing a computer to attach an image to the image representation of the suspect object. For example, there may be a flashing red polyhedron shading displayed about the suspect object.
[0095] A security officer present in a facility may carry a badge that may be detectable and attributable to the image of the security officer, thereby an image representing the status of the security officer may accompany the image of the security officer. For example, the image of the security officer may have a blue translucent shaded region displayed about the security officer. [0096] Activities detected by a surveillance device may be used to tag objects as well. For example, successful security screening by an object may tag the object as an employee. The image of the employee may then be accompanied by an image representing such status. For example, the image of the employee may be outlined in green.
[0097] Information may be included in a model. Portions of a facility may be colored or depicted differently to clearly establish a different status. For example, translucent red walls may represent especially secure areas. In one embodiment, a computer may automatically attach a flashing red translucent image to an image of an object that enters a secure area without having previously been tagged as having permission to enter such a location.
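The automatic tagging rule just described (flag any object that enters a secure area without prior permission) could look roughly like the sketch below; the zone bounds, permission names and tag names are illustrative assumptions and not part of the specification.

```python
SECURE_ZONE = {"x_min": 0.0, "x_max": 10.0, "y_min": 0.0, "y_max": 6.0}  # hypothetical area (meters)

def in_zone(pos, zone):
    x, y = pos
    return zone["x_min"] <= x <= zone["x_max"] and zone["y_min"] <= y <= zone["y_max"]

def update_tags(tracked_objects):
    """tracked_objects: dicts with 'pos' (x, y), 'permissions' (set of zone
    names) and 'tags' (set of display tags). Attaches a flashing red
    translucent tag to any object inside the secure zone that has not
    previously been tagged with permission to be there."""
    for obj in tracked_objects:
        if in_zone(obj["pos"], SECURE_ZONE) and "secure_zone" not in obj["permissions"]:
            obj["tags"].add("flashing_red_translucent")
        else:
            obj["tags"].discard("flashing_red_translucent")
```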
[0098] Figure 6 illustrates an external see-through display of multiple buildings according to one embodiment of the invention. In one embodiment the buildings are displayed to represent relationships between each building. For example, the graphical representation of each building as displayed may represent actual physical distances between each building. In one embodiment, the actual buildings may be very far apart, but may have some other relationship wherein there may be an advantage to displaying the buildings in a proximity.
[0099] For example, the buildings may be owned, managed, or controlled by a single party or entity or may all be of interest to a single entity. The buildings may serve a similar function, and therefore it may be useful to view them adjacently. For example, similar behavior by individual objects or groups of objects may be expected, and when the buildings are viewed together, differences may be more easily discovered.
[00100] In another embodiment, each building may be part of a process.
Surveillance of such buildings may be for the purpose of controlling a process instead of, or simultaneous to, security surveillance. Thereby it may be useful to give a user an overview of objects and object behaviors as compared across multiple buildings that may even be on different sides of the Earth.
[00101] In operation, a user may be enabled to configure displayed relationships of models, thereby allowing configurable viewing of multiple sites. A user may determine desired relational viewing and may instruct a system to display a desired set of facilities and/or models. [00102] There may be sprites representing objects that may morph into actual cropped images of objects or generated images of objects as a user may zoom in to an object or may change a point of view to be closer to an object. Thereby enhanced detail may be presented when desired and not presented when not desired, thereby enhancing performance of a system.
[00103] Figure 7 illustrates a zoomed external see-through display of a facility according to one embodiment of the invention. There may be a graphical image of a model of a facility that may include cropped images of objects. The graphical image of a model of a facility may be displayed in a way that may communicate structure but may permit a level of visibility within the structure. For example, floors and/or walls may be translucent or transparent. Perimeters may be solid lines. Inside walls may be translucent and/or transparent. Images may be displayed in a position and perspective sufficient to communicate a three-dimensional location within the model of the facility representing an approximation of an actual location of an object. For example, images of objects may be sized according to distance from a point of view and/or may be distorted to depict a viewing angle relative to an actual viewing angle of a surveillance device. For example, wherein a surveillance device may view an object from a 45 degree angle and a user point of view may be at a 65 degree angle, portions of the image extracted from the surveillance device may be distorted to give an extrapolated depiction simulating how an object should appear when viewed from a 65 degree angle instead of a 45 degree angle. Thereby a single surveillance device may enable display of an object at many different angles and distances.
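A very rough way to approximate that angle-dependent distortion is to rescale the extracted image by the ratio of foreshortening factors at the capture and display elevations; this single-axis cosine correction is an assumption introduced only for illustration and is far simpler than a full 3D re-projection.

```python
import math
import numpy as np

def retarget_view_angle(sprite, capture_deg, display_deg):
    """sprite: HxWx3 array cropped from the camera feed.
    Vertically rescale the sprite so that imagery captured at an elevation
    of capture_deg roughly mimics the foreshortening it would have at
    display_deg (the elevation of the user's chosen viewpoint)."""
    scale = math.cos(math.radians(display_deg)) / math.cos(math.radians(capture_deg))
    h = sprite.shape[0]
    new_h = max(1, int(round(h * scale)))
    # Nearest-neighbour resampling of rows (a real system would interpolate).
    rows = np.linspace(0, h - 1, new_h).astype(int)
    return sprite[rows]

# Example: a sprite captured at 45 degrees, shown from a 65 degree viewpoint.
dummy = np.zeros((180, 60, 3), dtype=np.uint8)
print(retarget_view_angle(dummy, 45, 65).shape)
```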
[00104] It is understood that the above-described preferred embodiments are only illustrative of the application of the principles of the present invention. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiment is to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
[00105] For example, although the figures only depict people and vehicles as important objects, it is understood that any object may be treated similarly and that importance of various objects may depend on desires of a user, operator, owner, etc. For example, where an embodiment of the invention may track a manufacturing process, important images may be of manufacturing equipment and/or products being manufactured, and people and vehicles may be ignored.
[00106] Additionally, although the figures illustrate wire-frame models, it is understood that any models may be used and that properties of each model may be determined and implemented according to needs and desires of users and owners. For example, there may be graphical representations of areas that may not correspond to actual physical sizes and/or locations. In one embodiment a very long hallway may be depicted as a cube including a number representing a number of objects included therein.
[00107] It is also envisioned that objects may be models and models may be objects. Objects may become models after sufficient information is gathered. For example, a vehicle may be initially an object, but after sufficient information may be gathered by one or more surveillance devices, the object may be transformed into a model. In one embodiment a vehicle may be imaged as an object until it may be recognized as a particular make and model, and then the image of the object may be replaced by a predefined model designed to simulate the appearance of the vehicle. Further, the occupants of the vehicle may then be detected, for example by an infrared camera, and may then be represented within a translucent model of the vehicle as sprites or images in various positions in the vehicle. Thereby a user may be able to see inside structures that may not even be the property of, and/or under the control of, the user and/or owner.
[00108] Figure 8 illustrates an internal see-through display of a facility, or area, wherein an object includes a visual tag 800 according to one embodiment of the invention. In particular, there is shown a red, or darkened, box that is an exemplary visual tag 800 in association with an object, in this case a person. The tag 800 may signal that the object is an identified security risk, or an authorized security agent, or may signal any other determined characteristic of importance.
[00109] Other non-limiting examples of tags include: flashing icons near an object, color filtering of an object image, causing an object image to have an altered or altering characteristic, such as but not limited to flashing the object image off and on to attract attention thereto.
[00110] Figure 9 shows a block diagram of modules in a security system according to one embodiment of the invention. There is shown a display module 910 in communication with a control module 920, which is in communication with a data storage module 930 and a data processing module 940. The data processing module 940 is in communication with one or more surveillance device modules 950. These modules may be in wired and/or wireless communication. Communication may be encrypted.
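The module arrangement of Figure 9 could be mirrored in software roughly as follows; the class and method names are invented for illustration and are not part of the specification.

```python
class SurveillanceDeviceModule:
    """Wraps one physical sensor (camera, microphone, detector)."""
    def read(self):
        return {"device": "cam-01", "frame": None}    # placeholder payload

class DataProcessingModule:
    """Alters raw data, e.g. background subtraction or object extraction."""
    def process(self, raw):
        raw["objects"] = []                            # detected objects would go here
        return raw

class DataStorageModule:
    def __init__(self):
        self.records = []
    def store(self, record):
        self.records.append(record)

class DisplayModule:
    def show(self, scene):
        print("rendering scene with", len(scene.get("objects", [])), "objects")

class ControlModule:
    """Ties the other modules together, echoing the 910-950 block diagram."""
    def __init__(self, devices, processing, storage, display):
        self.devices, self.processing = devices, processing
        self.storage, self.display = storage, display
    def tick(self):
        for dev in self.devices:
            data = self.processing.process(dev.read())
            self.storage.store(data)
            self.display.show(data)

ControlModule([SurveillanceDeviceModule()], DataProcessingModule(),
              DataStorageModule(), DisplayModule()).tick()
```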
[00111] A display module 910 may include one or more display devices such as LCD or CRT monitors that are commonly known and used in the art.
[00112] A control module 920 may include one or more computer-type devices such as a server. Such may include a CPU and/or other electronic devices for receiving, processing, and/or relaying commands.
[00113] A data storage module 930 may include one or more database programs, such as an SQL database, and may include one or more modules for interface and/or control of such data storage. There may be storage devices such as RAM, hard drives, magnetic tape devices, flash memory, etc.
[00114] A data processing module 940 may include one or more modules for altering data. A non-limiting example includes a computer program for carrying out steps of a background subtraction method, such as those described herein.
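As one concrete and widely available way to carry out the background subtraction step mentioned above, the sketch below uses OpenCV's MOG2 subtractor to separate moving objects from the static scene; the choice of OpenCV and the parameter values are assumptions of this illustration, not requirements of the specification.

```python
import cv2

def extract_foreground(video_path):
    """Yield (frame, foreground_mask) pairs; the mask isolates moving
    objects (people, vehicles) from the static background."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    varThreshold=16,
                                                    detectShadows=True)
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)   # 0 = background, 255 = moving object
        yield frame, mask
    capture.release()
```

The foreground mask could then be used to crop object imagery for superimposition onto the graphical model, as described earlier.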
[00115] A surveillance device module 950 may include one or more security devices such as but not limited to cameras, 180 degree cameras, microphones, chemical detectors, voltage detectors, sound triangulation systems, pressure detectors, etc. Such devices may include one or more transducers for converting a physical phenomenon to an electrical and/or digital signal.
[00116] Figure 10 shows a rectangular room including a plurality of surveillance device modules according to one embodiment of the invention. There is shown a rectangular shaped room 1001 in the diagram having proportions of 10 (length) x 5 (width) x 3 (height). Eight imaging sensors 1010, each using a 180-degree lens to capture a full visual hemisphere, are placed on the walls and ceilings so as to maximize multi-angle visual coverage of the interior of the room. In one example, as a rule of thumb for the purposes of C-Thru, or see-through, applications, the optimal field of coverage of a single 180-degree lens in this exemplary case is considered to have a diameter of 5 meters at the base plane of the hemisphere, with the 180-degree lens in the center of the circle at the base plane of the hemisphere. Therefore, placement of a lens on a 5 meter square ceiling would be in the geometric middle of such a ceiling. On walls, the same rule applies but with one difference: the unit should ideally be placed at a distance 1011 off the floor corresponding to an average eye-height of a human being so as to minimize optical distortions for the purposes of facial recognition. On the rectangular walls or ceilings of this exemplary case, units should not be spaced out at greater distances than 5 meters from each other.
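Applying the 5-meter spacing rule of thumb, a minimal sketch of a planner that proposes sensor positions along a wall and on a ceiling might look like this; the eye-height value of 1.6 m and the uniform-grid approach are assumptions used only to illustrate the rule, and the sketch does not attempt to reproduce the exact eight-sensor layout of Figure 10.

```python
import math

def wall_positions(wall_length, max_spacing=5.0, eye_height=1.6):
    """Evenly space 180-degree units along a wall so that neighbouring
    units are at most max_spacing meters apart, each mounted at a
    (hypothetical) average eye height to aid facial recognition."""
    count = max(1, math.ceil(wall_length / max_spacing))
    step = wall_length / count
    return [(step * (i + 0.5), eye_height) for i in range(count)]

def ceiling_positions(length, width, max_spacing=5.0):
    """Grid of ceiling-mounted units, each centered in a cell no larger
    than max_spacing x max_spacing meters."""
    nx = max(1, math.ceil(length / max_spacing))
    ny = max(1, math.ceil(width / max_spacing))
    sx, sy = length / nx, width / ny
    return [(sx * (i + 0.5), sy * (j + 0.5)) for i in range(nx) for j in range(ny)]

# The 10 x 5 x 3 meter room of Figure 10:
print(wall_positions(10.0))          # two units along the long wall
print(ceiling_positions(10.0, 5.0))  # two ceiling units
```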
[00117] While specific examples of measurements and ratios are described in this embodiment, it is understood that this is only a non-limiting example and that the number of embodiments of the invention that do not correspond to these is plethoric.
[00118] It is expected that there could be numerous variations of the design and/or configuration of this invention. An example is that there may be more than one computer and/or more than one computer system that may participate in an embodiment of the system. It is envisioned that any device used in surveillance may be incorporated in one or more embodiments of the invention. Further, it is envisioned that the design and/or configuration of an embodiment of the invention may vary with the use of the embodiment. For example, an embodiment may be designed and/or configured to provide security, to manage a control process, to gather information about one or more objects, to evaluate one or more facilities, to analyze one or more facilities, to compare and/or contrast one or more facilities, etc. In another example an embodiment may be designed and/or configured to satisfy needs of a manufacturing facility, an airport, a mass transit system, an entertainment venue, a highway system, a freeway system, a city, a city center, a sewer system, a bay, a harbor, a lake, a ski area, an outdoor recreation facility, a hotel, a motel, a resort, a shopping center, an office building, a campus, an environment, a game preserve, a military facility, a military training area, a system of caves, a cave, a flood zone, a spacecraft, a space station, a mine, an industrial facility, a library, a garden, a historic site, a political capital, etc.
[00119] Finally, it is envisioned that the components of the device may be constructed of a variety of materials. There may be multiple computer systems using multiple protocols. There may be a variety of surveillance devices that may be hidden or observable. There may be surveillance devices that detect sound, light, vibration, temperature, distance, motion, changes in any detectable factor, etc. [00120] Thus, while the present invention has been fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred embodiment of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications, including, but not limited to, variations in size, materials, shape, form, function and manner of operation, assembly and use may be made, without departing from the principles and concepts of the invention as set forth in the claims.
INDUSTRIAL APPLICABILITY
[00121] The invention may be produced using electronics and/or microelectronics as well as computers and/or computer programs for carrying out instruction sets. The invention may be exploited in commerce to protect assets.

Claims

What is claimed is:
1. A surveillance system, comprising: a first surveillance device module in information communication with an area and having a first perspective of a portion of the area; and a display module in communication with the first surveillance device module and showing a graphical model of the area, wherein information regarding the area from the first surveillance device module is visually displayed in association with the graphical model, disposed in association with a representation of the portion, and substantially oriented according to the first perspective.
2. The surveillance system of claim 1, wherein the graphical model includes a translucent model of a visual barrier through which may be seen information from the surveillance device module.
3. The surveillance system of claim 2, wherein the translucent model is color-coded according to a security characteristic.
4. The surveillance system of claim 2, wherein the area includes a vehicle including a second surveillance device module and wherein the display module further displays a translucent graphical vehicle model corresponding to the vehicle and information from the second surveillance device module is visually displayed superimposed on the translucent graphical vehicle model.
5. The surveillance system of claim 2, further comprising a data processing module in communication with the first surveillance device module and with the display module, wherein the data processing module removes background information from information from the first surveillance module, thereby providing object information corresponding to an object.
6. The surveillance system of claim 5, further comprising a data storage module in communication with the data processing module, wherein the data storage module stores object information.
7. The surveillance system of claim 6, wherein the data storage module indexes object information according to an object identifier and time such that object information may be retrieved individually and displayed according to time.
8. The surveillance system of claim 2, wherein the visual barrier comprises a barrier from the group of barriers including walls, ceilings, floors, desks, chairs, mirrors, dividers, and doors.
9. The surveillance system of claim 2, further comprising a control module in communication with the display module, wherein the control module receives orientation and location commands from a user and alters a point of view displayed by the display module.
10. The surveillance system of claim 2, further comprising a control module in communication with the first surveillance device module, wherein the control module receives a command from a user and accordingly effects an alteration of a surveillance characteristic of the first surveillance device module.
11. The surveillance system of claim 2, further comprising a plurality of surveillance device modules in communication with the display module, wherein information from each of the plurality of surveillance device modules is displayed together on the graphical model.
12. The surveillance system of claim 5, wherein the object is displayed in association with a visible tag corresponding to a characteristic.
13. An article of manufacture comprising a program storage medium readable by a processor and embodying one or more instructions executable by the processor to perform a method for surveilling an area, the method comprising: receiving surveillance data from a surveillance device disposed at the area; and displaying the surveillance data on a graphical model representing the area.
14. The article of manufacture of claim 13, wherein the graphical model further comprises a translucent image representing a visual barrier of the area.
15. The article of manufacture of claim 14, wherein the method further comprises processing the surveillance data to remove background images, thereby leaving object information corresponding to one or more objects in the area.
16. The article of manufacture of claim 14, wherein the method further comprises altering a perspective of the display of the graphical model.
17. The article of manufacture of claim 14, wherein the method further comprises: receiving surveillance data from a plurality of surveillance devices disposed at the area; and displaying the surveillance data from each of the plurality of surveillance devices on a single graphical model representing the area in substantial correlation with a perspective of each surveillance device.
18. The article of manufacture of claim 15, wherein the method further comprises storing object information indexed according to object and time.
19. The article of manufacture of claim 18, wherein the method further comprises displaying object information together with the graphical model in association with a visual tag corresponding to a characteristic of an object.
20. The article of manufacture of claim 19, wherein the characteristic of the object is a characteristic from the group of characteristics including threat status, authority status, and name.
PCT/US2006/020795 2005-05-27 2006-05-30 Total awareness surveillance system WO2006128124A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68554905P 2005-05-27 2005-05-27
US60/685,549 2005-05-27

Publications (2)

Publication Number Publication Date
WO2006128124A2 true WO2006128124A2 (en) 2006-11-30
WO2006128124A3 WO2006128124A3 (en) 2007-07-19

Family

ID=37452972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/020795 WO2006128124A2 (en) 2005-05-27 2006-05-30 Total awareness surveillance system

Country Status (1)

Country Link
WO (1) WO2006128124A2 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6889096B2 (en) * 2000-02-29 2005-05-03 Bently Nevada, Llc Industrial plant asset management system: apparatus and method
US6542075B2 (en) * 2000-09-28 2003-04-01 Vigilos, Inc. System and method for providing configurable security monitoring utilizing an integrated information portal
US20030040815A1 (en) * 2001-04-19 2003-02-27 Honeywell International Inc. Cooperative camera network

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008119724A1 (en) * 2007-03-30 2008-10-09 Abb Research Ltd A method for operating remotely controlled cameras in an industrial process
CN101681159A (en) * 2007-03-30 2010-03-24 Abb研究有限公司 A method for operating remotely controlled cameras in an industrial process
EP1975752A1 (en) * 2007-03-30 2008-10-01 Abb Research Ltd. A method for operating remotely controlled cameras in an industrial process
EP2542994A4 (en) * 2010-03-02 2014-10-08 Crown Equipment Ltd Method and apparatus for simulating a physical environment to facilitate vehicle operation and task completion
EP2542994A2 (en) * 2010-03-02 2013-01-09 Crown Equipment Limited Method and apparatus for simulating a physical environment to facilitate vehicle operation and task completion
US9188982B2 (en) 2011-04-11 2015-11-17 Crown Equipment Limited Method and apparatus for efficient scheduling for multiple automated non-holonomic vehicles using a coordinated path planner
US9958873B2 (en) 2011-04-11 2018-05-01 Crown Equipment Corporation System for efficient scheduling for multiple automated non-holonomic vehicles using a coordinated path planner
US8655016B2 (en) 2011-07-29 2014-02-18 International Business Machines Corporation Example-based object retrieval for video surveillance
US9206023B2 (en) 2011-08-26 2015-12-08 Crown Equipment Limited Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
US9580285B2 (en) 2011-08-26 2017-02-28 Crown Equipment Corporation Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
US10611613B2 (en) 2011-08-26 2020-04-07 Crown Equipment Corporation Systems and methods for pose development using retrieved position of a pallet or product load to be picked up
US20180130335A1 (en) * 2015-01-15 2018-05-10 Eran JEDWAB Integrative security system and method
US20230132171A1 (en) * 2015-01-15 2023-04-27 Liquid 360, Ltd. Integrative security system and method
US9805582B2 (en) * 2015-01-15 2017-10-31 Eran JEDWAB Integrative security system and method
US11501628B2 (en) * 2015-01-15 2022-11-15 Liquid 360, Ltd. Integrative security system and method
US10445951B2 (en) 2016-05-16 2019-10-15 Wi-Tronix, Llc Real-time data acquisition and recording system
US10392038B2 (en) 2016-05-16 2019-08-27 Wi-Tronix, Llc Video content analysis system and method for transportation system
US10410441B2 (en) 2016-05-16 2019-09-10 Wi-Tronix, Llc Real-time data acquisition and recording system viewer
US9934623B2 (en) 2016-05-16 2018-04-03 Wi-Tronix Llc Real-time data acquisition and recording system
US11055935B2 (en) 2016-05-16 2021-07-06 Wi-Tronix, Llc Real-time data acquisition and recording system viewer
RU2757175C2 (en) * 2016-05-16 2021-10-11 ВАЙ-ТРОНИКС, ЭлЭлСи Method and system for processing, storing and transmitting data from at least one mobile object
US11423706B2 (en) 2016-05-16 2022-08-23 Wi-Tronix, Llc Real-time data acquisition and recording data sharing system
WO2017201092A1 (en) * 2016-05-16 2017-11-23 Wi-Tronix, Llc Real-time data acquisition and recording system viewer
CN106991218A (en) * 2017-03-20 2017-07-28 王金刚 A kind of visualization system emulated based on CBC
CN107392501A (en) * 2017-08-10 2017-11-24 重庆大学 The micro- interference method of historical preservation based on dynamic evaluation

Also Published As

Publication number Publication date
WO2006128124A3 (en) 2007-07-19

Similar Documents

Publication Publication Date Title
WO2006128124A2 (en) Total awareness surveillance system
EP2553924B1 (en) Effortless navigation across cameras and cooperative control of cameras
Haering et al. The evolution of video surveillance: an overview
JP5322237B2 (en) Method and apparatus for efficient and flexible surveillance visualization with context sensitive privacy protection and power lens data mining
Milosavljević et al. Integration of GIS and video surveillance
US20160019427A1 (en) Video surveillence system for detecting firearms
Alshammari et al. Intelligent multi-camera video surveillance system for smart city applications
CN105554440A (en) Monitoring methods and devices
CN104159067A (en) Intelligent monitoring system and method based on combination of 3DGIS with real scene video
RU2742582C1 (en) System and method for displaying moving objects on local map
Conci et al. Camera placement using particle swarm optimization in visual surveillance applications
Galvin Crime Scene Documentation: Preserving the Evidence and the Growing Role of 3D Laser Scanning
CN203968263U (en) The intelligent monitor system combining with outdoor scene video based on 3DGIS
US20240071191A1 (en) Monitoring systems
Cormoş et al. Use of TensorFlow and OpenCV to detect vehicles
Schulte et al. Analysis of combined UAV-based RGB and thermal remote sensing data: A new approach to crowd monitoring
Ntoumanopoulos et al. The DARLENE XR platform for intelligent surveillance applications
US20230102949A1 (en) Enhanced three dimensional visualization using artificial intelligence
Szirányi et al. Observation on Earth and from the sky
WO2022022809A1 (en) Masking device
Francisco et al. Critical infrastructure security confidence through automated thermal imaging
Albahri Simulation-based optimization for the placement of surveillance cameras in buildings using BIM
Ahlberg Visualization Techniques for Surveillance: Visualizing What Cannot Be Seen and Hiding What Should Not Be Seen
You et al. V-Sentinel: a novel framework for situational awareness and surveillance
Das et al. Intelligent surveillance system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

NENP Non-entry into the national phase in:

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06760523

Country of ref document: EP

Kind code of ref document: A2