EP2276007A1 - Method and system for remotely guarding an area by means of cameras and microphones. - Google Patents
- Publication number
- EP2276007A1 (application EP09165782A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- representation
- location
- area
- sound source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/16—Actuation by interference with mechanical vibrations in air or other fluid
- G08B13/1654—Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems
- G08B13/1672—Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems using sonic detecting means, e.g. a microphone operating in the audio frequency range
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19686—Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19691—Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound
- G08B13/19693—Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound using multiple video sources viewed on a single or compound screen
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19697—Arrangements wherein non-video detectors generate an alarm themselves
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Circuit For Audible Band Transducer (AREA)
- Burglar Alarm Systems (AREA)
- Alarm Systems (AREA)
Abstract
Method and system for remotely guarding an area by means of cameras and microphones at several locations within that area, which are connected to a central surveillance post, comprising the steps of displaying, at an observation screen, the various camera and microphone locations on a map of said area; enabling selective activation, e.g. by an operator, of camera images for zooming in; deriving, per microphone or group of microphones, an attention value based on the sound picked up by that sound source; and outputting, when the attention value passes a predetermined threshold value, a representation of the sound picked up by the sound source causing the threshold passage, called sound representation hereinafter, including an audible and/or visual representation of the location of the sound source causing the threshold passage, called location representation hereinafter. The sound representation may include fragmented parts of the sound picked up by the sound source, the fragmentation being such that the overall semantic intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping. The sound representation may in addition be processed such that the intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping. The location representation may be performed by means of spatial audible reproduction of the relevant sound representation in the vicinity of said observation screen and/or by means of visual display of the location of the sound source causing said threshold passage.
Description
- The present invention refers to a method and system for remotely guarding an area by means of cameras and microphones at several locations within that area, which are connected to a central surveillance post.
- Surveillance cameras for monitoring public areas have widespread applications, especially in urban areas. Although such cameras are very useful in guarding those areas, the effectiveness of such systems could be improved.
- It is one aim of the present invention to improve the effectiveness of such systems by combining them with audible information. One problem which has to be overcome is that in most countries legal privacy regulations forbid eavesdropping (except under special conditions).
- Because of such privacy based restrictions, another aim of the invention is to provide a method and system in which audible information is used, however, without infringing the privacy regulations.
- Still another aim of the invention is to provide a system which makes remote monitoring of (urban) areas more lively for the operator (e.g. guardsman), as the visual information offered by the video cameras is supplemented by accompanying "real live audio", however without passing on (private) conversations etc. in a way that their content could be followed, i.e. understood, by the operator.
- Yet another aim is to provide that the operator can semantically comprehend the emotional components (in particular fear, anger, excitement etc.) in the audible signals picked up in the vicinity of the cameras. These components should, after transfer to the operator, attract his attention in a natural way and trigger him/her to pay attention to the location at which such (e.g. excited) audible signal originated or was recorded.
- To comply with those aims, it is preferred that, in a method for remotely guarding an area by means of cameras and microphones at several locations within that area, which are connected to a central surveillance post, the following steps are included:
- displaying, at an observation screen, the various camera and microphone locations on a map of the area;
- enabling selective activation, e.g. by a screen observing operator, of one or more camera images for pointing and/or zooming in;
- deriving, per microphone or group of microphones, called sound source hereinafter, an attention value based on the sound picked up by that sound source;
- outputting, when the attention value passes a predetermined threshold value, a representation of the sound picked up by the sound source causing the threshold passage, called sound representation hereinafter, including an audible and/or visual representation of the location of the sound source causing the threshold passage, called location representation hereinafter.
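The attention-value step above can be sketched as follows. The patent does not fix a particular measure, so the RMS-energy attention value, the function names and the threshold used here are merely illustrative assumptions:

```python
import math

def attention_value(samples):
    """Attention value for one sound-source frame: here simply the RMS
    energy of the picked-up sound (the exact measure is left open)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def passes_threshold(samples, threshold):
    """True when the frame should trigger output of a sound
    representation and a location representation."""
    return attention_value(samples) > threshold

# A quiet street frame versus a scream-like, high-energy frame.
quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.8, -0.9, 0.85, -0.7]
```

In practice the threshold would be tuned per microphone location so that normal street noise does not constantly trigger events.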
- To comply with the requirements of the privacy legislation, it may be preferred that the sound representation is processed such that eavesdropping is prevented, e.g. by time and/or frequency domain filtering and/or scrambling, such as fragmenting the sound representation, so that the sound representation supplied to the operator includes fragmented parts -e.g. having maximum lengths of for example 10 seconds- of the sound picked up by the sound source, so that the overall semantic or linguistic intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping.
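A minimal sketch of the fragmentation idea, assuming plain sample lists and a keep-every-other-fragment policy; the text only requires that the fragments be separated and bounded in length (e.g. at most 10 seconds), so all names and the selection policy here are illustrative:

```python
def fragment_sound(samples, rate_hz, max_len_s=10):
    """Split the picked-up sound into parts of at most max_len_s seconds."""
    n = int(max_len_s * rate_hz)
    return [samples[i:i + n] for i in range(0, len(samples), n)]

def separated_fragments(parts):
    """Pass only every other fragment to the operator, so the stream is
    heard as separated snippets rather than a followable conversation."""
    return parts[::2]
```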
- As another way to meet the privacy regulations, it may be preferred that the sound representation includes at least part of the sound picked up by the relevant sound source, processed, however, such that the intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping, e.g. wherein the Speech Transmission Index (abbreviated STI; see for its definition e.g. en.wikipedia.org/wiki/Speech_Transmission_Index) of the processed sound is reduced, e.g. by means of signal scrambling or addition of noise, to a maximum of e.g. 0.35 or less.
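One of the processing options named above, addition of noise, can be sketched as mixing white noise at a chosen signal-to-noise ratio. The SNR needed to actually reach an STI of at most 0.35 is not given here and would have to be calibrated (e.g. per IEC 60268-16); the function name and parameters are assumptions:

```python
import random

def add_masking_noise(samples, snr_db, seed=0):
    """Mix white noise into the sound at the given signal-to-noise
    ratio; a low SNR masks speech content while keeping its character."""
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    noise_rms = rms / (10 ** (snr_db / 20.0))
    rng = random.Random(seed)  # fixed seed only for reproducibility
    return [s + rng.gauss(0.0, noise_rms) for s in samples]
```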
- To comply with the aim that the relevant audible signals, picked up in the vicinity of the cameras and processed as indicated above, attract the operator's attention and guide him to the location on his observation screen where the (e.g. excited) sound originated, the location representation of that sound may preferably be performed by spatial (2- or 3-dimensional) audible reproduction of that sound representation in the vicinity of the observation screen. As such an observation screen (which may be formed by a group of cooperating display screens) normally will have rather large dimensions, the operator's attention can be attracted when the sound representations, originated at several microphone locations, are reproduced (i.e. when the attention value of the sound passes a predetermined threshold value) via a spatial audio reproduction system. It has to be noted that the sounds as such may be picked up by single-channel microphones; their sound representations, however, are reproduced via a spatial audio system in the vicinity of the observation screen in such a way that, in the operator's perception, the sound representations come from the direction of the location, as mapped on the observation screen, where the sound has been produced or recorded.
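For a two-loudspeaker setup, such a spatial impression can be sketched with a standard constant-power panning law; this is an assumption, since the text speaks only generally of spatial audible reproduction:

```python
import math

def pan_gains(x_norm):
    """Constant-power stereo gains for a sound whose source location maps
    to horizontal position x_norm on the observation screen
    (0.0 = left edge, 1.0 = right edge)."""
    theta = x_norm * math.pi / 2.0
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)
```

Constant-power panning keeps the perceived loudness constant while the apparent direction tracks the mapped location.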
- Additionally or optionally, the sound originating location may be represented by means of visual display of the location where the sound has been produced, e.g. by means of any form of highlighting that location at the area mapping on the observation screen.
- Hereinafter the method will be elucidated with reference to:
- Figure 1, which shows an exemplary embodiment of a system in which the method according to the invention can be performed;
- Figure 2, which shows the diagram of an exemplary embodiment of a subsystem for sound processing.
- Figures 1 and 2 show a system for remotely guarding an area (the centre of Utrecht) using cameras and microphones at several locations within that area, which are connected to a central surveillance post, including an observation screen 1 arranged for displaying the various camera (Cam) and microphone (Mic) locations on a map of the area. The system includes means for executing the method as discussed hereinbefore, including processing means and means for the reproduction of the sound representations, i.e. an event detector (ED) 2 and an intelligibility reductor (IR) 3, as well as means for the reproduction of the relevant location representations, i.e. a 2D renderer (2DR) 4 and a set of loudspeakers 5 for acoustic location representation, as well as a video screen driver (VD) 6 for visual location representation at the observation screen 1.
- The relevant area thus can be monitored by means of cameras 7 and microphones 8 at several locations within the area, which are connected to a central surveillance post which accommodates the components shown in figures 1 and 2.
- By means of the observation screen 1, the various camera and microphone locations are displayed on a map image of the area to be monitored. A screen observing operator 9 is able, e.g. by means of a keyboard, mouse, joystick (not shown) or touch screen, to select and activate cameras and/or camera images to zoom in and out; besides, the operator may be able to move the cameras into different positions.
- In the vicinity of each camera, microphones are installed, picking up the sound present in the camera's vicinity. In this way the sounds present in the vicinity of each camera are transmitted to the surveillance post, which accommodates the system. In the event detector 2, per microphone or group of microphones (sound source), an attention value is derived based on the sound picked up by that sound source. The event detector 2 analyzes the incoming sound and decides -e.g. based on the results of a frequency spectrum and energy level analysis- whether the incoming sound comprises elements like fear, excitement (e.g. screaming), or uncommon noise like e.g. breaking glass. In such cases the attention value should pass a predetermined threshold value, indicating that there might be an event which should be investigated.
- When the attention value passes a predetermined threshold value, detected in the event detector 2, this detector gives an "on" signal to the intelligibility reductor 3 to pass a representation of the sound picked up by the sound source causing the threshold passage, i.e. a sound representation having a reduced intelligibility. Besides, an audible representation of the location of the (possibly buffered) sample of the event sound source causing the threshold passage (location representation) is performed, viz. by reproducing the sound representation (having a reduced intelligibility) by means of a 2D sound rendering subsystem (2DR) 4 and loudspeakers 5, which -by means of audio phase manipulation causing pseudo stereo/quadraphonic sound reproduction (see en.wikipedia.org/wiki/Quadraphonic_sound) and/or sound reproduction via a selected set of loudspeakers 5- cause that, in the perception of the operator 9 standing or sitting before his (widescreen) observation screen 1, the sound representation comes from the relevant location at that observation screen (in the corner right below in figure 1). Besides the audible location representation, also a visual location representation is presented to the operator, viz. in the form of an image, e.g. as shown in figure 1 (again in the corner right below), where the relevant microphone location and the neighbouring camera location have been accentuated by (bold) encircling of the relevant location. In this way the operator 9 will be guided -in a natural and intuitive way- to pay attention to the location at which -according to the sound picked up by the microphone(s)- something might be wrong. The operator may then activate the relevant camera (e.g. by using a touch screen or keyboard function) to zoom in, which may be made visible via the same observation screen 1 or -as is suggested in figure 1- via one or more auxiliary screens.
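The "selected set of loudspeakers" variant of the location representation can be sketched as picking the loudspeaker nearest to the mapped screen location; this is a hypothetical minimal scheme, since the selection rule and coordinate system are not specified here:

```python
def nearest_loudspeaker(source_xy, speaker_xys):
    """Index of the loudspeaker closest to the sound's location as mapped
    on the observation screen, so the reproduced sound representation
    appears to come from that part of the screen."""
    def dist2(i):
        dx = speaker_xys[i][0] - source_xy[0]
        dy = speaker_xys[i][1] - source_xy[1]
        return dx * dx + dy * dy
    return min(range(len(speaker_xys)), key=dist2)
```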
- In the illustrated example, the operator may have heard (the sound representation of) breaking glass and/or voices crying "Stop thief!!", is guided by that sound to the highlighted location at his screen 1, activates the relevant camera and sees at the auxiliary screen 10 a thief running away. The operator may then contact and inform the police.
- Concerning the sound representation made in the IR module 3, this may include making separated, fragmented parts of the sound picked up by the sound source (the microphone(s)), the fragmentation being such that the overall semantic intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping. When the length of each fragmented part is limited (e.g. to 10 seconds or less), the intelligibility is decreased and relating a spoken phrase to a particular individual is thus made infeasible. Another, or an additional, method for intelligibility reduction is to process (e.g. by scrambling and/or distortion) the sound from the originating sound source such that the intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping. In practice it has been proven that when the Speech Transmission Index of the processed sound has a maximum of 0.35, the desired low intelligibility is achieved.
- The Speech Transmission Index (STI) is a measure of the intelligibility (understanding) of speech, whose value varies from 0 (completely unintelligible) to 1 (perfect intelligibility). On this scale, an STI of at least 0.5 is desirable for most applications (Steeneken, H. J. M., & Houtgast, T. (1980). A physical method for measuring speech-transmission quality. Journal of the Acoustical Society of America, 67, 318-326).
Claims (8)
- Method for remotely guarding an area by means of cameras and microphones at several locations within that area, which are connected to a central surveillance post, comprising the following steps:
- displaying, at an observation screen (1), the various camera and microphone locations on a map of said area;
- enabling selective activation, e.g. by a screen observing operator (9), of one or more camera images for zooming in;
- deriving, per microphone or group of microphones, called sound source hereinafter, an attention value based on the sound picked up by that sound source;
- outputting, when the attention value passes a predetermined threshold value, a representation of the sound picked up by the sound source causing the threshold passage, called sound representation hereinafter, including an audible and/or visual representation of the location of the sound source causing the threshold passage, called location representation hereinafter.
- Method according to claim 1, wherein said sound representation includes fragmented parts of the sound picked up by the sound source, the fragmentation being such that the overall semantic intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping.
- Method according to claim 2, wherein the length of each fragmented part has a maximum of 10 seconds.
- Method according to any preceding claim, wherein said sound representation includes at least part of the sound picked up by the relevant sound source, however, processed such, e.g. by means of time and/or frequency domain scrambling, distorting, filtering etc., that the intelligibility of the sound is reduced to a level which complies with the relevant privacy regulations related to eavesdropping.
- Method according to claim 4, wherein the Speech Transmission Index of the processed sound has a maximum of 0.35.
- Method according to any preceding claim, wherein said location representation is performed by means of spatial audible reproduction of the relevant sound representation in the vicinity of said observation screen.
- Method according to any preceding claim, wherein said location representation is performed by means of visual display of the location of the sound source causing said threshold passage.
- System for remotely guarding an area using cameras and microphones at several locations within that area, which are connected to a central surveillance post, including an observation screen (1) arranged for displaying the various camera and microphone locations on a map of said area; the system including means for executing the method according to any of the preceding claims, including processing means and means for the reproduction of said sound representations and location representations respectively.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP09165782A EP2276007A1 (en) | 2009-07-17 | 2009-07-17 | Method and system for remotely guarding an area by means of cameras and microphones. |
PCT/NL2010/050466 WO2011008099A1 (en) | 2009-07-17 | 2010-07-19 | Method and system for remotely guarding an area by means of cameras and microphones |
EP10736847A EP2454725A1 (en) | 2009-07-17 | 2010-07-19 | Method and system for remotely guarding an area by means of cameras and microphones |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP09165782A EP2276007A1 (en) | 2009-07-17 | 2009-07-17 | Method and system for remotely guarding an area by means of cameras and microphones. |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2276007A1 (en) | 2011-01-19 |
Family
ID=41110692
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP09165782A Withdrawn EP2276007A1 (en) | 2009-07-17 | 2009-07-17 | Method and system for remotely guarding an area by means of cameras and microphones. |
EP10736847A Withdrawn EP2454725A1 (en) | 2009-07-17 | 2010-07-19 | Method and system for remotely guarding an area by means of cameras and microphones |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10736847A Withdrawn EP2454725A1 (en) | 2009-07-17 | 2010-07-19 | Method and system for remotely guarding an area by means of cameras and microphones |
Country Status (2)
Country | Link |
---|---|
EP (2) | EP2276007A1 (en) |
WO (1) | WO2011008099A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8167916B2 (en) | 2001-03-15 | 2012-05-01 | Stryker Spine | Anchoring member with safety ring |
WO2014199263A1 (en) | 2013-06-10 | 2014-12-18 | Honeywell International Inc. | Frameworks, devices and methods configured for enabling display of facility information and surveillance data via a map-based user interface |
EP2819108A1 (en) * | 2013-06-24 | 2014-12-31 | Panasonic Corporation | Directivity control system and sound output control method |
CN107257525A (en) * | 2013-03-28 | 2017-10-17 | 三星电子株式会社 | Portable terminal and in portable terminal indicate sound source position method |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10134422B2 (en) | 2015-12-01 | 2018-11-20 | Qualcomm Incorporated | Determining audio event based on location information |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2408880A (en) * | 2003-12-03 | 2005-06-08 | Safehouse Internat Inc | Observing monitored image data and highlighting incidents on a timeline |
US20050225634A1 (en) * | 2004-04-05 | 2005-10-13 | Sam Brunetti | Closed circuit TV security system |
WO2005120071A2 (en) * | 2004-06-01 | 2005-12-15 | L-3 Communications Corporation | Method and system for performing video flashlight |
US20070182819A1 (en) * | 2000-06-14 | 2007-08-09 | E-Watch Inc. | Digital Security Multimedia Sensor |
WO2007095994A1 (en) * | 2006-02-23 | 2007-08-30 | Robert Bosch Gmbh | Audio module for a video surveillance system, video surveillance system and method for keeping a plurality of locations under surveillance |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4060803A (en) * | 1976-02-09 | 1977-11-29 | Audio Alert, Inc. | Security alarm system with audio monitoring capability |
US5666157A (en) * | 1995-01-03 | 1997-09-09 | Arc Incorporated | Abnormality detection and surveillance system |
US7346186B2 (en) * | 2001-01-30 | 2008-03-18 | Nice Systems Ltd | Video and audio content analysis system |
GB0709329D0 (en) * | 2007-05-15 | 2007-06-20 | Ipsotek Ltd | Data processing apparatus |
-
2009
- 2009-07-17 EP EP09165782A patent/EP2276007A1/en not_active Withdrawn
-
2010
- 2010-07-19 WO PCT/NL2010/050466 patent/WO2011008099A1/en active Application Filing
- 2010-07-19 EP EP10736847A patent/EP2454725A1/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070182819A1 (en) * | 2000-06-14 | 2007-08-09 | E-Watch Inc. | Digital Security Multimedia Sensor |
GB2408880A (en) * | 2003-12-03 | 2005-06-08 | Safehouse Internat Inc | Observing monitored image data and highlighting incidents on a timeline |
US20050225634A1 (en) * | 2004-04-05 | 2005-10-13 | Sam Brunetti | Closed circuit TV security system |
WO2005120071A2 (en) * | 2004-06-01 | 2005-12-15 | L-3 Communications Corporation | Method and system for performing video flashlight |
WO2007095994A1 (en) * | 2006-02-23 | 2007-08-30 | Robert Bosch Gmbh | Audio module for a video surveillance system, video surveillance system and method for keeping a plurality of locations under surveillance |
Non-Patent Citations (1)
Title |
---|
STEENEKEN, H. J. M.; HOUTGAST, T.: "A physical method for measuring speech-transmission quality", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 67, 1980, pages 318 - 326 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8167916B2 (en) | 2001-03-15 | 2012-05-01 | Stryker Spine | Anchoring member with safety ring |
US8845695B2 (en) | 2001-03-15 | 2014-09-30 | Stryker Spine | Anchoring member with safety ring |
US9532807B2 (en) | 2001-03-15 | 2017-01-03 | Stryker European Holdings I, Llc | Anchoring member with safety ring |
CN107257525A (en) * | 2013-03-28 | 2017-10-17 | 三星电子株式会社 | Portable terminal and in portable terminal indicate sound source position method |
WO2014199263A1 (en) | 2013-06-10 | 2014-12-18 | Honeywell International Inc. | Frameworks, devices and methods configured for enabling display of facility information and surveillance data via a map-based user interface |
EP3008529A4 (en) * | 2013-06-10 | 2017-03-15 | Honeywell International Inc. | Frameworks, devices and methods configured for enabling display of facility information and surveillance data via a map-based user interface |
EP2819108A1 (en) * | 2013-06-24 | 2014-12-31 | Panasonic Corporation | Directivity control system and sound output control method |
US9747454B2 (en) | 2013-06-24 | 2017-08-29 | Panasonic Intellectual Property Management Co., Ltd. | Directivity control system and sound output control method |
Also Published As
Publication number | Publication date |
---|---|
EP2454725A1 (en) | 2012-05-23 |
WO2011008099A1 (en) | 2011-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2276007A1 (en) | Method and system for remotely guarding an area by means of cameras and microphones. | |
CN102737480B (en) | Abnormal voice monitoring system and method based on intelligent video | |
WO2021023667A1 (en) | System and method for assisting selective hearing | |
US9652961B2 (en) | Alarm notifying system | |
US8704893B2 (en) | Ambient presentation of surveillance data | |
KR20160044363A (en) | Apparatus and Method for recognizing horn using sound signal process | |
DE102009045977A1 (en) | Mobile device, security device with a mobile device and use of a mobile device in a security system | |
EP3945729A1 (en) | System and method for headphone equalization and space adaptation for binaural reproduction in augmented reality | |
JP2007264436A (en) | Sound masking device, sound masking method, and program | |
US10878688B2 (en) | Monitoring system and monitoring method | |
JP2007034238A (en) | On-site operation support system | |
JP6447976B2 (en) | Directivity control system and audio output control method | |
KR20160072678A (en) | Real time monitoring system for prevention of crime and violence | |
US20230112743A1 (en) | System and method to provide emergency alerts | |
CN100483471C (en) | Signalling system with imaging sensor | |
JP6569853B2 (en) | Directivity control system and audio output control method | |
CN203120061U (en) | Remote security monitoring system | |
KR101153191B1 (en) | System and method for preventing crime in parking lot using sound recognition | |
Nilsson | Design of fire alarms: Selecting appropriate sounds and messages to promote fast evacuation | |
JP4434720B2 (en) | Intercom device | |
KR20170007875A (en) | Apparatus for helping safety and System thereof Using Image Shooted by CCTV and Judgment Unusual sound | |
JP2007104546A (en) | Safety management apparatus | |
US12050840B2 (en) | Fire panel audio interface | |
DE102017011315B3 (en) | Alarm-enabled microphone | |
JP2010087865A (en) | Signal-working apparatus and signal-reconstructing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA RS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20110720 |