EP2115698A1 - Surveillance method and system using optimized object-based rule checking - Google Patents

Surveillance method and system using optimized object-based rule checking

Info

Publication number
EP2115698A1
EP2115698A1 (application EP07709185A)
Authority
EP
European Patent Office
Prior art keywords
data
sensor
analysis
rule set
object list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07709185A
Other languages
German (de)
English (en)
Inventor
Mark Bloemendaal
Jelle Foks
Johannes Steensma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ultrawaves design holding BV
Original Assignee
Ultrawaves design holding BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ultrawaves design holding BV filed Critical Ultrawaves design holding BV
Publication of EP2115698A1 (fr)
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B 13/19613 Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19639 Details of the system layout
    • G08B 13/19652 Systems using zones in a single scene defined for different treatment, e.g. outer zone gives pre-alarm, inner zone gives alarm

Definitions

  • the present invention relates to a surveillance method for monitoring a location, comprising acquiring sensor data from at least one sensor and processing sensor data from the at least one sensor. Furthermore, the present invention relates to a surveillance system.
  • US 2003/0163289 describes an object monitoring system comprising multiple cameras and associated processing units.
  • Each processing unit processes the video data originating from its associated camera, together with data from further sensors, and generates trigger signals relating to a predetermined object under surveillance.
  • A master processor is present, comprising agents which analyze the trigger signals for a specific object and generate an event signal.
  • The event signals are monitored by an event system, which determines, based on the event signals, whether or not an alarm condition exists.
  • The system is particularly suited to monitoring static objects, such as paintings and artworks in a museum, and e.g. detecting their sudden disappearance (theft).
  • a surveillance method according to the preamble defined above is provided, in which the sensor data is processed in order to obtain an extracted object list, at least one virtual object is provided, and at least one rule set is applied, the at least one rule set defining possible responses depending on the extracted object list and the at least one virtual object.
  • processing sensor data comprises indexing (or annotating) objects in the extracted object list to obtain an indexed extracted object list with a predefined set of index parameters.
  • the at least one rule set comprises first type of rules relating to the predefined set of index parameters, and second type of rules.
  • Applying the at least one rule set comprises reducing the indexed extracted object list in size by applying the first type of rules using the predefined set of index parameters, and subsequently applying the second type of rules on the reduced extracted object list.
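  • By way of illustration, the following is a minimal Python sketch of this two-phase rule checking. All names, the dict-based object representation and the example rules are hypothetical, not taken from the patent:

```python
# Two-phase rule checking (illustrative sketch, hypothetical field names).
# First-type (logical) rules are cheap predicates on index parameters;
# second-type (analysis) rules run only on the reduced object list.

def apply_rule_set(indexed_objects, logical_rules, analysis_rules):
    # Phase 1: reduce the indexed extracted object list in size.
    reduced = [obj for obj in indexed_objects
               if all(rule(obj) for rule in logical_rules)]
    # Phase 2: apply the expensive rules on the reduced list only.
    responses = []
    for obj in reduced:
        for rule in analysis_rules:
            response = rule(obj)  # returns a response string or None
            if response is not None:
                responses.append(response)
    return responses

logical_rules = [
    lambda o: o["classification"] == "car",   # index parameter check
    lambda o: o["velocity"] > 0.0,
]
analysis_rules = [
    lambda o: (f"warning: object {o['id']} crossed perimeter"
               if o.get("crossed_perimeter") else None),
]

objects = [
    {"id": 1, "classification": "car", "velocity": 5.2, "crossed_perimeter": True},
    {"id": 2, "classification": "human", "velocity": 1.1},
]
print(apply_rule_set(objects, logical_rules, analysis_rules))
# -> ['warning: object 1 crossed perimeter']
```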
  • the virtual object is e.g. a virtual fence referenced to the sensor data characteristic (e.g. a box or line in video footage), but may also be of a different nature, e.g. the sound of a breaking glass window.
  • the applying of rules may result in a response, e.g. generating a warning.
  • the extracted object list comprises all objects in a sensor data stream, e.g. all objects extractable from a video data stream. This is in contrast to prior art systems, where only objects in a predefined region-of-interest are extracted (thus losing information), or other systems, where only objects which generate a predefined event are extracted and further processed (e.g. tracked).
  • the objects in the extracted object list are also indexed or annotated, preferably during or directly after object extraction, which allows a more efficient retrieval of objects from the extracted object list.
  • The method, in a further embodiment, comprises analyzing the at least one rule set to determine the first type of rules and the second type of rules. This allows an operator to define a rule set in a high-level language, easily understandable for a human operator, which is then divided into first and second type rules, beneficial for efficient application of the method on the indexed extracted object list.
  • the method comprises storing results of applying the at least one rule set for application of a further rule set. Also, intermediate results may be stored for later re-use in another application of a further rule set.
  • the extracted object list is updated depending on the update rate of the at least one sensor.
  • This allows dynamic application of the rule set, in which instant action can be taken when desired or needed.
  • the indexing of objects may be applied at the update rate or a lower frequency. This may vary from frame by frame indexing of objects, up to a limited time frame (e.g. from occurrence of an object up to disappearance of the same object), but may also apply to the entire set of sensor data (e.g. indexing after all objects are extracted and classified).
  • the extracted object list may be stored in a further embodiment, and the at least one rule set may then be applied later in time.
  • This allows a rule set to be defined depending on what is actually searched for, which is advantageous for research and police work, e.g. when re-assessing a recorded situation.
  • This embodiment also allows a rule set to be adapted and the analysis immediately rerun to check for an improved response.
  • the method in a further embodiment comprises re-ordering the indexed extracted object list. This may e.g. be applied to optimize the stored indexed extracted object list for quick and efficient storage access.
  • the at least one rule set comprises multiple, independent rule sets. This allows the surveillance method to be used in a multi-role fashion, in parallel operation (i.e. in real time if needed).
  • Indexing objects in the extracted object list comprises in a further embodiment determining for each extracted object in the extracted object list associated object attributes, such as classification, color, texture, shape, position, velocity.
  • the object attributes may be different for other sensor types, e.g. in the case of audio sensors, the attributes may include frequency, frequency content, amplitude, etc.
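  • As an illustration of such an indexed entry, a possible (purely hypothetical) representation of one object in the extracted object list, mirroring the attributes named above:

```python
# Hypothetical record for one entry of the indexed extracted object list;
# the patent does not prescribe a schema, these fields only mirror the
# attributes mentioned above plus free-form annotations.
from dataclasses import dataclass, field

@dataclass
class ExtractedObject:
    object_id: int
    first_seen: float                # time stamp of first detection (s)
    classification: str              # e.g. "human", "vehicle", "aircraft"
    confidence: float                # likelihood the classification is correct
    color: str = ""
    texture: str = ""
    shape: str = ""
    position: tuple = (0.0, 0.0)     # image coordinates (pixels)
    velocity: tuple = (0.0, 0.0)     # displacement per frame (pixels)
    annotations: dict = field(default_factory=dict)  # further index parameters

obj = ExtractedObject(1, 12.4, "vehicle", 0.83, color="red",
                      position=(412.0, 108.0), velocity=(3.1, -0.2))
```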
  • Obtaining the extracted object list may comprise consecutive operations of data enhancement (e.g. image enhancement), object finding, object analysis and object tracking. As a result, an extracted object list is obtained, which may be used further in the present method.
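  • The four consecutive operations could be chained as in the following sketch, in which each stage is only a placeholder for the techniques discussed later with reference to Figs. 4-7:

```python
# Placeholder pipeline for obtaining the extracted object list; each stage
# stands in for the techniques described in the detailed embodiments.

def enhance(frame):          # noise reduction, stabilization, contrast
    return frame

def find_objects(frame):     # edge/texture/motion analysis -> candidate regions
    return []

def analyze(candidates):     # color/texture/form analysis, correlators
    return candidates

def track(objects, state):   # identity + trajectory analysis across frames
    return objects, state

def extract_object_list(frames):
    state, extracted = {}, []
    for frame in frames:
        candidates = find_objects(enhance(frame))
        identified, state = track(analyze(candidates), state)
        extracted.extend(identified)
    return extracted
```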
  • In an embodiment where the sensor data comprises video data, the data enhancement comprises one or more of the following data operations: noise reduction; image stabilization; contrast enhancement.
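  • As a sketch of what such an enhancement stage might look like (assuming OpenCV and grayscale frames; image stabilization is omitted for brevity, and the parameter values are arbitrary examples):

```python
# Sketch of the enhancement stage (assumes OpenCV; grayscale input frames).
import cv2

def enhance_frame(gray_frame):
    # Noise reduction (cf. functional block 311 in Fig. 4).
    denoised = cv2.fastNlMeansDenoising(gray_frame, None, h=10,
                                        templateWindowSize=7,
                                        searchWindowSize=21)
    # Contrast enhancement (cf. block 313), here via adaptive histogram
    # equalization as one possible technique.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```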
  • Object finding in a further embodiment of the present method comprises one or more of the group of data operations comprising: edge analysis, texture analysis; motion analysis; background compensation.
  • Object analysis comprises one or more of the group of data operations comprising: colour analysis, texture analysis; form analysis; object correlation.
  • Object correlation may include classification of an object (human, vehicle, aircraft,...) with a percentage score representing the likelihood that the object is correctly classified.
  • Object tracking may comprise a combination of identity analysis and trajectory analysis.
  • the present invention relates to a surveillance system comprising at least one sensor and a processing system connected to the at least one sensor, in which the processing system is arranged to execute the surveillance method according to any one of the present method embodiments.
  • the processing system, in a further embodiment, comprises a local processing system located in the vicinity of the at least one sensor, and a central processing system located remotely from the at least one sensor, in which the local processing system is arranged to send the extracted object list (with annotations) to the central processing system.
  • the local processing system further comprises a storage device for storing raw sensor data or preprocessed sensor data.
  • a lossless video coding technique is used (or a high quality compression technique, e.g. MPEG4 coding).
  • the present surveillance system may in an embodiment further comprise at least one operator console arranged for controlling the surveillance system.
  • the at least one operator console comprises a representation device, which is arranged to represent simultaneously the sensor data, objects from the extracted object list and at least one virtual object in overlay. This overlay may be used in live monitoring using the present surveillance system, but also in a post-processing mode of operation, e.g. when fine-tuning the rule sets.
  • FIG. 1 shows a schematic view of a surveillance system according to an embodiment of the present invention
  • Fig. 2 shows a schematic view of a surveillance system according to a further embodiment of the present invention
  • FIG. 3 shows a schematic view of the processing flows according to an embodiment of the present surveillance method
  • Fig. 4 shows a schematic view in more detail of a part of the flow diagram of Fig. 3;
  • Fig. 5 shows a schematic view in more detail of a further part of the flow diagram of Fig. 3;
  • Fig. 6 shows a schematic view in more detail of a further part of the flow diagram of Fig. 3;
  • Fig. 7 shows a schematic view in more detail of a further part of the flow diagram of Fig. 3;
  • Fig. 7a shows a flow chart of an embodiment in which the rule set is applied in multiple steps
  • Fig. 8 shows a schematic view of the processing steps in the live rule checking embodiment of the present method
  • Fig. 9 shows a schematic view of the processing steps in the post-processing rule checking embodiment of the present method
  • Fig. 10 shows a view of a first application of the present method to detect intruders
  • Fig. 11 shows a view of a second application of the present method to monitor an aircraft platform
  • Fig. 12 shows a view of a third application of the present method relating to traffic management.
  • a surveillance method and system are provided for monitoring a location (or group of locations), in which use can be made of multi-sensor arrangements, distributed or centralized intelligence.
  • the implemented method is object oriented, allowing transfer of relevant data in real time while requiring only limited bandwidth resources.
  • the present invention may be applied in monitoring systems, guard systems, surveillance systems, sensor research systems, and other systems which provide detailed information on the scenery in an area to be monitored.
  • A schematic diagram of a centralized embodiment of such a system is shown in Fig. 1.
  • a number of sensors 14 are provided, which are interfaced to a network 12 using dedicated interface units 13.
  • a central processing system 10 is connected to the network 12, and to one or more operator consoles 11, equipped with input devices (keyboard, mouse, etc.) and displays as known in the art.
  • the network 12 may be a dedicated or an ad-hoc network, and may be wired, wireless, or a combination of both.
  • the processing system 10 comprises the required interfacing circuitry, and one or more processors, such as CPUs, DSPs, etc., and associated devices, such as memory modules, which as such are known to the person skilled in the art.
  • In Fig. 2, a distributed embodiment of the intelligence of a surveillance system is shown.
  • the sensor 14 is connected to a local processing system 15, which interfaces to the network 12.
  • the one or more operator console(s) 11 may be directly interfaced to the network 12.
  • Multiple sensors 14 and associated local processing systems 15 may be present in an actual surveillance system.
  • the operator console(s) 11 are connected to the network 12 via a central processing system (not shown, but similar to processing system 10 of the embodiment of Fig. 1).
  • the local processing system 15 comprises a signal converter 16, e.g. in the form of an analog to digital converter, which converts the analog signal(s) from the sensor 14 into a digital signal when necessary.
  • Processing of the digitized signal is performed by the processing system 17, which as in the previous embodiment, may comprise one or more processors (CPU, DSP, etc.) and ancillary devices.
  • the processor 17 is connected to a further hardware device 18, which may be arranged to perform compression of output data, and other functions, such as encryption, data shaping, etc., in order to allow data to be sent from the local processing system 15 into the network 12.
  • the processor 17 is connected to a local storage device 19, which is arranged to store local data (such as the raw sensor data and locally processed data).
  • Data from the local storage device 19 may be retrieved upon request, and sent via the network 12.
  • the sensors 14 may comprise any kind of sensor useful in surveillance applications, e.g. a video camera, a microphone, switches, etc.
  • a single sensor 14 may include more than one type of sensor, and provide e.g. both video data and audio data.
  • the full frame of video footage is used for object extraction (i.e. all sensor data), and not only a part of the footage (a region of interest), or only objects which generate certain predefined events, as in existing systems.
  • Object detection is accomplished using motion, texture, and contrast in the video data.
  • an extensive characterization of objects is obtained, such as color, dimension, shape, speed of an object, allowing more sophisticated classification (e.g. human, car, bicycle, etc.).
  • rules may be applied which implement a specific surveillance function.
  • behavior rule analysis may be performed, allowing a fast evaluation on complete lists of objects, or a simple detection of complex behavior of actual objects.
  • a multi-role/multi-camera analysis in which surveillance data may be used for different purposes using different rules.
  • the analysis rules may be changed after a first video analysis, and new results may be obtained without requiring processing of the raw video data anew.
  • In Fig. 3, a functional flow diagram is shown of an embodiment of the surveillance method according to the present invention.
  • the video signal from the sensor 14 is converted into a digital signal in block 20.
  • the digitized video is further processed in two parallel streams.
  • the left stream implements the necessary processing for live video review and recording of the video data.
  • the digitized video data is compressed in compression block 21, and then stored in a video data store 22.
  • a lossless compression method is used, or a high quality compression technique such as MPEG4 coding, as this allows all stored data to be retrieved in its original form (or at sufficient quality) at a later moment in time.
  • the video data store may be part of the local storage device 19 as shown in the Fig. 2 embodiment, and may include time stamping data (or any other kind of referencing/indexing data). Stored video data may be retrieved at any time, e.g. under the control of the operator console 11, to be able to retrieve the actual imagery of a surveillance site.
  • the right stream in the flow diagram of Fig. 3 shows the functional blocks used to obtain object data from the surveillance video data.
  • the video is enhanced using image enhancement techniques in functional block 31.
  • objects are extracted or found in functional block 32.
  • the found objects may then be analyzed in functional block 33.
  • In functional block 34, objects are tracked in the subsequent images of a video sequence.
  • the object data output from the object tracking functional block 34 may be submitted to rules in a live manner in functional block 40.
  • the object and associated data (characteristics, annotations), i.e. the extracted object list, are also stored in an object data storage 45 (e.g. the local storage device 19 as shown in Fig. 2, or a central storage device, e.g. part of the processing system 10 of Fig. 1).
  • data may be retrieved (e.g. using structured queries) by functional block 46, in which the recorded objects are submitted to rule checking.
  • the rule set uses predefined virtual objects, e.g. virtual fences/perimeters/lines in a video scenery, and the rules may use the mutual relationship of the virtual objects and the detected objects to provide predefined responses.
  • the responses may include, but are not limited to providing warnings, activation of other devices (e.g. other sensors 14 in vicinity), or control of the sensors 14 in use (e.g. controlling pan-tilt-zoom of a camera).
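  • For instance, a rule testing whether a tracked object has crossed a virtual fence can reduce to a segment intersection test between the object's displacement and the fence line; a small self-contained sketch (coordinates and fence placement are invented for the example):

```python
# Virtual fence crossing as a segment intersection test.

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Standard orientation test for proper intersection of two segments.
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2) and
            _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

fence = ((100, 0), (100, 480))             # virtual line in image coordinates
prev_pos, cur_pos = (90, 200), (110, 205)  # object position in two frames
if segments_intersect(prev_pos, cur_pos, *fence):
    print("response: object crossed virtual fence -> generate warning")
```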
  • In Fig. 4 it is shown that the video signal is first converted into the digital domain in analog to digital conversion functional block 20.
  • Analog to digital conversion of video signals (and signals from other types of sensors) is well known in the art.
  • Various methods implemented in hardware, software or a combination of both may be used. In an exemplary embodiment, this results in a digitized video data stream with 25 frames/sec, corresponding to about 10 Mpixel/sec (or a 160 Mbit/s data rate).
  • this digitized video data is compressed in compression functional block 21, e.g. in MPEG4 compression block 211.
  • This compressed data stream may be used for live viewing of the video footage, but also for recording (locally or at a central location).
  • the image enhancement functional block 31 is shown in more detail on the right side of Fig. 4.
  • the raw video data is subjected to a noise reduction in functional block 311, and then to a digital image stabilization functional block 312.
  • the video data is subjected to a contrast enhancement functional block 313. All the mentioned functions are known as such to the person skilled in the art, and again, the functional blocks 311-313 may be implemented using hardware and/or software implementations. It is noted that the video data is still at 10 Mpixel/sec (160 Mbit/s) at this point.
  • Fig. 5 shows the object finding functional block 32 in more detail.
  • the video data is subjected to one or more of a number of functions or algorithms, which may include, but are not limited to, an edge analysis block 321 arranged to detect edges in the video data, a texture analysis block 322 arranged to detect areas with a similar texture, a motion analysis block 323 arranged to detect motion of (blocks) of pixels in subsequent images, and background compensation block 324 arranged to take away any possible disturbing background pixels. From all these functional blocks 321-324, areas of possible objects may be determined in functional block 325. For all the detected objects, furthermore an object shape analysis block 326 may be used to determine the shape of each object.
  • the result of this object finding functional block 32 is an object list, which is updated 25 times per second in the example given. For each object, the position in the picture, boundaries and velocity are available. All the mentioned functions are known image analysis techniques as such, and again, the functional blocks 321-326 may be implemented using hardware and/or software implementations. It is noted that at this stage, the data information flow is already at a much reduced rate, i.e. orders of magnitude smaller than the original video data at 160 Mbit/s.
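  • A rough illustration of such an object finding pass, here approximated with OpenCV background subtraction and contour extraction (a stand-in for blocks 321-326, not the patent's specific algorithms):

```python
# Illustrative object finding: background compensation (cf. block 324)
# followed by contour extraction as a stand-in for blocks 321-326.
# Assumes OpenCV 4.x, where findContours returns (contours, hierarchy).
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()  # learned background model

def find_objects(frame):
    mask = subtractor.apply(frame)        # foreground = moving pixels
    mask = cv2.medianBlur(mask, 5)        # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes approximate the object areas (cf. block 325); shape
    # analysis (cf. block 326) could refine these further.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```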
  • In Fig. 6, the object analysis functional block 33 is shown in more detail. From the object list with (in the given example) 25 updates/sec, a large number of characteristic features of each of the objects may be derived. For this, a number of functional blocks are used, which again may be implemented in hardware and/or software. The (non-limitative) characteristics relate to color (block 331), texture (block 332) and form (block 333) analysis, and a number of correlator functional blocks.
  • the human being correlator block 334 determines the likelihood that an object is a human (with an output as, e.g., a percentage score). Further correlation functional blocks indicated are the vehicle correlator functional block 335, and a further correlator functional block 336 (e.g. an aircraft correlator). The output of these functional blocks is combined in object annotation functional block 337, in which the various characteristics are assigned to the associated object in an annotated object list.
  • In Fig. 7, further details of the object tracking functional block 34 are shown schematically.
  • an identity analysis and a trajectory analysis are performed in functional blocks 341, and 342, respectively.
  • the output thereof is received by identified object functional block 343, which then outputs an identified (extracted) object list, which has an update rate of 25 updates/sec.
  • the objects may then be stored or logged in the object database 45 as discussed above (e.g. an SQL database), or transferred to the live rule checking function, indicated as live intelligence application triggering in Fig. 7.
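  • Identity and trajectory analysis can be approximated by a nearest-centroid tracker, as in the following simplified sketch (real systems would use more robust association, e.g. appearance features or Kalman filtering):

```python
# Nearest-centroid tracker as a toy version of identity analysis (block 341)
# plus trajectory analysis (block 342). Detections are (x, y) centroids.
import math
from itertools import count

_new_ids = count(1)

def track(prev_tracks, detections, max_dist=50.0):
    # prev_tracks: {object_id: [(x, y), ...]} trajectories so far
    tracks = {}
    for det in detections:
        best_id, best_d = None, max_dist
        for oid, traj in prev_tracks.items():
            d = math.dist(traj[-1], det)
            if d < best_d:
                best_id, best_d = oid, d
        # Same identity if close enough to a known trajectory, else new id.
        oid = best_id if best_id is not None else next(_new_ids)
        tracks[oid] = prev_tracks.get(oid, []) + [det]
    return tracks  # object_id -> trajectory: an identified object list
```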
  • the annotations found in the functional blocks described with reference to Figs. 5, 6 and/or 7 may be used to index the extracted objects. This allows the extracted object list from the original (video) data stream to be divided into a number of groups depending on index parameters. E.g. it is possible to select only the objects which are more than likely (>75% confidence) a car, and to apply further rule sets to this (smaller) set of object data. This reduces the capacity required for transferring the data to be searched, and for processing the data.
  • This embodiment is shown in the flow diagram of Fig. 7a.
  • the extracted object list, i.e. the collection of objects extracted from an entire set of sensor data (e.g. 12 hours of video data), is represented as a whole collection.
  • indexing is applied to the extracted object list, resulting in various sets of object data characterised by one or more indexing parameters, as shown by reference number 102.
  • the indexing is used to reduce the set of (annotated) objects to a smaller group of objects 104, by first applying logical rules from the set of rules (e.g. objects with the color red). Only this smaller object data set 104 is then used to apply the more resource intensive graphical rules (e.g. finding objects crossing a virtual perimeter) in block 105, eventually resulting in the desired result 106 of the search query.
  • the indexing of extracted objects is used primarily to enable acceleration of later applications of rule sets. Indexing splits the extracted object data set stored in the object data storage 45, such that groups of (partly overlapping) objects are created, which allows to read only a part of the extracted object data for further rule checking.
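  • Assuming the object data storage 45 is an SQL database (as suggested above), the first-type rules can map onto a cheap WHERE clause, so that only the matching rows reach the expensive second-type rules; a hypothetical SQLite sketch (table layout and values are invented):

```python
# Hypothetical SQLite-backed object data storage 45: the logical (first
# type) rules become a cheap WHERE clause, and only the returned
# candidates would be passed on to the expensive graphical rules.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE objects
                (id INTEGER, classification TEXT, confidence REAL,
                 color TEXT, first_seen REAL)""")
conn.execute("INSERT INTO objects VALUES (1, 'car', 0.91, 'red', 37.5)")
conn.execute("INSERT INTO objects VALUES (2, 'human', 0.88, '', 40.1)")

# Logical rules: select objects that are more than likely (>75%) a car.
rows = conn.execute(
    "SELECT id, first_seen FROM objects "
    "WHERE classification = 'car' AND confidence > 0.75").fetchall()

# Graphical rules would now run only over these candidates, e.g. loading
# their trajectories and testing virtual-perimeter crossings.
for object_id, first_seen in rows:
    print(f"candidate object {object_id}, first seen at t={first_seen}s")
```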
  • An example of such an indexing method in block 103 is annotating an object with a time stamp, e.g. when the object is first detected in the sensor data.
  • Another example is annotating extracted objects with object fingerprints, e.g. biometric data of persons.
  • the indexing of objects may include, but is not limited to small/large (e.g.
  • the indexing may be used to quickly discard objects that e.g. have not been within a certain (video image) distance from another, stationary object.
  • the indexing may be applied by determining a value for a predetermined indexing parameter, e.g. the likelihood of an object being a human, a car, a truck, etc.
  • Indexing may be implemented at various stages of the data processing described in the embodiments above. In some cases, indexing may already be performed on the digital sensor data directly, e.g. when analyzing frame pixel data of a video sensor. For other types of analysis, e.g. multiple frames of a video sequence are necessary, e.g. for determining a direction of motion of an object. Also, indexing may be executed when all video footage of a predefined period has been processed, e.g. to optimize further the access to object data storage 45 using all available indexes.
  • the method as described above may be implemented for a single camera, but also for a large number of cameras and sensors 14.
  • the rule checking output may include more complex camera control operations, such as pan-tilt-zoom operations of a camera, or handover to another camera.
  • the functions described above may in this case be implemented locally in the camera 14 (see the exemplary embodiment of Fig. 2), such that each video stream is processed locally, and only the object data has to be transferred over the network 12.
  • In Fig. 8, a more detailed schematic is shown of the rule checking functional block 40 of Fig. 3. Extracted object lists of all cameras 14 in the surveillance system are input (in real time) to the rule checking functional block 40.
  • In this functional block 40, one or more rule set functional blocks 401-403 may be present, which each provide their associated response.
  • For rule checking on recorded object data, a structure as shown schematically in Fig. 9 may be used.
  • the extracted object lists of each camera (A, B, C) are retrieved from the object database 45, and one or more rule sets may be applied to one or all of the object lists in functional blocks 461-463. Again, each rule set provides its own response.
  • the rule sets may be changed instantly (due to changing circumstances, or as a result of one of the rule sets), and the resulting response of the surveillance system is also virtually instantaneous.
  • the rule sets may be fine-tuned, and after each amendment, the same extracted object list data may be used again to see whether the fine-tuning provides a better result.
  • a rule set is advantageously divided into two types of rules, i.e. a first type of rules (logical rules) and a second type of rules ((graphical) analysis rules).
  • the logical rules relate to one or more of the indexing parameters (or annotations), and allow a group of relevant objects to be selected from the extracted object list.
  • the analysis rules require more processing intensive functions, e.g. involving graphical processing such as determining objects in a period of video footage which cross a virtual perimeter.
  • the operator console 11 may be arranged to implement this re-arranging of the set of rules in a query, allowing the operator to enter a query in a higher level language (understandable by humans), which is then optimized according to the present invention embodiments.
  • results of a query input by an operator may be stored locally, for further fine-tuning of a search query, or for later re-use in other queries. This furthermore may enhance the execution speed of a search query.
  • When executing a search involving a large amount of (video) data, usually a number of queries are performed in sequence. E.g. first a search query may be executed to find all objects crossing a line X, resulting in a first result. This result may then be added to the respective objects as a new annotation (or index).
  • these indices may advantageously be used to further improve the search query.
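  • A small sketch of this re-use of query results as new indices (the object layout and line-crossing test are invented for the example):

```python
# Re-using an expensive query result as a new annotation (index), so a
# follow-up query can filter on the stored flag instead of re-running
# the graphical analysis.

objects = [
    {"id": 1, "trajectory": [(90, 200), (110, 205)]},
    {"id": 2, "trajectory": [(10, 10), (12, 11)]},
]

def crosses_line_x(trajectory, x=100):
    return any(a[0] < x <= b[0] or b[0] < x <= a[0]
               for a, b in zip(trajectory, trajectory[1:]))

# First (expensive) query: compute and store the result as an annotation.
for obj in objects:
    obj["crossed_line_x"] = crosses_line_x(obj["trajectory"])

# Later queries reduce to a cheap logical-rule filter on the new index.
suspects = [o for o in objects if o["crossed_line_x"]]
print(suspects)  # -> only object 1
```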
  • FIG. 10 shows a camera frame (indicated by dashed line) with virtual fences and virtual lines in an image from a video camera.
  • a building is located at the actual surveillance site.
  • the camera is viewing along a road (within zone A), which is bordered by a roadside (within zone B).
  • Next to the roadside, a trench is located, the middle of which is indicated by the virtual fence line C.
  • a further virtual fence line E is located.
  • the rules applied in the live rule checking functional block 40 or in the recorded object rule checking functional block 46, and the possible responses, may look like:
  • Object X in public Zone A - No suspect situation: public area
  • Object X in Zone D - Intruder alert
  • Object X disappears in Zone D - ...
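  • Such a zone rule list could be encoded as condition/response pairs, as in this hypothetical sketch (in_zone is a placeholder for a point-in-polygon test against the virtual fence zones):

```python
# Hypothetical encoding of the zone rules above as condition/response pairs.

def in_zone(obj, zone):
    # Placeholder: in practice a point-in-polygon test on the virtual fence.
    return zone in obj.get("zones", ())

RULES = [
    (lambda o: in_zone(o, "A"), "no suspect situation: public area"),
    (lambda o: in_zone(o, "D"), "intruder alert"),
]

def check(obj):
    return [response for condition, response in RULES if condition(obj)]

print(check({"id": "X", "zones": ("D",)}))  # -> ['intruder alert']
```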
  • a further example is shown for a surveillance system in an airport environment.
  • An aircraft parking zone on an airfield is indicated by the virtual fence Zone P inside a camera frame (indicated by dashed line).
  • the following responses are executed: wait until the object X stops; identify as aircraft (according to shape and size of object X); after n minutes of standstill, apply virtual object fence zones A and B (indicated by Zones 1A, 2A, 3A and 1B, 2B, 3B in Fig. 11 for three different objects); and after m minutes activate "Aircraft Security Rules" for Aircraft #.
  • Aircraft Security Rules for each Aircraft # (# being 1, 2, or 3 in Fig. 11) on the aircraft parking zone may have the following form:
  • a view is shown in Fig. 12 with virtual fences for a traffic measurement and safety application.
  • a roadside is located in the actual location, along which a number of parking spaces are provided, which scenery is viewed in a camera frame indicated by a dashed line.
  • a virtual fence Zone B is raised on the roadside, and a virtual fence Zone D is raised around the parking spaces.
  • a first line A is drawn across the road in the distance, and a second line C is drawn across the road nearer to the camera position.
  • a number of rules with different purpose may be set.
  • a first rule set allows to assist in traffic management:
  • a second rule set may be applied related to safety:
  • In the embodiments described above, the sensors are chosen to provide video data. However, other sensors may also be applied, such as audio sensors (microphones) or vibration sensors, which are also able to provide data which can be processed to obtain extracted object data.
  • a virtual object may e.g. be 'Sound of breaking glass' and the rule may be: if an object is 'Sound of breaking glass', then activate the nearest camera to instantly view the scene.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Surveillance method and system for monitoring a location. One or more sensors (14), such as a camera, are used to acquire sensor data from the location. The sensor data is processed to obtain an extracted object list (100) comprising object attributes. A number of virtual objects, such as a virtual fence, is defined, and a rule set is applied. The rule set defines possible responses depending on the extracted object list and the virtual objects. The rule sets can be adapted, and modified responses can be evaluated immediately. A part of the rule set is applied (103) in order to reduce the amount of data to which the remainder of the rule set, for which more complex processing is needed, is applied (105).
EP07709185A 2007-01-31 2007-01-31 Surveillance method and system using optimized object-based rule checking Withdrawn EP2115698A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/NL2007/050037 WO2008094029A1 (fr) 2007-01-31 2007-01-31 Surveillance method and system using optimized object-based rule checking

Publications (1)

Publication Number Publication Date
EP2115698A1 true EP2115698A1 (fr) 2009-11-11

Family

ID=38515452

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07709185A 2007-01-31 2007-01-31 Surveillance method and system using optimized object-based rule checking Withdrawn EP2115698A1 (fr)

Country Status (2)

Country Link
EP (1) EP2115698A1 (fr)
WO (1) WO2008094029A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014039050A1 (fr) * 2012-09-07 2014-03-13 Siemens Aktiengesellschaft Methods and apparatuses for establishing entry/exit criteria for a secured location
CN104077311B (zh) 2013-03-28 2017-11-14 International Business Machines Corporation Vehicle position indexing method and apparatus
JP2016197795A (ja) * 2015-04-03 2016-11-24 Hitachi Automotive Systems, Ltd. Imaging device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050169367A1 (en) * 2000-10-24 2005-08-04 Objectvideo, Inc. Video surveillance system employing video primitives

Also Published As

Publication number Publication date
WO2008094029A1 (fr) 2008-08-07

Similar Documents

Publication Publication Date Title
US20090315712A1 (en) Surveillance method and system using object based rule checking
AU2009243916B2 (en) A system and method for electronic surveillance
JP6088541B2 (ja) Cloud-based video surveillance management system
JP3876288B2 (ja) State recognition system and state recognition display generation method
KR101935399B1 (ko) Wide-area multi-object surveillance system based on a deep neural network algorithm
Trivedi et al. Distributed interactive video arrays for event capture and enhanced situational awareness
KR100973930B1 (ko) Multifunctional unmanned enforcement system comprising surveillance cameras, electronic display boards and broadcasting devices
US20060200307A1 (en) Vehicle identification and tracking system
KR102039277B1 (ko) Pedestrian face recognition system and method
KR102144531B1 (ko) Automatic selective monitoring method based on object metadata using deep-learning video analysis
KR102282800B1 (ko) Multi-target tracking method using lidar and video cameras
KR101492473B1 (ko) User-based context-aware integrated CCTV control system
KR102434154B1 (ko) Method for capturing the position and motion of fast-moving objects in a video surveillance system
JP5047382B2 (ja) System and method for classifying moving objects during video surveillance
US11727580B2 (en) Method and system for gathering information of an object moving in an area of interest
CN110677619A (zh) Intelligent surveillance video processing method
CN117998039A (zh) Video data processing method, apparatus, device and storage medium
EP2115698A1 (fr) Surveillance method and system using optimized object-based rule checking
KR20100077662A (ko) Intelligent video surveillance system and video surveillance method
JP5712401B2 (ja) Behavior monitoring system, behavior monitoring program, and behavior monitoring method
KR101453386B1 (ko) Intelligent vehicle search system and operating method thereof
KR101686851B1 (ko) Integrated control system using CCTV cameras
KR101669885B1 (ko) Image-based automatic pedestrian collision detection method and Internet-of-Things device applying the same
CN114143506A (zh) Workshop monitoring and management system based on smart cameras
JP2005346545A (ja) Monitoring device and program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090831

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20100330

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20110707