US10878689B2 - Methods and systems for detecting intrusions in a monitored volume


Info

Publication number
US10878689B2
Authority
US
United States
Prior art keywords
tridimensional
sensor
local point
point cloud
monitored volume
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/303,440
Other versions
US20200175844A1 (en)
Inventor
Raul BRAVO ORELLANA
Olivier GARCIA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Outsight SA
Original Assignee
Outsight SA
Application filed by Outsight SA
Assigned to DIBOTICS. Assignment of assignors interest (see document for details). Assignors: Bravo Orellana, Raul; Garcia, Olivier.
Assigned to BEYOND SENSING. Nunc pro tunc assignment (see document for details). Assignor: DIBOTICS.
Assigned to OUTSIGHT. Change of name (see document for details). Former name: BEYOND SENSING.
Publication of US20200175844A1
Application granted
Publication of US10878689B2
Legal status: Active (expiration adjusted)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/181 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using active radiation detection systems
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B29/00 Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B29/02 Monitoring continuously signalling or alarm systems
    • G08B29/04 Monitoring of the detection circuits
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/16 Actuation by interference with mechanical vibrations in air or other fluid
    • G08B13/1654 Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems
    • G08B13/1672 Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems using sonic detecting means, e.g. a microphone operating in the audio frequency range
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678 User interface
    • G08B13/19682 Graphic User Interface [GUI] presenting system data to the user, e.g. information on a screen helping a user interacting with an alarm system
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678 User interface
    • G08B13/19691 Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound

Definitions

  • to monitor an intrusion in the monitored volume V, the central processing unit 3 may compare a free space of each aligned local point cloud A with a free space of the global tridimensional map M.
  • the monitored volume V may for instance be divided into a matrix of elementary volumes E, and each elementary volume E may be flagged as "free-space" or "occupied space" on the basis of the global tridimensional map M.
  • the aligned local point cloud A can then be used to determine updated flags for the elementary volumes E contained in the local volume L surrounding a sensor 2.
  • a change in flagging of an elementary volume E from "free-space" to "occupied space", for instance by intrusion of an object O as illustrated on FIG. 1, can then trigger the detection of an intrusion in the monitored volume V by the central processing unit 3.
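As an illustration of this free-space comparison, the sketch below voxelizes point clouds into elementary volumes and reports the voxels whose flag changes from "free-space" to "occupied space". It is a minimal example in Python; the voxel size, function names and set-based representation are assumptions of the example, not features recited by the patent.

```python
import numpy as np

VOXEL = 0.10  # assumed edge length of an elementary volume E, in metres

def occupied_voxels(points, voxel=VOXEL):
    """Flag elementary volumes E as "occupied space": return the set of
    voxel indices containing at least one tridimensional data point."""
    return set(map(tuple, np.floor(points / voxel).astype(int)))

def detect_intrusion(aligned_cloud_a, map_free_voxels, voxel=VOXEL):
    """An intrusion is detected when a voxel flagged as free space in
    the global map M is now occupied in the aligned local cloud A."""
    newly_occupied = occupied_voxels(aligned_cloud_a, voxel) & map_free_voxels
    return sorted(newly_occupied)  # the voxels whose flag changed

# map_free_voxels would be precomputed from the global map M, e.g. all
# voxels of the monitored volume V minus occupied_voxels(map_points).
```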
  • the global tridimensional map M of the monitored volume V can be determined by the monitoring system 1 itself in an automated manner, as will now be described with reference to FIG. 3.
  • the N tridimensional sensors may be located so that the union of the local volumes L surrounding said sensors 2 is a connected space. This connected space forms the monitored volume.
  • by "connected space", it is meant that the union of the local volumes L surrounding the N sensors 2 forms a single space and not two or more disjoint nonempty open subspaces.
  • a global tridimensional map M of the monitored volume V can be determined by first receiving at least one local point cloud C from each of said sensors and storing said local point clouds C in the memory 5 of the system.
  • the central processing unit 3 then performs a simultaneous multi-scans alignment of the stored local point clouds C to generate a plurality of aligned local point clouds A as detailed above.
  • Each aligned local point cloud A is respectively associated to a local point cloud C acquired from a tridimensional sensor 2 .
  • the frames used for the simultaneous multi-scans alignment do not comprise the global tridimensional map M since it has yet to be determined.
  • the frames used for the simultaneous multi-scans alignment may comprise a plurality of M successively acquired point clouds C for each sensor 2 .
  • the M point clouds C acquired by each of the N sensors are thus grouped to form M*N scans to be aligned together by the central processing unit 3 as detailed above.
  • a global coordinate system G is obtained in which the aligned local point clouds A can be compared with one another.
  • the central processing unit 3 can thus merge the plurality of aligned local point clouds A to form a global tridimensional map M of the monitored volume V.
  • the global tridimensional map M is then stored in the memory 5 of the system 1 .
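A minimal sketch of this merging step follows; the voxel-based thinning, which keeps one point per voxel where several sensors observed the same surface, is an assumed implementation detail rather than a requirement of the method.

```python
import numpy as np

def merge_to_global_map(aligned_clouds, voxel=0.05):
    """Merge the aligned local point clouds A of the N sensors into a
    single global tridimensional map M, keeping one point per voxel to
    thin out surfaces seen by several sensors."""
    merged = np.vstack(aligned_clouds)                    # concatenate all A
    keys = np.floor(merged / voxel).astype(int)           # voxel index of each point
    _, keep = np.unique(keys, axis=0, return_index=True)  # first point per voxel
    return merged[np.sort(keep)]                          # the map M (K x 3)

# Usage: map_m = merge_to_global_map([aligned_a1, aligned_a2, aligned_a3])
```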
  • the method may further involve displaying to a user a graphical indication I of the intrusion on a display device 6 .
  • the display device 6 may be any screen, LCD, OLED, and the like, that is convenient for an operator of the system 1 .
  • the display device 6 is connected to, and controlled by, the central processing unit 3 of the system 1 .
  • a bidimensional image B of the monitored volume V may be generated by the processing unit 3 by projecting the global tridimensional map M of the monitored volume V along a direction of observation.
  • the processing unit 3 may then command the display device 6 to display the graphical indication I of the intrusion overlaid over said bidimensional image B of the monitored volume V.
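The projection may be sketched as follows, assuming a top view along the vertical direction Z (another direction of observation can be handled by rotating the map first); the pixel size and the choice of encoding height as pixel value are assumptions of the example.

```python
import numpy as np

def project_map_top_view(map_m, pixel=0.05):
    """Generate a bidimensional image B of the monitored volume V by
    projecting the global map M along the vertical direction Z."""
    xy = map_m[:, :2]                                  # drop the Z coordinate
    ij = np.floor((xy - xy.min(axis=0)) / pixel).astype(int)
    img = np.zeros(tuple(ij.max(axis=0) + 1))
    img[ij[:, 0], ij[:, 1]] = map_m[:, 2]              # encode height as pixel value
    return img  # the graphical indication I can then be overlaid on this image
```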
  • the system 1 may further comprise at least one camera 7 .
  • the camera 7 may be able to directly acquire a bidimensional image B of a part of the monitored volume V.
  • the camera 7 is connected to, and controlled by, the central processing unit 3 of the system 1 .
  • the central processing unit 3 may then command the display device 6 to display the graphical indication I of the intrusion overlaid over the bidimensional image B acquired by the camera 7 .
  • the central processing unit 3 may be able to control the pan, rotation or zoom of the camera 7 so that the detected intrusion can be located in a field of view of the camera 7 .
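For instance, pan and tilt set-points bringing a detected intrusion into the field of view of camera 7 could be computed as below, assuming an idealized pan-tilt camera whose position is known in the global coordinate system G; the function name and angle conventions are assumptions.

```python
import numpy as np

def pan_tilt_towards(camera_pos, intrusion_pos):
    """Pan and tilt angles (radians) aiming camera 7 at a detected
    intrusion, both positions expressed in the global frame G."""
    d = np.asarray(intrusion_pos) - np.asarray(camera_pos)
    pan = np.arctan2(d[1], d[0])                    # rotation about the Z axis
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))   # elevation above the horizontal
    return pan, tilt
```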
  • another object of the invention is a method to determine a tridimensional location of a camera 7 of a self-calibrated monitoring system 1 as described above. This method allows for easy calibration without requiring a manual measurement and input of the position of the camera 7 in the monitored volume V. An embodiment of this method is illustrated in FIG. 4.
  • the camera 7 is provided with at least one reflective pattern 8 .
  • the reflective pattern 8 is such that a data point of said reflective pattern acquired by a tridimensional sensor 2 of the self-calibrated monitoring system 1 can be associated to said camera by the central processing unit 3 of the system 1 .
  • the reflective pattern 8 may be made of a high reflectivity material so that the data points of the reflective pattern 8 acquired by the sensor 2 present a high intensity, for instance an intensity over a predefined threshold intensity.
  • the reflective pattern 8 may also have a predefined shape, for instance the shape of a cross or a circle or “L” markers. Such a shape can be identified by the central processing unit 3 by using commonly known data and image analysis algorithms.
  • in a first step of the method to determine a tridimensional location of a camera 7, the camera is positioned in the monitored volume V.
  • the camera 7 is disposed in at least one local volume L surrounding a sensor 2 of the system 1 , so that the reflective pattern 8 of the camera 7 is in a field of view of at least one sensor 2 of the plurality of N tridimensional sensors.
  • Said at least one sensor 2 is thus able to acquire a local point cloud C comprising at least one tridimensional data point D corresponding to the reflective pattern 8 of the camera 7 .
  • the central processing unit 3 then receives a local point cloud C from said at least one tridimensional sensor and computes an aligned local point cloud A by aligning said local point cloud C with the global tridimensional map M of the self-calibrated monitoring system as detailed above.
  • the central processing unit 3 can then identify at least one data point corresponding to the reflective pattern 8 of the camera 7 .
  • this identification may be conducted on the basis of the intensity of the data points D received from the sensor 2 and/or the shape of high intensity data points acquired by the sensor 2 .
  • This identification may be performed by using known data and image processing algorithms, for instance the OpenCV library.
  • a tridimensional location and/or orientation of the camera in the global coordinate system G of the global tridimensional map M may be determined by the central processing unit 3 on the basis of the coordinates of said identified data point of the reflective pattern 8 of the camera 7 in the aligned local point cloud A.
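A minimal sketch of this identification by intensity thresholding is given below; the threshold value is an assumption, and a real system would also match the predefined shape of the reflective pattern 8 (cross, circle or "L" markers) as described above before accepting the points.

```python
import numpy as np

INTENSITY_THRESHOLD = 0.9  # assumed threshold for the reflective pattern 8

def locate_camera(aligned_points, intensities, thr=INTENSITY_THRESHOLD):
    """Estimate the tridimensional location of camera 7 in the global
    coordinate system G: keep the high-intensity data points returned
    by the reflective pattern 8 and take their centroid."""
    mask = intensities > thr
    if not mask.any():
        return None  # pattern not in the field of view of this sensor
    return aligned_points[mask].mean(axis=0)
```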
  • the underlying concept of the invention can also be used to easily and efficiently extend a volume monitored by a system and a method as detailed above; the self-calibrated nature of the system and method avoids the calibration problems discussed in the background.
  • Another object of the invention is thus a method for extending a volume monitored by a method and system as detailed above.
  • a plurality of N tridimensional sensors 2 respectively monitor at least a part of the monitored volume V and respectively communicate with a central processing unit 3 as detailed above.
  • a global tridimensional map M is associated to the volume V monitored by the N tridimensional sensors 2 as detailed above.
  • the method for extending the volume monitored by system 1 involves first positioning an additional N+1th tridimensional sensor 2 able to communicate with the central processing unit 3, and then determining an updated global tridimensional map M′ of the self-calibrated monitoring system associated to an updated volume V′ monitored by the N+1 tridimensional sensors 2.
  • the additional N+1th tridimensional sensor 2 is similar to the N sensors 2 of the monitoring system 1 and is thus able to acquire a local point cloud C in a local coordinate system S of said sensor 2.
  • This local point cloud C comprises a set of tridimensional data points D of object surfaces in a local volume L surrounding said sensor 2 .
  • the local volume L at least partially overlaps the volume V monitored by the plurality of N tridimensional sensors.
  • the updated global tridimensional map M′ of the self-calibrated monitoring system may then be determined as follows.
  • the central processing unit 3 receives at least one local point cloud C acquired from each of the N+1 tridimensional sensors and stores said local point clouds in the memory 5.
  • the central processing unit 3 performs a simultaneous multi-scans alignment of the stored local point clouds C to generate a plurality of aligned local point clouds A respectively associated to the local point clouds C acquired from each sensor 2 as detailed above.
  • the multi-scans alignment can be computed on a group of scans comprising the global tridimensional map M.
  • the multi-scans alignment can also be computed only on the point clouds C acquired by the sensors 2 .
  • the determination of the updated global tridimensional map M′ is similar to the computation of the global tridimensional map M of the monitored volume V by the monitoring system 1 as detailed above.
  • the central processing unit 3 can then merge the plurality of aligned local point clouds A and, if necessary, the global tridimensional map M, to form an updated global tridimensional map M′ of the updated monitored volume V′.
  • the updated global tridimensional map M′ is then stored in the memory 5 of the system 1 for future use in a method for detecting intrusions in a monitored volume as detailed above.

Abstract

A method for detecting intrusions in a monitored volume in which: N tridimensional sensors acquire local point clouds in respective local coordinate systems; a central processing unit receives the acquired local point clouds and, for each sensor, computes updated tridimensional position and orientation of the sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by the tridimensional sensor with a global tridimensional map of the monitored volume, and generates an aligned local point cloud on the basis of the updated tridimensional position and orientation of the sensor; and the central processing unit monitors an intrusion in the monitored volume by comparing a free space of the aligned local point cloud with a free space of the global tridimensional map.

Description

FIELD OF THE INVENTION
The instant invention relates to methods and systems for detecting intrusions in a 3-dimensional volume or space.
BACKGROUND OF THE INVENTION
The present application belongs to the field of area and volume monitoring for surveillance applications such as safety engineering or site security. In such applications, regular or continuous checks are performed to detect whether an object, in particular a human body, intrudes into a monitored volume, for instance a danger zone surrounding a machine or a forbidden zone in a private area. When an intrusion has been detected, an operator of the monitoring system is notified and/or the installation may be stopped or rendered harmless.
Traditional approaches for area monitoring involve using a 2D camera to track individuals and objects in the spatial area. US 20060033746 describes an example of such camera-based monitoring.
Using a bidimensional camera provides a low-cost and easy-to-setup monitoring solution. However, an important drawback of these approaches lies in the fact that a single camera only gives bidimensional position information and provides no information on the distance of the detected object from the camera. As a result, false alerts may be triggered for distant objects that appear to be lying in the monitored volume but are actually outside of the danger or forbidden zone.
To overcome this problem, it was proposed to use distance or three-dimensional sensors or stereo-cameras to acquire tridimensional information on the individuals and objects located in the monitored spatial area. Such a monitoring system usually comprises several 3D sensors or stereo-cameras spread across the monitored area in order to avoid shadowing effects from objects located inside the monitored volume.
U.S. Pat. Nos. 7,164,116, 7,652,238 and 9,151,446 describe examples of such 3D sensors systems.
In U.S. Pat. No. 7,164,116, each sensor is considered independently, calibrated separately and has its acquisition information treated separately from the other sensors. The operator of the system can then combine the information from several 3D sensors to solve shadowing issues. Calibration and setup of such a system is a time-consuming process since each 3D sensor has to be calibrated independently, for instance by specifying a dangerous or forbidden area separately for each sensor. Moreover, the use of such a system is cumbersome since the information from several sensors has to be mentally combined by the operator.
U.S. Pat. Nos. 7,652,238 and 9,151,446 disclose another approach in which a uniform coordinate system is defined for all 3D sensors of the monitoring system. The sensors are thus calibrated in a common coordinate system of the monitored volume. However, in such systems, the respective position of each sensor with respect to the monitored zone has to be fixed and stable to be able to merge the measurements in a reliable manner, which is often difficult to guarantee over time and results in the need to periodically recalibrate the monitoring system.
Moreover, the calibration process of these systems requires an accurate determination of each sensor's three-dimensional position and orientation, which involves 3D measurement tools and 3D input interfaces that are difficult to manage for a lay operator.
The present invention aims at improving this situation.
To this aim, a first object of the invention is a method for detecting intrusions in a monitored volume, in which a plurality of N tridimensional sensors respectively monitor at least a part of the monitored volume and respectively communicate with a central processing unit, comprising:
    • each sensor of said plurality of N tridimensional sensors acquiring a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping the monitored volume,
    • said central processing unit receiving the acquired local point clouds from the plurality of N tridimensional sensors, storing said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
computing updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory, and
generating an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor,
    • monitoring an intrusion in the monitored volume by comparing a free space of said aligned local point cloud with a free space of the global tridimensional map.
In some embodiments, one might also use one or more of the following features:
    • for each sensor of said at least two tridimensional sensors, the updated tridimensional position and orientation of said sensor in the global coordinate system is computed by performing a simultaneous multi-scans alignment of each point cloud acquired by said sensor with the global tridimensional map of the monitored volume;
    • the updated tridimensional position and orientation of each sensor of said at least two sensors is computed only from the local point clouds acquired by said tridimensional sensor and the global tridimensional map of the monitored volume stored in a memory, and without additional positioning information;
    • the N tridimensional sensors are located so that the union of the local volumes surrounding said sensors is a connected space, said connected space forming the monitored volume,
the global tridimensional map of the monitored volume is determined by
    • receiving at least one local point cloud from each of said at least two tridimensional sensors and storing said local point clouds in a memory,
    • performing a simultaneous multi-scans alignment of the stored local point clouds to generate a plurality of aligned local point clouds respectively associated to the local point clouds acquired from each of said at least two tridimensional sensors, and
    • merging said plurality of aligned local point clouds to determine a global tridimensional map of the monitored volume and storing said global tridimensional map in the memory;
    • the method further comprises displaying to a user a graphical indication of the intrusion on a display device;
    • the method further comprises generating a bidimensional image of the monitored volume by projecting the global tridimensional map of the monitored volume, and commanding the display device to display the graphical indication of the intrusion overlaid over said bidimensional image of the monitored volume;
    • the method further comprises commanding the display device to display the graphical indication of the intrusion overlaid over a bidimensional image of at least a part of the monitored volume acquired by a camera of the self-calibrated monitoring system;
    • the method further comprises orienting the camera of the self-calibrated monitoring system so that the detected intrusion is located in a field of view of the camera.
Another object of the invention is a method for extending a volume monitored by a method as detailed above, in which a plurality of N tridimensional sensors respectively monitor at least a part of the monitored volume and respectively communicate with a central processing unit, comprising:
    • positioning an additional N+1th tridimensional sensor communicating with the central processing unit, the additional N+1th tridimensional sensor acquiring a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and at least partially overlapping the volume monitored by the plurality of N tridimensional sensors,
    • determining an updated global tridimensional map of the self-calibrated monitoring system by
receiving at least one local point cloud acquired from each of said at least two tridimensional sensors and storing said local point clouds in a memory,
performing a simultaneous multi-scans alignment of the stored local point clouds to generate a plurality of aligned local point clouds respectively associated to the local point clouds acquired from each of said at least two tridimensional sensors, and
determining a global tridimensional map of a monitored volume by merging said plurality of aligned local point clouds.
Another object of the invention is a method for determining a tridimensional location of a camera for a self-calibrated monitoring system, in which a plurality of N tridimensional sensors respectively monitor at least a part of the monitored volume and respectively communicate with a central processing unit,
    • providing a camera comprising at least one reflective pattern such that a data point of said reflective pattern acquired by a tridimensional sensor of the self-calibrated monitoring system can be associated to said camera,
    • positioning the camera in the monitored volume, in a field of view of at least one sensor of the plurality of N tridimensional sensors so that said sensor acquires a local point cloud comprising at least one tridimensional data point of the reflective pattern of the camera,
    • receiving a local point cloud from said at least one tridimensional sensor and computing an aligned local point cloud by aligning said local point cloud with the global tridimensional map of the self-calibrated monitoring system,
    • identifying, in the aligned local point cloud, at least one data point corresponding to the reflective pattern of the camera, and
    • determining at least a tridimensional location of the camera in a global coordinate system of the global tridimensional map on the basis of the coordinates of said identified data point of the aligned local point cloud corresponding to the reflective pattern of the camera.
Another object of the invention is a self-calibrated monitoring system for detecting intrusions in a monitored volume, the system comprising:
    • a plurality of N tridimensional sensors respectively able to monitor at least a part of the monitored volume, each sensor of said plurality of N tridimensional sensors being able to acquire a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping the monitored volume,
    • a memory to store said local point cloud and a global tridimensional map of a monitored volume comprising a set of tridimensional data points of object surfaces in a monitored volume, the local volume at least partially overlapping the monitored volume,
    • a central processing unit able to receive the acquired local point clouds from the plurality of N tridimensional sensors, store said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
compute updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory,
generate an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor, and
monitor an intrusion in the monitored volume by comparing a free space of said aligned local point cloud with a free space of the global tridimensional map.
In some embodiments, one might also use one or more of the following features:
    • the system further comprises at least one camera able to acquire a bidimensional image of a portion of the monitored volume;
    • said at least one camera comprises at least one reflective pattern such that a data point of said reflective pattern acquired by a tridimensional sensor of the self-calibrated monitoring system can be associated to said camera by the central processing unit of the system;
    • the system further comprises at least one display device able to display to a user a graphical indication of the intrusion.
Another object of the invention is a non-transitory computer readable storage medium, having stored thereon a computer program comprising program instructions, the computer program being loadable into a central processing unit of a monitoring system as detailed above and adapted to cause the processing unit to carry out the steps of a method as detailed above, when the computer program is run by the central processing unit.
BRIEF DESCRIPTION OF THE DRAWINGS
Other characteristics and advantages of the invention will readily appear from the following description of several of its embodiments, provided as non-limitative examples, and of the accompanying drawings.
On the drawings:
FIG. 1 is a schematic top view of a monitoring system for detecting intrusions in a monitored volume according to an embodiment of the invention,
FIG. 2 is a flowchart detailing a method for detecting intrusions in a monitored volume according to an embodiment of the invention,
FIG. 3 is a flowchart detailing a method for determining a global tridimensional map of a monitored volume and a method for extending a monitored volume according to embodiments of the invention,
FIG. 4 is a flowchart detailing a method for determining a tridimensional location of a camera for a self-calibrated monitoring system according to an embodiment of the invention.
On the different figures, the same reference signs designate like or similar elements.
DETAILED DESCRIPTION
FIG. 1 illustrates a self-calibrated monitoring system 1 for detecting intrusions in a monitored volume V, able to perform a method for detecting intrusions in a monitored volume as detailed further below.
The monitoring system 1 can be used for monitoring valuable objects (strongroom monitoring, for instance) and/or for monitoring entry areas in public buildings, at airports, etc. The monitoring system 1 may also be used for monitoring hazardous working areas around a robot or a factory installation for instance. The invention is not restricted to these applications and can be used in other fields.
The monitored volume V may for instance be delimited by a floor F extending along a horizontal plane H and real or virtual walls extending along a vertical direction Z perpendicular to said horizontal plane H.
The monitored volume V may comprise one or several danger zones or forbidden zones F. A forbidden zone F may for instance be defined by the movement of a robot arm inside volume V. Objects intruding into the forbidden zone F can be put at risk by the movements of the robot arm so that an intrusion of this kind must, for example, result in a switching off of the robot. A forbidden zone F may also be defined as a private zone that should only be accessed by accredited persons for security reasons.
A forbidden zone F is thus a spatial area within the monitoring zone that may encompass the full monitoring zone in some embodiments of the invention.
As illustrated on FIG. 1, the monitoring system 1 comprises a plurality of N tridimensional sensors 2 and a central processing unit 3.
In one embodiment, the central processing unit 3 is separated from the sensors 2 and is functionally connected to each sensor 2 in order to be able to receive data from each sensor 2. The central processing unit 3 may be connected to each sensor 2 by a wired or wireless connection.
In a variant, the central processing unit 3 may be integrated in one of the sensors 2, for instance by being a processing circuit integrated in said sensor 2.
The central processing unit 3 collects and processes the point clouds from all the sensors 2 and is thus advantageously a single centralized unit.
The central processing unit 3 comprises for instance a processor 4 and a memory 5.
The number N of tridimensional sensors 2 of the monitoring system 1 may range from two to several tens of sensors.
Each tridimensional sensor 2 is able to monitor a local volume L surrounding said sensor 2 that overlaps the monitored volume V.
More precisely, each tridimensional sensor 2 is able to acquire a local point cloud C in a local coordinate system S of said sensor 2. A local point cloud C comprises a set of tridimensional data points D. Each of the data points D of the local point cloud C corresponds to a point P of a surface of an object located in the local volume L surrounding the sensor 2.
By a “tridimensional data point”, it is understood three-dimensional coordinates of a point P in the environment of the sensor 2. A tridimensional data point D may further comprise additional characteristics, for instance the intensity of the signal detected by the sensor 2 at said point P.
The local coordinate system S of said sensor 2 is a coordinate system S related to said sensor 2, for instance with an origin point located at the sensor location. The local coordinate system S may be a cartesian, cylindrical or polar coordinate system.
A tridimensional sensor 2 may for instance comprise a laser rangefinder such as a light detection and ranging (LIDAR) module, a radar module, an ultrasonic ranging module, a sonar module, a ranging module using triangulation or any other device able to acquire the position of a single or a plurality of points P of the environment in a local coordinate system S of the sensor 2.
In a preferred embodiment, a tridimensional sensor 2 emits an initial physical signal and receives a reflected physical signal along a controlled direction of the local coordinate system. The emitted and reflected physical signals can be for instance light beams, electromagnetic waves or acoustic waves.
The sensor 2 then computes a range, corresponding to a distance from the sensor 2 to a point P of reflection of the initial signal on a surface of an object located in the local volume L surrounding the sensor 2. Said range may be computed by comparing the initial signal and the reflected signal, for instance by comparing the time or the phases of emission and reception.
A tridimensional data point D can then be computed from said range and said controlled direction.
In one example, the sensor 2 comprises a laser emitting light pulses at a constant time rate, said light pulses being deflected by a moving mirror rotating along two directions. Reflected light pulses are collected by the sensor, and the time difference between the emitted and the received pulses gives the distance to the reflecting surfaces of objects in the local environment of the sensor 2. A processor of the sensor 2, or a separate processing unit, then transforms, using simple trigonometric formulas, each observation acquired by the sensor into a three-dimensional data point D.
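For illustration, the conversion of one such observation (emission and reception times plus the controlled direction of the mirror) into a tridimensional data point D can be sketched as follows; the angle convention and all names are assumptions of this example, not the sensor's actual firmware.

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light (m/s)

def tof_data_point(t_emit, t_receive, azimuth, elevation, intensity=None):
    """Convert one time-of-flight observation into a tridimensional data
    point D in the sensor's local coordinate system S. The range is half
    the round-trip distance; azimuth/elevation (radians) give the
    controlled direction of the mirror."""
    rng = 0.5 * C_LIGHT * (t_receive - t_emit)       # one-way distance to point P
    x = rng * np.cos(elevation) * np.cos(azimuth)    # spherical-to-cartesian
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    point = {"xyz": np.array([x, y, z]), "range": rng}
    if intensity is not None:                        # optional extra characteristic
        point["intensity"] = intensity
    return point

# Example: a pulse returning after 200 ns along azimuth 30°, elevation 5°
p = tof_data_point(0.0, 200e-9, np.radians(30), np.radians(5), intensity=0.8)
print(p["xyz"])  # a point roughly 30 m from the sensor
```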
A full scan of the local environment of sensor 2 is periodically acquired and comprises a set of tridimensional data points D representative of the objects in the local volume of the sensor 2.
By “full scan of the local environment”, it is meant that the sensor 2 has covered a complete field of view. For instance, after a full scan of the local environment, the moving mirror of a laser-based sensor is back to an original position and ready to start a new period of rotational movement. A local point cloud C of the sensor 2 is thus also sometimes called a “frame” and is the three-dimensional equivalent of a frame acquired by a bidimensional camera.
A set of tridimensional data points D acquired in a full scan of the local environment of sensor 2 is called a local point cloud C.
The sensor 2 is able to periodically acquire local point clouds C with a given framerate.
The local point clouds C of each sensor 2 are transmitted to the central processing unit 3 and stored in the memory 5 of the central processing unit 3.
As detailed below, the memory 5 of the central processing unit 3 also stores a global tridimensional map M of the monitored volume V.
The global tridimensional map M comprises a set of tridimensional data points D of object surfaces in the monitored volume V.
A method for detecting intrusions in a monitored volume will now be disclosed in greater detail with reference to FIG. 2.
The method for detecting intrusions is performed by a monitoring system 1 as detailed above.
In a first step of the method, each sensor 2 of the N tridimensional sensors acquires a local point cloud C in a local coordinate system S of said sensor 2 as detailed above.
The central processing unit 3 then receives the acquired local point clouds C from the N sensors 2 and stores said acquired point clouds C in the memory 5.
The memory 5 may contain other local point clouds C from previous acquisitions of each sensor 2.
In a second step, the central processing unit 3 performs several operations for each sensor 2 of the N tridimensional sensors.
The central processing unit 3 first computes updated tridimensional position and orientation of each sensor 2 in a global coordinate system G of the monitored volume V by aligning at least one local point cloud C acquired by said sensor 2 with the global tridimensional map M of the monitored volume V stored in the memory 5.
By “tridimensional position and orientation”, it is understood 6D localisation information for a sensor 2, for instance comprising 3D position and 3D orientation of said sensor 2 in a global coordinate system G.
The global coordinate system G is a virtual coordinate system obtained by aligning the local point clouds C. The global coordinate system G may not need to be calibrated with regards to the real physical environment of the system 1, in particular if no forbidden zone F has to be defined.
Thanks to these features of the method and system according to the invention, it is possible to automatically recalibrate the position of each sensor 2 at each frame. Calibration errors are thus greatly reduced and the ease of use of the system is increased. This solves the problem of reliability when sensors move in the wind or move due to mechanical shocks.
The updated tridimensional position and orientation of a sensor 2 are computed only from the local point clouds C acquired by said sensor 2 and from the global tridimensional map M of the monitored volume stored in a memory, and without additional positioning information.
By “without additional positioning information”, it is in particular meant that the computation of the updated tridimensional position and orientation of a sensor does not require other input data than the local point clouds C acquired by said sensor 2 and the global tridimensional map M. For instance, no additional localisation or orientation device, such as a GPS or an accelerometer, is required. Moreover, no assumption has to be made on the location or movement of the sensor.
To this aim, the central processing unit 3 performs a simultaneous multi-scans alignment of each point cloud C acquired by said sensor with the global tridimensional map of the monitored volume.
By “simultaneous multi-scans alignment”, it is meant that the point clouds C acquired by the N sensors, together with the global tridimensional map M of the monitored volume, are considered as scans that need to be aligned together simultaneously.
In one embodiment, the point clouds C acquired by the N sensors over the operating time are aligned at each step. For instance, the system may have performed M successive acquisition frames of the sensors 2 up to a current time t. The M point clouds C acquired by the N sensors are thus grouped with the global tridimensional map M to form M*N+1 scans to be aligned together by the central processing unit 3.
In a variant, the M−1 previously acquired point clouds C may be replaced by their respectively associated aligned point clouds A, as detailed further below. The (M−1)*N aligned point clouds A may thus be grouped with the N latest acquired point clouds C and with the global tridimensional map M to form again M*N+1 scans to be aligned together by the central processing unit 3.
Such a simultaneous multi-scans alignment may be performed, for instance, by using an Iterative Closest Point (ICP) algorithm, as detailed by P. J. Besl and N. D. McKay in “A method for registration of 3-D shapes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992, or by Yang Chen and Gerard Medioni in “Object modelling by registration of multiple range images”, Image Vision Comput., 10(3), 1992. An ICP algorithm searches the transformation space for the set of pairwise transformations of the scans that optimizes a function defined on that space. Variants of ICP use optimization functions ranging from error metrics, such as the sum of least square distances, to quality metrics, such as image distance, or probabilistic metrics. In this embodiment, the central processing unit 3 may thus optimize a function defined on the transformation space of each point cloud C to determine the updated tridimensional position and orientation of a sensor 2.
This way, it is possible to easily and efficiently perform a simultaneous multi-scans alignment of each point cloud C to compute the updated tridimensional position and orientation of a sensor 2.
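By way of illustration, a minimal point-to-point ICP sketch is given below, aligning one local point cloud C with a reference scan such as the global map M; a production system would rather use a robust, simultaneous multi-scan formulation, and all names here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(cloud_c, map_m, iterations=30, tol=1e-6):
    """Align a local point cloud C (n x 3) with the map M (m x 3)."""
    tree = cKDTree(map_m)
    R_total, t_total = np.eye(3), np.zeros(3)
    src = cloud_c.copy()
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)                 # closest-point correspondences
        R, t = best_fit_transform(src, map_m[idx])  # minimise sum of squared distances
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

The returned rotation and translation play the role of the updated tridimensional orientation and position of the sensor 2 in the global coordinate system G.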
Then, the central processing unit 3 generates an aligned local point cloud A associated to each acquired point cloud C, in which the data points D of said point cloud C are transformed from the local coordinate system S to the global coordinate system G of the global tridimensional map M. The aligned local point cloud A is determined on the basis of the updated tridimensional position and orientation of the sensor 2.
The aligned local point clouds A of the sensors 2 can then be reliably compared with one another, since each sensor's position and orientation has been updated during the process.
In a subsequent step of the method, the central processing unit 3 may monitor an intrusion in the monitored volume V.
To this aim, the central processing unit 3 may compare a free space of each aligned local point cloud A with a free space of the global tridimensional map M.
To this aim, the monitored volume V may for instance be divided into a matrix of elementary volumes E, and each elementary volume E may be flagged as “free space” or “occupied space” on the basis of the global tridimensional map M.
The aligned local point cloud A can then be used to determine an updated flag for the elementary volumes E contained in the local volume L surrounding a sensor 2.
A change in the flagging of an elementary volume E from “free space” to “occupied space”, for instance by intrusion of an object O as illustrated in FIG. 1, can then trigger the detection of an intrusion in the monitored volume V by the central processing unit 3.
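A possible sketch of this elementary-volume bookkeeping is given below, assuming a simple axis-aligned voxel grid over the monitored volume V and treating any voxel not occupied in the map M as free space; the grid resolution and names are illustrative assumptions, and a real system would also account for sensor visibility.

```python
import numpy as np

VOXEL_SIZE = 0.2  # edge length of an elementary volume E, in metres (assumption)

def occupied_voxels(points, origin, voxel_size=VOXEL_SIZE):
    """Return the set of elementary volumes E flagged 'occupied space'
    by the tridimensional data points of a point cloud."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return set(map(tuple, idx))

def detect_intrusion(aligned_cloud_a, map_m, origin):
    """Flag an intrusion when a voxel that is free space in the global map M
    becomes occupied in the aligned local point cloud A."""
    map_occupied = occupied_voxels(map_m, origin)
    newly_occupied = occupied_voxels(aligned_cloud_a, origin) - map_occupied
    return len(newly_occupied) > 0, newly_occupied
```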
In one embodiment of the invention, the global tridimensional map M of the monitored volume V can be determined by the monitoring system 1 itself in an automated manner, as will now be described with reference to FIG. 3.
To this aim, the N tridimensional sensors may be located so that the union of the local volumes L surrounding said sensors 2 is a connected space. This connected space forms the monitored volume.
By “connected space”, it is meant that the union of the local volumes L surrounding the N sensors 2 forms a single space and not two or more disjoint nonempty open subspaces.
Then, a global tridimensional map M of the monitored volume V can be determined by first receiving at least one local point cloud C from each of said sensors and storing said local point clouds C in the memory 5 of the system.
The central processing unit 3 then performs a simultaneous multi-scans alignment of the stored local point clouds C to generate a plurality of aligned local point clouds A as detailed above. Each aligned local point cloud A is respectively associated to a local point cloud C acquired from a tridimensional sensor 2.
Unlike what has been detailed above, the frames used for the simultaneous multi-scans alignment do not comprise the global tridimensional map M, since it has yet to be determined. The frames used for the simultaneous multi-scans alignment may comprise a plurality of M successively acquired point clouds C for each sensor 2. The M point clouds C acquired by the N sensors are thus grouped to form M*N scans to be aligned together by the central processing unit 3 as detailed above.
By aligning the stored local point clouds C, a global coordinate system G is obtained in which the aligned local point clouds A can be compared together.
Once the plurality of aligned local point clouds A has been determined, the central processing unit 3 can thus merge the plurality of aligned local point clouds A to form a global tridimensional map M of the monitored volume V. The global tridimensional map M is then stored in the memory 5 of the system 1.
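A minimal way to merge the aligned local point clouds A is sketched below, concatenating them and keeping one representative point per voxel so that overlapping local volumes L are not over-represented; the voxel size is an illustrative assumption, as the patent does not prescribe a particular merging scheme.

```python
import numpy as np

def merge_aligned_clouds(aligned_clouds, voxel_size=0.05):
    """Merge aligned local point clouds A into a global tridimensional map M,
    deduplicated so overlapping regions are not over-represented."""
    merged = np.vstack(aligned_clouds)          # all points, global frame G
    keys = np.floor(merged / voxel_size).astype(int)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return merged[unique_idx]                   # one point per voxel
```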
In one embodiment of the invention, once an intrusion has been detected by the system 1, the method may further involve displaying to a user a graphical indication I of the intrusion on a display device 6.
The display device 6 may be any screen, LCD, OLED, and the like, that is convenient for an operator of the system 1. The display device 6 is connected to, and controlled by, the central processing unit 3 of the system 1.
In a first embodiment of the method, a bidimensional image B of the monitored volume V may be generated by the processing unit 3 by projecting the global tridimensional map M of the monitored volume V along a direction of observation.
The processing unit 3 may then command the display device 6 to display the graphical indication I of the intrusion overlaid over said bidimensional image B of the monitored volume V.
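For instance, an orthographic projection of the map M along a direction of observation could be sketched as follows; the viewing convention, pixel size and image dimensions are illustrative assumptions.

```python
import numpy as np

def project_map(map_m, rotation, pixel_size=0.05, image_shape=(480, 640)):
    """Project the global tridimensional map M along a direction of observation
    (encoded by `rotation`) into a bidimensional depth image B."""
    h, w = image_shape
    cam = map_m @ rotation.T                 # rotate so the view axis becomes +z
    u = (cam[:, 0] / pixel_size + w / 2).astype(int)
    v = (cam[:, 1] / pixel_size + h / 2).astype(int)
    image_b = np.full(image_shape, np.inf)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], cam[inside, 2]):
        image_b[vi, ui] = min(image_b[vi, ui], zi)   # keep the nearest surface
    return image_b
```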
In another embodiment, the system 1 may further comprise at least one camera 7. The camera 7 may be able to directly acquire a bidimensional image B of a part of the monitored volume V. The camera 7 is connected to, and controlled by, the central processing unit 3 of the system 1.
The central processing unit 3 may then command the display device 6 to display the graphical indication I of the intrusion overlaid over the bidimensional image B acquired by the camera 7.
In a variant, the central processing unit 3 may be able to control the pan, rotation or zoom of the camera 7 so that the detected intrusion can be located in a field of view of the camera 7.
To this aim, another object of the invention is a method to determine a tridimensional location of a camera 7 of a self-calibrated monitoring system 1 as described above. This method allows for easy calibration without requiring manual measurement and input of the position of the camera 7 in the monitored volume V. An embodiment of this method is illustrated in FIG. 4.
The camera 7 is provided with at least one reflective pattern 8. The reflective pattern 8 is such that a data point of said reflective pattern acquired by a tridimensional sensor 2 of the self-calibrated monitoring system 1 can be associated to said camera by the central processing unit 3 of the system 1.
The reflective pattern 8 may be made of a high reflectivity material so that the data points of the reflective pattern 8 acquired by the sensor 2 present a high intensity, for instance an intensity over a predefined threshold intensity.
The reflective pattern 8 may also have a predefined shape, for instance the shape of a cross or a circle or “L” markers. Such a shape can be identified by the central processing unit 3 by using commonly known data and image analysis algorithms.
In a first step of the method to determine a tridimensional location of a camera 7, the camera is positioned in the monitored volume V. The camera 7 is disposed in at least one local volume L surrounding a sensor 2 of the system 1, so that the reflective pattern 8 of the camera 7 is in a field of view of at least one sensor 2 of the plurality of N tridimensional sensors. Said at least one sensor 2 is thus able to acquire a local point cloud C comprising at least one tridimensional data point D corresponding to the reflective pattern 8 of the camera 7.
The central processing unit 3 then receives a local point cloud C from said at least one tridimensional sensor and computes an aligned local point cloud A by aligning said local point cloud C with the global tridimensional map M of the self-calibrated monitoring system as detailed above.
In the aligned local point cloud A, the central processing unit 3 can then identify at least one data point corresponding to the reflective pattern 8 of the camera 7. As mentioned above, this identification may be conducted on the basis of the intensity of the data points D received from the sensor 2 and/or the shape of high intensity data points acquired by the sensor 2. This identification may be performed by using known data and image processing algorithms, for instance the OpenCV library.
Finally, a tridimensional location and/or orientation of the camera in the global coordinate system G of the global tridimensional map M may be determined by the central processing unit 3 on the basis of the coordinates of said identified data point of the reflective pattern 8 of the camera 7 in the aligned local point cloud A.
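A sketch of this identification and localisation is given below, assuming each data point carries a reflected intensity channel and taking the centroid of the high-intensity points as the camera location; both assumptions are illustrative and not requirements of the patent.

```python
import numpy as np

INTENSITY_THRESHOLD = 0.9  # predefined threshold intensity (assumption)

def locate_camera(aligned_cloud_a, intensities, threshold=INTENSITY_THRESHOLD):
    """Identify data points of the reflective pattern 8 in the aligned local
    point cloud A and estimate the camera location in the global frame G."""
    mask = intensities > threshold        # keep high-reflectivity returns only
    pattern_points = aligned_cloud_a[mask]
    if len(pattern_points) == 0:
        return None                        # pattern not in the field of view
    return pattern_points.mean(axis=0)     # centroid as the camera location
```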
The underlying concept of the invention can also be used to easily and efficiently extend a volume monitored by a system and method as detailed above.
Such a method is of interest in many situations where a slight change in the monitored volume requires moving or adding sensors 2, which usually entails a time-consuming and complex manual recalibration of the monitoring system. By contrast, the present invention provides a self-calibrating system and method that overcome those problems.
Another object of the invention is thus a method for extending a volume monitored by a method and system as detailed above.
In the monitoring system 1, a plurality of N tridimensional sensors 2 respectively monitor at least a part of the monitored volume V and respectively communicate with a central processing unit 3 as detailed above. A global tridimensional map M is associated to the volume V monitored by the N tridimensional sensors 2 as detailed above.
The method for extending the volume monitored by system 1 involves first positioning an additional N+1th tridimensional sensor 2 able to communicate with the central processing unit 3.
The method then involves determining an updated global tridimensional map M′ of the self-calibrated monitoring system, associated to an updated volume V′ monitored by the N+1 tridimensional sensors 2.
The additional N+1th tridimensional sensor 2 is similar to the N sensors 2 of the monitoring system 1 and is thus able to acquire a local point cloud C in a local coordinate system S of said sensor 2. This local point cloud C comprises a set of tridimensional data points D of object surfaces in a local volume L surrounding said sensor 2. The local volume L at least partially overlaps the volume V monitored by the plurality of N tridimensional sensors.
The updated global tridimensional map M′ of the self-calibrated monitoring system may then be determined as follows.
First, the central processing unit 3 receives at least one local point cloud C acquired from each of the N+1 tridimensional sensors and stores said local point clouds in the memory 5.
Then, the central processing unit 3 performs a simultaneous multi-scans alignment of the stored local point clouds C to generate a plurality of aligned local point clouds A respectively associated to the local point clouds C acquired from each sensor 2 as detailed above.
The multi-scans alignment can be computed on a group of scans comprising the global tridimensional map M.
This is particularly interesting if the union of the local volumes L surrounding the tridimensional sensors 2 is not a connected space.
The multi-scans alignment can also be computed only on the point clouds C acquired by the sensors 2.
In this case, the determination of the updated global tridimensional map M′ is similar to the computation of the global tridimensional map M of the monitored volume V by the monitoring system 1 as detailed above.
Once the plurality of aligned local point clouds A has been determined, the central processing unit 3 can then merge the plurality of aligned local point clouds A and, if necessary, the global tridimensional map M, to form an updated global tridimensional map M′ of the updated monitored volume V′.
The updated global tridimensional map M′ is then stored in the memory 5 of the system 1 for future use in a method for detecting intrusions in a monitored volume as detailed above.
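As a usage example combining the hypothetical sketches above, the updated map M′ could be obtained by aligning the N+1th sensor's local point cloud against the existing map M and merging the result:

```python
import numpy as np

# Reuses the hypothetical icp() and merge_aligned_clouds() sketches given above.
R, t = icp(new_sensor_cloud_c, map_m)        # updated pose of the N+1th sensor 2
aligned_a = new_sensor_cloud_c @ R.T + t     # aligned local point cloud A
map_m_prime = merge_aligned_clouds([map_m, aligned_a])  # updated map M'
```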

Claims (12)

The invention claimed is:
1. A method for detecting intrusions in a monitored volume, in which a plurality of N tridimensional sensors respectively monitor at least a part of the monitored volume and respectively communicate with a central processing unit, comprising:
each sensor of said plurality of N tridimensional sensors acquiring a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping the monitored volume,
said central processing unit receiving the acquired local point clouds from the plurality of N tridimensional sensors, storing said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
computing an updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory, and
generating an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor,
monitoring an intrusion in the monitored volume by comparing a free space of said aligned local point cloud with a free space of the global tridimensional map;
wherein the N tridimensional sensors in the plurality of N tridimensional sensors are located so that a union of the local volumes surrounding said sensors is a connected space, said connected space forming the monitored volume,
and wherein the global tridimensional map of the monitored volume is determined by:
receiving at least one local point cloud from each of said N tridimensional sensors and storing said local point clouds in a memory,
performing a simultaneous multi-scans alignment of the stored local point clouds to generate a plurality of aligned local point clouds respectively associated to the local point clouds acquired from each of said N tridimensional sensors, and
merging said plurality of aligned local point clouds to determine a global tridimensional map of the monitored volume and storing said global tridimensional map in the memory.
2. The method according to claim 1 wherein, for each sensor of said plurality of N tridimensional sensors, the updated tridimensional position and orientation of said sensor in the global coordinate system is computed by performing a simultaneous multi-scans alignment of each point cloud acquired by said sensor with the global tridimensional map of the monitored volume.
3. The method according to claim 1, wherein the updated tridimensional position and orientation of each sensor of said plurality of N tridimensional sensors is computed only from the local point clouds acquired by said tridimensional sensor and the global tridimensional map of the monitored volume stored in a memory, and without additional positioning information.
4. The method according to claim 1, further comprising displaying to a user a graphical indication of the intrusion on a display device.
5. The method according to claim 4, further comprising generating a bidimensional image of the monitored volume by projecting the global tridimensional map of the monitored volume, and commanding the display device to display the graphical indication of the intrusion overlaid over said bidimensional image of the monitored volume.
6. The method according to claim 4, wherein the method is for a self-calibrated monitoring system, the method further comprising commanding the display device to display the graphical indication of the intrusion overlaid over a bidimensional image of at least a part of the monitored volume acquired by a camera of the self-calibrated monitoring system.
7. The method according to claim 6, further comprising orienting the camera of the self-calibrated monitoring system so that the detected intrusion is located in a field of view of the camera.
8. A method for determining a tridimensional location of a camera for a self-calibrated monitoring system, in which a plurality of N tridimensional sensors respectively monitor at least a part of a monitored volume and respectively communicate with a central processing unit, the method comprising:
providing the camera, wherein the camera comprises at least one reflective pattern such that a data point of said reflective pattern acquired by a tridimensional sensor of the self-calibrated monitoring system can be associated to said camera, and positioning the camera in the monitored volume, in a field of view of at least one sensor of the plurality of N tridimensional sensors, so that said sensor of the plurality of N tridimensional sensors acquires a local point cloud comprising at least one tridimensional data point of the reflective pattern of the camera,
receiving a local point cloud from said at least one tridimensional sensor and computing an aligned local point cloud by aligning said local point cloud with a global tridimensional map of the self-calibrated monitoring system,
identifying, in the aligned local point cloud, at least one data point corresponding to the reflective pattern of the camera, and
determining at least a tridimensional location of the camera in a global coordinate system of the global tridimensional map on the basis of the coordinates of said identified data point of the aligned local point cloud corresponding to the reflective pattern of the camera.
9. A self-calibrated monitoring system for detecting intrusions in a monitored volume, the system comprising:
a plurality of N tridimensional sensors respectively able to monitor at least a part of the monitored volume, each sensor of said plurality of N tridimensional sensors being able to acquire a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping the monitored volume;
a memory to store said local point cloud and a global tridimensional map of the monitored volume comprising a set of tridimensional data points of object surfaces in the monitored volume, the local volume at least partially overlapping the monitored volume;
at least one camera able to acquire a bidimensional image of a portion of the monitored volume; and
a central processing unit able to receive the acquired local point clouds from the plurality of N tridimensional sensors, store said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
compute updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory,
generate an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor, and
monitor an intrusion in the monitored volume by comparing a free space of said aligned local point cloud with a free space of the global tridimensional map;
wherein said at least one camera comprises at least one reflective pattern such that a data point of said reflective pattern acquired by a tridimensional sensor of the self-calibrated monitoring system can be associated to said camera by the central processing unit of the system.
10. The monitoring system according to claim 9, further comprising at least one display device able to display to a user a graphical indication of the intrusion.
11. A non-transitory computer readable storage medium, having stored thereon a computer program comprising program instructions, the computer program being loadable into a central processing unit of a monitoring system and adapted to cause the processing unit to carry out the steps of a method when the computer program is run by the central processing unit, the method comprising:
each sensor of a plurality of N tridimensional sensors acquiring a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping a monitored volume,
said central processing unit receiving the acquired local point clouds from the plurality of N tridimensional sensors, storing said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
computing an updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory, and
generating an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor,
monitoring an intrusion in the monitored volume by comparing a free space of said aligned local point cloud with a free space of the global tridimensional map;
wherein the N tridimensional sensors in the plurality of N tridimensional sensors are located so that a union of the local volumes surrounding said sensors is a connected space, said connected space forming the monitored volume,
and wherein the global tridimensional map of the monitored volume is determined by:
receiving at least one local point cloud from each of said N tridimensional sensors and storing said local point clouds in a memory,
performing a simultaneous multi-scans alignment of the stored local point clouds to generate a plurality of aligned local point clouds respectively associated to the local point clouds acquired from each of said N tridimensional sensors, and
merging said plurality of aligned local point clouds to determine a global tridimensional map of the monitored volume and storing said global tridimensional map in the memory.
12. A method for extending a monitored volume of a self-calibrated monitoring system, in which a plurality of N tridimensional sensors respectively monitor at least a part of the monitored volume and respectively communicate with a central processing unit, comprising:
each sensor of said plurality of N tridimensional sensors acquiring a local point cloud in a local coordinate system of said sensor, said local point cloud comprising a set of tridimensional data points of object surfaces in a local volume surrounding said sensor and overlapping the monitored volume,
said central processing unit receiving the acquired local point clouds from the plurality of N tridimensional sensors, storing said acquired point clouds in a memory and,
for each sensor of said plurality of N tridimensional sensors,
computing an updated tridimensional position and orientation of said sensor in a global coordinate system of the monitored volume by aligning a local point cloud acquired by said tridimensional sensor with a global tridimensional map of the monitored volume stored in a memory, and
generating an aligned local point cloud from said acquired point cloud on the basis of the updated tridimensional position and orientation of the sensor; and
extending the monitored volume by:
positioning an additional N+1th tridimensional sensor communicating with the central processing unit, the additional N+1th tridimensional sensor acquiring an additional local point cloud in a local coordinate system of said sensor, said additional local point cloud comprising an additional set of tridimensional data points of object surfaces in a local volume surrounding the N+1th tridimensional sensor and at least partially overlapping the volume monitored by the plurality of N tridimensional sensors,
determining an updated global tridimensional map by:
receiving at least one local point cloud acquired from the N+1th tridimensional sensor and storing said at least one local point cloud in the memory as part of a second set of local point clouds stored in the memory,
performing a simultaneous multi-scans alignment using the second set of stored local point clouds to generate a second plurality of aligned local point clouds, and
determining a second global tridimensional map of an extended monitored volume by merging said second plurality of aligned local point clouds.
US16/303,440 2016-06-22 2017-06-22 Methods and systems for detecting intrusions in a monitored volume Active 2037-08-01 US10878689B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP16175808.1A EP3261071B1 (en) 2016-06-22 2016-06-22 Methods and systems for detecting intrusions in a monitored volume
EP16175808 2016-06-22
EP16175808.1 2016-06-22
PCT/EP2017/065359 WO2017220714A1 (en) 2016-06-22 2017-06-22 Methods and systems for detecting intrusions in a monitored volume

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/065359 A-371-Of-International WO2017220714A1 (en) 2016-06-22 2017-06-22 Methods and systems for detecting intrusions in a monitored volume

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/136,529 Continuation US11335182B2 (en) 2016-06-22 2020-12-29 Methods and systems for detecting intrusions in a monitored volume

Publications (2)

Publication Number Publication Date
US20200175844A1 US20200175844A1 (en) 2020-06-04
US10878689B2 true US10878689B2 (en) 2020-12-29

Family

ID=56148318

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/303,440 Active 2037-08-01 US10878689B2 (en) 2016-06-22 2017-06-22 Methods and systems for detecting intrusions in a monitored volume
US17/136,529 Active US11335182B2 (en) 2016-06-22 2020-12-29 Methods and systems for detecting intrusions in a monitored volume

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/136,529 Active US11335182B2 (en) 2016-06-22 2020-12-29 Methods and systems for detecting intrusions in a monitored volume

Country Status (6)

Country Link
US (2) US10878689B2 (en)
EP (2) EP3657455A1 (en)
CN (1) CN109362237B (en)
CA (1) CA3024504A1 (en)
ES (1) ES2800725T3 (en)
WO (1) WO2017220714A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949347B (en) * 2019-03-15 2021-09-17 百度在线网络技术(北京)有限公司 Human body tracking method, device, system, electronic equipment and storage medium
CN111724558B (en) * 2019-03-21 2021-10-19 杭州海康威视数字技术股份有限公司 Monitoring method, monitoring device and intrusion alarm system
US10943456B1 (en) * 2019-09-30 2021-03-09 International Business Machines Corporation Virtual safety guardian
CN110927731B (en) * 2019-11-15 2021-12-17 深圳市镭神智能系统有限公司 Three-dimensional protection method, three-dimensional detection device and computer readable storage medium
US11327506B2 (en) * 2019-11-20 2022-05-10 GM Global Technology Operations LLC Method and system for localized travel lane perception
US11216669B1 (en) * 2020-01-16 2022-01-04 Outsight SA Single frame motion detection and three-dimensional imaging using free space information
US20230045319A1 (en) * 2020-01-30 2023-02-09 Outsight A surveillance sensor system
CN111553844B (en) * 2020-04-29 2023-08-29 阿波罗智能技术(北京)有限公司 Method and device for updating point cloud
CN112732313B (en) * 2020-12-21 2021-12-21 南方电网电力科技股份有限公司 Method and system for updating map increment of substation inspection robot


Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6560354B1 (en) * 1999-02-16 2003-05-06 University Of Rochester Apparatus and method for registration of images to physical space using a weighted combination of points and surfaces
JP4609125B2 (en) 2004-05-06 2011-01-12 日本電気株式会社 Data transfer system and method
EP2211721B1 (en) * 2007-11-19 2019-07-10 Pyronia Medical Technologies, Inc. Patient positioning system and methods for diagnostic radiology and radiotherapy
CN101350125A (en) * 2008-03-05 2009-01-21 中科院嘉兴中心微系统所分中心 Three-dimensional intelligent intrusion-proof system
CN101236688B (en) * 2008-03-05 2011-08-24 中国科学院嘉兴无线传感网工程中心 Invasion-proof sensor system test platform based on sensor network technique
US8086876B2 (en) * 2008-07-02 2011-12-27 Dell Products L.P. Static and dynamic power management for a memory subsystem
US7961137B2 (en) * 2008-11-10 2011-06-14 The Boeing Company System and method for detecting performance of a sensor field at all points within a geographic area of regard
CN103415876B (en) * 2010-11-17 2017-03-22 欧姆龙科学技术公司 A method and apparatus for monitoring zones
EP2772676B1 (en) * 2011-05-18 2015-07-08 Sick Ag 3D camera and method for three dimensional surveillance of a surveillance area
EP2754129A4 (en) * 2011-09-07 2015-05-06 Commw Scient Ind Res Org System and method for three-dimensional surface imaging
DE102012212613A1 (en) * 2012-07-18 2014-01-23 Robert Bosch Gmbh Surveillance system with position-dependent protection area, procedure for monitoring a surveillance area and computer program
WO2014039050A1 (en) * 2012-09-07 2014-03-13 Siemens Aktiengesellschaft Methods and apparatus for establishing exit/entry criteria for a secure location
EP2923174A2 (en) * 2012-11-22 2015-09-30 GeoSim Systems Ltd. Point-cloud fusion
US9182812B2 (en) * 2013-01-08 2015-11-10 Ayotle Virtual sensor systems and methods
US20150062123A1 (en) * 2013-08-30 2015-03-05 Ngrain (Canada) Corporation Augmented reality (ar) annotation computer system and computer-readable medium and method for creating an annotated 3d graphics model
CN104574722A (en) * 2013-10-12 2015-04-29 北京航天长峰科技工业集团有限公司 Harbor safety control system based on multiple sensors
WO2016116946A2 (en) * 2015-01-20 2016-07-28 Indian Institute Of Technology, Bombay A system and method for obtaining 3-dimensional images using conventional 2-dimensional x-ray images
US10436904B2 (en) * 2015-04-15 2019-10-08 The Boeing Company Systems and methods for modular LADAR scanning
CN104935893B (en) * 2015-06-17 2019-02-22 浙江大华技术股份有限公司 Monitor method and apparatus
US10795000B2 (en) * 2015-07-10 2020-10-06 The Boeing Company Laser distance and ranging (LADAR) apparatus, array, and method of assembling thereof
US10718613B2 (en) * 2016-04-19 2020-07-21 Massachusetts Institute Of Technology Ground-based system for geolocation of perpetrators of aircraft laser strikes
US11379688B2 (en) * 2017-03-16 2022-07-05 Packsize Llc Systems and methods for keypoint detection with convolutional neural networks

Patent Citations (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517429A (en) * 1992-05-08 1996-05-14 Harrison; Dana C. Intelligent area monitoring system
EP0645644A1 (en) 1993-08-08 1995-03-29 State Of Israel - Ministry Of Defence Intrusion detector
US6188319B1 (en) 1997-07-11 2001-02-13 Laser Guard Ltd. Intruder detector system
US20040169131A1 (en) * 1999-07-06 2004-09-02 Hardin Larry C. Intrusion detection system
US7995096B1 (en) * 1999-09-23 2011-08-09 The Boeing Company Visual security operations system
US7433493B1 (en) * 2000-09-06 2008-10-07 Hitachi, Ltd. Abnormal behavior detector
US20050036036A1 (en) * 2001-07-25 2005-02-17 Stevenson Neil James Camera control apparatus and method
US7084761B2 (en) * 2001-12-19 2006-08-01 Hitachi, Ltd. Security system
US20030235331A1 (en) * 2002-03-13 2003-12-25 Omron Corporation Three-dimensional monitoring apparatus
US7164116B2 (en) 2002-03-13 2007-01-16 Omron Corporation Monitor for intrusion detection
US20060049930A1 (en) * 2002-07-15 2006-03-09 Levi Zruya Method and apparatus for implementing multipurpose monitoring system
US7317456B1 (en) * 2002-12-02 2008-01-08 Ngrain (Canada) Corporation Method and apparatus for transforming point cloud data to volumetric data
US20040263625A1 (en) * 2003-04-22 2004-12-30 Matsushita Electric Industrial Co., Ltd. Camera-linked surveillance system
US7652238B2 (en) 2004-09-08 2010-01-26 Sick Ag Method and apparatus for detecting an object through the use of multiple sensors
US20070035627A1 (en) * 2005-08-11 2007-02-15 Cleary Geoffrey A Methods and apparatus for providing fault tolerance in a surveillance system
US20070039030A1 (en) * 2005-08-11 2007-02-15 Romanowich John F Methods and apparatus for a wide area coordinated surveillance system
US9151446B2 (en) * 2005-12-22 2015-10-06 Pilz Gmbh & Co. Kg Method and system for configuring a monitoring device for monitoring a spatial area
US7940955B2 (en) * 2006-07-26 2011-05-10 Delphi Technologies, Inc. Vision-based method of determining cargo status by boundary detection
US20090033746A1 (en) 2007-07-30 2009-02-05 Brown Lisa M Automatic adjustment of area monitoring based on camera motion
US20090153326A1 (en) * 2007-12-13 2009-06-18 Lucent Technologies, Inc. Method for locating intruder
US20090288011A1 (en) * 2008-03-28 2009-11-19 Gadi Piran Method and system for video collection and analysis thereof
US20100053330A1 (en) 2008-08-26 2010-03-04 Honeywell International Inc. Security system using ladar-based sensors
US20100271615A1 (en) 2009-02-20 2010-10-28 Digital Signal Corporation System and Method for Generating Three Dimensional Images Using Lidar and Video Measurements
US9536348B2 (en) * 2009-06-18 2017-01-03 Honeywell International Inc. System and method for displaying video surveillance fields of view limitations
US20100321492A1 (en) * 2009-06-18 2010-12-23 Honeywell International Inc. System and method for displaying video surveillance fields of view limitations
US8648923B2 (en) * 2010-06-28 2014-02-11 Canon Kabushiki Kaisha Image pickup apparatus
US9087258B2 (en) * 2010-08-17 2015-07-21 Lg Electronics Inc. Method for counting objects and apparatus using a plurality of sensors
US20130208098A1 (en) * 2010-08-27 2013-08-15 Telefonica, S.A. Method for generating a model of a flat object from views of the object
US8890936B2 (en) * 2010-10-12 2014-11-18 Texas Instruments Incorporated Utilizing depth information to create 3D tripwires in video
US20120086780A1 (en) * 2010-10-12 2012-04-12 Vinay Sharma Utilizing Depth Information to Create 3D Tripwires in Video
US20120286136A1 (en) * 2010-11-08 2012-11-15 Johns Hopkins University Lidar system and method for monitoring space
US20130141543A1 (en) * 2011-05-26 2013-06-06 Lg Cns Co., Ltd Intelligent image surveillance system using network camera and method therefor
US9575180B2 (en) * 2012-09-13 2017-02-21 Mbda Uk Limited Room occupancy sensing apparatus and method
US9841311B2 (en) * 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9784566B2 (en) * 2013-03-13 2017-10-10 Intermec Ip Corp. Systems and methods for enhancing dimensioning
US10203402B2 (en) * 2013-06-07 2019-02-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US20160073081A1 (en) * 2013-09-24 2016-03-10 Faro Technologies, Inc. Automated generation of a three-dimensional scanner video
US9412040B2 (en) * 2013-12-04 2016-08-09 Mitsubishi Electric Research Laboratories, Inc. Method for extracting planes from 3D point cloud sensor data
US20150249807A1 (en) * 2014-03-03 2015-09-03 Vsk Electronics Nv Intrusion detection with directional sensing
US20150288951A1 (en) * 2014-04-08 2015-10-08 Lucasfilm Entertainment Company, Ltd. Automated camera calibration methods and systems
US9823059B2 (en) * 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US10142538B2 (en) * 2015-02-24 2018-11-27 Redrock Microsystems, Llc LIDAR assisted focusing device
US20160370220A1 (en) * 2015-06-16 2016-12-22 Hand Held Products, Inc. Calibrating a volume dimensioner
US9857167B2 (en) * 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US10249030B2 (en) * 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US20170282369A1 (en) * 2016-03-29 2017-10-05 The Boeing Company Collision prevention in robotic manufacturing environments
WO2017176882A1 (en) * 2016-04-07 2017-10-12 Tyco Fire & Security Gmbh Security sensing method and apparatus
US20200097755A1 (en) * 2018-09-24 2020-03-26 Rockwell Automation Technologies, Inc. Object intrusion detection system and method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Besl et al., "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, Feb. 1992.
Chen et al., "Object Modeling by Registration of Multiple Range Images", Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, California, Apr. 1991.
International Search Report, PCT/EP2017/065359, dated Sep. 5, 2017.
Larry Li, "Time-of-Flight Camera - An Introduction", May 31, 2014, XP055300210. Retrieved from the Internet: http://www.ti.com/lit/wp/sloa190b/sloa190b.pdf [retrieved on Sep. 6, 2016], figures 4, 5, p. 3, right-hand column, paragraph 3.

Also Published As

Publication number Publication date
US20210125487A1 (en) 2021-04-29
EP3261071B1 (en) 2020-04-01
WO2017220714A1 (en) 2017-12-28
US11335182B2 (en) 2022-05-17
ES2800725T3 (en) 2021-01-04
CN109362237A (en) 2019-02-19
CA3024504A1 (en) 2017-12-28
EP3657455A1 (en) 2020-05-27
EP3261071A1 (en) 2017-12-27
CN109362237B (en) 2021-06-25
US20200175844A1 (en) 2020-06-04

Similar Documents

Publication Publication Date Title
US11335182B2 (en) Methods and systems for detecting intrusions in a monitored volume
Kim et al. SLAM-driven robotic mapping and registration of 3D point clouds
US11292700B2 (en) Driver assistance system and a method
US6061644A (en) System for determining the spatial position and orientation of a body
US9342890B2 (en) Registering of a scene disintegrating into clusters with visualized clusters
US9989353B2 (en) Registering of a scene disintegrating into clusters with position tracking
US10657691B2 (en) System and method of automatic room segmentation for two-dimensional floorplan annotation
US20230064071A1 (en) System for 3d surveying by an autonomous robotic vehicle using lidar-slam and an estimated point distribution map for path planning
Ferri et al. Dynamic obstacles detection and 3d map updating
WO2018169467A1 (en) A vehicle with a crane with object detecting device
Martín et al. Deterioration of depth measurements due to interference of multiple RGB-D sensors
Glas et al. SNAPCAT-3D: Calibrating networks of 3D range sensors for pedestrian tracking
Gallegos et al. Appearance-based slam relying on a hybrid laser/omnidirectional sensor
US20220057518A1 (en) Capturing environmental scans using sensor fusion
US11619725B1 (en) Method and device for the recognition of blooming in a lidar measurement
US11614528B2 (en) Setting method of monitoring system and monitoring system
Hebel et al. Change detection in urban areas by direct comparison of multi-view and multi-temporal ALS data
US20220414925A1 (en) Tracking with reference to a world coordinate system
WO2020179382A1 (en) Monitoring device and monitoring method
EP4181063A1 (en) Markerless registration of image data and laser scan data
EP4345412A1 (en) On-site compensation of measurement devices
US20230400348A1 (en) Vibration monitoring system and method
WO2023163760A1 (en) Tracking with reference to a world coordinate system
CN115565058A (en) Robot, obstacle avoidance method, device and storage medium
CN114746826A (en) Data processing method and movable platform

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE