WO2011054971A2 - Method and system for detecting the movement of objects - Google Patents

Method and system for detecting the movement of objects

Info

Publication number
WO2011054971A2
Authority
WO
WIPO (PCT)
Prior art keywords
camera
zone
objects
people
ground plane
Prior art date
Application number
PCT/EP2010/067140
Other languages
English (en)
Other versions
WO2011054971A3 (fr)
Inventor
Dorr Niall
Fogarty Graham
Original Assignee
Alpha Vision Design Research Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alpha Vision Design Research Ltd.
Publication of WO2011054971A2
Publication of WO2011054971A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Definitions

  • The present invention relates to a method and system for detecting the movement of objects using 3D images generated by a time-of-flight multi-array camera.
  • The invention relates to a system and method for the identification of objects for automated people counting, queue detection, monitoring people entering and leaving a secure area, and security surveillance.
  • Optical sensors detect objects by the interruption of a single-point beam of visible, ultraviolet or infrared light. More advanced single line-of-sight 'time of flight' lasers can be used to detect the presence of objects through a change in the measured distance.
  • Mechanical sensors detect objects through their weight.
  • Thermal sensors segment the presence of objects due to their differing heat characteristics from the ambient surroundings.
  • Electromagnetic sensors can detect metals due to the presence of induced currents. These sensors are usually installed in particular applications whereby the scene or area of interest is severely constrained. In a general sense these sensors are limited in their practical uses, as they can only detect objects moving through a narrow constrained space. Sensors based on microwave or ultrasonic signals detect the presence of objects through a change in reflected signal power.
  • Monocular camera vision based approaches have been developed based on the colour and geometric shape patterns of images acquired within their fields of view. Some monocular camera systems also focus on depth cues with the use of external structured light and texture analysis. The images produced by a camera system are by default 2D data. It is only by interpreting these images that 3D data can be inferred. However, a problem with these systems is that they suffer from 'prior knowledge' installations, and thus in real-world situations the inferred 3D data can easily be misrepresented. Two or more camera systems can be combined to produce depth cues from multiple images of the same objects from varying perspectives by matching the corresponding points. This produces a disparity map which can be normalized to produce a range map. Any objects that fall within this 3D space can be detected.
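For context, the disparity-to-range normalisation described above reduces, for a rectified stereo pair, to range = focal length x baseline / disparity. The following is a minimal Python sketch of that prior-art step; the focal length and baseline values are illustrative assumptions, not figures from the patent.

```python
import numpy as np

FOCAL_LENGTH_PX = 700.0  # assumed focal length in pixels (illustrative)
BASELINE_M = 0.12        # assumed distance between the two cameras, in metres

def disparity_to_range(disparity: np.ndarray) -> np.ndarray:
    """Normalise a stereo disparity map (in pixels) into a range map (metres).

    Pixels with zero or negative disparity, i.e. where no correspondence
    was found (occlusion, texture-less surfaces), are set to NaN. These
    gaps are exactly the weakness of multi-camera matching that the
    patent's single time-of-flight camera avoids.
    """
    range_map = np.full(disparity.shape, np.nan)
    valid = disparity > 0
    range_map[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
    return range_map
```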
  • A method for detecting the movement of 3D objects in a defined zone using a camera, comprising the steps of:
  • calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane and/or the position of the camera;
  • Advantages of the present invention include superior performance in highlighting and tracking the presence of objects, for example people, in the field of view.
  • Advantages of the various embodiments of the invention over heretofore known people counting and motion detection systems include superior shadow discrimination, background ground suppression, and the ability to operate indoors and outdoors without ambient light interference.
  • The 3D pixel data is an actual measurement of the distance of the object from the camera, providing a full 3D map comprising spatial coordinates.
  • The invention uses a direct means of detecting objects in 3D space, as opposed to indirect and inferred means.
  • The invention has superior performance with respect to intensity-based camera systems in that each 3D pixel is an actual measurement of the distance of the object from the sensor.
  • A full 3D map including x, y spatial coordinates can be obtained.
  • The invention overcomes the issues highlighted with traditional multi-camera systems in that no matching of points between cameras is required. This negates the effects of occlusion and the requirement for texture in the image. Planar surfaces with no texture can also now be measured with certainty.
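Obtaining that full 3D map from a single range image is a standard pinhole back-projection. A minimal sketch, assuming the intrinsics (fx, fy, cx, cy) are available from factory calibration; this is illustrative, not code from the patent.

```python
import numpy as np

def range_to_point_cloud(depth_m: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a per-pixel range image into (x, y, z) camera-frame
    coordinates using the pinhole model, yielding the full 3D map with
    no point matching between cameras. Intrinsics are assumed known."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.dstack((x, y, depth_m)).reshape(-1, 3)  # (N, 3) point cloud
```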
  • In one embodiment, said step of capturing further comprises the step of eliminating shadows from said defined zone.
  • In one embodiment, the invention comprises the step of eliminating background patterns from the 3D objects.
  • In one embodiment, the invention comprises the step of subsequent image filtering to eliminate background patterns from the 3D objects.
  • In one embodiment, the invention comprises the step of image filtering to connect features in 3D space. In one embodiment, the invention comprises the further step of performing subsequent image analysis to filter range data into clusters.
  • In one embodiment, the invention comprises the further step of filtering interest points into relative 3D planes.
  • In one embodiment, the invention comprises the step of connecting 3D features through multiple frames.
  • In one embodiment, the invention comprises the step of extracting thresholds and acquiring apex areas from said 3D features. In one embodiment, the invention comprises the further step of eliminating ambient lighting effects.
  • In one embodiment, said filtering step eliminates features outside of preselected 3D zones.
  • In one embodiment, the invention comprises selecting 3D spaces of interest, and eliminating features outside said selected 3D spaces of interest.
  • In one embodiment, the invention comprises performing different run-time algorithms on each of a plurality of said 3D spaces of interest.
  • In one embodiment, a single detected object represents a person.
  • In one embodiment, the invention comprises the step of processing features detected as objects to be counted as people. Ideally, people are tracked and counted through multiple frames. Ideally, people can be identified and tracked in zero-lux ambient light conditions.
  • A 3D camera vision apparatus for people identification, said apparatus comprising: a single multi-array time-of-flight camera; a processor for normalizing the data into 3D range maps; a processor for determining the presence of people from the 3D range maps; a trajectory processor for receiving frames from the 3D processor; and a processor for people detection, people counting, rate of flow of people, directional people counting and people queue analysis.
  • In one embodiment, said objects having a close proximity to said ground plane relative to a threshold are filtered out as ground-plane noise, as sketched below.
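A minimal sketch of that ground-plane noise filter, assuming per-point heights above the calibrated plane have already been computed; the 5 cm default threshold is an assumption for illustration, not a value from the patent.

```python
import numpy as np

def filter_ground_plane_noise(points: np.ndarray, heights_m: np.ndarray,
                              min_height_m: float = 0.05) -> np.ndarray:
    """Drop points whose height above the calibrated ground plane falls
    below a small threshold; such points are treated as ground-plane
    noise (flooring texture, mats, shadows with no physical height).

    points    -- (N, 3) array of 3D points
    heights_m -- (N,) per-point height above the ground plane, in metres
    """
    return points[heights_m > min_height_m]
```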
  • In one embodiment, said trajectory processor determines an object's trajectory by tracking said people objects in multiple frames.
  • In one embodiment, said time-of-flight image acquisition device comprises a monocular camera configured for acquiring a plurality of images.
  • A system for detecting the movement of 3D objects in a defined zone using a camera, comprising: means for calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane; means for capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; and
  • A method for counting people by detecting the movement of 3D objects in a defined zone using a camera, comprising the steps of:
  • calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane and/or the position of the camera;
  • A computer program comprising program instructions for causing a computer to carry out the above method, which may be embodied on a record medium, carrier signal or read-only memory.
  • Fig. 1 illustrates the field of view of the camera, i.e. the surveillance area, according to one aspect of the invention;
  • Fig. 2 illustrates objects in the surveillance area of Fig. 1; objects can be in the centre of the image or at the extreme corners of the X and Y axes within the surveillance area;
  • Fig. 3 illustrates images obtained from the time-of-flight sensor, showing 2D luminance values representing the area under surveillance and 3D data representing the height values relative to the camera;
  • Fig. 4 illustrates obtaining three-dimensional height points of the ground plane relative to the camera orientation and position;
  • Fig. 5 illustrates segmented objects relative to the camera, according to one aspect of the invention;
  • Fig. 6 illustrates the perspective distortion inherent in all point-source 3D and 2D imaging systems;
  • Fig. 7 illustrates another perspective distortion inherent in all point-source 3D and 2D imaging systems;
  • Fig. 8 illustrates 3D point clouds of connected and unconnected objects in a surveillance area;
  • Fig. 9 illustrates a further view of the 3D point clouds of connected and unconnected objects shown in Fig. 8;
  • Fig. 10 illustrates the decision point or virtual beam on which a crossover point is defined, according to one aspect of the invention;
  • Fig. 11 illustrates the decision point or virtual beam on which a crossover point is defined, according to another aspect of the invention;
  • Fig. 12 illustrates a virtual beam with the two zones A and B;
  • Figs. 13 and 14 illustrate an object in Zone B of Figure 12 at frame number t+1 and in Zone A at frame number t+2;
  • Fig. 15 is a flow chart illustrating the operation of the invention.
  • A multi-array time-of-flight camera based vision system is calibrated to provide heights above a planar surface for any point in the field of view, as shown in Figure 1, indicated generally by the reference numeral 1.
  • Figure 1 illustrates the field of view of the camera, i.e. the surveillance area, by the shaded area 2 which represents a ground plane, i.e. the floor.
  • A camera 3, for example a time-of-flight camera, can be placed directly overhead or at an angle relative to the ground plane 2.
  • The time-of-flight camera 3 captures both 2D luminance values and 3D range data.
  • The 3D range data is colour-coded to represent the differences in height within the image scene from the sensor.
  • Fig. 2 illustrates a number of objects 4 within the scene above the ground plane 2.
  • Objects 4 can be in the centre of the image or to the extreme corner of the X and Y axis within the surveillance area.
  • When an object 4, for example a person, enters the camera's field of view, it generates interest points called "features", the heights of which are measured relative to the camera 3.
  • These points are clustered in 3D space to provide volumes of interest.
  • These volume-of-interest clusters are transformed into single and multiple people objects that are identified and tracked in multiple frames to provide "trajectories". These people objects and associated "trajectories" are then used for automated people counting, queue detection and security surveillance, the operation of which is discussed in more detail below. A sketch of one possible clustering step follows.
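The patent does not name a clustering algorithm. As an illustrative stand-in, density-based clustering (here scikit-learn's DBSCAN, with assumed eps and min_samples values) groups above-ground points into volumes of interest, each with a centroid and an apex height.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_volumes_of_interest(points: np.ndarray):
    """Group above-ground 3D points into 'volumes of interest'.

    DBSCAN stands in for the unspecified clustering step; eps (metres)
    and min_samples are illustrative values, not the patent's parameters.
    """
    labels = DBSCAN(eps=0.15, min_samples=20).fit_predict(points)
    clusters = []
    for label in set(labels) - {-1}:  # label -1 marks noise points
        cluster = points[labels == label]
        clusters.append({
            "points": cluster,
            "centroid": cluster.mean(axis=0),
            "apex_height": cluster[:, 2].max(),  # assuming z = height axis
        })
    return clusters
```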
  • Embodiments of the present invention can use a factory calibrated time of flight camera system that provides 3D coordinates of points in the field of view.
  • Fig. 3 illustrates the image obtained from the time-of-flight sensor, showing 2D luminance values representing the area under surveillance and 3D data representing the height values relative to the sensor.
  • The object in the centre 4a has minimal perspective distortion, while the object 4b at the extreme X and Y coordinates suffers from the most extreme perspective distortion. This is caused by image warping at the sensor due to the sensor being a single fixed point in space.
  • Fig. 4 illustrates obtaining the 3D height points of the ground plane relative to the camera's orientation and position.
  • In Method A, these 3D planar cloud points can be filtered out to provide only above-ground 3D points. This eliminates the effects of various flooring materials.
  • Fig. 5 illustrates segmented objects relative to the camera.
  • In Method B, these objects can be segmented by increasing the distance value in 3D from the camera's 3D-generated point cloud. In this case only objects within a certain distance range of the camera are filtered and shown.
  • The plane of the ground is calibrated relative to the camera. Only those points that have some height relative to the ground plane are of interest. Therefore, unwanted or unknown objects and highlights can be filtered out due to their lack of height relative to the ground plane.
  • The points of interest are then clustered directly in 3D space.
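One plausible way to perform that ground-plane calibration is a least-squares plane fit, sketched below under the assumption that an empty-scene point cloud is captured at set-up time; the sign convention for "height" depends on the camera's axis orientation (for an overhead, downward-looking camera the roles of the two terms may be swapped).

```python
import numpy as np

def fit_ground_plane(calibration_points: np.ndarray) -> np.ndarray:
    """Least-squares fit of a plane z = a*x + b*y + c to the point cloud
    of the empty scene captured at calibration time."""
    A = np.c_[calibration_points[:, 0],
              calibration_points[:, 1],
              np.ones(len(calibration_points))]
    coeffs, *_ = np.linalg.lstsq(A, calibration_points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def height_above_ground(points: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Per-point offset from the fitted plane along z; points with
    near-zero offset are ground and can be discarded."""
    a, b, c = coeffs
    plane_z = a * points[:, 0] + b * points[:, 1] + c
    return points[:, 2] - plane_z
```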
  • Fig 6 illustrates the perspective distortion inherent in all point source 3D and 2D imaging systems, illustrated by the reference numeral 10.
  • The invention normalises against this distortion and obtains a person's scale by having a user draw a box 11 around a detected object 4.
  • The position and size of this box are known within the image, and the appropriate weighting factors to normalise the perspective distortion are applied across both the X and Y axes, as shown in the X and Y diagrams.
  • Fig. 7 illustrates the perspective distortion inherent in all point source 3D and 2D imaging systems.
  • The invention normalises against this distortion and obtains a person's scale by having a user draw a line 12 across an object 4.
  • The position and width of this line are known within the image, and the appropriate weighting factors to normalise the perspective distortion are applied across both the X and Y axes, as shown in the X and Y diagrams.
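The patent does not give a formula for these weighting factors. One plausible reading, since each pixel carries a range value, is to normalise apparent size by depth under the pinhole model; the sketch below is that assumption, not the patented scheme.

```python
def normalise_scale(pixel_extent: float, depth_m: float,
                    focal_px: float) -> float:
    """Convert an object's apparent extent in pixels into a metric extent.

    For a pinhole camera, metric size = pixel size * depth / focal length,
    which removes the perspective effect of Figs. 6 and 7: the same person
    spans fewer pixels the further they are from the camera.
    """
    return pixel_extent * depth_m / focal_px
```

For example, a silhouette spanning 150 pixels at 3.5 m range with a 300-pixel focal length normalises to 150 * 3.5 / 300 = 1.75 m.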
  • Fig. 8 illustrates 3D point clouds of connected and unconnected blob objects 21.
  • A box 22 of similar dimensions to that acquired at calibration time is superimposed in software as a best-fit model enclosing the blobs. These blobs are subsequently categorised as individual objects or people. The height and scale of each object can be easily extracted using the process of the present invention.
  • Fig. 9 illustrates 3D point clouds of connected and unconnected objects. These objects may share the same height profile or may have different height apexes.
  • Through software, a count is made highlighting the number of objects or people that are present within the surveillance area.
  • Each separate cluster can be considered an object and is tracked from frame to frame. Therefore, at each frame selected information is available, including: the number of objects; their positions in 3D space (centroid); and the instantaneous motion vector (magnitude and direction). Using this raw data, people can be identified, counted and tracked extremely accurately.
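The tracker itself is unspecified. A minimal sketch of one common choice, greedy nearest-neighbour association of cluster centroids between consecutive frames with an assumed gating distance, which also yields the instantaneous motion vector:

```python
import numpy as np

def associate_clusters(prev_centroids, curr_centroids, max_jump_m=0.5):
    """Greedy nearest-neighbour association of (x, y, z) centroids between
    consecutive frames. Each match yields that object's instantaneous
    motion vector. The 0.5 m gating threshold is an assumed value.
    """
    matches, used = [], set()
    for i, prev in enumerate(prev_centroids):
        best_j, best_d = None, max_jump_m
        for j, curr in enumerate(curr_centroids):
            if j in used:
                continue
            d = float(np.linalg.norm(curr - prev))
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            # (index at t, index at t+1, motion vector)
            matches.append((i, best_j, curr_centroids[best_j] - prev))
    return matches
```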
  • Fig. 10 illustrates a decision point or virtual beam 30 on which a crossover point is defined to form a boundary between two zones. If an object is tracked through a number of frames and crosses this point, an appropriate count is incremented and a direction flag is assigned.
  • The beam 30 can be moved on the screen through software and can be shortened or lengthened as desired by simple control instructions. Therefore, and advantageously, the defined zone can be adapted easily by simple program instructions without the need for physically re-positioning the camera.
  • Fig. 11 illustrates the decision point or virtual beam 40 on which a crossover point is defined. If an object is tracked through a number of frames and crosses this point, an appropriate count is incremented and a direction flag is assigned. As in Fig. 10, this boundary line can be translated, rotated and have a number of curves added to mimic real-world scenarios and requirements.
  • Fig. 12 illustrates the virtual beam with two zones labelled, Zone A and Zone B. These zones exist on either side of a virtual beam 50.
  • The system has a hysteresis in that an object must pass from Zone A to Zone B, or vice versa, to be counted as a valid single count.
  • Fig. 13 illustrates an object 51 in Zone B at frame number t+1.
  • Fig. 14 illustrates the object in Zone A at frame number t+2. The object 51 is tracked via a velocity vector from frame t+1 to frame t+2. Since the object 51 has crossed the boundary line, a single count is activated. The sketch below captures this zone hysteresis.
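A minimal sketch of that counting logic; the track identifier is hypothetical, and per-frame zone membership is assumed to come from the tracker.

```python
class BeamCounter:
    """Directional counting across the virtual beam, with the zone
    hysteresis of Figs. 12-14: a tracked object is counted only once,
    and only when it has fully passed from Zone A into Zone B or
    vice versa."""

    def __init__(self) -> None:
        self.count_a_to_b = 0
        self.count_b_to_a = 0
        self._last_zone = {}  # track id -> "A" or "B"

    def update(self, track_id, zone: str) -> None:
        prev = self._last_zone.get(track_id)
        if prev == "A" and zone == "B":
            self.count_a_to_b += 1
        elif prev == "B" and zone == "A":
            self.count_b_to_a += 1
        if zone in ("A", "B"):  # a frame on the beam itself changes nothing
            self._last_zone[track_id] = zone

# E.g. object 51 seen in Zone B at frame t+1, then in Zone A at frame t+2:
counter = BeamCounter()
counter.update(51, "B")  # frame t+1
counter.update(51, "A")  # frame t+2 -> count_b_to_a becomes 1
```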
  • The present invention provides an easy-to-use people counting and detection camera system based on the time-of-flight camera principle to populate a semiconductor imager. People within its field of view are identified, counted and tracked through its area of interest.
  • Fig. 15 is a flow chart, indicated generally by the reference numeral 60, illustrating the operation of the invention implemented in software, with reference to the above description.
  • Advantages of the present invention are superior performance in highlighting and tracking the presence of people in the field of view.
  • Advantages of the various embodiments of the invention over heretofore known people counting and motion detection systems include superior shadow discrimination, multiple people identification, background ground suppression and the ability to operate indoors and outdoors without ambient light interference.
  • The invention uses a direct means of detecting objects in 3D space, as opposed to indirect and inferred means.
  • The invention has superior performance with respect to intensity-based camera systems in that each 3D pixel is an actual measurement of the distance of the object from the sensor.
  • A full 3D map including x, y spatial coordinates can be obtained.
  • The invention overcomes the issues highlighted with traditional multi-camera systems in that no matching of points between cameras is required. This negates the effects of occlusion and the requirement for texture in the image. Planar surfaces with no texture can also now be measured with certainty. This is particularly important due to the variety of surfaces present under a people counter, i.e. carpet, tiles, etc. Furthermore, the appearance of these surfaces changes with the passage of time. Problems caused by shadows, movement of mats, etc. can now be ignored. Highlights in the prior art are thus eliminated in the various embodiments of the present invention, because detection of an object's motion in the invention is based on physical coordinates rather than on the appearance of the object.
  • The present invention also features easy installation and set-up without requiring initial training procedures.
  • The invention, upon initial set-up, self-calibrates to acquire the ground plane. This involves only a one-time installation setup and requires no further training of any sort.
  • Another advantage of the system is that stationary or slow moving objects do not become invisible as they would to a motion detection system.
  • The present invention also provides a very easy-to-use graphical user interface for calibrating the scale of the system to the ground height. It will be further appreciated that the system can be placed directly above an area of interest, i.e. at a normal to the planar surface of the ground, or can be placed at an angle to the scene of interest.
  • The present invention also features a flexible masking capability. The masking capability allows a user, during set-up, to graphically specify zones to be masked out in either 2D or 3D. This feature can be used, for example, to account for either non-custom doorways or stationary background scenery in the field of view. A sketch of such a mask follows.
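A minimal sketch of a 2D mask, assuming points are tested against a user-drawn polygon (a 3D mask would additionally gate on a height interval); matplotlib's point-in-polygon test is used purely for brevity.

```python
import numpy as np
from matplotlib.path import Path

def apply_mask(points_xy: np.ndarray, mask_polygon_xy) -> np.ndarray:
    """Discard points that fall inside a user-drawn 2D mask polygon,
    e.g. a swinging door or a stationary background fixture.

    points_xy       -- (N, 2) array of point coordinates
    mask_polygon_xy -- sequence of (x, y) polygon vertices
    """
    inside = Path(mask_polygon_xy).contains_points(points_xy)
    return points_xy[~inside]
```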
  • The system can be applied to areas which require access control, such as a secure area or a clean-room environment.
  • The accuracy of the method in counting objects overcomes the problem of tailgating (where two or more people are classified as one person and counted within a specified time period) that exists with prior-art systems. The system therefore highlights, with extreme accuracy, the presence of two or more counted people as separate objects within the sensor's field of view or surveillance area, overcoming the problem of tailgating.
  • The system and method of the present invention can be used to calculate the human occupancy level of a room, a building or an enclosed zone by counting the number of people entering and leaving an entrance or entrances. This information can be used to effectively control the atmospheric environment, for example air-conditioning systems and other control systems.
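The occupancy calculation itself is simple bookkeeping over the directional counts from each entrance; a short illustrative sketch:

```python
def room_occupancy(per_entrance_counts) -> int:
    """Net occupancy of a zone from directional (in, out) counts at each
    of its entrances, e.g. [(120, 95), (40, 38)] -> 27. Clamped at zero
    so that an occasional missed count cannot drive the estimate negative."""
    return max(sum(n_in - n_out for n_in, n_out in per_entrance_counts), 0)
```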
  • The invention can be used in a building to conform with fire regulations.
  • The system and method of the invention can be used to calculate the number of people in a selected zone or zones of a building at any one time.
  • The people-count data can be fed back to a central control panel located either at the front entrance of the building or at a remote location.
  • The central control panel can display on a visual display unit, for example a digital display, the number of people in each zone or in the entire building.
  • The invention provides an effective system to accurately count the number of people in a building at any time, by giving either an overall count for an entire building or a count on a zone-by-zone basis.
  • The present invention also provides for the elimination of excessive blind spots.
  • A non-stationary background, like the motion of a door opening, can be easily masked off. Accordingly, the present invention is easier to use and install, and more robust, than heretofore known people counting, people queue detection and people tracking systems.
  • The embodiments of the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus. However, the invention also extends to computer programs, particularly computer programs stored on or in a carrier, adapted to bring the invention into practice.
  • The program may be in the form of source code, object code, or a code intermediate between source and object code, such as a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention.
  • The carrier may comprise a storage medium such as a ROM, e.g. a CD-ROM, or a magnetic recording medium, e.g. a floppy disk or hard disk.
  • The carrier may be an electrical or optical signal which may be transmitted via an electrical or optical cable, or by radio or other means.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention concerns a system and method for detecting the movement of 3D objects, in particular people, in a defined zone using a camera, comprising the steps of: calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane; capturing a multi-array of 3D range pixel data from a single camera in a single frame in said defined zone; and segmenting the 3D pixel data into volume features for the subsequent purpose of object identification and tracking through multiple captured frames. The invention overcomes the problems encountered with traditional multi-camera systems in that no matching of points between cameras is required. This eliminates the effects of occlusion as well as the requirement for texture in the image. Planar surfaces with no texture can now be measured with certainty. This is particularly important because of the variety of surfaces that may be found beneath a people counter, i.e. carpet, tiles, etc.
PCT/EP2010/067140 2009-11-09 2010-11-09 Method and system for detecting the movement of objects WO2011054971A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0919553.8 2009-11-09
GB0919553A GB2475104A (en) 2009-11-09 2009-11-09 Detecting movement of 3D objects using a TOF camera

Publications (2)

Publication Number Publication Date
WO2011054971A2 true WO2011054971A2 (fr) 2011-05-12
WO2011054971A3 WO2011054971A3 (fr) 2011-06-30

Family

ID=41502062

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/067140 WO2011054971A2 (fr) 2009-11-09 2010-11-09 Method and system for detecting the movement of objects

Country Status (2)

Country Link
GB (1) GB2475104A (fr)
WO (1) WO2011054971A2 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942773A (zh) * 2013-01-21 2014-07-23 浙江大华技术股份有限公司 Method and device for obtaining queue length through image analysis
DE102015014320A1 (de) 2014-11-26 2016-06-02 Scania Cv Ab Method and system for improving the quality of image information from a 3D detection unit for use in a vehicle
DE102015014318A1 (de) 2014-11-26 2016-06-02 Scania Cv Ab Method and system for improving the quality of image information from a 3D detection unit for use in a vehicle
WO2017023202A1 (fr) * 2015-07-31 2017-02-09 Vadaro Pte Ltd Time-of-flight monitoring system
CN107767407A (zh) * 2016-08-16 2018-03-06 北京万集科技股份有限公司 Road vehicle target extraction system and method based on a TOF camera
US10221610B2 (en) 2017-05-15 2019-03-05 Otis Elevator Company Depth sensor for automatic doors
US10386460B2 (en) 2017-05-15 2019-08-20 Otis Elevator Company Self-calibrating sensor for elevator and automatic door systems
US11126863B2 (en) 2018-06-08 2021-09-21 Southwest Airlines Co. Detection system
US11354809B2 (en) 2017-12-06 2022-06-07 Koninklijke Philips N.V. Device, system and method for detecting body movement of a patient
RU2821377C1 (ru) * 2023-05-29 2024-06-21 Федеральное государственное автономное образовательное учреждение высшего образования "Уральский федеральный университет имени первого Президента России Б.Н. Ельцина" Method for tracking objects in video

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839308B (zh) * 2012-11-26 2016-12-21 北京百卓网络技术有限公司 Method, device and system for obtaining the number of people
CN104376575B (zh) * 2013-08-15 2018-02-16 汉王科技股份有限公司 Pedestrian counting method and device based on multi-camera monitoring
WO2015168406A1 (fr) * 2014-04-30 2015-11-05 Cubic Corporation Adaptive gateline ground display

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060239558A1 (en) 2005-02-08 2006-10-26 Canesta, Inc. Method and system to segment depth images and to detect shapes in three-dimensionally acquired data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7203356B2 (en) * 2002-04-11 2007-04-10 Canesta, Inc. Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications
US7400744B2 (en) * 2002-09-05 2008-07-15 Cognex Technology And Investment Corporation Stereo door sensor
US7831087B2 (en) * 2003-10-31 2010-11-09 Hewlett-Packard Development Company, L.P. Method for visual-based recognition of an object
US7623674B2 (en) * 2003-11-05 2009-11-24 Cognex Technology And Investment Corporation Method and system for enhanced portal security through stereoscopy
US7796780B2 (en) * 2005-06-24 2010-09-14 Objectvideo, Inc. Target detection and tracking from overhead video streams
US7787656B2 (en) * 2007-03-01 2010-08-31 Huper Laboratories Co., Ltd. Method for counting people passing through a gate
CA2684523A1 (fr) 2007-04-20 2008-10-30 Softkinetic S.A. Volume recognition method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060239558A1 (en) 2005-02-08 2006-10-26 Canesta, Inc. Method and system to segment depth images and to detect shapes in three-dimensionally acquired data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUDMUNDSSON ET AL., IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, 2008, pages 1 - 6
WALLHOFF ET AL., PROCEEDINGS OF THE INTL CONFERENCE ON IMAGE PROCESSING, 2007, pages 53 - 56

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942773A (zh) * 2013-01-21 2014-07-23 浙江大华技术股份有限公司 Method and device for obtaining queue length through image analysis
CN103942773B (zh) * 2013-01-21 2017-05-10 浙江大华技术股份有限公司 Method and device for obtaining queue length through image analysis
DE102015014320A1 (de) 2014-11-26 2016-06-02 Scania Cv Ab Method and system for improving the quality of image information from a 3D detection unit for use in a vehicle
DE102015014318A1 (de) 2014-11-26 2016-06-02 Scania Cv Ab Method and system for improving the quality of image information from a 3D detection unit for use in a vehicle
WO2017023202A1 (fr) * 2015-07-31 2017-02-09 Vadaro Pte Ltd Time-of-flight monitoring system
CN107767407A (zh) * 2016-08-16 2018-03-06 北京万集科技股份有限公司 Road vehicle target extraction system and method based on a TOF camera
CN107767407B (zh) * 2016-08-16 2020-09-22 北京万集科技股份有限公司 Road vehicle target extraction system and method based on a TOF camera
US10221610B2 (en) 2017-05-15 2019-03-05 Otis Elevator Company Depth sensor for automatic doors
US10386460B2 (en) 2017-05-15 2019-08-20 Otis Elevator Company Self-calibrating sensor for elevator and automatic door systems
US11354809B2 (en) 2017-12-06 2022-06-07 Koninklijke Philips N.V. Device, system and method for detecting body movement of a patient
US11126863B2 (en) 2018-06-08 2021-09-21 Southwest Airlines Co. Detection system
RU2821377C1 (ru) * 2023-05-29 2024-06-21 Федеральное государственное автономное образовательное учреждение высшего образования "Уральский федеральный университет имени первого Президента России Б.Н. Ельцина" Method for tracking objects in video

Also Published As

Publication number Publication date
GB0919553D0 (en) 2009-12-23
WO2011054971A3 (fr) 2011-06-30
GB2475104A (en) 2011-05-11

Similar Documents

Publication Publication Date Title
WO2011054971A2 (fr) Method and system for detecting the movement of objects
US7397929B2 (en) Method and apparatus for monitoring a passageway using 3D images
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
US7400744B2 (en) Stereo door sensor
US7929017B2 (en) Method and apparatus for stereo, multi-camera tracking and RF and video track fusion
KR101608889B1 (ko) Queue monitoring apparatus and method
US8224026B2 (en) System and method for counting people near external windowed doors
CN111753609A (zh) Target recognition method, device and camera
WO2011139734A2 (fr) Method for detecting a moving object using an image sensor and structured light
WO2017183769A1 (fr) Device and method for detecting abnormal situations
KR20160035121A (ko) Method and apparatus for counting objects using position information extracted from depth image information
Stahlschmidt et al. Applications for a people detection and tracking algorithm using a time-of-flight camera
Snidaro et al. Automatic camera selection and fusion for outdoor surveillance under changing weather conditions
Yuan et al. An automated 3D scanning algorithm using depth cameras for door detection
US11734834B2 (en) Systems and methods for detecting movement of at least one non-line-of-sight object
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
Bravo et al. Outdoor vacant parking space detector for improving mobility in smart cities
Zhang et al. Fast crowd density estimation in surveillance videos without training
IES85733Y1 (en) Method and system for detecting the movement of objects
IES20100715A2 (en) Method and system for detecting the movement of objects
IE20100715U1 (en) Method and system for detecting the movement of objects
Jędrasiak et al. The comparison of capabilities of low light camera, thermal imaging camera and depth map camera for night time surveillance applications
KR20220009953A (ko) Method for capturing the motion of an object, and motion capture system
Zhang et al. 3D pedestrian tracking and frontal face image capture based on head point detection
CN110232353B (zh) Method and device for obtaining the depth positions of people in a scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10787043

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/08/2012)

122 Ep: pct application non-entry in european phase

Ref document number: 10787043

Country of ref document: EP

Kind code of ref document: A2