WO2009146756A1 - Image processing device with calibration module, method for calibration and computer program - Google Patents
Image processing device with calibration module, method for calibration and computer program
- Publication number
- WO2009146756A1 (PCT/EP2008/065529)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detectors
- surveillance
- monitoring
- area
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/12—Acquisition of 3D measurements of objects
Definitions
- Image processing device with calibration module, method for calibration and computer program
- DE 10 2006 027120 A1 proposes an image processing method for detecting and processing visual obstacles in a surveillance scene, in which a plurality of state data records of a surveillance object are acquired from an image sequence, each record comprising an object position and a size, i.e. height, of the surveillance object measured at that position. By comparing the measured size of the surveillance object with a modeled, perspective size of the surveillance object at the same object position, the presence of a visual obstacle is inferred.
- The image processing method relies on the surveillance object being assigned to a class, for example persons, for which an average or usual size is known; the modeled, perspective size is determined from this size.
- The invention relates to an image processing device which is suitable and/or designed as an additional module or as an integral part of a video surveillance system.
- The video surveillance system preferably includes a plurality of surveillance cameras directed at relevant surveillance areas in a real 3D scene.
- At least one surveillance object, possibly several, and at least one masking geometry, possibly several, can be present in the surveillance area(s).
- The detection module is designed, in terms of programming and/or circuitry, such that after evaluation the set of subarea detectors is subdivided into a positive set, a negative set and optionally a neutral set.
- The positive set comprises the subarea detectors that have positively recognized their corresponding subarea of the surveillance object.
- The negative set comprises the subarea detectors for which the associated subarea of the surveillance object was not recognized. Subarea detectors that have not provided any information, for whatever reason, are assigned to the optional neutral set.
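The division into positive, negative and neutral sets can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `Result` enum and the detector names are assumptions introduced here.

```python
from enum import Enum

class Result(Enum):
    DETECTED = 1      # subarea positively recognized
    NOT_DETECTED = 2  # subarea searched for but not found
    NO_INFO = 3       # detector gave no usable answer for some other reason

def partition(detector_results):
    """Split subarea detectors into positive, negative and neutral sets
    based on each detector's evaluation result."""
    positive = {name for name, r in detector_results.items() if r is Result.DETECTED}
    negative = {name for name, r in detector_results.items() if r is Result.NOT_DETECTED}
    neutral = {name for name, r in detector_results.items() if r is Result.NO_INFO}
    return positive, negative, neutral
```

A non-empty negative set is the trigger for the calibration step described below.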
- The image processing device has a calibration module, implemented in circuitry and/or programming, which is designed to calibrate, in particular to generate and/or update, a depth map and/or a masking geometry for the surveillance area on the basis of a detected surveillance object with a non-empty negative set.
- The depth map is preferably formed as a relation between the spatial and/or 3D properties of the surveillance area and the image of the surveillance area captured by the video camera, i.e. the 2D image of the surveillance scene in the image plane of the video camera.
- Each point of the image of the surveillance scene is thereby assigned a real, three-dimensional object point in the surveillance area.
- The depth map can also be calibrated in surveillance areas where almost no floor surface is visible.
- The set of subarea detectors is specific and/or selective for a surveillance object type or genus.
- Corresponding surveillance object types are, for example, persons, in particular adults or children, cars, motorcycles, shopping carts, etc. Because the set of subarea detectors is designed specifically or selectively for one surveillance object type, expert knowledge about that type can be brought into the interpretation of the detection results. For example, an average adult height of 1.80 m is assumed.
- The calibration module is designed so that, when the negative set is non-empty, a masking geometry is identified or suspected in the region of the subarea detectors of the negative set.
- The subarea detectors of a common set have a defined spatial relation to one another, which is given by the surveillance object type. For example, if a person is divided among several subarea detectors comprising a head detector, a body detector and a leg detector, then the assignment of the head detector and the body detector to the positive set and of the leg detector to the negative set allows the conclusion that a concealment geometry exists in the region of the leg detector, i.e. vertically below the head and/or body.
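The inference above ("upper subareas visible, lower ones missing, therefore suspect an occluder below") can be sketched as follows, assuming each detector reports its image window as `(x0, y0, x1, y1)` with y growing downward. The function name and the box representation are illustrative, not from the patent.

```python
def infer_occlusion(positive, negative, boxes):
    """If some subareas are detected but lower ones are not, suspect an
    occlusion region starting at the bottom edge of the lowest positive
    subarea and spanning the extent of the missing detectors.
    boxes maps detector name -> (x0, y0, x1, y1), image y grows downward.
    Returns the suspected occluded rectangle, or None."""
    if not positive or not negative:
        return None  # nothing visible, or nothing missing: no conclusion
    lowest_positive_bottom = max(boxes[n][3] for n in positive)
    x0 = min(boxes[n][0] for n in negative)
    x1 = max(boxes[n][2] for n in negative)
    y1 = max(boxes[n][3] for n in negative)
    return (x0, lowest_positive_bottom, x1, y1)
```

In the head/body/leg example, the returned rectangle covers the leg region vertically below the last visible subarea, which is where the concealment geometry is suspected.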
- In the case of a surveillance object with a non-empty negative set, the surveillance object is extrapolated using the positive set in order to estimate a depth line or edge and/or a foot point.
- The surveillance object is extrapolated in a vertical, ground-facing direction of the surveillance area in order to generate the depth edge.
- The position information of the depth edge or of the foot point, which is concealed in the current situation of the surveillance area by a concealment geometry, can be used to calibrate the depth map.
- Another object of the invention relates to a method for calibrating, in particular generating and/or updating, a depth map and/or a concealment geometry in a surveillance area, preferably using the image processing device according to one of the preceding claims or as just described with the features of claim 8.
- A set of subarea detectors is applied to a surveillance object, the subarea detectors detecting different subareas of the surveillance object, in particular independently of one another.
- First, moving objects are segmented from the background, in particular the scene background, for example by subtracting a current image from a scene reference image.
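The segmentation-by-subtraction step can be sketched in a few lines. This is a deliberately minimal version using plain thresholded frame differencing; the threshold value and function name are assumptions, and a practical system would add morphological cleanup and connected-component labeling.

```python
import numpy as np

def segment_moving(current, reference, threshold=25):
    """Segment moving objects by subtracting a scene reference image from
    the current frame. Both images are grayscale uint8 arrays of equal
    shape; returns a boolean foreground mask."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold
```

Connected regions of the returned mask correspond to the "found objects" to which the subarea detectors are then applied.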
- The set of subarea detectors, or one such set, is applied to each found object.
- A single set of subarea detectors suffices for many applications. For surveillance areas in which several surveillance object types can occur, several sets can optionally be used.
- The set of subarea detectors is designed to detect a person. It is particularly preferred that the set of subarea detectors has a cardinality of at least two, with preferably at least four subarea detectors arranged vertically one above the other with respect to the surveillance area.
- The subarea detectors may adjoin, border and/or overlap one another.
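One way to obtain such a vertical, possibly overlapping stack of detector windows from a candidate bounding box is sketched below. The height fractions and the overlap parameter are illustrative assumptions; the patent does not prescribe specific proportions.

```python
def stack_windows(bbox, fractions, overlap=0.05):
    """Split a candidate bounding box (x0, y0, x1, y1) into vertically
    stacked subarea windows given height fractions that sum to 1.0.
    Adjacent windows are grown by `overlap` of the box height so that
    they overlap, then clipped to the box. Image y grows downward."""
    x0, y0, x1, y1 = bbox
    h = y1 - y0
    windows, top = [], y0
    for f in fractions:
        bottom = top + f * h
        windows.append((x0, max(y0, top - overlap * h),
                        x1, min(y1, bottom + overlap * h)))
        top = bottom
    return windows
```

With five fractions (head, shoulders, upper body, hips, legs) this yields windows matching the detector layout of FIG. 2.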
- FIG. 2 shows a schematic representation of a set of subarea detectors on a surveillance object.
- FIG. 3 shows a schematic application example of the set of subarea detectors.
- A foot point 4 of the moving surveillance object 2 represents the free end of the vertical extent of the surveillance object 2 in the direction of the ground.
- The foot point 4 is determined by circumscribing the moving surveillance object 2 with a so-called bounding box 5 and interpreting the center of the bottom edge 6 of the bounding box 5 as the foot point 4.
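The foot-point rule just described is a one-liner; a sketch under the same convention as above (image y grows downward, so the bottom edge is the larger y value):

```python
def foot_point(bbox):
    """Foot point = center of the bottom edge of the bounding box
    (x0, y0, x1, y1), with image y growing downward."""
    x0, y0, x1, y1 = bbox
    return ((x0 + x1) / 2.0, y1)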
- Other surveillance objects 2 can also be detected by this procedure.
- The bounding box 5 is determined by known image processing algorithms: for example, in a first step the moving surveillance object is separated from the essentially static background and then segmented. The segmented regions are subsequently circumscribed with search rectangles, or bounding boxes 5, and the content of the search rectangles is verified as the sought surveillance objects 2 by classifiers.
- FIG. 2 shows a modified approach for detecting the surveillance object 2 as an exemplary embodiment of the invention, in which, instead of a single detector covering the entire surveillance object 2, subarea detectors 7a to 7e are used, each covering only a specific subarea of the surveillance object 2.
- the head detector 7a for detecting the head,
- the shoulder detector 7b for detecting the shoulder area,
- the upper-body detector 7c for detecting the upper body,
- the center-body detector 7d for detecting the hip area,
- the foot detector 7e for detecting the feet/legs.
- The knowledge of the concealment edge 10 can be used to supplement a collection of the concealment geometries in the surveillance area 1; the foot point 4 can be used to calibrate a depth map, that is, a relation between the course of a horizontal plane in the surveillance area 1 and the two-dimensional image of the surveillance area 1.
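One very simple way to build such a relation from accumulated foot points is to fit a mapping from image row to scene depth. The sketch below assumes a flat floor and a linear row-to-depth relation, which is a strong simplification introduced here for illustration; the patent itself does not specify the fitting model.

```python
import numpy as np

def fit_row_to_depth(samples):
    """Least-squares fit of a linear relation depth = a * row + b from
    observed (image_row, scene_depth) pairs gathered at estimated foot
    points. Flat floor and linearity are simplifying assumptions."""
    rows = np.array([r for r, _ in samples], dtype=float)
    depths = np.array([d for _, d in samples], dtype=float)
    a, b = np.polyfit(rows, depths, 1)  # coefficients, highest degree first
    return a, b
```

Each newly estimated (possibly extrapolated) foot point adds one sample, so the depth map can be refined continuously during the monitoring mode described below.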
- FIG. 4 shows a schematic block diagram of a video surveillance system 11, which is connected to a plurality of surveillance cameras 12, which are designed and / or arranged to monitor one or more surveillance areas 1.
- The image data recorded by the surveillance cameras 12 are forwarded to an image processing device 13, which is designed to calibrate masking geometries and/or a depth map.
- Moving surveillance objects 2 are segmented from the background of the surveillance area(s) 1.
- The segmentation is carried out, for example, by subtracting the current surveillance image from a scene reference image.
- At least one set of subarea detectors 7a to 7e is applied to one, some or each segmented surveillance object 2, the subarea detectors 7a to 7e being divided, depending on the result, for example of a classifier of each subarea detector, into a positive set 8 or a negative set 9.
- The data of the detected and partially concealed surveillance object 2 are transferred from the detection module 15 into a first estimation module 16 for the concealment geometry and into a second estimation module 17 for the foot point.
- The second estimation module 17 extrapolates, as described, the surveillance object 2 and passes the estimated foot point 4, together with the further information about the surveillance object 2, to a database 19, which contains information about a depth map, that is, about the depth extent of the monitored surveillance area 1.
- In monitoring mode, both databases 18 and 19 can thus optionally be calibrated for the first time, supplemented or continuously updated.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
To relieve surveillance personnel and improve the quality of surveillance, image processing methods have been proposed that evaluate the recorded video data by means of digital image processing. In the usual methods, moving surveillance objects are separated from the essentially static background, tracked over time, and an alarm is triggered in the event of relevant movements. However, the video data initially show only a two-dimensional representation of the surveillance areas. Tracking a surveillance object over time (also called a trajectory) therefore provides, without further evaluation, no conclusion about the actual movement of the surveillance object within the surveillance area.
The invention relates to an image processing device 13 for a video surveillance system for monitoring a surveillance area 1, which may contain at least one surveillance object 2 and at least one masking geometry 3, with a detection module 15 designed to detect a surveillance object 2. The detection of the surveillance object 2 takes place on the basis of a set of subarea detectors 7a to 7e that detect different subareas of the surveillance object 2; during detection, the set of subarea detectors 7a to 7e is subdivided into a positive set 8 of subarea detectors that have recognized their associated subarea of the surveillance object 2 and a negative set 9 of subarea detectors that have not recognized their associated subarea of the surveillance object 2. The image processing device also comprises a calibration module 16, 17 designed to calibrate, on the basis of a detected surveillance object 2 with a non-empty negative set 9, a depth map and/or a masking geometry for the surveillance area 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP08874527A EP2300961A1 (fr) | 2008-06-06 | 2008-11-14 | Dispositif de traitement d'image avec module d'étalonnage, procédé d'étalonnage et programme d'ordinateur |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102008002275.6 | 2008-06-06 | ||
DE102008002275A DE102008002275A1 (de) | 2008-06-06 | 2008-06-06 | Bildverarbeitungsvorrichtung mit Kalibrierungsmodul, Verfahren zur Kalibrierung sowie Computerprogramm |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009146756A1 true WO2009146756A1 (fr) | 2009-12-10 |
Family
ID=40688430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2008/065529 WO2009146756A1 (fr) | 2008-06-06 | 2008-11-14 | Dispositif de traitement d’image avec module d’étalonnage, procédé d’étalonnage et programme d’ordinateur |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP2300961A1 (fr) |
DE (1) | DE102008002275A1 (fr) |
WO (1) | WO2009146756A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105917355B (zh) * | 2013-12-14 | 2020-07-03 | 维亚凯姆有限责任公司 | 用于判定物理、生理和/或生物计量数据和/或用于风险评估的基于相机的跟踪系统 |
FR3015730B1 (fr) * | 2013-12-20 | 2017-07-21 | Thales Sa | Procede de detection de personnes et ou d'objets dans un espace |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006027120A1 (de) | 2006-06-12 | 2007-12-13 | Robert Bosch Gmbh | Bildverarbeitungsverfahren, Videoüberwachungssystem sowie Computerprogramm |
-
2008
- 2008-06-06 DE DE102008002275A patent/DE102008002275A1/de not_active Withdrawn
- 2008-11-14 WO PCT/EP2008/065529 patent/WO2009146756A1/fr active Application Filing
- 2008-11-14 EP EP08874527A patent/EP2300961A1/fr not_active Withdrawn
Non-Patent Citations (3)
Title |
---|
BO WU ET AL: "Detection of Multiple, Partially Occluded Humans in a Single Image by Bayesian Combination of Edgelet Part Detectors", COMPUTER VISION, 2005. ICCV 2005. TENTH IEEE INTERNATIONAL CONFERENCE ON BEIJING, CHINA 17-20 OCT. 2005, PISCATAWAY, NJ, USA,IEEE, vol. 1, 17 October 2005 (2005-10-17), pages 90 - 97, XP010854775, ISBN: 978-0-7695-2334-7 * |
GREENHILL ET AL: "Occlusion analysis: Learning and utilising depth maps in object tracking", IMAGE AND VISION COMPUTING, ELSEVIER, GUILDFORD, GB, vol. 26, no. 3, 4 December 2007 (2007-12-04), pages 430 - 441, XP022374462, ISSN: 0262-8856 * |
See also references of EP2300961A1 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130063556A1 (en) * | 2011-09-08 | 2013-03-14 | Prism Skylabs, Inc. | Extracting depth information from video from a single camera |
CN110991485A (zh) * | 2019-11-07 | 2020-04-10 | 成都傅立叶电子科技有限公司 | 一种目标检测算法的性能评估方法及系统 |
CN113534136A (zh) * | 2020-04-22 | 2021-10-22 | 郑州宇通客车股份有限公司 | 一种车内遗留儿童检测方法及系统 |
CN113534136B (zh) * | 2020-04-22 | 2023-07-28 | 宇通客车股份有限公司 | 一种车内遗留儿童检测方法及系统 |
CN115002110A (zh) * | 2022-05-20 | 2022-09-02 | 深圳市云帆自动化技术有限公司 | 一种基于多协议转换的海上平台无人化数据传输系统 |
CN115002110B (zh) * | 2022-05-20 | 2023-11-03 | 深圳市云帆自动化技术有限公司 | 一种基于多协议转换的海上平台无人化数据传输系统 |
Also Published As
Publication number | Publication date |
---|---|
EP2300961A1 (fr) | 2011-03-30 |
DE102008002275A1 (de) | 2009-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2297701B1 (fr) | Analyse vidéo | |
EP2386092B1 (fr) | Dispositif, procédé et ordinateur pour le comptage basé sur des images d'objets qui parcourent un tronçon de comptage dans une direction prédéfinie | |
WO2009146756A1 (fr) | Dispositif de traitement d’image avec module d’étalonnage, procédé d’étalonnage et programme d’ordinateur | |
DE102014210820A1 (de) | Verfahren zum Nachweis von großen und Passagierfahrzeugen von festen Kameras | |
EP2174260A2 (fr) | Dispositif pour identifier et/ou classifier des modèles de mouvements dans une séquence d'images d'une scène de surveillance, procédé et programme informatique | |
WO2010028933A1 (fr) | Système de surveillance, procédé et programme informatique pour détecter et/ou suivre un objet à surveiller | |
DE102018118423A1 (de) | Systeme und verfahren zur verfolgung bewegter objekte in der videoüberwachung | |
EP2521070A2 (fr) | Procédé et système de détection d'une scène dynamique ou statique, de détermination d'événements bruts et de reconnaissance de surfaces libres dans une zone d'observation | |
WO2014012753A1 (fr) | Système de surveillance à zone de protection dépendante de la position, procédé de surveillance d'une zone à surveiller et programme informatique | |
DE112009003648T5 (de) | Verfahren und Vorrichtung zur Barrierentrennung | |
DE4332753A1 (de) | Verfahren zur Erkennung bewegter Objekte | |
WO2012110654A1 (fr) | Procédé pour analyser une pluralité d'images décalées dans le temps, dispositif pour analyser des images, système de contrôle | |
EP3614299A1 (fr) | Procédé et dispositif de détection des objets dans des installations | |
DE102007000449A1 (de) | Vorrichtung und Verfahren zur automatischen Zaehlung von Objekten auf beweglichem oder bewegtem Untergrund | |
EP2219155B1 (fr) | Appareil, procédé et programme d'ordinateur pour segmentation d'un objet dans une image, et système de vidéosurveillance | |
DE112022002520T5 (de) | Verfahren zur automatischen Kalibrierung von Kameras und Erstellung von Karten | |
DE102012200504A1 (de) | Analysevorrichtung zur Auswertung einer Überwachungsszene, Verfahren zur Analyse der Überwachungsszenen sowie Computerprogramm | |
EP4107654A1 (fr) | Procédé et dispositif pour générer des scénarios combinés | |
DE102021206625A1 (de) | Computerimplementiertes Verfahren und System zur Unterstützung einer Installation eines bildgebenden Sensors und Trainingsverfahren | |
DE102006027120A1 (de) | Bildverarbeitungsverfahren, Videoüberwachungssystem sowie Computerprogramm | |
DE102018101014B3 (de) | Verfahren zum Detektieren charakteristischer Merkmale eines Lichtmusters in einem mittels einer Fahrzeugkamera erfassten Abbild davon | |
EP3352111B1 (fr) | Procédé de détection d'événements critiques | |
DE102008057176B4 (de) | Automatisierbares 3D-Rekonstruktionsverfahren und Überwachungsvorrichtung | |
AT501882A1 (de) | Verfahren zum erkennen von gegenständen | |
DE102006039832B9 (de) | Verfahren und Vorrichtung zum Auffinden und Unterscheiden von Personen, Tieren oder Fahrzeugen mittels automatischer Bildüberwachung digitaler oder digitalisierter Bilder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08874527 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2008874527 Country of ref document: EP |