EP2300961A1 - Image processing device with calibration module, calibration method and computer program - Google Patents

Image processing device with calibration module, calibration method and computer program

Info

Publication number
EP2300961A1
EP2300961A1
Authority
EP
European Patent Office
Prior art keywords
detectors
surveillance
monitoring
area
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP08874527A
Other languages
German (de)
English (en)
Inventor
Marcel Merkel
Hartmut Loos
Jan Karl Warzelhan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of EP2300961A1
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects

Definitions

  • Image processing device with calibration module, method for calibration and computer program
  • The invention relates to an image processing device for a video surveillance system for monitoring a surveillance area which may contain at least one surveillance object and at least one masking geometry. The device has a detection module designed to detect one of the surveillance objects, the detection of the surveillance object being based on a set of subarea detectors which detect different subareas of the surveillance object. Upon detection of the surveillance object, the set of subarea detectors is divided into a positive set of subarea detectors, which have recognized their associated subarea of the surveillance object, and a negative set of subarea detectors, which have not recognized their associated subarea of the surveillance object.
  • The invention also relates to a method for calibration and to a corresponding computer program.
  • Video surveillance systems are often used to monitor public places or buildings, such as train stations, intersections, libraries, hospitals, etc., but also in private settings, for example for factory surveillance.
  • Such video surveillance systems typically include one or more surveillance cameras directed at the relevant surveillance areas.
  • In many embodiments, the video data generated by the surveillance cameras are brought together in a monitoring center and evaluated there, either manually by monitoring personnel or in an automated manner.
  • In order to relieve the monitoring personnel and to improve the quality of the surveillance, image processing methods have been proposed which evaluate the recorded video data automatically by means of digital image processing.
  • In common methods, moving surveillance objects are separated from the essentially static scene background, tracked over time, and an alarm is triggered for relevant movements.
  • However, the video data initially show only a two-dimensional representation of the surveillance areas, i.e. 2D images of the surveillance areas in the image plane of the surveillance cameras.
  • Tracking a surveillance object over time - its so-called trajectory - therefore does not, without further evaluation, allow any conclusion about the actual movement of the surveillance object in the surveillance area.
  • Another problem in this context results from the presence of further objects in the surveillance area, in particular stationary and/or quasi-stationary objects, which conceal the surveillance object partially and/or temporarily. Such concealments impede the detection and tracking of the surveillance objects.
  • DE 10 2006 027120 A1 proposes an image processing method for detecting and processing visual obstacles in a surveillance scene, in which a plurality of state data records of a surveillance object are acquired, each comprising an object position and a size of the surveillance object, i.e. its height, measured at that object position in an image of an image sequence. By comparing the measured size of the surveillance object with a modeled, perspective size of the surveillance object at the same object position, the presence of one of the visual obstacles is inferred. A toy sketch of this comparison follows below.
  • The image processing method relies on the surveillance object being assigned to a class, for example persons, for which an average or usual size is known; the modeled, perspective size is determined on the basis of this size.
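The comparison described for DE 10 2006 027120 A1 can be illustrated with a toy sketch in Python. The function name and the tolerance threshold of 0.8 are our own illustrative choices, not taken from the cited document:

    def occlusion_suspected(measured_height, modeled_height, tolerance=0.8):
        """True if the object appears clearly smaller than the modeled,
        perspective size predicts at this object position, which suggests
        a visual obstacle in front of it."""
        return measured_height < tolerance * modeled_height

    # A person modeled at 150 px but measured at only 80 px is suspect:
    print(occlusion_suspected(80, 150))   # True
    print(occlusion_suspected(140, 150))  # False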
  • In the scientific article by Sotelo et al. mentioned below, each subarea of a person is learned independently by a specific detector or classifier.
  • In a further step, the individual local body parts are assembled into the person as a whole. This scientific article is probably the closest prior art.
  • The invention relates to an image processing device which is suitable and/or designed as an additional module or as an integral part of a video surveillance system.
  • The video surveillance system preferably includes a plurality of surveillance cameras directed at relevant surveillance areas in a real 3D scene.
  • At least one surveillance object, but also several surveillance objects, and at least one masking geometry, or several, can be present in the surveillance area(s).
  • The masking geometry may take the form of a temporary, quasi-stationary or stationary geometry.
  • A masking geometry is preferably understood to mean any object which leads, or can lead, to an optical concealment of the surveillance object. It may take the form of, for example, a wall, a bench, a cabinet, a shelf, a staircase, etc.
  • The image processing device comprises a detection module which is designed, in terms of program code and/or circuitry, to detect one of the surveillance objects. The step of detecting preferably also comprises the segmentation and/or recognition and/or classification of the surveillance object.
  • The detection takes place on the basis of a set of subarea detectors, each of which detects a different subarea of the surveillance object.
  • The subarea detectors can, for example, be designed in analogy to the subarea detectors in the scientific article by Sotelo et al.
  • The subarea detectors are designed as classification devices which examine the image content of a search area, often in the form of a rectangular bounding box, for the presence of the subarea.
  • The detection module is designed, in terms of program code and/or circuitry, such that after the evaluation the set of subarea detectors is subdivided into a positive set and a negative set and optionally into a neutral set; a minimal sketch of this partitioning follows below.
  • Assigned to the positive set are subarea detectors which have positively recognized their corresponding subarea of the surveillance object.
  • The negative set comprises the subarea detectors for which the associated subarea of the surveillance object was not recognized. Subarea detectors which have not provided any statement, for whatever reason, are assigned to the optional neutral set.
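For illustration only, the partitioning could look as follows in Python; the patent does not prescribe any data structures, so Outcome, PartDetector and partition are hypothetical names:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable, List

    class Outcome(Enum):
        POSITIVE = 1  # associated subarea recognized
        NEGATIVE = 2  # associated subarea not recognized
        NEUTRAL = 3   # no usable statement, for whatever reason

    @dataclass
    class PartDetector:
        name: str                              # e.g. "head", "torso", "legs"
        classify: Callable[[object], Outcome]  # classifier over a search area

    def partition(detectors: List[PartDetector], image_region):
        """Apply every subarea detector to the image region and split the
        set into positive, negative and optional neutral subsets."""
        sets = {o: [] for o in Outcome}
        for det in detectors:
            sets[det.classify(image_region)].append(det)
        return sets[Outcome.POSITIVE], sets[Outcome.NEGATIVE], sets[Outcome.NEUTRAL]

    # Example with trivially constant classifiers:
    head = PartDetector("head", lambda region: Outcome.POSITIVE)
    legs = PartDetector("legs", lambda region: Outcome.NEGATIVE)
    pos, neg, _ = partition([head, legs], image_region=None)
    print([d.name for d in pos], [d.name for d in neg])  # ['head'] ['legs']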
  • The image processing device has a calibration module, implemented by circuitry and/or program code, which is designed to calibrate, in particular to generate and/or to update, a depth map and/or a masking geometry for the surveillance area on the basis of a detected surveillance object with a non-empty negative set.
  • The depth map is preferably formed as a relation between the spatial and/or 3D properties of the surveillance area and the image of the surveillance area captured by the video camera, i.e. the 2D image of the surveillance scene in the image plane of the video camera.
  • Each point of the 2D image of the surveillance scene is thereby assigned a real, three-dimensional object point in the surveillance area.
  • One advantage of the image processing device is its high significance: after a positive recognition of one subarea of the surveillance object together with a non-recognition of further subareas, the latter subareas must be concealed.
  • With regard to the depth map, there is the advantage that the depth map can also be calibrated in surveillance areas in which almost no floor surface of the surveillance area is visible.
  • The set of subarea detectors is specific and/or selective for one surveillance object type or genus.
  • Corresponding surveillance object types are, for example, persons, in particular adults or children, cars, motorcycles, shopping carts, etc. Because the set of subarea detectors is designed specifically or selectively for one surveillance object type, expert knowledge about that surveillance object type can be introduced into the interpretation of the detection results. For example, an average adult height of 1.80 m may be assumed.
  • The calibration module is designed so that, in the case of a non-empty negative set, a masking geometry is identified or suspected in the region of the subarea detectors of the negative set.
  • The subarea detectors of a common set have a defined spatial relation to one another, which is given by the surveillance object type. If, for example, a person is divided among several subarea detectors comprising a head detector, a body detector and a leg detector, then an assignment of the head detector and the body detector to the positive set and of the leg detector to the negative set allows a masking geometry to be inferred in the area of the leg detector, i.e. vertically below the head and/or the body.
  • The proposed image processing device thus preferably identifies or suspects a concealment line or edge, as an edge line of a masking geometry, between adjoining and/or adjacent and/or overlapping subarea detectors of the positive set and the negative set.
  • In the case of a surveillance object with a non-empty negative set, the surveillance object is extrapolated using the positive set, and a depth line or edge and/or a foot point is estimated.
  • For example, the person is extrapolated to full length by assuming an average person height of, for example, 1.80 m. In this way, the probable position of the person's foot area is inferred. The lower end of the presumed foot area is interpreted as the depth edge, i.e. the lower end of the vertical extent of the person. A minimal sketch of this extrapolation follows below.
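A minimal sketch of this extrapolation, assuming that image rows grow downward, that the head detector's bounding box is available from the positive set, and that an adult is roughly 7.5 head heights tall; the 7.5 ratio is our assumption, as the text itself only names the 1.80 m average height and a head-to-body relation:

    AVG_PERSON_HEIGHT_M = 1.80  # average height named in the text
    HEADS_PER_BODY = 7.5        # assumed anthropometric ratio (illustrative)

    def extrapolate_foot_point(head_box):
        """Estimate the concealed foot point (depth edge) of a partially
        occluded person from the positively detected head subarea.
        head_box: (x_left, y_top, width, height) in pixels."""
        x, y_top, w, h = head_box
        # Perspective correction: the apparent head height indicates how
        # large the whole person appears at this image position.
        person_pixel_height = h * HEADS_PER_BODY
        y_foot = y_top + person_pixel_height  # extrapolate toward the floor
        x_foot = x + w / 2.0                  # vertically below the head
        scale_px_per_m = person_pixel_height / AVG_PERSON_HEIGHT_M
        return (x_foot, y_foot), scale_px_per_m

    # A 40 px tall head starting at row 100 puts the presumed foot point at
    # row 100 + 40 * 7.5 = 400, at a scale of about 167 px per metre:
    print(extrapolate_foot_point((200, 100, 30, 40)))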
  • The extrapolation of the surveillance object is corrected by a perspective correction factor, obtained by a supplementary evaluation of the subarea detectors of the positive set.
  • The perspective correction factor is determined from a detected head size or via a relationship between the head and the body.
  • The surveillance object is extrapolated in a vertical direction of the surveillance area, facing the ground, in order to generate the depth edge.
  • The position information of the depth edge or of the foot point, which in the current situation of the surveillance area is concealed by a masking geometry, can be used to calibrate the depth map.
  • Another object of the invention relates to a method for calibration, in particular generation and/or updating, of a depth map and/or masking geometry in a surveillance area, preferably using the image processing device as described above, having the features of claim 8.
  • In the method, a set of subarea detectors is applied to a surveillance object, the subarea detectors detecting different subareas of the surveillance object, in particular independently of one another.
  • First, moving objects are segmented from the background, in particular the scene background, which is done, for example, by subtracting a current image from a scene reference image; one possible implementation of this step is sketched below.
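One possible implementation of this segmentation step, using OpenCV as an illustrative library choice; the text prescribes only the subtraction of a scene reference image, not a particular toolkit:

    import cv2

    def segment_moving_objects(current_gray, reference_gray,
                               thresh=30, min_area=200):
        """Subtract the scene reference image from the current image and
        return bounding boxes (x, y, w, h) of segmented moving objects.
        Both inputs are single-channel uint8 images of equal size."""
        diff = cv2.absdiff(current_gray, reference_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Small blobs are treated as noise; min_area is an assumed tuning value.
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]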
  • The set of subarea detectors, or one such set, is applied to each object found.
  • If only one surveillance object type occurs, a single set of subarea detectors suffices. For surveillance areas in which several surveillance object types can occur, several sets can optionally be used.
  • During detection, the set of subarea detectors is divided into at least a positive set and a negative set. If the negative set is not empty, that is, if not all subarea detectors recognize "their" subarea of the surveillance object, this indicates a masking at the position of the respective subarea detector. With regard to the example above with a person as surveillance object type, the lower subareas, for example the legs, are then typically the concealed ones.
  • The concealed subareas are suspected or identified as being hidden by a masking geometry.
  • In the method, the height, depth or vertical extent of the surveillance object is estimated by using expert knowledge about the dimensions of the concealed subareas.
  • The set of subarea detectors is designed to detect a person. It is particularly preferred that the set of subarea detectors has a cardinality of at least two, preferably with at least four subarea detectors arranged vertically one above the other with respect to the surveillance area.
  • The subarea detectors may adjoin, neighbor and/or overlap one another.
  • A final subject of the invention relates to a computer program with program code means having the features of claim 11.
  • FIG. 2 shows a schematic representation of a set of subarea detectors on a surveillance object;
  • FIG. 3 shows a schematic application example of the set of subarea detectors;
  • FIG. 4 is a block diagram illustrating an apparatus according to the invention.
  • FIGS. 1a and 1b show a schematic video image of a surveillance area 1, which in the example shown is a store scene.
  • A moving surveillance object in the form of a person 2 is shown, which according to FIG. 1a is arranged half hidden behind a shelf 3 and according to FIG. 1b stands uncovered next to the shelf 3.
  • A foot point 4 of the moving surveillance object 2 represents the free end of the vertical extent of the surveillance object 2 in the direction of the ground.
  • The determination of the foot point 4 is implemented by circumscribing the moving surveillance object 2 with a so-called bounding box 5 and interpreting the center of the bottom edge 6 of the bounding box 5 as the foot point 4; a minimal sketch follows below.
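Expressed as a minimal sketch, assuming the usual convention that (x, y) is the top-left corner of the bounding box and image rows grow downward:

    def foot_point(bounding_box):
        """Center of the bottom edge 6 of the bounding box 5, interpreted
        as the foot point 4 of the surveillance object."""
        x, y, w, h = bounding_box
        return (x + w / 2.0, y + h)

    print(foot_point((50, 20, 40, 160)))  # (70.0, 180.0)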
  • Other surveillance objects 2 can also be detected by this procedure.
  • The bounding box 5 is determined by known image processing algorithms: for example, in a first step the moving surveillance object is separated from the essentially static background and segmented. The segmented regions are subsequently circumscribed with search rectangles, or bounding boxes 5, and the content of the search rectangles is verified as the sought surveillance objects 2 by means of classification devices.
  • A correct foot point 4, which represents the vertical extent of the surveillance object 2 in the direction of the floor, could be determined in FIG. 1b. In FIG. 1a, however, the foot point 4 is not the actual foot point but a point located in the middle of the body of the surveillance object 2. Inaccurate foot points 4 such as the one in FIG. 1a may lead to misinterpretations of the events in the surveillance area 1.
  • FIG. 2 shows a modified approach for detecting the surveillance object 2 as an exemplary embodiment of the invention: instead of one detector covering the entire surveillance object 2, subarea detectors 7a to 7e are used, each of which covers only a specific subarea of the surveillance object 2.
  • The head detector 7a is designed for the detection of the head,
  • the shoulder detector 7b for the detection of the shoulder area,
  • the upper body detector 7c for the detection of the upper body,
  • the center body detector 7d for the detection of the hip area, and
  • the foot detector 7e for the detection of the feet/legs.
  • The subarea detectors 7a to 7e are arranged partially overlapping (compare subarea detectors 7a to 7c), partially adjoining (see subarea detectors 7c to 7e) and partially spaced apart (not shown), and have a defined and/or content-based relative arrangement to one another.
  • The subarea detectors 7a to 7e are arranged, for example, such that the head detector 7a cannot lie between the center body detector 7d and the foot detector 7e.
  • FIG. 3 shows, in a schematic representation, the set of subarea detectors 7a to 7e applied to the surveillance object 2 in the surveillance area 1 of FIG. 1a. Owing to the masking by the shelf 3, only the subarea detectors 7a to 7c positively detect their associated subareas; the subarea detectors 7d and 7e cannot recognize their assigned subareas.
  • The subarea detectors 7a to 7c are assigned to a positive set 8, the subarea detectors 7d and 7e to a negative set 9.
  • At the transition between the positive set 8 and the negative set 9, a concealment edge 10 can be identified or at least suspected; a sketch of this estimate follows below.
  • The concealment edge 10 describes an edge line of a masking geometry, here the shelf 3.
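A sketch of this estimate, assuming each subarea detector reports its vertical image extent together with its positive or negative outcome; the tuple layout is our own illustration:

    def estimate_concealment_edge(detector_rows):
        """Place the suspected concealment edge between the lowest
        positively recognized subarea and the highest unrecognized one.
        detector_rows: list of (name, y_top, y_bottom, recognized)."""
        pos_bottom = max(b for (_, _, b, ok) in detector_rows if ok)
        neg_top = min(t for (_, t, _, ok) in detector_rows if not ok)
        return 0.5 * (pos_bottom + neg_top)  # image row of the edge line

    rows = [("head", 100, 140, True), ("torso", 135, 240, True),
            ("hips", 240, 300, False), ("legs", 300, 400, False)]
    print(estimate_concealment_edge(rows))  # edge suspected near row 240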
  • The surveillance object 2 can be extrapolated or estimated in the direction of the foot point 4 by deducing, from the properties of the subarea detectors 7a to 7c of the positive set 8, the properties of the concealed subarea detectors 7d and 7e of the negative set 9.
  • Possible information-rich properties are: the relative arrangement of the subarea detectors 7a to 7c of the positive set 8; the length and/or height of the individual subarea detectors 7a to 7c of the positive set 8; detailed information on the subareas of the surveillance object 2 enclosed by the subarea detectors 7a to 7c of the positive set 8; an estimate of the depth position of one, some or all of the subarea detectors 7a to 7c of the positive set; etc.
  • The knowledge of the concealment edge 10 can be used to supplement a collection of the masking geometries in the surveillance area 1; the foot point 4 can be used to calibrate a depth map, that is, a relation between the course of a horizontal plane in the surveillance area 1 and the two-dimensional image of the surveillance area 1. One possible calibration scheme is sketched after this paragraph.
  • FIG. 4 shows a schematic block diagram of a video surveillance system 11 which is connected to a plurality of surveillance cameras 12 that are designed and/or arranged to monitor one or more surveillance areas 1.
  • The image data recorded by the surveillance cameras 12 are forwarded to an image processing device 13, which is designed to calibrate masking geometries and/or a depth map.
  • Moving surveillance objects 2 are segmented from the background of the surveillance area or areas 1.
  • The segmentation is carried out, for example, by subtracting the current surveillance image from a scene reference image.
  • The surveillance objects 2 segmented in this step are each verified in a detection module 15 by a set of subarea detectors 7a to 7e. If only persons can or should be detected as surveillance objects 2 in the surveillance area 1, a single set of subarea detectors 7a to 7e specific to this surveillance object type is sufficient. In the case of more complex surveillance areas 1, different sets of subarea detectors may also be used, each set being specifically or selectively tuned to one surveillance object type, for example a first set for persons, a second set for bicycles, a third set for cars, a fourth set for a further object type, and so on.
  • At least one set of subarea detectors 7a to 7e is applied to one, some or each segmented surveillance object 2, and the subarea detectors 7a to 7e are divided into a positive set 8 or a negative set 9 depending on the result, for example of a classification device of the respective subarea detector 7a to 7e.
  • The data of the detected and partially concealed surveillance object 2 are transferred from the detection module 15 to a first estimation module 16 for the masking geometry and to a second estimation module 17 for the foot point.
  • In the first estimation module 16, a concealment edge 10 is derived on the basis of the transitions or boundary positions of the subarea detectors of the positive set 8 and the negative set 9. This derived concealment edge 10 is transferred to a database 18 which collects information about masking geometries, thereby allowing a modeling of the surveillance area 1 with respect to the masking geometries.
  • The second estimation module 17 extrapolates the surveillance object 2 as described above and passes the estimated foot point 4, together with the further information about the surveillance object 2, to a database 19 which contains information about a depth map, that is, about the depth extent of the monitored surveillance area 1.
  • Both databases 18 and 19 can thus, during monitoring operation, optionally be calibrated for the first time, supplemented or continuously updated.
  • The proposed device and the proposed method thus allow the analysis of partial occlusions in the surveillance area 1.
  • The scene geometry modeled in this improved way can be used in object tracking to make the automatic video surveillance system 11 more efficient.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

In order to relieve the monitoring personnel and to improve the quality of surveillance, image processing methods have been proposed which evaluate the recorded video data by means of digital image processing. In common methods, moving surveillance objects are separated from the essentially static background, tracked over time, and an alarm is triggered for relevant movements. However, the video data initially show only a two-dimensional representation of the surveillance areas. Tracking a surveillance object over time - also called its trajectory - therefore does not, without further evaluation, allow any conclusion about the actual movement of the surveillance object in the surveillance area. The invention relates to an image processing device 13 for a video surveillance system for monitoring a surveillance area 1, which may contain at least one surveillance object 2 and at least one masking geometry 3, with a detection module 15 designed to detect a surveillance object 2, the detection of the surveillance object 2 taking place on the basis of a set of subarea detectors 7a-e which detect the different subareas of the surveillance object 2, and the set of subarea detectors 7a-e being subdivided, upon detection of the surveillance object 2, into a positive set 8 of subarea detectors which have recognized the associated subarea of the surveillance object 2 and into a negative set 9 of subarea detectors which have not recognized the associated subarea of the surveillance object 2. The image processing device also comprises a calibration module 16, 17 which is designed to calibrate, on the basis of a detected surveillance object 2 with a non-empty negative set 9, a depth map and/or a masking geometry for the surveillance area 1.
EP08874527A 2008-06-06 2008-11-14 Image processing device with calibration module, calibration method and computer program Withdrawn EP2300961A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102008002275A DE102008002275A1 (de) 2008-06-06 Image processing device with calibration module, method for calibration and computer program
PCT/EP2008/065529 WO2009146756A1 (fr) 2008-06-06 2008-11-14 Image processing device with calibration module, calibration method and computer program

Publications (1)

Publication Number Publication Date
EP2300961A1 (fr) 2011-03-30

Family

ID=40688430

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08874527A 2008-06-06 2008-11-14 Image processing device with calibration module, calibration method and computer program

Country Status (3)

Country Link
EP (1) EP2300961A1 (fr)
DE (1) DE102008002275A1 (fr)
WO (1) WO2009146756A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130063556A1 (en) * 2011-09-08 2013-03-14 Prism Skylabs, Inc. Extracting depth information from video from a single camera
WO2015086855A1 (fr) * 2013-12-14 2015-06-18 Viacam Sarl Camera-based tracking system for the determination of physical, physiological and/or biometric data and for risk assessment
FR3015730B1 (fr) * 2013-12-20 2017-07-21 Thales Sa Method for detecting persons and/or objects in a space
CN110991485B (zh) * 2019-11-07 2023-04-14 成都傅立叶电子科技有限公司 Performance evaluation method and system for an object detection algorithm
CN113534136B (zh) * 2020-04-22 2023-07-28 宇通客车股份有限公司 Method and system for detecting a child left behind in a vehicle
CN115002110B (zh) * 2022-05-20 2023-11-03 深圳市云帆自动化技术有限公司 Unmanned data transmission system for offshore platforms based on multi-protocol conversion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006027120A1 (de) 2006-06-12 2007-12-13 Image processing method, video surveillance system and computer program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
VINAY D SHET ET AL: "Bilattice-based Logical Reasoning for Human Detection", CVPR '07. IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION; 18-23 JUNE 2007; MINNEAPOLIS, MN, USA, IEEE, PISCATAWAY, NJ, USA, 1 June 2007 (2007-06-01), pages 1 - 8, XP031114390, ISBN: 978-1-4244-1179-5 *

Also Published As

Publication number Publication date
WO2009146756A1 (fr) 2009-12-10
DE102008002275A1 (de) 2009-12-10

Similar Documents

Publication Publication Date Title
EP2297701B1 Video analysis
DE69523698T2 Method and apparatus for direction-selective counting of moving objects
EP2386092B1 Device, method and computer for image-based counting of objects passing through a counting section in a predefined direction
EP2300961A1 Image processing device with calibration module, calibration method and computer program
DE102014210820A1 Method for detecting large and passenger vehicles from fixed cameras
WO2009003793A2 Device for identifying and/or classifying movement patterns in an image sequence of a surveillance scene, method and computer program
WO2010028933A1 Monitoring system, method and computer program for detecting and/or tracking a surveillance object
EP2521070A2 Method and system for detecting a dynamic or a static scene, for determining raw events and for recognizing free areas in an observation area
WO2014012753A1 Monitoring system with a position-dependent protection area, method for monitoring an area to be monitored, and computer program
DE112009003648T5 Method and device for barrier separation
DE4332753A1 Method for detecting moving objects
WO2012110654A1 Method for analyzing a plurality of time-offset images, device for analyzing images, and monitoring system
DE102005006989A1 Method for monitoring a surveillance area
EP2219155B1 Apparatus, method and computer program for segmenting an object in an image, and video surveillance system
DE112022002520T5 Method for automatic calibration of cameras and creation of maps
DE102012200504A1 Analysis device for evaluating a surveillance scene, method for analyzing surveillance scenes, and computer program
WO2021165129A1 Method and device for generating combined scenarios
DE102006027120A1 Image processing method, video surveillance system and computer program
DE102018101014B3 Method for detecting characteristic features of a light pattern in an image thereof captured by a vehicle camera
EP3352111B1 Method for detecting critical events
DE102008057176B4 Automatable 3D reconstruction method and monitoring device
AT501882A1 Method for recognizing objects
DE102006039832B9 Method and device for locating and distinguishing persons, animals or vehicles by means of automatic image monitoring of digital or digitized images
DE102019210518A1 Method for recognizing an object in sensor data, driver assistance system and computer program
AT524640B1 Method for determining pedestrian frequency in a geographic area

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110107

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20170206

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20190601