EP1087351A1 - Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images - Google Patents

Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images

Info

Publication number
EP1087351A1
Authority
EP
European Patent Office
Prior art keywords
image
blob
region
monitoring
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99810863A
Other languages
German (de)
English (en)
Inventor
Raffaella Mattone
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ascom Systec AG
Original Assignee
Ascom Systec AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ascom Systec AG filed Critical Ascom Systec AG
Priority to EP99810863A priority Critical patent/EP1087351A1/fr
Publication of EP1087351A1 publication Critical patent/EP1087351A1/fr
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation using image scanning and comparing systems using television cameras
    • G08B13/19602 - Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation using image scanning and comparing systems using television cameras
    • G08B13/19678 - User interface
    • G08B13/1968 - Interfaces for setting up or customising the system
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation using image scanning and comparing systems using television cameras
    • G08B13/19678 - User interface
    • G08B13/19691 - Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound

Definitions

  • The invention relates to a method for classifying a monitoring situation within a surveillance area with regard to its normality, based on an image sequence comprising at least one image of this situation.
  • In conventional systems, the images captured by video cameras are displayed on a monitor, which is watched by a guard.
  • To keep the cost of surveillance as low as possible, several cameras are usually switched to one or a few common monitors. Switching from the images of one camera to those of another takes place in predetermined cycles or selectively, for example when a movement is detected.
  • A method for automatic video surveillance is known from WO98/56182, in which the individual images are analyzed and a statistical "normal value" is formed from a variety of such images. By comparing a current image with this statistical "normal value", extraordinary situations are recognized without it having to be determined from the outset what is to be considered extraordinary in a particular monitoring situation.
  • Alarm situations can often be detected on the basis of the movement within the surveillance area. If the movement in the surveillance area is taken into account insufficiently or not at all, alarm situations cannot be recognized, or only inaccurately. As a result, the switching between the images of the different cameras occurs either too often or too rarely: in the first case the guard soon ignores the switching, in the second case alarm situations are overlooked.
  • The object of the invention is to provide a method of the type mentioned at the beginning with which, in particular, a reliable classification of situations in the surveillance area into ordinary and extraordinary situations can be performed.
  • According to the invention, monitoring situations within a surveillance area are classified with regard to their normality on the basis of an image sequence which includes at least one image of the monitoring situation. From the image sequence, together with information about a predetermined or calculated segmentation of the surveillance area into at least two segments, a feature vector is generated. The image sequence is classified on the basis of this feature vector. Taking the feature vector into account, the segmentation of the surveillance area, or of the images of the surveillance area, is recalculated, and subsequent image sequences are processed with the newly calculated information.
  • The method according to the invention enables simple and efficient monitoring of any surveillance area, for example a bank foyer with several counters and/or ATMs, in that extraordinary situations are more reliably recognized, classified and signaled to a guard. It is flexible in several respects: on the one hand the classification criteria are not fixed in advance but determined on the basis of real situations, and thus adapt to changing conditions; on the other hand it is implemented in a modular way, so that it can easily be adapted to changing monitoring situations.
  • From the image sequence, one or more feature images are preferably calculated with any number of image processing modules.
  • A feature image is calculated with regard to a certain classification criterion, for example "location of the registered movements", "total movement in a certain image region" or "presence/absence of edges in an image".
  • From a feature image, a so-called image vector is then generated with any number of feature collection modules; it comprises a plurality of components, each component corresponding to a characteristic value of the selected feature.
  • When forming the image vectors, the image vectors of other feature collection modules can also be taken into account.
  • Several feature images can be used to generate one image vector, and/or one feature image can be taken into account when generating multiple image vectors.
  • The image vectors of the various feature collection modules are assembled into the feature vector. In the case that only one feature collection module is available, the feature vector is identical to the image vector of this module.
  • Before classification, a plurality of sub-vectors is typically extracted from the feature vector; these are then classified individually, with one classification module each, every classification module determining its own classification result (see the sketch below). It is, however, also possible that only one sub-vector is extracted from the feature vector, containing all components of the feature vector, i.e. that the feature vector as a whole is classified with a single classification module.
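
A minimal Python sketch of this step, assuming NumPy arrays for the vectors and simple threshold functions as stand-in classification modules; all names, index sets and thresholds here are illustrative, not taken from the patent:

    import numpy as np

    def classify_subvectors(feature_vector, subvector_indices, modules):
        """Split the feature vector into sub-vectors and classify each with
        its own module; one boolean result per module (True = extraordinary)."""
        results = []
        for indices, module in zip(subvector_indices, modules):
            sub = feature_vector[indices]   # extract one sub-vector
            results.append(module(sub))     # one classification result
        return results

    # Hypothetical setup: two modules over a 5-component feature vector.
    feature_vector = np.array([0.2, 3.1, 0.0, 1.4, 0.7])
    subvector_indices = [np.array([0, 1]), np.array([2, 3, 4])]
    modules = [
        lambda v: bool(np.any(v > 2.0)),  # any component above its limit
        lambda v: bool(v.sum() > 3.0),    # summed movement above a limit
    ]
    print(classify_subvectors(feature_vector, subvector_indices, modules))
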
  • If several classification modules are used, i.e. multiple classification results are determined, these results are combined with one another in a suitable manner in order to classify the image sequence or the corresponding monitoring situation.
  • This combination follows a set of rules which are based on existing knowledge.
  • A feature image is determined by comparing at least two images of the image sequence with one another.
  • The result of this comparison consists of a large number of individual movement points, which are first subjected to a data reduction in which a plurality of movement points is combined into so-called blobs according to specified criteria. This process is called blobbing. After processing multiple images, one thus obtains a "map" of the surveillance area with a variety of blobs, distributed within the monitored area according to the detected movements. In order to describe a blob, a kind of center is determined from the associated movement points, i.e. its position and its weight are calculated.
  • The segmentation is preferably carried out by dividing the surveillance area into at least two regions.
  • The blobs are divided with a clustering algorithm into a plurality of clusters, each with a different, more or less homogeneous movement density. This clustering enables a simpler classification of the examined image sequence into ordinary and extraordinary situations, because extraordinary situations are recognized not by assessing the overall situation but by examining a single region, or a few regions, at a time.
  • The number of components of the image vector of such a calculated feature image is equal to the number of regions that result from clustering.
  • A component is calculated as the sum of the weights of all blobs in a region.
  • Preferably, a difference between the two images is calculated. Every image consists of a large number of pixels, which depends e.g. on the resolution of the video camera. With a horizontal resolution of x pixels and a vertical resolution of y pixels, a total of x*y pixels results, each pixel being assigned a gray value corresponding to the brightness of the imaged point.
  • The individual pixels of the difference image are calculated by forming the absolute difference between the gray values of the corresponding pixels of the two images. If this difference is other than zero, the corresponding pixel of the difference image is considered a movement point.
  • Preferably, a pixel is only considered a movement point if the amount of the difference is greater than a predetermined threshold, as sketched below.
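
The difference-image computation can be sketched in a few lines of Python with NumPy; the threshold value and image sizes are arbitrary examples, and threshold=0 reproduces the "other than zero" variant:

    import numpy as np

    def movement_points(bm1, bm2, threshold=10):
        """Absolute gray-value difference of two images; a pixel is a
        movement point where the difference exceeds the threshold."""
        dm = np.abs(bm1.astype(np.int16) - bm2.astype(np.int16))
        return dm > threshold   # boolean feature image

    # Two hypothetical 8-bit gray-value images of the same scene.
    rng = np.random.default_rng(0)
    img1 = rng.integers(0, 200, size=(240, 320), dtype=np.uint8)
    img2 = img1.copy()
    img2[100:120, 150:180] += 40              # simulate movement in one patch
    print(movement_points(img1, img2).sum())  # count of movement points
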
  • The criteria according to which the individual movement points are assigned to a blob are derived from the distance of a movement point to the blob.
  • A movement point is assigned to that blob which already comprises at least one movement point from which the new movement point is at most n pixels away. If n is 1, this means for example:
  • A blob includes all movement points which are direct neighbors of a movement point already assigned to the blob.
  • Each blob thus comprises a plurality of movement points, from which its position and its weight are determined.
  • The position of the blob is preferably calculated as the centroid of the movement points covered by the blob, and the weight of the blob as the size of a rectangle, the rectangle being formed by horizontal and vertical tangents to the blob, i.e. by the intersection of those rows and columns of the image that contain at least one movement point of this blob.
  • The weight of the blob is thus measured on the outline, or on the surface, and not on the number of movement points. In the latter case, with the same amount of movement, a checkered sweater would have an immensely larger weight than a plain-colored sweater, because it causes movement points not only at the edges of the sweater but also over its entire surface. Such an unequal weighting is, however, not desirable (see the sketch below).
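
The following sketch computes position and weight as described: the centroid of the movement points, and the area of the rectangle spanned by the blob's extremal rows and columns. The example illustrates the point about the sweaters: a checkered and a plain patch of equal extent receive the same weight.

    import numpy as np

    def blob_position_and_weight(points):
        """points: (k, 2) array of (row, col) movement points of one blob.
        Position = centroid; weight = area of the bounding rectangle."""
        pts = np.asarray(points)
        position = pts.mean(axis=0)
        height = pts[:, 0].max() - pts[:, 0].min() + 1
        width = pts[:, 1].max() - pts[:, 1].min() + 1
        return position, height * width

    # Checkered patch: movement points over the whole surface.
    checkered = [(r, c) for r in range(10) for c in range(10) if (r + c) % 2 == 0]
    # Plain patch: movement points only at the corners.
    plain = [(0, 0), (0, 9), (9, 0), (9, 9)]
    print(blob_position_and_weight(checkered)[1])  # 100
    print(blob_position_and_weight(plain)[1])      # 100, same weight
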
  • Clustering is a crucial process for the further processing and evaluation of the data.
  • The aim of clustering is to identify regions of the surveillance area, or of the difference image, within which the detected movements, i.e. the blobs, are roughly similar. Ideally these regions correspond to real subareas of the surveillance area, for example regions like "next to the window", "in front of the cash withdrawal machine", "at the door", etc.
  • Within each region the movement density, i.e. the blob density, is roughly homogeneous, each region having a specific movement density.
  • The result of clustering is a division of the surveillance area into a number N of regions, where N is greater than or equal to one.
  • The regions are formed by first determining a node and a standard deviation for each region. Each blob is then assigned to exactly one region according to specified criteria, and after assigning a blob to a region, both the node and the standard deviation of this region are redetermined. The redetermination of both parameters depends on the blob, i.e. its position and its weight, on the previous node, the previous standard deviation and the nodes of the other regions.
  • The criterion for assigning a blob to a specific region is a pseudo-distance between the blob and the node of a region.
  • The pseudo-distance is advantageously calculated as a function of the weight of the blob, the geometric position of node and blob, i.e. the geometric distance of the blob from the node, as well as the standard deviation of the region. Before the assignment, the pseudo-distance from a blob to the node of each region is thus first calculated, and the blob is then assigned to the region to whose node it has the smallest pseudo-distance (see the sketch below).
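
The patent names the ingredients of the pseudo-distance (blob weight, geometric distance, standard deviation) but gives no formula. The sketch below therefore uses one plausible choice, a per-axis distance normalized by the region's standard deviation and attenuated for heavy blobs; the formula and all numbers are assumptions for illustration only.

    import numpy as np

    def pseudo_distance(blob_pos, blob_weight, node_pos, sigma):
        """Illustrative pseudo-distance: sigma-normalized distance,
        reduced for heavier blobs (assumed formula, not the patent's)."""
        d = np.abs(np.asarray(blob_pos) - np.asarray(node_pos)) / np.asarray(sigma)
        return float(np.linalg.norm(d) / np.sqrt(blob_weight))

    def assign_blob(blob_pos, blob_weight, nodes, sigmas):
        """Index of the winner region = smallest pseudo-distance."""
        dists = [pseudo_distance(blob_pos, blob_weight, n, s)
                 for n, s in zip(nodes, sigmas)]
        return int(np.argmin(dists)), dists

    nodes = [np.array([50.0, 60.0]), np.array([200.0, 120.0])]
    sigmas = [np.array([15.0, 15.0]), np.array([30.0, 20.0])]
    winner, dists = assign_blob([70.0, 65.0], 120.0, nodes, sigmas)
    print(winner, [round(d, 3) for d in dists])
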
  • Another preferred embodiment of the invention divides the method into a training phase and a monitoring phase, where the two phases can overlap or alternate several times.
  • The evaluation of the feature vector, or of a sub-vector thereof, is carried out with one or more classification modules.
  • The evaluation is carried out either with the absolute values of the corresponding components or with values that are calculated relative to other component values. Depending on the classification of the underlying image sequence, this image sequence is signaled as extraordinary if necessary. For example, it is switched to a screen watched by a guard so that he can judge the corresponding situation, or the guard's attention is directed to the assessed situation by any other means.
  • A sub-vector of the feature vector is classified which is based on the pseudo-distances of a blob.
  • This feature could be, e.g., "movement outside the normal area of activity".
  • In the monitoring phase, a corresponding classification module determines whether the shortest pseudo-distance of a blob, i.e. the pseudo-distance to the node of the region to which it was assigned, exceeds a specified maximum distance. If this is the case, the corresponding image sequence is classified as extraordinary and, depending on the further processing of the individual classification results, also signaled as an alarm situation if this is provided for.
  • Another advantageous classification relates to a feature "total movement quantity in a region".
  • The corresponding feature collection module generates an image vector for each difference image during the training phase.
  • The value of a component is calculated as the sum of the weights of all blobs in a region.
  • With the corresponding classification module, the associated sub-vector of each resulting feature vector is analyzed, and a limit is determined for each component of this sub-vector. This limit is determined for each component (i.e. for each region) in such a way that a certain percentage of the previously calculated values of this component is smaller than or equal to the limit. With these limits, another preferred classification can be realized.
  • With a classification module "total movement quantity in a single region", an image sequence is classified as extraordinary if a component of the resulting feature vector exceeds the limit of this component.
  • For each cross product of two components of the feature vector, a limit, here called a product limit, is likewise determined. Analogous to the determination of the limits for a single component, this limit for two components (i.e. for two regions) is determined in such a way that a predetermined percentage of all previously calculated products of the two components is less than or equal to the product limit.
  • With another classification module, "total movement quantity in two regions at the same time", an image sequence is classified as extraordinary if the product of two components of the resulting feature vector exceeds the corresponding product limit (see the sketch below).
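
Both limit types follow the same percentile rule, which makes them easy to sketch; the 95% figure and the gamma-distributed training data are arbitrary stand-ins, since the patent leaves the percentage open:

    import numpy as np

    def component_limits(train, percentage=95):
        """Per-component limit: the given percentage of training values
        of that component is <= the limit."""
        return np.percentile(train, percentage, axis=0)

    def product_limits(train, percentage=95):
        """Product limit for every component pair (i, j): same rule
        applied to the products F_i * F_j over the training data."""
        n = train.shape[1]
        lim = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                lim[i, j] = np.percentile(train[:, i] * train[:, j], percentage)
        return lim

    def is_extraordinary(f, comp_lim, prod_lim):
        """Extraordinary if any component or any pairwise product
        exceeds its limit."""
        if np.any(f > comp_lim):
            return True
        n = len(f)
        return any(f[i] * f[j] > prod_lim[i, j]
                   for i in range(n) for j in range(i + 1, n))

    rng = np.random.default_rng(1)
    train = rng.gamma(2.0, 10.0, size=(500, 3))  # hypothetical training vectors
    cl, pl = component_limits(train), product_limits(train)
    print(is_extraordinary(np.array([5.0, 8.0, 6.0]), cl, pl))    # likely False
    print(is_extraordinary(np.array([90.0, 90.0, 6.0]), cl, pl))  # likely True
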
  • A further classification option would be a module "static objects", in which the image sequence is classified on the basis of a feature "static objects in the surveillance area", for example based on the time period during which an object is recognized as "present" or "not present". This feature could be combined, e.g., with the already mentioned feature collection module "presence/absence of edges in an image".
  • The method according to the invention makes it possible to simplify the recognition of extraordinary situations in a mostly complex surveillance area, where correlated or uncorrelated movement often takes place simultaneously at different locations.
  • The method according to the invention is used, for example, for monitoring publicly accessible areas such as ticket halls or passenger areas in public means of transport. It is particularly suitable for monitoring a bank foyer.
  • A monitoring arrangement for carrying out the method according to the invention includes at least one image recording device, such as a video camera, a classifying device with an image input and a classification output, as well as an alarm device.
  • With a video camera connected to the image input of the classifying device, pictures of the surveillance area are taken at arbitrary or regular intervals and passed on to the classifying device.
  • Figure 1 outlines the general structure of a monitoring system 1 according to the invention. It consists of three subsystems: an image/feature subsystem 2, a classification subsystem 3 and a segmentation subsystem 4.
  • The monitoring system 1 has an image input 5 and a classification output 6. Via the image input 5, the monitoring system 1 receives an image sequence which, for example, has been recorded by a video camera.
  • In the image/feature subsystem 2, the image sequence is processed with several image processing modules 7.1 to 7.3, and at least one feature image is created for each image processing module 7.1 to 7.3.
  • From the feature images, the desired features of the image sequence are then extracted with a number of feature collection modules 8.1 to 8.3, and an image vector is formed from each. Finally, the individual image vectors are assembled into a feature vector 9, which is output at the output of the image/feature subsystem 2. It serves the other two subsystems as input.
  • The feature vector 9 is used by the segmentation subsystem 4 to segment the surveillance area. The segmentation into a plurality of clusters, i.e. regions, is created from the extracted features with a clustering algorithm.
  • The information about the segmentation of the surveillance area is provided to the image/feature subsystem 2 as an additional input. This subsystem takes this information into account when generating the feature images and the feature vector 9.
  • In the classification subsystem 3, the feature vector 9 is decomposed into one or several sub-vectors; the sub-vectors may, but need not, match the image vectors of the image/feature subsystem 2. Every sub-vector is evaluated with its own classification module 10.1, 10.2, 10.3, and each classification module 10.1, 10.2, 10.3 produces a separate classification result.
  • A link unit 11 connects the classification results of the individual sub-vectors of the feature vector 9 to the classification output 6, the output signal of the classification subsystem 3.
  • The link unit 11 can consist of a kind of AND or OR gate, or the classification results of the classification modules 10.1, 10.2, 10.3 are linked together according to complex rules based on knowledge of the surveillance system (see the sketch below).
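
A small sketch of the link unit, with an OR/AND gate and, alternatively, an explicit rule table; the rule pattern shown is invented purely to illustrate knowledge-based linking:

    def link_unit(results, mode="or", rules=None):
        """Combine per-module results (True = extraordinary). 'or'/'and'
        mimic simple gates; a rule table maps result patterns directly."""
        if rules is not None:
            return rules.get(tuple(results), False)
        return any(results) if mode == "or" else all(results)

    # Hypothetical knowledge: module 0 alone suffices; modules 1 and 2
    # only count when they fire together.
    rules = {pattern: True
             for pattern in [(True, False, False), (True, True, False),
                             (True, False, True), (True, True, True),
                             (False, True, True)]}
    print(link_unit([False, True, True], rules=rules))   # True -> alarm
    print(link_unit([False, True, False], rules=rules))  # False -> no alarm
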
  • Figure 2 shows in a block diagram the process flow using the example of the feature "total movement quantity".
  • A video camera delivers the image sequence 12 to be processed, which in each case consists of two images.
  • Each image in the sequence has an optical resolution of n1*n2 pixels, and the colors are rendered as gray values of a certain resolution.
  • With motion detection 13, a feature image is determined from two successive images of the image sequence 12.
  • The feature image is represented as an n1*n2 image matrix.
  • For each individual image, the gray values of the pixels are stored in an n1*n2 image matrix BM1 or BM2, and then a difference matrix DM of the two image matrices is formed.
  • The elements of DM are calculated as the difference between the corresponding elements of BM1 and BM2.
  • The feature image, i.e. the feature image matrix, is determined by entering a one everywhere this difference is greater than zero, or greater than a certain threshold, and a zero everywhere else.
  • The threshold is either fixed or adapted depending on the image information.
  • The matrix elements with a value of one are called movement points, because they mark all points of the surveillance area, i.e. pixels of the corresponding image, where the gray value has changed between the two image acquisition times.
  • With blobbing 14, the individual movement points of the feature image are combined into so-called blobs.
  • All movement points belong to a certain blob which are direct neighbors of another movement point that has already been qualified as "belonging to this blob".
  • A blob is a coherent area of movement points (or matrix elements of the feature image matrix), which can also have holes, i.e. single or multiple pixels (or matrix elements) with a value of zero.
  • The defining properties of a blob are its position and its weight. The position of a blob is determined as the centroid of the movement points belonging to it.
  • The blobbing 14 provides a blob list 15 as a result, i.e. a list of all blobs with their respective position and weight (see the sketch below).
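
Blobbing can be sketched as a flood fill over the feature image; "direct neighbors" is read here as 8-connectivity (an assumption), and the general n-pixel criterion is obtained by widening the neighborhood:

    import numpy as np
    from collections import deque

    def blobbing(feature_image, n=1):
        """Group movement points (True pixels) into blobs; a point joins a
        blob if it is at most n pixels (Chebyshev) from a point already in
        it. Returns a blob list of (position, weight) pairs."""
        visited = np.zeros_like(feature_image, dtype=bool)
        rows, cols = feature_image.shape
        blob_list = []
        for r0 in range(rows):
            for c0 in range(cols):
                if not feature_image[r0, c0] or visited[r0, c0]:
                    continue
                queue, points = deque([(r0, c0)]), []
                visited[r0, c0] = True
                while queue:            # flood fill over the n-neighborhood
                    r, c = queue.popleft()
                    points.append((r, c))
                    for dr in range(-n, n + 1):
                        for dc in range(-n, n + 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < rows and 0 <= cc < cols
                                    and feature_image[rr, cc]
                                    and not visited[rr, cc]):
                                visited[rr, cc] = True
                                queue.append((rr, cc))
                pts = np.array(points)
                position = pts.mean(axis=0)          # centroid
                weight = ((pts[:, 0].max() - pts[:, 0].min() + 1)
                          * (pts[:, 1].max() - pts[:, 1].min() + 1))
                blob_list.append((position, weight))
        return blob_list

    fi = np.zeros((20, 20), dtype=bool)
    fi[2:5, 2:5] = True                 # one compact blob
    fi[10, 10] = fi[12, 12] = True      # two points 2 pixels apart
    print(len(blobbing(fi, n=1)))       # 3 blobs: the patch + 2 singles
    print(len(blobbing(fi, n=2)))       # 2 blobs: the two points merge
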
  • With clustering 16, the individual blobs are combined into so-called clusters.
  • The surveillance area is thus segmented into a number of regions with an inner coherence.
  • The regions identified in this way have, e.g., a homogeneous movement density and often correspond to real image areas such as an area "next to the window" or "in front of the cash machine".
  • Clustering 16 is carried out using an incremental method. The starting point of this method consists of two regions, each with a node and a standard deviation. The nodes as well as the standard deviations are freely selectable and are chosen on the basis of certain prior knowledge or simply at random. After blobbing 14, the blobs are taken into account individually for clustering. A blob is initially assigned to the region to whose node, the so-called winner node, it has the smallest pseudo-distance. The pseudo-distance between a node and a blob is, however, not to be understood geometrically as a distance but as a distance measure, since its calculation takes into account not only this distance but also the weight of the blob and the value of the standard deviation.
  • The blob is called an error blob, and two further error values are introduced, or adapted if they already exist:
  • on the one hand, for each region, a local error with a value and a position, and on the other hand a global error, of which only the value counts. If the value of the global error becomes too large, a new region is introduced whose node lies at the location of the local error and whose standard deviation is determined using the error blobs.
  • The blobs of a plurality of difference images are now taken into account for clustering one after another, each assigned to a region, and thus form the different regions.
  • The clustering 16 is explained in more detail below with reference to FIG. 3.
  • Two regions 26 and 27 which have already been identified are shown in a two-dimensional vector space, each with a node 20 and 21, respectively, the node 20 of the region 26 being designated as S.
  • The local error 22 of region 26 is also shown. It is located at a kind of centroid of all previous error blobs of this region 26.
  • (σx, σy) correspond to the standard deviations in the x and y directions of the pseudo-distances of this region 26.
  • The blob 23 is assigned to the region to whose node it has the smallest pseudo-distance. In this case this is region 26, with node 20 as the winner node.
  • The blob has the second-smallest pseudo-distance to node 21, which is why a topological connection 29 is introduced between the two nodes 20 and 21.
  • The node 20 and the variance σ of the region 26, the winning region, are then recalculated.
  • The value G_W of the global error is checked. If it is larger than a given bound, a new node is introduced at the position of the local error with the greatest value.
  • The bound for the global error is determined, for example, as a function of the assumed Gaussian distribution of the blobs around the nodes.
  • A topological connection can be renewed by resetting its "age", i.e. the period of time since its introduction, to zero. This is done with a view to a further measure in which those topological connections whose age is higher than a predefined maximum age are deleted. Finally, nodes that have no topological connections to another node are also deleted. The whole update cycle is sketched below.
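
The clustering loop resembles an incremental, growing topology-learning scheme. The skeleton below follows the steps named in the text (winner and second winner, topological connection with age reset, node/sigma update, local and global errors, insertion of a new region); all learning rates, bounds and the pseudo-distance formula are illustrative assumptions:

    import numpy as np

    class Region:
        def __init__(self, node, sigma):
            self.node = np.asarray(node, dtype=float)
            self.sigma = np.asarray(sigma, dtype=float)
            self.local_error = 0.0
            self.error_pos = self.node.copy()   # "center" of the error blobs

    def pseudo_distance(pos, weight, region):
        # Same illustrative formula as in the earlier sketch.
        d = np.abs(pos - region.node) / region.sigma
        return float(np.linalg.norm(d) / np.sqrt(weight))

    class IncrementalClustering:
        def __init__(self, regions, lr=0.1, max_age=50, global_bound=5.0):
            self.regions = regions
            self.edges = {}   # (i, j) -> age of the topological connection
            self.lr, self.max_age, self.global_bound = lr, max_age, global_bound
            self.global_error = 0.0

        def present_blob(self, pos, weight):
            pos = np.asarray(pos, dtype=float)
            dists = [pseudo_distance(pos, weight, r) for r in self.regions]
            order = np.argsort(dists)
            w, s = int(order[0]), int(order[1])   # winner, second winner
            edge = tuple(sorted((w, s)))
            self.edges[edge] = 0                  # (re)new connection, age 0
            for e in list(self.edges):            # age and prune the others
                if e != edge:
                    self.edges[e] += 1
                    if self.edges[e] > self.max_age:
                        del self.edges[e]
            win = self.regions[w]                 # update the winning region
            win.node += self.lr * (pos - win.node)
            win.sigma += self.lr * (np.abs(pos - win.node) - win.sigma)
            win.local_error += dists[w]           # accumulate errors
            win.error_pos = 0.5 * (win.error_pos + pos)
            self.global_error += dists[w]
            # error-driven insertion of a new region (node deletion for
            # connection-less nodes is omitted for brevity)
            if self.global_error > self.global_bound:
                worst = max(self.regions, key=lambda r: r.local_error)
                self.regions.append(Region(worst.error_pos.copy(),
                                           worst.sigma.copy()))
                self.global_error = 0.0
            return w

    regions = [Region([50, 60], [15, 15]), Region([200, 120], [30, 20])]
    clu = IncrementalClustering(regions)
    rng = np.random.default_rng(2)
    for _ in range(200):
        center = [50, 60] if rng.random() < 0.5 else [200, 120]
        clu.present_blob(rng.normal(center, 10), weight=rng.uniform(10, 200))
    print(len(clu.regions), [r.node.round() for r in clu.regions])
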
  • The information about the identified regions, with their nodes and standard deviations as well as the associated blobs, is processed further as a topology map in a next step, feature extraction 17.
  • For each region, the movement quantity, i.e. the total weight of all blobs in this region, is calculated separately.
  • An image vector F of length N, F = (F_1, F_2, F_3, ..., F_N), is formed therefrom by assigning the movement quantity of a region to one of the N components of F (see the sketch below).
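
Forming F is a simple accumulation of blob weights per region, sketched here with a hypothetical blob list and the assignments produced by clustering:

    import numpy as np

    def image_vector(blob_list, assignments, n_regions):
        """F_i = movement quantity of region i = summed weight of all
        blobs assigned to region i."""
        f = np.zeros(n_regions)
        for (position, weight), region in zip(blob_list, assignments):
            f[region] += weight
        return f

    blobs = [((12.0, 30.0), 120), ((14.0, 28.0), 60), ((200.0, 90.0), 300)]
    assignments = [0, 0, 1]         # region index per blob, from clustering
    print(image_vector(blobs, assignments, n_regions=2))   # [180. 300.]
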
  • In this example the image sequence is classified only with respect to this one feature, which is why this image vector also corresponds to the feature vector 9.
  • This feature vector 9 is then classified, i.e. it is checked whether the image sequence 12 corresponding to the feature vector 9 shows a normal or an extraordinary situation in the surveillance area.
  • The curve shape in turn can be used to make a statement about the correlation of F_1 and F_2, i.e. the correlation of movement within the two regions.
  • The N-dimensional surface area p(F) can be approximated by the convex envelope of the projections of p(F) onto the coordinate surfaces. In the present example this means: assuming that the vectors (F_1, F_2, 0), (F_1, 0, F_3) and (0, F_2, F_3) all indicate a proper situation (in the coordinate surfaces), then the vector (F_1, F_2, F_3) also indicates a proper situation in the 3-dimensional space (see the sketch below).
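
This projection argument can be turned into a tiny classifier: judge the full vector normal exactly when every two-dimensional projection is judged normal by its per-plane rule. The per-plane rule below reuses the product limits from earlier as a stand-in; the limit values are invented:

    from itertools import combinations
    import numpy as np

    def normal_by_projections(f, plane_ok):
        """Normal iff every projection (F_i, F_j) onto a coordinate plane
        is judged normal by the per-plane classifier plane_ok."""
        return all(plane_ok(i, j, f[i], f[j])
                   for i, j in combinations(range(len(f)), 2))

    product_limit = {(0, 1): 50.0, (0, 2): 40.0, (1, 2): 60.0}  # invented
    plane_ok = lambda i, j, a, b: a * b <= product_limit[(i, j)]
    print(normal_by_projections(np.array([5.0, 8.0, 4.0]), plane_ok))   # True
    print(normal_by_projections(np.array([10.0, 8.0, 4.0]), plane_ok))  # False
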
  • A starting value for a limit can be the first calculated value of a component of a feature vector, or of a cross product of the corresponding components.
  • The classifier just described can be used whenever the feature to be classified is a vector with non-negative components and there are only two classes into which the vector can be divided: one near the origin of the feature space and one for the rest of the feature space.
  • The blob list 15 can also be processed directly in the feature extraction step 17. It is also possible that the blob list 15 is processed further both with the clustering step 16 and directly with the feature extraction step 17. As a result, an image sequence 12 can on the one hand be recognized as ordinary or extraordinary from the resulting blobs, and at the same time the blobs of this image sequence 12 can be used to adapt the topology map.
  • The present method makes it possible to distinguish extraordinary from ordinary situations in a surveillance area on the basis of motion information, by first creating a plurality of mathematical models of the surveillance area, each model being valid only for part of the surveillance area, and then analyzing and assessing the situations separately for each model or for a pair of models.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
EP99810863A 1999-09-24 1999-09-24 Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images Withdrawn EP1087351A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP99810863A EP1087351A1 (fr) 1999-09-24 1999-09-24 Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP99810863A EP1087351A1 (fr) 1999-09-24 1999-09-24 Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images

Publications (1)

Publication Number Publication Date
EP1087351A1 2001-03-28

Family

ID=8243044

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99810863A Withdrawn EP1087351A1 (fr) 1999-09-24 1999-09-24 Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images

Country Status (1)

Country Link
EP (1) EP1087351A1 (fr)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0564858A2 (fr) * 1992-04-06 1993-10-13 Siemens Aktiengesellschaft Méthode pour la résolution de groupes de segments

Similar Documents

Publication Publication Date Title
EP3466239B1 (fr) Procédé de fonctionnement d'un engin agricole automatique
EP3044760B1 (fr) Procédé d'analyse de la distribution d'objets dans des files d'attente libres
DE60224324T2 (de) Verfahren und Vorrichtung zur Verarbeitung von Fahrzeugbildern
DE102019134141A1 (de) Sicherheitsgurtzustandsbestimmungssystem und verfahren
DE102015104954A1 (de) Sichtbasiertes Überwachungssystem zur Validierung von Aktivitätssequenzen
DE102015206178A1 (de) Ein Videoverfolgungsbasiertes Verfahren zur automatischen Reihung von Fahrzeugen in Drivethrough-Anwendungen
DE102014106211A1 (de) Sichtbasierte Mehrkamera-Fabriküberwachung mit dynamischer Integritätsbewertung
DE102014106210A1 (de) Probabilistische Personennachführung unter Verwendung der Mehr- Ansichts-Vereinigung
DE69738287T2 (de) Verfahren zum Anzeigen eines sich bewegenden Objekts, dessen Bahn zu identifizieren ist, Anzeigesystem unter Verwendung dieses Verfahrens und Programmaufzeichnungsmedium dafür
EP2220588B1 (fr) Module de configuration pour un système de surveillance, système de surveillance, procédé de configuration dudit système de surveillance et programme informatique
DE19634768A1 (de) Vorrichtung und Verfahren zur Erfassung eines Gesichts in einem Videobild
DE112017007246T5 (de) Bildanalysevorrichtung, bildanalyseverfahren und bildanalyseprogramm
EP1531342B1 (fr) Procédé de détection des piétons
DE102017215079A1 (de) Erfassen von Verkehrsteilnehmern auf einem Verkehrsweg
DE102011011931A1 (de) Verfahren zum Auswerten einer Mehrzahl zeitlich versetzter Bilder, Vorrichtung zur Auswertung von Bildern, Überwachungssystem
DE69821225T2 (de) Verfahren zur kontrolle der oberfläche einer laufenden materialbahn mit vorklassifikation von ermittelten unregelmässigkeiten
DE102009026091A1 (de) Verfahren und System zur Überwachung eines dreidimensionalen Raumbereichs mit mehreren Kameras
WO2006105949A2 (fr) Procede pour determiner l'occupation d'un espace
EP2359308B1 (fr) Dispositif de production et/ou de traitement d'une signature d'objet, dispositif de contrôle, procédé et produit-programme
DE10049366A1 (de) Verfahren zum Überwachen eines Sicherheitsbereichs und entsprechendes System
EP1087351A1 (fr) Procédé de classification d'une situation de surveillance à l'aide d'une séquence d'images
WO2021180547A1 (fr) Procédé et dispositif de traitement d'images
DE102006027120A1 (de) Bildverarbeitungsverfahren, Videoüberwachungssystem sowie Computerprogramm
DE10046859A1 (de) System zur Blickrichtungsdetektion aus Bilddaten
EP1087326A1 (fr) Méthode de création des grappes

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20010818

AKX Designation fees paid

Free format text: AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050401