WO2010139495A1 - Method and device for classifying situations - Google Patents

Method and device for classifying situations

Info

Publication number
WO2010139495A1
WO2010139495A1 PCT/EP2010/054031 EP2010054031W
Authority
WO
WIPO (PCT)
Prior art keywords
image
situation
optical
segments
pixels
Prior art date
Application number
PCT/EP2010/054031
Other languages
German (de)
English (en)
Inventor
Christopher Gaudig
Georg Von Wichert
Original Assignee
Siemens Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft
Publication of WO2010139495A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present invention relates to a method for detecting situations and a situation monitoring device, for example, to classify image-monitored scenes.
  • Dynamic situations in which many moving objects occur are often monitored in a computer-assisted manner on the basis of sensor data. Here it is desirable to classify the scenes, which are monitored for example via a camera, into specific situation classes in order to initiate further measures.
  • Automated approaches are usually based on a microscopic analysis of the spatio-temporal relations of image data: first, certain key patterns, such as persons, are recognized, and then, for example, a trajectory on the image plane is determined for them.
  • The following steps are provided: acquiring an image sequence of images with image pixels, grouping the image pixels into image segments, calculating an optical flow for a plurality of image segments, and classifying the image sequence into a situation class depending on a temporal course of the optical flows of the image segments.
  • In the proposed method, the optical flow is calculated in several segments of the images as a macroscopic recognition variable.
  • An optical flow is generally understood as a vector field indicating the direction of movement and the speed for each pixel of an image sequence.
  • The calculated optical flow corresponds to the velocity vectors of the imaged objects projected onto the image plane.
  • Various methods are known for calculating or estimating the optical flow. As a rule, the brightness patterns around a respective pixel are considered. In the present method, for example, one optical flow is calculated per image segment.
  • Image pixels are grouped into segments or blocks of pixels, for each of which a respective optical flow is determined, for example as a direction vector and a magnitude. Compared with conventional methods, in which a microscopic detection of image components must first be carried out, only a significantly lower computing power is therefore required.
  • The classification now takes place as a function of the time profile of the determined optical flows for the image segments. For example, operating phases of the monitored scene can be used as situation classes.
  • a situation model for the time course of the optical flows is determined.
  • the traffic light phases at an intersection or other traffic systems can also be captured by a respective situation model. It is also possible to record fluent vehicle traffic on any traffic route and classify it using situation models.
  • the most suitable situation model for a particular situation class with regard to the temporal courses of optical flows then provides the highest probability for the presence of the respective class.
  • In one embodiment, the situation model is a hidden Markov model, in which the hidden states are linked to the statistics of the combinations of optical flows of the image segments.
  • The optical flow of a respective image segment is calculated as the mean value over the optical flows of the image pixels associated with that image segment.
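  • As an illustration only, the per-segment averaging might be sketched as follows, assuming OpenCV's dense Farneback flow estimator and the regular 2x4 segment grid of FIG. 1 (both assumptions; the patent does not prescribe a specific flow algorithm or grid):

```python
# Hedged sketch: one mean optical-flow vector per grid segment.
# The Farneback estimator is an assumed choice, not the patent's method.
import cv2
import numpy as np

def segment_flows(prev_gray, curr_gray, rows=2, cols=4):
    """Return a (rows*cols, 2) array with the mean (dx, dy) per segment."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = flow.shape[:2]
    vectors = []
    for r in range(rows):
        for c in range(cols):
            # Average the dense flow over one rectangular segment.
            block = flow[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            vectors.append(block.reshape(-1, 2).mean(axis=0))
    return np.array(vectors)
```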
  • Logical or grammatical links between situation classes can also be taken into account; for example, it can be required that situation classes occur only in predetermined sequences. It is impossible, for instance, for the situation class in which people get off a train at a platform to immediately follow the situation class for the train's departure from the platform. By using such logical or grammatical links, the classification into situation classes can be made more reliable.
  • The temporal course of the optical flows is calculated, for example, over a running, specifiable time interval. For example, a time interval of a few seconds around the current time may be used for calculating or taking into account the optical flows.
  • The optical flow can also be determined at predetermined times, that is to say in a clocked manner.
  • Via the clock density with which the predefined times are set, the accuracy and thus also the computing load of a corresponding computer-implemented algorithm can be adjusted.
  • Preferably, situations with moving objects, which cause particularly strong fluctuations in the optical flow of the individual segments, are classified.
  • It is not necessary for an optical flow to be calculated for each individual image pixel. Furthermore, the pixels associated with an image segment can be assigned such that contiguous image regions correspond to a single segment. It is also possible for an image segment to comprise image pixels which form non-contiguous image areas.
  • The image segments are preferably determined as a function of static image contents of the images. For example, the image area representing a street, which does not change over time, can be considered as one image segment.
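  • For such content-defined or non-contiguous segments, the averaging can equally be driven by a label mask, as in the sketch below, which continues the sketch above; the integer mask (e.g. one label marking the static street region) is a hypothetical input:

```python
# Hedged sketch: average the flow over arbitrary, possibly non-contiguous
# pixel sets given by an integer label mask of the same height and width.
def labeled_segment_flows(flow, label_mask):
    """flow: (h, w, 2) array; label_mask: (h, w) int array.
    Returns a dict mapping each label to its mean (dx, dy) flow."""
    return {int(label): flow[label_mask == label].mean(axis=0)
            for label in np.unique(label_mask)}
```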
  • the method can also be made more reliable if a constant image section of a situation is detected as a sequence of images.
  • In principle, moving cameras can also be used, for example to monitor public squares.
  • the invention further relates to a computer program product which causes the execution of a corresponding method on a program-controlled computer or control device.
  • A program-controlled computer or control device in question is, for example, a PC or a computer of a control room for the control and regulation of installations, on which the appropriate software is installed.
  • The computer program product may, for example, be implemented in the form of a data carrier such as a USB stick, floppy disk, CD-ROM or DVD, or provided on a server device as a downloadable program file.
  • Also provided is a situation monitoring device which comprises an image acquisition device for capturing image sequences of images with image pixels, and a processing platform which is set up in such a way that a method as described above is carried out.
  • Computer systems, computers or PCs or dedicated microprocessors can be used as the processing platform.
  • Digital video cameras can be used as the image acquisition device.
  • the processing platform preferably has a flow calculation module for calculating optical flows and a recognition module for performing a pattern recognition algorithm.
  • the modules may be implemented as computer routines or functions.
  • In certain embodiments, the pattern recognition algorithm is based on hidden Markov models used for situation modeling.
  • The situation monitoring device can furthermore have a situation model database for providing situation models for the situation classes.
  • A simple possibility for a macroscopic recognition of situation classes is thus provided, without having to resort to microscopic analyses of the sensor data, that is to say the image data. In this case, no recognition of the participating entities or imaged objects in the respective dynamic situation is necessary.
  • By means of the situation models, state sequences can be represented. With the aid of image acquisition sensors or cameras, these macroscopic features, i.e. the optical flows, can be incorporated into situation models for the state sequences.
  • Traffic monitoring situations, for example at a junction with traffic light phases, at a railway platform, in a factory, at assembly-line stations, or other situations in which essentially moving objects are observed, come into consideration as applications.
  • Figure 1 is a schematic representation of an image with image pixels;
  • Figure 2A is a schematic representation of an image corresponding to a situation;
  • Figure 2B is a schematic representation of a segmented image;
  • Figure 2C is a schematic representation of optical flow vectors for segments of an image;
  • Figures 2D, 2E are schematic representations of images with situations;
  • Figure 3 shows optical flow vectors calculated at predetermined times;
  • Figure 4 is a block diagram of an exemplary situation monitoring device;
  • Figure 5 is a schematic representation of a process flow of the recognition method.
  • FIG. 1 schematically shows an image with pixels and a screened division.
  • the image area 1 is supplied, for example, via a digital camera which records a situation.
  • Pixel areas 10 and individual pixels 11-18 are indicated.
  • the pixels cover the entire image area 1.
  • image areas are combined into segments which, in the example of FIG. 1 with eight segments 2-9, cover the image area 1 in a regular grid. It is possible to view divisions 2-9 as individual image segments or blocks of pixels.
  • the contiguous pixels 10 form the segment 2, for example. Not all pixels of the segment 2 are explicitly shown.
  • The pixels 11, 12, 16-18 may then, for example, belong to a single segment. It is thus not necessary for the pixels associated with an image segment to cover contiguous image areas.
  • the pixels 11 and 18 then belong to the combined segment 3 and 9, but are separated by image areas located in the segments 4 and 8.
  • FIG. 2A shows a schematic representation of an image that corresponds to the situation at a subway platform.
  • the image 1 shows on the left side a subway train 19, which has opened doors and is at a platform edge 20.
  • persons 22, 23, 24 are indicated.
  • Segments 2-7 are defined, which cover the image 1.
  • the boundary between the segments 2, 4 and 6 with the segments 3, 5 and 7 is defined by a line along the platform edge 20.
  • In the segments 2, 4 and 6, the entering or moving subway train 19 is present; in the segments 3, 5 and 7, the platform surface, the platform edge and persons standing on the platform are visible.
  • the person 24 is, for example, in the segment 5.
  • the individual picture elements captured in the image 1 are not recognized or subjected to a pattern recognition. Rather, the proposed recognition method provides for calculating the optical flows in the segments 2-7 and for recognizing or classifying the situation from their time course.
  • The operating phases on the platform are chosen as the situation classes. Possible operating phases are the waiting phase, in which passengers wait for a train, the phase in which the train enters the station, the alighting phase, in which persons step from the train onto the platform, the boarding phase, in which persons board the train, and the departure phase of the train. Depending on the operating phase, different flow patterns result.
  • In FIG. 2C, each segment of the image 1 is assigned an optical flow vector F2-F7, calculated from the image data.
  • The captured video image is thus mapped onto a flow image 1'.
  • One way to calculate the optical flow is to consider the time course of the intensity of the pixels I(x, y, t), where x and y are coordinates in the image plane, t is the time, and I is an intensity measure.
  • the intensity may be a brightness related to an average brightness of the image.
  • Known methods include, for example, the Lucas-Kanade method, the Horn-Schunck method, the Buxton-Buxton method and the Black-Jepson method.
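  • These methods share the brightness-constancy assumption mentioned above; written out as a standard formulation (not quoted from the patent), the optical flow constraint and the Lucas-Kanade least-squares solution read:

```latex
% Brightness constancy and the resulting optical-flow constraint equation:
I(x + u\,\Delta t,\; y + v\,\Delta t,\; t + \Delta t) \approx I(x, y, t)
\quad\Longrightarrow\quad I_x u + I_y v + I_t = 0 .
% Lucas-Kanade resolves the under-determined constraint by least squares
% over a local window W of pixels p_1, ..., p_n:
(u, v)^{\top} = (A^{\top} A)^{-1} A^{\top} b, \qquad
A = \begin{pmatrix} I_x(p_1) & I_y(p_1) \\ \vdots & \vdots \\ I_x(p_n) & I_y(p_n) \end{pmatrix},
\quad b = -\begin{pmatrix} I_t(p_1) \\ \vdots \\ I_t(p_n) \end{pmatrix},
\quad p_i \in W .
```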
  • Optical flow vectors or optical flows are determined continuously, or for example in a clocked manner, for the image data contained in a video image stream. Depending on the detected situation or operating phase, different temporal courses of the optical flows result in the present example.
  • FIG. 3 shows a time profile at different times t1-t8 of an optical flow vector F4.
  • the arrow F4 could for example be assigned to the segment 4.
  • The resulting optical flow reflects the movement of the image pixels of the entering and departing subway train 19, as shown in FIGS. 2A, 2D and 2E.
  • This results in an optical flow vector varying over time, as is indicated schematically in FIG. 3.
  • FIG. 2D shows an operating phase in which a subway train 19 enters the platform.
  • the persons 23, 24, 25 are waiting on the platform 21.
  • Figure 2E shows a situation in which the train has entered the platform and people are getting on and off.
  • The image thus shows, for example, a person 22 who wears a light coat and waits to board the train 19.
  • The person 23 moves away from the door of the subway train 19. All of these time-varying image pixels lead to different optical flows.
  • the classification into the respective operating phase now takes place as a function of a created situation model for the temporal sequences of the optical flows. For example, hidden Markov models are used.
  • A hidden Markov model (HMM) is conventionally parameterized as λ = (A, B, π), where A denotes the state transition probabilities, B the emission probabilities of the hidden states, and π the initial state distribution.
  • Each HMM corresponds to a situation class, so that by selecting the highest probability, the recorded sequence of optical flows can be assigned to a class or a situation. The term situation model is also used for the respective HMM corresponding to a situation class.
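  • A minimal sketch of this maximum-likelihood classification follows, assuming the third-party hmmlearn package and feature vectors that stack the per-segment flow components (both assumptions; the patent names no library or feature layout):

```python
# Hedged sketch: one GaussianHMM per situation class, trained on annotated
# sequences of per-segment flow vectors; classification picks the model
# with the highest log-likelihood. hmmlearn is an assumed dependency.
import numpy as np
from hmmlearn import hmm

def train_situation_models(sequences_per_class, n_states=4):
    """sequences_per_class: dict mapping class name -> list of (T, D) arrays."""
    models = {}
    for name, seqs in sequences_per_class.items():
        X = np.concatenate(seqs)            # stack all training sequences
        lengths = [len(s) for s in seqs]    # tell hmmlearn where each ends
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, observed):
    """Assign the observed (T, D) flow sequence to the best-scoring class."""
    scores = {name: m.score(observed) for name, m in models.items()}
    return max(scores, key=scores.get), scores
```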
  • a situation monitoring device implementing a corresponding detection method is shown as a block diagram in FIG.
  • A situation recognition system 33 is connected, for example, to a video surveillance camera 26 which supplies video image data C1.
  • FIG. 4 shows a user or operator 32 who supplies annotation data C3 to an annotation module 30 for training purposes of the corresponding situation models.
  • The situation recognition system 33 includes, for example, a computer-implemented optical flow calculation module 27, an annotation module 30 for manually pre-classifying optical flow data C5, and a model generation module 31 for generating situation models, such as HMM models, for the optical flows; these models are stored in the model database 29, as indicated by the arrow C6.
  • A recognition module 28 is provided which, on the basis of the optical flows C2 derived from the image data C1 and an access C7 to the model database 29, performs a classification and outputs the result C8.
  • For training, known data are used, for example image data C1 which represent a situation as shown in FIG. 2D, that is to say an operating phase in which a train enters at a platform.
  • the user 32 annotates this optical flow data C5 and gives a manual classification C3.
  • several training sequences with image data are manually annotated and used as input data for the actual model generation, for example with an HMM algorithm 31.
  • Each situation model then corresponds to a situation class and maps the characteristic temporal course of the features.
  • The features provided here are the optical flow vectors for the segments.
  • an HMM model is available in the model database 29 for each situation class.
  • the recognition module 28 carries out a comparison of the modeled time sequences of the optical flows that are present in the model database 29 with the actually calculated online optical flows C2.
  • For situation classification, the currently most appropriate model is selected by means of a maximum likelihood method.
  • For this purpose, the probability is calculated that the observed time profile of the optical flows, for example over a running time interval, can be explained by the respective situation model.
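  • Continuing the hmmlearn sketch above, the running time interval could be evaluated as follows; the window length of 50 observations is purely illustrative:

```python
# Hedged sketch: normalized per-class probabilities over a sliding window,
# roughly corresponding to the output probabilities R1-R5 of FIG. 5.
def window_probabilities(models, flow_history, window=50):
    recent = np.asarray(flow_history[-window:])  # running time interval
    log_l = {name: m.score(recent) for name, m in models.items()}
    peak = max(log_l.values())
    weights = {n: np.exp(v - peak) for n, v in log_l.items()}  # stable softmax
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}
```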
  • FIG. 5 can also be understood as a process or process flow diagram.
  • As input, image data or sequences of images are used.
  • a determination of the optical flow vectors is made by the flow calculation module 27 of FIG. 4.
  • The recognition module 28 now calculates probabilities for the existence of a particular situation class.
  • The recognition module 28 thus provides, as a result C8 of the classification method, that the present image data correspond to a train departure from a platform. The situation has thus been recognized and classified.
  • The recognition module can indicate via an output unit, for example, with which probability a respective situation class exists, i.e. an output probability R1-R5, as indicated in FIG. 5.
  • Using a grammar, it can be determined, for example, that a train entry must always be followed by the operating phases "boarding" and "getting off" before an operating phase "departure" of the train may take place. Deviations from a correspondingly modeled sequence can thus be detected. If, because of a malfunction, a train passes through the platform without people being able to get on or off, this is automatically detected by the situation monitoring device implementing a corresponding method. The operating personnel, notified for example via an alarm, can then initiate suitable measures.
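  • Such a grammar can be reduced to an allowed-predecessor table, as the toy sketch below illustrates; the phase names and the transition table are hypothetical, not taken from the patent:

```python
# Toy sketch of the operating-phase grammar: each phase may only follow
# the listed predecessors; violations could trigger an operator alarm.
ALLOWED_PREDECESSORS = {
    "train entry": {"waiting"},
    "getting off": {"train entry", "boarding"},
    "boarding":    {"train entry", "getting off"},
    "departure":   {"boarding", "getting off"},
    "waiting":     {"departure"},
}

def grammar_violations(phases):
    """Yield (prev, curr) transitions that the grammar does not permit."""
    for prev, curr in zip(phases, phases[1:]):
        if prev not in ALLOWED_PREDECESSORS.get(curr, set()):
            yield prev, curr

# Example: a train passing straight through without boarding is flagged.
alarms = list(grammar_violations(["waiting", "train entry", "departure"]))
```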
  • The method or the method steps mentioned can each be implemented by the devices or modules 26-31. It is clear to the person skilled in the art which parts of the method are assigned to which device.
  • the flow calculation module may execute an algorithm for determining an optical flow or quantities derived therefrom.
  • the devices are configured such that the respective method steps or sub-processes are implemented and carried out.
  • the functional blocks 27-31 of FIG. 4 should also be understood as program modules or routines, or else as dedicated devices which serve to implement the respective function during the situation recognition process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

In a method for recognizing situations, pixels of an image sequence (1) are grouped into image segments (2-7), and optical flows (F2-F7) are calculated for a plurality of image segments (2-9). The classification of the image sequence into a situation class takes place as a function of the temporal course of the calculated optical flows (F2-F7). The optical flows are thereby used as macroscopic quantities in pattern recognition, without having to apply detailed recognition algorithms to the individual images (1). A situation monitoring device (32) is equipped with an image acquisition device (26) for producing image sequences made of pixels (10-18), and with a processing platform (33) which carries out a corresponding situation recognition method. The proposed situation recognition can in particular be used for traffic monitoring or, in factories, for monitoring the production process. Owing to the extraction of macroscopic features and their processing in the form of optical flows, a robust and computationally economical implementation can be achieved.
PCT/EP2010/054031 2009-06-05 2010-03-26 Method and device for classifying situations WO2010139495A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102009024066A DE102009024066A1 (de) 2009-06-05 2009-06-05 Verfahren und Vorrichtung zum Klassifizieren von Situationen
DE102009024066.7 2009-06-05

Publications (1)

Publication Number Publication Date
WO2010139495A1 true WO2010139495A1 (fr) 2010-12-09

Family

ID=42315331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/054031 WO2010139495A1 (fr) 2009-06-05 2010-03-26 Procédé et dispositif pour le classement de situations

Country Status (2)

Country Link
DE (1) DE102009024066A1 (fr)
WO (1) WO2010139495A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3499473A1 (fr) * 2017-12-15 2019-06-19 Siemens Mobility GmbH Détection automatisée de situations dangereuses

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019204751A1 (de) * 2019-04-03 2020-10-08 Thyssenkrupp Ag Verfahren und Einrichtung zum automatisierbaren Betrieb einer Materialgewinnungsanlage an der Abbaufront einer Materialgewinnungsstätte
BE1027207B1 (de) 2019-04-03 2020-11-23 Thyssenkrupp Ind Solutions Ag Verfahren und Einrichtung zum automatisierbaren Betrieb einer Materialgewinnungsanlage an der Abbaufront einer Materialgewinnungsstätte
BE1027160B1 (de) 2019-04-03 2020-11-03 Thyssenkrupp Ind Solutions Ag Verfahren und Einrichtung zum Betrieb von insbesondere im Tagebau einsetzbaren Abraum- und Fördermaschinen
DE102019204752A1 (de) * 2019-04-03 2020-03-26 Thyssenkrupp Ag Verfahren und Einrichtung zum Betrieb von insbesondere im Tagebau einsetzbaren Abraum- und Fördermaschinen
DE102021206618A1 (de) 2021-06-25 2022-12-29 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren zur Erhöhung der Erkennungsgenauigkeit eines Überwachungssystems

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ANDRADE E L ET AL: "Hidden Markov models for optical flow analysis in crowds", 2006 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION 20-24 SEPT. 2006 HONG KONG, CHINA, 2006 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION IEEE COMPUT. SOC LOS ALAMITOS, CA, USA LNKD- DOI:10.1109/ICPR.2006.621, vol. 1, 20 September 2006 (2006-09-20), pages 460 - 463, XP008103779, ISBN: 978-0-7695-2521-1 *
ANDRADE ET AL: "Modelling Crowd Scenes for Event Detection", 2006 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION 20-24 SEPT. 2006 HONG KONG, CHINA, 2006 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION IEEE COMPUT. SOC LOS ALAMITOS, CA, USA LNKD- DOI:10.1109/ICPR.2006.806, 1 January 2006 (2006-01-01), pages 175 - 178, XP031001413, ISBN: 978-0-7695-2521-1 *
BRAND M ET AL: "DISCOVERY AND SEGMENTATION OF ACTIVITIES IN VIDEO", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US LNKD- DOI:10.1109/34.868685, vol. 22, no. 8, 1 August 2000 (2000-08-01), pages 844 - 851, XP008056696, ISSN: 0162-8828 *
HU W ET AL: "A Survey on Visual Surveillance of Object Motion and Behaviors", IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: PART C:APPLICATIONS AND REVIEWS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US LNKD- DOI:10.1109/TSMCC.2004.829274, vol. 34, no. 3, 1 August 2004 (2004-08-01), pages 334 - 352, XP011114887, ISSN: 1094-6977 *
KETTNAKER V ET AL: "Minimum-entropy models of scene activity", PROCEEDINGS OF THE 1999 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, JUNE 23-25, 1999; FORT COLLINS, COLORADO, IEEE, THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, INC, US, vol. 1, 23 June 1999 (1999-06-23), pages 281 - 286, XP010347682, ISBN: 978-0-7695-0149-9 *
OLIVER N M ET AL: "A BAYESIAN COMPUTER VISION SYSTEM FOR MODELING HUMAN INTERACTIONS", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US LNKD- DOI:10.1109/34.868684, vol. 22, no. 8, 1 August 2000 (2000-08-01), pages 831 - 843, XP000976489, ISSN: 0162-8828 *
PORIKLI F ET AL: "Traffic congestion estimation using HMM models without vehicle tracking", INTELLIGENT VEHICLES SYMPOSIUM, 2004 IEEE PARMA, ITALY JUNE 14-17, 2004, PISCATAWAY, NJ, USA,IEEE LNKD- DOI:10.1109/IVS.2004.1336379, 14 June 2004 (2004-06-14), pages 188 - 193, XP010727466, ISBN: 978-0-7803-8310-4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3499473A1 (fr) * 2017-12-15 2019-06-19 Siemens Mobility GmbH Détection automatisée de situations dangereuses

Also Published As

Publication number Publication date
DE102009024066A1 (de) 2010-12-09

Similar Documents

Publication Publication Date Title
EP3044760B1 (fr) Procédé d'analyse de la distribution d'objets dans des files d'attente libres
EP2297701B1 (fr) Analyse vidéo
WO2009003793A2 (fr) Dispositif pour identifier et/ou classifier des modèles de mouvements dans une séquence d'images d'une scène de surveillance, procédé et programme informatique
EP1574986B1 (fr) Appareil et procédé de détection et de suivi de personnes dans une zone d'intérêt
WO2010139495A1 (fr) Procédé et dispositif pour le classement de situations
DE102006053286A1 (de) Verfahren zur Detektion von bewegungsauffälligen Bildbereichen, Vorrichtung sowie Computerprogramm zur Durchführung des Verfahrens
DE102009028604A1 (de) Vorrichtung zur Erkennung einer Objektschlange, Verfahren sowie Computerprogramm
AT502551A1 (de) Verfahren und bildauswertungseinheit zur szenenanalyse
EP2517149A2 (fr) Dispositif et procédé pour la surveillance d'objets vidéo
EP3557549B1 (fr) Procédé d'évaluation d'un événement
DE102015207047A1 (de) Verfahren und System automatisierten Sequenzieren von Fahrzeugen in nebeneinander angeordneten Durchfahrtskonfigurationen über eine bildbasierte Einstufung
WO2020239540A1 (fr) Procédé et dispositif de détection de fumée
EP2462534B1 (fr) Procédé de surveillance d'une zone
EP2483834B1 (fr) Methode et appareil pour la reconnaissance d'une detection fausse d'un objet dans un image
DE102010003669B4 (de) Verfahren und Vorrichtung zur Lokalisierung von Personen in einem vorgegebenen Bereich
DE102012109390A1 (de) Überwachungsvorrichtung, Verfahren zum Überwachen einer sicherheitskritischen Einheit und Beförderungssystem
WO2012110654A1 (fr) Procédé pour analyser une pluralité d'images décalées dans le temps, dispositif pour analyser des images, système de contrôle
DE102007000449A1 (de) Vorrichtung und Verfahren zur automatischen Zaehlung von Objekten auf beweglichem oder bewegtem Untergrund
DE19641000C2 (de) Verfahren und Anordnung zur automatischen Erkennung der Anzahl von Personen in einer Personenschleuse
WO2013083327A1 (fr) Dispositif et procédé de détection automatique d'événements dans des données de capteur
DE102017122554A1 (de) Informationenverarbeitungsvorrichtung, informationenverarbeitungsverfahren und speichermedium
DE102019132012A1 (de) Verfahren und System zur Detektion von kleinen unklassifizierten Hindernissen auf einer Straßenoberfläche
WO2020239632A1 (fr) Procédé et dispositif de prédiction sûre d'une trajectoire
CN111144248B (zh) 基于st-fhcd网络模型的人数统计方法、系统及介质
WO2023194135A1 (fr) Procédé et dispositif de surveillance automatisée de l'opération de conduite d'un système de transport de passagers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10718501

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10718501

Country of ref document: EP

Kind code of ref document: A1