WO2018040306A1 - Method for detecting frequent passers-by in surveillance video - Google Patents

Method for detecting frequent passers-by in surveillance video

Info

Publication number
WO2018040306A1
WO2018040306A1 (PCT/CN2016/106672)
Authority
WO
WIPO (PCT)
Prior art keywords
face image
passer-by
passing record
detecting frequent
video
Prior art date
Application number
PCT/CN2016/106672
Other languages
English (en)
French (fr)
Inventor
俞梦洁
Original Assignee
上海依图网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海依图网络科技有限公司 filed Critical 上海依图网络科技有限公司
Priority to SG11201806418TA priority Critical patent/SG11201806418TA/en
Publication of WO2018040306A1 publication Critical patent/WO2018040306A1/zh
Priority to PH12018501518A priority patent/PH12018501518A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G06V 40/173 Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to the field of video security, and in particular to a method for detecting frequent passers-by in surveillance video.
  • The object of the present invention is to provide a method for detecting frequent passers-by in surveillance video that overcomes the drawbacks of the prior art described above.
  • A method for detecting frequent passers-by in surveillance video, comprising the steps of:
  • S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records;
  • S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period.
  • In step S1, the face images of passers-by are specifically acquired using an AdaBoost classifier.
  • Step S3 specifically comprises the following steps:
  • S31: performing similarity matching on the passing records stored in the data storage module, and grouping passing records with similar portrait features;
  • The preset attribute features in step S32 include: mask, sunglasses, age, and gender.
  • In step S33, after the number of times each passer-by has passed through the camera's monitored area within the set time period is derived, the face images of passers-by whose number of appearances per unit time exceeds a set threshold are output.
  • the present invention has the following advantages:
  • Figure 1 is a schematic flow chart of the main steps of the present invention.
  • A method for detecting frequent passers-by in surveillance video, comprising the steps of:
  • S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records, the face images of passers-by being acquired using an AdaBoost classifier;
  • S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period, specifically comprising the steps of:
  • S31: performing similarity matching on the passing records stored in the data storage module, and grouping passing records with similar portrait features;
  • S32: filtering the grouped passing records according to preset attribute features, the preset attribute features including: mask, sunglasses, age, gender, and the like;
  • S33: deriving the number of times each passer-by has passed through the camera's monitored area within the set time period, and outputting the face images of passers-by whose number of appearances per unit time exceeds a set threshold.
  • The input is a video stream.
  • The output is a set of frequent-passing records.
  • The software comprises the following five processes (modules):
  • Portrait detection and tracking module: faces are detected in the input video stream; face detection uses a general AdaBoost classifier, and face tracking uses the optical flow method.
  • The module borrows the concepts of "key frame" and "auxiliary frame" from the video stream, reducing the amount of computation by more than 80% compared with running the full computation on every frame.
  • Detection and tracking are also jointly optimized.
  • Each algorithm is run only on the local regions already obtained by the other, which accelerates processing.
  • Portrait feature extraction module: for each passing in the video, the face size, the positions of the facial features, and the face pose information are obtained and used to judge whether the face is suitable for comparison; a dynamic approach is adopted to ensure that features are extracted from at least N frames for each passing.
  • Multiple feature operators such as LBP, SIFT, and neural networks are selected so that facial features are expressed as fully as possible.
  • Portrait storage module: provides multi-machine consistent portrait storage, saving for each passing the camera, the time, the position in the video, the portrait features, the face snapshot, and so on; the data can be accessed through an interface and also directly provides data support to the retrieval module.
  • Portrait retrieval module: based on a face similarity model obtained by offline training, each passing is matched for similarity against the history records to obtain a one-to-many similarity list. To speed up retrieval, preprocessing with a kmeans-style clustering algorithm is used, so that a single retrieval stays within 1 s even at the scale of tens of millions of records.
  • Frequent-passing post-processing module: to improve the hit rate, following common search-engine practice, a second or further extended retrieval is performed strategically; at the same time, to reduce the false positive rate, face attribute information such as age, gender, pose, and whether sunglasses are worn is extracted, and the types prone to higher false positives are filtered out.
  • Portrait feature extraction module: this module first locates key points on the face (35 feature points in total), then densely samples around the key points with different feature operators (LBP, SIFT, neural networks) to extract features of more than 100,000 dimensions, and then reduces the dimensionality to about 100 dimensions to obtain a compact feature vector.
  • LBP dense sampling
  • SIFT feature operators
  • Portrait retrieval module: the similarity between two features is computed using the L2 similarity. To accelerate the computation, an index is built over the portrait features in advance; the index consists of cluster centres obtained with the kmeans method, and to guarantee recall, a randomization approach is used to obtain multiple sets of cluster centres. After this processing, the retrieval speed-up can reach more than 30 times.
  • Frequent-passing post-processing module: the module contains two sub-modules. The first performs an extended query over the preliminary list of similar persons; because this can introduce false positives while improving the hit rate, strong constraints are imposed, for example the similarity score must exceed a high threshold before an extended retrieval is performed. The second filters the types of false positives that occur frequently, such as the elderly, children, people with the same hairstyle, or people wearing masks; the filtering method is to classify these attributes to determine whether a record belongs to such a type, and then apply a higher score threshold to cut off false positives.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a method for detecting frequent passers-by in surveillance video, comprising the steps of: S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records; S2: extracting portrait features from the face images, and storing the portrait features and passing records in a data storage module; S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period. Compared with the prior art, the present invention provides a mature method for matching faces of passers-by in video with high matching accuracy: for a face database on the order of one million, it achieves a hit rate above 60% while keeping the false positive rate below 0.1%.

Description

Method for detecting frequent passers-by in surveillance video
Technical Field
The present invention relates to the field of video security, and in particular to a method for detecting frequent passers-by in surveillance video.
Background Art
Cameras are now installed in many places, but the video they capture is generally useful only for after-the-fact queries, because preventive analysis of video beforehand is usually done manually. For the analysis of frequent passers-by in particular, the complexity of video scenes and the high computational load pose challenges to algorithm accuracy and speed, and suitable methods are lacking. Moreover, the prior art provides no technique for mining repeated passings of the same person within a video, where a passing refers to a passer-by moving through the area monitored by a surveillance camera.
Summary of the Invention
The object of the present invention is to overcome the above-mentioned drawbacks of the prior art by providing a method for detecting frequent passers-by in surveillance video.
The object of the present invention can be achieved by the following technical solution:
A method for detecting frequent passers-by in surveillance video, comprising the steps of:
S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records;
S2: extracting portrait features from the face images, and storing the portrait features and passing records in a data storage module;
S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period.
In step S1, the face images of passers-by are specifically acquired using an AdaBoost classifier.
In step S2, a total of 35 feature points are extracted in the process of extracting portrait features from the face image.
Step S3 specifically comprises the steps of:
S31: performing similarity matching on the passing records stored in the data storage module, and grouping passing records with similar portrait features;
S32: filtering the grouped passing records according to preset attribute features;
S33: deriving the number of times each passer-by has passed through the camera's monitored area within the set time period.
The preset attribute features in step S32 include: mask, sunglasses, age, and gender.
In step S33, after the number of times each passer-by has passed through the camera's monitored area within the set time period is derived, the face images of passers-by whose number of appearances per unit time exceeds a set threshold are also output.
Compared with the prior art, the present invention has the following advantages:
1) It provides a mature method for matching faces of passers-by in video, with high matching accuracy: for a face database on the order of one million, the hit rate exceeds 60% while the false positive rate stays below 0.1%.
2) It is fast: the passing record for each passing can be obtained within 3 seconds of the person's appearance.
3) It is robust and can be used in different scenes.
Brief Description of the Drawings
Figure 1 is a schematic flow chart of the main steps of the present invention.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the drawings and a specific embodiment. The embodiment is carried out on the basis of the technical solution of the present invention, and a detailed implementation and specific operating procedures are given, but the scope of protection of the present invention is not limited to the embodiment described below.
A method for detecting frequent passers-by in surveillance video, as shown in Figure 1, comprises the steps of:
S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records, the face images of passers-by being acquired using an AdaBoost classifier;
S2: extracting portrait features from the face images, and storing the portrait features and passing records in a data storage module, a total of 35 feature points being extracted in the feature extraction process;
S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period, specifically comprising the steps of:
S31: performing similarity matching on the passing records stored in the data storage module, and grouping passing records with similar portrait features;
S32: filtering the grouped passing records according to preset attribute features, the preset attribute features including: mask, sunglasses, age, gender, and the like;
S33: deriving the number of times each passer-by has passed through the camera's monitored area within the set time period, and outputting the face images of passers-by whose number of appearances per unit time exceeds a set threshold.
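As a rough illustration of how steps S31 and S33 fit together, the minimal sketch below greedily groups stored passing records by feature similarity and counts each group. The record layout, the greedy grouping, and the cosine similarity are simplifying assumptions made only for this sketch; the patent itself relies on an L2 similarity and a trained face similarity model.

```python
import numpy as np

def cosine_sim(a, b):
    # similarity between two portrait feature vectors (illustrative choice)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def count_passings(records, threshold=0.8):
    """S31/S33 sketch: greedily group records with similar features, then count each group."""
    groups = []  # each group collects the passing records assumed to belong to one person
    for rec in records:
        for group in groups:
            if cosine_sim(rec["feature"], group[0]["feature"]) >= threshold:
                group.append(rec)
                break
        else:
            groups.append([rec])
    return {i: len(g) for i, g in enumerate(groups)}

# toy usage: the first two records come from (almost) the same face
records = [
    {"camera": "cam-1", "time": 0, "feature": np.array([1.0, 0.0, 0.2])},
    {"camera": "cam-1", "time": 5, "feature": np.array([0.9, 0.1, 0.2])},
    {"camera": "cam-1", "time": 9, "feature": np.array([0.0, 1.0, 0.0])},
]
print(count_passings(records))  # -> {0: 2, 1: 1}
```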
In the technique of the present application, the input is a video stream and the output is a set of frequent-passing records.
Implementation: the software comprises the following five processes (modules).
1. Portrait detection and tracking module: faces are detected in the input video stream; face detection uses a general AdaBoost classifier, and face tracking uses the optical flow method. The module borrows the concepts of "key frame" and "auxiliary frame" from the video stream, reducing the amount of computation by more than 80% compared with running the full computation on every frame. Detection and tracking are also jointly optimized: when either the detection or the tracking algorithm is invoked, it is run only on the local regions already obtained by the other, which accelerates processing.
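A minimal sketch of such a key-frame scheme using OpenCV is given below: an AdaBoost-based Haar cascade runs only on every Kth frame, and Lucas-Kanade optical flow propagates the detected face points on the frames in between. The file name, the keyframe interval, and the tracking of corner points (rather than whole face boxes) are assumptions made for this sketch, not details fixed by the patent.

```python
import cv2
import numpy as np

# Haar cascade face detector shipped with OpenCV (a Viola-Jones / AdaBoost classifier)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

KEYFRAME_INTERVAL = 10          # full detection only on every 10th frame (assumption)
lk_params = dict(winSize=(21, 21), maxLevel=2)

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input stream
prev_gray, points, frame_idx = None, None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if frame_idx % KEYFRAME_INTERVAL == 0 or points is None or len(points) == 0:
        # "key frame": run the more expensive AdaBoost detector
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        pts = []
        for (x, y, w, h) in faces:
            corners = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], 20, 0.01, 5)
            if corners is not None:
                pts.append(corners + np.array([[x, y]], dtype=np.float32))
        points = np.vstack(pts) if pts else None
    else:
        # "auxiliary frame": propagate face points with Lucas-Kanade optical flow
        points, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, points, None, **lk_params)
        points = points[status.flatten() == 1]
    prev_gray = gray
    frame_idx += 1
cap.release()
```

The saving comes from running the cascade on one frame in ten and relying on the much cheaper optical-flow update elsewhere, which is consistent with the "more than 80%" reduction described above.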
2. Portrait feature extraction module: for each passing in the video, the face size, the positions of the facial features, and the face pose information are obtained and used to judge whether the face is suitable for face comparison; a dynamic approach is adopted here to ensure that features are extracted from at least N frames for each passing. For the portrait features, multiple feature operators such as LBP, SIFT, and neural networks are selected so that facial features are expressed as fully as possible.
3. Portrait storage module: provides multi-machine consistent portrait storage, saving for each passing the camera, the time, the position in the video, the portrait features, the face snapshot, and so on; the data can be accessed through an interface and also directly provides data support to the retrieval module.
4. Portrait retrieval module: based on a face similarity model obtained by offline training, each passing is matched for similarity against the history records to obtain a one-to-many similarity list. To speed up retrieval, preprocessing with a kmeans-style clustering algorithm is used here, so that a single retrieval stays within 1 s even at the scale of tens of millions of records.
5. Frequent-passing post-processing module: to improve the hit rate, following common search-engine practice, a second or further extended retrieval is performed strategically; at the same time, to reduce the false positive rate, face attribute information such as age, gender, pose, and whether sunglasses are worn is extracted, and the types prone to higher false positives are filtered out.
1. Portrait feature extraction module: this module first locates key points on the face (35 feature points in total), then densely samples around the key points with different feature operators (LBP, SIFT, neural networks) to extract features of more than 100,000 dimensions, and then reduces the dimensionality to about 100 dimensions to obtain a compact feature vector.
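A hedged sketch of this stage is shown below using only the LBP operator: LBP histograms are computed on patches centred on each key point and concatenated, and PCA stands in for the dimensionality-reduction step. The patch size, the use of PCA, and the randomly generated "face crops" and "key points" are assumptions for illustration; the patent additionally mixes in SIFT and neural-network features and does not name the reduction method.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_descriptor(face_gray, keypoints, patch=16, P=8, R=1.0):
    """Concatenate uniform-LBP histograms of patches centred on each key point."""
    feats = []
    for (x, y) in keypoints:
        x0, y0 = int(x) - patch // 2, int(y) - patch // 2
        region = face_gray[y0:y0 + patch, x0:x0 + patch]
        lbp = local_binary_pattern(region, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# toy usage: 35 fixed "key points" on synthetic 128x128 face crops,
# then PCA compresses the concatenated descriptor to about 100 dimensions
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(200, 128, 128)).astype(np.uint8)   # fake face crops
keypoints = rng.integers(16, 112, size=(35, 2))                        # fake key points
X = np.stack([lbp_descriptor(f, keypoints) for f in faces])
compact = PCA(n_components=100).fit_transform(X)    # dimensionality-reduction step
print(X.shape, "->", compact.shape)                  # (200, 350) -> (200, 100)
```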
2. Portrait retrieval module: the similarity between two features is computed using the L2 similarity. To accelerate the computation, an index is built over the portrait features in advance; the index consists of cluster centres obtained with the kmeans method, and to guarantee recall, a randomization approach is used to obtain multiple sets of cluster centres. After this processing, the retrieval speed-up can reach more than 30 times.
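The sketch below illustrates this kind of kmeans-indexed L2 search on synthetic data: two independently seeded kmeans runs act as the randomized sets of cluster centres, the query is first routed to its nearest centres, and the exact L2 distance is computed only on the resulting candidate short list. The database size, the number of clusters, and the number of probed centres are arbitrary assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
db = rng.normal(size=(20_000, 100)).astype(np.float32)   # stored portrait features

# index: cluster centres from two independently seeded kmeans runs (randomization helps recall)
indexes = [KMeans(n_clusters=128, n_init=1, random_state=s).fit(db) for s in (0, 1)]

def search(query, top_k=5, probe=3):
    """Coarse-to-fine search: exact L2 only on records whose cluster is near the query."""
    candidates = set()
    for km in indexes:
        centre_dist = np.linalg.norm(km.cluster_centers_ - query, axis=1)
        nearest = np.argsort(centre_dist)[:probe]
        candidates.update(np.where(np.isin(km.labels_, nearest))[0].tolist())
    cand = np.array(sorted(candidates))
    dist = np.linalg.norm(db[cand] - query, axis=1)       # exact L2 on the short list
    order = np.argsort(dist)[:top_k]
    return cand[order], dist[order]

ids, dists = search(db[42])
print(ids[:3], dists[:3])   # the query itself comes back with distance 0
```

The speed-up comes from scoring only the short list instead of the whole database, at the cost of a small recall loss that the extra randomized set of centres is meant to offset.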
3. Frequent-passing post-processing module: this module contains two sub-modules. The first performs an extended query over the preliminary list of similar persons; because this can introduce false positives while improving the hit rate, strong constraints are imposed, for example the similarity score must exceed a high threshold before an extended retrieval is performed. The second filters the types of false positives that occur frequently, such as the elderly, children, people with the same hairstyle, or people wearing masks; the filtering method is to classify these attributes to determine whether a record belongs to such a type, and then apply a higher score threshold to cut off false positives.
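The sketch below shows one way the two sub-modules could be combined; the attribute fields ("age_group", "mask"), the threshold values, and the search_fn callback are illustrative assumptions rather than values taken from the patent.

```python
BASE_THRESHOLD = 0.75     # score needed to keep an ordinary match
EXPAND_THRESHOLD = 0.90   # stricter score before a hit may seed an extended query
RISKY_THRESHOLD = 0.85    # stricter score for false-positive-prone categories

def is_risky(attrs):
    # categories the description lists as prone to false positives
    return attrs.get("age_group") in ("elderly", "child") or attrs.get("mask", False)

def postprocess(initial_hits, search_fn):
    """initial_hits: list of (record, score); search_fn(record) -> extra (record, score) pairs."""
    hits = list(initial_hits)
    for rec, score in initial_hits:
        if score >= EXPAND_THRESHOLD:          # sub-module 1: guarded extended query
            hits.extend(search_fn(rec))
    kept = []
    for rec, score in hits:                    # sub-module 2: attribute-aware cut-off
        threshold = RISKY_THRESHOLD if is_risky(rec.get("attrs", {})) else BASE_THRESHOLD
        if score >= threshold:
            kept.append((rec, score))
    return kept

# toy usage: the masked record at 0.80 falls below the stricter 0.85 cut-off
hits = [({"id": 1, "attrs": {"age_group": "adult"}}, 0.93),
        ({"id": 2, "attrs": {"mask": True}}, 0.80)]
print(postprocess(hits, search_fn=lambda rec: []))
```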

Claims (6)

  1. A method for detecting frequent passers-by in surveillance video, characterized by comprising the steps of:
    S1: loading the video stream captured by a surveillance camera, acquiring face images of passers-by, and generating passing records;
    S2: extracting portrait features from the face images, and storing the portrait features and passing records in a data storage module;
    S3: searching the data storage module by portrait feature descriptor, and deriving the number of times each passer-by has passed through the camera's monitored area within a set time period.
  2. The method for detecting frequent passers-by in surveillance video according to claim 1, characterized in that in step S1, the face images of passers-by are specifically acquired using an AdaBoost classifier.
  3. The method for detecting frequent passers-by in surveillance video according to claim 1, characterized in that in step S2, a total of 35 feature points are extracted in the process of extracting portrait features from the face image.
  4. The method for detecting frequent passers-by in surveillance video according to claim 1, characterized in that step S3 specifically comprises the steps of:
    S31: performing similarity matching on the passing records stored in the data storage module, and grouping passing records with similar portrait features;
    S32: filtering the grouped passing records according to preset attribute features;
    S33: deriving the number of times each passer-by has passed through the camera's monitored area within the set time period.
  5. The method for detecting frequent passers-by in surveillance video according to claim 4, characterized in that the preset attribute features in step S32 include: mask, sunglasses, age, and gender.
  6. The method for detecting frequent passers-by in surveillance video according to claim 4, characterized in that in step S33, after the number of times each passer-by has passed through the camera's monitored area within the set time period is derived, the face images of passers-by whose number of appearances per unit time exceeds a set threshold are also output.
PCT/CN2016/106672 2016-08-31 2016-11-21 Method for detecting frequent passers-by in surveillance video WO2018040306A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG11201806418TA SG11201806418TA (en) 2016-08-31 2016-11-21 Method for detecting frequent passer-passing in monitoring video
PH12018501518A PH12018501518A1 (en) 2016-08-31 2018-07-13 Method for detecting frequent passer-passing in monitoring video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610793181.7A 2016-08-31 2016-08-31 Method for detecting frequent passers-by in surveillance video
CN201610793181.7 2016-08-31

Publications (1)

Publication Number Publication Date
WO2018040306A1 2018-03-08

Family

ID=57858174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/106672 WO2018040306A1 (zh) 2016-08-31 2016-11-21 Method for detecting frequent passers-by in surveillance video

Country Status (4)

Country Link
CN (1) CN106355154B (zh)
PH (1) PH12018501518A1 (zh)
SG (1) SG11201806418TA (zh)
WO (1) WO2018040306A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552681A (zh) * 2020-04-30 2020-08-18 山东众志电子有限公司 Dynamic method based on big data technology for computing abnormal numbers of entries and exits at a venue

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018165863A1 (zh) * 2017-03-14 2018-09-20 华平智慧信息技术(深圳)有限公司 Method and device for data classification in security monitoring
CN106897460A (zh) * 2017-03-14 2017-06-27 华平智慧信息技术(深圳)有限公司 Method and device for data classification in security monitoring
CN110019963B (zh) * 2017-12-11 2021-08-10 罗普特科技集团股份有限公司 Method for searching for persons related to a suspect
CN110134812A (zh) * 2018-02-09 2019-08-16 杭州海康威视数字技术股份有限公司 Face search method and device
CN109492604A (zh) * 2018-11-23 2019-03-19 北京嘉华科盈信息系统有限公司 Face model feature statistical analysis system
CN111143594A (zh) * 2019-12-26 2020-05-12 北京橘拍科技有限公司 Portrait search method, server, storage medium, video processing method and system
CN111401315B (zh) * 2020-04-10 2023-08-22 浙江大华技术股份有限公司 Video-based face recognition method, recognition device and storage device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101618542A (zh) * 2009-07-24 2010-01-06 塔米智能科技(北京)有限公司 Intelligent robot greeting system and method
CN101847218A (zh) * 2009-03-25 2010-09-29 微星科技股份有限公司 People flow counting system and method
CN102176746A (zh) * 2009-09-17 2011-09-07 广东中大讯通信息有限公司 Intelligent monitoring system for secure entry into a small local area and implementation method
CN103971103A (zh) * 2014-05-23 2014-08-06 西安电子科技大学宁波信息技术研究院 People counting system
KR20150071920A (ko) * 2013-12-19 2015-06-29 한국전자통신연구원 Apparatus and method for counting people using face identification

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376679A (zh) * 2014-11-24 2015-02-25 苏州立瓷电子技术有限公司 Smart home early-warning method
CN105160319B (zh) * 2015-08-31 2018-10-16 电子科技大学 Method for pedestrian re-identification in surveillance video
CN105357496B (zh) * 2015-12-09 2018-07-27 武汉大学 Pedestrian identity recognition method for video surveillance based on multi-source big data fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847218A (zh) * 2009-03-25 2010-09-29 微星科技股份有限公司 People flow counting system and method
CN101618542A (zh) * 2009-07-24 2010-01-06 塔米智能科技(北京)有限公司 Intelligent robot greeting system and method
CN102176746A (zh) * 2009-09-17 2011-09-07 广东中大讯通信息有限公司 Intelligent monitoring system for secure entry into a small local area and implementation method
KR20150071920A (ko) * 2013-12-19 2015-06-29 한국전자통신연구원 Apparatus and method for counting people using face identification
CN103971103A (zh) * 2014-05-23 2014-08-06 西安电子科技大学宁波信息技术研究院 People counting system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552681A (zh) * 2020-04-30 2020-08-18 山东众志电子有限公司 Dynamic method based on big data technology for computing abnormal numbers of entries and exits at a venue

Also Published As

Publication number Publication date
SG11201806418TA (en) 2018-08-30
CN106355154A (zh) 2017-01-25
PH12018501518A1 (en) 2019-03-18
CN106355154B (zh) 2020-09-11

Similar Documents

Publication Publication Date Title
WO2018040306A1 (zh) Method for detecting frequent passers-by in surveillance video
CN110458101B (zh) Method and device for monitoring vital signs of inmates based on combined video and devices
CN107230267B (zh) Intelligent kindergarten check-in method based on a face recognition algorithm
CN103049459A (zh) Fast video retrieval method based on feature recognition
Arigbabu et al. Integration of multiple soft biometrics for human identification
TWI704505B (zh) Face recognition system, method for building face recognition data, and face recognition method thereof
Haji et al. Real time face recognition system (RTFRS)
CN111353338A (zh) Energy efficiency improvement method based on video surveillance of business halls
Xia et al. Face occlusion detection using deep convolutional neural networks
CN112989950A (zh) Violent video recognition system oriented to multimodal semantically associated features
Mercaldo et al. A proposal to ensure social distancing with deep learning-based object detection
CN108596057B (zh) Information security management system based on face recognition
Arbab‐Zavar et al. On forensic use of biometrics
WO2023093241A1 (zh) Pedestrian re-identification method and apparatus, and storage medium
Singh et al. Student Surveillance System using Face Recognition
Sitepu et al. FaceNet with RetinaFace to Identify Masked Face
Nguyen et al. Towards recognizing facial expressions at deeper level: Discriminating genuine and fake smiles from a sequence of images
Bharathi et al. An automatic real-time face mask detection using CNN
Abed et al. Face retrieval in videos using face quality assessment and convolution neural networks
Agarwal et al. A Framework for Dress Code Monitoring System using Transfer Learning from Pre-Trained YOLOv4 Model
Nandhis et al. Realtime face mask detection using machine learning
Meshkinfamfard et al. Tackling rare false-positives in face recognition: a case study
Archana et al. Tracking based event detection of singles broadcast tennis video
Yohannan et al. Optimal camera positions for human identification
Mustapha et al. A Survey On Video Face Recognition Using Deep Learning

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 12018501518

Country of ref document: PH

WWE Wipo information: entry into national phase

Ref document number: 11201806418T

Country of ref document: SG

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16914868

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16914868

Country of ref document: EP

Kind code of ref document: A1