WO2018023516A1 - Voice interaction recognition control method - Google Patents

Voice interaction recognition control method

Info

Publication number
WO2018023516A1
WO2018023516A1 (Application PCT/CN2016/093162)
Authority
WO
WIPO (PCT)
Prior art keywords
voice
emotion recognition
information
voice information
emotion
Prior art date
Application number
PCT/CN2016/093162
Other languages
English (en)
French (fr)
Inventor
易晓阳
Original Assignee
易晓阳
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 易晓阳
Priority to PCT/CN2016/093162 (WO2018023516A1)
Publication of WO2018023516A1

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/02 — Feature extraction for speech recognition; Selection of recognition unit

Definitions

  • The present invention relates to the field of smart home technology, and more particularly to a voice interaction recognition control method.
  • The smart home is an embodiment of the Internet of Things under the influence of the Internet. A smart home connects the various devices in a household through IoT technology and provides functions and means such as home appliance control, lighting control, remote telephone control, indoor and outdoor remote control, burglar alarm, environmental monitoring, HVAC control, infrared forwarding, and programmable timing control. Compared with an ordinary home, a smart home not only retains traditional living functions but also integrates building, network communication, information appliances, and equipment automation, combining system, structure, service, and management into an efficient, comfortable, safe, convenient, and environmentally friendly living environment. It provides comprehensive information interaction functions, helps the household keep in touch with the outside world, optimizes people's lifestyles, helps people arrange their time effectively, enhances the safety of home life, and can even save money on various energy costs.
  • The technical problem to be solved by the present invention is to provide a voice interaction recognition control method in view of the above-mentioned drawbacks of the prior art.
  • A voice interaction recognition control method is constructed that includes the following steps:
  • In the voice interaction recognition control method of the present invention, the method further comprises the steps of:
  • a user emotion recognition result is generated according to a predetermined emotion recognition result judgment method.
  • In the voice interaction recognition control method, the emotion recognition comprises commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
  • The voice interaction recognition control method of the present invention further comprises the steps of:
  • The voice interaction recognition control method further includes the step of generating a control instruction for a specific operation and sending it to the control module when the literal meaning of the received voice is affirmative.
  • The voice interaction recognition control method of the present invention further includes the steps of:
  • In the voice interaction recognition control method of the present invention, the response voice information includes information on the type of smart device to be controlled.
  • The beneficial effect of the invention is that humanized control of smart devices is achieved by adopting a voice interaction manner.
  • FIG. 1 is a flowchart of a voice interaction recognition control method according to a preferred embodiment of the present invention;
  • FIG. 2 is a further flowchart of the voice interaction recognition control method according to the preferred embodiment of the present invention;
  • FIG. 3 is a schematic block diagram of a voice interaction recognition control system according to the preferred embodiment of the present invention;
  • FIG. 4 is a schematic block diagram of the emotion recognition judgment module of the voice interaction recognition control system according to the preferred embodiment of the present invention.
  • The flow of the voice interaction recognition control method according to the preferred embodiment of the present invention is shown in FIG. 1 and includes the following steps:
  • Step S1: collecting and filtering externally input voice information;
  • Step S2: performing emotion recognition on the externally input voice information, and determining the literal meaning and emotion category of the input voice;
  • Step S3: generating corresponding response voice information according to the literal meaning and emotion category of the voice, and sending the response voice information to the voice output module or the control module;
  • Step S4: sending a control instruction to the corresponding smart device according to the received response voice information.
  • As shown in FIG. 2, the above method further includes the steps of:
  • Step S5: performing voice tone emotion recognition on the voice information to generate a first emotion recognition result;
  • Step S6: after converting the voice information into text information, performing semantic emotion recognition on the text information to generate a second emotion recognition result;
  • Step S7: based on the first emotion recognition result and the second emotion recognition result, generating a user emotion recognition result according to a predetermined emotion recognition result judgment method.
  • The emotion recognition includes commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
  • The above method further includes the step of: performing image recognition judgment on the acquired facial image information to generate a third emotion recognition result.
  • The above method further comprises the step of:
  • generating a control instruction for a specific operation and sending it to the control module when the literal meaning of the received voice is affirmative.
  • The above method further comprises the steps of: receiving the response voice information generated by the voice intelligence generation module;
  • identifying the response voice information, and sending the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module.
  • The response voice information includes information on the type of smart device that needs to be controlled.
  • A schematic block diagram of the voice interaction recognition control system according to the preferred embodiment of the present invention is shown in FIG. 3, including an interconnected audio signal acquisition module 1, emotion recognition judgment module 2, voice intelligence generation module 3, voice output module 4, control module 5, and wireless transceiver module 6. The audio signal acquisition module 1 is configured to collect and filter externally input voice information; the emotion recognition judgment module 2 is configured to perform emotion recognition on the externally input voice information and determine the literal meaning and emotion category of the input voice; the voice intelligence generation module 3 is configured to generate corresponding response voice information according to the literal meaning and emotion category of the voice, and send the response voice information to the voice output module or the control module; the control module 5 is configured to send control instructions to the corresponding smart device according to the received response voice information.
  • This embodiment achieves humanized control of smart devices by adopting a voice interaction manner.
  • The emotion recognition judgment module 2 includes: a first emotion recognition unit 21, configured to perform voice tone emotion recognition on the voice information and generate a first emotion recognition result; a second emotion recognition unit 22, configured to perform semantic emotion recognition on the text information after the voice information has been converted into text information, to generate a second emotion recognition result; and an emotion recognition result output unit 23, configured to generate a user emotion recognition result according to a predetermined emotion recognition result judgment method based on the first emotion recognition result and the second emotion recognition result.
  • The emotion recognition includes commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
  • the emotion recognition judging module includes: a third emotion recognition unit configured to perform image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
  • A number of commendatory seed words and a number of derogatory seed words are selected to generate a sentiment dictionary; the word similarity between the words in the text information and the commendatory seed words in the sentiment dictionary, and the word similarity between the words in the text information and the derogatory seed words, are calculated respectively;
  • based on the word similarities, the second emotion recognition result is generated by a preset semantic sentiment analysis method.
  • Specifically, the word similarity between the words in the text information and the commendatory seed words, and the word similarity between the words in the text information and the derogatory seed words, may be calculated separately according to a semantic similarity calculation method.
  • The step of generating the second emotion recognition result by the preset semantic sentiment analysis method is: calculating a word sentiment tendency value by a word sentiment tendency formula; when the word sentiment tendency value is greater than a predetermined threshold, the word in the text information is judged to express commendatory (positive) emotion; when the word sentiment tendency value is less than the predetermined threshold, the word in the text information is judged to express derogatory (negative) emotion.
  • The voice intelligence generation module is further configured to generate a control instruction for a specific operation and send it to the control module when the literal meaning of the received voice is affirmative; for example, once it has been determined which smart device is to be controlled and what control is required, a control instruction can be generated and sent to that smart device.
  • The control module includes: an information receiving unit, configured to receive the response voice information generated by the voice intelligence generation module; and an information generation unit, configured to identify the response voice information and send the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module. That is, the control module stores detailed information on a number of smart devices to be controlled; the user can query the status information of any smart device by means of voice interaction and control it according to that status information.
  • The response voice information contains the type or number information of the smart device to be controlled, and the control module determines, according to this information, to which device the control instruction needs to be sent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A voice interaction recognition control method, comprising the following steps: collecting and filtering externally input voice information (S1); performing emotion recognition on the externally input voice information, and determining the literal meaning and emotion category of the input voice (S2); generating corresponding response voice information according to the literal meaning and emotion category of the voice, and sending the response voice information to a voice output module or a control module (S3); and sending a control instruction to the corresponding smart device according to the received response voice information (S4). Humanized control of smart devices is achieved by means of voice interaction.

Description

Voice interaction recognition control method

TECHNICAL FIELD
The present invention relates to the field of smart home technology, and more particularly to a voice interaction recognition control method.
BACKGROUND
The smart home is an embodiment of the Internet of Things under the influence of the Internet. A smart home connects the various devices in a household through IoT technology and provides functions and means such as home appliance control, lighting control, remote telephone control, indoor and outdoor remote control, burglar alarm, environmental monitoring, HVAC control, infrared forwarding, and programmable timing control. Compared with an ordinary home, a smart home not only retains traditional living functions but also integrates building, network communication, information appliances, and equipment automation, combining system, structure, service, and management into an efficient, comfortable, safe, convenient, and environmentally friendly living environment. It provides comprehensive information interaction functions, helps the household keep in touch with the outside world, optimizes people's lifestyles, helps people arrange their time effectively, enhances the safety of home life, and can even save money on various energy costs.
As smart home systems become increasingly popular, a single recognition mode can no longer meet people's needs.
SUMMARY OF THE INVENTION
The technical problem to be solved by the present invention is to provide a voice interaction recognition control method in view of the above drawbacks of the prior art.
The technical solution adopted by the present invention to solve its technical problem is as follows:
A voice interaction recognition control method is constructed, comprising the following steps:
collecting and filtering externally input voice information;
performing emotion recognition on the externally input voice information, and determining the literal meaning and emotion category of the input voice;
generating corresponding response voice information according to the literal meaning and emotion category of the voice, and sending the response voice information to the voice output module or the control module;
sending a control instruction to the corresponding smart device according to the received response voice information.
In the voice interaction recognition control method according to the present invention, the method further comprises the steps of:
performing voice tone emotion recognition on the voice information to generate a first emotion recognition result;
after converting the voice information into text information, performing semantic emotion recognition on the text information to generate a second emotion recognition result;
generating a user emotion recognition result according to a predetermined emotion recognition result judgment method based on the first emotion recognition result and the second emotion recognition result.
In the voice interaction recognition control method according to the present invention, the emotion recognition comprises commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
In the voice interaction recognition control method according to the present invention, the method further comprises the step of:
performing image recognition judgment on acquired facial image information to generate a third emotion recognition result.
The voice interaction recognition control method according to the present invention further comprises the step of generating a control instruction for a specific operation and sending it to the control module when the literal meaning of the received voice is affirmative.
The voice interaction recognition control method according to the present invention further comprises the steps of:
receiving the response voice information generated by the voice intelligence generation module;
identifying the response voice information, and sending the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module.
In the voice interaction recognition control method according to the present invention, the response voice information contains information on the type of smart device to be controlled.
The beneficial effect of the present invention is that humanized control of smart devices is achieved by means of voice interaction.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to illustrate more clearly the technical solutions in the embodiments of the present invention or in the prior art, the present invention is further described below with reference to the drawings and embodiments. The drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort:
FIG. 1 is a flowchart of a voice interaction recognition control method according to a preferred embodiment of the present invention;
FIG. 2 is a further flowchart of the voice interaction recognition control method according to the preferred embodiment of the present invention;
FIG. 3 is a schematic block diagram of a voice interaction recognition control system according to the preferred embodiment of the present invention;
FIG. 4 is a schematic block diagram of the emotion recognition judgment module of the voice interaction recognition control system according to the preferred embodiment of the present invention.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The flow of the voice interaction recognition control method according to the preferred embodiment of the present invention is shown in FIG. 1 and comprises the following steps:
Step S1: collect and filter externally input voice information;
Step S2: perform emotion recognition on the externally input voice information, and determine the literal meaning and emotion category of the input voice;
Step S3: generate corresponding response voice information according to the literal meaning and emotion category of the voice, and send the response voice information to the voice output module or the control module;
Step S4: send a control instruction to the corresponding smart device according to the received response voice information.
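By way of illustration only, steps S1 to S4 can be sketched as a small program. Every function name, keyword list, and device name below is a hypothetical stand-in; the embodiment does not prescribe any concrete implementation.

```python
# A minimal, self-contained sketch of steps S1-S4. The keyword-based
# "emotion recognition" and the device table are illustrative assumptions.

POSITIVE_WORDS = {"please", "great", "yes", "good"}
NEGATIVE_WORDS = {"no", "bad", "stop", "terrible"}

def collect_and_filter(raw_text: str) -> str:
    """S1: stand-in for audio capture and filtering (here: trim and normalize text)."""
    return raw_text.strip().lower()

def recognize(text: str) -> tuple[str, str]:
    """S2: determine the literal meaning and a coarse emotion category."""
    words = set(text.split())
    if words & NEGATIVE_WORDS:
        emotion = "derogatory"
    elif words & POSITIVE_WORDS:
        emotion = "commendatory"
    else:
        emotion = "neutral"
    return text, emotion  # literal meaning, emotion category

def generate_response(literal: str, emotion: str) -> dict:
    """S3: build response voice information; mark it as a control command
    when a known device is mentioned."""
    for device in ("light", "air conditioner", "curtain"):
        if device in literal:
            return {"control": True, "device": device, "text": f"Turning on the {device}."}
    tone = "cheerful" if emotion == "commendatory" else "soothing"
    return {"control": False, "device": None, "text": f"({tone}) How can I help?"}

def dispatch(response: dict) -> None:
    """S4: send a control instruction to the corresponding smart device,
    or hand the reply to the voice output module."""
    if response["control"]:
        print(f"[control module] -> {response['device']}: ON")
    else:
        print(f"[voice output] {response['text']}")

if __name__ == "__main__":
    literal, emotion = recognize(collect_and_filter("Please turn on the light"))
    dispatch(generate_response(literal, emotion))
```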
As shown in FIG. 2, the above method further comprises the steps of:
Step S5: perform voice tone emotion recognition on the voice information to generate a first emotion recognition result;
Step S6: after converting the voice information into text information, perform semantic emotion recognition on the text information to generate a second emotion recognition result;
Step S7: based on the first emotion recognition result and the second emotion recognition result, generate a user emotion recognition result according to a predetermined emotion recognition result judgment method.
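The embodiment does not fix the "predetermined emotion recognition result judgment method" used in step S7. One plausible reading, shown below purely as an assumption, is a confidence-weighted vote between the tone-based and text-based results; the labels and weights are illustrative.

```python
# Assumed fusion rule for step S7: weighted vote between the tone-based (S5)
# and text-based (S6) emotion recognition results.

from dataclasses import dataclass

@dataclass
class EmotionResult:
    label: str         # "commendatory" or "derogatory"
    confidence: float  # 0.0 .. 1.0

def judge_user_emotion(first: EmotionResult, second: EmotionResult,
                       tone_weight: float = 0.5) -> str:
    """Fuse the voice-tone result and the semantic result into the final
    user emotion recognition result."""
    score = 0.0
    for result, weight in ((first, tone_weight), (second, 1.0 - tone_weight)):
        sign = 1.0 if result.label == "commendatory" else -1.0
        score += sign * weight * result.confidence
    return "commendatory" if score >= 0 else "derogatory"

# Example: tone sounds clearly positive, wording is mildly negative -> overall positive.
print(judge_user_emotion(EmotionResult("commendatory", 0.9),
                         EmotionResult("derogatory", 0.4)))
```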
Here, the emotion recognition includes commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
The above method further comprises the step of:
performing image recognition judgment on the acquired facial image information to generate a third emotion recognition result.
The above method further comprises the step of:
generating a control instruction for a specific operation and sending it to the control module when the literal meaning of the received voice is affirmative.
The above method further comprises the steps of:
receiving the response voice information generated by the voice intelligence generation module;
identifying the response voice information, and sending the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module.
Here, the response voice information contains information on the type of smart device to be controlled.
A schematic block diagram of the voice interaction recognition control system according to the preferred embodiment of the present invention is shown in FIG. 3. The system includes an interconnected audio signal acquisition module 1, emotion recognition judgment module 2, voice intelligence generation module 3, voice output module 4, control module 5, and wireless transceiver module 6. The audio signal acquisition module 1 is configured to collect and filter externally input voice information; the emotion recognition judgment module 2 is configured to perform emotion recognition on the externally input voice information and determine the literal meaning and emotion category of the input voice; the voice intelligence generation module 3 is configured to generate corresponding response voice information according to the literal meaning and emotion category of the voice, and send the response voice information to the voice output module or the control module; the control module 5 is configured to send control instructions to the corresponding smart device according to the received response voice information. This embodiment achieves humanized control of smart devices by means of voice interaction.
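The module arrangement of FIG. 3 can be pictured as the following wiring sketch. The class names track the numbered modules, while the message formats and the one-line device logic are illustrative assumptions rather than part of the disclosed system.

```python
# Schematic wiring of the six modules in FIG. 3, expressed as plain classes.

class AudioSignalAcquisitionModule:          # module 1
    def capture(self) -> str:
        return "turn on the air conditioner please"

class EmotionRecognitionJudgmentModule:      # module 2
    def analyze(self, voice: str) -> tuple[str, str]:
        emotion = "commendatory" if "please" in voice else "neutral"
        return voice, emotion                # literal meaning, emotion category

class VoiceOutputModule:                     # module 4
    def speak(self, text: str) -> None:
        print(f"[speaker] {text}")

class WirelessTransceiverModule:             # module 6
    def send(self, device: str, command: str) -> None:
        print(f"[radio] {device} <- {command}")

class ControlModule:                         # module 5
    def __init__(self, transceiver: WirelessTransceiverModule):
        self.transceiver = transceiver
    def handle(self, response: dict) -> None:
        self.transceiver.send(response["device"], response["command"])

class VoiceIntelligenceGenerationModule:     # module 3
    def __init__(self, output: VoiceOutputModule, control: ControlModule):
        self.output, self.control = output, control
    def respond(self, literal: str, emotion: str) -> None:
        # Route either to the control module or to the voice output module.
        if "air conditioner" in literal:
            self.control.handle({"device": "air conditioner", "command": "ON"})
        else:
            self.output.speak("Sorry, I did not catch that.")

# Wire the modules together and run one interaction.
system = VoiceIntelligenceGenerationModule(
    VoiceOutputModule(), ControlModule(WirelessTransceiverModule()))
literal, emotion = EmotionRecognitionJudgmentModule().analyze(
    AudioSignalAcquisitionModule().capture())
system.respond(literal, emotion)
```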
In the above voice interaction recognition control system, as shown in FIG. 4, the emotion recognition judgment module 2 includes: a first emotion recognition unit 21, configured to perform voice tone emotion recognition on the voice information and generate a first emotion recognition result; a second emotion recognition unit 22, configured to perform semantic emotion recognition on the text information after the voice information has been converted into text information, to generate a second emotion recognition result; and an emotion recognition result output unit 23, configured to generate a user emotion recognition result according to a predetermined emotion recognition result judgment method based on the first emotion recognition result and the second emotion recognition result.
In the above voice interaction recognition control system, the emotion recognition includes commendatory (positive) emotion recognition and derogatory (negative) emotion recognition. The emotion recognition judgment module further includes: a third emotion recognition unit, configured to perform image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
For example, a number of commendatory seed words and a number of derogatory seed words are selected to generate a sentiment dictionary; the word similarity between each word in the text information and the commendatory seed words in the sentiment dictionary, and the word similarity between each word and the derogatory seed words, are calculated respectively; based on the word similarities, the second emotion recognition result is generated by a preset semantic sentiment analysis method. Specifically, the word similarity between the words in the text information and the commendatory seed words, and the word similarity between the words in the text information and the derogatory seed words, may be calculated separately according to a semantic similarity calculation method.
In the above embodiment, the step of generating the second emotion recognition result by the preset semantic sentiment analysis method based on the word similarities is as follows: a word sentiment tendency value is calculated by a word sentiment tendency formula; when the word sentiment tendency value is greater than a predetermined threshold, the word in the text information is judged to express commendatory (positive) emotion; when the word sentiment tendency value is less than the predetermined threshold, the word in the text information is judged to express derogatory (negative) emotion.
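The embodiment names a word sentiment tendency formula and a semantic similarity calculation without defining either. The sketch below therefore assumes a common formulation, the summed similarity to commendatory seed words minus the summed similarity to derogatory seed words, and uses a toy character-overlap similarity in place of a real semantic measure (such as one based on HowNet or word embeddings).

```python
# Worked sketch of the seed-word / sentiment-dictionary scheme. The seed lists,
# the similarity measure, and the tendency formula are assumptions for illustration.

COMMENDATORY_SEEDS = ["good", "happy", "excellent"]
DEROGATORY_SEEDS = ["bad", "angry", "terrible"]

def similarity(a: str, b: str) -> float:
    """Toy word similarity: character-set overlap (Jaccard index)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def sentiment_tendency(word: str) -> float:
    """Word sentiment tendency value: commendatory-seed mass minus derogatory-seed mass."""
    pos = sum(similarity(word, seed) for seed in COMMENDATORY_SEEDS)
    neg = sum(similarity(word, seed) for seed in DEROGATORY_SEEDS)
    return pos - neg

def classify_word(word: str, threshold: float = 0.0) -> str:
    """Greater than the threshold -> commendatory; less than it -> derogatory."""
    value = sentiment_tendency(word)
    if value > threshold:
        return "commendatory"
    if value < threshold:
        return "derogatory"
    return "neutral"

print(classify_word("goodness"))   # leans toward the commendatory seeds
print(classify_word("badly"))      # leans toward the derogatory seeds
```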
In the above voice interaction recognition control system, the voice intelligence generation module is further configured to generate a control instruction for a specific operation and send it to the control module when the literal meaning of the received voice is affirmative; for example, once it has been determined which smart device is to be controlled and what control is required, a control instruction can be generated and sent to that smart device.
In the above voice interaction recognition control system, the control module includes: an information receiving unit, configured to receive the response voice information generated by the voice intelligence generation module; and an information generation unit, configured to identify the response voice information and send the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module. That is, the control module stores detailed information on a number of smart devices to be controlled; the user can query the status information of any smart device by means of voice interaction and control it according to that status information.
In the above voice interaction recognition control system, the response voice information contains the type or number information of the smart device to be controlled, and the control module determines, according to this information, to which device the control instruction needs to be sent.
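As an illustration of this routing, the control module can be modeled as a small device registry keyed by device number, with commands forwarded through the wireless transceiver. The registry layout and field names are assumptions, not part of the disclosure.

```python
# Sketch of the control module's device routing and status query.

from dataclasses import dataclass, field

@dataclass
class SmartDevice:
    number: str
    device_type: str
    status: str = "off"

@dataclass
class ControlModule:
    devices: dict[str, SmartDevice] = field(default_factory=dict)

    def register(self, device: SmartDevice) -> None:
        self.devices[device.number] = device

    def query_status(self, number: str) -> str:
        """Answer a voice query such as 'what is the state of device 002?'."""
        device = self.devices[number]
        return f"{device.device_type} {device.number} is {device.status}"

    def dispatch(self, number: str, command: str) -> None:
        """Send a control instruction to the device identified in the
        response voice information (here: update the record and log it)."""
        device = self.devices[number]
        device.status = command
        print(f"[wireless transceiver] -> {device.device_type} {number}: {command}")

control = ControlModule()
control.register(SmartDevice("001", "light"))
control.register(SmartDevice("002", "air conditioner"))
print(control.query_status("002"))
control.dispatch("002", "on")
```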
It should be understood that those of ordinary skill in the art can make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the scope of protection of the appended claims of the present invention.

Claims (7)

  1. A voice interaction recognition control method, characterized in that it comprises the following steps:
    collecting and filtering externally input voice information;
    performing emotion recognition on the externally input voice information, and determining the literal meaning and emotion category of the input voice;
    generating corresponding response voice information according to the literal meaning and emotion category of the voice, and sending the response voice information to the voice output module or the control module;
    sending a control instruction to the corresponding smart device according to the received response voice information.
  2. The voice interaction recognition control method according to claim 1, characterized in that the method further comprises the steps of:
    performing voice tone emotion recognition on the voice information to generate a first emotion recognition result;
    after converting the voice information into text information, performing semantic emotion recognition on the text information to generate a second emotion recognition result;
    generating a user emotion recognition result according to a predetermined emotion recognition result judgment method based on the first emotion recognition result and the second emotion recognition result.
  3. The voice interaction recognition control method according to claim 1, characterized in that the emotion recognition comprises commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
  4. The voice interaction recognition control method according to claim 1, characterized in that the method further comprises the step of:
    performing image recognition judgment on acquired facial image information to generate a third emotion recognition result.
  5. The voice interaction recognition control method according to claim 1, characterized in that it further comprises the step of generating a control instruction for a specific operation and sending it to the control module when the literal meaning of the received voice is affirmative.
  6. The voice interaction recognition control method according to claim 1, characterized in that it further comprises the steps of:
    receiving the response voice information generated by the voice intelligence generation module;
    identifying the response voice information, and sending the response voice information that can serve as a control instruction to the corresponding smart device through the wireless transceiver module.
  7. The voice interaction recognition control method according to claim 6, characterized in that the response voice information contains information on the type of smart device to be controlled.
PCT/CN2016/093162 2016-08-04 2016-08-04 一种语音交互识别控制方法 WO2018023516A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/093162 WO2018023516A1 (zh) 2016-08-04 2016-08-04 一种语音交互识别控制方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/093162 WO2018023516A1 (zh) 2016-08-04 2016-08-04 一种语音交互识别控制方法

Publications (1)

Publication Number Publication Date
WO2018023516A1 true WO2018023516A1 (zh) 2018-02-08

Family

ID=61072350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/093162 WO2018023516A1 (zh) 2016-08-04 2016-08-04 一种语音交互识别控制方法

Country Status (1)

Country Link
WO (1) WO2018023516A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226743A (zh) * 2007-12-05 2008-07-23 浙江大学 基于中性和情感声纹模型转换的说话人识别方法
CN103456314A (zh) * 2013-09-03 2013-12-18 广州创维平面显示科技有限公司 一种情感识别方法以及装置
WO2015088141A1 (en) * 2013-12-11 2015-06-18 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
CN104992715A (zh) * 2015-05-18 2015-10-21 百度在线网络技术(北京)有限公司 一种智能设备的界面切换方法及系统
CN105206269A (zh) * 2015-08-14 2015-12-30 百度在线网络技术(北京)有限公司 一种语音处理方法和装置
CN105632496A (zh) * 2016-03-21 2016-06-01 珠海市杰理科技有限公司 语音识别控制装置和智能家具系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179928A (zh) * 2019-12-30 2020-05-19 上海欣能信息科技发展有限公司 一种基于语音交互的变配电站智能控制方法

Similar Documents

Publication Publication Date Title
JP6902136B2 (ja) システムの制御方法、システム、及びプログラム
US10992491B2 (en) Smart home automation systems and methods
US11354089B2 (en) System and method for dialog interaction in distributed automation systems
CN112051743A (zh) 设备控制方法、冲突处理方法、相应的装置及电子设备
CN106228989A (zh) 一种语音交互识别控制方法
WO2017166462A1 (zh) 一种环境变化提醒方法、系统与头戴式vr设备
CN109308018A (zh) 一种智能家居分布式语音控制系统
WO2018023515A1 (zh) 一种手势及情感识别家居控制系统
CN106251871A (zh) 一种语音控制音乐本地播放装置
CN114067798A (zh) 一种服务器、智能设备及智能语音控制方法
CN106254186A (zh) 一种语音交互识别控制系统
WO2018023514A1 (zh) 一种家居背景音乐控制系统
WO2018023523A1 (zh) 一种运动及情感识别家居控制系统
WO2018023518A1 (zh) 一种语音交互识别智能终端
CN108417008A (zh) 基于语音识别的红外控制方法及系统
WO2018023516A1 (zh) 一种语音交互识别控制方法
CN106297783A (zh) 一种语音交互识别智能终端
WO2018023517A1 (zh) 一种语音交互识别控制系统
WO2018023513A1 (zh) 一种基于运动识别的家居控制方法
CN113674738A (zh) 一种全屋分布式语音的系统和方法
CN106251866A (zh) 一种语音控制音乐网络播放装置
CN106019977A (zh) 一种手势及情感识别家居控制系统
CN106297837A (zh) 一种语音控制音乐本地播放方法
WO2018023521A1 (zh) 一种语音控制音乐网络播放方法
WO2018023519A1 (zh) 一种语音控制音乐本地播放方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16911113

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/07/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16911113

Country of ref document: EP

Kind code of ref document: A1