WO2010105396A1 - Apparatus and method for recognizing speech emotion change - Google Patents

Apparatus and method for recognizing speech emotion change Download PDF

Info

Publication number
WO2010105396A1
Authority
WO
WIPO (PCT)
Prior art keywords
speech
speech emotion
window
change
speaker
Prior art date
Application number
PCT/CN2009/070801
Other languages
English (en)
French (fr)
Inventor
Yingliang Lu
Qing Guo
Bin Wang
Original Assignee
Fujitsu Limited
Priority date
Filing date
Publication date
Application filed by Fujitsu Limited filed Critical Fujitsu Limited
Priority to CN2009801279599A priority Critical patent/CN102099853B/zh
Priority to PCT/CN2009/070801 priority patent/WO2010105396A1/en
Publication of WO2010105396A1 publication Critical patent/WO2010105396A1/en

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/26Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices

Definitions

  • the present invention relates to the field of speech signal processing and in particular to an apparatus and a method for recognizing a speech emotion change of a speaker from speech data of the speaker.
  • the speech emotion recognition technology may be applied to the field of human-machine interaction, and thus may greatly improve friendliness and accuracy of human-machine interaction.
  • the conventional solutions only focus on recognizing a speech emotion of a speaker by extracting speech emotion features such as pitch, energy and formant from speech data of the speaker.
  • speech emotion features of different speakers are different and even speech emotion features of the same speaker are also different at different time periods, it is difficult to accurately recognize speech emotions of personalized speech data in the conventional solutions.
  • In many applications, recognizing an emotion change from a speech of a speaker is of more interest than recognizing the emotion itself.
  • For example, a time point at which an emotion of an actor changes from "calm" to "exciting" in a video is an appropriate time point for inserting an advertisement into the video. Therefore, in such applications, it is enough to accurately recognize a speech emotion change of a speaker from speech data of the speaker.
  • Moreover, due to the inaccuracy of speech emotion recognition in the conventional solutions, it is difficult to accurately recognize speech emotion changes of personalized speech data based on the speech emotion recognition results of those solutions.
  • an object of the invention is to provide an apparatus and a method for recognizing a speech emotion change of a speaker from speech data of the speaker, which are capable of providing good performance on speech emotion change recognition of personalized speech data.
  • an embodiment of the invention provides a method of recognizing a speech emotion change of a speaker from speech data of the speaker, which may comprise the following steps: a window dividing step of dividing the speech data of the speaker into a plurality of windows by a window width; a window speech emotion feature calculating step of calculating a speech emotion feature for each of the plurality of windows; and a speech emotion change recognizing step of recognizing the speech emotion change of the speaker for a window set consisting of at least two contiguous windows by comparing the speech emotion features of the window set with each of a plurality of speech emotion feature change templates stored in a speech emotion feature change database to find out a speech emotion feature change template which matches the speech emotion features of the window set.
  • an embodiment of the invention provides an apparatus for recognizing a speech emotion change of a speaker from speech data of the speaker, which may comprise: a window dividing means for dividing the speech data of the speaker into a plurality of windows by a window width; a window speech emotion feature calculating means for calculating a speech emotion feature for each of the plurality of windows; and a speech emotion change recognizing means for recognizing the speech emotion change of the speaker for a window set consisting of at least two contiguous windows by comparing the speech emotion features of the window set with each of a plurality of speech emotion feature change templates stored in a speech emotion feature change database to find out a speech emotion feature change template which matches the speech emotion features of the window set.
  • an embodiment of the invention provides a computer-readable storage medium with a computer program stored thereon, wherein said computer program, when being executed, causes a computer to execute the above method of recognizing a speech emotion change of a speaker from speech data of the speaker.
  • Figure 1 is a flow chart illustrating a method of recognizing a speech emotion change of a speaker from speech data of the speaker according to an embodiment of the invention
  • Figure 2 is a flow chart illustrating an implementing example of the speech emotion change recognizing step S130 of Figure 1;
  • Figure 3 schematically illustrates waveform graphs of two speech segments of speaker A extracted from dialogue data between speakers A and B;
  • Figure 4 schematically illustrates pitch change graphs respectively extracted from two speech segments of Figure 3
  • Figure 5 schematically illustrates a pitch change graph of two windows corresponding to two speech segments of Figure 3, where the window width is the minimum length of the two speech segments and the singularities are removed;
  • Figure 6 schematically illustrates a pitch change graph of many windows corresponding to two speech segments of Figure 3, where the window width is 10ms and the singularities are removed;
  • Figure 7 illustrates an exemplary structure of a speech emotion feature change database employed in the embodiment of the invention
  • Figure 8 is a block diagram illustrating a construction of an apparatus for recognizing a speech emotion change of a speaker from speech data of the speaker according to an embodiment of the invention
  • Figure 9 is a block diagram illustrating an exemplary construction of the speech emotion change recognizing means 830 of Figure 8; and Figure 10 is a block diagram illustrating an exemplary construction of a computer in which the invention may be implemented.
  • Figure 1 is a flow chart illustrating a method of recognizing a speech emotion change of a speaker from speech data of the speaker according to an embodiment of the invention.
  • the speech data of the speaker may be inputted via an external device such as a sound recording device, a phone, a PDA or the like.
  • the speech data of the speaker may be a whole piece of continuous speech data from the speaker, for example, an oral lecture made by a lecturer.
  • the speech data of the speaker may be constituted by one or more continuous speech segments of the speaker extracted from dialogue data of a plurality of speakers comprising the speaker, for example, one or more continuous speech segments of a customer extracted from telephone conversation data between the customer and a call center agent in the application of call center.
  • the discrimination of different speakers may be implemented using sndpeek or the like.
  • Figure 3 schematically illustrates waveform graphs of two speech segments (a) and (b) of speaker A extracted from dialogue data between speakers A and B.
  • the speech data of the speaker is constituted by two speech segments (a) and (b) of the speaker A.
  • the method may include a window dividing step S110, a window speech emotion feature calculating step S120 and a speech emotion change recognizing step S130.
  • In the window dividing step S110, the speech data of the speaker is divided into a plurality of windows by a window width.
  • the window width may be a predetermined time width such as 10ms, 100ms, 1s or the like.
  • the window width may be a predetermined time width such as 10ms, 100ms, 1s or the like, or may be determined as the larger of the minimum length of the one or more continuous speech segments and a predetermined time width such as 10ms, 100ms, 1s or the like.
  • When the speech data of the speaker is constituted by one or more continuous speech segments of the speaker, one window covers at most one speech segment; when a speech segment cannot be evenly divided, the final remainder whose length is less than the window width may be omitted.
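  • As a purely illustrative sketch of the window dividing step, the following Python code splits speech segments into fixed-width windows in the manner described above; the 16 kHz sample rate, the array-based segment representation and the function name divide_into_windows are assumptions made for illustration and are not prescribed by this description.

```python
from typing import List
import numpy as np

def divide_into_windows(segments: List[np.ndarray],
                        window_width_s: float = 0.01,
                        sample_rate: int = 16000) -> List[np.ndarray]:
    """Divide each continuous speech segment into windows of window_width_s seconds.

    A window never spans two segments, and a final remainder shorter than
    the window width is omitted, as described above.
    """
    window_len = int(window_width_s * sample_rate)
    windows = []
    for segment in segments:
        n_full = len(segment) // window_len      # keep only fully covered windows
        for i in range(n_full):
            windows.append(segment[i * window_len:(i + 1) * window_len])
    return windows
```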
  • In the window speech emotion feature calculating step S120, a speech emotion feature is calculated for each of the plurality of windows.
  • the speech emotion feature may comprise one or more of speech pitch, speech energy and speech speed.
  • an average value of the speech emotion features of respective feature extraction intervals in the window is calculated as the speech emotion feature of the window.
  • the feature extraction interval may be set to 10ms or another value depending on a specific design.
  • the speech emotion feature of the window may be calculated in another manner depending on a specific design.
  • speech emotion feature singularities are removed from the speech emotion features of respective feature extraction intervals in the window.
  • the speech emotion feature singularities refer to those feature values equal to or approximate to zero (for example, caused by a silence period or the like), those feature values having a large fluctuation compared with their neighboring feature values (for example, caused by a noise or the like), and so on.
  • When the speech emotion features of a window are all singularities (for example, a window of silence), the window itself may be removed.
  • the calculated speech emotion features of the respective windows are schematically shown in Figure 6, wherein one point in the time axis represents one window and those windows whose speech emotion features are equal to or approximate to zero are removed.
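  • As a hedged illustration of the per-window feature calculation, the sketch below averages the feature values of the 10ms feature extraction intervals in one window after removing singularities; the thresholds zero_eps and jump_ratio and the function name window_feature are assumptions chosen for the example, not values fixed by this description.

```python
from typing import Optional
import numpy as np

def window_feature(interval_features: np.ndarray,
                   zero_eps: float = 1e-3,
                   jump_ratio: float = 2.0) -> Optional[float]:
    """Average the per-interval feature values of one window after removing singularities.

    Singularities are values equal or close to zero (e.g. caused by silence) and
    values fluctuating strongly against their neighbours (e.g. caused by noise).
    If every interval is a singularity, None is returned and the window may be dropped.
    """
    values = np.asarray(interval_features, dtype=float)
    keep = values > zero_eps                           # drop (near-)zero values
    for i in range(1, len(values) - 1):
        neighbours = (values[i - 1] + values[i + 1]) / 2.0
        if neighbours > zero_eps and abs(values[i] - neighbours) > jump_ratio * neighbours:
            keep[i] = False                            # drop strongly fluctuating values
    kept = values[keep]
    return float(kept.mean()) if kept.size else None
```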
  • In the speech emotion change recognizing step S130, the speech emotion change of the speaker for a window set consisting of at least two contiguous windows is recognized by comparing the speech emotion features of the window set with each of a plurality of speech emotion feature change templates stored in a speech emotion feature change database to find out a speech emotion feature change template which matches the speech emotion features of the window set.
  • The window set may include a predetermined number of windows and may be selected sequentially with a moving step whose number of windows is less than the predetermined number.
  • Alternatively, the window set may include all the windows of two successive speech segments and may be selected sequentially with a moving step of one speech segment.
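  • For illustration only, the sequential selection of overlapping window sets could look like the sketch below; the set size of 6 windows and the moving step of 2 windows are example values assumed here, not values required by this description.

```python
from typing import List, Sequence

def select_window_sets(window_features: Sequence[float],
                       set_size: int = 6,
                       step: int = 2) -> List[Sequence[float]]:
    """Sequentially select window sets of set_size contiguous windows,
    advancing by a moving step smaller than set_size so that successive sets overlap."""
    sets = []
    for start in range(0, len(window_features) - set_size + 1, step):
        sets.append(window_features[start:start + set_size])
    return sets
```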
  • one type of speech emotion change may have a predetermined number of speech emotion feature change templates, each speech emotion feature change template associates one or more representative speech emotion feature change curves (e.g., speech pitch change curve, speech energy change curve, or the like) with one type of speech emotion change, and the speech emotion feature change templates may be generated in advance through a clustering algorithm by statistical analysis of a large corpus of representative speech data from different speakers.
  • Figure 7 illustrates an exemplary structure of a speech emotion feature change database employed in the embodiment of the invention.
  • the speech emotion feature change database includes the following two tables: a speech emotion feature change type table (a) and a speech emotion feature template table (b).
  • the speech emotion feature change type table (a) in Figure 7 has two fields, "Change type ID" and "Change type name", and schematically shows four types of exemplary speech emotion changes, including "Calm -> Angry", "Angry -> Calm" and "Calm -> Happy".
  • the speech emotion feature template table (b) in Figure 7 has three fields of "ID”, "Feature value (pitch)" and “Change type ID” and schematically shows one exemplary speech emotion feature curve associated with the speech emotion change of "Calm -> Angry".
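  • One possible way to lay out the two tables of Figure 7 is sketched below with SQLite; the table names, column types and the sample row are assumptions made purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE change_type (           -- speech emotion feature change type table (a)
    change_type_id   INTEGER PRIMARY KEY,
    change_type_name TEXT             -- e.g. 'Calm -> Angry'
);
CREATE TABLE feature_template (      -- speech emotion feature template table (b)
    id              INTEGER PRIMARY KEY,
    feature_values  TEXT,             -- serialized speech emotion feature change curve (e.g. pitch)
    change_type_id  INTEGER REFERENCES change_type(change_type_id)
);
""")
conn.execute("INSERT INTO change_type VALUES (1, 'Calm -> Angry')")
```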
  • Figure 2 is a flow chart illustrating an implementing example of the speech emotion change recognizing step S130 of Figure 1.
  • In the normalizing step, the speech emotion features of the window set are normalized.
  • In the Euclidean distance calculating step S220, a Euclidean distance between the normalized speech emotion features of the window set and each of the plurality of speech emotion feature change templates stored in the speech emotion feature change database is calculated.
  • In the determining step, a speech emotion feature change template whose Euclidean distance from the normalized speech emotion features of the window set is the smallest and less than a predetermined threshold is determined as the matching speech emotion feature change template.
  • the exemplary speech emotion change template in the speech emotion change template table (b) of Figure 7 is determined as the matching speech emotion feature change template of the speech data in Figure 3 through the above matching process, and thus the speech emotion feature change of the speech data in Figure 3 is recognized as "Calm -> Angry".
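  • A minimal sketch of this matching process, assuming z-score normalization, equal-length feature curves and an example threshold (none of which are fixed by this description), might look as follows.

```python
from typing import Dict, Optional
import numpy as np

def match_emotion_change(window_set_features: np.ndarray,
                         templates: Dict[str, np.ndarray],
                         threshold: float = 1.0) -> Optional[str]:
    """Return the change type of the best matching template, or None if no
    template is closer than the threshold (Euclidean distance on normalized curves)."""
    feats = np.asarray(window_set_features, dtype=float)
    feats = (feats - feats.mean()) / (feats.std() + 1e-9)    # normalization (assumed z-score)
    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        t = np.asarray(template, dtype=float)
        t = (t - t.mean()) / (t.std() + 1e-9)
        dist = float(np.linalg.norm(feats - t))               # Euclidean distance to the template
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```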
  • The speech emotion change recognizing step S130 in Figure 1 may be performed only if any speech emotion feature change between neighboring windows in the window set exceeds a predetermined threshold.
  • The method may further comprise a speech emotion recognizing step of recognizing speech emotions of respective windows in the window set according to a recognition result of the speech emotion change in the window set. For example, when the speech emotion feature change of the speech data in Figure 3 is recognized as "Calm -> Angry", the speech emotions of respective windows of the speech segment (a) may be recognized as "Calm" and those of the speech segment (b) may be recognized as "Angry".
  • Figure 8 is a block diagram illustrating a construction of an apparatus for recognizing a speech emotion change of a speaker from speech data of the speaker according to an embodiment of the invention.
  • the apparatus 800 may include a window dividing means 810, a window speech emotion feature calculating means 820 and a speech emotion change recognizing means 830.
  • the window dividing means 810 may divide the speech data of the speaker into a plurality of windows by a window width.
  • the window speech emotion feature calculating means 820 may calculate a speech emotion feature for each of the plurality of windows.
  • the speech emotion change recognizing means 830 may recognize the speech emotion change of the speaker for a window set consisting of at least two contiguous windows by comparing the speech emotion features of the window set with each of a plurality of speech emotion feature change templates stored in a speech emotion feature change database to find out a speech emotion feature change template which matches the speech emotion features of the window set.
  • Figure 9 is a block diagram illustrating an exemplary construction of the speech emotion change recognizing means 830 of Figure 8.
  • the speech emotion change recognizing means 830 may include a normalizing means 910, a Euclidean distance calculating means 920 and a determining means 930.
  • the normalizing means 910 may normalize the speech emotion features of the window set.
  • the Euclidean distance calculating means 920 may calculate a Euclidean distance between the normalized speech emotion features of the window set and each of the plurality of speech emotion feature change templates stored in the speech emotion feature change database.
  • the determining means 930 may determine a speech emotion feature change template, whose Euclidean distance from the normalized speech emotion features of the window set is the smallest and less than a predetermined threshold, as the matching speech emotion feature change template.
  • the apparatus 800 may further comprise a speech emotion recognizing means for recognizing speech emotions of respective windows in the window set according to a recognition result of speech emotion change in the window set.
  • the above apparatus and method for recognizing a speech emotion change of a speaker from speech data of the speaker may be applied to many applications.
  • a speech emotion change recognition result of a customer may be provided to a call center agent in the form of speech or image during the telephone conversation between the customer and the call center agent, so that the call center agent may respond to the speech emotion change of the customer appropriately and rapidly.
  • In the application of an oral lecture, the desired contents of the lecture can be extracted according to a speech emotion change recognition result of the lecturer. For example, the portions of the lecture which exhibit the speech emotion of "sad" may be filtered out so as to extract the optimistic contents of the lecture.
  • the above method and apparatus may be implemented by hardware.
  • Such hardware may be a single processing device or a plurality of processing devices.
  • Such a processing device may be a microprocessor, a microcontroller, a digital processor, a microcomputer, a part of a central processing unit, a state machine, a logic circuit and/or any device capable of manipulating a signal.
  • the above method and apparatus may be implemented by either software or firmware.
  • A program that constitutes the software is installed, from a storage medium or a network, into a computer having a dedicated hardware configuration, for example a general-purpose personal computer 1000 as illustrated in Figure 10, which, when various programs are installed therein, becomes capable of performing various functions.
  • a central processing unit (CPU) 1001 performs various processes in accordance with a program stored in a read only memory (ROM) 1002 or a program loaded from a storage section 1008 to a random access memory (RAM) 1003.
  • In the RAM 1003, data required when the CPU 1001 performs the various processes or the like is also stored as required.
  • the CPU 1001, the ROM 1002 and the RAM 1003 are connected to one another via a bus 1004.
  • An input/output interface 1005 is also connected to the bus 1004.
  • The following components are connected to the input/output interface 1005:
  • An input section 1006 including a keyboard, a mouse, or the like;
  • An output section 1007 including a display such as a cathode ray tube (CRT), a liquid crystal display (LCD), or the like, and a loudspeaker or the like;
  • the storage section 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like.
  • the communication section 1009 performs a communication process via the network such as the internet.
  • A drive 1010 is also connected to the input/output interface 1005 as required.
  • A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1010 as required, so that a computer program read therefrom is installed into the storage section 1008 as required.
  • the program that constitutes the software is installed from the network such as the internet or the storage medium such as the removable medium 1011.
  • This storage medium is not limited to the removable medium 1011 having the program stored therein as illustrated in Figure 10, which is delivered separately from the device for providing the program to the user.
  • Examples of the removable medium 1011 include the magnetic disk (including a floppy disk (registered trademark)), the optical disk (including a compact disk read-only memory (CD-ROM) and a digital versatile disk (DVD)), the magneto-optical disk, and the like.
  • The storage medium may be the ROM 1002, the hard disk contained in the storage section 1008, or the like, which has the program stored therein and is delivered to the user together with the device containing it.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
PCT/CN2009/070801 2009-03-16 2009-03-16 Apparatus and method for recognizing speech emotion change WO2010105396A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2009801279599A CN102099853B (zh) 2009-03-16 2009-03-16 用于识别语音情感变化的设备和方法
PCT/CN2009/070801 WO2010105396A1 (en) 2009-03-16 2009-03-16 Apparatus and method for recognizing speech emotion change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2009/070801 WO2010105396A1 (en) 2009-03-16 2009-03-16 Apparatus and method for recognizing speech emotion change

Publications (1)

Publication Number Publication Date
WO2010105396A1 true WO2010105396A1 (en) 2010-09-23

Family

ID=42739098

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/070801 WO2010105396A1 (en) 2009-03-16 2009-03-16 Apparatus and method for recognizing speech emotion change

Country Status (2)

Country Link
CN (1) CN102099853B (zh)
WO (1) WO2010105396A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8948893B2 (en) 2011-06-06 2015-02-03 International Business Machines Corporation Audio media mood visualization method and system
WO2019226406A1 (en) * 2018-05-25 2019-11-28 Microsoft Technology Licensing, Llc Dynamic extraction of contextually-coherent text blocks
CN116578691A (zh) * 2023-07-13 2023-08-11 江西合一云数据科技股份有限公司 一种智能养老机器人对话方法及其对话系统

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971729A (zh) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 一种基于声音特征范围提高声纹识别速度的方法及系统
CN106971711A (zh) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 一种自适应的声纹识别方法及系统
CN107133567B (zh) * 2017-03-31 2020-01-31 北京奇艺世纪科技有限公司 一种创可贴广告点位选取方法及装置
CN107154257B (zh) * 2017-04-18 2021-04-06 苏州工业职业技术学院 基于客户语音情感的客服服务质量评价方法及系统
CN109087670B (zh) * 2018-08-30 2021-04-20 西安闻泰电子科技有限公司 情绪分析方法、系统、服务器及存储介质
CN108986430A (zh) * 2018-09-13 2018-12-11 苏州工业职业技术学院 基于语音识别的网约车安全预警方法和系统
CN111048075A (zh) * 2018-10-11 2020-04-21 上海智臻智能网络科技股份有限公司 智能客服系统及智能客服机器人
CN110619894B (zh) * 2019-09-30 2023-06-27 北京淇瑀信息科技有限公司 基于语音波形图的情绪识别方法、装置和系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5812739A (en) * 1994-09-20 1998-09-22 Nec Corporation Speech recognition system and speech recognition method with reduced response time for recognition
WO2007017853A1 (en) * 2005-08-08 2007-02-15 Nice Systems Ltd. Apparatus and methods for the detection of emotions in audio interactions
CN1979491A (zh) * 2005-12-10 2007-06-13 三星电子株式会社 对音乐文件分类的方法及其系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5812739A (en) * 1994-09-20 1998-09-22 Nec Corporation Speech recognition system and speech recognition method with reduced response time for recognition
WO2007017853A1 (en) * 2005-08-08 2007-02-15 Nice Systems Ltd. Apparatus and methods for the detection of emotions in audio interactions
CN1979491A (zh) * 2005-12-10 2007-06-13 三星电子株式会社 对音乐文件分类的方法及其系统

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN, WEIJING ET AL.: "Speech emotion recognition with combined short and long term features", J TSINGHUA UNIV (SCI & TECH), vol. 48, no. S1, 2008, pages 709 - 713 *
ZHAO LASHENG ET AL.: "Survey on speech emotion recognition", APPLICATION RESEARCH OF COMPUTERS, vol. 26, no. 2, February 2009 (2009-02-01), pages 428 - 431 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8948893B2 (en) 2011-06-06 2015-02-03 International Business Machines Corporation Audio media mood visualization method and system
US9235918B2 (en) 2011-06-06 2016-01-12 International Business Machines Corporation Audio media mood visualization
US9953451B2 (en) 2011-06-06 2018-04-24 International Business Machines Corporation Audio media mood visualization
US10255710B2 (en) 2011-06-06 2019-04-09 International Business Machines Corporation Audio media mood visualization
WO2019226406A1 (en) * 2018-05-25 2019-11-28 Microsoft Technology Licensing, Llc Dynamic extraction of contextually-coherent text blocks
US11031003B2 (en) 2018-05-25 2021-06-08 Microsoft Technology Licensing, Llc Dynamic extraction of contextually-coherent text blocks
CN116578691A (zh) * 2023-07-13 2023-08-11 江西合一云数据科技股份有限公司 一种智能养老机器人对话方法及其对话系统

Also Published As

Publication number Publication date
CN102099853B (zh) 2012-10-10
CN102099853A (zh) 2011-06-15

Similar Documents

Publication Publication Date Title
WO2010105396A1 (en) Apparatus and method for recognizing speech emotion change
CN110085251B (zh) 人声提取方法、人声提取装置及相关产品
US9396724B2 (en) Method and apparatus for building a language model
US10593333B2 (en) Method and device for processing voice message, terminal and storage medium
CN107305541B (zh) 语音识别文本分段方法及装置
US10068570B2 (en) Method of voice recognition and electronic apparatus
US20200082808A1 (en) Speech recognition error correction method and apparatus
CN111145756B (zh) 一种语音识别方法、装置和用于语音识别的装置
JP2021526242A (ja) 保険の録音による品質検査方法、装置、機器及びコンピュータ記憶媒体
CN108305618B (zh) 语音获取及搜索方法、智能笔、搜索终端及存储介质
US11810546B2 (en) Sample generation method and apparatus
JP6622681B2 (ja) 音素崩れ検出モデル学習装置、音素崩れ区間検出装置、音素崩れ検出モデル学習方法、音素崩れ区間検出方法、プログラム
CN113823323B (zh) 一种基于卷积神经网络的音频处理方法、装置及相关设备
US10950221B2 (en) Keyword confirmation method and apparatus
CN113450771B (zh) 唤醒方法、模型训练方法和装置
CN110827853A (zh) 语音特征信息提取方法、终端及可读存储介质
CN113450774A (zh) 一种训练数据的获取方法及装置
CN110222331A (zh) 谎言识别方法及装置、存储介质、计算机设备
CN113626614B (zh) 资讯文本生成模型的构造方法、装置、设备及存储介质
CN114267342A (zh) 识别模型的训练方法、识别方法、电子设备及存储介质
Jia et al. A deep learning system for sentiment analysis of service calls
CN115630643A (zh) 语言模型的训练方法、装置、电子设备及存储介质
CN113823326B (zh) 一种高效语音关键词检测器训练样本使用方法
CN115512698A (zh) 一种语音语义分析方法
CN114120425A (zh) 一种情绪识别方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200980127959.9

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09841681

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09841681

Country of ref document: EP

Kind code of ref document: A1