WO2004003888A1 - Method for natural speech recognition based on a generative transformation/phrase structure grammar - Google Patents
Method for natural speech recognition based on a generative transformation/phrase structure grammar
- Publication number
- WO2004003888A1 (PCT/DE2003/002135)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- grammar
- recognized
- words
- phrase
- word
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99934—Query formulation, input preparation, or translation
Definitions
- The invention relates to a method for natural speech recognition based on a generative transformation/phrase structure grammar (GT/PS grammar).
- NLU Natural Language Understanding
- Speech recognition systems with natural language understanding are able to understand a variety of possible utterances and translate them into complex command structures that cause the speech recognition system, e.g. a computer, to carry out certain actions. They do this on the basis of predefined, meaningful sample sentences, which are defined by application developers and so-called dialog designers.
- This collection of sample sentences - also called a "grammar" - includes individual command words as well as complicated nested sentences that make sense at a certain point in the dialog. If the user utters such a sentence, the system understands it with a high degree of certainty and executes the instructions associated with it.
- The grammar is an indispensable component. It is created using a special tool, the so-called Grammar Specification Language (GSL), which is used to specify in advance the words to be understood as well as their combinations and to store them for the speech recognizer.
- GSL Grammar Specification Language
- The predefined sentences are formed from combinations of words that are interchangeable (paradigmatic axis) and combinable (syntagmatic axis). An example of this is shown in FIG. 7. The possible utterances result from the syntagmatic connection of the paradigmatic word combinations.
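For illustration only, a minimal Python sketch of this prior-art approach follows; the word groups are hypothetical placeholders and are not taken from FIG. 7. Every sample utterance is produced by combining the interchangeable word groups (paradigmatic axis) along a fixed word order (syntagmatic axis).

```python
# Sketch of the prior-art "mimetic" grammar: all sample utterances are enumerated
# by combining interchangeable word groups along a fixed word order.
# The word groups below are hypothetical placeholders, not the patent's data.
from itertools import product

paradigms = [
    ["i would like", "i want", "please"],     # opening phrase (paradigmatic group)
    ["a ticket", "a connection"],             # object
    ["to berlin", "to bonn", "to munich"],    # destination
    ["to book", "to reserve"],                # verb
]

def enumerate_utterances(groups):
    """Syntagmatic combination of the paradigmatic word groups."""
    return [" ".join(choice) for choice in product(*groups)]

utterances = enumerate_utterances(paradigms)
print(len(utterances), "sample sentences, e.g.:", utterances[0])
```

Even this toy example already yields 36 sentences; real applications multiply such groups across many subgrammars, which is the combinatorial growth the GT/PS approach described below is intended to avoid.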
- The object of the invention is to provide a method for speech recognition on the basis of a generative transformation/phrase structure grammar which, compared to conventional recognition methods, requires fewer system resources and thereby enables reliable and fast recognition of speech while reducing over-generation.
- A spoken phrase is analyzed for the triphones contained therein; words contained in the spoken phrase are formed from the recognized triphones with the aid of phonetic word databases.
- The linking rules of grammatical sentences are not reproduced at the surface level; instead, the deep structures are represented, which the syntagmatic links of all Indo-European languages follow.
- Each sentence is described using a syntactic model in the form of so-called structure trees.
- The GT/PS grammar is not based on the potential utterances of a specific application, but on the deep structure of the syntax (sentence formation rules) of Indo-European languages. It provides a framework that can be filled with different words and depicts the reality of spoken language better than the previously used "mimetic" approach.
- In the GT/PS model, the subgrammars can be reduced to e.g. 30 subgrammars in just two hierarchical levels.
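The sketch below illustrates what such a two-level hierarchy could look like; the sentence models, phrase categories and words are assumptions made for the example, not the patent's actual subgrammars. The top level holds abstract sentence models over phrase categories, and the single lower level expands each category directly into words.

```python
# Sketch of a two-level grammar hierarchy: level 1 = abstract sentence models over
# phrase categories, level 2 = the words that fill each category.
# All rules and words below are illustrative assumptions.
SENTENCE_MODELS = [
    ("NP", "VP"),   # nominal phrase followed by verbal phrase
]

PHRASES = {
    "NP": [["the", "tariff"], ["a", "connection"]],
    "VP": [["is", "valid"], ["starts", "today"]],
}

def expand(model):
    """Expand one level-1 sentence model into concrete word sequences (level 2)."""
    sequences = [[]]
    for category in model:
        sequences = [seq + words for seq in sequences for words in PHRASES[category]]
    return sequences

for model in SENTENCE_MODELS:
    for sentence in expand(model):
        print(model, "->", " ".join(sentence))
```

Because every additional sentence model or phrase entry sits on one of only two levels, the structure stays flat compared to grammars nested over up to seven subgrammar levels.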
- The new grammar type depicts natural-language expressions in a structured form and is, for example, only around 25% of the size of the previous grammar. Because of its small size, this grammar is easier to maintain, and compilation times drop sharply. The recognition reliability (accuracy) increases and the recognition delay (latency) decreases. Current computer capacities are used better and the performance of the servers increases. In addition, the new grammar is not tied to a specific application; its basic structures can be used for different applications, which increases the homogeneity of the systems and reduces development times.
- The universal code of the deep structure enables use and added value for multilingual language systems on a scale not previously achieved; in particular, the standard Western European languages can be processed with comparatively little effort.
- The new GT/PS grammar is based on current linguistic models that describe natural-language utterances in terms of surface and deep structure.
- GSL Grammar Specification Language
- The GT/PS grammar is much smaller than the previous grammar because it only needs two levels instead of the up to seven subgrammar levels; the number of grammatically incorrect sentences covered by the grammar decreases.
- Figure 1 A triphone analysis as the first step in the recognition process
- Figure 2 Word recognition from the recognized triphones as a second step in the recognition process
- Figure 3 A syntactic reconstruction of the recognized words as the third step of the recognition process
- Figure 4 An example of the structure of the recognized words in
- Figure 5 A sample program for a possible grammar
- Figure 6 An overview of the structure of a PSG grammar
- Figure 7 An example of the formation of word combinations in a grammar according to the prior art.
- Figure 1 shows the first step of speech recognition: the triphone analysis.
- The continuous flow of speech of a person 1 is picked up, e.g. by the microphone of a telephone, and fed to a speech recognizer 2 as an analog signal.
- The analog speech signal is converted into a digital speech signal 3.
- The speech signal contains a variety of triphones, i.e. sound segments, which are compared in the speech recognizer 2 with existing, i.e. predefined, triphone linking rules.
- The existing triphones are stored in a database which contains one or more phonebooks.
- The recognized triphones are then available as a triphone chain 4, e.g. "pro", "rot", "ote", "tel".
- Meaningful words are formed from the recognized triphones.
- The phonetic dictionary 5 can comprise a certain vocabulary from colloquial language as well as a special vocabulary tailored to the respective application.
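As a rough illustration of this second step, the sketch below looks up candidate words by searching for their triphone sequences inside the recognized triphone chain. The dictionary entries and triphone spellings are purely hypothetical placeholders, not the contents of the patent's phonetic dictionary 5.

```python
# Sketch of word formation from a recognized triphone chain: a word is accepted as a
# candidate if its triphone sequence occurs contiguously in the chain.
# Dictionary contents and triphone spellings are illustrative assumptions.
PHONETIC_DICTIONARY = {
    "protel": ["pro", "rot", "ote", "tel"],
    "tarif":  ["tar", "ari", "rif"],
}

def words_from_triphones(triphone_chain, dictionary):
    """Return (start position, word) for every dictionary word whose triphone
    sequence appears contiguously in the recognized triphone chain."""
    hits = []
    for word, triphones in dictionary.items():
        length = len(triphones)
        for start in range(len(triphone_chain) - length + 1):
            if triphone_chain[start:start + length] == triphones:
                hits.append((start, word))
    return sorted(hits)

chain = ["pro", "rot", "ote", "tel", "tar", "ari", "rif"]
print(words_from_triphones(chain, PHONETIC_DICTIONARY))
# -> [(0, 'protel'), (4, 'tarif')]
```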
- The recognized words 7 are reconstructed using the grammar 8.
- The recognized words are each assigned to their part of speech, such as noun, verb, adverb, article, adjective, etc., as shown in FIG. 6.
- The databases 9-15 can contain both the conventional part-of-speech categories mentioned above and special word types, such as a yes/no grammar 9 or telephone numbers 14, 15.
- Detection of DTMF inputs 16 can also be provided.
- The described assignment of part-of-speech types to the recognized words can already take place during the word recognition process.
- Based on their word categories, the recognized words are assigned to a verbal phrase, i.e. a phrase based on a verb, and a nominal phrase, i.e. a phrase based on a noun, cf. Figure 6.
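A minimal sketch of this assignment follows, using an assumed word-category lexicon and assumed category groupings; neither is taken from the databases 9-15 of the patent. Each recognized word is tagged with its part of speech, and the tagged words are then split into a nominal phrase and a verbal phrase.

```python
# Sketch of part-of-speech assignment and phrase grouping.
# The lexicon and the category-to-phrase mapping are illustrative assumptions.
WORD_CATEGORIES = {
    "the": "article", "new": "adjective", "tariff": "noun",
    "starts": "verb", "today": "adverb",
}

NOMINAL = {"article", "adjective", "noun"}   # categories forming the nominal phrase
VERBAL = {"verb", "adverb"}                  # categories forming the verbal phrase

def build_phrases(words):
    tagged = [(word, WORD_CATEGORIES.get(word, "unknown")) for word in words]
    nominal_phrase = [word for word, cat in tagged if cat in NOMINAL]
    verbal_phrase = [word for word, cat in tagged if cat in VERBAL]
    return tagged, nominal_phrase, verbal_phrase

tagged, np_words, vp_words = build_phrases(["the", "new", "tariff", "starts", "today"])
print("tagged:", tagged)
print("nominal phrase:", np_words)
print("verbal phrase:", vp_words)
```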
- In step 18, the objects for multitasking are linked to the corresponding voice-controlled application.
- Each object 19 comprises a target sentence stored in the grammar 8, more precisely a sentence model.
- A sentence model can, for example, be defined by a word order such as "subject, verb, object" or "object, verb, subject".
- Many other sentence structures are stored in this general form in the grammar 8. If the word categories of the recognized words 7 correspond to the order of one of the predefined sentence models, the words are assigned to the associated object and the sentence is considered recognized. In other words, each sentence model comprises a number of variables assigned to the different word categories, and these variables are filled with the correspondingly categorized recognized words 7.
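The sketch below illustrates this matching step under assumed sentence models and an assumed recognized input; it is not the patent's implementation. The category sequence of the recognized words is compared with the stored models, and on a match the model's variables are filled with the corresponding words.

```python
# Sketch of sentence-model matching: compare the category sequence of the recognized
# words against stored models and fill the matching model's variables.
# Models and example input are illustrative assumptions.
SENTENCE_MODELS = {
    "statement": ["subject", "verb", "object"],
    "inverted":  ["object", "verb", "subject"],
}

def match_sentence(recognized):
    """recognized: list of (word, category) pairs in spoken order."""
    categories = [category for _, category in recognized]
    for name, model in SENTENCE_MODELS.items():
        if categories == model:
            # fill the model's variables with the recognized words
            bindings = {var: word for var, (word, _) in zip(model, recognized)}
            return name, bindings
    return None, {}                     # no model matched: sentence not recognized

recognized = [("i", "subject"), ("need", "verb"), ("information", "object")]
print(match_sentence(recognized))
# -> ('statement', {'subject': 'i', 'verb': 'need', 'object': 'information'})
```

On a successful match, the sentence counts as recognized and the filled variables can be handed to the object linked to the voice-controlled application in step 18.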
- The method uses the traditional Grammar Specification Language (GSL), but structures the stored sentences in an innovative way. It is based on the rules of phrase structure grammar and on the concept of a generative transformation grammar.
- GSL Grammar Specification Language
- The GT/PS grammar is therefore based on a theoretical model that is suitable for determining the abstract principles of natural-language utterances.
- It opens up the possibility, for the first time, of reversing the abstraction of sentence formation rules and making them concrete as a prediction of the utterances of application users. This enables a systematic approach to speech recognition grammars, which until now have been based on the intuitive accumulation of example sentences.
- A central feature of both conventional and GT/PS grammars is the hierarchical nesting into so-called subgrammars, which at the highest level combine individual words and variables to form an entire sentence.
- The GT/PS grammar is much smaller and hierarchically much clearer than the previously known grammars.
- Almost exclusively "meaningful" sentences are stored in the new grammar, so that the degree of over-generation, i.e. stored sentences that are incorrect in the natural-language sense, decreases. This, in turn, is the prerequisite for improved recognition performance, since the application only has to choose between a few stored alternatives.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/519,653 US7548857B2 (en) | 2002-06-28 | 2003-06-26 | Method for natural voice recognition based on a generative transformation/phrase structure grammar |
EP03761435A EP1518221A1 (de) | 2002-06-28 | 2003-06-26 | Verfahren zur naturlichen spracherkennung auf basis einer generativen transformations-/phrasenstruktur-grammatik |
JP2004516499A JP4649207B2 (ja) | 2002-06-28 | 2003-06-26 | 生成変形句構造文法に基づいて自然言語認識をする方法 |
CA2493429A CA2493429C (en) | 2002-06-28 | 2003-06-26 | Method for natural voice recognition based on a generative transformation/phrase structure grammar |
AU2003250272A AU2003250272A1 (en) | 2002-06-28 | 2003-06-26 | Method for natural voice recognition based on a generative transformation/phrase structure grammar |
IL165957A IL165957A (en) | 2002-06-28 | 2004-12-23 | Method for natural voice recognition based on a generative transformation/phrase structure grammar |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10229207A DE10229207B3 (de) | 2002-06-28 | 2002-06-28 | Verfahren zur natürlichen Spracherkennung auf Basis einer Generativen Transformations-/Phrasenstruktur-Grammatik |
DE10229207.8 | 2002-06-28 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004003888A1 (de) | 2004-01-08 |
WO2004003888B1 WO2004003888B1 (de) | 2004-03-25 |
Family
ID=29795990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2003/002135 WO2004003888A1 (de) | 2002-06-28 | 2003-06-26 | Verfahren zur natürlichen spracherkennung auf basis einer generativen transformations-/phrasenstruktur-grammatik |
Country Status (10)
Country | Link |
---|---|
US (1) | US7548857B2 (de) |
EP (1) | EP1518221A1 (de) |
JP (1) | JP4649207B2 (de) |
CN (1) | CN1315109C (de) |
AU (1) | AU2003250272A1 (de) |
CA (1) | CA2493429C (de) |
DE (1) | DE10229207B3 (de) |
IL (1) | IL165957A (de) |
PL (1) | PL373306A1 (de) |
WO (1) | WO2004003888A1 (de) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2141692A1 (de) | 2008-06-26 | 2010-01-06 | Deutsche Telekom AG | Automatisierte Sprachgesteuerte Unterstützung eines Benutzers |
US8040103B2 (en) | 2005-08-19 | 2011-10-18 | City University Of Hong Kong | Battery charging apparatus with planar inductive charging platform |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7295981B1 (en) * | 2004-01-09 | 2007-11-13 | At&T Corp. | Method for building a natural language understanding model for a spoken dialog system |
KR101195812B1 (ko) * | 2010-07-08 | 2012-11-05 | 뷰모션 (주) | 규칙기반 시스템을 이용한 음성인식 시스템 및 그 방법 |
US9817813B2 (en) * | 2014-01-08 | 2017-11-14 | Genesys Telecommunications Laboratories, Inc. | Generalized phrases in automatic speech recognition systems |
CN110164449B (zh) * | 2019-04-26 | 2021-09-24 | 安徽美博智能科技有限公司 | 语音识别的空调机控制方法及装置 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6182039B1 (en) * | 1998-03-24 | 2001-01-30 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus using probabilistic language model based on confusable sets for speech recognition |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0242743B1 (de) * | 1986-04-25 | 1993-08-04 | Texas Instruments Incorporated | Spracherkennungssystem |
EP0590173A1 (de) * | 1992-09-28 | 1994-04-06 | International Business Machines Corporation | Computersystem zur Spracherkennung |
JPH0769710B2 (ja) * | 1993-03-23 | 1995-07-31 | 株式会社エイ・ティ・アール自動翻訳電話研究所 | 自然言語解析方法 |
US6070140A (en) * | 1995-06-05 | 2000-05-30 | Tran; Bao Q. | Speech recognizer |
WO1998009228A1 (en) * | 1996-08-29 | 1998-03-05 | Bcl Computers, Inc. | Natural-language speech control |
US6163768A (en) * | 1998-06-15 | 2000-12-19 | Dragon Systems, Inc. | Non-interactive enrollment in speech recognition |
JP2950823B1 (ja) * | 1998-09-29 | 1999-09-20 | 株式会社エイ・ティ・アール音声翻訳通信研究所 | 音声認識誤り訂正装置 |
JP3581044B2 (ja) * | 1999-05-20 | 2004-10-27 | 株式会社東芝 | 音声対話処理方法、音声対話処理システムおよびプログラムを記憶した記憶媒体 |
US7120582B1 (en) * | 1999-09-07 | 2006-10-10 | Dragon Systems, Inc. | Expanding an effective vocabulary of a speech recognition system |
US6633846B1 (en) * | 1999-11-12 | 2003-10-14 | Phoenix Solutions, Inc. | Distributed realtime speech recognition system |
DE10032255A1 (de) * | 2000-07-03 | 2002-01-31 | Siemens Ag | Verfahren zur Sprachanalyse |
US7058567B2 (en) * | 2001-10-10 | 2006-06-06 | Xerox Corporation | Natural language parser |
-
2002
- 2002-06-28 DE DE10229207A patent/DE10229207B3/de not_active Expired - Lifetime
-
2003
- 2003-06-26 WO PCT/DE2003/002135 patent/WO2004003888A1/de active Application Filing
- 2003-06-26 CA CA2493429A patent/CA2493429C/en not_active Expired - Fee Related
- 2003-06-26 JP JP2004516499A patent/JP4649207B2/ja not_active Expired - Fee Related
- 2003-06-26 PL PL03373306A patent/PL373306A1/xx not_active Application Discontinuation
- 2003-06-26 CN CNB038152843A patent/CN1315109C/zh not_active Expired - Fee Related
- 2003-06-26 US US10/519,653 patent/US7548857B2/en not_active Expired - Fee Related
- 2003-06-26 EP EP03761435A patent/EP1518221A1/de not_active Ceased
- 2003-06-26 AU AU2003250272A patent/AU2003250272A1/en not_active Abandoned
-
2004
- 2004-12-23 IL IL165957A patent/IL165957A/en active IP Right Grant
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6182039B1 (en) * | 1998-03-24 | 2001-01-30 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus using probabilistic language model based on confusable sets for speech recognition |
Non-Patent Citations (3)
Title |
---|
BATES M ET AL: "The BBN/HARC spoken language understanding system", ICASSP-93. 1993 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (CAT. NO.92CH3252-4), PROCEEDINGS OF ICASSP '93, MINNEAPOLIS, MN, USA, 27-30 APRIL 1993, 1993, New York, NY, USA, IEEE, USA, pages 111 - 114 vol.2, XP002258536, ISBN: 0-7803-0946-4 * |
See also references of EP1518221A1 * |
YE-YI WANG ET AL: "A unified context-free grammar and n-gram model for spoken language processing", 2000 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING. PROCEEDINGS (CAT. NO.00CH37100), PROCEEDINGS OF 2000 INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ISTANBUL, TURKEY, 5-9 JUNE 2000, 2000, Piscataway, NJ, USA, IEEE, USA, pages 1639 - 1642 vol.3, XP002258537, ISBN: 0-7803-6293-4 * |
Also Published As
Publication number | Publication date |
---|---|
IL165957A0 (en) | 2006-01-15 |
CN1666254A (zh) | 2005-09-07 |
WO2004003888B1 (de) | 2004-03-25 |
JP4649207B2 (ja) | 2011-03-09 |
US20060161436A1 (en) | 2006-07-20 |
DE10229207B3 (de) | 2004-02-05 |
CN1315109C (zh) | 2007-05-09 |
CA2493429A1 (en) | 2004-01-08 |
EP1518221A1 (de) | 2005-03-30 |
IL165957A (en) | 2010-11-30 |
CA2493429C (en) | 2011-09-13 |
JP2005539249A (ja) | 2005-12-22 |
PL373306A1 (en) | 2005-08-22 |
AU2003250272A1 (en) | 2004-01-19 |
US7548857B2 (en) | 2009-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE69607601T2 (de) | System und verfahren zur spracherkennung mit automatischer erzeugung einer syntax | |
DE69527229T2 (de) | Sprachinterpretator mit einem Kompiler mit vereinheitlicher Grammatik | |
DE69923191T2 (de) | Interaktive anwenderschnittstelle mit spracherkennung und natursprachenverarbeitungssystem | |
DE602005001125T2 (de) | Erlernen der Aussprache neuer Worte unter Verwendung eines Aussprachegraphen | |
DE69622565T2 (de) | Verfahren und vorrichtung zur dynamischen anpassung eines spracherkennungssystems mit grossem wortschatz und zur verwendung von einschränkungen aus einer datenbank in einem spracherkennungssystem mit grossem wortschatz | |
DE69937176T2 (de) | Segmentierungsverfahren zur Erweiterung des aktiven Vokabulars von Spracherkennern | |
DE69822296T2 (de) | Mustererkennungsregistrierung in einem verteilten system | |
EP1466317B1 (de) | Betriebsverfahren eines automatischen spracherkenners zur sprecherunabhängigen spracherkennung von worten aus verschiedenen sprachen und automatischer spracherkenner | |
DE69829235T2 (de) | Registrierung für die Spracherkennung | |
DE69834553T2 (de) | Erweiterbares spracherkennungssystem mit einer audio-rückkopplung | |
DE60222093T2 (de) | Verfahren, modul, vorrichtung und server zur spracherkennung | |
DE60016722T2 (de) | Spracherkennung in zwei Durchgängen mit Restriktion des aktiven Vokabulars | |
DE69923379T2 (de) | Nicht-interaktive Registrierung zur Spracherkennung | |
DE69914131T2 (de) | Positionshandhabung bei der Spracherkennung | |
EP1611568B1 (de) | Dreistufige einzelworterkennung | |
DE69220825T2 (de) | Verfahren und System zur Spracherkennung | |
DE19847419A1 (de) | Verfahren zur automatischen Erkennung einer buchstabierten sprachlichen Äußerung | |
EP0804788B1 (de) | Verfahren zur spracherkennung | |
DE102006036338A1 (de) | Verfahren zum Erzeugen einer kontextbasierten Sprachdialogausgabe in einem Sprachdialogsystem | |
DE69519229T2 (de) | Verfahren und vorrichtung zur anpassung eines spracherkenners an dialektische sprachvarianten | |
DE60026366T2 (de) | Spracherkennung mit einem komplementären sprachmodel für typischen fehlern im sprachdialog | |
EP1097447A1 (de) | Verfahren und vorrichtung zur erkennung vorgegebener schlüsselwörter in gesprochener sprache | |
DE69331247T2 (de) | Spracherkennungssystem | |
DE10229207B3 (de) | Verfahren zur natürlichen Spracherkennung auf Basis einer Generativen Transformations-/Phrasenstruktur-Grammatik | |
EP2034472B1 (de) | Spracherkennungsverfahren und Spracherkennungsvorrichtung |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
B | Later publication of amended claims |
Effective date: 20040112 |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2003761435 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 165957 Country of ref document: IL |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2493429 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20038152843 Country of ref document: CN Ref document number: 2004516499 Country of ref document: JP Ref document number: 373306 Country of ref document: PL |
|
WWP | Wipo information: published in national office |
Ref document number: 2003761435 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006161436 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10519653 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10519653 Country of ref document: US |