JPH05281989A - Speech language interface device - Google Patents

Speech language interface device

Info

Publication number
JPH05281989A
JPH05281989A JP4080657A JP8065792A
Authority
JP
Japan
Prior art keywords
morpheme
speech
grammar
speech recognition
language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP4080657A
Other languages
Japanese (ja)
Other versions
JPH0782349B2 (en)
Inventor
Masaaki Nagata
昌明 永田
Toshiyuki Takezawa
寿幸 竹沢
Takuma Morimoto
逞 森元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
A T R JIDO HONYAKU DENWA KENKYUSHO KK
Original Assignee
A T R JIDO HONYAKU DENWA KENKYUSHO KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by A T R JIDO HONYAKU DENWA KENKYUSHO KK filed Critical A T R JIDO HONYAKU DENWA KENKYUSHO KK
Priority to JP4080657A priority Critical patent/JPH0782349B2/en
Publication of JPH05281989A publication Critical patent/JPH05281989A/en
Publication of JPH0782349B2 publication Critical patent/JPH0782349B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Landscapes

  • Machine Translation (AREA)

Abstract

PURPOSE: To provide a morpheme-level spoken language interface device equipped with a function for reconciling the differences between the grammar for speech recognition and the grammar for language processing. CONSTITUTION: A speech recognition device 1 recognizes input speech using the grammar for speech recognition stored in a memory 4 and outputs a morpheme sequence to a morpheme adjustment unit 2. The morpheme adjustment unit 2 converts the morpheme sequence output by the speech recognition device 1, using the morpheme adjustment rules stored in a memory 5, into a morpheme sequence to be input to a syntactic analysis device 3. The syntactic analysis device 3 takes the morpheme sequence output by the morpheme adjustment unit 2 as input, performs syntactic analysis using the grammar for syntactic analysis stored in a memory 6, and outputs a semantic representation of the input sentence.

Description

Detailed Description of the Invention

[0001]

[Industrial Field of Application] The present invention relates to a spoken language interface device, and more particularly to a spoken language interface device that, when a speech recognition device and a syntactic analysis device each use a separate grammar optimized for its own purpose, passes the morpheme sequence from the speech recognition device to the syntactic analysis device while reconciling the differences between the two grammars, such as the underlying linguistic framework, vocabulary segmentation, and part-of-speech system, thereby avoiding redundant morphological analysis in the two devices and reducing the load on the spoken language system as a whole.

[0002]

[Description of the Prior Art] A spoken language system has conventionally required processing in two phases: "speech recognition" and "language processing". Speech recognition uses acoustic processing to generate symbolic data, such as a phoneme string, character string, or word string, from uttered speech data. Language processing, by contrast, uses symbol processing to generate a syntactic or semantic structure from the symbolic data output by speech recognition.

[0003] In recent years, however, speech recognition has come to perform not only the conventional acoustic processing but also language processing that draws on knowledge of morphology and syntax, in order to improve its accuracy. As a result, a speech recognition device can now output a morpheme sequence annotated with grammatical information such as part of speech and conjugation. A language processing device, on the other hand, although it varies somewhat with its application (machine translation, dialogue interface, and so on), generally consists of a syntactic analysis unit (in the broad sense) and a semantic analysis unit. The syntactic analysis unit in turn consists of a morphological analysis unit, which handles morphological information, and a syntactic analysis unit in the narrow sense, which handles syntactic information.

[0004]

[Problems to Be Solved by the Invention] The grammar for speech recognition and the grammar for language processing, however, serve different purposes and therefore differ in their underlying linguistic framework, vocabulary segmentation, part-of-speech system, and so on. Consequently, the language processing device has conventionally been unable to use the morpheme information output by the speech recognition device as is, and has had to redo the analysis from the level of the phoneme string or character string. In other words, the conventional interface scheme between a speech recognition device and a language processing device is redundant and inefficient, because morphological analysis is performed twice, once in each device.

[0005] The reason the speech recognition device and the syntactic analysis unit of the language processing device adopt separate grammars is that the two place different demands on their grammars (language models). Speech recognition uses a language model as a means of efficiently restricting the search space and raising recognition accuracy, whereas syntactic analysis aims at obtaining a linguistically valid analytic structure. Moreover, the grammar for speech recognition is used for prediction, that is, for generation, while the grammar for language processing is used for analysis. Integrating the two therefore involves the same difficulties as a bidirectional grammar. Under present circumstances, using the same grammar for both purposes would sacrifice either the execution efficiency of the speech recognition device or the generality of the language processing grammar.

[0006] Furthermore, because the grammar for speech recognition is chiefly concerned with describing adjacency constraints between morphemes, it tends to use short-unit words as vocabulary items, whereas the grammar for language processing, whose goal is to obtain a semantic representation, often adopts long-unit words that express semantically coherent units. For example, the sentence-final expression 「なくてはなりません」 ("must") is treated by the speech recognition device as the morpheme sequence 「なく/て/は/なり/ませ/ん」, whereas the language processing unit regards it as a single auxiliary verb expressing the mood of obligation. A slight mismatch therefore exists between the morpheme units of the two.

[0007] A principal object of the present invention is therefore to provide a spoken language interface device that is equipped with a function for reconciling the differences between the grammar for speech recognition and the grammar for language processing (grammatical framework, vocabulary segmentation, part-of-speech system, and so on) and that can thereby avoid duplicated morphological analysis in the speech recognition device and the language processing device.

[0008]

[Means for Solving the Problems] The present invention is a spoken language interface device for delivering the morpheme sequence of input speech recognized by continuous speech recognition means to syntactic analysis means in the form of that means' input data. It comprises storage means for storing morpheme adjustment rules for converting the morpheme sequence output by the continuous speech recognition means into a morpheme sequence for the syntactic analysis means, and morpheme adjustment means for converting the morpheme sequence output by the continuous speech recognition means, with reference to the morpheme adjustment rules stored in the storage means, so that its vocabulary segmentation and its grammatical information, such as part of speech and conjugation, match the vocabulary items of the syntactic analysis means.

[0009]

[Operation] In the spoken language interface device according to the present invention, when differences in grammatical framework, vocabulary segmentation, part-of-speech system, and the like exist between the grammar for speech recognition and the grammar for language processing, the morpheme adjustment means follows the morpheme adjustment rules to convert the morpheme data output by the continuous speech recognition device into a form that the language processing device can handle as input, thereby avoiding duplicated morphological analysis in the continuous speech recognition device and the language processing device.

[0010]

[Embodiment] FIG. 1 is a schematic block diagram of an embodiment of the present invention. First, the configuration of this embodiment is described with reference to FIG. 1. Input speech is recognized by speech recognition device 1 using the grammar for speech recognition stored in memory 4, and a morpheme sequence is output to morpheme adjustment unit 2. Morpheme adjustment unit 2 converts the morpheme sequence output by speech recognition device 1, using the morpheme adjustment rules stored in memory 5, into a morpheme sequence suitable as input to syntactic analysis device 3. Syntactic analysis device 3 takes the morpheme sequence output by morpheme adjustment unit 2 as input, performs syntactic analysis using the grammar for syntactic analysis stored in memory 6, and outputs a semantic representation of the input sentence.

[0011] FIGS. 2 to 4 show a concrete example of the spoken language interface scheme according to this embodiment of the invention. Specifically, FIG. 2 shows the morpheme sequence output by the speech recognition device for the sentence 「登録用紙をお送り頂かなくてはなりません」 ("You must send us the registration form"); FIG. 3 shows an example of the morpheme adjustment rules applied to this sentence; and FIG. 4 shows the morpheme sequence after conversion by the morpheme adjustment unit into a form suitable as input to the syntactic analysis device.

[0012] Next, the concrete operation of this embodiment is described with reference to FIGS. 1 to 4. Speech recognition device 1 recognizes continuous speech uttered phrase by phrase and outputs a sequence of recognition candidates. For example, the sentence 「登録用紙をお送り頂かなくてはなりません」 is input divided into two phrases, 「登録用紙を」 and 「お送り頂かなくてはなりません」, and a recognition result such as that shown in FIG. 2 is obtained. The data in FIG. 2 form a lattice structure so that multiple candidates can be handled for the same speech interval; each character string enclosed in double quotation marks is a recognized morpheme, :pos indicates the morpheme's part of speech, and :prob indicates the likelihood assigned by speech recognition.
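FIG. 2 itself is not reproduced in this text, but the lattice it describes (multiple candidates per speech interval, each morpheme carrying a :pos part of speech and a :prob likelihood) can be modeled roughly as follows. The concrete edges, intervals, and probability values are invented for illustration and do not come from the figure.

```python
# A guessed, simplified model of the FIG. 2 recognition lattice: each edge
# spans one speech interval [start, start + 1) and carries a morpheme, a
# part of speech (:pos), and a recognition likelihood (:prob).

from dataclasses import dataclass

@dataclass
class Edge:
    start: int
    end: int
    morpheme: str
    pos: str
    prob: float

lattice = [
    Edge(0, 1, "登録", "noun", 0.92),
    Edge(0, 1, "盗録", "noun", 0.41),   # competing candidate, same interval
    Edge(1, 2, "用紙", "noun", 0.88),
    Edge(2, 3, "を",   "particle", 0.97),
]

def best_path(lattice, length):
    # Simplified candidate selection: assumes unit-length intervals and
    # greedily keeps the most likely edge for each interval.
    path = []
    for t in range(length):
        candidates = [e for e in lattice if e.start == t]
        path.append(max(candidates, key=lambda e: e.prob))
    return path

print([e.morpheme for e in best_path(lattice, 3)])  # ['登録', '用紙', 'を']
```

A real system would score whole paths through the lattice rather than picking per-interval winners; the greedy pass is only meant to show how :prob disambiguates competing candidates.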

[0013] Morpheme adjustment unit 2 applies the morpheme adjustment rules stored in memory 5 to the morpheme sequence output by speech recognition device 1 and performs conversion at the level of morpheme units. For example, given morpheme adjustment rules such as those shown in FIG. 3, morpheme adjustment unit 2 converts the part of speech of each morpheme from that of the grammar for speech recognition stored in memory 4 to that of the grammar for syntactic analysis stored in memory 6, and at the same time merges the sentence-final morpheme sequence 「なく/て/は/なり/ませ/ん」 into a single auxiliary verb. Phrase boundaries are also removed, and a morpheme sequence such as that shown in FIG. 4 is output. The morpheme adjustment rules stored in memory 5 are rewriting rules written as an annotated regular grammar. The left-hand side of each rule is a vocabulary item of syntactic analysis device 3, and the right-hand side is an annotated regular expression over the vocabulary items of speech recognition device 1. The annotation part can specify surface form, part of speech, conjugation type, and conjugated form. Because rewriting rules written within the expressive power of a regular grammar can be converted into a deterministic finite-state automaton, morpheme adjustment unit 2 operates at high speed. Syntactic analysis device 3 takes the morpheme sequence output by morpheme adjustment unit 2 as input, performs syntactic analysis using the grammar for syntactic analysis stored in memory 6, and outputs a semantic representation of the input sentence.
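Paragraph [0013] notes that rewriting rules restricted to a regular grammar can be converted into a deterministic finite-state automaton. A minimal sketch of that idea, for the single rule that merges the romanized sequence naku/te/wa/nari/mase/n (「なく/て/は/なり/ませ/ん」) into one auxiliary verb, might look as follows; the rule notation and all names are assumptions, since the actual format appears only in FIG. 3.

```python
# Sketch: compile one rewrite rule into a deterministic finite-state matcher
# and run it over a morpheme sequence. Romanized morphemes and tag names
# are illustrative stand-ins for the patent's FIG. 3 notation.

RHS = ["naku", "te", "wa", "nari", "mase", "n"]   # right-hand side: recognizer morphemes
LHS = ("nakutewanarimasen", "auxv-obligation")    # left-hand side: parser vocabulary item

# DFA over states 0..len(RHS); state i means "i morphemes of RHS matched".
TRANSITIONS = {(i, RHS[i]): i + 1 for i in range(len(RHS))}

def rewrite(morphemes):
    # Run the DFA at each position; on reaching the accepting state, emit
    # the long-unit item, otherwise copy the morpheme unchanged. (A partial
    # match simply falls back to copying; no backtracking in this sketch.)
    out, i = [], 0
    while i < len(morphemes):
        state, j = 0, i
        while j < len(morphemes) and (state, morphemes[j][0]) in TRANSITIONS:
            state = TRANSITIONS[(state, morphemes[j][0])]
            j += 1
        if state == len(RHS):          # accepting state reached
            out.append(LHS)
            i = j
        else:
            out.append(morphemes[i])
            i += 1
    return out

seq = [("okuri", "verb"), ("itadaka", "auxv"),
       ("naku", "adj"), ("te", "prt"), ("wa", "prt"),
       ("nari", "verb"), ("mase", "auxv"), ("n", "auxv")]
print(rewrite(seq))
# [('okuri', 'verb'), ('itadaka', 'auxv'), ('nakutewanarimasen', 'auxv-obligation')]
```

A full implementation would compile all rules into one automaton and also match on the annotations (part of speech, conjugation type, conjugated form), not just the surface form as here.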

[0014]

[Effects of the Invention] As described above, according to the present invention, the morpheme adjustment means at the interface between the speech recognition means and the language processing device converts the morpheme data output by the continuous speech recognition device, in accordance with the morpheme adjustment rules, into a form usable as input to the language processing device. An efficient spoken language interface is thereby realized in which morphological analysis is not performed redundantly by the speech recognition device and the language processing device.

Brief Description of the Drawings

FIG. 1 is a schematic block diagram of an embodiment of the present invention.

FIG. 2 is a diagram showing an example of the morpheme sequence output by the speech recognition device for the sentence 「登録用紙をお送り頂かなくてはなりません」 ("You must send us the registration form").

FIG. 3 is a diagram showing an example of the morpheme adjustment rules applied to the sentence shown in FIG. 2.

FIG. 4 is a diagram showing the morpheme sequence converted by the morpheme adjustment unit into a form suitable as input to the syntactic analysis device.

Explanation of Symbols

1 speech recognition device
2 morpheme adjustment unit
3 syntactic analysis device
4, 5, 6 memories


Claims (1)

[Claims] [Claim 1] A spoken language interface device for delivering a morpheme sequence of input speech recognized by continuous speech recognition means to syntactic analysis means in the form of the input data of said syntactic analysis means, comprising: storage means for storing morpheme adjustment rules for converting the morpheme sequence output by said continuous speech recognition means into a morpheme sequence for said syntactic analysis means; and morpheme adjustment means for converting the morpheme sequence output by said continuous speech recognition means, with reference to the morpheme adjustment rules stored in said storage means, so that its vocabulary segmentation and its grammatical information, such as part of speech and conjugation, match the vocabulary items of said syntactic analysis means.
JP4080657A 1992-04-02 1992-04-02 Spoken language interface device Expired - Lifetime JPH0782349B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4080657A JPH0782349B2 (en) 1992-04-02 1992-04-02 Spoken language interface device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4080657A JPH0782349B2 (en) 1992-04-02 1992-04-02 Spoken language interface device

Publications (2)

Publication Number Publication Date
JPH05281989A true JPH05281989A (en) 1993-10-29
JPH0782349B2 JPH0782349B2 (en) 1995-09-06

Family

ID=13724432

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4080657A Expired - Lifetime JPH0782349B2 (en) 1992-04-02 1992-04-02 Spoken language interface device

Country Status (1)

Country Link
JP (1) JPH0782349B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7409342B2 (en) 2003-06-30 2008-08-05 International Business Machines Corporation Speech recognition device using statistical language model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6455596A (en) * 1987-08-26 1989-03-02 Matsushita Electric Ind Co Ltd Voice recognition
JPS6456493A (en) * 1987-08-27 1989-03-03 Matsushita Electric Ind Co Ltd Voice recognition
JPH01260494A (en) * 1988-04-12 1989-10-17 Matsushita Electric Ind Co Ltd Voice recognizing method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6455596A (en) * 1987-08-26 1989-03-02 Matsushita Electric Ind Co Ltd Voice recognition
JPS6456493A (en) * 1987-08-27 1989-03-03 Matsushita Electric Ind Co Ltd Voice recognition
JPH01260494A (en) * 1988-04-12 1989-10-17 Matsushita Electric Ind Co Ltd Voice recognizing method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7409342B2 (en) 2003-06-30 2008-08-05 International Business Machines Corporation Speech recognition device using statistical language model
US7603277B2 (en) 2003-06-30 2009-10-13 Nuance Communications, Inc. Speech recognition device using statistical language model
US7698137B2 (en) 2003-06-30 2010-04-13 Nuance Communications, Inc. Speech recognition device using statistical language model

Also Published As

Publication number Publication date
JPH0782349B2 (en) 1995-09-06

Similar Documents

Publication Publication Date Title
US6952665B1 (en) Translating apparatus and method, and recording medium used therewith
US7860719B2 (en) Disfluency detection for a speech-to-speech translation system using phrase-level machine translation with weighted finite state transducers
Kitano Phi DM-Dialog: an experimental speech-to-speech dialog translation system
US6374224B1 (en) Method and apparatus for style control in natural language generation
US8566076B2 (en) System and method for applying bridging models for robust and efficient speech to speech translation
Rayner The spoken language translator
WO1999063456A1 (en) Language conversion rule preparing device, language conversion device and program recording medium
WO2009014465A2 (en) System and method for multilingual translation of communicative speech
Galley et al. Hybrid natural language generation for spoken dialogue systems
Rojc et al. Time and space-efficient architecture for a corpus-based text-to-speech synthesis system
Block The language components in Verbmobil
JP3441400B2 (en) Language conversion rule creation device and program recording medium
JP3009636B2 (en) Spoken language analyzer
Isotani et al. An automatic speech translation system on PDAs for travel conversation
JPH05281989A (en) Speech language interface device
Maskey et al. A phrase-level machine translation approach for disfluency detection using weighted finite state transducers
Watanabe et al. An automatic interpretation system for travel conversation.
US20020143525A1 (en) Method of decoding telegraphic speech
Isotani et al. Speech-to-speech translation software on PDAs for travel conversation
JP2001117583A (en) Device and method for voice recognition, and recording medium
Hirose et al. Statistical language modeling with prosodic boundaries and its use for continuous speech recognition.
JP2001117922A (en) Device and method for translation and recording medium
JP2001100788A (en) Speech processor, speech processing method and recording medium
Penn et al. ALE for speech: a translation prototype.
KR20220050496A (en) Apparatus and method for automatically changing declarative text into conversational text

Legal Events

Date Code Title Description
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 19960227

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080906

Year of fee payment: 13

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090906

Year of fee payment: 14


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100906

Year of fee payment: 15

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110906

Year of fee payment: 16

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120906

Year of fee payment: 17

EXPY Cancellation because of completion of term