JPH02289900A - Japanese voice input assisting device - Google Patents

Japanese voice input assisting device

Info

Publication number
JPH02289900A
JPH02289900A JP1062851A JP6285189A
Authority
JP
Japan
Prior art keywords
reference information
input
word
kana
kanji
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP1062851A
Other languages
Japanese (ja)
Inventor
Masaki Yamashina
正樹 山階
Sueji Miyahara
末治 宮原
Fumihiko Kobashi
小橋 史彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP1062851A priority Critical patent/JPH02289900A/en
Publication of JPH02289900A publication Critical patent/JPH02289900A/en
Pending legal-status Critical Current

Abstract

PURPOSE: To facilitate the work of confirming an input sentence by automatically extracting ambiguous parts from the result of KANA (Japanese syllabary)-KANJI (Chinese character) conversion, retrieving reference information on the words in those parts, and displaying it together with the conversion result.
CONSTITUTION: When an input operator enters, by KANA-KANJI conversion, a Japanese sentence received by voice, the highly ambiguous parts of the conversion result are detected using a case pattern storage part 5; a reference information storage part 6, which stores usage examples, equivalent expressions, and meaning explanations of the words in the ambiguous parts, is searched; and an explanatory sentence for each word is generated from this reference information and displayed. The input operator can thereby confirm the speaker's intention efficiently.

Description

[Detailed Description of the Invention]

[Industrial Application Field]
The present invention relates to a Japanese voice input support device that assists an input operator in confirming an input sentence with a speaker using only voice, in situations such as when the input sentence is received by dictation or over the telephone and the Japanese text is entered by an input operator who is not the speaker.

[Prior Art and Problems]
When a speaker conveys a Japanese sentence to an input operator through a voice-only channel such as a telephone, and the input operator, at a location different from the speaker, enters the sentence by kana-kanji conversion based on the utterance, ambiguities such as homophones may arise. In such cases the input operator usually confirms the speaker's intention by thinking of example sentences or semantic explanations that use the ambiguous word and conveying them to the speaker. However, the input operator may fail to understand the speaker's intention, or, when an unfamiliar term is used, may be unable to come up with an appropriate example sentence or semantic explanation, so that the speaker may need considerable effort to convey the intended sentence to the input operator. Conventional Japanese word processors provide presentation of synonyms and semantic descriptions as input-support and revision-support functions, but these functions assume that authors enter their own text. They do not address the case where the speaker and the input operator are different people; that is, they provide neither a guidance function that automatically extracts highly ambiguous parts nor a function that presents reference information for those parts.

[Means for Solving the Problems]
When a speaker conveys a Japanese sentence to an input operator through a voice-only channel such as a telephone and the input operator enters the sentence at a different location based on the utterance, ambiguities such as homophones may arise. To support the input operator in confirming the intended sentence with the speaker in such cases, the present invention automatically extracts the ambiguous parts of the conversion result, retrieves reference information (usage examples, paraphrases, semantic explanations, and the like) for the words in those parts, and displays it together with the conversion result, with the aim of making it easier to confirm the input sentence.

[Embodiment]
An embodiment of the present invention is described below with reference to the drawings. Reference numeral 1 denotes an input display unit that provides alphanumeric and kana input and displays the conversion result and reference information; 2 denotes an input display control unit that controls the input display unit 1; 3 denotes a kana-kanji conversion unit that outputs all candidates; 4 denotes an ambiguous part extraction unit that extracts highly ambiguous parts using case patterns and inter-sentence causal relation rules; 5 denotes a case pattern storage unit; 6 denotes a reference information storage unit that stores the reference information for each word together with flags indicating the relation between the reference information and its search key; 7 denotes a reference information retrieval unit that retrieves usage examples, paraphrases, and semantic explanations using a word written in kanji as the key and generates an explanatory sentence based on the flags, held in the reference information storage unit 6, that indicate the relation between the search key and the reference information; and 8 denotes a device control unit that controls the entire device.

The operation of the device of the present invention is explained below using an example sentence. When the input operator receives the input sentence 「ゴゴタナ力トウキツウニツク」 over the telephone or the like, the kana character string is entered through the input display unit 1, and the kana-kanji conversion unit 3 converts it into kanji. In this case, candidates such as those shown in FIG. 3 are obtained, and the ambiguous part extraction unit 4 refers to the case pattern storage unit 5 and extracts the ambiguous parts of the conversion result.
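The components numbered above pass conversion candidates through ambiguity detection and into flag-annotated reference information. As a rough illustration only, the following Python sketch models the two kinds of data that flow between them: conversion candidates and reference entries keyed by kanji spelling with a relation flag. None of the names or types come from the patent; they are assumptions made for the sketch.

```python
# A minimal sketch (not the patent's implementation) of the data the embodiment
# describes: conversion candidates from the kana-kanji conversion unit (3) and
# reference-information entries, keyed by kanji spelling, whose flags record how
# each entry relates to its search key. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RelationFlag(Enum):
    USAGE_EXAMPLE = "usage example"                    # 用例
    PARAPHRASE = "paraphrase"                          # 言い換え表現
    SEMANTIC_EXPLANATION = "semantic explanation"      # 意味的説明
    SUPERORDINATE_PLACE = "superordinate place name"   # 上位の地名


@dataclass
class ReferenceEntry:
    key: str              # kanji spelling used as the search key (e.g. "京")
    text: str             # the stored reference text (e.g. "京都")
    flag: RelationFlag    # how the text relates to the key


@dataclass
class Candidate:
    surface: str             # kanji rendering of one conversion candidate
    reading: str             # the input kana
    ambiguous: bool = False  # set later by the ambiguous part extraction unit (4)


# Reference information storage unit (6), sketched as a simple in-memory index.
reference_store: dict[str, list[ReferenceEntry]] = {
    "京": [ReferenceEntry("京", "京都", RelationFlag.PARAPHRASE)],
    "等": [ReferenceEntry("等", "など", RelationFlag.PARAPHRASE)],
    "右京": [ReferenceEntry("右京", "京都", RelationFlag.SUPERORDINATE_PLACE)],
}

if __name__ == "__main__":
    for entry in reference_store["京"]:
        print(entry.key, entry.flag.value, entry.text)
```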

The case pattern storage unit 5 stores case patterns such as those shown in FIG. 2. For each verb contained in the conversion result, the case pattern observed in the conversion result is compared with the case patterns stored in the case pattern storage unit 5. Because the case pattern of 「突く」 (to poke) does not match, its likelihood as a candidate is lowered. 「付く」 (to attach) remains a candidate because its 「に」 case can take a very broad semantic category including concrete and abstract objects, whereas 「着く」 (to arrive) has a pattern in which the 「に」 case takes a locative, and in every conversion result the 「に」 case is a place name. The ambiguity among 「着く」, 「付く」, and 「突く」 is therefore judged to be low, 「着く」 is judged to be the first-ranked candidate, and reference information for that word is not displayed automatically. On the other hand, each kana-kanji conversion result for 「トウキョウ」 contains a place name, and the case patterns cannot assign likelihoods to them, so this part is judged to be highly ambiguous and reference information is retrieved for the words it contains.
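The following Python fragment is a rough sketch, under assumed data structures, of the case-pattern check just described: each verb candidate's observed case frame is compared with the stored patterns, mismatches are penalized, and a segment whose surviving candidates the patterns cannot separate is flagged as highly ambiguous. The patterns, scores, and threshold are illustrative only and are not taken from the patent.

```python
# Case pattern storage unit (5), sketched as: verb -> allowed semantic
# categories of its に case.
CASE_PATTERNS = {
    "着く": {"place"},                          # "arrive at <place>"
    "付く": {"concrete", "abstract", "place"},  # very broad に case
    "突く": {"concrete"},                       # "poke <concrete object>"
}


def score_candidates(verbs: list[str], ni_category: str) -> dict[str, float]:
    """Give each verb candidate a likelihood based on its stored case pattern."""
    scores = {}
    for verb in verbs:
        allowed = CASE_PATTERNS.get(verb, set())
        if ni_category not in allowed:
            scores[verb] = 0.1   # pattern mismatch: lower the likelihood
        elif allowed == {ni_category}:
            scores[verb] = 1.0   # pattern specifically expects this category
        else:
            scores[verb] = 0.5   # pattern is too broad to discriminate
    return scores


def is_highly_ambiguous(scores: dict[str, float]) -> bool:
    """A segment is ambiguous when no candidate clearly outranks the rest."""
    ranked = sorted(scores.values(), reverse=True)
    return len(ranked) > 1 and ranked[0] - ranked[1] < 0.3


if __name__ == "__main__":
    # The に case in the example is a place name, so 着く wins clearly.
    scores = score_candidates(["着く", "付く", "突く"], "place")
    print(scores, "ambiguous:", is_highly_ambiguous(scores))
```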
The reference information storage unit 6 stores the reference information for each word together with flags indicating what kind of reference information it is, and the reference information retrieval unit 7 retrieves the reference information for the word and generates explanatory sentences according to those flags. For 「京」 the entry 「京都」 is stored, and for 「等」 the entry 「など」 is stored, each with a flag indicating a paraphrase; the reference information retrieval unit 7 detects that these entries are paraphrases and generates the explanatory sentences 「京都という意味の京」 ("京 meaning Kyoto") and 「などという意味の等」 ("等 meaning 'etc.'"). For 「右京」, 「京都」 is stored with a flag indicating that it is the superordinate place name, and in this case the explanatory sentence 「京都の中の右京」 ("右京 within Kyoto") is generated.
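As a minimal sketch of this explanation generation, the fragment below lets the relation flag select a sentence template and fills it with the stored reference text. Only the two relations mentioned in the text are modeled; the data layout and function names are assumptions.

```python
# Flag-driven explanation generation for the reference information retrieval
# unit (7): the relation flag picks the template, the stored text fills it.
PARAPHRASE = "paraphrase"              # 言い換え表現
SUPERORDINATE_PLACE = "superordinate"  # 上位の地名

# Reference information storage unit (6): word -> (reference text, relation flag).
REFERENCE_INFO = {
    "京": ("京都", PARAPHRASE),
    "等": ("など", PARAPHRASE),
    "右京": ("京都", SUPERORDINATE_PLACE),
}


def explain(word: str) -> str:
    """Generate an explanatory sentence for `word` from its stored entry."""
    text, flag = REFERENCE_INFO[word]
    if flag == PARAPHRASE:
        return f"{text}という意味の{word}"   # e.g. 京都という意味の京
    if flag == SUPERORDINATE_PLACE:
        return f"{text}の中の{word}"         # e.g. 京都の中の右京
    raise ValueError(f"unknown relation flag: {flag}")


if __name__ == "__main__":
    for w in ("京", "等", "右京"):
        print(w, "->", explain(w))
```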
FIG. 3 shows an example of the screen display. The conversion result is displayed together with the reference information for the parts judged to be highly ambiguous (東京: 首都の東京 "Tokyo the capital"; 等: 「など」という意味の「等」; 京: 京都という意味の京; 右京: 京都の中の右京), and the input operator confirms the speaker's intention by conveying this reference information to the speaker. Further, for proper nouns such as personal names many homophones exist, so for each kanji character making up a proper noun, common words containing that character and its kun reading are stored in the reference information storage unit 6. When homophones exist in the conversion result, the input display control unit 2 displays this reference information on the input display unit 1. An example is shown below.

Example:
幸治 [幸福 明治]
耕司 [たがやす つかさ]
康二 [康 数字の二]
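The following sketch illustrates how such per-character hints might be stored and formatted for display. The hint entries reproduce the example above; the table structure and function are assumptions, not the patent's data format.

```python
# Per-kanji hints for proper-noun homophones, as in the example above: each
# character maps to a common word containing it or to its kun reading, and the
# input display control unit (2) would show these hints next to the candidates.
KANJI_HINTS = {
    "幸": "幸福",       # common word containing the character
    "治": "明治",
    "耕": "たがやす",   # kun reading
    "司": "つかさ",
    "康": "康",
    "二": "数字の二",   # "the numeral two"
}


def hints_for_name(name: str) -> str:
    """Format the per-character hints for a proper-noun candidate."""
    return f"{name} [{' '.join(KANJI_HINTS.get(ch, ch) for ch in name)}]"


if __name__ == "__main__":
    for name in ("幸治", "耕司", "康二"):
        print(hints_for_name(name))
```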
Furthermore, even for a word that the ambiguous part extraction unit 4 has judged to have low ambiguity, the operator can designate the word with a cursor or the like on the input display unit 1 and issue a reference information search instruction. The input display control unit 2 then detects the word and passes a reference information search request to the device control unit 8; the device control unit 8 passes the search request together with the search key to the reference information retrieval unit 7, which executes the search and returns the reference information to the input display control unit 2, whereupon the reference information for the word is displayed.
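A simplified sketch of this on-demand lookup path follows: the input display control unit forwards a search request to the device control unit, which passes it with the search key to the retrieval unit and returns the result for display. The class and method names are assumptions made for illustration.

```python
class ReferenceRetrievalUnit:              # reference information retrieval unit (7)
    def __init__(self, store: dict[str, str]):
        self.store = store

    def search(self, key: str) -> str:
        return self.store.get(key, "(no reference information)")


class DeviceControlUnit:                   # device control unit (8)
    def __init__(self, retrieval: ReferenceRetrievalUnit):
        self.retrieval = retrieval

    def handle_search_request(self, key: str) -> str:
        return self.retrieval.search(key)


class InputDisplayControlUnit:             # input display control unit (2)
    def __init__(self, device_control: DeviceControlUnit):
        self.device_control = device_control

    def on_word_selected(self, word: str) -> None:
        # The operator pointed at a word and asked for its reference information.
        info = self.device_control.handle_search_request(word)
        print(f"{word}: {info}")           # stands in for the input display unit (1)


if __name__ == "__main__":
    retrieval = ReferenceRetrievalUnit({"着く": "目的地に着く (to arrive at a destination)"})
    display = InputDisplayControlUnit(DeviceControlUnit(retrieval))
    display.on_word_selected("着く")
```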
[Effects of the Invention]
As explained above, according to the present invention, the highly ambiguous parts that arise when an input operator enters, by kana-kanji conversion, a Japanese sentence received by voice are detected using case patterns; the reference information storage unit, which stores usage examples, paraphrases, and semantic explanations of those words, is searched; and an explanatory sentence for each word is generated from that reference information and presented. This has the advantage of making the input operator's work of confirming the speaker's intention more efficient. Although the embodiment described here involves a human input operator, it is also possible to use a speech recognition device in place of the input operator to recognize the input sentence, generate a confirmation sentence for the conversion result, and convey that confirmation sentence to the speaker by speech synthesis, which has the advantage that the input of Japanese conveyed by voice can be made unattended.
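The unattended variant mentioned above can be pictured as a small pipeline: recognition, conversion with ambiguity detection, confirmation-sentence generation, and synthesis back to the speaker. The sketch below is speculative; every component interface is a placeholder and none of it is specified by the patent.

```python
def recognize_speech(audio: bytes) -> str:
    """Placeholder for a speech recognition device returning a kana string."""
    return "トウキョウニツク"


def kana_kanji_candidates(kana: str) -> list[str]:
    """Placeholder for the kana-kanji conversion unit returning all candidates."""
    return ["東京に着く", "東京に付く"]


def build_confirmation_sentence(candidates: list[str]) -> str:
    """Build a confirmation question from the ambiguous candidates."""
    options = "、".join(candidates)
    return f"次のどちらの意味ですか: {options}"


def synthesize_speech(text: str) -> None:
    """Placeholder for speech synthesis conveying the confirmation to the speaker."""
    print("[TTS]", text)


if __name__ == "__main__":
    kana = recognize_speech(b"")
    candidates = kana_kanji_candidates(kana)
    synthesize_speech(build_confirmation_sentence(candidates))
```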

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing an embodiment of the present invention, FIG. 2 is an example of the case pattern storage unit, and FIG. 3 is an example of the display screen.
1: input display unit; 2: input display control unit; 3: kana-kanji conversion unit; 4: ambiguous part extraction unit; 5: case pattern storage unit; 6: reference information storage unit; 7: reference information retrieval unit; 8: device control unit.
Patent applicant: Nippon Telegraph and Telephone Corporation

Claims (1)

[Claims]
A Japanese voice input support device comprising:
an input display unit having input means capable of alphanumeric and kana input, and display means configured to display the kana-kanji conversion result containing all candidates for an input kana character string, together with usage examples, paraphrases, and semantic explanations of the converted words;
an input display control unit that controls the input display unit;
a kana-kanji conversion unit that outputs all candidates for the input kana character string;
an ambiguous part extraction unit that extracts highly ambiguous parts from the conversion result;
a reference information storage unit that stores, for each word, reference information including usage examples, paraphrases, and semantic explanations, together with flags indicating the semantic relation between the word and its reference information; and
a reference information retrieval unit that retrieves the reference information for a word using the word written in kanji as a search key, and generates an explanatory sentence for the word from the retrieved reference information and the flag indicating the semantic relation between the word serving as the search key and the reference information.
JP1062851A 1989-03-15 1989-03-15 Japanese voice input assisting device Pending JPH02289900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1062851A JPH02289900A (en) 1989-03-15 1989-03-15 Japanese voice input assisting device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1062851A JPH02289900A (en) 1989-03-15 1989-03-15 Japanese voice input assisting device

Publications (1)

Publication Number Publication Date
JPH02289900A true JPH02289900A (en) 1990-11-29

Family

ID=13212226

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1062851A Pending JPH02289900A (en) 1989-03-15 1989-03-15 Japanese voice input assisting device

Country Status (1)

Country Link
JP (1) JPH02289900A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007094344A1 (en) * 2006-02-14 2007-08-23 Nec Corporation Operator work supporting method and device, program, and recording medium

Similar Documents

Publication Publication Date Title
CN107025217B (en) Synonymy-converted sentence generation method, synonymy-converted sentence generation device, recording medium, and machine translation system
JPH02289900A (en) Japanese voice input assisting device
JPS58123129A (en) Converting device of japanese syllabary to chinese character
JPH1011431A (en) Kanji retrieval device and method
JP4007630B2 (en) Bilingual example sentence registration device
JP4136055B2 (en) Similar character string search system and recording medium
KR20110072496A (en) System for searching of electronic dictionary using functionkey and method thereof
JPH0630052B2 (en) Voice recognition display
JPS61128364A (en) Retrieving device of dictionary
JPS62209667A (en) Sentence producing device
JPH027159A (en) Japanese processor
JP3278889B2 (en) Machine translation equipment
JPH03129568A (en) Document processor
JPH03160555A (en) Japanese word input device
JPH04365166A (en) Sentence inspecting device
JPH05165805A (en) Japanese syllabary/chinese character converter
JPS62226270A (en) Sentence preparing device
JPS5840650A (en) Japanese input system
JPS60112175A (en) Abbreviation conversion system of kana (japanese syllabary)/kanji (chinese character) convertor
JPH03256164A (en) Kana/kanji conversion system
JPH06208560A (en) Ambiguous kanji converting device
JPH03225462A (en) Roman character/kanji converter
JPH06259413A (en) Japanese language input system
JPS63133228A (en) Information extracting device
JPH08241315A (en) Word registering mechanism for document processor