JP2005316022A - Navigation device and program - Google Patents

Navigation device and program

Info

Publication number
JP2005316022A
JP2005316022A (application number JP2004132091A)
Authority
JP
Japan
Prior art keywords
voice recognition
conversation
recording
data
navigation device
Prior art date
Legal status
Pending
Application number
JP2004132091A
Other languages
Japanese (ja)
Inventor
Naoki Miura
直樹 三浦
Current Assignee
Aisin AW Co Ltd
Original Assignee
Aisin AW Co Ltd
Priority date
Filing date
Publication date
Application filed by Aisin AW Co Ltd filed Critical Aisin AW Co Ltd
Priority to JP2004132091A priority Critical patent/JP2005316022A/en
Publication of JP2005316022A publication Critical patent/JP2005316022A/en
Pending legal-status Critical Current

Abstract

PROBLEM TO BE SOLVED: To enable a place to be set without the user having to consciously speak toward a voice recognition device.

SOLUTION: A navigation device is provided with a storage means (5) that stores map data including place names; a recording means (11) that records the contents of conversation; a voice recognition means (12) that, when a voice recognition button (3) is operated, recognizes the contents of the conversation recorded by the recording means (11); an analysis means (13) that analyzes the character string data obtained by the voice recognition; and a control means (14) that extracts place information by comparing the analyzed data with the place name data stored in the storage means.

COPYRIGHT: (C) 2006, JPO & NCIPI

Description

The present invention relates to a navigation device provided with voice recognition means, and to a program for such a device.

A conventional navigation device has been proposed that selects a guidance target by applying voice recognition to the user's speech (Patent Document 1). With such a conventional device, when a place the user wants to visit comes up in the passengers' conversation, the user must utter the place name again so that it can be recognized and set as the destination.
JP 2003-329476 A

With the method of Patent Document 1, however, even though the desired place has already come up in conversation, the user must speak it again toward the voice recognition device of the navigation device. This is time-consuming, and because speaking toward a navigation device is unnatural, usability has not been sufficient.

The present invention aims to solve the above problem, and its object is to make it possible to extract point information by applying voice recognition to what is said naturally in ordinary conversation.
To this end, the present invention provides a navigation device comprising: storage means in which map data including point names is stored; recording means for recording the contents of conversation; voice recognition means for recognizing the contents of the conversation recorded by the recording means when a voice recognition button is operated; analysis means for analyzing the character string data obtained by the voice recognition; and control means for extracting point information by comparing the data analyzed by the analysis means with the point name data stored in the storage means.
The present invention also provides a program installed in a navigation device having voice recognition means, the program comprising the steps of: recording the contents of a conversation by recording means; recognizing the recorded conversation by the voice recognition means when a voice recognition button is operated; analyzing the character string data obtained by the voice recognition; and extracting point information by comparing the analyzed data with point name data.

According to the present invention, a point name that has come up naturally in conversation can be used, so the user can set a point without consciously speaking toward the voice recognition device.

Hereinafter, an embodiment will be described.
FIG. 1 shows an example of a navigation device according to this embodiment. The device comprises: a central processing unit 1 serving as the navigator processing means, which performs map display processing, route search processing, the display/voice guidance processing required for route guidance, voice recognition processing and sentence analysis, and control of the whole system; an input device 2 (keyboard, mouse, joystick, microphone, etc.) for entering information such as the departure point, destination, and registered points; an operation button 3 for starting voice recognition; a current position detection device 4 that detects information on the current position of the host vehicle; an information storage device 5 in which map data, point data, the navigation data needed for route search, the display/voice guidance data needed for route guidance, and programs (applications and/or an OS) for map display, route search, voice guidance, and so on are recorded; an output device 6 (display, speaker, etc.) that outputs information related to route guidance; and an information transmission/reception device 7 that transmits and receives information related to vehicle travel, such as road information and traffic information, detects information on the vehicle's current position, and transmits and receives information on the current position.

The central processing unit 1 of this embodiment comprises: recording means 11 that records the natural in-vehicle conversation of the driver and passengers; voice recognition means 12 that, when the voice recognition operation button is operated, recognizes the conversation recorded up to that point; analysis means 13 that analyzes the character string data obtained by the voice recognition; and control means 14 that reads out the point name data stored in the information storage device 5, compares it with the data analyzed by the analysis means 13, and extracts point information.

The recording means 11 records the conversation into a memory (buffer) in units of, for example, a few minutes, continuously overwriting the oldest content. When the voice recognition operation button is operated, the voice recognition means 12 performs voice recognition on, for example, the last one minute of the recorded conversation; the resulting character string data is analyzed by the analysis means 13 and segmented into words. By comparing the words thus obtained with the contents of the point name database, point information that came up in natural conversation can be extracted and used to set a point.
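The rolling buffer with retroactive read-back described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the class name, chunk granularity, and window lengths are assumptions made for the sketch.

```python
from collections import deque

class ConversationRecorder:
    """Rolling buffer keeping only the most recent conversation audio,
    as in the recording means (11): old content is overwritten as new
    chunks arrive, and a recent window can be read back on demand."""

    def __init__(self, window_seconds=180, chunk_seconds=1):
        # Keep roughly window_seconds of audio; the deque discards
        # the oldest chunk automatically once it is full.
        self.chunk_seconds = chunk_seconds
        self.buffer = deque(maxlen=window_seconds // chunk_seconds)

    def record_chunk(self, audio_chunk):
        # Called continuously while the conversation is in progress.
        self.buffer.append(audio_chunk)

    def last(self, seconds=60):
        # On a button press, return the most recent `seconds` of audio
        # (e.g. the last one minute) for voice recognition.
        n = seconds // self.chunk_seconds
        return list(self.buffer)[-n:]
```

With a 5-second window, recording 10 one-second chunks retains only the last 5, and `last(3)` returns the 3 most recent.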

Next, the flow of the processing that extracts point information from the recorded conversation will be described with reference to FIG. 2.
In FIG. 2(a), assume that user A and user B are having the following conversation.

User A: How is work these days?
User B: Busy.
User A: I see. Wouldn't it be nice to go out for some good food?
User B: How about Hanjoten, that ramen shop we went to the other day?
User A: Sounds good.
This conversation is recorded as-is and accumulated in the buffer. If the voice recognition button is operated after "Sounds good," the conversation from before the button press, for example the last one minute, is read from the buffer and voice recognition is performed. Assume that this recognition yields the character string "How about Hanjoten, that ramen shop we went to the other day?" (FIG. 2(b)).

The analysis means parses this character string and segments it into words: "この前 / いった / ラーメン屋 / の / ハンジョウテン / なんて / どう" ("the other day / went to / ramen shop / [particle] / Hanjoten / how about") (FIG. 2(c)). The segmented words are then compared with the point name database stored in the information storage device 5 (FIG. 2(d)), and the extracted point information is automatically registered as a destination candidate. If the comparison yields multiple exact or similar matches, they are displayed as a list for the user to choose from; when the user actually wants to go, a destination can be set simply by selecting from the candidates that came up in conversation.
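The segmentation-and-lookup step can be illustrated with a small sketch. The token list mirrors the example of FIG. 2(c); the database contents and the matching function are hypothetical simplifications (a real system would use a morphological analyzer for Japanese segmentation and would also handle near matches).

```python
# Tokens as segmented by the analysis means (13) for the example
# utterance in FIG. 2(c).
tokens = ["この前", "いった", "ラーメン屋", "の", "ハンジョウテン", "なんて", "どう"]

# A toy stand-in for the point name database of FIG. 2(d);
# the entries and coordinates are invented for illustration.
place_db = {
    "ハンジョウテン": {"category": "ramen shop", "lat": 35.17, "lon": 136.91},
    "コウラクエン": {"category": "ramen shop", "lat": 35.18, "lon": 136.90},
}

def match_places(tokens, place_db):
    """Compare each segmented word against the point name database
    and return the matching point names as destination candidates."""
    return [t for t in tokens if t in place_db]

print(match_places(tokens, place_db))  # prints ['ハンジョウテン']
```

In the patent's flow, a multi-entry result at this point would be shown as a list for the user to pick from.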

FIG. 3 shows the processing flow for extracting point information.
The conversation between the users is recorded into the buffer as voice data (step S1), and it is determined whether a voice recognition operation has occurred (step S2). When the voice recognition button is operated, voice recognition is performed on the recorded data in the buffer (step S3). The character string data obtained by the recognition is then analyzed (step S4) and segmented into words. Each word is compared with the point name database, matching place names are displayed as a list (step S5), and the user selects and sets the desired one.

According to the present invention, the user can set a point without consciously speaking toward the voice recognition device, so the industrial utility of the invention is extremely high.

FIG. 1 is a diagram showing an example of the navigation device according to this embodiment.
FIG. 2 is a diagram explaining the extraction of point information by voice recognition and sentence analysis.
FIG. 3 is a diagram showing the processing flow for extracting point information.

Explanation of symbols

1 ... central processing unit; 2 ... input device; 3 ... operation button; 4 ... current position detection device; 5 ... information storage device; 6 ... output device; 7 ... information transmission/reception device; 11 ... recording means; 12 ... voice recognition means; 13 ... analysis means; 14 ... control means.

Claims (2)

A navigation device comprising:
storage means in which map data including point names is stored;
recording means for recording the contents of conversation;
voice recognition means for recognizing the contents of the conversation recorded by the recording means when a voice recognition button is operated;
analysis means for analyzing character string data obtained by voice recognition by the voice recognition means; and
control means for extracting point information by comparing the data analyzed by the analysis means with the point name data stored in the storage means.
A program installed in a navigation device provided with voice recognition means, the program comprising the steps of:
recording the contents of a conversation by recording means;
recognizing the contents of the recorded conversation by the voice recognition means when a voice recognition button is operated;
analyzing the character string data obtained by voice recognition by the voice recognition means; and
extracting point information by comparing the analyzed data with point name data.
JP2004132091A 2004-04-27 2004-04-27 Navigation device and program Pending JP2005316022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004132091A JP2005316022A (en) 2004-04-27 2004-04-27 Navigation device and program

Publications (1)

Publication Number Publication Date
JP2005316022A (en) 2005-11-10

Family

ID=35443551

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004132091A Pending JP2005316022A (en) 2004-04-27 2004-04-27 Navigation device and program

Country Status (1)

Country Link
JP (1) JP2005316022A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01112299A (en) * 1987-07-16 1989-04-28 Fujitsu Ltd Voice recognition equipment
JP2001092493A (en) * 1999-09-24 2001-04-06 Alpine Electronics Inc Speech recognition correcting system
JP2002182690A (en) * 2000-12-14 2002-06-26 Sharp Corp Voice operated device and method for discriminating voice command in this device
JP2004045900A (en) * 2002-07-12 2004-02-12 Toyota Central Res & Dev Lab Inc Voice interaction device and program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007256198A (en) * 2006-03-24 2007-10-04 Denso It Laboratory Inc Navigation system and navigation method
JP4653684B2 (en) * 2006-03-24 2011-03-16 株式会社デンソーアイティーラボラトリ Navigation system and navigation method
JP2008014818A (en) * 2006-07-06 2008-01-24 Denso Corp Operation control device and program
US8965697B2 (en) 2011-11-10 2015-02-24 Mitsubishi Electric Corporation Navigation device and method
JP2015207191A (en) * 2014-04-22 2015-11-19 株式会社エヌ・ティ・ティ・データ Foreign language conversation comprehension support device and method and program for foreign language conversation comprehension support


Legal Events

A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621); effective date: 2007-02-22
A977 Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007); effective date: 2009-11-09
A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131); effective date: 2009-11-20
A02 Decision of refusal (JAPANESE INTERMEDIATE CODE: A02); effective date: 2010-04-02