CN110556091A - Information providing device - Google Patents

Information providing device

Info

Publication number
CN110556091A
CN110556091A (application CN201910469652.2A)
Authority
CN
China
Prior art keywords
utterance
driver
speed
speech
information providing
Prior art date
Legal status
Pending
Application number
CN201910469652.2A
Other languages
Chinese (zh)
Inventor
米泽拓臣
光成贵宏
熊木优
Current Assignee
Honda Access Corp
Original Assignee
Honda Access Corp
Priority date
Filing date
Publication date
Application filed by Honda Access Corp filed Critical Honda Access Corp
Priority to CN202310642834.1A (publication CN116645949A)
Publication of CN110556091A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L13/10 Prosody rules derived from text; Stress or intonation

Abstract

An information providing device (10) starts voice guidance at a voice utterance speed suited to the driver at an accurate utterance start point (Xs), and has the following configuration. The device comprises: a driver utterance speed setting unit (42A) that sets a driver utterance speed (Sdriv) corresponding to the utterance speed of the driver; an utterance type determination unit (44) that determines to which of a plurality of utterance types (TS), assigned in advance according to the voice utterance speed (Saud) at which voice guidance is output from a speaker (20), the set driver utterance speed (Sdriv) belongs; and an utterance start point calculation unit (50) that calculates the utterance start point (Xs) of the voice guidance from the voice utterance speed (Saud) corresponding to the determined utterance type (TS) and the number of uttered characters of the voice guidance.

Description

Information providing device
Technical Field
The present invention relates to an information providing device that starts voice guidance from a speaker to a driver of a traveling vehicle when the vehicle reaches a sound emission start point (spot).
Background
Japanese Patent Application Laid-Open No. 2015-158573 (hereinafter, JPA 2015-158573) of patent document 1 discloses a vehicle voice response system in which an information processing unit operates an external device, such as the vehicle air conditioner, in accordance with a voice input command from an occupant, and determines the intonation and utterance speed of the vehicle's response voice reporting the operation result based on the intonation and utterance speed of the occupant's voice input command (paragraphs 0051-0054 of JPA 2015-158573).
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open Publication No. 2015-158573
Disclosure of Invention
However, the above-described vehicle voice response system of the background art has the problem that it does not operate unless the user issues a voice input command.
The present invention has been made in view of the above problem, and an object thereof is to provide an information providing device capable of starting voice guidance at a voice utterance speed corresponding to the driver at an accurate utterance start point.
One aspect of the present invention is an information providing device that starts voice guidance from a speaker to the driver of a vehicle when the traveling vehicle reaches an utterance start point, the information providing device including: a driver utterance speed setting unit that sets a driver utterance speed corresponding to the utterance speed of the driver; an utterance type determination unit that determines to which of a plurality of utterance types, assigned in advance according to the voice utterance speed at which the voice guidance is output from the speaker, the set driver utterance speed belongs; and an utterance start point calculation unit that calculates the utterance start point of the voice guidance from the voice utterance speed corresponding to the determined utterance type and the number of uttered characters of the voice guidance.
Effects of the Invention
According to the present invention, it is possible to start voice guidance at a voice utterance speed corresponding to a driver at an accurate utterance start point.
The above objects, features and advantages can be easily understood from the following embodiments described with reference to the accompanying drawings.
Drawings
Fig. 1 is a block diagram showing an example of the configuration of an information providing apparatus according to the embodiment.
Fig. 2 is an internal view of the interior of the vehicle mounted with the information providing device shown in fig. 1, as viewed from the rear of the vehicle interior.
Fig. 3 is a table showing the contents of the utterance type table stored in the utterance type storage unit in fig. 1.
Fig. 4 is an exemplary view of a traffic condition used to explain the operation of the information providing apparatus according to the embodiment.
Fig. 5 is a flowchart used for explaining the operation of the information providing apparatus according to the embodiment.
Fig. 6 is an explanatory diagram showing an example of a method of calculating a utterance starting point for ending voice guidance at the same utterance ending point.
Detailed Description
Hereinafter, an information providing apparatus according to the present invention will be described in detail by way of embodiments with reference to the accompanying drawings.
[ constitution ]
Fig. 1 is a block diagram showing an example of the configuration of an information providing apparatus 10 according to the embodiment. Fig. 2 is an internal view of the interior of the vehicle 12 mounted with the information providing device 10 shown in fig. 1, as viewed from the rear in the vehicle.
As shown in fig. 1, the information providing apparatus 10 basically comprises a navigation device 14 and, electrically connected to it, a microphone 16, a power switch 18, a speaker 20, a GPS antenna 22 for capturing satellite radio waves from GNSS satellites (for example, GPS satellites), and a vehicle speed sensor 24.
The power switch 18 is a switch that replaces a conventional ignition switch and can switch between a key-off mode, an ACC power mode, and a start mode for starting the power system.
As shown in the example of fig. 2, the microphone 16 is provided in a steering wheel 26 in the vehicle 12, and the power switch 18 is provided below the center of an instrument panel 28. The speaker 20 is provided on the kick panels of the two front doors, and the GPS antenna 22 (not shown in fig. 2) is provided inside the upper surface of the instrument panel 28. In addition, a built-in microphone of the navigation device 14 may be used as the microphone 16.
The navigation device 14 is disposed in the center of the front surface of the instrument panel 28 and includes a touch display 30 that serves as both a touch panel 30A and a display 30B.
A turn signal handle (direction indicator) 31 is provided on the left side of the steering column that covers the shaft of the steering wheel 26.
A vehicle speed sensor 24 that outputs a vehicle speed Vv is provided on an axle, not shown.
As shown in fig. 1, the navigation device 14 includes a gyro sensor 32 and a direction sensor 34 for autonomous navigation or the like, a map database 36 containing specific road information, and a GPS receiving unit 38 that detects the position of the vehicle 12 (own vehicle).
The navigation device 14 also includes: a voice guidance sound emission control unit (hereinafter also referred to as a control unit) 40 that is a computer having a processor such as a CPU and a memory and that realizes a function by the CPU executing a program stored in the memory; a driver utterance speed setting learning unit (driver utterance speed setting unit 42A or driver utterance speed learning unit 42B) 42; a sound emission type determination unit 44; and a sound emission type storage section 46. The control unit 40 includes a sound emission start point calculation unit 50 and a sound emission end point calculation unit 52.
Fig. 3 shows the contents of the utterance type table 60 stored in the utterance type storage unit 46 of the navigation device 14.
In the utterance type table 60, the utterance types TS, realized by speech rate conversion in which the control section 40 changes the playback speed by signal processing without changing the pitch, are assigned to three types: "slow", "normal", and "fast". The "slow" voice utterance speed Saud is set to 7.5 [mora/sec], the "normal" Saud to 8.5 [mora/sec], and the "fast" Saud to 9.5 [mora/sec].
Further, 1 [mora/sec] is a unit representing the number of characters uttered per second when the speech is written in hiragana.
When the driver utterance speed Sdriv, set or learned as described later, is less than 8.0 [mora/sec], the utterance type TS is determined to be "slow" (Saud = 7.5 [mora/sec]); when Sdriv is 8.0 [mora/sec] or more and less than 9.0 [mora/sec], the type is "normal" (Saud = 8.5 [mora/sec]); and when Sdriv is 9.0 [mora/sec] or more, the type is "fast" (Saud = 9.5 [mora/sec]).
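The threshold logic above can be sketched as a small function. This is an illustrative sketch, not code from the patent: the function and dictionary names are my own, and only the thresholds and speeds come from the utterance type table of Fig. 3.

```python
# Mapping from utterance type TS to voice utterance speed Saud [mora/sec],
# following the utterance type table 60 (Fig. 3).
UTTERANCE_SPEEDS = {"slow": 7.5, "normal": 8.5, "fast": 9.5}

def determine_utterance_type(sdriv: float) -> str:
    """Classify a driver utterance speed Sdriv [mora/sec] into a type TS."""
    if sdriv < 8.0:
        return "slow"
    if sdriv < 9.0:
        return "normal"
    return "fast"  # Sdriv >= 9.0

# The guidance speed actually used is the Saud of the determined type:
saud = UTTERANCE_SPEEDS[determine_utterance_type(8.2)]  # -> 8.5
```

A driver measured at 8.2 [mora/sec] thus receives "normal" guidance at 8.5 [mora/sec].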
[ actions ]
The operation of the information providing apparatus 10 according to the embodiment configured basically as described above will be described with reference to the flowchart of fig. 5, taking the traffic condition shown in fig. 4 as an example of voice guidance.
Fig. 4 is an explanatory diagram for calculating the distance Ds from the intersection 54, that is, the utterance start point Xs at which the speaker 20 of the information providing apparatus 10 starts the voice guidance "please turn left at the next intersection" (18 hiragana characters: "つぎのこうさてんをさせつしてください") when the vehicle 12 traveling on the road 53 toward the intersection 54 is to turn left there, taking as an example the case where the vehicle speed Vv is 40 [km/h].
In this case, the utterance end point Xe is set at a distance De of 30 [m] before the intersection 54, where the driver operates the turn signal handle 31 (performs the left turn).
In step S1 of fig. 5, the control unit 40 determines whether the power switch 18 is set to the start mode (power switch start) by the driver while the vehicle 12 is stopped.
When the start mode is detected (step S1: YES), in step S2 the control unit 40 prompts the driver, for a predetermined time of, for example, several seconds, by voice from the speaker 20 or by display on the display 30B, to input whether he or she is an elderly person, for example, a person aged 70 or over.
When an input indicating "I am an elderly person" is detected from the microphone 16 or the touch panel 30A (step S2: YES), in step S3 the driver utterance speed setting unit 42A sets the driver utterance speed Sdriv to the default for elderly people, 7.5 [mora/sec] (utterance type TS: "slow"), and the process proceeds to step S4.
On the other hand, when the determination in step S2 is negative (step S2: NO), that is, when no input is made by the driver from the microphone 16 or the touch panel 30A within the above several seconds, or when an input indicating that the driver is not an elderly person is made, in step S5 the driver utterance speed learning unit 42B prompts the driver, by voice from the speaker 20 and display on the display 30B, to utter a fixed sentence within a predetermined time of, for example, several seconds.
Here, the fixed sentence is, for example, "Good morning." or the like; the sentence registered in advance can be changed arbitrarily.
If the driver's utterance of the fixed sentence is not detected within the predetermined time (step S5: NO), it is determined in step S6 whether the utterance type TS of the previous time is stored in the utterance type storage unit 46. If it is not (step S6: NO), the driver utterance speed setting unit 42A sets the driver's utterance type TS to the default "slow" in step S7, and the setting is stored in the utterance type storage unit 46 in step S8.
When the utterance type TS of the previous time is stored (step S6: YES), the process proceeds to step S9.
It should be noted that even when the driver inputs in step S2 that he or she is not an elderly person, the utterance type TS is still set to the default "slow" if the utterance type of the previous time is not stored in the utterance type storage unit 46 (step S6: NO).
On the other hand, when it is detected by the microphone 16 in step S5 that the driver has uttered the fixed sentence (step S5: YES), the driver utterance speed learning unit 42B learns (acquires) the driver utterance speed Sdriv [mora/sec] in step S10.
Next, in step S4, the utterance type determination unit 44 determines the utterance type TS of the driver with reference to the utterance type table 60 shown in fig. 3.
That is, in step S4 the utterance type TS is determined to be "slow" when the driver utterance speed Sdriv learned in step S10 is less than 8.0 (Sdriv < 8.0), "normal" when 8.0 ≤ Sdriv < 9.0, and "fast" when Sdriv ≥ 9.0.
Further, when the driver utterance speed Sdriv is set to 7.5 [mora/sec] in step S3, the utterance type TS is determined to be "slow" in step S4.
Next, in step S8, the utterance type determination unit 44 rewrites and stores the utterance type TS determined in step S4, or set in step S7, into the utterance type storage unit 46.
After the utterance type TS is stored in step S8, or when the determination of step S6 is affirmative (the previous utterance type TS is stored), running of the vehicle 12 is started.
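The setting/learning flow of steps S2 to S10 can be summarized as follows. This is a hedged sketch of the flowchart logic, not the patent's implementation: the function name, argument names, and the use of `None` to encode "no input" are my own conventions.

```python
def set_driver_speed(is_elderly, learned_sdriv, stored_type):
    """Return (Sdriv, TS) following steps S2-S10 of Fig. 5.

    is_elderly:    True / False, or None if no input within the timeout
    learned_sdriv: Sdriv [mora/sec] learned from the fixed sentence, or None
    stored_type:   previously stored utterance type TS, or None
    """
    speeds = {"slow": 7.5, "normal": 8.5, "fast": 9.5}
    if is_elderly:                       # step S3: default for elderly people
        return 7.5, "slow"
    if learned_sdriv is not None:        # steps S10 -> S4: classify learned speed
        if learned_sdriv < 8.0:
            ts = "slow"
        elif learned_sdriv < 9.0:
            ts = "normal"
        else:
            ts = "fast"
        return learned_sdriv, ts
    if stored_type is not None:          # step S6 YES: reuse the previous type
        return speeds[stored_type], stored_type
    return 7.5, "slow"                   # step S7: default "slow"
```

Note that both the "elderly" branch and the no-input/no-history branch fall back to "slow", matching the elderly-friendly default described in the text.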
Next, in step S9, the navigation device 14 receives GPS radio waves from the GPS satellites to obtain position information of the vehicle 12 (own-vehicle position), and displays the current position of the vehicle, indicated by an arrow or the like, on the map shown on the display 30B.
The navigation device 14 displays the road on which the vehicle is to travel to the destination on the display 30B by the route guidance function, and performs the route guidance and the like for the driver of the traveling vehicle 12 by voice guidance.
Next, in step S11, the utterance end point calculation unit 52 calculates an utterance end point Xe of the next voice guidance based on the GPS reception result in step S9.
When turning left at the intersection 54, the utterance end point Xe is calculated and set at a position 30 [m] before the intersection 54 (see fig. 4).
That is, the voice guidance from the speaker 20 is controlled so that the sound of the last character of "please turn left at the next intersection" (the last character "い" in Japanese) ends at the determined utterance end point Xe of 30 [m], regardless of the voice utterance speed Saud (7.5, 8.5, or 9.5).
In order to control the voice guidance so that it ends at the determined utterance end point Xe of 30 [m] regardless of the voice utterance speed Saud, the utterance start point calculation unit 50, in step S13, reads the driver's utterance type TS stored in the utterance type storage unit 46, acquires the vehicle speed Vv [km/h] from the vehicle speed sensor 24, and calculates the distance Ds from the intersection 54 to the utterance start point Xs, that is, the utterance start point Xs, by substituting into the following equation (1).
Xs(Ds) = Xe(De) + (number of uttered characters / Saud) × Vv … (1)
In the above equation (1), Xs is the utterance start point (Ds is the utterance start distance), Xe is the utterance end point (De is the utterance end distance), the number of uttered characters is the number of hiragana characters, Saud is the voice utterance speed, and Vv is the vehicle speed.
To give a specific example: when the driver's utterance type TS is determined to be "normal" in step S4, the vehicle speed Vv is 40 [km/h] (≈ 11.1 [m/s]), and the number of hiragana characters is 18, the utterance start point Xs is calculated as Xs(Ds) = 30 [m] + (18/8.5) × 11.1 [m/s] ≈ 54 [m] (see fig. 4).
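Equation (1) and the worked example can be checked with a short calculation. This is an illustrative sketch: the function name and the explicit km/h-to-m/s conversion are mine; the numbers come from the description.

```python
def utterance_start_distance(de_m, n_chars, saud, vv_kmh):
    """Distance Ds [m] from the intersection to Xs, per equation (1):
    Xs(Ds) = Xe(De) + (number of uttered characters / Saud) * Vv."""
    vv_ms = vv_kmh * 1000.0 / 3600.0        # vehicle speed in m/s
    return de_m + (n_chars / saud) * vv_ms

# De = 30 m, 18 hiragana characters, Saud = 8.5 ("normal"), Vv = 40 km/h:
ds = utterance_start_distance(30.0, 18, 8.5, 40.0)  # ~53.5 m, i.e. about 54 m
```

The term (18 / 8.5) is the speaking time in seconds; multiplying by the vehicle speed gives the distance covered while speaking, which is added to the 30 m end distance De.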
The calculation point of the utterance start point Xs is, for example, a point several hundred meters before the intersection 54 where the left turn is to be made. At this point, voice guidance such as "turn left at the intersection about 300 meters ahead" is performed.
Fig. 6 shows an example of the method of calculating the utterance start point Xs, illustrating that even when the voice utterance speed Saud differs (7.5, 8.5, or 9.5 [mora/sec]), the voice guidance can be ended at the same (specified) utterance end point Xe by setting the utterance start point Xs (Xs = A, B, C) from the above equation (1), regardless of Saud.
When the utterance type TS is "slow", the utterance start point Xs = A is set at the position farthest from the intersection 54 (0 [m]); when the type is "normal", Xs = B; and when the type is "fast", Xs = C is set at the position closest to the intersection 54.
In any case, in step S14 the voice guidance "please turn left at the next intersection" is uttered at the converted speech rate, so the voice guidance ends at the same utterance end point Xe (the point 30 [m] before the intersection 54).
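The point of Fig. 6, that all three Saud values end at the same Xe, can be confirmed numerically. This is an illustrative sketch applying equation (1); the variable names are mine.

```python
vv = 40.0 * 1000.0 / 3600.0                 # 40 km/h in m/s
starts = {}
for saud in (7.5, 8.5, 9.5):                # slow (A), normal (B), fast (C)
    ds = 30.0 + (18 / saud) * vv            # equation (1): start distance Xs
    end = ds - (18 / saud) * vv             # distance remaining when speech ends
    assert abs(end - 30.0) < 1e-9           # always the same end point Xe
    starts[saud] = ds
# Slower speech starts farther from the intersection: A (slow) > B (normal) > C (fast).
```

The speaking-time term cancels exactly, which is why the end point is independent of the chosen Saud.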
[ conclusion ]
The information providing apparatus 10 according to the above embodiment starts voice guidance from the speaker 20 to the driver of the vehicle 12 when the running vehicle 12 reaches the utterance start point Xs, and includes: a driver utterance speed setting unit 42A that sets a driver utterance speed Sdriv corresponding to the utterance speed of the driver; an utterance type determination unit 44 that determines to which of a plurality of utterance types TS, assigned in advance according to the voice utterance speed Saud at which the voice guidance is output from the speaker 20, the set driver utterance speed Sdriv belongs; and an utterance start point calculation unit 50 that calculates the utterance start point Xs of the voice guidance from the voice utterance speed Saud corresponding to the determined utterance type TS and the number of uttered characters of the voice guidance.
With this configuration, the utterance start point Xs of the voice guidance is calculated from the utterance type TS to which the driver utterance speed Sdriv belongs, the voice utterance speed Saud corresponding to that type, and the number of uttered characters of the voice guidance; therefore, voice guidance at a voice utterance speed Saud suited to the driver can be started at the accurate utterance start point Xs.
In the information providing device 10, the driver utterance speed setting unit 42A may set the driver utterance speed sdiv (sdiv is 7.5[ mora/sec ]) slower than that of the non-elderly person (young person) when the driver is the elderly person (yes in step S2).
Thus, when the driver is an elderly person, a driver utterance speed Sdriv appropriate for elderly people, that is, slower than for a non-elderly person (young person), can be set by speech rate conversion to 7.5 [mora/sec]. The navigation device 14 can thereby perform unhurried voice guidance even when the driver is an elderly person.
In the information providing device 10, the driver utterance speed setting unit 42A may be a driver utterance speed learning unit 42B that learns and sets the driver utterance speed Sdriv (step S10).
By learning and setting the driver utterance speed Sdriv in this way, the utterance type TS corresponding to it can be determined accurately.
In the information providing device 10, the utterance start point calculation unit 50 may calculate the utterance start point Xs so as to also reflect the vehicle speed Vv of the vehicle 12.
In this case, voice guidance can be started in good time according to the vehicle speed Vv as the running state. For example, even when the voice utterance speed Saud is slowed down, the voice guidance can be completed at the required timing.
In the information providing device 10, the utterance start point calculation unit 50 may calculate the utterance start point Xs so that the utterance of the voice guidance ends at the predetermined utterance end point Xe.
Thus, the utterance start point Xs is calculated so that the utterance of the voice guidance ends at the predetermined utterance end point Xe; therefore, the necessary voice guidance can be reliably controlled to end at the utterance end point Xe regardless of the speech-rate-converted voice utterance speed Saud.
In the information providing apparatus 10, the predetermined utterance end point Xe may be set at the timing at which execution of a predetermined action to be fulfilled by the vehicle 12 starts, for example, the timing at which operation of the turn signal handle 31 starts.
This makes it possible to reliably perform the predetermined action at the predetermined utterance end point Xe.
Further, in the information providing apparatus 10, the sound emission type TS is classified into three types of slow, normal, and fast, and the default setting is slow, whereby a setting friendly to elderly people can be made.
The present invention is not limited to the above-described embodiment, and can be applied to, for example, the case of a traffic situation other than an intersection, such as a branch guidance on a toll road, a warning of attention to an incoming traffic stream, and a guidance to a destination on a general road, and can be basically applied to the case where voice guidance by the navigation device 14 depends on a point on a map, and it is needless to say that various configurations can be adopted based on the contents described in the present specification.

Claims (7)

1. An information providing device (10) that starts voice guidance from a speaker (20) to a driver of a running vehicle (12) when the vehicle (12) reaches a speech start point (Xs), the information providing device (10) comprising:
A driver utterance speed setting unit (42A) that sets a driver utterance speed (Sdriv) corresponding to the utterance speed of the driver;
an utterance type determination unit (44) that determines to which of a plurality of utterance types (TS), assigned in advance according to the voice utterance speed (Saud) at which the voice guidance is output from the speaker (20), the set driver utterance speed (Sdriv) belongs; and
an utterance start point calculation unit (50) that calculates the utterance start point (Xs) of the voice guidance from the voice utterance speed (Saud) corresponding to the determined utterance type (TS) and the number of uttered characters of the voice guidance.
2. The information providing apparatus (10) according to claim 1,
The driver utterance speed setting unit (42A) sets, when the driver is an elderly person, the driver utterance speed (Sdriv) slower than when the driver is not an elderly person.
3. The information providing apparatus (10) according to claim 1,
The driver utterance speed setting unit (42A) is a driver utterance speed learning unit (42B) that learns and sets the driver utterance speed (Sdriv).
4. The information providing apparatus (10) according to any one of claims 1 to 3,
The sound emission starting point calculation unit (50) also calculates the sound emission starting point (Xs) in a manner that reflects the vehicle speed (Vv) of the vehicle (12).
5. The information providing apparatus (10) according to claim 4,
The utterance starting point calculation unit (50) calculates the utterance starting point (Xs) so that the utterance of the voice guidance ends at a predetermined utterance ending point (Xe).
6. The information providing apparatus (10) according to claim 5,
The predetermined utterance end point (Xe) is set at a timing at which execution of a predetermined action to be fulfilled by the vehicle (12) is started.
7. The information providing apparatus (10) according to claim 1,
The sounding type is classified into three types of slow, normal, and fast, and is set to slow by default.
CN201910469652.2A 2018-06-04 2019-05-31 Information providing device Pending CN110556091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310642834.1A CN116645949A (en) 2018-06-04 2019-05-31 information providing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-106738 2018-06-04
JP2018106738A JP6936772B2 (en) 2018-06-04 2018-06-04 Information provider

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310642834.1A Division CN116645949A (en) 2018-06-04 2019-05-31 information providing device

Publications (1)

Publication Number Publication Date
CN110556091A true CN110556091A (en) 2019-12-10

Family

ID=68735593

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310642834.1A Pending CN116645949A (en) 2018-06-04 2019-05-31 information providing device
CN201910469652.2A Pending CN110556091A (en) 2018-06-04 2019-05-31 Information providing device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310642834.1A Pending CN116645949A (en) 2018-06-04 2019-05-31 information providing device

Country Status (2)

Country Link
JP (1) JP6936772B2 (en)
CN (2) CN116645949A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7339124B2 (en) * 2019-02-26 2023-09-05 株式会社Preferred Networks Control device, system and control method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000131084A (en) * 1998-10-22 2000-05-12 Matsushita Electric Ind Co Ltd Navigation apparatus
JP2008094228A (en) * 2006-10-11 2008-04-24 Denso Corp Call warning device for vehicle
CN101689366A (en) * 2007-07-02 2010-03-31 三菱电机株式会社 Voice recognizing apparatus
CN101874196A (en) * 2007-11-26 2010-10-27 三洋电机株式会社 Navigation device
CN104412323A (en) * 2012-06-25 2015-03-11 三菱电机株式会社 On-board information device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004348367A (en) * 2003-05-21 2004-12-09 Nissan Motor Co Ltd In-vehicle information providing device
JP2008026463A (en) * 2006-07-19 2008-02-07 Denso Corp Voice interaction apparatus
JP5018671B2 (en) * 2008-07-07 2012-09-05 株式会社デンソー Vehicle navigation device
JP2010230245A (en) * 2009-03-27 2010-10-14 Sanyo Electric Co Ltd Voice guidance device
US9625270B2 (en) * 2014-12-01 2017-04-18 Thinkware Corporation Electronic apparatus, control method thereof, computer program, and computer-readable recording medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000131084A (en) * 1998-10-22 2000-05-12 Matsushita Electric Ind Co Ltd Navigation apparatus
JP2008094228A (en) * 2006-10-11 2008-04-24 Denso Corp Call warning device for vehicle
CN101689366A (en) * 2007-07-02 2010-03-31 三菱电机株式会社 Voice recognizing apparatus
CN101874196A (en) * 2007-11-26 2010-10-27 三洋电机株式会社 Navigation device
CN104412323A (en) * 2012-06-25 2015-03-11 三菱电机株式会社 On-board information device

Also Published As

Publication number Publication date
CN116645949A (en) 2023-08-25
JP6936772B2 (en) 2021-09-22
JP2019211586A (en) 2019-12-12

Similar Documents

Publication Publication Date Title
JP2907079B2 (en) Navigation device, navigation method and automobile
EP0768638B1 (en) Apparatus and methods for voice recognition, map display and navigation
JP2644376B2 (en) Voice navigation method for vehicles
US9644985B2 (en) Navigation device that evaluates points of interest based on user utterance
US20110288871A1 (en) Information presentation system
JP2006251888A (en) Vehicular driving support system and vehicular navigation system
JPH09292255A (en) Navigation method and navigation system
US20190228767A1 (en) Speech recognition apparatus and method of controlling the same
CN110556091A (en) Information providing device
JP2867589B2 (en) Voice guidance device
JP2007192619A (en) Lane-guiding system and on-vehicle device
JP3677833B2 (en) Navigation device, navigation method, and automobile
JP2000338993A (en) Voice recognition device and navigation system using this device
JPH11342808A (en) Direction indicator for vehicle with voice input function
JP2947143B2 (en) Voice recognition device and navigation device
US20220208187A1 (en) Information processing device, information processing method, and storage medium
JP3818352B2 (en) Navigation device and storage medium
JPH11183190A (en) Voice recognition unit for navigation and navigation unit with voice recognition function
JPH08328584A (en) Speach recognition device, method therefor and navigation device
JP2877045B2 (en) Voice recognition device, voice recognition method, navigation device, navigation method, and automobile
JPH0696389A (en) Speech path guide device for automobile
JP3000601B2 (en) Travel guide device
JP2773381B2 (en) Voice guidance device
WO2023163045A1 (en) Content output device, content output method, program, and storage medium
JP2019100130A (en) Vehicle control device and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20231229