WO2013168254A1 - Navigation system for mobile bodies - Google Patents

Navigation system for mobile bodies

Info

Publication number
WO2013168254A1
Authority
WO
WIPO (PCT)
Prior art keywords
language
voice
guidance
navigation system
output
Prior art date
Application number
PCT/JP2012/061946
Other languages
English (en)
Japanese (ja)
Inventor
西村 猛
章太郎 吉岡
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to PCT/JP2012/061946 priority Critical patent/WO2013168254A1/fr
Priority to JP2014514302A priority patent/JP5922229B2/ja
Publication of WO2013168254A1 publication Critical patent/WO2013168254A1/fr

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3626Details of the output of route guidance instructions
    • G01C21/3629Guidance using speech or audio output, e.g. text-to-speech

Definitions

  • the present invention relates to a mobile navigation system that outputs guidance voices in a plurality of languages simultaneously.
  • Route guidance methods by which a mobile navigation system guides a user along a route to a destination include a method using a map displayed on the screen and a method of outputting guidance voice.
  • map display and audio output must be provided in a language understandable by the user.
  • Patent Document 1 discloses a navigation system that specifies a user's native language from a user's utterance voice acquired by a microphone and outputs a guidance voice in the user's native language.
  • Patent Document 2 discloses a navigation system that stores voice guidance data related to place names corresponding to a plurality of languages, and performs voice output in a language specified by the user.
  • Since the navigation system of Patent Document 2 can display a map with place names written in a plurality of languages, when, for example, a driver and a passenger have different native languages, each of them can understand the guidance display. However, there is a problem that the guidance voice is output in only one selected language.
  • the present invention has been made in view of the above problems, and an object of the present invention is to provide a mobile navigation system that enables passengers in different language areas to simultaneously understand guidance voices.
  • In order to solve the above problems, a navigation system for a moving body according to the present invention includes a language selection unit that selects the language of the guidance voice, and a voice output unit that outputs guidance voices in a plurality of languages simultaneously when the language selection unit selects a plurality of languages. Therefore, all of a plurality of passengers with different language zones can understand the guidance voice at the same time.
  • FIG. 6 is a diagram illustrating an operation of the mobile navigation system according to the first embodiment.
  • FIG. 1 is a block diagram illustrating a configuration of a mobile navigation system according to Embodiment 1.
  • FIG. 3 is a flowchart showing an operation of the mobile navigation system according to the first embodiment.
  • FIG. 10 is a flowchart showing an operation of the mobile navigation system according to the second modification of the first embodiment.
  • FIG. 1 is a schematic diagram of a vehicle which is an example of a moving object, and shows an operation example of the moving object navigation system of the present invention.
  • The mobile navigation system of the present invention can also be applied to a moving body other than a vehicle, such as a ship or an aircraft, on which a plurality of people can board.
  • the mobile navigation system of the present invention enables a passenger in a plurality of language areas to understand the guidance sound in real time by simultaneously outputting a plurality of guidance sounds in accordance with the language area of the passenger.
  • For example, a Japanese guidance voice is output from the speaker 10a close to the driver's seat, and an English guidance voice is output from the speaker 10b close to the passenger seat.
  • By outputting the Japanese guidance voice and the English guidance voice simultaneously from the speakers 10a and 10b, both the driver and the passenger in the passenger seat can understand the guidance voice at the same time.
  • Each speaker 10a, 10b outputs only the language understandable by the person in the nearest seat, so each passenger mainly hears guidance speech in an understandable language uttered from a nearby place, which improves comprehension.
  • Here, to "output a plurality of sounds simultaneously" means that the sounds are output at what human hearing perceives as the same time; it includes cases where a plurality of sounds are output with a time lag of about several nanoseconds that is not recognized by hearing.
  • FIG. 2 is a block diagram showing a configuration of the mobile navigation system according to the first embodiment.
  • The mobile navigation system includes a display unit 1, a display control device 2, an operation unit 3, a GPS device 4, a navigation ECU 5, a voice output control device 6, a map database 7, a display data storage unit 8, an audio data storage unit 9, and speakers 10a and 10b.
  • the display unit 1 is a liquid crystal display, for example, and displays a map screen, a route guidance screen, and the like under the control of the display control device 2.
  • the operation unit 3 is a means for receiving a user operation, and is configured by a mechanical mechanism such as a dial or a switch. Alternatively, a touch panel configured to overlap the display unit 1 may be used.
  • the GPS device 4 acquires the current location information of the vehicle.
  • the navigation ECU 5 operates as a language selection unit that selects the output language of the guidance voice for each speaker during voice output. Also, guidance voice data for each output language is acquired from the voice data storage unit 9 or generated internally.
  • the voice output control device 6 is means for receiving guidance voice data from the navigation ECU 5 and allocating the guidance voice data to the speakers 10a and 10b.
  • The navigation ECU 5 determines from which of the speakers 10a and 10b (or both) the guidance voice is to be output, and the voice output control device 6 drives the speakers 10a and 10b accordingly. That is, when a plurality of languages are selected by the navigation ECU 5, the voice output control device 6 operates as a voice output unit that outputs guidance voices in the plurality of languages at the same time.
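As a rough illustration, the per-speaker routing described above can be sketched as follows. This is not the patent's implementation; the speaker identifiers, language codes, and the `dispatch` interface are hypothetical, and real audio playback is stood in for by returning (speaker, audio) pairs.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceOutputController:
    # Maps each speaker to the language selected for it by the language
    # selection unit (the navigation ECU in the text).
    speaker_languages: dict = field(default_factory=dict)

    def assign(self, speaker_id: str, language: str) -> None:
        """Record the output language chosen for one speaker."""
        self.speaker_languages[speaker_id] = language

    def dispatch(self, guidance: dict) -> list:
        """Pair each speaker with the guidance audio in its assigned language.

        `guidance` maps a language code to the synthesized guidance audio
        (represented here as text). The returned (speaker, audio) pairs are
        what would be played simultaneously.
        """
        return [
            (spk, guidance[lang])
            for spk, lang in self.speaker_languages.items()
            if lang in guidance
        ]

ctrl = VoiceOutputController()
ctrl.assign("speaker_10a", "ja")  # speaker near the driver's seat
ctrl.assign("speaker_10b", "en")  # speaker near the passenger seat
jobs = ctrl.dispatch({"ja": "300m saki, migi hoko desu", "en": "Turn right in 300 meters"})
```

Driving both channels from one `dispatch` result is what makes the outputs simultaneous rather than sequential.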
  • 1 and 2 show two speakers 10a and 10b, but the present invention does not limit the number of speakers to two.
  • When the guidance voice is output to each passenger in an individual language, it is desirable that a speaker be provided for each seat.
  • If a speaker having directivity toward the corresponding seat, such as a parametric speaker, is used, each passenger mainly hears the guidance voice selectively delivered from the speaker corresponding to his or her seat, which improves ease of hearing.
  • It is also possible to achieve the object of the present invention by outputting guidance voices in two or more languages from one speaker.
  • the map database 7 stores map data.
  • the display data storage unit 8 stores information to be displayed (display data) other than the map data.
  • the navigation ECU 5 extracts map data from the map database 7 as needed, extracts display data from the display data storage unit 8, and outputs these to the display control device 2.
  • the display control device 2 controls the display unit 1 using these, and performs screen display in a general navigation system such as a route guidance screen.
  • the voice data storage unit 9 stores voice data used for guidance voices for a plurality of languages.
  • The navigation ECU 5 can acquire voice data from the voice data storage unit 9, and can also create guidance voice data using a known technique such as a TTS (text-to-speech) system. It is further possible to create voice data for place names with the TTS system and synthesize it with the voice data acquired from the voice data storage unit 9 to generate guidance voice data. As a result, the amount of data can be reduced compared to the case where voice data for all place names is stored in the voice data storage unit 9.
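The hybrid approach above (stored phrase templates plus TTS-generated place names) can be sketched roughly as follows. The template store, the `tts()` stub, and the phrase formats are illustrative assumptions, not the patent's actual data; a real system would handle audio, not strings.

```python
# Pre-recorded phrase fragments kept in the voice data storage unit;
# only the variable place name is synthesized at run time.
STORED_PHRASES = {
    ("en", "turn_right"): "Turn right at {place}.",
    ("ja", "turn_right"): "{place} wo migi hoko desu.",
}

def tts(text: str, lang: str) -> str:
    """Stand-in for a text-to-speech call; returns the text it would speak."""
    return text  # a real TTS system would return synthesized audio

def build_guidance(event: str, place: str, lang: str) -> str:
    """Combine a stored template with a TTS-synthesized place name."""
    template = STORED_PHRASES[(lang, event)]
    return template.format(place=tts(place, lang))

msg = build_guidance("turn_right", "Ginza", "en")  # "Turn right at Ginza."
```

Because only place names go through synthesis, the storage unit never needs recordings for every place name, which is the data-reduction effect the text describes.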
  • the output language and the output destination speaker are selected according to, for example, a user setting input on the operation unit 3.
  • the user may specify a language for each speaker.
  • the user may specify a language for each seat, and the navigation ECU 5 may select the output language of each speaker according to the setting result. If the speakers are not provided in one-to-one correspondence with the seat, the latter setting method is more user-friendly.
  • the utterance of the passenger may be collected by a microphone installed in the vehicle, and the recognized utterance language may be selected as the output language of the speaker.
  • It is desirable that microphones be provided at the front, rear, left, and right of the vehicle, and more desirable that a microphone be provided for each seat.
  • the navigation ECU 5 determines whether or not an utterance language is selected for each speaker (step S1). If the utterance language has already been selected manually or automatically (Yes in step S1), the process waits until an utterance event occurs (No in step S2).
  • the navigation ECU 5 refers to the vehicle position information acquired from the GPS device 4 to determine whether or not an utterance event has occurred.
  • When an utterance event occurs (Yes in step S2), a guidance voice is output in the language selected for each speaker (step S3). More specifically, the navigation ECU 5 acquires voice data for all selected languages from the voice data storage unit 9 and uses it as guidance voice data. Alternatively, the navigation ECU 5 creates guidance voice data for all selected languages from the guidance text using its internal TTS system, or creates it by combining the voice data acquired from the voice data storage unit 9 with voice data synthesized by the TTS system. The navigation ECU 5 then outputs the guidance voice data for each selected language to the voice output control device 6 together with the output destination speaker information, and the voice output control device 6 drives each output destination speaker to output the guidance voice in the corresponding selected language.
  • In step S4, it is determined whether or not the route guidance is completed. If the route guidance is still in progress (No in step S4), the process returns to step S2 and waits until an utterance event occurs. When the route guidance ends (Yes in step S4), the process ends.
  • If no utterance language is selected for the speakers 10a and 10b in step S1 (No in step S1), the process waits until an utterance event occurs (No in step S5).
  • When an utterance event occurs (Yes in step S5), the voice output operation of a so-called normal navigation system is performed; that is, the utterance is output in the same language from all the speakers 10a and 10b (step S6).
  • In step S7, it is determined whether or not the route guidance is completed. If the route guidance is still in progress (No in step S7), the process returns to step S5 and waits until an utterance event occurs. When the route guidance ends (Yes in step S7), the process ends.
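The flowchart just described (steps S1 through S7) reduces to a small control loop. The sketch below is a hedged reading of that flow, not the patent's code: the event tuples, speaker names, and the `say` callback are placeholders for illustration.

```python
def run_route_guidance(selected, events, say):
    """Minimal control-flow sketch of the first embodiment's flowchart.

    selected: {speaker: language} chosen per speaker, or empty if none (step S1)
    events:   iterable of (event_type, guidance_done) pairs
    say:      callback say(speaker, language) emitting one utterance
    """
    per_speaker = bool(selected)            # S1: language selected per speaker?
    for event, guidance_done in events:     # S2/S5: wait for an utterance event
        if event == "guidance_point":
            if per_speaker:
                for spk, lang in selected.items():
                    say(spk, lang)          # S3: each speaker, its own language
            else:
                for spk in ("10a", "10b"):
                    say(spk, "default")     # S6: same language from all speakers
        if guidance_done:                   # S4/S7: route guidance finished?
            break

calls = []
run_route_guidance(
    {"10a": "ja", "10b": "en"},
    [("guidance_point", False), ("none", False), ("guidance_point", True)],
    lambda spk, lang: calls.append((spk, lang)),
)
# Two guidance points, two speakers each -> four utterances in total.
```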
  • As described above, guidance voices are simultaneously output from the speakers in a plurality of selected languages. Therefore, even when a plurality of people from different language areas ride together, the information conveyed by the guidance voices can be shared in real time.
  • For example, when the guidance voice has entertainment value, such as a voice using a comedian's comical expressions, that value would be impaired if some passengers could not understand the guidance in real time. In the above case, because guidance voices are output simultaneously in each passenger's language, the entertainment value is not impaired.
  • The frequency band used preferentially (the passband) differs depending on the language. For example, the passband of Japanese is 125 Hz to 1500 Hz, that of American English is 750 Hz to 5000 Hz, and that of British English is 2000 Hz to 16000 Hz. Thus the frequency band of 750 Hz to 1500 Hz overlaps between Japanese and American English, and the frequency band of 2000 Hz to 5000 Hz overlaps between American English and British English. Therefore, in the mobile navigation system according to the first modification, when, for example, guidance voices are output in Japanese and American English simultaneously, the frequency band for Japanese is shifted lower and the frequency band for American English is shifted higher, reducing the overlap of frequency bands and improving the ease of hearing the guidance voices.
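Using the per-language bands quoted in the text, the band-separation idea can be sketched numerically. The band table values come from the text; the splitting rule (give the lower-band language the region below the overlap and the higher-band language the region above it) is an illustrative assumption, not the patent's stated algorithm.

```python
BANDS_HZ = {  # preferred frequency band (passband) per language, from the text
    "japanese": (125, 1500),
    "american_english": (750, 5000),
    "british_english": (2000, 16000),
}

def overlap(a, b):
    """Return the shared (low, high) band of two ranges, or None if disjoint."""
    low, high = max(a[0], b[0]), min(a[1], b[1])
    return (low, high) if low < high else None

def split_bands(lang_lo, lang_hi):
    """Shift each language away from the shared region to reduce masking:
    the lower-band language keeps everything below the overlap, the
    higher-band language everything above it."""
    a, b = BANDS_HZ[lang_lo], BANDS_HZ[lang_hi]
    ov = overlap(a, b)
    if ov is None:
        return a, b  # already disjoint, no modulation needed
    return (a[0], ov[0]), (ov[1], b[1])

ja_band, am_band = split_bands("japanese", "american_english")
# Japanese keeps 125–750 Hz; American English keeps 1500–5000 Hz.
```

A real voice modulation unit would then band-limit or frequency-shift each guidance voice into its assigned band before output.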
  • the navigation ECU 5 performs such frequency modulation processing.
  • the frequency band to be modulated is determined in advance by a combination of languages to be output simultaneously.
  • the navigation ECU 5 performs frequency modulation on the voice data acquired from the voice data storage unit 9.
  • guidance voice data is synthesized in a predetermined frequency band.
  • Alternatively, the voice data storage unit 9 may store voice data in multiple frequency bands per language, such as Japanese voice data in the frequency band used when output simultaneously with American English and Japanese voice data in the frequency band used when output simultaneously with British English.
  • An object of the navigation system for a moving body of the present invention is to allow passengers of different linguistic spheres to share guidance voice information simultaneously.
  • For information important for driving, such as route guidance and traffic regulation information, preventing the driver from missing the information takes priority over sharing it in real time among passengers. In such a case, it is desirable to output the guidance voice in the driver's language from all the speakers at the same time.
  • Therefore, the mobile navigation system according to Modification 2 determines the content of the guidance voice and switches accordingly between outputting from each speaker in its selected language and outputting from all speakers in one language.
  • Next, the operation of the navigation ECU 5 in the mobile navigation system according to Modification 2 will be described.
  • the navigation ECU 5 determines whether or not an utterance language is selected for each speaker (step S11). Since the operation (steps S17 to S19) when the utterance language is not selected (No in step S11) is the same as steps S5 to S7 in FIG. 3, the description thereof is omitted.
  • If the utterance language has already been selected manually or automatically (Yes in step S11), the process waits until an utterance event occurs (No in step S12).
  • the navigation ECU 5 determines the utterance content of the guidance voice (step S13).
  • Various criteria for discriminating the utterance content can be considered. For example, if the utterance content is tourist guidance, such as ancillary information about buildings on the route, sharing the information in real time takes priority over the risk of the driver missing it, so the utterance is output in the language selected for each speaker (step S14). If the utterance content is route guidance information, accident or traffic jam information, or the like, it is highly important for driving, so the utterance is output from all speakers in the driver's language (step S15).
  • In step S16, it is determined whether or not the route guidance is completed. If the route guidance is still in progress (No in step S16), the process returns to step S12 and waits until an utterance event occurs. When the route guidance ends (Yes in step S16), the process ends.
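The Modification 2 decision (steps S13 through S15) amounts to a content-based switch. The sketch below illustrates it under stated assumptions: the content categories are examples taken from the text, and the function names and category strings are hypothetical.

```python
# Content types the text treats as highly important for driving.
CRITICAL = {"route_guidance", "accident", "traffic_jam", "regulation"}

def plan_output(content_type, selected, driver_lang):
    """Return (speaker, language) pairs for one guidance utterance.

    selected:    {speaker: language} chosen per speaker
    driver_lang: the language selected for the driver
    """
    if content_type in CRITICAL:
        # S15: critical for driving -> driver's language from every speaker,
        # so the driver cannot miss the information.
        return [(spk, driver_lang) for spk in selected]
    # S14: e.g. tourist information -> each speaker's own selected language,
    # so all passengers share it in real time.
    return list(selected.items())

plan = plan_output("accident", {"10a": "ja", "10b": "en"}, "ja")
# -> [("10a", "ja"), ("10b", "ja")]
```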
  • As described above, the mobile navigation system according to the present embodiment includes the language selection unit (navigation ECU 5) that selects the language of the guidance voice, and the voice output unit (voice output control device 6) that outputs the guidance voice simultaneously in a plurality of languages when the language selection unit selects a plurality of languages.
  • The voice output unit (voice output control device 6) drives the plurality of speakers 10a and 10b installed in the vehicle to output guidance voices, and the language selection unit (navigation ECU 5) selects the language of the guidance voice for each of the speakers 10a and 10b. Therefore, each speaker 10a, 10b can be assigned the native language of the passenger seated in the nearby seat, so that each passenger mainly hears the guidance voice in his or her native language spoken from nearby, which makes the guidance easier to hear.
  • Further, the language selection unit selects one guidance-voice language for each of the speakers 10a and 10b, so ease of listening is improved while a guidance voice in the language corresponding to each passenger is output.
  • The language selection unit selects the language of the guidance voice based on a setting operation, so that, for example, each passenger can perform a setting input such as assigning his or her native language to the nearby speaker 10a or 10b, and a guidance voice in the language corresponding to each passenger is output.
  • When the speakers 10a and 10b have directivity, each speaker can selectively output the guidance voice toward the corresponding seat. Each passenger can then mainly hear the guidance voice in his or her own language that is selectively delivered, and ease of hearing is improved.
  • the mobile navigation system of the present embodiment further includes a voice modulation unit (navigation ECU 5) that modulates the guidance voice into a frequency band set in advance for the language combination selected by the language selection unit (navigation ECU 5).
  • The voice output unit (voice output control device 6) outputs the guidance voice modulated by the voice modulation unit from the speakers. Even if voices in multiple languages are output simultaneously, whether from the same speaker or from different speakers, modulating the voices so that the frequency bands overlap as little as possible for each language makes it easy to distinguish the guidance voice in one's native language from the others.
  • Further, the voice output unit (voice output control device 6) outputs the guidance voice for a predetermined type of guidance in one language from all the speakers simultaneously. For this reason, for example, guidance voices that are highly important for driving can be kept from being missed by the driver by outputting them from all the speakers simultaneously in the driver's language.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present invention aims to provide a navigation system for mobile bodies that allows passengers speaking different languages to understand voice guidance simultaneously. The navigation system for mobile bodies according to the invention comprises: a language selection unit for selecting languages for the voice guidance; and an audio output unit for outputting voice guidance simultaneously in a plurality of languages when those languages have been selected via the language selection unit.
PCT/JP2012/061946 2012-05-10 2012-05-10 Système de navigation pour corps mobiles WO2013168254A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2012/061946 WO2013168254A1 (fr) 2012-05-10 2012-05-10 Système de navigation pour corps mobiles
JP2014514302A JP5922229B2 (ja) 2012-05-10 2012-05-10 移動体用ナビゲーションシステム

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/061946 WO2013168254A1 (fr) 2012-05-10 2012-05-10 Système de navigation pour corps mobiles

Publications (1)

Publication Number Publication Date
WO2013168254A1 true WO2013168254A1 (fr) 2013-11-14

Family

ID=49550338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/061946 WO2013168254A1 (fr) 2012-05-10 2012-05-10 Système de navigation pour corps mobiles

Country Status (2)

Country Link
JP (1) JP5922229B2 (fr)
WO (1) WO2013168254A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150098585A1 (en) * 2013-10-03 2015-04-09 Russell Louis Storms, Sr. Method and apparatus for transit system annunciators
CN104978015A (zh) * 2014-04-14 2015-10-14 博世汽车部件(苏州)有限公司 具有语种自适用功能的导航系统及其控制方法
JP2016206394A (ja) * 2015-04-22 2016-12-08 ヤマハ株式会社 情報提供システム
GB2541519A (en) * 2014-02-21 2017-02-22 Jaguar Land Rover Ltd A system for use in a vehicle
WO2017064929A1 (fr) * 2015-10-16 2017-04-20 ソニー株式会社 Dispositif de traitement d'informations et système de traitement d'informations
JP2018115058A (ja) * 2017-01-19 2018-07-26 三菱電機ビルテクノサービス株式会社 エレベーター用情報提供装置
US20180234782A1 (en) * 2017-02-14 2018-08-16 Russell Louis Storms, Sr. Method and apparatus for multilingual simultaneous announcement system
JP2019151447A (ja) * 2018-03-01 2019-09-12 東芝エレベータ株式会社 エレベータの音声案内装置
JP2020053792A (ja) * 2018-09-26 2020-04-02 ソニー株式会社 情報処理装置、および情報処理方法、プログラム、情報処理システム
CN111301438A (zh) * 2018-11-27 2020-06-19 丰田自动车株式会社 自动驾驶装置、汽车导航装置以及驾驶辅助系统
FR3093607A1 (fr) * 2019-03-08 2020-09-11 Orange Procédé de restitution d’un contenu audiovisuel
JP2021131572A (ja) * 2020-05-29 2021-09-09 ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド 放送テキストの決定方法、放送テキストの決定装置、電子機器、記憶媒体及びコンピュータプログラム

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
JP6682912B2 (ja) * 2016-02-26 2020-04-15 フジテック株式会社 警告装置及び乗客コンベア
JP7026967B2 (ja) * 2020-09-14 2022-03-01 千蔵工業株式会社 トイレ用自動ドアシステム、音声案内装置、音声案内方法

Citations (1)

Publication number Priority date Publication date Assignee Title
JP2010236987A (ja) * 2009-03-31 2010-10-21 Adohotsuku:Kk 自動ガイドシステム

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2005326209A (ja) * 2004-05-13 2005-11-24 Pioneer Electronic Corp 報知制御装置、その方法、そのプログラム、および、そのプログラムを記録した記録媒体
JP2007237831A (ja) * 2006-03-07 2007-09-20 Nissan Motor Co Ltd 車外警報装置および車両警報方法
JP2009180509A (ja) * 2008-01-29 2009-08-13 Aisin Aw Co Ltd 車両用案内支援装置、車両用案内支援方法及びプログラム
JP5750839B2 (ja) * 2010-06-14 2015-07-22 日産自動車株式会社 音声情報提示装置および音声情報提示方法

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
JP2010236987A (ja) * 2009-03-31 2010-10-21 Adohotsuku:Kk 自動ガイドシステム

Cited By (17)

Publication number Priority date Publication date Assignee Title
US20150098585A1 (en) * 2013-10-03 2015-04-09 Russell Louis Storms, Sr. Method and apparatus for transit system annunciators
GB2541519A (en) * 2014-02-21 2017-02-22 Jaguar Land Rover Ltd A system for use in a vehicle
GB2541519B (en) * 2014-02-21 2018-02-21 Jaguar Land Rover Ltd A system for use in a vehicle
CN104978015A (zh) * 2014-04-14 2015-10-14 博世汽车部件(苏州)有限公司 具有语种自适用功能的导航系统及其控制方法
EP2933607A1 (fr) * 2014-04-14 2015-10-21 Bosch Automotive Products (Suzhou) Co., Ltd. Système de navigation ayant une fonction auto-adaptative de catégorie de langage et procédé de commande du système
CN104978015B (zh) * 2014-04-14 2018-09-18 博世汽车部件(苏州)有限公司 具有语种自适用功能的导航系统及其控制方法
JP2016206394A (ja) * 2015-04-22 2016-12-08 ヤマハ株式会社 情報提供システム
US10438577B2 (en) 2015-10-16 2019-10-08 Sony Corporation Information processing device and information processing system
WO2017064929A1 (fr) * 2015-10-16 2017-04-20 ソニー株式会社 Dispositif de traitement d'informations et système de traitement d'informations
JP2018115058A (ja) * 2017-01-19 2018-07-26 三菱電機ビルテクノサービス株式会社 エレベーター用情報提供装置
US20180234782A1 (en) * 2017-02-14 2018-08-16 Russell Louis Storms, Sr. Method and apparatus for multilingual simultaneous announcement system
JP2019151447A (ja) * 2018-03-01 2019-09-12 東芝エレベータ株式会社 エレベータの音声案内装置
JP2020053792A (ja) * 2018-09-26 2020-04-02 ソニー株式会社 情報処理装置、および情報処理方法、プログラム、情報処理システム
CN111301438A (zh) * 2018-11-27 2020-06-19 丰田自动车株式会社 自动驾驶装置、汽车导航装置以及驾驶辅助系统
FR3093607A1 (fr) * 2019-03-08 2020-09-11 Orange Procédé de restitution d’un contenu audiovisuel
WO2020183079A1 (fr) * 2019-03-08 2020-09-17 Orange Procédé de restitution d'un contenu audiovisuel
JP2021131572A (ja) * 2020-05-29 2021-09-09 ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド 放送テキストの決定方法、放送テキストの決定装置、電子機器、記憶媒体及びコンピュータプログラム

Also Published As

Publication number Publication date
JPWO2013168254A1 (ja) 2015-12-24
JP5922229B2 (ja) 2016-05-24

Similar Documents

Publication Publication Date Title
JP5922229B2 (ja) 移動体用ナビゲーションシステム
US10070242B2 (en) Devices and methods for conveying audio information in vehicles
JP7133029B2 (ja) エージェント装置、エージェント制御方法、およびプログラム
JP5413321B2 (ja) 通信システム、車載端末、および携帯端末
JP2009251388A (ja) 母国語発話装置
CN111007968A (zh) 智能体装置、智能体提示方法及存储介质
JP5052241B2 (ja) 車載用の音声処理装置、音声処理システム、及び音声処理方法
JP7489391B2 (ja) 車内ヘッドフォンの音響拡張現実システム
JP2023126870A (ja) 車両向けの空間インフォテインメントレンダリングシステム
JP2020113150A (ja) 音声翻訳対話システム
JP2020144264A (ja) エージェント装置、エージェント装置の制御方法、およびプログラム
JP2019159559A (ja) 情報提供装置
JP2018087871A (ja) 音声出力装置
US10773592B2 (en) Sound output and text display device for a vehicle
JP6841536B1 (ja) 翻訳システム
US20050129250A1 (en) Virtual assistant and method for providing audible information to a user
JP2019212168A (ja) 音声認識システムおよび情報処理装置
JP6509098B2 (ja) 音声出力装置および音声出力制御方法
JPH1021049A (ja) 音声合成装置
KR20090096337A (ko) 큰소리 발성에 기반을 둔 어학 시스템 및 방법
WO2022124154A1 (fr) Dispositif de traitement d'informations, système de traitement d'informations et procédé de traitement d'informations
US11518399B2 (en) Agent device, agent system, method for controlling agent device, and storage medium
CN116580699A (zh) 车辆及其控制方法
JP2020030322A (ja) 音声操作装置および音声操作システム
JP2022175171A (ja) 車両及び車両システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12876526

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014514302

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12876526

Country of ref document: EP

Kind code of ref document: A1