WO2023062816A1 - Content output device, content output method, program and storage medium - Google Patents

Content output device, content output method, program and storage medium

Info

Publication number
WO2023062816A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
output device
variable
information
voice
Prior art date
Application number
PCT/JP2021/038223
Other languages
English (en)
Japanese (ja)
Inventor
敦博 山中
高志 飯澤
敬太 倉持
勇志 角田
将士 高野
敬介 栃原
航太朗 宮部
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 filed Critical パイオニア株式会社
Priority to PCT/JP2021/038223 priority Critical patent/WO2023062816A1/fr
Publication of WO2023062816A1 publication Critical patent/WO2023062816A1/fr


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L13/10 Prosody rules derived from text; Stress or intonation

Definitions

  • The present invention relates to technology that can be used for content output.
  • A conventionally known technique outputs content corresponding to various information obtained through sensors and the like to the user.
  • For example, Patent Literature 1 discloses a technique for outputting a greeting voice when a passenger gets in or out of a vehicle, based on information obtained through a vibration sensor or the like that detects the opening and closing of the vehicle door.
  • However, Patent Literature 1 does not disclose a method of outputting content while distinguishing between the portion that changes according to the situation in which the content is output and the remaining portion of the content.
  • As a result, with the technique of Patent Literature 1, the user to whom the content is output may bear an unnecessary mental burden in recognizing important information that may be included in the content.
  • The present invention has been made to solve the above problem, and a main object thereof is to provide a content output device that makes important information contained in content easier to recognize than before.
  • A claimed content output device comprises a content acquisition unit that acquires, according to the driving situation of a vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part, and an output unit that outputs the content.
  • A claimed content output method acquires, according to the driving situation of a vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part, and outputs the content.
  • A claimed program, executed by a content output device provided with a computer, causes the computer to function as a content acquisition unit that acquires, according to the driving situation of the vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part, and an output unit that outputs the content.
  • FIG. 1 is a diagram showing a configuration example of an audio output system according to an embodiment;
  • FIG. 2 is a block diagram showing a schematic configuration of an audio output device;
  • FIG. 3 is a diagram showing an example of a schematic configuration of a server device;
  • FIG. 4 is a diagram for explaining the data structure of text data stored in the server device;
  • FIG. 5 is a diagram showing a specific example of text data stored in the server device;
  • FIG. 6 is a flowchart for explaining processing performed in the server device.
  • A content output device according to one aspect comprises a content acquisition unit that acquires, according to the driving situation of a vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part, and an output unit that outputs the content.
  • That is, the above content output device includes a content acquisition unit and an output unit.
  • The content acquisition unit acquires, according to the driving situation of the vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part.
  • The output unit outputs the content. This makes important information that may be included in the content easier to recognize than before.
  • In one aspect, the tag information includes information for setting at least one of the volume, pitch, and speed used when outputting the variable portion as voice to a value different from the corresponding set value used when outputting the fixed portion included in the content as voice.
  • In one aspect, the tag information includes information for setting the volume used when outputting the variable portion as voice higher than the volume set for the fixed portion.
  • In one aspect, the tag information includes information for setting the pitch used when outputting the variable portion as voice higher than the pitch set for the fixed portion.
  • In one aspect, the tag information includes information for setting the speed used when outputting the variable portion as voice slower than the speed set for the fixed portion.
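One way to picture tag information that overrides volume, pitch, and speed for the variable portion is SSML-style prosody markup. The sketch below is an assumption about one possible realization in Python; the `<prosody>` tag, its attributes, and the default values are illustrative and are not taken from the patent.

```python
# Hypothetical sketch: tag information realized as SSML-style prosody markup,
# so a TTS engine renders the variable part louder, higher-pitched, and
# slower than the surrounding fixed parts. Values are illustrative.

def emphasize(variable_part: str, volume_db: int = 6,
              pitch_pct: int = 10, rate_pct: int = 80) -> str:
    """Wrap a variable part in emphasis tag information."""
    return (f'<prosody volume="+{volume_db}dB" '
            f'pitch="+{pitch_pct}%" rate="{rate_pct}%">'
            f'{variable_part}</prosody>')

# Fixed parts stay untagged; only the variable part carries tag information.
content = "The required time will be " + emphasize("50 minutes shorter") + "."
```

A TTS engine that honors such markup would then read only the bracketed span louder, higher, and slower.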
  • A content output method according to one aspect acquires, according to the driving situation of a vehicle, content including a variable part and tag information for outputting voice while emphasizing the variable part, and outputs the content. This makes important information that may be included in the content easier to recognize than before.
  • A program according to one aspect, executed by a content output device provided with a computer, causes the computer to function as a content acquisition unit that acquires, according to the driving situation of the vehicle, content including a variable portion and tag information for outputting voice while emphasizing the variable portion, and an output unit that outputs the content.
  • This program can be stored in a storage medium and used.
  • FIG. 1 is a diagram illustrating a configuration example of an audio output system according to an embodiment.
  • a voice output system 1 according to this embodiment includes a voice output device 100 and a server device 200.
  • the audio output device 100 is mounted on the vehicle Ve.
  • the server device 200 communicates with a plurality of audio output devices 100 mounted on a plurality of vehicles Ve.
  • the voice output device 100 basically performs route search processing, route guidance processing, and the like for the user who is a passenger of the vehicle Ve. For example, when a destination or the like is input by the user, the voice output device 100 transmits an upload signal S1 including the position information of the vehicle Ve and information on the designated destination to the server device 200. The server device 200 calculates the route to the destination by referring to the map data, and transmits a control signal S2 indicating that route to the audio output device 100. The voice output device 100 provides route guidance to the user by voice output based on the received control signal S2.
  • the voice output device 100 provides various types of information to the user through interaction with the user.
  • the audio output device 100 supplies the server device 200 with an upload signal S1 including information indicating the content or type of the information request and information about the running state of the vehicle Ve.
  • the server device 200 acquires and generates information requested by the user, and transmits it to the audio output device 100 as a control signal S2.
  • the audio output device 100 provides the received information to the user by audio output.
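The S1/S2 exchange described above can be sketched as two message types. The field names below are assumptions for illustration; the patent states what each signal carries but not its concrete structure.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the upload signal S1 and control signal S2 under assumed field
# names. S1 flows from the voice output device 100 to the server device 200;
# S2 flows back with the route and/or content to output.

@dataclass
class UploadSignalS1:
    position: tuple               # position information of the vehicle Ve
    destination: Optional[str]    # designated destination, if any
    info_request: Optional[str]   # content/type of the user's information request
    driving_situation: dict       # running state of the vehicle Ve

@dataclass
class ControlSignalS2:
    route: list                   # guidance route to the destination
    content: str                  # information to be provided by voice

s1 = UploadSignalS1((35.68, 139.77), "Kawagoe", None, {"speed_kmh": 40})
s2 = ControlSignalS2(["node-1", "node-2"], "Turn right at the next intersection.")
```

The dataclasses only fix a vocabulary for the exchange; serialization and transport are out of scope here.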
  • the voice output device 100 moves together with the vehicle Ve and performs route guidance mainly by voice so that the vehicle Ve travels along the guidance route.
  • route guidance based mainly on voice refers to route guidance in which the user can grasp the information necessary for driving the vehicle Ve along the guidance route from voice alone, and it does not exclude the voice output device 100 auxiliarily displaying a map of the surroundings of the current position or the like.
  • the voice output device 100 outputs at least various information related to driving, such as points on the route that require guidance (also referred to as “guidance points”), by voice.
  • the guidance point corresponds to, for example, an intersection at which the vehicle Ve turns right or left, or other passing points important for the vehicle Ve to travel along the guidance route.
  • the voice output device 100 provides voice guidance regarding guidance points such as, for example, the distance from the vehicle Ve to the next guidance point and the traveling direction at the guidance point.
  • the voice regarding the guidance for the guidance route will also be referred to as "route voice guidance”.
  • the audio output device 100 is installed, for example, on the upper part of the windshield of the vehicle Ve or on the dashboard. Note that the audio output device 100 may be incorporated in the vehicle Ve.
  • FIG. 2 is a block diagram showing a schematic configuration of the audio output device 100.
  • the audio output device 100 mainly includes a communication unit 111, a storage unit 112, an input unit 113, a control unit 114, a sensor group 115, a display unit 116, a microphone 117, a speaker 118, an exterior camera 119, and an in-vehicle camera 120.
  • Each element in the audio output device 100 is interconnected via a bus line 110.
  • the communication unit 111 performs data communication with the server device 200 under the control of the control unit 114.
  • the communication unit 111 may receive, for example, map data for updating a map DB (DataBase) 4, described later, from the server device 200.
  • the storage unit 112 is composed of various memories such as RAM (Random Access Memory), ROM (Read Only Memory), and non-volatile memory (including hard disk drive, flash memory, etc.).
  • the storage unit 112 stores a program for the audio output device 100 to execute predetermined processing.
  • the above programs may include an application program for providing route guidance by voice, an application program for playing back music, an application program for outputting content other than music (such as television), and the like.
  • Storage unit 112 is also used as a working memory for the control unit 114. Note that the program executed by the audio output device 100 may be stored in a storage medium other than the storage unit 112.
  • the storage unit 112 also stores a map database (hereinafter, the database is referred to as "DB") 4. Various data required for route guidance are recorded in the map DB 4.
  • the map DB 4 stores, for example, road data representing a road network by a combination of nodes and links, and facility data indicating facilities that are candidates for destinations, stop-off points, or landmarks.
  • the map DB 4 may be updated based on the map information received by the communication unit 111 from the map management server under the control of the control unit 114.
  • the input unit 113 is a button, touch panel, remote controller, etc. for user operation.
  • the display unit 116 is a display or the like that displays based on the control of the control unit 114 .
  • the microphone 117 collects sounds inside the vehicle Ve, particularly the driver's utterances.
  • a speaker 118 outputs audio for route guidance to the driver or the like.
  • the sensor group 115 includes an external sensor 121 and an internal sensor 122 .
  • the external sensor 121 is, for example, one or more sensors for recognizing the surrounding environment of the vehicle Ve, such as a lidar, radar, ultrasonic sensor, infrared sensor, and sonar.
  • the internal sensor 122 is a sensor that performs positioning of the vehicle Ve, and is, for example, a GNSS (Global Navigation Satellite System) receiver, a gyro sensor, an IMU (Inertial Measurement Unit), a vehicle speed sensor, or a combination thereof.
  • the sensor group 115 may have a sensor that allows the control unit 114 to directly or indirectly derive the position of the vehicle Ve from the output of the sensor group 115 (that is, by performing estimation processing).
  • the vehicle exterior camera 119 is a camera that captures the exterior of the vehicle Ve.
  • the exterior camera 119 may be only a front camera that captures the front of the vehicle, or may include a rear camera that captures the rear of the vehicle in addition to the front camera.
  • the in-vehicle camera 120 is a camera for photographing the interior of the vehicle Ve, and is provided at a position capable of photographing at least the vicinity of the driver's seat.
  • the control unit 114 includes a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and controls the audio output device 100 as a whole. For example, the control unit 114 estimates the position (including the traveling direction) of the vehicle Ve based on the outputs of one or more sensors in the sensor group 115. Further, when a destination is specified through the input unit 113 or the microphone 117, the control unit 114 generates route information indicating a guidance route to that destination, and provides route guidance based on the route information, the estimated position information, and the map DB 4. In this case, the control unit 114 causes the speaker 118 to output route voice guidance. The control unit 114 also controls the display unit 116 to display information about the music being played, video content, a map of the vicinity of the current position, or the like.
  • control unit 114 is not limited to being implemented by program-based software, and may be implemented by any combination of hardware, firmware, and software. Also, the processing executed by the control unit 114 may be implemented using a user-programmable integrated circuit such as an FPGA (field-programmable gate array) or a microcomputer. In this case, this integrated circuit may be used to implement the program executed by the control unit 114 in this embodiment. Thus, the control unit 114 may be realized by hardware other than the processor.
  • the configuration of the audio output device 100 shown in FIG. 2 is an example, and various changes may be made to the configuration shown in FIG.
  • the control unit 114 may receive information necessary for route guidance from the server device 200 via the communication unit 111.
  • the audio output device 100 may be electrically connected, or connected by known communication means, to an audio output unit configured separately from the audio output device 100, and the audio may be output from that audio output unit.
  • the audio output unit may be a speaker provided in the vehicle Ve.
  • the audio output device 100 does not have to include the display section 116 .
  • the audio output device 100 does not need to perform display-related control at all; such control may be executed elsewhere.
  • the audio output device 100 may acquire, from the vehicle Ve, information output by sensors installed in the vehicle Ve based on a communication protocol such as CAN (Controller Area Network).
  • the server device 200 generates route information indicating a guidance route that the vehicle Ve should travel, based on the upload signal S1 including the destination and the like received from the voice output device 100.
  • the server device 200 then generates a control signal S2 relating to information output in response to the user's information request, based on the user's information request indicated by the upload signal S1 transmitted by the audio output device 100 and the running state of the vehicle Ve.
  • the server device 200 then transmits the generated control signal S2 to the audio output device 100.
  • the server device 200 generates content for providing information to the user of the vehicle Ve and for interacting with the user, and transmits the content to the audio output device 100.
  • the provision of information to the user is primarily a push-type information provision that is triggered by the server device 200 when the vehicle Ve reaches a predetermined driving condition.
  • the dialog with the user is basically a pull-type dialog that starts with a question or inquiry from the user. However, interaction with the user may start with push-type content provision.
  • FIG. 3 is a diagram showing an example of a schematic configuration of the server device 200.
  • the server device 200 mainly has a communication unit 211, a storage unit 212, and a control unit 214.
  • Each element in the server device 200 is interconnected via a bus line 210.
  • the communication unit 211 performs data communication with an external device such as the audio output device 100 under the control of the control unit 214.
  • the storage unit 212 is composed of various types of memory such as RAM, ROM, and non-volatile memory (including hard disk drives, flash memory, etc.). The storage unit 212 stores a program for the server device 200 to execute predetermined processing.
  • the control unit 214 includes a CPU, GPU, etc., and controls the server device 200 as a whole. The control unit 214 operates together with the audio output device 100 by executing a program stored in the storage unit 212, and executes route guidance processing, information provision processing, and the like for the user. For example, based on the upload signal S1 received from the audio output device 100 via the communication unit 211, the control unit 214 generates route information indicating a guidance route, or a control signal S2 relating to information output in response to a user's information request. The control unit 214 then transmits the generated control signal S2 to the audio output device 100 through the communication unit 211.
  • push-type content provision means that, when the vehicle Ve is in a predetermined driving situation, the audio output device 100 outputs content related to that driving situation to the user by voice. Specifically, the voice output device 100 acquires driving situation information indicating the driving situation of the vehicle Ve based on the output of the sensor group 115 as described above, and transmits it to the server device 200.
  • the server device 200 stores table data for providing push-type content in the storage unit 212.
  • the server device 200 refers to the table data, and when the driving situation information received from the voice output device 100 mounted on the vehicle Ve matches a trigger condition defined in the table data, acquires the output content corresponding to that trigger condition and transmits it to the audio output device 100.
  • the audio output device 100 outputs, by voice, the content for output received from the server device 200. In this way, content corresponding to the driving situation of the vehicle Ve is output to the user by voice.
  • the driving situation information may include, for example, at least one piece of information that can be acquired based on the functions of each unit of the voice output device 100, such as the position of the vehicle Ve, the direction of the vehicle, traffic information around the position of the vehicle Ve (including speed regulations and congestion information), the current time, and the destination. The driving situation information may also include any of the sound (excluding the user's speech) obtained by the microphone 117, the image captured by the exterior camera 119, and the image captured by the in-vehicle camera 120. The driving situation information may further include information received from the server device 200 through the communication unit 111.
  • FIG. 4 is a diagram for explaining the data structure of text data stored in the server device.
  • the storage unit 212 of the server device 200 stores, for example, text data TX having a data structure as shown in FIG. 4.
  • the text data TX has fixed parts, corresponding to portions in which a predetermined wording is maintained regardless of the driving situation indicated by the driving situation information, and variable parts, corresponding to portions in which the wording changes according to that driving situation.
  • specifically, the text data TX includes three fixed parts FD corresponding to the words "required time", "distance", and "will be", a variable part VDA corresponding to the words "50 minutes shorter", and a variable part VDB corresponding to the words "10 km shorter". The variable part VDA is arranged sandwiched between the tag information TGA and TGB, and the variable part VDB is arranged sandwiched between the tag information TGC and TGD.
  • the tag information TGA to TGD includes setting information for outputting voice while emphasizing the variable parts VDA and VDB.
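The FIG. 4 structure can be pictured as an ordered list of labeled segments, with each variable part sandwiched between opening and closing tag information. The sketch below follows the labels from the description (FD, VDA, VDB, TGA-TGD); the list representation, the English word order, and the `<emph>` markup are assumptions for illustration.

```python
# Text data TX as an ordered list of (label, text) segments. The tag
# information TGA-TGD sandwiches the variable parts VDA and VDB; the
# fixed parts FD keep their wording regardless of the driving situation.
text_data_tx = [
    ("FD",  "The required time will be "),
    ("TGA", "<emph>"), ("VDA", "50 minutes shorter"), ("TGB", "</emph>"),
    ("FD",  " and the distance "),
    ("TGC", "<emph>"), ("VDB", "10 km shorter"), ("TGD", "</emph>"),
    ("FD",  "."),
]

# Concatenating the segments yields the tagged string handed to the TTS step.
tagged = "".join(text for _, text in text_data_tx)
```

Keeping the tags as separate segments, rather than baked into the fixed wording, is what lets only the variable parts be re-rendered with emphasis.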
  • FIG. 5 is a diagram showing a specific example of text data stored in the server device.
  • Text data having a data structure similar to that of text data TX includes, for example, the data shown in FIG. Note that in the text data of FIG. 5, description of the tag information shown in the description of the text data TX is omitted for convenience of illustration.
  • [Variable-A] corresponds to the variable part, and the other part corresponds to the fixed part. Therefore, for example, if the driving situation information includes information indicating that the set value of the volume of the speaker 118 is 8, the text data TXA in which [Variable-A] is replaced with "8" is obtained. Further, according to such text data TXA, the portion of "8" is output as voice while being emphasized. Note that [Variable-A] may be replaced with any numerical value as long as it can be included in the driving situation information.
  • [Variable-B] corresponds to the variable part, and the other part corresponds to the fixed part. Therefore, for example, if the driving status information includes information indicating that the current location of the vehicle Ve is Kawagoe City, Saitama Prefecture, then [Variable-B] is replaced with "Kawagoe City, Saitama Prefecture". Text data TXB is obtained. Further, according to such text data TXB, the portion of "Kawagoe City, Saitama Prefecture" is output as voice while being emphasized. Note that [Variable-B] may be replaced with any place name as long as it can be included in the driving situation information.
  • [Variable-C] corresponds to the variable part, and the other parts correspond to the fixed part. Therefore, for example, if the driving situation information includes information indicating that the reservation for restaurant R, at which a passenger of the vehicle Ve wishes to stop, has succeeded, text data TXC in which [Variable-C] is replaced with "restaurant R" is acquired. Further, according to such text data TXC, the portion "restaurant R" is output as voice while being emphasized. Note that [Variable-C] may be replaced with any store name as long as it can be included in the driving situation information.
  • [Variable-F] may be replaced with a wording different from "today", or may be set to blank (silent period).
  • [Variable-G] may be replaced with any place name as long as it can be included in the driving situation information.
  • [Variable-H] may be replaced with a word representing any weather condition as long as it can be included in the driving situation information.
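The [Variable-A] to [Variable-H] examples above amount to placeholder substitution driven by the driving situation information. A possible sketch in Python follows; the fixed wording around the placeholder and the `<emph>` markup are invented for illustration, while the `[Variable-X]` placeholder syntax follows the examples.

```python
import re

# Hypothetical sketch of the [Variable-X] substitution: each placeholder is
# replaced by a value drawn from the driving situation information and
# wrapped in emphasis tag information, as in text data TXA/TXB/TXC.

def fill(template: str, driving_info: dict) -> str:
    def repl(match: re.Match) -> str:
        value = driving_info[match.group(1)]   # e.g. key "Variable-B"
        return f"<emph>{value}</emph>"         # emphasized variable part
    return re.sub(r"\[(Variable-[A-Z])\]", repl, template)

txb = fill("You are now driving through [Variable-B].",
           {"Variable-B": "Kawagoe City, Saitama Prefecture"})
```

Only the substituted span carries emphasis markup, so the fixed part is read at the default volume, pitch, and speed.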
  • FIG. 6 is a flowchart for explaining the processing performed in the server device 200.
  • the control unit 114 of the voice output device 100 acquires driving situation information related to the current driving situation of the vehicle Ve and transmits it to the server device 200.
  • the server device 200 acquires the driving situation information from the voice output device 100 (step S11).
  • control unit 214 of the server device 200 determines whether or not the driving status information acquired in step S11 satisfies the trigger condition (step S12).
  • when the control unit 214 determines that the driving situation information acquired in step S11 of FIG. 6 does not satisfy the trigger condition of the table data TB (step S12: NO), it performs the operation of step S11 again.
  • when the control unit 214 determines that the driving situation information acquired in step S11 satisfies the trigger condition (step S12: YES), it acquires text data in which tag information for outputting voice while emphasizing the variable part is added to the variable part (step S13).
  • control unit 214 sets the wording of the variable portion included in the text data acquired in step S13 based on the driving situation information acquired in step S11 (step S14).
  • the control unit 214 acquires the text data in which the wording of the variable part was set in step S14 as content for output, and outputs the acquired content for output to the voice output device 100 (step S15). In this way, the processing of the server device 200 in FIG. 6 ends.
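The server-side flow of steps S11 to S15 can be sketched as a single dispatch function. The data shapes below are assumptions: each table-data entry pairs a trigger predicate with a tagged template, and the trigger wording and `<emph>` markup are illustrative.

```python
from typing import Optional

# Sketch of steps S11-S15 under assumed data shapes: when the received
# driving situation information satisfies a trigger condition, the matching
# tagged template is fetched, its variable part is filled, and the resulting
# content is returned for voice output.

def serve_once(driving_info: dict, table_data: list) -> Optional[str]:
    for trigger, template in table_data:            # table data TB
        if trigger(driving_info):                   # step S12: trigger met?
            return template.format(**driving_info)  # steps S13-S15
    return None                                     # step S12: NO -> wait again

table = [
    (lambda info: info.get("reservation_ok", False),
     "Your reservation at <emph>{shop}</emph> has been completed."),
]
out = serve_once({"reservation_ok": True, "shop": "restaurant R"}, table)
```

A real server would loop, receiving fresh driving situation information each iteration, rather than being called once.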
  • the audio output device 100 audio-outputs the content received from the server device 200 to passengers of the vehicle Ve.
  • control unit 214 of the server device 200 has a function as a content acquisition unit. Further, according to this embodiment, the communication unit 211 of the server device 200 has a function as an output unit.
  • as described above, according to this embodiment, text data including a variable part and tag information for outputting voice while emphasizing the variable part is acquired according to the driving situation of the vehicle, and the acquired text data is output as voice. Therefore, according to the present embodiment, important information that may be included in content can be recognized more easily than before. Further, for example, by creating data such as the text data TXA to TXF in advance, important information in various categories can be voice-output while being emphasized.
  • in a modification in which the control unit 114 has the function of the content acquisition unit and the speaker 118 has the function of the output unit, a series of processes substantially similar to those in FIG. 6 can be performed in the audio output device 100.
  • Non-transitory computer readable media include various types of tangible storage media.
  • Examples of non-transitory computer-readable media include magnetic storage media (e.g., floppy disks, magnetic tapes, hard disk drives), magneto-optical storage media (e.g., magneto-optical discs), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memory (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)).
  • 100 audio output device; 200 server device; 111, 211 communication unit; 112, 212 storage unit; 113 input unit; 114, 214 control unit; 115 sensor group; 116 display unit; 117 microphone; 118 speaker; 119 exterior camera; 120 in-vehicle camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

A content output device includes a content acquisition unit and an output unit. The content acquisition unit acquires, according to a driving situation of a vehicle, content including a variable part and tag information for outputting audio while emphasizing the variable part. The output unit outputs the content.
PCT/JP2021/038223 2021-10-15 2021-10-15 Content output device, content output method, program and storage medium WO2023062816A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/038223 WO2023062816A1 (fr) 2021-10-15 2021-10-15 Content output device, content output method, program and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/038223 WO2023062816A1 (fr) 2021-10-15 2021-10-15 Content output device, content output method, program and storage medium

Publications (1)

Publication Number Publication Date
WO2023062816A1 true WO2023062816A1 (fr) 2023-04-20

Family

ID=85988220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/038223 WO2023062816A1 (fr) 2021-10-15 2021-10-15 Content output device, content output method, program and storage medium

Country Status (1)

Country Link
WO (1) WO2023062816A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002312157A (ja) * 2001-04-13 2002-10-25 Yoshito Suzuki Voice guidance monitor software
JP2010175717A (ja) * 2009-01-28 2010-08-12 Mitsubishi Electric Corp Speech synthesizer


Similar Documents

Publication Publication Date Title
US20190120649A1 Dialogue system, vehicle including the dialogue system, and accident information processing method
JP6604151B2 (ja) Speech recognition control system
US11189274B2 Dialog processing system, vehicle having the same, dialog processing method
JP2006317573A (ja) Information terminal
KR20220058492A (ko) Machine learning model fusing emergency vehicle audio and visual detection
JP7020098B2 (ja) Parking lot evaluation device, parking lot information provision method, and program
JP5018671B2 (ja) Vehicle navigation device
JP2023164659A (ja) Information processing device, information output method, program, and storage medium
JP2019100130A (ja) Vehicle control device and computer program
WO2023062816A1 (fr) Content output device, content output method, program and storage medium
JP2023105143A (ja) Information processing device, information output method, program, and storage medium
WO2023163197A1 (fr) Content evaluation device, content evaluation method, program, and storage medium
WO2023286826A1 (fr) Content output device, content output method, program, and storage medium
WO2023163196A1 (fr) Content output device, content output method, program, and recording medium
WO2023286827A1 (fr) Content output device, content output method, program, and storage medium
JP4575493B2 (ja) Navigation device, route guidance method, and program
WO2023162192A1 (fr) Content output device, content output method, program, and recording medium
WO2023276037A1 (fr) Content output device, content output method, program, and storage medium
US20240134596A1 Content output device, content output method, program and storage medium
WO2023162189A1 (fr) Content output device, content output method, program, and storage medium
WO2023112147A1 (fr) Voice output device, voice output method, program, and storage medium
WO2023063405A1 (fr) Content generation device, content generation method, program, and recording medium
WO2023112148A1 (fr) Audio output device, audio output method, program, and storage medium
WO2023062814A1 (fr) Audio output device, audio output method, program, and storage medium
WO2023062817A1 (fr) Speech recognition device, control method, program, and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21960672

Country of ref document: EP

Kind code of ref document: A1