JP2010130223A - Voice activation system and voice activation method - Google Patents

Voice activation system and voice activation method

Info

Publication number
JP2010130223A
JP2010130223A (application JP2008301496A)
Authority
JP
Japan
Prior art keywords
voice
unit
mobile terminal
command
voice recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2008301496A
Other languages
Japanese (ja)
Other versions
JP2010130223A5 (en)
Inventor
Noriaki Inoue
Satoshi Ota
典昭 井上
悟司 太田
Original Assignee
Fujitsu Ten Ltd
富士通テン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ten Ltd
Priority to JP2008301496A
Publication of JP2010130223A
Publication of JP2010130223A5
Legal status: Pending

Abstract

To enable a driver to safely perform voice operations on a portable terminal device while driving, and a passenger to easily perform voice operations on a predetermined portable terminal device.
A voice operation system and voice operation method input a voice uttered by a user, recognize the input voice, convert the voice recognition result into an operation command, and execute the operation command. The in-vehicle device stores distribution information that associates, with a device identifier identifying each device, whether the voice input unit, the voice recognition unit, and the command execution unit exist in that device; based on the distribution information, the output of the voice input means is distributed to the voice recognition means, and the output of the voice recognition means is distributed to the command execution means.
[Selection] Figure 3

Description

  The present invention relates to a voice operation system and a voice operation method for performing voice operation of a mobile terminal device using an in-vehicle device, and more particularly to a voice operation system and a voice operation method with which a driver can safely perform voice operation of a mobile terminal device while driving, and a passenger can easily perform voice operation of a predetermined mobile terminal device.

  With the spread of car navigation systems using GPS (Global Positioning System), in-vehicle devices having a navigation function are increasingly mounted on vehicles. At the same time, it is indispensable that a driver can operate such devices safely while driving.

  For example, there are in-vehicle devices equipped with a speech recognition engine that recognizes input speech and converts it into character data, so that the navigation function can be operated by voice. However, equipping the in-vehicle device with its own speech recognition engine adds to its cost, which is not preferable.

  On the other hand, portable terminal devices such as mobile phones are becoming inexpensive, and those equipped with a speech recognition engine and those equipped with a navigation function have become widespread. In addition, portable terminal devices equipped with a short-range wireless communication function such as Bluetooth (registered trademark) have also become widespread.

  For these reasons, efforts have been made to link the mobile terminal device and the vehicle-mounted device with such a short-range wireless communication function and to use the function on the mobile terminal device side on the vehicle-mounted device side. This makes it possible to reduce the price of the in-vehicle device.

  For example, Patent Document 1 discloses a technique for transmitting the display screen of a mobile terminal device to an in-vehicle device and displaying the screen generated by the mobile terminal device on the display of the in-vehicle device.

JP 2003-244343 A

  However, in the linkage system of Patent Document 1, the mobile terminal device is remotely operated through the operation screen of the in-vehicle device, and there is a problem in that the mobile terminal device cannot be operated by voice.

  For example, consider a case where no voice recognition engine is mounted on the in-vehicle device but one is mounted on the driver's mobile terminal device.

  In this case, since the driver is prohibited by law from operating a portable terminal device while driving, the driver cannot touch the portable terminal device. For this reason, the speech recognition engine of the mobile terminal device cannot be used from the in-vehicle device.

  In addition, when the driver's mobile terminal device has a navigation function and the passenger's mobile terminal device does not, the passenger may operate the driver's mobile terminal device in order to use the navigation function.

  However, since a mobile terminal device holds private personal information such as an address book, outgoing/incoming call history, and mail, it is not preferable for it to be operated by anyone other than its owner.

  For these reasons, a major issue has been how to realize a voice operation system and a voice operation method with which, when a mobile terminal device is operated through an in-vehicle device, a driver can safely perform voice operation while driving, and an application installed on the driver's mobile terminal device can be operated even from a passenger's mobile terminal device.

  The present invention has been made to solve the above-described problems of the prior art, and an object thereof is to provide a voice operation system and a voice operation method with which, when a mobile terminal device is operated through an in-vehicle device, a driver can safely perform voice operation of the mobile terminal device even while driving, and a passenger can easily perform voice operation of a predetermined mobile terminal device.

  In order to solve the above-described problems and achieve the object, the present invention is a voice operation system that performs voice operation of a mobile terminal device using an in-vehicle device, comprising: voice input means for inputting a voice uttered by a user; voice recognition means for recognizing the input voice; and command execution means for converting the voice recognition result of the voice recognition means into an operation command and executing the operation command. The in-vehicle device comprises distribution information storage means for storing distribution information that associates, with a device identifier identifying each device, whether the voice input means, the voice recognition means, and the command execution means exist in that device, and distribution means for distributing, based on the distribution information, the output of the voice input means to the voice recognition means and the output of the voice recognition means to the command execution means.

  According to the present invention, a voice uttered by a user is input, the input voice is recognized, the voice recognition result is converted into an operation command, and the operation command is executed. The in-vehicle device stores distribution information that associates, with a device identifier identifying each device, whether the voice input unit, the voice recognition unit, and the command execution unit exist in that device, and, based on the distribution information, distributes the output of the voice input means to the voice recognition means and the output of the voice recognition means to the command execution means. This makes it possible to provide a voice operation system and a voice operation method having the effects described above.

  Exemplary embodiments of a voice operation system and a voice operation method according to the present invention will be described below in detail with reference to the accompanying drawings. In the following, a voice operation system in which an in-vehicle device called a DA (Display Audio) mounted on a vehicle and mobile terminal devices cooperate through a short-range wireless communication function will be described.

  Here, DA refers to an in-vehicle device that is mounted with only basic functions such as a display function, an audio playback function, and a communication function with a mobile terminal device, and becomes multi-functional by cooperating with the mobile terminal device.

  In the following, an outline of the voice operation system according to the present invention will be described with reference to FIG. 1, and then an embodiment of the voice operation system according to the present invention will be described with reference to FIGS. 2 to 16.

  In a conventional linkage system in which a portable terminal device and an in-vehicle device cooperate, even if the driver's portable terminal device is equipped with a navigation function, the driver cannot operate his or her own portable terminal device while driving.

  Therefore, in the voice operation system and the voice operation method according to the present invention, an application (hereinafter simply referred to as an "application") such as a navigation function installed on the mobile terminal device is operated by voice.

  FIG. 1 is a diagram illustrating an outline of the voice operation system according to the present embodiment. As shown in the table at the top of the figure, the in-vehicle device stores, as distribution information, whether the voice recognition engine and the application exist in each device and the information specifying the result output display destination, each associated with a device identifier identifying the device.

  In the following case, the voice recognition engine is mounted on the device with device identifier "333", the navigation application is mounted on the device with device identifier "222", and the result output display destination is set to the device with device identifier "111".

  For example, when the passenger utters the command "destination home" into the microphone of the passenger's portable terminal A, to which the device identifier "222" is assigned (see (1) in FIG. 1), the input voice data is transmitted to the DA, to which the device identifier "111" is assigned.

  Then, based on the distribution information indicating which device has the voice recognition engine, the DA forwards the input voice data. In this case, the input voice data is transmitted to the driver's portable terminal B, to which the device identifier "333" is assigned (see (2) in the figure).

  The voice recognition engine installed on portable terminal B recognizes the received input voice data (see (3) in the figure) and sends the voice recognition result, converted into the character string "destination home", to the DA (see (4) in the figure).

  Meanwhile, the DA that has received the voice recognition result displays it according to the result output display destination in the distribution information. In this case, the command character string "destination home" is displayed on the display of the DA (see (5) in the figure).

  The passenger then responds to the displayed voice recognition result, again through the microphone of portable terminal A. In this case, since the voice recognition result is correct, the passenger utters "OK" in response (see (6) in FIG. 1).

  Thereafter, the input voice data is transmitted to the DA, the "OK" input voice data is converted into a character string by the voice recognition engine on portable terminal B in the same manner as voice recognition steps (2) to (4) above, and portable terminal B transmits the voice recognition result to the DA.

  Then, based on the distribution information indicating which device has the application, the DA forwards the command character string. In this case, since the application exists on portable terminal A, the DA transmits the command character string to portable terminal A.

  On portable terminal A, processing based on the received command character string "destination home", such as displaying a map and providing route guidance with "home" as the destination, is then executed (see (7) in the figure).

  Here, voice input is performed through the microphone of the passenger's portable terminal A, but the driver may also issue commands through a hands-free microphone connected to the DA.

  As described above, whichever portable terminal devices the voice recognition engine and the application reside on, the driver can safely perform voice operation of a portable terminal device, and the passenger can also easily perform voice operation of a predetermined portable terminal device.
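  The distribution described above amounts to a table lookup on device capabilities. The following Python sketch illustrates one possible shape of that lookup; all names (`distribution_info`, `route_voice_input`, and so on) are illustrative assumptions, not identifiers from this specification.

```python
# Illustrative sketch of the DA's distribution table and routing decisions.
# Device identifiers follow the example in the text: DA = "111",
# passenger's portable terminal A = "222" (application), driver's
# portable terminal B = "333" (voice recognition engine).
distribution_info = {
    "111": {"recognizer": False, "application": False, "output_display": True},
    "222": {"recognizer": False, "application": True,  "output_display": False},
    "333": {"recognizer": True,  "application": False, "output_display": False},
}

def route_voice_input(table):
    """Input voice data goes to the device holding the recognition engine."""
    return next(dev for dev, caps in table.items() if caps["recognizer"])

def route_recognition_result(table):
    """A recognized command string goes to the device holding the application."""
    return next(dev for dev, caps in table.items() if caps["application"])

def route_result_display(table):
    """The recognition result is shown on the configured display device."""
    return next(dev for dev, caps in table.items() if caps["output_display"])
```

  With this table, voice data captured on "222" is routed to "333" for recognition, the result is shown on "111", and the confirmed command string is delivered back to "222", matching steps (1) to (7) of FIG. 1.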

  Next, the apparatus configuration of the voice operation system according to the present embodiment will be described with reference to FIG. FIG. 2 is a diagram illustrating an apparatus configuration pattern of the voice operation system according to the present embodiment.

  In the first embodiment, as shown in (A) of FIG. 2, the vehicle contains a portable terminal device equipped with a microphone, a speaker, and an application, and a portable terminal device equipped with a voice recognition engine, each capable of communication processing with the DA. This device configuration will be described first.

  In the second embodiment, as shown in (B) of FIG. 2, the vehicle contains a portable terminal device equipped with a microphone and a display, a portable terminal device equipped with a voice recognition engine, and a portable terminal device equipped with an application, each capable of communication processing with the DA.

  In the third embodiment, as shown in (C) of FIG. 2, the vehicle contains a portable terminal device equipped with an application and a voice recognition engine, capable of communication processing with a DA equipped with a microphone and a touch-panel display.

  Hereinafter, the first embodiment of the voice operation system and the voice operation method according to the present invention will be described in detail with reference to the drawings. FIG. 3 is a block diagram of the configuration of the voice operation system according to the first embodiment. As shown in FIG. 3, the voice operation system includes the in-vehicle device 10, the mobile terminal device 20, and the mobile terminal device 30.

  First, the configuration of the in-vehicle device 10 will be described. As shown in the figure, the in-vehicle device 10 includes a short-range communication unit 11, a storage unit 12, and a control unit 13. The storage unit 12 stores distribution information 12a, and the control unit 13 includes a reception unit 13a and a distribution unit 13b.

  The short-range communication unit 11 establishes communication links with the mobile terminal device 20 and the mobile terminal device 30 using short-range wireless communication such as Bluetooth (registered trademark), and performs communication processing between the in-vehicle device 10 and each mobile terminal device over the established links.

  Here, Bluetooth (registered trademark) is a short-range wireless communication standard for wireless communication within a radius of about 10 m using the 2.4 GHz frequency band, and in recent years it has been widely applied to electronic devices such as mobile phones and personal computers.

  Although the first embodiment describes the case where communication between the in-vehicle device 10 and each mobile terminal device is performed using Bluetooth (registered trademark), other wireless communication standards such as Wi-Fi (registered trademark) or ZigBee (registered trademark) may be used. The communication between the in-vehicle device 10 and each mobile terminal device may also be performed by wired communication.

  The storage unit 12 is a storage unit configured by a nonvolatile RAM (Random Access Memory), an HDD (Hard Disk Drive), or the like, and stores distribution information 12a.

  The distribution information 12a is information that associates, with a device identifier identifying each device, whether a speech recognition engine and an application exist in that device, together with information specifying the voice input source device and the voice recognition result output destination device; it is stored in the storage unit 12.

  The distribution information 12a stored in the storage unit 12 can be changed from the DA's display or the like. The items of the distribution information 12a will be described later with reference to FIG. 4.

  The receiving unit 13a is a processing unit that receives input voice data and voice recognition result data from the mobile terminal device 20 and the mobile terminal device 30 via the short-range communication unit 11 and passes the received data to the distribution unit 13b.

  The distribution unit 13b is a processing unit that receives data from the receiving unit 13a and, based on the distribution information 12a, transmits the data to the mobile terminal device 20 or the mobile terminal device 30 via the short-range communication unit 11.

  Next, the configuration of the mobile terminal device 20 will be described. As shown in the figure, the mobile terminal device 20 includes a short-range communication unit 21 that communicates with the in-vehicle device 10, a microphone 22, a speaker 23, operation command information 24, a voice input unit 25, a voice output unit 26, and an application 27.

  Note that the actual mobile terminal device 20 has functional units other than those illustrated (for example, an operation unit and a display unit), but FIG. 3 shows only the components necessary for explaining the characteristics of the mobile terminal device 20 according to the first embodiment.

  Like the short-range communication unit 11 of the in-vehicle device 10, the short-range communication unit 21 establishes a communication link with the in-vehicle device 10 using short-range wireless communication such as Bluetooth (registered trademark) and performs communication processing between the mobile terminal device 20 and the in-vehicle device 10 over the established link.

  The microphone 22 is a device that converts sound into an electric signal, and passes the converted electric signal to the sound input unit 25 as sound data. The speaker 23 is a device that converts audio data, which is an electrical signal from the audio output unit 26, into physical vibration and reproduces it as sound.

  The operation command information 24 is information that stores, for each command converted into a format executable by the application 27 (hereinafter referred to as an "operation command"), the command character string associated with the operation command and an execution restriction on the operation command; it is stored in a nonvolatile RAM or the like. The operation command information 24 will be described later with reference to FIG. 5.

  The voice input unit 25 is a processing unit that performs processing to pass input voice data from the microphone 22 to the short-range communication unit 21. The audio output unit 26 is a processing unit that performs processing of passing audio data received via the short-range communication unit 21 to the speaker 23.

  The application 27 is an application that runs on the mobile terminal device 20. Examples include a navigation application that acquires the current position using a GPS (Global Positioning System) antenna and provides route guidance to a destination by superimposing the current position on map information, and a music playback application that plays music.

  To operate on the mobile terminal device 20 based on a command character string received via the short-range communication unit 21, the application 27 converts the character string into an operation command that can be processed inside the device and executes processing based on the converted operation command and the operation command information 24.

  Next, the configuration of the mobile terminal device 30 will be described. As shown in the figure, the mobile terminal device 30 includes a short-range communication unit 31 that communicates with the in-vehicle device 10, a voice recognition unit 32, and a result generation unit 33.

  Note that the actual mobile terminal device 30 also includes functional units other than those illustrated (for example, an operation unit, a display unit, a speaker, and a voice input unit such as a microphone), but FIG. 3 shows only the components necessary for explaining the characteristics of the mobile terminal device 30.

  The short-range communication unit 31 is the same as the short-range communication unit 21, so its description is omitted here. The voice recognition unit 32 conditions the input voice data received via the short-range communication unit 31 so that it is easy to analyze, for example by removing noise, and then analyzes the spoken language and extracts it as character string data (hereinafter referred to as "voice recognition").

  The result generation unit 33 is a processing unit that performs processing to convert the speech recognition result recognized by the speech recognition unit 32 into image data for result display or speech data for result speech output.

  The in-vehicle device according to the present invention corresponds to the in-vehicle device 10; the portable terminal devices to the mobile terminal device 20 and the mobile terminal device 30; the voice input means to the microphone 22 and the voice input unit 25; the voice recognition means to the voice recognition unit 32; the command execution means to the application 27; and the distribution means to the distribution unit 13b.

  Next, the distribution information 12a will be described in detail with reference to FIG. 4. FIG. 4 is a diagram showing the distribution information. As shown in the figure, the distribution information 12a includes, for each device identifier, a "voice recognition" item, an "application" item, an "input priority" item, an "output display" item, and an "output voice" item, and the settings can be changed by displaying them on the DA's display.

  The device identifier identifies a device; the DA assigns a predetermined device identifier when it recognizes each device with which it performs communication processing. The "voice recognition" item indicates whether a voice recognition engine exists in the device.

  The "application" item indicates whether an application exists in the device. The "input priority" item is information the DA uses to determine which device's voice to process with priority when voice is emitted from several devices simultaneously.

  For example, when voice is emitted simultaneously from the devices with device identifiers "222" and "444", since the "input priority" of device identifier "222" is "1" and that of device identifier "444" is "3", the DA processes the voice from the device with device identifier "222", which has the higher priority.
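  The priority decision above can be sketched as selecting, among the devices speaking at the same time, the one with the smallest "input priority" value. This is an illustrative sketch only; the names and data layout are assumptions.

```python
# Sketch of the "input priority" tie-break when several devices emit
# voice simultaneously. Lower values win, matching the example in the
# text where priority "1" (device "222") beats priority "3" (device "444").
input_priority = {"222": 1, "444": 3}

def select_input_device(simultaneous_devices, priorities):
    """Return the device whose voice the DA should process first."""
    return min(simultaneous_devices, key=lambda dev: priorities[dev])
```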

  The "output display" item sets the device on which the voice recognition result is displayed. For example, to display the result on the display of the device with device identifier "111" regardless of which device the voice was input from, "a" is set in the "output display" item of device identifier "111".

  To display the result on the display of the mobile terminal device from which the voice was input, "b" is set in the "output display" item of the relevant mobile terminal devices; the result is then displayed on whichever of the devices set to "b" the voice was input from.

  Furthermore, to display the result on the displays of all specified mobile terminal devices, "c" is set in their "output display" items; the result is then displayed on the displays of all mobile terminal devices set to "c".

  The "output voice" item sets the device that outputs the voice recognition result as voice. As with the "output display" item, "a" is set in the "output voice" item of a terminal that should output the voice recognition result as voice regardless of which device the voice was input from.

  Similarly, to output the result as voice on the mobile terminal device from which the voice was input, "b" is set in the "output voice" item of the relevant mobile terminal devices, and to output the result as voice on all specified mobile terminal devices, "c" is set in their "output voice" items.

  Here, the "output display" and "output voice" items are set to "a", "b", and "c", but they may be expressed by other characters, numerical values, or the like.
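  One possible reading of the "a"/"b"/"c" codes above, as a sketch: the precedence among the three codes is not stated in this specification, so this sketch simply checks "a" first, then "b", then "c". All names are illustrative assumptions.

```python
# Sketch of resolving the "output display" setting. Per the text:
#   "a" - display on that fixed device, regardless of the input source;
#   "b" - display on whichever configured device the voice came from;
#   "c" - display on all devices configured with "c".
# The precedence among codes is an assumption of this sketch.
def resolve_output_devices(output_display, input_device):
    """Return the list of devices that should display the result."""
    targets = [dev for dev, code in output_display.items() if code == "a"]
    if not targets:
        b_devices = [dev for dev, code in output_display.items() if code == "b"]
        if input_device in b_devices:
            targets = [input_device]
        else:
            targets = [dev for dev, code in output_display.items() if code == "c"]
    return targets
```

  The same resolution would apply to the "output voice" item, which uses the identical codes.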

  The distribution information 12a is displayed on the DA's display so that its settings can be changed; alternatively, the DA may transmit the distribution information 12a to a predetermined mobile terminal device, and the receiving mobile terminal device may change the settings.

  Next, the operation command information 24 will be described in detail with reference to FIG. 5. FIG. 5 is a diagram showing an operation command change screen. As shown in the table in the figure, the operation command information 24 includes a "character string" item and a "use" item for each operation command.

  The “character string” item is a command character string associated with the operation command. When the command character string of the “character string” item is input by voice, the corresponding operation command is executed by the application 27.

  The "use" item sets a restriction on command operation, that is, whether the operation command may be executed by voice operation. For example, if "music playback" of "command: 30" is set to unusable ("×" in this case), the operation corresponding to "music playback" is not executed even when "music playback" is input by voice.

  For example, as shown in the lower part of the figure, operation commands can be set on the mobile terminal device 20, on which the application 27 is installed, through an operation command setting screen having an "Add" button, an "Update" button, and an "End" button.

  When the "Add" button is pressed, a new operation command can be registered in the operation command information 24. When the "Update" button is pressed, an operation command can be updated after changing the operation command information 24.

  Specifically, as shown in the figure, the command character string of "command: 10" is set to "OK", but if it is changed to another command character string such as "execute" or "yes", "command: 10" is then executed by voice input of "execute" or "yes".

  When the "End" button is pressed, the operation command setting process ends. Note that common operation commands used by the application may be set as initial values at the time of shipment of the mobile terminal device 20.
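  The command table and the "use" restriction described above can be sketched as follows. The structure, field names, and helper functions are illustrative assumptions; only the example values ("command: 10" triggered by "OK", "command: 30" triggered by "music playback" and set to unusable) come from the text.

```python
# Sketch of the operation command information 24: each operation command
# has a trigger character string and a "use" flag that permits or blocks
# execution by voice operation.
operation_commands = {
    "command:10": {"string": "OK", "enabled": True},
    "command:30": {"string": "music playback", "enabled": False},
}

def lookup_command(spoken_string, commands):
    """Return the operation command id for a recognized character string,
    or None if no command matches or its "use" flag forbids voice operation."""
    for cmd_id, entry in commands.items():
        if entry["string"] == spoken_string:
            return cmd_id if entry["enabled"] else None
    return None

def remap_command(cmd_id, new_string, commands):
    """Change a command's trigger string, e.g. "OK" -> "execute",
    as on the operation command change screen."""
    commands[cmd_id]["string"] = new_string
```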

  Here, FIG. 6 shows the settings of the distribution information 12a used in explaining the operation command input processing procedure and the result confirmation processing procedure performed by the voice operation system. FIG. 6 is a diagram illustrating the distribution information according to the first embodiment.

  In this case, it is assumed that the device identifier "111" is assigned to the in-vehicle device 10, the device identifier "222" to the passenger's mobile terminal device 20, and the device identifier "333" to the driver's mobile terminal device 30.

  The voice recognition engine is mounted on the mobile terminal device 30, and the application on the mobile terminal device 20. The "input priority" is assumed to be set to "1" for the mobile terminal device 20 and "2" for the in-vehicle device 10. The following description also assumes that the voice recognition result is set to be output as voice from the mobile terminal device 20.

  First, an operation command input processing procedure performed by the voice operation system will be described with reference to FIG. FIG. 7 is a sequence diagram of the operation command input processing procedure according to the first embodiment.

  The in-vehicle device 10 is provided with a button (not shown) for starting voice operation; when this start button is pressed, voice operation becomes possible.

  As shown in the figure, when the passenger inputs a command character string by voice from the portable terminal device 20 (step S101), the portable terminal device 20 transmits the input voice data to the in-vehicle device 10 via the short-range communication unit 21.

  Then, the distribution unit 13b of the in-vehicle device 10 determines, based on the distribution information 12a, that the device on which the voice recognition engine is mounted is the mobile terminal device 30 (step S102), and sends the input voice data to the mobile terminal device 30 via the short-range communication unit 11.

  Meanwhile, the voice recognition unit 32 of the portable terminal device 30 recognizes the received input voice data (step S103) and converts it into a command character string; the result generation unit 33 further generates voice data of the command character string for voice output (step S104), and the voice data and the command character string are sent to the in-vehicle device 10 via the short-range communication unit 31.

  Thereafter, the distribution unit 13b of the in-vehicle device 10 determines, based on the distribution information 12a, that the output destination of the voice recognition result and the device on which the application is mounted are both the mobile terminal device 20 (step S105), and sends the voice data and the command character string to the mobile terminal device 20 via the short-range communication unit 11.

  Then, the mobile terminal device 20 outputs the received voice data through the speaker 23 (step S106), and stores the received command character string in a temporary storage memory or the like (step S107).

  A result confirmation processing procedure performed by the voice operation system following the processing procedure of FIG. 7 will be described with reference to FIG. FIG. 8 is a sequence diagram of the result confirmation processing procedure according to the first embodiment.

  As shown in the figure, when the passenger inputs the confirmation character string by voice from the mobile terminal device 20 (step S201), the mobile terminal device 20 sends the input voice data to the in-vehicle device 10 via the short-range communication unit 21.

  Then, the distribution unit 13b of the in-vehicle device 10 determines, based on the distribution information 12a, that the device on which the voice recognition engine is installed is the mobile terminal device 30 (step S202), and sends the input voice data to the mobile terminal device 30 determined in step S202 via the short-range communication unit 11.

  Meanwhile, the voice recognition unit 32 of the mobile terminal device 30 recognizes the received input voice data (step S203), the result generation unit 33 generates character string data of the voice recognition result as a confirmation character string (step S204), and the confirmation character string is sent to the in-vehicle device 10 via the short-range communication unit 31.

  Thereafter, the distribution unit 13b of the in-vehicle device 10 determines, based on the distribution information 12a, that the device on which the application is installed is the mobile terminal device 20 (step S205), and sends the confirmation character string to the mobile terminal device 20 via the short-range communication unit 11.

  Then, if the received confirmation character string is "OK", the application 27 of the mobile terminal device 20 generates an operation command from the command character string stored in the temporary storage memory or the like, based on the operation command information 24 (step S206), and executes the application based on the operation command (step S207).
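The final confirm-then-execute step (steps S206 and S207) amounts to converting the stored command character string into an operation command only when the confirmation string is "OK". The sketch below is purely illustrative — the command strings, operation command names, and table layout are all invented; the patent only describes a mapping held in the operation command information 24:

```python
# Hypothetical operation command information 24: maps recognized command
# character strings to operation commands the application can execute.
OPERATION_COMMAND_INFO = {
    "play music": "AUDIO_PLAY",
    "stop music": "AUDIO_STOP",
}

# Command character string held in temporary storage memory (step S107).
pending_command = "play music"

def on_confirmation(confirmation_string):
    """Step S206: generate the operation command only on an "OK" answer."""
    if confirmation_string != "OK":
        return None  # any other answer discards the pending command
    return OPERATION_COMMAND_INFO.get(pending_command)

assert on_confirmation("OK") == "AUDIO_PLAY"
assert on_confirmation("cancel") is None
```

In step S207 the application would then dispatch on the returned operation command; that dispatch is device-specific and is not sketched here.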

  As described above, in the first embodiment, when the voice recognition engine is installed in the driver's mobile terminal device 30 and the application is installed in the passenger's mobile terminal device 20, the in-vehicle device 10 is configured to distribute the voice data from the passenger's mobile terminal device 20 to the mobile terminal device 30 on which the voice recognition engine is installed.

  Thereby, for example, a passenger in the rear seat can perform voice operation without touching the driver's mobile terminal device 30 and does not need to operate the in-vehicle device 10 installed at the front of the vehicle. Therefore, the passenger can perform a voice operation in a relaxed posture without taking any action, such as leaning forward, that would hinder the driver.

  In the first embodiment, voice input is performed from the passenger's portable terminal device. However, voice input may be performed from a hands-free unit such as a headset microphone connected to the in-vehicle device 10.

  In the first embodiment, the voice operation system in which the in-vehicle device 10 and the two mobile terminal devices 20 and 30 cooperate via the short-range wireless communication function has been described. However, the in-vehicle device may instead cooperate with three or more mobile terminal devices.

  Therefore, in the second embodiment, a case where the in-vehicle device and three mobile terminal devices cooperate via the short-range wireless communication function will be described in detail with reference to FIGS. 9 to 12. FIG. 9 is a block diagram of the configuration of the voice operation system according to the second embodiment.

  As shown in the figure, the voice operation system includes an in-vehicle device 110, a mobile terminal device 120, a mobile terminal device 130, and a mobile terminal device 140.

  Since the in-vehicle device 110 and the mobile terminal device 130 are the same as the in-vehicle device 10 and the mobile terminal device 30 of the first embodiment, the description thereof is omitted here.

  Next, the configuration of the mobile terminal device 120 will be described. As shown in the figure, the mobile terminal device 120 includes a short-range communication unit 121 that communicates with the in-vehicle device 110, a microphone 122, a display 123, a voice input unit 124, and a display unit 125. .

  Note that the actual mobile terminal device 120 has functional units (for example, an operation unit, a speaker, etc.) other than those illustrated, but FIG. 9 shows only the components necessary for explaining the characteristics of the mobile terminal device 120 according to the second embodiment.

  The short-range communication unit 121 is the same as the short-range communication unit 21 of the first embodiment, and a description thereof is omitted here. Further, since the microphone 122 is the same as the microphone 22 of the first embodiment, description thereof is omitted here. The display 123 is a device that displays the image data from the display unit 125.

  The voice input unit 124 is a processing unit that passes input voice data from the microphone 122 to the short-range communication unit 121. The display unit 125 is a processing unit that passes image data received via the short-range communication unit 121 to the display 123.

  Next, the configuration of the mobile terminal device 140 will be described. As shown in the figure, the mobile terminal device 140 includes a short-range communication unit 141 that performs communication with the in-vehicle device 110, an application 142, and operation command information 143.

  The actual mobile terminal device 140 includes functional units (for example, an operation unit, a display unit, a speaker, a voice input unit such as a microphone, etc.) other than those illustrated, but FIG. 9 shows only the components necessary for explaining the characteristics of the mobile terminal device 140.

  Since the operation command information 143 is the same as the operation command information 24 of the first embodiment, description thereof is omitted here. The application 142 is the same as the application 27 of the first embodiment, and a description thereof is omitted here.

  The in-vehicle device according to the present invention corresponds to the in-vehicle device 110; the mobile terminal device to the mobile terminal devices 120, 130, and 140; the voice input means to the microphone 122 and the voice input unit 124; the voice recognition means to the voice recognition unit 132; the command execution means to the application 142; and the distribution means to the distribution unit 113b.

  Here, FIG. 10 shows the setting state of the distribution information 112a used in explaining the operation command input processing procedure and the result confirmation processing procedure performed by the voice operation system. FIG. 10 is a diagram illustrating the distribution information according to the second embodiment.

  In this case, it is assumed that the in-vehicle device 110 is assigned the device identifier "111", that the passengers' mobile terminal devices 120 and 140 are assigned the device identifiers "222" and "444" respectively, and that the driver's mobile terminal device 130 is assigned the device identifier "333".

  The voice recognition engine is installed on the mobile terminal device 130, and the application is installed on the mobile terminal device 140. The "input priority" is set to "1" for the mobile terminal device 120, "2" for the mobile terminal device 140, and "3" for the in-vehicle device 110. Further, the following description assumes that the voice recognition result is set to be displayed on the display of the mobile terminal device 120.
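The "input priority" setting described above (and in claim 3) resolves the case where several devices capture voice at the same time. The following sketch shows one plausible way to encode and apply the FIG. 10 settings; the table layout and function names are hypothetical, invented for illustration:

```python
# Hypothetical encoding of the FIG. 10 input priorities: "1" for device
# "222" (terminal 120), "2" for "444" (terminal 140), "3" for "111" (DA).
INPUT_PRIORITY = {"222": 1, "444": 2, "111": 3}

def select_input(device_ids):
    """When several devices capture voice simultaneously, keep only the
    input from the device with the highest priority (lowest number)."""
    return min(device_ids, key=lambda d: INPUT_PRIORITY[d])

# Terminal 120 wins over terminal 140 and the in-vehicle device.
assert select_input(["111", "222", "444"]) == "222"
# With terminal 120 silent, terminal 140 wins over the in-vehicle device.
assert select_input(["111", "444"]) == "444"
```

The distribution unit would apply this selection before forwarding any voice data to the recognition engine, so that only one utterance is recognized per operation.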

  First, an operation command input processing procedure performed by the voice operation system will be described with reference to FIG. FIG. 11 is a sequence diagram of the operation command input processing procedure according to the second embodiment.

  First, the in-vehicle device 110 is provided with a voice operation start button (not shown), and voice operation becomes possible when this button is pressed.

  As shown in the figure, when a passenger inputs a command character string by voice from the mobile terminal device 120 (step S301), the mobile terminal device 120 sends the input voice data to the in-vehicle device 110 via the short-range communication unit 121.

  Based on the distribution information 112a, the distribution unit 113b of the in-vehicle device 110 determines that the device on which the voice recognition engine is installed is the mobile terminal device 130 (step S302), and sends the input voice data to the mobile terminal device 130 determined in step S302 via the short-range communication unit 111.

  Meanwhile, the voice recognition unit 132 of the mobile terminal device 130 recognizes the received input voice data and converts it into a command character string (step S303); the result generation unit 133 then generates image data of the voice recognition result for display (step S304), and the image data and the command character string are sent to the in-vehicle device 110 via the short-range communication unit 131.

  Thereafter, the distribution unit 113b of the in-vehicle device 110 determines, based on the distribution information 112a, that the output destination device of the voice recognition result is the mobile terminal device 120 and that the device on which the application is installed is the mobile terminal device 140 (step S305), and sends the image data to the mobile terminal device 120 via the short-range communication unit 111.

  Then, the mobile terminal device 120 displays the received image data on the display 123 (step S306). For example, an image displayed on the DA (111) in FIG. 1 is displayed on the display 123 of the mobile terminal device 120.

  Thereafter, the in-vehicle device 110 sends the command character string to the mobile terminal device 140 via the short-range communication unit 111, and the mobile terminal device 140 stores the received command character string in a temporary storage memory or the like (step S307).

  A result confirmation processing procedure performed by the voice operation system following the processing procedure of FIG. 11 will be described with reference to FIG. 12. FIG. 12 is a sequence diagram of the result confirmation processing procedure according to the second embodiment.

  As shown in the figure, when the passenger inputs the confirmation character string by voice from the portable terminal device 120 (step S401), the portable terminal device 120 sends the input voice data to the in-vehicle device 110.

  Based on the distribution information 112a, the distribution unit 113b of the in-vehicle device 110 determines that the device on which the voice recognition engine is installed is the mobile terminal device 130 (step S402), and sends the input voice data to the mobile terminal device 130 determined in step S402 via the short-range communication unit 111.

  Meanwhile, the voice recognition unit 132 of the mobile terminal device 130 recognizes the received input voice data (step S403), the result generation unit 133 generates character string data of the voice recognition result as a confirmation character string (step S404), and the confirmation character string is sent to the in-vehicle device 110 via the short-range communication unit 131.

  Thereafter, the distribution unit 113b of the in-vehicle device 110 determines, based on the distribution information 112a, that the device on which the application is installed is the mobile terminal device 140 (step S405), and sends the confirmation character string to the mobile terminal device 140 via the short-range communication unit 111.

  Then, if the received confirmation character string is "OK", the application 142 of the mobile terminal device 140 generates an operation command from the command character string stored in the temporary storage memory or the like, based on the operation command information 143 (step S406), and executes the application based on the operation command (step S407).

  As described above, in the second embodiment, the in-vehicle device 110 is configured to perform the distribution process so that each process can be carried out regardless of which device has the voice recognition engine, the application, or the voice input/display function.

  Accordingly, a passenger can perform a voice operation without touching the driver's mobile terminal device 130, and can also perform a voice operation from a mobile terminal device that is not equipped with a voice recognition engine or application. Furthermore, since a passenger's own mobile terminal device can be used, there is no need to purchase a new device to use the voice operation system, and many passengers can use it.

  The cases described so far involve voice input from a passenger's mobile terminal device. However, when only the driver is on board, the in-vehicle device may cooperate with a single mobile terminal device.

  Therefore, in the third embodiment, a case where the in-vehicle device and one mobile terminal device cooperate via the short-range wireless communication function will be described in detail with reference to FIGS. 13 to 16. FIG. 13 is a block diagram of the configuration of the voice operation system according to the third embodiment.

  As shown in the figure, the voice operation system includes an in-vehicle device 210 and a mobile terminal device 220. First, the configuration of the in-vehicle device 210 will be described. As shown in the figure, the in-vehicle device 210 includes a short-range communication unit 211, a microphone 212, a display 213, and a control unit 214.

  The control unit 214 further includes a reception unit 214a, a distribution unit 214b, a voice input unit 214c, and a display operation unit 214d. Since the short-range communication unit 211 is the same as the short-range communication unit 11 of the first embodiment, description thereof is omitted here.

  The microphone 212 is a hands-free unit such as a headset microphone, and is an input device that allows voice operation even while the driver is driving. The microphone 212 converts voice into an electric signal and passes the converted electric signal as voice data to the voice input unit 214c.

  The display 213 includes an input / output device such as a touch panel display, displays image data from the display operation unit 214d, and acquires touch information and the like for the displayed display screen.

  Here, the touch information is information including coordinate position information on the touched display, a time interval from the touch to the next touch, and the like. The display 213 passes the acquired touch information to the display operation unit 214d.

  The reception unit 214a is a processing unit that receives voice recognition result data from the mobile terminal device 220 via the short-range communication unit 211, receives input voice data from the voice input unit 214c, further receives coordinate position information and the like from the display operation unit 214d, and passes the various received data to the distribution unit 214b.

  The distribution unit 214b is a processing unit that performs processing of receiving various data from the reception unit 214a and transmitting the received data to the display operation unit 214d or the mobile terminal device 220 based on distribution information (not shown).

  The voice input unit 214c is a processing unit that performs processing of passing input voice data from the microphone 212 to the reception unit 214a. The display operation unit 214d is a processing unit that performs processing to pass the image data received from the sorting unit 214b to the display 213 and to pass touch information received from the display 213 to the receiving unit 214a.

  Next, the configuration of the mobile terminal device 220 will be described. As shown in the figure, the mobile terminal device 220 includes a short-range communication unit 221 that communicates with the in-vehicle device 210, a voice recognition unit 222, a result generation unit 223, an application 224, and operation command information 225.

  Note that the actual mobile terminal device 220 includes functional units (for example, an operation unit, a display unit, a speaker, a voice input unit such as a microphone, etc.) other than those illustrated, but FIG. 13 shows only the components necessary for explaining the characteristics of the mobile terminal device 220.

  The short-range communication unit 221, the voice recognition unit 222, the result generation unit 223, the application 224, and the operation command information 225 are the same as the short-range communication unit 21, the voice recognition unit 32, the result generation unit 33, the application 27, and the operation command information 24 of the first embodiment, respectively, so their description is omitted here.

  The in-vehicle device according to the present invention corresponds to the in-vehicle device 210; the mobile terminal device to the mobile terminal device 220; the voice input means to the microphone 212 and the voice input unit 214c; the voice recognition means to the voice recognition unit 222; the command execution means to the application 224; and the distribution means to the distribution unit 214b.

  Here, FIG. 14 shows the setting state of the distribution information used in explaining the operation command input processing procedure and the result confirmation processing procedure performed by the voice operation system. FIG. 14 is a diagram illustrating the distribution information according to the third embodiment.

  In this case, it is assumed that the in-vehicle device 210 is assigned the device identifier "111" and that the driver's mobile terminal device 220 is assigned the device identifier "222".

  The voice recognition engine and the application are installed on the mobile terminal device 220. Since the driver cannot use the mobile terminal device 220 while driving, the "input priority" is set only for the in-vehicle device 210, as "1". Further, for the same reason, the following description assumes that the voice recognition result is set to be displayed on the display 213 of the in-vehicle device 210.

  First, an operation command input processing procedure performed by the voice operation system will be described with reference to FIG. FIG. 15 is a sequence diagram of the operation command input processing procedure according to the third embodiment.

  First, the in-vehicle device 210 is provided with a voice operation start button (not shown), and voice operation becomes possible when this button is pressed.

  As shown in the figure, when the driver inputs a command character string by voice from the hands-free microphone 212 (step S501), the distribution unit 214b of the in-vehicle device 210 determines, based on the distribution information, that the device on which the voice recognition engine is installed is the mobile terminal device 220 (step S502), and sends the input voice data to the mobile terminal device 220 determined in step S502 via the short-range communication unit 211.

  Meanwhile, the voice recognition unit 222 of the mobile terminal device 220 recognizes the received input voice data and converts it into a command character string (step S503); the result generation unit 223 then generates image data of the voice recognition result for display (step S504), and the image data and the command character string are sent to the in-vehicle device 210 via the short-range communication unit 221.

  Thereafter, the distribution unit 214b of the in-vehicle device 210 determines, based on the distribution information, that the output destination device of the voice recognition result is the in-vehicle device 210 and that the device on which the application is installed is the mobile terminal device 220 (step S505).

  Then, the in-vehicle device 210 displays the image data on the display 213 (step S506). For example, an image displayed on the DA (111) in FIG. 1 is displayed on the display 213 of the in-vehicle device 210.

  Thereafter, the in-vehicle device 210 sends the command character string to the mobile terminal device 220 via the short-range communication unit 211, and the mobile terminal device 220 stores the received command character string in a temporary storage memory or the like (step S507).

  A result confirmation processing procedure performed by the voice operation system following the processing procedure of FIG. 15 will be described with reference to FIG. 16. FIG. 16 is a sequence diagram of the result confirmation processing procedure according to the third embodiment.

  As shown in the figure, the driver performs confirmation input by touching the “OK” portion of the display 213 on which an image prompting confirmation of the speech recognition result is displayed (step S601).

  The in-vehicle device 210 acquires, as touch information, the coordinate position information of the touched point on the display 213 (step S602), and sends the coordinate position information to the display operation unit 214d.

  Then, the display operation unit 214d of the in-vehicle device 210 determines the confirmation result from the acquired coordinate position information and the image data that prompts confirmation of the voice recognition result (step S603), converts the confirmation result into a character string, and sends the converted confirmation character string to the reception unit 214a.
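The determination in step S603 is essentially a hit test: the touched coordinates are matched against the regions of the confirmation screen. The sketch below illustrates this under invented button rectangles and labels — the patent does not specify the screen layout:

```python
# Hypothetical button regions of the confirmation screen, as
# (x_min, y_min, x_max, y_max) rectangles in display coordinates.
BUTTONS = {
    "OK":     (0, 0, 100, 50),
    "cancel": (120, 0, 220, 50),
}

def confirmation_from_touch(x, y):
    """Step S603 sketch: map touch coordinates to a confirmation string."""
    for label, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None  # touch outside any button: no confirmation result

assert confirmation_from_touch(50, 25) == "OK"
assert confirmation_from_touch(150, 25) == "cancel"
assert confirmation_from_touch(300, 300) is None
```

The resulting string ("OK" or "cancel") is what the display operation unit 214d passes on as the confirmation character string.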

  Thereafter, the distribution unit 214b of the in-vehicle device 210 determines, based on the distribution information, that the device on which the application is installed is the mobile terminal device 220 (step S604), and sends the confirmation character string to the mobile terminal device 220 via the short-range communication unit 211.

  Then, if the received confirmation character string is "OK", the application 224 of the mobile terminal device 220 generates an operation command from the command character string stored in the temporary storage memory or the like, based on the operation command information 225 (step S605), and executes the application based on the operation command (step S606).

  As described above, in the third embodiment, the driver's mobile terminal device includes the voice recognition engine and the application, and the in-vehicle device 210 is configured to perform voice input and display while using the functions of the driver's mobile terminal device.

  Thereby, even a driver who is driving can safely perform voice operation of the mobile terminal device 220, and by using the functions of the mobile terminal device 220, the in-vehicle device 210 can be simplified in both hardware and software configuration, so that inexpensive products can be provided.

  In the above embodiments, operation commands for voice operation of an application have been described. However, input items displayed or provided on the DA, such as numeric keys and a cross key, may also be supported. In this case, the driver can drive more safely by operating the DA functions by voice.

  In the above embodiments, when confirming the voice recognition result, the confirmation operation is recognized by saying "OK" by voice input or by touching the DA display, and the application is then executed.

  However, after the voice recognition result is displayed or output, the absence of any operation for a certain period of time may instead be regarded as "OK", or alternatively as "cancel". This simplifies the confirmation operation, and the driver can drive more safely.
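This timeout variant can be sketched as a polling loop with a default answer. The function name, polling interface, and default policy below are all hypothetical, chosen only to illustrate the idea of treating silence as a confirmation:

```python
import time

def wait_for_confirmation(poll_input, timeout=5.0, default="OK"):
    """Sketch of the timeout variant: poll_input() returns the user's
    answer ("OK"/"cancel") or None; after `timeout` seconds of silence,
    fall back to `default` (which could equally be "cancel")."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        answer = poll_input()
        if answer is not None:
            return answer
        time.sleep(0.05)  # avoid a busy loop between polls
    return default

# With a user who never responds, the default applies after the timeout.
assert wait_for_confirmation(lambda: None, timeout=0.2, default="OK") == "OK"
```

Whether silence means "OK" or "cancel" is a policy choice; the text above notes that either interpretation simplifies the confirmation operation.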

  In the above embodiments, noise removal of the input voice data is performed by the mobile terminal device. However, by performing it on the DA, the processing load of the mobile terminal device that performs the voice recognition process can be reduced.

  Moreover, in the above embodiments, the voice recognition engine installed in a mobile terminal device is used. However, by installing the voice recognition engine in the DA, voice operation is possible even when only a mobile terminal device with an application is present, which reduces the burden on a driver or passenger when purchasing a mobile terminal device.

  In the above embodiments, the DA is provided with a button for starting voice operation. However, the voice operation start button may instead be assigned to a mobile terminal device or to a hands-free unit such as a headset microphone.

  Alternatively, a start button may be displayed on the mobile terminal device or on the DA display. In this case, since it is not necessary to provide a voice operation button on the DA, the DA can be simplified in both hardware and software configuration, and a product can be provided at low cost.

  As described above, the voice operation system and the voice operation method according to the present invention are useful for operating a mobile terminal device using an in-vehicle device without impairing the safety of a driver during driving, and are particularly suitable for cases where a driver or passenger wants to easily perform voice operation from his or her own mobile terminal device, regardless of which mobile terminal device has the voice recognition engine and the application.

FIG. 1 is a diagram showing an outline of the voice operation system according to the present embodiments.
FIG. 2 is a diagram showing device configuration patterns of the voice operation system.
FIG. 3 is a block diagram showing the configuration of the voice operation system according to the first embodiment.
FIG. 4 is a diagram showing distribution information.
FIG. 5 is a diagram showing an operation command change screen.
FIG. 6 is a diagram showing the distribution information according to the first embodiment.
FIG. 7 is a sequence diagram showing the operation command input processing procedure according to the first embodiment.
FIG. 8 is a sequence diagram showing the result confirmation processing procedure according to the first embodiment.
FIG. 9 is a block diagram showing the configuration of the voice operation system according to the second embodiment.
FIG. 10 is a diagram showing the distribution information according to the second embodiment.
FIG. 11 is a sequence diagram showing the operation command input processing procedure according to the second embodiment.
FIG. 12 is a sequence diagram showing the result confirmation processing procedure according to the second embodiment.
FIG. 13 is a block diagram showing the configuration of the voice operation system according to the third embodiment.
FIG. 14 is a diagram showing the distribution information according to the third embodiment.
FIG. 15 is a sequence diagram showing the operation command input processing procedure according to the third embodiment.
FIG. 16 is a sequence diagram showing the result confirmation processing procedure according to the third embodiment.

Explanation of symbols

10 In-vehicle device
11 Short-range communication unit
12 Storage unit
12a Distribution information
13 Control unit
13a Reception unit
13b Distribution unit
20 Mobile terminal device
21 Short-range communication unit
22 Microphone
23 Speaker
24 Operation command information
25 Voice input unit
26 Voice output unit
27 Application
30 Mobile terminal device
31 Short-range communication unit
32 Voice recognition unit
33 Result generation unit


Claims (5)

  1. A voice operation system that performs voice operation of a mobile terminal device using an in-vehicle device, comprising:
    voice input means for inputting voice uttered by a user;
    voice recognition means for recognizing the voice; and
    command execution means for converting a voice recognition result by the voice recognition means into an operation command and then executing the operation command,
    wherein the in-vehicle device comprises:
    distribution information storage means for storing distribution information in which whether the voice input means, the voice recognition means, and the command execution means are present in a device is associated with a device identifier identifying the device; and
    distribution means for distributing, based on the distribution information, an output from the voice input means to the voice recognition means and an output from the voice recognition means to the command execution means.
  2. The voice operation system according to claim 1, wherein:
    the voice recognition means further comprises result confirmation means for notifying the user of the voice recognition result and obtaining confirmation;
    the distribution information further includes the device identifier of the device to be notified by the result confirmation means; and
    the distribution means distributes, based on the distribution information, the output from the voice recognition means to the result confirmation means before distributing it to the command execution means.
  3. The voice operation system according to claim 1 or 2, wherein:
    the distribution information further includes an input priority indicating which voice input means is to be used preferentially when the voice input means is provided in a plurality of devices and a plurality of users input voices simultaneously; and
    the distribution means selects the voice input by the voice input means based on the distribution information.
  4. The voice operation system according to claim 1, further comprising change means for changing the matching between the voice recognition result by the voice recognition means and the operation command to be executed by the command execution means.
  5. A voice operation method for performing voice operation of a mobile terminal device using an in-vehicle device, comprising:
    a voice input step of inputting voice uttered by a user;
    a voice recognition step of recognizing the voice; and
    a command execution step of converting a voice recognition result of the voice recognition step into an operation command and then executing the operation command,
    wherein the in-vehicle device performs:
    a distribution information storage step of storing distribution information in which whether the voice input step, the voice recognition step, and the command execution step are performed in a device is associated with a device identifier identifying the device; and
    a distribution step of distributing, based on the distribution information, an output from the voice input step to the voice recognition step and an output from the voice recognition step to the command execution step.
JP2008301496A 2008-11-26 2008-11-26 Voice activation system and voice activation method Pending JP2010130223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008301496A JP2010130223A (en) 2008-11-26 2008-11-26 Voice activation system and voice activation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008301496A JP2010130223A (en) 2008-11-26 2008-11-26 Voice activation system and voice activation method

Publications (2)

Publication Number Publication Date
JP2010130223A true JP2010130223A (en) 2010-06-10
JP2010130223A5 JP2010130223A5 (en) 2012-08-09

Family

ID=42330307

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008301496A Pending JP2010130223A (en) 2008-11-26 2008-11-26 Voice activation system and voice activation method

Country Status (1)

Country Link
JP (1) JP2010130223A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0837687A (en) * 1994-07-22 1996-02-06 Nippondenso Co Ltd Telephone set for vehicle with control function
JP2002150039A (en) * 2000-08-31 2002-05-24 Hitachi Ltd Service intermediation device
JP2005181391A (en) * 2003-12-16 2005-07-07 Sony Corp Device and method for speech processing
JP2005266192A (en) * 2004-03-18 2005-09-29 Matsushita Electric Ind Co Ltd Apparatus and method for speech recognition
JP2006003696A (en) * 2004-06-18 2006-01-05 Toyota Infotechnology Center Co Ltd Voice recognition device, voice recognition method and voice recognition program
WO2008076765A2 (en) * 2006-12-13 2008-06-26 Johnson Controls, Inc. Source content preview in a media system
JP2010514271A (en) * 2006-12-13 2010-04-30 ジョンソン コントロールズ テクノロジー カンパニー Media system source content preview

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101755376B1 (en) 2010-12-23 2017-07-26 엘지전자 주식회사 Method for controlling using voice action and the mobile terminal
JP2012150456A (en) * 2011-01-19 2012-08-09 Denso Corp Remote operation method for portable terminal by integrated operation device for vehicle, and integrated operation device for vehicle
JP2012213132A (en) * 2011-03-23 2012-11-01 Denso Corp Device for vehicle and information display system thereof
US8700408B2 (en) 2011-03-23 2014-04-15 Denso Corporation In-vehicle apparatus and information display system
JP2013198085A (en) * 2012-03-22 2013-09-30 Sony Corp Information processing device, information processing method, information processing program and terminal device
WO2014103015A1 (en) * 2012-12-28 2014-07-03 パイオニア株式会社 Portable terminal device, car onboard device, information presentation method, and information presentation program
US9248788B2 (en) 2013-01-11 2016-02-02 Clarion Co., Ltd. Information processing apparatus, operating system, and operating method for information processing apparatus
US9739625B2 (en) 2013-01-11 2017-08-22 Clarion Co., Ltd. Information processing apparatus, operating system, and operating method for information processing apparatus
EP2755201A2 (en) 2013-01-11 2014-07-16 Clarion Co., Ltd. Information processing apparatus, sound operating system, and sound operating method for information processing apparatus
KR20170017178A (en) * 2015-08-05 2017-02-15 엘지전자 주식회사 Driver assistance apparatus and vehicle including the same
WO2017022879A1 (en) * 2015-08-05 2017-02-09 엘지전자 주식회사 Vehicle driving assist and vehicle having same
KR101910383B1 (en) * 2015-08-05 2018-10-22 엘지전자 주식회사 Driver assistance apparatus and vehicle including the same
JP2018042254A (en) * 2017-10-12 2018-03-15 ソニー株式会社 Terminal device

Similar Documents

Publication Publication Date Title
US9576575B2 (en) Providing voice recognition shortcuts based on user verbal input
US8914163B2 (en) System and method for incorporating gesture and voice recognition into a single system
US9374679B2 (en) Service providing device, service providing system including user profile server, and service providing method for service providing device
US9799334B2 (en) Speech recognition apparatus, vehicle including the same, and method of controlling the same
CN102246136B (en) Navigation device
US7702130B2 (en) User interface apparatus using hand gesture recognition and method thereof
EP1082671B1 (en) Handwritten and voice control of vehicle appliance
US8700408B2 (en) In-vehicle apparatus and information display system
US9267813B2 (en) On-board system working a mobile device
EP1883561B1 (en) Connection of personal terminals to the communication system of a motor vehicle
US7539618B2 (en) System for operating device using animated character display and such electronic device
US9532160B2 (en) Method of determining user intent to use services based on proximity
EP2229576B1 (en) Vehicle user interface systems and methods
US10170111B2 (en) Adaptive infotainment system based on vehicle surrounding and driver mood and/or behavior
JP5306851B2 (en) In-vehicle device and communication control method
JP4859447B2 (en) Navigation device
US9736679B2 (en) System for controlling a vehicle computer using a mobile telephone
US20100138149A1 (en) In-vehicle device and wireless communication system
EP1678008B1 (en) System and method for selecting a user speech profile for a device in a vehicle
CN104218969A (en) Apparatus and System for Interacting with a Vehicle and a Device in a Vehicle
CN102263801B (en) Vehicle-mounted integrated system and method for providing integrated information
JP2006080617A (en) Hands-free system and mobile phone
JP2010130670A (en) In-vehicle system
JP5652913B2 (en) In-vehicle terminal
KR101588190B1 (en) Vehicle, controlling method thereof and multimedia apparatus therein

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20111020

A521 Written amendment

Effective date: 20120618

Free format text: JAPANESE INTERMEDIATE CODE: A523

A977 Report on retrieval

Effective date: 20130307

Free format text: JAPANESE INTERMEDIATE CODE: A971007

A131 Notification of reasons for refusal

Effective date: 20130326

Free format text: JAPANESE INTERMEDIATE CODE: A131

A02 Decision of refusal

Effective date: 20131001

Free format text: JAPANESE INTERMEDIATE CODE: A02