WO2013015364A1 - User interface device, in-vehicle information device, information processing method, and information processing program - Google Patents
User interface device, in-vehicle information device, information processing method, and information processing program
- Publication number
- WO2013015364A1 (PCT application PCT/JP2012/068982, JP2012068982W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- command
- touch
- voice
- input
- unit
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
- G01C21/3608—Destination input or retrieval using speech input, e.g. using speech recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/16—Transforming into a non-visible representation
Definitions
- The present invention relates to a user interface device, an in-vehicle information device, an information processing method, and an information processing program that execute processing according to a user's touch display operations and voice operations.
- In in-vehicle information devices such as navigation devices, audio devices, and hands-free telephones, operation methods using a touch display, a joystick, a rotary dial, and voice have been adopted.
- In a touch display operation, the user touches buttons displayed on a screen integrated with a touch panel and repeats screen transitions to execute a target function.
- Because the buttons displayed on the screen can be touched directly, the operation is intuitive.
- With other devices such as joysticks, rotary dials, and remote controls, the user operates the device to move a cursor to a button displayed on the screen, selects it, and repeats screen transitions to execute the target function.
- Because the cursor must be moved to the target button, this is less intuitive than a touch display operation.
- These operation methods are easy to understand because the user only selects buttons displayed on the screen, but they require many operation steps and a long operation time.
- In a voice operation, the user speaks a vocabulary item called a voice recognition keyword one or more times to execute a target function. Since items not displayed on the screen can be operated, the operation steps and operation time can be shortened. However, the user must memorize a predetermined, device-specific voice operation method and its voice recognition keywords, and the device cannot be operated unless the user speaks exactly as prescribed, which makes voice operation difficult to use.
- A voice operation is usually started by pressing a single utterance button, either a hard button near the steering wheel or one displayed on the screen. In many cases, multiple dialogs with the in-vehicle information device are needed before the target function is executed, which increases the number of operation steps and the operation time.
- Against this background, operation methods combining a touch display operation and a voice operation have been proposed.
- In Patent Document 1, the user presses a button associated with a data input field displayed on the touch display and speaks; the speech recognition result is then entered into that field.
- In the navigation device of Patent Document 2, when searching for a place name or road name by voice recognition, the user first enters and confirms the initial character or character string of the name on a touch-display keyboard, and then speaks.
- However, the touch display operation has a deep operation hierarchy, so the number of operation steps and the operation time cannot be reduced.
- The voice operation is difficult to use because the user must memorize a predetermined, device-specific operation method and voice recognition keywords, and speak exactly as prescribed.
- The technique of Patent Document 1 inputs data into a data input field by voice recognition and cannot perform operations or execute functions that involve screen transitions. Furthermore, since it provides no way to list the items that can be entered in a data input field, nor to select a target item from such a list, the device cannot be operated unless the voice recognition keywords of the enterable items have been memorized.
- The technique of Patent Document 2 improves the accuracy of voice recognition by having the user input an initial character or character string before speaking, but the character input and confirmation are performed by touch display operations. Consequently, the number of operation steps and the operation time cannot be reduced compared with a conventional voice operation in which the place name or road name is simply spoken.
- The present invention has been made to solve the above problems. Its purpose is to realize an intuitive and easy-to-understand voice operation that requires no learning of a device-specific voice operation method or voice recognition keywords, while preserving the ease of understanding of the touch display operation, and to reduce the number of operation steps and the operation time.
- The user interface device according to the present invention includes: a touch-command conversion unit that, based on an output signal of the touch display, generates a first command for executing the process corresponding to a touched button displayed on the touch display; a voice-command conversion unit that, using a voice recognition dictionary composed of voice recognition keywords associated with processes, recognizes a user utterance made substantially simultaneously with or immediately following the touch operation and converts the recognition result into a second command for executing a process classified in a lower layer of the process group related to the process of the first command; and an input switching control unit that, according to the state of the touch operation indicated by the output signal of the touch display, switches between a touch operation mode, in which the process corresponding to the first command generated by the touch-command conversion unit is executed, and a voice operation mode, in which the process corresponding to the second command generated by the voice-command conversion unit is executed.
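The claimed interplay of first and second commands can be illustrated with a minimal sketch. All names, the process-group contents, and the function signature below are invented for illustration; the patent specifies no code.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Assumed process groups: each first command (item name) is related to a
# group of lower-layer processes reachable by voice (item values).
PROCESS_GROUPS = {
    "AV": ["FM", "CD", "Traffic info", "MP3"],
    "Navi": ["Destination", "Route"],
}

@dataclass
class Result:
    mode: str                                  # "touch" or "voice"
    command: Tuple[str, Optional[str]]         # (item name, item value)

def handle_button(item_name: str, utterance: Optional[str]) -> Result:
    """Touch alone yields the first command; touch plus a recognized
    utterance yields the second command for a lower-layer process."""
    if utterance is None:                      # touch operation mode
        return Result("touch", (item_name, None))
    # Voice operation mode: only keywords associated with the touched
    # button's process group are in the active recognition dictionary.
    if utterance in PROCESS_GROUPS.get(item_name, []):
        return Result("voice", (item_name, utterance))
    raise ValueError("utterance not in the dictionary for this button")
```

Touching "AV" alone transitions one level; touching "AV" while saying "CD" jumps directly to the lower-layer CD process, which is the step-saving effect the invention claims.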
- The in-vehicle information device according to the present invention includes: a touch-command conversion unit that, based on an output signal of a touch display mounted on the vehicle, generates a first command for executing the process corresponding to a touched button displayed on the touch display; and a voice-command conversion unit that, using a voice recognition dictionary composed of voice recognition keywords associated with processes, recognizes a user utterance collected by a microphone mounted on the vehicle substantially simultaneously with or immediately following the touch operation, and converts the recognition result into a second command for executing a process classified in a lower layer of the process group related to the process of the first command.
- The information processing method according to the present invention includes: a touch input detection step of detecting, based on an output signal of the touch display, a touch operation on a button displayed on the touch display; an input method determination step of determining, from the detection result of the touch input detection step, whether the mode is the touch operation mode or the voice operation mode according to the state of the touch operation; a touch-command conversion step of generating, when the touch operation mode is determined in the input method determination step, a first command for executing the process corresponding to the touched button based on the detection result of the touch input detection step; a voice-command conversion step of, when the voice operation mode is determined in the input method determination step, recognizing a user utterance using a voice recognition dictionary composed of voice recognition keywords associated with processes and converting the recognition result into a second command for executing a process classified in a lower layer of the process group related to the process of the first command; and a process execution step of executing the process corresponding to the first command generated in the touch-command conversion step or the second command generated in the voice-command conversion step.
- The information processing program according to the present invention causes a computer to execute: a touch input detection procedure for detecting, based on an output signal of the touch display, a touch operation on a button displayed on the touch display; an input method determination procedure for determining, from the detection result of the touch input detection procedure, whether the mode is the touch operation mode or the voice operation mode according to the state of the touch operation; a touch-command conversion procedure for generating, when the touch operation mode is determined, a first command for executing the process corresponding to the touched button based on the detection result of the touch input detection procedure; a voice-command conversion procedure for, when the voice operation mode is determined, recognizing a user utterance using a voice recognition dictionary composed of voice recognition keywords associated with processes and converting the recognition result into a second command for executing a process classified in a lower layer of the process group related to the process of the first command; and a process execution procedure for executing the process corresponding to the first command generated in the touch-command conversion procedure or the second command generated in the voice-command conversion procedure.
- The user interface device according to the present invention may also include: a touch-command conversion unit that, based on an output signal from an input device on which the user performs a touch operation, generates a first command for executing the process associated with the input device or the process currently selected by the input device; a voice-command conversion unit that, using a voice recognition dictionary composed of voice recognition keywords associated with processes, recognizes a user utterance made substantially simultaneously with or immediately following the touch operation on the input device and converts the recognition result into a second command for executing a process classified in a lower layer of the process group related to the process of the first command; and an input switching control unit that, according to the state of the touch operation indicated by the output signal of the input device, switches between a touch operation mode, in which the process corresponding to the first command generated by the touch-command conversion unit is executed, and a voice operation mode, in which the process corresponding to the second command generated by the voice-command conversion unit is executed.
- According to the present invention, the touch operation mode or the voice operation mode is determined according to the state of the touch operation on a button displayed on the touch display, so that a single button can accept both a normal touch operation and a voice operation related to that button while the ease of the touch operation is preserved.
- Since the second command executes a process classified in a lower layer of the process group related to the process of the first command, the user can execute the lower-layer processes related to a button simply by speaking while touching it. This realizes an intuitive and easy-to-understand voice operation without memorizing a device-specific voice operation method or voice recognition keywords, and reduces the number of operation steps and the operation time.
- The touch operation mode or the voice operation mode may also be determined according to the state of a touch operation on an input device such as a hard button, rather than a button displayed on the touch display, so that a single input device can likewise switch between a normal touch operation and a voice operation related to that input device.
- FIG. 3 is a flowchart showing the operation of the in-vehicle information device according to Embodiment 1. A diagram explains an example of screen transitions of the in-vehicle information device according to Embodiment 1, showing example screens for the AV function.
- FIG. 4 is a flowchart illustrating the input method determination process of the in-vehicle information device according to Embodiment 1. A diagram explains the relationship between a touch operation and the input method.
- A flowchart shows the application execution command creation process for touch operation input of the in-vehicle information device according to Embodiment 1. A diagram explains an example of the state transition table held by the in-vehicle information device according to Embodiment 1, followed by continuation diagrams of the same state transition table.
- A flowchart shows the application execution command creation process for voice operation input of the in-vehicle information device according to Embodiment 1. A diagram explains the speech recognition dictionary of the in-vehicle information device according to Embodiment 1.
- Diagrams explain examples of screen transitions of the in-vehicle information device according to Embodiment 1, showing example screens for the navigation function.
- FIG. 6 is a flowchart illustrating the operation of the in-vehicle information device according to Embodiment 2. A diagram explains an example of screen transitions of the in-vehicle information device according to Embodiment 2, showing example screens for the telephone function. A diagram explains an example of the state transition table held by the in-vehicle information device according to Embodiment 2.
- A flowchart shows the application execution command creation process for voice operation input of the in-vehicle information device according to Embodiment 2. A diagram explains the speech recognition target word dictionary of the in-vehicle information device according to Embodiment 1.
- FIG. 14 is a flowchart illustrating the output method determination process of the in-vehicle information device according to Embodiment 3. A diagram shows the telephone screen during voice operation input of the in-vehicle information device according to Embodiment 3. A diagram shows the list screen during voice operation input of the in-vehicle information device according to Embodiment 3.
- A diagram shows a configuration example of the hard buttons and touch display provided in the in-vehicle information device according to Embodiment 4 of the present invention. Diagrams explain examples of screen transitions of the in-vehicle information device according to Embodiment 4, showing example screens in the touch operation mode and in the voice operation mode.
- FIG. 38 is a diagram illustrating a configuration example of the hard buttons and display included in the in-vehicle information device according to Embodiment 10.
- A diagram explains an example of screen transitions of the in-vehicle information device according to Embodiment 10.
- The in-vehicle information device includes a touch input detection unit 1, an input method determination unit 2, a touch-command conversion unit 3, an input switching control unit 4, a state transition control unit 5, a state transition table storage unit 6, a voice recognition dictionary DB 7, a voice recognition dictionary switching unit 8, a voice recognition unit 9, a voice-command conversion unit 10, an application execution unit 11, a data storage unit 12, and an output control unit 13.
- This in-vehicle information device is connected to input/output devices (not shown) such as a touch display in which a touch panel and a display are integrated, a microphone, and a speaker, and inputs and outputs information through them.
- These components provide a user interface for executing functions.
- The touch input detection unit 1 detects, based on an input signal from the touch display, whether the user has touched a button (or a specific touch area) displayed on the touch display. Based on the detection result of the touch input detection unit 1, the input method determination unit 2 determines whether the user is making an input by a touch operation (touch operation mode) or by a voice operation (voice operation mode).
- The touch-command conversion unit 3 converts the touched button detected by the touch input detection unit 1 into a command. As will be described in detail later, this command consists of an item name and an item value. The command (item name and item value) is passed to the state transition control unit 5, and the item name is passed to the input switching control unit 4. This item name constitutes the first command.
- The input switching control unit 4 notifies the state transition control unit 5 whether the user desires the touch operation mode or the voice operation mode, according to the input method determination result (touch operation or voice operation) of the input method determination unit 2, and switches the processing of the state transition control unit 5 between the two modes. In the voice operation mode, the input switching control unit 4 also passes the item name input from the touch-command conversion unit 3 (that is, information indicating the button touched by the user) to the state transition control unit 5 and the voice recognition dictionary switching unit 8.
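The text leaves the exact rule for reading the "state of the touch operation" to later embodiments. A minimal sketch of one plausible rule, assuming a long press selects the voice operation mode and a short tap the touch operation mode (the threshold and names are invented):

```python
LONG_PRESS_SEC = 1.0  # assumed threshold, not specified by the patent

def determine_input_method(touch_down_t: float, touch_up_t: float) -> str:
    """Input method determination: return 'touch' for a short tap,
    'voice' for a sustained press, based on how long the button is held."""
    held = touch_up_t - touch_down_t
    return "voice" if held >= LONG_PRESS_SEC else "touch"
```

Any other observable touch state (double tap, touch-and-hold with speech onset, etc.) could serve as the discriminator; the design only requires that one button distinguishably carries both input methods.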
- When the touch operation mode is notified from the input switching control unit 4, the state transition control unit 5 converts the command (item name, item value) input from the touch-command conversion unit 3 into an application execution instruction based on the state transition table stored in the state transition table storage unit 6, and passes it to the application execution unit 11.
- The application execution instruction includes information specifying the transition destination screen and/or information specifying the application function to execute.
- When the voice operation mode is notified, the state transition control unit 5 waits until a command (item value) is input from the voice-command conversion unit 10. When the command (item value) is input, the state transition control unit 5 combines it with the item name, converts the combined command into an application execution instruction based on the state transition table stored in the state transition table storage unit 6, and passes it to the application execution unit 11.
- The state transition table storage unit 6 stores a state transition table that defines the correspondence between commands (item name, item value) and application execution instructions (transition destination screen, application execution function). Details will be described later.
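The state transition table can be pictured as a simple mapping. The sketch below uses the screen IDs P11/P12/P13 from the AV example in this description; the dictionary layout and function names are assumptions, not the patent's table format:

```python
# Hypothetical state transition table: command -> application execution
# instruction (transition destination screen and/or application function).
STATE_TRANSITION_TABLE = {
    ("AV", None): {"screen": "P11", "function": None},      # show AV source list
    ("AV", "FM"): {"screen": "P12", "function": "play_fm"},
    ("AV", "CD"): {"screen": "P13", "function": "play_cd"},
}

def to_app_instruction(item_name, item_value=None):
    """State transition control: resolve a command (item name, item value)
    into an application execution instruction, as unit 5 does via unit 6."""
    try:
        return STATE_TRANSITION_TABLE[(item_name, item_value)]
    except KeyError:
        raise KeyError(f"no transition defined for {(item_name, item_value)}")
```

A touch-only command ("AV", None) resolves to a one-level screen transition, while a touch-plus-voice command ("AV", "CD") resolves directly to a lower-layer screen and function.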
- The speech recognition dictionary DB 7 is a database of speech recognition dictionaries used for the speech recognition process in the voice operation mode, and stores voice recognition keywords. Each voice recognition keyword is associated with a corresponding command (item name).
- The voice recognition dictionary switching unit 8 notifies the voice recognition unit 9 of the command (item name) input from the input switching control unit 4, and causes it to switch to the voice recognition dictionary composed of the voice recognition keywords associated with that item name. Using the dictionary containing the voice recognition keyword group associated with the notified command (item name), selected from among the dictionaries stored in the voice recognition dictionary DB 7, the voice recognition unit 9 performs voice recognition to convert the voice signal into a character string or the like, and passes the result to the voice-command conversion unit 10.
- The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and passes it to the state transition control unit 5. This item value constitutes the second command.
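The dictionary switching described above is the key to avoiding memorized keywords: the active vocabulary is narrowed to the keywords associated with the touched button. A minimal sketch, with invented dictionary contents and class names:

```python
# Hypothetical keyword dictionaries keyed by item name (first command).
VOICE_RECOGNITION_DB = {
    "AV": {"FM", "CD", "MP3", "Traffic info"},
    "Phone": {"Redial", "Phone book"},      # assumed example entries
}

class VoiceCommandConverter:
    """Combines dictionary switching unit 8, recognition unit 9, and
    voice-command conversion unit 10 into one illustrative object."""

    def __init__(self):
        self.active_dictionary = set()

    def switch_dictionary(self, item_name):
        # Unit 8: restrict recognition to this button's keyword group.
        self.active_dictionary = VOICE_RECOGNITION_DB.get(item_name, set())

    def recognize(self, utterance):
        # Units 9 and 10: a matched keyword becomes the item value,
        # i.e. the second command; out-of-dictionary speech is rejected.
        if utterance in self.active_dictionary:
            return utterance
        return None
```

Because the dictionary is switched per button, "CD" is only recognizable while "AV" is touched, which is what ties the utterance to the lower layer of that button's process group.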
- The application execution unit 11 executes screen transitions or application functions according to the application execution instructions notified from the state transition control unit 5, using various data stored in the data storage unit 12. The application execution unit 11 is also connected to the network 14 and can communicate with the outside. Although details will be described later, depending on the type of application function, it communicates with the outside, for example to make a telephone call, and can store acquired data in the data storage unit 12. The application execution unit 11 and the state transition control unit 5 constitute a process execution unit.
- the data storage unit 12 stores various data required when the application execution unit 11 executes screen transitions or application functions, such as:
- data for the navigation (hereinafter referred to as navi) function, including a map database
- data for the audio/visual (hereinafter referred to as AV) function, including music data and video data
- data for controlling vehicle equipment mounted on the vehicle, such as air conditioners
- data for telephone functions such as hands-free calls (including a phone book)
- information acquired from the outside via the network 14 by the application execution unit 11 (congestion information, URLs of specific websites, etc.) and provided to the user when an application function is executed
- the output control unit 13 displays the execution result of the application execution unit 11 on the screen of the touch display or outputs the sound from the speaker.
- FIG. 2 is a flowchart showing the operation of the in-vehicle information device according to the first embodiment.
- FIG. 3 shows an example of screen transition by the in-vehicle information device.
- as an initial state, the in-vehicle information device displays a list of the functions executable by the application execution unit 11 as buttons on the touch display (application list screen P01).
- FIG. 3 is a screen transition example of the AV function developed with the “AV” button of the application list screen P01 as a base point. The application list screen P01 is the top-level screen, and one level below it is the AV source list screen P11 associated with the “AV” button (each button also has an associated function).
- one level below the AV source list screen P11 are the FM station list screen P12, the CD screen P13, the traffic information radio screen P14, and the MP3 screen P15 associated with the respective buttons of the AV source list screen P11 (and the functions associated with the buttons on each screen).
- the case where the screen moves to the next lower layer is simply referred to as “transition”; for example, the screen changes from the application list screen P01 to the AV source list screen P11.
- the case where the screen moves down two or more layers, or to a screen of a different function, is referred to as “jump transition”; for example, the screen changes from the application list screen P01 to the FM station list screen P12, or from the AV source list screen P11 to a navigation function screen.
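The hierarchy and the transition / jump-transition distinction can be sketched in code. The fragment below is illustrative only: the screen identifiers and parent links are assumptions modeled on FIG. 3, not data taken from the device.

```python
# Hypothetical screen hierarchy modeled on FIG. 3: each screen maps to
# its parent, i.e. the screen one layer above it.
PARENT = {
    "P11 AV source list": "P01 application list",
    "P12 FM station list": "P11 AV source list",
    "P13 CD": "P11 AV source list",
}

def classify_move(src, dst):
    """Return 'transition' when dst is one layer below src, otherwise
    'jump transition' (two or more layers down, or another function)."""
    if PARENT.get(dst) == src:
        return "transition"
    return "jump transition"

# P01 -> P11 is one layer down, so it is a plain transition;
# P01 -> P12 skips a layer, so it is a jump transition.
```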
- in step ST100, the touch input detection unit 1 detects whether or not the user has touched a button displayed on the touch display. When a touch is detected (step ST100 “YES”), the touch input detection unit 1 outputs a touch signal indicating which button was touched and by which touch operation (a press, a touch for a predetermined time, etc.), based on the output signal from the touch display.
- step ST110 the touch-command conversion unit 3 converts the touched button into a command (item name, item value) based on the touch signal input from the touch input detection unit 1, and outputs the command.
- a button name is set for each button, and the touch-command conversion unit 3 sets the button name as both the command item name and the command item value.
- the command (item name, item value) of the “AV” button displayed on the touch display is (AV, AV).
- in step ST120, the input method determination unit 2 determines, based on the touch signal input from the touch input detection unit 1, whether the user is performing a touch operation or a voice operation, and outputs the determination result.
- specifically, the input method determination unit 2 receives the touch signal from the touch input detection unit 1 in step ST121, and determines the input method based on the touch signal in the subsequent step ST122. As shown in FIG. 5, it is assumed that a touch operation is determined in advance for each of the touch operation mode and the voice operation mode.
- in Example 1, when the user wants to execute an application function in the touch operation mode, the user presses the button for that function on the touch display; when the user wants to execute it in the voice operation mode, the user performs an operation of touching the button for a certain time. The input method determination unit 2 then determines from the touch signal which touch operation was performed. Alternatively, the input method determination unit 2 may determine whether the user desires a touch operation or a voice operation by whether the button is fully pressed or half-pressed as in Example 2, by whether the button is single-tapped or double-tapped as in Example 3, or by whether the button is pressed briefly or long as in Example 4.
- for the full press / half press distinction, processing such as treating a press whose pressure is equal to or higher than a threshold value as a full press and a press below the threshold as a half press may be performed. In this way, by assigning two types of touch operation to one button, it is possible to determine which of touch operation and voice operation is intended as the input to that button.
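The decision in step ST122 can be sketched as follows. This is a minimal illustration, assuming a touch signal that carries either a gesture name or a press pressure; the gesture labels, parameter names, and the 0.5 pressure threshold are invented for the example and are not part of the device.

```python
# Gestures assigned to each mode, following Examples 1, 3 and 4 above
# (the labels are hypothetical).
TOUCH_GESTURES = {"press", "single_tap", "short_press"}
VOICE_GESTURES = {"touch_for_certain_time", "double_tap", "long_press"}

def determine_input_method(gesture=None, pressure=None, threshold=0.5):
    """Return "touch" or "voice" for one touch signal (step ST122)."""
    if pressure is not None:
        # Example 2: pressure at or above the threshold counts as a full
        # press (touch mode); below it counts as a half press (voice mode).
        return "touch" if pressure >= threshold else "voice"
    if gesture in TOUCH_GESTURES:
        return "touch"
    if gesture in VOICE_GESTURES:
        return "voice"
    raise ValueError(f"unrecognized gesture: {gesture!r}")
```

The determination result would then be passed to the input switching control unit 4, which routes processing to step ST140 or ST150.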
- the input method determination unit 2 outputs a determination result indicating the input method of either touch operation or voice operation to the input switching control unit 4.
- in step ST130, if the determination result passed via the input switching control unit 4 indicates the touch operation mode (step ST130 “YES”), the process proceeds to step ST140 to generate an application execution command by touch operation input. On the other hand, if it indicates the voice operation mode (step ST130 “NO”), the process proceeds to step ST150 to generate an application execution command by voice operation input.
- in step ST141, the state transition control unit 5 acquires from the touch-command conversion unit 3 the command (item name, item value) of the button touched during the input method determination process, and in the subsequent step ST142 converts the acquired command (item name, item value) into an application execution instruction based on the state transition table stored in the state transition table storage unit 6.
- FIG. 7A is a diagram for explaining an example of the state transition table.
- the state transition table includes three pieces of information of “current state”, “command”, and “application execution instruction”.
- the current state is a screen displayed on the touch display at the time of touch detection in step ST100.
- the command item name has the same name as the button name displayed on the screen.
- the item name of the “AV” button on the application list screen P01 is “AV”.
- the command item values may have the same name as the button name, or may have different names.
- in the touch operation mode, the command item value is the same as the item name, that is, the button name; for example, the command of the “AV” button is (AV, AV), with identical item name and item value.
- in the voice operation mode, the item value is the voice recognition result, that is, the voice recognition keyword of the function the user wants to execute; the item name and item value can therefore differ, as in (AV, FM).
- the application execution command includes one or both of “transition destination screen” and “application execution function”.
- the transition destination screen is information indicating the destination screen moved by the corresponding command.
- the application execution function is information indicating a function executed by a corresponding command.
- in the screen hierarchy, the application list screen P01 is set as the uppermost layer; AV is set in the layer below it; FM, CD, traffic information, and MP3 are set in the layer below AV; and A broadcast station and B broadcast station are set below FM. Telephone and navigation, in the same layer as AV, are different application functions.
- suppose the current state is the application list screen P01 shown in FIG. 3. The command (AV, AV) is associated with the “AV” button on this screen, and the transition destination screen “P11 (AV source list screen)” and the application execution function “-(none)” are set as the corresponding application execution instruction. Therefore, the state transition control unit 5 converts the command (AV, AV) input from the touch-command conversion unit 3 into the application execution instruction “transition to the AV source list screen P11”.
- the state transition control unit 5 converts the command (A broadcast station, A broadcast station) input from the touch-command conversion unit 3 into an application execution command “select A broadcast station”.
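The conversion in step ST142 amounts to a table lookup. The sketch below reproduces only a few rows in the spirit of FIG. 7A; the Python representation and screen identifiers are assumptions for illustration, not the actual table format.

```python
# (current state, (item name, item value)) ->
#   (transition destination screen, application execution function)
# None stands for "-(none)" in the table.
STATE_TRANSITION_TABLE = {
    ("P01", ("AV", "AV")): ("P11", None),
    ("P12", ("A broadcast station", "A broadcast station")):
        (None, "select A broadcast station"),
    ("P01", ("telephone", "Yamada XX")):
        ("P23", "display the phone book of Yamada XX"),
}

def to_app_execution_instruction(current_state, command):
    """Convert a command (item name, item value) into an application
    execution instruction, as the state transition control unit 5 does."""
    destination, function = STATE_TRANSITION_TABLE[(current_state, command)]
    return {"transition_destination": destination,
            "application_function": function}
```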
- as another example, suppose the current state is the telephone directory list screen P22 shown in FIG. 8. FIG. 8 is a screen transition example of the telephone function developed with the “telephone” button of the application list screen P01 as a base point. The command (Yamada XX, Yamada XX) is associated with the “Yamada XX” button in the telephone directory list on this screen, and the transition destination screen “P23 (phone book screen)” and the application execution function “display the phone book of Yamada XX” are set as the corresponding application execution instruction. Therefore, the state transition control unit 5 converts the command (Yamada XX, Yamada XX) input from the touch-command conversion unit 3 into the application execution instruction “transition to the phone book screen P23 and display the phone book of Yamada XX”.
- step ST143 the state transition control unit 5 outputs the application execution instruction converted from the command to the application execution unit 11.
- in step ST151, the voice recognition dictionary switching unit 8 outputs, to the voice recognition unit 9, an instruction to switch to the voice recognition dictionary related to the item name (that is, the button touched by the user) input from the input switching control unit 4.
- FIG. 10 is a diagram illustrating the voice recognition dictionary.
- the voice recognition dictionary to be switched to includes (1) the voice recognition keyword of the touched button, (2) all voice recognition keywords on the lower layer screens of the touched button, and (3) voice recognition keywords that are not in the layer below the touched button but are related to it.
- (1) is a voice recognition keyword that includes a button name of the touched button and the like, and can perform transition to the next screen and a function in the same manner as when the button is pressed by touch operation input.
- (2) is a voice recognition keyword that can make a jump transition to a lower layer of the touched button or execute a function on the screen that has made the jump transition.
- (3) is a voice recognition keyword that can jump to the screen of a related function that is not in the lower layer of the touched button, or can execute a function on the screen reached by the jump transition.
- similarly, when a list item button is touched, the voice recognition dictionary to be switched to includes (1) the voice recognition keyword of the touched list item button, (2) all voice recognition keywords on the lower layer screens of the touched list item button, and (3) voice recognition keywords related to this button that are not in the layer below the touched list item button.
- the voice recognition keyword of (3) is not essential and need not be included if there is nothing related to it.
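One way to assemble such a dictionary from the keyword groups (1)–(3) is sketched below. The button hierarchy and the related-keyword map are illustrative assumptions modeled on FIG. 3, not actual device data.

```python
# Hypothetical button hierarchy and related-keyword map.
CHILDREN = {
    "AV": ["FM", "AM", "Traffic information", "CD", "MP3", "TV"],
    "FM": ["A broadcast station", "B broadcast station", "C broadcast station"],
}
RELATED = {"FM": ["homepage"]}  # group (3): related keywords outside the subtree

def build_dictionary(button):
    """Collect the keyword groups (1)-(3) for the touched button."""
    keywords = [button]                       # (1) the touched button itself
    stack = list(CHILDREN.get(button, []))
    while stack:                              # (2) every keyword below it
        keyword = stack.pop()
        keywords.append(keyword)
        stack.extend(CHILDREN.get(keyword, []))
    keywords.extend(RELATED.get(button, []))  # (3) related keywords, if any
    return set(keywords)
```

Touching “FM” thus yields a smaller keyword set than touching “AV”, which is the narrowing effect on which the recognition-rate improvement rests.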
- for example, suppose the current state is the application list screen P01 shown in FIG. 3.
- the item name (AV) of the command (AV, AV) of the “AV” button whose touch was detected during the input method determination process is input from the input switching control unit 4 to the voice recognition dictionary switching unit 8. The voice recognition dictionary switching unit 8 therefore issues an instruction to switch to the voice recognition dictionary related to “AV” in the voice recognition dictionary DB 7.
- the speech recognition dictionary related to “AV” is as follows. (1) “AV” as the voice recognition keyword of the touched button. (2) “FM”, “AM”, “Traffic information”, “CD”, “MP3”, “TV”, “A broadcast station”, “B broadcast station”, “C broadcast station”, etc. as all the voice recognition keywords on the lower layer screens of the touched button; besides the keywords below the “FM” button, the voice recognition keywords on the other lower layer screens (P13, P14, P15) are also included. (3) voice recognition keywords related to this button, for example the voice recognition keywords on the lower layer screen of the “information” button.
- the item name (FM) of the commands (FM, FM) of the “FM” button touched in the input method determination process is input from the input switching control unit 4 to the speech recognition dictionary switching unit 8. Therefore, the voice recognition dictionary switching unit 8 issues an instruction to switch to the voice recognition dictionary related to “FM” from the voice recognition dictionary DB 7.
- the speech recognition dictionary related to “FM” is as follows. (1) “FM” as the voice recognition keyword of the touched button. (2) “A broadcast station”, “B broadcast station”, “C broadcast station”, etc. as all voice recognition keywords on the lower layer screen of the touched button.
- a voice recognition keyword related to this button for example, a voice recognition keyword on the lower layer screen of the “information” button.
- with the information-related voice recognition keyword “homepage”, for example, the homepage of the currently selected broadcast station can be displayed, showing details of the program being broadcast and the title and artist name of the music being played.
- in step ST152, the voice recognition unit 9 performs voice recognition processing on the voice signal input from the microphone using the dictionary in the voice recognition dictionary DB 7 instructed by the voice recognition dictionary switching unit 8, detects the voice operation input, and outputs it. For example, when the user touches the “AV” button on the application list screen P01 shown in FIG. 3 for a certain period of time (or half-presses, double-taps, or long-presses it), the voice recognition dictionary is switched to one composed mainly of voice recognition keywords related to “AV”. When the hierarchy moves to a lower screen, for example when the user touches the “FM” button on the AV source list screen P11 for a certain period of time, the voice recognition dictionary is switched to one composed mainly of voice recognition keywords related to “FM”; that is, the keywords are narrowed down from the AV voice recognition dictionary. An improvement in the voice recognition rate can therefore be expected by switching to a more narrowed-down dictionary.
- step ST153 the voice-command conversion unit 10 converts the voice recognition result indicating the voice recognition keyword input from the voice recognition unit 9 into a corresponding command (item value) and outputs it.
- in step ST154, the state transition control unit 5 converts the command consisting of the item name input from the input switching control unit 4 and the item value input from the voice-command conversion unit 10 into an application execution instruction, based on the state transition table stored in the state transition table storage unit 6.
- for example, suppose the current state is the application list screen P01 shown in FIG. 3 and the user speaks “AV”; the command obtained by the state transition control unit 5 is then (AV, AV). Therefore, based on the state transition table of FIG. 7A, the state transition control unit 5 converts the command (AV, AV) into the application execution instruction “transition to the AV source list screen P11”, as in the case of touch operation input.
- on the other hand, when the user speaks “A broadcast station”, the state transition control unit 5 converts the command (AV, A broadcast station) into the application execution instruction “transition to the FM station list screen P12 and select A broadcast station”.
- likewise, when the user touches the “telephone” button and speaks “Yamada XX”, the command obtained by the state transition control unit 5 is (telephone, Yamada XX). Therefore, based on the state transition table of FIG. 7A, the state transition control unit 5 converts the command (telephone, Yamada XX) into the application execution instruction “transition to the phone book screen P23 and display the phone book of Yamada XX”.
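Combining the pieces, the voice operation path of steps ST151–ST154 can be sketched as follows. The table rows and the pass-through recognizer are placeholders: in the device, the item value comes from the voice recognition unit 9 and the voice-command conversion unit 10.

```python
# Hypothetical jump-transition rows in the spirit of FIG. 7A:
# (current state, item name, item value) -> application execution instruction.
VOICE_TABLE = {
    ("P01", "AV", "A broadcast station"):
        "transition to FM station list screen P12 and select A broadcast station",
    ("P01", "telephone", "Yamada XX"):
        "transition to phone book screen P23 and display the phone book of Yamada XX",
}

def voice_command_to_instruction(current_state, touched_item_name, utterance,
                                 recognize=lambda s: s):
    """The touched button supplies the item name, the recognized utterance
    supplies the item value, and the pair selects one application execution
    instruction, possibly several layers below the current screen."""
    item_value = recognize(utterance)  # stand-in for units 9 and 10
    return VOICE_TABLE[(current_state, touched_item_name, item_value)]
```

This is why a single touch-plus-utterance can replace several touch steps: the item value is free to name a keyword from any lower layer of the touched button.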
- step ST155 the state transition control unit 5 outputs the application execution instruction converted from the command to the application execution unit 11.
- step ST160 the application execution unit 11 acquires necessary data from the data storage unit 12 and performs one or both of screen transition and function execution in accordance with an application execution instruction input from the state transition control unit 5.
- step ST170 the output control unit 13 outputs the result of screen transition and function execution of the application execution unit 11 by display and sound.
- the “AV” button on the application list screen P01 shown in FIG. 3 is pressed to change to the AV source list screen P11.
- the “FM” button on the AV source list screen P11 is pressed to make a transition to the FM station list screen P12.
- the “A broadcast station” button on the FM station list screen P12 is pressed to select the A broadcast station.
- the in-vehicle information device detects the press of the “AV” button on the application list screen P01 by the touch input detection unit 1, determines a touch operation by the input method determination unit 2, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the press of the “AV” button into the command (AV, AV), and the state transition control unit 5 converts the command into the application execution instruction “transition to the AV source list screen P11” based on the state transition table of FIG. 7A.
- the application execution unit 11 acquires the data constituting the AV source list screen P11 from the AV function data group of the data storage unit 12 to generate a screen, and the output control unit 13 generates the screen. Display on the touch display.
- next, the touch input detection unit 1 detects the press of the “FM” button on the AV source list screen P11, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the press of the “FM” button into the command (FM, FM), and the state transition control unit 5 converts the command into the application execution instruction “transition to the FM station list screen P12” based on the state transition table of FIG. 7B.
- the application execution unit 11 acquires data constituting the FM station list screen P12 from the AV function data group of the data storage unit 12 to generate a screen, and the output control unit 13 displays the screen on the touch display. To do.
- next, the touch input detection unit 1 detects the press of the “A broadcast station” button on the FM station list screen P12, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts a touch signal representing the pressing of the “A broadcast station” button into a command (A broadcast station, A broadcast station), and the state transition control unit 5 converts the command into the state transition of FIG. 7A. Based on the table, it is converted into an application execution command “select A broadcast station”.
- the application execution unit 11 acquires a command for controlling the car audio from the data group for the AV function in the data storage unit 12, and the output control unit 13 controls the car audio to select the A broadcast station.
- when voice operation input is used instead, the in-vehicle information device detects a touch of a certain duration on the “AV” button by the touch input detection unit 1, determines a voice operation by the input method determination unit 2, and the input switching control unit 4 notifies the state transition control unit 5 that it is a voice operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the touch of the “AV” button into the item name (AV), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of the item name. The voice recognition unit 9 switches to the voice recognition dictionary instructed by the voice recognition dictionary switching unit 8 and recognizes the utterance “A broadcast station”, and the voice-command conversion unit 10 converts the voice recognition result into the item value (A broadcast station) and notifies the state transition control unit 5 of it.
- the state transition control unit 5 converts the command (AV, A broadcast station) into the application execution instruction “transition to the FM station list screen P12 and select A broadcast station” based on the state transition table of FIG. 7A. Then, the application execution unit 11 acquires the data constituting the FM station list screen P12 from the AV function data group of the data storage unit 12 to generate the screen, and also acquires a command for controlling the car audio from that data group. The output control unit 13 displays the screen on the touch display and controls the car audio to select the A broadcast station.
- the “telephone” button on the application list screen P01 shown in FIG. 8 is pressed to make a transition to the telephone screen P21.
- the “phone book” button on the telephone screen P21 is pressed to make a transition to the telephone book list screen P22.
- scrolling is repeated until “Yamada XX” is displayed on the phone book list screen P22, and the “Yamada XX” button is pressed to make a transition to the phone book screen P23.
- the in-vehicle information device detects the press of the “telephone” button by the touch input detection unit 1, determines a touch operation by the input method determination unit 2, and notifies the state transition control unit 5 via the input switching control unit 4 that it is a touch operation input. Further, the touch-command conversion unit 3 converts the touch signal representing the press of the “telephone” button into the command (telephone, telephone), and the state transition control unit 5 converts it into the application execution instruction “transition to the telephone screen P21” based on the state transition table of FIG. 7A. Then, the application execution unit 11 acquires the data constituting the telephone screen P21 from the telephone function data group of the data storage unit 12 to generate the screen, and the output control unit 13 displays it on the touch display.
- next, the touch input detection unit 1 detects the press of the “phone book” button on the telephone screen P21, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the press of the “phone book” button into the command (phone book, phone book), and the state transition control unit 5 converts the command into the application execution instruction “transition to the phone book list screen P22” based on the state transition table of FIG. 7C.
- then, the application execution unit 11 acquires the data constituting the telephone directory list screen P22 from the telephone function data group of the data storage unit 12 to generate the screen, and the output control unit 13 displays it on the touch display.
- next, the touch input detection unit 1 detects the press of the “Yamada XX” button on the phone book list screen P22, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the press of the “Yamada XX” button into the command (Yamada XX, Yamada XX), and the state transition control unit 5 converts the command into the application execution instruction “transition to the phone book screen P23 and display the phone book of Yamada XX” based on the state transition table of FIG. 7C.
- then, the application execution unit 11 acquires the data constituting the telephone directory screen P23 and the telephone number data of Yamada XX from the telephone function data group of the data storage unit 12 to generate the screen, and the output control unit 13 displays it on the touch display.
- next, the touch input detection unit 1 detects the press of the “call” button on the phone book screen P23, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- further, the touch-command conversion unit 3 converts the touch signal representing the press of the “call” button into the command (calling, calling), and the state transition control unit 5 converts the command into the application execution instruction “connect to the telephone line” based on the state transition table of FIG. 7C. Then, the application execution unit 11 connects to the telephone line through the network 14, and the output control unit 13 outputs the call audio from the speaker.
- if voice operation input is used, the user speaks “Yamada XX” while touching the “telephone” button on the application list screen P01 shown in FIG. 8 for a certain period of time to display the phone book screen P23, and can then make the call by pressing the “call” button. At this time, according to the flowchart shown in FIG. 2, the operation is as follows.
- the in-vehicle information device detects a touch of a certain duration on the “telephone” button by the touch input detection unit 1, determines a voice operation by the input method determination unit 2, the touch-command conversion unit 3 converts the touch signal representing the touch of the “telephone” button into the item name (telephone), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of the item name.
- the voice recognition unit 9 switches to the voice recognition dictionary instructed by the voice recognition dictionary switching unit 8 and recognizes the utterance “Yamada XX”, and the voice-command conversion unit 10 converts the voice recognition result into the item value (Yamada XX) and notifies the state transition control unit 5 of it.
- the state transition control unit 5 converts the command (telephone, Yamada XX) into the application execution instruction “transition to the phone book screen P23 and display the phone book of Yamada XX” based on the state transition table of FIG. 7A.
- then, the application execution unit 11 acquires the data constituting the telephone directory screen P23 and the telephone number data of Yamada XX from the telephone function data group of the data storage unit 12 to generate the screen, and the output control unit 13 displays it on the touch display.
- thus, while displaying the phone book screen P23 takes three steps with touch operation input, it can be done in as little as one step with voice operation input.
- next, consider a case where the user wants to call the number 03-3333-4444. If touch operation input is used, the “telephone” button on the application list screen P01 shown in FIG. 8 is pressed to make a transition to the telephone screen P21. Next, the “number input” button on the telephone screen P21 is pressed to make a transition to the number input screen P24. Next, on the number input screen P24, the 10-digit number is entered by pressing the number buttons, and the “confirm” button is pressed to change to the number input call screen P25. As a result, a screen for making a call to 03-3333-4444 can be displayed.
- the user speaks “0333334444” while touching the “telephone” button on the application list screen P01 shown in FIG. 8 for a predetermined time to display the number input calling screen P25.
- thus, while displaying the number input calling screen P25 takes 13 steps with touch operation input, it can be done in as little as one step with voice operation input.
- FIG. 11A is a diagram for explaining a screen transition example of the in-vehicle information device according to Embodiment 1, and is a screen example related to a navigation function.
- FIGS. 7D and 7E are state transition tables corresponding to the screens related to the navigation function. For example, when the user wants to find a convenience store around the current location, if touch operation input is used, the “navi” button on the application list screen P01 shown in FIG. 11A is pressed to make a transition to the navigation screen (current location) P31. Next, the “menu” button on the navigation screen (current location) P31 is pressed to make a transition to the navigation menu screen P32.
- the “search for peripheral facilities” button on the navigation menu screen P32 is pressed to make a transition to the peripheral facility genre selection screen 1P34.
- the list on the peripheral facility genre selection screen 1P34 is scrolled and the “shopping” button is pressed to make a transition to the peripheral facility genre selection screen 2P35.
- the list on the peripheral facility genre selection screen 2P35 is scrolled and the “convenience store” button is pressed to make a transition to the convenience store brand selection screen P36.
- the “all convenience stores” button on the convenience store brand selection screen P36 is pressed to make a transition to the peripheral facility search result screen P37. Thereby, the search result list of the nearby convenience stores can be displayed.
- at this time, the in-vehicle information device detects the press of the “navigation” button on the application list screen P01 by the touch input detection unit 1, determines a touch operation by the input method determination unit 2, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts a touch signal representing the push of the “navigation” button into a command (navigation, navigation), and the state transition control unit 5 executes the application based on the state transition table of FIG. 7A. It is converted into the command “Transition to the navigation screen (current location) P31”.
- the application execution unit 11 acquires the current location from a GPS receiver (not shown) and the like, acquires map data around the current location from the navigation function data group of the data storage unit 12 and generates a screen, and outputs an output control unit. 13 displays the screen on the touch display.
- next, the touch input detection unit 1 detects the press of the “menu” button on the navigation screen (current location) P31, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input. Further, the touch-command conversion unit 3 converts the touch signal representing the press of the “menu” button into the command (menu, menu), and the state transition control unit 5 converts the command into the application execution instruction “transition to the navigation menu screen P32” based on the state transition table of FIG. 7D. Then, the application execution unit 11 acquires the data constituting the navigation menu screen P32 from the navigation function data group of the data storage unit 12 to generate the screen, and the output control unit 13 displays it on the touch display.
- next, the touch input detection unit 1 detects the press of the “search for nearby facilities” button on the navigation menu screen P32, the input method determination unit 2 determines a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts the touch signal indicating the pressing of the “search for peripheral facility” button into a command (search for peripheral facility, search for peripheral facility), and the state transition control unit 5 converts the command into FIG. 7D. Is converted into an application execution command “transition to the peripheral facility genre selection screen 1P34” based on the state transition table.
- the application execution unit 11 acquires peripheral facility list items from the navigation function data group of the data storage unit 12, and the output control unit 13 displays a list screen (P34) on which the list items are arranged on the touch display. .
- the list items constituting a list screen are grouped in the data storage unit 12 according to their contents, and are further hierarchized within each group.
- the list items “traffic”, “meal”, “shopping”, and “accommodation” on the peripheral facility genre selection screen 1P34 are group names, and are classified into the top layer of each group.
- the list items “department store”, “supermarket”, “convenience store”, and “home appliance” are stored in the hierarchy immediately below the list item “shopping”.
- the list items “all convenience stores”, “A convenience store”, “B convenience store”, and “C convenience store” are stored in the hierarchy immediately below “convenience store”.
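The grouped, hierarchized list-item storage just described can be sketched as a nested mapping. This is a minimal illustration only; the patent does not specify a data format, and the `FACILITY_GENRES` structure and `list_items` helper are hypothetical.

```python
# Hypothetical sketch of how the data storage unit 12 might group and
# hierarchize list items (structure and names are illustrative, not the
# patent's actual data format).
FACILITY_GENRES = {
    "traffic": {},
    "meal": {},
    "shopping": {
        "department store": {},
        "supermarket": {},
        "convenience store": {
            "all convenience stores": {},
            "A convenience store": {},
            "B convenience store": {},
            "C convenience store": {},
        },
        "home appliance": {},
    },
    "accommodation": {},
}

def list_items(path):
    """Return the list items stored in the hierarchy immediately below `path`."""
    node = FACILITY_GENRES
    for key in path:
        node = node[key]
    return list(node)

# Top layer of each group, as on the peripheral facility genre selection
# screen 1P34; the layer below "shopping", as on screen 2P35.
top_layer = list_items([])
below_shopping = list_items(["shopping"])
```

Each screen in the walkthrough then simply displays one layer of this hierarchy as its list items.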
- the touch input detection unit 1 detects the pressing of the “shopping” button on the peripheral facility genre selection screen 1P34, the input method determination unit 2 determines that it is a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input. Further, the touch-command conversion unit 3 converts the touch signal indicating the pressing of the “shopping” button into a command (shopping, shopping), and the state transition control unit 5 converts the command into the application execution command “transition to the peripheral facility genre selection screen 2P35” based on the state transition table of FIG. 7D. The application execution unit 11 then acquires the list items linked to “shopping” from the data storage unit 12, and the output control unit 13 displays the list screen (P35) on the touch display.
- the touch input detection unit 1 detects the pressing of the “convenience store” button on the peripheral facility genre selection screen 2P35, the input method determination unit 2 determines that it is a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts the touch signal indicating the pressing of the “convenience store” button into a command (convenience store, convenience store), and the state transition control unit 5 converts the command into the application execution instruction “transition to the convenience store brand selection screen P36” based on the state transition table of FIG. 7E.
- the application execution unit 11 acquires the list items of the convenience store brands of the peripheral facilities from the navigation function data group of the data storage unit 12, and the output control unit 13 displays the list screen (P36) on the touch display.
- the touch input detection unit 1 detects the pressing of the “all convenience stores” button on the convenience store brand selection screen P36, the input method determination unit 2 determines that it is a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts the touch signal indicating the pressing of the “all convenience stores” button into a command (all convenience stores, all convenience stores), and the state transition control unit 5 converts the command into the application execution command “transition to the peripheral facility search result screen P37, search for peripheral facilities at all convenience stores, and display the search results” based on the state transition table of FIG. 7E.
- the application execution unit 11 creates list items by searching the map data of the navigation function data group of the data storage unit 12 for convenience stores around the previously acquired current location, and the output control unit 13 displays the list screen (P37) on the touch display.
- the touch input detection unit 1 detects the pressing of the “B convenience store XX store” button on the peripheral facility search result screen P37, the input method determination unit 2 determines that it is a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts the touch signal indicating the pressing of the “B convenience store XX store” button into a command (B convenience store XX store, B convenience store XX store), and the state transition control unit 5 converts the command into an application execution instruction based on a state transition table (not shown).
- the application execution unit 11 acquires the map data containing the B convenience store XX store from the navigation function data group of the data storage unit 12 and generates the destination facility confirmation screen P38, which the output control unit 13 displays on the touch display.
- the touch input detection unit 1 detects the pressing of the “go here” button on the destination facility confirmation screen P38, the input method determination unit 2 determines that it is a touch operation, and the input switching control unit 4 notifies the state transition control unit 5 that it is a touch operation input.
- the touch-command conversion unit 3 converts the touch signal representing the pressing of the “go here” button into a command (go here, B convenience store XX store), and the state transition control unit 5 converts the command into an application execution instruction based on a state transition table (not shown).
- the application execution unit 11 uses the map data of the navigation function data group in the data storage unit 12 to perform a route search from the previously acquired current location to the B convenience store XX store as the destination, generates the navigation screen (with current-location route) P39, and the output control unit 13 displays the screen on the touch display.
- in the in-vehicle information device, the touch input detection unit 1 detects that the “navigation” button has been touched for a predetermined time, the input method determination unit 2 determines that it is a voice operation, the touch-command conversion unit 3 converts the touch signal representing the touch of the “navigation” button into an item name (navigation), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of the item name.
- the voice recognition unit 9 switches to the voice recognition dictionary designated by the voice recognition dictionary switching unit 8 and recognizes the utterance “convenience store”, and the voice-command conversion unit 10 converts the voice recognition result into the item value (convenience store) and notifies the state transition control unit 5.
- the state transition control unit 5 converts the command (navigation, convenience store) into the application execution instruction “transition to the peripheral facility search result screen P37, search for peripheral facilities at all convenience stores, and display the search results” based on the state transition table of FIG. 7A.
- the application execution unit 11 searches the map data of the navigation function data group of the data storage unit 12 for convenience stores and creates list items, and the output control unit 13 displays the list screen (P37) on the touch display.
- the operations that guide the route from the peripheral facility search result screen P37 to the specific convenience store as the destination (the destination facility confirmation screen P38 and the navigation screen (with current-location route) P39) are substantially the same as the processing described above, so their description is omitted.
- the peripheral facility search result screen P37 requires six steps to reach by touch operation input, but can be reached in as few as one step by voice operation input.
- the “navi” button on the application list screen P01 shown in FIG. 11A is pressed to transition to the navigation screen (current location) P31.
- the “menu” button on the navigation screen (current location) P31 is pressed to make a transition to the navigation menu screen P32.
- the “search for destination” button on the navigation menu screen P32 is pressed to make a transition to the destination setting screen P33 shown in FIG. 11B.
- the “facility name” button on the destination setting screen P33 shown in FIG. 11B is pressed to make a transition to the facility name input screen P43.
- on the facility name input screen P43, the seven characters of “Tokyoeki” (Tokyo Station) are input by pressing the character buttons, and the “Confirm” button is pressed to transition to the search result screen P44. The search result list for Tokyo Station can thereby be displayed.
- with voice operation input, if the user speaks “Tokyo Station” while touching the “navigation” button on the application list screen P01 shown in FIG. 11A for a certain period of time, the search result screen P44 shown in FIG. 11B can be displayed.
- the search result screen P44 requires twelve steps to reach by touch operation input, but can be reached in as few as one step by voice operation input.
- the user can switch to voice operation input in the middle of touch operation input. For example, the user presses the “navi” button on the application list screen P01 shown in FIG. 11A to make a transition to the navigation screen (current location) P31. Next, the “menu” button on the navigation screen (current location) P31 is pressed to make a transition to the navigation menu screen P32.
- if the user then performs a voice operation input of “convenience store” on the navigation menu screen P32, the nearby facility search result screen P37 can be displayed. In this case, a list of search results for convenience stores around the current location can be displayed in three steps from the application list screen P01.
- similarly, if the user performs a voice operation input of “Tokyo Station” on the navigation menu screen P32, the search result screen P44 shown in FIG. 11B can be displayed. In this case, the search result list for Tokyo Station can be displayed in three steps from the application list screen P01.
- alternatively, the search result screen P44 can be displayed by speaking “Tokyo Station” while touching the “facility name” button on the destination setting screen P33 shown in FIG. 11B for a certain period of time.
- in this case, the search result list for Tokyo Station can be displayed in four steps from the application list screen P01. In this way, the same voice input “Tokyo Station” can be performed on different screens (P32 and P33), and the number of steps varies depending on the screen on which the voice input is performed.
- different voice inputs can be made to the same button on the same screen to display a screen desired by the user.
- for example, the user can speak “convenience store” while touching the “navi” button on the application list screen P01 shown in FIG. 11A for a certain period of time to display the peripheral facility search result screen P37; if the user instead speaks “A convenience store” while touching the same “navi” button, the peripheral facility search result screen P40 can be displayed (based on the state transition table of FIG. 7A).
- a user who wants to search broadly for convenience stores can obtain search results for convenience stores of all brands by saying “convenience store”, while a user who wants to search only for “A convenience store” can obtain search results narrowed to A convenience store by saying “A convenience store”.
- as described above, the in-vehicle information device includes: the touch input detection unit 1, which detects a touch operation based on the output signal of the touch display; the touch-command conversion unit 3, which, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) containing the item name for executing the process (the transition destination screen, the application execution function, or both) corresponding to the operated button; the voice recognition unit 9, which recognizes a user utterance made substantially simultaneously with or following the touch operation, using a voice recognition dictionary containing the voice recognition keywords associated with the process; the voice-command conversion unit 10, which converts the voice recognition result into an item value for executing the corresponding process; the input method determination unit 2, which determines from the state of the touch operation whether the touch operation mode or the voice operation mode is indicated; the input switching control unit 4, which switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5, which, when instructed in the touch operation mode by the input switching control unit 4, acquires the command (item name, item value) from the touch-command conversion unit 3 and converts it into an application execution command, and, when instructed in the voice operation mode, obtains the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; and the application execution unit 11, which executes processing according to the application execution command.
- since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on a button, the normal touch operation and the voice operation related to that button can be switched and input with a single button, which makes the voice operation input as easy to understand as the touch operation input.
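The command-to-instruction conversion summarized above can be sketched as a table lookup keyed by the current state and the command pair. The table entries and function name below are hypothetical stand-ins for the state transition tables of FIGS. 7A and 7D, not the patent's actual data.

```python
# Sketch of the state-transition lookup: a command is the pair
# (item name, item value); touch input yields identical name and value,
# while voice input supplies the item value from speech recognition.
# The table entries below are hypothetical stand-ins for FIG. 7A/7D.
STATE_TRANSITION_TABLE = {
    # (current screen, item name, item value) -> application execution instruction
    ("P01", "navi", "navi"): "transition to navigation screen P31",
    ("P01", "navi", "convenience store"):
        "transition to P37, search all convenience stores, display results",
    ("P31", "menu", "menu"): "transition to navigation menu screen P32",
}

def to_app_instruction(screen, item_name, item_value):
    """Convert a command into an application execution instruction."""
    return STATE_TRANSITION_TABLE[(screen, item_name, item_value)]
```

A touch press of the “navi” button on P01 looks up ("P01", "navi", "navi"), while touching “navi” and speaking “convenience store” looks up ("P01", "navi", "convenience store"), which is how the voice path reaches P37 in a single step.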
- the item value obtained by converting the speech recognition result is information for executing a process classified in a lower layer within the same processing group as the item name, which is the button name.
- the in-vehicle information device includes the voice recognition dictionary DB7, which stores voice recognition dictionaries containing the voice recognition keywords associated with each process, and the voice recognition dictionary switching unit 8, which switches the voice recognition dictionary DB7 to the dictionary associated with the process of the touch-operated button (that is, the item name).
- the voice recognition unit 9 performs speech recognition of the user utterance, substantially simultaneously with or following the touch operation, using the dictionary switched to by the voice recognition dictionary switching unit 8. The recognition vocabulary can therefore be narrowed to the speech recognition keywords related to the touched button, and the speech recognition rate can be improved.
- in the first embodiment, a list screen displaying list items, such as the telephone directory list screen P22 shown in FIG. 8, and screens other than list screens are operated in the same way; in the second embodiment, the list screen is configured to perform an operation better suited to it.
- specifically, a voice recognition dictionary related to the list items is dynamically created for the list screen, and a voice operation input, such as selecting a list item, is determined by detecting a touch operation on the scroll bar.
- FIG. 12 is a block diagram showing a configuration of the in-vehicle information device according to the second embodiment.
- This in-vehicle information device is newly provided with a speech recognition target word dictionary creation unit 20. Components in FIG. 12 that are the same as or equivalent to those in FIG. 1 are assigned the same reference numerals, and detailed descriptions thereof are omitted.
- the touch input detection unit 1a detects whether or not the user has touched the scroll bar (display area) based on the input signal from the touch display. Based on the determination result (touch operation or voice operation) of the input method determination unit 2, the input switching control unit 4a notifies both the state transition control unit 5 and the application execution unit 11a of which input operation the user is performing.
- when notified of a touch operation on the scroll bar, the application execution unit 11a scrolls the list on the list screen.
- in addition, the application execution unit 11a uses the various data stored in the data storage unit 12 to execute the screen transition or application function in accordance with the application execution command notified from the state transition control unit 5, as in the first embodiment.
- the speech recognition target word dictionary creation unit 20 acquires, from the application execution unit 11a, the list data of the list items displayed on the screen, and creates a speech recognition target word dictionary related to the acquired list items using the voice recognition dictionary DB7.
- the voice recognition unit 9a refers to the speech recognition target word dictionary created by the speech recognition target word dictionary creation unit 20, performs voice recognition processing on the voice signal from the microphone, converts it into a character string or the like, and outputs the result to the voice-command conversion unit 10.
- for screens other than the list screen, the in-vehicle information device need only perform the same processing as in the first embodiment: the voice recognition dictionary switching unit 8 (not shown in FIG. 12) instructs the voice recognition unit 9a to switch to the voice recognition dictionary of the voice recognition keyword group associated with the item name.
- FIG. 13 is a flowchart showing the operation of the in-vehicle information device according to the second embodiment.
- FIG. 14 shows an example of screen transition by the in-vehicle information device.
- it is assumed that the in-vehicle information device is displaying the phone book list screen P51 of the telephone function, which is one of the functions of the application execution unit 11a, on the touch display.
- in step ST200, the touch input detection unit 1a detects whether or not the user has touched the scroll bar displayed on the touch display.
- the touch input detection unit 1a outputs, based on the output signal from the touch display, a touch signal indicating how the scroll bar is being touched (a scrolling drag operation, a touch held for a fixed time, and so on).
- in step ST210, the touch-command conversion unit 3 converts the touch signal input from the touch input detection unit 1a into the scroll bar command (item name, item value) = (scroll bar, scroll bar) and outputs it.
- the input method determination unit 2 then determines the input method based on the touch signal input from the touch input detection unit 1a, that is, whether the user is performing a touch operation or a voice operation, and outputs the determination result.
- This input method determination process is as shown in the flowchart of FIG.
- for a button, the touch operation mode is determined when the touch signal indicates an operation of pressing the button, and the voice operation mode is determined when the touch signal indicates an operation of touching the button for a certain time.
- for the scroll bar, the touch operation mode is determined when the touch signal indicates an operation of scrolling while pressing the scroll bar, and the voice operation mode is determined when the touch signal indicates an operation of simply touching the scroll bar for a certain period of time. The determination conditions may be set as appropriate.
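The determination conditions above can be sketched as follows. The touch-signal representation (a kind plus a duration in seconds) and the 1.0-second threshold are assumptions for illustration only; the patent leaves the concrete conditions open.

```python
# Hedged sketch of the input-method determination: the touch-signal
# representation (kind, duration in seconds) and the 1.0 s threshold are
# assumptions for illustration only.
def determine_input_method(touch_signal):
    """Return "touch" or "voice" from a simplified touch signal."""
    kind, duration = touch_signal
    if kind in ("press", "scroll_drag"):
        # pressing a button, or scrolling while pressing the scroll bar
        return "touch"
    if kind == "touch" and duration >= 1.0:
        # merely touching a button or the scroll bar for a certain time
        return "voice"
    return "touch"
```

Other conditions mentioned later in the text (half-press, double-tap, long-press) could be added as further `kind` values in the same way.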
- in step ST230, if the determination result input from the input switching control unit 4a indicates the touch operation mode (step ST230 “YES”), then in the next step ST240 the state transition control unit 5 converts the command input from the touch-command conversion unit 3 into an application execution instruction based on the state transition table of the state transition table storage unit 6.
- FIG. 15 illustrates an example of a state transition table included in the state transition table storage unit 6 according to the second embodiment.
- in the commands corresponding to the scroll bar displayed on each of the screens P51, P61, and P71, the item name is “scroll bar”.
- some command item values are the same “scroll bar” as the item name, while others differ. A command whose item name and item value are the same is used for touch operation input, and a command whose item name and item value differ is used mainly for voice operation input.
- in step ST240, the state transition control unit 5 converts the command (scroll bar, scroll bar) input from the touch-command conversion unit 3 into the application execution command “scroll the list without screen transition”.
- the application execution unit 11a, having received the application execution command “scroll the list without screen transition” from the state transition control unit 5, scrolls the list on the currently displayed list screen.
- if the determination result input from the input switching control unit 4a indicates the voice operation mode (“NO” in step ST230), the process proceeds to step ST250, and an application execution command is generated by voice operation input.
- a method of generating an application execution command by voice operation input will be described using the flowchart shown in FIG.
- in step ST251, when the speech recognition target word dictionary creation unit 20 receives notification of the voice operation input determination result from the input switching control unit 4a, it acquires from the application execution unit 11a the list data of the list items on the list screen currently displayed on the touch display.
- in step ST252, the speech recognition target word dictionary creation unit 20 creates a speech recognition target word dictionary related to the acquired list items.
- FIG. 17 is a diagram for explaining the speech recognition target word dictionary.
- this speech recognition target word dictionary contains three types of speech recognition keywords: (1) speech recognition keywords for the items arranged in the list, (2) speech recognition keywords for narrowing down a search of the list items, and (3) all speech recognition keywords for the lower-layer screens of the items arranged in the list.
- (1) is, for example, names (Akiyama XX, Kato XX, Suzuki XX, Tanaka XX, Yamada XX, etc.) lined up on the telephone directory list screen.
- (2) is, for example, the convenience store brand names (A convenience store, B convenience store, C convenience store, D convenience store, E convenience store, etc.) lined up on the peripheral facility search result screen showing the result of searching for “convenience store” among the facilities around the current location.
- (3) is, for example, the genre names (convenience store, department store, etc.) included in the lower-layer screen of the “shopping” item arranged on the peripheral facility genre selection screen 1, and the convenience store brand names and the like below each genre name.
- in step ST253, the voice recognition unit 9a performs voice recognition processing on the voice signal input from the microphone using the speech recognition target word dictionary created by the speech recognition target word dictionary creation unit 20, and detects and outputs the voice operation input. For example, when the user touches the scroll bar for a certain period of time (or half-presses, double-taps, long-presses it, etc.) on the phone book list screen P51 shown in FIG., a dictionary whose speech recognition keywords are the names arranged in the phone book list is created. The speech recognition keywords are thereby narrowed to those related to the list, and an improvement in the speech recognition rate can be expected.
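The dictionary creation in step ST252 can be sketched as collecting keywords from the list items currently displayed and from the items in the layers below them (keyword types (1) and (3) above). The function and variable names here are hypothetical.

```python
# Illustrative sketch of step ST252: build a speech recognition target word
# dictionary from the list items currently displayed and the items in the
# layers below them. Function and variable names are hypothetical.
def build_target_word_dictionary(displayed_items, hierarchy):
    """`hierarchy` maps a list item to the list items in its lower layer."""
    keywords = set(displayed_items)          # (1) items arranged in the list
    for item in displayed_items:             # (3) lower-layer items
        keywords.update(hierarchy.get(item, []))
    return keywords
```

On the peripheral facility genre selection screen, for example, the displayed genre names plus the brand names below them would form the recognition vocabulary, narrowing the keywords to those related to the list on screen.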
- in step ST254, the voice-command conversion unit 10 converts the voice recognition result input from the voice recognition unit 9a into a command (item value) and outputs it.
- in step ST255, the state transition control unit 5 converts the command (item name, item value), consisting of the item name input from the input switching control unit 4a and the item value input from the voice-command conversion unit 10, into an application execution instruction based on the state transition table stored in the state transition table storage unit 6.
- suppose the current state is the telephone directory list screen P51 shown in FIG.
- the item name input from the input switching control unit 4a to the state transition control unit 5 is “scroll bar”, and the item value input from the voice-command conversion unit 10 is “Yamada OO”, so the command is (scroll bar, Yamada OO).
- the command is converted into the application execution command “transition to the phone book screen P52 and display the phone book entry for Yamada OO”. The user can thus easily select and confirm a list item such as “Yamada OO” that lies further down the list and is not displayed on the list screen.
- suppose the current state is the peripheral facility search result screen P61 shown in FIG.
- the item value input from the voice-command conversion unit 10 to the state transition control unit 5 is “A convenience store”, so the command is (scroll bar, A convenience store).
- the command is converted into the application execution command “perform a narrowing search for A convenience store without screen transition and display the search results”. The user can thus easily narrow down the list items.
- suppose the current state is the peripheral facility genre selection screen 1P71 shown in FIG.
- the item value input from the voice-command conversion unit 10 to the state transition control unit 5 is “A convenience store”, so the command is again (scroll bar, A convenience store).
- the command is converted into the application execution instruction “transition to the peripheral facility search result screen P74, search for nearby A convenience store facilities, and display the search results”. The user can thus easily transition from the displayed list screen to a lower-layer screen or execute a lower-layer application function.
- in step ST256, the state transition control unit 5 outputs the application execution instruction converted from the command to the application execution unit 11a.
- in step ST260, the application execution unit 11a acquires the necessary data from the data storage unit 12 according to the application execution instruction input from the state transition control unit 5, and performs screen transition, function execution, or both.
- in step ST270, the output control unit 13 outputs the result of the screen transition and function execution of the application execution unit 11a as display and sound. Since the operations of the application execution unit 11a and the output control unit 13 are the same as in the first embodiment, their description is omitted.
- in the above description, the speech recognition target word dictionary creation unit 20 creates the speech recognition target word dictionary in step ST252, but the dictionary creation timing is not limited to this. For example, the dictionary related to a list screen may be created when the screen transitions to that list screen (when the application execution unit 11a generates the list screen, or when the output control unit 13 displays it).
- alternatively, a speech recognition target word dictionary for each list screen may be prepared in advance, and the device may switch to the prepared dictionary when a touch on the scroll bar of the list screen is detected or when the screen transitions to the list screen.
- as described above, the in-vehicle information device according to the second embodiment includes the data storage unit 12, which stores the data of list items divided into groups and further hierarchized within each group, and the speech recognition target word dictionary creation unit 20, which creates a speech recognition target word dictionary by extracting from the voice recognition dictionary DB7 the speech recognition keywords associated with the list items arranged on the list screen and with the list items below them.
- the voice recognition unit 9a performs voice recognition of the utterance made while the scroll bar area is touched, using the speech recognition target word dictionary created by the speech recognition target word dictionary creation unit 20.
- the timing at which the speech recognition target word dictionary creation unit 20 creates the dictionary may be when the list screen is displayed, instead of after the scroll bar is touched.
- the voice recognition keywords to be extracted need not cover each list item arranged on the list screen together with the list items below it; for example, only the list items arranged on the list screen may be used, or each list item on the list screen together with the list items one layer below, or each list item on the list screen together with all the list items in the layers below.
- FIG. 20 is a block diagram illustrating a configuration of the in-vehicle information device according to the third embodiment.
- This in-vehicle information device newly includes an output method determination unit 30 and an output data storage unit 31, and notifies the user of whether the touch operation mode or the voice operation mode is active. Components in FIG. 20 that are the same as or equivalent to those in FIG. 1 are assigned the same reference numerals, and detailed descriptions thereof are omitted.
- based on the determination result (touch operation mode or voice operation mode) of the input method determination unit 2, the input switching control unit 4b informs the state transition control unit 5 which input operation the user intends, and also informs the output method determination unit 30. Further, when the voice operation input is determined, the input switching control unit 4b outputs the item name of the command input from the touch-command conversion unit 3 to the output method determination unit 30.
- when notified of the touch operation mode by the input switching control unit 4b, the output method determination unit 30 determines an output method that informs the user that touch operation input is active (a button color, sound effect, touch display click feeling, or vibration pattern indicating the touch operation mode), and acquires output data from the output data storage unit 31 and outputs it to the output control unit 13b as necessary. When notified of the voice operation mode, the output method determination unit 30 determines an output method that informs the user that voice operation input is active (a button color, sound effect, touch display click feeling, vibration pattern, voice recognition mark, voice guidance, or the like indicating the voice operation mode), acquires the output data corresponding to the voice operation item name from the output data storage unit 31, and outputs it to the output control unit 13b.
- the output data storage unit 31 stores data used to notify the user whether the input method is a touch operation input or a voice operation input.
- the data include, for example, sound effect data that lets the user distinguish whether the touch operation mode or the voice operation mode is active, image data for a voice recognition mark indicating the voice operation mode, and voice guidance data that prompts the user to utter a voice recognition keyword corresponding to the touched button (item name).
- here the output data storage unit 31 is provided separately, but another storage device may be used; for example, the output data may be stored in the state transition table storage unit 6 or the data storage unit 12.
- the output control unit 13b displays the execution result of the application execution unit 11 on the touch display or outputs it as sound from the speaker, and, according to the output method determined for the touch operation mode or the voice operation mode, changes the button color, changes the click feeling of the touch display, changes the vibration pattern, and outputs voice guidance. Any one of these output methods may be used alone, or several types may be combined arbitrarily.
- FIG. 21 is a flowchart showing the output method control operation of the in-vehicle information device according to the third embodiment. Steps ST100 to ST130 in FIG. 21 are the same processes as steps ST100 to ST130 in FIG. If the determination result of the input method is a touch operation (step ST130 “YES”), the input switching control unit 4b notifies the output method determination unit 30 to that effect. In the subsequent step ST300, the output method determination unit 30 receives the notification that the input is a touch operation input from the input switching control unit 4b and determines the output method of the application execution result. For example, the buttons on the screen are changed to the button color for touch operation, or the sound effect, click feeling, and vibration produced when the user touches the touch display are changed to those for touch operation.
- The input switching control unit 4b notifies the output method determination unit 30 that the input is a voice operation input, together with its command (item name).
- The output method determination unit 30 receives the notification that the input is a voice operation input from the input switching control unit 4b, and determines the output method of the application execution result. For example, the buttons on the screen are changed to the button color for voice operation, and the sound effect, click feeling, and vibration produced when the user touches the touch display are changed for voice operation. Further, the output method determination unit 30 acquires voice guidance data from the output data storage unit 31 based on the item name of the button touched at the time of input method determination.
- FIG. 22 shows the telephone screen when a voice operation input is determined. Assume that the user keeps touching the "phone book" button for a certain period of time while the telephone screen is displayed. In this case, the output method determination unit 30 receives from the input switching control unit 4b the notification that the input is a voice operation input, together with the item name (phone book). Subsequently, the output method determination unit 30 acquires the voice recognition mark data from the output data storage unit 31, and outputs an instruction to display the voice recognition mark near the "phone book" button to the output control unit 13b.
- The output control unit 13b superimposes the voice recognition mark near the phone book button on the telephone screen so that the voice recognition mark appears as a balloon from the "phone book" button touched by the user, and outputs the result to the touch display. This shows the user, in an easy-to-understand manner, that the state has switched to voice operation input and which button the voice operation is associated with. If the user speaks "Yamada XX" in this state, a lower-level telephone directory screen having a calling function can be displayed.
- Alternatively, the output method determination unit 30 that has received the notification of a voice operation input may acquire the voice guidance "Who do you want to call?" associated with the item name (phone book) from the output data storage unit 31 and output it to the output control unit 13b, and the output control unit 13b outputs this voice guidance from the speaker.
- The output method determination unit 30 receives from the input switching control unit 4b the notification that the input is a voice operation input, together with the item name (search for nearby facilities).
- The output method determination unit 30 acquires voice guidance data associated with this item name, such as "Which facility do you want to go to?" or "Please tell us the facility name", from the output data storage unit 31 and outputs it to the output control unit 13b. The content to be uttered can thus be asked of the user by voice guidance matched to the touched button, guiding the voice operation input more naturally. This can be said to be easier to understand than the voice guidance "Please speak when you hear a beep" that is output when the utterance button used in general voice operation input is pressed.
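The per-button guidance lookup described above can be sketched roughly as follows. The stored phrases are taken from the examples in the text; the function and dictionary names are illustrative, not part of the patent.

```python
# Hypothetical sketch of the output data storage unit 31: voice guidance
# data keyed by the command item name of the touched button.

GUIDANCE = {
    "phone book": "Who do you want to call?",
    "search for nearby facilities": "Which facility do you want to go to?",
}

def guidance_for(item_name, default="Please speak when you hear a beep"):
    # Prefer guidance tailored to the touched button; fall back to the
    # generic prompt used with a conventional utterance button.
    return GUIDANCE.get(item_name, default)
```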
- FIG. 23 is an example of a list screen at the time of voice operation input.
- The output method determination unit 30 performs control so that the voice recognition mark is superimposed near the scroll bar on the list screen to notify the user that voice operation input is in progress.
- the in-vehicle information device receives the instruction of the touch operation mode or the voice operation mode from the input switching control unit 4b, and changes the output method of the execution result by the output unit to the instructed mode.
- the output method determining unit 30 that determines the output method is provided, and the output control unit 13b is configured to control the output unit according to the output method determined by the output method determining unit 30. For this reason, by returning different feedback between the touch operation mode and the voice operation mode, it is possible to intuitively tell the user which operation mode state is in effect.
- The in-vehicle information device includes the output data storage unit 31 that stores, for each command (item name), voice guidance data prompting the user to utter the voice recognition keywords (item values) associated with that command. The output method determination unit 30 acquires from the output data storage unit 31 the voice guidance data corresponding to the command (item name) generated by the touch-command conversion unit 3 and outputs it to the output control unit 13b, and the output control unit 13b outputs this voice guidance data from the speaker. For this reason, when the voice operation mode is entered, voice guidance matched to the touch-operated button can be output, and the user can be guided to utter the voice recognition keyword naturally.
- the application has been described by taking the AV function, the telephone function, and the navigation function as examples, but it goes without saying that other applications may be used.
- The in-vehicle information device may accept inputs such as a command for starting and stopping the in-vehicle air conditioner or a command for raising and lowering the set temperature, and may control the air conditioner function using data stored in the data storage unit 12.
- The user's favorite URLs may be stored in the data storage unit 12, and an input such as a command for acquiring the data of a URL via the network 14 may be accepted and the data displayed on the screen.
- Applications that execute other functions may also be used.
- The present invention is not limited to in-vehicle information devices, and may also be applied to a user interface device of a portable terminal that can be brought into a vehicle, such as a PND (Portable/Personal Navigation Device) or a smartphone.
- the present invention is not limited to vehicles, and may be applied to user interface devices such as household electric appliances.
- When this user interface device is configured by a computer, an information processing program describing the processing contents of the touch input detection unit 1, the input method determination unit 2, the touch-command conversion unit 3, the input switching control unit 4, the state transition control unit 5, the state transition table storage unit 6, the speech recognition dictionary DB 7, the speech recognition dictionary switching unit 8, the speech recognition unit 9, the speech-command conversion unit 10, the application execution unit 11, the data storage unit 12, the output control unit 13, the speech recognition target word dictionary creation unit 20, the output method determination unit 30, and the output data storage unit 31 may be stored in a memory of the computer, and a CPU of the computer may execute the information processing program stored in the memory.
- Embodiment 4.
- In Embodiments 1 to 3, the operation mode is switched between the touch operation mode (execution of the button function) and the voice operation mode (activation of voice recognition related to the button) according to the touch operation state (short press or long press). Such switching between the touch operation mode and the voice operation mode is possible not only for the buttons of a touch display but also for an input device such as a mechanical hard button, depending on the state of the touch operation on that device. Therefore, in Embodiment 4 and Embodiments 5 to 10 described later, an information device that switches the operation mode according to the state of a touch operation on an input device such as a hard button will be described.
- The in-vehicle information device according to Embodiment 4 has the same configuration as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, and is therefore described below with reference to FIG. 1, FIG. 12, and FIG. 20.
- In Embodiments 1 to 3, the touch display is used as the input device.
- the following (1) to (6) are used as examples of the input device.
- (1) Combination of hard buttons and a touch display
(2) Combination of hard buttons and a display
(3) Hard buttons only, corresponding to display items on a display
(4) Combination of a display and a cursor operation hardware device such as a joystick
(5) Combination of a display and a touchpad
(6) Hard buttons only
- Hard buttons are mechanical physical buttons, including the rubber buttons of remote controllers (hereinafter, remote controls) and the sheet keys used in thin mobile phones. Details of the cursor operation hardware device will be described later.
- The touch input detection unit 1 of the in-vehicle information device detects how the user presses a hard button, and the input method determination unit 2 determines which of the two operation modes the input method is. For example, in the case of a hard button without a tactile sensor, the input method may be determined by whether the button is pressed briefly or long, or by whether the button is pressed once or twice. In the case of a hard button with a tactile sensor, the input method may be determined by whether the user touched or pressed the hard button. In the case of a hard button that can detect a half press (for example, the shutter button of a camera), the input method may be determined by whether the button is pressed halfway or fully. Thus, by using two types of touch operations on one hard button, it is possible to determine, for a single hard button, whether the input is a touch operation or a voice operation.
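The alternative determination criteria listed above might be sketched as follows. The threshold value and the mapping of each criterion to a mode are assumptions made only for illustration; the patent fixes no concrete values.

```python
# Illustrative classifier for the hard-button press patterns described
# above (thresholds and mode assignments are assumptions).

SHORT_LONG_THRESHOLD_S = 0.8  # assumed boundary between short and long press

def classify_press(duration_s=None, press_count=None, half_pressed=None):
    """Map one observed touch state to an operation mode.

    Exactly one of the three criteria is used, matching the alternatives
    in the text: press duration, press count, or half/full press.
    """
    if duration_s is not None:
        return "voice" if duration_s >= SHORT_LONG_THRESHOLD_S else "touch"
    if press_count is not None:
        return "voice" if press_count >= 2 else "touch"
    if half_pressed is not None:
        return "voice" if half_pressed else "touch"
    raise ValueError("no touch state supplied")
```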
- FIG. 24 is a diagram showing a configuration example of the hard buttons 100 to 105 and the touch display 106 provided in (or connected to) the in-vehicle information device.
- hard buttons 100 to 105 are installed around the touch display 106, and item names of higher-level functions that can be executed by the application execution unit 11 are associated with the hard buttons 100 to 105.
- In this example, the touch operation mode is determined when the hard buttons 100 to 105 are pressed briefly, and the voice operation mode is determined when they are pressed long.
- When the user briefly presses the "PHONE" hard button 103, the touch input detection unit 1 detects this short press and outputs a touch signal.
- the touch-command conversion unit 3 converts the touch signal into a command (PHONE, PHONE).
- The input method determination unit 2 determines that the input method is the touch operation mode based on the touch signal, and the state transition control unit 5 that has received this determination converts the command (PHONE, PHONE) into an application execution command and outputs it to the application execution unit 11.
- the application execution unit 11 displays the PHONE menu on the touch display 106 based on the application execution command.
- a “phone book” button, a “number input” button, and the like are displayed on the PHONE menu screen, and each button is associated with functions such as a telephone directory and a number input one level lower than the PHONE menu. The user performs these button operations using the touch display 106.
- The input method determination unit 2 determines that the input method is the voice operation mode based on the touch signal, and the input switching control unit 4 outputs the command item name (PHONE) to the voice recognition dictionary switching unit 8 to switch to the voice recognition dictionary related to PHONE.
- the voice recognition unit 9 performs voice recognition processing using a voice recognition dictionary related to PHONE, and detects a voice operation input that the user speaks following the touch operation on the hard button 103.
- The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 executes a telephone number search corresponding to the item value.
- At this time, a sound effect or a display indicating that the mode has switched to the voice operation mode (for example, display of a voice recognition mark as shown in FIG. 26) may be output, and voice guidance prompting the user to speak (for example, "Who do you want to call?") may be output.
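A rough sketch of this voice operation path, with toy stand-ins for the voice recognition dictionary switching unit 8 and the recognition units 9 and 10: the dictionary contents and names below are invented for illustration only.

```python
# Sketch of the voice operation path for one hard button: the dictionary
# is switched by the command item name, and the recognized utterance
# becomes the command item value (all names illustrative).

DICTIONARIES = {
    "PHONE": {"yamada", "suzuki"},              # voice recognition keywords
    "search for destination": {"station", "airport"},
}

def voice_command(item_name, utterance):
    """Recognize an utterance against the dictionary for item_name and
    return the resulting command (item name, item value)."""
    vocab = DICTIONARIES[item_name]             # dictionary switching unit 8
    word = utterance.lower()
    if word in vocab:                           # units 9/10: result -> command
        return (item_name, word)
    return None                                 # out-of-dictionary utterance
```

Restricting recognition to the dictionary tied to the touched button is what keeps the vocabulary small at each step, as the embodiments describe.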
- As described above, the in-vehicle information device according to Embodiment 4 includes: the touch input detection unit 1 that detects a touch operation based on the output signals of the hard buttons 100 to 105; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including an item name for executing the process corresponding to the touch-operated hard button 100 to 105; the voice recognition unit 9 that, using a voice recognition dictionary composed of the voice recognition keywords associated with the processes, recognizes a user utterance made substantially simultaneously with or following the touch operation; the voice-command conversion unit 10 that converts the voice recognition result into a command (item value) for executing the corresponding process; the input method determination unit 2 that determines, based on the detection result of the touch input detection unit 1, whether the touch operation state indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5 that, when receiving an instruction of the touch operation mode from the input switching control unit 4, acquires a command (item name, item value) from the touch-command conversion unit 3 and converts it into an application execution command, and, when receiving an instruction of the voice operation mode, acquires the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; the application execution unit 11 that executes processing according to the application execution command; and the output control unit 13 that controls an output unit, such as the touch display 106, that outputs the execution result of the application execution unit 11. For this reason, since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on a hard button, the normal touch operation and the voice operation related to the hard button can be switched and input with a single hard button. In addition, the same effects as in Embodiments 1 to 3 are obtained.
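The overall flow summarized above, with one hard button dispatching to either mode, might be sketched as follows. All names are hypothetical, and the recognizer is abstracted as a callable.

```python
# End-to-end sketch of the Embodiment 4 flow: one hard button, two modes.

def handle_button(item_name, press, recognize=None):
    """press: 'short' or 'long'; recognize: callable returning the
    recognized item value once the voice operation mode is entered."""
    if press == "short":                     # touch operation mode
        return ("exec", item_name, item_name)
    # long press: voice operation mode; item value comes from speech
    item_value = recognize() if recognize else None
    if item_value is None:
        return ("await_speech", item_name)   # still waiting for an utterance
    return ("exec", item_name, item_value)
```

A short press executes the button's own function, while a long press defers execution until the recognizer supplies the item value, mirroring the two paths through the state transition control unit 5.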
- Embodiment 5.
- The in-vehicle information device according to Embodiment 5 has the same configuration as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, and is therefore described below with reference to FIG. 1, FIG. 12, and FIG. 20.
- FIG. 27 shows a configuration example of the hard buttons 103 to 105 and the display 108 included in (or connected to) the in-vehicle information device.
- In this example, it is assumed that the display 108 and the hard buttons 103 to 105 are installed around the steering wheel 107 of the vehicle.
- the item names of the hard buttons 103 to 105 are displayed on the display 108.
- the display 108 and the hard buttons 103 to 105 may be arranged anywhere.
- In this example, the touch operation mode is determined when the hard buttons 103 to 105 are pressed briefly, and the voice operation mode is determined when they are pressed long.
- When the user briefly presses the "PHONE" hard button 103, the touch input detection unit 1 detects this short press and outputs a touch signal.
- the touch-command conversion unit 3 converts the touch signal into a command (PHONE, PHONE).
- The input method determination unit 2 determines that the input method is the touch operation mode based on the touch signal, and the state transition control unit 5 that has received this determination converts the command (PHONE, PHONE) into an application execution command and outputs it to the application execution unit 11.
- the application execution unit 11 causes the display 108 to display a PHONE menu (for example, the PHONE menu screen shown in FIG. 25) based on the application execution command.
- the operation method for the PHONE menu screen is not limited.
- the user may operate an input device such as a joystick (not shown) or a rotary dial.
- The input method determination unit 2 determines that the input method is the voice operation mode based on the touch signal, and the input switching control unit 4 outputs the command item name (PHONE) to the voice recognition dictionary switching unit 8 to switch to the voice recognition dictionary related to PHONE.
- the voice recognition unit 9 performs voice recognition processing using a voice recognition dictionary related to PHONE, and detects a voice operation input that the user speaks following the touch operation on the hard button 103.
- The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 executes a telephone number search corresponding to the item value.
- At this time, a sound effect or a display indicating that the mode has switched to the voice operation mode (for example, display of a voice recognition mark as shown in FIG. 27) may be output, and voice guidance prompting the user to speak (for example, the voice "Who do you want to call?") may be output. Alternatively, a sentence prompting the user to speak, as shown in FIG. 28, may be displayed on the display 108.
- As described above, the in-vehicle information device according to Embodiment 5 includes: the touch input detection unit 1 that detects a touch operation based on the output signals of the hard buttons 103 to 105; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including an item name for executing the process corresponding to the touch-operated hard button 103 to 105; the voice recognition unit 9 that, using a voice recognition dictionary composed of the voice recognition keywords associated with the processes, recognizes a user utterance made substantially simultaneously with or following the touch operation; the voice-command conversion unit 10 that converts the voice recognition result into a command (item value) for executing the corresponding process; the input method determination unit 2 that determines, based on the detection result of the touch input detection unit 1, whether the touch operation state indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5 that, when receiving an instruction of the touch operation mode from the input switching control unit 4, acquires a command (item name, item value) from the touch-command conversion unit 3 and converts it into an application execution command, and, when receiving an instruction of the voice operation mode, acquires the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; the application execution unit 11 that executes processing according to the application execution command; and the output control unit 13 that controls an output unit, such as the display 108, that outputs the execution result of the application execution unit 11. For this reason, since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on a hard button, the normal touch operation and the voice operation related to the hard button can be switched and input with a single hard button. In addition, the same effects as in Embodiments 1 to 3 are obtained.
- Embodiment 6.
- The in-vehicle information device according to Embodiment 6 has the same configuration as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, and is therefore described below with reference to FIG. 1, FIG. 12, and FIG. 20.
- FIG. 29 shows a configuration example of the hard buttons 100 to 102 and the display 108 included in (or connected to) the in-vehicle information device. It is assumed that the display 108 and the hard buttons 100 to 102 are installed around the steering wheel 107 of the vehicle. In this example, the touch operation mode is determined when the hard buttons 100 to 102 are pressed briefly, and the voice operation mode is determined when they are pressed long.
- In Embodiments 4 and 5, specific functions are fixedly associated with the hard buttons 100 to 105. In contrast, in Embodiment 6, the functions of the hard buttons 100 to 102 are made variable, similarly to the buttons on the touch display in Embodiments 1 to 3. In the example of FIG. 29, a "search for destination" function executed in conjunction with pressing of the "1" hard button 100, a "call" function executed in conjunction with pressing of the "2" hard button 101, and a "listen to music" function executed in conjunction with pressing of the "3" hard button 102 are displayed on the screen.
- When the user briefly presses the "1" hard button 100, the touch input detection unit 1 detects this short press and outputs a touch signal including the position information of the briefly pressed hard button.
- The touch-command conversion unit 3 creates a command (search for destination, search for destination) based on the position information of the hard button.
- The input method determination unit 2 determines that the input method is the touch operation mode based on the touch signal, and the state transition control unit 5 that has received this determination converts the command (search for destination, search for destination) into an application execution command and outputs it to the application execution unit 11.
- The application execution unit 11 displays a destination setting screen as shown in FIG. 30 on the display 108 based on the application execution command.
- The input method determination unit 2 determines that the input method is the voice operation mode based on the touch signal, and the input switching control unit 4 outputs the command item name (search for destination) to the voice recognition dictionary switching unit 8 to switch to the voice recognition dictionary related to destination search.
- the voice recognition unit 9 performs voice recognition processing using a voice recognition dictionary related to destination search, and detects a voice operation input that the user speaks following a touch operation on the hard button 100.
- The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 executes a search with the item value as the destination.
- At this time, a sound effect or a display indicating that the mode has switched to the voice operation mode (for example, display of a voice recognition mark as shown in FIG. 31) may be output, and voice guidance prompting the user to speak (for example, "Where do you want to go?") may be output.
- As described above, the in-vehicle information device according to Embodiment 6 includes: the touch input detection unit 1 that detects a touch operation based on the output signals of the hard buttons 100 to 102; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including an item name for executing the process (one or both of the transition destination screen and the application execution function) corresponding to the touch-operated hard button 100 to 102; the voice recognition unit 9 that, using a voice recognition dictionary composed of the voice recognition keywords associated with the processes, recognizes a user utterance made substantially simultaneously with or following the touch operation; the voice-command conversion unit 10 that converts the voice recognition result into a command (item value) for executing the corresponding process; the input method determination unit 2 that determines, based on the detection result of the touch input detection unit 1, whether the touch operation state indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5 that, when receiving an instruction of the touch operation mode from the input switching control unit 4, acquires a command from the touch-command conversion unit 3 and converts it into an application execution command, and, when receiving an instruction of the voice operation mode from the input switching control unit 4, acquires the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; the application execution unit 11 that executes processing according to the application execution command; and the output control unit 13 that controls an output unit, such as the display 108, that outputs the execution result of the application execution unit 11.
- For this reason, since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on a hard button corresponding to an item displayed on the display, the normal touch operation and the voice operation related to the hard button can be switched and input with a single hard button.
- While the association between hard buttons and functions is fixed in Embodiments 4 and 5, it is variable in Embodiment 6, so that the touch operation mode and the voice operation mode can be switched and input on various screens.
- Furthermore, voice input can be performed in the voice operation mode at whatever level of the menu hierarchy the user has descended to.
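The variable association between hard buttons and functions can be illustrated as a per-screen lookup table. The screen names and bindings below are invented examples in the spirit of FIG. 29, not data from the patent.

```python
# Sketch of Embodiment 6's variable association: the function of each
# numbered hard button depends on the screen currently displayed.

SCREENS = {
    "top": {1: "search for destination", 2: "call", 3: "listen to music"},
    "destination": {1: "facility name", 2: "address", 3: "phone number"},
}

def button_function(screen, button_no):
    # Look up the function currently bound to a numbered hard button;
    # the same button maps to different commands on different screens.
    return SCREENS[screen][button_no]
```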
- Embodiment 7.
- The in-vehicle information device according to Embodiment 7 has the same configuration as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, and is therefore described below with reference to FIG. 1, FIG. 12, and FIG. 20.
- FIG. 32 shows a configuration example of display 108 and joystick 109 included in (or connected to) the in-vehicle information device.
- In this example, it is assumed that the joystick 109 is installed around the steering wheel 107 of the vehicle.
- the display 108 and the joystick 109 may be disposed anywhere.
- Although the joystick 109 is illustrated here as an example of the cursor operation hardware device, other input devices such as a rotary dial or an up/down selector may be used.
- In this example, the touch operation mode is determined when the joystick 109 is pressed briefly, and the voice operation mode is determined when it is pressed long.
- For example, the user operates the joystick 109 to place the cursor on "1. Search for destination" and presses it briefly.
- The touch input detection unit 1 detects the short press of the joystick 109 and outputs a touch signal including the position information of the cursor at the time of the short press.
- The touch-command conversion unit 3 creates a command (search for destination, search for destination) based on the position information of the cursor.
- The input method determination unit 2 determines that the input method is the touch operation mode based on the touch signal, and the state transition control unit 5 that has received this determination converts the command (search for destination, search for destination) into an application execution command and outputs it to the application execution unit 11.
- the application execution unit 11 causes the display 108 to display a destination setting screen (for example, the destination setting screen shown in FIG. 30) based on the application execution command.
- the input method determination unit 2 determines that the input method is the voice operation mode based on the touch signal. Then, the item name of the command (search for the destination) is output from the input switching control unit 4 to the voice recognition dictionary switching unit 8 to switch to the voice recognition dictionary related to the destination search. Then, the voice recognition unit 9 performs voice recognition processing using a voice recognition dictionary related to destination search, and detects a voice operation input that the user speaks following a touch operation on the joystick 109.
- The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 executes a search with the item value as the destination.
- At this time, a sound effect and a display indicating that the mode has switched to the voice operation mode (for example, display of a voice recognition mark as shown in FIG. 32) are output.
- voice guidance for prompting the user to speak (for example, “Where are you going?”) May be output.
- As described above, the in-vehicle information device according to Embodiment 7 includes: the touch input detection unit 1 that detects a touch operation based on the output signal of the joystick 109; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including an item name for executing the process (one or both of the transition destination screen and the application execution function) corresponding to the item selected with the joystick 109; the voice recognition unit 9 that, using a voice recognition dictionary composed of the associated voice recognition keywords, recognizes a user utterance made substantially simultaneously with or following the touch operation; the voice-command conversion unit 10 that converts the voice recognition result into a command (item value) for executing the corresponding process; the input method determination unit 2 that determines, based on the detection result of the touch input detection unit 1, whether the touch operation state indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5 that, when receiving an instruction of the touch operation mode, acquires a command from the touch-command conversion unit 3 and converts it into an application execution command, and, when receiving an instruction of the voice operation mode, acquires the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; the application execution unit 11 that executes processing according to the application execution command; and the output control unit 13 that controls an output unit, such as the display 108, that outputs the execution result of the application execution unit 11.
- For this reason, since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on an input device, such as a joystick or rotary dial, for selecting an item displayed on the display, the normal touch operation and the related voice operation can be switched and input with a single input device.
- While the association between buttons and functions is fixed in Embodiments 4 and 5, in Embodiment 7 the association between selectable items and functions is variable, so that the touch operation mode and the voice operation mode can be switched and input on various screens. Furthermore, voice input can be performed in the voice operation mode at whatever level of the menu hierarchy the user has descended to.
- Embodiment 8.
- The in-vehicle information device according to Embodiment 8 has the same configuration as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, and is therefore described below with reference to FIG. 1, FIG. 12, and FIG. 20.
- FIG. 33 shows a configuration example of the display 108 and the touch pad 110 included in (or connected to) the in-vehicle information device. It is assumed that the display 108 and the touch pad 110 are installed around the steering wheel 107 of the vehicle. Note that the display 108 and the touch pad 110 may be arranged at any position.
- If the touch pad 110 can detect the pressing pressure, the input method may be determined by whether the touch pad 110 is touched or pressed, or by whether it is pressed halfway or fully. Even when the pressure cannot be detected, the input method can be determined by differences in touch operations such as tracing, tapping, and long pressing. In this example, the touch operation mode is determined when the touch pad is pressed strongly, and the voice operation mode is determined when it is pressed long.
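The gesture-based determination for a touchpad without pressure sensing might be sketched as follows. The gesture-to-mode mapping follows the example in this paragraph; the gesture names themselves are illustrative.

```python
# Illustrative mapping of detected touchpad gestures to input methods:
# tracing only moves the cursor, a strong press selects the touch
# operation mode, and a long press selects the voice operation mode.

def touchpad_mode(gesture):
    """Return 'touch', 'voice', or None (cursor movement only)."""
    return {"trace": None,
            "strong_press": "touch",
            "long_press": "voice"}.get(gesture)
```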
- the user traces the touch pad 110, aligns the cursor with “facility name”, and presses it strongly.
- the touch input detection unit 1 detects a strong press of the touch pad 110 and outputs a touch signal including position information of the strongly pressed cursor.
- the touch-command conversion unit 3 creates a command (facility name, facility name) based on the cursor position information.
- the input method determination unit 2 determines that the input method is the touch operation mode based on the touch signal, and the state transition control unit 5 that has received this determination converts the command (facility name, facility name) into an application execution command and outputs it to the application execution unit 11.
- the application execution unit 11 displays a facility name input screen on the display 108 based on the application execution command.
- the input method determination unit 2 determines that the input method is the voice operation mode based on the touch signal, and the input switching control unit 4 outputs the item name (facility name) of the command to the voice recognition dictionary switching unit 8 to switch to the voice recognition dictionary related to facility name search.
- the voice recognition unit 9 performs voice recognition processing using the voice recognition dictionary related to the facility name search, and detects a voice operation input that the user speaks following the touch operation on the touch pad 110.
- the voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 searches for the facility name corresponding to the item value.
- a sound effect or a display indicating the switch to the voice operation mode (for example, a voice recognition mark as shown in FIG. 33) may be output, and voice guidance that prompts the user to speak (for example, a voice saying "Please say the facility name") may be output or displayed as text.
- as described above, the in-vehicle information device includes: the touch input detection unit 1 that detects the touch operation based on the output signal of the touch pad 110; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including the item name for executing the process (one or both of a transition destination screen and an application execution function) being selected with the touch pad 110; the voice recognition unit 9 that recognizes a user utterance made substantially simultaneously with or following the touch operation, using a voice recognition dictionary containing the voice recognition keywords associated with the processes; the voice-command conversion unit 10 that generates a command for executing the process corresponding to the voice recognition result; the input method determination unit 2 that determines whether the state of the touch operation indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; the state transition control unit 5 that, upon receiving a touch operation mode instruction from the input switching control unit 4, acquires the command (item name, item value) from the touch-command conversion unit 3 and converts it into an application execution command, and, upon being notified of the voice operation mode, acquires the item name from the input switching control unit 4 and the item value from the voice-command conversion unit 10 and converts them into an application execution command; the application execution unit 11 that executes processing according to the application execution command; and the output control unit 13 that controls an output unit, such as the display 108, that outputs the execution result of the application execution unit 11. Since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on the touch pad for selecting an item displayed on the display, the normal touch operation and the related voice operation can be switched and input with a single touch pad.
- whereas previously the hard buttons and functions were fixed, in the eighth embodiment the association between the hard buttons and the functions is variable, so the touch operation mode and the voice operation mode can be switched and input on various screens. Furthermore, voice input can be performed in the voice operation mode at whatever level of the screen hierarchy the user has descended to.
- Embodiment 9.
- in the ninth embodiment, the information device shown in FIG. 1, FIG. 12, or FIG. 20 is applied to a user interface device of a home appliance or the like, namely the television 111 and the remote control 112.
- FIG. 34 is a diagram showing a configuration example of the television 111 with a recording function and the remote control 112 for operating it.
- the touch operation mode is determined when the "play" hard button 113 or the "reservation" hard button 114 of the remote control 112 is pressed for a short time, and the voice operation mode is determined when the button is pressed for a long time.
- since the determination of the input method is substantially the same as in Embodiments 4 to 8 above, a description thereof is omitted.
- when the "play" hard button 113 is pressed for a short time, the remote control 112 switches the input to the touch operation mode and outputs to the television 111 an application execution command (display a list of recorded-program playlists) corresponding to the command (play, play). Based on the application execution command, the television 111 displays the list of recorded-program playlists on its display.
- when the "play" hard button 113 is pressed for a long time, the remote control 112 switches the input to the voice operation mode, performs voice recognition processing using the voice recognition dictionary related to the command item name (play) (containing, for example, words such as the program names included in the playlists), and outputs to the television 111 an application execution command (play the program of the command item value) corresponding to the command (play, Sky Wars).
- the television 111 selects and reproduces “Sky Wars” from the recorded programs and displays it on the display.
- the user interface device applied to the television 111 and the remote control 112 may be configured as shown in FIG. 20 to output a sound effect or the like indicating the switch to the voice operation mode, or to output voice guidance that prompts the user to speak (for example, as shown in FIG. 34, "What do you want to play?" or "Please tell me the program you want to play").
- alternatively, the remote control 112 may notify the television 111, and a display indicating that the mode has been switched to the voice operation mode (for example, a voice recognition mark as shown in FIG. 33) and a sentence such as "Please tell me the program you want to play" may be output on the display of the television 111.
- when the "reservation" hard button 114 is pressed for a short time, the remote control 112 switches the input to the touch operation mode and outputs to the television 111 an application execution command (display the program reservation table) corresponding to the command (reservation, reservation).
- the television 111 displays a program reservation table on the display based on the application execution command.
- when the "reservation" hard button 114 is pressed for a long time, the remote control 112 switches the input to the voice operation mode, performs voice recognition processing using the voice recognition dictionary related to the command item name (reservation) (containing, for example, words such as the program names included in the program reservation table), and outputs to the television 111 an application execution command (set a recording reservation for the program of the command item value) corresponding to the command (reservation, Sky Wars).
- the television 111 sets a program recording reservation based on the application execution command.
- note that the utterance is not limited to a program name such as "Sky Wars", and may be information necessary for the reservation, such as "from 8:00 pm, channel 2".
- the user interface device applied to the television 111 and the remote control 112 may be configured as shown in FIG. 20 to output a sound effect or the like indicating the switch to the voice operation mode, or to output voice guidance that prompts the user to speak (for example, "What do you want to reserve?" or "Please tell me the program you want to reserve").
- alternatively, a notification may be sent from the remote control 112 to the television 111, and a display indicating that the mode has been switched to the voice operation mode (for example, a voice recognition mark as shown in FIG. 33) and a sentence such as "Please tell me the program you want to reserve" may be output on the display of the television 111.
- after the reservation is set, voice guidance or a display such as "The reservation for Sky Wars has been set" may be output.
- FIG. 35 is a diagram illustrating a configuration example of the rice cooker 120.
- when switched to the touch operation mode, the rice cooker 120 executes an application execution instruction (rice-cooking reservation operation) corresponding to the command (reservation, reservation), and the user makes the reservation setting using the display on the display 121 and the "setting" hard button 123.
- when switched to the voice operation mode, the rice cooker 120 performs voice recognition processing using the voice recognition dictionary related to the command item name (reservation), and sets the reservation based on the user's utterance (for example, XX hour XX minutes).
- the user interface device applied to the rice cooker 120 may be configured as shown in FIG. 20 to output a sound effect or the like indicating that the mode has been switched to the voice operation mode, or to output voice guidance that prompts the user to speak (for example, "What time do you want to set the reservation?"). Furthermore, after the reservation setting is completed, voice guidance or a display such as "A reservation has been set for XX hour XX minutes" may be output.
- FIG. 36 is a diagram illustrating a configuration example of the microwave oven 130.
- when switched to the touch operation mode, the microwave oven 130 displays the cooking selection menu screen on the display 131 according to an application execution command (display the cooking selection menu screen) corresponding to the command (cooking, cooking).
- when switched to the voice operation mode, the microwave oven 130 performs voice recognition processing using the voice recognition dictionary related to the command item name (cooking), and, based on an application execution command whose command item value is the user's utterance (for example, chawanmushi (steamed egg custard)), sets the output power and time of the microwave oven 130 to values suitable for that dish.
- similarly, when the user presses the "warm" hard button and speaks "hot rice", "milk", or the like, or presses the "baked food" hard button and speaks "dried horse mackerel" or the like, the output power and time suitable for the spoken menu can be set.
- the user interface device applied to the microwave oven 130 may be configured as shown in FIG. 20 to output a sound effect or the like indicating that the mode has been switched to the voice operation mode, or to output voice guidance that prompts the user to speak (for example, "What do you want to cook?"). A display indicating that the mode has been switched to the voice operation mode (for example, a voice recognition mark as shown in FIG. 33) and a sentence such as "What do you want to cook?" may also be output on the display 131. In addition, when the user speaks "chawanmushi", voice guidance or a display saying "Chawanmushi will be cooked" may be output, and when preparation for cooking is complete, voice guidance or a display saying "Please press the start button" may be output.
- as described above, the user interface device of a household electrical appliance or the like according to the ninth embodiment includes: the touch input detection unit 1 that detects the touch operation based on the output signal of a hard button; the touch-command conversion unit 3 that, based on the detection result of the touch input detection unit 1, generates a command (item name, item value) including the item name for executing the process (one or both of a transition destination screen and an application execution function) corresponding to the touched hard button; the voice recognition unit 9 that recognizes a user utterance made substantially simultaneously with or following the touch operation, using a voice recognition dictionary containing the voice recognition keywords associated with the processes; the voice-command conversion unit 10 that generates a command for executing the process corresponding to the voice recognition result; the input method determination unit 2 that determines whether the touch operation state indicates the touch operation mode or the voice operation mode; the input switching control unit 4 that switches between the touch operation mode and the voice operation mode according to the determination result of the input method determination unit 2; and the state transition control unit 5 that converts the commands into application execution commands according to the mode instruction received from the input switching control unit 4. Since the touch operation mode or the voice operation mode is determined according to the state of the touch operation on the hard button, the normal touch operation and the voice operation related to the hard button can be switched and input with one hard button.
- the same effects as those of the first to third embodiments are obtained.
- the information device or user interface device described above has been applied to the vehicle, the television 111, the remote control 112, the rice cooker 120, and the microwave oven 130, but the present invention is not limited to these devices; it may also be applied to elevator information boards, digital information boards of large shopping malls, parking-position information boards of large parking lots, station ticket vending machines, and the like.
- for example, on the digital information board of a large shopping mall, if the user speaks the name of the store or product they are looking for while performing the touch action for voice operation on the input device, guidance to it can be displayed (voice operation mode).
- alternatively, the user can short-press the input device to display a menu screen, and operate the screen to find out what kinds of stores and products are available (touch operation mode).
- conventionally, the user checks the route map posted above the ticket vending machine, confirms the fare to the target station, and then purchases a ticket by pressing the fare button on the ticket vending machine, which is cumbersome. Therefore, if a ticket vending machine equipped with such an input device is installed, and the user speaks the target station name while pressing and holding the button labeled "Destination" on the ticket vending machine, the fare can be displayed and the ticket can be purchased (voice operation mode).
- alternatively, the user can press the "destination" button for a short time to display a screen for searching for the target station name, or display the normal fare buttons, and purchase a ticket (touch operation mode).
- the “destination” button may be a button displayed on the touch display or a hard button.
- Embodiment 10.
- in the above embodiments, the two modes of the touch operation mode and the voice operation mode are switched according to the state of the touch operation on one input device such as a touch display or a hard button, but it is also possible to switch among three or more modes. That is, n types of modes are switched according to n types of touch operations on one input device.
- in Embodiment 10, an information device that switches among three modes using one button or one input device will be described.
- examples of the mode switching include a combination of a touch operation mode as the first mode, a voice operation mode 1 as the second mode, and a voice operation mode 2 as the third mode, and a combination of a touch operation mode 1 as the first mode, a touch operation mode 2 as the second mode, and a voice operation mode as the third mode.
- as the input device, for example, a touch display, a touch pad, a hard button, an easy selector, or the like can be used.
- the easy selector is an input device that can perform three operations: pressing the lever, tilting it up (or right), and tilting it down (or left).
- touch actions are determined in advance for each of the first to third modes. When the input device is a touch display or a touch pad, which of the first to third modes the user desires, that is, the input method, is determined by whether the input device is short-pressed, long-pressed, or double-tapped, as in Example 1.
- when the input device is a hard button, the input method may be determined by whether the button is short-pressed, long-pressed, or double-clicked as in Example 2, or by whether it is half-pressed, fully pressed with a short press, or fully pressed (or half-pressed) with a long press as in Example 3. When the input device is an easy selector, the determination is made by whether the lever is pushed, tilted up, or tilted down, as in Example 4.
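The per-device correspondence between touch actions and modes in these examples can be sketched as a lookup table. The action names and the table layout are illustrative assumptions; only the pairings follow the examples in the text.

```python
# Sketch: mapping n kinds of touch actions on one input device to n operation
# modes. The table contents are an illustrative assumption based on the
# examples described in the text.
MODE_TABLE = {
    "touch_display": {"short_press": "touch", "long_press": "voice_1", "double_tap": "voice_2"},
    "touch_pad":     {"short_press": "touch", "long_press": "voice_1", "double_tap": "voice_2"},
    "hard_button":   {"short_press": "touch", "long_press": "voice_1", "double_click": "voice_2"},
    "easy_selector": {"push": "touch", "tilt_up": "voice_1", "tilt_down": "voice_2"},
}

def determine_mode(device: str, action: str) -> str:
    """Return the operation mode for a given device and detected touch action."""
    return MODE_TABLE[device][action]
```

Adding an nth action to a device's row is all that is needed to support an nth mode, which is the generalization Embodiment 10 describes.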
- FIG. 38A is a diagram illustrating a configuration example of the hard buttons 100 to 105 and the display 108 that are included in the in-vehicle information device (or connected to the in-vehicle information device).
- the same or corresponding parts as those in FIGS. 27 to 31 are designated by the same reference numerals and description thereof is omitted.
- FIG. 38B shows an example of screen transition displayed on the display 108 of FIG. 38A.
- hard buttons 100 to 105 are used as input devices.
- the touch operation mode is determined when the hard buttons 100 to 105 are pressed for a short time, the voice operation mode 1 is determined when they are pressed for a long time, and the voice operation mode 2 is determined when they are double-clicked.
- the functions executed in conjunction with pressing the hard buttons 100 to 102 vary depending on the transition screen, whereas the functions of the hard buttons 103 to 105 are fixed.
- the input method determination unit 2 determines, based on the touch signal, whether the mode is the touch operation mode, the voice operation mode 1, or the voice operation mode 2, and notifies the state transition control unit 5 via the input switching control unit 4.
- the state transition table storage unit 6 stores a state transition table that defines the correspondence between operation modes, commands (item names, item values), and application execution instructions. Based on this state transition table, the state transition control unit 5 converts the combination of the operation mode determination result and the command notified from the touch-command conversion unit 3 or the voice-command conversion unit 10 into an application execution instruction.
- at this time, the content of the resulting application execution command differs between the voice operation mode 1 and the voice operation mode 2.
- for example, when the command item name is NAVI, in the voice operation mode 1 the command is converted into an application execution command that displays the detailed items of the NAVI function on the screen and accepts an utterance related to those detailed items, whereas in the voice operation mode 2 it is converted into an application execution command that accepts utterances related to the entire NAVI function.
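The mode-dependent conversion can be sketched as follows; the returned command strings and dict format are illustrative assumptions, not the patent's actual command representation.

```python
# Sketch of converting (operation mode, command item name) into an application
# execution command. Mode 1 shows a detailed-item menu and accepts utterances
# about those items; mode 2 accepts utterances about the whole function.
# The command strings are illustrative assumptions.
def convert_voice_mode_command(mode: str, item_name: str) -> dict:
    if mode == "voice_1":
        return {"display": f"{item_name} voice menu screen",
                "accept": f"utterances for {item_name} detailed items"}
    if mode == "voice_2":
        return {"display": None,  # no menu: recognition starts immediately
                "accept": f"utterances for entire {item_name} function"}
    raise ValueError(f"unexpected mode: {mode}")
```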
- in the touch operation mode, when the "NAVI" hard button 105 is pressed for a short time, the touch input detection unit 1 detects this short press, and the touch-command conversion unit 3 generates a command (NAVI, NAVI).
- the input method determination unit 2 determines that the operation mode is the touch operation mode, and the state transition control unit 5 that receives this determination converts the command (NAVI, NAVI) into an application execution command and outputs the command to the application execution unit 11.
- the application execution unit 11 displays the NAVI menu screen P100 on the display 108 based on the application execution command. The NAVI menu screen P100 includes the "1. Destination search" function executed in conjunction with pressing the "1" hard button 100, the "2. Congestion information" display function executed in conjunction with pressing the "2" hard button 101, and the "3. Navigation setting" function executed in conjunction with pressing the "3" hard button 102.
- when the "NAVI" hard button 105 is pressed for a long time, the touch input detection unit 1 detects this long press, and the touch-command conversion unit 3 generates a command (NAVI, NAVI). Further, the input method determination unit 2 determines that the mode is the voice operation mode 1, and notifies the state transition control unit 5 of the command item name (NAVI) and the voice operation mode 1 via the input switching control unit 4.
- the state transition control unit 5 converts the combination of the voice operation mode 1 and the command into an application execution command for displaying the NAVI voice-operation-dedicated menu screen P101.
- the application execution unit 11 displays the voice operation dedicated menu screen P101 on the display 108 based on the application execution command.
- when the hard button 100 is pressed, the touch input detection unit 1 detects this press, and the touch-command conversion unit 3 outputs a command (search by facility name). The voice recognition dictionary switching unit 8 then switches to the voice recognition dictionary related to the command item name (search by facility name), and the voice recognition unit 9 performs voice recognition processing of the user utterance using that voice recognition dictionary, detecting the voice operation input the user speaks following the press of the hard button 100.
- the voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, and the application execution unit 11 searches for the facility name corresponding to the item value.
- in the screen transition from the voice operation dedicated menu screen P101 to the voice operation dedicated menu screen P102, a sound effect or a display (a voice recognition mark or the like) indicating that the mode has been switched to the voice operation mode may be output, and voice guidance that prompts the user to speak (for example, a voice saying "Please say the facility name") may be output or displayed as text.
- in the voice operation mode 2, voice recognition processing relating to the entire NAVI function is directly activated so that voice operation can be started immediately.
- when the "NAVI" hard button 105 is double-clicked, the touch input detection unit 1 detects this double-click, and the touch-command conversion unit 3 generates a command (NAVI, NAVI).
- the input method determination unit 2 determines the voice operation mode 2 and notifies the state transition control unit 5 via the input switching control unit 4 that the command item name (NAVI) and the voice operation mode 2 are set.
- the state transition control unit 5 stands by until a command item value is input from the voice-command conversion unit 10.
- the voice recognition dictionary switching unit 8 switches to the voice recognition dictionary related to NAVI, and the voice recognition unit 9 performs voice recognition processing of the user utterance.
- the voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5; the state transition control unit 5 converts it into the application execution instruction of the NAVI function corresponding to the item value, which is executed by the application execution unit 11.
- in the voice operation mode 1, specific function items that can be operated by voice recognition are displayed, as on the voice operation dedicated menu screen P101, so the utterable content can be suggested to the user. This naturally restricts what the user says and suppresses utterances of words not included in the voice recognition dictionary. Furthermore, since the utterable content is displayed on the screen, the user's anxiety of not knowing what to say can be reduced. In addition, since the user's speech can be guided by voice guidance with specific content (such as "Please say the facility name"), voice operation becomes easier for the user.
- in the voice operation mode 2, voice recognition can be started directly by double-clicking the "NAVI" hard button 105, so voice operation can be started immediately. Therefore, for a user who has become accustomed to voice operation and has learned the utterable content, the operation can be completed with fewer operation steps and less operation time. Further, a user who knows voice recognition keywords other than the detailed function items displayed on the voice-operation-dedicated menu screen P101 of the voice operation mode 1 can execute more functions in the voice operation mode 2 than in the voice operation mode 1.
- as described above, a single input device can be used to switch among a total of three operation modes: a normal touch operation mode and two voice operation modes (for example, a simple mode and an expert mode). Although a description is omitted, one input device may likewise be used to switch among a total of three operation modes consisting of two touch operation modes and one voice operation mode.
- as described above, the in-vehicle information device according to Embodiment 10 is configured to switch among n types of functions according to the state of the touch operation, based on the output signal from an input device on which the user can perform n types of touch operations. Therefore, n types of operation modes can be switched and operated with one input device.
- as described above, the user interface device according to the present invention reduces the number of operation steps and the operation time by combining touch panel operation and voice operation, and is therefore suitable for use as an in-vehicle user interface device or the like.
Description
Operation via a separate device such as a joystick, rotary dial, or remote control requires the user to operate the device to move a cursor onto the buttons displayed on the screen and to repeat screen transitions by selecting or confirming until the target function is executed. With this method, the cursor must be aligned with the target button, so it is less intuitive than touch display operation.
These operation methods are also easy to understand, because the user can operate simply by selecting the buttons displayed on the screen, but they require many operation steps and take a long operation time.
Also, for example, in the navigation device according to Patent Document 2, when searching for a place name or road name by voice recognition, the user enters and confirms the first character or character string of the place name or road name on a keyboard on the touch display, and then speaks.
On the other hand, voice operation requires the user to memorize a predetermined, particular operation method and voice recognition keywords and to speak exactly accordingly, so it has the problem of being difficult to operate. In addition, even after pressing the speech button, users often "do not know what to say", which is another obstacle to operation.
Embodiment 1.
As shown in FIG. 1, the in-vehicle information device comprises a touch input detection unit 1, an input method determination unit 2, a touch-command conversion unit 3, an input switching control unit 4, a state transition control unit 5, a state transition table storage unit 6, a voice recognition dictionary DB 7, a voice recognition dictionary switching unit 8, a voice recognition unit 9, a voice-command conversion unit 10, an application execution unit 11, a data storage unit 12, and an output control unit 13. This in-vehicle information device connects to input/output devices (not shown) such as a touch display in which a touch panel and a display are integrated, a microphone, and a speaker, inputs and outputs information through them, and provides a user interface that performs the desired screen display and function execution according to the user's operation.
Based on the detection result of the touch input detection unit 1, the input method determination unit 2 determines whether the user is attempting to input by touch operation (touch operation mode) or by voice operation (voice operation mode).
The touch-command conversion unit 3 converts the button touched by the user, as detected by the touch input detection unit 1, into a command. As described in detail later, this command includes an item name and an item value; the command (item name, item value) is passed to the state transition control unit 5, and the item name is passed to the input switching control unit 4. This item name constitutes the first command.
The input switching control unit 4 notifies the state transition control unit 5 whether the user desires the touch operation mode or the voice operation mode according to the input method determination result (touch operation or voice operation) of the input method determination unit 2, and switches the processing of the state transition control unit 5 between the touch operation mode and the voice operation mode. Further, in the voice operation mode, the input switching control unit 4 passes the item name input from the touch-command conversion unit 3 (that is, information indicating the button touched by the user) to the state transition control unit 5 and the voice recognition dictionary switching unit 8.
When the voice operation mode and a command (item name) are notified from the input switching control unit 4, the state transition control unit 5 waits until a command (item value) is input from the voice-command conversion unit 10. When the command (item value) is input, the state transition control unit 5 converts the command combining the item name and the item value into an application execution instruction based on the state transition table stored in the state transition table storage unit 6, and passes it to the application execution unit 11.
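The waiting behavior described here, where the item name arrives first and the command is completed only when the recognized item value arrives, can be sketched as follows. The class shape and table format are assumptions made for illustration.

```python
# Sketch of the voice operation mode flow in the state transition control:
# the item name (from the touched button) is held until the item value
# (from voice recognition) arrives, then the pair is looked up in the
# state transition table. Table format is an illustrative assumption.
class StateTransitionControl:
    def __init__(self, transition_table):
        # transition_table: (current state, item name, item value) -> instruction
        self.transition_table = transition_table
        self.state = "P01"          # current screen (application list screen)
        self.pending_item = None    # item name waiting for its item value

    def on_voice_mode(self, item_name):
        """Voice operation mode notified: remember the item name and wait."""
        self.pending_item = item_name

    def on_item_value(self, item_value):
        """Item value arrived from voice recognition: complete the command."""
        key = (self.state, self.pending_item, item_value)
        self.pending_item = None
        return self.transition_table[key]
```

A touch-mode input would bypass the wait and supply both halves of the key at once, which is the difference between the two branches the text describes.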
The voice recognition dictionary switching unit 8 notifies the voice recognition unit 9 of the command (item name) input from the input switching control unit 4, and causes it to switch to the voice recognition dictionary containing the voice recognition keywords associated with that item name.
Among the voice recognition dictionaries stored in the voice recognition dictionary DB 7, the voice recognition unit 9 refers to the voice recognition dictionary consisting of the voice recognition keyword group associated with the command (item name) notified from the voice recognition dictionary switching unit 8, performs voice recognition processing on the voice signal from the microphone, converts it into a character string or the like, and outputs it to the voice-command conversion unit 10.
The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and passes it to the state transition control unit 5. This item value constitutes the second command.
The data storage unit 12 stores various data, such as: data for the navigation (hereinafter, navi) function (including a map database) required when the application execution unit 11 performs a screen transition or executes an application function; data for the audio-visual (hereinafter, AV) function (including music data and video data); data for controlling vehicle equipment such as an air conditioner mounted on the vehicle; data for telephone functions such as hands-free calling (including a phone book); and information acquired from outside by the application execution unit 11 via the network 14 (including traffic congestion information, URLs of specific websites, and the like) to be provided to the user when an application function is executed.
The output control unit 13 displays the execution result of the application execution unit 11 on the touch display, or outputs it as sound from the speaker.
FIG. 2 is a flowchart showing the operation of the in-vehicle information device according to Embodiment 1. FIG. 3 shows an example of screen transitions by the in-vehicle information device; here, it is assumed that, in its initial state, the in-vehicle information device displays a list of the functions executable by the application execution unit 11 as buttons on the touch display (application list screen P01). FIG. 3 is an example of the screen transitions of the AV function expanding from the "AV" button of the application list screen P01, and the application list screen P01 is the top-level screen (with the functions associated with its buttons). One level below the application list screen P01 is the AV source list screen P11 associated with the "AV" button (with the functions associated with its buttons). One level further below the AV source list screen P11 are the FM station list screen P12, the CD screen P13, the traffic information radio screen P14, and the MP3 screen P15 associated with the respective buttons of the AV source list screen P11, together with the functions associated with the buttons of each screen.
Hereinafter, a case where the screen moves to the level one below is simply called a "transition" — for example, when the screen changes from the application list screen P01 to the AV source list screen P11. On the other hand, a case where the screen moves to a level one or more levels further down, or to a different function, is called a "jump transition" — for example, when the screen changes from the application list screen P01 to the FM station list screen P12, or from the AV source list screen P11 to a navi function screen.
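The distinction can be sketched with a parent-link table mirroring the hierarchy of FIG. 3 (screen names abbreviated to their labels; the table itself is an illustrative assumption):

```python
# Sketch: classifying a screen change as a plain "transition" (exactly one
# level down) or a "jump transition" (skipping levels or crossing functions).
# PARENT maps each screen to the screen one level above it, per FIG. 3.
PARENT = {"P11": "P01", "P12": "P11", "P13": "P11", "P14": "P11", "P15": "P11"}

def classify(src: str, dst: str) -> str:
    """Classify the change from screen `src` to screen `dst`."""
    if PARENT.get(dst) == src:
        return "transition"        # one level down in the hierarchy
    return "jump transition"       # skips levels or moves to another function
```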
In step ST121, the input method determination unit 2 receives the input of a touch signal from the touch input detection unit 1, and in the following step ST122 it determines the input method based on the touch signal.
As shown in FIG. 5, it is assumed that touch actions are determined in advance for each of the touch operation and the voice operation. In Example 1, when the user wants to execute an application function in the touch operation mode, the user presses the button for that application function on the touch display; when the user wants to execute it in the voice operation mode, the user touches that button for a certain period of time. Since the output signal of the touch display differs depending on the touch action, the input method determination unit 2 can determine which touch action was performed according to the touch signal.
Alternatively, for example, the input method (whether the user desires touch operation or voice operation) may be determined by whether the button is fully pressed or half-pressed as in Example 2, by whether the button is single-tapped or double-tapped as in Example 3, or by whether the button is short-pressed or long-pressed as in Example 4. When the touch display cannot physically distinguish a full press from a half press, processing may be performed such as regarding the press as a full press when the pressing pressure is equal to or greater than a threshold, and as a half press when it is below the threshold.
In this way, by using two types of touch actions on one button, it is possible to determine whether input to that button is being attempted by touch operation or by voice operation.
In step ST141, the state transition control unit 5 acquires the command (item name, item value) of the button touched during the input method determination processing from the touch-command conversion unit 3, and in the following step ST142 it converts the acquired command (item name, item value) into an application execution instruction based on the state transition table stored in the state transition table storage unit 6.
The state transition table consists of three pieces of information: "current state", "command", and "application execution instruction". The current state is the screen displayed on the touch display at the time of the touch detection in step ST100.
As described above, the item name of a command has the same name as the button name displayed on the screen. For example, the item name of the "AV" button on the application list screen P01 is "AV".
On the other hand, in the voice operation mode, the item value is the voice recognition result, that is, the voice recognition keyword of the function the user wants to execute. When the user touches the "AV" button and utters the button name "AV", the item name and item value are the same, giving the command (AV, AV). When the user touches the "AV" button and utters a different voice recognition keyword "FM", the item name and item value differ, giving the command (AV, FM).
The current state is the application list screen P01 shown in FIG. 3. According to the state transition table of FIG. 7A, the command (AV, AV) is associated with the "AV" button of this screen, and the transition destination screen "P11 (AV source list screen)" and the application execution function "- (none)" are set as the corresponding application execution instruction. Therefore, the state transition control unit 5 converts the command (AV, AV) input from the touch-command conversion unit 3 into the application execution instruction "transition to the AV source list screen P11".
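This lookup can be sketched as a table keyed by (current state, item name, item value); the single row shown and the returned dict format are illustrative assumptions based on the example in the text.

```python
# Sketch of one row of the state transition table of FIG. 7A as data:
# (current state, item name, item value) -> (destination screen, function).
# Only the row discussed in the text is shown; None means "- (none)".
STATE_TRANSITION_TABLE = {
    ("P01", "AV", "AV"): ("P11", None),   # transition to AV source list screen
}

def to_execution_instruction(state: str, item_name: str, item_value: str) -> dict:
    """Convert a (state, command) combination into an application execution instruction."""
    dest, func = STATE_TRANSITION_TABLE[(state, item_name, item_value)]
    return {"transition_to": dest, "execute": func}
```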
In step ST151, the voice recognition dictionary switching unit 8 outputs, to the voice recognition unit 9, an instruction to switch to the voice recognition dictionary related to the item name input from the input switching control unit 4 (that is, the button touched by the user).
FIG. 10 is a diagram explaining the voice recognition dictionaries. For example, when the user operates a button displayed on the touch display, the voice recognition dictionary to switch to contains: (1) the voice recognition keyword of the touched button; (2) all the voice recognition keywords in the screens below the touched button; and (3) voice recognition keywords that are not below the touched button but are related to it.
(2) are voice recognition keywords that enable a jump transition to a level below the touched button, or execution of a function on the screen reached by that jump transition.
(3) are voice recognition keywords that enable a jump transition to the screen of a related function not below the touched button, or execution of a function on the screen reached by that jump transition.
Note that, in the cases of button operation and list item button operation, the voice recognition keywords of (3) are not essential, and need not be included if there is nothing related.
The current state is the application list screen P01 shown in FIG. 3. The item name (AV) of the command (AV, AV) of the "AV" button whose touch was detected in the input method determination processing is input to the voice recognition dictionary switching unit 8. Therefore, the voice recognition dictionary switching unit 8 issues an instruction to switch to the voice recognition dictionary related to "AV" in the voice recognition dictionary DB 7.
The voice recognition dictionary related to "AV" is as follows.
(1) "AV" as the voice recognition keyword of the touched button.
(2) As all the voice recognition keywords in the screens below the touched button: "FM", "AM", "Traffic information", "CD", "MP3", and "TV"; and, as the voice recognition keywords in the screen (P12) below the "FM" button: "Station A", "Station B", "Station C", and so on. For the buttons other than the "FM" button as well, the voice recognition keywords in the respective lower screens (P13, P14, P15, ...) are included.
(3) Voice recognition keywords that are not below the touched button but are related to it, such as the voice recognition keywords in the screens below the "Information" button. By including the information-related voice recognition keyword "program guide", for example, it becomes possible to display a guide of the radio programs that can currently be listened to or the television programs that can currently be watched.
The voice recognition dictionary related to "FM" is as follows.
(1) "FM" as the voice recognition keyword of the touched button.
(2) As all the voice recognition keywords in the screens below the touched button: "Station A", "Station B", "Station C", and so on.
(3) Voice recognition keywords that are not below the touched button but are related to it, such as the voice recognition keywords in the screens below the "Information" button. By including the information-related voice recognition keyword "homepage", for example, the homepage of the currently tuned broadcast station can be displayed, and the user can view details of the program being broadcast as well as the title and artist of the song being played.
従って、より絞り込まれた音声認識辞書に切り換えることにより、音声認識率の向上が期待できる。
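The three-part dictionary described above can be assembled mechanically from the screen hierarchy. The sketch below is a toy version of that assembly; the hierarchy, the related-keyword table, and all names are invented stand-ins for FIG. 3 and FIG. 10:

```python
# Hypothetical screen hierarchy: each button maps to the keywords of the
# screen directly below it. Leaf buttons map to an empty list.
HIERARCHY = {
    "AV": ["FM", "AM", "CD"],
    "FM": ["station A", "station B", "station C"],
    "AM": [],
    "CD": [],
}
# (3) Related keywords not in the button's own subtree (optional).
RELATED = {"AV": ["program guide"], "FM": ["homepage"]}

def build_dictionary(button):
    """Collect (1) the button's own keyword, (2) every keyword in the
    screens below it, and (3) any related keywords."""
    words = [button]                              # (1)
    stack = list(HIERARCHY.get(button, []))
    while stack:                                  # (2) walk the subtree
        word = stack.pop()
        words.append(word)
        stack.extend(HIERARCHY.get(word, []))
    words.extend(RELATED.get(button, []))         # (3)
    return set(words)
```

Restricting recognition to `build_dictionary("FM")` rather than the full DB 7 is what the passage credits with the improved recognition rate.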
In step ST154, the state transition control unit 5 converts the command consisting of the item name input from the input switching control unit 4 and the item value input from the voice-command conversion unit 10 into an application execution command, based on the state transition table stored in the state transition table storage unit 6.
Suppose the current state is the application list screen P01 shown in FIG. 3. If the user utters the voice recognition keyword "AV" while touching the "AV" button for a certain time, the command obtained by the state transition control unit 5 is (AV, AV). As in the case of touch operation input, the state transition control unit 5 therefore converts the command (AV, AV) into the application execution command "transition to AV source list screen P11" based on the state transition table of FIG. 7A.
If the user wants to tune to FM station A using touch operation input, the user presses the "AV" button on the application list screen P01 of FIG. 3 to transition to the AV source list screen P11, then presses the "FM" button on the AV source list screen P11 to transition to the FM station list screen P12, and then presses the "station A" button on the FM station list screen P12 to tune to station A.
At this time, following the flowchart of FIG. 2, the in-vehicle information device detects a sustained touch on the "AV" button with the touch input detection unit 1, determines voice operation with the input method determination unit 2, and notifies the state transition control unit 5 via the input switching control unit 4 that the input is a voice operation. The touch-command conversion unit 3 converts the touch signal representing the touch on the "AV" button into the item name (AV), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of that item name. The voice recognition unit 9 then switches to the dictionary indicated by the voice recognition dictionary switching unit 8 and recognizes the utterance "station A", and the voice-command conversion unit 10 converts the recognition result into the item value (station A) and notifies the state transition control unit 5. The state transition control unit 5 converts the command (AV, station A) into the application execution command "transition to FM station list screen P12 and tune to station A" based on the state transition table of FIG. 7A. The application execution unit 11 then obtains the data constituting the FM station list screen P12 from the AV function data group in the data storage unit 12 to generate the screen, and also obtains commands for controlling the car audio from that data group; the output control unit 13 displays the screen on the touch display and controls the car audio to tune to station A.
At this time, following the flowchart of FIG. 2, the in-vehicle information device detects a sustained touch on the "Phone" button with the touch input detection unit 1, determines voice operation with the input method determination unit 2, the touch-command conversion unit 3 converts the touch signal representing the touch on the "Phone" button into the item name (Phone), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of that item name. The voice recognition unit 9 then switches to the dictionary indicated by the voice recognition dictionary switching unit 8 and recognizes the utterance "Yamada ○○", and the voice-command conversion unit 10 converts the recognition result into the item value (Yamada ○○) and notifies the state transition control unit 5. The state transition control unit 5 converts the command (Phone, Yamada ○○) into the application execution command "transition to phonebook screen P23 and display the phonebook entry for Yamada ○○" based on the state transition table of FIG. 7A. The application execution unit 11 then obtains the data constituting the phonebook screen P23 and the telephone number data of Yamada ○○ from the phone function data group in the data storage unit 12 to generate the screen, and the output control unit 13 displays the screen on the touch display.
On the other hand, using voice operation input, the user utters "0333334444" while touching the "Phone" button on the application list screen P01 of FIG. 8 for a certain time, causing the number entry call screen P25 to be displayed.
Thus, while touch operation input requires 13 steps to display the number entry call screen P25, voice operation input can accomplish this in as little as one step.
For example, if the user wants to find convenience stores near the current location using touch operation input, the user presses the "Navi" button on the application list screen P01 of FIG. 11A to transition to the navigation screen (current location) P31. Next, the user presses the "Menu" button on the navigation screen (current location) P31 to transition to the navigation menu screen P32, and then presses the "Find nearby facilities" button on the navigation menu screen P32 to transition to the nearby facility genre selection screen 1 P34. Next, the user scrolls the list on the nearby facility genre selection screen 1 P34 and presses the "Shopping" button to transition to the nearby facility genre selection screen 2 P35, then scrolls the list on the nearby facility genre selection screen 2 P35 and presses the "Convenience store" button to transition to the convenience store brand selection screen P36, and finally presses the "All convenience stores" button on the convenience store brand selection screen P36 to transition to the nearby facility search result screen P37. A list of search results for nearby convenience stores can thereby be displayed.
At this time, following the flowchart of FIG. 2, the in-vehicle information device detects a sustained touch on the "Navi" button with the touch input detection unit 1, determines voice operation with the input method determination unit 2, the touch-command conversion unit 3 converts the touch signal representing the touch on the "Navi" button into the item name (Navi), and the input switching control unit 4 notifies the state transition control unit 5 and the voice recognition dictionary switching unit 8 of that item name. The voice recognition unit 9 then switches to the dictionary indicated by the voice recognition dictionary switching unit 8 and recognizes the utterance "convenience store", and the voice-command conversion unit 10 converts the recognition result into the item value (convenience store) and notifies the state transition control unit 5. The state transition control unit 5 converts the command (Navi, convenience store) into the application execution command "transition to nearby facility search result screen P37, search nearby facilities across all convenience stores, and display the search results" based on the state transition table of FIG. 7A. The application execution unit 11 then searches the map data in the navigation function data group of the data storage unit 12 for convenience stores and creates list items, and the output control unit 13 displays the resulting list screen (P37) on the touch display.
The operation of setting a specific convenience store as the destination from the nearby facility search result screen P37 and starting route guidance (destination facility confirmation screen P38 and navigation screen (current location with route) P39) is substantially the same as the processing described above, so its description is omitted.
On the other hand, using voice operation input, the user can display the search result screen P44 shown in FIG. 11B simply by uttering "Tokyo Station" while touching the "Navi" button on the application list screen P01 of FIG. 11A for a certain time.
Thus, while touch operation input requires 12 steps to display the search result screen P44, voice operation input can accomplish this in as little as one step.
For example, suppose the user presses the "Navi" button on the application list screen P01 of FIG. 11A to transition to the navigation screen (current location) P31, and then presses the "Menu" button on the navigation screen (current location) P31 to transition to the navigation menu screen P32.
If the user then switches to voice operation input and utters "convenience store" while touching the "Find nearby facilities" button on the navigation menu screen P32 for a certain time, the nearby facility search result screen P37 can be displayed. In this case, the search result list of convenience stores near the current location can be displayed in three steps from the application list screen P01.
Alternatively, uttering "Tokyo Station" while touching the "Find destination" button on the navigation menu screen P32 for a certain time displays the search result screen P44 shown in FIG. 11B. In this case, the search result list for Tokyo Station can be displayed in three steps from the application list screen P01.
Alternatively, uttering "Tokyo Station" while touching the "Facility name" button on the destination setting screen P33 shown in FIG. 11B for a certain time also displays the search result screen P44. In this case, the search result list for Tokyo Station can be displayed in four steps from the application list screen P01. The same voice input "Tokyo Station" can thus be given on different screens P32 and P33, and the number of steps varies with the screen on which the voice input is made.
For example, in the above example the user displayed the nearby facility search result screen P37 by uttering "convenience store" while touching the "Navi" button on the application list screen P01 of FIG. 11A for a certain time; if the user instead utters "convenience store A" while touching the same "Navi" button for a certain time, the nearby facility search result screen P40 can be displayed (based on the state transition table of FIG. 7A). In this example, a user who vaguely wants to search for convenience stores can utter "convenience store" to obtain search results for convenience stores of all brands, while a user who wants to search only for convenience store A can utter "convenience store A" to obtain search results narrowed to that brand.
Because the device thus determines touch operation mode or voice operation mode according to the state of the touch action on a button, a single button can switch between ordinary touch operation and voice operation related to that button, preserving the intuitiveness of touch operation.
Moreover, since the item value obtained by converting the voice recognition result is information for executing a process classified at a lower level within the same process group as the item name (the button name), the user can execute lower-level processes related to a purposefully touched button simply by uttering content related to that button. There is therefore no need to memorize predetermined, idiosyncratic voice operation methods and voice recognition keywords, as was conventionally required. Furthermore, compared with the conventional approach of pressing a bare "speech button" and then speaking, Embodiment 1 has the user press a button labeled with a name such as "Navi" or "AV" and utter a voice recognition keyword related to that button, realizing intuitive and easily understood voice operation and solving the common voice operation problem of "not knowing what to say". The number of operation steps and the operation time can also be reduced.
In Embodiment 1 above, list screens displaying list items, such as the phonebook list screen P22 shown in FIG. 8, and non-list screens were handled identically. In the present Embodiment 2, when a list screen is displayed, the device is configured to operate in a manner better suited to that screen. Specifically, on a list screen it dynamically creates a voice recognition dictionary related to the list items, and it detects a touch action on the scroll bar to determine voice operation input such as selecting a list item.
The input switching control unit 4a, based on the determination result of the input method determination unit 2 (touch operation or voice operation), informs the state transition control unit 5 which input operation the user is performing, and also informs the application execution unit 11a.
When notified of a touch operation by the input switching control unit 4a, the application execution unit 11a scrolls the list on the list screen.
When notified of a voice operation by the input switching control unit 4a, the application execution unit 11a, as in Embodiment 1 above, uses the various data stored in the data storage unit 12 to perform the screen transition or execute the application function according to the application execution command notified by the state transition control unit 5.
When a list screen is displayed, the voice recognition unit 9a refers to the recognition target word dictionary created by the recognition target word dictionary creation unit 20, performs voice recognition processing on the audio signal from the microphone to convert it into a character string or the like, and outputs the result to the voice-command conversion unit 10.
FIG. 13 is a flowchart showing the operation of the in-vehicle information device according to Embodiment 2. FIG. 14 shows an example of screen transitions by the in-vehicle information device; here it is assumed that the in-vehicle information device is displaying, on the touch display, the phonebook list screen P51 of the phone function, one of the functions of the application execution unit 11.
Some command item values are given the same name as the item name, "scroll bar", and others are given different names. Commands whose item name and item value are the same are used for touch operation input, while commands whose item name and item value differ are used mainly for voice operation input.
Here, the method of generating an application execution command by voice operation input will be described using the flowchart shown in FIG. 16.
In step ST251, upon receiving notification of the voice operation input determination result from the input switching control unit 4a, the recognition target word dictionary creation unit 20 acquires from the application execution unit 11a the list data of the list items on the list screen currently displayed on the touch display.
FIG. 17 illustrates the recognition target word dictionary. This dictionary contains three kinds of keywords: (1) voice recognition keywords for the items lined up in the list, (2) voice recognition keywords for narrowing down the list items by search, and (3) all voice recognition keywords in the screens below the items lined up in the list.
(2) are, for example, the convenience store brand names (convenience store A, convenience store B, convenience store C, convenience store D, convenience store E, and so on) lined up on the nearby facility search result screen showing the result of searching for "convenience store" among facilities near the current location.
(3) include, for example, the genre names (convenience store, department store, etc.) contained in the screen below the "Shopping" item lined up on the nearby facility genre selection screen 1, together with the convenience store brand names (○○ convenience store, etc.) and department store brand names (△△ department store, etc.) contained in the screens below each genre name; the genre names (hotel, etc.) contained in the screen below the "Lodging" item, together with the hotel brand names (□□ hotel, etc.) contained in the screens below each genre name; and, in addition, the voice recognition keywords of the screens below "Transportation" and "Dining". This makes it possible to jump to screens below the currently displayed screen or to directly execute functions on those lower screens.
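The dynamic dictionary described above can be sketched as a small builder that combines the three kinds of keywords for whatever list is currently on screen. The item names and helper tables below are invented examples, not data from the patent:

```python
def build_list_dictionary(list_items, narrowing_keywords, subtree_words):
    """Assemble the recognition target word dictionary for a list screen.

    list_items         -- (1) items currently lined up in the list
    narrowing_keywords -- (2) keywords for narrowing the list by search
    subtree_words      -- (3) mapping from each item to all keywords in
                          the screens below it (hypothetical structure)
    """
    vocab = list(list_items)                    # (1)
    vocab += narrowing_keywords                 # (2)
    for item in list_items:                     # (3)
        vocab += subtree_words.get(item, [])
    return set(vocab)
```

Because the vocabulary is rebuilt each time from the displayed list, it stays small and specific to the screen, matching the recognition-rate argument made for the static dictionaries of Embodiment 1.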
In step ST255, the state transition control unit 5 converts the command (item name, item value), consisting of the item name input from the input switching control unit 4a and the item value input from the voice-command conversion unit 10, into an application execution command based on the state transition table stored in the state transition table storage unit 6.
Suppose the current state is the phonebook list screen P51 shown in FIG. 14. If the user utters the voice recognition keyword "Yamada ○○" while touching the scroll bar for a certain time, the item name input from the input switching control unit 4a to the state transition control unit 5 is "scroll bar", and the item value input from the voice-command conversion unit 10 to the state transition control unit 5 is "Yamada ○○". The command is therefore (scroll bar, Yamada ○○).
According to the state transition table of FIG. 15, the command (scroll bar, Yamada ○○) is converted into the application execution command "transition to phonebook screen P52 and display the phonebook entry for Yamada ○○". This allows the user to easily select and confirm list items, such as "Yamada ○○", that lie further down the list and are not visible on the list screen.
According to the state transition table of FIG. 15, the command (scroll bar, convenience store A) is converted into the application execution command "without a screen transition, perform a narrowed-down search for convenience store A and display the search results". This allows the user to easily narrow down the list items by search.
According to the state transition table of FIG. 15, even the same command (scroll bar, convenience store A) yields a different application execution command depending on the current state. In the case of the nearby facility genre selection screen 1 P71, the command (scroll bar, convenience store A) is converted into the application execution command "transition to nearby facility search result screen P74, search nearby facilities for convenience store A, and display the search results". This allows the user to easily transition to screens below the displayed list screen or execute lower-level application functions.
FIG. 20 is a block diagram showing the configuration of the in-vehicle information device according to the present Embodiment 3. This in-vehicle information device newly includes an output method determination unit 30 and an output data storage unit 31, and notifies the user whether the device is in touch operation mode or voice operation mode. In FIG. 20, parts identical or equivalent to those in FIG. 1 are given the same reference numerals, and their detailed description is omitted.
When notified of voice operation mode by the input switching control unit 4b, the output method determination unit 30 determines the output method for informing the user that the input is a voice operation (a button color indicating voice operation mode, a sound effect, the click feel and vibration pattern of the touch display, a voice recognition mark, voice guidance, and so on), acquires from the output data storage unit 31 the output data corresponding to the item name of the voice operation, and outputs it to the output control unit 13b.
Although the output data storage unit 31 is provided separately in the illustrated example, another storage device may serve this purpose as well; for example, the output data may be stored in the state transition table storage unit 6 or the data storage unit 12.
FIG. 21 is a flowchart showing the output method control operation of the in-vehicle information device according to Embodiment 3. Steps ST100 to ST130 of FIG. 21 are the same processes as steps ST100 to ST130 of FIG. 2, so their description is omitted.
If the input method determination result is touch operation (step ST130 "YES"), the input switching control unit 4b notifies the output method determination unit 30 accordingly. In the subsequent step ST300, the output method determination unit 30, having received notification of touch operation input from the input switching control unit 4b, determines the output method for the application execution result; for example, it changes the screen buttons to the button color for touch operation, or changes the sound effect, click feel, and vibration produced when the user touches the touch display to those for touch operation.
A concrete output example will now be described. FIG. 22 shows the phone screen when voice operation input has been determined. Suppose the user touches the "Phonebook" button for a certain time while this phone screen is displayed. The output method determination unit 30 then receives from the input switching control unit 4b the notification of voice operation input together with the item name (Phonebook). The output method determination unit 30 subsequently acquires the voice recognition mark data from the output data storage unit 31 and outputs to the output control unit 13b an instruction to display the voice recognition mark near the "Phonebook" button. The output control unit 13b then superimposes the voice recognition mark near the phonebook button on the phone screen, so that the mark appears to pop out of the "Phonebook" button the user touched, and outputs the result to the touch display.
This clearly shows the user that the device has switched to voice operation input and which button the voice operation relates to. If the user utters "Yamada ○○" in this state, the lower-level phonebook screen with a call function can be displayed.
As another example, suppose the user touches the "Find nearby facilities" button on the navigation menu screen P32 of FIG. 11A for a certain time. In this case, the output method determination unit 30 receives from the input switching control unit 4b the notification of voice operation input together with the item name (Find nearby facilities). The output method determination unit 30 then acquires from the output data storage unit 31 voice guidance data linked to this item name, such as "Which facility would you like to go to?" or "Please say the facility name", and outputs it to the output control unit 13b.
This guides the user more naturally into voice operation input by prompting, through voice guidance, for the content to be uttered according to the touched button.
This can be considered more understandable guidance content than the voice guidance "Please speak after the beep" that is output when a speech button is pressed in typical voice operation input.
FIG. 23 is an example of a list screen during voice operation input. In Embodiment 2, the device switches to voice operation input when the user touches the scroll bar for a certain time. In this case, the output method determination unit 30 controls the display so that the voice recognition mark is superimposed near the scroll bar on the list screen, informing the user that the device is in the voice operation input state.
In Embodiments 1 to 3 above, the device switches between touch operation mode (execution of the button function) and voice operation mode (activation of voice recognition related to the button) according to the state of the touch action (short press or long press, etc.) on a button (or list, scroll bar, etc.) displayed on the touch display. However, touch operation mode and voice operation mode can also be switched according to the state of a touch action on an input device other than a touch display button, such as a mechanical hard button. Accordingly, the present Embodiment 4 and the later-described Embodiments 5 to 10 describe information devices that switch the operation mode according to the state of a touch action on an input device such as a hard button.
While the in-vehicle information devices of Embodiments 1 to 3 above used a touch display as the input device, here the following (1) to (6) are used as examples of input devices.
(1) Example combining hard buttons and a touch display
(2) Example combining hard buttons and a display
(3) Example using only hard buttons corresponding to items shown on a display
(4) Example combining a display and a cursor operation hard device such as a joystick
(5) Example combining a display and a touch pad
(6) Example using hard buttons only
For example, for a hard button without a tactile sensor, the input method may be determined by short press versus long press, or by single press versus double press. For a hard button with a tactile sensor, the input method may be determined by whether the user touched or pressed the button. For a hard button capable of detecting a half press (for example, a camera shutter button), the input method may be determined by half press versus full press.
By assigning two kinds of touch actions to a single hard button in this way, the device can determine whether the user is attempting to enter input for that hard button by touch operation or by voice operation.
(1) Example combining hard buttons and a touch display
FIG. 24 shows a configuration example of the hard buttons 100 to 105 and the touch display 106 that the in-vehicle information device includes (or connects to). Here, the hard buttons 100 to 105 are installed around the touch display 106, and each of the hard buttons 100 to 105 is associated with the item name of an upper-level function executable by the application execution unit 11. In this example, a short press of a hard button 100 to 105 is determined as touch operation mode and a long press as voice operation mode.
The in-vehicle information device according to the present Embodiment 5 has the same configuration in the drawings as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, so FIG. 1, FIG. 12, and FIG. 20 are referred to in the following description.
FIG. 27 shows a configuration example of the hard buttons 103 to 105 and the display 108 that the in-vehicle information device includes (or connects to); the display 108 and the hard buttons 103 to 105 are assumed to be installed around the steering wheel 107 of the vehicle. The item names of the hard buttons 103 to 105 are shown on the display 108. Note that the display 108 and the hard buttons 103 to 105 may be placed anywhere.
In this example, a short press of a hard button 103 to 105 is determined as touch operation mode and a long press as voice operation mode.
The in-vehicle information device according to the present Embodiment 6 has the same configuration in the drawings as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, so FIG. 1, FIG. 12, and FIG. 20 are referred to in the following description.
FIG. 29 shows a configuration example of the hard buttons 100 to 102 and the display 108 that the in-vehicle information device includes (or connects to); the display 108 and the hard buttons 100 to 102 are assumed to be installed around the steering wheel 107 of the vehicle.
In this example, a short press of a hard button 100 to 102 is determined as touch operation mode and a long press as voice operation mode.
The in-vehicle information device according to the present Embodiment 7 has the same configuration in the drawings as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, so FIG. 1, FIG. 12, and FIG. 20 are referred to in the following description.
FIG. 32 shows a configuration example of the display 108 and the joystick 109 that the in-vehicle information device includes (or connects to); the display 108 and the joystick 109 are assumed to be installed around the steering wheel 107 of the vehicle. Note that the display 108 and the joystick 109 may be placed anywhere. Although the joystick 109 is illustrated as an example of a cursor operation hard device, other input devices such as a rotary dial or an up-down selector may also be used.
In this example, a short press of the joystick 109 is determined as touch operation mode and a long press as voice operation mode.
The in-vehicle information device according to the present Embodiment 8 has the same configuration in the drawings as the in-vehicle information device shown in FIG. 1, FIG. 12, or FIG. 20, so FIG. 1, FIG. 12, and FIG. 20 are referred to in the following description.
FIG. 33 shows a configuration example of the display 108 and the touch pad 110 that the in-vehicle information device includes (or connects to); the display 108 and the touch pad 110 are assumed to be installed around the steering wheel 107 of the vehicle. Note that the display 108 and the touch pad 110 may be placed anywhere.
If the touch pad 110 can detect the pressure of a press, the input method may be determined by touch versus press, or by half press versus full press. Even if pressure cannot be detected, the input method can be determined by the difference between touch actions such as tracing, tapping, and long-pressing. In this example, a strong press is determined as touch operation mode and a long press as voice operation mode.
Embodiments 4 to 8 above described examples in which the information device shown in FIG. 1, FIG. 12, or FIG. 20 is applied to an in-vehicle information device; the present Embodiment 9 describes an example of application to a user interface device of a home electric appliance or the like.
FIG. 34 shows a configuration example of a television 111 with a recording function and a remote control 112 for operating it. In the present Embodiment 9, the information device shown in FIG. 1, FIG. 12, or FIG. 20 is applied to the user interface device of the television 111 and the remote control 112.
In this example, a short press of the "Play" hard button 113 or the "Reserve" hard button 114 of the remote control 112 is determined as touch operation mode and a long press as voice operation mode. The input method determination is substantially the same as in Embodiments 4 to 8 above, so its description is omitted.
FIG. 35 shows a configuration example of a rice cooker 120. In FIG. 35, when the user short-presses the "Reserve" hard button 122, the rice cooker 120 switches the input to touch operation mode and, based on the application execution command corresponding to the command (Reserve, Reserve) (execute the rice cooking reservation operation), has the user make the reservation settings using the display 121 and the "Set" hard button 123.
In Embodiments 1 to 9 above, the device switched between two modes, touch operation mode and voice operation mode, according to the state of the touch action on a single input device such as a touch display or a hard button; however, it is also possible to switch among three or more modes. That is, n kinds of modes are switched according to n kinds of touch actions on a single input device.
For example, if the input device is a touch display or a touch pad, as in Example 1 the device determines which of the first to third modes the user desires by whether the input device was short-pressed, long-pressed, or double-tapped.
If the input device is a hard button, as in Example 2 the input method may be determined by whether the input device was short-pressed, long-pressed, or double-clicked; or, as in Example 3, by whether it was briefly half-pressed, briefly fully pressed, or fully pressed (or half-pressed) for a long time.
If the input device is an easy selector, as in Example 4 the determination is made by whether the input device was pushed in, tilted up, or tilted down.
In this example, the hard buttons 100 to 105 are used as the input device. A short press of a hard button 100 to 105 is determined as touch operation mode, a long press as voice operation mode 1, and a double click as voice operation mode 2. The function executed in response to pressing the hard buttons 100 to 102 differs depending on the transitioned screen, whereas the hard buttons 103 to 105 have fixed functions.
When the "NAVI" hard button 105 in FIG. 38A is short-pressed, the touch input detection unit 1 detects this short press and the touch-command conversion unit 3 generates the command (NAVI, NAVI). The input method determination unit 2 determines touch operation mode, and the state transition control unit 5, receiving this determination, converts the command (NAVI, NAVI) into an application execution command and outputs it to the application execution unit 11. Based on the application execution command, the application execution unit 11 causes the display 108 to show the NAVI menu screen P100. This NAVI menu screen P100 includes a "1. Destination search" function executed in response to pressing the "1" hard button 100, a "2. Traffic information" display function executed in response to pressing the "2" hard button 101, and a "3. Navigation settings" function executed in response to pressing the "3" hard button 102.
When the item name (NAVI) of the command is input to the state transition control unit 5 via the input switching control unit 4, the voice recognition dictionary switching unit 8 switches to the voice recognition dictionary related to NAVI, and the voice recognition unit 9 performs voice recognition processing on the user's utterance using this dictionary. The voice-command conversion unit 10 converts the voice recognition result of the voice recognition unit 9 into a command (item value) and outputs it to the state transition control unit 5, which converts it into the application execution command corresponding to that item value among the NAVI functions and causes the application execution unit 11 to execute it.
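The generalization from two modes to n modes amounts to a mapping from n distinguishable gestures on one input device to n operation modes. A minimal sketch, with gesture and mode names invented for illustration:

```python
# Hypothetical gesture-to-mode table for Embodiment 10's three-mode case
# (Example 2: short press / long press / double click on a hard button).
GESTURE_TO_MODE = {
    "short_press": "touch",      # ordinary touch operation mode
    "long_press": "voice_1",     # voice operation mode 1
    "double_click": "voice_2",   # voice operation mode 2
}

def select_mode(gesture):
    """Map one detected gesture on the input device to an operation mode."""
    return GESTURE_TO_MODE[gesture]
```

For an easy selector (Example 4), the keys would instead be "push", "tilt_up", and "tilt_down"; the table shape is the same.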
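The voice operation path just described (item name narrows the dictionary, the utterance supplies the item value, and the pair is resolved through the state transition table) can be sketched end to end as follows. The dictionary contents, screen identifier, and table entry are all invented examples:

```python
# Hypothetical per-button dictionaries (unit 8) and transition table (unit 6).
DICTIONARIES = {"NAVI": {"destination search", "traffic information"}}
TRANSITIONS = {("P100", "NAVI", "traffic information"): "show traffic info"}

def voice_operation(state, item_name, utterance):
    """Simplified pipeline: dictionary switch -> recognition -> command
    -> application execution command."""
    vocab = DICTIONARIES[item_name]            # dictionary switching (unit 8)
    if utterance not in vocab:                 # recognition check (unit 9)
        return None                            # utterance outside active vocabulary
    command = (item_name, utterance)           # voice-command conversion (unit 10)
    return TRANSITIONS.get((state,) + command)  # state transition control (unit 5)
```

Restricting the check to the active per-button vocabulary is the same narrowing idea used throughout the embodiments to keep recognition reliable.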
Claims (13)
- A user interface device comprising: a touch-command conversion unit that generates, based on an output signal of a touch display, a first command for executing a process corresponding to a button displayed on the touch display on which a touch action has been performed; a voice-command conversion unit that, using a voice recognition dictionary consisting of voice recognition keywords associated with processes, performs voice recognition on a user utterance made substantially simultaneously with or following the touch action, and converts the result into a second command for executing a process corresponding to the voice recognition result, the process being classified at a lower level than the process of the first command within a process group related to the process of the first command; and an input switching control unit that switches, according to the state of the touch action based on the output signal of the touch display, between a touch operation mode in which the process corresponding to the first command generated by the touch-command conversion unit is executed and a voice operation mode in which the process corresponding to the second command generated by the voice-command conversion unit is executed.
- The user interface device according to claim 1, comprising: a process execution unit that, upon receiving a touch operation mode instruction from the input switching control unit, acquires from the touch-command conversion unit the first command corresponding to the button whose touch action was used for the mode determination in the input switching control unit and executes the process corresponding to the first command, and, upon receiving a voice operation mode instruction from the input switching control unit, acquires from the voice-command conversion unit the second command corresponding to the user utterance made substantially simultaneously with or following the touch action and executes the process corresponding to the second command; and an output control unit that controls an output unit, including the touch display, that outputs the execution result of the process execution unit.
- The user interface device according to claim 1, comprising: a voice recognition dictionary database storing voice recognition dictionaries consisting of voice recognition keywords associated with processes; and a voice recognition dictionary switching unit that switches, within the voice recognition dictionary database, to the voice recognition dictionary associated with the processes related to the button on which the touch action was performed, wherein the voice-command conversion unit performs voice recognition of the user utterance made substantially simultaneously with or following the touch action using the voice recognition dictionary switched to by the voice recognition dictionary switching unit.
- The user interface device according to claim 1, comprising: a data storage unit storing data of items that are grouped and further hierarchized within each group; a voice recognition dictionary database storing voice recognition keywords associated with the items; and a recognition target word dictionary creation unit that, when a touch action is performed on the scroll bar area of a list screen on which items of a predetermined level of each group of the data stored in the data storage unit are lined up, extracts from the voice recognition dictionary database the voice recognition keywords associated with the items lined up on the list screen and the items below them, and creates a recognition target word dictionary, wherein the voice-command conversion unit performs voice recognition of the user utterance made substantially simultaneously with or following the touch action on the scroll bar area using the recognition target word dictionary created by the recognition target word dictionary creation unit, and acquires the voice recognition keyword associated with an item lined up on the list screen or an item below it.
- The user interface device according to claim 2, comprising an output method determination unit that receives a touch operation mode or voice operation mode instruction from the input switching control unit and determines, according to the instructed mode, the output method of the execution result by the output unit, wherein the output control unit controls the output unit in accordance with the output method determined by the output method determination unit.
- The user interface device according to claim 5, comprising an output data storage unit storing, for each first command, voice guidance data that prompts the user to utter a voice recognition keyword associated with a process classified at a lower level than the process of the first command within a process group related to that process, wherein the output method determination unit, upon receiving a voice operation mode instruction from the input switching control unit, acquires from the output data storage unit the voice guidance data corresponding to the first command generated by the touch-command conversion unit and outputs it to the output control unit, and the output control unit causes the output unit to output the voice guidance data output by the output method determination unit.
- An in-vehicle information device comprising: a touch display and a microphone mounted in a vehicle; a touch-command conversion unit that generates, based on an output signal of the touch display, a first command for executing a process corresponding to a button displayed on the touch display on which a touch action has been performed; a voice-command conversion unit that, using a voice recognition dictionary consisting of voice recognition keywords associated with processes, performs voice recognition on a user utterance, collected by the microphone, made substantially simultaneously with or following the touch action, and converts the result into a second command for executing a process corresponding to the voice recognition result, the process being classified at a lower level than the process of the first command within a process group related to the process of the first command; and an input switching control unit that switches, according to the state of the touch action based on the output signal of the touch display, between a touch operation mode in which the process corresponding to the first command generated by the touch-command conversion unit is executed and a voice operation mode in which the process corresponding to the second command generated by the voice-command conversion unit is executed.
- An information processing method comprising: a touch input detection step of detecting, based on an output signal of a touch display, a touch action on a button displayed on the touch display; an input method determination step of determining touch operation mode or voice operation mode according to the state of the touch action based on the detection result of the touch input detection step; a touch-command conversion step of, when touch operation mode is determined in the input method determination step, generating, based on the detection result of the touch input detection step, a first command for executing a process corresponding to the button on which the touch action was performed; a voice-command conversion step of, when voice operation mode is determined in the input method determination step, performing voice recognition on a user utterance made substantially simultaneously with or following the touch action using a voice recognition dictionary consisting of voice recognition keywords associated with processes, and converting the result into a second command for executing a process corresponding to the voice recognition result, the process being classified at a lower level than the process of the first command within a process group related to the process of the first command; and a process execution step of executing the process corresponding to the first command generated in the touch-command conversion step or the second command generated in the voice-command conversion step.
- An information processing program for causing a computer to execute: a touch input detection procedure of detecting, based on an output signal of a touch display, a touch action on a button displayed on the touch display; an input method determination procedure of determining touch operation mode or voice operation mode according to the state of the touch action based on the detection result of the touch input detection procedure; a touch-command conversion procedure of, when touch operation mode is determined in the input method determination procedure, generating, based on the detection result of the touch input detection procedure, a first command for executing a process corresponding to the button on which the touch action was performed; a voice-command conversion procedure of, when voice operation mode is determined in the input method determination procedure, performing voice recognition on a user utterance made substantially simultaneously with or following the touch action using a voice recognition dictionary consisting of voice recognition keywords associated with processes, and converting the result into a second command for executing a process corresponding to the voice recognition result, the process being classified at a lower level than the process of the first command within a process group related to the process of the first command; and a process execution procedure of executing the process corresponding to the first command generated in the touch-command conversion procedure or the second command generated in the voice-command conversion procedure.
- A user interface device comprising: a touch-command conversion unit that generates, based on an output signal from an input device on which a user has performed a touch action, a first command for executing a process associated with the input device or a process being selected by the input device; a voice-command conversion unit that, using a voice recognition dictionary consisting of voice recognition keywords associated with the processes, performs voice recognition on a user utterance made substantially simultaneously with or following the touch action on the input device, and converts the result into a second command for executing a process corresponding to the voice recognition result, the process being classified at a lower level than the process of the first command within a process group related to the process of the first command; and an input switching control unit that switches, according to the state of the touch action based on the output signal of the input device, between a touch operation mode in which the process corresponding to the first command generated by the touch-command conversion unit is executed and a voice operation mode in which the process corresponding to the second command generated by the voice-command conversion unit is executed.
- The user interface device according to claim 10, wherein the input device is a hard button.
- The user interface device according to claim 10, wherein the input device is a cursor operation hard device capable of selecting process items by operating a cursor displayed on a display.
- The user interface device according to claim 10, wherein the input device is a touch pad.