WO2004032113A1 - Vehicle-mounted control device, and program causing a computer to execute its operation explanation method - Google Patents
Vehicle-mounted control device, and program causing a computer to execute its operation explanation method
- Publication number
- WO2004032113A1 PCT/JP2003/012848
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice
- practice
- user
- control device
- vehicle control
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 32
- 238000004458 analytical method Methods 0.000 claims description 32
- 238000001514 detection method Methods 0.000 claims description 3
- 230000004044 response Effects 0.000 claims description 2
- 230000008569 process Effects 0.000 description 8
- 230000006870 function Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 230000001419 dependent effect Effects 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012905 input function Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/065—Adaptation
- G10L15/07—Adaptation to the speaker
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- the present invention relates to a vehicle-mounted control apparatus that performs control according to voice commands, and to a program that causes a computer to execute its operation explanation method.
- Conventional technology
- when inputting a voice command, a user who is not accustomed to the operation may be unable to input the command correctly. Furthermore, a user who is not accustomed to the commands may not use the voice input function at all. In addition, the registration work required to make the device learn the user's voice is troublesome, and the user may not perform such registration at all.
- an object of the present invention is to provide an in-vehicle control device whose operation is easy to understand and which requires little work from the user, and a program that causes a computer to execute the operation explanation. Disclosure of the invention
- the in-vehicle control device includes: voice recognition means for recognizing an input voice command based on a user's voice; a command execution mode in which a voice command recognized by the voice recognition means is executed; and control means for displaying an operation description of the in-vehicle control device using the voice command in the practice mode.
- the speech recognition means acquires practice speech uttered in response to a voice command in the practice mode, and learns the characteristics of the user's speech based on the practice speech.
- when the user finishes practicing, the in-vehicle control device has completed learning the parameters for recognizing the user's voice. The voice recognition rate can therefore be improved without the user performing a separate input operation to make the in-vehicle control device learn the voice, and the user's work can be reduced.
- FIG. 1 is a functional block diagram showing a navigation device according to an embodiment of the present invention.
- FIG. 2 is a flowchart showing the overall processing of the in-vehicle control apparatus according to the embodiment of the present invention.
- FIG. 3 is a flowchart showing the processing of the practice mode in the embodiment of the present invention.
- FIG. 4 is a diagram showing the display contents of the display device according to the embodiment of the present invention.
- FIG. 5 shows the display contents of the display device according to the embodiment of the present invention.
- FIG. 6 is a diagram showing an example of advice in the embodiment of the present invention.
- FIG. 7 is a view showing the display contents of the display device according to the embodiment of the present invention.
- FIG. 1 shows an in-vehicle control device according to the first embodiment; here, a car navigation device is shown as an example.
- the main unit 1 comprises a control unit 2 composed of a microprocessor or the like, a memory 3 connected to the control unit 2 for storing programs and various data, a map information storage device 4, also connected to the control unit 2, for storing digital map data, and a voice recognition unit 5 for recognizing the voice signal input from the microphone 8.
- the control unit 2 has a function of controlling the display device 6 connected to the main device 1 and displaying the route information and road information necessary for navigation on the display device 6.
- a liquid crystal display device is generally used as the display device 6, but any type of display device can be used; the display device 6 may be configured integrally with the main unit, or may be embedded integrally in part of the interior surface of the car.
- Control unit 2 uses GPS (Global Positioning System) to perform navigation: the position of the device is calculated, using a known calculation method, from signals from a plurality of satellites received by the antenna 7.
- the microphone 8 converts the user's voice into an electrical signal and outputs it to the voice recognition unit 5.
- the speaker 9 produces sounds such as voice, sound effects, and music based on the control of the control unit 2.
- the interface 10 has a function to relay signals between the control unit 2 and devices such as an air conditioner control device (not shown) and sensors that detect the on/off status of the wipers and headlights.
- the input device 11 is a device that detects a command from the user and outputs a signal corresponding to the command to the control unit 2.
- the input device 11 includes, for example, one or more buttons, a tablet, a touch sensor provided on the display device 6, a joystick, a lever provided on the vehicle body, and the like; a variety of input devices capable of converting these commands into signals can be used.
- This in-vehicle control device has two operation modes: instruction execution mode and practice mode.
- the command execution mode is a mode in which normal operation is performed.
- in this mode, the voice recognition unit 5 recognizes the user's voice command, and the control unit 2 executes processing corresponding to the voice command.
- processing in this command execution mode includes, for example, navigation destination setting and route guidance start, air conditioner control such as air volume control, audio control, and control of e-mail, Internet, and Intelligent Transport Systems (ITS) services.
- the practice mode is a mode in which the user is taught how to perform voice input and in which the user practices voice input.
- in the practice mode, the user can learn how to use voice input (grammar, loudness, speed) while trying it out.
- the user performs voice input according to the instructions of the in-vehicle control device.
- the in-vehicle controller shows an example sentence of a voice command, and the user can practice voice input using the example sentence. Note that switching between the command execution mode and the practice mode can be performed by, for example, pressing a selection button or selecting a menu displayed on the display device.
- Fig. 2 is a flowchart of the program executed by the control unit 2, showing the command execution mode and the practice mode.
- an initial screen at the time of activation is displayed (step ST1).
- the control unit 2 displays “Select a user” on the initial screen, and displays a list of a plurality of users registered in the memory 3 in advance. The user who sees this display selects one user from the list.
- the control unit 2 detects the signal output from the input device 11 and identifies the user based on this signal (step ST2).
- control unit 2 detects the input signal of the input device 11 and checks whether the input signal is a signal instructing execution of the practice mode (step ST3).
- if so, the process of the practice mode in step ST4 is executed (the practice mode will be described later with reference to FIG. 3).
- otherwise, the control unit 2 executes the process of the command execution mode in steps ST5 and ST6. First, it waits for an input signal from the input device 11 and/or a voice command from the microphone 8 (step ST5).
- when the user inputs voice to the microphone 8, the voice recognition unit 5 recognizes the input voice. At this time, the voice recognition unit 5 reads out the recognition parameters for the user identified in step ST2 from the memory 3, and recognizes the voice using these parameters. Next, the control unit 2 identifies which command has been input from among a plurality of commands based on the recognition result of the voice recognition unit 5. Then, the process corresponding to the identified command is executed (step ST6).
- control unit 2 detects whether or not the user has performed an operation to finish using the in-vehicle control device (for example, an operation to turn off the power) based on an electrical signal from the power key via the input device 11 or the interface (step ST7). If there is no end operation, the control unit 2 repeats the processing from step ST3; if there is an end operation, the process is terminated.
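The ST1–ST7 loop of Fig. 2 can be sketched as follows. This is a minimal Python illustration; the function name and event strings are hypothetical, not taken from the patent's implementation:

```python
# Hypothetical sketch of the Fig. 2 control flow. Event strings stand in
# for real input-device signals and recognized voice commands.

def run_device(events):
    """Walk the ST1-ST7 loop over a scripted list of input events."""
    log = ["show_initial_screen"]            # ST1: initial screen
    user = events.pop(0)                     # ST2: identify the user
    log.append(f"user:{user}")
    for event in events:
        if event == "end":                   # ST7: end operation detected
            log.append("terminate")
            break
        if event == "practice":              # ST3 -> ST4: practice mode
            log.append("practice_mode")
        else:                                # ST5 -> ST6: execute command
            log.append(f"execute:{event}")
    return log
```

Running it with a scripted session shows the mode dispatch: a "practice" event enters the practice mode, any other event is treated as a command to execute, and "end" terminates the loop.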
- FIG. 3 is a flowchart of the program executed by the control unit 2, showing the processing of step ST4 in FIG. 2.
- the control unit 2 displays the initial screen shown in FIG. 4 (a) on the display device 6 (step ST11).
- This initial screen is a screen for explaining the overall flow of the process in the practice mode to the user.
- the control unit 2 detects a command input by the user to the input device 11, and when the command is “start”, displays the operation explanation screen shown in FIG. 4(b) (step ST12).
- the control unit 2 reads out the example sentence of the voice command from the memory 3 and displays it on the display device 6.
- the general operation, including the operation of the input device 11 (for example, pressing the utterance button at the time of voice input), can also be explained.
- when the user starts speaking in accordance with the operation description, the control unit 2 records the voice received by the microphone 8 in the memory 3 (step ST13). For example, if the description “Please press the utterance switch and say ‘Nearby convenience store’” is displayed as the operation description, the user presses the utterance switch provided on the steering wheel or the like and starts speaking. When the control unit 2 detects that the utterance switch is pressed, it starts recording the audio signal received by the microphone 8. Next, the control unit 2 instructs the voice recognition unit 5 to analyze the voice recorded in step ST13 and to learn the voice features (step ST14). The voice recognition unit 5 analyzes the voice characteristics according to a known voice recognition algorithm, and records the analysis result in the memory 3.
- the voice recognition unit 5 compares the standard voice pattern stored in the memory 3 with the input voice pattern, performs an analysis for each of a plurality of features, such as volume, speech speed, speech timing, likelihood of the input voice, and whether unnecessary words (hereinafter referred to as incidental words) are included, and outputs the analysis results.
- the voice recognition unit 5 analyzes the characteristics of the user's voice by comparing it with the standard pattern, and performs learning to correct the voice recognition parameters according to those characteristics.
- Various known techniques can be used for the speech recognition algorithm and the parameter learning method.
- for example, the speech recognition unit 5 can perform the processing of step ST14 by executing a speech recognition method using a hidden Markov model and a speaker-dependent parameter learning method as described in Japanese Patent Laid-Open No. 1-242494.
- control unit 2 determines whether or not the input voice is good based on the analysis results obtained in step ST14 (step ST15). Any parameter that indicates the ease of speech recognition can be used to judge whether the speech is good; for example, if “bad” is detected for any of volume, speech speed, speech timing, likelihood of the input voice, or presence of incidental words, the control unit 2 determines that the comprehensive analysis result is “bad”. On the other hand, if all the analysis results are “good”, the comprehensive result is “good”.
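The comprehensive judgment described here, where a single “bad” feature makes the overall result “bad”, can be sketched in a few lines (function and feature names are illustrative):

```python
def overall_result(feature_results):
    """Comprehensive judgment as described in the text: the result is
    'bad' if any single feature (volume, speed, timing, likelihood,
    incidental words) is 'bad', and 'good' only if all are 'good'."""
    return "good" if all(r == "good" for r in feature_results.values()) else "bad"
```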
- control unit 2 displays a score corresponding to the likelihood and a message “successfully recognized” on the display device 6.
- otherwise, the control unit 2 displays the text associated with the analysis result on the display device 6 as advice (see the advice text in Figure 6). An example of the text to be displayed is shown in Fig. 4(c).
- the control unit 2 may also display a score indicating the likelihood of speech on the display device 6.
- the score may be displayed as data abstracted from the score used internally by the control unit 2, so as to be easily understood by the user. For example, when the internal score is in the range of 0 to 1000 points, the control unit 2 divides this internal score range into 100-point increments and converts it into levels of 0 to 10, which are displayed on the display device 6 as the score.
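The internal-to-displayed score conversion described above (0–1000 points mapped to 0–10 levels in 100-point steps) might look like this minimal sketch:

```python
def display_score(internal_score):
    """Map an internal score in [0, 1000] to a displayed 0-10 level,
    one level per 100-point increment (clamping out-of-range values)."""
    return min(max(internal_score, 0), 1000) // 100
```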
- control unit 2 determines whether or not all the practice items prepared in advance have been completed (step ST18). If the practice has not been completed, the process returns to step ST12 and the practice is repeated for different commands. On the other hand, when the practice is complete, the control unit 2 displays the comprehensive analysis result and the advice based on the analysis result as shown in FIG. 5 (step ST19).
- FIG. 6 shows the classification of the speech input analysis results and the advice text displayed in step ST18.
- when a total of five commands have been practiced by repeating the practice of steps ST12 to ST18, the control unit 2 displays the first to fifth scores and advice together, as shown in FIG. 5.
- the control unit 2 displays “1 Please speak loudly” on the display device 6.
- the text of the advice content may be composed of both an analysis-result text (for example, “Your voice seems to be quiet”) and an advice text that guides the user to improve speech input (for example, “Please speak loudly”); however, if the text displayed in step ST17 is to be simplified, the control unit 2 may display only the advice text, as shown in Fig. 5.
- control unit 2 then repeats the processing from step ST11.
- the control unit 2 stores the parameters learned in step ST14 in the memory 3 as the parameters for the current user (step ST20).
- the control unit 2 stores the learned parameters separately for each user.
- to do so, the control unit 2 displays an inquiry screen asking which storage location to select from among the parameter storage locations divided for each user, as shown in the figure.
- the user's selection is input via the input device 11, and the control unit 2 identifies the storage location based on the input information from the input device 11 and stores the learned parameters there. If parameters are already registered, the already-registered parameters and the newly learned parameters are combined, and the combined parameters are stored in the memory.
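One plausible way to combine already-registered parameters with newly learned ones, in the spirit of the weighted addition described next, is a weighted average. This sketch assumes parameters are stored as a name-to-value mapping and that the weight is an arbitrary tuning knob, neither of which is specified by the patent:

```python
def merge_parameters(registered, learned, weight_new=0.5):
    """Weighted combination of already-registered and newly learned
    recognition parameters. weight_new controls how much the latest
    learning session influences the stored parameters."""
    return {k: (1 - weight_new) * registered[k] + weight_new * learned[k]
            for k in registered}
```

A small `weight_new` keeps the stored parameters stable across sessions; a large one adapts quickly to the most recent practice.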
- This combination can be done by any method that can improve the speech recognition rate; for example, a method of weighting and adding the average of the existing parameters and the latest learning parameters according to their importance can be adopted.
• Voice analysis details
- whether or not the input voice is good is determined by the following methods (see Fig. 6 for the detection methods).
- (1) the voice recognition unit 5 compares the standard patterns of incidental words stored in the memory 3 with the input voice; if a pattern matching an incidental word is found at the beginning of the voice, the analysis result is output as “bad”. On the other hand, when there are no incidental words, the analysis result is “good”.
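The start-of-utterance incidental-word check might be sketched as follows. Note this operates on text for illustration, with an assumed filler-word list; the patent's actual standard patterns are acoustic, not textual:

```python
def leading_filler(utterance, fillers=("uh", "um", "well")):
    """Return 'bad' if the utterance begins with one of the assumed
    filler ('incidental') words, mirroring the start-of-speech
    pattern match described in the text; 'good' otherwise."""
    words = utterance.split()
    first = words[0].lower() if words else ""
    return "bad" if first in fillers else "good"
```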
- the voice recognition unit 5 detects whether the volume of the input voice is within a predetermined range, and outputs a “bad” analysis result when it is not. On the other hand, if it is within the range, the analysis result is “good”. For example, if the maximum volume that can be detected by the microphone 8 is 0 decibels, the predetermined range can be set to −3 decibels or more and less than 0 decibels.
- the voice recognition unit 5 measures the time length of the input voice, compares it with the time length of the standard pattern, and can determine the result to be good when the difference between the two is within a predetermined range.
- the predetermined range can be set arbitrarily; for example, the time length of the input voice can be required to be within +25% to −25% of the time length of the standard pattern.
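The duration check against the standard pattern's time length, using the ±25% tolerance given above, can be sketched as:

```python
def duration_ok(input_len, standard_len, tolerance=0.25):
    """True when the input utterance's duration is within +/-25%
    (the example tolerance) of the standard pattern's duration."""
    return abs(input_len - standard_len) <= tolerance * standard_len
```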
- the voice recognition unit 5 also determines whether the utterance timing of the input voice is within a predetermined range; the analysis result is “good” when it is within the range and “bad” when it is out of the range.
- the analysis result is “bad” when the input voice is spoken for longer than the maximum input time of the voice command, or when voice input is detected at the very end of the voice acquisition period. In other cases, the analysis result is “good”.
- the voice recognition unit 5 compares the standard pattern stored in the memory 3 with the input voice pattern and detects the likelihood. When the likelihood is greater than or equal to a preset threshold, the analysis result is “good”; when it is less than the threshold, the analysis result is “bad”. The likelihood is indicated by the Euclidean distance between the standard pattern and the input speech pattern. At this time, the speech recognition unit 5 calculates a score based on the likelihood. For example, setting the score to 1000 when the likelihood is theoretically at its highest and to 0 when the likelihood is considered practically at its lowest, the score is calculated as a function that changes in proportion to the likelihood. Note that any value can be set as the threshold; for example, the threshold can be set to 600.
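The proportional likelihood-to-score mapping (1000 at the theoretical best, 0 at the practical worst) and the threshold test can be sketched as follows; the likelihood endpoints are assumed to be normalized to [0, 1], and the threshold of 600 is the example value from the text:

```python
def likelihood_to_score(likelihood, best=1.0, worst=0.0):
    """Score proportional to likelihood: 1000 at the theoretical best,
    0 at the practical worst. Endpoint values are assumptions for
    illustration; out-of-range likelihoods are clamped."""
    frac = (likelihood - worst) / (best - worst)
    return round(1000 * min(max(frac, 0.0), 1.0))

def likelihood_ok(score, threshold=600):
    """'good' when the score meets the example threshold of 600."""
    return "good" if score >= threshold else "bad"
```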
- the speech recognition unit 5 also detects whether or not there is an incidental word at the end of the sentence, in the same way as the incidental-word detection in (1); if an incidental word is detected, the analysis result is “bad”, and if not, the analysis result is “good”.
- for practice, the control unit 2 uses the following five example sentences.
- the sentences (a) to (e) described above include a plurality of commands with different grammars, and the speech recognition unit 5 can learn from a plurality of utterances spoken with different grammars. Therefore, it is possible to perform learning with higher recognition accuracy than learning only from sentences spoken in a single monotonous grammar, such as “Nearby convenience store” or “Nearby gas station”.
- the sentences (a) to (e) described above also include words with different parts of speech: both nouns and numbers are included, so speaker-dependent recognition parameters can be learned not only for nouns such as place names but also for the pronunciation of numbers. Therefore, it is possible to prevent the recognition-rate improvement obtained by learning from being biased toward nouns such as place names.
- the vehicle-mounted control device is not limited to this example; any vehicle-mounted control device that operates a car or an electronic device outside the vehicle by voice may be used.
- on-board control devices include on-board electronic devices such as air conditioners and audio.
- as a control device that controls electronic devices outside the vehicle, for example, a control device that controls various electronic devices outside the vehicle via a transmitter connected to the interface 10 can be considered.
- The types of electronic equipment outside the vehicle include home or commercial air conditioners, home security systems, home servers, and other electrical appliances, as well as all electronic devices connected via communication lines, such as fee payment devices at fast food restaurants, gas stations, and other retail stores, and gate devices installed at entrances and exits of parking lots.
- control device of FIG. 1 shows an example in which the speech recognition unit 5 is configured by a speech recognition LSI.
- the speech recognition unit is not limited to a dedicated circuit; a voice recognition program stored in the memory 3 may be used instead.
- in that case, the speech recognition program is executed by the control unit 2 or by a separate processor dedicated to speech recognition.
- in this example, speech analysis is performed by the speech recognition unit 5 as preprocessing, and the control unit 2 performs the processing for displaying advice using the analysis result; the other processing can also be performed by the control unit 2.
- in the above description, the recognition parameters of a plurality of users are registered. However, if only a specific user uses the device, the recognition parameters registered in the practice mode need not be divided for each user, and the user identification process in step ST2 in Fig. 2 is not required.
- The memory 3 can be volatile and/or non-volatile memory, and a storage device such as a hard disk or DVD-RAM may be used as the storage means.
- advice is given based on the results of the practice speech analysis. This is very useful because the user can correct his or her operation method; therefore, the device can also be used as a practice mode without the learning function for the above recognition parameters.
- as described above, the in-vehicle control device is suitable for reducing the user's work when performing control according to voice commands.
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03754013A EP1450349B1 (en) | 2002-10-07 | 2003-10-07 | Vehicle-mounted control apparatus and program that causes computer to execute method of providing guidance on the operation of the vehicle-mounted control apparatus |
US10/497,695 US7822613B2 (en) | 2002-10-07 | 2003-10-07 | Vehicle-mounted control apparatus and program that causes computer to execute method of providing guidance on the operation of the vehicle-mounted control apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-293327 | 2002-10-07 | ||
JP2002293327A JP4304952B2 (ja) | 2002-10-07 | 2002-10-07 | 車載制御装置、並びにその操作説明方法をコンピュータに実行させるプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004032113A1 true WO2004032113A1 (ja) | 2004-04-15 |
Family
ID=32064007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/012848 WO2004032113A1 (ja) | 2002-10-07 | 2003-10-07 | 車載制御装置、並びにその操作説明方法をコンピュータに実行させるプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US7822613B2 (ja) |
EP (1) | EP1450349B1 (ja) |
JP (1) | JP4304952B2 (ja) |
WO (1) | WO2004032113A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10448762B2 (en) | 2017-09-15 | 2019-10-22 | Kohler Co. | Mirror |
US10663938B2 (en) | 2017-09-15 | 2020-05-26 | Kohler Co. | Power operation of intelligent devices |
US10887125B2 (en) | 2017-09-15 | 2021-01-05 | Kohler Co. | Bathroom speaker |
US11093554B2 (en) | 2017-09-15 | 2021-08-17 | Kohler Co. | Feedback for water consuming appliance |
US11099540B2 (en) | 2017-09-15 | 2021-08-24 | Kohler Co. | User identity in household appliances |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060041926A1 (en) * | 2004-04-30 | 2006-02-23 | Vulcan Inc. | Voice control of multimedia content |
JP4722499B2 (ja) * | 2005-01-25 | 2011-07-13 | 本田技研工業株式会社 | 音声認識型機器制御装置および車両 |
US8200495B2 (en) | 2005-02-04 | 2012-06-12 | Vocollect, Inc. | Methods and systems for considering information about an expected response when performing speech recognition |
US7949533B2 (en) | 2005-02-04 | 2011-05-24 | Vococollect, Inc. | Methods and systems for assessing and improving the performance of a speech recognition system |
US7865362B2 (en) | 2005-02-04 | 2011-01-04 | Vocollect, Inc. | Method and system for considering information about an expected response when performing speech recognition |
US7827032B2 (en) | 2005-02-04 | 2010-11-02 | Vocollect, Inc. | Methods and systems for adapting a model for a speech recognition system |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
JP5066668B2 (ja) * | 2005-11-08 | 2012-11-07 | 株式会社国際電気通信基礎技術研究所 | 音声認識装置、およびプログラム |
WO2007118029A2 (en) * | 2006-04-03 | 2007-10-18 | Vocollect, Inc. | Methods and systems for assessing and improving the performance of a speech recognition system |
WO2008084575A1 (ja) * | 2006-12-28 | 2008-07-17 | Mitsubishi Electric Corporation | 車載用音声認識装置 |
US20100070932A1 (en) * | 2008-09-18 | 2010-03-18 | Nissan Technical Center North America, Inc. | Vehicle on-board device |
US7953552B2 (en) * | 2008-11-21 | 2011-05-31 | Gary Severson | GPS navigation code system |
US8131460B2 (en) * | 2008-11-21 | 2012-03-06 | Gary Severson | GPS navigation code system |
CN102696029B (zh) * | 2010-01-06 | 2015-05-27 | 株式会社东芝 | 信息检索装置、信息检索方法及信息检索程序 |
JP2012212351A (ja) * | 2011-03-31 | 2012-11-01 | Mazda Motor Corp | 車両用情報提供装置 |
JP5673330B2 (ja) * | 2011-04-25 | 2015-02-18 | 株式会社デンソー | 音声入力装置 |
US8914290B2 (en) | 2011-05-20 | 2014-12-16 | Vocollect, Inc. | Systems and methods for dynamically improving user intelligibility of synthesized speech in a work environment |
KR101987255B1 (ko) * | 2012-08-20 | 2019-06-11 | 엘지이노텍 주식회사 | 음성 인식 장치 및 이의 음성 인식 방법 |
JP2014071519A (ja) * | 2012-09-27 | 2014-04-21 | Aisin Seiki Co Ltd | 状態判定装置、運転支援システム、状態判定方法及びプログラム |
US9691377B2 (en) | 2013-07-23 | 2017-06-27 | Google Technology Holdings LLC | Method and device for voice recognition training |
US9978395B2 (en) | 2013-03-15 | 2018-05-22 | Vocollect, Inc. | Method and system for mitigating delay in receiving audio stream during production of sound from audio stream |
US9548047B2 (en) | 2013-07-31 | 2017-01-17 | Google Technology Holdings LLC | Method and apparatus for evaluating trigger phrase enrollment |
JP6432233B2 (ja) * | 2014-09-15 | 2018-12-05 | 株式会社デンソー | 車両用機器制御装置、制御内容検索方法 |
US20160291854A1 (en) * | 2015-03-30 | 2016-10-06 | Ford Motor Company Of Australia Limited | Methods and systems for configuration of a vehicle feature |
CN104902070A (zh) * | 2015-04-13 | 2015-09-09 | 青岛海信移动通信技术股份有限公司 | 一种移动终端语音控制的方法及移动终端 |
US10096263B2 (en) | 2015-09-02 | 2018-10-09 | Ford Global Technologies, Llc | In-vehicle tutorial |
WO2017130486A1 (ja) * | 2016-01-28 | 2017-08-03 | ソニー株式会社 | 情報処理装置、情報処理方法およびプログラム |
JP6860553B2 (ja) * | 2016-03-30 | 2021-04-14 | 川崎重工業株式会社 | 鞍乗型車両の情報出力装置 |
WO2017168465A1 (ja) * | 2016-03-30 | 2017-10-05 | 川崎重工業株式会社 | 鞍乗型車両用処理装置 |
US10937415B2 (en) * | 2016-06-15 | 2021-03-02 | Sony Corporation | Information processing device and information processing method for presenting character information obtained by converting a voice |
US10714121B2 (en) | 2016-07-27 | 2020-07-14 | Vocollect, Inc. | Distinguishing user speech from background speech in speech-dense environments |
WO2019138651A1 (ja) * | 2018-01-10 | 2019-07-18 | ソニー株式会社 | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム |
KR20210133600A (ko) * | 2020-04-29 | 2021-11-08 | 현대자동차주식회사 | 차량 음성 인식 방법 및 장치 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4672668A (en) * | 1982-04-12 | 1987-06-09 | Hitachi, Ltd. | Method and apparatus for registering standard pattern for speech recognition |
JPS62232692A (ja) * | 1986-04-03 | 1987-10-13 | 株式会社リコー | 特定話者音声登録方法 |
JPH01285998A (ja) * | 1988-05-13 | 1989-11-16 | Sharp Corp | 音声認識装置 |
JPH0488399A (ja) * | 1990-08-01 | 1992-03-23 | Clarion Co Ltd | 音声認識装置 |
JPH04310045A (ja) * | 1991-04-08 | 1992-11-02 | Clarion Co Ltd | 練習機能付き音声ダイヤル装置 |
JPH11194790A (ja) * | 1997-12-29 | 1999-07-21 | Kyocera Corp | 音声認識作動装置 |
JP2000097719A (ja) | 1999-09-20 | 2000-04-07 | Aisin Aw Co Ltd | ナビゲ―ション装置 |
JP2000352992A (ja) * | 1999-06-11 | 2000-12-19 | Fujitsu Ten Ltd | 音声認識装置 |
WO2002021509A1 (en) | 2000-09-01 | 2002-03-14 | Snap-On Technologies, Inc. | Computer-implemented speech recognition system training |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1987007460A1 (en) | 1986-05-23 | 1987-12-03 | Devices Innovative | Voice activated telephone |
US4827520A (en) | 1987-01-16 | 1989-05-02 | Prince Corporation | Voice actuated control system for use in a vehicle |
US5909666A (en) * | 1992-11-13 | 1999-06-01 | Dragon Systems, Inc. | Speech recognition system which creates acoustic models by concatenating acoustic models of individual words |
US5444673A (en) * | 1994-07-12 | 1995-08-22 | Mathurin; Trevor S. | Audio controlled and activated wristwatch memory aid device |
US5710864A (en) * | 1994-12-29 | 1998-01-20 | Lucent Technologies Inc. | Systems, methods and articles of manufacture for improving recognition confidence in hypothesized keywords |
US5970457A (en) * | 1995-10-25 | 1999-10-19 | Johns Hopkins University | Voice command and control medical care system |
US5799279A (en) * | 1995-11-13 | 1998-08-25 | Dragon Systems, Inc. | Continuous speech recognition of text and commands |
US5864338A (en) * | 1996-09-20 | 1999-01-26 | Electronic Data Systems Corporation | System and method for designing multimedia applications |
US5922042A (en) * | 1996-09-30 | 1999-07-13 | Visteon Technologies, Llc | Automatic resumption of route guidance in vehicle navigation system |
JP3820662B2 (ja) | 1997-02-13 | 2006-09-13 | Sony Corporation | Vehicle navigation device and method |
US6006185A (en) * | 1997-05-09 | 1999-12-21 | Immarco; Peter | System and device for advanced voice recognition word spotting |
US6275231B1 (en) * | 1997-08-01 | 2001-08-14 | American Calcar Inc. | Centralized control and management system for automobiles |
US6295391B1 (en) * | 1998-02-19 | 2001-09-25 | Hewlett-Packard Company | Automatic data routing via voice command annotation |
JP3412496B2 (ja) | 1998-02-25 | 2003-06-03 | Mitsubishi Electric Corp | Speaker adaptation device and speech recognition device |
JP2000009480A (ja) | 1998-06-25 | 2000-01-14 | Jatco Corp | Position information display device |
US6411926B1 (en) * | 1999-02-08 | 2002-06-25 | Qualcomm Incorporated | Distributed voice recognition system |
WO2001007281A1 (en) * | 1999-07-24 | 2001-02-01 | Novtech Co Ltd | Apparatus and method for prevention of driving of motor vehicle under the influence of alcohol and prevention of vehicle theft |
US6801222B1 (en) * | 1999-10-14 | 2004-10-05 | International Business Machines Corporation | Method and system for dynamically building application help navigation information |
US6535850B1 (en) * | 2000-03-09 | 2003-03-18 | Conexant Systems, Inc. | Smart training and smart scoring in SD speech recognition system with user defined vocabulary |
US6671669B1 (en) * | 2000-07-18 | 2003-12-30 | Qualcomm Incorporated | Combined engine system and method for voice recognition |
JP2002132287A (ja) * | 2000-10-20 | 2002-05-09 | Canon Inc | Voice recording method, voice recording device, and storage medium |
JP3263696B2 (ja) | 2000-11-10 | 2002-03-04 | Fujitsu Ten Ltd | Current position display method for navigation device |
JP4348503B2 (ja) * | 2000-12-21 | 2009-10-21 | Mitsubishi Electric Corp | Navigation device |
US20020178004A1 (en) * | 2001-05-23 | 2002-11-28 | Chienchung Chang | Method and apparatus for voice recognition |
US7292918B2 (en) * | 2002-06-21 | 2007-11-06 | Intel Corporation | PC-based automobile owner's manual, diagnostics, and auto care |
EP1400951B1 (de) * | 2002-09-23 | 2009-10-21 | Infineon Technologies AG | Method for computer-aided speech recognition, speech recognition system, and control device for controlling a technical system, and telecommunications device |
GB0224806D0 (en) * | 2002-10-24 | 2002-12-04 | Ibm | Method and apparatus for an interactive voice response system |
2002
- 2002-10-07 JP JP2002293327A patent/JP4304952B2/ja not_active Expired - Fee Related

2003
- 2003-10-07 WO PCT/JP2003/012848 patent/WO2004032113A1/ja active Application Filing
- 2003-10-07 EP EP03754013A patent/EP1450349B1/en not_active Expired - Fee Related
- 2003-10-07 US US10/497,695 patent/US7822613B2/en active Active - Reinstated
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4672668A (en) * | 1982-04-12 | 1987-06-09 | Hitachi, Ltd. | Method and apparatus for registering standard pattern for speech recognition |
JPS62232692A (ja) * | 1986-04-03 | 1987-10-13 | Ricoh Co Ltd | Specific speaker voice registration method |
JPH01285998A (ja) * | 1988-05-13 | 1989-11-16 | Sharp Corp | Speech recognition device |
JPH0488399A (ja) * | 1990-08-01 | 1992-03-23 | Clarion Co Ltd | Speech recognition device |
JPH04310045A (ja) * | 1991-04-08 | 1992-11-02 | Clarion Co Ltd | Voice dialing device with practice function |
JPH11194790A (ja) * | 1997-12-29 | 1999-07-21 | Kyocera Corp | Voice recognition actuation device |
JP2000352992A (ja) * | 1999-06-11 | 2000-12-19 | Fujitsu Ten Ltd | Speech recognition device |
JP2000097719A (ja) | 1999-09-20 | 2000-04-07 | Aisin Aw Co Ltd | Navigation device |
WO2002021509A1 (en) | 2000-09-01 | 2002-03-14 | Snap-On Technologies, Inc. | Computer-implemented speech recognition system training |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10448762B2 (en) | 2017-09-15 | 2019-10-22 | Kohler Co. | Mirror |
US10663938B2 (en) | 2017-09-15 | 2020-05-26 | Kohler Co. | Power operation of intelligent devices |
US10887125B2 (en) | 2017-09-15 | 2021-01-05 | Kohler Co. | Bathroom speaker |
US11093554B2 (en) | 2017-09-15 | 2021-08-17 | Kohler Co. | Feedback for water consuming appliance |
US11099540B2 (en) | 2017-09-15 | 2021-08-24 | Kohler Co. | User identity in household appliances |
US11314214B2 (en) | 2017-09-15 | 2022-04-26 | Kohler Co. | Geographic analysis of water conditions |
US11314215B2 (en) | 2017-09-15 | 2022-04-26 | Kohler Co. | Apparatus controlling bathroom appliance lighting based on user identity |
US11892811B2 (en) | 2017-09-15 | 2024-02-06 | Kohler Co. | Geographic analysis of water conditions |
US11921794B2 (en) | 2017-09-15 | 2024-03-05 | Kohler Co. | Feedback for water consuming appliance |
US11949533B2 (en) | 2017-09-15 | 2024-04-02 | Kohler Co. | Sink device |
Also Published As
Publication number | Publication date |
---|---|
EP1450349A1 (en) | 2004-08-25 |
EP1450349B1 (en) | 2011-06-22 |
US20050021341A1 (en) | 2005-01-27 |
US7822613B2 (en) | 2010-10-26 |
EP1450349A4 (en) | 2006-01-04 |
JP2004126413A (ja) | 2004-04-22 |
JP4304952B2 (ja) | 2009-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004032113A1 (ja) | Vehicle-mounted control device, and program for causing a computer to execute its operation explanation method | |
US7617108B2 (en) | Vehicle mounted control apparatus | |
JP4131978B2 (ja) | Speech recognition equipment control device | |
JP4260788B2 (ja) | Speech recognition equipment control device | |
US6937982B2 (en) | Speech recognition apparatus and method using two opposite words | |
US20080177541A1 (en) | Voice recognition device, voice recognition method, and voice recognition program | |
US9123327B2 (en) | Voice recognition apparatus for recognizing a command portion and a data portion of a voice input | |
JP2008058409A (ja) | Speech recognition method and speech recognition device | |
JP3702867B2 (ja) | Voice control device | |
JP2007011380A (ja) | Automobile interface | |
JP2002169584A (ja) | Voice operation system | |
JP2003114698A (ja) | Command reception device and program | |
JP3842497B2 (ja) | Speech processing device | |
JP2000322088A (ja) | Speech recognition microphone, speech recognition system, and speech recognition method | |
JP5986468B2 (ja) | Display control device, display system, and display control method | |
JP3837061B2 (ja) | Sound signal recognition system and method, and dialogue control system and method using the sound signal recognition system | |
JP4796686B2 (ja) | Method for training an automatic speech recognizer | |
JP2001312297A (ja) | Speech recognition device | |
JP3500948B2 (ja) | Speech recognition device | |
JP2000305596A (ja) | Speech recognition device and navigation device | |
JPH08160988A (ja) | Speech recognition device | |
WO2015102039A1 (ja) | Speech recognition device | |
JPH08160988A (ja) | Speech recognition device | |
JPH1165592A (ja) | Voice input system | |
JP2019120904A (ja) | Information processing device, method, and program | |
JP2006078791A (ja) | Speech recognition device |
Legal Events
Date | Code | Title | Description |
---|---|---|---
 | AK | Designated states | Kind code of ref document: A1; Designated state(s): US
 | AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR
 | WWE | Wipo information: entry into national phase | Ref document number: 2003754013; Country of ref document: EP; Ref document number: 10497695; Country of ref document: US
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
 | WWP | Wipo information: published in national office | Ref document number: 2003754013; Country of ref document: EP