WO2014006690A1 - Voice recognition device - Google Patents

Voice recognition device

Info

Publication number
WO2014006690A1
WO2014006690A1 (application PCT/JP2012/066974, JP2012066974W)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
information
user
search
display
Prior art date
Application number
PCT/JP2012/066974
Other languages
French (fr)
Japanese (ja)
Inventor
Yuzo Maruta (裕三 丸田)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2012/066974 priority Critical patent/WO2014006690A1/en
Priority to CN201280074470.1A priority patent/CN104428766B/en
Priority to DE112012006652.9T priority patent/DE112012006652T5/en
Priority to JP2014523470A priority patent/JP5925313B2/en
Priority to US14/398,933 priority patent/US9269351B2/en
Publication of WO2014006690A1 publication Critical patent/WO2014006690A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/242 Query formulation
    • G06F16/243 Natural language query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics

Definitions

  • the present invention relates to a speech recognition apparatus that recognizes speech spoken by a user and searches for information.
  • Conventionally, a button for instructing the start of voice recognition (hereinafter referred to as a voice recognition start instruction unit) is displayed on the touch panel or installed on the steering wheel, and the voice uttered after the passenger (user) presses the voice recognition start instruction unit is recognized. That is, the voice recognition start instruction unit outputs a voice recognition start signal; upon receiving the signal, the voice recognition unit detects a speech section corresponding to the content uttered by the passenger (user) from the voice data acquired by the voice acquisition unit after the signal is received, and performs voice recognition processing.
  • On the other hand, in an apparatus that always recognizes speech, the voice recognition unit detects the voice section corresponding to the content uttered by the passenger (user) from the voice data acquired by the voice acquisition unit without receiving a voice recognition start signal, extracts a feature amount of the voice data in that section, performs recognition processing using a recognition dictionary based on the feature amount, and repeatedly outputs a character string as the voice recognition result.
  • the database is searched based on the character string and the search result is displayed.
  • Patent Document 1 discloses a speech recognition apparatus in which the voice uttered by a user is always input and recognized, and the recognition result is displayed; thereafter, the user performs a determination operation using an operation button, whereby processing based on the recognition result is executed.
  • However, a conventional speech recognition apparatus such as that of Patent Document 1 has the problem that, for the same utterance, only search results at the same level are ever displayed. For example, when the user says "gas station", the names and locations of nearby gas stations are always displayed, and in order to learn the price at each gas station, the user has to perform a separate predetermined operation every time.
  • The present invention has been made to solve the above-described problems, and an object thereof is to provide a voice recognition device that can immediately present information at the level required by the user.
  • A speech recognition apparatus according to the present invention includes: a voice acquisition unit that detects and acquires a voice uttered by a user; a voice recognition unit that recognizes the voice data acquired by the voice acquisition unit and extracts keywords; an operation input unit that receives an operation input from the user; a display unit that presents information to the user; an operation response analysis unit that identifies the user's operation based on the information received by the operation input unit and the information displayed on the display unit; an operation display history storage unit that stores, as history information for each keyword extracted by the voice recognition unit, the display contents displayed on the display unit by the operations identified by the operation response analysis unit and the number of times they were displayed; a search level setting unit that sets a search level for each keyword extracted by the voice recognition unit according to the history information stored in the operation display history storage unit; an information search control unit that searches for information using the keyword extracted by the voice recognition unit as a search key, according to the search level set by the search level setting unit, and acquires the search result; and an information presentation control unit that instructs the display unit to display the search result acquired by the information search control unit. The search level setting unit changes the search level for a keyword extracted by the voice recognition unit when the display count in the history information stored in the operation display history storage unit exceeds a predetermined number.
  • According to the speech recognition apparatus of the present invention, it is possible to immediately present information at the level required by the user, and detailed information necessary for the user can always be provided efficiently, improving convenience for the user.
  • FIG. 1 is a diagram illustrating an example of a display screen of a general navigation device.
  • FIG. 2 is a block diagram illustrating an example of a voice recognition device according to Embodiment 1.
  • FIG. 3 is a diagram showing an example of search level definitions.
  • FIG. 4 is a diagram showing an example of the search level for each keyword set in the information search control unit.
  • FIG. 5 is a diagram showing the operation history and display history by the user for each keyword stored in the operation display history storage unit.
  • FIG. 6 is a flowchart showing the operation of the speech recognition apparatus in Embodiment 1.
  • FIG. 7 is a diagram showing an example in which the operation history and display history stored in the operation display history storage unit are updated for one keyword (gas station).
  • FIG. 8 is a diagram showing examples of search result displays.
  • FIG. 9 is a block diagram illustrating an example of a voice recognition device according to Embodiment 2.
  • FIG. 10 is a flowchart showing the operation of the speech recognition apparatus in Embodiment 2.
  • FIG. 11 is a block diagram illustrating an example of a voice recognition device according to Embodiment 3.
  • FIG. 12 is a flowchart illustrating the operation of the speech recognition apparatus according to Embodiment 3.
  • FIG. 13 is a block diagram showing an example of the speech recognition apparatus according to Embodiment 4.
  • FIG. 14 is a flowchart showing the operation of the speech recognition apparatus in Embodiment 4.
  • FIG. 1 is a diagram illustrating an example of a display screen of a general navigation device.
  • the following conversation is performed in a state where a map for normal road guidance and the vehicle mark 71 are displayed on the screen 70 of the navigation device.
  • User A “Soon gasoline will run out”
  • User B “Is there a gas station nearby?”
  • a genre name icon 72 corresponding to the genre name included in the utterance content (in this example, “gas station”) is displayed on the screen 70 of the navigation device (FIG. 1A).
  • Gas stations around the current location are then searched for, and, for example, their names and addresses are displayed as a search result list 73 (FIG. 1B).
  • When a gas station is selected from the list, its location is displayed with a facility mark 74, and detail buttons 75 (for example, a "business hours" button 75a and a "price" button 75b) for displaying detailed information of the gas station, such as business hours and gasoline prices, are displayed (FIG. 1C).
  • When the "business hours" button 75a is pressed, the business hours of the gas station are displayed (FIG. 1D).
  • In the following, a facility search by genre, such as the gas station described above, is described as an example. However, the information to be searched in the present invention is not limited to facility information, and may be traffic information, weather information, address information, news, music information, movie information, program information, and the like.
  • FIG. 2 is a block diagram showing an example of a speech recognition apparatus according to Embodiment 1 of the present invention.
  • This voice recognition device is used by being incorporated in a navigation device mounted on a vehicle (moving body), and includes a voice acquisition unit 1, a voice recognition unit 2, a voice recognition dictionary 3, an information database 4, an information search control unit 5, an information presentation control unit 6, a display unit 7, an operation input unit 8, an operation response analysis unit 9, an operation display history storage unit 10, and a search level setting unit 11.
  • The voice acquisition unit 1 takes in a user utterance collected by a microphone, that is, an input voice, and performs A/D (Analog/Digital) conversion by, for example, PCM (Pulse Code Modulation).
  • The voice recognition unit 2 detects a voice section corresponding to the content uttered by the user from the voice signal digitized by the voice acquisition unit 1, extracts a feature quantity of the voice data in that section, performs recognition processing using the speech recognition dictionary 3 based on the feature quantity, and outputs a character string as the speech recognition result.
  • the recognition process may be performed using a general method such as an HMM (Hidden Markov Model) method.
  • In general, a button for instructing the start of voice recognition (hereinafter referred to as a voice recognition start instruction unit) is displayed on the touch panel or installed on the steering wheel, and the voice uttered after the user presses the voice recognition start instruction unit is recognized. That is, the voice recognition start instruction unit outputs a voice recognition start signal; upon receiving the signal, the voice recognition unit detects a speech section corresponding to the content uttered by the user from the voice data acquired by the voice acquisition unit after the signal is received, and performs the above-described recognition processing.
  • In contrast, the voice recognition unit 2 in the first embodiment always recognizes the content uttered by the user without such a voice recognition start instruction. That is, even without receiving a voice recognition start signal, while the navigation device incorporating the voice recognition device is active, the voice recognition unit 2 repeatedly detects, from the voice data acquired by the voice acquisition unit 1, a speech section corresponding to the content uttered by the user, extracts a feature amount of the voice data in that section, performs recognition processing using the speech recognition dictionary 3 based on the feature amount, and outputs a character string as the speech recognition result. The same applies to the following embodiments.
  • the information database 4 stores at least one of facility information, address information, song information, and the like.
  • The facility information includes, for example, the facility name, the genre to which the facility belongs, position data, business hours, and the presence/absence of a parking lot. The address information includes, for example, an address and position data. The song information includes, for example, information such as the name, artist name, song title, and year.
  • the information database 4 is described as having the facility information stored therein, but it may be traffic information, weather information, address information, news, music information, movie information, program information, and the like.
  • the information database 4 may be stored in, for example, an HDD or a flash memory, or may be on a network and accessed via communication means (not shown).
  • the information search control unit 5 searches the information database 4 using the keyword output by the voice recognition unit 2 according to the search level set by the search level setting unit 11 described later, and acquires information.
  • The search level is an index representing how detailed the information acquired from the information database 4 is (that is, down to which hierarchy level), and is defined for each keyword.
  • FIG. 3 shows an example of search level definitions. For example, when searching using the keyword "gas station" as a search key, if the set search level is "1", the facility name and address information are acquired; if the search level is "2", information on at least one specified item, such as business hours or gasoline prices, is acquired in addition to the facility name and address information.
  • When no search level is set, the information search control unit 5 does not perform search processing. Alternatively, a search level "0" may be defined to represent the state in which no search level is set.
  • FIG. 4 shows an example of the search level for each keyword set in the information search control unit 5 by the search level setting unit 11 described later.
  • As shown in FIG. 4A, one item may be set as additional information; in this case, for example, business hours information is acquired in addition to the facility name and address information. As shown in FIG. 4B, a plurality of items may also be set as additional information. If only the search level is set without additional items, information may be acquired for all items of that level.
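  As a concrete illustration of the level definitions of FIG. 3 and FIG. 4, the mapping from a keyword's search level (plus optional additional items) to the information items acquired could be sketched as follows. The data, field names, and function name here are illustrative assumptions, not taken from the patent.

```python
# Sketch of search-level definitions (cf. FIG. 3 / FIG. 4). Level 0 means
# "no search level set"; level 1 yields the basic name/address result and
# level 2 adds detail items. All names below are illustrative.
SEARCH_LEVELS = {
    "gas station": {
        1: ["name", "address"],
        2: ["business_hours", "price"],  # level-2 detail items
    },
}

def items_to_fetch(keyword, level, additional=None):
    """Return the information items to acquire for `keyword` at `level`.

    If `additional` lists specific level-2 items, only those are added;
    if it is None, all items up to `level` are included.
    """
    levels = SEARCH_LEVELS.get(keyword, {})
    if level < 1 or not levels:
        return []  # no search level set -> no search is performed
    items = list(levels.get(1, []))
    if level >= 2:
        detail = levels.get(2, [])
        items += [i for i in detail if additional is None or i in additional]
    return items
```

  For example, `items_to_fetch("gas station", 2, additional=["business_hours"])` would yield name, address, and business hours, mirroring the "level 2 with one additional item" case of FIG. 4A.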
  • The information presentation control unit 6 gives an instruction to display, on the display unit 7 described later, either an icon or the search result acquired by the information search control unit 5, according to the search level. Specifically, when no search level is set, a genre name icon 72 as shown in FIG. 1A is displayed; when a search level is set, the search results acquired by the information search control unit 5 are displayed like the search result list 73 shown in FIG. 1B.
  • the display unit 7 is a display-integrated touch panel, and includes, for example, an LCD (Liquid Crystal Display) and a touch sensor, and displays a search result according to an instruction from the information presentation control unit 6. Further, the user can operate by directly touching the display unit (touch panel) 7.
  • The operation input unit 8 is an operation key, operation button, touch panel, or the like that receives an operation input from the user and passes the instruction to the in-vehicle navigation device. Examples include hardware switches provided on the in-vehicle navigation device, touch switches set and displayed on the display, and remote control devices such as a remote control installed on the steering wheel, by which various instructions from the user are recognized.
  • the operation response analysis unit 9 specifies a user operation based on information received by the operation input unit 8 and information on a screen displayed on the display unit 7.
  • the identification of the user's operation is not an essential matter of the present invention, and a description thereof is omitted because a known technique may be used.
  • The operation display history storage unit 10 stores, as history information, the display contents displayed on the display unit 7 by the user's operations specified by the operation response analysis unit 9, together with the number of times each content has been displayed.
  • FIG. 5 shows the history information stored for each keyword in the operation display history storage unit 10. As shown in FIG. 5, the content displayed by the user's operation and the number of times that content has been displayed are stored for each keyword; when a user operation is specified by the operation response analysis unit 9, the count for the displayed content is incremented and saved.
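  The per-keyword history of FIG. 5 amounts to a table of display counters, incremented each time the operation response analysis unit identifies a display operation. A minimal sketch follows; the class and method names are assumptions for illustration.

```python
from collections import defaultdict

# Minimal sketch of the operation display history storage unit 10:
# for each keyword, count how many times each display content was shown
# as a result of a user operation (cf. FIG. 5).
class OperationDisplayHistory:
    def __init__(self):
        self._counts = defaultdict(lambda: defaultdict(int))

    def record_display(self, keyword, content):
        """Increment the count when the operation response analysis unit
        identifies that `content` was displayed for `keyword`."""
        self._counts[keyword][content] += 1

    def count(self, keyword, content):
        return self._counts[keyword][content]
```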
  • the search level setting unit 11 refers to the history information stored in the operation display history storage unit 10 and sets a search level for each keyword used as a search key in the information search control unit 5 according to the history information.
  • The search level set in the information search control unit 5 is the level corresponding to display contents whose display count is equal to or greater than (or exceeds) the predetermined number.
  • When a display count in the operation display history storage unit 10 reaches the predetermined number, the search level is changed; each time a display count exceeds the predetermined number, the search level is raised.
  • For example, assume that the predetermined number used as the threshold is three. When the level-1 name/address display count for a keyword reaches three or more, the search level "1" (see FIG. 3) for searching names and addresses is set; when a level-2 display count also reaches three or more, the search level is raised to "2". Alternatively, the search level corresponding to the display content with the deepest hierarchy may be set. For example, again with a threshold of three, for the keyword "convenience store" shown in FIG. 5, the level-1 name/address display count is five and the level-2 business hours and recommended product display counts are both four, so the search level "2" (see FIG. 3) for searching business hours and recommended products, corresponding to the deepest display contents whose counts are three or more, is set.
  • the predetermined number of times as the threshold has been described as being 3 times, but the same value may be used for all keywords, or a different value may be used for each keyword.
  • the search level setting method shown here is an example, and a search level determined by another method may be set.
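  The threshold rule just described can be sketched as a small function: return 0 (no level set) until some display content reaches the threshold, and otherwise the level of the deepest qualifying content. The content-to-level mapping and all names are illustrative assumptions.

```python
# Sketch of the search level setting unit 11's deepest-hierarchy rule
# (cf. FIG. 5 and the "convenience store" example). The mapping of
# display contents to levels is assumed for illustration.
CONTENT_LEVEL = {"name_address": 1, "business_hours": 2,
                 "price": 2, "recommended": 2}

def determine_search_level(display_counts, threshold=3):
    """display_counts: {content: times displayed}. Returns 0 while no
    content has reached the threshold, else the deepest qualifying level."""
    level = 0  # 0 = no search level set
    for content, count in display_counts.items():
        if count >= threshold:
            level = max(level, CONTENT_LEVEL.get(content, 0))
    return level
```

  With the convenience-store counts of FIG. 5 (name/address five times, business hours and recommended products four times each) and a threshold of three, this returns level 2, matching the text above.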
  • First, the voice acquisition unit 1 takes in a user utterance collected by a microphone, that is, an input voice, and performs A/D conversion using, for example, PCM (step ST01).
  • the voice recognition unit 2 detects a voice section corresponding to the content spoken by the user from the voice signal digitized by the voice acquisition unit 1, extracts a feature amount of the voice data of the voice section, and A recognition process is performed using the speech recognition dictionary 3 based on the feature amount, and a character string serving as a keyword is extracted and output (step ST02).
  • When a search level is set, the information search control unit 5 searches the information database 4 according to the search level, using the keyword output by the voice recognition unit 2 as a search key, and acquires information (step ST04).
  • the information presentation control unit 6 instructs the display unit 7 to display the search result acquired by the information search control unit 5 (step ST05).
  • On the other hand, when the search level is not set (NO in step ST03), an icon corresponding to the keyword is displayed (step ST06). Subsequently, when the display screen is operated by the user via the operation input unit 8, the operation response analysis unit 9 analyzes the operation and specifies the user's operation (step ST07), and the operation history and display history stored in the operation display history storage unit 10 are updated by incrementing, for the recognized keyword, the display count of the content displayed by the user's operation (step ST08).
  • Next, the search level setting unit 11 determines whether or not there are display contents, stored in the operation display history storage unit 10 for the keyword extracted in step ST02, whose display count is equal to or greater than the predetermined number serving as a preset threshold (step ST09). When it is determined that there are no such display contents (NO in step ST09), the process returns to step ST01. On the other hand, when it is determined that there are display contents displayed the predetermined number of times or more (YES in step ST09), the search level is determined based on those contents and set in the information search control unit 5 (step ST10).
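  One pass of steps ST01 to ST10 could be sketched as below, with plain dictionaries standing in for the recognition, search, and history units; every name here is an illustrative assumption, not the patent's API.

```python
# Illustrative sketch of one pass of the always-on loop (steps ST01-ST10).
def process_keyword(keyword, search_levels, database, history, threshold=3):
    """Return what would be shown for `keyword`, then re-evaluate its level.

    search_levels: keyword -> currently set search level (missing = unset)
    database:      keyword -> {level: search-result string}
    history:       keyword -> {content: display count}
    """
    level = search_levels.get(keyword, 0)
    if level:                                 # ST03: is a level set?
        shown = database[keyword][level]      # ST04-ST05: search and display
    else:
        shown = f"icon:{keyword}"             # ST06: genre-name icon
    # ST09-ST10: if any display content reached the threshold, set the level
    # to the deepest such content (assumed: 1 = name/address, 2 = details).
    new_level = 0
    for content, n in history.get(keyword, {}).items():
        if n >= threshold:
            new_level = max(new_level, 1 if content == "name_address" else 2)
    if new_level:
        search_levels[keyword] = new_level
    return shown
```

  On the first utterance only the icon is shown, but once the history crosses the threshold a level is set and the next identical utterance goes straight to the detailed search result, which is the behavior the flowchart describes.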
  • Initially, no search level is set in the information search control unit 5, and the screen display counts for all keywords are zero.
  • the “predetermined number of times” used as a threshold value for determination in the search level setting unit 11 is set to two times.
  • a screen for normal road guidance and a vehicle mark 71 are displayed on the screen 70 of the navigation device.
  • User A “Soon gasoline will run out”
  • User B “Is there a gas station nearby?”
  • the voice signal digitized by the voice acquisition unit 1 is recognized by the voice recognition unit 2, and the keyword “gas station” is extracted and output (step ST01, step ST02).
  • At this time, since no search level is set for the keyword "gas station" in the information search control unit 5, the information search control unit 5 does not search the information database 4 (NO in step ST03). Then, a display corresponding to no search level being set, that is, a genre name icon 72 of "gas station", is displayed on the screen 70 of the display unit 7 as shown in FIG. 1A, for example (step ST06).
  • Through these operations, the information stored in the operation display history storage unit 10 is updated for the keyword "gas station" as shown in FIG. 7.
  • Once the search level "1" has been set in the information search control unit 5 for the keyword "gas station", the next time the user utters "gas station", the name and address of each nearby gas station are acquired from the information database 4, and the search result list 73 is displayed as a search result as shown in FIG. 8A (YES in step ST03, steps ST04 and ST05).
  • a screen shown in FIG. 1C is displayed.
  • Suppose the information stored in the operation display history storage unit 10 now has a name/address display count of "3", a business hours display count of "2", and a price display count of "0". Since the business hours display count is equal to or greater than the predetermined number "2", which is the threshold, the search level "2" and the additional information "business hours" are set for the information search control unit 5.
  • Further, suppose the information stored in the operation display history storage unit 10 has a name/address display count of "4", a business hours display count of "2", and a price display count of "2". Since all items are equal to or greater than the predetermined number "2", which is the threshold used for determination in the search level setting unit 11, the search level "2" with the additional information "business hours" and "price" (or with no additional information) is set.
  • In this state, when the user utters "gas station" again, since the search level "2" with the additional information "business hours" and "price" (or with no additional information) is set, the information search control unit 5 acquires the business hours and prices from the information database 4, and the search result list 73 including the business hours and prices is displayed as a search result as shown in FIG. 8C.
  • As described above, according to the first embodiment, the contents displayed by the user's operations and the number of times they were displayed are stored as history information, and it is determined whether the same operation and display, such as checking the business hours every time, have been performed a predetermined number of times or more; information at the level required by the user can thereby be presented immediately, and detailed information necessary for the user can be provided efficiently, improving convenience for the user.
  • FIG. 9 is a block diagram showing an example of a speech recognition apparatus according to Embodiment 2 of the present invention.
  • Note that the same components as those described in the above embodiment are denoted by the same reference numerals, and redundant description is omitted.
  • Compared with the first embodiment shown in FIG. 2, a ring setting unit 12 is further provided, and when the number of times the user has displayed information for a keyword recognized by the voice recognition unit 2 is equal to or greater than (or exceeds) a predetermined number, the user is alerted.
  • Based on the number of times the user has displayed information for the keyword recognized by the voice recognition unit 2, the information search control unit 5 instructs the ring setting unit 12 to perform a predetermined output.
  • Upon receiving an instruction from the information search control unit 5, the ring setting unit 12 changes the setting of the navigation device so as to perform a predetermined output.
  • the predetermined output refers to, for example, a predetermined vibration or sound output such as a vibration of a seat, an output of a notification sound, and a sound output indicating that the keyword is recognized.
  • The processing of steps ST11 to ST19 is the same as that of steps ST01 to ST09 in the flowchart of the first embodiment, and a description thereof is omitted.
  • the search level is set (step ST20), and then the ring setting unit 12 changes the ring setting and performs a predetermined output (step ST21).
  • As described above, according to the second embodiment, when it is determined that the user has displayed information about the keyword a predetermined number of times or more in the past (or more than the predetermined number of times), that is, according to the search level of the keyword, the ring setting unit performs a predetermined output by vibration or voice to alert the user. The user can therefore appropriately recognize that detailed information tailored to the user is about to be presented immediately.
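  The decision made before instructing the ring setting unit reduces to a simple threshold check; the following is an illustrative sketch, with the output kind and all names assumed for illustration.

```python
# Sketch of the second embodiment's alert decision: once the display
# count for a recognized keyword has reached the threshold (i.e. a
# search level is set), a predetermined output is emitted before the
# detailed results are presented. The output kinds are illustrative.
def alert_output(display_count, threshold=2):
    """Return the predetermined output, or None when no alert is needed."""
    if display_count >= threshold:
        return "notification_sound"  # could equally be a seat vibration
    return None
```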
  • FIG. 11 is a block diagram showing an example of a speech recognition apparatus according to Embodiment 3 of the present invention.
  • Note that the same components as those described in the above embodiments are denoted by the same reference numerals, and redundant description is omitted.
  • Compared with the first embodiment, a search level initialization unit 13 is further provided, so that the user can initialize the history information stored in the operation display history storage unit 10 simply by speaking.
  • the voice recognition dictionary 3 is further configured to recognize keywords such as “initialization” and “reset” which mean commands that return history information stored in the operation display history storage unit 10 to an initial state.
  • the voice recognition unit 2 outputs the keyword as a recognition result.
  • When the voice recognition unit 2 extracts a keyword meaning a command for returning to the initial state, such as "initialization" or "reset", the search level initialization unit 13 initializes the history information stored in the operation display history storage unit 10.
  • Steps ST31 to ST32 and steps ST35 to ST42 are the same as steps ST11 to ST12 and steps ST13 to ST20 in the flowchart of the second embodiment (FIG. 10), and a description thereof is omitted.
  • If the keyword extracted by the voice recognition unit 2 in step ST32 is a keyword meaning a command for returning to the initial state, such as "initialization" or "reset" (YES in step ST33), the information stored in the operation display history storage unit 10 is initialized, that is, returned to the initial state (step ST34). If it is any other keyword, the processing from step ST35 onward is performed.
  • As described above, according to the third embodiment, when the keyword extracted from the user's utterance by the voice recognition unit is a keyword meaning a command for returning to the initial state, such as "initialization" or "reset", the history information stored in the operation display history storage unit is initialized. Therefore, when the display of detailed information according to the search level is not as expected, or when the user changes, the contents of the operation display history storage unit can be returned to the initial state simply by speaking a keyword meaning this command.
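  The branching of steps ST33 and ST34 could be sketched as follows; the recognized command spellings and function names are illustrative assumptions.

```python
# Sketch of the third embodiment's voice-triggered reset: keywords such
# as "initialization" or "reset" clear the stored history instead of
# being treated as search keywords.
RESET_COMMANDS = {"initialization", "reset"}

def handle_keyword(keyword, history):
    """Clear `history` in place for a reset command (ST33-ST34);
    otherwise report that normal search processing should continue."""
    if keyword in RESET_COMMANDS:
        history.clear()          # ST34: return to the initial state
        return "initialized"
    return "search"              # proceed with steps ST35 onward
```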
  • FIG. 13 is a block diagram showing an example of a speech recognition apparatus according to Embodiment 4 of the present invention. Note that the same components as those described in the first to third embodiments are denoted by the same reference numerals, and redundant description is omitted.
  • Compared with the first embodiment, a speaker identification unit 14 is further provided, and the history information is held for each user.
  • The speaker identification unit 14 analyzes the voice signal digitized by the voice acquisition unit 1 and identifies the speaker (the user who spoke).
  • The speaker identification method is not an essential matter of the present invention, and a known technique may be used.
  • The operation display history storage unit 10 holds history information as shown in FIG. 5 for each user. When a speaker (speaking user) is identified by the speaker identification unit 14, the history information corresponding to the identified user is validated. Since the other processes are the same as those in the first embodiment, their description is omitted. It is assumed that the speaker identified by the speaker identification unit 14 is the user who operates the operation input unit 8.
  • The search level setting unit 11 refers to the valid history information stored in the operation display history storage unit 10 and sets the search level for each keyword used as a search key in the information search control unit 5 according to that history information.
  • The operation response analysis unit 9 validates the history information corresponding to the speaker identified by the speaker identification unit 14 in the operation display history storage unit 10 (step ST53).
  • The subsequent processing of steps ST54 to ST62 is the same as steps ST02 to ST10 of the flowchart shown in FIG. 6.
  • As described above, according to the fourth embodiment, the speaker is identified from the user's utterance, the search level is set with reference to the history information stored for each speaker, and the corresponding detailed information is displayed. Therefore, even if the user of the navigation device in which the voice recognition device is incorporated changes, the level of information required by each user can be presented immediately, detailed information necessary for the user can always be provided efficiently, and convenience for the user is further improved.
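The per-speaker history handling described above can be sketched roughly as follows. The class and method names, and the idea of keying histories by a user id, are illustrative assumptions; speaker identification itself is, as the text notes, a known technique and is not modelled here.

```python
# Illustrative sketch of Embodiment 4: the operation display history
# storage unit keeps one history table per user, and the table of the
# identified speaker is made "valid" (step ST53) before the search
# level is set. All names are assumptions for the example.

class PerUserHistoryStore:
    def __init__(self):
        self.histories = {}   # user id -> {keyword -> {content -> count}}
        self.active = None    # the currently valid history table

    def validate(self, user_id):
        """Make the identified speaker's history the valid one."""
        self.active = self.histories.setdefault(user_id, {})
        return self.active

store = PerUserHistoryStore()
store.histories["driver"] = {"gas station": {"business hours": 4}}
active = store.validate("driver")
print(active["gas station"]["business hours"])  # 4
print(store.validate("passenger"))  # {} — a new user starts from empty history
```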
  • In the embodiments described above, the user's utterance content is always recognized.
  • However, voice recognition may instead be performed only for a predetermined time after a button is pressed.
  • The user may also be able to set whether recognition is always on or limited to a predetermined period.
  • By always performing voice acquisition and voice recognition while a navigation device incorporating the voice recognition device is activated, voice acquisition and voice recognition are carried out automatically whenever there is an utterance, even if the user is not conscious of it; keywords are extracted from the recognition results, a search level is set, and information at the level requested by the user is displayed immediately. Detailed information necessary for the user can thus always be provided efficiently without requiring any manual operation or expressed intention from the user to start voice recognition.
  • In the embodiments described above, the voice recognition device has been described as being incorporated in a vehicle-mounted navigation device.
  • However, the device in which the voice recognition device of the invention is incorporated is not limited to a vehicle-mounted navigation device; the invention can be applied to any device that can search for and display information through dialogue between the user and the device, such as a navigation device for a moving body including a vehicle, a railroad, a ship, or an aircraft, a portable navigation device, or a portable information processing device.
  • 1 voice acquisition unit, 2 voice recognition unit, 3 voice recognition dictionary, 4 information database, 5 information search control unit, 6 information presentation control unit, 7 display unit, 8 operation input unit, 9 operation response analysis unit, 10 operation display history storage unit, 11 search level setting unit, 12 ringing setting unit, 13 search level initialization unit, 14 speaker identification unit, 70 navigation device screen, 71 vehicle mark, 72 genre name icon, 73 search result list, 74 facility mark, 75 detail button.

Abstract

According to this voice recognition device, for each keyword extracted by the voice recognition unit from the user's utterances, the content displayed as a result of the user's operations and the number of times it was displayed are stored as history information. By assessing whether the same operation and display have occurred a predetermined number of times or more and setting a search level accordingly, the device can immediately present information at the level demanded by the user the next time the same keyword is extracted. Detailed information necessary for the user can therefore always be provided efficiently, which improves convenience for the user.

Description

Voice recognition device
The present invention relates to a voice recognition device that recognizes speech uttered by a user and searches for information.
In a voice recognition function installed in a car navigation system or the like, it is common for the passenger (user) to explicitly indicate (instruct) the start of an utterance to the system. For this purpose, a button for instructing the start of voice recognition (hereinafter referred to as a voice recognition start instruction unit) is displayed on a touch panel or installed on the steering wheel. The voice uttered after the passenger (user) presses the voice recognition start instruction unit is then recognized. That is, the voice recognition start instruction unit outputs a voice recognition start signal; upon receiving the signal, the voice recognition unit detects, from the voice data acquired by the voice acquisition unit after the signal is received, the speech section corresponding to the content uttered by the passenger (user), and performs voice recognition processing.
However, there are also voice recognition devices that always recognize the content spoken by the passenger (user) even without a voice recognition start instruction from the passenger (user). That is, even without receiving a voice recognition start signal, the voice recognition unit repeatedly detects, from the voice data acquired by the voice acquisition unit, the speech section corresponding to the content uttered by the passenger (user), extracts a feature amount of the voice data of that section, performs recognition processing using a recognition dictionary based on the feature amount, and outputs a character string as the voice recognition result. Alternatively, a database is searched based on the character string and the search result is displayed.
For example, Patent Document 1 discloses a voice recognition device that constantly inputs voice uttered by a user, performs voice recognition, and displays the recognition result; when the user then performs a determination operation with an operation button, processing based on the recognition result is executed.
Japanese Patent Laid-Open No. 2008-14818
However, a conventional voice recognition device such as that of Patent Document 1 has the problem that, when the same utterance is recognized, only a search result of the same level is always displayed. For example, when the user says “gas station”, only the names and locations of nearby gas stations are displayed; for the user to learn the price at each gas station, a further predetermined operation must be performed separately every time.
The present invention has been made to solve the above-described problem, and an object of the present invention is to provide a voice recognition device that can immediately present information at the level required by the user.
In order to achieve the above object, a voice recognition device according to the present invention includes: a voice acquisition unit that detects and acquires voice uttered by a user; a voice recognition unit that recognizes the voice data acquired by the voice acquisition unit and extracts keywords; an operation input unit that receives operation input from the user; a display unit that presents information to the user; an operation response analysis unit that identifies the user's operation based on the information received by the operation input unit and the information displayed on the display unit; an operation display history storage unit that stores, as history information for each keyword extracted by the voice recognition unit, the display content displayed on the display unit by the operation identified by the operation response analysis unit and the number of times it was displayed; a search level setting unit that sets a search level for each keyword extracted by the voice recognition unit according to the history information stored in the operation display history storage unit; an information search control unit that searches for information using a keyword extracted by the voice recognition unit as a search key, according to the search level set by the search level setting unit, and acquires search results; and an information presentation control unit that instructs the display unit to display the search results acquired by the information search control unit. The search level setting unit changes the search level for a keyword extracted by the voice recognition unit when the number of display times in the history information stored in the operation display history storage unit reaches a predetermined number or more.
According to the voice recognition device of the present invention, information at the level required by the user can be presented immediately, and detailed information necessary for the user can always be provided efficiently, so that convenience for the user is improved.
FIG. 1 is a diagram showing an example of a display screen of a navigation device. FIG. 2 is a block diagram showing an example of a voice recognition device according to Embodiment 1. FIG. 3 is a diagram showing an example of the definition of search levels. FIG. 4 is a diagram showing an example of the search level set for each keyword in the information search control unit. FIG. 5 is a diagram showing the operation history and display history by the user for each keyword stored in the operation display history storage unit. FIG. 6 is a flowchart showing the operation of the voice recognition device in Embodiment 1. FIG. 7 is a diagram showing an example in which the operation history and display history are updated for one keyword (gas station) stored in the operation display history storage unit. FIG. 8 is a diagram showing a display example of search results. FIG. 9 is a block diagram showing an example of a voice recognition device according to Embodiment 2. FIG. 10 is a flowchart showing the operation of the voice recognition device in Embodiment 2. FIG. 11 is a block diagram showing an example of a voice recognition device according to Embodiment 3. FIG. 12 is a flowchart showing the operation of the voice recognition device in Embodiment 3. FIG. 13 is a block diagram showing an example of a voice recognition device according to Embodiment 4. FIG. 14 is a flowchart showing the operation of the voice recognition device in Embodiment 4.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
First, a navigation device incorporating a voice recognition device, which is the premise of the present invention, will be described. FIG. 1 is a diagram illustrating an example of a display screen of a general navigation device.
For example, suppose that, in a vehicle in which the navigation device is mounted, the following conversation takes place while a map for normal route guidance and the vehicle mark 71 are displayed on the screen 70 of the navigation device.
User A: “We're about to run out of gasoline.”
User B: “I wonder if there's a gas station nearby.”
Then, a genre name icon 72 corresponding to the genre name included in the utterance (in this example, “gas station”) is displayed on the screen 70 of the navigation device (FIG. 1(a)). When the user presses the icon 72, gas stations around the current location are searched for, and, for example, the names and addresses of the gas stations are displayed as a search result list 73 (FIG. 1(b)).
Subsequently, when the user selects one of the displayed search results, the location of the selected gas station is displayed as a facility mark 74, and detail buttons 75 for displaying detailed information on that gas station, such as business hours and gasoline price (for example, a “business hours” button 75a and a “price” button 75b), are displayed (FIG. 1(c)). When the user presses the “business hours” button 75a, the business hours of that gas station are displayed (FIG. 1(d)).
In all of the following embodiments, a facility search by genre, such as the gas station search described above, is used as an example. However, the information searched for in the present invention is not limited to facility information, and may be traffic information, weather information, address information, news, music information, movie information, program information, and the like.
Embodiment 1.
FIG. 2 is a block diagram showing an example of a voice recognition device according to Embodiment 1 of the present invention. This voice recognition device is used by being incorporated in a navigation device mounted on a vehicle (moving body), and includes a voice acquisition unit 1, a voice recognition unit 2, a voice recognition dictionary 3, an information database 4, an information search control unit 5, an information presentation control unit 6, a display unit 7, an operation input unit 8, an operation response analysis unit 9, an operation display history storage unit 10, and a search level setting unit 11.
The voice acquisition unit 1 takes in the user's utterance collected by a microphone, that is, the input voice, and A/D (Analog/Digital) converts it by, for example, PCM (Pulse Code Modulation).
The voice recognition unit 2 detects, from the voice signal digitized by the voice acquisition unit 1, the speech section corresponding to the content uttered by the user, extracts a feature amount of the voice data of that section, performs recognition processing using the voice recognition dictionary 3 based on the feature amount, and outputs a character string as the voice recognition result. The recognition processing may be performed using a general method such as the HMM (Hidden Markov Model) method.
Incidentally, in a voice recognition function installed in a car navigation system or the like, it is common for the user to explicitly indicate (instruct) the start of an utterance to the system. For this purpose, a button for instructing the start of voice recognition (hereinafter referred to as a voice recognition start instruction unit) is displayed on a touch panel or installed on the steering wheel, and the voice uttered after the user presses the voice recognition start instruction unit is recognized. That is, the voice recognition start instruction unit outputs a voice recognition start signal; upon receiving the signal, the voice recognition unit detects, from the voice data acquired by the voice acquisition unit after the signal is received, the speech section corresponding to the content uttered by the user, and performs the recognition processing described above.
However, the voice recognition unit 2 in the first embodiment always recognizes the content spoken by the user, even without the user's voice recognition start instruction described above. That is, even without receiving a voice recognition start signal, whenever the navigation device incorporating this voice recognition device is activated, the voice recognition unit 2 repeatedly detects, from the voice data acquired by the voice acquisition unit 1, the speech section corresponding to the content uttered by the user, extracts a feature amount of the voice data of that section, performs recognition processing using the voice recognition dictionary 3 based on the feature amount, and outputs a character string as the voice recognition result. The same applies to the following embodiments.
The information database 4 stores at least one of facility information, address information, music information, and the like. The facility information includes, for example, the facility name, the genre to which the facility belongs, position data, business hours, and the presence or absence of a parking lot; the address information includes, for example, an address and position data; the music information includes, for example, the album name, artist name, song title, and era. Here, the information database 4 is described as storing facility information, but it may store traffic information, weather information, address information, news, music information, movie information, program information, and the like. The information database 4 may be stored in, for example, an HDD or flash memory, or may be located on a network and accessed via communication means (not shown).
The information search control unit 5 searches the information database 4 using the keyword output by the voice recognition unit 2 as a search key, according to the search level set by the search level setting unit 11 described later, and acquires information. Here, the search level is an index representing how much (to which hierarchy of) detailed information is acquired from the information database 4, and is defined for each keyword.
FIG. 3 shows an example of the definition of search levels. For example, when searching with the keyword “gas station” as a search key, if the set search level is “1”, only the facility name and address information are acquired; if the search level is “2”, in addition to the facility name and address information, information on at least one specified item out of business hours and gasoline price is acquired. When no search level is set, the information search control unit 5 does not perform search processing. Alternatively, setting the search level to “0” may be treated as no search level being set.
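The level definition of FIG. 3 for the keyword “gas station” can be sketched as a small lookup. This is a minimal Python sketch under assumed field names and record layout; it is not the patent's data format.

```python
# Sketch of FIG. 3 for "gas station": level 1 retrieves name and address,
# level 2 additionally retrieves business hours and/or price. Level 0 or
# an unset level means no search is performed. Field names are assumptions.

LEVEL_FIELDS = {
    1: ["name", "address"],
    2: ["name", "address", "business_hours", "price"],
}

def search_fields(level, extra=None):
    """Return the fields to fetch for a given search level."""
    if not level:                       # level 0 / None: no search
        return []
    fields = list(LEVEL_FIELDS[1])      # level 1 items are always included
    if level >= 2:
        # only the specified additional items, if given (cf. FIG. 4(a)/(b));
        # otherwise all level-2 items
        fields += extra if extra else ["business_hours", "price"]
    return fields

print(search_fields(1))                       # ['name', 'address']
print(search_fields(2, ["business_hours"]))   # adds only business hours
```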
FIG. 4 shows an example of the search levels set in the information search control unit 5 for each keyword by the search level setting unit 11 described later. When there are multiple items at the same search level, as with the keyword “gas station” in FIG. 3, one item may be set as additional information, as shown in FIG. 4(a); in this case, business hours information is acquired in addition to the facility name and address information. Alternatively, multiple items may be set as additional information, as shown in FIG. 4(b). When only the search level is set, information may be acquired for all items at that level.
The information presentation control unit 6 instructs the display unit 7, described later, to display an icon or the search results acquired by the information search control unit 5, according to the search level. Specifically, when no search level is set, a genre name icon 72 as shown in FIG. 1(a) is displayed; when a search level is set, the search results acquired by the information search control unit 5 are displayed as in the search result list 73 shown in FIG. 1(b).
The display unit 7 is a display-integrated touch panel, composed of, for example, an LCD (Liquid Crystal Display) and a touch sensor, and displays search results according to instructions from the information presentation control unit 6. The user can also perform operations by directly touching the display unit (touch panel) 7.
The operation input unit 8 consists of operation keys, operation buttons, a touch panel, and the like that receive operation input from the user and input the instructions to the in-vehicle navigation device. The user's various instructions may be given via hardware switches provided on the in-vehicle navigation device, touch switches set and displayed on the display, or a recognition device that recognizes instructions from a remote controller installed on the steering wheel or from a separate remote controller.
The operation response analysis unit 9 identifies the user's operation based on the information received by the operation input unit 8 and the information on the screen displayed on the display unit 7. The identification of the user's operation is not an essential matter of the present invention, and a known technique may be used, so its description is omitted.
The operation display history storage unit 10 is a storage unit that stores, as history information for each keyword extracted by the voice recognition unit 2, the display content displayed on the display unit 7 by the user's operation identified by the operation response analysis unit 9, together with the number of times it was displayed. FIG. 5 shows the history information for each keyword stored in the operation display history storage unit 10. As shown in FIG. 5, the content displayed by the user's operation and the number of times that content was displayed are stored as a pair for each keyword; when a user operation is identified by the operation response analysis unit 9, the count for the content displayed by that operation is incremented and saved.
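The (content, count) pairs of FIG. 5 can be sketched as a nested mapping with an increment operation. This is an illustrative Python sketch; the function name and the exact content labels are assumptions, not the patent's storage format.

```python
# Sketch of the FIG. 5 history record: for each keyword, the content
# displayed by a user operation is stored together with its display
# count, and the count is incremented whenever the operation response
# analysis unit identifies that operation again.

from collections import defaultdict

history = defaultdict(lambda: defaultdict(int))  # keyword -> content -> count

def record_display(keyword, content):
    """Increment and return the display count for (keyword, content)."""
    history[keyword][content] += 1
    return history[keyword][content]

record_display("gas station", "name/address")
record_display("gas station", "name/address")
record_display("gas station", "business hours")
print(dict(history["gas station"]))
# {'name/address': 2, 'business hours': 1}
```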
The search level setting unit 11 refers to the history information stored in the operation display history storage unit 10 and sets the search level for each keyword used as a search key in the information search control unit 5 according to that history information. Here, the search level set in the information search control unit 5 is the level corresponding to display content whose display count is equal to or greater than a predetermined number (or exceeds a predetermined number). For a keyword extracted by the voice recognition unit 2, the search level is changed when a display count in the history information stored in the operation display history storage unit 10 reaches the predetermined number or more, and the search level is raised each time this occurs.
For example, if the predetermined number serving as the threshold is three, then for the keyword “gas station” shown in FIG. 5, the name/address display of hierarchy 1 has occurred six times, while the business hours display of hierarchy 2 has occurred twice and the price display zero times. Therefore, search level “1” (see FIG. 3), which searches for the name and address whose count is at or above the predetermined three times, is set. If the user then performs an operation to display business hours, the business hours display count is updated to three, so the next time the keyword “gas station” is extracted, the business hours display count has reached the predetermined three times or more, and the search level is raised to “2”.
When multiple display contents exceed the predetermined number, the search level corresponding to the display content at the deepest hierarchy may be set, for example. If the threshold is likewise three, then for the keyword “convenience store” shown in FIG. 5, the name/address display of hierarchy 1 has occurred five times, and the business hours display and recommended product display of hierarchy 2 have each occurred four times. Therefore, search level “2” (see FIG. 3), which searches for business hours and recommended products, the deepest display contents at or above the predetermined three times, is set.
The predetermined number used as the threshold has been described as three in each case, but the same value may be used for all keywords, or a different value may be used for each keyword.
The search level setting method shown here is an example, and a search level determined by another method may be set.
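The threshold rule described above (the level of the deepest display content whose count has reached the threshold) can be sketched as follows. The hierarchy mapping and function name are assumptions chosen to mirror the gas station / convenience store examples; the threshold of three is taken from those examples.

```python
# Sketch of the search level setting rule: return the level of the
# deepest display content whose count has reached the threshold, or
# None when no content has reached it yet (no search level set).

CONTENT_LEVEL = {"name/address": 1, "business hours": 2, "price": 2,
                 "recommended products": 2}

def decide_level(counts, threshold=3):
    """Decide the search level for one keyword from its display counts."""
    levels = [CONTENT_LEVEL[c] for c, n in counts.items() if n >= threshold]
    return max(levels) if levels else None

# "gas station" of FIG. 5: name/address 6 times, business hours 2 times
print(decide_level({"name/address": 6, "business hours": 2}))  # 1
# "convenience store": hierarchy-2 contents have reached the threshold
print(decide_level({"name/address": 5, "business hours": 4,
                    "recommended products": 4}))               # 2
print(decide_level({"name/address": 1}))                       # None
```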
Next, the operation of the voice recognition device of Embodiment 1 will be described using the flowchart shown in FIG. 6.
First, the voice acquisition unit 1 takes in the user's utterance collected by a microphone, that is, the input voice, and A/D converts it by, for example, PCM (step ST01).
Next, the voice recognition unit 2 detects, from the voice signal digitized by the voice acquisition unit 1, the speech section corresponding to the content uttered by the user, extracts a feature amount of the voice data of that section, performs recognition processing using the voice recognition dictionary 3 based on the feature amount, and extracts and outputs a character string serving as a keyword (step ST02).
 Then, when a search level has been set by the search level setting unit 11 (YES in step ST03), the information search control unit 5 searches the information database 4 according to that search level, using the keyword output by the voice recognition unit 2 as a search key, and acquires information (step ST04). The information presentation control unit 6 then instructs the display unit 7 to display the search result acquired by the information search control unit 5 (step ST05).
 On the other hand, when no search level has been set (NO in step ST03), an icon corresponding to the keyword is displayed (step ST06).
 Subsequently, when the user operates the display screen via the operation input unit 8, the operation response analysis unit 9 analyzes the operation and identifies the user's operation (step ST07), then increments, for the search keyword, the count of the content displayed by the identified operation, and updates the operation history and display history stored in the operation/display history storage unit 10 (step ST08).
 The search level setting unit 11 determines whether, for the keyword extracted in step ST02, any display content stored in the operation/display history storage unit 10 has a count equal to or greater than the predetermined number serving as the preset threshold (step ST09). When no display content meets the threshold (NO in step ST09), the process returns to step ST01. On the other hand, when some display content meets the threshold (YES in step ST09), the search level is determined based on that content and set in the information search control unit 5 (step ST10).
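The flow of steps ST01 to ST10 can be condensed into a short sketch. The dict-based stores, item names, and return strings are illustrative assumptions standing in for the device's actual units.

```python
# Hypothetical sketch of the Fig. 6 control flow (steps ST01-ST10).
from collections import defaultdict

history = defaultdict(lambda: defaultdict(int))   # keyword -> item -> count
levels = {}                                       # keyword -> search level
ITEM_LEVEL = {"name_address": 1, "business_hours": 2, "price": 2}

def process_utterance(keyword, displayed_items, threshold=2):
    """Handle one recognized keyword; return what the UI would show."""
    if keyword in levels:                                   # ST03: level set
        shown = f"search results at level {levels[keyword]}"  # ST04-ST05
    else:
        shown = "genre icon"                                # ST06
    for item in displayed_items:                            # ST07
        history[keyword][item] += 1                         # ST08
    qualifying = [i for i, c in history[keyword].items() if c >= threshold]
    if qualifying:                                          # ST09: YES
        levels[keyword] = max(ITEM_LEVEL[i] for i in qualifying)  # ST10
    return shown

# Two icon-only interactions accumulate history; the third, with the
# threshold of 2 reached, immediately shows level-1 results.
assert process_utterance("gas station", ["name_address"]) == "genre icon"
assert process_utterance("gas station", ["name_address"]) == "genre icon"
assert process_utterance("gas station", []) == "search results at level 1"
```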
 Next, a specific example will be described. For the sake of explanation, assume the initial state: no search level is set in the information search control unit 5, and all screen-display counts for every keyword are 0. In addition, the "predetermined number" used as the threshold for the determination by the search level setting unit 11 is 2.
 For example, in a vehicle equipped with the navigation device, with the map for ordinary route guidance and the own-vehicle mark 71 displayed on the screen 70 of the navigation device, suppose the following conversation takes place:
 User A: "We're about to run out of gasoline."
 User B: "Is there a gas station nearby?"
The voice signal digitized by the voice acquisition unit 1 is recognized by the voice recognition unit 2, and the keyword "gas station" is extracted and output (steps ST01 and ST02).
 Here, in the initial state described above, no search level is set for the keyword "gas station" in the information search control unit 5, so the information search control unit 5 does not search the information database 4 (NO in step ST03). The display corresponding to an unset search level, that is, the genre name icon 72 for "gas station", is then shown on the screen 70 of the display unit 7, for example as in Fig. 1(a) (step ST06).
 Then, when the user performs the operations shown in Figs. 1(a), (b), and (c) and a screen like Fig. 1(d) is displayed, the operation response analysis unit 9 identifies that these operations produced a name/address display and a business-hours display, and for the keyword "gas station" the counts for name/address display and business-hours display are incremented and the contents of the operation/display history storage unit 10 are updated (steps ST07 and ST08). As a result, the operation/display history storage unit 10 holds, for the keyword "gas station", a history of name/address display count "1", business-hours display count "1", and price display count "0".
 If, on another occasion, the user has a similar conversation and again goes as far as displaying names and addresses, the information stored in the operation/display history storage unit 10 becomes, as shown in Fig. 7(a), name/address display count "2", business-hours display count "1", and price display count "0" for the keyword "gas station". Since the name/address display count has reached the threshold of "2", search level "1" is set in the information search control unit 5 (steps ST09 and ST10).
 Further, when the user has a similar conversation on yet another occasion, search level "1" is already set in the information search control unit 5 for the keyword "gas station", so name and address information is acquired from the information database 4 and a search result list 73 is displayed as shown in Fig. 8(a) (YES in step ST03, then steps ST04 and ST05). If the user selects one of the search results, the screen shown in Fig. 1(c) is displayed. The information stored in the operation/display history storage unit 10 then becomes, as shown in Fig. 7(b), name/address display count "3", business-hours display count "2", and price display count "0". Since the business-hours display count has reached the threshold of "2", search level "2" with additional information "business hours" is set in the information search control unit 5.
 Similarly, when the information stored in the operation/display history storage unit 10 is as in Fig. 7(b) and the user has a similar conversation on a further occasion, search level "2" with additional information "business hours" is set in the information search control unit 5 for the keyword "gas station", so information including business hours is acquired from the information database 4, and a search result list 73 including business hours is displayed as shown in Fig. 8(b). If the user selects one of the search results, the screen shown in Fig. 1(d) is displayed.
 When the information stored in the operation/display history storage unit 10 is, as shown in Fig. 7(c), name/address display count "4", business-hours display count "2", and price display count "2", all items are at or above the threshold "2" used for the determination by the search level setting unit 11, so search level "2" with additional information "business hours" and "price" (or with no additional-information restriction) is set in the information search control unit 5.
 In this state, when the user has the conversation again, search level "2" with additional information "business hours" and "price" (or no additional-information restriction) is set in the information search control unit 5 for the keyword "gas station", so business hours and prices are acquired from the information database 4, and a search result list 73 including business hours and prices is displayed as a search result, as shown in Fig. 8(c).
 As described above, according to Embodiment 1, for a keyword extracted by the voice recognition unit from the user's utterances, the content displayed through the user's operations and the number of times it was displayed are stored as history information, and the search level is set by determining whether the same operation and display have occurred a predetermined number of times or more (for example, the user checking the business hours every time they view "gas station" information). The next time the same keyword is extracted, information at the level the user wants can be presented immediately, and the detailed information the user needs can always be provided efficiently, which improves user convenience.
Embodiment 2.
 Fig. 9 is a block diagram showing an example of a voice recognition device according to Embodiment 2 of this invention. Components identical to those described in Embodiment 1 are given the same reference numerals and duplicated description is omitted. Compared with Embodiment 1, Embodiment 2 further includes a ringing setting unit 12 and alerts the user when the user's information-display count for a keyword recognized by the voice recognition unit 2 is equal to or greater than (or exceeds) a predetermined number.
 Based on the user's information-display count for the keyword recognized by the voice recognition unit 2, when the search level setting unit 11 has set a search level of "1" or higher (or a search level greater than a predetermined value), the information search control unit 5 instructs the ringing setting unit 12 to produce output.
 On receiving the instruction from the information search control unit 5, the ringing setting unit 12 changes the settings of the navigation device so as to produce a predetermined output. Here, the predetermined output means a predetermined vibration or sound output, for example seat vibration, a notification sound, or a voice announcement that the keyword has been recognized.
 Next, the operation of the voice recognition device of Embodiment 2 will be described using the flowchart shown in Fig. 10.
 The processing of steps ST11 to ST19 is the same as steps ST01 to ST09 of the flowchart of Fig. 6 in Embodiment 1, so its description is omitted.
 Then, when it is determined for the keyword extracted by the voice recognition unit 2 that some display content in the operation and display history has reached the predetermined count (YES in step ST19), the search level is set as in Embodiment 1 (step ST20), after which the ringing setting unit 12 changes the ringing setting and produces the predetermined output (step ST21).
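The extra step of Embodiment 2 can be sketched as a small decision function; the function name and return value are illustrative assumptions.

```python
# Hypothetical sketch of the Embodiment 2 addition (steps ST20-ST21):
# once a search level of "1" or higher has been set for a keyword,
# a predetermined alert output (vibration or sound) is requested.

def ring_output(search_level, threshold_level=1):
    """Return the alert the ringing setting unit would request, or None."""
    if search_level >= threshold_level:
        return "vibrate_or_sound"
    return None

assert ring_output(1) == "vibrate_or_sound"   # level "1" set: alert
assert ring_output(0) is None                 # no level set: no alert
```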
 As described above, according to Embodiment 2, when it is determined that the user has displayed information about a keyword extracted by the voice recognition unit a predetermined number of times or more (or more than the predetermined number) in the past, that is, according to that keyword's search level, the ringing setting unit produces a predetermined vibration or sound output to draw the user's attention. The user can therefore properly recognize that detailed information matching that search level is now being presented immediately.
Embodiment 3.
 Fig. 11 is a block diagram showing an example of a voice recognition device according to Embodiment 3 of this invention. Components identical to those described in Embodiments 1 and 2 are given the same reference numerals and duplicated description is omitted. Compared with Embodiment 2, Embodiment 3 further includes a search level initialization unit 13, which allows the user to initialize the history information stored in the operation/display history storage unit 10 by speaking, whenever the user wishes to do so.
 The voice recognition dictionary 3 is further configured to also recognize keywords such as "initialize" and "reset" that mean a command to return the history information stored in the operation/display history storage unit 10 to its initial state, and the voice recognition unit 2 outputs such a keyword as a recognition result.
 When the voice recognition unit 2 extracts a keyword meaning a command to return to the initial state, such as "initialize" or "reset", the search level initialization unit 13 initializes the history information stored in the operation/display history storage unit 10.
 Next, the operation of the voice recognition device of Embodiment 3 will be described using the flowchart shown in Fig. 12.
 Steps ST31 to ST32 and steps ST35 to ST42 are the same as steps ST11 to ST12 and steps ST13 to ST20 of the flowchart of Fig. 10 in Embodiment 2, so their description is omitted.
 Then, when the keyword extracted by the voice recognition unit 2 in step ST32 is a keyword meaning a command to return to the initial state, such as "initialize" or "reset" (YES in step ST33), the information stored in the operation/display history storage unit 10 is initialized, i.e., returned to its initial state (step ST34). For any other keyword, the processing from step ST35 onward is performed.
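The branch at steps ST33 to ST34 can be sketched as follows; the command strings and function are illustrative assumptions (the actual device matches recognized keywords such as "initialize" or "reset" against the dictionary).

```python
# Hypothetical sketch of the Embodiment 3 reset path (steps ST33-ST34):
# when the recognized keyword is a reset command, the stored history is
# returned to its initial state instead of being used for a search.

RESET_COMMANDS = {"initialize", "reset"}

def handle_keyword(keyword, history):
    if keyword in RESET_COMMANDS:        # ST33: YES
        history.clear()                  # ST34: back to initial state
        return "history initialized"
    return "continue normal search"      # ST35 onward

h = {"gas station": {"name_address": 4}}
assert handle_keyword("reset", h) == "history initialized" and h == {}
assert handle_keyword("gas station", h) == "continue normal search"
```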
 As described above, according to Embodiment 3, when the keyword extracted by the voice recognition unit from the user's utterances is a keyword meaning a command to return to the initial state, such as "initialize" or "reset", the history information stored in the operation/display history storage unit is initialized. Therefore, when the user wants to initialize (for example, when the display of detailed information according to the search level is no longer what is expected, or when the user has changed), the contents of the operation/display history storage unit can be returned to the initial state simply by speaking a keyword meaning this command.
Embodiment 4.
 Fig. 13 is a block diagram showing an example of a voice recognition device according to Embodiment 4 of this invention. Components identical to those described in Embodiments 1 to 3 are given the same reference numerals and duplicated description is omitted. Compared with Embodiment 1, Embodiment 4 further includes a speaker identification unit 14 and switches the history information that is consulted for each speaker (the user who spoke).
 The speaker identification unit 14 analyzes the voice signal digitized by the voice acquisition unit 1 and identifies the speaker (the user who spoke). The speaker identification method is not an essential matter of this invention and a known technique may be used, so its description is omitted here.
 The operation/display history storage unit 10 holds history information as shown in Fig. 5 for each user. When the speaker identification unit 14 identifies the speaker (the user who spoke), the history information corresponding to the identified user is made valid. The other processing is the same as in Embodiment 1 and its description is omitted. It is assumed that the speaker identified by the speaker identification unit 14 is the user who operates the operation input unit 8.
 The search level setting unit 11 refers to the history information stored in the operation/display history storage unit 10 that has been made valid, and sets, according to that history information, the search level for each keyword used as a search key in the information search control unit 5.
 Next, the operation of the voice recognition device of Embodiment 4 will be described using the flowchart shown in Fig. 14.
 First, the voice acquisition unit 1 captures a user utterance picked up by the microphone, i.e., the input voice, and A/D-converts it, for example by PCM (step ST51).
 Next, the speaker identification unit 14 analyzes the voice signal captured by the voice acquisition unit 1 and identifies the speaker (step ST52).
 Then, the operation response analysis unit 9 validates, in the operation/display history storage unit 10, the history information corresponding to the speaker identified by the speaker identification unit 14 (step ST53).
 The subsequent processing of steps ST54 to ST62 is the same as steps ST02 to ST10 of the flowchart shown in Fig. 6 in Embodiment 1, so its description is omitted.
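The per-speaker history selection of steps ST52 to ST53 can be sketched as follows; the nested-dict store and user identifiers are illustrative assumptions.

```python
# Hypothetical sketch of Embodiment 4 (steps ST52-ST53): histories are
# kept per user, and the identified speaker's history is the one made
# valid for subsequent search-level decisions.

from collections import defaultdict

histories = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
# user -> keyword -> display item -> count

def active_history(speaker_id):
    """Validate and return the history of the identified speaker."""
    return histories[speaker_id]

# Two users accumulate independent histories for the same keyword.
histories["user_a"]["gas station"]["business_hours"] = 3
histories["user_b"]["gas station"]["price"] = 1
assert active_history("user_a")["gas station"]["business_hours"] == 3
assert active_history("user_b")["gas station"]["business_hours"] == 0
```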
 As described above, according to Embodiment 4, the speaker is identified from the user's utterance, the search level is set with reference to the history information stored for each speaker, and detailed information is displayed accordingly. Therefore, even if the user of the navigation device incorporating this voice recognition device changes, information at the level each user wants can be presented immediately, and the detailed information each user needs can always be provided efficiently, further improving user convenience.
 In the embodiments above, the user's utterances are always recognized; however, voice recognition may instead be performed only during a predetermined period (for example, while the user holds down a button for voice recognition, or for a predetermined time after the button is pressed). It may also be made user-configurable whether recognition is always on or limited to a predetermined period.
 However, by always performing voice acquisition and voice recognition whenever the navigation device incorporating the voice recognition device is running, as in the embodiments above, any utterance is automatically acquired and recognized, a keyword is extracted from the recognition result, a search level is set, and information at the level the user wants is displayed immediately, even without the user being aware of it. The detailed information the user needs can thus always be provided efficiently, without requiring any manual operation or deliberate input by the user to start voice acquisition or voice recognition.
 In the embodiments above, the voice recognition device has been described as incorporated in an in-vehicle navigation device, but the device incorporating the voice recognition device of this invention is not limited to an in-vehicle navigation device. It can be applied in any form to any device that can search for and display information through dialogue between the user and the device, such as navigation devices for moving bodies including people, vehicles, railways, ships, or aircraft, portable navigation devices, and portable information processing devices.
 Within the scope of the invention, the embodiments may be freely combined, and any component of any embodiment may be modified or omitted.
 The device incorporating the voice recognition device of this invention is not limited to an in-vehicle navigation device and can be applied in any form to any device that can search for and display information through dialogue between the user and the device, such as navigation devices for moving bodies including people, vehicles, railways, ships, or aircraft, portable navigation devices, and portable information processing devices.
 1: voice acquisition unit; 2: voice recognition unit; 3: voice recognition dictionary; 4: information database; 5: information search control unit; 6: information presentation control unit; 7: display unit; 8: operation input unit; 9: operation response analysis unit; 10: operation/display history storage unit; 11: search level setting unit; 12: ringing setting unit; 13: search level initialization unit; 14: speaker identification unit; 70: navigation device screen; 71: own-vehicle mark; 72: genre name icon; 73: search result list; 74: facility mark; 75: details button.

Claims (6)

  1.  A voice recognition device comprising:
     a voice acquisition unit that detects and acquires voice uttered by a user;
     a voice recognition unit that recognizes the voice data acquired by the voice acquisition unit and extracts a keyword;
     an operation input unit that receives operation input from the user;
     a display unit that presents information to the user;
     an operation response analysis unit that identifies the user's operation based on the information received by the operation input unit and the information displayed on the display unit;
     an operation/display history storage unit that stores, as history information for each keyword extracted by the voice recognition unit, the display content shown on the display unit by the operation identified by the operation response analysis unit and the number of times it was displayed;
     a search level setting unit that sets a search level for the keyword extracted by the voice recognition unit according to the history information stored in the operation/display history storage unit;
     an information search control unit that searches for information using the keyword extracted by the voice recognition unit as a search key, in accordance with the search level set by the search level setting unit, and acquires a search result; and
     an information presentation control unit that instructs the display unit to display the search result acquired by the information search control unit,
     wherein the search level setting unit changes the search level when, for the keyword extracted by the voice recognition unit, a display count in the history information stored in the operation/display history storage unit reaches a predetermined number or more.
  2.  The voice recognition device according to claim 1, wherein the search level setting unit raises the search level each time, for the keyword extracted by the voice recognition unit, a display count in the history information stored in the operation/display history storage unit reaches the predetermined number or more.
  3.  The voice recognition device according to claim 1, wherein the information that the information search control unit searches using the keyword extracted by the voice recognition unit as a search key is any of facility information, traffic information, weather information, address information, news, music information, movie information, or program information.
  4.  The voice recognition device according to claim 1, further comprising a speaker identification unit that identifies the user who uttered the voice acquired by the voice acquisition unit,
     wherein the operation/display history storage unit stores history information for each user and validates the history information of the user identified by the speaker identification unit, and
     the search level setting unit sets the search level with reference to the history information validated in the operation/display history storage unit.
  5.  The voice recognition device according to claim 1, further comprising a ringing setting unit that draws the user's attention by vibration or sound according to the search level.
  6.  The voice recognition device according to claim 1, further comprising a search level initialization unit that returns the history information stored in the operation/display history storage unit to an initial state when the keyword extracted by the voice recognition unit is a keyword meaning a command to return to the initial state.

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/JP2012/066974 WO2014006690A1 (en) 2012-07-03 2012-07-03 Voice recognition device
CN201280074470.1A CN104428766B (en) 2012-07-03 2012-07-03 Speech recognition equipment
DE112012006652.9T DE112012006652T5 (en) 2012-07-03 2012-07-03 Voice recognition device
JP2014523470A JP5925313B2 (en) 2012-07-03 2012-07-03 Voice recognition device
US14/398,933 US9269351B2 (en) 2012-07-03 2012-07-03 Voice recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/066974 WO2014006690A1 (en) 2012-07-03 2012-07-03 Voice recognition device

Publications (1)

Publication Number Publication Date
WO2014006690A1 true WO2014006690A1 (en) 2014-01-09

Family

ID=49881481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/066974 WO2014006690A1 (en) 2012-07-03 2012-07-03 Voice recognition device

Country Status (5)

Country Link
US (1) US9269351B2 (en)
JP (1) JP5925313B2 (en)
CN (1) CN104428766B (en)
DE (1) DE112012006652T5 (en)
WO (1) WO2014006690A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017004193A (en) * 2015-06-09 2017-01-05 凸版印刷株式会社 Information processing device, information processing method, and program
JP2019079345A (en) * 2017-10-25 2019-05-23 アルパイン株式会社 Information presentation device, information presentation system, and terminal device

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102091003B1 (en) * 2012-12-10 2020-03-19 삼성전자 주식회사 Method and apparatus for providing context aware service using speech recognition
US9761224B2 (en) * 2013-04-25 2017-09-12 Mitsubishi Electric Corporation Device and method that posts evaluation information about a facility at which a moving object has stopped off based on an uttered voice
US10008204B2 (en) * 2014-06-30 2018-06-26 Clarion Co., Ltd. Information processing system, and vehicle-mounted device
JP6418820B2 (en) * 2014-07-07 2018-11-07 キヤノン株式会社 Information processing apparatus, display control method, and computer program
CN104834691A (en) * 2015-04-22 2015-08-12 中国建设银行股份有限公司 Voice robot
US10018977B2 (en) * 2015-10-05 2018-07-10 Savant Systems, Llc History-based key phrase suggestions for voice control of a home automation system
JP6625508B2 (en) * 2016-10-24 2019-12-25 クラリオン株式会社 Control device, control system
JP6920878B2 (en) 2017-04-28 2021-08-18 フォルシアクラリオン・エレクトロニクス株式会社 Information providing device and information providing method
KR102353486B1 (en) * 2017-07-18 2022-01-20 엘지전자 주식회사 Mobile terminal and method for controlling the same
JP6978174B2 (en) * 2017-10-11 2021-12-08 アルパイン株式会社 Evaluation information generation system and in-vehicle device
KR20200042127A (en) * 2018-10-15 2020-04-23 현대자동차주식회사 Dialogue processing apparatus, vehicle having the same and dialogue processing method
CN113113029A (en) * 2018-08-29 2021-07-13 胡开良 Unmanned aerial vehicle voiceprint news tracking method
US11094327B2 (en) * 2018-09-28 2021-08-17 Lenovo (Singapore) Pte. Ltd. Audible input transcription
JP7266432B2 (en) * 2019-03-14 2023-04-28 本田技研工業株式会社 AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM
CN109996026B (en) * 2019-04-23 2021-01-19 广东小天才科技有限公司 Video special effect interaction method, device, equipment and medium based on wearable equipment
CN111696548A (en) * 2020-05-13 2020-09-22 深圳追一科技有限公司 Method and device for displaying driving prompt information, electronic equipment and storage medium
CN113470636B (en) * 2020-07-09 2023-10-27 青岛海信电子产业控股股份有限公司 Voice information processing method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002297374A (en) * 2001-03-30 2002-10-11 Alpine Electronics Inc Voice retrieving device
JP2007206886A (en) * 2006-01-31 2007-08-16 Canon Inc Information processor and method
WO2008136105A1 (en) * 2007-04-25 2008-11-13 Pioneer Corporation Display device, display method, display program and recording medium
WO2009147745A1 (en) * 2008-06-06 2009-12-10 三菱電機株式会社 Retrieval device
WO2010013369A1 (en) * 2008-07-30 2010-02-04 三菱電機株式会社 Voice recognition device
JP2011075525A (en) * 2009-10-02 2011-04-14 Clarion Co Ltd Navigation device and method of changing operation menu

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030400A (en) * 2002-06-27 2004-01-29 Fujitsu Ten Ltd Retrieval system
US7386454B2 (en) * 2002-07-31 2008-06-10 International Business Machines Corporation Natural error handling in speech recognition
JP2004185240A (en) * 2002-12-02 2004-07-02 Alpine Electronics Inc Electronic equipment with operation history reproduction function, and reproduction method of operation history
US9224394B2 (en) * 2009-03-24 2015-12-29 Sirius Xm Connected Vehicle Services Inc Service oriented speech recognition for in-vehicle automated interaction and in-vehicle user interfaces requiring minimal cognitive driver processing for same
JP4423327B2 (en) * 2005-02-08 2010-03-03 日本電信電話株式会社 Information communication terminal, information communication system, information communication method, information communication program, and recording medium recording the same
JP4736982B2 (en) 2006-07-06 2011-07-27 株式会社デンソー Operation control device, program
DE112007002665B4 (en) * 2006-12-15 2017-12-28 Mitsubishi Electric Corp. Voice recognition system
WO2008084575A1 (en) * 2006-12-28 2008-07-17 Mitsubishi Electric Corporation Vehicle-mounted voice recognition apparatus
CN101499277B (en) * 2008-07-25 2011-05-04 中国科学院计算技术研究所 Service intelligent navigation method and system
JP5972372B2 (en) * 2012-06-25 2016-08-17 三菱電機株式会社 Car information system
JP2014109889A (en) * 2012-11-30 2014-06-12 Toshiba Corp Content retrieval device, content retrieval method and control program


Also Published As

Publication number Publication date
US20150120300A1 (en) 2015-04-30
CN104428766B (en) 2017-07-11
DE112012006652T5 (en) 2015-03-26
JP5925313B2 (en) 2016-05-25
JPWO2014006690A1 (en) 2016-06-02
US9269351B2 (en) 2016-02-23
CN104428766A (en) 2015-03-18

Similar Documents

Publication Publication Date Title
JP5925313B2 (en) Voice recognition device
WO2016051519A1 (en) Speech recognition system
JP5921722B2 (en) Voice recognition apparatus and display method
KR101614756B1 (en) Apparatus of voice recognition, vehicle and having the same, method of controlling the vehicle
JP5158174B2 (en) Voice recognition device
JPWO2014188512A1 (en) Speech recognition device, recognition result display device, and display method
JP5677650B2 (en) Voice recognition device
WO2013005248A1 (en) Voice recognition device and navigation device
EP2196989A1 (en) Grammar and template-based speech recognition of spoken utterances
JP5893217B2 (en) Voice recognition apparatus and display method
JP2014142566A (en) Voice recognition system and voice recognition method
US9881609B2 (en) Gesture-based cues for an automatic speech recognition system
JP4466379B2 (en) In-vehicle speech recognition device
WO2015125274A1 (en) Speech recognition device, system, and method
US20170069311A1 (en) Adapting a speech system to user pronunciation
CN105448293A (en) Voice monitoring and processing method and voice monitoring and processing device
US20130013310A1 (en) Speech recognition system
JP6522009B2 (en) Speech recognition system
US20160019892A1 (en) Procedure to automate/simplify internet search based on audio content from a vehicle radio
US20230315997A9 (en) Dialogue system, a vehicle having the same, and a method of controlling a dialogue system
JP3296783B2 (en) In-vehicle navigation device and voice recognition method
JP4624825B2 (en) Voice dialogue apparatus and voice dialogue method
JP3759313B2 (en) Car navigation system
JP5446540B2 (en) Information retrieval apparatus, control method, and program
JP2001154691A (en) Voice recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12880630

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014523470

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 14398933

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1120120066529

Country of ref document: DE

Ref document number: 112012006652

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12880630

Country of ref document: EP

Kind code of ref document: A1