EP3089158B1 - Speech recognition processing - Google Patents

Speech recognition processing

Publication number: EP3089158B1
Authority: EP (European Patent Office)
Prior art keywords: voice, information, utterance, vocabulary, exclusion
Legal status: Active
Application number: EP14875013.6A
Other languages: German (de), French (fr)
Other versions: EP3089158A4 (en), EP3089158A1 (en)
Inventors: Tomohiro Konuma, Tomohiro Koganei
Current Assignee: Panasonic Intellectual Property Management Co Ltd
Original Assignee: Panasonic Intellectual Property Management Co Ltd
Application filed by Panasonic Intellectual Property Management Co Ltd
Publication of EP3089158A4
Publication of EP3089158A1
Application granted
Publication of EP3089158B1

Classifications

    • G - PHYSICS
      • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
          • G10L 15/00 - Speech recognition
            • G10L 15/08 - Speech classification or search
              • G10L 15/18 - Speech classification or search using natural language modelling
                • G10L 15/1822 - Parsing for meaning understanding
            • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L 2015/223 - Execution procedure of a spoken command
            • G10L 15/28 - Constructional details of speech recognition systems
              • G10L 15/32 - Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
          • G10L 17/00 - Speaker identification or verification
            • G10L 17/06 - Decision making techniques; Pattern matching strategies
          • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
            • G10L 25/78 - Detection of presence or absence of voice signals
              • G10L 2025/783 - Detection of presence or absence of voice signals based on threshold decision

Definitions

  • The present disclosure relates to voice recognition processing apparatuses, voice recognition processing methods, and display apparatuses that operate by recognizing a voice uttered by a user.
  • Patent Literature 1 discloses a voice input apparatus that has a voice recognition function. This voice input apparatus is configured to receive a voice uttered by a user, to recognize a command indicated by that voice by analyzing the received voice (voice recognition), and to control a device in accordance with the recognized command. That is, the voice input apparatus of Patent Literature 1 can perform voice recognition on a voice arbitrarily uttered by the user and control the device in accordance with the command that results from the voice recognition.
  • A user of this voice input apparatus can, for example, select hypertext displayed on a browser by using the voice recognition function while operating the browser on an apparatus such as a television receiver (hereinafter referred to as "television") or a PC (Personal Computer).
  • The user can also use this voice recognition function to perform a search on a web site (search site) that provides a search service.
  • Triggerless recognition may be performed in order to increase convenience of the user.
  • Triggerless recognition refers to a mode in which the voice input apparatus always performs voice collection and voice recognition of the collected voice, without limiting the period in which voice input for voice recognition is accepted.
  • In triggerless recognition, however, it is difficult to distinguish a voice uttered by the user for the purpose of voice recognition from a voice that is not intended for voice recognition, such as conversation among users or a monologue of the user.
  • As a result, a voice that is not intended for voice recognition may be falsely recognized (false recognition).
  • US2008/059186 and US2008/120107 disclose acquiring a voice uttered by a user, converting the corresponding voice information into "first information", and determining whether that first information is to be rejected or to be accepted (and then executed) based on a comparison of the first information with a pre-stored exclusion vocabulary.
  • In US2008/059186, a subset blocking word list contains words that are "non-content" words.
  • US2008/120107 teaches the use of a rejectable word dictionary to prevent out-of-vocabulary utterances from being misrecognized as recognition vocabulary words.
  • Document EP1562178 discloses a recognition result post-processor that uses a skip list containing results to be removed from the recognition results list.
  • The present disclosure provides a voice recognition processing apparatus and a voice recognition processing method that reduce false recognition and improve operability for the user, as defined in claims 1 and 5.
  • Another object of the invention concerns a display apparatus as defined in claim 6.
  • The voice recognition processing apparatus can improve operability when the user performs voice operation.
  • Although television receiver (television) 10 is cited in the present example as an example of a display apparatus including a voice recognition processing apparatus, the display apparatus is not limited to television 10.
  • For example, the display apparatus may be an apparatus such as a PC, a tablet terminal, or a mobile terminal.
  • Although voice recognition processing system 11 is configured to perform triggerless recognition, the present disclosure is not limited to triggerless recognition.
  • The present disclosure is also applicable to a system in which voice recognition is started by an operation of user 700 for starting voice recognition.
  • FIG. 1 is a diagram schematically illustrating voice recognition processing system 11 according to the example.
  • Television 10, which is an example of the display apparatus, incorporates the voice recognition processing apparatus.
  • Voice recognition processing system 11 includes television 10 that is an example of a display apparatus, and voice recognition server 50.
  • Voice recognition icon 203 and indicator 202, which indicates the volume of a collected voice, are displayed on display device 140 of television 10, together with an image based on signals such as an input image signal and a received broadcast signal. This indicates to user 700 that an operation of television 10 based on the voice of user 700 (hereinafter referred to as "voice operation") is available, and prompts user 700 to utter a voice.
  • Television 10 may have a configuration that includes a remote control or mobile terminal configured such that the voice uttered by user 700 is collected by a built-in microphone and wirelessly transmitted to television 10.
  • Television 10 is connected to voice recognition server 50 via network 40, and television 10 and voice recognition server 50 can communicate with each other.
  • FIG. 2 is a block diagram illustrating a configuration example of voice recognition processing system 11 according to the example.
  • Television 10 includes voice recognition processing apparatus 100, display device 140, transmitter-receiver 150, tuner 160, storage device 171, and built-in microphone 130.
  • Voice recognition processing apparatus 100 is configured to acquire a voice uttered by user 700 and to analyze the acquired voice. Voice recognition processing apparatus 100 is configured to recognize an instruction represented by the voice and to control television 10 in accordance with a recognized result. Specific configuration of voice recognition processing apparatus 100 will be described later.
  • Built-in microphone 130 is a microphone configured to collect voice that comes mainly from the direction facing the display surface of display device 140. That is, the sound-collecting direction of built-in microphone 130 is set so as to collect the voice uttered by user 700 facing display device 140 of television 10; built-in microphone 130 can therefore collect the voice uttered by user 700. Built-in microphone 130 may be provided inside the enclosure of television 10 or, as illustrated in the example of FIG. 1, may be installed outside the enclosure.
  • Display device 140, which is, for example, a liquid crystal display, may also be a display such as a plasma display or an organic EL (Electro Luminescence) display. Display device 140 is controlled by a display controller (not illustrated), and displays an image based on signals such as an external input image signal and a broadcast signal received by tuner 160.
  • Transmitter-receiver 150 is connected to network 40, and is configured to communicate via network 40 with an external device (for example, voice recognition server 50) connected to network 40.
  • Tuner 160 is configured to receive a television broadcast signal of terrestrial broadcasting or satellite broadcasting via an antenna (not illustrated). Tuner 160 may be configured to receive the television broadcast signal transmitted via a private cable.
  • Storage device 171, which is, for example, a nonvolatile semiconductor memory, may also be a device such as a volatile semiconductor memory or a hard disk. Storage device 171 stores information (data), programs, and the like used for controlling each unit of television 10.
  • Network 40 which is, for example, the Internet, may be another network.
  • Voice recognition server 50 is an example of "a second voice recognizer".
  • Voice recognition server 50 is a server (dictionary server on a cloud) connected to television 10 via network 40.
  • Voice recognition server 50 includes recognition dictionary 55, and is configured to receive voice information transmitted via network 40 from television 10.
  • Recognition dictionary 55 is a database that associates voice information with voice recognition models. Voice recognition server 50 compares the received voice information with the voice recognition models in recognition dictionary 55 to confirm whether the received voice information corresponds to any of the voice recognition models registered in recognition dictionary 55.
  • When it does, voice recognition server 50 selects the character string represented by the matching voice recognition model. In this way, voice recognition server 50 converts the received voice information into a character string.
  • This character string may consist of a plurality of characters or of a single character.
  • Voice recognition server 50 then transmits character string information representing the converted character string to television 10 via network 40 as a result of voice recognition.
  • This character string information is an example of "second information".
  • Voice recognition processing apparatus 100 includes voice acquirer 101, voice recognizer 102, recognition result acquirer 103, recognition result determiner 104, command processor 106, and storage device 170.
  • Storage device 170 is, for example, a nonvolatile semiconductor memory, and data can be written to and read from it arbitrarily.
  • Storage device 170 may also be a device such as a volatile semiconductor memory or a hard disk.
  • Storage device 170 also stores information (for example, recognition dictionary 175) that is referred to by voice recognizer 102 and recognition result determiner 104.
  • Recognition dictionary 175 is an example of "a dictionary".
  • Recognition dictionary 175 is a database that associates voice information with voice recognition models.
  • An exclusion object list is also registered in recognition dictionary 175. Details of the exclusion object list will be described later. It is to be noted that storage device 170 and storage device 171 may be integrally formed.
  • Voice acquirer 101 acquires a voice signal generated by the voice uttered by user 700, converts the voice signal into the voice information, and outputs the voice information to voice recognizer 102.
  • Voice recognizer 102 is an example of "a first voice recognizer". Voice recognizer 102 converts the voice information into character string information, and outputs the character string information to recognition result acquirer 103 as a result of voice recognition. This character string information is an example of "first information". In addition, voice recognizer 102 transmits the voice information acquired from voice acquirer 101 from transmitter-receiver 150 via network 40 to voice recognition server 50.
  • Voice recognition server 50 recognizes the voice information received from television 10 with reference to recognition dictionary 55, and returns a result of voice recognition to television 10.
  • Recognition result acquirer 103 is an example of "a selector". On receipt of the result of voice recognition (the first information) output from voice recognizer 102 and the result of voice recognition (the second information) returned from voice recognition server 50, recognition result acquirer 103 compares the first information with the second information and selects one of them. Then, recognition result acquirer 103 outputs the selected result to recognition result determiner 104.
  • Recognition result determiner 104 determines whether to reject or execute (accept) the result of voice recognition that is output from recognition result acquirer 103. Details of this determination will be described later. Then, based on the determination, recognition result determiner 104 outputs the result of voice recognition to command processor 106 or voice acquirer 101.
  • Based on the output from recognition result determiner 104 (the result of voice recognition that is determined to be executed), command processor 106 performs command processing (for example, control of television 10).
  • Command processor 106 is an example of "a processor", and this command processing is an example of "processing".
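  • For illustration, the overall data flow described above can be sketched as follows. This is a minimal sketch only: the class and function names (RecognitionResult, process_utterance, and the recognizer and processor callables) are hypothetical and are not defined by the present disclosure.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class RecognitionResult:
        text: str     # character string information
        score: float  # recognition score (likelihood)

    def process_utterance(voice_info: bytes,
                          local_recognizer: Callable[[bytes], Optional[RecognitionResult]],
                          server_recognizer: Callable[[bytes], Optional[RecognitionResult]],
                          exclusion_vocabulary: set,
                          command_processor: Callable[[str], None]) -> None:
        # Voice recognizer 102 (first voice recognizer) and voice recognition
        # server 50 (second voice recognizer) both convert the voice information
        # into character string information with a recognition score.
        first = local_recognizer(voice_info)
        second = server_recognizer(voice_info)

        # Recognition result acquirer 103 (selector): pick one result, here by
        # the higher recognition score.
        candidates = [r for r in (first, second) if r is not None]
        if not candidates:
            return  # nothing recognized; wait for the next utterance
        selected = max(candidates, key=lambda r: r.score)

        # Recognition result determiner 104: reject exclusion-vocabulary hits.
        if selected.text in exclusion_vocabulary:
            return  # rejection information: no command processing is performed

        # Command processor 106 executes the accepted instruction (step S105).
        command_processor(selected.text)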
  • FIG. 3 is a block diagram illustrating a configuration example of recognition result determiner 104 of voice recognition processing apparatus 100 according to the example.
  • Recognition result determiner 104 includes exclusion vocabulary rejecter 1042 and acceptance rejection transmitter 1045. Detailed operations of these units will be described later.
  • FIG. 4 is a flow chart illustrating an operation example of voice recognition processing apparatus 100 according to the example.
  • Voice acquirer 101 acquires the voice signal generated from the voice uttered by user 700 from built-in microphone 130 of television 10 (step S101).
  • Voice acquirer 101 may acquire the voice signal from a microphone incorporated in a remote control (not illustrated) or a microphone incorporated in a mobile terminal (not illustrated) via a wireless communicator (not illustrated).
  • Then, voice acquirer 101 converts the voice signal into voice information that can be used for various types of downstream processing, and outputs the voice information to voice recognizer 102. It is to be noted that, when the voice signal is a digital signal, voice acquirer 101 may use the voice signal as it is as the voice information.
  • Voice recognizer 102 converts the voice information acquired from voice acquirer 101 into character string information. Then, voice recognizer 102 outputs the character string information to recognition result acquirer 103 as a result of voice recognition.
  • In parallel, voice recognition server 50 converts the voice information acquired from television 10 via network 40 into character string information, and returns the character string information to television 10 as a result of voice recognition (step S102).
  • Specifically, based on the voice information acquired from voice acquirer 101, voice recognizer 102 refers to the acceptance object list in recognition dictionary 175 previously stored in storage device 170, and compares the voice information with the voice recognition models registered in the acceptance object list.
  • The voice recognition models are information that associates voice information with character string information.
  • In voice recognition, the voice information is compared with each of the plurality of voice recognition models, and the voice recognition model that agrees with or is most similar to the voice information is selected. The character string information associated with that voice recognition model then becomes the result of voice recognition of the voice information.
  • Voice recognition models related to operations of television 10 are registered in the acceptance object list: for example, instructions to television 10 (channel change, volume change, etc.), functions of television 10 (network connection function, etc.), unit names of television 10 (power supply, channel), and instructions for content displayed on the screen of television 10 (zoom in, zoom out, scroll).
  • An exclusion object list (not illustrated in FIG. 2), described later, is also registered in recognition dictionary 175 stored in storage device 170.
  • Voice recognizer 102 compares the voice information with the voice recognition models registered in the acceptance object list. Then, when the voice information acquired from voice acquirer 101 includes information corresponding to the voice recognition model registered in the acceptance object list, voice recognizer 102 outputs the character string information associated with the voice recognition model to recognition result acquirer 103 as a result of voice recognition.
  • Voice recognizer 102 calculates a recognition score when comparing the voice information with the voice recognition models.
  • The recognition score is a numerical value representing likelihood: an indicator of the extent to which the voice information agrees with or is similar to a voice recognition model. The larger the value, the higher the degree of similarity.
  • Voice recognizer 102 compares the voice information with the voice recognition models, and selects a plurality of voice recognition models as candidates. At this time, voice recognizer 102 calculates a recognition score for each of the voice recognition models. It is to be noted that a method for calculating this recognition score may be a commonly known method.
  • From among the candidates, voice recognizer 102 selects the voice recognition model whose recognition score is highest and equal to or higher than a preset threshold value, and outputs the character string information corresponding to the selected voice recognition model as a result of voice recognition. It is to be noted that voice recognizer 102 may output, along with the character string information, the recognition score related to the character string information to recognition result acquirer 103.
  • In this way, voice recognizer 102 converts the voice information into character string information. It is to be noted that voice recognizer 102 may convert the voice information into information other than character string information and output the converted information. In addition, if there is no voice recognition model having a recognition score equal to or higher than the threshold value, voice recognizer 102 may output information indicating inability to recognize the voice.
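  • A hedged sketch of this candidate selection follows: every voice recognition model in the acceptance object list is scored against the voice information, and the highest-scoring model is accepted only if its recognition score reaches the preset threshold. The similarity function and the threshold value of 0.6 are illustrative assumptions; the disclosure only states that a commonly known scoring method may be used.

    from typing import Callable, Dict, Optional

    def recognize(voice_info: object,
                  acceptance_list: Dict[object, str],  # model -> character string
                  similarity: Callable[[object, object], float],
                  threshold: float = 0.6) -> Optional[str]:
        best_model, best_score = None, float("-inf")
        for model in acceptance_list:
            score = similarity(voice_info, model)  # recognition score (likelihood)
            if score > best_score:
                best_model, best_score = model, score
        if best_model is None or best_score < threshold:
            return None  # no model reaches the threshold: unable to recognize
        return acceptance_list[best_model]  # character string information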
  • In parallel, voice recognizer 102 transmits the voice information acquired from voice acquirer 101 from transmitter-receiver 150 via network 40 to voice recognition server 50.
  • Based on the voice information received from television 10, voice recognition server 50 refers to recognition dictionary 55, and compares the voice information with the voice recognition models in recognition dictionary 55 to convert the voice information into character string information.
  • Voice recognition server 50 calculates the recognition score when comparing the received voice information with the voice recognition models in recognition dictionary 55.
  • This recognition score is a numerical value representing likelihood, like the recognition score calculated by voice recognizer 102, and is calculated by a similar method.
  • Like voice recognizer 102, voice recognition server 50 selects a plurality of voice recognition models as candidates based on the received voice information, and selects one voice recognition model from among the candidates based on the recognition score. Then, voice recognition server 50 returns the character string information associated with that voice recognition model to television 10 as a result of voice recognition.
  • Voice recognition server 50 may transmit, along with the character string information, the recognition score related to the character string information to television 10.
  • Voice recognition server 50 is configured to collect various terms through network 40 and to register those terms in recognition dictionary 55. Accordingly, voice recognition server 50 can hold more voice recognition models than recognition dictionary 175 included in television 10. Therefore, when user 700 utters a word that is irrelevant to functions of television 10 or instructions to television 10 (for example, in conversation among users or in a monologue), the recognition score produced by voice recognition server 50 is likely to be higher than when voice recognizer 102 of television 10 performs voice recognition on the same voice.
  • On receipt of the result of voice recognition from voice recognition server 50 via network 40, transmitter-receiver 150 outputs the result of voice recognition to recognition result acquirer 103.
  • Recognition result acquirer 103 then selects one of the two voice recognition results in accordance with a determination rule (step S103).
  • This determination rule may be, for example, to compare the recognition score associated with the result of voice recognition received from voice recognizer 102 with the recognition score associated with the result of voice recognition received from voice recognition server 50, and to select the voice recognition result with the higher recognition score.
  • Recognition result acquirer 103 outputs the selected voice recognition result to recognition result determiner 104.
  • It is to be noted that, when only one result of voice recognition is received, recognition result acquirer 103 may skip the processing of step S103 and output the received result of voice recognition as it is.
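  • The determination rule of step S103 can be sketched as follows, assuming each recognizer returns a (character string, recognition score) pair or None when no result is available; the function name select_result is hypothetical.

    from typing import Optional, Tuple

    Result = Tuple[str, float]  # (character string information, recognition score)

    def select_result(first: Optional[Result], second: Optional[Result]) -> Optional[Result]:
        """Select between the first (local) and second (server) results (step S103)."""
        if first is None or second is None:
            # Only one result was received: output it as it is.
            return first if second is None else second
        # Otherwise select the result with the higher recognition score.
        return first if first[1] >= second[1] else second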
  • Exclusion vocabulary rejecter 1042 of recognition result determiner 104 illustrated in FIG. 3 determines whether the result of voice recognition that is output from recognition result acquirer 103 agrees with any character string information in a vocabulary (exclusion vocabulary) registered in an exclusion object list (step S104).
  • The exclusion object list is a list in which words (a vocabulary) determined not to be used for voice operation of television 10 are registered as the exclusion vocabulary.
  • The exclusion vocabulary is, for example, vocabulary other than the vocabulary registered in recognition dictionary 175 of storage device 170 as the acceptance object list.
  • This exclusion object list, which is previously registered in recognition dictionary 175 of storage device 170, may be configured so that new exclusion vocabulary can be added arbitrarily. It is to be noted that, if vocabulary whose pronunciation is similar to that of words user 700 utters during voice operation of television 10, but which has no relationship with the voice operation of television 10, is registered in the exclusion object list, accuracy of voice recognition can be improved.
  • Specifically, exclusion vocabulary rejecter 1042 compares the exclusion object list in recognition dictionary 175 stored in storage device 170 with the character string information that is the result of voice recognition output from recognition result acquirer 103, and examines whether the character string information agrees with any word of the exclusion vocabulary included in the exclusion object list. Exclusion vocabulary rejecter 1042 determines that character string information that agrees with a word of the exclusion vocabulary is information to be rejected, sets a flag, and outputs the character string information to acceptance rejection transmitter 1045 (Yes).
  • In this case, acceptance rejection transmitter 1045 outputs the flagged character string information to voice acquirer 101 as rejection information.
  • On receipt of the rejection information, voice acquirer 101 prepares for voice acquisition for the next voice recognition (step S106). Command processor 106 therefore performs no processing on flagged character string information (rejection information).
  • Conversely, exclusion vocabulary rejecter 1042 determines that character string information that does not agree with any word of the exclusion vocabulary is information to be accepted (executed), and outputs the character string information to acceptance rejection transmitter 1045 without setting a flag (No).
  • In this case, acceptance rejection transmitter 1045 outputs the character string information to command processor 106.
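  • The accept/reject branch of steps S104 to S106 can be sketched as follows, modelling the exclusion object list as a set of words and the flag as a boolean; all names here are illustrative assumptions, not part of the disclosure.

    from typing import Set, Tuple

    def check_exclusion(text: str, exclusion_list: Set[str]) -> Tuple[str, bool]:
        """Step S104: return the character string information and a rejection flag."""
        rejected = text in exclusion_list  # agrees with an exclusion-vocabulary word
        return text, rejected

    def dispatch(text: str, rejected: bool, command_processor, voice_acquirer) -> None:
        # Acceptance rejection transmitter 1045: flagged information becomes
        # rejection information; unflagged information undergoes command processing.
        if rejected:
            voice_acquirer.prepare_next()    # step S106: wait for the next voice
        else:
            command_processor.execute(text)  # step S105: execute the command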
  • Command processor 106 executes command processing in accordance with an instruction represented by the character string information received from acceptance rejection transmitter 1045 (step S105).
  • Specifically, command processor 106 issues an instruction to a controller (not illustrated) of television 10 so that an operation corresponding to the command information is executed in television 10.
  • On completion of the command processing, command processor 106 transmits a signal indicating that the command processing has been completed to voice acquirer 101.
  • On receipt of this signal, voice acquirer 101 prepares for voice acquisition for the next voice recognition (step S106).
  • As described above, voice recognition processing apparatus 100 includes voice acquirer 101, voice recognizer 102 that is one example of the first voice recognizer, storage device 170, and recognition result determiner 104.
  • Voice acquirer 101 is configured to acquire the voice uttered by user 700 and to output the voice information.
  • Voice recognizer 102 is configured to convert the voice information into the character string information that is an example of the first information.
  • Storage device 170 previously stores recognition dictionary 175 in which the exclusion vocabulary is registered.
  • Recognition dictionary 175 is an example of a dictionary.
  • Recognition result determiner 104 compares the character string information with the exclusion vocabulary, and determines whether the character string information includes a word that agrees with a word of the exclusion vocabulary.
  • When it does, recognition result determiner 104 determines that the character string information is information to be rejected.
  • When it does not, recognition result determiner 104 determines that the character string information is information to be executed.
  • In addition, voice recognition processing apparatus 100 may further include voice recognition server 50 that is an example of the second voice recognizer, and recognition result acquirer 103 that is an example of the selector.
  • Voice recognition server 50 is configured to convert the voice information into character string information that is an example of the second information.
  • Recognition result acquirer 103 is configured to select and output one of the character string information that voice recognizer 102 outputs and the character string information that voice recognition server 50 outputs. Then, recognition result determiner 104 determines whether to reject or execute the character string information selected by recognition result acquirer 103.
  • Voice recognition server 50 that is an example of the second voice recognizer may be installed on network 40.
  • Voice recognition processing apparatus 100 may include transmitter-receiver 150 configured to communicate with voice recognition server 50 via network 40.
  • Voice recognition processing apparatus 100 configured in this way can discriminate a voice that user 700 utters for voice operation from a voice of conversation among users 700 or a monologue of user 700 with good accuracy, reduce false recognition, and improve accuracy of voice recognition.
  • When user 700 utters a voice that is easily falsely recognized, voice recognizer 102 is likely to output character string information registered in the acceptance object list even though the utterance was not intended for voice operation (that is, it is likely to perform false recognition).
  • In contrast, voice recognition server 50 has recognition dictionary 55, which is likely to have more registered voice recognition models (vocabulary) than recognition dictionary 175 because its registered information is updated through network 40. Accordingly, voice recognition server 50 is likely to perform more accurate voice recognition of such a voice.
  • In that case, the recognition score associated with the character string information output from voice recognition server 50, which correctly recognizes this voice, is likely to be larger than the recognition score associated with the character string information output from voice recognizer 102, which falsely recognizes it. Therefore, it is likely that the character string information output from voice recognition server 50 is selected by recognition result acquirer 103.
  • If this character string information agrees with a word of the exclusion vocabulary, exclusion vocabulary rejecter 1042 determines that the character string information is information to be rejected.
  • The present example thus makes it possible to improve accuracy of voice recognition of a voice that is likely to be falsely recognized by voice recognizer 102, and to prevent command processor 106 from performing false command processing due to false recognition.
  • Although voice recognizer 102 is likely to falsely recognize a voice when the voice uttered by user 700 is not sufficiently loud or when there is much noise, accuracy of voice recognition can be improved even in such cases.
  • Alternatively, voice recognition processing system 11 may be configured so that voice recognition is performed only by television 10. Even in such a configuration, the operations of recognition result determiner 104 can reduce false recognition and improve accuracy of voice recognition.
  • The exemplary embodiment describes a method for further increasing accuracy of voice recognition of words that user 700 is likely to utter (for example, words regarding operations, functions, etc. of television 10).
  • FIG. 5 is a block diagram illustrating a configuration example of voice recognition processing system 21 according to the exemplary embodiment.
  • Voice recognition processing system 21 includes television 20 that is an example of a display apparatus, and voice recognition server 50. Since voice recognition server 50 is substantially identical to voice recognition server 50 described in the example, description will be omitted.
  • Television 20 includes voice recognition processing apparatus 200, display device 140, transmitter-receiver 150, tuner 160, storage device 171, and built-in microphone 130.
  • Voice recognition processing apparatus 200 includes voice acquirer 201, voice recognizer 102, recognition result acquirer 103, recognition result determiner 204, command processor 106, and storage device 270.
  • Recognition dictionary 175 in storage device 270 registers an acceptance object list and an exclusion object list similar to those described in the example.
  • Voice recognition processing apparatus 200 differs from voice recognition processing apparatus 100 described in the example in operations in voice acquirer 201 and recognition result determiner 204.
  • Like voice acquirer 101 described in the example, voice acquirer 201 acquires a voice signal generated from a voice uttered by user 700 from built-in microphone 130. However, unlike voice acquirer 101, voice acquirer 201 creates utterance duration information and utterance form information based on the acquired voice signal.
  • The utterance duration information indicates the length of time for which user 700 utters.
  • Voice acquirer 201 can create the utterance duration information by, for example, measuring the length of time during which a voice with volume equal to or higher than a preset threshold is continuously made. Voice acquirer 201 may create the utterance duration information by another method.
  • The utterance form information indicates the lengths of the silent periods that occur before and after the utterance of user 700, or of periods that can be substantially considered silent.
  • Voice acquirer 201 can create the utterance form information by, for example, treating a condition in which the volume is lower than a preset threshold as silence and measuring the lengths of the silent periods that occur before and after the utterance. Voice acquirer 201 may create the utterance form information by another method.
  • Voice acquirer 201 adds the utterance duration information and the utterance form information to the voice information, and outputs them to voice recognizer 102.
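  • One way voice acquirer 201 might derive both information items from a per-frame volume envelope is sketched below; the 10 ms frame length and the single volume threshold are assumptions for illustration, since the disclosure leaves the measurement method open.

    from typing import List, Tuple

    FRAME_SEC = 0.01  # assumed analysis frame of 10 ms

    def analyze_utterance(volume: List[float],
                          threshold: float) -> Tuple[float, float, float]:
        """Return (utterance duration, pause before, pause after) in seconds."""
        voiced = [v >= threshold for v in volume]  # below threshold counts as silence
        if not any(voiced):
            return 0.0, 0.0, 0.0
        first = voiced.index(True)
        last = len(voiced) - 1 - voiced[::-1].index(True)
        duration = (last - first + 1) * FRAME_SEC           # utterance duration information
        pause_before = first * FRAME_SEC                    # silent period before utterance
        pause_after = (len(voiced) - 1 - last) * FRAME_SEC  # silent period after utterance
        return duration, pause_before, pause_after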
  • A voice that is not intended for voice operation, such as conversation among a plurality of users 700 or a monologue of user 700, may nevertheless include a word of the vocabulary (acceptance object vocabulary) registered in the acceptance object list. Such a voice may be collected by built-in microphone 130, and voice information based on it may be input into voice recognizer 102. In that case, voice recognizer 102 may perform false voice recognition, and command processor 106 may perform false command processing based on the false recognition, although user 700 has no intention of performing voice operation of television 20. In order to reduce such false recognition, the present exemplary embodiment performs voice recognition using "the utterance duration information" and "the utterance form information" in addition to the exclusion object list described in the example.
  • Voice recognizer 102 transmits the voice information, to which the utterance duration information and the utterance form information are added, to voice recognition server 50 via transmitter-receiver 150 and network 40.
  • Next, recognition result determiner 204 included in voice recognition processing apparatus 200 of television 20 will be described with reference to FIG. 6 and FIG. 7.
  • FIG. 6 is a block diagram illustrating a configuration example of recognition result determiner 204 of voice recognition processing apparatus 200 according to the exemplary embodiment.
  • Recognition result determiner 204 includes exclusion vocabulary rejecter 1042, utterance duration determiner 2043, utterance form determiner 2044, and acceptance rejection transmitter 1045.
  • FIG. 7 is a flow chart illustrating an operation example of recognition result determiner 204 according to the exemplary embodiment.
  • First, recognition result acquirer 103 selects one of the voice recognition results in accordance with a determination rule (step S103).
  • This determination rule is substantially identical to the determination rule described in the example.
  • Next, exclusion vocabulary rejecter 1042 of recognition result determiner 204 determines whether the result of voice recognition output from recognition result acquirer 103 includes a word that agrees with a word of the vocabulary (exclusion vocabulary) registered in the exclusion object list (step S104).
  • Specifically, exclusion vocabulary rejecter 1042 compares the exclusion object list in recognition dictionary 175 stored in storage device 270 with the character string information that is the result of voice recognition output from recognition result acquirer 103, to examine whether the character string information agrees with any word of the exclusion vocabulary included in the exclusion object list. Exclusion vocabulary rejecter 1042 determines that character string information that agrees with a word of the exclusion vocabulary is information to be rejected, sets a flag, and outputs the character string information to acceptance rejection transmitter 1045 (Yes).
  • In this case, acceptance rejection transmitter 1045 outputs the flagged character string information to voice acquirer 201 as rejection information.
  • On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition for the next voice recognition (step S106).
  • Conversely, exclusion vocabulary rejecter 1042 outputs character string information that does not agree with any word of the exclusion vocabulary to utterance duration determiner 2043 as it is, without setting a flag (No).
  • Utterance duration determiner 2043 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on utterance duration (step S200).
  • The utterance duration is the length of time of an utterance.
  • In the following description, an utterance of user 700 for performing voice operation of television 20 is described as "an utterance for control", and an utterance that is not for the purpose of voice operation of television 20 (for example, conversation among users 700 or a monologue of user 700) is described as "an utterance for conversation".
  • Utterance duration data (data indicating the length of time required for utterance) corresponding to each word of the acceptance object vocabulary included in the acceptance object list registered in recognition dictionary 175 is previously stored in storage device 270. This allows utterance duration determiner 2043 to calculate the utterance duration of a word of the acceptance object vocabulary selected as a result of voice recognition. It is to be noted that this utterance duration data preferably has a margin (range) in consideration of individual differences in utterance speed and the like.
  • "The utterance for control" consists of about one or two words in many cases.
  • Moreover, these words are words of the acceptance object vocabulary registered in the acceptance object list. Therefore, it is likely that, after voice recognition of "the utterance for control", the utterance duration based on the utterance duration data of the word of the acceptance object vocabulary selected as a result of voice recognition is close to the utterance duration of "the utterance for control" indicated by the utterance duration information created by voice acquirer 201. When a plurality of words of the acceptance object vocabulary is selected as a result of voice recognition, the utterance duration is calculated based on the utterance duration data corresponding to all of those words.
  • On the other hand, "the utterance for conversation" consists of a plurality of words in many cases, and those words are unlikely to include a word corresponding to the acceptance object vocabulary registered in the acceptance object list. Therefore, it is likely that, after voice recognition of "the utterance for conversation", the utterance duration based on the utterance duration data of the word of the acceptance object vocabulary selected as a result of voice recognition is shorter than the utterance duration of "the utterance for conversation" indicated by the utterance duration information created by voice acquirer 201.
  • Thus, voice recognition processing apparatus 200 can determine whether the voice that is the object of voice recognition is based on "the utterance for control" or on "the utterance for conversation" by comparing the utterance duration based on the utterance duration data of the word(s) of the acceptance object vocabulary selected by voice recognizer 102 as a result of voice recognition with the utterance duration based on the utterance duration information created by voice acquirer 201.
  • Utterance duration determiner 2043 makes this determination.
  • Specifically, utterance duration determiner 2043 reads from storage device 270 the utterance duration data associated with the word of the acceptance object vocabulary selected as a result of voice recognition.
  • When a plurality of words is selected, utterance duration determiner 2043 reads the utterance duration data for all of those words from storage device 270. Utterance duration determiner 2043 then calculates the utterance duration based on the read utterance duration data, and compares the result of the calculation with the utterance duration indicated by the utterance duration information created by voice acquirer 201.
  • Although utterance duration determiner 2043 may compare the calculated utterance duration with the utterance duration indicated by the utterance duration information as it is, utterance duration determiner 2043 may instead set a range for determination based on the calculated utterance duration.
  • An example of setting such a range for comparison is described below.
  • In step S200, when the utterance duration indicated by the utterance duration information created by voice acquirer 201 is outside the range set based on the calculated utterance duration (No), utterance duration determiner 2043 determines that the unflagged character string information output from exclusion vocabulary rejecter 1042 is based on "the utterance for conversation" and is to be rejected. Utterance duration determiner 2043 sets a flag in this character string information, and outputs the flagged character string information to acceptance rejection transmitter 1045.
  • In this case, acceptance rejection transmitter 1045 outputs the character string information to voice acquirer 201 as rejection information.
  • On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition for the next voice recognition (step S106).
  • In step S200, when the utterance duration indicated by the utterance duration information created by voice acquirer 201 is within the range set based on the calculated utterance duration (Yes), utterance duration determiner 2043 determines that the unflagged character string information output from exclusion vocabulary rejecter 1042 is based on "the utterance for control". Utterance duration determiner 2043 does not set a flag in this character string information, and outputs the character string information as it is to utterance form determiner 2044.
  • Utterance duration determiner 2043 may set the range for determination by, for example, multiplying the calculated utterance duration by a predetermined value (for example, 1.5). This value is only an example and may be another value. Alternatively, utterance duration determiner 2043 may set the range for determination by, for example, adding a predetermined value to the calculated utterance duration, or may set the range by another method.
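  • A sketch of the step S200 decision under these assumptions follows; the one-sided range (measured duration at most 1.5 times the calculated duration) is one possible reading of the example above, and the function and parameter names are hypothetical.

    from typing import Dict, List

    def is_utterance_for_control(recognized_words: List[str],
                                 measured_duration: float,
                                 duration_data: Dict[str, float],
                                 margin: float = 1.5) -> bool:
        # Calculated utterance duration: sum of the stored utterance duration
        # data for every recognized acceptance-object word (each word is assumed
        # to have an entry in duration_data).
        calculated = sum(duration_data[w] for w in recognized_words)
        # Accept ("utterance for control") when the measured duration falls
        # within the widened range; a much longer measured duration suggests
        # "an utterance for conversation".
        return measured_duration <= calculated * margin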
  • Utterance form determiner 2044 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from utterance duration determiner 2043, based on an utterance form (step S201).
  • Next, the utterance form used by utterance form determiner 2044 will be described.
  • This "utterance form" refers to the silent period that occurs immediately before user 700 utters, or the period that can be substantially considered silent (hereinafter described as "a pause period"), and to the pause period that occurs immediately after user 700 finishes uttering.
  • The pause period that occurs immediately before user 700 utters is a period of preparation for utterance.
  • The pause period that occurs immediately after user 700 finishes uttering is a period of waiting for the operation corresponding to the uttered information (the operation based on voice operation) to start.
  • Utterance form determiner 2044 makes this determination based on the utterance form information created by voice acquirer 201.
  • Specifically, utterance form determiner 2044 reads from storage device 270 the utterance form data associated with the word of the acceptance object vocabulary.
  • This utterance form data indicates the lengths of the respective pause periods that occur before and after utterance of a word of the acceptance object vocabulary.
  • The utterance form data associated with each word of the acceptance object vocabulary is previously stored in storage device 270.
  • Then, utterance form determiner 2044 compares the utterance form data read from storage device 270 with the utterance form information (created by voice acquirer 201) added to the character string information input from utterance duration determiner 2043.
  • Specifically, utterance form determiner 2044 compares the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 with the respective lengths of the pause periods before and after utterance indicated by the utterance form data read from storage device 270. It is to be noted that, although utterance form determiner 2044 may compare the utterance form information created by voice acquirer 201 as it is with the utterance form data read from storage device 270, utterance form determiner 2044 may instead set a range for determination based on the utterance form data read from storage device 270.
  • When a plurality of words is selected as a result of voice recognition, utterance form determiner 2044 may read the utterance form data for all of those words from storage device 270 and select the larger value. Alternatively, utterance form determiner 2044 may select the smaller value, or may calculate an average value or a median value.
  • In step S201, when at least one of the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 is shorter than the corresponding length indicated by the utterance form data read from storage device 270 (No), utterance form determiner 2044 determines that the unflagged character string information output from utterance duration determiner 2043 is based on "the utterance for conversation", sets a flag in this character string information, and outputs the flagged character string information to acceptance rejection transmitter 1045.
  • When a flag is set in the character string information input from utterance form determiner 2044, acceptance rejection transmitter 1045 outputs the character string information to voice acquirer 201 as rejection information. On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition for the next voice recognition (step S106).
  • In step S201, when both of the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 are equal to or longer than the corresponding lengths indicated by the utterance form data read from storage device 270 (Yes), utterance form determiner 2044 determines that the unflagged character string information output from utterance duration determiner 2043 is based on "the utterance for control", does not set a flag, and outputs the character string information as it is to acceptance rejection transmitter 1045.
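  • The step S201 comparison reduces to the following predicate, assuming single before/after pause lengths have already been derived from the utterance form information and the utterance form data; the function name is illustrative.

    def is_control_by_pauses(measured_before: float, measured_after: float,
                             data_before: float, data_after: float) -> bool:
        # "Yes" branch of step S201: both measured pause periods are equal to or
        # longer than the lengths indicated by the stored utterance form data.
        return measured_before >= data_before and measured_after >= data_after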
  • The unflagged character string information received by acceptance rejection transmitter 1045 is character string information in which no flag has been set by any of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, or utterance form determiner 2044.
  • That is, it is character string information that has been determined to be accepted (to undergo command processing) by all of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044.
  • Conversely, flagged character string information is character string information that has been determined to be rejection information by at least one of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044.
  • Acceptance rejection transmitter 1045 outputs the unflagged character string information to command processor 106 as it is as character string information to be accepted (executed).
  • Command processor 106 executes command processing in accordance with an instruction indicated by the character string information received from acceptance rejection transmitter 1045 (step S105).
  • On completion of the command processing, command processor 106 transmits a signal indicating that the command processing has been completed to voice acquirer 201.
  • On receipt of this signal, voice acquirer 201 prepares for voice acquisition for the next voice recognition (step S106).
  • In step S106, the flagged character string information is output as rejection information from acceptance rejection transmitter 1045 to voice acquirer 201.
  • On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition for the next voice recognition.
  • It is to be noted that either of step S200 and step S201 may be performed first.
  • As described above, voice recognition processing apparatus 200 includes voice acquirer 201, recognition result determiner 204, and storage device 270.
  • Voice acquirer 201 measures the length of time for which user 700 utters, based on the acquired voice, to create the utterance duration information.
  • In addition, voice acquirer 201 measures the lengths of the silent periods that occur before and after the utterance of user 700, based on the acquired voice, to create the utterance form information.
  • Storage device 270 previously stores the utterance duration data representing the time required for utterance and the utterance form data representing the lengths of the silent periods that occur before and after utterance.
  • recognition result determiner 204 reads the utterance duration data from storage device 270, and compares the read utterance duration data with the utterance duration information created by voice acquirer 201 to make a second determination whether to reject or execute the character string information based on the comparison. Then, regarding the character string information that is determined to be executed, recognition result determiner 204 reads the utterance form data from storage device 270, and compares the read utterance form data with the utterance form information created by voice acquirer 201 to make a second determination whether to reject or execute the character string information based on the comparison.
  • This character string information is an example of the first information.
  • That is, this character string information undergoes command processing only when it has been determined to be accepted by all of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044; it is treated as rejection information when any one of them determines that it is to be rejected.
  • In this way, each of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044 determines whether to accept (perform command processing on) or reject the character string information received by recognition result acquirer 103 as a result of voice recognition. Character string information that is determined to be rejected by any one of these units is rejected, and only character string information that is determined to be accepted by all of these units undergoes command processing.
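  • Chaining the three determinations gives, schematically, the following; the predicate values are assumed to come from checks like the sketches above, and command_processor and voice_acquirer are hypothetical objects.

    def determine(text: str,
                  excluded: bool,     # result of the exclusion vocabulary check (step S104)
                  duration_ok: bool,  # result of the utterance duration check (step S200)
                  pauses_ok: bool,    # result of the utterance form check (step S201)
                  command_processor,
                  voice_acquirer) -> None:
        # Command processing only when every gate accepts the character string.
        if (not excluded) and duration_ok and pauses_ok:
            command_processor.execute(text)  # step S105
        voice_acquirer.prepare_next()        # step S106: ready for the next voice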
  • Voice recognition processing apparatus 200 configured in this way can determine with good accuracy whether the voice that undergoes voice recognition is a voice based on "the utterance for control" or a voice based on "the utterance for conversation". Therefore, voice recognition processing apparatus 200 can reduce false recognition and further improve accuracy of voice recognition.
  • As above, the example has been described as an illustration of the technique disclosed in the present application.
  • However, the technique in the present disclosure is not limited to this example, and can also be applied to exemplary embodiments to which changes, replacements, additions, and omissions have been made, insofar as they fall within the subject-matter defined by the appended claims.
  • In the exemplary embodiment, recognition result determiner 204 includes utterance duration determiner 2043 and utterance form determiner 2044, in addition to exclusion vocabulary rejecter 1042, to improve accuracy of voice recognition.
  • However, a recognition result determiner that includes exclusion vocabulary rejecter 1042 combined with only one of utterance duration determiner 2043 and utterance form determiner 2044 can also improve accuracy of voice recognition.
  • FIG. 8A is a block diagram illustrating a configuration example of recognition result determiner 304 in another exemplary embodiment.
  • FIG. 8B is a block diagram illustrating a configuration example of recognition result determiner 404 in another exemplary embodiment.
  • Recognition result determiner 304 illustrated in FIG. 8A has a configuration that includes exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and acceptance rejection transmitter 1045, and does not include utterance form determiner 2044.
• A voice recognition apparatus that includes recognition result determiner 304 illustrated in FIG. 8A operates as follows.
• A voice acquirer (not illustrated) measures the length of time for which user 700 utters, based on an acquired voice, to create utterance duration information.
  • Storage device 370 previously stores utterance duration data representing a time required for utterance. These pieces of utterance duration information and utterance duration data are substantially identical to utterance duration information and utterance duration data described in the exemplary embodiment.
• Recognition result determiner 304 reads the utterance duration data from storage device 370, and compares the read utterance duration data with the utterance duration information created by the voice acquirer to make a second determination whether to reject or execute the character string information based on the comparison.
  • This character string information is an example of first information.
• Specifically, recognition result determiner 304 operates as follows.
  • Utterance duration determiner 2043 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on the utterance duration.
• Since the operation of utterance duration determiner 2043 here is substantially identical to the operation of utterance duration determiner 2043 described in the exemplary embodiment, the description is omitted.
  • Utterance duration determiner 2043 avoids setting a flag in the character string information that is determined to be based on "an utterance for control", and outputs the character string information as it is to acceptance rejection transmitter 1045.
  • Acceptance rejection transmitter 1045 outputs the unflagged character string information as it is to command processor 106 as character string information to be accepted (executed).
  • Recognition result determiner 404 illustrated in FIG. 8B has a configuration that includes exclusion vocabulary rejecter 1042, utterance form determiner 2044, and acceptance rejection transmitter 1045, and does not include utterance duration determiner 2043.
• A voice recognition apparatus that includes recognition result determiner 404 illustrated in FIG. 8B operates as follows.
• A voice acquirer measures the lengths of the silent periods that occur before and after an utterance of user 700, based on an acquired voice, to create utterance form information.
  • Storage device 470 previously stores utterance form data representing the lengths of the silent periods that occur before and after utterance. These pieces of utterance form information and utterance form data are substantially identical to utterance form information and utterance form data described in the exemplary embodiment.
• Recognition result determiner 404 reads the utterance form data from storage device 470, and compares the read utterance form data with the utterance form information created by the voice acquirer to make a second determination whether to reject or execute the character string information based on the comparison.
  • This character string information is an example of the first information.
• Specifically, recognition result determiner 404 operates as follows.
  • Utterance form determiner 2044 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on utterance form.
• Since the operation of utterance form determiner 2044 here is substantially identical to the operation of utterance form determiner 2044 described in the exemplary embodiment, the description is omitted.
  • Utterance form determiner 2044 avoids setting a flag in the character string information that is determined to be based on "the utterance for control", and outputs the character string information as it is to acceptance rejection transmitter 1045.
  • Acceptance rejection transmitter 1045 outputs the unflagged character string information as it is to command processor 106 as character string information to be accepted (executed).
• Even when the recognition result determiner has a configuration that includes only one of utterance duration determiner 2043 and utterance form determiner 2044, as illustrated in FIG. 8A and FIG. 8B respectively, the recognition result determiner is capable of improving the accuracy of voice recognition; a sketch of this configurable chain follows.
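• By way of illustration only, the accept/reject chain described above might be sketched as follows in Python. All function and variable names here are hypothetical, since the disclosure defines functional blocks rather than an implementation; the point is that one or both of the optional determiners may be present, and a candidate is accepted only when every determiner in the chain accepts it.

```python
# Hypothetical sketch of the recognition result determiner chain.
# Each check returns True to accept (pass through unflagged) or
# False to reject (equivalent to setting a flag).

def is_not_excluded(candidate):
    """Corresponds to exclusion vocabulary rejecter 1042."""
    return candidate["text"] not in candidate["exclusion_vocabulary"]

def build_chain(use_duration=True, use_form=True):
    """FIG. 6 uses both optional checks; FIG. 8A only the duration
    check; FIG. 8B only the form check."""
    checks = [is_not_excluded]
    if use_duration:
        checks.append(lambda c: c["duration_ok"])  # utterance duration determiner 2043
    if use_form:
        checks.append(lambda c: c["form_ok"])      # utterance form determiner 2044
    return checks

def accept(candidate, checks):
    """Acceptance rejection transmitter 1045: command processing only
    when every determiner accepts; any single rejection rejects."""
    return all(check(candidate) for check in checks)

candidate = {"text": "volume up", "exclusion_vocabulary": {"volume app"},
             "duration_ok": True, "form_ok": True}
print(accept(candidate, build_chain()))                 # FIG. 6 variant
print(accept(candidate, build_chain(use_form=False)))   # FIG. 8A variant
```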
• Voice recognition server 50 may be included in voice recognition processing apparatus 100.
• Alternatively, it is also possible to have a configuration in which voice recognition server 50 is not included, and in which voice recognition is performed only by voice recognizer 102.
• Each block illustrated in FIGS. 2, 3, 5, 6, 8A, and 8B may be configured as an independent circuit block, or may be configured such that a processor executes software programmed to implement the operation of each block.
• The present disclosure is applicable to devices that perform processing operations instructed by a user. Specifically, the present disclosure is applicable to devices such as a mobile terminal device, a television receiver, a personal computer, a set-top box, a videocassette recorder, a game machine, a smartphone, and a tablet terminal.

Description

    TECHNICAL FIELD
  • The present disclosure relates to voice recognition processing apparatuses, voice recognition processing methods, and display apparatuses that operate by recognizing a voice uttered by a user.
  • BACKGROUND ART
• Patent Literature 1 discloses a voice input apparatus that has a voice recognition function. This voice input apparatus is configured to receive a voice uttered by a user, to recognize a command indicated by the user's voice by analyzing the received voice (voice recognition), and to control a device in accordance with the voice-recognized command. That is, the voice input apparatus of Patent Literature 1 is capable of performing voice recognition on a voice arbitrarily uttered by the user, and of controlling the device in accordance with the command that is the result of the voice recognition.
• For example, a user of this voice input apparatus can select hypertext displayed on a browser by using the voice recognition function while operating the browser on an apparatus such as a television receiver (hereinafter referred to as "television") or a PC (Personal Computer). In addition, the user can also use this voice recognition function to perform a search on a web site (search site) that provides a search service.
• In addition, in this voice input apparatus, "triggerless recognition" may be performed in order to increase convenience for the user. "Triggerless recognition" refers to a condition in which voice collection and voice recognition of the collected voice are performed continuously, without limiting the period in which voice input for voice recognition is accepted. However, if triggerless recognition is performed in this voice input apparatus, it is difficult to distinguish whether the collected voice is a voice uttered by the user for the purpose of voice recognition, or a voice not intended for voice recognition, such as conversation among users or a monologue of the user. Thus, a voice that is not intended for voice recognition may be falsely recognized (false recognition).
• US2008/059186 and US2008/120107 disclose acquiring a voice uttered by a user, converting the corresponding voice information into "first information", and determining whether said first information is information to be rejected or to be accepted (and then executed) based on a comparison of the first information with a pre-stored exclusion vocabulary. In US2008/059186, a subset blocking word list contains words that are "non-content" words. US2008/120107 teaches the use of a rejectable-word dictionary in order to prevent out-of-vocabulary utterances from being misrecognized as recognition vocabulary words. Document EP1562178 discloses a recognition result post-processor using a skip list containing results to be removed from the recognition results list.
  • Citation List Patent Literature
  • PTL 1: Japanese Patent No. 4812941
  • SUMMARY
• The present disclosure provides a voice recognition processing apparatus and a voice recognition processing method that reduce false recognition and improve operability for the user, as defined in claims 1 and 5. Another object of the invention concerns a display apparatus as defined in claim 6.
• The voice recognition processing apparatus according to the present disclosure can improve operability when the user performs voice operation.
  • BRIEF DESCRIPTION OF DRAWINGS
    • FIG. 1 is a diagram schematically illustrating a voice recognition processing system according to an example that is not part of the present invention.
    • FIG. 2 is a block diagram illustrating a configuration example of the voice recognition processing system according to the example.
    • FIG. 3 is a block diagram illustrating a configuration example of a recognition result determiner of a voice recognition processing apparatus according to the example.
    • FIG. 4 is a flow chart illustrating an operation example of the voice recognition processing apparatus according to the example.
    • FIG. 5 is a block diagram illustrating a configuration example of the voice recognition processing system according to an exemplary embodiment of the present invention.
    • FIG. 6 is a block diagram illustrating a configuration example of the recognition result determiner of the voice recognition processing apparatus according to the exemplary embodiment.
    • FIG. 7 is a flow chart illustrating an operation example of the recognition result determiner according to the exemplary embodiment.
    • FIG. 8A is a block diagram illustrating a configuration example of the recognition result determiner according to another exemplary embodiment.
    • FIG. 8B is a block diagram illustrating a configuration example of the recognition result determiner according to another exemplary embodiment.
    DESCRIPTION OF AN EXAMPLE AND EMBODIMENTS
• An example that is not part of the invention, and exemplary embodiments, will be described in detail below with reference to the drawings as needed. However, a description that is more detailed than necessary may be omitted. For example, a detailed description of an already well-known item and a repeated description of substantially identical components may be omitted. This is to avoid making the following description unnecessarily redundant, and to make it easier for a person skilled in the art to understand.
  • It is to be noted that the accompanying drawings and the following description are provided in order for a person skilled in the art to fully understand the present disclosure, and are not intended to limit the subject described in the appended claims.
  • EXAMPLE
• An example that is not part of the present invention will be described below with reference to FIG. 1 to FIG. 4. It is to be noted that although television receiver (television) 10 is cited in the present example as an example of a display apparatus including a voice recognition processing apparatus, the display apparatus is not limited to television 10. For example, the display apparatus may be an apparatus such as a PC, a tablet terminal, or a mobile terminal.
  • Although voice recognition processing system 11 according to the present example is configured to perform triggerless recognition, the present disclosure is not limited to triggerless recognition. The present disclosure is also applicable to a system in which voice recognition is started by an operation for starting voice recognition by user 700.
  • [1-1. Configuration]
  • FIG. 1 is a diagram schematically illustrating voice recognition processing system 11 according to the example. In the present example, television 10 that is an example of the display apparatus incorporates the voice recognition processing apparatus.
  • Voice recognition processing system 11 according to the present example includes television 10 that is an example of a display apparatus, and voice recognition server 50.
• When the voice recognition processing apparatus starts in television 10, voice recognition icon 203 and indicator 202 indicating the volume of a collected voice are displayed on display device 140 of television 10, together with an image based on signals such as an input image signal and a received broadcast signal. This indicates to user 700 that operation of television 10 based on the voice of user 700 (hereinafter referred to as "voice operation") is available, and prompts user 700 to utter a voice.
  • When user 700 utters a voice toward built-in microphone 130 included in television 10, the voice is collected by built-in microphone 130, and the collected voice is recognized by the voice recognition processing apparatus incorporated in television 10. In television 10, control of television 10 is performed in accordance with a result of the voice recognition.
  • Television 10 may have a configuration that includes a remote control or mobile terminal configured such that the voice uttered by user 700 is collected by a built-in microphone and wirelessly transmitted to television 10.
  • In addition, television 10 is connected to voice recognition server 50 via network 40. Communication can take place between television 10 and voice recognition server 50.
  • FIG. 2 is a block diagram illustrating a configuration example of voice recognition processing system 11 according to the example.
  • Television 10 includes voice recognition processing apparatus 100, display device 140, transmitter-receiver 150, tuner 160, storage device 171, and built-in microphone 130.
  • Voice recognition processing apparatus 100 is configured to acquire a voice uttered by user 700 and to analyze the acquired voice. Voice recognition processing apparatus 100 is configured to recognize an instruction represented by the voice and to control television 10 in accordance with a recognized result. Specific configuration of voice recognition processing apparatus 100 will be described later.
• Built-in microphone 130 is a microphone configured to collect voice that mainly comes from a direction facing the display surface of display device 140. That is, the sound-collecting direction of built-in microphone 130 is set so as to collect the voice uttered by user 700 facing display device 140 of television 10; built-in microphone 130 can accordingly collect the voice uttered by user 700. Built-in microphone 130 may be provided inside the enclosure of television 10, or, as illustrated in the example of FIG. 1, may be installed outside the enclosure of television 10.
• Display device 140, which is, for example, a liquid crystal display, may also be a plasma display, an organic EL (Electro Luminescence) display, or the like. Display device 140 is controlled by a display controller (not illustrated), and displays an image based on signals such as an external input image signal and a broadcast signal received by tuner 160.
  • Transmitter-receiver 150 is connected to network 40, and is configured to communicate via network 40 with an external device (for example, voice recognition server 50) connected to network 40.
  • Tuner 160 is configured to receive a television broadcast signal of terrestrial broadcasting or satellite broadcasting via an antenna (not illustrated). Tuner 160 may be configured to receive the television broadcast signal transmitted via a private cable.
  • Storage device 171, which is, for example, a nonvolatile semiconductor memory, may be a device such as a volatile semiconductor memory and a hard disk. Storage device 171 stores information (data), a program, and the like used for control of each unit of television 10.
  • Network 40, which is, for example, the Internet, may be another network.
• Voice recognition server 50 is an example of "a second voice recognizer". Voice recognition server 50 is a server (a dictionary server on a cloud) connected to television 10 via network 40. Voice recognition server 50 includes recognition dictionary 55, and is configured to receive voice information transmitted via network 40 from television 10. Recognition dictionary 55 is a database for associating the voice information with voice recognition models. Voice recognition server 50 compares the received voice information with the voice recognition models in recognition dictionary 55, to confirm whether the received voice information includes voice information corresponding to the voice recognition models registered in recognition dictionary 55. When it does, voice recognition server 50 selects the character string represented by the voice recognition models. In this way, voice recognition server 50 converts the received voice information into a character string. It is to be noted that this character string may consist of a plurality of characters or of a single character. Then, voice recognition server 50 transmits character string information representing the converted character string to television 10 via network 40 as a result of voice recognition. This character string information is an example of "second information".
  • Voice recognition processing apparatus 100 includes voice acquirer 101, voice recognizer 102, recognition result acquirer 103, recognition result determiner 104, command processor 106, and storage device 170.
• Storage device 170 is, for example, a nonvolatile semiconductor memory, and can write and read data arbitrarily. Storage device 170 may also be a device such as a volatile semiconductor memory or a hard disk. Storage device 170 also stores information (for example, recognition dictionary 175) that is referred to by voice recognizer 102 and recognition result determiner 104. Recognition dictionary 175 is an example of "a dictionary". Recognition dictionary 175 is a database for associating the voice information with the voice recognition models. In addition, an exclusion object list is also registered in recognition dictionary 175. Details of the exclusion object list will be described later. It is to be noted that storage device 170 and storage device 171 may be integrally formed.
  • Voice acquirer 101 acquires a voice signal generated by the voice uttered by user 700, converts the voice signal into the voice information, and outputs the voice information to voice recognizer 102.
  • Voice recognizer 102 is an example of "a first voice recognizer". Voice recognizer 102 converts the voice information into the character string information, and outputs the character string information to recognition result acquirer 103 as a result of voice recognition. This character string information is an example of "first information". In addition, voice recognizer 102 transmits the voice information acquired from voice acquirer 101, from transmitter-receiver 150 via network 40 to voice recognition server 50.
• Voice recognition server 50 recognizes the voice information received from television 10 with reference to recognition dictionary 55, and returns a result of voice recognition to television 10.
• Recognition result acquirer 103 is an example of "a selector". On receipt of the result (the first information) of voice recognition that is output from voice recognizer 102 and the result (the second information) of voice recognition returned from voice recognition server 50, recognition result acquirer 103 compares the first information with the second information to select either one. Then, recognition result acquirer 103 outputs the selected one to recognition result determiner 104.
  • Recognition result determiner 104 determines whether to reject or execute (accept) the result of voice recognition that is output from recognition result acquirer 103. Details of this determination will be described later. Then, based on the determination, recognition result determiner 104 outputs the result of voice recognition to command processor 106 or voice acquirer 101.
  • Based on the output (the result of voice recognition that is determined to be executed) from recognition result determiner 104, command processor 106 performs command processing (for example, control of television 10). Command processor 106 is an example of "a processor", and this command processing is an example of "processing".
  • FIG. 3 is a block diagram illustrating a configuration example of recognition result determiner 104 of voice recognition processing apparatus 100 according to the example.
  • Recognition result determiner 104 includes exclusion vocabulary rejecter 1042 and acceptance rejection transmitter 1045. Detailed operations of these units will be described later.
  • [1-2. Operation]
  • Next, operations of voice recognition processing apparatus 100 of television 10 according to the present example will be described.
  • FIG. 4 is a flow chart illustrating an operation example of voice recognition processing apparatus 100 according to the example.
  • Voice acquirer 101 acquires the voice signal generated from the voice uttered by user 700 from built-in microphone 130 of television 10 (step S101).
  • Voice acquirer 101 may acquire the voice signal from a microphone incorporated in a remote control (not illustrated) or a microphone incorporated in a mobile terminal (not illustrated) via a wireless communicator (not illustrated).
  • Then, voice acquirer 101 converts the voice signal into the voice information that can be used for various types of downstream processing, and outputs the voice information to voice recognizer 102. It is to be noted that, when the voice signal is a digital signal, voice acquirer 101 may use the voice signal as it is as the voice information.
• Voice recognizer 102 converts the voice information acquired from voice acquirer 101 into character string information. Then, voice recognizer 102 outputs the character string information to recognition result acquirer 103 as a result of voice recognition. In addition, voice recognition server 50 converts the voice information acquired from television 10 via network 40 into character string information, and returns the character string information to television 10 as a result of voice recognition (step S102).
• Specifically, based on the voice information acquired from voice acquirer 101, voice recognizer 102 refers to an acceptance object list in recognition dictionary 175 previously stored in storage device 170. Then, voice recognizer 102 compares the voice information with the voice recognition models registered in the acceptance object list.
• The voice recognition models refer to information for associating the voice information with the character string information. In voice recognition, the voice information is compared with each of the plurality of voice recognition models, and one voice recognition model that agrees with or is similar to the voice information is selected. The character string information associated with that voice recognition model then becomes the result of voice recognition of the voice information. Voice recognition models related to operations of television 10 are registered in the acceptance object list: for example, instructions to television 10 (channel change, volume change, and the like), functions of television 10 (a network connection function and the like), unit names of television 10 (power supply, channel, and the like), and instructions to content displayed on the screen of television 10 (zoom in, zoom out, scroll, and the like).
  • It is to be noted that, in addition to the acceptance object list, an exclusion object list (not illustrated in FIG. 2) described later is also registered in recognition dictionary 175 stored in storage device 170.
  • Voice recognizer 102 compares the voice information with the voice recognition models registered in the acceptance object list. Then, when the voice information acquired from voice acquirer 101 includes information corresponding to the voice recognition model registered in the acceptance object list, voice recognizer 102 outputs the character string information associated with the voice recognition model to recognition result acquirer 103 as a result of voice recognition.
• Voice recognizer 102 calculates a recognition score when comparing the voice information with the voice recognition models. The recognition score is a numerical value that represents likelihood, and is an indicator of the extent to which the voice information agrees with or is similar to a voice recognition model; the larger the numerical value, the higher the degree of similarity. Voice recognizer 102 compares the voice information with the voice recognition models, selects a plurality of voice recognition models as candidates, and calculates a recognition score for each of them. It is to be noted that this recognition score may be calculated by a commonly known method. Then, voice recognizer 102 selects the voice recognition model whose recognition score is highest and is equal to or higher than a preset threshold value, and outputs the character string information corresponding to the selected voice recognition model as a result of voice recognition. It is to be noted that voice recognizer 102 may output, along with the character string information, the recognition score related to the character string information to recognition result acquirer 103.
  • In this way, voice recognizer 102 converts the voice information into the character string information. It is to be noted that voice recognizer 102 may convert the voice information into information other than the character string information to output the converted information. In addition, if there is no voice recognition model having a recognition score that is equal to or higher than the threshold value, voice recognizer 102 may output information representing inability to recognize the voice.
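• This threshold-gated, highest-score selection can be illustrated with the following toy sketch in Python. The names and the string-similarity stand-in are hypothetical: a real recognizer scores acoustic features against voice recognition models, and the patent leaves the scoring method to commonly known techniques.

```python
from difflib import SequenceMatcher

# Toy stand-in for scoring voice information against a voice
# recognition model; illustrative only.
def recognition_score(voice_info, model):
    return SequenceMatcher(None, voice_info, model).ratio()

def recognize(voice_info, acceptance_list, threshold=0.6):
    """Pick the model with the highest score; if no score reaches
    the threshold, report inability to recognize the voice."""
    scored = [(recognition_score(voice_info, model), text)
              for model, text in acceptance_list.items()]
    best_score, best_text = max(scored)
    if best_score < threshold:
        return None, best_score      # unable to recognize the voice
    return best_text, best_score     # character string information + score

# Hypothetical acceptance object list: model -> character string.
acceptance_list = {"boryumu appu": "volume up", "channeru appu": "channel up"}
print(recognize("boryuumu appu", acceptance_list))
```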
  • In addition, voice recognizer 102 transmits the voice information acquired from voice acquirer 101, from transmitter-receiver 150 via network 40 to voice recognition server 50.
  • Based on the voice information received from television 10, voice recognition server 50 refers to recognition dictionary 55. Then, voice recognition server 50 compares the voice information with the voice recognition models in recognition dictionary 55 to convert the voice information into character string information.
• Voice recognition server 50 calculates a recognition score when comparing the received voice information with the voice recognition models in recognition dictionary 55. This recognition score is a numerical value representing likelihood, similar to the recognition score calculated by voice recognizer 102, and is calculated by a similar method. In a similar manner to voice recognizer 102, voice recognition server 50 selects a plurality of voice recognition models as candidates based on the received voice information, and selects one voice recognition model from among the candidates based on the recognition score. Then, voice recognition server 50 returns the character string information associated with that voice recognition model to television 10 as a result of voice recognition. Voice recognition server 50 may transmit, along with the character string information, the recognition score related to the character string information to television 10.
• Voice recognition server 50 is configured to collect various terms through network 40 and to register those terms in recognition dictionary 55. Accordingly, voice recognition server 50 can hold more voice recognition models than recognition dictionary 175 included in television 10. Therefore, when user 700 utters a word that is irrelevant to functions of television 10 or to instructions to television 10 (for example, in conversation among users or in a monologue), the recognition score produced by voice recognition server 50 for that voice is likely to be higher than the recognition score produced when voice recognizer 102 of television 10 performs voice recognition on the same voice.
  • On receipt of the result of voice recognition from voice recognition server 50 via network 40, transmitter-receiver 150 outputs the result of voice recognition to recognition result acquirer 103.
  • On receipt of the result of voice recognition from each of voice recognizer 102 and voice recognition server 50, recognition result acquirer 103 selects one of the voice recognition results in accordance with a determination rule (step S103).
• This determination rule may be, for example, to compare the recognition score associated with the result of voice recognition received from voice recognizer 102 with the recognition score associated with the result of voice recognition received from voice recognition server 50, and to select the voice recognition result with the higher recognition score. Recognition result acquirer 103 outputs the selected voice recognition result to recognition result determiner 104.
  • It is to be noted that, when recognition result acquirer 103 can receive the result of voice recognition only from one of voice recognizer 102 and voice recognition server 50, recognition result acquirer 103 may skip processing of step S103 and may output the received result of voice recognition as it is.
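• A minimal sketch of this selection rule, including the fallback when only one result is available, might look as follows (hypothetical names; the patent does not prescribe an implementation):

```python
# Step S103 sketch: each result is a (character_string, score) tuple,
# or None when that recognizer returned nothing.
def select_result(local_result, server_result):
    if local_result is None or server_result is None:
        return local_result or server_result   # use whichever exists
    # Otherwise keep the result with the higher recognition score.
    return local_result if local_result[1] >= server_result[1] else server_result

print(select_result(("volume up", 0.72), ("volume app", 0.81)))
# -> ('volume app', 0.81): the server result wins on score
print(select_result(("volume up", 0.72), None))
# -> ('volume up', 0.72): fallback to the only available result
```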
  • Exclusion vocabulary rejecter 1042 of recognition result determiner 104 illustrated in FIG. 3 determines whether the result of voice recognition that is output from recognition result acquirer 103 agrees with any character string information in a vocabulary (exclusion vocabulary) registered in an exclusion object list (step S104).
• The exclusion object list refers to a list in which words (vocabulary) determined not to be used for voice operation of television 10 are registered as the exclusion vocabulary. The exclusion vocabulary is, for example, vocabulary other than the vocabulary registered in recognition dictionary 175 of storage device 170 as the acceptance object list. The exclusion object list, which is previously registered in recognition dictionary 175 of storage device 170, may be configured so that new exclusion vocabulary can be added arbitrarily. It is to be noted that accuracy of voice recognition can be improved if the exclusion object list registers, as exclusion vocabulary, words whose pronunciation is similar to that of words user 700 utters during voice operation of television 10 but which have no relationship with that voice operation.
  • In step S104, exclusion vocabulary rejecter 1042 compares the exclusion object list in recognition dictionary 175 stored in storage device 170 with the character string information that is the result of voice recognition that is output from recognition result acquirer 103. Exclusion vocabulary rejecter 1042 examines presence of character string information that agrees with a word in the exclusion vocabulary included in the exclusion object list. Then, exclusion vocabulary rejecter 1042 determines that the character string information that agrees with a word included in the exclusion vocabulary is information to be rejected, sets a flag, and outputs the character string information to acceptance rejection transmitter 1045 (Yes).
  • If a flag is set in the character string information that is input from exclusion vocabulary rejecter 1042, acceptance rejection transmitter 1045 outputs the character string information to voice acquirer 101 as rejection information. On receipt of the rejection information, voice acquirer 101 prepares for voice acquisition in preparation for next voice recognition (step S106). Therefore, command processor 106 performs no processing on the character string information (rejection information) in which a flag is set.
• In step S104, exclusion vocabulary rejecter 1042 determines that character string information that does not agree with any word included in the exclusion vocabulary is information to be accepted (executed), and outputs the character string information to acceptance rejection transmitter 1045 without setting a flag (No).
  • If no flag is set in the character string information that is input from exclusion vocabulary rejecter 1042, acceptance rejection transmitter 1045 outputs the character string information to command processor 106. Command processor 106 executes command processing in accordance with an instruction represented by the character string information received from acceptance rejection transmitter 1045 (step S105).
  • For example, when the character string information includes command information regarding control of television 10, such as channel change and volume change, command processor 106 issues an instruction to a controller (not illustrated) of television 10 so that an operation corresponding to the command information may be executed in television 10.
  • After completion of step S105, command processor 106 transmits a signal indicating that command processing has been completed to voice acquirer 101. On receipt of the signal, voice acquirer 101 prepares for voice acquisition in preparation for next voice recognition (step S106).
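• The flag-and-route flow of steps S104 to S106 might be sketched as follows in Python (the exclusion entry and function names are illustrative assumptions, not the registered vocabulary of the patent):

```python
# Hypothetical sketch of steps S104-S106: results that agree with the
# exclusion vocabulary are flagged and routed back to the voice
# acquirer; unflagged results go on to command processing.
EXCLUSION_VOCABULARY = {"volume app"}   # illustrative entry only

def execute_command(text):
    print(f"command processor 106 executes: {text}")   # step S105

def process_result(text):
    flagged = text in EXCLUSION_VOCABULARY             # step S104
    if not flagged:
        execute_command(text)
    # In both cases the voice acquirer then prepares for the next
    # voice acquisition (step S106).
    return "rejected" if flagged else "accepted"

print(process_result("volume up"))    # accepted -> command processing
print(process_result("volume app"))   # rejected -> no command processing
```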
  • [1-3. Effect and Others]
  • As described above, in the present example, voice recognition processing apparatus 100 includes voice acquirer 101, voice recognizer 102 that is one example of the first voice recognizer, storage device 170, and recognition result determiner 104. Voice acquirer 101 is configured to acquire the voice uttered by user 700 and to output the voice information. Voice recognizer 102 is configured to convert the voice information into the character string information that is an example of the first information. Storage device 170 previously stores recognition dictionary 175 in which the exclusion vocabulary is registered. Recognition dictionary 175 is an example of a dictionary. Recognition result determiner 104 compares the character string information with the exclusion vocabulary, and determines whether the character string information includes a word that agrees with a word included in the exclusion vocabulary. Then, when the character string information includes the word that agrees with a word included in the exclusion vocabulary, recognition result determiner 104 determines that the character string information is information to be rejected. When the character string information does not include the word that agrees with a word included in the exclusion vocabulary, recognition result determiner 104 determines that the character string information is information to be executed.
  • In addition, voice recognition processing apparatus 100 may further include voice recognition server 50 that is an example of the second voice recognizer, and recognition result acquirer 103 that is an example of the selector. In this case, voice recognition server 50 is configured to convert the voice information into the character string information that is an example of the second information. Recognition result acquirer 103 is configured to select and output one of the character string information that voice recognizer 102 outputs and the character string information that voice recognition server 50 outputs. Then, recognition result determiner 104 determines whether to reject or execute the character string information selected by recognition result acquirer 103.
  • Voice recognition server 50 that is an example of the second voice recognizer may be installed on network 40. Voice recognition processing apparatus 100 may include transmitter-receiver 150 configured to communicate with voice recognition server 50 via network 40.
  • Voice recognition processing apparatus 100 configured in this way can discriminate a voice that user 700 utters for voice operation from a voice of conversation among users 700 or a monologue of user 700 with good accuracy, reduce false recognition, and improve accuracy of voice recognition.
  • For example, it is assumed that user 700 utters a word having pronunciation similar to pronunciation of a word uttered during voice operation of television 10, and having no relationship with voice operation of television 10. At this time, as a result of voice recognition based on the voice, voice recognizer 102 is likely to output character string information registered in the acceptance object list (that is, likely to perform false recognition).
  • Meanwhile, voice recognition server 50 has recognition dictionary 55 that is likely to have more registered voice recognition models (vocabulary) than recognition dictionary 175 because registered information is updated through network 40. Accordingly, voice recognition server 50 is likely to perform more accurate voice recognition of such a voice.
• Therefore, when voice recognizer 102 falsely recognizes such an easily misrecognized voice, the recognition score associated with the character string information output from voice recognition server 50 for the same voice is likely to have a larger numerical value than the recognition score associated with the character string information output from voice recognizer 102. Consequently, the character string information output from voice recognition server 50 is likely to be selected by recognition result acquirer 103.
  • Then, if a vocabulary that corresponds to this character string information has been registered in the exclusion object list in recognition dictionary 175 as the exclusion vocabulary, exclusion vocabulary rejecter 1042 determines that this character string information is information to be rejected.
  • In this way, the present example makes it possible to improve accuracy of voice recognition of a voice that is likely to be falsely recognized by voice recognizer 102, and to prevent command processor 106 from performing false command processing due to false recognition.
• In addition, although voice recognizer 102 is likely to recognize a voice falsely in cases where the voice uttered by user 700 is not sufficiently loud or where there is much noise, accuracy of voice recognition can be improved even in such cases.
  • It is to be noted that, if recognition dictionary 175 included in voice recognizer 102 is configured so that registered information can be updated through network 40 in a similar manner to recognition dictionary 55 of voice recognition server 50, voice recognition processing system 11 may be configured so that voice recognition may be performed only by television 10. Even in such a configuration, operations of recognition result determiner 104 can reduce false recognition and improve accuracy of voice recognition.
  • EXEMPLARY EMBODIMENT OF THE INVENTION
  • Next, an exemplary embodiment of the present invention will be described with reference to FIG. 5 to FIG. 7. The exemplary embodiment describes a method for increasing accuracy of voice recognition of a word that user 700 is likely to utter (for example, a word regarding operations, functions, etc. of television 10).
  • [2-1. Configuration]
  • FIG. 5 is a block diagram illustrating a configuration example of voice recognition processing system 21 according to the exemplary embodiment.
  • Voice recognition processing system 21 according to the present exemplary embodiment includes television 20 that is an example of a display apparatus, and voice recognition server 50. Since voice recognition server 50 is substantially identical to voice recognition server 50 described in the example, description will be omitted.
  • Television 20 includes voice recognition processing apparatus 200, display device 140, transmitter-receiver 150, tuner 160, storage device 171, and built-in microphone 130. Voice recognition processing apparatus 200 includes voice acquirer 201, voice recognizer 102, recognition result acquirer 103, recognition result determiner 204, command processor 106, and storage device 270.
  • It is to be noted that components performing operations substantially identical to operations of components included in television 10 described in the example are provided with reference symbols identical to reference symbols of the example, and description will be omitted.
• In addition, it is assumed that recognition dictionary 175 in storage device 270 has registration of an acceptance object list and an exclusion object list similar to those described in the example.
  • Voice recognition processing apparatus 200 according to the exemplary embodiment differs from voice recognition processing apparatus 100 described in the example in operations in voice acquirer 201 and recognition result determiner 204.
• In a similar manner to voice acquirer 101 described in the example, voice acquirer 201 acquires a voice signal generated from a voice uttered by user 700 from built-in microphone 130. However, unlike voice acquirer 101 described in the example, voice acquirer 201 creates utterance duration information and utterance form information based on the acquired voice signal.
• The utterance duration information refers to information indicating the length of time for which user 700 utters. Voice acquirer 201 can create the utterance duration information by, for example, measuring the length of time during which a voice with volume equal to or higher than a preset threshold continues. Voice acquirer 201 may create the utterance duration information by another method.
• The utterance form information refers to information indicating the lengths of the silent periods, or periods that can be substantially considered as silent, that occur before and after an utterance of user 700. Voice acquirer 201 can create the utterance form information by, for example, treating any period in which the volume is lower than a preset threshold as silence and measuring the lengths of the silent periods that occur before and after the utterance. Voice acquirer 201 may create the utterance form information by another method.
• Voice acquirer 201 adds the utterance duration information and the utterance form information to the voice information, and outputs them to voice recognizer 102; a sketch of one way such measurements might be made follows.
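• By way of illustration only, the following Python sketch derives both pieces of information from per-frame volume levels using a single threshold. The framing, threshold, and names are assumptions for the sketch; the patent allows any measurement method.

```python
# Hypothetical sketch: derive utterance duration information and
# utterance form information from per-frame volume levels.
def analyze(volumes, threshold, frame_sec=0.01):
    voiced = [v >= threshold for v in volumes]
    if not any(voiced):
        return None                      # no utterance detected
    first = voiced.index(True)
    last = len(voiced) - 1 - voiced[::-1].index(True)
    return {
        "utterance_duration": (last - first + 1) * frame_sec,  # duration info
        "pause_before": first * frame_sec,                     # form info
        "pause_after": (len(voiced) - 1 - last) * frame_sec,   # form info
    }

# 0.5 s of near silence, 0.3 s of speech, 0.7 s of near silence.
volumes = [0.01] * 50 + [0.8] * 30 + [0.01] * 70
print(analyze(volumes, threshold=0.1))
# {'utterance_duration': 0.3, 'pause_before': 0.5, 'pause_after': 0.7}
```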
  • A voice, such as conversation among a plurality of users 700 and a monologue of user 700, may include a word in a vocabulary (acceptance object vocabulary) registered in the acceptance object list. Then, this voice may be collected by built-in microphone 130, and the voice information based on this voice may be input into voice recognizer 102. In such a case, voice recognizer 102 may perform false voice recognition based on such voice information, and command processor 106 may perform false command processing based on the false recognition, although user 700 does not have an intention to perform voice operation of television 20. In order to reduce occurrence of such false recognition, in addition to the exclusion object list described in the example, the present exemplary embodiment performs voice recognition using "the utterance duration information" and "the utterance form information".
  • Details of the utterance duration information and the utterance form information will be described later. In addition, voice recognizer 102 transmits the voice information to which the utterance duration information and the utterance form information are added, to voice recognition server 50 via transmitter-receiver 150 and network 40.
  • [2-2. Operation]
  • Next, a configuration and operation of recognition result determiner 204 included in voice recognition processing apparatus 200 of television 20 according to the present exemplary embodiment will be described with reference to FIG. 6 and FIG. 7.
  • FIG. 6 is a block diagram illustrating a configuration example of recognition result determiner 204 of voice recognition processing apparatus 200 according to the exemplary embodiment.
  • Recognition result determiner 204 includes exclusion vocabulary rejecter 1042, utterance duration determiner 2043, utterance form determiner 2044, and acceptance rejection transmitter 1045.
  • FIG. 7 is a flow chart illustrating an operation example of recognition result determiner 204 according to the exemplary embodiment.
  • As in step S103 described in the example, on receipt of results of voice recognition from each of voice recognizer 102 and voice recognition server 50, recognition result acquirer 103 selects one of the voice recognition results in accordance with a determination rule (step S103). This determination rule is substantially identical to the determination rule described in the example.
  • As in step S104 described in the example, exclusion vocabulary rejecter 1042 of recognition result determiner 204 determines whether the result of voice recognition that is output from recognition result acquirer 103 includes a word that agrees with a word included in a vocabulary (exclusion vocabulary) registered in the exclusion object list (step S104).
  • In step S104, in a similar manner to exclusion vocabulary rejecter 1042 described in the example, exclusion vocabulary rejecter 1042 compares the exclusion object list in recognition dictionary 175 stored in storage device 270 with character string information that is the result of voice recognition that is output from recognition result acquirer 103, to examine presence of character string information that agrees with a word in the exclusion vocabulary included in the exclusion object list. Then, exclusion vocabulary rejecter 1042 determines that the character string information that agrees with a word included in the exclusion vocabulary is information to be rejected, sets a flag, and outputs the character string information to acceptance rejection transmitter 1045 (Yes).
  • In a similar manner to acceptance rejection transmitter 1045 described in the example, acceptance rejection transmitter 1045 outputs the flagged character string information to voice acquirer 201 as rejection information. On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition in preparation for next voice recognition (step S106).
• On the other hand, in step S104, exclusion vocabulary rejecter 1042 outputs character string information that does not agree with any word included in the exclusion vocabulary to utterance duration determiner 2043 as it is, without setting a flag (No).
  • Utterance duration determiner 2043 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on utterance duration (step S200).
  • Here, "the utterance duration" used by utterance duration determiner 2043 will be described. The utterance duration is a length of time of utterance. Here, an utterance of user 700 for performing voice operation of television 20 is described as "an utterance for control", while an utterance that is not for a purpose of voice operation of television 20 (for example, conversation among users 700 and a monologue of user 700) is described as "an utterance for conversation".
  • In the present exemplary embodiment, utterance duration data (data indicating a length of time required for utterance) corresponding to each word of the acceptance object vocabulary included in the acceptance object list registered in recognition dictionary 175 is previously stored in storage device 270. This allows utterance duration determiner 2043 to calculate the utterance duration of a word included in the acceptance object vocabulary, selected as a result of voice recognition. It is to be noted that this utterance duration data preferably has a margin (range) in consideration of differences of utterance speed among individuals and the like.
• It has been confirmed that "the utterance for control" consists of about one or two words in many cases. In addition, it is likely that all of these words (vocabulary) are words included in the acceptance object vocabulary registered in the acceptance object list. Therefore, it is likely that, after voice recognition of "the utterance for control", the utterance duration based on the utterance duration data of the words in the acceptance object vocabulary selected as a result of voice recognition is close to the utterance duration of "the utterance for control" indicated by the utterance duration information created by voice acquirer 201. When a plurality of words included in the acceptance object vocabulary is selected as a result of voice recognition, the utterance duration is calculated based on the utterance duration data corresponding to all of those words.
  • On the other hand, "the utterance for conversation" includes a plurality of words in many cases, and those words (vocabularies) are unlikely to include a word corresponding to the acceptance object vocabulary registered in the acceptance object list. Therefore, it is likely that, after voice recognition of "the utterance for conversation", the utterance duration based on the utterance duration data of the word included in the acceptance object vocabulary selected as a result of voice recognition becomes shorter than the utterance duration of "the utterance for conversation" indicated by the utterance duration information created by voice acquirer 201.
  • Thus, voice recognition processing apparatus 200 can determine whether the voice that is an object of voice recognition is based on "the utterance for control" or "the utterance for conversation", by comparing the utterance duration based on the utterance duration data of the word(s) included in the acceptance object vocabulary selected by voice recognizer 102 as a result of voice recognition with the utterance duration based on the utterance duration information created by voice acquirer 201. In the present exemplary embodiment, utterance duration determiner 2043 makes this determination.
  • In step S200, based on the word included in the acceptance object vocabulary that is output from recognition result acquirer 103 as a result of voice recognition, utterance duration determiner 2043 reads the utterance duration data which is associated with the word included in the acceptance object vocabulary from storage device 270. When receiving a plurality of words included in the acceptance object vocabulary, utterance duration determiner 2043 reads the utterance duration data regarding all of the words from storage device 270. Then, utterance duration determiner 2043 calculates the utterance duration based on the read utterance duration data. Then, utterance duration determiner 2043 compares a result of the calculation with the utterance duration indicated by the utterance duration information created by voice acquirer 201. While utterance duration determiner 2043 may compare the calculated utterance duration with the utterance duration indicated by the utterance duration information as it is, utterance duration determiner 2043 may set a range for determination based on the calculated utterance duration. Here, an example of setting a range for comparison will be described.
  • In step S200, when the utterance duration indicated by the utterance duration information created by voice acquirer 201 is outside the range that is set based on the calculated utterance duration (No), utterance duration determiner 2043 determines that the unflagged character string information that is output from exclusion vocabulary rejecter 1042 is based on "the utterance for conversation", and that the unflagged character string information is to be rejected. Utterance duration determiner 2043 sets a flag in this character string information, and outputs the flagged character string information to acceptance rejection transmitter 1045.
  • If a flag is set in the character string information that is input from utterance duration determiner 2043, acceptance rejection transmitter 1045 outputs the character string information to voice acquirer 201 as rejection information. On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition in preparation for next voice recognition (step S106).
  • On the other hand, in step S200, when the utterance duration indicated by the utterance duration information created by voice acquirer 201 is within the range that is set based on the calculated utterance duration (Yes), utterance duration determiner 2043 determines that the unflagged character string information that is output from exclusion vocabulary rejecter 1042 is based on "the utterance for control". Utterance duration determiner 2043 avoids setting a flag in this character string information, and outputs the character string information as it is to utterance form determiner 2044.
• It is to be noted that utterance duration determiner 2043 may set the range for determination by, for example, multiplying the calculated utterance duration by a predetermined numerical value (for example, 1.5). This numerical value is only an example, and another numerical value may be used. Alternatively, utterance duration determiner 2043 may set the range for determination by, for example, adding a predetermined value to the calculated utterance duration, or may set the range by another method; one possible form of this check is sketched below.
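• A minimal sketch of step S200 under these assumptions follows. The per-word durations are invented for illustration; the 1.5 multiplier mirrors the example given above.

```python
# Hypothetical utterance duration data (seconds per acceptance word),
# standing in for the data previously stored in storage device 270.
UTTERANCE_DURATION_DATA = {"volume": 0.5, "up": 0.3}

def duration_accepts(recognized_words, measured_duration, factor=1.5):
    """Step S200 sketch: accept when the measured utterance duration
    falls within the range set from the calculated duration."""
    expected = sum(UTTERANCE_DURATION_DATA[w] for w in recognized_words)
    return measured_duration <= expected * factor

print(duration_accepts(["volume", "up"], 0.9))  # True: utterance for control
print(duration_accepts(["volume", "up"], 3.0))  # False: likely conversation
```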
  • Utterance form determiner 2044 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from utterance duration determiner 2043, based on an utterance form (step S201).
  • Here, "the utterance form" used by utterance form determiner 2044 will be described. This "utterance form" refers to a silent period that occurs immediately before user 700 utters, or to a period that can be substantially considered as silent (hereinafter described as "a pause period"), and to a pause period that occurs immediately after user 700 finishes utterance.
  • A result of comparison between "the utterance for control" and "the utterance for conversation" has verified that there is a difference in the utterance form.
  • In a case of "the utterance for control", long pause periods exist before and after user 700 utters, as compared with "the utterance for conversation". The pause period that occurs immediately before user 700 utters is a period for preparation for utterance. The pause period that occurs immediately after user 700 finishes utterance is a period for waiting for an operation (operation based on voice operation) corresponding to uttered information to be started.
  • On the other hand, in a case of "the utterance for conversation", such pause periods are relatively short before and after utterance of user 700.
  • Therefore, it is possible to determine whether a voice that is an object of voice recognition is based on "the utterance for control" or based on "the utterance for conversation" by detecting lengths of the pause periods before and after utterance. Then, in the present exemplary embodiment, utterance form determiner 2044 makes this determination based on the utterance form information created by voice acquirer 201.
  • In step S201, based on the word included in the acceptance object vocabulary that is output from utterance duration determiner 2043, utterance form determiner 2044 reads utterance form data which is associated with the word included in the acceptance object vocabulary from storage device 270. This utterance form data refers to data indicating the lengths of respective pause periods that occur before and after utterance of the word included in the acceptance object vocabulary. In the present exemplary embodiment, the utterance form data which is associated with the word included in the acceptance object vocabulary is previously stored in storage device 270. Then, utterance form determiner 2044 compares the utterance form data that is read from storage device 270 with the utterance form information (the utterance form information created by voice acquirer 201) added to the character string information that is input from utterance duration determiner 2043.
• Specifically, utterance form determiner 2044 compares the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 with the lengths of the pause periods before and after utterance indicated by the utterance form data that is read from storage device 270, respectively. It is to be noted that utterance form determiner 2044 may compare the utterance form information created by voice acquirer 201 as it is with the utterance form data that is read from storage device 270, or may set a range for determination based on the utterance form data that is read from storage device 270. It is also to be noted that, when receiving a plurality of words included in the acceptance object vocabulary, utterance form determiner 2044 may read the utterance form data regarding all of the words from storage device 270 and select the largest value. Alternatively, utterance form determiner 2044 may select the smallest value, or may calculate an average value or a median value.
• In step S201, when at least one of the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 is shorter than the corresponding length indicated by the utterance form data that is read from storage device 270 (No), utterance form determiner 2044 determines that the unflagged character string information that is output from utterance duration determiner 2043 is based on "the utterance for conversation", sets a flag in this character string information, and outputs the flagged character string information to acceptance rejection transmitter 1045.
  • When a flag is set in the character string information that is input from utterance form determiner 2044, acceptance rejection transmitter 1045 outputs the character string information to voice acquirer 201 as rejection information. On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition in preparation for next voice recognition (step S106).
• On the other hand, in step S201, when both of the lengths of the pause periods before and after utterance indicated by the utterance form information created by voice acquirer 201 are equal to or longer than the corresponding lengths indicated by the utterance form data that is read from storage device 270 (Yes), utterance form determiner 2044 determines that the unflagged character string information that is output from utterance duration determiner 2043 is based on "the utterance for control", avoids setting a flag in this character string information, and outputs the character string information as it is to acceptance rejection transmitter 1045.
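• Again purely as an illustration, the sketch below shows one possible rendering of the pause-period check of step S201 under the same hypothetical assumptions as the previous sketch; for a plurality of acceptance-object words it uses the "larger value" option mentioned above.

```python
# Illustrative sketch only; the stored pause data and check_pauses() are
# hypothetical.

# Hypothetical utterance form data stored per word: lengths (seconds) of the
# pause periods (before utterance, after utterance), as in storage device 270.
UTTERANCE_FORM_DATA = {
    "channel": (0.5, 0.4),
    "up": (0.4, 0.5),
}

def check_pauses(accepted_words, pause_before, pause_after):
    """Return True ("utterance for control") when both measured pause
    periods are at least as long as the stored reference pauses; False
    ("utterance for conversation") when at least one is shorter."""
    data = [UTTERANCE_FORM_DATA[w] for w in accepted_words]
    # For plural words, select the larger stored value; the smaller value,
    # an average, or a median are equally possible choices.
    ref_before = max(before for before, _ in data)
    ref_after = max(after for _, after in data)
    return pause_before >= ref_before and pause_after >= ref_after

print(check_pauses(["channel", "up"], 0.6, 0.6))  # True  -> no flag is set
print(check_pauses(["channel", "up"], 0.1, 0.6))  # False -> flag is set
```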
• Accordingly, the unflagged character string information received by acceptance rejection transmitter 1045 is character string information in which a flag has been set by none of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044. In other words, if a flag is not set in the character string information that is input into acceptance rejection transmitter 1045, the character string information has been determined to be accepted (to undergo command processing) by all of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044. On the other hand, when a flag is set in the character string information that is input into acceptance rejection transmitter 1045, the character string information has been determined to be rejection information by at least one of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044.
  • Acceptance rejection transmitter 1045 outputs the unflagged character string information to command processor 106 as it is as character string information to be accepted (executed).
  • Command processor 106 executes command processing in accordance with an instruction indicated by the character string information received from acceptance rejection transmitter 1045 (step S105).
  • After completion of step S105, command processor 106 transmits, to voice acquirer 201, a signal indicating that command processing is completed. On receipt of the signal, voice acquirer 201 prepares for voice acquisition in preparation for next voice recognition (step S106).
  • In step S106, the flagged character string information is output as rejection information from acceptance rejection transmitter 1045 to voice acquirer 201. On receipt of the rejection information, voice acquirer 201 prepares for voice acquisition in preparation for next voice recognition.
• It is to be noted that either of step S200 and step S201 may be performed first.
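• As an illustrative summary, the sketch below chains the three determinations into one accept/reject decision, a failed check standing in for setting a flag. It reuses the hypothetical check_duration() and check_pauses() of the previous sketches; the Utterance record and the exclusion words are likewise hypothetical.

```python
# Illustrative sketch of the flag-based flow through exclusion vocabulary
# rejecter 1042, utterance duration determiner 2043, and utterance form
# determiner 2044. Assumes check_duration() and check_pauses() from the
# sketches above; all names are hypothetical.

from dataclasses import dataclass

EXCLUSION_VOCABULARY = {"probably", "want"}  # hypothetical exclusion words

@dataclass
class Utterance:
    words: list          # recognized character string, split into words
    duration: float      # utterance duration information (seconds)
    pause_before: float  # utterance form information (seconds)
    pause_after: float

def determine(u: Utterance) -> bool:
    """True: accept (execute command processing); False: reject."""
    # Exclusion vocabulary rejecter 1042.
    if any(w in EXCLUSION_VOCABULARY for w in u.words):
        return False
    # Utterance duration determiner 2043 and utterance form determiner 2044;
    # as noted above, either of these two checks may be performed first.
    if not check_duration(u.words, u.duration):
        return False
    if not check_pauses(u.words, u.pause_before, u.pause_after):
        return False
    # Unflagged: accepted by all three units, passed on to command processing.
    return True

u = Utterance(["channel", "up"], duration=1.1, pause_before=0.6, pause_after=0.6)
print(determine(u))  # True: accepted by all three determiners
```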
  • [2-3. Effect and Others]
  • As described above, in the present exemplary embodiment, voice recognition processing apparatus 200 includes voice acquirer 201, recognition result determiner 204, and storage device 270. Voice acquirer 201 measures the length of time uttered by user 700 based on the acquired voice to create the utterance duration information. In addition, voice acquirer 201 measures the lengths of the silent periods that occur before and after utterance of user 700 based on the acquired voice to create the utterance form information. Storage device 270 previously stores the utterance duration data representing the time required for utterance and the utterance form data representing the lengths of the silent periods that occur before and after utterance. Regarding the character string information that is determined not to include a word that agrees with a word included in the exclusion vocabulary and to be executed, recognition result determiner 204 reads the utterance duration data from storage device 270, and compares the read utterance duration data with the utterance duration information created by voice acquirer 201 to make a second determination whether to reject or execute the character string information based on the comparison. Then, regarding the character string information that is determined to be executed, recognition result determiner 204 reads the utterance form data from storage device 270, and compares the read utterance form data with the utterance form information created by voice acquirer 201 to make a second determination whether to reject or execute the character string information based on the comparison. This character string information is an example of the first information.
• In voice recognition processing apparatus 200 configured in this way, when a flag is not set in the character string information that is input into acceptance rejection transmitter 1045, this character string information has been determined to be accepted (to undergo command processing) by all of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044. On the other hand, when a flag is set in the character string information that is input into acceptance rejection transmitter 1045, this character string information has been determined to be rejection information by at least one of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044. In this way, in the present exemplary embodiment, each of exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and utterance form determiner 2044 determines whether to accept (execute command processing on) or reject the character string information received by recognition result acquirer 103 as a result of voice recognition. Then, character string information that is determined to be rejected by any one of these units is rejected, and only character string information that is determined to be accepted by all of these units undergoes command processing.
  • This allows voice recognition processing apparatus 200 to determine with good accuracy whether the voice that undergoes voice recognition is a voice based on "the utterance for control", or a voice based on "the utterance for conversation". Therefore, voice recognition processing apparatus 200 can reduce false recognition and further improve accuracy of voice recognition.
  • OTHER EXEMPLARY EMBODIMENTS
• As described above, the example has been described as an example of a technique disclosed in the present application. However, the technique in the present disclosure is not limited to this example, and can also be applied to exemplary embodiments to which changes, replacements, additions, and omissions have been made, inasmuch as they fall within the subject-matter defined by the appended claims. In addition, it is also possible to make a new exemplary embodiment by combining elements described in the above-described example and exemplary embodiment.
  • Therefore, other exemplary embodiments will be described below.
  • In the exemplary embodiment, a configuration has been described in which recognition result determiner 204 includes utterance duration determiner 2043 and utterance form determiner 2044, in addition to exclusion vocabulary rejecter 1042, to improve accuracy of voice recognition. However, a recognition result determiner having a configuration that includes exclusion vocabulary rejecter 1042 combined with one of utterance duration determiner 2043 and utterance form determiner 2044 can also improve accuracy of voice recognition.
  • FIG. 8A is a block diagram illustrating a configuration example of recognition result determiner 304 in another exemplary embodiment. FIG. 8B is a block diagram illustrating a configuration example of recognition result determiner 404 in another exemplary embodiment.
  • It is to be noted that components that perform operations substantially identical to operations of components included in televisions 10 and 20 described in the example and in the exemplary embodiment are provided with reference symbols identical to reference symbols of the example and the exemplary embodiment, and description will be omitted.
  • Recognition result determiner 304 illustrated in FIG. 8A has a configuration that includes exclusion vocabulary rejecter 1042, utterance duration determiner 2043, and acceptance rejection transmitter 1045, and does not include utterance form determiner 2044.
  • A voice recognition apparatus that includes recognition result determiner 304 illustrated in FIG. 8A operates as follows.
  • A voice acquirer (not illustrated) measures a length of time uttered by user 700 based on an acquired voice to create utterance duration information. Storage device 370 previously stores utterance duration data representing a time required for utterance. These pieces of utterance duration information and utterance duration data are substantially identical to utterance duration information and utterance duration data described in the exemplary embodiment.
  • Regarding character string information that is determined by exclusion vocabulary rejecter 1042 not to include a word that agrees with a word included in an exclusion vocabulary and to be executed, recognition result determiner 304 reads the utterance duration data from storage device 370, and compares the read utterance duration data with the utterance duration information created by the voice acquirer to make a second determination whether to reject or execute the character string information based on the comparison. This character string information is an example of first information.
  • Specifically, recognition result determiner 304 operates as follows.
  • Utterance duration determiner 2043 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on the utterance duration.
  • Since the operation of utterance duration determiner 2043 is substantially identical to operation of utterance duration determiner 2043 described in the exemplary embodiment, description will be omitted.
  • Utterance duration determiner 2043 avoids setting a flag in the character string information that is determined to be based on "an utterance for control", and outputs the character string information as it is to acceptance rejection transmitter 1045. Acceptance rejection transmitter 1045 outputs the unflagged character string information as it is to command processor 106 as character string information to be accepted (executed).
  • Recognition result determiner 404 illustrated in FIG. 8B has a configuration that includes exclusion vocabulary rejecter 1042, utterance form determiner 2044, and acceptance rejection transmitter 1045, and does not include utterance duration determiner 2043.
  • A voice recognition apparatus that includes recognition result determiner 404 illustrated in FIG. 8B operates as follows.
  • A voice acquirer (not illustrated) measures lengths of silent periods that occur before and after utterance of user 700 based on an acquired voice to create utterance form information. Storage device 470 previously stores utterance form data representing the lengths of the silent periods that occur before and after utterance. These pieces of utterance form information and utterance form data are substantially identical to utterance form information and utterance form data described in the exemplary embodiment.
  • Regarding the character string information that is determined by exclusion vocabulary rejecter 1042 not to include a word that agrees with a word included in the exclusion vocabulary and to be executed, recognition result determiner 404 reads the utterance form data from storage device 470, and compares the read utterance form data with the utterance form information created by the voice acquirer to make a second determination whether to reject or execute the character string information based on the comparison. This character string information is an example of the first information.
  • Specifically, recognition result determiner 404 operates as follows.
  • Utterance form determiner 2044 makes a second determination whether to reject or accept (execute) the unflagged character string information that is input from exclusion vocabulary rejecter 1042, based on utterance form.
  • Since the operation of utterance form determiner 2044 is substantially identical to operation of utterance form determiner 2044 described in the exemplary embodiment, description will be omitted.
  • Utterance form determiner 2044 avoids setting a flag in the character string information that is determined to be based on "the utterance for control", and outputs the character string information as it is to acceptance rejection transmitter 1045. Acceptance rejection transmitter 1045 outputs the unflagged character string information as it is to command processor 106 as character string information to be accepted (executed).
  • Even if the recognition result determiner has, for example, a configuration that includes only one of utterance duration determiner 2043 and utterance form determiner 2044 as illustrated in FIG. 8A and FIG. 8B, respectively, the recognition result determiner is capable of improving accuracy of voice recognition.
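• Under the same hypothetical assumptions, these reduced configurations can be pictured as the combined decision of the earlier sketch with one of the two second-stage checks omitted; determine_with() below is, again, only an illustration.

```python
# Illustrative sketch reusing Utterance, EXCLUSION_VOCABULARY, check_duration()
# and check_pauses() from the sketches above; all names are hypothetical.

def determine_with(u: Utterance, use_duration=True, use_form=True) -> bool:
    if any(w in EXCLUSION_VOCABULARY for w in u.words):
        return False
    if use_duration and not check_duration(u.words, u.duration):
        return False
    if use_form and not check_pauses(u.words, u.pause_before, u.pause_after):
        return False
    return True

u = Utterance(["channel", "up"], 1.1, 0.6, 0.6)
# Configuration of FIG. 8A: exclusion vocabulary + utterance duration only.
print(determine_with(u, use_form=False))      # True
# Configuration of FIG. 8B: exclusion vocabulary + utterance form only.
print(determine_with(u, use_duration=False))  # True
```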
  • While the example has been described in which voice recognition server 50 is disposed on network 40 in the present exemplary embodiment, voice recognition server 50 may be included in voice recognition processing apparatus 100. Alternatively, it is also possible to have a configuration in which voice recognition server 50 is not included, and in which voice recognition is performed only by voice recognizer 102.
• Each block illustrated in FIGS. 2, 3, 5, 6, 8A, and 8B may be configured as an independent circuit block, or may be configured such that a processor executes software programmed to implement the operation of each block.
  • INDUSTRIAL APPLICABILITY
• The present disclosure is applicable to devices that perform processing operations instructed by a user. Specifically, the present disclosure is applicable to devices such as a mobile terminal device, a television receiver, a personal computer, a set-top box, a videocassette recorder, a game machine, a smartphone, and a tablet terminal.
  • REFERENCE MARKS IN THE DRAWINGS
10, 20: television receiver
11, 21: voice recognition processing system
40: network
50: voice recognition server
55, 175: recognition dictionary
100, 200: voice recognition processing apparatus
101, 201: voice acquirer
102: voice recognizer
103: recognition result acquirer
104, 204, 304, 404: recognition result determiner
106: command processor
130: built-in microphone
140: display device
150: transmitter-receiver
160: tuner
170, 171, 270, 370, 470: storage device
202: indicator
203: voice recognition icon
700: user
1042: exclusion vocabulary rejecter
1045: acceptance rejection transmitter
2043: utterance duration determiner
2044: utterance form determiner

Claims (6)

  1. A voice recognition processing apparatus comprising:
a voice acquirer (201) configured to acquire a voice uttered by a user, to output voice information, and to measure a length of time uttered by the user based on the acquired voice to create utterance duration information;
    a first voice recognizer (102) configured to convert the voice information into first information;
    a storage device (270; 370; 470) configured to previously store
    - a dictionary (175) in which an acceptance object list and an exclusion vocabulary are registered, the exclusion vocabulary being a vocabulary except a vocabulary registered in the dictionary as the acceptance object list, and
    - utterance duration data corresponding to each word of the vocabulary included in the acceptance object list and indicating a length of time required for an utterance; and
    a recognition result determiner (204; 304; 404) configured to
    - compare the first information with the exclusion vocabulary to determine whether the first information includes a word that agrees with a word included in the exclusion vocabulary,
    - determine that the first information is information to be rejected, when the first information includes the word that agrees with a word included in the exclusion vocabulary;
    - determine that the first information is information to be executed, when the first information does not include the word that agrees with a word included in the exclusion vocabulary, and
    - regarding the first information that is determined not to include the word that agrees with a word included in the exclusion vocabulary and to be executed, read the utterance duration data from the storage device and compare the read utterance duration data with the utterance duration information created by the voice acquirer (201) to make a second determination whether to reject or execute the first information, based on the comparison.
  2. The voice recognition processing apparatus according to claim 1, wherein
    the voice acquirer (201) is configured to measure lengths of silent periods that occur before and after an utterance of the user based on the acquired voice to create utterance form information,
    the storage device (270; 370; 470) is configured to previously store utterance form data corresponding to each word of vocabulary included in the acceptance object list and representing lengths of silent periods that occur before and after an utterance,
regarding the first information that is determined not to include the word that agrees with a word included in the exclusion vocabulary and to be executed, the recognition result determiner (204; 304; 404) is configured to:
    read the utterance form data from the storage device, and
    compare the read utterance form data with the utterance form information created by the voice acquirer to make a second determination whether to reject or execute the first information, based on the comparison.
  3. The voice recognition processing apparatus according to claim 1, further comprising:
    a second voice recognizer (50) configured to convert the voice information into second information; and
    a selector (103) configured to select and output one of the first information and the second information,
    wherein the recognition result determiner (204; 304; 404) is configured to determine whether to reject or execute information selected by the selector.
  4. The voice recognition processing apparatus according to claim 3, further comprising a transmitter-receiver (150) configured to communicate with the second voice recognizer (50) via a network (40),
    wherein the second voice recognizer (50) is installed on the network (40).
  5. A voice recognition processing method comprising the following steps, performed by a voice recognition processing apparatus (200):
    previously storing in a storage device (270; 370; 470)
    - a dictionary (175) in which an acceptance object list and an exclusion vocabulary are registered, the exclusion vocabulary being a vocabulary except a vocabulary registered in the dictionary as the acceptance object list, and
    - utterance duration data corresponding to each word of the vocabulary included in the acceptance object list and indicating a length of time required for an utterance,
acquiring (S101) a voice uttered by a user, outputting voice information, and measuring a length of time uttered by the user based on the acquired voice to create utterance duration information;
converting (S102) the voice information into first information by a first voice recognizer;
comparing (S104) the first information with the exclusion vocabulary to determine whether the first information includes a word that agrees with a word included in the exclusion vocabulary;
determining (S104) that the first information is information to be rejected, when the first information includes the word that agrees with a word included in the exclusion vocabulary;
determining (S104) that the first information is information to be executed, when the first information does not include the word that agrees with a word included in the exclusion vocabulary;
    regarding the first information that is determined not to include the word that agrees with a word included in the exclusion vocabulary and is to be executed:
reading the utterance duration data from the storage device, and
    comparing (S200) the read utterance duration data with the created utterance duration information to make a second determination whether to reject or execute the first information, based on the comparison.
  6. A display apparatus comprising:
a voice recognition processing apparatus according to claim 1;
a processor configured to execute processing based on the first information that is determined by the recognition result determiner to be executed; and
    a display device.
EP14875013.6A 2013-12-26 2014-12-25 Speech recognition processing Active EP3089158B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013268670 2013-12-26
PCT/JP2014/006449 WO2015098109A1 (en) 2013-12-26 2014-12-25 Speech recognition processing device, speech recognition processing method and display device

Publications (3)

Publication Number Publication Date
EP3089158A4 EP3089158A4 (en) 2016-11-02
EP3089158A1 EP3089158A1 (en) 2016-11-02
EP3089158B1 true EP3089158B1 (en) 2018-08-08

Family

ID=53478005

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14875013.6A Active EP3089158B1 (en) 2013-12-26 2014-12-25 Speech recognition processing

Country Status (5)

Country Link
US (1) US9767795B2 (en)
EP (1) EP3089158B1 (en)
JP (1) JPWO2015098109A1 (en)
CN (1) CN105556594B (en)
WO (1) WO2015098109A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014103099A1 (en) * 2012-12-28 2014-07-03 パナソニック株式会社 Device with voice recognition function and method for recognizing voice
JP6731581B2 (en) * 2015-03-27 2020-07-29 パナソニックIpマネジメント株式会社 Speech recognition system, speech recognition device, speech recognition method, and control program
US9691413B2 (en) * 2015-10-06 2017-06-27 Microsoft Technology Licensing, Llc Identifying sound from a source of interest based on multiple audio feeds
CN107665708B (en) * 2016-07-29 2021-06-08 科大讯飞股份有限公司 Intelligent voice interaction method and system
CN109643543A (en) * 2016-09-02 2019-04-16 夏普株式会社 Responding device and its control method and control program
US10409552B1 (en) * 2016-09-19 2019-09-10 Amazon Technologies, Inc. Speech-based audio indicators
CN111611575A (en) * 2016-10-13 2020-09-01 创新先进技术有限公司 Service implementation method and device based on virtual reality scene
US10467510B2 (en) 2017-02-14 2019-11-05 Microsoft Technology Licensing, Llc Intelligent assistant
US11100384B2 (en) 2017-02-14 2021-08-24 Microsoft Technology Licensing, Llc Intelligent device user interactions
US11010601B2 (en) 2017-02-14 2021-05-18 Microsoft Technology Licensing, Llc Intelligent assistant device communicating non-verbal cues
JP2019200394A (en) * 2018-05-18 2019-11-21 シャープ株式会社 Determination device, electronic apparatus, response system, method for controlling determination device, and control program
CN112135564B (en) * 2018-05-23 2024-04-02 松下知识产权经营株式会社 Method, recording medium, evaluation device, and evaluation system for ingestion swallowing function
JP7096707B2 (en) * 2018-05-29 2022-07-06 シャープ株式会社 Electronic devices, control devices that control electronic devices, control programs and control methods
JP7231342B2 (en) * 2018-07-09 2023-03-01 シャープ株式会社 Content display system and display device
CN109147780B (en) * 2018-08-15 2023-03-03 重庆柚瓣家科技有限公司 Voice recognition method and system under free chat scene
JP2020064197A (en) * 2018-10-18 2020-04-23 コニカミノルタ株式会社 Image forming device, voice recognition device, and program
US11176939B1 (en) 2019-07-30 2021-11-16 Suki AI, Inc. Systems, methods, and storage media for performing actions based on utterance of a command
CN112447177B (en) * 2019-09-04 2022-08-23 思必驰科技股份有限公司 Full duplex voice conversation method and system
JP7248564B2 (en) * 2019-12-05 2023-03-29 Tvs Regza株式会社 Information processing device and program

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3477751B2 (en) 1993-09-07 2003-12-10 株式会社デンソー Continuous word speech recognition device
JPH11311994A (en) * 1998-04-30 1999-11-09 Sony Corp Information processor, information processing method, and presentation media
JP2000020089A (en) * 1998-07-07 2000-01-21 Matsushita Electric Ind Co Ltd Speed recognition method and apparatus therefor as well as voice control system
DE69941686D1 (en) 1999-01-06 2010-01-07 Koninkl Philips Electronics Nv LANGUAGE ENTRY WITH ATTENTION SPAN
US7899671B2 (en) * 2004-02-05 2011-03-01 Avaya, Inc. Recognition results postprocessor for use in voice recognition systems
JP4236597B2 (en) 2004-02-16 2009-03-11 シャープ株式会社 Speech recognition apparatus, speech recognition program, and recording medium.
US7813482B2 (en) * 2005-12-12 2010-10-12 International Business Machines Corporation Internet telephone voice mail management
US7949536B2 (en) * 2006-08-31 2011-05-24 Microsoft Corporation Intelligent speech recognition of incomplete phrases
JP4845118B2 (en) * 2006-11-20 2011-12-28 富士通株式会社 Speech recognition apparatus, speech recognition method, and speech recognition program
JP4902617B2 (en) * 2008-09-30 2012-03-21 株式会社フュートレック Speech recognition system, speech recognition method, speech recognition client, and program
JP4852584B2 (en) * 2008-10-23 2012-01-11 ヤフー株式会社 Prohibited word transmission prevention method, prohibited word transmission prevention telephone, prohibited word transmission prevention server
JP2011170274A (en) * 2010-02-22 2011-09-01 Chugoku Electric Power Co Inc:The Accident restoration training device
US20130018895A1 (en) 2011-07-12 2013-01-17 Harless William G Systems and methods for extracting meaning from speech-to-text data
CN103460281B (en) * 2011-10-25 2015-12-23 奥林巴斯株式会社 Endoscope surgery system
CN103247291B (en) * 2013-05-07 2016-01-13 华为终端有限公司 A kind of update method of speech recognition apparatus, Apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US9767795B2 (en) 2017-09-19
WO2015098109A1 (en) 2015-07-02
US20160217783A1 (en) 2016-07-28
CN105556594B (en) 2019-05-17
CN105556594A (en) 2016-05-04
JPWO2015098109A1 (en) 2017-03-23
EP3089158A4 (en) 2016-11-02
EP3089158A1 (en) 2016-11-02

Similar Documents

Publication Publication Date Title
EP3089158B1 (en) Speech recognition processing
KR102245747B1 (en) Apparatus and method for registration of user command
CN105741836B (en) Voice recognition device and voice recognition method
KR102339657B1 (en) Electronic device and control method thereof
US8818816B2 (en) Voice recognition device
EP3089157B1 (en) Voice recognition processing device, voice recognition processing method, and display device
US11238871B2 (en) Electronic device and control method thereof
KR20140058127A (en) Voice recognition apparatus and voice recogniton method
CN106796785A (en) Sample sound for producing sound detection model is verified
KR20150087687A (en) Interactive system, display apparatus and controlling method thereof
CN108322770B (en) Video program identification method, related device, equipment and system
US11948567B2 (en) Electronic device and control method therefor
CN112735396A (en) Speech recognition error correction method, device and storage medium
KR20170141970A (en) Electronic device and method thereof for providing translation service
JP2012088370A (en) Voice recognition system, voice recognition terminal and center
CN107977187B (en) Reverberation adjusting method and electronic equipment
JP2010016444A (en) Situation recognizing apparatus, situation recognizing method, and radio terminal apparatus
KR20220109238A (en) Device and method for providing recommended sentence related to utterance input of user
KR20120083025A (en) Multimedia device for providing voice recognition service by using at least two of database and the method for controlling the same
KR20210098250A (en) Electronic device and Method for controlling the electronic device thereof
JP2011180416A (en) Voice synthesis device, voice synthesis method and car navigation system
KR102456588B1 (en) Apparatus and method for registration of user command
KR102599069B1 (en) Apparatus and method for registration of user command
CN112542157A (en) Voice processing method and device, electronic equipment and computer readable storage medium
US20230386508A1 (en) Information processing apparatus, information processing method, and non-transitory recording medium

Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
17P: Request for examination filed (effective date: 20151222)
A4: Supplementary search report drawn up and despatched (effective date: 20161004)
AK: Designated contracting states (kind code of ref document: A1; designated states: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
AX: Request for extension of the European patent (extension state: BA ME)
DAX: Request for extension of the European patent (deleted)
17Q: First examination report despatched (effective date: 20170601)
GRAP: Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
RIC1: Information provided on IPC code assigned before grant (Ipc: G10L 15/22 20060101AFI20180503BHEP; G10L 15/32 20130101ALN20180503BHEP; G10L 25/78 20130101ALN20180503BHEP)
INTG: Intention to grant announced (effective date: 20180522)
RAP1: Party data changed, applicant data changed or rights of an application transferred (owner name: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LT)
RIN1: Information on inventor provided before grant, corrected (inventor names: KONUMA, TOMOHIRO; KOGANEI, TOMOHIRO)
GRAS: Grant fee paid (original code: EPIDOSNIGR3)
GRAA: (Expected) grant (original code: 0009210)
AK: Designated contracting states (kind code of ref document: B1; designated states: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
REG: Reference to a national code: GB FG4D; CH EP; AT REF (ref document: 1027931; kind code: T; effective date: 20180815); IE FG4D; DE R096 (ref document: 602014030238); NL MP (effective date: 20180808); LT MG4D; AT MK05 (ref document: 1027931; kind code: T; effective date: 20180808); DE R097 (ref document: 602014030238); CH PL; IE MM4A; BE MM (effective date: 20181231)
PG25: Lapsed in a contracting state [announced via postgrant information from national office to EPO], lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: PL, RS, NL, LT, FI, AT, SE, HR, LV, AL, CZ, IT, RO, ES, EE, DK, SM, SK, MC, SI, TR, PT, CY (effective date: 20180808); BG, NO (effective date: 20181108); GR (effective date: 20181109); IS (effective date: 20181208); HU (invalid ab initio; effective date: 20141225)
PG25: Lapsed in a contracting state, lapse because of non-payment of due fees: LU, IE, MT (effective date: 20181225); BE, LI, CH (effective date: 20181231); MK (effective date: 20180808)
PLBE: No opposition filed within time limit (original code: 0009261)
STAA: Status of the EP patent: no opposition filed within time limit
26N: No opposition filed (effective date: 20190509)
PGFP: Annual fee paid to national office: FR (payment date: 20230424; year of fee payment: 10); DE (payment date: 20230420; year of fee payment: 10); GB (payment date: 20231220; year of fee payment: 10)