WO2018110818A1 - Speech recognition method and apparatus
- Publication number
- WO2018110818A1 (PCT/KR2017/011440)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- speech recognition
- recognition apparatus
- speech
- activation word
- command
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L17/24—Interactive procedures; Man-machine interfaces the user being prompted to utter a password or a predefined phrase
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- the present disclosure relates to an artificial intelligence (AI) system and its application, which simulates functions such as recognition and judgment of a human brain using a machine learning algorithm such as deep learning.
- the present disclosure relates to a speech recognition method and apparatus. More particularly, the present disclosure relates to a speech recognition method and apparatus for performing speech recognition in response to an activation word determined based on information related to a situation in which the speech recognition apparatus operates.
- An artificial intelligence (AI) system is a computer system that implements human-level intelligence. Unlike an existing rule-based smart system, an AI system learns, judges, and becomes smarter autonomously. The more an AI system is used, the more its recognition rate improves and the more accurately it understands user preferences; thus, existing rule-based smart systems are gradually being replaced by deep-learning-based AI systems.
- AI technology consists of machine learning (deep learning) and element technologies that utilize the machine learning.
- Machine learning is an algorithm-based technology that classifies/learns characteristics of input data autonomously.
- Element technology is a technology that simulates functions of the human brain, such as recognition and judgment, using machine learning algorithms such as deep learning, and consists of technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and motion control.
- AI technology may be applied to various fields.
- Linguistic understanding is a technology for recognizing, applying, and processing human language/characters, including natural language processing, machine translation, dialogue system, query response, speech recognition/synthesis, and the like.
- Visual understanding is a technique to recognize and process objects as performed in human vision, including object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement.
- Inference/prediction is a technique for judging and logically inferring and predicting information, including knowledge/probability based inference, optimization prediction, preference based planning, recommendation, etc.
- Knowledge representation is a technology for automating human experience information into knowledge data, including knowledge building (data generation/classification) and knowledge management (data utilization).
- Motion control is a technique for controlling autonomous travel of a vehicle and a motion of a robot, including movement control (navigation, collision-avoidance, traveling), operation control (behavior control), and the like.
- the speech recognition function has an advantage in that a user may easily control a device through recognition of the user's speech, without operating a separate button or touching a touch module.
- for example, a portable terminal such as a smart phone may perform a call function or text messaging without pressing a button, and may easily set various functions such as a route search, an Internet search, and an alarm setting.
- an aspect of the present disclosure is to provide a method by which a user may control a speech recognition apparatus by speaking naturally, as if conversing with the speech recognition apparatus, thereby enhancing user convenience.
- a speech recognition method includes determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates, receiving an input audio signal, performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and outputting a result of the performing of the speech recognition.
- a speech recognition apparatus includes a receiver configured to receive an input audio signal, at least one processor configured to determine at least one activation word based on information related to a situation in which a speech recognition apparatus operates and perform speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and an outputter configured to output a result of the speech recognition.
- a non-transitory computer-readable recording medium having recorded thereon at least one program includes instructions for allowing a speech recognition apparatus to execute a speech recognition method.
- the speech recognition method includes determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates, receiving an input audio signal, performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and outputting a result of the performing of the speech recognition.
- FIGS. 1A, 1B, and 1C are views for explaining a speech recognition system according to an embodiment of the present disclosure
- FIG. 2A is a diagram of an operation method of a general speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 2B is a diagram of an operation method of a speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 3 is a flowchart of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 4 is a diagram of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 5 is a flowchart illustrating a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 6 is a flowchart of a method of outputting a result of speech recognition performed by a speech recognition apparatus according to an embodiment of the present disclosure
- FIGS. 7A and 7B show examples in which a speech recognition apparatus is included in a home robot according to an embodiment of the present disclosure
- FIG. 8 shows a case where a speech recognition apparatus determines "air conditioner” as an activation word corresponding to a current situation according to an embodiment of the present disclosure
- FIG. 9 is a flowchart of a method of determining whether a speech command is a direct command or an indirect command performed by a speech recognition apparatus according to an embodiment of the present disclosure
- FIG. 10 is a flowchart of a method of determining candidate activation words respectively corresponding to situations performed by a speech recognition apparatus according to an embodiment of the present disclosure.
- FIGS. 11A and 11B are block diagrams of a speech recognition apparatus according to an embodiment of the present disclosure.
- Various embodiments of the present disclosure may be represented by functional block configurations and various processing operations. Some or all of these functional blocks may be implemented with various numbers of hardware and/or software configurations that perform particular functions.
- the functional blocks of the present disclosure may be implemented by one or more microprocessors, or by circuit configurations for a given function.
- the functional blocks of the present disclosure may be implemented in various programming or scripting languages.
- the functional blocks may be implemented with algorithms running on one or more processors.
- the present disclosure may also employ techniques for electronic configuration, signal processing, and/or data processing, and the like according to the related art.
- Connection lines or connection members between the components shown in the figures are merely illustrative of functional connections and/or physical or circuit connections. In actual devices, connections between components may be represented by various functional connections, physical connections, or circuit connections that may be replaced or added.
- FIGS. 1A, 1B, and 1C are views for explaining a speech recognition system according to an embodiment of the present disclosure.
- the speech recognition system may be a deep learning based artificial intelligence (AI) system.
- the speech recognition system may use artificial intelligence technology to infer and predict a situation in which the speech recognition apparatus operates, and may recognize, apply, and process a human language.
- the speech recognition system may include a speech recognition apparatus 100-1.
- the speech recognition apparatus 100-1 may be a mobile computing apparatus such as a smart phone, a tablet personal computer (PC), a PC, a smart television (TV), a personal digital assistant (PDA), a laptop, a media player, a micro server, a global positioning system (GPS), an e-book reader, a digital broadcasting terminal, a navigation device, a kiosk, a Moving Picture Experts Group phase 1 or phase 2 (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a digital camera, an electronic control device of a vehicle, a central information display (CID), etc. or a non-mobile computing apparatus, but is not limited thereto.
- the speech recognition apparatus 100-1 may receive an audio signal including a speech signal uttered by a user 10 and perform speech recognition on the speech signal.
- the speech recognition apparatus 100-1 may output a speech recognition result.
- the speech recognition system may include a speech recognition apparatus 100-2 and an electronic apparatus 110 connected to the speech recognition apparatus 100-2.
- the speech recognition apparatus 100-2 and the electronic apparatus 110 may be connected by wires or wirelessly.
- the electronic apparatus 110 coupled to the speech recognition apparatus 100-2 may be a mobile computing apparatus such as a smart phone, a tablet PC, a PC, a smart TV, an electronic control device of a vehicle, a CID, or a non-mobile computing apparatus.
- the speech recognition apparatus 100-2 may be, but is not limited to, a wearable device, a smart phone, a tablet PC, a PC, a navigation system, or a smart TV, which cooperates with the electronic apparatus 110.
- the speech recognition apparatus 100-2 may receive an audio signal including a speech signal uttered by the user 10 and transmit the input audio signal to the electronic apparatus 110.
- the speech recognition apparatus 100-2 may receive an audio signal including a speech signal uttered by the user 10, and may transmit the speech signal detected from the input audio signal to the electronic apparatus 110.
- the speech recognition apparatus 100-2 may receive an audio signal including the speech signal uttered by the user 10, and transmit a characteristic of the speech signal detected from the input audio signal to the electronic apparatus 110.
- the electronic apparatus 110 may perform speech recognition based on a signal received from the speech recognition apparatus 100-2. For example, the electronic apparatus 110 may perform speech recognition on the speech signal detected from the audio signal input from the speech recognition apparatus 100-2. The electronic apparatus 110 may output a speech recognition result or send the speech recognition result to the speech recognition apparatus 100-2 so that the speech recognition apparatus 100-2 outputs the speech recognition result.
- the speech recognition system may include a speech recognition apparatus 100-3 and a server 120 connected to the speech recognition apparatus 100-3.
- the speech recognition apparatus 100-3 and the server 120 may be connected by wire or wirelessly.
- the speech recognition apparatus 100-3 may receive an audio signal including a speech signal uttered by the user 10 and transmit the input audio signal to the server 120.
- the speech recognition apparatus 100-3 may also receive an audio signal including a speech signal uttered by the user 10, and may transmit the speech signal detected from the input audio signal to the server 120.
- the speech recognition apparatus 100-3 may also receive an audio signal including the speech signal uttered by the user 10, and transmit a characteristic of the speech signal detected from the input audio signal to the server 120.
- the server 120 may perform speech recognition based on the signal received from the speech recognition apparatus 100-3. For example, the server 120 may perform speech recognition on the speech signal detected from the audio signal input from the speech recognition apparatus 100-3. The server 120 may transmit the speech recognition result to the speech recognition apparatus 100-3 so that the speech recognition apparatus 100-3 outputs the speech recognition result.
- the speech recognition system shown in FIGS. 1A, 1B, and 1C has an advantage in that a user may easily control an apparatus by recognizing the user's speech.
- when a speech recognition apparatus keeps the speech recognition function continuously activated, it is difficult for the speech recognition apparatus to distinguish whether an input audio signal is speech that is an object of speech recognition or noise that is not, and thus recognition performance deteriorates. Further, if the speech recognition apparatus continuously performs the speech detection operation and the speech recognition operation, it may unnecessarily consume power or memory capacity.
- the speech recognition apparatus should be capable of activating the speech recognition function only when the user utters a speech command.
- a speech recognition apparatus uses a method of activating the speech recognition function when the user presses a button.
- This activation method has a disadvantage that the user must be located within a certain physical distance from the speech recognition apparatus and that the user should be careful not to press the button when the activation of the speech recognition function is not desired.
- the speech recognition apparatus uses a method of activating the speech recognition function when a predetermined specific activation word is uttered.
- This activation method has a disadvantage in that it is unnatural that the user must utter the specific activation word before uttering a speech command.
- according to the related art, the speech recognition apparatus requires an active action of the user in order to activate the speech recognition function. Accordingly, since the speech recognition function may not be started without such an action, the speech recognition apparatus is limited in providing a proactive service through speech recognition.
- the speech recognition apparatus provides a method of enhancing user convenience by enabling the user to control the speech recognition apparatus by speaking naturally, as if conversing with it.
- the speech recognition apparatus may provide a proactive service even when there is no direct operation of the user.
- An embodiment provides a method of activating a speech recognition function based on a plurality of activation words designated according to a situation in which the speech recognition apparatus operates.
- an embodiment provides a method in which a speech recognition apparatus operates before performing speech recognition.
- FIG. 2A is a diagram of an operation method of a general speech recognition apparatus according to an embodiment of the present disclosure.
- the general speech recognition apparatus 100 activates a speech recognition function when one specific activation word “Hi Galaxy” is uttered.
- a user 10 has to utter the activation word "Hi Galaxy” prior to a speech command to ask for today's weather.
- the speech recognition apparatus 100 may activate a speech recognition function when a speech signal for uttering the activation word “Hi Galaxy” is received.
- the speech recognition apparatus 100 may perform speech recognition on a speech command of the user “What is the weather like today?” which is a sentence to be uttered after the activation word, and may provide weather information “Today’s weather is fine” as a response to the speech command of the user.
- the user 10 should utter the activation word “Hi Galaxy” prior to a speech command to ask for the current time.
- the speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering the activation word “Hi Galaxy” is received.
- the speech recognition apparatus 100 may perform speech recognition on the speech command of the user “What time is it?” which is a sentence to be uttered after the activation word and may provide time information “3:20 pm” as a response to the speech command of the user.
- a speech recognition apparatus may perform speech recognition with respect to a speech command that a user naturally utters, without requiring the user to utter a separate activation word or activate a speech recognition function.
- FIG. 2B is a diagram of an operation method of a speech recognition apparatus according to an embodiment of the present disclosure.
- the user 10 may utter “What is the weather like today?”, which is a speech command asking for today's weather, without any separate activation operation.
- the speech recognition apparatus 100 may activate a speech recognition function when a speech signal for uttering the speech command “What is the weather like today?” is received.
- the speech recognition apparatus 100 may perform speech recognition on the speech command of the user “What is the weather like today?” and may provide weather information “Today’s weather is fine” as a response to the speech command of the user.
- the user 10 may utter “What time is it now?”, which is a speech command asking for the current time, without a separate activation operation.
- the speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering the speech command “What time is it now?” is received.
- the speech recognition apparatus 100 may perform speech recognition with respect to the speech command of the user “What time is it now?” and provide time information "3:20 pm" as a response to the speech command of the user.
- a speech recognition system may include at least one speech recognition apparatus and may further include a server or an electronic device.
- a speech recognition method performed in the “speech recognition apparatus” will be described.
- some or all of operations of the speech recognition apparatus described below may also be performed by the server and may be partially performed by a plurality of electronic apparatuses.
- FIG. 3 is a flowchart of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure.
- the speech recognition apparatus 100 may determine at least one activation word based on information related to a situation in which the speech recognition apparatus 100 operates.
- the speech recognition apparatus 100 may utilize an artificial intelligence technology to infer and predict the situation in which the speech recognition apparatus 100 operates and to determine at least one activation word.
- the information related to the situation may include at least one of information related to a location and time of the speech recognition apparatus 100, whether or not the speech recognition apparatus 100 is connected to another electronic apparatus, a type of a network to which the speech recognition apparatus 100 is connected, and a characteristic of a user using the speech recognition apparatus 100.
- the speech recognition apparatus 100 may obtain information about at least one electronic apparatus connected to the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine a word associated with the at least one electronic device as at least one activation word.
- the speech recognition apparatus 100 may acquire information about the network to which the speech recognition apparatus 100 is connected.
- the speech recognition apparatus 100 may identify the situation in which the speech recognition apparatus 100 operates based on the information about the network to which the speech recognition apparatus 100 is connected.
- the speech recognition apparatus 100 may determine a location where the speech recognition apparatus 100 operates based on the information about the network to which the speech recognition apparatus 100 is connected.
- the speech recognition apparatus 100 may determine that the location of the speech recognition apparatus 100 is in the house.
- the speech recognition apparatus 100 may determine at least one activation word corresponding to the house.
- the speech recognition apparatus 100 may determine a TV, an air conditioner, a cleaner, weather, a schedule, etc. as activation words corresponding to the house.
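- As a rough illustration of how such context-dependent activation words might be selected, the following Python sketch maps an inferred operating location to a word list; the network names, device identifiers, and word lists are hypothetical examples rather than values defined by the disclosure.

```python
# Minimal sketch: choose activation words from the operating context.
# The mapping tables and network/device names are illustrative assumptions only.

CANDIDATE_ACTIVATION_WORDS = {
    "house":   ["TV", "air conditioner", "cleaner", "weather", "schedule"],
    "vehicle": ["navigation", "air conditioner", "text message", "schedule"],
    "office":  ["air conditioner", "light"],
}

KNOWN_NETWORKS = {                 # hypothetical network-to-location hints
    "home_wifi": "house",
    "office_wifi": "office",
}

def infer_location(connected_network: str, connected_devices: list[str]) -> str:
    """Guess where the apparatus operates from its network and connected peers."""
    if "car_head_unit" in connected_devices:   # e.g., Bluetooth link to the vehicle
        return "vehicle"
    return KNOWN_NETWORKS.get(connected_network, "house")

def determine_activation_words(connected_network: str, connected_devices: list[str]) -> list[str]:
    location = infer_location(connected_network, connected_devices)
    return CANDIDATE_ACTIVATION_WORDS.get(location, [])

if __name__ == "__main__":
    print(determine_activation_words("home_wifi", []))
    # -> ['TV', 'air conditioner', 'cleaner', 'weather', 'schedule']
```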
- the speech recognition apparatus 100 may store a plurality of candidate activation words respectively corresponding to a plurality of situations prior to determining the at least one activation word.
- the speech recognition apparatus 100 may acquire information related to the situation in which the speech recognition apparatus 100 operates and search the stored data to extract at least one candidate activation word corresponding to that situation.
- the speech recognition apparatus 100 may determine the at least one candidate activation word as the at least one activation word.
- in order to store the plurality of candidate activation words, the speech recognition apparatus 100 may collect information on speech commands received from the user in a plurality of situations.
- the speech recognition apparatus 100 may extract a plurality of words included in the speech commands.
- the speech recognition apparatus 100 may store at least one word as a candidate activation word corresponding to a specific situation, based on a frequency with which the plurality of words are included in the speech commands received in a specific situation among the plurality of situations.
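- The frequency-based selection of candidate activation words described above could be sketched as follows; the tokenization, stop-word list, and count threshold are illustrative assumptions, not details specified by the disclosure.

```python
# Sketch: learn candidate activation words from command history, per situation.
from collections import Counter, defaultdict

STOP_WORDS = {"the", "is", "what", "like", "please", "to", "a"}   # assumed list

class CandidateActivationWordStore:
    def __init__(self, min_count: int = 3):
        self.min_count = min_count
        self.counts: dict[str, Counter] = defaultdict(Counter)

    def observe(self, situation: str, command_text: str) -> None:
        """Accumulate how often each word appears in commands for a situation."""
        for word in command_text.lower().split():
            if word not in STOP_WORDS:
                self.counts[situation][word] += 1

    def candidates(self, situation: str) -> list[str]:
        """Words seen at least `min_count` times become candidate activation words."""
        return [w for w, c in self.counts[situation].items() if c >= self.min_count]

store = CandidateActivationWordStore(min_count=2)
store.observe("house", "what is the weather like today")
store.observe("house", "weather for tomorrow please")
print(store.candidates("house"))   # -> ['weather']
```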
- the speech recognition apparatus 100 may determine the number of activation words to be determined based on a degree to which the speech recognition function of the speech recognition apparatus 100 is sensitively activated.
- the degree to which the speech recognition function of the speech recognition apparatus 100 is sensitively activated may mean at least one of a speed at which the speech recognition apparatus 100 is activated in response to various speech signals, a difficulty level at which the speech recognition apparatus 100 is activated, and the frequency with which the speech recognition apparatus 100 is activated. For example, when the speech recognition apparatus 100 is activated at a high frequency in response to various speech signals, it may be determined that the speech recognition function of the speech recognition apparatus 100 is activated sensitively. It may be determined that the speech recognition function of the speech recognition apparatus 100 is activated less sensitively when the speech recognition apparatus 100 is activated at a relatively low frequency in response to various speech signals.
- the degree to which the speech recognition function is sensitively activated may be determined based on a user input or may be determined based on the location of the speech recognition apparatus 100. For example, when the speech recognition apparatus 100 is located in a private space such as a house, it may be determined that the speech recognition function is sensitively activated, and when the speech recognition apparatus 100 is located in a public space such as a company, it may be determined that the speech recognition function is activated less sensitively. For example, when the speech recognition apparatus 100 is located in a private space such as the house, it may be determined that the speech recognition function is activated at a high frequency, and when the speech recognition apparatus 100 is located in the public space such as the company, it may be determined that the speech recognition function is activated at a relatively low frequency.
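- One possible reading of how activation sensitivity could control the number of activation words is sketched below; the sensitivity values and the private/public defaults are illustrative assumptions.

```python
# Sketch: scale the number of enabled activation words with activation sensitivity.

def infer_sensitivity(location: str, user_setting: float | None = None) -> float:
    """Return a sensitivity in [0, 1]; a user setting overrides the location default."""
    if user_setting is not None:
        return user_setting
    return 1.0 if location == "house" else 0.3     # private vs. public space (assumed)

def select_activation_words(candidates: list[str], sensitivity: float) -> list[str]:
    """Enable more candidate words when the function should activate more often."""
    n = max(1, round(len(candidates) * sensitivity))
    return candidates[:n]

candidates = ["TV", "air conditioner", "cleaner", "weather", "schedule"]
print(select_activation_words(candidates, infer_sensitivity("house")))    # all five words
print(select_activation_words(candidates, infer_sensitivity("office")))   # fewer words
```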
- the speech recognition apparatus 100 may receive the input audio signal. For example, the speech recognition apparatus 100 may divide an input audio signal received in real time into frame units of a predetermined length and process the input audio signal frame by frame. A speech signal in a frame unit may be detected from the frame-unit input audio signal.
- the speech recognition apparatus 100 may receive and store the input audio signal.
- the speech recognition apparatus 100 may detect presence or absence of utterance by Voice Activation Detection (VAD) or End Point Detection (EPD).
- the speech recognition apparatus 100 may determine that a sentence starts when utterance starts and may start storing the input audio signal.
- the speech recognition apparatus 100 may determine that the sentence starts when the utterance starts after pause and may start storing the input audio signal.
- the speech recognition apparatus 100 may determine that the sentence ends if the utterance ends without an activation word being uttered, and may start storing a new input audio signal. Alternatively, the speech recognition apparatus 100 may receive and store an audio signal in units of a predetermined time length as shown in FIG. 5.
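- A minimal sketch of the frame-unit buffering and utterance detection described above follows; the frame length and the toy energy threshold stand in for a real VAD/EPD module and are assumptions only.

```python
# Sketch: split an incoming signal into fixed-length frames, detect utterance
# start/end with a toy energy threshold, and buffer frames while speech lasts.

FRAME_LEN = 160            # e.g., 10 ms at 16 kHz (assumed)
ENERGY_THRESHOLD = 0.01    # toy stand-in for VAD/EPD (assumed)

def frames(signal: list[float], frame_len: int = FRAME_LEN):
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        yield signal[i:i + frame_len]

def is_speech(frame: list[float]) -> bool:
    energy = sum(x * x for x in frame) / len(frame)
    return energy > ENERGY_THRESHOLD

def buffer_utterance(signal: list[float]) -> list[float]:
    """Store frames from utterance start until a pause (utterance end)."""
    buffered, started = [], False
    for frame in frames(signal):
        if is_speech(frame):
            started = True
            buffered.extend(frame)
        elif started:          # speech ended: the buffered sentence is complete
            break
    return buffered

silence, speech = [0.0] * 320, [0.5] * 320
print(len(buffer_utterance(silence + speech + silence)))   # -> 320 buffered samples
```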
- the speech recognition apparatus 100 may perform speech recognition on the input audio signal, based on whether or not the speech signal for uttering the activation word included in the at least one activation word is included in the input audio signal.
- the speech recognition apparatus 100 may recognize, apply, and process a language of a speaker included in the input audio signal by using the artificial intelligence technology.
- the speech recognition apparatus 100 may perform speech recognition on the input audio signal including the speech signal for uttering the activation word included in the at least one activation word.
- the speech recognition apparatus 100 may determine whether the input audio signal includes a speech signal for uttering an activation word. When it is determined that the input audio signal includes the speech signal for uttering the activation word included in the at least one activation word, the speech recognition apparatus 100 may perform speech recognition on the stored input audio signal and the input audio signal received thereafter.
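- The gating behavior (recognize the stored signal only when an activation word is spotted, otherwise keep waiting) could look roughly like the sketch below; `spot_keywords` and `recognize` are placeholder stand-ins for a real keyword spotter and recognizer, not components named by the disclosure.

```python
# Sketch of the gating logic around activation-word detection.

def spot_keywords(audio_buffer: list[float], activation_words: list[str]) -> list[str]:
    """Placeholder: a real implementation would run a lightweight keyword spotter."""
    return []   # this stub assumes no activation word was found

def process(audio_buffer: list[float],
            following_audio: list[float],
            activation_words: list[str],
            recognize):
    spotted = spot_keywords(audio_buffer, activation_words)
    if spotted:
        # Recognize the stored signal plus audio received after the activation word.
        return recognize(audio_buffer + following_audio)
    # No activation word: ignore the buffer and wait for a new input audio signal.
    return None

print(process([0.1] * 160, [0.2] * 160, ["weather"], recognize=lambda audio: "ok"))
# -> None, because the stub spotter finds no activation word
```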
- the speech recognition apparatus 100 may transmit an audio signal including a speech command including an activation word to a server (or an embedded speech recognition module).
- the server (or the embedded speech recognition module) may extract an activation word from the received audio signal.
- the server (or the embedded speech recognition module) may determine whether to recognize a speech command including the activation word or remove the activation word and recognize speech commands located before or after the activation word.
- the server (or the embedded speech recognition module) may perform speech recognition based on a determination result.
- the speech recognition apparatus 100 may perform speech recognition on the speech command including the activation word when the activation word has a meaning in the speech command. On the other hand, when the activation word does not have the meaning in the speech command, the speech recognition apparatus 100 may perform speech recognition on a previous sentence or a succeeding sentence from which the activation word is removed.
- the user may utter “Hi Robot Call Hana” to the speech recognition apparatus 100. Since the speech recognition apparatus 100 has received a speech signal for uttering the activation word “Hi Robot”, the speech recognition apparatus 100 may transmit a speech command including the activation word “Hi Robot Call Hana” to the server (or the embedded speech recognition module). Since the activation word “Hi Robot” is a basic activation word having no meaning in the speech command, the server (or the embedded speech recognition module) may perform speech recognition on only “Call Hana” that is the speech command from which the activation word is removed.
- the user may utter “What is the weather like today?” to the speech recognition apparatus 100. Since the speech recognition apparatus 100 has received the speech signal for uttering the activation word “weather”, the speech recognition apparatus 100 may transmit “What is the weather like today?”, which is a speech command including the activation word, to the server (or the embedded speech recognition module). The server (or the embedded speech recognition module) may perform speech recognition on “What is the weather like today?”, the speech command including the activation word, since the activation word “weather” has meaning in the speech command.
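- At the text level, the choice between stripping a basic activation word and keeping a meaningful one could be sketched as follows; the word lists are hypothetical, and a real system would operate on audio rather than plain text.

```python
# Sketch: keep the activation word when it carries meaning in the command,
# otherwise strip it and recognize only the remaining command.

BASIC_ACTIVATION_WORDS = {"hi robot", "hi galaxy"}       # meaningless in the command
CONTEXTUAL_ACTIVATION_WORDS = {"weather", "navigation"}  # meaningful in the command

def command_for_recognition(utterance: str) -> str:
    text = utterance.lower().strip()
    for word in BASIC_ACTIVATION_WORDS:
        if text.startswith(word):
            return text[len(word):].lstrip(" .,")        # recognize only the remainder
    for word in CONTEXTUAL_ACTIVATION_WORDS:
        if word in text:
            return text                                  # recognize the whole command
    return text

print(command_for_recognition("Hi Robot. Call Hana."))             # -> 'call hana.'
print(command_for_recognition("What is the weather like today?"))  # whole sentence kept
```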
- alternatively, the speech recognition apparatus 100 may remove the speech command for uttering the activation word from the input audio signal, and transmit the resulting audio signal to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may extract the activation word from the input audio signal.
- the speech recognition apparatus 100 may determine whether to transmit an audio signal including the speech command for uttering the activation word to the server (or the embedded speech recognition module) or transmit the audio signal from which the speech command for uttering the activation word is removed to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may transmit the audio signal including the speech command for uttering the activation word to the server (or the embedded speech recognition module) when the activation word has the meaning in the speech command.
- the speech recognition apparatus 100 may transmit a previous sentence or a succeeding sentence from which the speech command for uttering the activation word is removed to the server (or the embedded speech recognition module).
- the user may utter “Hi Robot. Call Hana.” to the speech recognition apparatus 100. Since the activation word “Hi Robot” is a basic activation word having no meaning in the speech command, the speech recognition apparatus 100 may transmit only “Call Hana” that is the audio signal from which the speech command for uttering the activation word is removed to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is a direct command requesting a response of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine whether the speech command is the direct command or an indirect command based on natural language understanding and sentence analysis of the extracted text. For example, the speech recognition apparatus 100 may determine whether the speech command is the direct command or the indirect command based on at least one of a sentence-final ending of the speech command, an intonation, a direction from which the speech command is received, and a volume of the speech command.
- the speech recognition apparatus 100 may determine whether to transmit the speech command to the server (or the embedded speech recognition module) or to perform speech recognition on the speech command, according to a determined type of the speech command. For example, the speech recognition apparatus 100 may perform natural language understanding and sentence type analysis using artificial intelligence technology.
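- A toy heuristic for the direct/indirect decision based on such cues is sketched below; the cue set and thresholds are illustrative assumptions, since the disclosure leaves the exact analysis to natural language understanding and sentence type analysis.

```python
# Toy sketch: classify a command as direct or indirect from a few simple cues.
from dataclasses import dataclass

@dataclass
class CommandCues:
    text: str                 # recognized text
    volume_db: float          # loudness of the utterance
    facing_device: bool       # rough direction-of-arrival cue

def is_direct_command(cues: CommandCues,
                      volume_threshold_db: float = 55.0) -> bool:
    ends_like_request = cues.text.rstrip().endswith(("?", "!")) \
        or cues.text.lower().startswith(("find", "call", "turn", "open", "play"))
    loud_enough = cues.volume_db >= volume_threshold_db
    return ends_like_request and loud_enough and cues.facing_device

print(is_direct_command(CommandCues("What is the weather like today?", 62.0, True)))   # True
print(is_direct_command(CommandCues("maybe the weather will be nice", 45.0, False)))   # False
```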
- the speech recognition apparatus 100 may transmit the audio signal including the speech command including the activation word to the server (or the embedded speech recognition module) when it is determined that the speech command is the direct command.
- the speech recognition apparatus 100 may transmit the stored input audio signal and an input audio signal received thereafter to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may search for and extract a signal including a sentence including an activation word from the stored input audio signal.
- the speech recognition apparatus 100 may transmit an audio signal including the sentence containing the activation word to the server (or the embedded speech recognition module).
- the server (or the embedded speech recognition module) may perform speech recognition on the speech command.
- the speech recognition apparatus 100 may not transmit the audio signal including the speech signal to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may repeat an operation of receiving and storing a new input audio signal while ignoring the previously input audio signal.
- the speech recognition apparatus 100 may determine whether or not the new input audio signal includes the speech signal for uttering the activation word.
- the speech recognition apparatus 100 may output a result of performing speech recognition.
- the speech recognition apparatus 100 may output the result of speech recognition performed by the server (or the embedded speech recognition module).
- the result of performing speech recognition may include text extracted from a speech command.
- the result of performing speech recognition may be a screen performing an operation corresponding to the result of performing speech recognition.
- the speech recognition apparatus 100 may perform an operation corresponding to the result of performing speech recognition.
- the speech recognition apparatus 100 may determine a function of the speech recognition apparatus 100 corresponding to the result of performing speech recognition, and output a screen performing the function.
- the speech recognition apparatus 100 may transmit a keyword corresponding to the result of performing speech recognition to an external server, receive information related to the transmitted keyword from the server, and output the received information on the screen.
- the speech recognition apparatus 100 may determine a method of outputting the result of performing speech recognition based on an analysis result by analyzing a speech command.
- the speech recognition apparatus 100 may output the result of speech recognition performed in various ways such as sound, light, image, and vibration in response to a speech command.
- the speech recognition apparatus 100 may notify the user that a response is waiting while waiting for a response to the speech command.
- the speech recognition apparatus 100 may inform the user that the response is waiting in various ways such as sound, light, image, and vibration.
- the speech recognition apparatus 100 may store the result of performing speech recognition and then output the stored result when the user makes an utterance related to the result of performing speech recognition.
- the speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is a direct command requesting a response of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine whether to output the result of performing speech recognition immediately or to output the result of performing speech recognition when a confirmation command is received from the user according to the determined type of the speech command.
- the speech recognition apparatus 100 may extract text uttered by the user by performing speech recognition on the input audio signal.
- the speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is the direct command requesting the response of the speech recognition apparatus 100, based on natural language understanding and sentence type analysis regarding the extracted text.
- the speech recognition apparatus 100 may perform an operation of responding to the speech command when it is determined that the speech command is the direct command.
- on the other hand, when it is determined that the speech command is not the direct command, the speech recognition apparatus 100 may display that a response to the speech command is possible and that the response is waiting.
- the speech recognition apparatus 100 may perform the operation of responding to the speech command when a confirmation command is received from the user.
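- The two response paths (respond immediately to a direct command; otherwise indicate that a response is waiting and respond on confirmation) might be wired together as in the sketch below, where the notification and confirmation mechanisms are placeholders rather than components defined by the disclosure.

```python
# Sketch: respond immediately to a direct command; for an indirect command,
# signal that a response is waiting and respond only after a confirmation.

def handle_recognition_result(result: str, direct: bool,
                              notify, respond, wait_for_confirmation):
    if direct:
        respond(result)                      # output the result right away
        return
    notify("response ready")                 # e.g., light, sound, image, or vibration
    if wait_for_confirmation():              # e.g., the user utters a confirmation command
        respond(result)

# Example wiring with simple stand-ins:
handle_recognition_result(
    "Today's weather is fine",
    direct=False,
    notify=print,
    respond=print,
    wait_for_confirmation=lambda: True,
)
```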
- FIG. 4 is a diagram for explaining a method of performing speech recognition by the speech recognition apparatus 100 according to an embodiment of the present disclosure.
- the speech recognition apparatus 100 is connected to an electronic control apparatus 401 of a vehicle and operates.
- the speech recognition apparatus 100 may communicate with the electronic control apparatus 401 of the vehicle via Bluetooth.
- the speech recognition apparatus 100 may determine that a location of the speech recognition apparatus 100 is the vehicle based on information that the speech recognition apparatus 100 is connected to the electronic control apparatus 401 of the vehicle.
- the speech recognition apparatus 100 may determine at least one activation word corresponding to the vehicle.
- the speech recognition apparatus 100 may extract candidate activation words corresponding to the vehicle including navigation, an air conditioner, a window, a gas supply, a trunk, a side mirror, etc., and candidate activation words corresponding to functions available in the vehicle including, for example, a text message, a schedule, etc.
- the speech recognition apparatus 100 may determine the extracted candidate activation words as activation words suitable for a current situation.
- the speech recognition apparatus 100 may determine an activation word based on whether or not the speech recognition apparatus 100 is moving. When the vehicle is traveling, the speech recognition apparatus 100 may determine only the candidate activation words that do not disturb safe vehicle operation among the candidate activation words corresponding to the vehicle and the candidate activation words corresponding to the functions available in the vehicle as activation words.
- when the vehicle is not traveling, the speech recognition apparatus 100 may determine all candidate activation words related to the vehicle, such as the navigation, the air conditioner, the gas supply, the trunk, the side mirror, the text message, the schedule, etc., as activation words.
- the speech recognition apparatus 100 may determine an activation word so that the speech recognition apparatus 100 does not respond to speech commands that may disturb safe vehicle operation. For example, when the vehicle is traveling, opening the trunk of the vehicle or opening the gas supply by a speech command may disturb safe vehicle operation. Therefore, when the vehicle is traveling, the speech recognition apparatus 100 may determine only some candidate activation words, such as navigation, air conditioner, text message, and schedule, which do not disturb safe vehicle operation as activation words.
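- The driving-state filter described above could be expressed as a simple list filter; the set of words treated as unsafe while traveling is an illustrative assumption drawn from the example.

```python
# Sketch: while the vehicle is traveling, drop activation words whose actions
# could disturb safe operation.

VEHICLE_ACTIVATION_WORDS = ["navigation", "air conditioner", "window",
                            "gas supply", "trunk", "side mirror",
                            "text message", "schedule"]
UNSAFE_WHILE_DRIVING = {"gas supply", "trunk", "side mirror", "window"}   # assumed set

def vehicle_activation_words(is_traveling: bool) -> list[str]:
    if not is_traveling:
        return list(VEHICLE_ACTIVATION_WORDS)
    return [w for w in VEHICLE_ACTIVATION_WORDS if w not in UNSAFE_WHILE_DRIVING]

print(vehicle_activation_words(is_traveling=True))
# -> ['navigation', 'air conditioner', 'text message', 'schedule']
```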
- the speech recognition apparatus 100 may receive and store an input audio signal prior to performing speech recognition.
- the speech recognition apparatus 100 may analyze the input audio signal to determine whether the input audio signal includes a speech signal for uttering an activation word.
- the user 10 may use the speech recognition function, without having to utter a specific activation word such as “Hi Robot”, in order to be guided to a train station.
- the user 10 may utter “Find the way to the train station on the navigation!” that is a speech command to ask for a direction to get to the train station.
- the speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering “navigation” which is an activation word corresponding to the vehicle is received.
- the speech recognition apparatus 100 may perform speech recognition on “Find the way to the train station on the navigation!” that is a whole speech command including the speech signal for uttering "navigation”.
- the speech recognition apparatus 100 may transmit “Find at” which is a speech command received after the activation word to a server (or an embedded speech recognition module) and perform speech recognition.
- the speech recognition apparatus 100 may transmit a previously received and stored speech command to the server (or the embedded speech recognition module) together with the speech command received after the activation word and perform speech recognition.
- the speech recognition apparatus 100 may perform speech recognition on “the way to the train station” that is the speech command which is received and stored before the activation word, the activation word “navigation”, and “Find at” which is the speech command received after the activation word.
- the speech recognition apparatus 100 may guide a route to a train station as a response to the speech command of the user 10.
- in FIG. 4, a case where the location of the speech recognition apparatus 100 is the vehicle is shown as an example.
- embodiments of the present disclosure are not limited thereto.
- for example, when the speech recognition apparatus 100 is located in a house, light, television, air conditioner, washing machine, refrigerator, weather, date, time, etc. may be determined as activation words corresponding to the house.
- the speech recognition apparatus 100 may determine at least one activation word based on the location of the speech recognition apparatus 100 or a characteristic of a space in which the speech recognition apparatus 100 is located.
- the speech recognition apparatus 100 may acquire information related to the location of the speech recognition apparatus 100 based on an electronic apparatus connected to the speech recognition apparatus 100, a network connected to the speech recognition apparatus 100, or a base station connected to the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine that the speech recognition apparatus 100 is located in a vehicle when the speech recognition apparatus 100 is connected to the vehicle's audio system via Bluetooth.
- the speech recognition apparatus 100 may acquire information related to a current location by a GPS module included in the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine words associated with an electronic apparatus that the speech recognition apparatus 100 may control in the house or a function of the electronic apparatus as activation words. As the location of the speech recognition apparatus 100 in the house changes, the speech recognition apparatus 100 may determine different words as activation words according to the location. For example, when the speech recognition apparatus 100 is located in a living room, the speech recognition apparatus 100 may determine words associated with all electronic apparatuses in the house as activation words. On the other hand, when the speech recognition apparatus 100 is located in a room, the speech recognition apparatus 100 may determine only words associated with electronic apparatuses in the room as activation words.
- the speech recognition apparatus 100 may determine words associated with an electronic apparatus that the speech recognition apparatus 100 may control in the vehicle or a function of the electronic apparatus as activation words.
- the speech recognition apparatus 100 may determine different activation words even when the location of the speech recognition apparatus 100 in the vehicle changes or a characteristic of a user of the speech recognition apparatus 100 changes.
- when the speech recognition apparatus 100 is located in the driver's seat or the user of the speech recognition apparatus 100 is driving, the speech recognition apparatus 100 may determine words related to all electronic apparatuses and functions that a driver may control in the vehicle as activation words. On the other hand, when the speech recognition apparatus 100 is located in a seat other than the driver's seat or the user of the speech recognition apparatus 100 is not driving, the speech recognition apparatus 100 may determine only words related to electronic apparatuses and functions that do not disturb driving as activation words.
- for example, when the user of the speech recognition apparatus 100 is the driver, the speech recognition apparatus 100 may determine words related to driving of the vehicle, such as “side mirrors”, “lights”, and “steering wheel”, as activation words.
- on the other hand, when the user of the speech recognition apparatus 100 is a passenger who does not drive, the speech recognition apparatus 100 may determine only words related to electronic apparatuses that are not related to the driving of the vehicle, such as “air conditioner” and “radio”, as activation words.
- the speech recognition apparatus 100 may determine an activation word based on whether or not there is an environment in which noise exists. For example, the speech recognition apparatus 100 may not determine, as an activation word, a word whose characteristic is similar to that of noise in an environment in which noise is frequently generated.
- the speech recognition apparatus 100 may determine an activation word based on whether a space in which the speech recognition apparatus 100 is located is a common space or a private space. For example, when the speech recognition apparatus 100 is located in a common space such as a corridor of a company, the speech recognition apparatus 100 may determine only words corresponding to the common space as activation words. On the other hand, when the speech recognition apparatus 100 is located in a private space such as a private office, the speech recognition apparatus 100 may determine words related to private affairs as activation words together with the words corresponding to the public space. For example, when the speech recognition apparatus 100 is located in the common space, the speech recognition apparatus 100 may activate a speech recognition function by activation words corresponding to the common space such as “air conditioner”, “light”, etc.
- on the other hand, when the speech recognition apparatus 100 is located in a private space such as a private office, the speech recognition apparatus 100 may also activate the speech recognition function by words related to private affairs, such as “telephone” or “text message”, along with the activation words corresponding to the common space, such as “air conditioner”, “light”, etc.
- the speech recognition apparatus 100 may determine words on which local language characteristics are reflected as activation words based on a region where the speech recognition apparatus 100 is located. For example, when the speech recognition apparatus 100 is located in a region where a dialect is used, the speech recognition apparatus 100 may determine words on which the dialect is reflected as activation words.
- the speech recognition apparatus 100 may determine at least one activation word based on time.
- the speech recognition apparatus 100 may use a specific word as an activation word for a specific period of time. After the specific period of time, the speech recognition apparatus 100 may no longer use the specific word as the activation word.
- the speech recognition apparatus 100 may determine a word whose frequency of use has recently increased as an activation word by learning speech commands received from the user. For example, if the user is about to travel to Jeju Island, the user may frequently input speech commands related to “Jeju Island” to the speech recognition apparatus 100 to obtain information related to “Jeju Island”. The speech recognition apparatus 100 may add a word that appears more frequently than a threshold as an activation word. Therefore, even if the user does not separately activate the speech recognition function, the user may use the speech recognition function by simply uttering a speech command including the added activation word.
- the speech recognition apparatus 100 may determine the activation word based on current time information in which the speech recognition apparatus 100 is operating. For example, the speech recognition apparatus 100 may use different activation words depending on season, day, date, whether it is weekend or weekday, and a time zone. The speech recognition apparatus 100 may learn speech commands received from the user according to the season, the day, the date, the time, etc., thereby updating an activation word suitable for each situation and using the updated activation word.
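- A possible sketch of time-limited, frequency-driven activation words (such as the “Jeju Island” example) follows; the time window and count threshold are assumptions, not parameters specified by the disclosure.

```python
# Sketch: promote recently frequent words to activation words for a limited
# period, and let them expire once they fall outside the time window.
import time
from collections import Counter

class TemporaryActivationWords:
    def __init__(self, window_seconds: float = 7 * 24 * 3600, min_count: int = 5):
        self.window = window_seconds
        self.min_count = min_count
        self.events: list[tuple[float, str]] = []   # (timestamp, word)

    def observe(self, command_text: str, now: float | None = None) -> None:
        now = time.time() if now is None else now
        for word in command_text.lower().split():
            self.events.append((now, word))

    def current_activation_words(self, now: float | None = None) -> list[str]:
        now = time.time() if now is None else now
        recent = Counter(w for t, w in self.events if now - t <= self.window)
        return [w for w, c in recent.items() if c >= self.min_count]

words = TemporaryActivationWords(window_seconds=60, min_count=2)
words.observe("flight to jeju island", now=0.0)
words.observe("jeju island weather", now=10.0)
print(words.current_activation_words(now=20.0))   # -> ['jeju', 'island']
```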
- the speech recognition apparatus 100 may determine at least one activation word based on a movement of the user of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may reflect a change in an utterance characteristic in determining the activation word, depending on whether the user of the speech recognition apparatus 100 is stationary, walking, or running. For example, when the user of the speech recognition apparatus 100 is walking or running, the speech recognition apparatus 100 may reflect the characteristic that the user is out of breath in determining the activation word.
- the speech recognition apparatus 100 may determine at least one activation word based on information related to a characteristic of the user who uses the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine at least one activation word based on an age of the user of the speech recognition apparatus 100.
- when the user is an adult, the speech recognition apparatus 100 may determine words related to common interests of adults as activation words. For example, the speech recognition apparatus 100 may determine words such as news and sports, which are related to common interests of adults, as activation words.
- when the user is a minor, the speech recognition apparatus 100 may determine words related to characteristics of the minor as activation words. For example, when the user is a high school student, the speech recognition apparatus 100 may determine words such as tests, math, calculus, etc., which are related to a common interest of the high school student, as activation words.
- the speech recognition apparatus 100 may determine at least one activation word based on a gender of the user of the speech recognition apparatus 100.
- when the user is a woman, the speech recognition apparatus 100 may determine words related to a common interest of the woman as activation words. For example, the speech recognition apparatus 100 may determine the word “cosmetics”, which is related to the common interest of the woman, as an activation word.
- the speech recognition apparatus 100 may determine at least one activation word based on an occupation or hobby of the user of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine words on which characteristics of the user according to occupations are reflected or words related to hobbies as activation words. For example, when the hobby of the user of the speech recognition apparatus 100 is listening to music, the speech recognition apparatus 100 may determine words related to hobby such as music, radio, etc. as activation words.
- the speech recognition apparatus 100 may operate differently depending on whether the speech recognition apparatus 100 is used by only one person or by several people.
- the speech recognition apparatus 100 may recognize the gender or age of the user by analyzing a characteristic of speech or may perform an operation of identifying the user by analyzing a characteristic of a face.
- the speech recognition apparatus 100 may determine words suitable for the identified user as activation words.
- the speech recognition apparatus 100 may reflect history in which words are used in determining an activation word.
- the speech recognition apparatus 100 may reflect history in which words are used in common regardless of the user in determining the activation word.
- the speech recognition apparatus 100 may determine the activation word from a database of candidate activation words corresponding to each situation, which is used in common regardless of the user.
- embodiments of the present disclosure are not limited thereto.
- the speech recognition apparatus 100 may reflect history in which words are used by each individual in determining an activation word.
- the speech recognition apparatus 100 may manage a database including candidate activation words suitable for each individual.
- the speech recognition apparatus 100 may update a personalized database by accumulating a frequency of using words in each situation for each individual.
- the speech recognition apparatus 100 may determine an activation word suitable for a current situation from the personalized database.
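- The personalized database described above could, for example, be organized as in the sketch below, which keeps a per-user, per-situation word-frequency table and returns the most frequent words for the current user and situation. The data layout and the cut-off of five words are illustrative assumptions, not part of this disclosure.

```python
from collections import defaultdict, Counter

class PersonalizedActivationDB:
    """Accumulates, per user and per situation, how often each word is used."""

    def __init__(self):
        # (user_id, situation) -> Counter of word frequencies
        self.usage = defaultdict(Counter)

    def record(self, user_id, situation, transcript):
        """Accumulate word frequencies for one recognized speech command."""
        self.usage[(user_id, situation)].update(transcript.lower().split())

    def activation_words(self, user_id, situation, top_n=5):
        """Return the words this user most often utters in this situation."""
        return [word for word, _ in self.usage[(user_id, situation)].most_common(top_n)]

db = PersonalizedActivationDB()
db.record("alice", "living_room_evening", "turn on the TV")
db.record("alice", "living_room_evening", "TV volume up")
print(db.activation_words("alice", "living_room_evening"))
```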
- FIG. 5 is a flowchart illustrating a method of performing speech recognition by the speech recognition apparatus 100 according to an embodiment of the present disclosure.
- Operations S510 and S520 of FIG. 5 may correspond to operation S310 of FIG. 3
- operation S530 of FIG. 5 may correspond to operation S320 of FIG. 3
- operations S540 to S580 of FIG. 5 may correspond to operation S330 of FIG. 3
- operation S590 of FIG. 5 may correspond to operation S340 of FIG. 3.
- Descriptions of FIG. 3 may be applied to each operation of FIG. 5 corresponding to each operation of FIG. 3. Thus, descriptions of redundant operations are omitted.
- the speech recognition apparatus 100 may acquire information related to a situation in which the speech recognition apparatus 100 operates.
- the speech recognition apparatus 100 may include one or more sensors and may sense various information for determining the situation in which the speech recognition apparatus 100 operates.
- the sensor included in the speech recognition apparatus 100 may sense a location of the speech recognition apparatus 100, information related to a movement of the speech recognition apparatus 100, information capable of identifying a user who is using the speech recognition apparatus 100, and surrounding environment information of the speech recognition apparatus 100, and the like.
- the speech recognition apparatus 100 may include at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetic sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor, or a combination thereof.
- the speech recognition apparatus 100 may acquire information sensed by an external electronic apparatus as the information related to the situation in which the speech recognition apparatus 100 operates.
- the external electronic apparatus may be at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetic sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor, or a combination thereof.
- the speech recognition apparatus 100 may acquire a user input as the information related to the situation in which the speech recognition apparatus 100 operates.
- the speech recognition apparatus 100 may acquire information related to a location in which the speech recognition apparatus 100 operates or a characteristic of a user of the speech recognition apparatus 100 from the user input.
- the speech recognition apparatus 100 may acquire the information related to the situation in which the speech recognition apparatus 100 operates through communication with another electronic apparatus. For example, when the speech recognition apparatus 100 is connected to an electronic apparatus recognized as existing in a house through near distance communication, the speech recognition apparatus 100 may determine that the speech recognition apparatus 100 is present in the house. For example, the speech recognition apparatus 100 may acquire information such as house, indoors, private space as the location of the speech recognition apparatus 100.
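- As a minimal illustration of such inference, the sketch below guesses the operating location from the set of connected devices; the device names and the device-to-place mapping are hypothetical examples chosen only for this sketch.

```python
# Hypothetical mapping from discoverable home or vehicle devices to the place they imply.
DEVICE_TO_PLACE = {
    "refrigerator": "house",
    "washing_machine": "house",
    "car_audio": "vehicle",
}

def infer_location(connected_devices):
    """Guess where the apparatus operates from the devices it is connected to."""
    places = {DEVICE_TO_PLACE[d] for d in connected_devices if d in DEVICE_TO_PLACE}
    if len(places) == 1:
        return places.pop()   # e.g. "house" implies indoors / private space
    return "unknown"          # no recognizable device, or conflicting evidence

print(infer_location({"refrigerator", "washing_machine"}))  # house
```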
- the speech recognition apparatus 100 may determine at least one activation word based on the information obtained in operation S510.
- the speech recognition apparatus 100 may store candidate activation words suitable for each situation with respect to a plurality of situations prior to determining an activation word. Based on the information obtained in operation S510, the speech recognition apparatus 100 may retrieve candidate activation words suitable for a current situation from the stored data. The speech recognition apparatus 100 may determine at least one of the retrieved candidate activation words as the activation word.
- the speech recognition apparatus 100 may communicate with a server that stores candidate activation words suitable for each situation with respect to the plurality of situations prior to determining the activation word. Based on the information obtained in operation S510, the speech recognition apparatus 100 may retrieve candidate activation words suitable for the current situation from the server. The speech recognition apparatus 100 may determine at least one of the retrieved candidate activation words as the activation word. The candidate activation words for each situation stored in the server may be shared and used by a plurality of speech recognition apparatuses.
- the speech recognition apparatus 100 may determine the number of activation words based on a degree to which a speech recognition function of the speech recognition apparatus 100 is sensitively activated. A priority may be assigned to the candidate activation words for each situation. The speech recognition apparatus 100 may determine some of the candidate activation words as at least one activation word based on the degree to which the speech recognition function is sensitively activated and the priorities.
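- One possible way to turn the sensitivity setting and the per-word priorities into a concrete activation-word set is sketched below; the linear scaling of the word count with the sensitivity value is an assumption chosen only for illustration.

```python
def select_activation_words(candidates, sensitivity):
    """
    candidates:  list of (word, priority) pairs for the current situation,
                 where a smaller priority value means a more important word.
    sensitivity: value in [0.0, 1.0]; higher means the speech recognition
                 function should be activated more easily (more words).
    """
    ranked = sorted(candidates, key=lambda pair: pair[1])
    count = max(1, round(len(ranked) * sensitivity))   # always keep at least one word
    return [word for word, _ in ranked[:count]]

house_candidates = [("air conditioner", 1), ("TV", 2), ("light", 3), ("weather", 4)]
print(select_activation_words(house_candidates, sensitivity=0.5))  # top half by priority
```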
- the speech recognition apparatus 100 may determine at least one activation word based on information related to a characteristic of the user of the speech recognition apparatus 100.
- the speech recognition apparatus 100 used by family members of various ages may determine different activation words by recognizing the age of the user from the speech, by recognizing the face of the user, or based on initially input user information.
- the speech recognition apparatus 100 may determine all candidate activation words related to the house such as a TV, an air conditioner, a vacuum cleaner, weather, a schedule, an Internet connection, watching of TV channels for children, heating, cooling, humidity control, etc., as at least one activation word.
- when the user is a child, the speech recognition apparatus 100 may determine the activation word so as to respond only to speech commands for functions that the child is allowed to control by speech. Therefore, the speech recognition apparatus 100 may determine only some candidate activation words such as weather, watching of TV channels for children, etc. as at least one activation word.
- the speech recognition apparatus 100 may receive and store the input audio signal.
- the speech recognition apparatus 100 may determine whether or not an input audio signal having a length longer than a predetermined time has been stored. If the input audio signal having the length longer than the predetermined time is stored, then in operation S560, the speech recognition apparatus 100 may delete the input audio signal that was received in the past.
- FIG. 5 shows an example of receiving an audio signal in units of a predetermined time length
- the speech recognition apparatus 100 may receive and store an audio signal in units of a sentence.
- the speech recognition apparatus 100 may receive and store the audio signal in units of data of a predetermined size.
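- Keeping only the most recent audio, as in operations S530 to S560, could be implemented with a rolling buffer such as the following sketch; the frame length and the buffer duration are illustrative assumptions.

```python
from collections import deque

class RollingAudioBuffer:
    """Stores only the most recent `max_seconds` of received audio frames."""

    def __init__(self, max_seconds=5.0, frame_seconds=0.02):
        self.max_frames = int(max_seconds / frame_seconds)
        self.frames = deque(maxlen=self.max_frames)   # oldest frames drop automatically

    def push(self, frame_bytes):
        """Store a newly received frame; the oldest frame is deleted when the buffer is full."""
        self.frames.append(frame_bytes)

    def snapshot(self):
        """Return the buffered audio (oldest first), e.g. for transmission to a recognizer."""
        return b"".join(self.frames)

buffer = RollingAudioBuffer(max_seconds=2.0, frame_seconds=0.5)
for i in range(6):                 # push 3 seconds' worth of dummy frames
    buffer.push(bytes([i]) * 4)
print(len(buffer.frames))          # only the last 4 frames (2 seconds) remain
```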
- the speech recognition apparatus 100 may determine whether a speech signal for uttering the activation word has been received.
- the speech recognition apparatus 100 may transmit the stored input audio signal and the input audio signal received thereafter to a server (or an embedded speech recognition module).
- the speech recognition apparatus 100 may search for and extract a signal including a sentence including an activation word from the stored input audio signals.
- the speech recognition apparatus 100 may transmit the audio signal including the sentence including the activation word to the server (or the embedded speech recognition module).
- the speech recognition apparatus 100 may use the following method to search for and extract a signal including a sentence including an activation word.
- the speech recognition apparatus 100 may determine a start and an end of a sentence based on at least one of a length of a silence section, a sentence structure, and an intonation.
- the speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on a determined result.
- the speech recognition apparatus 100 may determine a past audio signal of a predetermined length and a currently received audio signal as a start and an end of a sentence from the speech signal in which the activation word is uttered.
- the speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on the determined result.
- the speech recognition apparatus 100 may determine a past speech signal of a variable length before the speech signal in which the activation word has been uttered and a speech signal of a variable length after the speech signal in which the activation word has been uttered as a start and an end of a sentence based on a grammatical position of the activation word.
- the speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on the determined result.
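- The following sketch illustrates one way to cut the buffered audio down to the sentence containing the activation word by treating sufficiently long low-energy stretches as sentence boundaries, in the spirit of the silence-section approach described above; the energy threshold, frame length, and minimum silence length are assumptions for illustration.

```python
import numpy as np

def extract_sentence(samples, rate, keyword_index,
                     silence_threshold=0.01, min_silence=0.3):
    """
    samples:       1-D float array of buffered audio.
    keyword_index: sample index at which the activation word was detected.
    Returns the audio between the silence sections surrounding the activation word.
    """
    frame = int(0.02 * rate)                        # 20 ms analysis frames
    n_frames = len(samples) // frame
    energy = np.array([np.mean(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    silent = energy < silence_threshold
    min_frames = int(min_silence / 0.02)

    keyword_frame = min(keyword_index // frame, n_frames - 1)
    start, end, run = 0, n_frames, 0
    for i in range(keyword_frame, -1, -1):          # search backwards for a silence section
        run = run + 1 if silent[i] else 0
        if run >= min_frames:
            start = i + min_frames
            break
    run = 0
    for i in range(keyword_frame, n_frames):        # search forwards for a silence section
        run = run + 1 if silent[i] else 0
        if run >= min_frames:
            end = i - min_frames + 1
            break
    return samples[start * frame:end * frame]

# Two bursts of speech separated by silence; the activation word lies in the second burst.
rate = 16000
t = np.arange(2 * rate) / rate
speech = np.sin(2 * np.pi * 200 * t) * (np.abs(np.sin(np.pi * t)) > 0.3)
sentence = extract_sentence(speech, rate, keyword_index=int(1.5 * rate), min_silence=0.1)
print(round(len(sentence) / rate, 2), "seconds kept")
```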
- the speech recognition apparatus 100 may repeatedly perform the operation of receiving and storing the input audio signal of the length longer than the predetermined length.
- the speech recognition apparatus 100 may perform speech recognition.
- the speech recognition apparatus 100 may extract a frequency characteristic of the speech signal from the input audio signal and perform speech recognition using an acoustic model and a language model.
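- As a simple illustration of extracting a frequency characteristic from the speech signal, the sketch below computes framewise log-magnitude spectra with a short-time Fourier transform; actual acoustic models typically use richer features, and the frame and hop sizes here are assumed values.

```python
import numpy as np

def log_magnitude_spectrogram(samples, rate, frame_ms=25, hop_ms=10):
    """Return a (frames x bins) array of framewise log-magnitude spectra."""
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    window = np.hanning(frame)
    spectra = []
    for start in range(0, len(samples) - frame + 1, hop):
        segment = samples[start:start + frame] * window
        spectra.append(np.log(np.abs(np.fft.rfft(segment)) + 1e-10))  # avoid log(0)
    return np.array(spectra)

# One second of synthetic audio at 16 kHz.
rate = 16000
t = np.arange(rate) / rate
features = log_magnitude_spectrogram(np.sin(2 * np.pi * 440 * t), rate)
print(features.shape)   # (number of frames, number of frequency bins)
```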
- the speech recognition apparatus 100 may output a result of performing speech recognition.
- the speech recognition apparatus 100 may output the result of performing speech recognition in various ways such as sound, light, image, vibration, etc.
- FIG. 6 is a flowchart of a method of outputting a result of speech recognition performed by a speech recognition apparatus according to an embodiment of the present disclosure.
- operations S610 to S650 in FIG. 6 may correspond to operation S330 in FIG. 3.
- the speech recognition apparatus 100 may analyze a speech command.
- the speech recognition apparatus 100 may analyze the speech command through natural language understanding and dialog management.
- the speech recognition apparatus 100 may perform natural language understanding on the result of performing speech recognition.
- the speech recognition apparatus 100 may extract text estimated to have been uttered by a speaker by performing speech recognition on the speech command.
- the speech recognition apparatus 100 may perform natural language understanding on the text estimated to have been uttered by the speaker.
- the speech recognition apparatus 100 may grasp an intention of the speaker through natural language processing.
- the speech recognition apparatus 100 may determine whether the speech command is a direct command for requesting a response of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may determine whether the speech command is a direct command or an indirect command based on at least one of a sentence structure of the speech command, an intonation, a direction in which the speech command is received, a size of the speech command, and a result of natural language understanding.
- the speech command may mean any acoustic speech signal received by the speech recognition apparatus 100 or may mean a speech signal uttered by a human being among the acoustic speech signals received by the speech recognition apparatus 100.
- the direct command may include a speech command that the user intentionally uttered to allow the speech recognition apparatus 100 to perform an operation that responds to the speech command.
- the indirect command may include all speech commands except the direct command among speech commands uttered by the user.
- the indirect command may include a speech signal that the user has uttered without intending to perform speech recognition by the speech recognition apparatus 100.
- the speech recognition apparatus 100 may perform an operation of responding to the speech command when it is determined that the speech command is the direct command.
- the speech recognition apparatus 100 may display that a response to the speech command is possible.
- the speech recognition apparatus 100 may notify the user that the response is waiting while waiting for the response to the speech command.
- the speech recognition apparatus 100 may receive a confirmation command from the user.
- the speech recognition apparatus 100 may perform the operation of responding to the speech command when the confirmation command is received from the user.
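- The ordering of operations S610 to S650 could be arranged as in the sketch below; the classifier and the output mechanisms are hypothetical helper functions passed in only to make the control flow concrete.

```python
def handle_speech_command(command_text, classify, respond,
                          indicate_waiting, wait_for_confirmation):
    """
    classify(text)            -> "direct" or "indirect"  (hypothetical helper)
    respond(text)             -> perform the operation of responding to the command
    indicate_waiting()        -> display that a response to the command is possible
    wait_for_confirmation()   -> True if the user utters the confirmation command
    """
    if classify(command_text) == "direct":
        respond(command_text)                 # respond immediately to a direct command
    else:
        indicate_waiting()                    # notify the user that a response is waiting
        if wait_for_confirmation():           # e.g. the user says "Say Robot"
            respond(command_text)

# Example with stand-in callbacks, mirroring the dialog of FIGS. 7A and 7B below.
handle_speech_command(
    "I do not know what the weather will be like tomorrow",
    classify=lambda text: "indirect",
    respond=lambda text: print("It will be sunny tomorrow"),
    indicate_waiting=lambda: print("(light blinks: a response is waiting)"),
    wait_for_confirmation=lambda: True,
)
```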
- FIGS. 7A and 7B show examples in which a speech recognition apparatus is included in a home robot.
- the speech recognition apparatus 100 may be various mobile computing apparatuses or non-mobile computing apparatuses.
- the speech recognition apparatus 100 may be included in a central control apparatus that controls a home network connecting various home appliances in a house.
- FIGS. 7A and 7B show cases where the speech recognition apparatus 100 determines “weather” as an activation word corresponding to a current situation according to an embodiment of the present disclosure.
- the user 10 may utter “I do not know what the weather will be like tomorrow” expressing an intention to wonder about tomorrow's weather during a dialog with another speaker. Since the speech recognition apparatus 100 has received a speech signal for uttering the activation word “weather”, the speech recognition apparatus 100 may perform speech recognition on a sentence “I do not know what the weather will be like tomorrow” including the activation word. The speech recognition apparatus 100 may activate a speech recognition function when the speech signal for uttering the activation word “weather” is received.
- the speech recognition apparatus 100 may perform speech recognition on “I do not know what the weather will be like tomorrow”, which is a whole speech command including the speech signal for uttering the activation word “weather”.
- the speech recognition apparatus 100 may transmit “I do not know what” that is a speech command received after the activation word to a server to allow the server to perform speech recognition. Also, the speech recognition apparatus 100 may transmit a previously received and stored speech command to the server together with the speech command received after the activation word and receive a result of speech recognition performed by the server from the server. When the speech signal for uttering the activation word “weather” is received, the speech recognition apparatus 100 may perform speech recognition on “tomorrow” that is a speech command received and stored before the activation word, the activation word “weather”, and “I do not know what” that is the speech command received after the activation word.
- the speech recognition apparatus 100 may transmit “tomorrow weather” which is a keyword corresponding to the result of performing speech recognition to an external server and may receive and store “sunny” as information related to the transmitted keyword from the server.
- the speech recognition apparatus 100 may perform natural language processing and sentence structure analysis on the speech command on which speech recognition has been performed to determine whether the speech command is a direct command for requesting a response of the speech recognition apparatus 100. For example, the speech recognition apparatus 100 may determine that the uttered speech command of FIG. 7A is an indirect command.
- the speech recognition apparatus 100 may display that a response to the speech command is possible. For example, the speech recognition apparatus 100 may inform the user 10 that the response is waiting in various ways such as sound, light, image, vibration, etc.
- the user 10 may recognize that the speech recognition apparatus 100 is waiting for the response and may issue a confirmation command to request the response to the speech command.
- the user 10 may issue the confirmation command to the speech recognition apparatus 100 by speaking “Say Robot” that is a previously confirmed confirmation command.
- the speech recognition apparatus 100 may output speech “It will be sunny tomorrow” as an operation to respond to the speech command.
- the speech recognition apparatus 100 may perform speech recognition when the user 10 merely makes a natural utterance suitable for a situation, even if the user 10 does not perform an operation for directly activating the speech recognition function.
- the speech recognition apparatus 100 may perform speech recognition by recognizing a word included in the natural utterance suitable for the situation uttered by the user 10 as an activation word.
- information about “tomorrow's weather”, which is content that the user 10 wants to know, may be acquired in advance before receiving the speech command of the user 10, “Say Robot”.
- the speech recognition apparatus 100 may provide a proactive service before the user 10 utters a speech command so that the speech recognition apparatus 100 performs the speech recognition function.
- FIGS. 7A and 7B show examples in which the speech recognition apparatus 100 operates in a manner of notifying the user 10 that a response to the speech command is waiting when the speech command is an indirect command.
- an embodiment is not limited to FIGS. 7A and 7B.
- the speech recognition apparatus 100 may output a result of performing speech recognition only when a speech command is a direct command for requesting a response of the speech recognition apparatus 100.
- the speech recognition apparatus 100 may not take a separate action when the speech command is not the direct command for requesting the response of the speech recognition apparatus 100.
- FIG. 8 shows a case where the speech recognition apparatus 100 determines "air conditioner" as an activation word corresponding to a current situation according to an embodiment of the present disclosure.
- the first user 10 may utter “Today is the weather to turn on the air conditioner” to describe the current weather during a dialog with a second user 20.
- the speech recognition apparatus 100 may determine whether “Today is the weather to turn on the air conditioner” that is a speech command including an activation word is a direct command or an indirect command.
- the speech recognition apparatus 100 may determine that a speech command of the first user 10 is not the direct command. For example, the speech recognition apparatus 100 may determine that the speech command of the first user 10 is not the direct command because the speech command of the first user 10 does not have a sentence structure to ask a question or issue a command. The speech recognition apparatus 100 may not transmit an audio signal including the speech command to a server (or an embedded speech recognition module) because it is determined that the speech command of the first user 10 is not the direct command. The speech recognition apparatus 100 may ignore an utterance of the first user 10 that has been received and stored and repeat an operation of newly receiving and storing an input audio signal.
- the second user 20 may utter “turn on the air conditioner” that is a speech command to request the speech recognition apparatus 100 to turn on the air conditioner in response to the utterance of the first user 10.
- the speech recognition apparatus 100 may determine whether “turn on the air conditioner” that is the speech command including the activation word is a direct command.
- the speech recognition apparatus 100 may determine that the speech command of the second user 20 is the direct command. For example, the speech recognition apparatus 100 may determine that the speech command of the second user 20 is the direct command because the speech command of the second user 20 has a sentence structure to issue a command.
- the speech recognition apparatus 100 may transmit an audio signal including the speech command including the activation word to the server (or the embedded speech recognition module) because it is determined that the speech command of the second user 20 is the direct command.
- the server (or the embedded speech recognition module) may perform speech recognition on the speech command.
- the speech recognition apparatus 100 may control the air conditioner so that power of the air conditioner is turned on in response to a speech recognition result.
- FIG. 9 is a flowchart of a method of determining whether a speech command is a direct command or an indirect command performed by a speech recognition apparatus according to an embodiment of the present disclosure.
- operations S910 to S930 in FIG. 9 may correspond to operation S610 in FIG. 6.
- the speech recognition apparatus 100 may filter the speech command based on matching accuracy based on natural language understanding.
- the speech recognition apparatus 100 may calculate the matching accuracy indicating a degree to which the speech command of a user may be matched with a machine-recognizable command based on natural language understanding.
- the speech recognition apparatus 100 may primarily determine whether the speech command is the direct command for requesting a response of the speech recognition apparatus 100 by comparing the calculated matching accuracy with a predetermined threshold value.
- the speech recognition apparatus 100 may secondarily determine whether the speech command is the direct command by analyzing a sentence structure of the speech command.
- the speech recognition apparatus 100 may analyze morphemes included in the speech command and analyze the sentence structure of the speech command based on a final ending. For example, when the speech recognition apparatus 100 determines that the speech command is an interrogative type sentence (e.g., “how...?”, “what...?”, etc.) or an imperative type sentence (e.g., “close...!”, “stop...!”, “do...!”, etc.), the speech recognition apparatus 100 may assign a weight to a reliability value indicating that the speech command is a direct command.
- the speech recognition apparatus 100 may filter the speech command based on the reliability value calculated in operations S910 and S920. The speech recognition apparatus 100 may finally determine whether the speech command is the direct command by comparing the reliability value calculated through operations S910 and S920 with a predetermined threshold value.
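- A compact sketch of this two-stage filtering follows; the matching-accuracy input, the surface cues standing in for the final-ending analysis, the structure weight, and the thresholds are all illustrative assumptions.

```python
# Assumed surface cues for illustration; a real system would analyze the final
# ending of the sentence as described above (e.g., for Korean morphology).
INTERROGATIVE_CUES = ("how", "what")
IMPERATIVE_CUES = ("close", "stop", "do", "turn")

def is_direct_command(command_text, matching_accuracy,
                      accuracy_threshold=0.4, structure_weight=0.3,
                      reliability_threshold=0.7):
    """
    matching_accuracy: degree in [0, 1] to which the command could be matched
    with a machine-recognizable command by natural language understanding.
    """
    # Operation S910: primary filtering based on matching accuracy.
    if matching_accuracy < accuracy_threshold:
        return False

    # Operation S920: secondary analysis of the sentence structure; add a weight
    # when the sentence looks interrogative or imperative.
    reliability = matching_accuracy
    words = command_text.lower().split()
    if command_text.strip().endswith(("?", "!")) or (
            words and words[0] in INTERROGATIVE_CUES + IMPERATIVE_CUES):
        reliability += structure_weight

    # Operation S930: final filtering by comparing the reliability value
    # with a predetermined threshold.
    return reliability >= reliability_threshold

print(is_direct_command("turn on the air conditioner", matching_accuracy=0.5))   # True
print(is_direct_command("today is the weather to turn on the air conditioner",
                        matching_accuracy=0.5))                                  # False
```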
- the speech recognition apparatus 100 may extract candidate activation words according to each situation before determining an activation word suitable for a situation.
- the speech recognition apparatus 100 may store the extracted candidate activation words in an embedded database or a database included in an external server.
- FIG. 10 is a flowchart of a method of determining candidate activation words respectively corresponding to situations performed by a speech recognition apparatus according to an embodiment of the present disclosure.
- the speech recognition apparatus 100 may group speech commands that may be uttered according to each situation.
- the speech commands that may be uttered in each situation may include speech commands that are expected to be uttered by a user in each situation or speech commands that have been uttered by the user in each situation.
- the speech recognition apparatus 100 may receive a corpus uttered by the user in a plurality of situations and group speech commands included in the received corpus.
- the speech recognition apparatus 100 may receive information about a situation in which the speech commands included in the corpus are uttered, together with the corpus.
- the speech recognition apparatus 100 may extract statistics on words included in the speech commands that may be uttered for each situation.
- the speech recognition apparatus 100 may extract a frequency of a plurality of words included in speech commands received in each of a plurality of situations.
- the speech recognition apparatus 100 may extract, for each situation, at least one word that appears uniquely at a high frequency in the speech commands uttered in that situation.
- the speech recognition apparatus 100 may exclude a word that appears more frequently than a threshold frequency in common across the speech commands uttered in the plurality of situations from the words regarded as appearing uniquely at a high frequency in the speech commands uttered in a specific situation.
- the speech recognition apparatus 100 may determine a word that appears more frequently than a threshold frequency only in the speech commands uttered in the specific situation as a word appearing uniquely at a high frequency in the speech commands uttered in the specific situation.
- the speech recognition apparatus 100 may determine the extracted at least one word as a candidate activation word for each situation.
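- The statistics-based extraction described above might look like the following sketch, in which words appearing in several situations are excluded and only words frequent within a single situation remain as candidates; the thresholds and the example corpus are assumptions chosen only to make the example run.

```python
from collections import Counter

def candidate_activation_words(commands_by_situation, min_count=2, common_threshold=2):
    """
    commands_by_situation: dict mapping a situation name to the speech-command
    transcripts uttered (or expected to be uttered) in that situation.
    Returns a dict mapping each situation to its candidate activation words.
    """
    counts = {s: Counter(w for cmd in cmds for w in cmd.lower().split())
              for s, cmds in commands_by_situation.items()}

    # Words that appear in at least `common_threshold` situations do not
    # characterize any single situation and are excluded.
    common = {w for w, n in Counter(w for c in counts.values() for w in c).items()
              if n >= common_threshold}

    return {s: {w for w, n in c.items() if n >= min_count and w not in common}
            for s, c in counts.items()}

corpus = {
    "house":   ["turn on the air conditioner", "air conditioner off", "turn on the light"],
    "vehicle": ["navigate home", "navigate to work", "turn on the radio"],
}
print(candidate_activation_words(corpus))
```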
- the speech recognition apparatus 100 may store candidate activation words suitable for each situation with respect to the plurality of situations.
- the speech recognition apparatus 100 may extract at least one candidate activation word corresponding to a current situation from stored data.
- the speech recognition apparatus 100 may determine at least one of the extracted candidate activation words as an activation word.
- a candidate activation word is determined by analyzing a corpus including speech commands that may be uttered in a plurality of situations.
- a user may directly input or delete the candidate activation word corresponding to each situation.
- the speech recognition apparatus 100 may store a candidate activation word corresponding to a specific situation in a database or delete a specific candidate activation word based on a user input. For example, if the user newly installs an air purifier in a house, the speech recognition apparatus 100 may add “air purifier” as a candidate activation word associated with the house, based on the user input.
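- A user-driven update of the candidate database, as in the air purifier example above, could be as simple as the following sketch; the in-memory dictionary is used only for illustration.

```python
candidate_db = {"house": {"air conditioner", "light", "TV"}}

def add_candidate(situation, word):
    """Add a candidate activation word for a situation based on a user input."""
    candidate_db.setdefault(situation, set()).add(word)

def remove_candidate(situation, word):
    """Delete a specific candidate activation word based on a user input."""
    candidate_db.get(situation, set()).discard(word)

add_candidate("house", "air purifier")   # the user newly installed an air purifier
print(candidate_db["house"])
```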
- Each component of the speech recognition apparatus 100 described below may perform each operation of the method of performing speech recognition by the speech recognition apparatus 100 described above.
- FIGS. 11A and 11B are block diagrams of a speech recognition apparatus according to an embodiment of the present disclosure.
- the speech recognition apparatus 100 may include a receiver 1110, a processor 1120, and an outputter 1130. However, the speech recognition apparatus 100 may be implemented by more components than those shown in FIGS. 11A and 11B. As shown in FIG. 11B, the speech recognition apparatus 100 may further include at least one of a memory 1140, a user inputter 1150, a communicator 1160, and a sensing unit 1170.
- the speech recognition apparatus 100 may be included in at least one of a non-mobile computing device, a mobile computing device, an electronic control apparatus of a vehicle, and a server, or may be connected to at least one of the non-mobile computing device, the mobile computing device, the electronic control apparatus of the vehicle, and the server by wire or wirelessly.
- the receiver 1110 may receive an audio signal.
- the receiver 1110 may directly receive an audio signal by converting external sound into electrical acoustic data by a microphone.
- the receiver 1110 may receive the audio signal transmitted from an external apparatus.
- In FIG. 11A, the receiver 1110 is shown as being included in the speech recognition apparatus 100, but the receiver 1110 may be included in a separate apparatus and may be connected to the speech recognition apparatus 100 by wire or wirelessly.
- the processor 1120 may control the overall operation of the speech recognition apparatus 100.
- the processor 1120 may control the receiver 1110 and the outputter 1130.
- the processor 1120 according to an embodiment may control the operation of the speech recognition apparatus 100 using an artificial intelligence technology.
- Although FIG. 11A illustrates one processor, the speech recognition apparatus 100 may include one or more processors.
- the processor 1120 may determine at least one activation word based on information related to a situation in which the speech recognition apparatus 100 operates.
- the processor 1120 may obtain at least one of, for example, a location of the speech recognition apparatus 100, time, whether the speech recognition apparatus 100 is connected to another electronic apparatus, whether the speech recognition apparatus 100 is moving, and information related to a characteristic of a user of the speech recognition apparatus 100 as the information related to the situation in which the speech recognition apparatus 100 operates.
- the processor 1120 may determine the number of at least one activation word corresponding to a current situation based on a degree to which a speech recognition function of the speech recognition apparatus 100 is sensitively activated.
- the processor 1120 may perform speech recognition on the input audio signal.
- the processor 1120 may detect a speech signal from the input audio signal input from the receiver 1110 and perform speech recognition on the speech signal.
- the processor 1120 may include a speech recognition module for performing speech recognition.
- the processor 1120 may extract a frequency characteristic of the speech signal from the input audio signal and perform speech recognition using an acoustic model and a language model.
- the frequency characteristic may refer to a distribution of frequency components of an acoustic input extracted by analyzing a frequency spectrum of the acoustic input. Therefore, as shown in FIG. 11B, the speech recognition apparatus 100 may further include a memory 1140 that stores the acoustic model and the language model.
- the processor 1120 may perform speech recognition on the input audio signal including the speech signal for uttering the activation word.
- the processor 1120 may receive and store the input audio signal prior to performing speech recognition.
- the processor 1120 may determine whether the input audio signal includes the speech signal for uttering the activation word. When it is determined that the input audio signal includes the speech signal for uttering the activation word included in the at least one activation word, the processor 1120 may perform speech recognition on the stored input audio signal and a subsequently received input audio signal.
- the processor 1120 may determine whether to output a result of performing speech recognition immediately or to output the result of performing speech recognition when a confirmation command is received from the user.
- the processor 1120 may extract text uttered by the user by performing speech recognition on the input audio signal.
- the processor 1120 may determine whether a speech command included in the input audio signal is a direct command for requesting a response of the speech recognition apparatus, based on natural language understanding and sentence analysis of the extracted text.
- the processor 1120 may perform an operation of responding to the speech command when it is determined that the speech command is the direct command.
- the processor 1120 may control the outputter 1130 to display that the response to the speech command is possible when it is determined that the speech command is not the direct command.
- the processor 1120 may perform the operation of responding to the speech command when a confirmation command is received from the user through the receiver 1110.
- the processor 1120 may be implemented with hardware and/or software components that perform particular functions.
- the processor 1120 may include a user situation analyzer (not shown) for analyzing a situation in which the speech recognition apparatus 100 operates, a candidate activation word extractor (not shown) for extracting candidate activation words corresponding to a current situation from a database, an activation word switcher (not shown) for switching an activation word according to the current situation, and an audio signal processor (not shown) for processing an audio signal including a speech command for uttering the activation word.
- the functions performed by the processor 1120 may be implemented by at least one microprocessor, or by circuit components for related functions. Some or all of the functions performed by the processor 1120 may be implemented by software modules configured in various programming languages or script languages that are executed in the processor 1120.
- FIGS. 11A and 11B illustrate that the speech recognition apparatus 100 includes one processor 1120, but the embodiment is not limited thereto.
- the speech recognition apparatus 100 may include a plurality of processors.
- the outputter 1130 may output a result of speech recognition performed on the input audio signal.
- the outputter 1130 may inform the user of the result of performing speech recognition or transmit the result to an external device (e.g., a smart phone, a smart TV, a smart watch, a server, etc.).
- the outputter 1130 may include a speaker or a display capable of outputting an audio signal or a video signal.
- the outputter 1130 may perform an operation corresponding to the result of performing speech recognition.
- the speech recognition apparatus 100 may determine a function of the speech recognition apparatus 100 corresponding to the result of performing speech recognition, and may output a screen performing the function through the outputter 1130.
- the speech recognition apparatus 100 may transmit a keyword corresponding to the result of performing speech recognition to an external server, receive information related to the transmitted keyword from the server, and output the information on the screen through the outputter 1130.
- the outputter 1130 may output information that is received from outside, is processed by the processor 1120, or is stored in the form of at least one of light, sound, image, and vibration.
- the outputter 1130 may further include at least one of a display for outputting text or an image, an acoustic outputter for outputting sound, and a vibration motor for outputting vibration.
- the memory 1140 of FIG. 11B may store the result of speech recognition performed by the processor 1120.
- the memory 1140 may store the input audio signal received through the receiver 1110.
- the memory 1140 may receive and store the input audio signal in units of a sentence, in units of a predetermined time length, or in units of a predetermined data size.
- the memory 1140 may store instructions that are executed in the processor 1120 to control the speech recognition apparatus 100.
- the memory 1140 may store a database including a plurality of candidate activation words respectively corresponding to a plurality of situations.
- the processor 1120 may retrieve at least one candidate activation word corresponding to a situation in which the speech recognition apparatus 100 operates from data stored in memory 1140 in determining the at least one activation word.
- the processor 1120 may determine at least one of the retrieved candidate activation words as an activation word.
- the memory 1140 may include a database including information about a sentence structure and grammar.
- the processor 1120 may determine whether the speech command included in the input audio signal is a direct command by using the information about the sentence structure and grammar stored in the memory 1140.
- the memory 1140 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, a secure digital (SD) or extreme digital (XD) memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
- the user inputter 1150 may receive a user input for controlling the speech recognition apparatus 100.
- the user inputter 1150 may include a user input device including a touch panel for receiving a touch of the user, a button for receiving a push operation of the user, a wheel for receiving a rotation operation of the user, a keyboard, a dome switch, etc., but is not limited thereto.
- the communicator 1160 may communicate with an external electronic apparatus or server through wired communication or wireless communication.
- the communicator 1160 may communicate with the server that stores a database including candidate activation words suitable for each situation with respect to a plurality of situations.
- the communicator 1160 may retrieve and extract at least one candidate activation word suitable for a current situation from the server.
- the processor 1120 may determine at least one of the retrieved candidate activation words as an activation word.
- the communicator 1160 may acquire information related to a situation in which the speech recognition apparatus 100 operates from the external electronic apparatus.
- the communicator 1160 may acquire information sensed by the external electronic apparatus as the information related to the situation in which the speech recognition apparatus 100 operates.
- the communicator 1160 may communicate with a server that performs a speech recognition function. For example, the communicator 1160 may transmit an audio signal including a sentence including an activation word to the server. The communicator 1160 may receive a result of speech recognition performed by the server.
- the communicator 1160 may include a near distance communication module, a wired communication module, a mobile communication module, a broadcast receiving module, and the like.
- the sensing unit 1170 may include one or more sensors and sense various information used to determine a situation in which the speech recognition apparatus 100 operates. For example, the sensing unit 1170 may sense a location of the speech recognition apparatus 100, information related to a motion of the speech recognition apparatus 100, information that may identify a user who uses the speech recognition apparatus 100, surrounding environment information of the speech recognition apparatus 100, and the like.
- the sensing unit 1170 may include at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetism sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor or a combination thereof.
- the block diagrams shown in FIGS. 11A and 11B may also be applied to a speech recognition server.
- the speech recognition server may include a receiver for receiving an input audio signal from the speech recognition apparatus.
- the speech recognition server may be connected to the speech recognition apparatus by wire or wirelessly.
- the speech recognition server may include a processor and an outputter, and may further include a memory and a communicator.
- the processor of the speech recognition server may detect a speech signal from an input audio signal and perform speech recognition on the speech signal.
- the outputter of the speech recognition server may transmit a result of performing speech recognition to the speech recognition apparatus.
- the speech recognition apparatus may output the result of performing speech recognition received from the speech recognition server.
- the above-described embodiments may be written as a program that may be executed in a computer and may be implemented in a general-purpose digital computer that operates the program using a computer-readable medium. Further, the structure of the data used in the above-described embodiments may be recorded on the computer-readable medium through various means. Furthermore, the above-described embodiments may be embodied in the form of a recording medium including instructions executable by a computer, such as a program module executed by a computer. For example, methods implemented with software modules or algorithms may be stored in computer-readable media as code or program instructions that may be read and executed by the computer.
- the one or more embodiments of the present disclosure may be written as computer programs and may be implemented in general-use digital computers that execute the programs using a non-transitory computer-readable recording medium.
- a data structure used in the embodiments of the present disclosure may be written in a non-transitory computer-readable recording medium through various means.
- the one or more embodiments may be embodied as computer readable code/instructions on a recording medium, e.g., a program module to be executed in computers, which include computer-readable commands.
- methods that are implemented as software modules or algorithms may be stored as computer readable codes or program instructions executable on a non-transitory computer-readable recording medium.
- the computer-readable medium may include any recording medium that may be accessed by computers, volatile and non-volatile medium, and detachable and non-detachable medium.
- Examples of the computer-readable medium include, but are not limited to, magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., compact disc read-only memory (CD-ROMs), or digital versatile discs (DVDs)), etc.
- the computer-readable medium may include a computer storage medium.
- the non-transitory computer-readable recording media may be distributed over network coupled computer systems, and data stored in the distributed recording media, e.g., a program command and code, may be executed by using at least one computer.
- the term “unit” means a unit for processing at least one function or operation, which may be implemented in hardware or software or a combination of hardware and software.
- a “unit” or “module” may be implemented by a program stored on an addressable storage medium and executable by a processor.
- a “unit” or “module” may include, by way of example, components such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
Abstract
A speech recognition method and apparatus for performing speech recognition in response to an activation word determined based on a situation are provided. The speech recognition method and apparatus include an artificial intelligence (AI) system and its application, which simulates functions such as recognition and judgment of a human brain using a machine learning algorithm such as deep learning.
Description
The present disclosure relates to an artificial intelligence (AI) system and its application, which simulates functions such as recognition and judgment of a human brain using a machine learning algorithm such as deep learning.
The present disclosure relates to a speech recognition method and apparatus. More particularly, the present disclosure relates to a speech recognition method and apparatus for performing speech recognition in response to an activation word determined based on information related to a situation in which the speech recognition apparatus operates.
An artificial intelligence (AI) system is a computer system that implements human-level intelligence. Unlike an existing rule-based smart system, the AI system is a system in which a machine learns, judges, and becomes smarter autonomously. The more the AI system is used, the more the recognition rate may be improved and the more accurately the user preference may be understood, and thus the existing rule-based smart system is gradually being replaced by a deep-learning-based AI system.
AI technology consists of machine learning (deep learning) and element technologies that utilize the machine learning. Machine learning is an algorithm-based technology that classifies/learns characteristics of input data autonomously. Element technology is a technology that simulates functions of the human brain such as recognition and judgment using machine learning algorithms such as deep learning, and consists of technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and motion control, etc.
AI technology may be applied to various fields. Linguistic understanding is a technology for recognizing, applying, and processing human language/characters, including natural language processing, machine translation, dialogue system, query response, speech recognition/synthesis, and the like. Visual understanding is a technique to recognize and process objects as performed in human vision, including object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement. Inference/prediction is a technique for judging and logically inferring and predicting information, including knowledge/probability based inference, optimization prediction, preference based planning, recommendation, etc. Knowledge representation is a technology for automating human experience information into knowledge data, including knowledge building (data generation/classification) and knowledge management (data utilization). Motion control is a technique for controlling autonomous travel of a vehicle and a motion of a robot, including movement control (navigation, collision-avoidance, traveling), operation control (behavior control), and the like.
As electronic devices that perform various functions in a complex manner such as smart phones have been developed, electronic devices equipped with a speech recognition function are being introduced. The speech recognition function has an advantage in that a user may easily control a device by recognizing speech of the user without depending on an operation of a separate button or a contact of a touch module.
According to the speech recognition function, for example, a portable terminal such as a smart phone may perform a call function or text messaging without pressing a button, and may easily set various functions such as a route search, an Internet search, an alarm setting, etc.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide a method that may control a speech recognition apparatus by speaking as a user naturally interacts with the speech recognition apparatus, thereby enhancing convenience for the user.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
In accordance with an aspect of the present disclosure, a speech recognition method is provided. The method includes determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates, receiving an input audio signal, performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and outputting a result of the performing of the speech recognition.
In accordance with another aspect of the present disclosure, a speech recognition apparatus is provided. The apparatus includes a receiver configured to receive an input audio signal, at least one processor configured to determine at least one activation word based on information related to a situation in which a speech recognition apparatus operates and perform speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and an outputter configured to output a result of the speech recognition.
In accordance with another aspect of the present disclosure, a non-transitory computer-readable recording medium is provided. The non-transitory computer-readable recording medium having recorded thereon at least one program includes instructions for allowing a speech recognition apparatus to execute a speech recognition method. The speech recognition method includes determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates, receiving an input audio signal, performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal, and outputting a result of the performing of the speech recognition.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
FIGS. 1A, 1B, and 1C are views for explaining a speech recognition system according to an embodiment of the present disclosure;
FIG. 2A is a diagram of an operation method of a general speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 2B is a diagram of an operation method of a speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 4 is a diagram of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a method of outputting a result of speech recognition performed by a speech recognition apparatus according to an embodiment of the present disclosure;
FIGS. 7A and 7B show examples in which a speech recognition apparatus is included in a home robot according to an embodiment of the present disclosure;
FIG. 8 shows a case where a speech recognition apparatus determines "air conditioner" as an activation word corresponding to a current situation according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of a method of determining whether a speech command is a direct command or an indirect command performed by a speech recognition apparatus according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of a method of determining candidate activation words respectively corresponding to situations performed by a speech recognition apparatus according to an embodiment of the present disclosure; and
FIGS. 11A and 11B are block diagrams of a speech recognition apparatus according to an embodiment of the present disclosure.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purposes only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
Various embodiments of the present disclosure may be represented by functional block configurations and various processing operations. Some or all of these functional blocks may be implemented with various numbers of hardware and/or software configurations that perform particular functions. For example, the functional blocks of the present disclosure may be implemented by one or more microprocessors, or by circuit configurations for a given function. Also, the functional blocks of the present disclosure may be implemented in various programming or scripting languages. The functional blocks may be implemented with algorithms running on one or more processors. The present disclosure may also employ techniques for electronic configuration, signal processing, and/or data processing, and the like according to the related art.
Connection lines or connection members between the components shown in the figures are merely illustrative of functional connections and/or physical or circuit connections. In actual devices, connections between components may be represented by various functional connections, physical connections, or circuit connections that may be replaced or added.
The present disclosure will be described in detail with reference to the accompanying drawings.
FIGS. 1A, 1B, and 1C are views for explaining a speech recognition system according to an embodiment of the present disclosure.
Referring to FIGS. 1A to 1C, the speech recognition system may be a deep learning based artificial intelligence (AI) system. The speech recognition system may use artificial intelligence technology to infer and predict a situation in which the speech recognition apparatus operates, and may recognize, apply, and process a human language.
Referring to FIG. 1A, the speech recognition system may include a speech recognition apparatus 100-1. For example, the speech recognition apparatus 100-1 may be a mobile computing apparatus such as a smart phone, a tablet personal computer (PC), a PC, a smart television (TV), a personal digital assistant (PDA), a laptop, a media player, a micro server, a global positioning system (GPS), an e-book reader, a digital broadcasting terminal, a navigation device, a kiosk, a Moving Picture Experts Group phase 1 or phase 2 (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a digital camera, an electronic control device of a vehicle, a central information display (CID), etc., or a non-mobile computing apparatus, but is not limited thereto. The speech recognition apparatus 100-1 may receive an audio signal including a speech signal uttered by a user 10 and perform speech recognition on the speech signal. The speech recognition apparatus 100-1 may output a speech recognition result.
Referring to FIG. 1B, the speech recognition system may include a speech recognition apparatus 100-2 and an electronic apparatus 110 connected to the speech recognition apparatus 100-2. The speech recognition apparatus 100-2 and the electronic apparatus 110 may be connected by wires or wirelessly. For example, the electronic apparatus 110 coupled to the speech recognition apparatus 100-2 may be a mobile computing apparatus such as a smart phone, a tablet PC, a PC, a smart TV, an electronic control device of a vehicle, a CID, or a non-mobile computing apparatus. The speech recognition apparatus 100-2 may be, but is not limited to, a wearable device, a smart phone, a tablet PC, a PC, a navigation system, or a smart TV, which cooperates with the electronic apparatus 110.
The speech recognition apparatus 100-2 may receive an audio signal including a speech signal uttered by the user 10 and transmit the input audio signal to the electronic apparatus 110. Alternatively, the speech recognition apparatus 100-2 may receive an audio signal including a speech signal uttered by the user 10 and transmit the speech signal detected from the input audio signal to the electronic apparatus 110. Alternatively, the speech recognition apparatus 100-2 may receive an audio signal including the speech signal uttered by the user 10 and transmit a characteristic of the speech signal detected from the input audio signal to the electronic apparatus 110.
The electronic apparatus 110 may perform speech recognition based on a signal received from the speech recognition apparatus 100-2. For example, the electronic apparatus 110 may perform speech recognition on the speech signal detected from the audio signal input from the speech recognition apparatus 100-2. The electronic apparatus 110 may output a speech recognition result or send the speech recognition result to the speech recognition apparatus 100-2 so that the speech recognition apparatus 100-2 outputs the speech recognition result.
Referring to FIG. 1C, the speech recognition system may include a speech recognition apparatus 100-3 and a server 120 connected to the speech recognition apparatus 100-3. The speech recognition apparatus 100-3 and the server 120 may be connected by wires or wirelessly.
The speech recognition apparatus 100-3 may receive an audio signal including a speech signal uttered by the user 10 and transmit the input audio signal to the server 120. The speech recognition apparatus 100-3 may also receive an audio signal including a speech signal uttered by the user 10 and transmit the speech signal detected from the input audio signal to the server 120. The speech recognition apparatus 100-3 may also receive an audio signal including the speech signal uttered by the user 10 and transmit a characteristic of the speech signal detected from the input audio signal to the server 120.
The server 120 may perform speech recognition based on the signal received from the speech recognition apparatus 100-3. For example, the server 120 may perform speech recognition on the speech signal detected from the audio signal input from the speech recognition apparatus 100-3. The server 120 may transmit the speech recognition result to the speech recognition apparatus 100-3 so that the speech recognition apparatus 100-3 outputs the speech recognition result.
The speech recognition system shown in FIGS. 1A, 1B, and 1C has an advantage in that the apparatus recognizes the user's speech, so that the user may easily control the apparatus.
However, when a speech recognition apparatus continuously activates a speech recognition function, it is difficult for the speech recognition apparatus to distinguish whether an input audio signal is speech that is an object of speech recognition or noise that is not, and the recognition performance therefore deteriorates. Further, if the speech recognition apparatus continuously performs a speech detection operation and a speech recognition operation, the speech recognition apparatus may unnecessarily consume power or memory capacity.
Therefore, the speech recognition apparatus should be capable of activating the speech recognition function only when the user utters a speech command.
As an example, a speech recognition apparatus according to the related art uses a method of activating the speech recognition function when the user presses a button. This activation method has a disadvantage in that the user must be located within a certain physical distance of the speech recognition apparatus and must be careful not to press the button when activation of the speech recognition function is not desired.
As another example, the speech recognition apparatus according to the related art uses a method of activating the speech recognition function when a predetermined specific activation word is uttered. This activation method has a disadvantage in that it is unnatural that the user must utter the specific activation word before uttering a speech command.
As described above, the speech recognition apparatus requires an active action of the user in order to activate the speech recognition function according to the related art. Accordingly, since the speech recognition function may not be started when the active action of the user is not involved, the speech recognition apparatus has a limitation in providing a proactive service through speech recognition.
Accordingly, the speech recognition apparatus according to embodiments of the present disclosure provides a method of enhancing the convenience of the user by enabling the user to control the speech recognition apparatus by speaking as if the user naturally interacts with the speech recognition apparatus. The speech recognition apparatus may provide a proactive service even when there is no direct operation of the user. An embodiment provides a method of activating a speech recognition function based on a plurality of activation words designated according to a situation in which the speech recognition apparatus operates. In addition, an embodiment provides a method in which a speech recognition apparatus operates before performing speech recognition.
FIG. 2A is a diagram of an operation method of a general speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 2A, an example is illustrated in which the general speech recognition apparatus 100 activates a speech recognition function when one specific activation word “Hi Galaxy” is uttered.
As shown in FIG. 2A, a user 10 has to utter the activation word "Hi Galaxy" prior to a speech command to ask for today's weather.
The speech recognition apparatus 100 may activate a speech recognition function when a speech signal for uttering the activation word “Hi Galaxy” is received. The speech recognition apparatus 100 may perform speech recognition on a speech command of the user “What is the weather like today?” which is a sentence to be uttered after the activation word, and may provide weather information “Today’s weather is fine” as a response to the speech command of the user.
Next, the user 10 should utter the activation word “Hi Galaxy” prior to a speech command to ask for the current time.
The speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering the activation word “Hi Galaxy” is received. The speech recognition apparatus 100 may perform speech recognition on the speech command of the user “What time is it?” which is a sentence to be uttered after the activation word and may provide time information “3:20 pm” as a response to the speech command of the user.
As shown in FIG. 2A, when the speech recognition function is activated using only a designated activation word, it is cumbersome and unnatural for the user to utter the activation word each time.
Therefore, according to an embodiment of the present disclosure, a speech recognition apparatus may perform speech recognition with respect to a speech command that a user naturally utters, without requiring the user to utter a separate activation word or otherwise activate a speech recognition function.
FIG. 2B is a diagram of an operation method of a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 2B, the user 10 may utter a speech command “What is the weather like today?” that is a speech command to ask for today's weather without any separate activation operation in order to ask for today's weather. The speech recognition apparatus 100 may activate a speech recognition function when a speech signal for uttering the speech command “What is the weather like today?” is received. The speech recognition apparatus 100 may perform speech recognition on the speech command of the user “What is the weather like today?” and may provide weather information “Today’s weather is fine” as a response to the speech command of the user.
Next, the user 10 may utter “What time is it now?”, which is a speech command to ask for the current time, without a separate activation operation. The speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering the speech command “What time is it now?” is received. The speech recognition apparatus 100 may perform speech recognition with respect to the speech command of the user “What time is it now?” and provide time information "3:20 pm" as a response to the speech command of the user.
A specific method of performing a speech recognition method by a speech recognition apparatus according to an embodiment of the present disclosure is described below. However, as shown in FIGS. 1A, 1B, and 1C, a speech recognition system may include at least one speech recognition apparatus and may further include a server or an electronic device. Hereinafter, for the convenience of explanation, the speech recognition method performed in the “speech recognition apparatus” will be described. However, some or all of operations of the speech recognition apparatus described below may also be performed by the server and may be partially performed by a plurality of electronic apparatuses.
FIG. 3 is a flowchart of a method of performing speech recognition by a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 3, in operation S310, the speech recognition apparatus 100 according to an embodiment may determine at least one activation word based on information related to a situation in which the speech recognition apparatus 100 operates. The speech recognition apparatus 100 according to an embodiment may utilize an artificial intelligence technology to infer and predict the situation in which the speech recognition apparatus 100 operates and to determine at least one activation word.
The information related to the situation may include at least one of information related to a location and time of the speech recognition apparatus 100, whether or not the speech recognition apparatus 100 is connected to another electronic apparatus, a type of a network to which the speech recognition apparatus 100 is connected, and a characteristic of a user using the speech recognition apparatus 100.
As an example, the speech recognition apparatus 100 may obtain information about at least one electronic apparatus connected to the speech recognition apparatus 100. The speech recognition apparatus 100 may determine a word associated with the at least one electronic device as at least one activation word. As another example, the speech recognition apparatus 100 may acquire information about the network to which the speech recognition apparatus 100 is connected. The speech recognition apparatus 100 may identify the situation in which the speech recognition apparatus 100 operates based on the information about the network to which the speech recognition apparatus 100 is connected. For example, the speech recognition apparatus 100 may determine a location where the speech recognition apparatus 100 operates based on the information about the network to which the speech recognition apparatus 100 is connected.
For example, when the speech recognition apparatus 100 connects to a Wi-Fi network installed in a house, the speech recognition apparatus 100 may determine that the location of the speech recognition apparatus 100 is in the house. The speech recognition apparatus 100 may determine at least one activation word corresponding to the house. The speech recognition apparatus 100 may determine a TV, an air conditioner, a cleaner, weather, a schedule, etc. as activation words corresponding to the house.
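For illustration only, the following minimal sketch (not part of the disclosure) shows how an inferred operating location could be mapped to a stored list of activation words; the network identifiers, word lists, and function names are assumptions.

```python
# Illustrative sketch: map the operating context to activation words.
# The network names, word lists, and function names are assumed, not from the patent.

SITUATION_ACTIVATION_WORDS = {
    "house":   ["TV", "air conditioner", "cleaner", "weather", "schedule"],
    "vehicle": ["navigation", "air conditioner", "text message", "schedule"],
}

def infer_location(connected_network: str) -> str:
    """Guess the operating location from the connected network (assumed mapping)."""
    if connected_network == "HOME_WIFI":
        return "house"
    if connected_network == "CAR_BLUETOOTH":
        return "vehicle"
    return "unknown"

def determine_activation_words(connected_network: str) -> list[str]:
    location = infer_location(connected_network)
    return SITUATION_ACTIVATION_WORDS.get(location, [])

# Example: the apparatus is connected to the Wi-Fi network installed in the house.
print(determine_activation_words("HOME_WIFI"))
# ['TV', 'air conditioner', 'cleaner', 'weather', 'schedule']
```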
Prior to determining the at least one activation word, the speech recognition apparatus 100 may store a plurality of candidate activation words respectively corresponding to a plurality of situations. The speech recognition apparatus 100 may acquire information related to the situation in which the speech recognition apparatus 100 operates and search the stored data to extract at least one candidate activation word corresponding to that situation. The speech recognition apparatus 100 may determine the extracted at least one candidate activation word as the at least one activation word.
To store the plurality of candidate activation words, the speech recognition apparatus 100 may receive information on speech commands received from the user in a plurality of situations. The speech recognition apparatus 100 may extract a plurality of words included in the speech commands. The speech recognition apparatus 100 may store at least one word as a candidate activation word corresponding to a specific situation among the plurality of situations, based on the frequency with which the plurality of words are included in the speech commands received in that situation.
The speech recognition apparatus 100 may determine the number of activation words to be determined based on a degree to which the speech recognition function of the speech recognition apparatus 100 is sensitively activated.
For example, the degree to which the speech recognition function of the speech recognition apparatus 100 is sensitively activated may mean at least one of a speed at which the speech recognition apparatus 100 is activated in response to various speech signals, a difficulty level at which the speech recognition apparatus 100 is activated, and the frequency with which the speech recognition apparatus 100 is activated. For example, when the speech recognition apparatus 100 is activated at a high frequency in response to various speech signals, it may be determined that the speech recognition function of the speech recognition apparatus 100 is activated sensitively. It may be determined that the speech recognition function of the speech recognition apparatus 100 is activated less sensitively when the speech recognition apparatus 100 is activated at a relatively low frequency in response to various speech signals.
The degree to which the speech recognition function is sensitively activated may be determined based on a user input or may be determined based on the location of the speech recognition apparatus 100. For example, when the speech recognition apparatus 100 is located in a private space such as a house, it may be determined that the speech recognition function is sensitively activated, and when the speech recognition apparatus 100 is located in a public space such as a company, it may be determined that the speech recognition function is activated less sensitively. For example, when the speech recognition apparatus 100 is located in a private space such as the house, it may be determined that the speech recognition function is activated at a high frequency, and when the speech recognition apparatus 100 is located in the public space such as the company, it may be determined that the speech recognition function is activated at a relatively low frequency.
In operation S320, the speech recognition apparatus 100 may receive the input audio signal. For example, the speech recognition apparatus 100 may divide an input audio signal received in real time into frames of a predetermined length and process the frame-unit input audio signals. A frame-unit speech signal may be detected from the frame-unit input audio signal.
The speech recognition apparatus 100 according to an embodiment may receive and store the input audio signal. For example, the speech recognition apparatus 100 may detect the presence or absence of an utterance by Voice Activity Detection (VAD) or End Point Detection (EPD).
For example, the speech recognition apparatus 100 may determine that a sentence starts when an utterance starts and may start storing the input audio signal. The speech recognition apparatus 100 may also determine that a sentence starts when an utterance resumes after a pause and may start storing the input audio signal.
The speech recognition apparatus 100 may determine that the sentence has ended if the utterance ends without an activation word being uttered, and may start storing a new input audio signal. Alternatively, the speech recognition apparatus 100 may receive and store an audio signal in units of a predetermined time length, as shown in FIG. 5.
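The sketch below illustrates, under assumed frame sizes and with a crude energy threshold standing in for VAD/EPD, how frame-unit audio might be buffered so that speech received before an activation word remains available; it is not the disclosed implementation.

```python
# Illustrative sketch: keep a rolling buffer of recent audio frames so that the
# part of a sentence uttered before the activation word can still be recognized.
from collections import deque
import struct

FRAME_MS = 20
SAMPLE_RATE = 16000
MAX_BUFFER_FRAMES = 5 * 1000 // FRAME_MS   # keep roughly the last 5 seconds

audio_buffer: deque[bytes] = deque(maxlen=MAX_BUFFER_FRAMES)

def frame_energy(frame: bytes) -> float:
    """Mean absolute amplitude of a 16-bit little-endian PCM frame."""
    samples = struct.unpack(f"<{len(frame) // 2}h", frame)
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def is_speech(frame: bytes, threshold: float = 500.0) -> bool:
    """Crude energy-based decision standing in for VAD/EPD (assumed threshold)."""
    return frame_energy(frame) > threshold

def on_frame(frame: bytes) -> None:
    # Store every frame; the deque silently drops the oldest frames once the
    # predetermined length is exceeded (compare operation S560 in FIG. 5).
    audio_buffer.append(frame)
```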
In operation S330, the speech recognition apparatus 100 may perform speech recognition on the input audio signal, based on whether or not the speech signal for uttering the activation word included in the at least one activation word is included in the input audio signal. The speech recognition apparatus 100 may recognize, apply, and process a language of a speaker included in the input audio signal by using the artificial intelligence technology.
The speech recognition apparatus 100 may perform speech recognition on the input audio signal including the speech signal for uttering the activation word included in the at least one activation word.
The speech recognition apparatus 100 may determine whether the input audio signal includes a speech signal for uttering an activation word. When it is determined that the input audio signal includes the speech signal for uttering the activation word included in the at least one activation word, the speech recognition apparatus 100 may perform speech recognition on the stored input audio signal and the input audio signal received thereafter.
The speech recognition apparatus 100 may transmit an audio signal including a speech command including an activation word to a server (or an embedded speech recognition module). The server (or the embedded speech recognition module) may extract an activation word from the received audio signal. The server (or the embedded speech recognition module) may determine whether to recognize a speech command including the activation word or remove the activation word and recognize speech commands located before or after the activation word. The server (or the embedded speech recognition module) may perform speech recognition based on a determination result. The speech recognition apparatus 100 may perform speech recognition on the speech command including the activation word when the activation word has a meaning in the speech command. On the other hand, when the activation word does not have the meaning in the speech command, the speech recognition apparatus 100 may perform speech recognition on a previous sentence or a succeeding sentence from which the activation word is removed.
For example, in order to activate the speech recognition function of the speech recognition apparatus 100, a case where “Hi Robot” is determined as a basic activation word and “weather” is determined as an activation word corresponding to a current situation is explained as an example.
The user may utter “Hi Robot. Call Hana.” to the speech recognition apparatus 100. Since the speech recognition apparatus 100 has received a speech signal for uttering the activation word “Hi Robot”, the speech recognition apparatus 100 may transmit the speech command including the activation word, “Hi Robot. Call Hana.”, to the server (or the embedded speech recognition module). Since the activation word “Hi Robot” is a basic activation word having no meaning in the speech command, the server (or the embedded speech recognition module) may perform speech recognition on only “Call Hana”, that is, the speech command from which the activation word has been removed.
Alternatively, the user may utter “What is the weather like today?” to the speech recognition apparatus 100. Since the speech recognition apparatus 100 has received the speech signal for uttering the activation word “weather”, the speech recognition apparatus 100 may transmit “What is the weather like today?”, which is a speech command including the activation word, to the server (or the embedded speech recognition module). The server (or the embedded speech recognition module) may perform speech recognition on “What is the weather like today?”, the speech command including the activation word, since the activation word “weather” has meaning in the speech command.
The speech recognition apparatus 100 may transmit, to the server (or the embedded speech recognition module), an audio signal obtained by removing the speech signal for uttering the activation word from the input audio signal. The speech recognition apparatus 100 may extract the activation word from the input audio signal. The speech recognition apparatus 100 may determine whether to transmit the audio signal including the speech signal for uttering the activation word to the server (or the embedded speech recognition module) or to transmit the audio signal from which the speech signal for uttering the activation word has been removed. The speech recognition apparatus 100 may transmit the audio signal including the speech signal for uttering the activation word to the server (or the embedded speech recognition module) when the activation word has meaning in the speech command. On the other hand, when the activation word does not have meaning in the speech command, the speech recognition apparatus 100 may transmit the preceding or succeeding sentence, from which the speech signal for uttering the activation word has been removed, to the server (or the embedded speech recognition module).
For example, in order to activate the speech recognition function of the speech recognition apparatus 100, a case where “Hi Robot” is determined as a basic activation word will be described as an example.
The user may utter “Hi Robot. Call Hana.” to the speech recognition apparatus 100. Since the activation word “Hi Robot” is a basic activation word having no meaning in the speech command, the speech recognition apparatus 100 may transmit only “Call Hana”, that is, the audio signal from which the speech signal for uttering the activation word has been removed, to the server (or the embedded speech recognition module).
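A minimal sketch of this distinction is given below; it assumes a fixed set of basic activation words and simple string handling, and is not the disclosed implementation.

```python
# Illustrative sketch: strip a basic activation word ("Hi Robot") with no meaning
# in the command, but keep a contextual activation word ("weather") that is part
# of the command itself.

BASIC_ACTIVATION_WORDS = {"hi robot"}

def command_for_recognition(utterance: str, activation_word: str) -> str:
    if activation_word.lower() in BASIC_ACTIVATION_WORDS:
        # The basic activation word carries no meaning in the command, so remove it.
        stripped = utterance.lower().replace(activation_word.lower(), "", 1).strip(" .!?")
        return stripped or utterance
    # A contextual activation word is part of the command, so recognize the whole sentence.
    return utterance

print(command_for_recognition("Hi Robot. Call Hana.", "Hi Robot"))
# -> "call hana"  (the basic activation word is removed before recognition)
print(command_for_recognition("What is the weather like today?", "weather"))
# -> the full sentence is recognized
```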
When it is determined that the input audio signal includes a speech signal for uttering the activation word, the speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is a direct command requesting a response of the speech recognition apparatus 100. The speech recognition apparatus 100 may determine whether the speech command is the direct command or an indirect command based on natural language understanding and sentence analysis regarding extracted text. For example, the speech recognition apparatus 100 may determine whether the speech command is the direct command or the indirect command based on at least one of a sentence-final ending of the speech command, an intonation, a direction in which the speech command is received, and a volume of the speech command. The speech recognition apparatus 100 may determine whether to transmit the speech command to the server (or the embedded speech recognition module) or to perform speech recognition on the speech command, according to the determined type of the speech command. For example, the speech recognition apparatus 100 may perform natural language understanding and sentence type analysis using artificial intelligence technology.
The speech recognition apparatus 100 may transmit the audio signal including the speech command including the activation word to the server (or the embedded speech recognition module) when it is determined that the speech command is the direct command. When the speech signal for uttering the activation word is received, the speech recognition apparatus 100 may transmit the stored input audio signal and an input audio signal received thereafter to the server (or the embedded speech recognition module).
The speech recognition apparatus 100 may search for and extract a signal including a sentence including an activation word from the stored input audio signal. The speech recognition apparatus 100 may transmit an audio signal including the sentence containing the activation word to the server (or the embedded speech recognition module). The server (or the embedded speech recognition module) may perform speech recognition on the speech command.
On the other hand, when the speech recognition apparatus 100 determines that the speech command is not the direct command requesting the response of the speech recognition apparatus 100 but is the indirect command, the speech recognition apparatus 100 may not transmit the audio signal including the speech signal to the server (or the embedded speech recognition module). The speech recognition apparatus 100 may repeat an operation of receiving and storing a new input audio signal while ignoring the previously input audio signal. The speech recognition apparatus 100 may determine whether or not the new input audio signal includes the speech signal for uttering the activation word.
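As an illustration of this decision, the following sketch combines a few of the cues mentioned above; the cue names, threshold, and helper signature are assumptions, not the disclosed classifier.

```python
# Illustrative sketch: decide whether a command is direct (answer now) or
# indirect (ignore the stored audio and keep listening).

def is_direct_command(text: str, loudness_db: float, facing_device: bool) -> bool:
    # A question/imperative ending, sufficient loudness, and speech directed at
    # the device are treated here as cues for a direct command (assumed cues).
    ends_like_request = text.rstrip().endswith(("?", "!"))
    loud_enough = loudness_db > 55.0
    return ends_like_request and loud_enough and facing_device

if is_direct_command("What is the weather like today?", 62.0, True):
    pass  # transmit the buffered sentence to the server / embedded recognizer
else:
    pass  # discard the stored audio and wait for a new input audio signal
```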
In operation S340, the speech recognition apparatus 100 according to an embodiment may output a result of performing speech recognition.
The speech recognition apparatus 100 may output the result of speech recognition performed by the server (or the embedded speech recognition module). As an example, the result of performing speech recognition may include text extracted from a speech command. As another example, the result of performing speech recognition may be a screen performing an operation corresponding to the result of performing speech recognition. The speech recognition apparatus 100 may perform an operation corresponding to the result of performing speech recognition. For example, the speech recognition apparatus 100 may determine a function of the speech recognition apparatus 100 corresponding to the result of performing speech recognition, and output a screen performing the function. Alternatively, the speech recognition apparatus 100 may transmit a keyword corresponding to the result of performing speech recognition to an external server, receive information related to the transmitted keyword from the server, and output the received information on the screen.
The speech recognition apparatus 100 may determine a method of outputting the result of performing speech recognition based on an analysis result by analyzing a speech command.
As an example, the speech recognition apparatus 100 may output the result of speech recognition in various ways, such as sound, light, image, and vibration, in response to a speech command. As another example, the speech recognition apparatus 100 may notify the user that a response is waiting while waiting for a response to the speech command. The speech recognition apparatus 100 may inform the user that the response is waiting in various ways, such as sound, light, image, and vibration. As another example, the speech recognition apparatus 100 may store the result of performing speech recognition and then output the stored result when the user makes an utterance related to it.
The speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is a direct command requesting a response of the speech recognition apparatus 100. The speech recognition apparatus 100 may determine whether to output the result of performing speech recognition immediately or to output the result of performing speech recognition when a confirmation command is received from the user according to the determined type of the speech command.
The speech recognition apparatus 100 may extract text uttered by the user by performing speech recognition on the input audio signal. The speech recognition apparatus 100 may determine whether the speech command included in the input audio signal is the direct command requesting the response of the speech recognition apparatus 100, based on natural language understanding and sentence type analysis regarding the extracted text. The speech recognition apparatus 100 may perform an operation of responding to the speech command when it is determined that the speech command is the direct command.
When it is determined that the speech command is not the direct command, the speech recognition apparatus 100 may display that a response to the speech command is possible and the response is waiting. The speech recognition apparatus 100 may perform the operation of responding to the speech command when a confirmation command is received from the user.
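The output decision described above can be sketched as follows; the helper name, return strings, and parameters are assumptions used only to make the flow concrete.

```python
# Illustrative sketch: respond immediately to a direct command; for an indirect
# command, indicate that a response is waiting and respond only after a
# confirmation command from the user.

def handle_recognition_result(result: str, direct: bool, confirmed: bool) -> str:
    if direct:
        return f"respond: {result}"
    if confirmed:
        return f"respond (after confirmation): {result}"
    return "indicate by sound/light/image/vibration that a response is waiting"

print(handle_recognition_result("Today's weather is fine", direct=True, confirmed=False))
print(handle_recognition_result("Today's weather is fine", direct=False, confirmed=False))
```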
FIG. 4 is a diagram for explaining a method of performing speech recognition by the speech recognition apparatus 100 according to an embodiment of the present disclosure.
Referring to FIG. 4, an example is illustrated in which the speech recognition apparatus 100 is connected to an electronic control apparatus 401 of a vehicle and operates. For example, the speech recognition apparatus 100 may communicate with the electronic control apparatus 401 of the vehicle via Bluetooth.
The speech recognition apparatus 100 according to an embodiment may determine that a location of the speech recognition apparatus 100 is the vehicle based on information that the speech recognition apparatus 100 is connected to the electronic control apparatus 401 of the vehicle. The speech recognition apparatus 100 may determine at least one activation word corresponding to the vehicle. For example, the speech recognition apparatus 100 may extract candidate activation words corresponding to the vehicle including navigation, an air conditioner, a window, a gas supply, a trunk, a side mirror, etc., and candidate activation words corresponding to functions available in the vehicle including, for example, a text message, a schedule, etc. The speech recognition apparatus 100 may determine the extracted candidate activation words as activation words suitable for a current situation.
Further, the speech recognition apparatus 100 may determine an activation word based on whether or not the speech recognition apparatus 100 is moving. When the vehicle is traveling, the speech recognition apparatus 100 may determine only the candidate activation words that do not disturb safe vehicle operation among the candidate activation words corresponding to the vehicle and the candidate activation words corresponding to the functions available in the vehicle as activation words.
For example, when the vehicle is stopped, the speech recognition apparatus 100 may determine all candidate activation words related to the vehicle, such as the navigation, the air conditioner, the gas supply, the trunk, the side mirror, the text message, the schedule, etc. On the other hand, when the vehicle is traveling, the speech recognition apparatus 100 may determine an activation word so that the speech recognition apparatus 100 does not respond to speech commands that may disturb safe vehicle operation. For example, when the vehicle is traveling, opening the trunk of the vehicle or opening the gas supply by a speech command may disturb safe vehicle operation. Therefore, when the vehicle is traveling, the speech recognition apparatus 100 may determine only some candidate activation words, such as navigation, air conditioner, text message, and schedule, which do not disturb safe vehicle operation as activation words.
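For illustration, the following sketch narrows the activation word list while the vehicle is moving; the word lists and the safe/unsafe labels are assumptions.

```python
# Illustrative sketch: keep only activation words that do not disturb safe
# vehicle operation while the vehicle is traveling.

VEHICLE_ACTIVATION_WORDS = {
    "navigation": True,       # True: safe to respond to while driving
    "air conditioner": True,
    "text message": True,
    "schedule": True,
    "trunk": False,           # False: could disturb safe vehicle operation
    "gas supply": False,
    "side mirror": False,
}

def activation_words_for_vehicle(is_moving: bool) -> list[str]:
    if not is_moving:
        return list(VEHICLE_ACTIVATION_WORDS)
    return [word for word, safe in VEHICLE_ACTIVATION_WORDS.items() if safe]

print(activation_words_for_vehicle(is_moving=True))
# ['navigation', 'air conditioner', 'text message', 'schedule']
```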
The speech recognition apparatus 100 may receive and store an input audio signal prior to performing speech recognition. The speech recognition apparatus 100 may analyze the input audio signal to determine whether the input audio signal includes a speech signal for uttering an activation word.
In order to be guided to a train station, the user 10 may use the speech recognition function without having to utter a specific activation word such as “Hi Robot”. The user 10 may utter “Find the way to the train station on the navigation!”, which is a speech command asking for directions to the train station. The speech recognition apparatus 100 may activate the speech recognition function when a speech signal for uttering “navigation”, which is an activation word corresponding to the vehicle, is received.
The speech recognition apparatus 100 may perform speech recognition on the whole speech command “Find the way to the train station on the navigation!”, which includes the speech signal for uttering “navigation”. When the speech signal for uttering “navigation” is received, the speech recognition apparatus 100 may transmit the portion of the speech command received after the activation word to a server (or an embedded speech recognition module) and perform speech recognition. In addition, the speech recognition apparatus 100 may transmit the previously received and stored portion of the speech command to the server (or the embedded speech recognition module) together with the portion received after the activation word and perform speech recognition. That is, when the speech signal for uttering “navigation” is received, the speech recognition apparatus 100 may perform speech recognition on the portion of the speech command received and stored before the activation word, the activation word “navigation”, and the portion of the speech command received after the activation word.
The speech recognition apparatus 100 may guide a route to a train station as a response to the speech command of the user 10.
In FIG. 4, a case where the location of the speech recognition apparatus 100 is the vehicle is shown as an example. However, embodiments of the present disclosure are not limited thereto. For example, when it is determined that the location of the speech recognition apparatus 100 is a house, light, television, air conditioner, washing machine, refrigerator, weather, date, time, etc. may be determined as activation words corresponding to the house.
Specific examples of the activation words corresponding to respective situations are as follows.
The speech recognition apparatus 100 may determine at least one activation word based on the location of the speech recognition apparatus 100 or a characteristic of a space in which the speech recognition apparatus 100 is located.
The speech recognition apparatus 100 may acquire information related to the location of the speech recognition apparatus 100 based on an electronic apparatus connected to the speech recognition apparatus 100, a network connected to the speech recognition apparatus 100, or a base station connected to the speech recognition apparatus 100.
For example, the speech recognition apparatus 100 may determine that the speech recognition apparatus 100 is located in the vehicle when the speech recognition apparatus 100 is connected to an audio system in the vehicle via Bluetooth. Alternatively, the speech recognition apparatus 100 may acquire information related to a current location by a GPS module included in the speech recognition apparatus 100.
As an example, when the speech recognition apparatus 100 is located in the house, the speech recognition apparatus 100 may determine words associated with an electronic apparatus that the speech recognition apparatus 100 may control in the house or a function of the electronic apparatus as activation words. As the location of the speech recognition apparatus 100 in the house changes, the speech recognition apparatus 100 may determine different words as activation words according to the location. For example, when the speech recognition apparatus 100 is located in a living room, the speech recognition apparatus 100 may determine words associated with all electronic apparatuses in the house as activation words. On the other hand, when the speech recognition apparatus 100 is located in a room, the speech recognition apparatus 100 may determine only words associated with electronic apparatuses in the room as activation words.
As another example, when the speech recognition apparatus 100 is located in the vehicle, the speech recognition apparatus 100 may determine words associated with an electronic apparatus that the speech recognition apparatus 100 may control in the vehicle or a function of the electronic apparatus as activation words. The speech recognition apparatus 100 may determine different activation words even when the location of the speech recognition apparatus 100 in the vehicle changes or a characteristic of a user of the speech recognition apparatus 100 changes.
When the speech recognition apparatus 100 is located in a driver’s seat or a user of the speech recognition apparatus 100 is driving, the speech recognition apparatus 100 may determine words related to all electronic apparatuses and functions that a driver may control in the vehicle as activation words. On the other hand, when the speech recognition apparatus 100 is located in a seat other than the driver's seat or the user of the speech recognition apparatus 100 is not driving, the speech recognition apparatus 100 may determine only words related to electronic apparatuses and functions that do not disturb driving as activation words.
For example, when the user of the speech recognition apparatus 100 is the driver, the speech recognition apparatus 100 may determine words related to driving of the vehicle, such as “side mirrors”, “lights”, “steering wheel”, etc., as activation words. On the other hand, when the user of the speech recognition apparatus 100 is a passenger who is not driving, the speech recognition apparatus 100 may determine only words related to electronic apparatuses that are not related to driving of the vehicle, such as “air conditioner”, “radio”, etc., as activation words.
As another example, when the speech recognition apparatus 100 is located outdoors, the speech recognition apparatus 100 may determine an activation word based on whether or not there is an environment in which noise exists. For example, the speech recognition apparatus 100 may not determine, as an activation word, a word whose characteristic is similar to that of noise in an environment in which noise is frequently generated.
As another example, the speech recognition apparatus 100 may determine an activation word based on whether a space in which the speech recognition apparatus 100 is located is a common space or a private space. For example, when the speech recognition apparatus 100 is located in a common space such as a corridor of a company, the speech recognition apparatus 100 may determine only words corresponding to the common space as activation words. On the other hand, when the speech recognition apparatus 100 is located in a private space such as a private office, the speech recognition apparatus 100 may determine words related to private affairs as activation words together with the words corresponding to the common space. For example, when the speech recognition apparatus 100 is located in the common space, the speech recognition apparatus 100 may activate a speech recognition function only by activation words corresponding to the common space, such as “air conditioner”, “light”, etc. However, when the speech recognition apparatus 100 is located in the private space, the speech recognition apparatus 100 may also activate the speech recognition function by words related to private affairs, such as “telephone” or “text message”, along with the activation words corresponding to the common space, such as “air conditioner”, “light”, etc.
As another example, the speech recognition apparatus 100 may determine words on which local language characteristics are reflected as activation words based on a region where the speech recognition apparatus 100 is located. For example, when the speech recognition apparatus 100 is located in a region where a dialect is used, the speech recognition apparatus 100 may determine words on which the dialect is reflected as activation words.
The speech recognition apparatus 100 according to an embodiment may determine at least one activation word based on time.
As an example, the speech recognition apparatus 100 may use a specific word as an activation word for a specific period of time. After the specific period of time, the speech recognition apparatus 100 may no longer use the specific word as the activation word.
The speech recognition apparatus 100 may determine a word whose frequency of use has recently increased as an activation word by learning speech commands received from the user. For example, if the user is about to travel to Jeju Island, the user may frequently input speech commands related to “Jeju Island” to the speech recognition apparatus 100 to obtain information related to “Jeju Island”. The speech recognition apparatus 100 may add a word that appears more frequently than a threshold as an activation word. Therefore, even if the user does not separately activate the speech recognition function, the user may use the speech recognition function by simply uttering a speech command including the added activation word.
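A minimal sketch of this frequency-based promotion, assuming a whitespace tokenizer and an arbitrary threshold, is shown below; it is not the disclosed learning method.

```python
# Illustrative sketch: promote words that appear more often than a threshold
# in recent speech commands to activation words (cf. the "Jeju Island" example).
from collections import Counter

word_counts: Counter[str] = Counter()
THRESHOLD = 5  # assumed threshold frequency

def learn_from_command(command: str, activation_words: set[str]) -> None:
    for word in command.lower().split():
        word_counts[word] += 1
        if word_counts[word] >= THRESHOLD:
            activation_words.add(word)

active = {"weather", "schedule"}
for _ in range(5):
    learn_from_command("flight to jeju island tomorrow", active)
print("jeju" in active)  # True: "jeju" now activates the speech recognition function
```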
As another example, the speech recognition apparatus 100 may determine the activation word based on current time information in which the speech recognition apparatus 100 is operating. For example, the speech recognition apparatus 100 may use different activation words depending on season, day, date, whether it is weekend or weekday, and a time zone. The speech recognition apparatus 100 may learn speech commands received from the user according to the season, the day, the date, the time, etc., thereby updating an activation word suitable for each situation and using the updated activation word.
As another example, the speech recognition apparatus 100 may determine at least one activation word based on a movement of the user of the speech recognition apparatus 100. The speech recognition apparatus 100 may reflect a change in an utterance characteristic in determining the activation word, depending on whether the user of the speech recognition apparatus 100 is standing still, walking, or running. For example, when the user of the speech recognition apparatus 100 is walking or running, the speech recognition apparatus 100 may reflect the characteristic that the user is breathing heavily in determining the activation word.
The speech recognition apparatus 100 may determine at least one activation word based on information related to a characteristic of the user who uses the speech recognition apparatus 100.
As an example, the speech recognition apparatus 100 may determine at least one activation word based on an age of the user of the speech recognition apparatus 100.
When the user of the speech recognition apparatus 100 is an adult, the speech recognition apparatus 100 may determine words related to a common interest of the adult as activation words. For example, the speech recognition apparatus 100 may determine words such as news, sports, etc. which are related to the common interest of the adult as activation words.
If the user of the speech recognition apparatus 100 is not an adult, the speech recognition apparatus 100 may determine words related to characteristics of a minor as activation words. For example, when the user is a high school student, the speech recognition apparatus 100 may determine words related to a common interest of high school students, such as exams, math, calculus, etc., as activation words.
As another example, the speech recognition apparatus 100 may determine at least one activation word based on a gender of the user of the speech recognition apparatus 100.
When the user of the speech recognition apparatus 100 is a woman, the speech recognition apparatus 100 may determine words related to a common interest of the woman as activation words. For example, the speech recognition apparatus 100 may determine a word “cosmetics” that is related to the common interest of the woman as an activation word.
As another example, the speech recognition apparatus 100 may determine at least one activation word based on an occupation or hobby of the user of the speech recognition apparatus 100.
The speech recognition apparatus 100 may determine words on which characteristics of the user according to occupations are reflected or words related to hobbies as activation words. For example, when the hobby of the user of the speech recognition apparatus 100 is listening to music, the speech recognition apparatus 100 may determine words related to hobby such as music, radio, etc. as activation words.
On the other hand, the speech recognition apparatus 100 may operate differently depending on whether the speech recognition apparatus 100 is used by only one person or by several people. When the speech recognition apparatus 100 is used by several people, prior to performing speech recognition, the speech recognition apparatus 100 may recognize the gender or age of the user by analyzing a characteristic of speech or may perform an operation of identifying the user by analyzing a characteristic of a face. The speech recognition apparatus 100 may determine words suitable for the identified user as activation words.
The speech recognition apparatus 100 may reflect history in which words are used in determining an activation word.
The speech recognition apparatus 100 may reflect history in which words are used in common regardless of the user in determining the activation word. The speech recognition apparatus 100 may determine the activation word from a database including candidate activation words corresponding to each situation in common regardless of the user. However, embodiments of the present disclosure are not limited thereto.
The speech recognition apparatus 100 may reflect history in which words are used by each individual in determining an activation word. The speech recognition apparatus 100 may manage a database including candidate activation words suitable for each individual. The speech recognition apparatus 100 may update a personalized database by accumulating a frequency of using words in each situation for each individual. The speech recognition apparatus 100 may determine an activation word suitable for a current situation from the personalized database.
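The personalized history described here can be sketched as a per-user, per-situation usage count; the data layout, field names, and top-N selection below are assumptions used only for illustration.

```python
# Illustrative sketch: accumulate word usage per user and per situation, and
# pick the most frequently used words as personalized activation words.
from collections import defaultdict

# usage[user][situation][word] -> count
usage: dict[str, dict[str, dict[str, int]]] = defaultdict(
    lambda: defaultdict(lambda: defaultdict(int))
)

def record_usage(user: str, situation: str, command: str) -> None:
    for word in command.lower().split():
        usage[user][situation][word] += 1

def personalized_activation_words(user: str, situation: str, top_n: int = 3) -> list[str]:
    counts = usage[user][situation]
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [word for word, _ in ranked[:top_n]]

record_usage("alice", "house", "turn on the air conditioner")
record_usage("alice", "house", "air conditioner to 24 degrees")
print(personalized_activation_words("alice", "house"))
```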
FIG. 5 is a flowchart illustrating a method of performing speech recognition by the speech recognition apparatus 100 according to an embodiment of the present disclosure.
Operations S510 and S520 of FIG. 5 may correspond to operation S310 of FIG. 3, operation S530 of FIG. 5 may correspond to operation S320 of FIG. 3, operations S540 to S580 of FIG. 5 may correspond to operation S330 of FIG. 3, and operation S590 of FIG. 5 may correspond to operation S340 of FIG. 3. Descriptions of FIG. 3 may be applied to each operation of FIG. 5 corresponding to each operation of FIG. 3. Thus, descriptions of redundant operations are omitted.
In operation S510, the speech recognition apparatus 100 may acquire information related to a situation in which the speech recognition apparatus 100 operates.
The speech recognition apparatus 100 may include one or more sensors and may sense various information for determining the situation in which the speech recognition apparatus 100 operates. For example, the sensor included in the speech recognition apparatus 100 may sense a location of the speech recognition apparatus 100, information related to a movement of the speech recognition apparatus 100, information capable of identifying a user who is using the speech recognition apparatus 100, and surrounding environment information of the speech recognition apparatus 100, and the like.
For example, the speech recognition apparatus 100 may include at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetic sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor, or a combination thereof.
The speech recognition apparatus 100 may acquire information sensed by an external electronic apparatus as the information related to the situation in which the speech recognition apparatus 100 operates. For example, the external electronic apparatus may be at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetic sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor, or a combination thereof.
The speech recognition apparatus 100 may acquire a user input as the information related to the situation in which the speech recognition apparatus 100 operates. The speech recognition apparatus 100 may acquire information related to a location in which the speech recognition apparatus 100 operates or a characteristic of a user of the speech recognition apparatus 100 from the user input.
The speech recognition apparatus 100 may acquire the information related to the situation in which the speech recognition apparatus 100 operates through communication with another electronic apparatus. For example, when the speech recognition apparatus 100 is connected through short-range communication to an electronic apparatus recognized as existing in a house, the speech recognition apparatus 100 may determine that the speech recognition apparatus 100 is present in the house. For example, the speech recognition apparatus 100 may acquire information such as house, indoors, or private space as the location of the speech recognition apparatus 100.
In operation S520, the speech recognition apparatus 100 according to an embodiment may determine at least one activation word based on the information obtained in operation S510.
As an example, the speech recognition apparatus 100 may store candidate activation words suitable for each situation with respect to a plurality of situations prior to determining an activation word. Based on the information obtained in operation S510, the speech recognition apparatus 100 may retrieve candidate activation words suitable for a current situation from the stored data. The speech recognition apparatus 100 may determine at least one of the retrieved candidate activation words as the activation word.
As another example, the speech recognition apparatus 100 may communicate with a server that stores candidate activation words suitable for each situation with respect to the plurality of situations prior to determining the activation word. Based on the information obtained in operation S510, the speech recognition apparatus 100 may retrieve candidate activation words suitable for the current situation from the server. The speech recognition apparatus 100 may determine at least one of the retrieved candidate activation words as the activation word. The candidate activation words for each situation stored in the server may be shared and used by a plurality of speech recognition apparatuses.
The speech recognition apparatus 100 may determine the number of activation words to be determined based on a degree to which a speech recognition function of the speech recognition apparatus 100 is sensitively activated. A priority may be assigned to the candidate activation words for each situation. The speech recognition apparatus 100 may determine some of the candidate activation words as at least one activation word based on the degree to which the speech recognition function is sensitively activated and priority.
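The following sketch combines stored per-situation candidates, their priorities, and a sensitivity setting that caps how many words are active; the situations, priorities, and counts are illustrative assumptions.

```python
# Illustrative sketch: select activation words from stored candidates using
# per-situation priority and a sensitivity level that limits their number.

CANDIDATES = {
    "house":  [("TV", 1), ("air conditioner", 2), ("weather", 3), ("schedule", 4)],
    "office": [("air conditioner", 1), ("light", 2), ("schedule", 3)],
}

SENSITIVITY_TO_COUNT = {"high": 4, "medium": 3, "low": 1}  # assumed mapping

def select_activation_words(situation: str, sensitivity: str) -> list[str]:
    ranked = sorted(CANDIDATES.get(situation, []), key=lambda item: item[1])
    limit = SENSITIVITY_TO_COUNT.get(sensitivity, 1)
    return [word for word, _ in ranked[:limit]]

print(select_activation_words("house", "medium"))
# ['TV', 'air conditioner', 'weather']
```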
The speech recognition apparatus 100 may determine at least one activation word based on information related to a characteristic of the user of the speech recognition apparatus 100. As an example, when the speech recognition apparatus 100 is used by family members of various ages, it may determine different activation words by recognizing the user's age from speech, by recognizing the user's face, or based on initially input user information.
For example, when a parent uses the speech recognition apparatus 100 in a house, the speech recognition apparatus 100 may determine all candidate activation words related to the house such as a TV, an air conditioner, a vacuum cleaner, weather, a schedule, an Internet connection, watching of TV channels for children, heating, cooling, humidity control, etc., as at least one activation word. On the other hand, when a child uses the speech recognition apparatus 100 in the house, the speech recognition apparatus 100 may determine the activation word so as to respond only to speech commands that are allowed to be controlled by a speech command of the child. Therefore, the speech recognition apparatus 100 may determine only some candidate activation words such as weather, watching of TV channels for children, etc. as at least one activation word.
In operation S530, the speech recognition apparatus 100 may receive and store the input audio signal.
In operation S540, the speech recognition apparatus 100 may determine whether or not an input audio signal having a length longer than a predetermined time has been stored. If the input audio signal having the length longer than the predetermined time is stored, then in operation S560, the speech recognition apparatus 100 may delete the input audio signal that was received in the past.
Although FIG. 5 shows an example of receiving an audio signal in units of a predetermined time length, embodiments of the present disclosure are not limited to that shown in FIG. 5. As described above, the speech recognition apparatus 100 may receive and store an audio signal in units of a sentence. Alternatively, the speech recognition apparatus 100 may receive and store the audio signal in units of data of a predetermined size.
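A minimal sketch of this buffering behavior, assuming a fixed sample rate and a fixed buffer length, is given below; both values are illustrative assumptions.

```python
# Hypothetical sketch of operations S530-S560: keep only the most recent audio,
# so that input older than the predetermined time is discarded automatically.

from collections import deque

SAMPLE_RATE = 16000      # samples per second (assumed)
BUFFER_SECONDS = 5       # predetermined time length (assumed)

audio_buffer = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def store_input_audio(samples):
    """Append newly received samples; the deque silently drops the oldest
    samples once more than BUFFER_SECONDS of audio has been stored (S560)."""
    audio_buffer.extend(samples)
```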
In operation S550, the speech recognition apparatus 100 according to an embodiment may determine whether a speech signal for uttering the activation word has been received.
When the speech signal for uttering the activation word is received, then in operation S570, the speech recognition apparatus 100 may transmit the stored input audio signal and the input audio signal received thereafter to a server (or an embedded speech recognition module). The speech recognition apparatus 100 may search for and extract a signal including a sentence including an activation word from the stored input audio signals. The speech recognition apparatus 100 may transmit the audio signal including the sentence including the activation word to the server (or the embedded speech recognition module).
The speech recognition apparatus 100 may use one of the following methods to search for and extract a signal including a sentence including an activation word.
As an example, the speech recognition apparatus 100 may determine a start and an end of a sentence based on at least one of a length of a silence section, a sentence structure, and an intonation. The speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on a determined result.
As another example, the speech recognition apparatus 100 may treat a past audio signal of a predetermined length received before the speech signal in which the activation word is uttered and the currently received audio signal as the start and the end of a sentence. The speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on the determined result.
As another example, the speech recognition apparatus 100 may determine a past speech signal of a variable length before the speech signal in which the activation word has been uttered and a speech signal of a variable length after the speech signal in which the activation word has been uttered as a start and an end of a sentence based on a grammatical position of the activation word. The speech recognition apparatus 100 may transmit the audio signal corresponding to the sentence including the activation word to the server (or the embedded speech recognition module) based on the determined result.
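As a rough illustration of the second approach above, the sketch below slices a fixed-length window around the position at which the activation word was detected; the window lengths are assumptions for illustration only.

```python
# Hypothetical sketch: extract the audio treated as the sentence containing
# the activation word, using fixed-length windows before and after it.

def extract_sentence_window(buffered_audio, activation_index,
                            past_samples=32000, future_samples=32000):
    """Return the slice of the buffer regarded as one sentence, clipped to
    the buffer boundaries."""
    start = max(0, activation_index - past_samples)
    end = min(len(buffered_audio), activation_index + future_samples)
    return buffered_audio[start:end]
```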
In operation S550, if it is determined that the speech signal for uttering the activation word has not been received, the speech recognition apparatus 100 may repeatedly perform the operations of receiving and storing the input audio signal while retaining no more than the predetermined length of audio.
In operation S580, the speech recognition apparatus 100 may perform speech recognition. The speech recognition apparatus 100 may extract a frequency characteristic of the speech signal from the input audio signal and perform speech recognition using an acoustic model and a language model. In operation S590, the speech recognition apparatus 100 according to an embodiment may output a result of performing speech recognition. The speech recognition apparatus 100 may output the result of performing speech recognition in various ways such as sound, light, image, vibration, etc.
FIG. 6 is a flowchart of a method of outputting a result of speech recognition performed by a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 6, operations S610 to S650 in FIG. 6 may correspond to operation S330 in FIG. 3.
In operation S610, the speech recognition apparatus 100 according to an embodiment may analyze a speech command. The speech recognition apparatus 100 may analyze the speech command through natural language understanding and dialog management.
The speech recognition apparatus 100 may perform natural language understanding on the result of performing speech recognition. The speech recognition apparatus 100 may extract text estimated to have been uttered by a speaker by performing speech recognition on the speech command. The speech recognition apparatus 100 may perform natural language understanding on the text estimated to have been uttered by the speaker. The speech recognition apparatus 100 may grasp an intention of the speaker through natural language processing.
In operation S620, the speech recognition apparatus 100 may determine whether the speech command is a direct command for requesting a response of the speech recognition apparatus 100. The speech recognition apparatus 100 may determine whether the speech command is a direct command or an indirect command based on at least one of a sentence structure of the speech command, an intonation, a direction in which the speech command is received, a size of the speech command, and a result of natural language understanding.
The speech command may mean any acoustic speech signal received by the speech recognition apparatus 100, or may mean a speech signal uttered by a human being among the acoustic speech signals received by the speech recognition apparatus 100. The direct command may include a speech command that the user intentionally uttered so that the speech recognition apparatus 100 performs an operation that responds to the speech command. The indirect command may include all speech commands other than the direct command among the speech commands uttered by the user. For example, the indirect command may include a speech signal that the user uttered without intending that the speech recognition apparatus 100 perform speech recognition. In operation S630, the speech recognition apparatus 100 according to an embodiment may perform an operation of responding to the speech command when it is determined that the speech command is the direct command.
If it is determined that the speech command is an indirect command rather than a direct command for requesting a response from the speech recognition apparatus 100, then in operation S640, the speech recognition apparatus 100 according to an embodiment may display that a response to the speech command is possible. The speech recognition apparatus 100 may notify the user that a response is pending while it waits before responding to the speech command.
In operation S650, the speech recognition apparatus 100 may receive a confirmation command from the user. The speech recognition apparatus 100 may perform the operation of responding to the speech command when the confirmation command is received from the user.
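The overall flow of FIG. 6 might be sketched as follows; the helper callbacks are assumptions introduced only to show how the branches relate.

```python
# Hypothetical sketch of the FIG. 6 flow: respond immediately to a direct
# command; otherwise indicate that a response is possible and wait for a
# confirmation command from the user.

def handle_speech_command(command_text, is_direct, respond, indicate_waiting,
                          wait_for_confirmation):
    if is_direct:                      # S620 -> S630
        respond(command_text)
        return
    indicate_waiting()                 # S640: show that a response is possible
    if wait_for_confirmation():        # S650: e.g. the user utters "Say Robot"
        respond(command_text)
```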
FIGS. 7A and 7B show examples in which a speech recognition apparatus is included in a home robot.
Referring to FIGS. 7A and 7B, embodiments of the present disclosure are not limited to FIGS. 7A and 7B, and the speech recognition apparatus 100 may be any of various mobile computing apparatuses or non-mobile computing apparatuses. Alternatively, the speech recognition apparatus 100 may be included in a central control apparatus that controls a home network connecting various home appliances in a house.
FIGS. 7A and 7B show cases where the speech recognition apparatus 100 determines “weather” as an activation word corresponding to a current situation according to an embodiment of the present disclosure.
Referring to FIG. 7A, the user 10 may utter “I do not know what the weather will be like tomorrow” expressing an intention to wonder about tomorrow's weather during a dialog with another speaker. Since the speech recognition apparatus 100 has received a speech signal for uttering the activation word “weather”, the speech recognition apparatus 100 may perform speech recognition on a sentence “I do not know what the weather will be like tomorrow” including the activation word. The speech recognition apparatus 100 may activate a speech recognition function when the speech signal for uttering the activation word “weather” is received.
The speech recognition apparatus 100 may perform speech recognition on “I do not know what the weather will be like tomorrow”, which is a whole speech command including the speech signal for uttering the activation word “weather”.
Alternatively, when the speech signal for uttering the activation word “weather” is received, the speech recognition apparatus 100 may transmit “I do not know what”, which is the speech command received after the activation word, to a server to allow the server to perform speech recognition. Also, the speech recognition apparatus 100 may transmit a previously received and stored speech command to the server together with the speech command received after the activation word, and receive a result of the speech recognition performed by the server. When the speech signal for uttering the activation word “weather” is received, the speech recognition apparatus 100 may perform speech recognition on “tomorrow”, which is the speech command received and stored before the activation word, on the activation word “weather”, and on “I do not know what”, which is the speech command received after the activation word.
The speech recognition apparatus 100 may transmit “tomorrow weather” which is a keyword corresponding to the result of performing speech recognition to an external server and may receive and store “sunny” as information related to the transmitted keyword from the server.
The speech recognition apparatus 100 may perform natural language processing and sentence structure analysis on the speech command on which speech recognition has been performed to determine whether the speech command is a direct command for requesting a response of the speech recognition apparatus 100. For example, the speech recognition apparatus 100 may determine that the uttered speech command of FIG. 7A is an indirect command.
Since it is determined that the speech command is an indirect command other than the direct command for requesting the response of the speech recognition apparatus 100, the speech recognition apparatus 100 may display that a response to the speech command is possible. For example, the speech recognition apparatus 100 may inform the user 10 that the response is waiting in various ways such as sound, light, image, vibration, etc.
Referring to FIG. 7B, the user 10 may recognize that the speech recognition apparatus 100 is waiting to respond and may issue a confirmation command to request the response to the speech command. For example, the user 10 may issue the confirmation command to the speech recognition apparatus 100 by speaking “Say Robot”, which is a preset confirmation command. When the confirmation command is received from the user 10, the speech recognition apparatus 100 may output the speech “It will be sunny tomorrow” as an operation of responding to the speech command.
As described above, the speech recognition apparatus 100 may perform speech recognition when the user 10 simply makes a natural utterance suitable for the situation, even if the user 10 does not perform an operation for directly activating the speech recognition function. The speech recognition apparatus 100 may perform speech recognition by recognizing, as an activation word, a word included in the natural utterance suitable for the situation uttered by the user 10.
Also, as shown in FIGS. 7A and 7B, information about tomorrow's weather, which is the content that the user 10 wants to know, may be acquired in advance before the confirmation command “Say Robot” of the user 10 is received. The speech recognition apparatus 100 may therefore provide a proactive service, preparing a response before the user 10 explicitly requests that the speech recognition function be performed.
FIGS. 7A and 7B show examples in which the speech recognition apparatus 100 notifies the user 10 that a response to the speech command is waiting when the speech command is an indirect command. However, embodiments are not limited to FIGS. 7A and 7B.
For example, as shown in FIG. 8, the speech recognition apparatus 100 may output a result of performing speech recognition only when a speech command is a direct command for requesting a response of the speech recognition apparatus 100. The speech recognition apparatus 100 may not take a separate action when the speech command is not the direct command for requesting the response of the speech recognition apparatus 100.
FIG. 8 shows a case where the speech recognition apparatus 100 determines "air conditioner" as an activation word corresponding to a current situation according to an embodiment of the present disclosure.
Referring to FIG. 8, the first user 10 may utter “Today is the weather to turn on the air conditioner” to describe the current weather during a dialog with a second user 20.
Since the speech recognition apparatus 100 has received a speech signal for uttering an activation word “air conditioner”, the speech recognition apparatus 100 may determine whether “Today is the weather to turn on the air conditioner” that is a speech command including an activation word is a direct command or an indirect command.
The speech recognition apparatus 100 may determine that a speech command of the first user 10 is not the direct command. For example, the speech recognition apparatus 100 may determine that the speech command of the first user 10 is not the direct command because the speech command of the first user 10 does not have a sentence structure to ask a question or issue a command. The speech recognition apparatus 100 may not transmit an audio signal including the speech command to a server (or an embedded speech recognition module) because it is determined that the speech command of the first user 10 is not the direct command. The speech recognition apparatus 100 may ignore an utterance of the first user 10 that has been received and stored and repeat an operation of newly receiving and storing an input audio signal.
Next, in FIG. 8, the second user 20 may utter “turn on the air conditioner” that is a speech command to request the speech recognition apparatus 100 to turn on the air conditioner in response to the utterance of the first user 10.
Since the speech recognition apparatus 100 has received the speech signal for uttering the activation word “air conditioner”, the speech recognition apparatus 100 may determine whether “turn on the air conditioner” that is the speech command including the activation word is a direct command.
The speech recognition apparatus 100 may determine that the speech command of the second user 20 is the direct command. For example, the speech recognition apparatus 100 may determine that the speech command of the second user 20 is the direct command because the speech command of the second user 20 has a sentence structure to issue a command. The speech recognition apparatus 100 may transmit an audio signal including the speech command including the activation word to the server (or the embedded speech recognition module) because it is determined that the speech command of the second user 20 is the direct command. The server (or the embedded speech recognition module) may perform speech recognition on the speech command. The speech recognition apparatus 100 may control the air conditioner so that power of the air conditioner is turned on in response to a speech recognition result.
FIG. 9 is a flowchart of a method of determining whether a speech command is a direct command or an indirect command performed by a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 9, operations S910 to S930 in FIG. 9 may correspond to operation S610 in FIG. 6.
In operation S910, the speech recognition apparatus 100 may filter the speech command based on matching accuracy derived from natural language understanding. The speech recognition apparatus 100 may calculate the matching accuracy indicating a degree to which the speech command of a user can be matched with a machine-recognizable command based on natural language understanding. The speech recognition apparatus 100 may primarily determine whether the speech command is the direct command for requesting a response of the speech recognition apparatus 100 by comparing the calculated matching accuracy with a predetermined threshold value.
In operation S920, the speech recognition apparatus 100 may secondarily determine whether the speech command is the direct command by analyzing a sentence structure of the speech command. The speech recognition apparatus 100 may analyze morphemes included in the speech command and analyze the sentence structure of the speech command based on the final ending. For example, when the speech recognition apparatus 100 determines that the speech command is an interrogative sentence (e.g., “how…?”, “what…?”, etc.) or an imperative sentence (e.g., “close…!”, “stop…!”, “do…!”, etc.), the speech recognition apparatus 100 may assign a weight to a reliability value indicating that the speech command is a direct command.
In operation S930, the speech recognition apparatus 100 may filter the speech command based on the reliability value calculated in operations S910 and S920. The speech recognition apparatus 100 may finally determine whether the speech command is the direct command by comparing the calculated reliability value through operations S910 and S920 with a predetermined threshold value.
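A minimal sketch of how such a reliability value might be combined and thresholded is given below; the weight and threshold values are assumptions for illustration, not values taken from the disclosure.

```python
# Hypothetical sketch of operations S910-S930: combine a natural-language
# matching accuracy with a sentence-structure weight and compare the result
# against a threshold to decide whether a command is a direct command.

INTERROGATIVE_OR_IMPERATIVE_WEIGHT = 0.3   # assumed weight (S920)
DIRECT_COMMAND_THRESHOLD = 0.7             # assumed threshold (S930)

def is_direct_command(matching_accuracy, ends_like_question_or_order):
    """matching_accuracy in [0, 1] comes from natural language understanding
    (S910); ends_like_question_or_order reflects the final-ending analysis."""
    reliability = matching_accuracy
    if ends_like_question_or_order:
        reliability += INTERROGATIVE_OR_IMPERATIVE_WEIGHT
    return reliability >= DIRECT_COMMAND_THRESHOLD
```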
The speech recognition apparatus 100 may extract candidate activation words according to each situation before determining an activation word suitable for a situation. The speech recognition apparatus 100 may store the extracted candidate activation words in an embedded database or a database included in an external server.
FIG. 10 is a flowchart of a method of determining candidate activation words respectively corresponding to situations performed by a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 10, in operation S1010, the speech recognition apparatus 100 may group speech commands that may be uttered according to each situation. The speech commands that may be uttered in each situation may include speech commands that are expected to be uttered by a user in each situation or speech commands that have been uttered by the user in each situation.
The speech recognition apparatus 100 may receive a corpus uttered by the user in a plurality of situations and group speech commands included in the received corpus. The speech recognition apparatus 100 may receive information about a situation in which the speech commands included in the corpus are uttered, together with the corpus.
In operation S1020, the speech recognition apparatus 100 may extract statistics on words included in the speech commands that may be uttered for each situation. The speech recognition apparatus 100 may extract a frequency of a plurality of words included in speech commands received in each of a plurality of situations.
In operation S1030, the speech recognition apparatus 100 may extract, for each situation, at least one word that appears at a uniquely high frequency in the speech commands.
The speech recognition apparatus 100 may exclude, from the words regarded as appearing at a uniquely high frequency in a specific situation, any word that appears more frequently than a threshold frequency commonly across the speech commands uttered in the plurality of situations. The speech recognition apparatus 100 may determine a word that appears more frequently than the threshold frequency only in the speech commands uttered in the specific situation as a word appearing at a uniquely high frequency in the speech commands uttered in that situation.
In operation S1040, the speech recognition apparatus 100 may determine the extracted at least one word as a candidate activation word for each situation. The speech recognition apparatus 100 may store candidate activation words suitable for each situation with respect to the plurality of situations.
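One way to realize operations S1010 to S1040 is sketched below: count word frequencies per situation and keep words that are frequent in one situation but not common across situations. The count thresholds are assumptions introduced only for illustration.

```python
# Hypothetical sketch: extract candidate activation words per situation from
# grouped speech commands, keeping situation-specific high-frequency words.

from collections import Counter

def extract_candidate_activation_words(commands_by_situation,
                                       min_count=3, max_situations=1):
    counts = {situation: Counter(word
                                 for command in commands
                                 for word in command.lower().split())
              for situation, commands in commands_by_situation.items()}
    candidates = {}
    for situation, counter in counts.items():
        unique_words = []
        for word, count in counter.items():
            if count < min_count:
                continue
            # Exclude words that are also frequent in other situations.
            appears_in = sum(1 for c in counts.values() if c[word] >= min_count)
            if appears_in <= max_situations:
                unique_words.append(word)
        candidates[situation] = unique_words
    return candidates
```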
The speech recognition apparatus 100 may extract at least one candidate activation word corresponding to a current situation from stored data. The speech recognition apparatus 100 may determine at least one of the extracted candidate activation words as an activation word.
Referring to FIG. 10, a case where a candidate activation word is determined by analyzing a corpus including speech commands that may be uttered in a plurality of situations has been described as an example. However, embodiments of the present disclosure are not limited to FIG. 10. A user may directly input or delete the candidate activation word corresponding to each situation. The speech recognition apparatus 100 may store a candidate activation word corresponding to a specific situation in a database or delete a specific candidate activation word based on a user input. For example, if the user newly installs an air purifier in a house, the speech recognition apparatus 100 may add “air purifier” as a candidate activation word associated with the house, based on the user input.
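The user-driven maintenance of the candidate list described above might look like the following sketch, where an in-memory dictionary stands in for the database; the helper names are hypothetical.

```python
# Hypothetical sketch: adding or deleting a candidate activation word for a
# situation based on user input.

def add_candidate(db, situation, word):
    db.setdefault(situation, set()).add(word)

def remove_candidate(db, situation, word):
    db.get(situation, set()).discard(word)

db = {}
add_candidate(db, "house", "air purifier")   # e.g. a newly installed appliance
```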
Hereinafter, components of the speech recognition apparatus 100 according to an embodiment of the present disclosure will be described. Each component of the speech recognition apparatus 100 described below may perform each operation of the method of performing speech recognition by the speech recognition apparatus 100 described above.
FIGS. 11A and 11B are block diagrams of a speech recognition apparatus according to an embodiment of the present disclosure.
Referring to FIG. 11A, the speech recognition apparatus 100 may include a receiver 1110, a processor 1120, and an outputter 1130. However, the speech recognition apparatus 100 may be implemented with more components than those shown in FIGS. 11A and 11B. As shown in FIG. 11B, the speech recognition apparatus 100 may further include at least one of a memory 1140, a user inputter 1150, a communicator 1160, and a sensing unit 1170.
For example, the speech recognition apparatus 100 according to an embodiment of the present disclosure may be included in at least one of a non-mobile computing device, a mobile computing device, an electronic control apparatus of a vehicle, and a server, or may be connected to at least one of the non-mobile computing device, the mobile computing device, the electronic control apparatus of a vehicle, and the server by wire or wirelessly.
The receiver 1110 may receive an audio signal. For example, the receiver 1110 may directly receive an audio signal by converting external sound into electrical acoustic data using a microphone. Alternatively, the receiver 1110 may receive an audio signal transmitted from an external apparatus. In FIG. 11A, the receiver 1110 is shown as being included in the speech recognition apparatus 100, but the receiver 1110 may be included in a separate apparatus and connected to the speech recognition apparatus 100 by wire or wirelessly.
The processor 1120 may control the overall operation of the speech recognition apparatus 100. For example, the processor 1120 may control the receiver 1110 and the outputter 1130. The processor 1120 according to an embodiment may control the operation of the speech recognition apparatus 100 using an artificial intelligence technology. Although FIG. 11A illustrates one processor, the speech recognition apparatus may include one or more processors.
The processor 1120 may determine at least one activation word based on information related to a situation in which the speech recognition apparatus 100 operates. The processor 1120 may obtain at least one of, for example, a location of the speech recognition apparatus 100, time, whether the speech recognition apparatus 100 is connected to another electronic apparatus, whether the speech recognition apparatus 100 is moving, and information related to a characteristic of a user of the speech recognition apparatus 100 as the information related to the situation in which the speech recognition apparatus 100 operates.
In determining the at least one activation word corresponding to a current situation, the processor 1120 may determine the number of activation words based on a degree to which a speech recognition function of the speech recognition apparatus 100 is sensitively activated.
When it is determined that a speech signal for uttering an activation word included in the at least one activation word has been received, the processor 1120 may perform speech recognition on the input audio signal.
The processor 1120 may detect a speech signal from the input audio signal received through the receiver 1110 and perform speech recognition on the speech signal. The processor 1120 may include a speech recognition module for performing speech recognition. The processor 1120 may extract a frequency characteristic of the speech signal from the input audio signal and perform speech recognition using an acoustic model and a language model. The frequency characteristic may refer to a distribution of frequency components of an acoustic input extracted by analyzing a frequency spectrum of the acoustic input. Therefore, as shown in FIG. 11B, the speech recognition apparatus 100 may further include a memory 1140 that stores the acoustic model and the language model.
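As a rough, simplified illustration of what extracting a frequency characteristic could involve, the sketch below frames the signal, applies a window, and takes the per-frame magnitude spectrum. A practical recognizer would typically derive richer features (for example, filter-bank or cepstral coefficients); the frame and hop sizes are assumptions for illustration only.

```python
# Hypothetical sketch: per-frame magnitude spectrum as a simple frequency
# characteristic of an input signal (assumed to be a 1-D NumPy array).

import numpy as np

def frequency_characteristic(signal, frame_len=400, hop=160):
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # one spectrum per frame
```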
When it is determined that the speech signal for uttering the activation word has been received, the processor 1120 may perform speech recognition on the input audio signal including the speech signal for uttering the activation word.
The processor 1120 may receive and store the input audio signal prior to performing speech recognition. The processor 1120 may determine whether the input audio signal includes the speech signal for uttering the activation word. When it is determined that the input audio signal includes the speech signal for uttering the activation word included in the at least one activation word, the processor 1120 may perform speech recognition on the stored input audio signal and a subsequently received input audio signal.
The processor 1120 may determine whether to output a result of performing speech recognition immediately or to output the result of performing speech recognition when a confirmation command is received from the user. The processor 1120 may extract text uttered by the user by performing speech recognition on the input audio signal. The processor 1120 may determine whether a speech command included in the input audio signal is a direct command for requesting a response of the speech recognition apparatus, based on natural language understanding and sentence analysis of the extracted text.
The processor 1120 may perform an operation of responding to the speech command when it is determined that the speech command is the direct command. The processor 1120 may control the outputter 1130 to display that the response to the speech command is possible when it is determined that the speech command is not the direct command. The processor 1120 may perform the operation of responding to the speech command when a confirmation command is received from the user through the receiver 1110.
The processor 1120 according to an embodiment may be implemented with hardware and/or software components that perform particular functions. For example, the processor 1120 may include a user situation analyzer (not shown) for analyzing a situation in which the speech recognition apparatus 100 operates, a candidate activation word extractor (not shown) for extracting candidate activation words corresponding to a current situation from a database, an activation word switcher (not shown) for switching an activation word according to the current situation, and an audio signal processor (not shown) for processing an audio signal including a speech command for uttering the activation word.
The functions performed by the processor 1120 may be implemented by at least one microprocessor, or by circuit components for related functions. Some or all of the functions performed by the processor 1120 may be implemented by software modules configured in various programming languages or script languages that are executed in the processor 1120. FIGS. 11A and 11B illustrate that the speech recognition apparatus 100 includes one processor 1120, but the embodiment is not limited thereto. The speech recognition apparatus 100 may include a plurality of processors.
The outputter 1130 according to an embodiment may output a result of speech recognition performed on the input audio signal. The outputter 1130 may inform the user of the result of performing speech recognition or transmit the result to an external device (e.g., a smart phone, a smart TV, a smart watch, a server, etc.). For example, the outputter 1130 may include a speaker or a display capable of outputting an audio signal or a video signal.
Alternatively, the outputter 1130 may perform an operation corresponding to the result of performing speech recognition. For example, the speech recognition apparatus 100 may determine a function of the speech recognition apparatus 100 corresponding to the result of performing speech recognition, and may output a screen performing the function through the outputter 1130. Alternatively, the speech recognition apparatus 100 may transmit a keyword corresponding to the result of performing speech recognition to an external server, receive information related to the transmitted keyword from the server, and output the information on the screen through the outputter 1130.
The outputter 1130 may output information that is received from outside, is processed by the processor 1120, or is stored in the form of at least one of light, sound, image, and vibration. For example, the outputter 1130 may further include at least one of a display for outputting text or an image, an acoustic outputter for outputting sound, and a vibration motor for outputting vibration.
The memory 1140 of FIG. 11B may store the result of speech recognition performed by the processor 1120. The memory 1140 may store the input audio signal received through the receiver 1110. The memory 1140 may receive and store the input audio signal in units of a sentence, in units of a predetermined time length, or in units of a predetermined data size.
The memory 1140 may store instructions that are executed in the processor 1120 to control the speech recognition apparatus 100.
The memory 1140 according to an embodiment may store a database including a plurality of candidate activation words respectively corresponding to a plurality of situations. The processor 1120 may retrieve at least one candidate activation word corresponding to a situation in which the speech recognition apparatus 100 operates from data stored in memory 1140 in determining the at least one activation word. The processor 1120 may determine at least one of the retrieved candidate activation words as an activation word.
The memory 1140 may include a database including information about a sentence structure and grammar. The processor 1120 may determine whether the speech command included in the input audio signal is a direct command by using the information about the sentence structure and grammar stored in the memory 1140.
The memory 1140 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, secure digital (SD) or extreme digital (XD) memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
The user inputter 1150 according to an embodiment may receive a user input for controlling the speech recognition apparatus 100. The user inputter 1150 may include a user input device including a touch panel for receiving a touch of the user, a button for receiving a push operation of the user, a wheel for receiving a rotation operation of the user, a keyboard, a dome switch, etc., but is not limited thereto.
The communicator 1160 may communicate with an external electronic apparatus or server through wired communication or wireless communication. For example, the communicator 1160 may communicate with the server that stores a database including candidate activation words suitable for each situation with respect to a plurality of situations. The communicator 1160 may retrieve and extract at least one candidate activation word suitable for a current situation from the server. The processor 1120 may determine at least one of the retrieved candidate activation words as an activation word.
The communicator 1160 may acquire information related to a situation in which the speech recognition apparatus 100 operates from the external electronic apparatus. The communicator 1160 may acquire information sensed by the external electronic apparatus as the information related to the situation in which the speech recognition apparatus 100 operates.
The communicator 1160 may communicate with a server that performs a speech recognition function. For example, the communicator 1160 may transmit an audio signal including a sentence including an activation word to the server. The communicator 1160 may receive a result of speech recognition performed by the server.
The communicator 1160 may include a near distance communication module, a wired communication module, a mobile communication module, a broadcast receiving module, and the like.
The sensing unit 1170 may include one or more sensors and sense various information used to determine a situation in which the speech recognition apparatus 100 operates. For example, the sensing unit 1170 may sense a location of the speech recognition apparatus 100, information related to a motion of the speech recognition apparatus 100, information that may identify a user who uses the speech recognition apparatus 100, surrounding environment information of the speech recognition apparatus 100, and the like.
The sensing unit 1170 may include at least one of an illuminance sensor, a biosensor, a tilt sensor, a position sensor, a proximity sensor, a geomagnetism sensor, a gyroscope sensor, a temperature/humidity sensor, an infrared ray sensor, and a speed/acceleration sensor or a combination thereof.
The block diagrams shown in FIGS. 11A and 11B may also be applied to a speech recognition server. The speech recognition server may include a receiver for receiving an input audio signal from the speech recognition apparatus. The speech recognition server may be connected to the speech recognition apparatus by wire or wirelessly.
Also, the speech recognition server may include a processor and an outputter, and may further include a memory and a communicator. The processor of the speech recognition server may detect a speech signal from an input audio signal and perform speech recognition on the speech signal.
The outputter of the speech recognition server may transmit a result of performing speech recognition to the speech recognition apparatus. The speech recognition apparatus may output the result of performing speech recognition received from the speech recognition server.
The above-described embodiments may be implemented as a program executable on a computer and may be run on a general-purpose digital computer that operates the program using a computer-readable medium. Further, the structure of the data used in the above-described embodiments may be recorded on the computer-readable medium through various means. Furthermore, the above-described embodiments may be embodied in the form of a recording medium including instructions executable by a computer, such as a program module executed by a computer. For example, methods implemented with software modules or algorithms may be stored in computer-readable media as code or program instructions that may be read and executed by the computer.
The one or more embodiments of the present disclosure may be written as computer programs and may be implemented in general-use digital computers that execute the programs using a non-transitory computer-readable recording medium. In addition, a data structure used in the embodiments of the present disclosure may be written in a non-transitory computer-readable recording medium through various means. The one or more embodiments may be embodied as computer readable code/instructions on a recording medium, e.g., a program module to be executed in computers, which include computer-readable commands. For example, methods that are implemented as software modules or algorithms may be stored as computer readable codes or program instructions executable on a non-transitory computer-readable recording medium.
The computer-readable medium may include any recording medium that may be accessed by computers, volatile and non-volatile medium, and detachable and non-detachable medium. Examples of the computer-readable medium include, but are not limited to, magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., compact disc read-only memory (CD-ROMs), or digital versatile discs (DVDs)), etc. In addition, the computer-readable medium may include a computer storage medium.
The non-transitory computer-readable recording media may be distributed over network coupled computer systems, and data stored in the distributed recording media, e.g., a program command and code, may be executed by using at least one computer.
The particular executions described in the present disclosure are by way of example only and are not intended to limit the scope of the present disclosure in any way. For brevity of description, descriptions of various electronic components, control systems, software, and other functional aspects of the systems may be omitted according to the related art.
The terms “unit”, “module”, etc. as used herein mean a unit for processing at least one function or operation, which may be implemented in hardware or software or a combination of hardware and software. The terms “unit” and “module” may be implemented by a program stored on an addressable storage medium and executable by a processor.
For example, “unit” and “module” may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
Claims (15)
- A speech recognition method comprising: determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates; receiving an input audio signal; performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal; and outputting a result of the performing of the speech recognition.
- The speech recognition method of claim 1, wherein the information related to the situation comprises at least one of a location of the speech recognition apparatus, a time, whether the speech recognition apparatus is connected to another electronic apparatus, whether the speech recognition apparatus moves, or information related to a characteristic of a user of the speech recognition apparatus.
- The speech recognition method of claim 1, wherein the determining of the at least one activation word comprises: determining the number of the determined at least one activation word, based on a degree of sensitivity of an activated speech recognition function of the speech recognition apparatus.
- The speech recognition method of claim 1, further comprising: storing a plurality of activation words respectively corresponding to a plurality of situations, wherein the determining of the at least one activation word comprises: obtaining information related to the situation in which the speech recognition apparatus operates, and determining the at least one activation word corresponding to the situation in which the speech recognition apparatus operates.
- The speech recognition method of claim 1, wherein the receiving of the input audio signal comprises storing the input audio signal, and wherein the performing of speech recognition comprises: determining whether the input audio signal comprises the speech signal for uttering the activation word included in the at least one activation word, and when it is determined that the input audio signal comprises the speech signal for uttering the activation word included in the at least one activation word, performing speech recognition on the stored input audio signal and a subsequently received input audio signal.
- The speech recognition method of claim 1, wherein the performing of the speech recognition comprises performing speech recognition on the input audio signal comprising the speech signal for uttering the activation word included in the at least one activation word.
- The speech recognition method of claim 1, wherein the outputting of the result of performing the speech recognition comprises determining whether to output the result of performing the speech recognition immediately or whether to output the result of performing the speech recognition if a confirmation command is received from the user.
- The speech recognition method of claim 1, wherein the outputting of the result of performing the speech recognition comprises: extracting text uttered by the user by performing speech recognition on the input audio signal; determining whether a speech command included in the input audio signal is a direct command for requesting a response of the speech recognition apparatus based on natural language understanding and sentence structure analysis of the extracted text; and when it is determined that the speech command is the direct command, performing an operation of responding to the speech command.
- The speech recognition method of claim 8, wherein the outputting of the result of performing the speech recognition further comprises: when it is determined that the speech command is not the direct command, displaying that a response to the speech command is possible; and when a confirmation command is received from the user, performing the operation of responding to the speech command.
- The speech recognition method of claim 1, further comprising: receiving information about speech commands received from a user in a plurality of situations, wherein the receiving is performed by the speech recognition apparatus; extracting a plurality of words included in the speech commands; and based on a frequency of the plurality of words included in speech commands received in a specific situation among the plurality of situations, storing at least one word as an activation word corresponding to the specific situation.
- The speech recognition method of claim 1, wherein the determining of the at least one activation word comprises: obtaining information about at least one electronic apparatus connected to the speech recognition apparatus; and determining a word related to the at least one electronic apparatus as the at least one activation word.
- A speech recognition apparatus comprising: a receiver configured to receive an input audio signal; at least one processor configured to: determine at least one activation word based on information related to a situation in which a speech recognition apparatus operates, and perform speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal; and an outputter configured to output a result of the speech recognition.
- The speech recognition apparatus of claim 12, wherein the information related to the situation comprises at least one of a location of the speech recognition apparatus, a time, whether the speech recognition apparatus is connected to another electronic apparatus, whether the speech recognition apparatus moves, or information related to a characteristic of a user of the speech recognition apparatus.
- The speech recognition apparatus of claim 12, wherein, in the determining of the at least one activation word, the processor is further configured to determine the number of the determined at least one activation word, based on a degree of sensitivity of an activated speech recognition function of the speech recognition apparatus.
- A non-transitory computer-readable recording medium having recorded thereon at least one program comprising instructions for allowing a speech recognition apparatus to execute a speech recognition method, the speech recognition method comprising: determining at least one activation word based on information related to a situation in which a speech recognition apparatus operates; receiving an input audio signal; performing speech recognition on the input audio signal, based on whether a speech signal for uttering an activation word included in the at least one activation word has been included in the input audio signal; and outputting a result of the performing of the speech recognition.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780078008.1A CN110100277B (en) | 2016-12-15 | 2017-10-17 | Speech recognition method and device |
EP17879966.4A EP3533052B1 (en) | 2016-12-15 | 2017-10-17 | Speech recognition method and apparatus |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20160171670 | 2016-12-15 | ||
KR10-2016-0171670 | 2016-12-15 | ||
KR1020170054513A KR102409303B1 (en) | 2016-12-15 | 2017-04-27 | Method and Apparatus for Voice Recognition |
KR10-2017-0054513 | 2017-04-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018110818A1 true WO2018110818A1 (en) | 2018-06-21 |
Family
ID=62558848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2017/011440 WO2018110818A1 (en) | 2016-12-15 | 2017-10-17 | Speech recognition method and apparatus |
Country Status (2)
Country | Link |
---|---|
US (2) | US11003417B2 (en) |
WO (1) | WO2018110818A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020171809A1 (en) * | 2019-02-20 | 2020-08-27 | Google Llc | Utilizing pre-event and post-event input streams to engage an automated assistant |
WO2021145895A1 (en) * | 2020-01-17 | 2021-07-22 | Google Llc | Selectively invoking an automated assistant based on detected environmental conditions without necessitating voice-based invocation of the automated assistant |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10186263B2 (en) * | 2016-08-30 | 2019-01-22 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Spoken utterance stop event other than pause or cessation in spoken utterances stream |
KR102667413B1 (en) * | 2016-10-27 | 2024-05-21 | 삼성전자주식회사 | Method and Apparatus for Executing Application based on Voice Command |
US11003417B2 (en) * | 2016-12-15 | 2021-05-11 | Samsung Electronics Co., Ltd. | Speech recognition method and apparatus with activation word based on operating environment of the apparatus |
CN111108550A (en) * | 2017-09-21 | 2020-05-05 | 索尼公司 | Information processing device, information processing terminal, information processing method, and program |
US10460746B2 (en) * | 2017-10-31 | 2019-10-29 | Motorola Solutions, Inc. | System, method, and device for real-time language detection and real-time language heat-map data structure creation and/or modification |
KR102348124B1 (en) * | 2017-11-07 | 2022-01-07 | 현대자동차주식회사 | Apparatus and method for recommending function of vehicle |
US10192554B1 (en) * | 2018-02-26 | 2019-01-29 | Sorenson Ip Holdings, Llc | Transcription of communications using multiple speech recognition systems |
US10789940B2 (en) * | 2018-03-27 | 2020-09-29 | Lenovo (Singapore) Pte. Ltd. | Dynamic wake word identification |
JP2019204025A (en) * | 2018-05-24 | 2019-11-28 | レノボ・シンガポール・プライベート・リミテッド | Electronic apparatus, control method, and program |
CN112272846A (en) * | 2018-08-21 | 2021-01-26 | 谷歌有限责任公司 | Dynamic and/or context-specific hotwords for invoking an automated assistant |
CN112292724A (en) | 2018-08-21 | 2021-01-29 | 谷歌有限责任公司 | Dynamic and/or context-specific hotwords for invoking automated assistants |
US11289097B2 (en) * | 2018-08-28 | 2022-03-29 | Dell Products L.P. | Information handling systems and methods for accurately identifying an active speaker in a communication session |
JP7202853B2 (en) * | 2018-11-08 | 2023-01-12 | シャープ株式会社 | refrigerator |
JP7023823B2 (en) * | 2018-11-16 | 2022-02-22 | アルパイン株式会社 | In-vehicle device and voice recognition method |
JP7002823B2 (en) * | 2018-12-06 | 2022-01-20 | アルパイン株式会社 | Guidance voice output control system and guidance voice output control method |
US10637985B1 (en) * | 2019-05-28 | 2020-04-28 | Toyota Research Institute, Inc. | Systems and methods for locating a mobile phone in a vehicle |
KR102246936B1 (en) * | 2019-06-20 | 2021-04-29 | 엘지전자 주식회사 | Method and apparatus for recognizing a voice |
WO2021010562A1 (en) | 2019-07-15 | 2021-01-21 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
KR20210008788A (en) | 2019-07-15 | 2021-01-25 | 삼성전자주식회사 | Electronic apparatus and controlling method thereof |
US11176939B1 (en) * | 2019-07-30 | 2021-11-16 | Suki AI, Inc. | Systems, methods, and storage media for performing actions based on utterance of a command |
US10971151B1 (en) | 2019-07-30 | 2021-04-06 | Suki AI, Inc. | Systems, methods, and storage media for performing actions in response to a determined spoken command of a user |
KR20190113693A (en) * | 2019-09-18 | 2019-10-08 | 엘지전자 주식회사 | Artificial intelligence apparatus and method for recognizing speech of user in consideration of word usage frequency |
US11481510B2 (en) * | 2019-12-23 | 2022-10-25 | Lenovo (Singapore) Pte. Ltd. | Context based confirmation query |
TWI735168B (en) * | 2020-02-27 | 2021-08-01 | 東元電機股份有限公司 | Voice robot |
CN113359538A (en) * | 2020-03-05 | 2021-09-07 | 东元电机股份有限公司 | Voice control robot |
CN113539251A (en) * | 2020-04-20 | 2021-10-22 | 青岛海尔洗衣机有限公司 | Control method, device, equipment and storage medium for household electrical appliance |
US11587564B2 (en) | 2020-04-20 | 2023-02-21 | Rovi Guides, Inc. | Enhancing signature word detection in voice assistants |
US11128636B1 (en) * | 2020-05-13 | 2021-09-21 | Science House LLC | Systems, methods, and apparatus for enhanced headsets |
CN113096654B (en) * | 2021-03-26 | 2022-06-24 | 山西三友和智慧信息技术股份有限公司 | Computer voice recognition system based on big data |
DE102022207082A1 (en) * | 2022-07-11 | 2024-01-11 | Volkswagen Aktiengesellschaft | Location-based activation of voice control without using a specific activation term |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03145700A (en) * | 1989-11-01 | 1991-06-20 | Nippon Telegr & Teleph Corp <Ntt> | Word standard pattern registering system |
US20120303267A1 (en) * | 2009-07-27 | 2012-11-29 | Robert Bosch Gmbh | Method and system for improving speech recognition accuracy by use of geographic information |
US20130166293A1 (en) * | 2009-06-24 | 2013-06-27 | At&T Intellectual Property I, L.P. | Automatic disclosure detection |
US20140324431A1 (en) * | 2013-04-25 | 2014-10-30 | Sensory, Inc. | System, Method, and Apparatus for Location-Based Context Driven Voice Recognition |
US20140350924A1 (en) * | 2013-05-24 | 2014-11-27 | Motorola Mobility Llc | Method and apparatus for using image data to aid voice recognition |
US20150161990A1 (en) | 2013-12-05 | 2015-06-11 | Google Inc. | Promoting voice actions to hotwords |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH086591A (en) * | 1994-06-15 | 1996-01-12 | Sony Corp | Voice output device |
US5799279A (en) * | 1995-11-13 | 1998-08-25 | Dragon Systems, Inc. | Continuous speech recognition of text and commands |
US5930751A (en) * | 1997-05-30 | 1999-07-27 | Lucent Technologies Inc. | Method of implicit confirmation for automatic speech recognition |
US8275617B1 (en) * | 1998-12-17 | 2012-09-25 | Nuance Communications, Inc. | Speech command input recognition system for interactive computer display with interpretation of ancillary relevant speech query terms into commands |
US7949529B2 (en) * | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
US8270933B2 (en) * | 2005-09-26 | 2012-09-18 | Zoomsafer, Inc. | Safety features for portable electronic device |
US20080134038A1 (en) | 2006-12-05 | 2008-06-05 | Electronics And Telecommunications Research | Interactive information providing service method and apparatus |
KR20080052279A (en) | 2006-12-05 | 2008-06-11 | 한국전자통신연구원 | Apparatus and method of dialogue tv agent service for providing daily information |
US9305548B2 (en) * | 2008-05-27 | 2016-04-05 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8037070B2 (en) * | 2008-06-25 | 2011-10-11 | Yahoo! Inc. | Background contextual conversational search |
US8682667B2 (en) * | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8738377B2 (en) * | 2010-06-07 | 2014-05-27 | Google Inc. | Predicting and learning carrier phrases for speech input |
JP5548541B2 (en) * | 2010-07-13 | 2014-07-16 | 富士通テン株式会社 | Information providing system and in-vehicle device |
JP2013080015A (en) * | 2011-09-30 | 2013-05-02 | Toshiba Corp | Speech recognition device and speech recognition method |
US8924219B1 (en) | 2011-09-30 | 2014-12-30 | Google Inc. | Multi hotword robust continuous voice command detection in mobile devices |
US20130096771A1 (en) * | 2011-10-12 | 2013-04-18 | Continental Automotive Systems, Inc. | Apparatus and method for control of presentation of media to users of a vehicle |
US8666751B2 (en) * | 2011-11-17 | 2014-03-04 | Microsoft Corporation | Audio pattern matching for device activation |
US8942692B2 (en) * | 2011-12-02 | 2015-01-27 | Text Safe Teens, Llc | Remote mobile device management |
KR101743514B1 (en) | 2012-07-12 | 2017-06-07 | 삼성전자주식회사 | Method for controlling external input and Broadcasting receiving apparatus thereof |
US9288421B2 (en) | 2012-07-12 | 2016-03-15 | Samsung Electronics Co., Ltd. | Method for controlling external input and broadcast receiving apparatus |
CN102945671A (en) | 2012-10-31 | 2013-02-27 | 四川长虹电器股份有限公司 | Voice recognition method |
US9654563B2 (en) * | 2012-12-14 | 2017-05-16 | Biscotti Inc. | Virtual remote functionality |
DE102013001219B4 (en) * | 2013-01-25 | 2019-08-29 | Inodyn Newmedia Gmbh | Method and system for voice activation of a software agent from a standby mode |
US10176167B2 (en) * | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9245527B2 (en) * | 2013-10-11 | 2016-01-26 | Apple Inc. | Speech recognition wake-up of a handheld portable electronic device |
US20170200455A1 (en) * | 2014-01-23 | 2017-07-13 | Google Inc. | Suggested query constructor for voice actions |
CN103996400A (en) | 2014-04-29 | 2014-08-20 | 四川长虹电器股份有限公司 | Speech recognition method |
US9715875B2 (en) * | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9697828B1 (en) * | 2014-06-20 | 2017-07-04 | Amazon Technologies, Inc. | Keyword detection modeling using contextual and environmental information |
US9721001B2 (en) * | 2014-06-27 | 2017-08-01 | Intel Corporation | Automatic question detection in natural language |
JP2016024212A (en) * | 2014-07-16 | 2016-02-08 | ソニー株式会社 | Information processing device, information processing method and program |
KR102292546B1 (en) | 2014-07-21 | 2021-08-23 | 삼성전자주식회사 | Method and device for performing voice recognition using context information |
KR102585228B1 (en) | 2015-03-13 | 2023-10-05 | 삼성전자주식회사 | Speech recognition system and method thereof |
EP3067884B1 (en) * | 2015-03-13 | 2019-05-08 | Samsung Electronics Co., Ltd. | Speech recognition system and speech recognition method thereof |
JP2016218852A (en) * | 2015-05-22 | 2016-12-22 | ソニー株式会社 | Information processor, information processing method, and program |
US10121471B2 (en) * | 2015-06-29 | 2018-11-06 | Amazon Technologies, Inc. | Language model speech endpointing |
US9542941B1 (en) * | 2015-10-01 | 2017-01-10 | Lenovo (Singapore) Pte. Ltd. | Situationally suspending wakeup word to enable voice command input |
US9928840B2 (en) * | 2015-10-16 | 2018-03-27 | Google Llc | Hotword recognition |
US10388280B2 (en) * | 2016-01-27 | 2019-08-20 | Motorola Mobility Llc | Method and apparatus for managing multiple voice operation trigger phrases |
US9691384B1 (en) * | 2016-08-19 | 2017-06-27 | Google Inc. | Voice action biasing system |
US10043516B2 (en) * | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10217453B2 (en) * | 2016-10-14 | 2019-02-26 | Soundhound, Inc. | Virtual assistant configured by selection of wake-up phrase |
US11003417B2 (en) * | 2016-12-15 | 2021-05-11 | Samsung Electronics Co., Ltd. | Speech recognition method and apparatus with activation word based on operating environment of the apparatus |
US11482224B2 (en) * | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
2017
- 2017-10-13 US US15/783,476 patent US11003417B2 (en), Active
- 2017-10-17 WO PCT/KR2017/011440 patent WO2018110818A1 (en), status unknown

2021
- 2021-03-29 US US17/215,409 patent US11687319B2 (en), Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03145700A (en) * | 1989-11-01 | 1991-06-20 | Nippon Telegr & Teleph Corp <Ntt> | Word standard pattern registering system |
US20130166293A1 (en) * | 2009-06-24 | 2013-06-27 | At&T Intellectual Property I, L.P. | Automatic disclosure detection |
US20120303267A1 (en) * | 2009-07-27 | 2012-11-29 | Robert Bosch Gmbh | Method and system for improving speech recognition accuracy by use of geographic information |
US20140324431A1 (en) * | 2013-04-25 | 2014-10-30 | Sensory, Inc. | System, Method, and Apparatus for Location-Based Context Driven Voice Recognition |
US20140350924A1 (en) * | 2013-05-24 | 2014-11-27 | Motorola Mobility Llc | Method and apparatus for using image data to aid voice recognition |
US20150161990A1 (en) | 2013-12-05 | 2015-06-11 | Google Inc. | Promoting voice actions to hotwords |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020171809A1 (en) * | 2019-02-20 | 2020-08-27 | Google Llc | Utilizing pre-event and post-event input streams to engage an automated assistant |
US11423885B2 (en) | 2019-02-20 | 2022-08-23 | Google Llc | Utilizing pre-event and post-event input streams to engage an automated assistant |
WO2021145895A1 (en) * | 2020-01-17 | 2021-07-22 | Google Llc | Selectively invoking an automated assistant based on detected environmental conditions without necessitating voice-based invocation of the automated assistant |
Also Published As
Publication number | Publication date |
---|---|
US20210216276A1 (en) | 2021-07-15 |
US11003417B2 (en) | 2021-05-11 |
US20180173494A1 (en) | 2018-06-21 |
US11687319B2 (en) | 2023-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018110818A1 (en) | | Speech recognition method and apparatus |
EP3533052A1 (en) | | Speech recognition method and apparatus |
WO2018117532A1 (en) | | Speech recognition method and apparatus |
WO2020189850A1 (en) | | Electronic device and method of controlling speech recognition by electronic device |
WO2020231181A1 (en) | | Method and device for providing voice recognition service |
WO2018124620A1 (en) | | Method and device for transmitting and receiving audio data |
WO2018128362A1 (en) | | Electronic apparatus and method of operating the same |
WO2012169679A1 (en) | | Display apparatus, method for controlling display apparatus, and voice recognition system for display apparatus |
WO2019124742A1 (en) | | Method for processing voice signals of multiple speakers, and electronic device according thereto |
WO2020231230A1 (en) | | Method and apparatus for performing speech recognition with wake on voice |
WO2019031707A1 (en) | | Mobile terminal and method for controlling mobile terminal using machine learning |
WO2021015308A1 (en) | | Robot and trigger word recognition method therefor |
EP3545436A1 (en) | | Electronic apparatus and method of operating the same |
WO2019112342A1 (en) | | Voice recognition apparatus and operation method thereof cross-reference to related application |
WO2021045447A1 (en) | | Apparatus and method for providing voice assistant service |
WO2019124963A1 (en) | | Speech recognition device and method |
WO2019164120A1 (en) | | Electronic device and control method thereof |
WO2020105856A1 (en) | | Electronic apparatus for processing user utterance and controlling method thereof |
WO2020122677A1 (en) | | Method of performing function of electronic device and electronic device using same |
WO2020085769A1 (en) | | Speech recognition method and apparatus in environment including plurality of apparatuses |
WO2019078615A1 (en) | | Method and electronic device for translating speech signal |
WO2021029643A1 (en) | | System and method for modifying speech recognition result |
WO2020096218A1 (en) | | Electronic device and operation method thereof |
WO2020071858A1 (en) | | Electronic apparatus and assistant service providing method thereof |
WO2020251074A1 (en) | | Artificial intelligence robot for providing voice recognition function and operation method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17879966; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2017879966; Country of ref document: EP; Effective date: 20190530 |
| | NENP | Non-entry into the national phase | Ref country code: DE |