WO2015088141A1 - Smart home appliances, operating method thereof, and voice recognition system using the smart home appliances - Google Patents
- Publication number
- WO2015088141A1 (PCT/KR2014/010536)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice
- information
- unit
- user
- home appliance
- Prior art date
Classifications
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06V40/174—Facial expression recognition
- G10L13/027—Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/24—Speech recognition using non-acoustical features
- G10L15/25—Speech recognition using position of the lips, movement of the lips or face analysis
- G10L15/26—Speech to text systems
- G10L15/28—Constructional details of speech recognition systems
- G10L17/24—Interactive procedures; Man-machine interfaces: the user being prompted to utter a password or a predefined phrase
- G10L21/0208—Noise filtering
- G10L25/63—Speech or voice analysis specially adapted for estimating an emotional state
- G10L25/84—Detection of presence or absence of voice signals for discriminating voice from noise
- G10L2015/088—Word spotting
- G10L2015/223—Execution procedure of a spoken command
- G10L2015/226—Speech recognition procedures using non-speech characteristics
- G10L2015/227—Speech recognition procedures using non-speech characteristics of the speaker; Human-factor methodology
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
Definitions
- The present disclosure relates to smart home appliances, an operating method thereof, and a voice recognition system using the smart home appliances.
- Home appliances, electronic products installed in homes, include refrigerators, air conditioners, cookers, and vacuum cleaners. Conventionally, such home appliances have been operated either by approaching and directly manipulating them or by controlling them remotely through a remote controller.
- Fig. 1 is a view illustrating a configuration of a conventional home appliance and its operating method.
- A conventional home appliance 1 includes a voice recognition unit 2, a control unit 3, a memory 4, and a driving unit 5.
- When a user speaks, the home appliance 1 collects the spoken voice and interprets the collected voice using the voice recognition unit 2.
- A text corresponding to the voice may be extracted.
- The control unit 3 compares the extracted first text information with second text information stored in the memory 4 to determine whether the texts match.
- The control unit 3 may recognize a predetermined function of the home appliance 1 corresponding to the matched second text information.
- The control unit 3 may then operate the driving unit 5 on the basis of the recognized function.
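The conventional flow described above (collect the voice, extract first text information, compare it with the second text information stored in the memory, and drive the matched function) can be sketched as follows. This is a minimal illustrative sketch: the class, the commands, and `recognize_text` are stand-ins for the numbered units in Fig. 1, not part of the patent.

```python
# Hypothetical sketch of the conventional text-matching flow in Fig. 1.
# recognize_text() stands in for the voice recognition unit (2); the
# memory (4) is modeled as a dict mapping stored text to functions.

def recognize_text(voice_samples):
    """Placeholder for the voice recognition unit: voice -> text."""
    return voice_samples.strip().lower()

class ConventionalAppliance:
    def __init__(self):
        # memory (4): second text information mapped to setting functions
        self.memory = {
            "power on": self.power_on,
            "power off": self.power_off,
        }
        self.running = False

    def power_on(self):
        self.running = True   # driving unit (5) starts

    def power_off(self):
        self.running = False

    def handle_voice(self, voice_samples):
        first_text = recognize_text(voice_samples)   # extract text
        action = self.memory.get(first_text)         # compare with memory (4)
        if action is not None:                       # texts matched
            action()                                 # operate driving unit (5)
            return True
        return False                                 # no match: ignore
```

The weakness motivating the disclosure is visible here: the appliance acts on any matching text, with no way to tell whether the user actually intended a command.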
- Embodiments provide a smart home appliance with an improved voice recognition rate, an operating method thereof, and a voice recognition system using the smart home appliance.
- In one embodiment, a smart home appliance includes: a voice input unit collecting a voice; a voice recognition unit recognizing a text corresponding to the voice collected through the voice input unit; a capturing unit collecting an image for detecting a user's face; a memory unit mapping the text recognized by the voice recognition unit to a setting function and storing the mapped information; and a control unit determining whether to perform a voice recognition service on the basis of at least one of image information collected by the capturing unit and voice information collected by the voice input unit.
- The control unit may include a face detection unit recognizing that a user is in a staring state for voice input when image information on the user's face is collected for more than a setting time through the capturing unit.
- The control unit may determine that a voice recognition service standby state is entered when keyword information is recognized in the voice through the voice input unit and the user is recognized as being in the staring state through the face detection unit.
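The two conditions above (keyword information present in the voice, and a staring state held for more than a setting time) can be sketched as follows. The class name, wake keyword, and timing value are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch: enter the voice recognition standby state only when
# BOTH (1) keyword information is found in the collected voice and (2) the
# face detection unit has observed the user's face for more than a setting
# time (the "staring state"). All names and thresholds are assumptions.

STARING_TIME_S = 1.5   # assumed setting time

class FaceDetectionUnit:
    def __init__(self):
        self.face_seen_since = None   # timestamp when the face first appeared

    def update(self, face_in_frame, now):
        """Feed one capturing-unit observation at time `now` (seconds)."""
        if face_in_frame:
            if self.face_seen_since is None:
                self.face_seen_since = now
        else:
            self.face_seen_since = None

    def is_staring(self, now):
        return (self.face_seen_since is not None
                and now - self.face_seen_since > STARING_TIME_S)

def should_enter_standby(text, face_unit, now, keywords=("hi appliance",)):
    """Both the keyword condition and the staring condition must hold."""
    has_keyword = any(k in text.lower() for k in keywords)
    return has_keyword and face_unit.is_staring(now)
```

Requiring both signals is what lets the appliance ignore incidental speech that merely happens to contain a keyword, which is the misrecognition problem the claims target.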
- The smart home appliance may further include: a filter unit removing noise from the voice inputted through the voice input unit; and a memory unit mapping, in advance, voice information related to an operation of the smart home appliance and voice information unrelated to such an operation in the inputted voice, and storing the mapped information.
- The smart home appliance may further include: a region recognition unit determining a user's region on the basis of the voice collected through the voice input unit; and an output unit outputting region-customized information on the basis of the region determined by the region recognition unit and information on the setting function.
- The setting function may include a plurality of functions divided according to regions, and the region-customized information, including the one function among the plurality of functions that matches the information on the region, is outputted through the output unit.
- The output unit may output the region-customized information using a dialect of the region determined by the region recognition unit.
- The output unit may output a key word for security setting, and the voice input unit may set a reply word corresponding to the key word.
- The smart home appliance may further include an emotion recognition unit and an output unit, wherein the voice recognition unit may recognize a text corresponding to first voice information in the voice collected through the voice input unit; the emotion recognition unit may extract a user's emotion on the basis of second voice information in the collected voice; and the output unit may output user-customized information on the basis of the user's emotion determined by the emotion recognition unit and information on the setting function.
- The first voice information may include a language element in the collected voice, and the second voice information may include a non-language element related to the user's emotion.
- The emotion recognition unit may include a database in which information on the user's voice characteristics is mapped to information on emotion states, and the information on the user's voice characteristics may include information on a speech spectrum having characteristics for each of the user's emotions.
- The setting function may include a plurality of functions to be recommended or selected, and the user-customized information, including the one function among the plurality of functions that matches the information on the user's emotion, is outputted through the output unit.
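The emotion-recognition claims above (a database mapping voice characteristics such as the speech spectrum to emotion states, and one recommended function matching the recognized emotion) could be sketched as follows. The reduction of the speech spectrum to pitch/energy features, the thresholds, and the recommended modes are all our assumptions for illustration.

```python
# Hypothetical sketch of the emotion recognition unit's database: mapped
# pairs of crude voice characteristics (pitch range, minimum energy) and
# emotion states, plus a table picking the one setting function that
# matches the recognized emotion. Thresholds are made up.

EMOTION_DB = [
    # (min_pitch_hz, max_pitch_hz, min_energy, label) -- assumed values
    (180, 400, 0.8, "angry"),
    (220, 400, 0.6, "happy"),
    (80, 180, 0.0, "sad"),
]

RECOMMENDED_MODE = {
    "happy": "normal cooling",
    "angry": "quiet mode with lavender fragrance",
    "sad": "warm air and low fan speed",
}

def classify_emotion(pitch_hz, energy):
    """First matching database row wins; otherwise 'neutral'."""
    for lo, hi, min_energy, label in EMOTION_DB:
        if lo <= pitch_hz < hi and energy >= min_energy:
            return label
    return "neutral"

def recommend(pitch_hz, energy):
    """Return the recognized emotion and the one matching setting function."""
    emotion = classify_emotion(pitch_hz, energy)
    return emotion, RECOMMENDED_MODE.get(emotion, "no recommendation")
```

A real implementation would classify from full speech-spectrum features rather than two scalars; the point here is only the mapped-database lookup structure the claim describes.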
- The smart home appliance may further include: a position information recognition unit recognizing position information; and an output unit outputting the information on the setting function on the basis of the position information recognized by the position information recognition unit.
- The position information recognition unit may include: a GPS reception unit receiving a position coordinate from an external position information transmission unit; and a first communication module communicably connected to a second communication module equipped in an external server.
- The output unit may include a voice output unit outputting the information on the setting function as a voice, using a dialect of the position or region recognized by the position information recognition unit.
- The output unit may output, among a plurality of pieces of information on the setting function, the information optimized for the region recognized by the position information recognition unit.
- The position information recognized by the position information recognition unit may include weather information.
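The position-recognition claims above describe two paths: GPS coordinates received from an external transmitter, and a communication address checked when the appliance's first communication module connects to a server. A minimal sketch, assuming a made-up region table and coordinate ranges (none of these values appear in the patent):

```python
# Hypothetical region lookup covering both claimed paths. The coordinate
# ranges, region names, and mode table are illustrative assumptions.

REGION_MODES = {
    "jeolla": "rich soup course",
    "gyeongsang": "spicy soup course",
}

def region_from_gps(lat, lon):
    """Path 1: coarse mapping from GPS coordinates to a region name."""
    if 34.5 <= lat <= 35.5:
        return "jeolla" if lon < 127.5 else "gyeongsang"
    return "unknown"

def region_from_address(comm_address, server_table):
    """Path 2: the external server resolves the appliance's
    communication address to a region."""
    return server_table.get(comm_address, "unknown")

def region_customized_function(region, default="standard course"):
    """Pick the one setting function matching the recognized region."""
    return REGION_MODES.get(region, default)
```

Either path yields a region name, after which the output step is the same table lookup, which is why the claims can treat GPS reception and server address checking interchangeably.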
- In another embodiment, an operating method of a smart home appliance includes: collecting a voice through a voice input unit; recognizing whether keyword information is included in the collected voice; collecting image information on a user's face through a capturing unit equipped in the smart home appliance; and entering a standby state of a voice recognition service on the basis of the image information on the user's face.
- The method may further include: determining a user's region on the basis of the collected voice; and driving the smart home appliance on the basis of information on the setting function and information on the determined region.
- The method may further include outputting region-customized information related to the driving of the smart home appliance on the basis of the information on the determined region.
- The outputting of the region-customized information may include outputting a voice or a screen using a dialect of the user's region.
- The method may further include performing a security setting, wherein the performing of the security setting may include: outputting a set key word; and inputting a reply word in response to the outputted key word.
- The method may further include: extracting a user's emotion state on the basis of the collected voice; and recommending an operation mode on the basis of information on the user's emotion state.
- The method may further include: recognizing an installation position of the smart home appliance through a position information recognition unit; and driving the smart home appliance on the basis of information on the installation position.
- The recognizing of the installation position may include receiving GPS coordinate information from a GPS satellite or a communication base station.
- The recognizing of the installation position may include checking a communication address when a first communication module equipped in the smart home appliance is connected to a second communication module equipped in a server.
- In still another embodiment, a voice recognition system includes: a mobile device including a voice input unit receiving a voice; a smart home appliance operated and controlled on the basis of the voice collected through the voice input unit; and a communication module equipped in each of the mobile device and the smart home appliance, wherein the mobile device includes a movement detection unit determining whether to enter a standby state of a voice recognition service in the smart home appliance by detecting a movement of the mobile device.
- The movement detection unit may include an acceleration sensor or a gyro sensor detecting a change in the inclined angle of the mobile device, wherein the voice input unit may be disposed at a lower part of the mobile device, and when a user brings the voice input unit close to the mouth for voice input while gripping the mobile device, the angle value detected by the acceleration sensor or the gyro sensor may be reduced.
- The movement detection unit may include an illumination sensor detecting the intensity of external light reaching the mobile device, and when a user brings the voice input unit close to the mouth for voice input while gripping the mobile device, the light intensity value detected by the illumination sensor may be increased.
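The two sensor conditions above (the tilt angle is reduced, or the detected light intensity is increased, when the device is raised toward the mouth) can be sketched as a simple trigger. The threshold values below are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch of the movement detection unit's decision: either
# the tilt angle falling (acceleration/gyro sensor path) or the detected
# light intensity rising (illumination sensor path) signals that the
# user has raised the voice input unit toward the mouth. Thresholds are
# assumed values.

TILT_ENTER_DEG = 30.0   # angle value is reduced in the claimed posture
LUX_ENTER = 800.0       # light intensity is increased in the claimed posture

def movement_triggers_standby(prev_tilt_deg, tilt_deg, prev_lux, lux):
    """Compare consecutive sensor readings; either path may trigger."""
    tilt_trigger = tilt_deg < prev_tilt_deg and tilt_deg <= TILT_ENTER_DEG
    light_trigger = lux > prev_lux and lux >= LUX_ENTER
    return tilt_trigger or light_trigger
```

Comparing consecutive readings rather than absolute values alone is one way to require an actual movement, as the claim does, instead of merely a held posture.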
- A user's face, or an action of manipulating a mobile device, is recognized, and from this it is determined whether the user intends to speak a voice command, so that voice misrecognition may be prevented.
- A command subject may be recognized by extracting a feature (a specific word) of the command voice that a user utters, and only the specific electronic product among a plurality of electronic products responds according to the recognized command subject; therefore, miscommunication may be prevented during operation of an electronic product.
- Happy, angry, sad, and trembling emotions may be classified from a user's voice by using mapped information of voice characteristics and emotion states, and on the basis of the classified emotion, an operation of a home appliance may be performed or recommended.
- Since mapping information distinguishing voice information related to an operation of an air conditioner from voice information related to noise in a user's voice is stored in the air conditioner, misrecognition of the user's voice may be reduced.
- Fig. 1 is a view illustrating a configuration of a conventional home appliance and its operating method.
- Fig. 2 is a view illustrating a configuration of an air conditioner as one example of a smart appliance according to a first embodiment of the present invention.
- Figs. 3 and 4 are block diagrams illustrating a configuration of the air conditioner.
- Fig. 5 is a flowchart illustrating a control method of a smart home appliance according to a first embodiment of the present invention.
- Fig. 6 is a schematic view illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
- Fig. 7 is a block diagram illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
- Fig. 8 is a flowchart illustrating a control method of a smart home appliance according to the second embodiment of the present invention.
- Fig. 9 is a view when a user performs an action for starting a voice recognition by using a mobile device according to a second embodiment of the present invention.
- Fig. 10 is a view illustrating a configuration of a plurality of smart home appliances according to a third embodiment of the present invention.
- Fig. 11 is a view when a user makes a voice on a plurality of smart home appliances according to a third embodiment of the present invention.
- Fig. 12 is a view when a plurality of smart home appliances operate by using a mobile device according to a fourth embodiment of the present invention.
- Fig. 13 is a view illustrating a configuration of a smart home appliance or a mobile device and an operating method thereof according to an embodiment of the present invention.
- Fig. 14 is a view illustrating a message output of a display unit according to an embodiment of the present invention.
- Fig. 15 is a view illustrating a message output of a display unit according to another embodiment of the present invention.
- Figs. 16A and 16B are views illustrating a security setting for voice recognition function performance according to a fifth embodiment of the present invention.
- Fig. 17 is a view illustrating a configuration of a voice recognition system and its operation method according to a sixth embodiment of the present invention.
- Figs. 18 to 20 are views illustrating a message output of a display unit according to a sixth embodiment of the present invention.
- Figs. 21A to 23 are views illustrating a message output of a display unit according to another embodiment of the present invention.
- Fig. 24 is a block diagram illustrating a configuration of an air conditioner as one example of a smart home appliance according to a seventh embodiment of the present invention.
- Fig. 25 is a flowchart illustrating a control method of a smart home appliance according to a seventh embodiment of the present invention.
- Fig. 26 is a view illustrating a display unit of a smart home appliance.
- Fig. 27 is a view illustrating a configuration of a cooker as another example of a smart home appliance according to a seventh embodiment of the present invention.
- Figs. 28A and 28B are views illustrating a display unit of the cooker.
- Fig. 29 is a view illustrating a configuration of a washing machine as another example of a smart home appliance according to an eighth embodiment of the present invention.
- Fig. 30 is a flowchart illustrating a control method of a smart home appliance according to an eighth embodiment of the present invention.
- Fig. 31 is a block diagram illustrating a configuration of a voice recognition system according to a ninth embodiment of the present invention.
- Fig. 2 is a view illustrating a configuration of an air conditioner as one example of a smart appliance according to a first embodiment of the present invention.
- Figs. 3 and 4 are block diagrams illustrating a configuration of the air conditioner.
- Although an air conditioner is described here as one example of a smart home appliance, it should be understood that the ideas related to the voice recognition or communication (information provision) procedure, apart from the setting functions unique to an air conditioner, may be applied to other smart home appliances, for example, cleaners, cookers, washing machines, or refrigerators.
- An air conditioner 10 according to the first embodiment of the present invention includes a suction part 22, discharge parts 25 and 42, and a case 20 forming its external appearance.
- the air conditioner 10 shown in Fig. 2 may be an indoor unit installed in an indoor space to discharge air.
- the suction part 22 may be formed at the rear of the case 20.
- the discharge parts 25 and 42 include a main discharge part 25 through which the air suctioned through the suction part 22 is discharged to the front or side of the case 20 and a lower discharge part 42 discharging the air downwardly.
- the main discharge parts 25 may be formed at both sides of the case 20 and their opening/closing degree may be adjusted by a discharge vane 26.
- the discharge vane 26 may be rotatably provided at one side of the main discharge part 25.
- the opening/closing degree of the lower discharge part 42 may be adjusted by a lower discharge vane 44.
- a vertically movable upper discharge device 30 may be provided at an upper part of the case 20.
- when the air conditioner 10 is turned on, the upper discharge device 30 may move to protrude upward from an upper end of the case 20, and when the air conditioner 10 is turned off, the upper discharge device 30 may move downward and be received inside the case 20.
- An upper discharge part 32 discharging air is defined at the front of the upper discharge device 30 and an upper discharge vane 34 adjusting a flow direction of the discharged air is equipped inside the upper discharge device 30.
- the upper discharge vane 34 may be provided rotatably.
- a voice input unit 110 receiving a user’s voice is equipped on at least one side of the case 20.
- the voice input unit 110 may be equipped at the left side or right side of the case 20.
- the voice input unit 110 may be referred to as a “voice collection unit” in that it is possible to collect voice.
- the voice input unit 110 may include a microphone.
- the voice input unit 110 may be disposed at the rear of the main discharge part 25 so as not to be affected by the air discharged from the main discharge part 25.
- the air conditioner 10 may further include a body detection unit 36 detecting a body in an indoor space or a body’s movement.
- the body detection unit 36 may include at least one of an infrared sensor and a camera.
- the body detection unit 36 may be disposed at the front part of the upper discharge device 30.
- the air conditioner 10 may further include a capsule injection device 60 through which a capsule with aroma is injected.
- the capsule injection device 60 may be installed at the front of the air conditioner 10 to be withdrawable.
- a capsule release device (not shown) disposed inside the air conditioner 10 pops the capsule, and a predetermined aroma fragrance is diffused.
- the diffused aroma fragrance may be discharged to the outside of the air conditioner 10 together with the air discharged from the discharge parts 25 and 42.
- the aroma fragrance may be provided variously, for example, as lavender, rosemary, or peppermint.
- the air conditioner 10 may include a filter unit 115 for removing noise sound from a voice inputted through the voice input unit 110.
- through the filter unit 115, the voice may be filtered into a frequency band suitable for voice recognition.
- the air conditioner 10 includes control units 120 and 150 recognizing information for an operation of the air conditioner 10 from the voice information passing through the filter unit 115.
- the control units 120 and 150 include a main control unit 120 controlling an operation of the driving unit 140 for an operation of the air conditioner 10, and a display control unit 150 communicably connected to the main control unit 120 and controlling the output unit 160 to display operation information of the air conditioner 10 to the outside.
- the driving unit 140 may include a compressor or a blow fan.
- the output unit 160 includes a display unit displaying operation information of the air conditioner 10 as an image and a voice output unit outputting the operation information as a voice.
- the voice output unit may include a speaker.
- the voice output unit may be disposed at one side of the case 20 and may be provided separated from the voice input unit 110.
- the air conditioner 10 may include a memory unit 130 in which, among voices inputted through the voice input unit 110, voice information related to an operation of the air conditioner 10 and voice information irrelevant to that operation are mapped and stored in advance.
- First voice information, second voice information, and third voice information are stored in the memory unit 130.
- frequency information defining the first to third voice information may be stored in the memory unit 130.
- the first voice information may be understood as voice information related to an operation of the air conditioner 10, that is, keyword information.
- an operation of the air conditioner 10 corresponding to the inputted first voice information may be performed or stopped.
- the memory unit 130 may include text information corresponding to the first voice information and information on a setting function corresponding to the text information.
- when the first voice information corresponds to ON of the air conditioner 10, an operation of the air conditioner 10 starts as the first voice information is recognized.
- when the first voice information corresponds to OFF of the air conditioner 10, an operation of the air conditioner 10 stops as the first voice information is recognized.
- when the first voice information corresponds to one operation mode of the air conditioner 10, that is, air conditioning, heating, ventilation, or dehumidification, a corresponding operation mode may be performed.
- the first voice information and an operation method (on/off and an operation mode) of the air conditioner 10 corresponding to the first voice information may be mapped in advance in the memory unit 130.
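The pre-mapped relation between first voice information and an operation method can be pictured as a simple lookup table. This is a minimal sketch, not the patent's implementation; the command phrases, operation names, and the function name are illustrative assumptions.

```python
# Hypothetical sketch of the pre-mapped table described above: text
# recognized from first voice information is mapped to an operation
# method of the air conditioner. The phrases and operation names are
# illustrative assumptions, not taken from the patent.
FIRST_VOICE_MAP = {
    "turn on air conditioner": ("power", "on"),
    "turn off air conditioner": ("power", "off"),
    "air conditioning": ("mode", "air conditioning"),
    "heating": ("mode", "heating"),
    "ventilation": ("mode", "ventilation"),
    "dehumidification": ("mode", "dehumidification"),
}

def resolve_operation(text):
    """Return the (category, value) operation mapped to recognized text,
    or None when the text is not first voice information."""
    return FIRST_VOICE_MAP.get(text.strip().lower())
```

A text that resolves to None would fall through to the second/third voice information handling described below.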
- the second voice information may include frequency information similar to the first voice information related to an operation of the air conditioner 10 but is substantially understood as voice information unrelated to an operation of the air conditioner 10.
- frequency information similar to the first voice information may be understood as frequency information showing a frequency difference from the first voice information within a setting range.
- when the second voice information is inputted, corresponding voice information may be filtered as noise information. That is, the main control unit 120 may recognize the second voice information but may not perform an operation of the air conditioner 10.
- the third voice information may be understood as voice information irrelevant to an operation of the air conditioner 10.
- the third voice information may be understood as frequency information showing a frequency difference from the first voice information outside the setting range.
- the second voice information and the third voice information may be referred to as “unrelated information” in that they are voice information unrelated to an operation of the air conditioner 10.
- in such voice recognition, since voice information related to an operation of a home appliance and voice information unrelated to that operation are mapped and stored in advance in a smart home appliance, voice recognition may be performed effectively.
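The three-way classification above (keyword, near-keyword noise, unrelated) can be sketched as a frequency-distance check against the stored first voice information. The tolerance and threshold values are illustrative assumptions, not values from the patent.

```python
def classify_voice(freq, first_voice_freqs, setting_range, match_tol=1.0):
    """Classify collected voice information by its frequency distance
    from the stored first voice information (keyword information).

    Returns "first" when the frequency matches a stored keyword
    frequency (within match_tol), "second" when the difference lies
    inside the setting range (recognized but filtered as noise), and
    "third" when the difference lies outside the setting range
    (unrelated to the appliance's operation).
    """
    best = min(abs(freq - f) for f in first_voice_freqs)
    if best <= match_tol:
        return "first"
    if best <= setting_range:
        return "second"
    return "third"
```

Only a "first" result would trigger an operation; "second" and "third" are treated as the unrelated information described above.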
- the air conditioner 10 includes a camera 50 as a capturing unit capturing a user’s face.
- the camera 50 may be installed at the front part of the air conditioner 10.
- the air conditioner 10 may further include a face detection unit 180 recognizing that a user looks at the air conditioner 10 in order for voice recognition on the basis of an image captured through the camera 50.
- the face detection unit 180 may be installed inside the camera 50 as one function of the camera 50, or installed separately in the case 20.
- the face detection unit 180 recognizes that a user stares at the air conditioner 10 in order for voice input (that is, a staring state).
- the air conditioner 10 further includes a voice recognition unit 170 extracting text information from a voice collected through the voice input unit 110 and recognizing a setting function of the air conditioner 10 on the basis of the extracted text information.
- the information recognized by the voice recognition unit 170 or the face detection unit 180 may be delivered to the main control unit 120.
- the main control unit 120 may recognize a user's intention of using a voice recognition service on the basis of the information recognized by the voice recognition unit 170 and the face detection unit 180, and may then enter a standby state.
- although the voice recognition unit 170, the face detection unit 180, and the main control unit 120 are separately configured as shown in Fig. 4, the voice recognition unit 170 and the face detection unit 180 may be installed as one component of the main control unit 120. That is, the voice recognition unit 170 may be understood as a function component of the main control unit 120 performing a voice recognition function, and the face detection unit 180 may be understood as a function component of the main control unit 120 performing a face detection function.
- Fig. 5 is a flowchart illustrating a control method of a smart home appliance according to a first embodiment of the present invention.
- a voice recognition service may be set to be turned on.
- the voice recognition service is understood as a service controlling an operation of the air conditioner 10 by inputting a voice command.
- a predetermined input unit (not shown) may be manipulated.
- the voice recognition service may also be set by a user to be turned off in operation S11.
- a user speaks a predetermined voice command in operation S12.
- the spoken voice is collected through the voice input unit 110. It is determined whether keyword information for activating voice recognition is included in the collected voice.
- the keyword information is understood as information that a user may input to start a voice recognition service. That is, even when the voice recognition service is set to be turned on, the keyword information may be inputted in order to represent a user’s intention of using a voice recognition service after the current time.
- the keyword information may include pre-mapped information such as “turn on air conditioner” or “voice recognition start”.
- when the keyword information is included in a voice command inputted through the voice input unit 110, it is recognized whether a user truly has an intention for a voice command.
- a request message for re-inputting keyword information may be outputted from the output unit 160 of the air conditioner 10.
- whether a user truly has an intention for a voice command may be determined based on whether the user's face is detected for more than a setting time through the camera 50.
- the setting time may be 2 sec to 3 sec.
- the face detection unit 180 may recognize whether a user stares at the camera 50 for a setting time.
- when the user's face is detected for more than the setting time, the main control unit 120 determines that the user has an intention for a voice command and enters a voice recognition standby state in operations S14 and S15.
- in the voice recognition standby state, a filtering process is performed on all voice information collected through the voice input unit 110 of the air conditioner 10, and then whether there is a voice command is recognized.
- the termination of the voice recognition standby state may be performed when a user inputs keyword information for voice recognition termination as voice or manipulates an additional input unit in operation S16.
- alternatively, the user face detection operation may be performed first, and then the voice keyword information recognition operation may be performed.
- although in this embodiment the voice recognition standby is entered when both conditions, the voice keyword information recognition and the user face detection, are satisfied, the voice recognition standby may also be entered when any one of the two conditions is satisfied.
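The standby-entry decision described in operations S13-S15 can be sketched as a small predicate. This is a minimal illustration under assumed parameter names; the 2-second default stands in for the 2-3 second setting time mentioned above, and the `require_both` flag models the two-condition versus one-condition variants.

```python
def should_enter_standby(keyword_recognized, face_detected_secs,
                         setting_time=2.0, require_both=True):
    """Decide whether to enter the voice recognition standby state.

    Mirrors operations S13-S15: keyword information is recognized and
    the user's face is detected for more than the setting time (2-3 s).
    With require_both=False, either condition alone suffices, as in the
    variant mentioned above. The threshold value is an assumption.
    """
    face_ok = face_detected_secs >= setting_time
    if require_both:
        return keyword_recognized and face_ok
    return keyword_recognized or face_ok
```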
- Fig. 6 is a schematic view illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
- the voice recognition system includes a mobile device 200 receiving a user’s voice input and an air conditioner 10 operating and controlled based on a voice inputted to the mobile device 200.
- the air conditioner 10 is just one example of a smart home appliance. Thus, the idea of this embodiment is applicable to other smart home appliances.
- the mobile device 200 may include a smartphone, a remote controller, and a tab book (tablet computer).
- the mobile device 200 may include a voice input unit 220 through which a voice may be inputted, an input unit 210 available for user manipulation, a display unit 260 equipped at the front part to display information on an operation state of the mobile device 200 or information provided from the mobile device 200, and a movement detection unit 230 detecting a movement of the mobile device 200.
- the voice input unit 220 may include a microphone.
- the voice input unit 220 may be disposed at a lower part of the mobile device 200.
- the input unit 210 may include a user press manipulation available button or a user touch manipulation available touch panel.
- the movement detection unit 230 may include an acceleration sensor or a gyro sensor.
- the acceleration sensor or the gyro sensor may detect information on an inclined angle of the mobile device 200, for example, an inclined angle with respect to the ground.
- the acceleration sensor or the gyro sensor may detect a different angle value according to the standing degree of the mobile device 200.
- the movement detection unit 230 may include an illumination sensor.
- the illumination sensor may detect the intensity of external light collected according to the standing degree of the mobile device 200. For example, when a user stands the mobile device 200 up in order to stare at the display unit 260 of the mobile device 200 and lays the mobile device 200 down in order to input a voice command through the voice input unit 220, the illumination sensor may detect a different light intensity according to the standing degree of the mobile device 200.
- the mobile device 200 further includes a control unit 250 receiving information detected by the input unit 210, the voice input unit 220, the display unit 260, and the movement detection unit 230 and recognizing a user's voice command input intention, and a communication module 270 communicating with the air conditioner 10.
- a communication module of the air conditioner 10 and the communication module 270 of the mobile device 200 may directly communicate with each other. That is, direct communication is possible without going through a wireless access point by using a Wi-Fi Direct technique, an Ad-Hoc mode (or network), or Bluetooth.
- Wi-Fi Direct may mean a technique for communicating at high speed by using communication standards such as 802.11a, b, g, and n regardless of the installation of an access point. This technique is understood as a communication technique for connecting the air conditioner 10 and the mobile device 200 wirelessly by using Wi-Fi without an internet network.
- the Ad-Hoc mode (or Ad-Hoc network) is a communication network including only mobile hosts without a fixed wired network. Since there are no limitations on the movement of a host and neither a wired network nor a base station is required, fast network configuration is possible and its cost is low. That is, wireless communication is possible without an access point. Accordingly, in the Ad-Hoc mode, wireless communication is possible between the air conditioner 10 and the mobile device 200 without an access point.
- in Bluetooth communication, which is a short-range wireless communication method, wireless communication is possible within a specific range through a pairing process between a communication module (a first Bluetooth module) of the air conditioner 10 and the communication module 270 (a second Bluetooth module) of the mobile device 200.
- the air conditioner 10 and the mobile device 200 may communicate with each other through an access point and a server (not shown) or a wired network.
- the control unit 250 may transmit to the air conditioner 10 information that the voice recognition service standby state is entered through the communication module 270.
- Fig. 8 is a flowchart illustrating a control method of a smart home appliance according to a second embodiment of the present invention.
- Fig. 9 is a view when a user performs an action for starting a voice recognition by using a mobile device according to a second embodiment of the present invention.
- a voice recognition service may be set to be turned on in operation S21.
- a user may input or speak a command for a voice recognition standby preparation state through a manipulation of the input unit 210 or the voice input unit 220.
- when an input of the input unit 210 is recognized, or it is recognized that keyword information for voice recognition standby is included in a voice collected through the voice input unit 220, it is determined that a "voice recognition standby preparation state" is entered.
- the keyword information, as first voice information stored in the memory unit 130, is understood as information that a user may input to start a voice recognition service in operations S22 and S23.
- Whether a user truly has an intention on voice command may be determined based on whether a detection value is changed in the movement detection unit 230. For example, when the movement detection unit 230 includes an acceleration sensor or a gyro sensor, whether a value detected by the acceleration sensor or the gyro sensor is changed is recognized. The change may depend on whether an inclined value (or range) at which the mobile device 200 stands changes into an inclined value (or range) at which the mobile device 200 lies.
- in order for the user to stare at the display unit 260, the mobile device 200 may stand up somewhat. At this point, the angle that the mobile device 200 makes with the ground may be θ1.
- when a user inputs a predetermined voice command through the voice input unit 220 disposed at a lower part of the mobile device 200 while gripping the mobile device 200, the mobile device 200 may lie down somewhat. At this point, the angle that the mobile device 200 makes with the ground may be θ2. Then, θ1 > θ2. Values for θ1 and θ2 may be predetermined within a predetermined setting range.
- when the movement detection unit 230 includes an illumination sensor, it is recognized whether a value detected by the illumination sensor is changed. The change may depend on whether the intensity of light (first intensity) collected when the mobile device 200 stands changes into the intensity of light (second intensity) collected when the mobile device 200 lies.
- the second intensity may be greater than the first intensity. That is, the intensity of light collected from the outside when the mobile device 200 lies may be greater than the intensity of light collected from the outside when the mobile device 200 stands.
- values for the first intensity and the second intensity may be predetermined within a predetermined setting range.
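The two movement-detection variants above (tilt angle θ1 → θ2, or a rise in collected light intensity) can be sketched as follows. The threshold angles and intensity ratio are illustrative stand-ins for the predetermined setting ranges, which the patent does not specify.

```python
def intent_from_tilt(prev_angle, cur_angle, stand_min=60.0, lie_max=30.0):
    """Infer a voice-command intention from the tilt change described
    above: the device moves from a standing angle (theta1) toward a
    lying angle (theta2), with theta1 > theta2. The two thresholds are
    illustrative stand-ins for the predetermined setting ranges."""
    return prev_angle >= stand_min and cur_angle <= lie_max

def intent_from_light(prev_lux, cur_lux, ratio=1.5):
    """Infer the same intention from the illumination sensor: the light
    collected while the device lies (second intensity) is greater than
    while it stands (first intensity). The ratio is an assumption."""
    return cur_lux >= prev_lux * ratio
```

Either predicate returning True corresponds to the detection-value change that confirms the user's voice command intention.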
- a voice recognition standby state may be entered.
- in the voice recognition standby state, a filtering process is performed on all voice information collected through the voice input unit 220 of the mobile device 200, and then whether there is a voice command is recognized.
- the termination of the voice recognition standby state may be performed when a user inputs keyword information for voice recognition termination as a voice or manipulates an additional input unit. In such a way, since whether to activate the voice recognition service is determined not only by recognizing keyword information from a simple voice input but also by detecting a movement of the mobile device, voice misrecognition may be prevented in operation S26.
- although in this embodiment the movement detection operation of the mobile device is performed after the user's button (touch) input or voice input operation, the user's button (touch) input or voice input operation may instead be performed after the movement detection operation of the mobile device.
- also, although the voice recognition standby state is entered when both conditions, the user's button (touch) input or voice input operation and the movement detection operation of the mobile device, are satisfied, the voice recognition standby state may be entered when any one of the two conditions is satisfied.
- Fig. 10 is a view illustrating a configuration of a plurality of smart home appliances according to a third embodiment of the present invention.
- Fig. 11 is a view when a user makes a voice on a plurality of smart home appliances according to a third embodiment of the present invention.
- a voice recognition system 10 includes a plurality of voice recognition available smart home appliances 310, 320, 330, and 340.
- the plurality of smart home appliances 310, 320, 330, and 340 may include a cleaner 310, a cooker 320, an air conditioner 330, and a refrigerator 340.
- the plurality of smart home appliances 310, 320, 330, and 340 may be in a standby state for receiving a voice.
- the standby state may be entered when a user sets a voice recognition mode in each smart home appliance. Then, the setting of the voice recognition mode may be accomplished by an input of a predetermined input unit or an input of a set voice.
- the plurality of smart home appliances 310, 320, 330, and 340 may be disposed together in a predetermined space. In this case, even when a user speaks a predetermined voice command toward a specific one among the plurality of smart home appliances 310, 320, 330, and 340, another home appliance may react to the voice command. Accordingly, this embodiment is characterized in that when a user makes a predetermined voice, a target home appliance to be commanded is estimated or determined appropriately.
- each of the smart home appliances 310, 320, 330, and 340 includes a voice input unit 510, a voice recognition unit 520, and a command recognition unit 530.
- the voice input unit 510 may collect voices that a user makes.
- the voice input unit 510 may include a microphone.
- the voice recognition unit 520 extracts a text from the collected voice.
- the command recognition unit 530 determines whether there is a text where a specific word related to an operation of each home appliance is used by using the extracted text.
- the command recognition unit 530 may include a memory storing information related to the specific word.
- when such a specific word is recognized, the command recognition unit 530 may recognize that the corresponding home appliance is the home appliance that is the user's command target.
- the voice recognition unit 520 and the command recognition unit 530 are functionally distinguished in this description but may be equipped inside one controller.
- a home appliance recognized as a command target may output a message asking whether it is the user's command target. For example, when the home appliance recognized as a command target is the air conditioner 330, a voice or text message "turn on air conditioner?" may be outputted. Herein, the outputted voice or text message is referred to as a "recognition message".
- in response, a user may input a recognition or confirmation message that the air conditioner is the target, for example, a concise message "air conditioner operation" or "OK".
- the inputted voice message is referred to as "confirmation message”.
- when the specific word is not recognized, the command recognition unit 530 may recognize that the corresponding home appliances, that is, the cleaner 310, the cooker 320, and the refrigerator 340, are excluded from the user's command target. Then, even when a user's voice is inputted for a setting time after the recognition, the home appliances excluded from the command target are not recognized as the user's command target and do not react to the user's voice.
- each home appliance may output a confirmation message asking whether it is the user's command target. Then, a user may specify a home appliance that is the command target by inputting a voice for the type of home appliance to be commanded among the plurality of home appliances.
- the air conditioner 330 recognizes that a specific word “air conditioning” is used and also recognizes that the home appliance itself is a command target.
- information on the text "air conditioning” may be stored in the memory of the air conditioner 330 in advance.
- since the word "air conditioning" is not a specific word of the corresponding home appliances with respect to the cleaner 310, the cooker 320, and the refrigerator 340, it is recognized that the home appliances 310, 320, and 340 are excluded from the command target.
- the cooker 320, the air conditioner 330, and the refrigerator 340 may recognize that a specific word "temperature” is used. That is, the plurality of home appliances 320, 330, and 340 may recognize that they are the command targets.
- the plurality of home appliances 320, 330, and 340 may output a message asking whether the user's command target is each of the home appliances 320, 330, and 340 themselves. Then, as a user inputs a voice for a specific home appliance, for example, "air conditioner", it is specified that the command target is the air conditioner 330.
- once a command target is specified in the above manner, an operation of a home appliance may be controlled through an interactive communication between the user and the corresponding home appliance.
- a command subject may be recognized by extracting the feature (specific word) of a voice that a user makes and only a specific electronic product among a plurality of electronic products responds according to the recognized command subject. Therefore, miscommunication may be prevented during an operation of an electronic product.
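The specific-word matching that identifies command targets can be sketched with a per-appliance word table. The words chosen here ("air conditioning", "temperature", etc.) follow the examples above; the rest of the table and the function name are illustrative assumptions.

```python
# Illustrative specific-word tables; in the patent each command
# recognition unit stores its own appliance's words in memory.
SPECIFIC_WORDS = {
    "cleaner": {"clean", "suction"},
    "cooker": {"cook", "recipe", "temperature"},
    "air conditioner": {"air conditioning", "cooling", "temperature"},
    "refrigerator": {"refrigerate", "freeze", "temperature"},
}

def command_targets(text):
    """Return the appliances whose specific words appear in the text
    extracted by the voice recognition unit. More than one match means
    each candidate should output a recognition message and wait for the
    user's confirmation message naming one appliance."""
    text = text.lower()
    return sorted(name for name, words in SPECIFIC_WORDS.items()
                  if any(word in text for word in words))
```

A voice containing "air conditioning" matches only the air conditioner, while "temperature" matches the cooker, air conditioner, and refrigerator, reproducing the ambiguous case described above.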
- Fig. 12 is a view when a plurality of smart home appliances operate by using a mobile device according to a fourth embodiment of the present invention.
- a voice recognition system 10 includes a mobile device 400 receiving a user’s voice input, a plurality of home appliances 310, 320, 330, and 340 operating and controlled based on a voice inputted to the mobile device 400, and a server 450 communicably connecting the mobile device 400 and the plurality of home appliances 310, 320, 330, and 340.
- the mobile device 400 is equipped with the voice input unit 510 described with reference to Fig. 11 and the server 450 includes the voice recognition unit 520 and the command recognition unit 530.
- the mobile device 400 may include an application connected to the server 450. Once the application is executed, a voice input mode for a user's voice input may be activated in the mobile device 400.
- the inputted voice is delivered to the server 450 and the server 450 determines which home appliance is the target of a voice command as the voice recognition unit 520 and the command recognition unit 530 operate.
- the server 450 When a specific home appliance is recognized as the command target on the basis of a determination result, the server 450 notifies the specific home appliance of the recognized result.
- the home appliance notified of the result responds to a user's command.
- when the air conditioner 330 is recognized as a command target and notified of the result, it may output a recognition message such as "turn on air conditioner?". Accordingly, a user may input a confirmation message such as "OK" or "air conditioner operation". In relation to this, the contents described with reference to Fig. 11 apply.
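The server-side flow of this embodiment can be sketched as a dispatcher that matches the delivered text against each registered appliance and notifies only the single matching one. The class, method names, and messages are illustrative assumptions, not the patent's implementation.

```python
class Appliance:
    """Stand-in for a smart home appliance registered with the server."""
    def __init__(self, name, specific_words):
        self.name = name
        self.specific_words = set(specific_words)

    def matches(self, text):
        return any(word in text for word in self.specific_words)

    def notify(self, text):
        # The notified appliance outputs a recognition message and then
        # waits for the user's confirmation message ("OK", ...).
        return "turn on %s?" % self.name

def dispatch(appliances, text):
    """Server-side flow of Fig. 12: determine the command target from
    the text delivered by the mobile device and notify only that
    appliance; otherwise ask the user to specify one."""
    text = text.lower()
    targets = [a for a in appliances if a.matches(text)]
    if len(targets) == 1:
        return targets[0].notify(text)
    return "which appliance do you mean?"
```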
- Fig. 13 is a view illustrating a configuration of a smart home appliance or a mobile device and an operating method thereof according to an embodiment of the present invention. Configurations shown in Fig. 13 may be equipped in smart home appliances or mobile devices. Hereinafter, smart home appliances will be described for an example.
- a smart home appliance includes a voice input unit 510 receiving a user’s voice input and a voice recognition unit 520 extracting a text from a voice collected through the voice input unit 510.
- the voice recognition unit 520 may include a memory unit where the frequency of a voice and a text are mapped.
- the smart home appliance may further include a region recognition unit 540 extracting the intonation of a voice inputted from the voice input unit 510 to determine the local color of the voice, that is, which region's dialect is used.
- the region recognition unit 540 may include a database for dialects used in a plurality of regions. The database may store information on the intonation recognized when speaking in dialect, that is, unique frequency changes.
- the text extracted through the voice recognition unit 520 and the information on a region determined through the region recognition unit 540 may be delivered to the control unit 550.
- the smart home appliance may further include a memory unit 560 mapping the text extracted by the voice recognition unit 520 and a function corresponding to the text and storing them.
- the control unit 550 may recognize a function corresponding to the text extracted by the voice recognition unit 520 on the basis of the information stored in the memory unit 560. Then, the control unit 550 may control a driving unit 590 equipped in the home appliance in order to perform the recognized function.
- the driving unit 590 may include a suction motor of a cleaner, a motor or a heater of a cooker, a compressor motor of an air conditioner, or a compressor motor of a refrigerator.
- the home appliance further includes a display unit 570 outputting region customized information to a screen, or a voice output unit 580 outputting it as a voice, on the basis of the setting function corresponding to the text extracted by the voice recognition unit 520 and the region information determined by the region recognition unit 540.
- the combined display unit 570 and voice output unit 580 may be referred to as an "output unit".
- the setting function may include a plurality of functions divided according to regions and one function matching the region information determined by the region recognition unit 540 among the plurality of functions may be outputted.
- combined information of the recognized function and the determined local color may be outputted to the display unit 570 or the voice output unit 580 (region customized information providing service).
- the smart home appliance further includes a selectable mode setting unit 565 to perform a mode for the region customized information providing service.
- a user may use the region customized information providing service when the mode setting unit 565 is in the "ON" state.
- a user may not use the region customized information providing service when the mode setting unit 565 is in the "OFF" state.
- Fig. 14 is a view illustrating a message output of a display unit according to an embodiment of the present invention.
- the display unit 570 may be equipped in the cooker 320, the refrigerator 340, or the mobile device 400.
- the display unit 570 equipped at the refrigerator 340 is described for an example.
- the cooker 320 or the refrigerator 340 may provide information on a recipe for a predetermined cooking to a user.
- the cooker 320 or the refrigerator 340 may include a memory unit storing recipe information on at least one cooking.
- a user may provide an input when the mode setting unit 565 of the refrigerator 340 is in the ON state.
- a guide message such as "What can I help you with?", that is, a voice input request message, may be displayed on a screen of the display unit 570.
- the voice input request message may be outputted as a voice through the voice output unit 580.
- a user may input a specific recipe for the voice input request message, for example, as shown in Fig. 14, a voice “beef radish soup recipe”.
- the refrigerator 340 receives and recognizes a user’s voice command and extracts a text corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “beef radish soup recipe”.
- the refrigerator 340 extracts the intonation from a voice inputted by a user and recognizes a frequency change corresponding to the extracted intonation, so that it may recognize a dialect for a specific region.
- when the inputted voice has a Gyeongsang-do intonation, the refrigerator 340 recognizes the Gyeongsang-do dialect and prepares to provide a recipe optimized for the Gyeongsang-do region. That is, since there are a plurality of beef radish soup recipes according to regions, the one recipe matching the recognized region, Gyeongsang-do, may be recommended.
- the refrigerator 340 may recognize that a user in the Gyeongsang-do region wants to receive the “beef radish soup recipe” and may then read information on a Gyeongsang-do style radish recipe to provide it to the user.
- the display unit 570 may display a message “here is Gyeongsang-do style red beef radish soup recipe”.
- a voice message may be outputted through the voice output unit 580.
- when the smart home appliance is an air conditioner for conditioning an indoor space and recognizes that a user’s region is a cold region such as Gangwon-do, as the user inputs a voice command “temperature down”, under the assumption that a user in a cold region likes cold weather, the smart home appliance may operate to set a relatively low temperature as a setting temperature. Then, information on contents related to adjusting the setting temperature to a relatively low temperature, for example, 20°C, may be outputted to the output units 570 and 580.
- the dialect that a user speaks is recognized and region customized information is provided on the basis of the recognized dialect information. Therefore, usability may be improved.
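- the dialect-based recommendation described above can be sketched as follows; a minimal sketch only, in which the intonation patterns, dialect profiles, and recipe table are illustrative assumptions, not taken from the embodiment.

```python
# Illustrative sketch only: the intonation patterns, region names, and
# recipe table below are assumptions, not taken from the embodiment.

REGION_RECIPES = {
    "Gyeongsang-do": "Gyeongsang-do style beef radish soup recipe",
    "Jeolla-do": "Jeolla-do style beef radish soup recipe",
    "standard": "standard beef radish soup recipe",
}

# Stand-in for the stored mapping between frequency-change (intonation)
# patterns and regional dialects.
DIALECT_PROFILES = {
    "rising-final": "Gyeongsang-do",
    "falling-final": "Jeolla-do",
}

def recognize_region(intonation_pattern):
    """Map an extracted intonation pattern to a region; fall back to
    the standard language when no dialect profile matches."""
    return DIALECT_PROFILES.get(intonation_pattern, "standard")

def recommend_recipe(command_text, intonation_pattern):
    """Combine the recognized text with the recognized region to pick
    the one recipe, among plural per-region recipes, that matches."""
    region = recognize_region(intonation_pattern)
    if "beef radish soup" in command_text:
        return REGION_RECIPES[region]
    return None
```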
- Fig. 15 is a view illustrating a message output of a display unit according to another embodiment of the present invention.
- the display unit 570 may be equipped in the air conditioner 330 or the mobile device 400.
- a region customized information providing service is used to input a command for an operation of the air conditioner 330
- a user may provide an input when the mode setting unit 565 of the air conditioner 330 is in ON state.
- a guide message such as “what can I help you”, that is, a voice input request message
- a voice input request message may be displayed on a screen of the display unit 570.
- the voice input request message may be outputted as a voice through the voice output unit 580.
- a user may input a command on an operation of the air conditioner 330, for example, as shown in Fig. 15, a voice “turn on air conditioner (in dialect)”.
- the air conditioner 330 receives and recognizes a user’s voice command and extracts a text corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “turn on air conditioner (in dialect)”.
- the air conditioner 330 extracts the intonation from a voice inputted by a user and recognizes a frequency change corresponding to the extracted intonation, so that it may recognize a dialect for a specific region. For example, when a user inputs a voice “turn on air conditioner” in Jeolla-do accent, the air conditioner 330 may recognize the Jeolla-do dialect and may then generate a response message for a user as the Jeolla-do dialect.
- the air conditioner 330 recognizes that a user in Jeolla-do region wants “air conditioner operation” and reads dialect information on a message that an air conditioner operation is performed from the memory unit 560 to provide it to a user.
- the display unit 570 may output a message using the Jeolla-do dialect, for example, “it is very hot and turn on quickly (in the Jeolla-do dialect)”.
- a voice message may be outputted through the voice output unit 580.
- the dialect that a user speaks is recognized and information to be provided to the user is provided in that dialect on the basis of the recognized dialect information, so that the user may feel a sense of familiarity.
- Figs. 16A and 16B are views illustrating a security setting for voice recognition function performance according to a fifth embodiment of the present invention.
- a user’s security setting is possible in the voice recognition system according to the fifth embodiment of the present invention.
- the security setting may be completed by a smart home appliance directly or by using a mobile device.
- security setting and authentication procedures by using a mobile device are described.
- an input may be provided when the mode setting unit 565 in the mobile device 400 is in ON state.
- an operation for setting an initial security may be performed.
- the mobile device 400 may output a message for a predetermined key word.
- a key word “calendar” may be outputted as a text through a screen of the mobile device 400 or may be outputted as voice through a speaker.
- a first guide message may be outputted.
- the first guide message includes contents prompting the user to input a reply word to the key word.
- the first guide message may include content “please speak the word that comes to mind when looking at the next word”.
- the first guide message may be outputted as a text through a screen of the mobile device 400 or may be outputted as a voice through a speaker.
- a user may input, as a voice, a word to be set as a password.
- a reply word “cat” may be inputted.
- a second guide message notifying that the reply word is stored may be outputted through a screen or a voice.
- a procedure for performing an authentication by inputting a reply word to the key word may be performed.
- when an input is provided while the mode setting unit 565 in the mobile device 400 is in ON state, the mobile device 400 outputs a message for the key word, for example, “calendar”, and outputs a third guide message notifying the need for authentication, for example, “user authentication is required for this function”.
- the message for keyword and the third guide message may be outputted through a screen of the mobile device 400 or a voice.
- a user may input a predetermined set reply word, for example, a voice of “cat”.
- the mobile device 400 may output a fourth guide message notifying that authentication is successful, for example, a text or voice message “authenticated”.
- since a predetermined reply word is configured to be inputted in the usage stage, service access and usage by users other than designated users are limited.
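- the key word / reply word procedure of Figs. 16A and 16B can be sketched as below; a minimal sketch, assuming the reply word is stored as a hash (the class and method names are hypothetical, and the embodiment only requires that the reply word be stored and later compared).

```python
import hashlib

class VoiceSecurity:
    """Minimal sketch of the key-word / reply-word security procedure.
    The hashed storage is an assumption for illustration."""

    def __init__(self, key_word):
        self.key_word = key_word      # e.g. "calendar", shown to the user
        self._reply_hash = None

    def set_reply_word(self, reply_word):
        # Initial security setting: store the user's spoken reply word.
        self._reply_hash = hashlib.sha256(reply_word.encode()).hexdigest()

    def authenticate(self, spoken_reply):
        # Usage stage: compare the spoken reply with the stored one.
        if self._reply_hash is None:
            return False
        return hashlib.sha256(spoken_reply.encode()).hexdigest() == self._reply_hash
```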
- Fig. 17 is a view illustrating a configuration of a voice recognition system and its operation method according to a sixth embodiment of the present invention.
- a smart home appliance includes a voice input unit 510 receiving a user’s voice input and a voice recognition unit 520 extracting a language element as a text from voice information collected through the voice input unit 510.
- the voice recognition unit 520 may include a memory unit where the frequency of a voice and a text are mapped.
- the smart home appliance may further include an emotion recognition unit 540 extracting user’s emotion information from the voice information inputted through the voice input unit 510.
- the emotion recognition unit 540 may include a database where information on user’s voice characteristics and information on an emotion state are mapped.
- the information on user’s voice characteristics may include information on speech spectrum having distinctive characteristics for each user’s emotion.
- the speech spectrum represents a distribution according to a voice’s frequency and may be understood as a patterned frequency distribution for each emotion, that is, for emotions such as joy, anger, and sadness. Accordingly, when a user speaks with a predetermined emotion, the emotion recognition unit 540 interprets a frequency change to extract the user’s emotion.
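- the pattern matching described above can be sketched as a nearest-pattern search; the two-value feature (mean pitch, pitch variance) and the stored patterns are illustrative assumptions standing in for the full speech spectrum patterned per emotion.

```python
# Illustrative sketch only: the feature pairs below are assumptions
# standing in for the patterned speech spectrum stored per emotion.

EMOTION_PATTERNS = {
    # (mean pitch in Hz, pitch variance) per emotion
    "joy":     (220.0, 60.0),
    "anger":   (250.0, 90.0),
    "sadness": (170.0, 20.0),
}

def extract_emotion(mean_pitch, pitch_variance):
    """Return the emotion whose stored pattern is closest (squared
    distance) to the features detected from the user's voice."""
    def distance(pattern):
        ref_pitch, ref_var = pattern
        return (mean_pitch - ref_pitch) ** 2 + (pitch_variance - ref_var) ** 2
    return min(EMOTION_PATTERNS, key=lambda e: distance(EMOTION_PATTERNS[e]))
```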
- the text extracted through the voice recognition unit 520 and the information on an emotion determined through the emotion recognition unit 540 may be delivered to the control unit 550.
- the smart home appliance may further include a memory unit 560 mapping the text extracted by the voice recognition unit 520 and a function corresponding to the text and storing them.
- the control unit 550 may recognize a function corresponding to the text extracted by the voice recognition unit 520 on the basis of the information stored in the memory unit 560. Then, the control unit 550 may control a driving unit 590 equipped in the home appliance in order to perform the recognized function.
- the home appliance further includes a display unit 570 outputting user customized information to a screen or a voice output unit 580 outputting the information as a voice, on the basis of a setting function corresponding to the text extracted by the voice recognition unit 520 and the emotion information extracted by the emotion recognition unit 540.
- the setting function may include a plurality of functions divided according to user’s emotions and one function matching the emotion information determined by the emotion recognition unit 540 among the plurality of functions may be outputted.
- the display unit 570 or the voice output unit 580 may output combined information of a function corresponding to the text and user’s emotion information (user customized information providing service).
- the smart home appliance further includes a selection available mode setting unit 565 to perform a mode for the user customized information providing service.
- a user may use the user customized information providing service when the mode setting unit 565 is in “ON” state.
- a user may not use the user customized information providing service when the mode setting unit 565 is in “OFF” state.
- FIGs. 18 to 20 are views illustrating a message output of a display unit according to a sixth embodiment of the present invention.
- a display unit 570 shown in Figs. 18 to 20 may be equipped in the air conditioner 330 or the mobile device 400.
- the display unit 570 equipped at the air conditioner 330 is described for an example.
- when a user customized information providing service is used by using the air conditioner 330 to condition the air in an indoor space, a user may provide an input when the mode setting unit 565 is in ON state.
- a guide message such as “what can I help you?”, that is, a voice input request message, may be displayed on a screen of the display unit 570.
- the voice input request message may be outputted as a voice through the voice output unit 580.
- a user may input a specific operation command for the voice input request message, for example, as shown in Fig. 18, a voice “air conditioner start”.
- the air conditioner 330 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “air conditioner start”.
- the air conditioner 330 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- information on a frequency change detected from a user’s voice may be compared to information on a speech spectrum having characteristics for people’s each emotion.
- corresponding information may be matched based on the comparison result and accordingly, the emotion information that a user’s voice carries may be obtained.
- the emotion recognition unit 540 may recognize that a user makes a voice with an angry emotion from a frequency change detected from the user’s voice.
- the air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode.
- the display unit 570 may output a message “oh! very hot? air conditioner start with lowest temperature and direct wind?”
- the direct wind is understood as a mode in which the discharge direction of air is formed directly toward the position of a user detected through the body detection unit 36 of the air conditioner 330. That is, a setting temperature is set to the lowest temperature to perform an air conditioning function and cool wind reaches a user directly, so that the user may feel cool instantly.
- when a voice accepting or selecting the outputted message, for example, “yes”, is inputted, the air conditioner 330 recognizes this and operates in the recommended mode.
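- the combination of the first voice information (text) and the second voice information (emotion) described above can be sketched as a simple lookup; the mode names and the command/emotion pairs are illustrative assumptions drawn loosely from Figs. 18 to 20, not a definitive implementation.

```python
# Illustrative sketch only: mode names and (command, emotion) pairs
# are assumptions loosely following Figs. 18 to 20.

RECOMMENDATIONS = {
    # (recognized command text, recognized emotion) -> recommended mode
    ("air conditioner start", "anger"):   "lowest temperature + direct wind",
    ("temperature up",        "cold"):    "26 degrees + indirect wind",
    ("air conditioner start", "sadness"): "cooling + aroma function",
}

def recommend_mode(command_text, emotion):
    """Look up the one function matching both the first voice
    information (text) and the second voice information (emotion);
    fall back to performing the plain command."""
    return RECOMMENDATIONS.get((command_text, emotion), command_text)
```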
- a user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice
- a user may input a specific command for the voice input request message, for example, a voice “temperature up”
- the air conditioner 330 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “temperature up”.
- the air conditioner 330 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- the emotion recognition unit 540 may recognize that a user makes a voice with a trembling voice due to cold from a frequency change detected from the user’s voice.
- the air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode. That is, as shown in Fig. 19, as an operation of the air conditioner 330 starts according to a user’s command, an operation mode is recommended in consideration of a cold trembling state. For example, the display unit 570 may output a message “cold? set air conditioner temperature to 26 degrees. execute action detection indirect wind?”.
- the indirect wind is understood as a mode in which the discharge direction of air is indirectly provided toward a user by avoiding the position of the user detected through the body detection unit 36 (see Fig. 2) of the air conditioner 330. That is, a setting temperature rises to a relatively high temperature to perform a cooling or heating function and wind reaches a user indirectly, so that the user may feel pleasant without feeling cold.
- when a voice accepting the outputted message, for example, “yes”, is inputted, the air conditioner 330 recognizes this and operates in the recommended mode.
- a user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice
- a user may input a specific command for the voice input request message, for example, a voice “air conditioner start”
- the air conditioner 330 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “air conditioner start”.
- the air conditioner 330 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- the emotion recognition unit 540 may recognize that a user makes a voice with a sad emotion from a frequency change detected from the user’s voice.
- the air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode. That is, as shown in Fig. 20, as an operation of the air conditioner 330 starts according to a user’s command, an operation mode is recommended in consideration of a user’s sad emotion state.
- the display unit 570 may output a message “air conditioner start. sir, use aroma function for refresh. start aroma function?”.
- the aroma function is understood as a function through which a capsule inserted into the capsule injection device 60 (see Fig. 2) of the air conditioner 330 acts so that a wind with an aroma fragrance is discharged. That is, the air conditioner 330 recognizes a user’s sad emotion and then diffuses an aroma fragrance for refreshment into an indoor space. When a voice that a user accepts the outputted message, for example, “yes”, is inputted, the air conditioner 330 recognizes this and operates in the recommended mode.
- FIGs. 21A to 23 are views illustrating a message output of a display unit according to another embodiment of the present invention.
- a display unit 570 shown in Figs. 21A to 23 may be equipped in the cooker 320, the refrigerator 340, or the mobile device 400.
- the display unit 570 equipped at the refrigerator 340 is described for an example.
- a user may input a specific command for the voice input request message, for example, a voice “recipe search”
- the refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”. Then, the refrigerator 340 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- the emotion recognition unit 540 may recognize that a user makes a voice with a sad emotion from a frequency change detected from the user’s voice.
- the refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 21A, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of a sad emotion. For example, the display unit 570 may output a message “feel depressed? eat sweet food then you feel better. sweet food recipe search?” When a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “sweet food recipe search”.
- Fig. 21B is similar to Fig. 21A in terms of scenario, but when a user rejects a specific recipe that the refrigerator 340 recommends, for example, when a user inputs a voice rejecting the message “sweet food recipe search?” outputted by the display unit 570, that is, “no”, a voice input request message “speak food ingredients to search” may be outputted.
- a voice input request message “what can I help you?” may be defined as a “first message” and a voice input request message “speak food ingredients to search” may be defined as a “second message”.
- a user may input a voice for another selectable function with respect to the second message, that is, another food ingredient, to receive information on a desired recipe.
- a user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice
- a user may input a specific command for the voice input request message, for example, a voice “recipe search”
- the refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”.
- the refrigerator 340 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- the emotion recognition unit 540 may recognize that a user makes a voice with an angry emotion from a frequency change detected from the user’s voice.
- the refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 22, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of an angry emotion. For example, the display unit 570 may output a message “are you angry with empty stomach? fast cook food recipe search?”.
- when a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “fast cook food recipe search”.
- the refrigerator 340 may output a second message to receive a desired specific recipe from a user.
- a user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice
- a user may input a specific command for the voice input request message, for example, a voice “recipe search”
- the refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”.
- the refrigerator 340 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
- the emotion recognition unit 540 may recognize that a user makes a voice with a happy emotion from a frequency change detected from the user’s voice.
- the refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 23, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of a happy emotion. For example, the display unit 570 may output a message “feel good? make special food. special food recipe search?”.
- when a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “special food recipe search”.
- the refrigerator 340 may output a second message to receive a desired specific recipe from a user.
- since a smart home appliance extracts emotion information from a user’s voice and recommends a specific function matching the user’s emotion among a plurality of functions, instead of simply extracting a text from a user’s voice command and performing a set function, user’s convenience may be increased and product satisfaction may be improved.
- Fig. 24 is a block diagram illustrating a configuration of an air conditioner as one example of a smart home appliance according to a seventh embodiment of the present invention.
- although an air conditioner is described as one example of a smart home appliance, it should be clear in advance that the ideas related to the voice recognition or communication (information providing) procedure, except for the setting functions unique to an air conditioner, may be applied to other smart home appliances.
- an air conditioner 600 includes a plurality of communication units 680 and 690 communicating with an external device.
- the plurality of communication units 680 and 690 include a first communication module 680 communicating with the server 700 and a position information reception unit 690 receiving information on the position of the air conditioner 600 from a position information transmission unit 695.
- the first communication module 680 may communicate with the second communication module 780 of the server 700 in a wired or wireless manner.
- the first communication module 680 of the air conditioner 600 and the second communication module 780 of the server 700 may communicate with each other directly or through an access point, or through a wired network.
- each of the first communication module 680 of the air conditioner 600 and the second communication module 780 of the server 700 may have a unique internet protocol (IP) address. Accordingly, when the first communication module 680 and the second communication module 780 are communicably connected to each other, the server 700 may recognize the installed position or region of the air conditioner 600 by recognizing the first communication module 680.
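- the IP-based recognition described above can be sketched as follows; a minimal sketch in which the prefix-to-region table is an illustrative assumption standing in for the server’s stored mapping.

```python
# Illustrative sketch only: the prefix-to-region table is an assumption
# standing in for the server's stored mapping from communication-module
# addresses to installation regions.

IP_REGION_TABLE = {
    "10.1.": "Gyeongsang-do",
    "10.2.": "Jeolla-do",
    "10.3.": "Gangwon-do",
}

def region_from_ip(ip_address):
    """Match the first communication module's IP address against stored
    prefixes so the server can recognize the installed region of the
    appliance once the modules are communicably connected."""
    for prefix, region in IP_REGION_TABLE.items():
        if ip_address.startswith(prefix):
            return region
    return "unknown"
```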
- the position information reception unit 690 may be a GPS reception unit, for example.
- the position information transmission unit 695 is configured to transmit information on the position of the position information reception unit 690 or the air conditioner 600, to the position information reception unit 690.
- the information on position may include a position coordinate value.
- the position information transmission unit 695 may be a GPS satellite or a communication base station.
- the position information reception unit 690 may transmit a predetermined signal to the position information transmission unit 695 periodically or at a specific time point and the position information transmission unit 695 may transmit the information on position to the position information reception unit 690.
- since the first communication module 680 and the position information reception unit 690 recognize the position or region of the air conditioner 600, they may be referred to as a “position information recognition unit”.
- the server 700 may further include a server memory 770.
- the server memory 770 may store information necessary for an operation of the air conditioner 600, for example, information on the position (region) of the air conditioner 600 or the first communication module 680 or weather information corresponding to the position (region).
- information stored in the server memory 770 may be transmitted to the air conditioner 600.
- information on the position of the air conditioner 600 may be recognized based on information on the communication address used when the first communication module 680 and the second communication module 780 are connected communicably, or on information received from the position information reception unit 690.
- the air conditioner 600 may receive information on the position through the position information reception unit 690.
- the air conditioner 600 may receive the information on position through a communication connection of the first and second communication modules 680 and 780.
- Fig. 25 is a flowchart illustrating a control method of a smart home appliance according to a seventh embodiment of the present invention.
- a position recognition service may be set to be turned on.
- the position recognition service is understood as a service in which the installed position or region of the air conditioner 600 is recognized when the first and second communication modules 680 and 780 are communicably connected or the position information reception unit 690 receives position information and a function of a smart home appliance is performed based on information on the recognized position or region. Then, an application for using the position recognition service is executed in operations S31 and S32.
- a user's voice command is inputted through the voice input unit 110 (see Fig. 2). Then, information on the position of the smart home appliance is recognized through the communication connection of the first and second communication modules 680 and 780 and the position information reception unit 690 in operations S33 and S34.
- a voice command inputted through the voice input unit 110 may correspond to at least one voice information among a plurality of voice information stored in the memory unit 130 (see Fig. 3) and the corresponding voice information may be extracted as a text. Then, by using the extracted text, a predetermined setting function among a plurality of setting functions that home appliance performs may be recognized in operation S35.
- information on the position of the smart home appliance may be considered. Then, information on the setting function and information on the position are combined so that predetermined information may be provided to a user.
- information on the setting function may be guided in a language used in the corresponding region.
- information for a setting function optimized for the position of the smart home appliance may be guided in operation S36.
- an example of information on a setting function that a smart home appliance provides is described.
- Fig. 26 is a view illustrating a display unit of a smart home appliance.
- Fig. 26 illustrates a view when information on a setting function combined with position information is outputted.
- a message for requesting a voice input may be outputted from an output unit 660 of the air conditioner 600. For example, a message “what can I help you?” may be outputted from the output unit 660. At this point, voice and text messages may be outputted together.
- a user may speak a voice command “temperature up”.
- the spoken voice is inputted through the voice input unit 110 and filtered and then is delivered to the main control unit 120 (see Fig. 3).
- the main control unit 120 recognizes the filtered voice as predetermined voice information and outputs it as a text.
- the air conditioner 600 may provide a setting function corresponding to the voice command to a user according to the recognized voice command information. At this point information on the recognized position may be considered.
- the main control unit 120 may control an operation of the driving unit 140 (see Fig. 3) to raise a setting temperature by 1 degree.
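- the flow above, combining the recognized voice command with the recognized position (operation S36), can be sketched as follows; a minimal sketch whose regions, 1-degree step, and floor temperatures are illustrative assumptions, not taken from the embodiment.

```python
# Illustrative sketch only: the regions, 1-degree step, and floor
# temperatures are assumptions, not taken from the embodiment.

def apply_temperature_command(current_temp, command, region):
    """Adjust the setting temperature for a voice command, taking the
    recognized region into account: a cold region such as Gangwon-do
    is allowed a lower floor, following the assumption stated earlier
    that users in cold regions prefer lower temperatures."""
    if command == "temperature up":
        return current_temp + 1
    if command == "temperature down":
        floor = 18 if region == "Gangwon-do" else 20
        return max(current_temp - 1, floor)
    return current_temp
```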
- Fig. 27 is a view illustrating a configuration of a cooker as another example of a smart home appliance according to a seventh embodiment of the present invention.
- Figs. 28A and 28B are views illustrating a display unit of the cooker.
- a cooker 810 includes a voice input unit 812 receiving a user’s voice input, an input unit 814 manipulated for a user’s command input, and an output unit 816 displaying information on an operation state of the cooker 810.
- the output unit 816 includes a display unit displaying information on a screen and a voice output unit outputting a voice.
- the cooker 810 includes the filter unit 115, the memory unit 130, the driving unit 140, the control units 120 and 150, the first communication module 680, and the position information reception unit 690, all of which are described with reference to Fig. 3. Their detailed descriptions are omitted.
- a message for requesting a voice input may be outputted from the output unit 816 of the cooker 810. For example, a message “what can I help you?” may be outputted from the output unit 816. At this point, voice and text messages may be outputted together.
- a user may speak a voice command “food recipe”.
- the spoken voice is inputted through the voice input unit 110 and filtered and then is delivered to the main control unit 120.
- the main control unit 120 recognizes the filtered voice as predetermined voice information and outputs it as a text. Then, through a communication connection with the server 700 or on the basis of information received from the position information reception unit 690, information on the position of the cooker 810 may be recognized.
- the output unit 816 may output a message for requesting an input of detailed information on a recipe. For example, a message “please input food type” may be outputted as voice or text.
- a user may input information on a desired food type, that is, a food keyword, through the input unit 814. For example, a user may input a food keyword “grilled food”.
- the cooker 810 may complete a related recipe search and may then output a guide message. For example, a message “recipe search is completed. want to check search result?” may be outputted. For this, when a user inputs an acceptance intention, that is, a voice “yes”, a screen may be switched and another screen shown in Fig. 28B may be outputted.
- information on the position and setting function information corresponding to the recognized voice command may be combined and predetermined information may be outputted to the output unit 816 of the cooker 810.
- specialty or traditional food recipes in the position (region) of the cooker 810 may be arranged preferentially and outputted to the output unit 816.
- a specialty or traditional food recipe for “grilled food”, that is, “oven-grilled pork roll with fishery”, “assorted grilled seafood”, and “assorted grilled mushroom”, may be arranged at an upper part of the output unit 816 and may be displayed in a check box. That is, among a plurality of information on a setting function, information optimized for the position recognized by the position information recognition units 680 and 690 may be outputted first to the output unit 816.
- general grilled food recipes may be arranged at a lower part of the specialty or traditional food recipe.
- when a user selects a desired recipe among the arranged recipes, detailed information on the recipe may be checked.
- since information on a setting function of a home appliance is provided on the basis of a user’s voice command and the position information of the home appliance, user’s convenience may be increased.
- Fig. 29 is a view illustrating a configuration of a washing machine as another example of a smart home appliance according to an eighth embodiment of the present invention.
- Fig. 30 is a flowchart illustrating a control method of a smart home appliance according to the eighth embodiment of the present invention.
- the smart home appliance according to the eighth embodiment may include a washing machine 820.
- the washing machine 820 includes a voice input unit 822 receiving a user’s voice input, an input unit 825 manipulated for a user’s command input, and an output unit 826 displaying information on an operation state of the washing machine 820.
- the output unit 826 includes a display unit displaying information on a screen and a voice output unit outputting a voice.
- the washing machine 820 includes the filter unit 115, the memory unit 130, the driving unit 140, the control units 120 and 150, the first communication module 680, and the position information reception unit 690. Their detailed descriptions are omitted.
- a position recognition service may be set to be turned on.
- the position recognition service is understood as a service in which the installed position or region of the washing machine 820 is recognized when the first and second communication modules 680 and 780 are communicably connected or the position information reception unit 690 receives position information and a function of a smart home appliance is performed based on information on the recognized position or region. Then, an application for using the position recognition service is executed in operations S41 and S42.
- a user's voice command is inputted through the voice input unit 110.
- information on the position of the smart home appliance is recognized through the communication connection of the first and second communication modules 680 and 780 and the position information reception unit 690 in operations S43 and S44.
- weather information of the position (region) where the washing machine 820 is installed is received from the server 700 in operation S45.
- a voice command inputted through the voice input unit 110 may correspond to at least one voice information among a plurality of voice information stored in the memory unit 130 and the corresponding voice information may be extracted as a text. Then, by using the extracted text, a predetermined setting function among a plurality of setting functions that the home appliance performs may be recognized in operation S46.
- weather information on the installed region of the smart home appliance may be considered.
- information on the setting function and information on the weather are combined so that recommendation information related to the setting function may be provided to a user. That is, one information among a plurality of information related to the setting function may be recommended.
- a laundry course may be recommended by recognizing weather information on the region where the washing machine 820 is installed. For example, when the weather is rainy or the humidity is high, a strong spin or a drying function may be recommended in operation S47.
- likewise, a driving course may be recommended by recognizing weather information on the region where an air conditioner is installed. For example, a dehumidifying function may be recommended by receiving humidity information. Then, when a user sets a bedtime reservation, a recommendation for increasing or decreasing the reservation time may be provided by receiving the nighttime temperature.
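The weather-based recommendation of operation S47 may be sketched as follows; the humidity threshold, course names, and appliance identifiers are illustrative assumptions, since the disclosure does not specify concrete rules.

```python
def recommend_course(appliance, weather):
    """Map received weather information to a recommended function.

    `weather` is a dict such as {"rain": True, "humidity": 80};
    thresholds and course names are illustrative only.
    """
    if appliance == "washing_machine":
        if weather.get("rain") or weather.get("humidity", 0) >= 70:
            return "strong spin + drying"   # rainy or highly humid region
        return "standard"
    if appliance == "air_conditioner":
        if weather.get("humidity", 0) >= 70:
            return "dehumidify"
        return "cooling"
    return None  # no rule for this appliance
```

For example, a washing machine installed in a region reporting rain would be recommended the strong spin and drying function.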
- Fig. 31 is a block diagram illustrating a configuration of a voice recognition system according to a ninth embodiment of the present invention.
- a voice recognition system includes a mobile device 900 receiving a user’s voice input, a plurality of home appliances 810, 820, 830, and 840 operated and controlled based on a voice inputted to the mobile device 900, and a server 950 communicably connecting the mobile device 900 and the plurality of home appliances 810, 820, 830, and 840.
- the plurality of smart home appliances 810, 820, 830, and 840 may include a cooker 810, a washing machine 820, a cleaner 830, and an air conditioner 840.
- the mobile device 900 may include a smartphone, a remote controller, and a tap book.
- the mobile device 900 includes a voice input unit 110, a first communication module 918, and a position information reception unit 919. Then, the mobile device 900 further includes an output unit 916 outputting information related to a function performance of the home appliance.
- the server 950 may further include a server memory 957 and a second communication module 958.
- the server memory 957 may store text information mapped to an inputted voice and setting function information corresponding to the text information.
- An application connected to the server 950 may be executed in the mobile device 900. Once the application is executed, a voice input mode for user's voice input may be activated in the mobile device 900.
- the inputted voice is delivered to the server 950 and the server 950 may recognize the inputted voice to transmit a command on a setting function performance to a home appliance corresponding to a voice command.
- the server 950 may recognize the position of the mobile device 900 and may then transmit a command on the setting function performance to the home appliance on the basis of information on the recognized position.
- the server 950 may transmit the information on the setting function performance to the mobile device 900 and the information may be outputted to the output unit 916 of the mobile device 900.
- information on voice recognition, position recognition, and setting function performance may be outputted to the output unit 916 of the mobile device 900. That is, information described with reference to Figs. 26, 28A and 28B may be outputted to the output unit 916.
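The server-side dispatch described above may be sketched as follows; the command table, appliance identifiers, and the `send` callback are assumptions introduced for illustration, not part of the disclosed system.

```python
# Hypothetical mapping from recognized text to (appliance, setting function).
COMMAND_TABLE = {
    "start washing": ("washing_machine", "start"),
    "preheat oven": ("cooker", "preheat"),
    "clean living room": ("cleaner", "clean"),
}

def dispatch(recognized_text, send):
    """Look up the appliance and function for a recognized text and send it.

    `send(appliance, function)` stands in for transmission over the
    second communication module; returns the table entry, or None when
    the text matches no command.
    """
    entry = COMMAND_TABLE.get(recognized_text)
    if entry is None:
        return None
    appliance, function = entry
    send(appliance, function)
    return entry
```

A command that matches no stored text is simply ignored, so only the appliance corresponding to the voice command receives a setting function performance command.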
Abstract
Provided is a smart home appliance. The smart home appliance includes: a voice input unit collecting a voice; a voice recognition unit recognizing a text corresponding to the voice collected through the voice input unit; a capturing unit collecting an image for detecting a user's visage or face; a memory unit mapping the text recognized by the voice recognition unit and a setting function and storing the mapped information; and a control unit determining whether to perform a voice recognition service on the basis of at least one information of image information collected by the capturing unit and voice information collected by the voice input unit.
Description
The present disclosure relates to smart home appliances, an operating method thereof, and a voice recognition system using the smart home appliances.
Home appliances, as electronic products equipped in homes, include refrigerators, air conditioners, cookers, and vacuum cleaners. Conventionally, in order to operate such home appliances, a method of approaching and directly manipulating them or remotely controlling them through a remote controller has been used.
However, with the recent developments of communication technology, a technique for inputting a command for operating home appliances by using a voice and allowing the home appliances to recognize the inputted voice content and operate is introduced.
Fig. 1 is a view illustrating a configuration of a conventional home appliance and its operating method.
A conventional home appliance includes a voice recognition unit 2, a control unit 3, a memory 4, and a driving unit 5.
When a user speaks a voice corresponding to a specific command, the home appliance 1 collects the spoken voice and interprets the collected voice by using the voice recognition unit 2.
As an interpretation result of the collected voice, a text corresponding to the voice may be extracted. The control unit 3 compares the extracted first text information with second text information stored in the memory 4 to determine whether the texts match.
When the first and second text information match, the control unit 3 may recognize a predetermined function of the home appliance 1 corresponding to the second text information.
Then, the control unit 3 may operate the driving unit 5 on the basis of the recognized function.
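The matching flow of Fig. 1 can be sketched as a simple table lookup; the stored texts and action names are hypothetical.

```python
# Hypothetical second-text-information table: stored text -> driving-unit action.
FUNCTION_MAP = {
    "power on": "start_compressor",
    "power off": "stop_compressor",
}

def handle_voice(extracted_text):
    """Return the action mapped to the extracted first text information,
    or None when no stored text matches (no operation is performed)."""
    return FUNCTION_MAP.get(extracted_text)
```

This direct lookup is exactly what makes the conventional appliance vulnerable to misrecognition: any utterance whose extracted text happens to match the table triggers the driving unit.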
However, when such a conventional home appliance is in use, noise generated from the surroundings may be wrongly recognized as a voice. Additionally, even when a user simply talks with other people near the home appliance without an intention of speaking a command for voice recognition, this may also be wrongly recognized. That is, the home appliance malfunctions.
Embodiments provide a smart home appliance with improved voice recognition rate, an operation method thereof, and a voice recognition system using the smart home appliance.
In one embodiment, a smart home appliance includes: a voice input unit collecting a voice; a voice recognition unit recognizing a text corresponding to the voice collected through the voice input unit; a capturing unit collecting an image for detecting a user’s visage or face; a memory unit mapping the text recognized by the voice recognition unit and a setting function and storing the mapped information; and a control unit determining whether to perform a voice recognition service on the basis of at least one information of image information collected by the capturing unit and voice information collected by the voice input unit.
The control unit may include a face detection unit recognizing that a user is in a staring state for voice input when image information on a user’s visage or face is collected for more than a setting time through the capturing unit.
The control unit may determine that a voice recognition service standby state is entered when it is recognized that there is keyword information in a voice through the voice input unit and a user in the staring state through the face detection unit.
The smart home appliance may further include: a filter unit removing a noise sound from the voice inputted through the voice input unit; and a memory unit mapping voice information related to an operation of the smart home appliance and voice information unrelated to an operation of the smart home appliance in advance in the voice inputted through the voice input unit and storing the mapped information.
The smart home appliance may further include: a region recognition unit determining a user’s region on the basis of information on the voice collected through the voice input unit; and an output unit outputting region customized information on the basis of information on a region determined by the region recognition unit and information on the setting function.
The setting function may include a plurality of functions divided according to regions; and the region customized information including one function matching information on the region among the plurality of functions is outputted through the output unit.
The output unit may output the region customized information by using a dialect in the region determined by the region recognition unit.
The output unit may output a key word for security setting and the voice input unit may set a reply word corresponding to the key word.
The smart home appliance may further include an emotion recognition unit and an output unit, wherein the voice recognition unit may recognize a text corresponding to first voice information in the voice collected through the voice input unit; the emotion recognition unit may extract a user’s emotion on the basis of second voice information in the voice collected through the voice input unit; and the output unit may output user customized information on the basis of information on a user’s emotion determined by the emotion recognition unit and information on the setting function.
The first voice information may include a language element in the collected voice; and the second voice information may include a non-language element related to a user’s emotion.
The emotion recognition unit may include a database where information on user’s voice characteristics and information on an emotion state are mapped; and the information on the user’s voice characteristics may include information on a speech spectrum having characteristics for each user’s emotion.
The setting function may include a plurality of functions to be recommended or selected; and the user customized information including one function matching the information on the user’s emotion among the plurality of functions is outputted through the output unit.
The smart home appliance may further include: a position information recognition unit recognizing position information; and an output unit outputting the information on the setting function on the basis of position information recognized by the position information recognition unit.
The position information recognition unit may include: a GPS reception unit receiving a position coordinate from an external position information transmission unit; and a first communication module communicably connected to a second communication module equipped in an external server.
The output unit may include a voice output unit outputting the information on the setting function as a voice, using a dialect of the position (region) recognized by the position information recognition unit.
The output unit may output information optimized for a region recognized by the position information recognition unit among a plurality of information on the setting function.
The position information recognized by the position information recognition unit may include weather information.
In another embodiment, an operating method of a smart home appliance includes: collecting a voice through a voice input unit; recognizing whether keyword information is included in the collected voice; collecting image information on a user’s visage or face through a capturing unit equipped in the smart home appliance; and entering a standby state of a voice recognition service on the basis of the image information on the user’s visage or face.
When the image information on the user’s visage or face is collected for more than a setting time, it may be recognized that a user is in a staring state for voice input; and when it is recognized that there is keyword information in the voice and the user is in the staring state for voice input, a standby state of the voice recognition service may be entered.
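The standby-entry condition above can be sketched as follows, assuming a hypothetical setting time of two seconds; the function name and parameters are illustrative only.

```python
def should_enter_standby(keyword_detected, face_seen_seconds, setting_time=2.0):
    """Enter the voice recognition service standby state only when keyword
    information was recognized in the voice AND image information on the
    user's face has been collected for more than the setting time
    (the staring state)."""
    staring = face_seen_seconds >= setting_time
    return keyword_detected and staring
```

Requiring both conditions is what prevents a stray keyword in background conversation, or a glance without any spoken keyword, from activating the service.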
The method may further include: determining a user’s region on the basis of information on the collected voice; and driving the smart home appliance on the basis of information on the setting function and information on the determined region.
The method may further include outputting region customized information related to the driving of the smart home appliance on the basis of the information on the determined region.
The outputting of the region customized information may include outputting a voice or a screen by using a dialect used in the user’s region.
The method may further include performing a security setting, wherein the performing of the security setting may include: outputting a set key word; and inputting a reply word in response to the outputted key word.
The method may further include: extracting a user’s emotion state on the basis of information on the collected voice; and recommending an operation mode on the basis of information on the user’s emotion state.
The method may further include: recognizing an installation position of the smart home appliance through a position information recognition unit; and driving the smart home appliance on the basis of information on the installation position.
The recognizing of the installation position of the smart home appliance may include receiving GPS coordinate information from a GPS satellite or a communication base station.
The recognizing of the installation position of the smart home appliance may include checking a communication address as a first communication module equipped in the smart home appliance is connected to a second communication module equipped in a server.
In further another embodiment, a voice recognition system includes: a mobile device including a voice input unit receiving a voice; a smart home appliance operated and controlled based on a voice collected through the voice input unit; and a communication module equipped in each of the mobile device and the smart home appliance, wherein the mobile device includes a movement detection unit determining whether to enter a standby state of a voice recognition service in the smart home appliance by detecting a movement of the mobile device.
The movement detection unit may include an acceleration sensor or a gyro sensor detecting a change in an inclined angle of the mobile device, wherein the voice input unit may be disposed at a lower part of the mobile device; and when a user puts the voice input unit close to the mouth for voice input while gripping the mobile device, an angle value detected by the acceleration sensor or the gyro sensor may be reduced.
The movement detection unit may include an illumination sensor detecting an intensity of an external light collected by the mobile device; and when a user puts the voice input unit close to the mouth for voice input while gripping the mobile device, an intensity value of a light detected by the illumination sensor may be increased.
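The two detection criteria above may be sketched together; the angle drop and illumination thresholds are illustrative assumptions, not values given in the disclosure.

```python
def detects_speaking_gesture(angle_before, angle_now, lux_before, lux_now):
    """Detect the raise-to-mouth gesture: the inclined angle reported by the
    acceleration/gyro sensor decreases, or the illumination sensor reading
    increases (e.g. the device leaves a pocket or table shadow).

    Thresholds (20 degrees, 1.5x light increase) are hypothetical."""
    angle_dropped = angle_now < angle_before - 20
    light_increased = lux_now > lux_before * 1.5
    return angle_dropped or light_increased
```

Either signal alone may suffice in this sketch; an implementation could also require both to further reduce false triggers.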
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
According to the present invention, since a user controls an operation of a smart home appliance through a voice, usability may be improved.
Additionally, a user’s face or an action for manipulating a mobile device is recognized and through this, whether a user has an intention for speaking a voice command is determined, so that voice misrecognition may be prevented.
Furthermore, when there are a plurality of voice recognition available smart home appliances, a command subject may be recognized by extracting the feature (specific word) of a command voice that a user makes and only a specific electronic product among a plurality of electronic products responds according to the recognized command subject. Therefore, miscommunication may be prevented during an operation of an electronic product.
Additionally, whether a user speaks in standard language or dialect is recognized and, according to the content or dialect type of the recognized voice, customized information is provided, so that user’s convenience may be improved.
Moreover, since a setting on whether to use a voice recognition function of a home appliance and a security setting are possible, an arbitrary user may be prevented from using a corresponding function. Therefore, the reliability of a product operation may be increased.
Especially, happy, angry, sad, and trembling emotions may be classified from a user’s voice by using mapped information of voice characteristics and emotional states, and on the basis of the classified emotion, an operation of a home appliance may be performed or recommended.
Additionally, since mapping information for distinguishing voice information related to an operation of an air conditioner from voice information related to noise in a user's voice is stored in the air conditioner, the misrecognition of a user’s voice may be reduced.
According to the present invention, since a user controls an operation of a smart home appliance through a voice, usability may be improved. Thus, industrial applicability is remarkable.
Fig. 1 is a view illustrating a configuration of a conventional home appliance and its operating method.
Fig. 2 is a view illustrating a configuration of an air conditioner as one example of a smart appliance according to a first embodiment of the present invention.
Figs. 3 and 4 are block diagrams illustrating a configuration of the air conditioner.
Fig. 5 is a flowchart illustrating a control method of a smart home appliance according to a first embodiment of the present invention.
Fig. 6 is a schematic view illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
Fig. 7 is a block diagram illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
Fig. 8 is a flowchart illustrating a control method of a smart home appliance according to the second embodiment of the present invention.
Fig. 9 is a view when a user performs an action for starting a voice recognition by using a mobile device according to a second embodiment of the present invention.
Fig. 10 is a view illustrating a configuration of a plurality of smart home appliances according to a third embodiment of the present invention.
Fig. 11 is a view when a user makes a voice on a plurality of smart home appliances according to a third embodiment of the present invention.
Fig. 12 is a view when a plurality of smart home appliances operate by using a mobile device according to a fourth embodiment of the present invention.
Fig. 13 is a view illustrating a configuration of a smart home appliance or a mobile device and an operating method thereof according to an embodiment of the present invention.
Fig. 14 is a view illustrating a message output of a display unit according to an embodiment of the present invention.
Fig. 15 is a view illustrating a message output of a display unit according to another embodiment of the present invention.
Figs. 16A and 16B are views illustrating a security setting for voice recognition function performance according to a fifth embodiment of the present invention.
Fig. 17 is a view illustrating a configuration of a voice recognition system and its operation method according to a sixth embodiment of the present invention.
Figs. 18 to 20 are views illustrating a message output of a display unit according to a sixth embodiment of the present invention.
Figs. 21A to 23 are views illustrating a message output of a display unit according to another embodiment of the present invention.
Fig. 24 is a block diagram illustrating a configuration of an air conditioner as one example of a smart home appliance according to a seventh embodiment of the present invention.
Fig. 25 is a flowchart illustrating a control method of a smart home appliance according to a seventh embodiment of the present invention.
Fig. 26 is a view illustrating a display unit of a smart home appliance.
Fig. 27 is a view illustrating a configuration of a cooker as another example of a smart home appliance according to a seventh embodiment of the present invention.
Figs. 28A and 28B are views illustrating a display unit of the cooker.
Fig. 29 is a view illustrating a configuration of a washing machine as another example of a smart home appliance according to an eighth embodiment of the present invention.
Fig. 30 is a flowchart illustrating a control method of a smart home appliance according to an eighth embodiment of the present invention.
Fig. 31 is a block diagram illustrating a configuration of a voice recognition system according to a ninth embodiment of the present invention.
Hereinafter, specific embodiments of the present invention are described with reference to the accompanying drawings. However, the idea of the present invention is not limited to suggested embodiments and those skilled in the art may suggest other embodiments within the scope of the same idea.
Fig. 2 is a view illustrating a configuration of an air conditioner as one example of a smart appliance according to a first embodiment of the present invention. Figs. 3 and 4 are block diagrams illustrating a configuration of the air conditioner.
Hereinafter, although an air conditioner is described as one example of a smart appliance, it should be clear in advance that the ideas related to a voice recognition or communication (information offer) procedure except for the unique setting functions of an air conditioner may be applied to other smart home appliances, for example, cleaners, cookers, washing machines, or refrigerators.
Referring to Figs. 2 to 4, an air conditioner 10 according to the first embodiment of the present invention includes a suction part 22, discharge parts 25 and 42, and a case 20 forming an external appearance. The air conditioner 10 shown in Fig. 2 may be an indoor unit installed in an indoor space to discharge air.
The suction part 22 may be formed at the rear of the case 20. Then, the discharge parts 25 and 42 include a main discharge part 25 through which the air suctioned through the suction part 22 is discharged to the front or side of the case 20 and a lower discharge part 42 discharging the air downwardly.
The main discharge parts 25 may be formed at both sides of the case 20 and their opening/closing degree may be adjusted by a discharge vane 26. The discharge vane 26 may be rotatably provided at one side of the main discharge part 25. The opening/closing degree of the lower discharge part 42 may be adjusted by a lower discharge vane 44.
A vertically movable upper discharge device 30 may be provided at an upper part of the case 20. When the air conditioner 10 is turned on, the upper discharge device 30 may move to protrude from an upper end of the case 20 toward an upper direction and when the air conditioner 10 is turned off, the upper discharge device 30 may move downwardly and may be received inside the case 20.
An upper discharge part 32 discharging air is defined at the front of the upper discharge device 30 and an upper discharge vane 34 adjusting a flow direction of the discharged air is equipped inside the upper discharge device 30. The upper discharge vane 34 may be provided rotatably.
A voice input unit 110 receiving a user’s voice is equipped on at least one side of the case 20. For example, the voice input unit 110 may be equipped at the left side or right side of the case 20. The voice input unit 110 may be referred to as a “voice collection unit” in that it is possible to collect voice. The voice input unit 110 may include a microphone. The voice input unit 110 may be disposed at the rear of the main discharge part 25 so as not to be affected by the air discharged from the main discharge part 25.
The air conditioner 10 may further include a body detection unit 36 detecting a body in an indoor space or a body’s movement. For example, the body detection unit 36 may include at least one of an infrared sensor and a camera. The body detection unit 36 may be disposed at the front part of the upper discharge device 30.
The air conditioner 10 may further include a capsule input device 60 into which a capsule with aroma is inserted. The capsule input device 60 may be installed at the front of the air conditioner 10 to be withdrawable. When a capsule is inserted into the capsule input device 60, a capsule release device (not shown) disposed inside the air conditioner 10 pops the capsule and a predetermined aroma fragrance is diffused. Then, the diffused aroma fragrance may be discharged to the outside of the air conditioner 10 together with the air discharged from the discharge parts 25 and 42. According to the type of the capsule, various aroma fragrances may be provided, for example, lavender, rosemary, or peppermint.
The air conditioner 10 may include a filter unit 115 for removing noise sound from a voice inputted through the voice input unit 110. The voice may be filtered through the filter unit 115 into a voice frequency band suitable for voice recognition.
The air conditioner 10 includes control units 120 and 150 recognizing information for an operation of the air conditioner 10 from the voice information passing through the filter unit 115. The control units 120 and 150 include a main control unit 120 controlling an operation of the driving unit 140 in order for an operation of the air conditioner 10 and a display control unit 150 communicably connected to the main control unit 120 and controlling the output unit 160 to display operation information of the air conditioner 10 to the outside.
The driving unit 140 may include a compressor or a blow fan. Then, the output unit 160 includes a display unit displaying operation information of the air conditioner 10 as an image and a voice output unit outputting the operation information as a voice. The voice output unit may include a speaker. The voice output unit may be disposed at one side of the case 20 and may be provided separated from the voice input unit 110.
Then, the air conditioner 10 may include a memory unit 130 mapping and storing voice information related to an operation of the air conditioner 10 and voice information irrelevant to an operation of the air conditioner 10 in advance among voices inputted through the voice input unit 110. First voice information, second voice information, and third voice information are stored in the memory unit 130. In more detail, frequency information defining the first to third voice information may be stored in the memory unit 130.
The first voice information may be understood as voice information related to an operation of the air conditioner 10, that is, keyword information. When it is recognized that the first voice information is inputted, an operation of the air conditioner 10 corresponding to the inputted first voice information may be performed or stop. The memory unit 130 may include text information corresponding to the first voice information and information on a setting function corresponding to the text information.
For example, when the first voice information corresponds to ON of the air conditioner 10, as the first voice information is recognized, an operation of the air conditioner 10 starts. On the other hand, when the first voice information corresponds to OFF of the air conditioner 10, as the first voice information is recognized, an operation of the air conditioner 10 stops.
As another example, when the first voice information corresponds to one operation mode of the air conditioner 10, that is, air conditioning, heating, ventilation or dehumidification, as the first voice information is recognized, a corresponding operation mode may be performed. As a result, the first voice information and an operation method (on/off and an operation mode) of the air conditioner 10 corresponding to the first voice information may be mapped in advance in the memory unit 130.
The second voice information may include frequency information similar to the first voice information related to an operation of the air conditioner 10 but is substantially understood as voice information unrelated to an operation of the air conditioner 10. Herein, the frequency information similar to the first voice information may be understood as frequency information showing a frequency difference from the first voice information within a setting range.
When it is recognized that the second voice information is inputted, corresponding voice information may be filtered as noise information. That is, the main control unit 120 may recognize the second voice information but may not perform an operation of the air conditioner 10.
The third voice information may be understood as voice information irrelevant to an operation of the air conditioner 10. The third voice information may be understood as frequency information whose frequency difference from the first voice information is outside the setting range. The second voice information and the third voice information may be referred to as “unrelated information” in that they are voice information unrelated to an operation of the air conditioner 10.
In such a way, since voice information related to an operation of a home appliance and voice information unrelated to an operation of a home appliance are mapped and stored in advance in a smart home appliance, voice recognition may be performed effectively.
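The three-way classification above — first voice information performed as a command, second voice information filtered as noise, third voice information ignored as irrelevant — can be sketched as a comparison of frequency differences. The tolerance and setting-range values below are illustrative assumptions, since the setting range itself is left unspecified here.

```python
def classify_voice(frequency_hz, keyword_frequencies_hz, setting_range_hz=50.0):
    """Classify an input voice against the stored first-voice-information
    frequencies. Returns 'first' (keyword: an operation is performed or
    stopped), 'second' (similar to a keyword: filtered as noise), or
    'third' (irrelevant). Threshold values are assumed for illustration."""
    exact_tolerance_hz = 5.0  # assumed tolerance for a keyword match
    best_difference = min(abs(frequency_hz - f) for f in keyword_frequencies_hz)
    if best_difference <= exact_tolerance_hz:
        return "first"   # mapped operation of the air conditioner is run
    if best_difference <= setting_range_hz:
        return "second"  # within the setting range: filtered as noise
    return "third"       # outside the setting range: unrelated information
```

With stored keyword frequencies of, say, 200 Hz and 400 Hz, an input at 202 Hz is treated as first voice information, 230 Hz as second, and 300 Hz as third.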
The air conditioner 10 includes a camera 50 as a capturing unit capturing a user’s face. For example, the camera 50 may be installed at the front part of the air conditioner 10.
The air conditioner 10 may further include a face detection unit 180 recognizing, on the basis of an image captured through the camera 50, that a user looks at the air conditioner 10 for voice recognition. The face detection unit 180 may be installed inside the camera 50 as one function of the camera 50, or installed separately in the case 20. When the camera 50 captures a user’s face for a predetermined time, the face detection unit 180 recognizes that the user stares at the air conditioner 10 for voice input (that is, a staring state).
The air conditioner 10 further includes a voice recognition unit 170 extracting text information from a voice collected through the voice input unit 110 and recognizing a setting function of the air conditioner 10 on the basis of the extracted text information.
The information recognized by the voice recognition unit 170 or the face detection unit 180 may be delivered to the main control unit 120. The main control unit 120 may recognize a user’s intention of using a voice recognition service and may then enter a standby state on the basis of the information recognized by the voice recognition unit 170 and the face detection unit 180.
Although the voice recognition unit 170, the face detection unit 180, and the main control unit 120 are separately configured as shown in Fig. 4, the voice recognition unit 170 and the face detection unit 180 may be installed as one component of the main control unit 120. That is, the voice recognition unit 170 may be understood as a functional component of the main control unit 120 performing a voice recognition function, and the face detection unit 180 may be understood as a functional component of the main control unit 120 performing a face detection function.
Fig. 5 is a flowchart illustrating a control method of a smart home appliance according to a first embodiment of the present invention.
Referring to Fig. 5, in controlling a smart home appliance according to the first embodiment of the present invention, a voice recognition service may be set to be turned on. The voice recognition service is understood as a service controlling an operation of the air conditioner 10 by inputting a voice command. For example, in order to turn on the voice recognition service, a predetermined input unit (not shown) may be manipulated. Of course, when a user does not want to use a voice recognition service, it may be set to be turned off in operation S11.
Then, a user speaks a predetermined voice command in operation S12. The spoken voice is collected through the voice input unit 110. It is determined whether keyword information for activating voice recognition is included in the collected voice.
The keyword information, as first voice information stored in the memory unit 130, is understood as information that a user may input to start a voice recognition service. That is, even when the voice recognition service is set to be turned on, the keyword information may be inputted in order to represent a user’s intention of using a voice recognition service after the current time.
For example, the keyword information may include pre-mapped information such as “turn on air conditioner” or “voice recognition start”. In such a manner, by inputting the keyword information at the beginning of using a voice recognition service, the user is provided with time to prepare, for example, to quiet surrounding noise or conversation.
When it is recognized that the keyword information is included in a voice command inputted through the voice input unit 110, it is recognized whether a user truly has an intention for voice command.
On the other hand, when it is recognized that the keyword information is not included in the voice command, a request message for re-inputting keyword information may be outputted from the output unit 160 of the air conditioner 10.
Whether a user truly has an intention for voice command may be determined based on whether the user’s face is detected for more than a setting time through the camera 50. For example, the setting time may be 2 sec to 3 sec. When a user stares at the camera 50 at the front of the air conditioner 10, the camera 50 captures the user’s face and transmits it to the face detection unit 180.
Then, the face detection unit 180 may recognize whether a user stares at the camera 50 for a setting time. When the face detection unit 180 recognizes the user’s staring state, the main control unit 120 determines that the user has an intention on voice command and enters a voice recognition standby state in operations S14 and S15.
As the voice recognition standby state is entered, a filtering process is performed on all voice information collected through the voice input unit 110 of the air conditioner 10, and then whether there is a voice command is recognized. The voice recognition standby state may be terminated when a user inputs keyword information for voice recognition termination as a voice or manipulates an additional input unit in operation S16.
In such a way, since whether to activate a voice recognition service is determined by recognizing keyword information from a voice input and by detecting a user’s face, voice misrecognition may be prevented.
Although it is described with reference to Fig. 5 that the user face detection operation is performed after the voice keyword information recognition operation, conversely, the voice keyword information recognition operation may be performed after the user face detection operation.
As another example, although it is described with reference to Fig. 5 that the voice recognition standby state is entered when both conditions of the voice keyword information recognition and the user face detection are satisfied, the voice recognition standby state may also be entered when any one of the two conditions is satisfied.
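The standby-entry decision of the first embodiment — keyword recognition combined with face detection, evaluated in either order, jointly or individually — can be sketched as a small predicate. The 2-second setting time follows the example given above; the parameter names and the `require_both` switch are illustrative assumptions.

```python
def should_enter_standby(keyword_recognized, face_detected_seconds,
                         setting_time_s=2.0, require_both=True):
    """Decide whether to enter the voice recognition standby state.

    keyword_recognized:    True when keyword information was found in the voice.
    face_detected_seconds: how long the face detection unit observed the face.
    require_both:          both conditions (as in Fig. 5) or any one condition.
    """
    staring = face_detected_seconds >= setting_time_s  # staring state
    if require_both:
        return keyword_recognized and staring
    return keyword_recognized or staring
```

Because the predicate only combines two booleans, swapping the order in which the keyword recognition and the face detection are performed does not change the outcome, matching the variations described above.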
Hereinafter, a second embodiment of the present invention is described. In this embodiment, since there is a difference from the first embodiment in that a voice recognition service is performed through a mobile device, the difference is mainly described and for the same parts as in the first embodiment, the description and reference numbers of the first embodiment are incorporated.
Fig. 6 is a schematic view illustrating a configuration of a voice recognition system according to a second embodiment of the present invention.
Referring to Fig. 6, the voice recognition system includes a mobile device 200 receiving a user’s voice input and an air conditioner 10 whose operation is controlled based on a voice inputted to the mobile device 200. The air conditioner 10 is just one example of a smart home appliance; thus, the idea of this embodiment is applicable to other smart home appliances. The mobile device 200 may include a smartphone, a remote controller, and a tap book.
The mobile device 200 may include a voice input unit 220 allowing a voice input, an input unit 210 allowing a manipulation, a display unit equipped at the front part to display information on an operation state of the mobile device 200 or information provided from the mobile device 200, and a movement detection unit 230 detecting a movement of the mobile device 200.
The voice input unit 220 may include a microphone. The voice input unit 220 may be disposed at a lower part of the mobile device 200. The input unit 210 may include a button that a user may press or a touch panel that a user may touch.
Then, the movement detection unit 230 may include an acceleration sensor or a gyro sensor. The acceleration sensor or the gyro sensor may detect information on an inclined angle of the mobile device 200, for example, an inclined angle with respect to the ground.
For example, when a user stands the mobile device 200 up in order to stare at the display unit 260 of the mobile device 200 and lays the mobile device 200 down in order to input a voice command through the voice input unit 220, the acceleration sensor or the gyro sensor may detect a different angle value according to the standing degree of the mobile device 200.
As another example, the movement detection unit 230 may include an illumination sensor. The illumination sensor may detect the intensity of external light collected according to the standing degree of the mobile device 200. For example, when a user stands the mobile device 200 up in order to stare at the display unit 260 and lays the mobile device 200 down in order to input a voice command through the voice input unit 220, the illumination sensor may detect a different light intensity according to the standing degree of the mobile device 200.
The mobile device 200 further includes a control unit 250 receiving information from the input unit 210, the voice input unit 220, the display unit 260, and the movement detection unit 230 and recognizing a user’s voice command input intention, and a communication module 270 communicating with the air conditioner 10.
For example, a communication module of the air conditioner 10 and the communication module 270 of the mobile device 200 may directly communicate with each other. That is, direct communication is possible without going through a wireless access point by using a Wi-Fi Direct technique, an Ad-Hoc mode (or network), or Bluetooth.
In more detail, Wi-Fi Direct may mean a technique for communicating at high speed by using communication standards such as 802.11a, b, g, and n regardless of the installation of an access point. This technique is understood as a communication technique for connecting the air conditioner 10 and the mobile device 200 wirelessly by using Wi-Fi without an Internet network.
The Ad-Hoc mode (or Ad-Hoc network) is a communication network including only mobile hosts without a fixed wired network. Since there are no limitations on the movement of a host and it requires neither a wired network nor a base station, fast network configuration is possible at low cost. That is, wireless communication is possible without the access point. Accordingly, in the Ad-Hoc mode, wireless communication is possible between the air conditioner 10 and the mobile device 200 without the access point.
In the Bluetooth communication, as a short-range wireless communication method, wireless communication is possible within a specific range through a pairing process between a communication module (a first Bluetooth module) of the air conditioner 10 and the communication module 270 (a second Bluetooth module) of the mobile device 200.
As another example, the air conditioner 10 and the mobile device 200 may communicate with each other through an access point and a server (not shown) or a wired network. When a user's voice command intention is recognized, the control unit 250 may transmit to the air conditioner 10, through the communication module 270, information indicating that the voice recognition service standby state is entered.
Fig. 8 is a flowchart illustrating a control method of a smart home appliance according to a second embodiment of the present invention. Fig. 9 is a view when a user performs an action for starting a voice recognition by using a mobile device according to a second embodiment of the present invention.
Referring to Fig. 8, in controlling a smart home appliance according to the second embodiment of the present invention, a voice recognition service may be set to be turned on in operation S21.
Then, a user may input or speak a command for a voice recognition standby preparation state through a manipulation of the input unit 210 or the voice input unit 220. When an input of the input unit 210 is recognized or it is recognized that keyword information for voice recognition standby is included in a voice collected through the voice input unit 220, it is determined that "voice recognition standby preparation state" is entered.
As described in the first embodiment, the keyword information, as first voice information stored in the memory unit 130, is understood as information that a user may input to start a voice recognition service in operations S22 and S23.
When it is recognized that "the voice recognition standby preparation state" is entered, whether a user truly has an intention on voice command is recognized. On the other hand, when it is not recognized that "the voice recognition standby preparation state" is entered, operation S22 and the following operations may be performed again.
Whether a user truly has an intention on voice command may be determined based on whether a detection value is changed in the movement detection unit 230. For example, when the movement detection unit 230 includes an acceleration sensor or a gyro sensor, whether a value detected by the acceleration sensor or the gyro sensor is changed is recognized. The change may depend on whether an inclined value (or range) at which the mobile device 200 stands changes into an inclined value (or range) at which the mobile device 200 lies.
As shown in Fig. 9, while a user stares at the display unit 260 while gripping the mobile device 200, the mobile device 200 may stand up somewhat. At this point, the angle that the mobile device 200 makes with the ground may be α1.
On the other hand, when a user inputs a predetermined voice command through the voice input unit 220 disposed at a lower part of the mobile device 200 while gripping the mobile device 200, the mobile device 200 may lie down somewhat. At this point, the angle that the mobile device 200 makes with the ground may be α2. Then, α1 > α2. Values for α1 and α2 may be predetermined within predetermined setting ranges.
When a value detected by the acceleration sensor or the gyro sensor changes from a setting range of α1 into a setting range of α2, it is recognized that a user puts the voice input unit 220 close to the user’s mouth. In this case, it is recognized that the user has an intention on voice command.
As another example, when the movement detection unit 230 includes an illumination sensor, it is recognized whether a value detected by the illumination sensor is changed. The change may depend on whether the intensity of light (first intensity) collected when the mobile device 200 stands changes into the intensity of light (second intensity) collected when the mobile device 200 lies.
Herein, the second intensity may be greater than the first intensity. That is, the intensity of light collected from the outside when the mobile device 200 lies may be greater than the intensity of light collected from the outside when the mobile device 200 stands. Herein, values for the first intensity and the second intensity may be predetermined within predetermined setting ranges.
When it is detected that a value detected by the illumination sensor changes from a setting range of the first intensity into a setting range of the second intensity, it is determined that a user puts the voice input unit 220 closer to the user’s mouth. In this case, it is recognized that the user has an intention on voice command in operations S24 and S25.
Through such a method, when it is recognized that the user has an intention on voice command, the voice recognition standby state may be entered. As the voice recognition standby state is entered, a filtering process is performed on all voice information collected through the voice input unit 220 of the mobile device 200, and then whether there is a voice command is recognized.
The voice recognition standby state may be terminated when a user inputs keyword information for voice recognition termination as a voice or manipulates an additional input unit. In such a way, since whether to activate a voice recognition service is determined by recognizing keyword information from a simple voice input and by detecting a movement of the mobile device, voice misrecognition may be prevented in operation S26.
Although it is described with reference to Fig. 8 that the movement detection operation of the mobile device is performed after the user’s button (touch) input or voice input operation, the user’s button (touch) input or voice input operation may be performed after the movement detection operation of the mobile device.
As another example, although it is described with reference to Fig. 8 that the voice recognition standby state is entered when both conditions of the user’s button (touch) input or voice input operation and the movement detection operation of the mobile device are satisfied, the voice recognition standby state may also be entered when any one of the two conditions is satisfied.
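The tilt-based intention detection of the second embodiment — a change from the standing setting range of α1 into the lying setting range of α2 — can be sketched as follows. The numeric ranges are illustrative assumptions; the description above only requires α1 > α2 and that both values be predetermined within setting ranges.

```python
def has_voice_command_intention(angle_before_deg, angle_after_deg,
                                standing_range=(60.0, 90.0),   # assumed alpha-1 range
                                lying_range=(0.0, 30.0)):      # assumed alpha-2 range
    """Infer a voice command intention from the mobile device's tilt change.

    A change from the standing setting range (alpha-1, staring at the display
    unit) into the lying setting range (alpha-2, voice input unit near the
    mouth) is taken as the user's intention on voice command.
    """
    was_standing = standing_range[0] <= angle_before_deg <= standing_range[1]
    now_lying = lying_range[0] <= angle_after_deg <= lying_range[1]
    return was_standing and now_lying
```

The same shape of check applies to the illumination-sensor variant, with the first and second light intensities substituted for the two angle ranges.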
Fig. 10 is a view illustrating a configuration of a plurality of smart home appliances according to a third embodiment of the present invention. Fig. 11 is a view when a user makes a voice on a plurality of smart home appliances according to a third embodiment of the present invention.
Referring to Fig. 10, a voice recognition system 10 according to the third embodiment of the present invention includes a plurality of voice recognition available smart home appliances 310, 320, 330, and 340. For example, the plurality of smart home appliances 310, 320, 330, and 340 may include a cleaner 310, a cooker 320, an air conditioner 330, and a refrigerator 340.
The plurality of smart home appliances 310, 320, 330, and 340 may be in a standby state for receiving a voice. The standby state may be entered when a user sets a voice recognition mode in each smart home appliance. Then, the setting of the voice recognition mode may be accomplished by an input of a predetermined input unit or an input of a set voice.
The plurality of smart home appliances 310, 320, 330, and 340 may be disposed together in a predetermined space. In this case, even when a user speaks a predetermined voice command toward a specific one among the plurality of smart home appliances 310, 320, 330, and 340, another home appliance may react to the voice command. Accordingly, this embodiment is characterized in that when a user makes a predetermined voice, a target home appliance to be commanded is estimated or determined appropriately.
In more detail, referring to Fig. 11, each of the smart home appliances 310, 320, 330, and 340 includes a voice input unit 510, a voice recognition unit 520, and a command recognition unit 530.
The voice input unit 510 may collect voices that a user makes. For example, the voice input unit 510 may include a microphone. The voice recognition unit 520 extracts a text from the collected voice. The command recognition unit 530 determines, by using the extracted text, whether there is a text in which a specific word related to an operation of each home appliance is used. The command recognition unit 530 may include a memory storing information related to the specific word.
If a voice in which the specific word is used is included in the collected voice, the command recognition unit 530 may recognize that a corresponding home appliance is a home appliance that is a user's command target. The voice recognition unit 520 and the command recognition unit 530 are functionally distinguished and described but may be equipped inside one controller.
A home appliance recognized as a command target may output a message asking whether it is the user's command target. For example, when the home appliance recognized as a command target is the air conditioner 330, a voice or text message "turn on air conditioner?" may be outputted. Herein, the outputted voice or text message is referred to as a "recognition message".
Accordingly, when an operation of the air conditioner 330 is desired, a user may input a recognition or confirmation message indicating that the air conditioner is the target, for example, a concise message "air conditioner operation" or "OK". Herein, the inputted voice message is referred to as a "confirmation message".
On the other hand, if a voice in which the specific word is used is not included in the collected voice, the command recognition unit 530 may recognize that corresponding home appliances, that is, the cleaner 310, the cooker 320, and the refrigerator 340, are excluded from a user's command target. Then, even when a user's voice is inputted for a setting time after the recognition, the home appliances excluded from the command target are not recognized as the user's command target and do not react to the user's voice.
It is shown in Fig. 11 that when it is recognized that a voice corresponding to the air conditioner 330 among a plurality of home appliances is inputted, a recognition message is outputted and an operation in response to an inputted command is performed.
Moreover, when more than one home appliance is recognized as the command target, as mentioned above, each home appliance may output a recognition message asking whether it is the user's command target. Then, a user may specify a home appliance that is the command target by inputting a voice for the type of home appliance to be commanded among the plurality of home appliances.
For example, when the voice recognition available cleaner 310, cooker 320, air conditioner 330, and refrigerator 340 are together in a home, as a user makes a voice "air conditioning start", the air conditioner 330 recognizes that the specific word "air conditioning" is used and also recognizes that it is itself a command target. Of course, information on the text "air conditioning" may be stored in the memory of the air conditioner 330 in advance.
On the other hand, since the word "air conditioning" is not a specific word of the cleaner 310, the cooker 320, or the refrigerator 340, it is recognized that the home appliances 310, 320, and 340 are excluded from the command target.
As another example, when a user makes the voice "temperature up", the cooker 320, the air conditioner 330, and the refrigerator 340 may recognize that a specific word "temperature" is used. That is, the plurality of home appliances 320, 330, and 340 may recognize that they are the command targets.
At this point, the plurality of home appliances 320, 330, and 340 may each output a message asking whether it is the user's command target. Then, as a user inputs a voice for a specific home appliance, for example, "air conditioner", it is specified that the command target is the air conditioner 330. When a command target is specified in the above manner, an operation of a home appliance may be controlled through an interactive communication between a user and the corresponding home appliance.
In such a manner, when there are a plurality of voice recognition available smart home appliances, a command target may be recognized by extracting the feature (specific word) of a voice that a user makes, and only a specific electronic product among the plurality of electronic products responds according to the recognized command target. Therefore, miscommunication may be prevented during an operation of an electronic product.
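The specific-word matching and disambiguation of the third embodiment can be sketched as follows. The per-appliance vocabulary is an illustrative assumption; only "air conditioning" and "temperature" are drawn from the examples above, and the two-step confirmation is reduced to a follow-up string for brevity.

```python
# Illustrative specific-word tables per appliance; the description fixes only
# "air conditioning" (air conditioner) and "temperature" (cooker, air
# conditioner, refrigerator). The remaining words are assumptions.
SPECIFIC_WORDS = {
    "cleaner": {"suction", "cleaning"},
    "cooker": {"temperature", "cooking"},
    "air conditioner": {"temperature", "air conditioning"},
    "refrigerator": {"temperature", "freezing"},
}

def candidate_targets(extracted_text):
    """Return the appliances whose specific words appear in the extracted text."""
    text = extracted_text.lower()
    return sorted(name for name, specific in SPECIFIC_WORDS.items()
                  if any(word in text for word in specific))

def resolve_target(extracted_text, followup=None):
    """Resolve the command target: a single match wins directly; with several
    matches, a user's follow-up voice (naming the appliance type) decides."""
    candidates = candidate_targets(extracted_text)
    if len(candidates) == 1:
        return candidates[0]
    if followup:
        return next((c for c in candidates if c in followup.lower()), None)
    return None  # each candidate outputs a recognition message instead
```

With this sketch, "air conditioning start" resolves immediately to the air conditioner, while "temperature up" matches three appliances and is resolved only after the follow-up voice "air conditioner", mirroring the interaction described above.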
Fig. 12 is a view when a plurality of smart home appliances operate by using a mobile device according to a fourth embodiment of the present invention.
Referring to Fig. 12, a voice recognition system 10 according to the fourth embodiment of the present invention includes a mobile device 400 receiving a user’s voice input, a plurality of home appliances 310, 320, 330, and 340 whose operations are controlled based on a voice inputted to the mobile device 400, and a server 450 communicably connecting the mobile device 400 and the plurality of home appliances 310, 320, 330, and 340.
The mobile device 400 is equipped with the voice input unit 510 described with reference to Fig. 11 and the server 450 includes the voice recognition unit 520 and the command recognition unit 530.
The mobile device 400 may include an application connected to the server 450. Once the application is executed, a voice input mode for a user's voice input may be activated in the mobile device 400.
When a user's voice is inputted through the voice input unit 510 of the mobile device 400, the inputted voice is delivered to the server 450 and the server 450 determines which home appliance is the target of a voice command as the voice recognition unit 520 and the command recognition unit 530 operate.
When a specific home appliance is recognized as the command target on the basis of a determination result, the server 450 notifies the specific home appliance of the recognized result. The home appliance notified of the result responds to a user's command. For example, when the air conditioner 330 is recognized as a command target and notified of the result, it may output a recognition message such as "turn on air conditioner?". Accordingly, a user may input a confirmation message such as "OK" or "air conditioner operation". In relation to this, the contents described with reference to Fig. 11 apply.
Fig. 13 is a view illustrating a configuration of a smart home appliance or a mobile device and an operating method thereof according to an embodiment of the present invention. Configurations shown in Fig. 13 may be equipped in smart home appliances or mobile devices. Hereinafter, smart home appliances will be described for an example.
Referring to Fig. 13, a smart home appliance according to an embodiment of the present invention includes a voice input unit 510 receiving a user’s voice input and a voice recognition unit 520 extracting a text from a voice collected through the voice input unit 510. The voice recognition unit 520 may include a memory unit where the frequency of a voice and a text are mapped.
The smart home appliance may further include a region recognition unit 540 extracting the intonation of a voice inputted from the voice input unit 510 to determine the local color of the voice, that is, which region's dialect is used. The region recognition unit 540 may include a database for dialects used in a plurality of regions. The database may store information on the intonation recognized when speaking in dialect, that is, unique frequency changes.
The text extracted through the voice recognition unit 520 and the information on a region determined through the region recognition unit 540 may be delivered to the control unit 550.
The smart home appliance may further include a memory unit 560 mapping the text extracted by the voice recognition unit 520 and a function corresponding to the text and storing them.
The control unit 550 may recognize a function corresponding to the text extracted by the voice recognition unit 520 on the basis of the information stored in the memory unit 560. Then, the control unit 550 may control a driving unit 590 equipped in the home appliance in order to perform the recognized function.
For example, the driving unit 590 may include a suction motor of a cleaner, a motor or a heater of a cooker, a compressor motor of an air conditioner, or a compressor motor of a refrigerator.
The home appliance further includes a display unit 570 outputting region customized information to a screen or a voice output unit 580 outputting region customized information as a voice, on the basis of a setting function corresponding to the text extracted by the voice recognition unit 520 and the region information determined by the region recognition unit 540. The combined display unit 570 and voice output unit 580 may be referred to as an “output unit”.
That is, the setting function may include a plurality of functions divided according to regions, and one function matching the region information determined by the region recognition unit 540 among the plurality of functions may be outputted. In summary, combined information of the recognized function and the determined local color may be outputted to the display unit 570 or the voice output unit 580 (region customized information providing service).
Moreover, the smart home appliance further includes a mode setting unit 565 for selecting whether to perform a mode for the region customized information providing service. A user may use the region customized information providing service when the mode setting unit 565 is in the “ON” state. Of course, a user may not use the region customized information providing service when the mode setting unit 565 is in the “OFF” state.
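The selection of one region-matched function among the plurality of functions for a setting function can be sketched as a nested lookup. The variant tables echo the beef radish soup and cold-region temperature examples of this embodiment, but their concrete contents (and the "default" fallback) are illustrative assumptions.

```python
# Illustrative per-region variants of two setting functions; entries are
# assumed examples, not a vocabulary fixed by this description.
REGION_VARIANTS = {
    "beef radish soup recipe": {
        "Gyeongsang-do": "Gyeongsang-do style beef radish soup recipe",
        "default": "standard beef radish soup recipe",
    },
    "temperature down": {
        "Gangwon-do": "set temperature to 20 C",  # cold-region preference
        "default": "set temperature to 24 C",
    },
}

def region_customized_function(extracted_text, region, mode_on=True):
    """Combine the recognized function (text) with the determined local color
    (region) and return the one matching function, or None when the mode
    setting unit is OFF or no function corresponds to the text."""
    if not mode_on:
        return None
    variants = REGION_VARIANTS.get(extracted_text)
    if variants is None:
        return None
    return variants.get(region, variants["default"])
```

The combined result is what would be delivered to the display unit 570 or the voice output unit 580; with the mode setting unit in the OFF state, no region customized information is produced.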
Hereinafter, by referring to the drawings, contents of region customized information outputted to the display unit 570 are described.
Fig. 14 is a view illustrating a message output of a display unit according to an embodiment of the present invention.
Referring to Fig. 14, the display unit 570 may be equipped in the cooker 320, the refrigerator 340, or the mobile device 400. Hereinafter, the display unit 570 equipped at the refrigerator 340 is described as an example.
The cooker 320 or the refrigerator 340 may provide a user with information on a recipe for a predetermined dish. In other words, the cooker 320 or the refrigerator 340 may include a memory unit storing recipe information on at least one dish.
When the region customized information providing service is used to obtain recipe information, a user may provide an input when the mode setting unit 565 of the refrigerator 340 is in the ON state.
When the region customized information providing service starts, for example, a guide message such as “What can I help you with?”, that is, a voice input request message, may be displayed on a screen of the display unit 570. Of course, the voice input request message may be outputted as a voice through the voice output unit 580.
As shown in Fig. 14, in response to the voice input request message, a user may input a voice for a specific recipe, for example, a voice “beef radish soup recipe”. The refrigerator 340 receives and recognizes the user’s voice command and extracts a text corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “beef radish soup recipe”.
Then, the refrigerator 340 extracts the intonation from a voice inputted by a user and recognizes a frequency change corresponding to the extracted intonation, so that it may recognize a dialect for a specific region.
For example, when a user inputs a voice “beef radish soup recipe” in a Gyeongsang-do accent, the refrigerator 340 recognizes the Gyeongsang-do dialect and prepares to provide a recipe optimized for the Gyeongsang-do region. That is, since there are a plurality of beef radish soup recipes according to regions, one recipe matching the recognized region, Gyeongsang-do, may be recommended.
As a result, the refrigerator 340 may recognize that a user in the Gyeongsang-do region wants to receive a “beef radish soup recipe” and may then read information on a Gyeongsang-do style beef radish soup recipe to provide it to the user. For example, the display unit 570 may display a message “here is a Gyeongsang-do style red beef radish soup recipe”. In addition, a voice message may be outputted through the voice output unit 580.
Another example is described.
When the smart home appliance is an air conditioner for conditioning an indoor space and recognizes that a user’s region is a cold region such as Gangwon-do, then, as the user inputs a voice command “temperature down”, the smart home appliance may operate to set a relatively low setting temperature, under the assumption that a user in a cold region likes cold weather. Then, contents related to adjusting the setting temperature to a relatively low temperature, for example, 20℃, may be outputted to the output units 570 and 580.
According to such a configuration, without a user’s input for specific information, the dialect that a user speaks is recognized and region customized information is provided on the basis of the recognized dialect information. Therefore, usability may be improved.
Fig. 15 is a view illustrating a message output of a display unit according to another embodiment of the present invention.
Referring to Fig. 15, the display unit 570 according to another embodiment of the present invention may be equipped in the air conditioner 330 or the mobile device 400. When the region customized information providing service is used to input a command for an operation of the air conditioner 330, a user may provide an input when the mode setting unit 565 of the air conditioner 330 is in the ON state.
When the region customized information providing service starts, for example, a guide message such as “what can I help you?”, that is, a voice input request message, may be displayed on a screen of the display unit 570. Of course, the voice input request message may be outputted as a voice through the voice output unit 580.
In response to the voice input request message, a user may input a command on an operation of the air conditioner 330, for example, as shown in Fig. 15, a voice “turn on air conditioner (in dialect)”. The air conditioner 330 receives and recognizes the user’s voice command and extracts a text corresponding to the recognized voice command. Accordingly, the display unit 570 may display the text “turn on air conditioner (in dialect)” on its screen.
Then, the air conditioner 330 extracts the intonation from a voice inputted by a user and recognizes a frequency change corresponding to the extracted intonation, so that it may recognize a dialect for a specific region. For example, when a user inputs a voice “turn on air conditioner” in Jeolla-do accent, the air conditioner 330 may recognize the Jeolla-do dialect and may then generate a response message for a user as the Jeolla-do dialect.
That is, the air conditioner 330 recognizes that a user in the Jeolla-do region wants “air conditioner operation” and reads, from the memory unit 560, dialect information on a message notifying that the air conditioner operation is performed, to provide it to the user. For example, the display unit 570 may output a message in the Jeolla-do dialect, for example, “it is very hot, so I will turn it on quickly (in the Jeolla-do dialect)”. In addition, a voice message may be outputted through the voice output unit 580.
By such a configuration, the dialect that a user speaks is recognized and information to be provided to the user is presented in that dialect on the basis of the recognized dialect information, so that the user may feel a sense of intimacy.
Figs. 16A and 16B are views illustrating a security setting for voice recognition function performance according to a fifth embodiment of the present invention.
Referring to Figs. 16A and 16B, a user’s security setting is possible in the voice recognition system according to the fifth embodiment of the present invention. The security setting may be completed by a smart home appliance directly or by using a mobile device. Hereinafter, for example, security setting and authentication procedures by using a mobile device are described.
When a user wants to use the region customized information providing service, an input may be provided when the mode setting unit 565 in the mobile device 400 is in the ON state. As the input is provided, an operation for setting initial security may be performed.
The mobile device 400 may output a message for a predetermined key word. For example, as shown in the drawing, a key word “calendar” may be outputted as a text through a screen of the mobile device 400 or may be outputted as voice through a speaker.
Then, together with the message for the outputted key word, a first guide message may be outputted. The first guide message includes contents requesting an input of a reply word to the key word. For example, the first guide message may include the content “please speak the word that comes to mind when looking at the next word”. The first guide message may be outputted as a text through a screen of the mobile device 400 or as a voice through a speaker.
In response to the first guide message, a user may input, as a voice, a word to be set as a password. For example, as shown in the drawing, a reply word “cat” may be inputted. When the mobile device 400 recognizes the reply word, a second guide message notifying that the reply word is stored may be outputted through a screen or as a voice.
In such a way, when the region customized information providing service is used after the completion of the security setting, as shown in Fig. 16B, a procedure for performing authentication by inputting the reply word to the key word may be performed.
In more detail, when an input is provided while the mode setting unit 565 in the mobile device 400 is in the ON state, the mobile device 400 outputs a message for the key word, for example, “calendar”, and outputs a third guide message notifying the need for authentication, for example, “user authentication is required for this function”. The message for the key word and the third guide message may be outputted through a screen of the mobile device 400 or as a voice.
In response to the key word, a user may input the predetermined reply word, for example, a voice “cat”. On recognizing that the key word and the reply word match, the mobile device 400 may output a fourth guide message notifying that authentication is successful, for example, a text or voice message “authenticated”.
In such a way, after security is set for using the region customized information providing service, the predetermined reply word is required to be inputted at the usage stage, so that service access and usage by users other than the designated users are limited.
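The key word / reply word flow of Figs. 16A and 16B can be sketched as a small class: enrollment stores the spoken reply word for a key word, and later use of the service requires the matching reply word. The class and method names are invented for illustration, and a simple exact match stands in for voice recognition of the reply.

```python
# Hypothetical sketch of the key word / reply word security flow.
# Enrollment (Fig. 16A) stores the reply word; authentication (Fig. 16B)
# checks a later spoken reply against it.

class VoiceSecurity:
    def __init__(self, key_word):
        self.key_word = key_word      # e.g. "calendar", shown to the user
        self._reply_word = None       # set during initial security setting

    def enroll(self, reply_word):
        """Initial security setting: store the user's reply word."""
        self._reply_word = reply_word.strip().lower()
        return "reply word stored"    # corresponds to the second guide message

    def authenticate(self, spoken_reply):
        """Usage stage: compare the spoken reply against the stored one."""
        if self._reply_word is None:
            return "security not set"
        if spoken_reply.strip().lower() == self._reply_word:
            return "authenticated"    # corresponds to the fourth guide message
        return "authentication failed"
```

For example, enrolling “cat” against the key word “calendar” and later speaking “cat” yields the authenticated state, while any other reply is rejected.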
Fig. 17 is a view illustrating a configuration of a voice recognition system and its operation method according to a sixth embodiment of the present invention.
Referring to Fig. 17, a smart home appliance according to the sixth embodiment of the present invention includes a voice input unit 510 receiving a user’s voice input and a voice recognition unit 520 extracting a language element as a text from voice information collected through the voice input unit 510. The voice recognition unit 520 may include a memory unit where the frequency of a voice and a text are mapped.
The smart home appliance may further include an emotion recognition unit 540 extracting user’s emotion information from the voice information inputted through the voice input unit 510. The emotion recognition unit 540 may include a database where information on user’s voice characteristics and information on an emotion state are mapped. The information on user’s voice characteristics may include information on speech spectrum having distinctive characteristics for each user’s emotion.
The speech spectrum represents a distribution according to a voice’s frequency and may be understood as a patterned frequency distribution for each emotion, that is, for emotions such as joy, anger, and sadness. Accordingly, when a user makes a voice with a predetermined emotion, the emotion recognition unit 540 interprets a frequency change to extract the user’s emotion.
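The pattern matching that the emotion recognition unit 540 is described as performing can be sketched as follows, assuming each emotion is stored as a patterned band-energy distribution and the nearest stored pattern wins. The numeric "spectra" are invented placeholders, not real acoustic data.

```python
# Illustrative sketch: per-emotion frequency-distribution patterns and a
# nearest-pattern match. Values are invented for illustration only.

EMOTION_SPECTRA = {
    "joy":     [0.2, 0.5, 0.9],   # more energy in higher bands
    "anger":   [0.9, 0.6, 0.3],   # strong low-band energy
    "sadness": [0.6, 0.3, 0.1],   # flat, low-energy contour
}

def recognize_emotion(spectrum):
    """Match a voice's band-energy profile to the nearest stored pattern."""
    def distance(emotion):
        ref = EMOTION_SPECTRA[emotion]
        return sum((a - b) ** 2 for a, b in zip(spectrum, ref))
    return min(EMOTION_SPECTRA, key=distance)
```

In practice the database described above would hold per-user voice characteristics; the nearest-pattern lookup here stands in for that mapping of frequency change to emotion state.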
The text extracted through the voice recognition unit 520 and the information on an emotion determined through the emotion recognition unit 540 may be delivered to the control unit 550. The smart home appliance may further include a memory unit 560 mapping the text extracted by the voice recognition unit 520 and a function corresponding to the text and storing them.
The control unit 550 may recognize a function corresponding to the text extracted by the voice recognition unit 520 on the basis of the information stored in the memory unit 560. Then, the control unit 550 may control a driving unit 590 equipped in the home appliance in order to perform the recognized function.
The home appliance further includes a display unit 570 outputting user customized information to a screen or a voice output unit 580 outputting the information as a voice, on the basis of a setting function corresponding to the text extracted by the voice recognition unit 520 and the emotion information extracted by the emotion recognition unit 540.
That is, the setting function may include a plurality of functions divided according to user’s emotions and one function matching the emotion information determined by the emotion recognition unit 540 among the plurality of functions may be outputted. In summary, the display unit 570 or the voice output unit 580 may output combined information of a function corresponding to the text and user’s emotion information (user customized information providing service).
Moreover, the smart home appliance further includes a selectable mode setting unit 565 for performing a mode for the user customized information providing service. A user may use the user customized information providing service when the mode setting unit 565 is in the “ON” state. Of course, a user may not use the user customized information providing service when the mode setting unit 565 is in the “OFF” state.
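Combining the first voice information (the extracted text) with the second voice information (the recognized emotion) to select one function among a plurality of per-emotion variants can be sketched as a dispatch table. The mode names echo the examples described with reference to Figs. 18 to 20, but the table itself and the "neutral" fallback are assumptions for illustration.

```python
# Sketch: the text names the setting function; the emotion selects one of
# its variants. Table contents are illustrative, not from the disclosure.

FUNCTION_VARIANTS = {
    "air conditioner start": {
        "anger":   "lowest temperature with direct wind",
        "sadness": "start with aroma function",
        "neutral": "start with default settings",
    },
    "temperature up": {
        "cold":    "set temperature to 26 degrees with indirect wind",
        "neutral": "raise temperature by one degree",
    },
}

def select_function(text, emotion):
    """Pick the variant matching the emotion, else the neutral variant."""
    variants = FUNCTION_VARIANTS[text]
    return variants.get(emotion, variants["neutral"])
```

The fallback to a neutral variant reflects that a recognized emotion merely refines which of the plural functions is recommended; the text alone still yields a usable setting function.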
Figs. 18 to 20 are views illustrating a message output of a display unit according to the sixth embodiment of the present invention. A display unit 570 shown in Figs. 18 to 20 may be equipped in the air conditioner 330 or the mobile device 400. Hereinafter, the display unit 570 equipped at the air conditioner 330 is described as an example.
First, referring to Fig. 18, when the user customized information providing service is used with the air conditioner 330 to condition the air in an indoor space, a user may provide an input when the mode setting unit 565 is in the ON state.
When the user customized information providing service starts, for example, a guide message such as “what can I help you?”, that is, a voice input request message, may be displayed on a screen of the display unit 570. Of course, the voice input request message may be outputted as a voice through the voice output unit 580.
As shown in Fig. 18, in response to the voice input request message, a user may input a specific operation command, for example, a voice “air conditioner start”. The air conditioner 330 receives and recognizes the user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display the text “air conditioner start” on its screen.
Then, the air conditioner 330 may extract emotion information (second voice information) from the voice inputted by the user by operating the emotion recognition unit 540. In more detail, information on a frequency change detected from the user’s voice may be compared to information on a speech spectrum having distinctive characteristics for each emotion. Then, corresponding pieces of information may be matched based on the comparison result and, accordingly, the emotion information that the user’s voice carries may be obtained.
For example, when a user inputs “air conditioner start” with an angry voice, the emotion recognition unit 540 may recognize that a user makes a voice with an angry emotion from a frequency change detected from the user’s voice. The air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode.
That is, as shown in Fig. 18, as an operation of the air conditioner 330 starts according to the user’s command, an operation mode is recommended in consideration of the user’s annoyance with hot weather. For example, the display unit 570 may output a message “oh! very hot? air conditioner start with lowest temperature and direct wind?”
Herein, the direct wind is understood as a mode in which the discharge direction of air is formed directly toward the position of a user detected through the body detection unit 36 of the air conditioner 330. That is, a setting temperature is set to the lowest temperature to perform an air conditioning function and cool wind reaches the user directly, so that the user may feel cool instantly. When a user inputs a voice accepting or selecting the outputted message, for example, “yes”, the air conditioner 330 recognizes this and operates in the recommended mode.
It is described with reference to Fig. 18 that after the air conditioner 330 recommends a specific mode to a user and a user’s recommendation mode acceptance is recognized, the air conditioner 330 operates according to the specific mode. However, unlike this, the recommended mode may operate instantly.
Then, referring to Fig. 19, when the user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice, a user may input a specific command in response to the voice input request message, for example, a voice “temperature up”.
The air conditioner 330 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “temperature up”.
Then, the air conditioner 330 may extract emotion information (second voice information) from the voice inputted by the user by operating the emotion recognition unit 540. For example, when a user inputs “temperature up” with a trembling voice, the emotion recognition unit 540 may recognize, from a frequency change detected from the user’s voice, that the user speaks with a trembling voice due to cold.
The air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode. That is, as shown in Fig. 19, as an operation of the air conditioner 330 starts according to a user’s command, an operation mode is recommended in consideration of a cold trembling state. For example, the display unit 570 may output a message “cold? set air conditioner temperature to 26 degrees. execute action detection indirect wind?”.
Herein, the indirect wind is understood as a mode in which the discharge direction of air is provided indirectly by avoiding the position of the user detected through the body detection unit 36 (see Fig. 2) of the air conditioner 330. That is, a setting temperature rises to a relatively high temperature to perform a cooling or heating function and wind reaches the user indirectly, so that the user may feel pleasant without feeling cold. When a user inputs a voice accepting the outputted message, for example, “yes”, the air conditioner 330 recognizes this and operates in the recommended mode.
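The direct and indirect wind modes can be sketched as a discharge-angle selection, assuming the body detection unit 36 reports the user's angular position (in degrees) relative to the air outlet. The angle convention and the offset value are invented for illustration.

```python
# Minimal sketch: direct wind aims at the detected user position; indirect
# wind steers away from it. The 30-degree offset is an assumption.

INDIRECT_OFFSET = 30.0  # degrees to steer away from the user (illustrative)

def discharge_angle(user_angle, mode):
    """Return the louver angle (0-180 degrees) for the given wind mode."""
    if mode == "direct":
        return user_angle
    if mode == "indirect":
        # Steer away from the detected position, staying within range.
        shifted = user_angle + INDIRECT_OFFSET
        return shifted if shifted <= 180.0 else user_angle - INDIRECT_OFFSET
    raise ValueError("unknown mode: " + mode)
```

A real controller would also account for room geometry and fan speed; this only illustrates the toward-versus-away distinction drawn in the text.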
Then, referring to Fig. 20, when the user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice, a user may input a specific command in response to the voice input request message, for example, a voice “air conditioner start”.
The air conditioner 330 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “air conditioner start”.
Then, the air conditioner 330 may extract emotion information (second voice information) from the voice inputted by the user by operating the emotion recognition unit 540. For example, when a user inputs “air conditioner start” with a sad voice, the emotion recognition unit 540 may recognize that the user makes a voice with a sad emotion from a frequency change detected from the user’s voice.
The air conditioner 330 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific operation mode. That is, as shown in Fig. 20, as an operation of the air conditioner 330 starts according to a user’s command, an operation mode is recommended in consideration of a user’s sad emotion state. For example, the display unit 570 may output a message “air conditioner start. sir, use aroma function for refresh. start aroma function?”.
Herein, the aroma function is understood as a function through which a capsule inserted into the capsule injection device 60 (see Fig. 2) of the air conditioner 330 acts so that wind with an aroma fragrance is discharged. That is, the air conditioner 330 recognizes a user’s sad emotion and then diffuses an aroma fragrance for refreshment into the indoor space. When a user inputs a voice accepting the outputted message, for example, “yes”, the air conditioner 330 recognizes this and operates in the recommended mode.
Figs. 21A to 23 are views illustrating a message output of a display unit according to another embodiment of the present invention. A display unit 570 shown in Figs. 21A to 23 may be equipped in the cooker 320, the refrigerator 340, or the mobile device 400. Hereinafter, the display unit 570 equipped at the refrigerator 340 is described as an example.
First, referring to Fig. 21A, when the user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice, a user may input a specific command in response to the voice input request message, for example, a voice “recipe search”.
The refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”. Then, the refrigerator 340 may extract emotion information (second voice information) from a voice inputted from a user by operating the emotion recognition unit 540.
For example, when a user inputs “recipe search” with a sad voice, the emotion recognition unit 540 may recognize that a user makes a voice with a sad emotion from a frequency change detected from the user’s voice.
The refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a predetermined function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 21A, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of a sad emotion. For example, the display unit 570 may output a message “feel depressed? eat sweet food then you feel better. sweet food recipe search?” When a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “sweet food recipe search”.
Fig. 21B is similar to Fig. 21A in terms of scenario. However, when a user rejects the specific recipe that the refrigerator 340 recommends, for example, when a user inputs a voice “no” rejecting the message “sweet food recipe search?” outputted by the display unit 570, a voice input request message “speak food ingredients to search” may be outputted.
As shown in Fig. 21B, a voice input request message “what can I help you?” may be defined as “first message” and a voice input request message “tell food ingredients to search” may be defined as “second message”. A user may input a voice for another selectable function with respect to the second message, that is, another food ingredient, to receive information on a desired recipe.
Then, referring to Fig. 22, when the user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice, a user may input a specific command in response to the voice input request message, for example, a voice “recipe search”.
The refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”.
Then, the refrigerator 340 may extract emotion information (second voice information) from the voice inputted by the user by operating the emotion recognition unit 540. For example, when a user inputs “recipe search” with an angry voice, the emotion recognition unit 540 may recognize that the user makes a voice with an angry emotion from a frequency change detected from the user’s voice.
The refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 22, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of an angry emotion. For example, the display unit 570 may output a message “are you angry with empty stomach? fast cook food recipe search?”.
When a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “fast cook food recipe search”. Of course, in the case of Fig. 22, as described with reference to Fig. 21B, when a user rejects a recommended recipe, the refrigerator 340 may output a second message to receive a desired specific recipe from a user.
Then, referring to Fig. 23, when the user customized information providing service starts and a voice input request message is outputted to a screen of the display unit 570 or as a voice, a user may input a specific command in response to the voice input request message, for example, a voice “recipe search”.
The refrigerator 340 receives and recognizes a user’s voice command and extracts a text (first voice information) corresponding to the recognized voice command. Accordingly, the display unit 570 may display a screen “recipe search”.
Then, the refrigerator 340 may extract emotion information (second voice information) from the voice inputted by the user by operating the emotion recognition unit 540. For example, when a user inputs “recipe search” with a happy voice, the emotion recognition unit 540 may recognize that the user makes a voice with a happy emotion from a frequency change detected from the user’s voice.
The refrigerator 340 combines the first voice information and the second voice information to recognize a function that a user wants and recommend a function matching a user’s emotion, that is, a specific recipe. That is, as shown in Fig. 23, as a recipe is searched according to a user’s command, one of a plurality of recipes is recommended in consideration of a happy emotion. For example, the display unit 570 may output a message “feel good? make special food. special food recipe search?”.
When a user inputs the acceptance voice for the outputted message, for example, “yes”, the refrigerator 340 recognizes this and recommends a specific recipe while outputting a message “special food recipe search”. Of course, in the case of Fig. 23, as described with reference to Fig. 21B, when a user rejects a recommended recipe, the refrigerator 340 may output a second message to receive a desired specific recipe from a user.
In such a way, since a smart home appliance extracts emotion information from a user’s voice and recommends a specific function matching the user’s emotion among a plurality of functions, instead of simply extracting a text from the user’s voice command and performing a set function, user convenience may be increased and product satisfaction may be improved.
Fig. 24 is a block diagram illustrating a configuration of an air conditioner as one example of a smart home appliance according to a seventh embodiment of the present invention. Hereinafter, although an air conditioner is described as one example of a smart appliance, it should be clear in advance that the ideas related to a voice recognition or communication (information offer) procedure except for the unique setting functions of an air conditioner may be applied to other smart home appliances.
Referring to Fig. 24, an air conditioner 600 according to the seventh embodiment of the present invention includes a plurality of communication units 680 and 690 communicating with an external device. The plurality of communication units 680 and 690 include a first communication module 680 communicating with the server 700 and a position information reception unit 690 receiving information on the position of the air conditioner 600 from a position information transmission unit 695.
The first communication module 680 may communicate with the second communication module 780 of the server 700 in a wired or wireless manner. For example, the first communication module 680 of the air conditioner 600 and the second communication module 780 of the server 700 may communicate with each other directly, through an access point, or through a wired network. In this embodiment, there is no limitation in the communication method between the first communication module 680 of the air conditioner 600 and the second communication module 780 of the server 700.
Moreover, each of the first communication module 680 of the air conditioner 600 and the second communication module 780 of the server 700 may have a unique internet protocol (IP) address. Accordingly, when the first communication module 680 and the second communication module 780 are communicably connected to each other, the server 700 may recognize the installed position or region of the air conditioner 600 by recognizing the first communication module 680.
The position information reception unit 690 may be a GPS reception unit, for example. Then, the position information transmission unit 695 is configured to transmit information on the position of the position information reception unit 690, or the air conditioner 600, to the position information reception unit 690. For example, the information on position may include a position coordinate value.
The position information transmission unit 695 may be a GPS satellite or a communication base station. The position information reception unit 690 may transmit a predetermined signal to the position information transmission unit 695 periodically or at a specific time point, and the position information transmission unit 695 may transmit the information on position to the position information reception unit 690. In that the first communication module 680 and the position information reception unit 690 recognize the position or region of the air conditioner 600, they may be referred to as a “position information recognition unit”.
The server 700 may further include a server memory 770. The server memory 770 may store information necessary for an operation of the air conditioner 600, for example, information on the position (region) of the air conditioner 600 or the first communication module 680, or weather information corresponding to the position (region). When the first communication module 680 and the second communication module 780 are connected communicably, information stored in the server memory 770 may be transmitted to the air conditioner 600.
In such a manner, information on the position of the air conditioner 600 may be recognized based on either information on the communication address obtained when the first communication module 680 and the second communication module 780 are connected communicably, or information received from the position information reception unit 690.
For example, when the first communication module 680 and the second communication module 780 are not connected communicably, the air conditioner 600 may receive information on the position through the position information reception unit 690. On the other hand, when the position information reception unit 690 is not in the ON state, the air conditioner 600 may receive the information on position through a communication connection of the first and second communication modules 680 and 780.
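The two position recognition paths above (the position information reception unit 690 when it is available, otherwise the region registered on the server 700 for the first communication module's address) can be sketched as a simple fallback. The registry contents and the function signature are assumptions for illustration.

```python
# Sketch of the two position-recognition paths: prefer the GPS-style
# reception unit when it is on; otherwise fall back to the region the
# server associates with the appliance's IP address. Invented examples.

IP_REGION_REGISTRY = {
    "203.0.113.10": "Gyeongsang-do",
    "203.0.113.20": "Jeolla-do",
}

def recognize_position(gps_on, gps_region, ip_address):
    """Return the recognized region, using GPS first and IP as fallback."""
    if gps_on and gps_region is not None:
        return gps_region
    return IP_REGION_REGISTRY.get(ip_address, "unknown region")
```

Either path yields the same kind of result (an installation region), which is why the text groups both units under the name "position information recognition unit".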
Fig. 25 is a flowchart illustrating a control method of a smart home appliance according to a seventh embodiment of the present invention.
Referring to Fig. 25, in controlling a smart home appliance according to the seventh embodiment of the present invention, a position recognition service may be set to be turned on. The position recognition service is understood as a service in which the installed position or region of the air conditioner 600 is recognized when the first and second communication modules 680 and 780 are communicably connected or the position information reception unit 690 receives position information and a function of a smart home appliance is performed based on information on the recognized position or region. Then, an application for using the position recognition service is executed in operations S31 and S32.
A user’s voice command is inputted through the voice input unit 110 (see Fig. 2). Then, information on the position of the smart home appliance is recognized through the communication connection of the first and second communication modules 680 and 780 or through the position information reception unit 690 in operations S33 and S34.
A voice command inputted through the voice input unit 110 may correspond to at least one piece of voice information among a plurality of pieces of voice information stored in the memory unit 130 (see Fig. 3), and the corresponding voice information may be extracted as a text. Then, by using the extracted text, a predetermined setting function among a plurality of setting functions that the home appliance performs may be recognized in operation S35.
Then, as the smart home appliance performs the recognized setting function, information on the position of the smart home appliance may be considered. Then, information on the setting function and information on the position are combined so that predetermined information may be provided to a user.
For example, when the smart home appliance is located in a certain region, information on the setting function may be guided in the language used in the corresponding region, that is, a dialect. As another example, information on a setting function optimized for the position of the smart home appliance may be guided in operation S36. In relation to this, an example of the information on a setting function that a smart home appliance provides is described below.
Fig. 26 is a view illustrating a display unit of a smart home appliance. Fig. 26 illustrates a view when information on a setting function combined with position information is outputted.
Referring to Fig. 26, a message for requesting a voice input may be outputted from an output unit 660 of the air conditioner 600. For example, a message “what can I help you?” may be outputted from the output unit 660. At this point, voice and text messages may be outputted together.
In response to this, a user may speak a voice command “temperature up”. The spoken voice is inputted through the voice input unit 110, filtered, and then delivered to the main control unit 120 (see Fig. 3). The main control unit 120 recognizes the filtered voice as predetermined voice information and outputs it as a text.
Then, through a communication connection with the server 700 or on the basis of information received from the position information reception unit 690, information on the position of the air conditioner 600 may be recognized. The air conditioner 600 may provide a setting function corresponding to the voice command to a user according to the recognized voice command information. At this point, information on the recognized position may be considered.
For example, when the position (region) of the air conditioner 600 is in the Gyeongsang-do region, information on the setting function may be guided in the Gyeongsang-do dialect. For example, a message for performing a function to raise a setting temperature, “raise temperature by one degree?”, may be outputted in the Gyeongsang-do dialect. In response, when a user inputs an acceptance intention, that is, a voice “yes”, the main control unit 120 may control an operation of the driving unit 140 (see Fig. 3) to raise the setting temperature by 1 degree.
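For illustration only, combining the recognized position with the guidance for a setting function may be sketched as below; the region names, message strings, and the `guidance_message` helper are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch: the guidance message for a setting function is chosen
# from a per-region (dialect) message table; a standard-language entry keyed
# with None is used when no dialect entry exists for the recognized region.
DIALECT_MESSAGES = {
    ("RAISE_SET_TEMPERATURE", "Gyeongsang-do"): "raise temperature by one degree?",
    ("RAISE_SET_TEMPERATURE", None): "Shall I raise the temperature by one degree?",
}

def guidance_message(setting_function, region):
    """Return the dialect message for the region, else the standard message."""
    message = DIALECT_MESSAGES.get((setting_function, region))
    if message is None:  # no dialect entry for this region: fall back
        message = DIALECT_MESSAGES[(setting_function, None)]
    return message
```

In this sketch a region without a stored dialect entry simply receives the standard-language guidance.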
Fig. 27 is a view illustrating a configuration of a cooker as another example of a smart home appliance according to a seventh embodiment of the present invention. Figs. 28A and 28B are views illustrating a display unit of the cooker.
Referring to Fig. 27, a cooker 810 according to the seventh embodiment of the present invention includes a voice input unit 812 receiving a user’s voice input, an input unit 814 manipulated for a user’s command input, and an output unit 816 displaying information on an operation state of the cooker 810. The output unit 816 includes a display unit displaying information on a screen and a voice output unit outputting a voice.
The cooker 810 includes the filter unit 115, the memory unit 130, the driving unit 140, the control units 120 and 150, the first communication module 680, and the position information reception unit 690, all of which are described with reference to Fig. 3. Their detailed descriptions are omitted.
Referring to Fig. 28A, a message for requesting a voice input may be outputted from the output unit 816 of the cooker 810. For example, a message “How can I help you?” may be outputted from the output unit 816. At this point, voice and text messages may be outputted together.
In response, a user may speak a voice command “food recipe”. The spoken voice is inputted through the voice input unit 110, filtered, and then delivered to the main control unit 120. The main control unit 120 recognizes the filtered voice as predetermined voice information and outputs it as a text. Then, through a communication connection with the server 700 or on the basis of information received from the position information reception unit 690, information on the position of the cooker 810 may be recognized.
When the main control unit 120 recognizes a user’s voice command, the output unit 816 may output a message for requesting an input of detailed information on a recipe. For example, a message “please input a food type” may be outputted as voice or text. A user may input information on a desired food type, that is, a food keyword, through the input unit 814. For example, a user may input a food keyword “grilled food”.
Once a user’s food keyword input is completed, the cooker 810 may complete a related recipe search and may then output a guide message. For example, a message “Recipe search is completed. Do you want to check the search results?” may be outputted. In response, when a user inputs an acceptance intention, that is, a voice “yes”, the screen may be switched to the screen shown in Fig. 28B.
Referring to Fig. 28B, information on the position and setting function information corresponding to the recognized voice command may be combined and predetermined information may be outputted to the output unit 816 of the cooker 810.
In more detail, among the recipe information on “grilled food” that a user wants, specialty or traditional food recipes of the position (region) of the cooker 810 may be arranged preferentially and outputted to the output unit 816. For example, specialty or traditional food recipes for “grilled food”, that is, “oven-grilled pork roll with fishery”, “assorted grilled seafood”, and “assorted grilled mushroom”, may be arranged at an upper part of the output unit 816 and may each be displayed with a check box. That is, among a plurality of information on a setting function, information optimized for the position recognized by the position information recognition units 680 and 690 may be outputted first to the output unit 816.
Then, general grilled food recipes may be arranged below the specialty or traditional food recipes. When a user selects a desired recipe among the arranged recipes, detailed information on that recipe may be checked. In such a way, since information on a setting function of a home appliance is provided on the basis of a user’s voice command and position information of the home appliance, user’s convenience may be increased.
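For illustration only, the region-prioritized arrangement described above may be sketched as a simple reordering; the recipe names and the `arrange_recipes` helper are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch: recipes whose region tag matches the recognized
# position are arranged first (the upper part of the display), followed by
# general recipes (region tag None or a different region).
def arrange_recipes(recipes, region):
    """recipes: list of (name, region_or_None) pairs; regional ones first."""
    regional = [r for r in recipes if r[1] == region]
    general = [r for r in recipes if r[1] != region]
    return regional + general
```

Only the ordering changes; no recipe is removed from the result.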
Fig. 29 is a view illustrating a configuration of a washing machine as another example of a smart home appliance according to an eighth embodiment of the present invention. Fig. 30 is a flowchart illustrating a control method of a smart home appliance according to the eighth embodiment of the present invention.
Referring to Fig. 29, the smart home appliance according to the eighth embodiment may include a washing machine 820.
The washing machine 820 includes a voice input unit 822 receiving a user’s voice input, an input unit 825 manipulated for a user’s command input, and an output unit 826 displaying information on an operation state of the washing machine 820. The output unit 826 includes a display unit displaying information on a screen and a voice output unit outputting a voice. The washing machine 820 includes the filter unit 115, the memory unit 130, the driving unit 140, the control units 120 and 150, the first communication module 680, and the position information reception unit 690. Their detailed descriptions are omitted.
Referring to Fig. 30, in controlling a smart home appliance, a position recognition service may be set to be turned on. The position recognition service is understood as a service in which the installed position or region of the washing machine 820 is recognized when the first and second communication modules 680 and 780 are communicably connected or the position information reception unit 690 receives position information and a function of a smart home appliance is performed based on information on the recognized position or region. Then, an application for using the position recognition service is executed in operations S41 and S42.
A user's voice command is inputted through the voice input unit 110. Then, information on the position of the smart home appliance is recognized through the communication connection of the first and second communication modules 680 and 780 and the position information reception unit 690 in operations S43 and S44. Then, weather information of the position (region) where the washing machine 820 is installed is received from the server 700 in operation S45.
A voice command inputted through the voice input unit 110 may correspond to at least one voice information among a plurality of voice information stored in the memory unit 130 and the corresponding voice information may be extracted as a text. Then, by using the extracted text, a predetermined setting function among a plurality of setting functions that home appliance performs may be recognized in operation S46.
Then, as the smart home appliance performs the recognized setting function, weather information on the installed region of the smart home appliance may be considered. Then, information on the setting function and information on the weather are combined so that recommendation information related to the setting function may be provided to a user. That is, one information among a plurality of information related to the setting function may be recommended.
For example, when the smart home appliance is the washing machine 820 and the voice command is “laundry start”, a laundry course may be recommended by recognizing weather information on the region where the washing machine 820 is installed. For example, if the weather is rainy or the humidity is high, a strong spin or a drying function may be recommended in operation S47.
As another example, when the smart home appliance is an air conditioner and the voice command is “turn on air conditioner”, a driving course may be recommended by recognizing weather information on the region where the air conditioner is installed. For example, a dehumidifying function may be recommended by receiving humidity information. As another example, when a user sets a bedtime reservation, a recommendation for increasing or decreasing the reservation time may be provided on the basis of the received nighttime temperature.
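For illustration only, the weather-based recommendation of operations S45 to S47 in the two examples above may be sketched as below; the command strings, course names, thresholds, and the `recommend_course` helper are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch of operations S45-S47: weather information received for
# the installed region is combined with the recognized voice command to
# recommend one option among several related to the setting function.
def recommend_course(command, weather):
    """weather: dict, e.g. {"rain": bool, "humidity": percent}."""
    if command == "laundry start":
        # rainy or humid weather: laundry dries poorly, so recommend
        # a strong spin together with the drying function
        if weather.get("rain") or weather.get("humidity", 0) >= 80:
            return "strong spin + drying"
        return "standard wash"
    if command == "turn on air conditioner":
        # high humidity: recommend the dehumidifying function
        if weather.get("humidity", 0) >= 80:
            return "dehumidify"
        return "cooling"
    return None
```

The 80% humidity threshold is an arbitrary illustrative value; the disclosure does not specify one.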
Fig. 31 is a block diagram illustrating a configuration of a voice recognition system according to a ninth embodiment of the present invention.
Referring to Fig. 31, a voice recognition system according to the ninth embodiment of the present invention includes a mobile device 900 receiving a user’s voice input, a plurality of home appliances 810, 820, 830, and 840 operating and controlled based on a voice inputted to the mobile device 900, and a server 950 communicably connecting the mobile device 900 and the plurality of home appliances 810, 820, 830, and 840.
For example, the plurality of smart home appliances 810, 820, 830, and 840 may include a cooker 810, a washing machine 820, a cleaner 830, and an air conditioner 840. The mobile device 900 may include a smartphone, a remote controller, and a tab book.
The mobile device 900 includes a voice input unit 110, a first communication module 918, and a position information reception unit 919. The mobile device 900 further includes an output unit 916 outputting information related to a function performance of the home appliance.
The server 950 may further include a server memory 957 and a second communication module 958. The server memory 957 may store text information mapped to an inputted voice and setting function information corresponding to the text information.
An application connected to the server 950 may be executed in the mobile device 900. Once the application is executed, a voice input mode for user's voice input may be activated in the mobile device 900.
When a user's voice is inputted through the voice input unit 110 of the mobile device 900, the inputted voice is delivered to the server 950 and the server 950 may recognize the inputted voice to transmit a command on a setting function performance to a home appliance corresponding to a voice command. At this point, the server 950 may recognize the position of the mobile device 900 and may then transmit a command on the setting function performance to the home appliance on the basis of information on the recognized position.
Then, the server 950 may transmit the information on the setting function performance to the mobile device 900 and the information may be outputted to the output unit 916 of the mobile device 900. In more detail, information on voice recognition, position recognition, and setting function performance may be outputted to the output unit 916 of the mobile device 900. That is, information described with reference to Figs. 26, 28A and 28B may be outputted to the output unit 916.
In such a way, since a user inputs a voice command or a manipulation through the mobile device 900 and checks information on voice recognition, position recognition, and setting function performance of a home appliance through the output unit 916 of the mobile device 900, user’s convenience may be improved.
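For illustration only, the ninth embodiment’s flow (mobile device forwards the voice, the server recognizes the command and addresses the target appliance, and the performance information is returned for display) may be sketched as below; all class and method names are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch of the ninth embodiment's voice recognition system.
class Server:
    # the server memory maps a recognized text to (target appliance, function)
    COMMANDS = {"laundry start": ("washing machine", "START_WASH")}

    def handle_voice(self, text, position):
        appliance, function = self.COMMANDS[text]
        # a command on the setting function performance would be transmitted
        # to the target appliance here, considering the recognized position
        return {"appliance": appliance, "function": function,
                "position": position}

class MobileDevice:
    def __init__(self, server):
        self.server = server

    def speak(self, text, position):
        # the inputted voice (modeled as text) is delivered to the server,
        # and the returned performance information would be shown on the
        # mobile device's output unit
        return self.server.handle_voice(text, position)
```

The sketch omits the actual communication modules and voice recognition; it only mirrors the division of roles between mobile device, server, and appliance described above.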
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
Claims (30)
- A smart home appliance comprising: a voice input unit collecting a voice; a voice recognition unit recognizing a text corresponding to the voice collected through the voice input unit; a capturing unit collecting an image for detecting a user’s visage; a memory unit mapping the text recognized by the voice recognition unit and a setting function and storing the mapped information; and a control unit determining whether to perform a voice recognition service on the basis of at least one information of image information collected by the capturing unit and voice information collected by the voice input unit.
- The smart home appliance according to claim 1, wherein the control unit comprises a face detection unit recognizing that a user is in a staring state for voice input when image information on a user’s visage is collected for more than a setting time through the capturing unit.
- The smart home appliance according to claim 2, wherein the control unit determines that a voice recognition service standby state is entered when it is recognized that there is keyword information in a voice through the voice input unit and a user is in the staring state through the face detection unit.
- The smart home appliance according to claim 1, further comprising: a filter unit removing a noise sound from the voice inputted through the voice input unit; and a memory unit mapping voice information related to an operation of the smart home appliance and voice information unrelated to an operation of the smart home appliance in the voice inputted through the voice input unit and storing the mapped information.
- The smart home appliance according to claim 1, further comprising: a region recognition unit determining a user’s region on the basis of information on the voice collected through the voice input unit; and an output unit outputting region customized information on the basis of information on a region determined by the region recognition unit and information on the setting function.
- The smart home appliance according to claim 5, wherein the setting function comprises a plurality of functions divided according to regions; and the region customized information including one function matching information on the region among the plurality of functions is outputted through the output unit.
- The smart home appliance according to claim 5, wherein the output unit outputs the region customized information by using a dialect in the region determined by the region recognition unit.
- The smart home appliance according to claim 5, wherein the output unit outputs a key word for security setting and the voice input unit sets a reply word corresponding to the key word.
- The smart home appliance according to claim 1, further comprising an emotion recognition unit and an output unit, wherein the voice recognition unit recognizes a text corresponding to first voice information in the voice collected through the voice input unit; the emotion recognition unit extracts a user’s emotion on the basis of second voice information in the voice collected through the voice input unit; and the output unit outputs user customized information on the basis of information on a user’s emotion determined by the emotion recognition unit and information on the setting function.
- The smart home appliance according to claim 9, wherein the first voice information comprises a language element in the collected voice; and the second voice information comprises a non-language element related to a user’s emotion.
- The smart home appliance according to claim 9, wherein the emotion recognition unit comprises a database where information on user’s voice characteristics and information on an emotion state are mapped; and the information on the user’s voice characteristics comprises information on a speech spectrum having characteristics for each user’s emotion.
- The smart home appliance according to claim 9, wherein the setting function comprises a plurality of functions to be recommended or selected; and the user customized information including one function matching the information on the user’s emotion among the plurality of functions is outputted through the output unit.
- The smart home appliance according to claim 1, further comprising: a position information recognition unit recognizing position information; and an output unit outputting the information on the setting function on the basis of position information recognized by the position information recognition unit.
- The smart home appliance according to claim 13, wherein the position information recognition unit comprises: a GPS reception unit receiving a position coordinate from a position information transmission unit; and a first communication module communicably connected to a second communication module equipped in a server.
- The smart home appliance according to claim 13, wherein the output unit comprises a voice output unit outputting the information on the setting function as a voice, by using a dialect used in the position or region recognized by the position information recognition unit.
- The smart home appliance according to claim 13, wherein the output unit outputs information optimized for a region recognized by the position information recognition unit among a plurality of information on the setting function.
- The smart home appliance according to claim 13, wherein the position information recognized by the position information recognition unit comprises weather information.
- An operating method of a smart home appliance, the method comprising: collecting a voice through a voice input unit; recognizing whether keyword information is included in the collected voice; collecting image information on a user’s visage through a capturing unit equipped in the smart home appliance; and entering a standby state of a voice recognition service on the basis of the image information on the user’s visage.
- The method according to claim 18, wherein when the image information on the user’s visage is collected for more than a setting time, it is recognized that a user is in a staring state for voice input; and when it is recognized that there is keyword information in the voice and the user is in the staring state for voice input, a standby state of the voice recognition service is entered.
- The method according to claim 18, further comprising: determining a user’s region on the basis of information on the collected voice; and driving the smart home appliance on the basis of information on the setting function and information on the determined region.
- The method according to claim 20, further comprising outputting region customized information related to the driving of the smart home appliance on the basis of the information on the determined region.
- The method according to claim 21, wherein the outputting of the region customized information comprises outputting a voice or a screen by using a dialect used in the user’s region.
- The method according to claim 22, further comprising performing a security setting, wherein the performing of the security setting comprises: outputting a predetermined key word; and inputting a reply word in response to the outputted key word.
- The method according to claim 18, further comprising: extracting a user’s emotion state on the basis of information on the collected voice; and recommending an operation mode on the basis of information on the user’s emotion state.
- The method according to claim 18, further comprising: recognizing an installation position of the smart home appliance through a position information recognition unit; and driving the smart home appliance on the basis of information on the installation position.
- The method according to claim 25, wherein the recognizing of the installation position of the smart home appliance comprises receiving GPS coordinate information from a GPS satellite or a communication base station.
- The method according to claim 25, wherein the recognizing of the installation position of the smart home appliance comprises checking a communication address as a first communication module equipped in the smart home appliance is connected to a second communication module equipped in a server.
- A voice recognition system comprising: a mobile device including a voice input unit receiving a voice; a smart home appliance operating based on a voice collected through the voice input unit; and a communication module equipped in each of the mobile device and the smart home appliance, wherein the mobile device comprises a movement detection unit determining whether to enter a standby state of a voice recognition service in the smart home appliance by detecting a movement of the mobile device.
- The system according to claim 28, wherein the movement detection unit comprises an acceleration sensor or a gyro sensor detecting a change in an inclined angle of the mobile device, wherein the voice input unit is disposed at a lower part of the mobile device; and when a user, gripping the mobile device, puts the voice input unit close to the mouth for a voice input, an angle value detected by the acceleration sensor or the gyro sensor is reduced.
- The system according to claim 28, wherein the movement detection unit comprises an illumination sensor detecting an intensity of an external light collected by the mobile device; and when a user, gripping the mobile device, puts the voice input unit close to the mouth for a voice input, an intensity value of a light detected by the illumination sensor is increased.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/103,528 US10269344B2 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
EP24168136.0A EP4387174A3 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
EP14870553.6A EP3080678A4 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
CN201480072279.2A CN105874405A (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
EP20187912.9A EP3761309B1 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
US16/289,558 US20190267004A1 (en) | 2013-12-11 | 2019-02-28 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130153713A KR102188090B1 (en) | 2013-12-11 | 2013-12-11 | A smart home appliance, a method for operating the same and a system for voice recognition using the same |
KR10-2013-0153713 | 2013-12-11 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/103,528 A-371-Of-International US10269344B2 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
US16/289,558 Continuation US20190267004A1 (en) | 2013-12-11 | 2019-02-28 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015088141A1 true WO2015088141A1 (en) | 2015-06-18 |
Family
ID=53371410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2014/010536 WO2015088141A1 (en) | 2013-12-11 | 2014-11-04 | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances |
Country Status (5)
Country | Link |
---|---|
US (2) | US10269344B2 (en) |
EP (3) | EP4387174A3 (en) |
KR (1) | KR102188090B1 (en) |
CN (1) | CN105874405A (en) |
WO (1) | WO2015088141A1 (en) |
US10468020B2 (en) | 2017-06-06 | 2019-11-05 | Cypress Semiconductor Corporation | Systems and methods for removing interference for audio pattern recognition |
CN107403617A (en) * | 2017-06-26 | 2017-11-28 | 合肥美的智能科技有限公司 | Refrigerator, sound control method, computer equipment, readable storage medium storing program for executing |
KR102203720B1 (en) * | 2017-06-26 | 2021-01-15 | 에스케이텔레콤 주식회사 | Method and apparatus for speech recognition |
CN109215643B (en) * | 2017-07-05 | 2023-10-24 | 阿里巴巴集团控股有限公司 | Interaction method, electronic equipment and server |
WO2019013349A1 (en) * | 2017-07-14 | 2019-01-17 | ダイキン工業株式会社 | Air conditioner, air-conditioning system, communication system, control system, machinery control system, machinery management system, and sound information analysis system |
US11005993B2 (en) | 2017-07-14 | 2021-05-11 | Google Llc | Computational assistant extension device |
WO2019032996A1 (en) * | 2017-08-10 | 2019-02-14 | Facet Labs, Llc | Oral communication device and computing architecture for processing data and outputting user feedback, and related methods |
JP6919710B2 (en) * | 2017-09-14 | 2021-08-18 | 株式会社ソシオネクスト | Electronic device control systems, audio output devices and their methods |
KR102455199B1 (en) * | 2017-10-27 | 2022-10-18 | 엘지전자 주식회사 | Artificial intelligence device |
US20190138095A1 (en) * | 2017-11-03 | 2019-05-09 | Qualcomm Incorporated | Descriptive text-based input based on non-audible sensor data |
JP2019109567A (en) * | 2017-12-15 | 2019-07-04 | オンキヨー株式会社 | Electronic apparatus and control program of electric apparatus |
CN110033502B (en) * | 2018-01-10 | 2020-11-13 | Oppo广东移动通信有限公司 | Video production method, video production device, storage medium and electronic equipment |
CN108198553B (en) | 2018-01-23 | 2021-08-06 | 北京百度网讯科技有限公司 | Voice interaction method, device, equipment and computer readable storage medium |
US10636416B2 (en) * | 2018-02-06 | 2020-04-28 | Wistron Neweb Corporation | Smart network device and method thereof |
JP7281683B2 (en) * | 2018-02-22 | 2023-05-26 | パナソニックIpマネジメント株式会社 | VOICE CONTROL INFORMATION OUTPUT SYSTEM, VOICE CONTROL INFORMATION OUTPUT METHOD AND PROGRAM |
KR20190102509A (en) * | 2018-02-26 | 2019-09-04 | 삼성전자주식회사 | Method and system for performing voice commands |
CN108600511A (en) * | 2018-03-22 | 2018-09-28 | 上海摩软通讯技术有限公司 | The control system and method for intelligent sound assistant's equipment |
KR20190114321A (en) * | 2018-03-29 | 2019-10-10 | 삼성전자주식회사 | Electronic device and control method thereof |
CN108600911B (en) * | 2018-03-30 | 2021-05-18 | 联想(北京)有限公司 | Output method and electronic equipment |
CN111903194B (en) * | 2018-04-02 | 2024-04-09 | 昕诺飞控股有限公司 | System and method for enhancing voice commands using connected lighting systems |
KR102443052B1 (en) * | 2018-04-13 | 2022-09-14 | 삼성전자주식회사 | Air conditioner and method for controlling air conditioner |
CN110377145B (en) * | 2018-04-13 | 2021-03-30 | 北京京东尚科信息技术有限公司 | Electronic device determination method, system, computer system and readable storage medium |
US10566010B2 (en) | 2018-04-20 | 2020-02-18 | Spotify Ab | Systems and methods for enhancing responsiveness to utterances having detectable emotion |
US10621983B2 (en) | 2018-04-20 | 2020-04-14 | Spotify Ab | Systems and methods for enhancing responsiveness to utterances having detectable emotion |
US10622007B2 (en) * | 2018-04-20 | 2020-04-14 | Spotify Ab | Systems and methods for enhancing responsiveness to utterances having detectable emotion |
CN112313924A (en) | 2018-05-07 | 2021-02-02 | 谷歌有限责任公司 | Providing a composite graphical assistant interface for controlling various connected devices |
CN108877334B (en) * | 2018-06-12 | 2021-03-12 | 广东小天才科技有限公司 | Voice question searching method and electronic equipment |
CN108927809A (en) * | 2018-06-21 | 2018-12-04 | 佛山市豪洋电子有限公司 | A kind of family's shoes robot |
CN110719544A (en) * | 2018-07-11 | 2020-01-21 | 惠州迪芬尼声学科技股份有限公司 | Method for providing VUI specific response and application thereof in intelligent sound box |
CN108882454B (en) * | 2018-07-20 | 2023-09-22 | 佛山科学技术学院 | Intelligent voice recognition interactive lighting method and system based on emotion judgment |
KR102635434B1 (en) * | 2018-08-07 | 2024-02-07 | 엘지전자 주식회사 | Controlling method for preventing accident performed by home appliance and cloud server using artificial intelligence |
US20210327435A1 (en) * | 2018-09-06 | 2021-10-21 | Nec Corporation | Voice processing device, voice processing method, and program recording medium |
CN110968774B (en) * | 2018-09-29 | 2023-04-14 | 宁波方太厨具有限公司 | Intelligent menu recommendation method based on voice recognition |
US10942637B2 (en) * | 2018-10-09 | 2021-03-09 | Midea Group Co., Ltd. | Method and system for providing control user interfaces for home appliances |
JP7242248B2 (en) * | 2018-10-31 | 2023-03-20 | キヤノン株式会社 | ELECTRONIC DEVICE, CONTROL METHOD AND PROGRAM THEREOF |
JP7202853B2 (en) * | 2018-11-08 | 2023-01-12 | シャープ株式会社 | refrigerator |
JP7220058B2 (en) * | 2018-11-15 | 2023-02-09 | 東芝ライフスタイル株式会社 | Refrigerator voice interaction device, and refrigerator |
US11233671B2 (en) * | 2018-11-28 | 2022-01-25 | Motorola Mobility Llc | Smart internet of things menus with cameras |
CN109933782B (en) * | 2018-12-03 | 2023-11-28 | 创新先进技术有限公司 | User emotion prediction method and device |
CN109859751A (en) * | 2018-12-03 | 2019-06-07 | 珠海格力电器股份有限公司 | A method of it controlling equipment and its executes instruction |
US11393478B2 (en) * | 2018-12-12 | 2022-07-19 | Sonos, Inc. | User specific context switching |
KR102570384B1 (en) * | 2018-12-27 | 2023-08-25 | 삼성전자주식회사 | Home appliance and method for voice recognition thereof |
CN109599112B (en) * | 2019-01-02 | 2021-07-06 | 珠海格力电器股份有限公司 | Voice control method and device, storage medium and air conditioner |
CN111490915A (en) * | 2019-01-29 | 2020-08-04 | 佛山市顺德区美的电热电器制造有限公司 | Method and system for controlling intelligent household electrical appliance through voice |
KR20200100367A (en) * | 2019-02-18 | 2020-08-26 | 삼성전자주식회사 | Method for providing rountine and electronic device for supporting the same |
CN109991867A (en) * | 2019-04-16 | 2019-07-09 | 彭雪海 | A kind of smart home system with face recognition |
JP7241601B2 (en) * | 2019-05-21 | 2023-03-17 | リンナイ株式会社 | heating system |
WO2020241911A1 (en) * | 2019-05-28 | 2020-12-03 | 엘지전자 주식회사 | Iot device-controlling apparatus and control method of apparatus |
KR102323656B1 (en) * | 2019-06-04 | 2021-11-08 | 엘지전자 주식회사 | Apparatus and method for controlling operation of home appliance, home appliance and method for operating of home appliance |
CN112152667A (en) | 2019-06-11 | 2020-12-29 | 华为技术有限公司 | Method and device for identifying electric appliance |
DE102019134874A1 (en) * | 2019-06-25 | 2020-12-31 | Miele & Cie. Kg | Method for operating a device by a user by means of voice control |
US10976432B2 (en) * | 2019-06-28 | 2021-04-13 | Synaptics Incorporated | Acoustic locationing for smart environments |
US11508375B2 (en) * | 2019-07-03 | 2022-11-22 | Samsung Electronics Co., Ltd. | Electronic apparatus including control command identification tool generated by using a control command identified by voice recognition identifying a control command corresponding to a user voice and control method thereof |
KR20190092332A (en) | 2019-07-19 | 2019-08-07 | 엘지전자 주식회사 | Smart lighting and method for operating the same |
CN110491379B (en) * | 2019-07-22 | 2021-11-26 | 青岛海信日立空调系统有限公司 | Voice control method and voice controller of household appliance and air conditioner |
US11695809B2 (en) | 2019-07-29 | 2023-07-04 | Samsung Electronics Co., Ltd. | System and method for registering device for voice assistant service |
KR20210017087A (en) * | 2019-08-06 | 2021-02-17 | 삼성전자주식회사 | Method for recognizing voice and an electronic device supporting the same |
IT201900017000A1 (en) * | 2019-09-23 | 2021-03-23 | Candy Spa | Method and system for controlling and / or communicating with an appliance by means of voice commands and textual displays |
KR102629796B1 (en) * | 2019-10-15 | 2024-01-26 | 삼성전자 주식회사 | An electronic device supporting improved speech recognition |
US11743070B2 (en) | 2019-12-11 | 2023-08-29 | At&T Intellectual Property I, L.P. | Variable information communication |
TWI732409B (en) * | 2020-01-02 | 2021-07-01 | 台灣松下電器股份有限公司 | Smart home appliance control method |
CN113154783A (en) * | 2020-01-22 | 2021-07-23 | 青岛海尔电冰箱有限公司 | Refrigerator interaction control method, refrigerator and computer readable storage medium |
US11966964B2 (en) * | 2020-01-31 | 2024-04-23 | Walmart Apollo, Llc | Voice-enabled recipe selection |
KR20210147678A (en) * | 2020-05-29 | 2021-12-07 | 엘지전자 주식회사 | Artificial intelligence device |
CN112180747A (en) * | 2020-09-28 | 2021-01-05 | 上海连尚网络科技有限公司 | Method and equipment for adjusting intelligent household equipment |
CN112562662A (en) * | 2020-11-09 | 2021-03-26 | 金茂智慧科技(广州)有限公司 | Intelligent household appliance control equipment capable of realizing semantic understanding |
CN112712683B (en) * | 2020-12-14 | 2022-06-14 | 珠海格力电器股份有限公司 | Control method and system of household appliance, remote controller and server |
CN112667762B (en) * | 2020-12-25 | 2023-04-25 | 贵州北斗空间信息技术有限公司 | Method for quickly constructing GIS system by zero programming |
CN112735462B (en) * | 2020-12-30 | 2024-05-31 | 科大讯飞股份有限公司 | Noise reduction method and voice interaction method for distributed microphone array |
CN113793588A (en) * | 2021-09-15 | 2021-12-14 | 深圳创维-Rgb电子有限公司 | Intelligent voice prompt method, device, equipment and storage medium |
CN114420255A (en) * | 2021-12-28 | 2022-04-29 | 北京瞰瞰智能科技有限公司 | Intelligent recommendation method and device based on image recognition and intelligent refrigerator |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020114519A1 (en) * | 2001-02-16 | 2002-08-22 | International Business Machines Corporation | Method and system for providing application launch by identifying a user via a digital camera, utilizing an edge detection algorithm |
US20060252457A1 (en) * | 2002-08-09 | 2006-11-09 | Avon Associates, Inc. | Voice controlled multimedia and communications system |
WO2008032329A2 (en) * | 2006-09-13 | 2008-03-20 | Alon Atsmon | Providing content responsive to multimedia signals |
US20120163677A1 (en) * | 2007-11-08 | 2012-06-28 | Sony Mobile Communications Ab | Automatic identifying |
US20130010207A1 (en) * | 2011-07-04 | 2013-01-10 | 3Divi | Gesture based interactive control of electronic equipment |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5226090A (en) * | 1989-12-29 | 1993-07-06 | Pioneer Electronic Corporation | Voice-operated remote control system |
DE69232407T2 (en) * | 1991-11-18 | 2002-09-12 | Kabushiki Kaisha Toshiba, Kawasaki | Speech dialogue system to facilitate computer-human interaction |
JP3674990B2 (en) * | 1995-08-21 | 2005-07-27 | セイコーエプソン株式会社 | Speech recognition dialogue apparatus and speech recognition dialogue processing method |
US6557756B1 (en) * | 1998-09-04 | 2003-05-06 | Ncr Corporation | Communications, particularly in the domestic environment |
JP4314680B2 (en) * | 1999-07-27 | 2009-08-19 | ソニー株式会社 | Speech recognition control system and speech recognition control method |
JP4292646B2 (en) | 1999-09-16 | 2009-07-08 | 株式会社デンソー | User interface device, navigation system, information processing device, and recording medium |
US6999932B1 (en) * | 2000-10-10 | 2006-02-14 | Intel Corporation | Language independent voice-based search system |
US6721706B1 (en) * | 2000-10-30 | 2004-04-13 | Koninklijke Philips Electronics N.V. | Environment-responsive user interface/entertainment device that simulates personal interaction |
GB0107689D0 (en) | 2001-03-28 | 2001-05-16 | Ncr Int Inc | Self service terminal |
US7698228B2 (en) * | 2001-04-27 | 2010-04-13 | Accenture Llp | Tracking purchases in a location-based services system |
US20040054534A1 (en) * | 2002-09-13 | 2004-03-18 | Junqua Jean-Claude | Client-server voice customization |
US7058578B2 (en) * | 2002-09-24 | 2006-06-06 | Rockwell Electronic Commerce Technologies, L.L.C. | Media translator for transaction processing system |
CN1174337C (en) | 2002-10-17 | 2004-11-03 | 南开大学 | Apparatus and method for identifying gazing direction of human eyes and its use |
GB0224806D0 (en) * | 2002-10-24 | 2002-12-04 | Ibm | Method and apparatus for a interactive voice response system |
DE102004001863A1 (en) | 2004-01-13 | 2005-08-11 | Siemens Ag | Method and device for processing a speech signal |
KR20050081470A (en) * | 2004-02-13 | 2005-08-19 | 주식회사 엑스텔테크놀러지 | Method for recording and play of voice message by voice recognition |
JP2006033795A (en) * | 2004-06-15 | 2006-02-02 | Sanyo Electric Co Ltd | Remote control system, controller, program for imparting function of controller to computer, storage medium with the program stored thereon, and server |
US20060192775A1 (en) * | 2005-02-25 | 2006-08-31 | Microsoft Corporation | Using detected visual cues to change computer system operating states |
US7672931B2 (en) * | 2005-06-30 | 2010-03-02 | Microsoft Corporation | Searching for content using voice search queries |
US8725518B2 (en) * | 2006-04-25 | 2014-05-13 | Nice Systems Ltd. | Automatic speech analysis |
US7523108B2 (en) | 2006-06-07 | 2009-04-21 | Platformation, Inc. | Methods and apparatus for searching with awareness of geography and languages |
US7822606B2 (en) * | 2006-07-14 | 2010-10-26 | Qualcomm Incorporated | Method and apparatus for generating audio information from received synthesis information |
US8886521B2 (en) * | 2007-05-17 | 2014-11-11 | Redstart Systems, Inc. | System and method of dictation for a speech recognition command system |
WO2009048984A1 (en) * | 2007-10-08 | 2009-04-16 | The Regents Of The University Of California | Voice-controlled clinical information dashboard |
US9986293B2 (en) | 2007-11-21 | 2018-05-29 | Qualcomm Incorporated | Device access control |
JP2010055375A (en) * | 2008-08-28 | 2010-03-11 | Toshiba Corp | Electronic apparatus operation instruction device and operating method thereof |
KR20100026353A (en) * | 2008-08-29 | 2010-03-10 | 엘지전자 주식회사 | Air conditioner and controlling method thereof |
CN101765188A (en) * | 2008-12-25 | 2010-06-30 | 英华达(上海)电子有限公司 | Energy-saving method of running gear and running gear adopting the same |
US20110110534A1 (en) * | 2009-11-12 | 2011-05-12 | Apple Inc. | Adjustable voice output based on device status |
CN101808047A (en) * | 2010-02-10 | 2010-08-18 | 深圳先进技术研究院 | Instant messaging partner robot and instant messaging method with messaging partner |
CN102377864A (en) * | 2010-08-13 | 2012-03-14 | 希姆通信息技术(上海)有限公司 | Mobile phone motion detection method based on acceleration sensor |
US8417530B1 (en) * | 2010-08-20 | 2013-04-09 | Google Inc. | Accent-influenced search results |
KR101165537B1 (en) * | 2010-10-27 | 2012-07-16 | 삼성에스디에스 주식회사 | User Equipment and method for cogniting user state thereof |
KR101789619B1 (en) * | 2010-11-22 | 2017-10-25 | 엘지전자 주식회사 | Method for controlling using voice and gesture in multimedia device and multimedia device thereof |
TW201223231A (en) | 2010-11-26 | 2012-06-01 | Hon Hai Prec Ind Co Ltd | Handheld device and method for constructing user interface thereof |
CN102867005A (en) | 2011-07-06 | 2013-01-09 | 阿尔派株式会社 | Retrieving device, retrieving method and vehicle-mounted navigation apparatus |
KR20130084543A (en) * | 2012-01-17 | 2013-07-25 | 삼성전자주식회사 | Apparatus and method for providing user interface |
US20130212501A1 (en) * | 2012-02-10 | 2013-08-15 | Glen J. Anderson | Perceptual computing with conversational agent |
US9401140B1 (en) * | 2012-08-22 | 2016-07-26 | Amazon Technologies, Inc. | Unsupervised acoustic model training |
CN103024521B (en) * | 2012-12-27 | 2017-02-08 | 深圳Tcl新技术有限公司 | Program screening method, program screening system and television with program screening system |
US8571851B1 (en) * | 2012-12-31 | 2013-10-29 | Google Inc. | Semantic interpretation using user gaze order |
CN203132059U (en) | 2013-02-17 | 2013-08-14 | 海尔集团公司 | Control system of air conditioner |
US9443527B1 (en) * | 2013-09-27 | 2016-09-13 | Amazon Technologies, Inc. | Speech recognition capability generation and control |
US9812126B2 (en) * | 2014-11-28 | 2017-11-07 | Microsoft Technology Licensing, Llc | Device arbitration for listening devices |
2013
- 2013-12-11 KR KR1020130153713A patent/KR102188090B1/en active IP Right Grant

2014
- 2014-11-04 EP EP24168136.0A patent/EP4387174A3/en active Pending
- 2014-11-04 EP EP20187912.9A patent/EP3761309B1/en active Active
- 2014-11-04 WO PCT/KR2014/010536 patent/WO2015088141A1/en active Application Filing
- 2014-11-04 CN CN201480072279.2A patent/CN105874405A/en active Pending
- 2014-11-04 EP EP14870553.6A patent/EP3080678A4/en not_active Ceased
- 2014-11-04 US US15/103,528 patent/US10269344B2/en active Active

2019
- 2019-02-28 US US16/289,558 patent/US20190267004A1/en not_active Abandoned
Cited By (80)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11468155B2 (en) | 2007-09-24 | 2022-10-11 | Apple Inc. | Embedded authentication systems in an electronic device |
US10956550B2 (en) | 2007-09-24 | 2021-03-23 | Apple Inc. | Embedded authentication systems in an electronic device |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11755712B2 (en) | 2011-09-29 | 2023-09-12 | Apple Inc. | Authentication with secondary approver |
US10803281B2 (en) | 2013-09-09 | 2020-10-13 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
US11494046B2 (en) | 2013-09-09 | 2022-11-08 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US11768575B2 (en) | 2013-09-09 | 2023-09-26 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US9966070B2 (en) | 2014-05-21 | 2018-05-08 | Vorwerk & Co., Interholding Gmbh | Electrically operated domestic appliance having a voice recognition device |
EP3145376B1 (en) | 2014-05-21 | 2018-07-11 | Vorwerk & Co. Interholding GmbH | Electrically operated food processor with speech recognition unit |
WO2015176950A1 (en) * | 2014-05-21 | 2015-11-26 | Vorwerk & Co. Interholding Gmbh | Electrically operated domestic appliance having a voice recognition device |
US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
US10748153B2 (en) | 2014-05-29 | 2020-08-18 | Apple Inc. | User interface for payments |
US11836725B2 (en) | 2014-05-29 | 2023-12-05 | Apple Inc. | User interface for payments |
US10796309B2 (en) | 2014-05-29 | 2020-10-06 | Apple Inc. | User interface for payments |
US10902424B2 (en) | 2014-05-29 | 2021-01-26 | Apple Inc. | User interface for payments |
US11898788B2 (en) | 2015-09-03 | 2024-02-13 | Samsung Electronics Co., Ltd. | Refrigerator |
US10811002B2 (en) | 2015-11-10 | 2020-10-20 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the same |
WO2017082543A1 (en) * | 2015-11-10 | 2017-05-18 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the same |
WO2016197824A1 (en) * | 2016-01-18 | 2016-12-15 | 中兴通讯股份有限公司 | Voice command processing method and apparatus, and smart gateway |
WO2017162019A1 (en) * | 2016-03-24 | 2017-09-28 | 深圳市国华识别科技开发有限公司 | Intelligent terminal control method and intelligent terminal |
EP3244597A1 (en) * | 2016-05-09 | 2017-11-15 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and apparatus for controlling devices |
US10564833B2 (en) | 2016-05-09 | 2020-02-18 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and apparatus for controlling devices |
US10749967B2 (en) | 2016-05-19 | 2020-08-18 | Apple Inc. | User interface for remote authorization |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
CN105976817A (en) * | 2016-07-04 | 2016-09-28 | 佛山市顺德区美的电热电器制造有限公司 | Voice control method and voice control device for cooking utensil as well as cooking utensil |
WO2018023516A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Voice interaction recognition and control method |
WO2018023523A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Motion and emotion recognizing home control system |
WO2018023514A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Home background music control system |
WO2018023512A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Furniture control method using multi-dimensional recognition |
WO2018023518A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Smart terminal for voice interaction and recognition |
WO2018023517A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Voice interactive recognition control system |
WO2018023513A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Home control method based on motion recognition |
WO2018023515A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Gesture and emotion recognition home control system |
CN106200395A (en) * | 2016-08-05 | 2016-12-07 | 易晓阳 | A kind of multidimensional identification appliance control method |
CN106200396A (en) * | 2016-08-05 | 2016-12-07 | 易晓阳 | A kind of appliance control method based on Motion Recognition |
CN106019977A (en) * | 2016-08-05 | 2016-10-12 | 易晓阳 | Gesture and emotion recognition home control system |
CN106125566A (en) * | 2016-08-05 | 2016-11-16 | 易晓阳 | A kind of household background music control system |
CN106125565A (en) * | 2016-08-05 | 2016-11-16 | 易晓阳 | A kind of motion and emotion recognition house control system |
CN106228989A (en) * | 2016-08-05 | 2016-12-14 | 易晓阳 | A kind of interactive voice identification control method |
CN106254186A (en) * | 2016-08-05 | 2016-12-21 | 易晓阳 | A kind of interactive voice control system for identifying |
CN106297783A (en) * | 2016-08-05 | 2017-01-04 | 易晓阳 | A kind of interactive voice identification intelligent terminal |
WO2018027506A1 (en) * | 2016-08-09 | 2018-02-15 | 曹鸿鹏 | Emotion recognition-based lighting control method |
US11908465B2 (en) | 2016-11-03 | 2024-02-20 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
WO2018084576A1 (en) * | 2016-11-03 | 2018-05-11 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US10679618B2 (en) | 2016-11-03 | 2020-06-09 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US20180122379A1 (en) * | 2016-11-03 | 2018-05-03 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
US11734926B2 (en) | 2017-05-16 | 2023-08-22 | Google Llc | Resolving automated assistant requests that are based on image(s) and/or other sensor data |
US10867180B2 (en) | 2017-05-16 | 2020-12-15 | Google Llc | Resolving automated assistant requests that are based on image(s) and/or other sensor data |
US10275651B2 (en) * | 2017-05-16 | 2019-04-30 | Google Llc | Resolving automated assistant requests that are based on image(s) and/or other sensor data |
JP2021061027A (en) * | 2017-05-16 | 2021-04-15 | グーグル エルエルシーGoogle LLC | Resolving automated assistant request that is based on image and/or other sensor data |
JP2020521167A (en) * | 2017-05-16 | 2020-07-16 | グーグル エルエルシー | Resolution of automated assistant requests based on images and/or other sensor data |
EP3413304A3 (en) * | 2017-05-19 | 2019-04-03 | LG Electronics Inc. | Method for operating home appliance and voice recognition server system |
US11406224B2 (en) | 2017-06-07 | 2022-08-09 | Kenwood Limited | Kitchen appliance and system therefor |
WO2018224812A1 (en) * | 2017-06-07 | 2018-12-13 | Kenwood Limited | Kitchen appliance and system therefor |
CN107370649B (en) * | 2017-08-31 | 2020-09-11 | 广东美的制冷设备有限公司 | Household appliance control method, system, control terminal and storage medium |
CN107370649A (en) * | 2017-08-31 | 2017-11-21 | 广东美的制冷设备有限公司 | Household electric appliance control method, system, control terminal and storage medium |
US11765163B2 (en) | 2017-09-09 | 2023-09-19 | Apple Inc. | Implementation of biometric authentication |
US10783227B2 (en) | 2017-09-09 | 2020-09-22 | Apple Inc. | Implementation of biometric authentication |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US10872256B2 (en) | 2017-09-09 | 2020-12-22 | Apple Inc. | Implementation of biometric authentication |
WO2019118089A1 (en) * | 2017-12-11 | 2019-06-20 | Analog Devices, Inc. | Multi-modal far field user interfaces and vision-assisted audio processing |
US11830289B2 (en) | 2017-12-11 | 2023-11-28 | Analog Devices, Inc. | Multi-modal far field user interfaces and vision-assisted audio processing |
EP3770522A4 (en) * | 2018-05-18 | 2021-07-07 | Samsung Electronics Co., Ltd. | Air conditioner and control method thereof |
US11530836B2 (en) | 2018-05-18 | 2022-12-20 | Samsung Electronics Co., Ltd. | Air conditioner and control method thereof |
US11928200B2 (en) | 2018-06-03 | 2024-03-12 | Apple Inc. | Implementation of biometric authentication |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11521038B2 (en) | 2018-07-19 | 2022-12-06 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
WO2020024546A1 (en) * | 2018-08-01 | 2020-02-06 | 珠海格力电器股份有限公司 | Auxiliary speech control method and device and air conditioner |
US11619991B2 (en) | 2018-09-28 | 2023-04-04 | Apple Inc. | Device control using gaze information |
WO2020068375A1 (en) * | 2018-09-28 | 2020-04-02 | Apple Inc. | Device control using gaze information |
US11809784B2 (en) | 2018-09-28 | 2023-11-07 | Apple Inc. | Audio assisted enrollment |
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
JP2021521496A (en) * | 2018-09-28 | 2021-08-26 | アップル インコーポレイテッドApple Inc. | Device control using gaze information |
US20220343909A1 (en) * | 2019-09-06 | 2022-10-27 | Lg Electronics Inc. | Display apparatus |
CN114190823A (en) * | 2021-10-21 | 2022-03-18 | 湖南师范大学 | Intelligent household robot and control method |
US12079458B2 (en) | 2022-04-20 | 2024-09-03 | Apple Inc. | Image data for enhanced user interactions |
Also Published As
Publication number | Publication date |
---|---|
EP4387174A3 (en) | 2024-07-17 |
KR102188090B1 (en) | 2020-12-04 |
EP4387174A2 (en) | 2024-06-19 |
KR20150068013A (en) | 2015-06-19 |
US20170004828A1 (en) | 2017-01-05 |
CN105874405A (en) | 2016-08-17 |
EP3080678A1 (en) | 2016-10-19 |
US10269344B2 (en) | 2019-04-23 |
EP3761309B1 (en) | 2024-05-08 |
EP3761309A1 (en) | 2021-01-06 |
US20190267004A1 (en) | 2019-08-29 |
EP3080678A4 (en) | 2018-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015088141A1 (en) | Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances | |
WO2019164148A1 (en) | Method and system for performing voice command | |
WO2020045950A1 (en) | Method, device, and system of selectively using multiple voice data receiving devices for intelligent service | |
WO2019182325A1 (en) | Electronic device and voice recognition control method of electronic device | |
WO2012036475A2 (en) | Digital device control system using smart phone | |
WO2016017945A1 (en) | Mobile device and method of pairing the same with electronic device | |
WO2014062032A1 (en) | Display device, remote control device to control display device, method of controlling display device, method of controlling server and method of controlling remote control device | |
WO2015194693A1 (en) | Video display device and operation method therefor | |
WO2020204531A1 (en) | Tv control system and tv control device suitable therefor | |
WO2019050227A1 (en) | Operation method of air conditioner | |
WO2019045228A1 (en) | Cooking device and cooking system | |
WO2017126909A1 (en) | Image capturing apparatus and control method thereof | |
WO2021060590A1 (en) | Display device and artificial intelligence system | |
WO2016013705A1 (en) | Remote control device and operating method thereof | |
WO2018048098A1 (en) | Portable camera and controlling method therefor | |
EP3830821A1 (en) | Method, device, and system of selectively using multiple voice data receiving devices for intelligent service | |
WO2021210795A1 (en) | Method and apparatus for wireless connection between electronic devices | |
WO2020153600A1 (en) | Server, terminal device, and method for home appliance management thereby | |
WO2021033785A1 (en) | Display device and artificial intelligence server capable of controlling home appliance through user's voice | |
WO2022014738A1 (en) | Display device | |
WO2019124775A1 (en) | Electronic device and method for providing service information related to broadcast content in electronic device | |
WO2020256184A1 (en) | Display device | |
WO2021137333A1 (en) | Display device | |
WO2021015319A1 (en) | Display device and operation method for same | |
WO2020235724A1 (en) | Display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14870553 | Country of ref document: EP | Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 15103528 | Country of ref document: US
NENP | Non-entry into the national phase | Ref country code: DE
REEP | Request for entry into the european phase | Ref document number: 2014870553 | Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 2014870553 | Country of ref document: EP