WO2014190496A1 - Method and system for identifying location associated with voice command to control home appliance - Google Patents
Method and system for identifying location associated with voice command to control home appliance
- Publication number
- WO2014190496A1 (PCT/CN2013/076345)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice command
- voice
- room
- features
- feature
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/06—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being correlation coefficients
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/24—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
Definitions
- the present invention relates to a method and system for identifying the location associated with a voice command in a home environment in order to control a home appliance. More particularly, the present invention relates to a method and system for identifying, with a machine learning method, where a user's voice command is emitted, and then performing the action of the voice command on the home appliance in the same room as the user.
- personal assistant applications driven by voice commands on mobile phones are becoming popular.
- such applications use natural language processing to answer questions, make recommendations, and perform actions on home appliances such as TV sets by delegating requests to the destination TV set or STB (set-top box).
- the solution proposed in this application solves the problem that current state-of-the-art personal assistant applications driven by voice commands cannot correctly identify which TV set needs to be controlled when there are multiple TV sets in a home environment.
- the method can find the location associated with the voice command and then turn on the television in the same room.
- the home appliances include multiple TV sets, air-conditioning equipment, illumination equipment, and so on.
- US20100332668A1 discloses a method and system for detecting proximity between electronic devices.
- the system comprising: a receiver for receiving a voice command from a user; a recorder for recording the received voice command; and a controller configured to: sample the recorded voice command and extract features from it; determine a room label by comparing the extracted features of the voice command with feature references, wherein the room label is associated with the feature references; assign the room label to the voice command; and control the home appliance located in the assigned room in accordance with the voice command.
- Fig. 1 shows an exemplary circumstance where there are more than one TV set in different rooms in a home environment according to an embodiment of the present invention
- Fig. 2 shows an exemplary flow chart illustrating a classification method according to an embodiment of the present invention
- Fig. 3 shows an exemplary block diagram illustrating a system according to an embodiment of the present invention .
- Fig. 1 shows a circumstance in which there is more than one TV set 111, 113, 115, 117 in different rooms 103, 105, 107, 109 of a home environment 101. In the home environment 101, it is impossible for a voice-command-based personal assistant application on a mobile phone to determine which TV set should be controlled if a user 119 simply says "turn on TV" to the mobile phone 121.
- this invention takes into account the surrounding acoustics when the user issues the voice command "turn on TV" and leverages the existing correlations between the voice command and its surroundings, such as voice features and command time, in the voice command understanding, in order to identify with a machine learning method where the voice command is issued and then turn on the television in the same room.
- the personal assistant application includes a voice classification system which combines three processing stages: 1. voice recording, 2. feature extraction and 3. classification.
- signal features including low-level parameters such as the zero-crossing rate, signal bandwidth, spectral centroid, and signal energy have been used.
- Another set of features used, inherited from automatic speech recognizers, is the set of mel-frequency cepstral coefficients (MFCCs). This means the voice classification module combines standard features with representations of rhythm and pitch content.
- every time a user issues the voice command "turn on TV", the personal assistant application records the voice command and then provides the feature analysis module with the recorded audio for further processing.
- in order to achieve high accuracy for location classification, a system according to the invention samples the recorded audio at an 8 kHz sample rate and then segments it into one-second windows, for example. Each one-second audio segment is taken as the basic classification unit in its algorithms and is further divided into forty 25 ms non-overlapping frames. Each feature is extracted from these forty frames in a one-second audio segment. The system then selects good features that can identify the effect posed on the recorded audio by the different environments in different rooms.
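The sampling and framing described above can be sketched as follows (a minimal illustration, not from the patent; the function and constant names are hypothetical):

```python
SAMPLE_RATE = 8000        # 8 kHz sample rate, as in the description
FRAME_MS = 25             # 25 ms non-overlapping frames
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000   # 200 samples per frame
FRAMES_PER_UNIT = 40      # forty frames = one one-second classification unit

def segment(samples):
    """Split a sample list into one-second units of forty 200-sample frames.
    Trailing samples that do not fill a whole unit are dropped."""
    unit_len = FRAME_LEN * FRAMES_PER_UNIT   # 8000 samples = 1 second
    units = []
    for start in range(0, len(samples) - unit_len + 1, unit_len):
        unit = samples[start:start + unit_len]
        frames = [unit[i:i + FRAME_LEN] for i in range(0, unit_len, FRAME_LEN)]
        units.append(frames)
    return units

# 2.5 seconds of silence yields two complete one-second units
units = segment([0.0] * (SAMPLE_RATE * 5 // 2))
print(len(units), len(units[0]), len(units[0][0]))   # 2 40 200
```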
- audio mean, which measures the mean of the audio segment vector
- audio spread, which measures the spread of the recorded audio segment's spectrum
- zero-crossing rate ratio, which counts the number of sign changes in the audio segment waveform
- short-time energy ratio, which describes the short-time energy of the audio segment, computed as the root mean square
- MFCCs (mel-frequency cepstral coefficients)
- non-voice features associated with the recorded voice command can also be considered. These include, for example, the time at which the voice command is recorded, since a user tends to watch TV in a specific room at the same time on different days.
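Two of the listed voice features can be sketched in plain Python as follows (illustrative only; the patent gives no formulas, and MFCCs would normally come from an audio library rather than hand-written code):

```python
import math

def zero_crossing_count(frame):
    """Count sign changes in the frame waveform (basis of the zero-crossing feature)."""
    return sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))

def short_time_energy(frame):
    """Short-time energy of the frame, computed as the root mean square."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# a 25 ms frame (200 samples at 8 kHz) of a 430 Hz tone
frame = [math.sin(2 * math.pi * 430 * n / 8000) for n in range(200)]
print(zero_crossing_count(frame), short_time_energy(frame))
```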
- the personal assistant software on the mobile phone can successfully identify in which room, for example, room 1, room 2 or room 3, the voice command is given by analyzing the features associated with the recorded audio, and then turn on the TV in that room.
- a k-nearest neighbor scheme is used as the learning algorithm in the invention.
- the system needs to predict an output variable Y, given a set of input features, X.
- Y would be 1 if the recorded voice command is associated with room 1, 2 if it is associated with room 2, and so on, while X would be a vector of feature values extracted from the recorded voice command.
- the training samples used as references are voice feature vectors in a multidimensional feature space, each with a class label of room 1, room 2 or room 3.
- the training phase of the process consists only of storing the feature vectors and class labels of the training samples for references.
- the training samples are used as references to classify coming voice commands.
- the training phase may be set as a predetermined period. Alternatively, references can continue to be accumulated after the training phase.
- in the reference table, features are associated with the room labels.
- a recorded voice command is classified by assigning the room label that is the most frequent among the k training references nearest to the features of the recorded voice command. The room in which the audio stream was recorded can thus be obtained from the classification results. Then the television in the corresponding room can be turned on via infrared communication equipment embedded in the mobile phone.
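A minimal sketch of the k-nearest-neighbor room classification described above (the feature values, labels and choice of k are made up for illustration, not data from the patent):

```python
from collections import Counter

def classify_room(features, references, k=3):
    """Assign the room label most frequent among the k references
    closest (Euclidean distance) to the extracted feature vector."""
    def dist(ref):
        vec, _label = ref
        return sum((a - b) ** 2 for a, b in zip(features, vec)) ** 0.5
    nearest = sorted(references, key=dist)[:k]
    counts = Counter(label for _vec, label in nearest)
    return counts.most_common(1)[0][0]

# stored training references: (feature vector, room label)
references = [
    ([0.10, 0.80], "room 1"), ([0.12, 0.75], "room 1"),
    ([0.90, 0.20], "room 2"), ([0.85, 0.25], "room 2"),
    ([0.11, 0.78], "room 1"),
]
print(classify_room([0.13, 0.77], references))   # -> room 1
```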
- other classification strategies, including decision trees and probabilistic graphical models, can also be employed with the idea disclosed in this invention.
- a diagram illustrating the whole voice command recording, feature extraction and classification process is shown in Fig. 2.
- Fig.2 shows an exemplary flow chart 201 illustrating a classification method according to an embodiment of the invention .
- a user instructs a voice command such as "turn on TV" on a mobile device such as a mobile phone.
- the system records the voice command.
- the system samples and feature extracts the recorded voice command.
- the system assigns a room label to the voice command according to a k-nearest neighbor classification algorithm on the basis of the voice feature vector and other features such as recording time.
- the reference table, including features and related room labels, is used for this procedure.
- the system controls the TV in the room corresponding to the room label assigned to the voice command.
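The flow-chart steps above can be tied together in a short end-to-end sketch (every function, feature and data value here is a hypothetical placeholder for illustration, not from the patent):

```python
from datetime import datetime

def extract_features(audio):
    # placeholder features: mean and spread of the audio samples
    mean = sum(audio) / len(audio)
    spread = max(audio) - min(audio)
    return [mean, spread]

def classify_room(features, references):
    # 1-nearest neighbour for brevity; the patent uses k-nearest neighbours
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(features, ref[0]))
    return min(references, key=dist)[1]

def handle_voice_command(audio, command_time, references, appliances):
    features = extract_features(audio) + [command_time.hour]  # voice + time features
    room = classify_room(features, references)                # assign room label
    appliances[room]()                                        # e.g. send IR "turn on TV"

references = [([0.0, 0.2, 20], "room 1"), ([0.5, 1.5, 21], "room 2")]
turned_on = []
appliances = {"room 1": lambda: turned_on.append("room 1"),
              "room 2": lambda: turned_on.append("room 2")}
handle_voice_command([0.0, 0.1, -0.1], datetime(2013, 5, 28, 20, 0),
                     references, appliances)
print(turned_on)   # -> ['room 1']
```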
- Fig. 3 illustrates an exemplary block diagram of a system 301 according to an embodiment of the present invention.
- the system 301 can be a mobile phone, computer system, tablet, portable game, smart-phone, and the like.
- the system 301 comprises a CPU (Central Processing Unit) 303, a microphone 309, a storage 305, a display 311, and infrared communication equipment 313.
- a memory 307 such as RAM (Random Access Memory) may be connected to the CPU 303 as shown in Fig. 3.
- the storage 305 is configured to store software programs and data for the CPU 303 to drive and operate the processes as explained above.
- the microphone 309 is configured to detect a user's voice command.
- the display 311 is configured to visually present text, image, video and any other contents to a user of the system 301.
- the infrared communication equipment 313 is configured to send commands to any home appliances on the basis of the room label for the voice command.
- other communication equipment can replace the infrared communication equipment.
- the communication equipment can send commands to a central system controlling all of the home appliances.
- the system can instruct any home appliances such as TV sets, air-conditioning equipment, illumination equipment, and so on.
- the teachings of the present principles are implemented as a combination of hardware and software
- the software may be implemented as an application program tangibly embodied on a program storage unit.
- the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
- the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces.
- the computer platform may also include an operating system and microinstruction code.
- the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
- various other peripheral units may be connected to the computer platform such as an additional data storage unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Telephonic Communication Services (AREA)
Abstract
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/894,518 US20160125880A1 (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
EP13885491.4A EP3005346A4 (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
KR1020157034002A KR20160014625A (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
JP2016515589A JP2016524724A (en) | 2013-05-28 | 2013-05-28 | Method and system for controlling a home electrical appliance by identifying a position associated with a voice command in a home environment |
CN201380076839.7A CN105308679A (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
PCT/CN2013/076345 WO2014190496A1 (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/076345 WO2014190496A1 (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014190496A1 true WO2014190496A1 (en) | 2014-12-04 |
Family
ID=51987857
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/076345 WO2014190496A1 (en) | 2013-05-28 | 2013-05-28 | Method and system for identifying location associated with voice command to control home appliance |
Country Status (6)
Country | Link |
---|---|
US (1) | US20160125880A1 (en) |
EP (1) | EP3005346A4 (en) |
JP (1) | JP2016524724A (en) |
KR (1) | KR20160014625A (en) |
CN (1) | CN105308679A (en) |
WO (1) | WO2014190496A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105137937A (en) * | 2015-08-28 | 2015-12-09 | 青岛海尔科技有限公司 | Control method and device of intelligent IoT electrical appliances, and intelligent IoT electrical appliances |
EP3157007A1 (en) * | 2015-10-12 | 2017-04-19 | Samsung Electronics Co., Ltd. | Apparatus and method for processing control command based on voice agent, and agent device |
WO2017151672A3 (en) * | 2016-02-29 | 2017-10-12 | Faraday & Future Inc. | Voice assistance system for devices of an ecosystem |
CN109145124A (en) * | 2018-08-16 | 2019-01-04 | 格力电器(武汉)有限公司 | Information storage method and device, storage medium and electronic device |
CN110782875A (en) * | 2019-10-16 | 2020-02-11 | 腾讯科技(深圳)有限公司 | Voice rhythm processing method and device based on artificial intelligence |
EP3702904A4 (en) * | 2017-10-23 | 2020-12-30 | Sony Corporation | Information processing device and information processing method |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9996164B2 (en) | 2016-09-22 | 2018-06-12 | Qualcomm Incorporated | Systems and methods for recording custom gesture commands |
KR102573383B1 (en) * | 2016-11-01 | 2023-09-01 | 삼성전자주식회사 | Electronic apparatus and controlling method thereof |
US11276395B1 (en) * | 2017-03-10 | 2022-03-15 | Amazon Technologies, Inc. | Voice-based parameter assignment for voice-capturing devices |
US11594229B2 (en) | 2017-03-31 | 2023-02-28 | Sony Corporation | Apparatus and method to identify a user based on sound data and location information |
CN107528753B (en) * | 2017-08-16 | 2021-02-26 | 捷开通讯(深圳)有限公司 | Intelligent household voice control method, intelligent equipment and device with storage function |
KR102421255B1 (en) * | 2017-10-17 | 2022-07-18 | 삼성전자주식회사 | Electronic device and method for controlling voice signal |
US10748533B2 (en) * | 2017-11-08 | 2020-08-18 | Harman International Industries, Incorporated | Proximity aware voice agent |
CN110097885A (en) * | 2018-01-31 | 2019-08-06 | 深圳市锐吉电子科技有限公司 | A kind of sound control method and system |
CN110727200A (en) * | 2018-07-17 | 2020-01-24 | 珠海格力电器股份有限公司 | Control method of intelligent household equipment and terminal equipment |
US11133004B1 (en) * | 2019-03-27 | 2021-09-28 | Amazon Technologies, Inc. | Accessory for an audio output device |
US11580973B2 (en) * | 2019-05-31 | 2023-02-14 | Apple Inc. | Multi-user devices in a connected home environment |
CA3148908A1 (en) * | 2019-07-29 | 2021-02-04 | Siemens Industry, Inc. | Building automation system for controlling conditions of a room |
CN110925944B (en) * | 2019-11-27 | 2021-02-12 | 珠海格力电器股份有限公司 | Control method and control device of air conditioning system and air conditioning system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599270A (en) * | 2008-06-02 | 2009-12-09 | 海尔集团公司 | Voice server and voice control method |
CN101753871A (en) * | 2008-11-28 | 2010-06-23 | 康佳集团股份有限公司 | Voice remote control TV system |
CN101794126A (en) * | 2009-12-15 | 2010-08-04 | 广东工业大学 | Wireless intelligent home appliance voice control system |
CN101867742A (en) * | 2010-05-21 | 2010-10-20 | 中山大学 | Television system based on sound control |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6400310B1 (en) * | 1998-10-22 | 2002-06-04 | Washington University | Method and apparatus for a tunable high-resolution spectral estimator |
JP2003204282A (en) * | 2002-01-07 | 2003-07-18 | Toshiba Corp | Headset with radio communication function, communication recording system using the same and headset system capable of selecting communication control system |
US7016884B2 (en) * | 2002-06-27 | 2006-03-21 | Microsoft Corporation | Probability estimate for K-nearest neighbor |
JP3836815B2 (en) * | 2003-05-21 | 2006-10-25 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Speech recognition apparatus, speech recognition method, computer-executable program and storage medium for causing computer to execute speech recognition method |
WO2005034395A2 (en) * | 2003-09-17 | 2005-04-14 | Nielsen Media Research, Inc. | Methods and apparatus to operate an audience metering device with voice commands |
US7505902B2 (en) * | 2004-07-28 | 2009-03-17 | University Of Maryland | Discrimination of components of audio signals based on multiscale spectro-temporal modulations |
US7774202B2 (en) * | 2006-06-12 | 2010-08-10 | Lockheed Martin Corporation | Speech activated control system and related methods |
US8108204B2 (en) * | 2006-06-16 | 2012-01-31 | Evgeniy Gabrilovich | Text categorization using external knowledge |
US8502876B2 (en) * | 2006-09-12 | 2013-08-06 | Storz Endoskop Producktions GmbH | Audio, visual and device data capturing system with real-time speech recognition command and control system |
US7649456B2 (en) * | 2007-01-26 | 2010-01-19 | Sony Ericsson Mobile Communications Ab | User interface for an electronic device used as a home controller |
ATE454692T1 (en) * | 2007-02-02 | 2010-01-15 | Harman Becker Automotive Sys | VOICE CONTROL SYSTEM AND METHOD |
JP5265141B2 (en) * | 2007-06-15 | 2013-08-14 | オリンパス株式会社 | Portable electronic device, program and information storage medium |
US8380499B2 (en) * | 2008-03-31 | 2013-02-19 | General Motors Llc | Speech recognition adjustment based on manual interaction |
US9253560B2 (en) * | 2008-09-16 | 2016-02-02 | Personics Holdings, Llc | Sound library and method |
US8527278B2 (en) * | 2009-06-29 | 2013-09-03 | Abraham Ben David | Intelligent home automation |
US9565156B2 (en) * | 2011-09-19 | 2017-02-07 | Microsoft Technology Licensing, Llc | Remote access to a mobile communication device over a wireless local area network (WLAN) |
US8340975B1 (en) * | 2011-10-04 | 2012-12-25 | Theodore Alfred Rosenberger | Interactive speech recognition device and system for hands-free building control |
US8825020B2 (en) * | 2012-01-12 | 2014-09-02 | Sensory, Incorporated | Information access and device control using mobile phones and audio in the home environment |
CN102641198B (en) * | 2012-04-27 | 2013-09-25 | 浙江大学 | Blind person environment sensing method based on wireless networks and sound positioning |
US9368104B2 (en) * | 2012-04-30 | 2016-06-14 | Src, Inc. | System and method for synthesizing human speech using multiple speakers and context |
CN202632077U (en) * | 2012-05-24 | 2012-12-26 | 李强 | Intelligent household master control host |
CN103456301B (en) * | 2012-05-28 | 2019-02-12 | 中兴通讯股份有限公司 | A kind of scene recognition method and device and mobile terminal based on ambient sound |
US8831957B2 (en) * | 2012-08-01 | 2014-09-09 | Google Inc. | Speech recognition models based on location indicia |
-
2013
- 2013-05-28 WO PCT/CN2013/076345 patent/WO2014190496A1/en active Application Filing
- 2013-05-28 US US14/894,518 patent/US20160125880A1/en not_active Abandoned
- 2013-05-28 KR KR1020157034002A patent/KR20160014625A/en not_active Application Discontinuation
- 2013-05-28 CN CN201380076839.7A patent/CN105308679A/en active Pending
- 2013-05-28 JP JP2016515589A patent/JP2016524724A/en not_active Withdrawn
- 2013-05-28 EP EP13885491.4A patent/EP3005346A4/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599270A (en) * | 2008-06-02 | 2009-12-09 | 海尔集团公司 | Voice server and voice control method |
CN101753871A (en) * | 2008-11-28 | 2010-06-23 | 康佳集团股份有限公司 | Voice remote control TV system |
CN101794126A (en) * | 2009-12-15 | 2010-08-04 | 广东工业大学 | Wireless intelligent home appliance voice control system |
CN101867742A (en) * | 2010-05-21 | 2010-10-20 | 中山大学 | Television system based on sound control |
Non-Patent Citations (1)
Title |
---|
See also references of EP3005346A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105137937A (en) * | 2015-08-28 | 2015-12-09 | 青岛海尔科技有限公司 | Control method and device of intelligent IoT electrical appliances, and intelligent IoT electrical appliances |
EP3157007A1 (en) * | 2015-10-12 | 2017-04-19 | Samsung Electronics Co., Ltd. | Apparatus and method for processing control command based on voice agent, and agent device |
US10607605B2 (en) | 2015-10-12 | 2020-03-31 | Samsung Electronics Co., Ltd. | Apparatus and method for processing control command based on voice agent, and agent device |
WO2017151672A3 (en) * | 2016-02-29 | 2017-10-12 | Faraday & Future Inc. | Voice assistance system for devices of an ecosystem |
EP3702904A4 (en) * | 2017-10-23 | 2020-12-30 | Sony Corporation | Information processing device and information processing method |
CN109145124A (en) * | 2018-08-16 | 2019-01-04 | 格力电器(武汉)有限公司 | Information storage method and device, storage medium and electronic device |
CN109145124B (en) * | 2018-08-16 | 2022-02-25 | 格力电器(武汉)有限公司 | Information storage method and device, storage medium and electronic device |
CN110782875A (en) * | 2019-10-16 | 2020-02-11 | 腾讯科技(深圳)有限公司 | Voice rhythm processing method and device based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
EP3005346A1 (en) | 2016-04-13 |
JP2016524724A (en) | 2016-08-18 |
CN105308679A (en) | 2016-02-03 |
KR20160014625A (en) | 2016-02-11 |
EP3005346A4 (en) | 2017-02-01 |
US20160125880A1 (en) | 2016-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160125880A1 (en) | Method and system for identifying location associated with voice command to control home appliance | |
US11094323B2 (en) | Electronic device and method for processing audio signal by electronic device | |
JP6613347B2 (en) | Method and apparatus for pushing information | |
US11188289B2 (en) | Identification of preferred communication devices according to a preference rule dependent on a trigger phrase spoken within a selected time from other command data | |
US10242677B2 (en) | Speaker dependent voiced sound pattern detection thresholds | |
CN105139858B (en) | A kind of information processing method and electronic equipment | |
US11457061B2 (en) | Creating a cinematic storytelling experience using network-addressable devices | |
CN117594042A (en) | Electronic device and control method thereof | |
CN110060677A (en) | Voice remote controller control method, device and computer readable storage medium | |
CN102568478A (en) | Video play control method and system based on voice recognition | |
CN109801646B (en) | Voice endpoint detection method and device based on fusion features | |
CN109448705B (en) | Voice segmentation method and device, computer device and readable storage medium | |
CN109616098B (en) | Voice endpoint detection method and device based on frequency domain energy | |
WO2010020138A1 (en) | Control method and device for monitoring equipment | |
CN104900236B (en) | Audio signal processing | |
CN113129893B (en) | Voice recognition method, device, equipment and storage medium | |
CN110262278B (en) | Control method and device of intelligent household electrical appliance and intelligent household electrical appliance | |
CN110070891B (en) | Song identification method and device and storage medium | |
US20180082703A1 (en) | Suitability score based on attribute scores | |
CN113270099B (en) | Intelligent voice extraction method and device, electronic equipment and storage medium | |
CN110085264A (en) | Voice signal detection method, device, equipment and storage medium | |
CN110197663A (en) | A kind of control method, device and electronic equipment | |
CN110970019A (en) | Control method and device of intelligent home system | |
CN112017662A (en) | Control instruction determination method and device, electronic equipment and storage medium | |
CN118098237B (en) | Control method of intelligent voice mouse and intelligent voice mouse |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201380076839.7 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13885491 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013885491 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016515589 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20157034002 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14894518 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |