CN113611306A - Intelligent household voice control method and system based on user habits and storage medium - Google Patents

Intelligent household voice control method and system based on user habits and storage medium

Info

Publication number
CN113611306A
CN113611306A (application CN202111040743.8A)
Authority
CN
China
Prior art keywords
voice
user
information
voice control
control logic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111040743.8A
Other languages
Chinese (zh)
Inventor
张泽宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Shanghai Intelligent Technology Co Ltd
Original Assignee
Unisound Shanghai Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Shanghai Intelligent Technology Co Ltd filed Critical Unisound Shanghai Intelligent Technology Co Ltd
Priority to CN202111040743.8A
Publication of CN113611306A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/31 - Indexing; Data structures therefor; Storage structures
    • G06F 16/316 - Indexing structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/3331 - Query processing
    • G06F 16/334 - Query execution
    • G06F 16/3343 - Query execution using phonetics
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/28 - Constructional details of speech recognition systems
    • G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification techniques
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a smart home voice control method, system, and storage medium based on user habits. The method comprises the following steps: entering the user's voiceprint information; collecting the voice control logic custom-configured by the user; binding the voice instructions and their matched voice control logic to the voiceprint information and uploading them to the cloud; establishing a corpus at the cloud; receiving a user voice instruction sent by a sound pickup device; performing voiceprint recognition on the voice instruction to obtain the voiceprint information and uploading it to the cloud; querying the cloud corpus for the voice control logic corresponding to the voiceprint information; and issuing the voice control logic to the corresponding target device to execute the target operation. The invention solves the problem that users cannot customize voice control logic according to their own living habits in smart home scenarios.

Description

Intelligent household voice control method and system based on user habits and storage medium
Technical Field
The invention relates to the technical field of smart homes, and in particular to a smart home voice control method, system, and storage medium based on user habits.
Background
Current smart home voice control logic mainly obtains the user's control intention through ASR (automatic speech recognition, i.e., speech-to-text) and NLU (natural language understanding) capabilities, and fixed device-control logic is then issued to the smart devices through the cloud.
As a result, current smart home voice control logic cannot be personalized along the user dimension. For example, when a user says "turn on the air conditioner", the device behaviour to be executed is generally defined uniformly by the cloud, and that same logic is then applied to every user who issues the "turn on the air conditioner" instruction, as shown in Fig. 1. In a real-life scenario, however, the intention behind user A's "turn on the air conditioner" and user B's "turn on the air conditioner" may well differ: user A's actual intention may be "turn the air conditioner on to 22 °C" while user B's actual intention is "turn the air conditioner on to 26 °C", because users A and B may differ in age, constitution, preference, or habit, with user A preferring a lower air-conditioner temperature and user B a higher one.
Disclosure of Invention
In view of the above problems, the present invention provides a smart home voice control method, system, device, and computer storage medium based on user habits, which solve the problem that users cannot customize voice control logic according to their own living habits in smart home scenarios.
To achieve the above technical effects, the invention adopts the following technical solution:
In one aspect, the invention provides a smart home voice control method based on user habits, which comprises first establishing a corpus and then repeatedly performing voice control of the smart home based on the corpus;
establishing the corpus comprises:
entering the voiceprint information of a user;
collecting the voice control logic custom-configured by the user for each voice instruction, wherein the voice control logic comprises the target operation to be executed by the target device matched with the voice instruction;
binding all the voice instructions and their matched voice control logic to the user's voiceprint information and uploading them to the cloud;
establishing a corpus at the cloud that stores the information uploaded by all users;
the voice control comprises:
receiving a user voice instruction sent by a sound pickup device;
performing voiceprint recognition on the voice instruction to obtain the user's voiceprint information and uploading it to the cloud;
querying the cloud corpus for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction;
and issuing the voice control logic to the corresponding target device to execute the target operation.
Preferably, the voice control logic further comprises the type or name of the target device matched with the voice instruction and the spatial information of the location of the target device.
Preferably, after voiceprint recognition is performed on the voice instruction, ASR speech recognition and NLU intent processing are also performed on the voice instruction to obtain corpus information of the voice instruction, where the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
In another aspect, the invention provides a smart home voice control system based on user habits, comprising:
a voiceprint management module, configured to enter the user's voiceprint information;
a voice control logic module, configured to custom-configure a corresponding voice control logic for each voice instruction of the user, wherein the voice control logic comprises the target operation to be executed by the target device matched with the voice instruction;
an information transmission module, configured to bind all the voice instructions and their matched voice control logic to the user's voiceprint information, upload them to the cloud, and store them in the corpus;
a voiceprint recognition module, configured to perform voiceprint recognition on a user voice instruction sent by a sound pickup device, obtain the user's voiceprint information, and upload it to the cloud through the information transmission module;
and a logic query module, configured to query the cloud corpus for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction, and to issue the voice control logic to the corresponding target device through the information transmission module to execute the target operation.
Preferably, the voice control logic further comprises the type or name of the target device matched with the voice instruction and the spatial information of the location of the target device.
Preferably, the system further comprises an ASR speech recognition module for converting the user's voice instruction into text.
Preferably, the system further comprises an NLU natural language understanding module, configured to understand and process the result recognized by the ASR speech recognition module to obtain corpus information of the voice instruction, where the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
In yet another aspect, the present invention provides a computer storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the steps of the smart home voice control method described above are implemented.
Compared with the prior art, the invention has the following beneficial effects:
Starting from real-life scenarios, and recognizing that the true intentions behind the same spoken expression may differ between users and between scenarios, the invention changes the logic of voice-controlled home devices from the original unified cloud-issued logic to logic isolated per user, so that each user can customize dedicated device control logic according to his or her own habits of expression. Given that current NLU technology cannot always identify user intent completely and accurately, this changes voice control of home devices from one-size-fits-all to personalized for each user.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a system architecture diagram of a current smart home voice control logic.
Fig. 2 is a system architecture diagram of a smart home voice control method based on user habits according to an embodiment of the present invention.
Fig. 3 is a flowchart of steps of establishing a corpus in the smart home voice control method according to the embodiment of the present invention.
Fig. 4 is a flowchart of voice control steps in the voice control method for smart home according to the embodiment of the present invention.
Fig. 5 is a block diagram of a structure of a voice control system for smart homes based on user habits according to an embodiment of the present invention.
Detailed Description
To make the objects, features, and advantages of the present invention easy to understand, embodiments are described in detail below with reference to the accompanying drawings, in which several embodiments of the invention are presented. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete.
The terms used in the embodiments of the present invention are explained as follows:
ASR: Automatic Speech Recognition, a technology for converting human speech into text.
NLP: Natural Language Processing, a technology for communicating with computers in natural language. It studies how to use computers to simulate the human process of language communication, so that a computer can understand and use natural human languages such as Chinese and English, enabling natural-language communication between humans and machines and replacing part of human mental labour, including information query, question answering, document extraction and compilation, and other processing of natural-language information.
The invention provides a smart home voice control method based on user habits, applied to a smart home ecosystem. The smart home ecosystem comprises a smart home application (APP), a voice acquisition device, and a plurality of smart home devices. The voice acquisition device communicates with the smart home application wirelessly or by wire, and the smart home devices likewise communicate with the smart home application wirelessly or by wire. When a user wants to control a smart home device by voice, the user speaks; the voice acquisition device captures the voice information uttered by the user and, once captured, sends it to the smart home application.
The embodiments of the invention provide a smart home voice control method and system based on user habits; the system is used to implement the voice control method, and it can be understood that the system is the smart home application (APP) mentioned above. In the embodiments, the voice acquisition device is a sound pickup device (a smart speaker, a voice assistant, or the like).
Referring to Figs. 2 to 4, an embodiment of the present invention provides a smart home voice control method based on user habits, which comprises first establishing a corpus and then repeatedly performing voice control of the smart home based on the corpus.
the process of establishing the corpus comprises the following steps:
step S11: inputting voiceprint information of a user;
in particular, in this step, an electro-acoustic instrument may be used to perform voiceprint recognition on the user's voice, and the voiceprint information has the same identity recognition function as a fingerprint. According to a preset voice recognition algorithm, the voiceprint information of the user in the voice instruction can be recognized.
Step S12: collecting voice control logics which are self-defined and configured by a user for each voice instruction, wherein each voice control logic comprises target operation which is required to be executed by target equipment matched with the corresponding voice instruction;
specifically, in the step, a user configures, in a customized manner, a corresponding target device type or device ID that each voice command needs to be executed in the system according to a living habit of the user, where the target device type or ID is a type or ID of a target smart home device, and the target operation is an operation that is executed by the target smart home device under control of the voice command of the user; and binding with the voiceprint information input by the user in the system in the previous step, and identifying different users through voiceprints and finding corresponding self-defined voice control logic. If the user A can set the voice control logic of the voice command 'turn on the lamp' as 'turn on all the ceiling lamps'; user B sets the voice control logic of the voice command "turn on light" to "turn on the hallway light".
Step S13: binding all voice instructions and the corresponding matched voice control logic with voiceprint information of the user and then uploading the voiceprint information to the cloud;
step S14: establishing a corpus at a cloud end, and storing information uploaded by all users;
Second, the process of performing voice control of the smart home based on the established corpus comprises the following steps:
step 21: receiving a user voice instruction sent by pickup equipment;
it should be noted that the sound pickup apparatus may be an intelligent sound box, a voice assistant, and the like, and after the sound pickup apparatus logs in the system, information corresponding to the sound pickup apparatus, including ID information, location space information, and the like, is automatically stored on the system, so that when the system acquires a voice command of the sound pickup apparatus, the system can automatically recognize the location space information of the sound pickup apparatus, and in a broad sense, the system acquires the voice command and the location space information of the sound pickup apparatus at the same time. The user voice command, for example, the user says "light on", "air conditioner on", "television on", and among them "light", "air conditioner", "television" should be a smart home device with smart control capability.
Step 22: performing voiceprint recognition on the voice instruction to obtain the user's voiceprint information and uploading it to the cloud;
Further, in this step, after voiceprint recognition is performed on the voice instruction, ASR speech recognition and NLU intent processing may also be performed on the voice instruction to obtain corpus information of the voice instruction, where the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
Step 23: querying the corpus established at the cloud for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction;
For example, when the recognized voiceprint information belongs to user A and the content of user A's voice instruction obtained after ASR speech recognition and NLU intent processing is "turn on the light", the corpus can be queried for the voice control logic custom-configured by user A for the instruction "turn on the light", namely "turn on all the ceiling lamps".
Step 24: issuing the voice control logic to the corresponding target device to execute the target operation.
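The runtime flow of steps 21 to 24 can be sketched as follows. This is again a hypothetical outline, with recognize_voiceprint, transcribe_and_parse, and dispatch standing in for the voiceprint recognition, ASR/NLU, and device-control capabilities that the description treats as existing services, and with the corpus passed in as a plain dictionary keyed by voiceprint identity and instruction text.

    from typing import Dict, Optional

    # corpus: voiceprint_id -> (instruction text -> control logic description)
    Corpus = Dict[str, Dict[str, dict]]

    def recognize_voiceprint(audio: bytes) -> str:
        """Step 22 placeholder: a real system would call a voiceprint service."""
        raise NotImplementedError

    def transcribe_and_parse(audio: bytes) -> str:
        """Placeholder for ASR speech recognition plus optional NLU intent processing."""
        raise NotImplementedError

    def dispatch(logic: dict) -> None:
        """Step 24 placeholder: issue the target operation to the target device."""
        print(f"{logic['operation']} -> {logic['device']} in {logic['space']}")

    def handle_voice_instruction(audio: bytes, corpus: Corpus) -> Optional[dict]:
        voiceprint_id = recognize_voiceprint(audio)              # step 22
        instruction = transcribe_and_parse(audio)                # ASR + NLU
        logic = corpus.get(voiceprint_id, {}).get(instruction)   # step 23
        if logic is not None:
            dispatch(logic)                                      # step 24
        return logic

For the example above, corpus["user_A"]["turn on the light"] would hold the logic "turn on all the ceiling lamps", while corpus["user_B"]["turn on the light"] would hold "turn on the hallway light".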
Referring to Fig. 5, an embodiment of the present invention provides a smart home voice control system based on user habits, the system comprising:
a voiceprint management module 31, configured to enter the user's voiceprint information;
a voice control logic module 32, configured to custom-configure a corresponding voice control logic for each voice instruction of the user, wherein the voice control logic comprises the target operation to be executed by the target device matched with the voice instruction;
an information transmission module 33, configured to bind all the voice instructions and their matched voice control logic to the user's voiceprint information, upload them to the cloud, and store them in the corpus;
a voiceprint recognition module 34, configured to perform voiceprint recognition on a user voice instruction sent by the sound pickup device, obtain the user's voiceprint information, and upload it to the cloud through the information transmission module 33;
and a logic query module 35, configured to query the cloud corpus for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction, and to issue the voice control logic to the corresponding target device through the information transmission module 33 to execute the target operation.
The voice control logic configured through the voice control logic module 32 may further include the type or name (ID) of the target device matched with the voice instruction, and the spatial information of the target device (which may sometimes be obtained from the location information of the sound pickup device).
Further, the smart home voice control system based on user habits of this embodiment may further comprise an ASR speech recognition module 36 and an NLU natural language understanding module 37. The ASR speech recognition module 36 uses AI capabilities to convert the user's voice instruction into text; the NLU natural language understanding module 37 uses AI capabilities to understand and process the result recognized by the ASR speech recognition module 36 and obtain corpus information of the voice instruction, where the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
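The module decomposition of Fig. 5 could be wired together roughly as follows; the class and method names (identify, transcribe, parse, lookup, issue) are assumptions made for this sketch and are not interfaces defined by the patent.

    class SmartHomeVoiceControlSystem:
        """Composes the modules of Fig. 5: voiceprint management (31), voice control
        logic configuration (32), information transmission (33), voiceprint
        recognition (34), logic query (35), and optional ASR (36) / NLU (37)."""

        def __init__(self, voiceprint_mgr, logic_cfg, transport,
                     voiceprint_rec, logic_query, asr=None, nlu=None):
            self.voiceprint_mgr = voiceprint_mgr
            self.logic_cfg = logic_cfg
            self.transport = transport
            self.voiceprint_rec = voiceprint_rec
            self.logic_query = logic_query
            self.asr = asr
            self.nlu = nlu

        def on_voice_instruction(self, audio: bytes) -> None:
            voiceprint = self.voiceprint_rec.identify(audio)             # module 34
            text = self.asr.transcribe(audio) if self.asr else None      # module 36
            intent = self.nlu.parse(text) if (self.nlu and text) else text  # module 37
            logic = self.logic_query.lookup(voiceprint, intent)          # module 35
            if logic is not None:
                self.transport.issue(logic)                              # module 33

A configuration front end would use voiceprint_mgr and logic_cfg during corpus building, while on_voice_instruction covers the runtime path from sound pickup device to target device.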
Furthermore, embodiments of the present invention also provide a computer storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the methods of the embodiments described above.
Starting from real-life scenarios, and recognizing that the true intentions behind the same spoken expression may differ between users and between scenarios, the invention changes the logic of voice-controlled home devices from the original unified cloud-issued logic to logic isolated per user, so that each user can customize dedicated device control logic according to his or her own habits of expression. Given that current NLU technology cannot always identify user intent completely and accurately, this changes voice control of home devices from one-size-fits-all to personalized for each user.
The above embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims. In addition, the parts not related to the invention are the same as, or can be implemented with, the prior art.

Claims (8)

1. A smart home voice control method based on user habits, characterized by comprising first establishing a corpus and then repeatedly performing voice control of the smart home based on the corpus;
establishing the corpus comprises:
entering the voiceprint information of a user;
collecting the voice control logic custom-configured by the user for each voice instruction, wherein the voice control logic comprises the target operation to be executed by the target device matched with the voice instruction;
binding all the voice instructions and their matched voice control logic to the user's voiceprint information and uploading them to the cloud;
establishing a corpus at the cloud that stores the information uploaded by all users;
the voice control comprises:
receiving a user voice instruction sent by a sound pickup device;
performing voiceprint recognition on the voice instruction to obtain the user's voiceprint information and uploading it to the cloud;
querying the cloud corpus for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction;
and issuing the voice control logic to the corresponding target device to execute the target operation.
2. The smart home voice control method based on user habits according to claim 1, wherein the voice control logic further comprises the type or name of the target device matched with the voice instruction and the spatial information of the location of the target device.
3. The smart home voice control method based on user habits according to claim 1, wherein after voiceprint recognition is performed on the voice instruction, ASR speech recognition and NLU intent processing are further performed on the voice instruction to obtain corpus information of the voice instruction, and the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
4. A smart home voice control system based on user habits, characterized by comprising:
a voiceprint management module, configured to enter the user's voiceprint information;
a voice control logic module, configured to custom-configure a corresponding voice control logic for each voice instruction of the user, wherein the voice control logic comprises the target operation to be executed by the target device matched with the voice instruction;
an information transmission module, configured to bind all the voice instructions and their matched voice control logic to the user's voiceprint information, upload them to the cloud, and store them in the corpus;
a voiceprint recognition module, configured to perform voiceprint recognition on a user voice instruction sent by a sound pickup device, obtain the user's voiceprint information, and upload it to the cloud through the information transmission module;
and a logic query module, configured to query the cloud corpus for the voice control logic that matches the voiceprint information and was custom-configured by the user for the voice instruction, and to issue the voice control logic to the corresponding target device through the information transmission module to execute the target operation.
5. The smart home voice control system based on user habits according to claim 4, wherein the voice control logic further comprises the type or name of the target device matched with the voice instruction and the spatial information of the location of the target device.
6. The smart home voice control system based on user habits according to claim 4, further comprising an ASR speech recognition module for converting the user's voice instruction into text.
7. The smart home voice control system based on user habits according to claim 6, further comprising an NLU natural language understanding module for understanding and processing the result recognized by the ASR speech recognition module to obtain corpus information of the voice instruction, wherein the corpus information includes at least one of the following: the type or name of the target device; the space where the target device is located; and the target operation.
8. A computer storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 3.
CN202111040743.8A 2021-09-07 2021-09-07 Intelligent household voice control method and system based on user habits and storage medium Pending CN113611306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040743.8A CN113611306A (en) 2021-09-07 2021-09-07 Intelligent household voice control method and system based on user habits and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111040743.8A CN113611306A (en) 2021-09-07 2021-09-07 Intelligent household voice control method and system based on user habits and storage medium

Publications (1)

Publication Number Publication Date
CN113611306A true CN113611306A (en) 2021-11-05

Family

ID=78342686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040743.8A Pending CN113611306A (en) 2021-09-07 2021-09-07 Intelligent household voice control method and system based on user habits and storage medium

Country Status (1)

Country Link
CN (1) CN113611306A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107919121A (en) * 2017-11-24 2018-04-17 江西科技师范大学 Control method, device, storage medium and the computer equipment of smart home device
CN110286601A (en) * 2019-07-01 2019-09-27 珠海格力电器股份有限公司 Control the method, apparatus, control equipment and storage medium of smart home device
CN111554286A (en) * 2020-04-26 2020-08-18 云知声智能科技股份有限公司 Method and equipment for controlling unmanned aerial vehicle based on voice
CN112201233A (en) * 2020-09-01 2021-01-08 沈澈 Voice control method, system and device of intelligent household equipment and computer storage medium
CN112562670A (en) * 2020-12-03 2021-03-26 深圳市欧瑞博科技股份有限公司 Intelligent voice recognition method, intelligent voice recognition device and intelligent equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023098002A1 (en) * 2021-12-03 2023-06-08 青岛海尔科技有限公司 Method, system and apparatus for controlling household appliance, and storage medium and electronic apparatus
CN115346530A (en) * 2022-10-19 2022-11-15 亿咖通(北京)科技有限公司 Voice control method, device, equipment, medium, system and vehicle
CN117555250A (en) * 2024-01-02 2024-02-13 珠海格力电器股份有限公司 Control method, device, equipment and storage medium
CN117555250B (en) * 2024-01-02 2024-05-31 珠海格力电器股份有限公司 Control method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108831469B (en) Voice command customizing method, device and equipment and computer storage medium
CN106647311B (en) Intelligent central control system, equipment, server and intelligent equipment control method
WO2020253064A1 (en) Speech recognition method and apparatus, and computer device and storage medium
CN113611306A (en) Intelligent household voice control method and system based on user habits and storage medium
CN105118257A (en) Intelligent control system and method
WO2019001451A1 (en) Intelligent device control method, apparatus, system and computer storage medium
CN110060677A (en) Voice remote controller control method, device and computer readable storage medium
CN110932953A (en) Intelligent household control method and device, computer equipment and storage medium
CN110992937B (en) Language off-line identification method, terminal and readable storage medium
CN111462741B (en) Voice data processing method, device and storage medium
CN111048085A (en) Off-line voice control method, system and storage medium based on ZIGBEE wireless technology
CN113611305A (en) Voice control method, system, device and medium in autonomous learning home scene
CN115327932A (en) Scene creation method and device, electronic equipment and storage medium
CN108932947B (en) Voice control method and household appliance
CN107742520B (en) Voice control method, device and system
CN114067798A (en) Server, intelligent equipment and intelligent voice control method
CN110531632B (en) Control method and system
CN113990324A (en) Voice intelligent home control system
CN107680598B (en) Information interaction method, device and equipment based on friend voiceprint address list
CN114391165A (en) Voice information processing method, device, equipment and storage medium
CN107180629B (en) Voice acquisition and recognition method and system
CN113056066B (en) Light adjusting method, device, system and storage medium based on television program
CN114242054A (en) Intelligent device control method and device, storage medium and electronic device
CN212519027U (en) Intelligent home system based on voice control
CN114627859A (en) Method and system for recognizing electronic photo frame in offline semantic manner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination