CN114020909A - Scene-based smart home control method, device, equipment and storage medium - Google Patents

Scene-based smart home control method, device, equipment and storage medium Download PDF

Info

Publication number
CN114020909A
CN114020909A (Application CN202111292674.XA)
Authority
CN
China
Prior art keywords
scene
text
text data
control
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111292674.XA
Other languages
Chinese (zh)
Inventor
严海强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Konka Electronic Technology Co Ltd
Original Assignee
Shenzhen Konka Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Konka Electronic Technology Co Ltd
Priority to CN202111292674.XA
Publication of CN114020909A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention provides a scene-based smart home control method, device, equipment and storage medium. The method comprises the following steps: acquiring voice data of a user and recognizing the voice data to obtain corresponding text data; identifying the text data to determine whether it is a control text; when the text data is determined to be a control text, determining whether it is a scene control text, at least according to a pre-constructed scene dictionary and a classification model; and when the text data is determined to be a scene control text, acquiring the scene information corresponding to the scene control text and controlling the working state of at least one corresponding smart home device according to that scene information. By setting mapping relations between different scenes and individual smart home devices, the method allows multiple associated devices to be controlled simultaneously with a single voice control instruction that names a scene, improving the user experience.

Description

Scene-based smart home control method, device, equipment and storage medium
Technical Field
The invention relates to the field of smart homes, and in particular to a scene-based smart home control method, device, equipment and storage medium.
Background
With advances in technology and the growing popularity of the smart home concept, smart home products are increasingly entering ordinary people's lives, providing a more comfortable and convenient living environment. At present, smart home control systems are generally built on speech technology, which comprises speech recognition and semantic understanding: automatic speech recognition (ASR) converts the speech signal into text, and semantic parsing identifies and extracts the corresponding control command from the text using Natural Language Processing (NLP) techniques from the field of Artificial Intelligence (AI). The control command is then sent to IoT devices through a big data platform so that the corresponding devices can carry out the requested operations.
Current smart home systems generally support control of only a single device at a time, so in some situations a user must go through multiple voice interactions to control the desired smart devices, which makes for a poor user experience.
Disclosure of Invention
In view of the above, the present invention provides a scene-based smart home control method, apparatus, equipment and storage medium to address the above problem.
An embodiment of the invention provides a scene-based smart home control method, which comprises the following steps:
acquiring voice data of a user, and recognizing the voice data to obtain corresponding text data;
identifying the text data to determine whether the text data is a control text;
when the text data is determined to be a control text, determining whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model;
and when the text data is determined to be a scene control text, acquiring scene information corresponding to the scene control text, and controlling the working state of at least one corresponding smart home device according to the scene information.
Preferably, acquiring voice data of a user and recognizing the voice data to obtain corresponding text data specifically includes:
receiving voice data sent by the user through a channel end;
and performing character recognition on the voice data, then performing text error correction and device normalization on the recognized characters to obtain the text data.
Preferably, identifying the text data to determine whether the text data is a control text specifically includes:
performing rule matching on the text data against a pre-constructed control dictionary;
if the matching succeeds, judging the text data to be a control text;
if the matching fails, inputting the text data into a pre-trained first classification model to obtain a classification result, the first classification model being a binary classification model trained on an existing corpus using TF-IDF features and the K-nearest-neighbor algorithm;
and determining from the classification result whether the text data is a control text.
Preferably, when the text data is determined to be a control text, determining whether the text data is a scene control text at least according to a pre-constructed scene dictionary and a classification model specifically includes:
when the text data is determined to be a control text, performing rule matching on the text data against the pre-constructed scene dictionary;
if the matching succeeds, marking the text data as a scene text and acquiring the scene information matched with the text data;
if the matching fails, inputting the text data into a trained second classification model and obtaining the candidate classification results together with the probability value of each result, the second classification model being a multi-class model trained on pre-trained scene word vectors using the K-nearest-neighbor algorithm;
determining whether the largest of those probability values is greater than a preset threshold;
if it is, outputting the classification result with the largest probability value as the scene information of the text data;
and if it is not, outputting a null value and performing entity recognition to acquire the control instruction corresponding to the text data.
Preferably, performing entity recognition to acquire the control instruction corresponding to the text data includes:
marking each character in the text data according to a pre-trained marking model;
performing entity extraction on the marked text data through a deep learning model trained on a network structure combining BERT and CRF, to obtain the corresponding entities;
and normalizing the entities to generate the corresponding control command.
Preferably, in marking each character, the markers include:
a first marker indicating the start position of an entity;
a second marker indicating a position inside an entity;
and a third marker indicating a position outside any entity.
Preferably, the entity types include category, function, command, value and brand.
An embodiment of the invention also provides a scene-based smart home control apparatus, which comprises:
a voice processing unit, configured to acquire voice data of a user and recognize the voice data to obtain corresponding text data;
an identification unit, configured to identify the text data to determine whether the text data is a control text;
a scene determining unit, configured to determine, when the text data is determined to be a control text, whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model;
and a scene control unit, configured to acquire, when the text data is determined to be a scene control text, the scene information corresponding to the scene control text, and to control the working state of at least one corresponding smart home device according to the scene information.
An embodiment of the invention also provides scene-based smart home control equipment, which comprises a processor and a memory, the memory storing a computer program executable by the processor to implement the scene-based smart home control method described above.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program, the computer program being executable by a processor of the device on which the storage medium resides, to implement the scene-based smart home control method described above.
In summary, in this embodiment, mapping relations are set between different scenes and individual smart home devices, so that a plurality of associated smart home devices can be controlled simultaneously by issuing a single voice control instruction that includes a scene. This avoids the cumbersome operations otherwise required to control several smart home devices one by one, and improves the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a scene-based smart home control method according to a first embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a scene-based smart home control device according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
referring to fig. 1, a first embodiment of the present invention provides a scene-based intelligent home control method, which is executed by a scene-based intelligent home control device (hereinafter referred to as a control device), and in particular, executed by one or more processors in the control device, to implement the following steps:
s101, voice data of a user are obtained, and the voice data are identified to obtain corresponding text data.
In this embodiment, smart homes are the various smart devices deployed in a home environment, such as a smart television, smart lighting, a smart toilet, a smart refrigerator, a smart speaker, a smart water heater and a smart fan, without limitation in this disclosure. These smart devices can generally be networked, and their functions can be controlled over the network connection. For example, the smart devices may connect to a router in the home, and through the router to a control device that is local to the home or to a server located in the cloud.
In this embodiment, the user may issue voice data through a channel end, such as one of various smart speakers; after receiving the voice data, the channel end sends it to the control device over the network.
In this embodiment, the control device may be a device located in the home, such as the user's smartphone or a smart router, or it may be a server located in the cloud.
In this embodiment, after obtaining the voice data, the control device may perform recognition processing on the voice data to obtain corresponding text data.
Specifically, after receiving the voice data, the control device performs speech recognition on the voice data to obtain the corresponding text, and then performs text error correction and device normalization on that text to obtain the final text data. Text error correction mainly fixes words that were misrecognized despite similar pronunciation; a KenLM-based language model can be adopted as the correction scheme, scene-specific error sets can be added in real time to further improve the correction effect, and the tool used is pycorrector. Device normalization refers to mapping device names to canonical types according to a pre-built device-name dictionary, for example "leke intelligent circulation fan" -> "fan" and "light strip" -> "light bulb".
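As a concrete illustration, this cleanup step might look like the following minimal Python sketch. The module-level pycorrector.correct call matches older releases of the pycorrector library (newer releases expose a Corrector class instead), and the alias dictionary is an assumption for the sketch, not the patent's actual device-name dictionary:

import pycorrector  # KenLM-backed Chinese text error correction

DEVICE_ALIASES = {  # illustrative device-name normalization dictionary
    "leke intelligent circulation fan": "fan",
    "light strip": "light bulb",
}

def normalize_text(asr_text):
    # Language-model error correction, then device-name normalization.
    corrected, _details = pycorrector.correct(asr_text)
    for alias, canonical in DEVICE_ALIASES.items():
        corrected = corrected.replace(alias, canonical)
    return corrected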
S102, the text data is identified to determine whether it is a control text.
In this embodiment, after the text data is obtained, the domain to which it belongs must be determined: whether it is a control text, a chat text, an invalid-content text, and so on.
Specifically, the method comprises the following steps:
s1021, constructing a domain dictionary, and training a binary model based on TF-IDF (term frequency-inverse document frequency) and K Nearest Neighbor (KNN) algorithm by using the existing corpus.
Here, TF-IDF stands for term frequency-inverse document frequency. TF is the term frequency, i.e., how often the word appears in the text; IDF is the inverse document frequency, which is inversely related to the number of documents containing the word; multiplying the two gives the TF-IDF value of a word. In general, the larger a word's TF-IDF value in a document, the more important the word is to that document. The K-nearest-neighbor algorithm is a machine learning classification model; its classification criterion is that if most of the K nearest samples around a sample belong to a certain class, the sample belongs to that class as well.
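As a hedged illustration of this step (not the patent's actual training code), such a binary model could be assembled with scikit-learn roughly as follows; the toy corpus, labels and n-gram settings are assumptions:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

corpus = ["turn on the light", "set the fan to medium speed",
          "tell me a joke", "how are you today"]
labels = ["control", "control", "other", "other"]

first_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # char n-grams suit short utterances
    KNeighborsClassifier(n_neighbors=3),                      # KNN: majority vote of nearest samples
)
first_model.fit(corpus, labels)
print(first_model.predict(["please turn off the light"]))     # expected: ['control']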
S1022, determine whether the text data is a control text by combining rule matching with model-based judgment.
Specifically, rule matching is first performed on the input text data against a dictionary of control words. If the text matches the dictionary, it is a control text; otherwise, the text data is input into the trained binary first classification model to obtain a first classification result. If that result is "control text", the current text data is a control text; otherwise it is some other text, such as a chat text or an invalid text.
In this embodiment, rule matching is given the higher priority mainly because the spoken text is short and strongly domain-specific, so rule matching is faster and more accurate; the first classification model is added on top of it to further improve recall.
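This rules-first cascade can be sketched as follows, reusing the first_model pipeline from the previous sketch; the keyword dictionary is an illustrative assumption:

CONTROL_WORDS = {"turn on", "turn off", "open", "close", "set", "adjust"}

def is_control_text(text):
    # Rule matching first: fast and precise for short, domain-specific texts.
    if any(word in text for word in CONTROL_WORDS):
        return True
    # Model fallback improves recall on texts the dictionary misses.
    return first_model.predict([text])[0] == "control"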
S103, when the text data is determined to be a control text, it is determined whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model.
In this embodiment, scene recognition is performed on control texts belonging to the control domain, to decide between scene control and device control.
First, a scene dictionary is constructed.
A set of matching keywords is constructed for each scene; for example, the keywords of the sleep mode include "rest", "sleep", "night" and the like. Then a multi-class second classification model is trained using pre-trained word vectors and the K-nearest-neighbor algorithm, with classes covering the different scenes such as coming home, leaving home, sleep and getting up.
Then, rule matching and model judgment are combined to decide between scene control and device control.
Specifically, rule matching is first performed on the input text data against the scene dictionary; if it matches, the text data is marked as a scene text and the matching scene information is acquired. If the matching fails, the text data is input into the trained second classification model, and the candidate classification results together with the probability value of each result are obtained. It is then judged whether the largest of those probability values exceeds the preset threshold; if it does, the classification result with the largest probability value is output as the scene information of the text data; if it does not, a null value is output, and entity recognition is performed to acquire the control instruction corresponding to the text data.
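A minimal sketch of this two-stage scene decision follows, assuming an illustrative scene dictionary, a trained multi-class model scene_model exposing predict_proba (for example, a scikit-learn KNeighborsClassifier over averaged pre-trained word vectors), and an assumed 0.6 threshold:

SCENE_DICT = {  # illustrative keyword sets, one per scene
    "sleep":      ["rest", "sleep", "good night"],
    "come home":  ["I'm home", "I'm back"],
    "leave home": ["I'm leaving", "going out"],
    "get up":     ["wake up", "good morning"],
}
THRESHOLD = 0.6  # preset probability threshold (assumed)

def recognize_scene(text):
    # Rule matching against the scene dictionary takes priority.
    for scene, keywords in SCENE_DICT.items():
        if any(kw in text for kw in keywords):
            return scene
    # Model fallback: accept the top class only if confident enough.
    probs = scene_model.predict_proba([text])[0]
    best = probs.argmax()
    if probs[best] > THRESHOLD:
        return scene_model.classes_[best]
    return None  # null value: fall through to entity recognition (device control)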
Performing entity recognition to acquire the control instruction corresponding to the text data specifically includes:
First, each character in the text data is marked according to a pre-trained marking model.
Then, entity extraction is performed on the marked text data through a deep learning model trained on a network structure combining BERT and CRF, to obtain the corresponding entities.
In this embodiment, the markers include:
a first marker B (short for Begin), indicating the start position of an entity;
a second marker I (short for Inside), indicating a position inside an entity;
a third marker O (short for Outside), indicating a position outside any entity.
In this embodiment, the entity types may include category, function (skill), command (order), value and brand, among others, set according to actual requirements.
For example, for the text data "帮我打开华为的风扇设置风扇风速为中档" ("Help me turn on the Huawei fan and set the fan speed to medium"), the marking and entity recognition results are shown in Table 1:
TABLE 1

Character  Label        Character  Label
帮         O            设         B-order
我         O            置         I-order
打         B-order      风         B-category
开         I-order      扇         I-category
华         B-brand      风         B-skill
为         I-brand      速         I-skill
的         O            为         O
风         B-category   中         B-value
扇         I-category   档         I-value
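Once a tag sequence like the one in Table 1 has been produced by the BERT+CRF model, decoding it into typed entities is a simple left-to-right scan. The following Python sketch illustrates only that decoding step, using the tagged sentence from Table 1 (the decode function is an illustrative helper, not part of the patent):

def decode_bio(chars, tags):
    # Collect B-/I-/O tagged characters into typed entity spans.
    entities, current = [], None
    for ch, tag in zip(chars, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = {"type": tag[2:], "text": ch}
        elif tag.startswith("I-") and current and current["type"] == tag[2:]:
            current["text"] += ch
        else:  # "O" or an inconsistent I- tag closes any open entity
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return entities

chars = list("帮我打开华为的风扇设置风扇风速为中档")
tags = ["O", "O", "B-order", "I-order", "B-brand", "I-brand", "O",
        "B-category", "I-category", "B-order", "I-order", "B-category",
        "I-category", "B-skill", "I-skill", "O", "B-value", "I-value"]
print(decode_bio(chars, tags))
# -> [{'type': 'order', 'text': '打开'}, {'type': 'brand', 'text': '华为'}, ...]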
Finally, the entities are normalized to generate the corresponding control command: synonymous surface forms are mapped to canonical values (for example, different ways of saying "turn on" all map to one canonical order), as shown in Table 2:
TABLE 2 (entity normalization mapping; the table content was published as images in the original document and is not reproducible here)
The corresponding control command is generated as follows:
{"category": "electric fan", "brand": "Huawei", "command": {"skill": "switch", "order": "set", "value": "1"}}
S104, when the text data is determined to be a scene control text, the scene information corresponding to the scene control text is acquired, and the working state of at least one corresponding smart home device is controlled according to the scene information.
In this embodiment, if the scene information is, for example, sleep, the control device obtains the target working state of each home device corresponding to that scene information and controls each smart home device accordingly.
For example, when the scene information is sleep, the target working states of the corresponding smart home devices are:
{"category": "air conditioner", "brand": "XX", "command": {"skill": "switch", "order": "set", "value": "27"}}
{"category": "lamp", "brand": "XX", "command": {"skill": "switch", "order": "OFF"}}
{"category": "TV set", "brand": "XX", "command": {"skill": "switch", "order": "OFF"}}
The control device then performs the following actions simultaneously: the bedroom air conditioner is turned on and set to 27 degrees, while the lamp and the television are turned off.
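A sketch of the scene dispatch itself, assuming a scene-to-device-state mapping table and a send_command callable that delivers one command to the IoT platform (both are illustrative assumptions, not the patent's data structures):

SCENES = {
    "sleep": [
        {"category": "air conditioner", "brand": "XX",
         "command": {"skill": "switch", "order": "set", "value": "27"}},
        {"category": "lamp", "brand": "XX",
         "command": {"skill": "switch", "order": "OFF"}},
        {"category": "TV set", "brand": "XX",
         "command": {"skill": "switch", "order": "OFF"}},
    ],
}

def apply_scene(scene, send_command):
    # Issue one command per associated device so they take effect together.
    for device_state in SCENES.get(scene, []):
        send_command(device_state)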
In summary, in this embodiment, mapping relations are set between different scenes and individual smart home devices, so that a plurality of associated smart home devices can be controlled simultaneously by issuing a single voice control instruction that includes a scene. This avoids the cumbersome operations otherwise required to control several smart home devices one by one, and improves the user experience.
In addition, for the home appliance domain, the method creatively provides a complete solution that combines a domain dictionary, statistical learning methods and deep learning algorithms. It can recognize scenes, significantly improves the recognition rate of voice control commands, adapts to input voice texts under varied conditions, and improves the user experience.
Referring to fig. 2, a second embodiment of the present invention further provides a scene-based smart home control apparatus, which includes:
a voice processing unit 210, configured to acquire voice data of a user and recognize the voice data to obtain corresponding text data;
an identification unit 220, configured to identify the text data to determine whether the text data is a control text;
a scene determining unit 230, configured to determine, when the text data is determined to be a control text, whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model;
and a scene control unit 240, configured to acquire, when the text data is determined to be a scene control text, the scene information corresponding to the scene control text, and to control the working state of at least one corresponding smart home device according to the scene information.
Preferably, the voice processing unit 210 is specifically configured to:
receive voice data sent by the user through a channel end;
and perform character recognition on the voice data, then perform text error correction and device normalization on the recognized characters to obtain the text data.
Preferably, the identification unit 220 is specifically configured to:
perform rule matching on the text data against a pre-constructed control dictionary;
if the matching succeeds, judge the text data to be a control text;
if the matching fails, input the text data into a pre-trained first classification model to obtain a classification result, the first classification model being a binary classification model trained on an existing corpus using TF-IDF features and the K-nearest-neighbor algorithm;
and determine from the classification result whether the text data is a control text.
Preferably, the scene determining unit 230 is specifically configured to:
when the text data is determined to be a control text, perform rule matching on the text data against the pre-constructed scene dictionary;
if the matching succeeds, mark the text data as a scene text and acquire the scene information matched with the text data;
if the matching fails, input the text data into a trained second classification model and obtain the candidate classification results together with the probability value of each result, the second classification model being a multi-class model trained on pre-trained scene word vectors using the K-nearest-neighbor algorithm;
determine whether the largest of those probability values is greater than a preset threshold;
if it is, output the classification result with the largest probability value as the scene information of the text data;
and if it is not, output a null value and perform entity recognition to acquire the control instruction corresponding to the text data.
Here, performing entity recognition to acquire the control instruction corresponding to the text data comprises:
marking each character in the text data according to a pre-trained marking model;
performing entity extraction on the marked text data through a deep learning model trained on a network structure combining BERT and CRF, to obtain the corresponding entities;
and normalizing the entities to generate the corresponding control command.
Preferably, in marking each character, the markers include:
a first marker indicating the start position of an entity;
a second marker indicating a position inside an entity;
and a third marker indicating a position outside any entity.
Preferably, the entity types include category, function, command, value and brand.
A third embodiment of the present invention further provides scene-based smart home control equipment, which includes a processor and a memory, the memory storing a computer program executable by the processor to implement the scene-based smart home control method described above.
A fourth embodiment of the present invention further provides a computer-readable storage medium storing a computer program, the computer program being executable by a processor of the device on which the computer-readable storage medium resides, to implement the scene-based smart home control method described above.
In the embodiments provided by the present invention, it should be understood that the disclosed method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention; in this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations the functions noted in a block may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A scene-based smart home control method, characterized by comprising the following steps:
acquiring voice data of a user, and recognizing the voice data to obtain corresponding text data;
identifying the text data to determine whether the text data is a control text;
when the text data is determined to be a control text, determining whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model;
and when the text data is determined to be a scene control text, acquiring scene information corresponding to the scene control text, and controlling the working state of at least one corresponding smart home device according to the scene information.
2. The scene-based smart home control method according to claim 1, wherein acquiring voice data of a user and recognizing the voice data to obtain corresponding text data specifically comprises:
receiving voice data sent by the user through a channel end;
and performing character recognition on the voice data, then performing text error correction and device normalization on the recognized characters to obtain the text data.
3. The scene-based smart home control method according to claim 1, wherein identifying the text data to determine whether the text data is a control text specifically comprises:
performing rule matching on the text data against a pre-constructed control dictionary;
if the matching succeeds, judging the text data to be a control text;
if the matching fails, inputting the text data into a pre-trained first classification model to obtain a classification result, the first classification model being a binary classification model trained on an existing corpus using TF-IDF features and the K-nearest-neighbor algorithm;
and determining from the classification result whether the text data is a control text.
4. The scene-based smart home control method according to claim 1, wherein, when the text data is determined to be a control text, determining whether the text data is a scene control text at least according to a pre-constructed scene dictionary and a classification model specifically comprises:
when the text data is determined to be a control text, performing rule matching on the text data against the pre-constructed scene dictionary;
if the matching succeeds, marking the text data as a scene text and acquiring the scene information matched with the text data;
if the matching fails, inputting the text data into a trained second classification model and obtaining the candidate classification results together with the probability value of each result, the second classification model being a multi-class model trained on pre-trained scene word vectors using the K-nearest-neighbor algorithm;
determining whether the largest of those probability values is greater than a preset threshold;
if it is, outputting the classification result with the largest probability value as the scene information of the text data;
and if it is not, outputting a null value and performing entity recognition to acquire the control instruction corresponding to the text data.
5. The scene-based smart home control method according to claim 4, wherein performing entity recognition to obtain the control instruction corresponding to the text data comprises:
marking each character in the text data according to a pre-trained marking model;
performing entity extraction on the marked text data through a deep learning model trained on a network structure combining BERT and CRF, to obtain the corresponding entities;
and normalizing the entities to generate the corresponding control command.
6. The scene-based smart home control method according to claim 5, wherein, in marking each character, the markers comprise:
a first marker indicating the start position of an entity;
a second marker indicating a position inside an entity;
and a third marker indicating a position outside any entity.
7. The scene-based smart home control method according to claim 5, wherein the entity types comprise category, function, command, value and brand.
8. A scene-based smart home control apparatus, characterized by comprising:
a voice processing unit, configured to acquire voice data of a user and recognize the voice data to obtain corresponding text data;
an identification unit, configured to identify the text data to determine whether the text data is a control text;
a scene determining unit, configured to determine, when the text data is determined to be a control text, whether the text data is a scene control text, at least according to a pre-constructed scene dictionary and a classification model;
and a scene control unit, configured to acquire, when the text data is determined to be a scene control text, the scene information corresponding to the scene control text, and to control the working state of at least one corresponding smart home device according to the scene information.
9. Scene-based smart home control equipment, characterized by comprising a processor and a memory, the memory storing a computer program executable by the processor to implement the scene-based smart home control method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program is executable by a processor of the device on which the computer-readable storage medium resides, to implement the scene-based smart home control method according to any one of claims 1 to 7.
CN202111292674.XA 2021-11-03 2021-11-03 Scene-based smart home control method, device, equipment and storage medium Pending CN114020909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111292674.XA CN114020909A (en) 2021-11-03 2021-11-03 Scene-based smart home control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111292674.XA CN114020909A (en) 2021-11-03 2021-11-03 Scene-based smart home control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114020909A (published 2022-02-08)

Family

ID=80060206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111292674.XA Pending CN114020909A (en) 2021-11-03 2021-11-03 Scene-based smart home control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114020909A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115047778A (en) * 2022-06-20 2022-09-13 青岛海尔科技有限公司 Control method and device for intelligent equipment, storage medium and electronic device


Similar Documents

Publication Publication Date Title
CN106571140B (en) Intelligent electric appliance control method and system based on voice semantics
CN107437415B (en) Intelligent voice interaction method and system
CN110992934B (en) Defense method and defense device for black box attack model of voice recognition system
CN105334743B (en) A kind of intelligent home furnishing control method and its system based on emotion recognition
US10325593B2 (en) Method and device for waking up via speech based on artificial intelligence
CN110675870A (en) Voice recognition method and device, electronic equipment and storage medium
CN108694940B (en) Voice recognition method and device and electronic equipment
CN106875941B (en) Voice semantic recognition method of service robot
CN112100349A (en) Multi-turn dialogue method and device, electronic equipment and storage medium
CN112051743A (en) Device control method, conflict processing method, corresponding devices and electronic device
CN107526798B (en) Entity identification and normalization combined method and model based on neural network
CN109308319B (en) Text classification method, text classification device and computer readable storage medium
CN115761813A (en) Intelligent control system and method based on big data analysis
CN111161726B (en) Intelligent voice interaction method, device, medium and system
CN111199729B (en) Voiceprint recognition method and voiceprint recognition device
CN111178081B (en) Semantic recognition method, server, electronic device and computer storage medium
CN114020909A (en) Scene-based smart home control method, device, equipment and storage medium
CN110895936B (en) Voice processing method and device based on household appliance
CN205072656U Intelligent voice steam oven
CN116994565B (en) Intelligent voice assistant and voice control method thereof
CN107622769B (en) Number modification method and device, storage medium and electronic equipment
CN115104151A (en) Offline voice recognition method and device, electronic equipment and readable storage medium
CN116978367A (en) Speech recognition method, device, electronic equipment and storage medium
CN115858747A (en) Clustering-combined Prompt structure intention identification method, device, equipment and storage medium
CN110970019A (en) Control method and device of intelligent home system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination