CN116090461A - Intent recognition method of control instruction, storage medium and electronic device - Google Patents

Intent recognition method of control instruction, storage medium and electronic device

Info

Publication number
CN116090461A
CN116090461A
Authority
CN
China
Prior art keywords
intention
expression
control instruction
determining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310083554.1A
Other languages
Chinese (zh)
Inventor
杨令铎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202310083554.1A
Publication of CN116090461A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/194 Calculation of difference between files
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses an intention recognition method for a control instruction, a storage medium, and an electronic device, relating to the technical field of smart homes. The intention recognition method includes: acquiring a control instruction received by an intelligent device; determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction from the entity data; and determining, from a preset intention set, a target intention expression consistent with the intention expression, and determining the control intention corresponding to the target intention expression as the intention recognition result of the control instruction.

Description

Intent recognition method of control instruction, storage medium and electronic device
Technical Field
The application relates to the technical field of smart homes, and in particular to an intention recognition method for control instructions, a storage medium, and an electronic device.
Background
At present, in the smart home field, as the intelligence of smart appliances keeps improving, more and more of them provide services by recognizing the user's intention. For example, a classification model may classify the user's speech data and pick, from a set of predefined intentions, the control intention the user most likely has for the device. However, this approach relies entirely on the classification model to analyze the speech data: it does not take the user's control habits into account, so the recognition result is often not accurate enough and the user experience suffers.
The related art therefore faces the problem of how to improve the accuracy of user intention recognition results, and no effective solution to this problem has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide an intention recognition method for control instructions, a storage medium, and an electronic device, so as to at least solve the problem in the related art of how to improve the accuracy of user intention recognition results.
According to an embodiment of the present application, an intention recognition method for a control instruction is provided, including: acquiring a control instruction received by an intelligent device; determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction from the entity data; and determining, from a preset intention set, a target intention expression consistent with the intention expression, and determining the control intention corresponding to the target intention expression as the intention recognition result of the control instruction.
In an exemplary embodiment, determining the entity data corresponding to the control instruction and generating the intention expression of the control instruction from the entity data includes: performing text recognition on the control instruction to obtain recognized text data; determining the entity data from the text data; and generating the intention expression of the control instruction according to the entity data and a binary tree preset by a target object.
In an exemplary embodiment, determining the entity data from the text data includes: inputting the text data into a word segmentation model; segmenting the text data with a first word-segmentation sub-model of the word segmentation model to obtain a segmentation result; tagging the parts of speech of the segmented words in the result with a second word-segmentation sub-model of the word segmentation model to obtain tagged words, where each tagged word carries a word label representing its word category; and determining, as the entity data, the target words whose word label is an entity word.
In an exemplary embodiment, generating the intention expression of the control instruction according to the preset binary tree and the entity data includes: traversing the binary tree to obtain its tree nodes and the node expression corresponding to each tree node, where a tree node represents the category of an entity in the entity data; determining, from the tree nodes, target tree nodes whose category is consistent with the entity category of the entity data, and obtaining the corresponding target node expressions; and ordering the target node expressions in the traversal order to generate the intention expression of the control instruction.
In an exemplary embodiment, determining a target intention expression consistent with the intention expression from a preset intention set includes: parsing the intention expression to obtain its nonstandard words; obtaining the standard words corresponding to those nonstandard words from a preset dictionary; replacing the nonstandard words with the standard words to obtain a normalized intention expression; and determining, from the preset intention set, a target intention expression consistent with the normalized intention expression.
In an exemplary embodiment, before the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction, the method further includes: obtaining the expression similarity between the target intention expression and a first preset intention expression; when the similarity is greater than a first preset threshold, obtaining the control intention preset by the target object for the first preset intention expression; and determining that preset control intention as the control intention corresponding to the target intention expression.
In an exemplary embodiment, after the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction, the method further includes: establishing and storing a correspondence between the control instruction and the intention recognition result; and, when the intelligent device receives the control instruction again, determining the intention recognition result directly from the stored correspondence.
In an exemplary embodiment, determining a target intention expression consistent with the intention expression from a preset intention set includes: determining a second preset intention expression within the preset intention set, and computing the difference between a first length of the intention expression and a second length of the second preset intention expression; when the difference is greater than a second preset threshold, determining the real word that appears first in the intention expression and the real word that appears first in the second preset intention expression; and, when the parts of speech of the two real words are consistent, determining the second preset intention expression as the target intention expression.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being configured to execute the above intention recognition method when run.
According to yet another aspect of the embodiments of the present application, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the above intention recognition method through the computer program.
In the embodiments of the present application, a control instruction received by an intelligent device is acquired; entity data corresponding to the control instruction is determined, and an intention expression of the control instruction is generated from the entity data; a target intention expression consistent with the intention expression is determined from a preset intention set, and the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction. This scheme addresses the problem of how to improve the accuracy of user intention recognition results and thereby improves that accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an intent recognition method for control instructions according to an embodiment of the present application;
FIG. 2 is a flow chart of an intent recognition method of control instructions in accordance with an embodiment of the present application;
FIG. 3 is a schematic diagram of an intent recognition method of control instructions according to an embodiment of the present application;
FIG. 4 is a structural block diagram (I) of an intention recognition device for control instructions according to an embodiment of the present application;
FIG. 5 is a structural block diagram (II) of an intention recognition device for control instructions according to an embodiment of the present application.
Detailed Description
To make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the present application, an intention recognition method for control instructions is provided. The method is widely applicable to whole-house intelligent digital control scenarios such as Smart Home, smart home device ecosystems, and Intelligence House ecosystems. Optionally, in this embodiment, the method may be applied to a hardware environment composed of the terminal device 102 and the server 104 shown in FIG. 1. As shown in FIG. 1, the server 104 is connected to the terminal device 102 through a network and may provide services (such as application services) for the terminal or for a client installed on the terminal; a database may be set up on the server or independently of it to provide data storage services for the server 104, and cloud computing and/or edge computing services may be configured on the server or independently of it to provide data computing services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WiFi (Wireless Fidelity), Bluetooth. The terminal device 102 may include, but is not limited to, a PC, a mobile phone, a tablet computer, an intelligent air conditioner, an intelligent range hood, an intelligent refrigerator, an intelligent oven, an intelligent cooking range, an intelligent washing machine, an intelligent water heater, an intelligent washing device, an intelligent dishwasher, an intelligent projection device, an intelligent television, an intelligent clothes hanger, an intelligent curtain, an intelligent audio-video device, an intelligent socket, an intelligent sound box, an intelligent fresh-air device, intelligent kitchen and toilet equipment, an intelligent bathroom device, an intelligent sweeping robot, an intelligent window-cleaning robot, an intelligent mopping robot, an intelligent air purification device, an intelligent steam box, an intelligent microwave oven, an intelligent kitchen appliance, an intelligent purifier, an intelligent water dispenser, an intelligent door lock, and the like.
In this embodiment, an intention recognition method for control instructions applied to the above terminal device is provided. FIG. 2 is a flowchart of the intention recognition method for control instructions according to an embodiment of the present application; the flow includes the following steps:
step S202, a control instruction received by intelligent equipment is obtained;
it should be noted that, the obtaining the control instruction received by the intelligent device may include: and acquiring voice information received by the intelligent equipment, and performing voice recognition on the voice information to determine a control instruction for controlling the intelligent equipment.
Step S204, determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction according to the entity data;
step S206, determining a target intention expression consistent with the intention expression from a preset intention set, and determining a control intention corresponding to the target intention expression as an intention recognition result of the control instruction.
Through these steps, the control instruction received by the intelligent device is acquired; the entity data corresponding to the control instruction is determined and the intention expression is generated from it; and the target intention expression consistent with the intention expression is determined from the preset intention set, with its corresponding control intention taken as the intention recognition result. This solves the problem in the related art of how to improve the accuracy of user intention recognition results and thereby improves that accuracy.
Further, in step S204, determining the entity data corresponding to the control instruction and generating the intention expression of the control instruction from the entity data may include: performing text recognition on the control instruction to obtain recognized text data; determining the entity data from the text data; and generating the intention expression of the control instruction according to the entity data and a binary tree preset by the target object.
In an exemplary embodiment, to better explain how the entity data is determined from the text data, the following steps are provided: inputting the text data into a word segmentation model; segmenting the text data with a first word-segmentation sub-model of the word segmentation model to obtain a segmentation result; tagging the parts of speech of the segmented words in the result with a second word-segmentation sub-model to obtain tagged words, where each tagged word carries a word label representing its word category; and determining, as the entity data, the target words whose word label is an entity word.
The word segmentation model may be, for example, a word segmentation tool such as the LTP (Language Technology Platform) tool, the jieba tool, or the pkuseg tool. In practice, LTP segmentation can be performed through the pyltp module of Python, jieba segmentation through the jieba module, and pkuseg segmentation through the pkuseg module, although the implementation is not limited to these.
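For instance, a minimal sketch of segmentation plus part-of-speech tagging with the jieba module mentioned above (the sample sentence is illustrative, and jieba's built-in tag set differs from the custom entity labels described in this embodiment):

import jieba.posseg as pseg

# segment a sample instruction and tag each token's part of speech
for word, flag in pseg.cut("把空调温度调到25度"):
    print(word, flag)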
Optionally, in this embodiment, the learning rates of the first sub-model and the second sub-model may be set separately; for example, the learning rate of the first sub-model may be set greater than that of the second sub-model, or vice versa.
In an exemplary embodiment, to better explain how the intention expression of the control instruction is generated from the binary tree preset by the target object and the entity data, the following steps are provided: traversing the binary tree to obtain its tree nodes and the node expression corresponding to each tree node, where a tree node represents the category of an entity in the entity data; determining, from the tree nodes, target tree nodes whose category is consistent with the entity category of the entity data, and obtaining the corresponding target node expressions; and ordering the target node expressions in the traversal order to generate the intention expression of the control instruction. A sketch of this traversal appears below.
Note that the node expression corresponding to a tree node may be set, for example, according to the entity category device to which a device name of the intelligent device belongs, the entity category action to which an operation of the intelligent device belongs, or the entity category attr to which an attribute of the intelligent device belongs, although the categories are not limited to these.
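The patent does not fix a tree layout, so the following Python sketch is only one way to realize the traversal described above: an in-order walk collects, in traversal order, the node expressions whose entity category matches the extracted entity data; the Node structure, the category names, and the choice of in-order traversal are assumptions:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    category: str                    # entity category this node represents, e.g. "device"
    expression: str                  # node expression attached to this category
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def collect_node_expressions(root, entity_categories):
    """Collect node expressions whose category matches the extracted entities, in traversal order."""
    parts = []
    def inorder(node):
        if node is None:
            return
        inorder(node.left)
        if node.category in entity_categories:   # a target tree node
            parts.append(node.expression)
        inorder(node.right)
    inorder(root)
    # composing the ordered expressions into the final nested intention expression is application-specific
    return parts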
In an exemplary embodiment, to better describe how, in step S206, the target intention expression consistent with the intention expression is determined from the preset intention set, the following steps are provided: parsing the intention expression to obtain its nonstandard words; obtaining the standard words corresponding to those nonstandard words from a preset dictionary; replacing the nonstandard words with the standard words to obtain a normalized intention expression; and determining, from the preset intention set, a target intention expression consistent with the normalized intention expression.
In an exemplary embodiment, before the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction, the expression similarity between the target intention expression and a first preset intention expression may further be obtained; when the similarity is greater than a first preset threshold, the control intention preset by the target object for the first preset intention expression is obtained and determined as the control intention corresponding to the target intention expression.
Here, the expression similarity may be understood, for example, as the similarity of the word structures of two expressions.
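The embodiment does not fix a similarity measure; one plausible sketch compares the token structure of two expressions with Python's difflib (the tokenization and the use of SequenceMatcher are assumptions for illustration):

from difflib import SequenceMatcher
import re

def expression_similarity(expr_a, expr_b):
    # split each expression into structural tokens: names, parentheses, commas
    tokens_a = re.findall(r"\w+|[(),]", expr_a)
    tokens_b = re.findall(r"\w+|[(),]", expr_b)
    return SequenceMatcher(None, tokens_a, tokens_b).ratio()

# a similarity above the first preset threshold would reuse the preset control intention
print(expression_similarity("increase(dest_attr(air conditioner, temperature), 2 degrees)",
                            "increase(dest_attr(air conditioner, temperature), 5 degrees)"))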
In an exemplary embodiment, after the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction, the following steps are further provided: establishing and storing a correspondence between the control instruction and the intention recognition result; and, when the intelligent device receives the control instruction again, determining the intention recognition result directly from the stored correspondence.
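A minimal sketch of this caching step; the in-memory dictionary is an assumption (the embodiment only requires that the correspondence be stored and reused):

intent_cache = {}  # control instruction -> intention recognition result

def recognize_with_cache(instruction, recognize):
    """Return the stored result when the instruction was seen before; otherwise recognize and store it."""
    if instruction in intent_cache:
        return intent_cache[instruction]      # reuse the stored correspondence directly
    result = recognize(instruction)           # full pipeline: entities -> expression -> matching
    intent_cache[instruction] = result        # establish and store the correspondence
    return result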
In an exemplary embodiment, determining the target intention expression consistent with the intention expression from the preset intention set may also be implemented through the following steps: determining a second preset intention expression within the preset intention set, and computing the difference between a first length of the intention expression and a second length of the second preset intention expression; when the difference is greater than a second preset threshold, determining the real word that appears first in the intention expression and the real word that appears first in the second preset intention expression; and, when the parts of speech of the two real words are consistent, determining the second preset intention expression as the target intention expression.
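A sketch of this fallback comparison, assuming expression length is measured in characters and that a part-of-speech lookup for the first real (content) word is supplied by the caller; both assumptions are illustrative:

def fallback_match(intent_expr, preset_expr, first_pos, threshold):
    """Treat preset_expr as the target when the lengths differ sharply but the leading real words share a part of speech."""
    difference = abs(len(intent_expr) - len(preset_expr))   # first length vs second length
    if difference > threshold:                              # the second preset threshold
        return first_pos(intent_expr) == first_pos(preset_expr)
    return False

def toy_pos(expr):
    # toy part-of-speech lookup for illustration only
    return "verb" if expr.split("(")[0] in ("increase", "decrease") else "noun"

print(fallback_match("increase(-, 2 degrees)",
                     "increase(dest_attr(air conditioner, temperature), 2 degrees)",
                     toy_pos, threshold=10))  # -> True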
In an alternative embodiment, determining a target intention expression consistent with the intention expression from the preset intention set may be implemented with threads: a search thread for performing a full search of the preset intention set is generated; the intention expressions in the preset intention set are grouped into several groups; and the search thread searches the groups to determine the target intention expression consistent with the intention expression.
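A sketch of the grouped, threaded search using Python's concurrent.futures; the group size and the matching predicate are assumptions, and the thread pool generalizes the single search thread described above:

from concurrent.futures import ThreadPoolExecutor

def search_intent_set(intent_set, intent_expr, matches, group_size=100):
    """Split the preset intention set into groups and search the groups on worker threads."""
    groups = [intent_set[i:i + group_size] for i in range(0, len(intent_set), group_size)]
    def search_group(group):
        return [candidate for candidate in group if matches(candidate, intent_expr)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search_group, groups))
    return [hit for hits in results for hit in hits]  # target intention expressions found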
To better explain the intention recognition method for control instructions, its implementation flow is described below with reference to an alternative embodiment; this description is not intended to limit the technical solution of the embodiments of the present application.
In this embodiment, an intention recognition method for control instructions is provided in conjunction with FIG. 3, and specifically includes the following steps:
step one, entity extraction:
for example, the entity extraction may be performed on text data corresponding to a control instruction of a user using a bert model. Wherein a relationship between the following entity data and categories of entity data may be defined:
device: target device names such as refrigerator, air conditioner.
attr: target device attributes such as temperature, mode, volume are set.
attrValue: a target device attribute value, such as volume setting "26", is set.
Positioning: target device locations such as living room, bathroom, first floor.
action: target device actions such as set-up, open, close, turn-up, etc.
The structure of the BERT model may be as shown in FIG. 3, which is a schematic diagram of the intention recognition method for control instructions according to an embodiment of the present application. With the combined BERT (Chinese word segmentation) + CRF (part-of-speech tagging) structure, the learning rate of the CRF layer may be set greater than that of the BERT layer during training; for example, the CRF layer's learning rate may be set to 100 times that of the BERT layer.
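As a hedged illustration of this layer-wise learning-rate setup, the sketch below builds a PyTorch optimizer with separate parameter groups for the encoder and the CRF head; the stand-in modules, the base learning rate of 2e-5, and the use of AdamW are assumptions, while the 100x ratio follows this embodiment:

import torch
from torch import nn

bert = nn.Linear(768, 768)   # stand-in for the BERT encoder layers
crf = nn.Linear(768, 11)     # stand-in for the CRF tagging layer

base_lr = 2e-5               # assumed base learning rate for the BERT layer
optimizer = torch.optim.AdamW([
    {"params": bert.parameters(), "lr": base_lr},
    {"params": crf.parameters(), "lr": base_lr * 100},  # CRF layer trained at 100x the BERT rate
])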
In this embodiment, taking "adjust the air conditioner temperature to 25 degrees" as an example, the entity extraction model yields the following entities:
device: air conditioner;
attr: temperature;
attrValue: 25;
action: adjust.
Step two, generating the intention expression:
In this step, the entity data extracted in step one may be expressed formally as follows to obtain the generated intention expression: traverse the binary tree preset by the target object to obtain its tree nodes and the node expression corresponding to each tree node; determine, from the tree nodes, the target tree nodes consistent with the entity categories of the entity data and obtain the corresponding target node expressions; and order the target node expressions in the traversal order to generate the intention expression of the control instruction.
Optionally, taking "raise the temperature of the living room air conditioner by 2 degrees" as an example, the following intention expression can be obtained:
raise(dest_attr(location(air conditioner, living room), temperature), 2 degrees).
Step three, inferring the intention from the expression:
Specifically, step three includes the following steps:
Step 1, acquire a predefined intention set.
For example, with AirConditionerIncrTemp and AirConditionerDecrTemp as the codes for turning the air conditioner temperature up and down, an intention set containing the following mapping relationships is defined. Note that when defining the mapping relationships, all possible expression forms should be enumerated as far as possible.
Taking AirConditionerIncrTemp as an example of a preset intention set, it includes the following intention expressions:
AirConditionerIncrTemp: [
"increase(dest_attr(location(air conditioner, room), temperature), \d+ degrees)",
"increase(dest_attr(air conditioner, temperature), \d+ degrees)",
"increase(dest_attr(-, temperature), \d+ degrees)",
"increase(air conditioner, \d+ degrees)",
"increase(-, \d+ degrees)"
].
Taking AirConditionerDecrTemp as an example of a preset intention set, it includes the following intention expressions:
AirConditionerDecrTemp: [
"decrease(dest_attr(location(air conditioner, room), temperature), \d+ degrees)",
"decrease(dest_attr(air conditioner, temperature), \d+ degrees)",
"decrease(dest_attr(-, temperature), \d+ degrees)",
"decrease(air conditioner, \d+ degrees)",
"decrease(-, \d+ degrees)"
].
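Rendered as a Python mapping from intent code to regular-expression patterns (the regex rendering, in particular the escaping and the \d+ placeholder for the degree value, is an assumption based on the patterns above):

intents = {
    "AirConditionerIncrTemp": [
        r"increase\(dest_attr\(location\(air conditioner, room\), temperature\), \d+ degrees\)",
        r"increase\(dest_attr\(air conditioner, temperature\), \d+ degrees\)",
        r"increase\(dest_attr\(-, temperature\), \d+ degrees\)",
        r"increase\(air conditioner, \d+ degrees\)",
        r"increase\(-, \d+ degrees\)",
    ],
    "AirConditionerDecrTemp": [
        r"decrease\(dest_attr\(location\(air conditioner, room\), temperature\), \d+ degrees\)",
        r"decrease\(dest_attr\(air conditioner, temperature\), \d+ degrees\)",
        r"decrease\(dest_attr\(-, temperature\), \d+ degrees\)",
        r"decrease\(air conditioner, \d+ degrees\)",
        r"decrease\(-, \d+ degrees\)",
    ],
}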
Step 2, acquire from the preset intention set the target intention expression matching the intention expression generated in step two. This step includes:
Step 2.1: normalize the intention expression generated in step two according to a preset dictionary, whose content is exemplified as follows:
increase: raise|heighten;
decrease: lower|turn down;
room: living room|kitchen.
Then the intention expression "raise(dest_attr(location(air conditioner, living room), temperature), 2 degrees)" can be normalized as "increase(dest_attr(location(air conditioner, room), temperature), 2 degrees)".
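A sketch of this dictionary-driven normalization with plain string replacement, assuming each entry maps a standard word to its nonstandard variants:

normalization_dict = {
    "increase": ["raise", "heighten"],
    "decrease": ["lower", "turn down"],
    "room": ["living room", "kitchen"],
}

def normalize(expression):
    """Replace every nonstandard word with its standard counterpart."""
    for standard, variants in normalization_dict.items():
        for variant in variants:
            expression = expression.replace(variant, standard)
    return expression

print(normalize("raise(dest_attr(location(air conditioner, living room), temperature), 2 degrees)"))
# -> increase(dest_attr(location(air conditioner, room), temperature), 2 degrees)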
Step 2.2: traverse each intention expression in the intention set and match it, element by element, against the normalized expression. Using the intents mapping sketched above, the matching logic of the original pseudocode can be written as follows:
import re

# intents: the intent semantic set, mapping intent code -> formula patterns
# query_formula: the formalized representation of the user input
def intent_infer(query_formula):
    intent_results = []  # hit intentions
    for intent, formulas in intents.items():
        for formula in formulas:
            if re.fullmatch(formula, query_formula):
                intent_results.append(intent)
    return intent_results
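For example, with the intents mapping sketched after step 1, the normalized expression from step 2.1 hits the temperature-increase intent:

query = "increase(dest_attr(location(air conditioner, room), temperature), 2 degrees)"
print(intent_infer(query))  # -> ['AirConditionerIncrTemp']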
With this embodiment, the user's intention can be recognized accurately; compared with the existing classification-model approach to intention recognition, the method has lower maintenance cost and higher recognition efficiency.
From the description of the above embodiments, it is clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. On this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied as a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
FIG. 4 is a structural block diagram of an intention recognition device for control instructions according to an embodiment of the present application; as shown in FIG. 4, the device includes:
an obtaining module 42, configured to obtain a control instruction received by the intelligent device;
a first determining module 44, configured to determine entity data corresponding to the control instruction, and generate an intent expression of the control instruction according to the entity data;
a second determining module 46, configured to determine a target intent expression consistent with the intent expression from a preset intent set, and determine a control intent corresponding to the target intent expression as an intent recognition result of the control instruction.
With this device, the control instruction received by the intelligent device is acquired; the entity data corresponding to the control instruction is determined and the intention expression is generated from it; and the target intention expression consistent with the intention expression is determined from the preset intention set, with its corresponding control intention taken as the intention recognition result. This solves the problem in the related art of how to improve the accuracy of user intention recognition results and thereby improves that accuracy.
Further, the first determining module 44 is further configured to: performing text recognition on the control instruction to obtain recognized text data; determining the entity data from the text data; and generating an intention expression of the control instruction according to a binary tree preset by the target object and the entity data.
In an exemplary embodiment, the first determining module 44 is further configured to: input the text data into a word segmentation model; segment the text data with a first word-segmentation sub-model of the word segmentation model to obtain a segmentation result; tag the parts of speech of the segmented words in the result with a second word-segmentation sub-model to obtain tagged words, where each tagged word carries a word label representing its word category; and determine, as the entity data, the target words whose word label is an entity word.
The word segmentation model may be, for example, a word segmentation tool such as the LTP (Language Technology Platform) tool, the jieba tool, or the pkuseg tool. In practice, LTP segmentation can be performed through the pyltp module of Python, jieba segmentation through the jieba module, and pkuseg segmentation through the pkuseg module, although the implementation is not limited to these.
Optionally, in this embodiment, the learning rates of the first sub-model and the second sub-model may be set separately; for example, the learning rate of the first sub-model may be set greater than that of the second sub-model, or vice versa.
In an exemplary embodiment, the first determining module 44 is further configured to: traverse the binary tree to obtain its tree nodes and the node expression corresponding to each tree node, where a tree node represents the category of an entity in the entity data; determine, from the tree nodes, target tree nodes whose category is consistent with the entity category of the entity data, and obtain the corresponding target node expressions; and order the target node expressions in the traversal order to generate the intention expression of the control instruction.
Note that the node expression corresponding to a tree node may be set, for example, according to the entity category device to which a device name of the intelligent device belongs, the entity category action to which an operation of the intelligent device belongs, or the entity category attr to which an attribute of the intelligent device belongs, although the categories are not limited to these.
In an exemplary embodiment, the second determining module 46 is further configured to: parse the intention expression to obtain its nonstandard words; obtain the standard words corresponding to those nonstandard words from a preset dictionary; replace the nonstandard words with the standard words to obtain a normalized intention expression; and determine, from the preset intention set, a target intention expression consistent with the normalized intention expression.
In an exemplary embodiment, the second determining module 46 is further configured to: obtain the expression similarity between the target intention expression and a first preset intention expression; when the similarity is greater than a first preset threshold, obtain the control intention preset by the target object for the first preset intention expression; and determine that preset control intention as the control intention corresponding to the target intention expression.
Here, the expression similarity may be understood, for example, as the similarity of the word structures of two expressions.
In an exemplary embodiment, the second determining module 46 is further configured to: determine a second preset intention expression within the preset intention set, and compute the difference between a first length of the intention expression and a second length of the second preset intention expression; when the difference is greater than a second preset threshold, determine the real word that appears first in the intention expression and the real word that appears first in the second preset intention expression; and, when the parts of speech of the two real words are consistent, determine the second preset intention expression as the target intention expression.
In an alternative embodiment, determining a target intention expression consistent with the intention expression from the preset intention set may be implemented with threads: a search thread for performing a full search of the preset intention set is generated; the intention expressions in the preset intention set are grouped into several groups; and the search thread searches the groups to determine the target intention expression consistent with the intention expression.
FIG. 5 is a structural block diagram of an intention recognition device for control instructions according to an embodiment of the present application. As shown in FIG. 5, in an exemplary embodiment, the device further includes, in addition to the obtaining module 42, the first determining module 44, and the second determining module 46, a storage module 52, configured to, after the control intention corresponding to the target intention expression is determined as the intention recognition result of the control instruction: establish and store a correspondence between the control instruction and the intention recognition result; and, when the intelligent device receives the control instruction again, determine the intention recognition result directly from the stored correspondence.
Embodiments of the present application also provide a storage medium including a stored program, wherein the program performs the method of any one of the above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store program code for performing the steps of:
s1, acquiring a control instruction received by intelligent equipment;
s2, determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction according to the entity data;
s3, determining a target intention expression consistent with the intention expression from a preset intention set, and determining a control intention corresponding to the target intention expression as an intention recognition result of the control instruction.
Embodiments of the present application also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring a control instruction received by intelligent equipment;
s2, determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction according to the entity data;
s3, determining a target intention expression consistent with the intention expression from a preset intention set, and determining a control intention corresponding to the target intention expression as an intention recognition result of the control instruction.
Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
Alternatively, specific examples in this embodiment may refer to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
It will be appreciated by those skilled in the art that the modules or steps of the present application described above may be implemented with a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented with program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices. In some cases, the steps shown or described may be performed in an order different from the one given here, or the modules or steps may be fabricated as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the scope of protection of the present application.

Claims (10)

1. An intention recognition method of a control instruction, comprising:
acquiring a control instruction received by intelligent equipment;
determining entity data corresponding to the control instruction, and generating an intention expression of the control instruction according to the entity data;
determining a target intention expression consistent with the intention expression from a preset intention set, and determining a control intention corresponding to the target intention expression as an intention recognition result of the control instruction.
2. The method for recognizing intention of a control instruction according to claim 1, wherein determining entity data corresponding to the control instruction and generating an intention expression of the control instruction from the entity data, comprises:
performing text recognition on the control instruction to obtain recognized text data;
determining the entity data from the text data;
and generating an intention expression of the control instruction according to a binary tree preset by the target object and the entity data.
3. The method of claim 2, wherein determining the entity data from the text data comprises:
inputting the text data into a word segmentation model;
performing word segmentation on the text data by using a first word-segmentation sub-model in the word segmentation model to obtain a segmentation result;
tagging parts of speech of the segmented words in the segmentation result by using a second word-segmentation sub-model in the word segmentation model to obtain tagged segmented words, wherein each of the tagged segmented words corresponds to a word label, and the word label represents the word category of that segmented word;
and determining a target segmented word among the segmented words as the entity data, wherein the word label of the target segmented word is an entity word.
4. The method for recognizing intention of a control instruction according to claim 2, wherein generating an intention expression of the control instruction from a binary tree preset for a target object and the entity data, comprises:
traversing the binary tree to obtain tree nodes of the binary tree and node expressions corresponding to the tree nodes; wherein the tree node represents a category corresponding to an entity of the entity data;
determining a target tree node from a plurality of tree nodes, and acquiring a target node expression corresponding to the target tree node, wherein the target tree node is consistent with the entity type of the entity data;
and sequencing the target node expressions according to the traversing sequence of traversing the binary tree to generate the intention expression of the control instruction.
5. The method of claim 1, wherein determining a target intent expression consistent with the intent expression from a preset intent set comprises:
parsing the intention expression to obtain nonstandard words in the intention expression;
obtaining standard words corresponding to the non-standard words from a preset dictionary;
replacing the nonstandard words of the intent expression with the standard words to obtain a standardized intent expression;
determining a target intention expression consistent with the normalized intention expression from a preset intention set.
6. The intention recognition method of a control instruction according to claim 1, characterized in that before determining a control intention corresponding to the target intention expression as a result of intention recognition of the control instruction, the method further comprises:
acquiring the expression similarity between the target intention expression and a first preset intention expression;
under the condition that the similarity of the expressions is larger than a first preset threshold value, acquiring a control intention preset by a target object for the first preset intention expression;
and determining the preset control intention as the control intention corresponding to the target intention expression.
7. The intention recognition method of a control instruction according to claim 1, characterized in that after determining a control intention corresponding to the target intention expression as a result of intention recognition of the control instruction, the method further comprises:
establishing a corresponding relation between the control instruction and the intention recognition result, and storing the corresponding relation;
and under the condition that the intelligent equipment receives the control instruction again, determining an intention recognition result corresponding to the control instruction directly according to the corresponding relation.
8. The method of claim 1, wherein determining a target intent expression consistent with the intent expression from a preset intent set comprises:
determining a second preset intent expression within the preset intent set, and determining a difference between a first length of the intent expression and a second length of the second preset intent expression;
under the condition that the difference value is larger than a second preset threshold value, determining a first real word appearing first in the intention expression and a second real word appearing first in the second preset intention expression;
and determining the second preset intention expression as the target intention expression under the condition that the part of speech of the first real word is consistent with the part of speech of the second real word.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program when run performs the method of any of the preceding claims 1 to 8.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 8 by means of the computer program.
CN202310083554.1A 2023-01-30 2023-01-30 Intent recognition method of control instruction, storage medium and electronic device Pending CN116090461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310083554.1A CN116090461A (en) 2023-01-30 2023-01-30 Intent recognition method of control instruction, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310083554.1A CN116090461A (en) 2023-01-30 2023-01-30 Intent recognition method of control instruction, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116090461A true CN116090461A (en) 2023-05-09

Family

ID=86200617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310083554.1A Pending CN116090461A (en) 2023-01-30 2023-01-30 Intent recognition method of control instruction, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116090461A (en)

Similar Documents

Publication Publication Date Title
CN110286601A (en) Control the method, apparatus, control equipment and storage medium of smart home device
US20220253710A1 (en) Human-Machine Multi-Turn Conversation Method and System for Human-Machine Interaction, and Intelligent Apparatus
CN111128156A (en) Intelligent household equipment voice control method and device based on model training
WO2023168838A1 (en) Sentence text recognition method and apparatus, and storage medium and electronic apparatus
CN115356939A (en) Control command transmission method, control device, storage medium, and electronic device
CN110895936B (en) Voice processing method and device based on household appliance
CN108877774B (en) Data acquisition device, data analysis platform, system and method
CN113990324A (en) Voice intelligent home control system
CN110866094A (en) Instruction recognition method, instruction recognition device, storage medium, and electronic device
CN116090461A (en) Intent recognition method of control instruction, storage medium and electronic device
CN114915514B (en) Method and device for processing intention, storage medium and electronic device
CN108173722A (en) A kind of smart home device automatic operation method
CN116245596A (en) Article recommendation method and device, electronic equipment and storage medium
CN114925158A (en) Sentence text intention recognition method and device, storage medium and electronic device
CN110970019A (en) Control method and device of intelligent home system
CN116224815A (en) Control instruction generation method, storage medium and electronic device
CN117706954B (en) Method and device for generating scene, storage medium and electronic device
CN114911535B (en) Application program component configuration method, storage medium and electronic device
CN117010378A (en) Semantic conversion method and device, storage medium and electronic device
CN117892171A (en) Method and device for generating scene rule information based on GPT model
CN114124597B (en) Control method, equipment and system of Internet of things equipment
CN116072113A (en) Method and device for determining control instruction, storage medium and electronic device
CN117059083A (en) Equipment control method, storage medium and electronic device
CN116108861A (en) Voice data processing method and device, storage medium and electronic device
CN111128135A (en) Voice communication method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination