CN116187335A - Intention recognition method and device, storage medium and electronic equipment

Info

Publication number
CN116187335A
Authority
CN
China
Prior art keywords
intention
text corpus
target
keyword
intention recognition
Prior art date
Legal status
Pending
Application number
CN202211657527.2A
Other languages
Chinese (zh)
Inventor
严海锐
傅晓明
Current Assignee
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd
Priority to CN202211657527.2A
Publication of CN116187335A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an intention recognition method, an intention recognition device, a storage medium and electronic equipment, wherein the method comprises the following steps: acquiring a text corpus to be subjected to intention recognition; matching the text corpus with rule templates respectively to obtain a matching result; in the case that the matching result is that a target rule template corresponding to the text corpus is matched, obtaining the intention recognition result of the text corpus based on recognition with the target rule template; and in the case that the matching result is that no corresponding target rule template is matched, performing intention recognition on the text corpus based on a trained intention recognition model to obtain the intention recognition result. The method makes the final intention recognition result more accurate.

Description

Intention recognition method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of artificial intelligence and natural language understanding technologies, and in particular, to an intent recognition method, apparatus, storage medium, and electronic device.
Background
With the continuous development of intelligent terminals, voice interaction systems, intelligent customer service systems and other systems have been commonly applied to various intelligent terminals. In a voice interaction system or an intelligent customer service system, whether the intention of a user can be accurately understood is a key of the whole interaction/question-answering process. Thus, various intention recognition methods have been developed.
However, current intention recognition methods usually apply a deep learning model directly, or perform semantic analysis on a sentence with a time-series model; such single-model intention recognition suffers from insufficient accuracy of the recognition results.
Disclosure of Invention
In view of the above, the present invention provides an intent recognition method, apparatus, storage medium and electronic device, for solving the problem of insufficient accuracy of intent recognition in the prior art. To achieve one or a part or all of the above or other objects, the present invention provides an intent recognition method comprising:
acquiring text corpus to be subjected to intention recognition;
matching the text corpus with the rule templates respectively to obtain a matching result;
under the condition that the matching result is that a target rule template corresponding to the corpus text is matched, obtaining an intention recognition result in the text corpus based on the target rule template recognition;
and under the condition that the matching result is that the corresponding target rule template is not matched, carrying out intention recognition on the text corpus based on the trained intention recognition model to obtain an intention recognition result.
In order to solve the above-mentioned problems, the present invention provides an intention recognition apparatus comprising:
The acquisition module is used for acquiring text corpus to be subjected to intention recognition;
the matching module is used for respectively matching the text corpus with the rule templates to obtain a matching result;
the first recognition module is used for obtaining an intention recognition result in the text corpus based on the target rule template recognition under the condition that the matching result is the target rule template corresponding to the corpus text;
and the second recognition module is used for carrying out intention recognition on the text corpus based on the trained intention recognition model under the condition that the matching result is not matched with the corresponding target rule template, so as to obtain an intention recognition result.
Preferably, each rule template comprises a plurality of word slots; the matching module is used for:
acquiring a plurality of keywords based on the text corpus, and respectively matching the keywords with word slots in each rule template based on each keyword;
under the condition that each keyword is matched with a corresponding word slot in the same rule template, determining that the keyword is matched with the target rule template;
and under the condition that the keywords are not matched with the corresponding word slots in any rule template at the same time, determining that the keywords are not matched with the target rule template.
Preferably, each rule template includes an intention label corresponding to each word slot group; the first obtaining module is used for:
and obtaining an intention recognition result based on the keywords corresponding to the target word slots in the target rule template and the target intention labels corresponding to the target word slots.
Preferably, the second obtaining module is configured to: acquiring each keyword in the text corpus and an intention label corresponding to each keyword;
calculating and obtaining the probability of each keyword corresponding to each intention label by using the intention recognition model based on each keyword and the intention label corresponding to each keyword;
a first target intent tag is determined based on the probability that each of the keywords corresponds to each of the intent tags to obtain an intent recognition result.
Preferably, the second obtaining module is further configured to: after determining a first target intention label based on the probability that each keyword corresponds to each intention label, splitting the text corpus based on the connection relation words in the text corpus to obtain a plurality of sub-text corpora;
obtaining keywords of each sub-text corpus and intention labels corresponding to the keywords;
Based on the keywords of each sub-text corpus and the intention labels corresponding to the keywords, respectively carrying out intention recognition on each sub-text corpus based on a trained intention recognition model to obtain second target intention labels corresponding to each sub-text corpus;
and obtaining the intention recognition result based on each first target intention label and each second target intention label.
Preferably, the intention recognition device further comprises an analysis module, wherein the analysis module is used for performing dependency relationship analysis on each keyword in the text corpus after obtaining the target intention label to obtain target corresponding relations containing action keywords and control equipment keywords;
and obtaining the intention recognition result based on the target corresponding relations and the target intention labels corresponding to the action keywords in the target corresponding relations.
Preferably, the intention recognition device further comprises a training module for: before the text corpus to be subjected to intention recognition is obtained, training is performed to obtain the intention recognition model, and the training module is specifically used for:
acquiring a plurality of sample text corpus and sample intention labels corresponding to the sample text corpus; calculating the probability that each keyword corresponds to each keyword intention label in the sample text corpus by using an initial intention recognition model based on preset keywords and keyword intention labels corresponding to each preset keyword; determining a current intent recognition result based on the probability; and adjusting model parameters in the initial intention recognition model based on the difference between the current intention recognition result and the sample intention label until training is stopped when training conditions are met, so as to obtain a trained intention recognition model.
In order to solve the above-described problems, the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the intention recognition method of any one of the above.
In order to solve the above problems, the present invention provides an electronic device, at least including a memory, and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the above intent recognition methods when executing the computer program on the memory.
To solve the above problems, the present invention provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium; the processor of the computer device reads the computer instructions from the computer readable storage medium, which when executed implements the steps in the intent recognition method of the embodiments of the invention.
According to the intention recognition method, the device, the storage medium and the electronic equipment, the intention recognition is carried out on the text corpus by adopting a mode of combining the rule template with the intention recognition model, so that a final intention recognition result is more accurate, and the problem of inaccurate recognition result caused by single intention recognition by adopting the model in the prior art is solved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a schematic illustration of an application environment in one embodiment;
FIG. 2 is a block diagram of the hardware architecture of a gateway device in one embodiment;
FIG. 3 is a flow chart of a method of intent recognition in one embodiment;
FIG. 4(a) is a schematic diagram of the analysis of the text corpus "turn on the light and the air conditioner" in one embodiment;
FIG. 4(b) is a schematic diagram of the analysis of the text corpus "help me turn on the light and turn off the air conditioner" in one embodiment;
FIG. 5 is a flowchart of an intent recognition method in accordance with another embodiment of the present invention;
FIG. 6 is a flowchart of an intent recognition method in accordance with another embodiment of the present invention;
fig. 7 is a block diagram illustrating an intention recognition apparatus according to another embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The intention recognition method of the invention can be applied to the application environment shown in fig. 1. The implementation environment is an internet of things platform, which includes a terminal 100, a gateway 200, and an internet of things device 300 deployed in the gateway 200. The terminal 100 may be a desktop computer, a notebook computer, a tablet computer, a smart phone, a smart speaker, an intelligent control panel, or other electronic devices capable of implementing network connection, which is not limited herein. A network connection is established between the terminal 100 and the gateway 200, and in one embodiment, the terminal 100 and the gateway 200 establish a network connection through 2G/3G/4G/5G, WIFI, etc. Through the network connection, the user interacts with the gateway 200, and further, the user controls the internet of things equipment accessing the gateway 200 to execute corresponding actions by means of the terminal 100. The internet of things device 300 is connected to the gateway 200 in the internet of things platform, and is communicated with the gateway 200 through a communication module configured by the internet of things device 300, and is further controlled by the gateway 200.
In one embodiment, the internet of things device 300 may be deployed in the gateway 200 by accessing the gateway 200 through a local area network. The process of the internet of things device 300 accessing the gateway 200 through the local area network includes that the gateway 200 first establishes a local area network, and the internet of things device 300 accesses the local area network established by the gateway 200 by connecting the gateway 200. The local area network comprises ZIGBEE or Bluetooth. The internet of things device 300 may be an intelligent home device such as an intelligent printer, an intelligent fax machine, an intelligent video camera, an intelligent air conditioner, an intelligent refrigerator, an intelligent sound box, an intelligent television, an intelligent lamp, or a human body sensor, a door and window sensor, a temperature and humidity sensor, a water immersion sensor, a natural gas alarm, a smoke alarm, a wall switch, a wall socket, a wireless wall-mounted switch of a wireless switch, a magic cube controller, a curtain motor, or the like configured with a communication module (e.g., a ZIGBEE module, a Wi-Fi module, a bluetooth communication module, or the like), without limitation.
The terminal 100 may receive voice information sent by a user, and perform voice recognition on the voice information to obtain text corpus to be subjected to intention recognition corresponding to the voice information. The terminal 100 further matches the text corpus with the rule templates respectively to obtain a matching result; under the condition that the matching result is that the target rule template corresponding to the corpus text is matched, obtaining an intention recognition result in the text corpus based on target rule template recognition; and under the condition that the matching result is that the corresponding target rule template is not matched, carrying out intention recognition on the text corpus based on the trained intention recognition model to obtain one or more intention recognition results. The terminal 100 may further determine one or more corresponding voice commands according to the one or more intention recognition results, and if the voice command is a control command for one or more internet of things devices 300, the terminal 100 sends the control command to the gateway 200, and the gateway 200 controls the corresponding one or more internet of things devices 300 to perform a corresponding action based on the control command.
In this embodiment, by respectively generating one or more corresponding control instructions according to the identified one or more intentions and respectively controlling the corresponding one or more controlled devices, such as the internet of things device, a plurality of intentions in a sentence can be accurately identified, and further, the corresponding plurality of controlled devices can be accurately controlled, so as to realize semantic understanding processing of multiple operations of multiple devices.
Fig. 2 is a block diagram of a hardware architecture of a gateway device according to an exemplary embodiment. This gateway device is suitable for the implementation environment shown in fig. 1. It should be noted that this gateway is only an example adapted to the present invention and should not be construed as providing any limitation to the scope of use of the present invention. Nor should the gateway be construed as necessarily relying on or necessarily having one or more of the components of the exemplary gateway 200 shown in fig. 2.
The hardware structure of the gateway 200 may vary widely depending on the configuration or performance, as shown in the figure, the gateway 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU, Central Processing Unit) 270. The interface 230 includes at least one wired or wireless network interface 231, at least one serial-parallel conversion interface 233, at least one input-output interface 235, and at least one USB interface 237, etc. for communicating with external devices. The memory 250 may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like as a carrier for storing resources. The resources stored thereon include an operating system 251, application programs 253, data 255, or the like.
In one embodiment, the present invention provides an intent recognition method, which is described by taking application of the method to an electronic device, where the electronic device may specifically be a terminal, a gateway, an intelligent speaker, an intelligent control panel, etc. in fig. 1. As shown in fig. 3, the method in this embodiment includes the steps of:
step S101, obtaining text corpus to be subjected to intention recognition.
In this step, intention recognition means determining the purpose the user wants to achieve; for example, the electronic device judges from the user's text corpus whether the user wants to obtain weather information, set an alarm clock, control a device, perform delayed control, and so on. Text corpus refers specifically to a word, phrase or several sentences in text form. Specifically, the electronic device may obtain the text corpus by obtaining the user's voice information.
For example, the terminal device collects the user's voice information and then performs voice recognition and text conversion on it to obtain the corresponding text corpus. The text corpus in this step may be, for example: "help me turn on the air conditioner", "turn on the television and turn it off after one hour", "help me turn on the lights and the air conditioner in the living room, turn off the television, and turn them all off after one hour", and so on.
Step S102, matching the text corpus with the rule templates respectively to obtain a matching result.
The rule template in this step specifically refers to a template file containing a plurality of word slots and intention labels. Wherein each word slot is used to indicate the location of a word in the rule template. Rule templates may be produced empirically in advance through manual/expert summaries.
When the electronic equipment is matched with the rule template, specifically, keywords can be extracted from the text corpus first, then each keyword is matched with each word slot in the rule template, and accordingly a matching result is obtained.
Step S103, in the case that the matching result is that the target rule template corresponding to the corpus text is matched, the intention recognition result in the text corpus is obtained based on the target rule template recognition.
In this step, after the electronic device matches the target rule template corresponding to the text corpus, the intention recognition result can be directly obtained according to the intention label in the rule template. Specifically, the mapping relation between each word slot group and the intention label is pre-established in each rule template. The intention labels refer to intention configured for the rule templates in advance, namely pre-configured target information which a user wants to achieve, such as label information corresponding to 'equipment control', 'delay control', and the like.
A word slot group is formed by a plurality of word slots, and different intention labels can be configured for different word slot groups; the intention labels can be, for example, device control, delay control, and the like. Therefore, after the target rule template is matched, each word slot group in the rule template can be obtained, and then the intention label corresponding to each word slot group is obtained, so as to obtain the intention recognition result of the text corpus.
Step S104, if the matching result is that the target rule template corresponding to the corpus text is not matched, carrying out intention recognition on the text corpus based on the trained intention recognition model to obtain an intention recognition result.
It will be appreciated that the trained intent recognition model is a machine learning model with intent recognition capabilities.
In this step, when the matching result does not include a target rule template, it indicates that the keywords of the text corpus do not all appear in the same rule template at the same time; therefore, the intention recognition result cannot be obtained directly from a rule template, and the intention in the text corpus is instead recognized using the pre-trained intention recognition model. Specifically, the probability that each keyword corresponds to each intention label can be calculated with the intention recognition model; then, based on these probabilities, the intention labels whose probability is greater than a predetermined probability value, and/or the several intention labels with the highest probability, are determined as the target intention labels, thereby obtaining the intention recognition result.
According to the intention recognition method, intention recognition is performed on the text corpus by combining rule templates with an intention recognition model, so that the final intention recognition result is more accurate, and the problem of inaccurate recognition caused by using a single model in the prior art is solved. Moreover, because rule templates are used for intention recognition and a plurality of intention labels can be configured in one rule template at the same time, the intention labels, and thus the intention recognition result, can be obtained quickly and accurately based on the matched target rule template.
In still another embodiment of the present invention, when the terminal device matches the text corpus with each predetermined rule template, a plurality of keywords may be obtained based on the text corpus, and the word slots in each rule template may be respectively matched based on each keyword and the word type to which each keyword belongs; under the condition that each keyword is matched with a corresponding word slot in the same rule template, determining that the keyword is matched with the target rule template; and under the condition that the keywords are not matched with the corresponding word slots in any rule template at the same time, determining that the keywords are not matched with the target rule template.
Wherein the word slots are used to indicate the location of words in the rule templates. That is, word types may be pre-configured for each word slot in the rule template. The word type may refer to part-of-speech information of a word corresponding to each word slot, and may be, for example, nouns, verbs, adjectives, auxiliary words, and the like.
Therefore, after extracting a plurality of keywords from the text corpus, the word type of each keyword can be compared with the word type of each word slot in the rule template; when the word type of a keyword is consistent with the word type of a certain word slot, the keyword is determined to be successfully matched with that word slot, thereby obtaining the matching result. In the implementation process, a plurality of rule templates can be set in advance based on experience, and each rule template is configured with word slots and intention labels, for example the following two rule templates:
rule template one: [ D: action ] [ D: device1] [ D: time ];
rule template II: [ D: action ] [ D: device1] [ D: and ] [ D: action ] [ D: device2].
Wherein D represents a word, i.e., a keyword; action, device, respectively, the word type, for example, action indicates that the word type is an action/verb, and device indicates that the word type is a device/noun. Therefore, when keywords of "open", "bath heater", "close" and "television" are obtained from the text corpus, it is determined that the keywords of "open" are matched with the word slot of "[ D: action ]," the keywords of "bath heater" are matched with the word slot of "[ D: device1]," the keywords of "close" are matched with the word slot of "[ D: action ]," the keywords of "television" are matched with the word slot of "[ D: device2]," and therefore each keyword is determined to be matched with the word slot in one rule template at the same time, and accordingly the rule template can be matched with the rule template.
According to the embodiment, the keywords are respectively matched with the word slots in the rule templates according to the word types, so that the matching result is more accurate, and a foundation is laid for accurately obtaining the intention recognition result based on the matching result.
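As a non-authoritative illustration of this matching logic, the following Python sketch matches extracted keywords against word slots by word type; the data structures and helper names are assumptions introduced for the sketch, not the disclosure's implementation.

```python
# Minimal sketch of word-slot matching by word type; the slot/keyword
# structures are illustrative assumptions, not the patent's data model.
from dataclasses import dataclass

@dataclass
class WordSlot:
    word_type: str          # e.g. "action", "device", "time"

@dataclass
class RuleTemplate:
    name: str
    slots: list             # ordered list of WordSlot

def match_template(keywords, template):
    """Return True if every keyword matches a slot of the same word type
    in this template (a simplified reading of the matching step)."""
    free_slots = list(template.slots)
    for word, word_type in keywords:            # keyword = (text, word type)
        slot = next((s for s in free_slots if s.word_type == word_type), None)
        if slot is None:
            return False                        # keyword has no matching slot
        free_slots.remove(slot)                 # each slot is consumed once
    return True

def find_target_template(keywords, templates):
    """Return the first template that all keywords match, or None."""
    return next((t for t in templates if match_template(keywords, t)), None)

# Example: "turn on the bath heater and turn off the television"
template_two = RuleTemplate("template-2", [WordSlot("action"), WordSlot("device"),
                                           WordSlot("action"), WordSlot("device")])
keywords = [("turn on", "action"), ("bath heater", "device"),
            ("turn off", "action"), ("television", "device")]
print(find_target_template(keywords, [template_two]).name)   # -> template-2
```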
In yet another embodiment of the present invention, word-slot groups may be pre-built from the word slots in each rule template, and a corresponding intention label may then be configured for each word-slot group. Therefore, after the target rule template is matched, the intention recognition result can be obtained based on the keywords corresponding to the target word slots in the target rule template and the target intention labels corresponding to the target word-slot groups. For example, suppose the target rule template contains 5 word slots a, b, c, d and e, where the word-slot group (a, b) corresponds to intention label 1, the word-slot group (c, d) corresponds to intention label 2, and the word-slot group (e) corresponds to intention label 3.
When it is determined that keyword A matches word slot a, keyword B matches word slot b, and a third keyword matches word slot e, the target word-slot groups are determined to be (a, b) and (e), so intention label 1 and intention label 3 corresponding to these two groups are obtained as the target intention labels, and the intention recognition result is thereby obtained.
For another example, a rule template may be:
[D:action][D:device1][D:and][D:action][D:device2][W:0-1]。
Here, the intention label "device_control (first device control)" is configured for the word-slot group consisting of word slots 0-1 (i.e., the two word slots [D:action][D:device1]), and the intention label "device_control (second device control)" is configured for the word-slot group consisting of word slots 3-4 (i.e., the two word slots [D:action][D:device2]). In this way, an intention is configured for each word-slot group, so that when a target rule template is later matched, the intention recognition result can be obtained quickly and accurately based on the intentions configured in that template.
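A minimal sketch of how intention labels could be resolved from the word-slot groups that were matched, using the group indices from the example above (the data structures are assumptions introduced for the sketch):

```python
# Sketch: resolving intention labels from the word-slot groups that were hit.
# Indices and label names mirror the [D:action][D:device1]... example above.
slot_groups = {
    (0, 1): "device_control (first device control)",   # [D:action][D:device1]
    (3, 4): "device_control (second device control)",  # [D:action][D:device2]
}

def resolve_intents(matched_slot_indices, slot_groups):
    """Return the intent labels of every group whose slots were all matched."""
    matched = set(matched_slot_indices)
    return [label for group, label in slot_groups.items()
            if set(group) <= matched]

# Keywords matched slots 0, 1, 3 and 4, so both device-control intents fire.
print(resolve_intents([0, 1, 3, 4], slot_groups))
```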
In another embodiment of the present invention, when performing intent recognition on the text corpus based on a trained intent recognition model to obtain an intent recognition result, each keyword in the text corpus and an intent label corresponding to each keyword may be obtained first; calculating and obtaining the probability of each keyword corresponding to each intention label by using the intention recognition model based on each keyword and the intention label corresponding to each keyword; a first target intent tag is determined based on the probability that each of the keywords corresponds to each of the intent tags to obtain an intent recognition result.
In this embodiment, a keyword may be, for example, any word other than an adjective, relational word, auxiliary word, preposition, conjunction, interjection, or the like; the part of speech of a keyword may be, for example, a verb, noun, pronoun, or adverb.
The intent recognition model in this implementation may include a bidirectional long short-term memory recurrent neural network module (Bi-directional Long Short-Term Memory, BLSTM), composed of a forward and a backward long short-term memory recurrent neural network, an intent attention (Intent Attention) module, and a slot gate (Slot Gate) module. On the input side, the word sequence carries not only the character embedding of each word but also a pre-label embedding. Here, char_embedding is the feature vector of each character (initialized directly with the 768-dimensional character embeddings of BERT), and pre_label_embedding is the label embedding of the intention labels (each label corresponds to one embedding vector, also 768-dimensional, randomly initialized).
That is, word sequence = char_embedding of each word + pre_label_embedding of its intention label. On the labeling and model output side, the intent output y_I is a vector whose length equals the number of intention labels. For example, y_I = [0.01, 0.5, 0.4, 0.09], assuming there are only 4 intention labels in total (weather, device control, delay control, alarm clock); the intention labels whose values exceed a certain threshold are taken as the final intention recognition result. The final intent output is then "device control-delay control" (the intention recognition result contains two intents, with "-" as the separator).
In this embodiment, after obtaining keywords, the probability that each keyword corresponds to an individual intention label is calculated based on the intention recognition model, then the intention labels with the probability larger than a predetermined probability are determined to be target intention labels based on the probability of each intention label, and/or several intention labels with the highest probability are determined to be target intention labels, so that the intention recognition result can be obtained more accurately.
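The passage above can be read as describing a BLSTM encoder with an intent-attention head whose output vector y_I is thresholded for multi-intent prediction. The following is a rough PyTorch sketch under that reading; all dimensions, layer names and the threshold are assumptions, and the slot-gate module is omitted for brevity.

```python
# Rough PyTorch sketch of a BLSTM + attention intent classifier with
# multi-intent thresholding; hyperparameters and layer names are assumptions.
import torch
import torch.nn as nn

class IntentRecognizer(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, num_intents=4):
        super().__init__()
        # Input is char_embedding + pre_label_embedding (both 768-d in the
        # text); they are summed here for simplicity.
        self.blstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # intent attention weights
        self.intent_head = nn.Linear(2 * hidden, num_intents)

    def forward(self, char_emb, pre_label_emb):
        x = char_emb + pre_label_emb                   # (batch, seq_len, 768)
        h, _ = self.blstm(x)                           # (batch, seq_len, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)         # attention over positions
        ctx = (a * h).sum(dim=1)                       # (batch, 2*hidden)
        return torch.sigmoid(self.intent_head(ctx))    # y_I: per-intent probability

# Thresholding as in the text: labels above a threshold form the result.
labels = ["weather", "device_control", "delay_control", "alarm_clock"]
model = IntentRecognizer()
y_i = model(torch.randn(1, 12, 768), torch.randn(1, 12, 768))[0]
result = "-".join(l for l, p in zip(labels, y_i.tolist()) if p > 0.3)
print(result or "no intent above threshold")
```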
In another embodiment of the present invention, in order to improve accuracy of the intent recognition result, after determining the first target intent label based on the probability that each keyword corresponds to each intent label, the trained intent recognition model may further be used to perform secondary intent recognition on the text corpus, where a specific secondary intent recognition process is as follows: splitting the text corpus based on the connection relation words in the text corpus to obtain a plurality of sub-text corpora; obtaining keywords of each sub-text corpus and intention labels corresponding to the keywords; based on the keywords of each sub-text corpus and the intention labels corresponding to the keywords, respectively carrying out intention recognition on each sub-text corpus based on a trained intention recognition model to obtain second target intention labels corresponding to each sub-text corpus; and obtaining the intention recognition result based on each first target intention label and each second target intention label.
Specifically, suppose for example the text corpus is: "help me turn on the light and turn off the television". The electronic device can split this text corpus according to the connection relation word "and" to obtain the first sub-text corpus "help me turn on the light" and the second sub-text corpus "turn off the television". The electronic device can then acquire the keywords "turn on" and "light" in the first sub-text corpus, and the keywords "turn off" and "television" in the second sub-text corpus.
Then, based on the keywords of the first sub-text corpus and the intention labels such as "device control", "delay control" and "alarm clock", the electronic device uses the trained intention recognition model to calculate the probabilities that "turn on" and "light" correspond to "device control", "delay control" and "alarm clock" respectively, so as to determine the second target intention label corresponding to the first sub-text corpus.
Similarly, based on the keywords "turn off" and "television" of the second sub-text corpus and the intention labels such as "device control", "delay control" and "alarm clock", the electronic device uses the trained intention recognition model to calculate the probabilities that "turn off" and "television" correspond to each of these labels, so as to determine the second target intention label corresponding to the second sub-text corpus.
A connection relation word specifically refers to a word used to connect two clauses, for example "and", "meanwhile", "then", and the like. By splitting the sentences in the text corpus according to the connection relation words, the splitting result can be accurate, that is, each sub-text corpus can be obtained accurately. For example, if the text corpus is "help me turn on the light and then turn it off after an hour", the sentence can be split according to the connection relation word "then" to obtain two sub-text corpora: one is "help me turn on the light" and the other is "turn it off after one hour".
After the electronic equipment splits the text corpus according to the connection relation words to obtain a plurality of sub-text corpora, the trained intention recognition model is utilized to respectively carry out intention recognition on each sub-text corpus, so that an intention label corresponding to the sub-text corpus is obtained. Specifically, the principle of performing intent recognition on the sub-text corpus by using the intent recognition model is consistent with the principle of performing intent recognition on the text corpus before being split by using the intent recognition model, and is not described herein.
In this embodiment, by using the intention recognition model to perform intention recognition twice, on the text corpus before splitting and on the sub-text corpora after splitting, the intention recognition result can be made accurate and reliable.
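A minimal sketch of the splitting step, assuming a fixed list of English connective words and a placeholder recognize() function standing in for the trained model:

```python
# Sketch: split a corpus on connective words and recognize each sub-corpus.
# CONNECTIVES and recognize() are illustrative assumptions.
import re

CONNECTIVES = ["and then", "and", "then", "meanwhile"]

def split_on_connectives(text):
    pattern = r"\b(?:" + "|".join(re.escape(c) for c in CONNECTIVES) + r")\b"
    parts = re.split(pattern, text)
    return [p.strip() for p in parts if p.strip()]

def recognize(text):
    """Placeholder for the trained intent recognition model."""
    raise NotImplementedError

def double_recognition(text):
    first = recognize(text)                       # first target intent labels
    subs = split_on_connectives(text)
    second = [recognize(s) for s in subs]         # second target intent labels
    return first, second

print(split_on_connectives("help me turn on the light and turn it off after one hour"))
# -> ['help me turn on the light', 'turn it off after one hour']
```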
In still another implementation of the present invention, after obtaining the target intent label, dependency relationship analysis may be performed on each keyword in the text corpus, to obtain a target correspondence relationship including an action keyword and a control device keyword; and obtaining the intention recognition result based on the target corresponding relations and the target intention labels corresponding to the action keywords in the target corresponding relations.
In a specific implementation, dependency relation analysis can be performed on each keyword in the text corpus based on a predetermined dependency syntax analysis method, that is, the role of each keyword in the sentence is analyzed, for example whether a keyword is the subject, predicate, object or complement of the sentence, thereby determining the relations among the keywords. The predetermined dependency syntax analysis method may be the dependency parser of the Chinese language processing package HanLP (Han Language Processing). For example, if the text corpus is "help me turn on the light and the air conditioner", its dependency analysis diagram can be as shown in fig. 4(a); the dependency syntax analysis shows that "light" and "air conditioner" are the direct objects in the text corpus, so that both correspond to the "turn on" action, and the correspondence between "light" and "on" and between "air conditioner" and "on" is obtained.
If the text corpus is "help me turn on the light and turn off the air conditioner", its dependency analysis diagram is as shown in fig. 4(b); the dependency syntax analysis shows that "light" is the object of "turn on" and "air conditioner" is the object of "turn off". Thus, the correspondence between "light" and "on" and between "air conditioner" and "off", that is, the correspondence between each device and its action, is obtained. In this embodiment, the complete intention content, including both the control action and the control object, is obtained, so that the subsequent intention recognition result is comprehensive and accurate.
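As an illustrative sketch of this step, assuming the pyhanlp interface for HanLP's dependency parser (the exact relation labels depend on the HanLP model, so coordination between devices is handled heuristically here):

```python
# Sketch: derive device -> action correspondences from a dependency parse.
# Assumes pyhanlp's HanLP.parseDependency interface; the coordination
# handling below is a simplifying assumption, not the patent's procedure.
from pyhanlp import HanLP

def device_action_pairs(text, device_words):
    """Map each device keyword to the verb that governs it, following
    coordination links between devices ("light and air conditioner")."""
    heads = {}
    for word in HanLP.parseDependency(text).iterator():
        heads[word.LEMMA] = word.HEAD.LEMMA if word.HEAD is not None else None
    pairs = {}
    for device in device_words:
        head = heads.get(device)
        while head in device_words:      # coordinated device: inherit its head
            head = heads.get(head)
        if head:
            pairs[device] = head
    return pairs

# "帮我打开灯和空调" = "help me turn on the light and the air conditioner"
print(device_action_pairs("帮我打开灯和空调", {"灯", "空调"}))
# Expected under these assumptions: both devices map to the verb "打开" (turn on).
```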
In another embodiment of the present invention, the intention recognition model may be obtained through pre-training, and the specific model training process is as follows: acquiring a plurality of sample text corpora and the sample intention label corresponding to each sample text corpus; calculating, with an initial intention recognition model, the probability that each keyword in a sample text corpus corresponds to each keyword intention label, based on preset keywords and the keyword intention label corresponding to each preset keyword; determining a current intention recognition result based on the probabilities; and adjusting the model parameters of the initial intention recognition model based on the difference between the current intention recognition result and the sample intention label, repeating until the training condition is met and training stops, so as to obtain the trained intention recognition model.
In this embodiment, the sample text corpus may be acquired from a human-computer dialogue identified by historical intent, or may be acquired manually according to each context setting, etc. Wherein the sample intention label represents known destination information that the user wants to reach, i.e. a pre-labeled intention label.
Specifically, keyword intention labels corresponding to preset keywords can be marked in advance for some preset keywords in the word stock. For example, "temperature" is intended to be "weather"; "turn on the light" is intended to be "device control"; "close after one hour" pertains to "delay control" intent, etc.
For example, a sample text corpus about turning the light on and off after one hour may be configured with the sample intention recognition results "device control" and "delay control". Therefore, based on the keywords "light", "on", "off" and "one hour" in the sample text corpus and the keyword intention labels "device control", "delay control", "weather", "alarm clock" and "timed reminder" corresponding to the keywords, the probability that each keyword corresponds to each keyword intention label can be calculated with the initial intention recognition model.
For example, suppose the calculated probability of "device control" is 0.3, of "delay control" is 0.2, of "weather" is 0.4, of "alarm clock" is 0.05, and of "timed reminder" is 0.05. The probability corresponding to "weather" is the largest, so the current intention recognition result is "weather", while the sample intention label is "device control". The difference loss between the current intention recognition result and the sample intention label is therefore calculated, the model parameters of the initial intention recognition model are adjusted based on this loss, and the above steps are repeated until the training condition is met and model training stops, yielding a trained intention recognition model with more accurate intention recognition capability. It can be understood that the training condition may be that the accuracy between the current intention recognition results and the sample intention labels reaches a preset threshold, or that the number of training iterations reaches a predetermined threshold, which is not limited in this application.
In this embodiment, the intention recognition model is obtained through training, so that the intention recognition result can be obtained based on the model recognition even if the rule template is not matched later, and the problem that the intention of the user cannot be recognized when the rule template is not matched is avoided.
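Reusing the IntentRecognizer sketch shown earlier, a rough training loop consistent with the described procedure might look like the following; the dataset format, the binary cross-entropy loss and the stopping rule are assumptions, not the patent's exact training procedure.

```python
# Rough training-loop sketch for the intent recognition model.
import torch
import torch.nn as nn

def train(model, dataset, epochs=10, lr=1e-3):
    """dataset yields (char_emb, pre_label_emb, sample_label) batches, where
    sample_label is a multi-hot vector over the intention labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()                          # difference between current
    for _ in range(epochs):                           # result and sample label
        for char_emb, pre_label_emb, sample_label in dataset:
            probs = model(char_emb, pre_label_emb)    # per-intent probabilities
            loss = criterion(probs, sample_label)
            optimizer.zero_grad()
            loss.backward()                           # adjust model parameters
            optimizer.step()
        # A stopping condition (accuracy threshold or iteration limit, as the
        # text notes) would be checked here.
    return model
```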
Yet another embodiment of the present invention provides an intention recognition method, as shown in fig. 5, including the steps of:
step S201, training an initial intention recognition model based on a plurality of sample text corpus and sample intention recognition results corresponding to each sample text corpus to obtain an intention recognition model.
Step S202, constructing a plurality of rule templates; wherein each rule template comprises: a plurality of word slots and intention labels corresponding to each word slot group; each word slot is respectively configured with word type information.
In this step, a plurality of rule templates may be pre-established based on service experience data, so as to provide a guarantee for performing intent matching on the text corpus based on the rule templates.
In the implementation process of the step, a certain rule template can be specifically shown as follows:
[D:passive][D:action][D:device][D:and][D:time][D:action][D:refer][W:0-1];
0-2@device_control|||4-6@delay_control。
Where D represents a word, i.e., a keyword, and passive, action, device, time and refer each represent a word type: passive indicates that the word type is a passive word, action indicates an action/verb, device indicates a device, time indicates a time expression, and refer indicates a pronoun. [W:0-1] represents rule template 0-1, i.e., the sequence number of the rule template.
"0-2@device_control" indicates an intention label configured for the rule template, namely the intention label "device_control (device control)" configured for the word-slot group consisting of word slots 0-2 (i.e., [D:passive][D:action][D:device]).
"|||" indicates that this rule template has further intention labels.
"4-6@delay_control" indicates the intention label "delay_control (delayed device control)" configured for the word-slot group consisting of word slots 4-6 (i.e., [D:time][D:action][D:refer]). That is, the [D:time][D:action][D:refer] clause has the intent delay_control.
Step S203, obtaining text corpus to be subjected to intention recognition.
In this step, the specific terminal device may obtain the voice information of the user, and then perform text conversion on the voice information, so as to obtain a corresponding text corpus. The text corpus in this step may be, for example: "help me turn on air conditioner", "help me turn on air conditioner and sound, turn off sound after one hour", "help me turn on lights and air conditioner in living room, turn off television, turn off them all after one hour", etc.
Step S204, obtaining a plurality of keywords based on the text corpus, and matching the keywords against the word slots in each rule template according to the word type of each keyword and the word type of each word slot; in the case that every keyword is matched with a corresponding word slot in the same rule template at the same time, it is determined that the keywords match that target rule template, and step S205 is executed; in the case that the keywords are not all matched with corresponding word slots in any single rule template at the same time, it is determined that no target rule template is matched, and step S206 is executed.
In this step, a rule template is exemplified as follows. Namely, a certain rule template is as follows: [ D: passive ] [ D: action ] [ D: device ] [ D: and ] [ D: time ] [ D: action ] [ D: refer ] [ W:0-1];0-2@device_control| 4-6@delay_control.
Wherein the intention label of the word-slot group consisting of word slots 0-2 is "device control", and the intention label of the word-slot group consisting of word slots 4-6 is "delayed device control". Suppose the corpus text is "turn on the television and turn it off after one hour". Through rule template matching, the keyword "turn on" can be matched to the [D:action] word slot, "television" to the [D:device] word slot, "one hour" to the [D:time] word slot, and "it" to the [D:refer] word slot, so this rule template can be determined to be the target rule template.
Step S205, obtaining at least one intention recognition result based on the keyword corresponding to each target word slot in the target rule template and the target intention label corresponding to each target word slot group.
In this step, after the terminal device matches the target rule template, it can use the target rule template to obtain the intention recognition result, for example with the following rule template:
[D:passive][D:action][D:device][D:and][D:time][D:action][D:refer][W:0-1];0-2@device_control|||4-6@delay_control;
the intention recognition result can be obtained according to the intention labels of the word-slot groups in the rule template, namely the two intention recognition results "device control" and "delayed device control".
Step S206, obtaining each keyword in the text corpus and an intention label corresponding to each keyword; calculating and obtaining the probability of each keyword corresponding to each intention label by using the intention recognition model based on each keyword and the intention label corresponding to each keyword; a first target intent tag is determined based on a probability that each of the keywords corresponds to each of the intent tags.
Step S207, splitting the text corpus based on the connection relation words in the text corpus to obtain a plurality of sub-text corpora; obtaining keywords of each sub-text corpus and intention labels corresponding to the keywords; based on the keywords of each sub-text corpus and the intention labels corresponding to the keywords, respectively carrying out intention recognition on each sub-text corpus based on a trained intention recognition model to obtain second target intention labels corresponding to each sub-text corpus; and obtaining target intention labels from the first target intention labels and the second target intention labels.
In this step, the text corpus is further split to obtain a plurality of sub-text corpora, and the intention recognition model is then used to perform intention recognition on each sub-text corpus to obtain an intention recognition result. The intention recognition result obtained after splitting is compared with the intention recognition result obtained without splitting; if they are consistent, the model result is used directly. If they are inconsistent, the intention recognition score of each sub-sentence in the post-splitting result is examined, that is, the intention label probability in the intention recognition result; if the probability exceeds a set threshold, the second target intention labels corresponding to the split text corpus are taken as the final target intention labels, otherwise the first target intention labels corresponding to the unsplit text corpus are taken as the final target intention labels.
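A minimal sketch of this comparison rule, assuming the pre-split and post-split results are available as label lists with per-label probabilities (the 0.5 threshold is an assumption):

```python
# Sketch of the result-merging rule described above; data shapes and the
# threshold value are assumptions.
def merge_results(first_labels, second_labels, second_probs, threshold=0.5):
    """first_labels: labels from the unsplit corpus;
    second_labels/second_probs: labels and probabilities per sub-corpus."""
    flat_second = [l for labels in second_labels for l in labels]
    if sorted(first_labels) == sorted(flat_second):
        return flat_second                      # consistent: keep the model result
    # Inconsistent: keep the post-split result only if every sub-corpus label
    # is confident enough, otherwise fall back to the pre-split result.
    if all(p > threshold for probs in second_probs for p in probs):
        return flat_second
    return first_labels

print(merge_results(["device_control"],
                    [["device_control"], ["delay_control"]],
                    [[0.8], [0.7]]))
# -> ['device_control', 'delay_control']
```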
Step S208, performing dependency relationship analysis on each keyword in the text corpus to obtain target corresponding relations containing action keywords and control equipment keywords; and obtaining the intention recognition result based on the target corresponding relations and the target intention labels corresponding to the action keywords in the target corresponding relations.
In this embodiment, in order to make the final intention recognition result more comprehensive, after obtaining the target intention labels, dependency analysis may be performed to obtain target device objects and target operations corresponding to the intention labels, that is, obtain device keyword objects and action keywords corresponding to the intention labels. Specifically, dependency relation analysis can be performed on each keyword in the text corpus based on a predetermined dependency syntax analysis mode, so as to obtain target corresponding relations containing action keywords and control equipment keywords; the predetermined dependency syntax analysis method may be a dependency syntax analysis method of a chinese language processing package (Han Language Processing; english: hanlp).
The dependency analysis in this embodiment is performed after rule matching and model-based intention recognition, by taking the previously recognized intents and word-slot labels and then performing dependency analysis on the word slots. The purpose of the dependency analysis is to know which action is to be performed on which device in each query. For example, "help me turn on the light and the air conditioner" yields the target correspondences: the action corresponding to the air conditioner is "on", and the action corresponding to the light is also "on". For "help me turn on the light and turn off the air conditioner after 1 hour", the target correspondences are: the light corresponds to "on" and the air conditioner corresponds to "off".
The following describes a specific application scenario, as shown in fig. 6, the intention recognition method in this embodiment includes:
Step one, a voice assistant in the electronic device acquires the text corpus to be subjected to intention recognition and converts the text corpus into a character string query;
step two, the voice assistant in the electronic equipment utilizes the rule template to match the query, and if the matching is successful, the intention recognition result is directly obtained; otherwise, executing the third step;
thirdly, the voice assistant in the electronic equipment performs intention recognition on the query by using the model to obtain a first intention recognition result; the first intention recognition result comprises at least one intention label;
Step four, the voice assistant in the electronic device judges whether the query contains a marker word/connection relation word; if there is no marker word, the first intention recognition result obtained in step three is used as the final intention recognition result; if a marker word exists, step five is executed;
step five, the voice assistant in the electronic equipment disassembles the query according to the marker words/the connection relation words to obtain a plurality of sub-queries and utilizes the intention recognition model to recognize, so as to obtain a second intention recognition result; the second intention recognition result comprises at least one intention label;
Step six, a voice assistant in the electronic equipment combines the first intention recognition result and the second intention recognition result to obtain a final target intention recognition result;
In this step, the electronic device can compare the first intention recognition result with the second intention recognition result; in the case that they are consistent, either the first or the second intention recognition result can be taken as the final target intention recognition result. In the case that they are inconsistent, the intention labels whose probability values exceed a preset value, judged from the intention label probabilities in the first intention recognition result and in the second intention recognition result, are determined to be the final target intention labels, and the corresponding intention recognition result is then determined to be the final intention recognition result.
And step seven, the voice assistant in the electronic equipment performs dependency relation mining/analysis on the text corpus to obtain the corresponding relation between the actions and the equipment objects.
In this embodiment, after the voice assistant in the electronic device obtains the final target intention recognition result and the correspondences between actions and device objects in the corpus text, it may determine one or more corresponding voice commands and then control devices such as curtains, lamps, televisions and speakers in the smart home based on the voice commands, or send control commands to a gateway, which controls the corresponding home devices based on the control commands.
The method in this embodiment addresses the current bottleneck in voice-assistant semantic understanding, namely that most voice-assistant products can only understand simple single-intent sentences, for example "turn on the air conditioner" or "turn off the light". It enables more natural, barrier-free semantic understanding by the voice assistant, so that the smart home system can parse the user's speech more accurately and control the smart devices accordingly.
Another embodiment of the present invention provides an intention recognition apparatus, as shown in fig. 7, including:
an obtaining module 11, configured to obtain a text corpus to be subjected to intent recognition;
the matching module 12 is used for respectively matching the text corpus with the rule templates to obtain matching results;
a first recognition module 13, configured to obtain an intention recognition result in the text corpus based on recognition with the target rule template when the matching result is that the target rule template corresponding to the text corpus is matched;
and the second recognition module 14 is configured to perform intent recognition on the text corpus based on the trained intent recognition model to obtain an intent recognition result if the matching result is that the corresponding target rule template is not matched.
In the implementation process of the embodiment, each rule template comprises a plurality of word slots; the matching module is used for:
acquiring a plurality of keywords based on the text corpus, and matching each keyword against the word slots in each rule template;
determining that the target rule template is matched when every keyword is matched with a corresponding word slot in the same rule template;
and determining that no target rule template is matched when the keywords cannot all be matched with corresponding word slots in any single rule template.
In the implementation process of the embodiment, each rule template comprises an intention label corresponding to each word slot group; the first recognition module is used for:
obtaining an intention recognition result based on the keywords corresponding to the target word slots in the target rule template and the target intention labels corresponding to the target word slots.
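As a purely illustrative sketch of the word-slot matching and intention-label lookup described above (the template contents, slot names and keyword vocabularies below are invented examples, not taken from the patent):

RULE_TEMPLATES = [
    {"slots": {"action": {"turn on", "open"}, "device": {"light", "lamp"}},
     "intent": "turn_on_light"},
    {"slots": {"action": {"close", "shut"}, "device": {"curtain"}},
     "intent": "close_curtain"},
]

def match_template(keywords, templates=RULE_TEMPLATES):
    """Return (intention label, slot filling) if every keyword fits a word slot of one template."""
    for tpl in templates:
        fill = {}
        for kw in keywords:
            slot = next((name for name, vocab in tpl["slots"].items() if kw in vocab), None)
            if slot is None:
                break                                   # this keyword has no word slot in this template
            fill[slot] = kw
        else:
            if len(fill) == len(tpl["slots"]):          # every word slot of the template is filled
                return tpl["intent"], fill              # intention recognition result from the template
    return None                                         # no target rule template matched; use the model

print(match_template(["turn on", "light"]))   # ('turn_on_light', {'action': 'turn on', 'device': 'light'})
print(match_template(["play", "music"]))      # None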
In a specific implementation process of this embodiment, the second recognition module is configured to: acquire each keyword in the text corpus and the intention label corresponding to each keyword;
calculating and obtaining the probability of each keyword corresponding to each intention label by using the intention recognition model based on each keyword and the intention label corresponding to each keyword;
and determining a first target intention label based on the probability that each keyword corresponds to each intention label, so as to obtain an intention recognition result.
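The patent does not specify the model's internal form; as one possible illustration only, the sketch below treats it as a bag-of-keywords scorer whose keyword/label scores (invented here) are turned into per-label probabilities with a softmax, and the first target intention label is the highest-probability label above an assumed preset value.

import math

KEYWORD_LABEL_SCORES = {                    # assumed lookup: keyword -> {intention label: score}
    "light":   {"turn_on_light": 2.0, "close_curtain": 0.1},
    "turn on": {"turn_on_light": 1.5, "close_curtain": 0.2},
}

def predict_intent(keywords, score_table=KEYWORD_LABEL_SCORES, threshold=0.5):
    totals = {}
    for kw in keywords:
        for label, score in score_table.get(kw, {}).items():
            totals[label] = totals.get(label, 0.0) + score
    if not totals:
        return None
    z = sum(math.exp(s) for s in totals.values())        # softmax normalizer
    probs = {label: math.exp(s) / z for label, s in totals.items()}
    target = max(probs, key=probs.get)                   # candidate first target intention label
    return (target, probs[target]) if probs[target] > threshold else None

print(predict_intent(["turn on", "light"]))   # ('turn_on_light', ~0.96)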
In this embodiment, in a specific implementation process, the second recognition module is further configured to: after determining a first target intention label based on the probability that each keyword corresponds to each intention label, splitting the text corpus based on the connection relation words in the text corpus to obtain a plurality of sub-text corpora;
obtaining keywords of each sub-text corpus and intention labels corresponding to the keywords;
based on the keywords of each sub-text corpus and the intention labels corresponding to the keywords, respectively carrying out intention recognition on each sub-text corpus based on a trained intention recognition model to obtain second target intention labels corresponding to each sub-text corpus;
and obtaining the intention recognition result based on each first target intention label and each second target intention label.
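A minimal sketch of splitting a compound corpus on connection relation words and merging the first and second target intention labels might look as follows; the connection-word list and the per-query recognizer are placeholders, and the simple label union shown is only one possible merging rule.

import re

CONNECTION_WORDS = ["and then", "and", "then"]          # assumed connection relation words, longest first

def split_corpus(text, connection_words=CONNECTION_WORDS):
    pattern = "|".join(rf"\b{re.escape(w)}\b" for w in connection_words)
    return [part.strip() for part in re.split(pattern, text) if part.strip()]

def recognize_compound(text, recognize_one):
    first = set(recognize_one(text))                    # first target intention labels
    second = set()
    for sub in split_corpus(text):
        second.update(recognize_one(sub))               # second target intention labels per sub-corpus
    return first | second                               # merged intention recognition result

labels = recognize_compound(
    "turn on the light and then close the curtain",
    recognize_one=lambda t: (["turn_on_light"] if "light" in t else [])
                            + (["close_curtain"] if "curtain" in t else []),
)
print(labels)   # {'turn_on_light', 'close_curtain'}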
In a specific implementation process of the embodiment, the intention recognition device further includes an analysis module, where the analysis module is configured to perform dependency relationship analysis on each keyword in the text corpus after obtaining a target intention label, so as to obtain a target corresponding relationship including an action keyword and a control device keyword;
and to obtain the intention recognition result based on the target corresponding relations and the target intention labels corresponding to the action keywords in the target corresponding relations.
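As an illustration only: a real implementation would likely rely on a dependency parser, whereas the following placeholder simply pairs each action keyword with the next control-device keyword in reading order. The vocabularies are invented and the pairing heuristic is an assumption, not the patent's analysis method.

ACTION_WORDS = {"turn on", "close", "open"}             # assumed action keyword vocabulary
DEVICE_WORDS = {"light", "curtain", "television"}       # assumed control device keyword vocabulary

def action_device_pairs(keywords):
    pairs, pending_action = [], None
    for kw in keywords:
        if kw in ACTION_WORDS:
            pending_action = kw
        elif kw in DEVICE_WORDS and pending_action is not None:
            pairs.append((pending_action, kw))          # target corresponding relation (action, device)
            pending_action = None
    return pairs

print(action_device_pairs(["turn on", "light", "close", "curtain"]))
# [('turn on', 'light'), ('close', 'curtain')]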
In a specific implementation process of this embodiment, the intention recognition device further includes a training module, configured to train to obtain the intention recognition model before the text corpus to be subjected to intention recognition is obtained; the training module is specifically used for:
acquiring a plurality of sample text corpus and sample intention labels corresponding to the sample text corpus; calculating the probability that each keyword corresponds to each keyword intention label in the sample text corpus by using an initial intention recognition model based on preset keywords and keyword intention labels corresponding to each preset keyword; determining a current intent recognition result based on the probability; and adjusting model parameters in the initial intention recognition model based on the difference between the current intention recognition result and the sample intention label until training is stopped when training conditions are met, so as to obtain a trained intention recognition model.
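Purely as an illustration of the training loop described above, the sketch below fits a bag-of-keyword softmax classifier whose (keyword, intention label) scores are adjusted from the gap between the current prediction and the sample intention label; the sample data, learning rate and fixed epoch count are assumptions, since the patent does not fix the model form or the stopping condition.

import math

SAMPLES = [(["turn on", "light"], "turn_on_light"),     # (sample keywords, sample intention label)
           (["close", "curtain"], "close_curtain")]
LABELS = ["turn_on_light", "close_curtain"]

def softmax(scores):
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.exp(s) / z for k, s in scores.items()}

def train(samples, labels, epochs=50, lr=0.5):
    weights = {}                                        # model parameters: one score per (keyword, label)
    for _ in range(epochs):
        for keywords, gold in samples:
            scores = {lbl: sum(weights.get((kw, lbl), 0.0) for kw in keywords) for lbl in labels}
            probs = softmax(scores)                     # current intention recognition result
            for lbl in labels:
                grad = probs[lbl] - (1.0 if lbl == gold else 0.0)   # difference to the sample label
                for kw in keywords:
                    weights[(kw, lbl)] = weights.get((kw, lbl), 0.0) - lr * grad
    return weights

model = train(SAMPLES, LABELS)
print(model[("light", "turn_on_light")] > model[("light", "close_curtain")])   # True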
According to the intention recognition device in the embodiment, the intention recognition is carried out on the text corpus by adopting a mode of combining the rule template with the intention recognition model, so that a final intention recognition result is more accurate, and the problem of inaccurate recognition result caused by single intention recognition by adopting the model in the prior art is solved.
Another embodiment of the present invention provides a storage medium storing a computer program, where the computer program when executed by a processor implements an embodiment of any of the above intent recognition methods, and this embodiment is not repeated herein.
According to the storage medium in the embodiment, the intent recognition is carried out on the text corpus by adopting a mode of combining the rule template with the intent recognition model, so that a final intent recognition result is more accurate, and the problem of inaccurate recognition result caused by single intent recognition by adopting the model in the prior art is solved.
Another embodiment of the present invention provides an electronic device, at least including a memory and a processor, where the memory stores a computer program, and the processor implements an embodiment of any of the above-described intent recognition methods when executing the computer program on the memory, and this embodiment is not repeated herein.
According to the electronic device, the rule template and the intention recognition model are combined to perform intention recognition on the text corpus, so that a final intention recognition result is more accurate, and the problem of inaccurate recognition result caused by single intention recognition by using the model in the prior art is solved.
Another embodiment of the present invention provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium; the processor of the computer device reads the computer instructions from the computer readable storage medium, and when the processor executes the computer instructions, the embodiment of any intention recognition method described above is implemented, and this embodiment will not be repeated here.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (10)

1. An intent recognition method, comprising:
acquiring text corpus to be subjected to intention recognition;
matching the text corpus with the rule templates respectively to obtain a matching result;
under the condition that the matching result is that a target rule template corresponding to the text corpus is matched, obtaining an intention recognition result in the text corpus based on the target rule template recognition;
and under the condition that the matching result is that the corresponding target rule template is not matched, carrying out intention recognition on the text corpus based on the trained intention recognition model to obtain an intention recognition result.
2. The method of claim 1, wherein each of the rule templates includes a number of word slots; the step of matching the text corpus with the rule templates to obtain a matching result comprises the following steps:
acquiring a plurality of keywords based on the text corpus, and respectively matching the keywords with word slots in each rule template based on each keyword;
under the condition that each keyword is matched with a corresponding word slot in the same rule template, determining that the keyword is matched with the target rule template;
and under the condition that the keywords are not matched with the corresponding word slots in any rule template at the same time, determining that the keywords are not matched with the target rule template.
3. The method of claim 1, wherein each of the rule templates includes an intent tag corresponding to each word slot group;
the obtaining the intention recognition result in the text corpus based on the target rule template recognition comprises the following steps:
and obtaining an intention recognition result based on the keywords corresponding to the target word slots in the target rule template and the target intention labels corresponding to the target word slots.
4. The method of claim 1, wherein the performing intent recognition on the text corpus based on the trained intent recognition model to obtain an intent recognition result comprises:
Acquiring each keyword in the text corpus and an intention label corresponding to each keyword;
calculating and obtaining the probability of each keyword corresponding to each intention label by using the intention recognition model based on each keyword and the intention label corresponding to each keyword;
and determining a first target intent tag based on the probability that each of the keywords corresponds to each of the intent tags, so as to obtain an intent recognition result.
5. The method of claim 4, wherein after determining a first target intent label based on a probability that each of the keywords corresponds to each of the intent labels, the method further comprises:
splitting the text corpus based on the connection relation words in the text corpus to obtain a plurality of sub-text corpora;
obtaining keywords of each sub-text corpus and intention labels corresponding to the keywords;
based on the keywords of each sub-text corpus and the intention labels corresponding to the keywords, respectively carrying out intention recognition on each sub-text corpus based on a trained intention recognition model to obtain second target intention labels corresponding to each sub-text corpus;
and obtaining the intention recognition result based on each first target intention label and each second target intention label.
6. The method of any of claims 3-5, wherein after obtaining the target intent tag, the method further comprises:
performing dependency relation analysis on each keyword in the text corpus to obtain a target corresponding relation containing action keywords and control equipment keywords;
and obtaining the intention recognition result based on the target corresponding relations and the target intention labels corresponding to the action keywords in the target corresponding relations.
7. The method of any of claims 1-5, wherein prior to the obtaining the text corpus to be intent recognized, the method further comprises: training to obtain the intent recognition model, comprising:
acquiring a plurality of sample text corpus and sample intention labels corresponding to the sample text corpus;
calculating the probability that each keyword corresponds to each keyword intention label in the sample text corpus by using an initial intention recognition model based on preset keywords and keyword intention labels corresponding to each preset keyword;
determining a current intention recognition result corresponding to the sample text corpus based on the probability;
and adjusting model parameters in the initial intention recognition model based on the difference between the current intention recognition result and the sample intention label until training is stopped when training conditions are met, so as to obtain a trained intention recognition model.
8. An intent recognition device, comprising:
the acquisition module is used for acquiring text corpus to be subjected to intention recognition;
the matching module is used for respectively matching the text corpus with the rule templates to obtain a matching result;
the first recognition module is used for obtaining an intention recognition result in the text corpus based on the target rule template recognition under the condition that the matching result is that the target rule template corresponding to the text corpus is matched;
and the second recognition module is used for carrying out intention recognition on the text corpus based on the trained intention recognition model under the condition that the matching result is not matched with the corresponding target rule template, so as to obtain an intention recognition result.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of the method of any one of the preceding claims 1-7.
10. An electronic device comprising at least a memory, a processor, the memory having stored thereon a computer program, the processor, when executing the computer program on the memory, implementing the steps of the method of any of the preceding claims 1-7.
CN202211657527.2A 2022-12-22 2022-12-22 Intention recognition method and device, storage medium and electronic equipment Pending CN116187335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211657527.2A CN116187335A (en) 2022-12-22 2022-12-22 Intention recognition method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211657527.2A CN116187335A (en) 2022-12-22 2022-12-22 Intention recognition method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116187335A true CN116187335A (en) 2023-05-30

Family

ID=86441309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211657527.2A Pending CN116187335A (en) 2022-12-22 2022-12-22 Intention recognition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116187335A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116662555A (en) * 2023-07-28 2023-08-29 成都赛力斯科技有限公司 Request text processing method and device, electronic equipment and storage medium
CN116662555B (en) * 2023-07-28 2023-10-20 成都赛力斯科技有限公司 Request text processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Keneshloo et al. Deep reinforcement learning for sequence-to-sequence models
WO2021093449A1 (en) Wakeup word detection method and apparatus employing artificial intelligence, device, and medium
CN111914568B (en) Method, device and equipment for generating text sentence and readable storage medium
CN110309514B (en) Semantic recognition method and device
CN108170749B (en) Dialog method, device and computer readable medium based on artificial intelligence
CN112288075B (en) Data processing method and related equipment
Brants TnT-a statistical part-of-speech tagger
Räsänen Computational modeling of phonetic and lexical learning in early language acquisition: Existing models and future directions
Mirkovic et al. Where does gender come from? Evidence from a complex inflectional system
KR20200021429A (en) Method and apparatus for identifying key phrase in audio data, device and medium
WO2007075374A2 (en) Automatic grammar generation using distributedly collected knowledge
US20190179905A1 (en) Sequence conversion method and apparatus in natural language processing
EP4109324A2 (en) Method and apparatus for identifying noise samples, electronic device, and storage medium
US20220156467A1 (en) Hybrid Natural Language Understanding
CN114676255A (en) Text processing method, device, equipment, storage medium and computer program product
CN110210036A (en) A kind of intension recognizing method and device
CN116187335A (en) Intention recognition method and device, storage medium and electronic equipment
CN111710337A (en) Voice data processing method and device, computer readable medium and electronic equipment
Bahcevan et al. Deep neural network architecture for part-of-speech tagging for turkish language
CN116541493A (en) Interactive response method, device, equipment and storage medium based on intention recognition
CN111126084A (en) Data processing method and device, electronic equipment and storage medium
CN111968646B (en) Voice recognition method and device
CN111931503B (en) Information extraction method and device, equipment and computer readable storage medium
Nair et al. Enabling remote school education using knowledge graphs and deep learning techniques
GB2604317A (en) Dialogue management

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination