CN113268994B - Intention identification method and device based on capsule network

Info

Publication number: CN113268994B
Application number: CN202110807780.0A
Authority: CN (China)
Prior art keywords: intention, information, interactive, different dimensions, labels
Other versions: CN113268994A (Chinese)
Inventors: 吴岸城, 孙梦轩
Assignee: Ping An Life Insurance Company of China Ltd
Legal status: Active (granted)

Classifications

    • G06F40/30 Handling natural language data; semantic analysis
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects such as a local or distributed file system or database
    • G06N3/044 Neural network architectures; recurrent networks, e.g. Hopfield networks
    • G06N3/045 Neural network architectures; combinations of networks
    • G06V40/174 Facial expression recognition


Abstract

The invention relates to artificial intelligence and discloses an intention recognition method based on a capsule network, which comprises the following steps: acquiring the interaction information involved in an interactive dialog and labeling it to form intention labels in different dimensions; vectorizing the interaction information carrying the intention labels in different dimensions and inputting it into a capsule network model for training, so as to construct an intention recognition model; and, in response to an intention recognition instruction for a target interactive dialog, vectorizing the interaction information in the target interactive dialog and inputting it into the intention recognition model for intention recognition, to obtain an intention recognition result carrying the intention labels of the target interactive dialog in different dimensions. The invention also relates to blockchain technology: the data related to the intention recognition model is stored in a blockchain. The invention can perform intention recognition on the interaction information of an interactive dialog by fusing multi-dimensional information features and provide a more accurate intention recognition result.

Description

Intention identification method and device based on capsule network
Technical Field
The present invention relates to artificial intelligence, and more particularly to a capsule network-based intention recognition method, apparatus, computer device, and computer storage medium.
Background
In intelligent dialog systems, intention recognition is an essential basis for realizing a dialog: it clarifies the purpose of the user's utterances so that appropriate feedback can be given. During intention recognition, emotion and intent need to be inferred from both the user's text content and the user's expression. In the related art, intention recognition is mainly performed by text classification. However, because language is deeply ambiguous and human emotion is rich, the same intention can carry different meanings under different emotions. Recognition from plain text alone may therefore misjudge an ambiguous intention: for example, "I do not need it" may express a genuine lack of need, or simply that the user is bored and does not want to continue the call. Emotion recognition, as a separate classification module, can be used together with intention recognition and is generally divided into text emotion recognition and facial emotion recognition. Text emotion recognition identifies keywords expressing emotions such as joy, anger, sadness and happiness in the spoken text and determines the specific emotion with a text classifier. Facial emotion recognition judges human expressions, such as frowning, laughing and crying, and determines the specific emotion with an image classifier. Combining the emotion recognition result then allows a more detailed intention recognition result to be output.
In practical application scenarios, however, intention recognition and emotion recognition are two mutually independent processes. A single facial expression cannot accurately capture the user's emotion: a frown, for example, may represent doubt, speechlessness, impatience or anger; a large number of training images is required; and each person's expressive habits differ. Likewise, intention recognition alone does not take the emotional information of the interaction into account and cannot accurately express the real intention. Because too little information is examined during the interaction, even if labels in the two dimensions of intention and emotion are both output, the final intention recognition result deviates, which affects the accuracy of intention recognition.
Disclosure of Invention
In view of the above, the present invention provides a capsule network-based intention recognition method, apparatus, computer device and computer storage medium, with the main aim of solving the problem in the prior art that intention recognition and facial expression recognition are two mutually independent processes, so that the final intention recognition result deviates and the accuracy of intention recognition suffers.
According to an aspect of the present invention, there is provided a capsule network-based intention recognition method, the method including:
acquiring the interaction information involved in an interactive dialog, and labeling the interaction information to form intention labels in different dimensions;
vectorizing the interaction information carrying the intention labels in different dimensions and inputting it into a capsule network model for training, so as to construct an intention recognition model, wherein the intention recognition model combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels, the capsule network model is a carrier containing a plurality of neurons, each neuron represents various attributes of a specific entity appearing in the interactive dialog, and the attributes are instantiation parameters of different types formed from the information features in different dimensions;
in response to an intention recognition instruction for a target interactive dialog, vectorizing the interaction information in the target interactive dialog and inputting it into the intention recognition model for intention recognition, to obtain an intention recognition result carrying the intention labels of the target interactive dialog in different dimensions.
In another embodiment of the present invention, vectorizing the interaction information carrying the intention labels in different dimensions and inputting it into the capsule network model for training to construct the intention recognition model specifically includes:
abstracting, from the interaction information, the information of the intention label in the text intention dimension, the information of the intention label in the text emotion dimension, and the information of the intention label in the face emotion dimension, as the interaction information carrying the intention labels in different dimensions;
quantizing the interaction information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions, and inputting the vector matrices into the capsule network model for training to construct the intention recognition model.
In another embodiment of the present invention, the capsule network model is composed of a multi-layer structure, and quantizing the interaction information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions and inputting the vector matrices into the capsule network model for training to construct the intention recognition model includes:
performing feature extraction, using the encoding layer of the capsule network model, on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions, to obtain the information features of the intention labels in different dimensions;
weighting the information features of the intention labels in different dimensions using the decoding layer of the capsule network model, and taking the probability values of the interaction information on the intention labels in different dimensions as the intention recognition result, to construct the intention recognition model.
In another embodiment of the present invention, performing feature extraction on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions using the encoding layer of the capsule network model, to obtain the information features of the intention labels in different dimensions, specifically includes:
converting, using the encoding layer of the capsule network model, the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions into a description vector of the intention label in the text intention dimension, a description vector of the intention label in the text emotion dimension, and a description vector of the intention label in the face emotion dimension;
determining, for the information types corresponding to the interaction information, feature extraction models suitable for preprocessing interaction information of the different information types;
performing feature learning on the three description vectors using the feature extraction models suitable for preprocessing interaction information of the different information types, to obtain the information feature of the intention label in the text intention dimension, the information feature of the intention label in the text emotion dimension, and the information feature of the intention label in the face emotion dimension.
In another embodiment of the present invention, determining, for the information types corresponding to the interaction information, feature extraction models suitable for preprocessing interaction information of the different information types includes:
for interaction information of the text type, using a recurrent neural network model as the feature extraction model for preprocessing the interaction information;
for interaction information of the picture type, using a convolutional neural network model as the feature extraction model for preprocessing the interaction information.
In another embodiment of the present invention, after weighting the information features of the intention labels in different dimensions using the decoding layer of the capsule network model and taking the probability values of the intention labels of the interaction information in different dimensions as the intention recognition result to construct the intention recognition model, the method further includes:
adjusting the model parameters of the intention recognition model using a preset loss function in combination with the intention labels in different dimensions, and updating the intention recognition model.
In another embodiment of the present invention, the data related to the intention recognition model is stored in a blockchain, and after responding to the intention recognition instruction for the target interactive dialog, vectorizing the interaction information in the target interactive dialog and inputting it into the intention recognition model for intention recognition to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions, the method further includes:
for the intention recognition result of the intention labels of the target interactive dialog in different dimensions, screening out, from a preset corpus, the corpus entries matched with the intention labels in different dimensions as the answer corpus, and outputting it.
According to another aspect of the present invention, there is provided a capsule network-based intention recognition apparatus, the apparatus including:
an acquisition unit, configured to acquire the interaction information involved in an interactive dialog and label the interaction information to form intention labels in different dimensions;
a construction unit, configured to vectorize the interaction information carrying the intention labels in different dimensions and input it into a capsule network model for training to construct an intention recognition model, wherein the intention recognition model combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels, the capsule network model is a carrier containing a plurality of neurons, each neuron represents various attributes of a specific entity appearing in the interactive dialog, and the attributes are instantiation parameters of different types formed from the information features in different dimensions;
a recognition unit, configured to respond to an intention recognition instruction for a target interactive dialog, vectorize the interaction information in the target interactive dialog and input it into the intention recognition model for intention recognition, to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions.
In another embodiment of the present invention, the construction unit includes:
an abstraction module, configured to abstract, from the interaction information, the information of the intention label in the text intention dimension, the information of the intention label in the text emotion dimension, and the information of the intention label in the face emotion dimension, as the interaction information carrying the intention labels in different dimensions;
a construction module, configured to quantize the interaction information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions, and input the vector matrices into the capsule network model for training to construct the intention recognition model.
In another embodiment of the present invention, the capsule network model is composed of a multi-layer structure, and the construction module includes:
an extraction submodule, configured to perform feature extraction, using the encoding layer of the capsule network model, on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions, to obtain the information features of the intention labels in different dimensions;
a weighting submodule, configured to weight the information features of the intention labels in different dimensions using the decoding layer of the capsule network model, and take the probability values of the intention labels of the interaction information in different dimensions as the intention recognition result to construct the intention recognition model.
In another embodiment of the present invention, the extraction submodule is specifically configured to convert, using the encoding layer of the capsule network model, the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions into a description vector of the intention label in the text intention dimension, a description vector of the intention label in the text emotion dimension, and a description vector of the intention label in the face emotion dimension;
the extraction submodule is specifically configured to determine, for the information types corresponding to the interaction information, the feature extraction models suitable for preprocessing interaction information of the different information types;
the extraction submodule is further configured to perform feature learning on the three description vectors using the feature extraction models suitable for preprocessing interaction information of the different information types, to obtain the information feature of the intention label in the text intention dimension, the information feature of the intention label in the text emotion dimension, and the information feature of the intention label in the face emotion dimension.
In another embodiment of the present invention, the extraction submodule is further configured to use, for interaction information of the text type, a recurrent neural network model as the feature extraction model for preprocessing the interaction information;
the extraction submodule is further configured to use, for interaction information of the picture type, a convolutional neural network model as the feature extraction model for preprocessing the interaction information.
In another embodiment of the present invention, the construction module further includes:
an adjustment submodule, configured to, after the decoding layer of the capsule network model weights the information features of the intention labels in different dimensions and the probability values of the interaction information on the intention labels in different dimensions are taken as the intention recognition result to construct the intention recognition model, adjust the model parameters of the intention recognition model using a preset loss function in combination with the intention labels in different dimensions, so as to update the intention recognition model.
In another embodiment of the present invention, the data related to the intention recognition model is stored in a blockchain, and the apparatus further includes:
a screening unit, configured to, after the interaction information in the target interactive dialog is vectorized in response to the intention recognition instruction for the target interactive dialog and input into the intention recognition model for intention recognition to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions, screen out, from a preset corpus, the corpus entries matched with the intention labels in different dimensions as the answer corpus for output, for the intention recognition result of the intention labels of the target interactive dialog in different dimensions.
According to yet another aspect of the present invention, there is provided a computer device, including a memory storing a computer program and a processor that implements the steps of the capsule network-based intention recognition method when executing the computer program.
According to yet another aspect of the present invention, there is provided a computer storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the capsule network-based intention recognition method.
By means of the above technical solution, the present invention provides a capsule network-based intention recognition method and apparatus. The interaction information involved in an interactive dialog is acquired and labeled to form intention labels in different dimensions; the interaction information carrying the intention labels in different dimensions is vectorized and input into a capsule network model for training to construct an intention recognition model, which combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels; and, in response to an intention recognition instruction for a target interactive dialog, the interaction information in the target interactive dialog is vectorized and input into the intention recognition model for intention recognition, yielding an intention recognition result carrying the intention labels of the target interactive dialog in different dimensions. Compared with the prior-art approach of recognizing intentions from interaction information in a single dimension, performing intention recognition on the information features of the interaction information in different dimensions through the intention recognition model allows the interaction information to be examined from multi-dimensional information features, increases the generalization ability of intention recognition, fuses multi-modal information into the intention recognition process, saves memory and machine resources, and provides a more accurate intention recognition result.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic flow chart illustrating a capsule network-based intention recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another capsule network-based intention recognition method provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of an architecture based on the capsule network model provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a capsule network-based intention recognition apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another capsule network-based intention recognition apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the present invention provides a capsule network-based intention recognition method in which the intention recognition model performs intention recognition on the interaction information of interactive dialogs by fusing multi-dimensional information features and provides a more accurate intention recognition result. As shown in FIG. 1, the method includes the following steps:
101. Acquire the interaction information involved in the interactive dialog, and label the interaction information to form intention labels in different dimensions.
The interaction information corresponds to the text, speech, and pictures collected during the interactive dialog and comprises different information types, mainly the text type and the picture type. The interactive dialog may be a conversation between service staff and a user on a network platform, which deposits dialog content in text or speech form, or a conversation between service staff and a user in a real-world scene, which deposits dialog content in speech or picture form; dialog content in picture form can capture the user's expression information. In practical application, interaction information in text form needs no processing, while interaction information in speech form needs to be converted into text form.
Specifically, in the process of labeling the interaction information, the interaction information may be labeled by combining the intention information it expresses in terms of text intention, text emotion, and face emotion. A number of text intentions and emotion labels may be predefined according to business requirements, and the statements in the interaction information are labeled to form intention labels in different dimensions. For example, the intention label expressed by statement A in the text intention dimension is shopping, its intention label in the text emotion dimension is neutral, and its intention label in the face emotion dimension is happy.
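For illustration only, a labeled training sample of the kind described above might be organized as follows (a minimal Python sketch; the field names and values are hypothetical and not part of the patent):

```python
# A minimal sketch of one labeled training sample; all field names and
# values are hypothetical illustrations, not taken from the patent.
sample = {
    "text": "I want to buy travel insurance",  # utterance, already transcribed from speech
    "face_image": "frames/user_0421.png",      # expression captured during this turn
    "labels": {
        "text_intention": "shopping",   # intention label in the text intention dimension
        "text_emotion": "neutral",      # intention label in the text emotion dimension
        "face_emotion": "happy",        # intention label in the face emotion dimension
    },
}
```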
In the embodiment of the present invention, the execution subject may be a capsule network-based intention recognition device, applied in particular on the server side. Because the interaction information involved in an interactive dialog can reveal the user's interaction intention in different dimensions, the present application extracts the information features exhibited in different dimensions from the interaction information and fuses them to recognize the user's intention, which can improve the accuracy of the intention recognition result.
102. Vectorize the interaction information carrying the intention labels in different dimensions, input it into the capsule network model for training, and construct the intention recognition model.
The intention recognition model combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels. The interaction information can be vectorized into information features in different dimensions and then recognized, so that the intention labels of the interactive dialog are output; this amounts to judging across different dimensions during the recognition process and outputting a more accurate intention recognition result. Specifically, the vectorization of the interaction information can abstract features from the interaction information and quantize them into information features in different dimensions, mainly including the information feature in the text intention dimension, the information feature in the text emotion dimension, and the information feature in the face emotion dimension. It should be emphasized that, to further ensure the privacy and security of the data related to the intention recognition model, that data may also be stored in a node of a blockchain.
The vectorization of the previous stage first forms information features in different dimensions, which are input into the capsule network model; during training, the model parameters are adjusted in combination with the intention labels annotated in the different dimensions, and the intention recognition model is constructed, so that the model can output more accurate intention labels for the information features. These intention labels combine demand intention and emotional tendency to provide an intention recognition result closer to the user's real intention in the interactive dialog.
The capsule network model is a carrier containing a plurality of neurons; each neuron represents various attributes of a specific entity appearing in the interactive dialog, and the attributes are instantiation parameters of different types formed from the information features in different dimensions. The attributes specifically include the text intention label, the text emotion label, and the face emotion label. The text intention label may be an interest intention, such as movies, games, or automobiles, or a behavior intention, such as purchasing or consulting; the text emotion label may be happy, angry, impatient, and so on; the face emotion label is similar to the text emotion label and likewise reflects the user's emotions of joy, anger, sorrow, and happiness. The instantiation parameters of the different types are then used to output the existence probabilities of the intention labels of the interactive dialog in different dimensions.
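The patent gives no formulas, but in a standard capsule network the length of a capsule's output vector is read as the existence probability of the entity it represents, which matches the description above. A minimal sketch of the usual squashing nonlinearity, under that assumption:

```python
import numpy as np

def squash(v: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standard capsule squashing: preserves the vector's direction (the
    instantiation parameters) while compressing its length into [0, 1),
    so the length can be read as an existence probability."""
    norm_sq = np.sum(v ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)
```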
103. In response to an intention recognition instruction for the target interactive dialog, vectorize the interaction information in the target interactive dialog and input it into the intention recognition model for intention recognition, to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions.
It can be understood that the information features formed by vectorizing the interaction information with intention labels in different dimensions are input to the intention recognition model as a multi-modal fusion. Compared with the prior-art approach of performing intention recognition separately on the text intention and the emotion intention and outputting labels in two dimensions, the intention recognition model in the present application can examine both text and image interaction information, which increases the generalization ability of intention recognition, and fuses the emotional information into the recognition process, covering not only the emotional information in the text but also the emotional information in the facial expression. Multi-modal fusion can thus save memory and machine resources online, achieving both accuracy and resource savings.
The embodiment of the present invention provides a capsule network-based intention recognition method: the interaction information involved in an interactive dialog is acquired and labeled to form intention labels in different dimensions; the interaction information carrying the intention labels in different dimensions is vectorized and input into a capsule network model for training to construct an intention recognition model, which combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels; and, in response to an intention recognition instruction for a target interactive dialog, the interaction information in the target interactive dialog is vectorized and input into the intention recognition model for intention recognition, yielding the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions. Compared with the prior-art approach of recognizing intentions from a single dimension, performing intention recognition on the information features of the interaction information in different dimensions through the intention recognition model allows the interaction information to be examined from multi-dimensional information features, increases the generalization ability of intention recognition, fuses multi-modal information into the recognition process, saves memory and machine resources, and provides a more accurate intention recognition result.
An embodiment of the present invention provides another capsule network-based intention recognition method in which the intention recognition model performs intention recognition on the interaction information of interactive dialogs by fusing multi-dimensional information features and provides a more accurate intention recognition result. As shown in FIG. 2, the method includes the following steps:
201. Acquire the interaction information involved in the interactive dialog, and label the interaction information to form intention labels in different dimensions.
Interaction information of different information types can reflect different interaction intentions from different dimensions. Interaction information in text form allows the user's interaction intention to be judged from the wording combined with the emotional dimension; the interaction intention may be a demand intention in the scene, such as a purchase intention or a consulting intention. Interaction information in picture form mainly serves to judge, from the emotional dimension, whether the demand intention deviates. In general, although the interaction intention the user shows in the interactive session explains the user's demand intention, the user's emotion in the session influences that demand intention to some extent. For example, if the keywords in the text indicate a shopping demand and the emotion in the text appears neutral, the user's demand intention is shopping; if the keywords in the text indicate a consulting intention while the emotion in the text and the picture appears bored, the user's demand intention is probably to finish the consultation as soon as possible.
202. Abstract, from the interaction information, the information of the intention label in the text intention dimension, the information of the intention label in the text emotion dimension, and the information of the intention label in the face emotion dimension, as the interaction information carrying the intention labels in different dimensions.
203. Quantize the interaction information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions, and input the vector matrices into the capsule network model for training to construct the intention recognition model.
The capsule network model is composed of a multi-layer structure comprising an encoding layer and a decoding layer. Specifically, the encoding layer of the capsule network model can perform feature extraction on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions, to obtain the information features of the intention labels in different dimensions; the decoding layer of the capsule network model then weights the information features of the intention labels in different dimensions, and the probability values of the interaction information on the intention labels in different dimensions are taken as the intention recognition result to construct the intention recognition model.
Specifically, in the process of using the encoding layer of the capsule network model to perform feature extraction on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions and obtain the information features of the intention labels in different dimensions, the encoding layer may first convert those vector matrices into a description vector of the intention label in the text intention dimension, a description vector of the intention label in the text emotion dimension, and a description vector of the intention label in the face emotion dimension; then, for the information types corresponding to the interaction information, the feature extraction models suitable for preprocessing interaction information of the different information types are determined; finally, feature learning is performed on the three description vectors using those feature extraction models, to obtain the information feature of the intention label in the text intention dimension, the information feature of the intention label in the text emotion dimension, and the information feature of the intention label in the face emotion dimension.
Specifically, a structure for constructing the intention recognition model is shown in FIG. 3. The embedding layer in FIG. 3 vectorizes the interaction information of the intention labels input in different dimensions for intention recognition, forming information vectors of the intention labels in different dimensions; feature extraction is then performed on these information vectors to form information features in different dimensions, which are input to the capsule layer after attention weighting. The capsule layer is the capsule network; it can identify the probability values of the interactive dialog on the different intention labels by combining the information features of the intention labels in different dimensions, and finally the intention label with the highest probability value is selected for output, carrying the corresponding demand intention and emotional tendency.
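The pipeline of FIG. 3 might be sketched as follows (a hedged reconstruction assuming PyTorch; the layer sizes, module names, and the simplified routing-free capsule head are illustrative assumptions, not the patent's implementation):

```python
import torch
import torch.nn as nn

class IntentCapsuleModel(nn.Module):
    """Sketch of the FIG. 3 pipeline: embedding layer -> per-dimension
    feature extraction -> attention weighting -> capsule layer -> label
    probabilities. Sizes and structure are simplifying assumptions."""

    def __init__(self, vocab_size=30000, emb_dim=128, hidden=128,
                 num_labels=12, caps_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # embedding layer
        self.text_rnn = nn.LSTM(emb_dim, hidden, batch_first=True,
                                bidirectional=True)      # text feature extraction
        self.face_cnn = nn.Sequential(                   # picture feature extraction
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 2 * hidden),
        )
        self.attn = nn.Linear(2 * hidden, 1)             # attention weighting
        self.caps = nn.Linear(2 * hidden, num_labels * caps_dim)  # capsule layer
        self.num_labels, self.caps_dim = num_labels, caps_dim

    def forward(self, token_ids, face_img):
        h, _ = self.text_rnn(self.embed(token_ids))      # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), -1)  # attention over tokens
        text_feat = (w.unsqueeze(-1) * h).sum(dim=1)
        face_feat = self.face_cnn(face_img)
        fused = text_feat + face_feat                    # multi-modal fusion
        caps = self.caps(fused).view(-1, self.num_labels, self.caps_dim)
        lengths = caps.norm(dim=-1)                      # one capsule per intention label
        return lengths.pow(2) / (1.0 + lengths.pow(2))   # squashed length = probability

# The label with the highest probability would then be selected for output:
# probs = model(token_ids, face_img); best = probs.argmax(dim=-1)
```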
Furthermore, the capsule network model's learning of specific position information can also assist in judging whether it is the user who appears in the interactive dialog, thereby eliminating the influence of external noise; for text information, the position information of phrases within the interactive dialog can likewise be learned, further improving the accuracy of intention recognition.
The interaction information appears as multiple information types and can be converted into description information in different dimensions according to the information type: the description information in the text intention dimension and the text emotion dimension is mainly of the text type, while the description information in the face emotion dimension is mainly of the picture type. Feature extraction is then performed on the description information in the different dimensions. This feature extraction amounts to a preprocessing of the text-type and picture-type description information, in which the description information is vectorized in combination with its dimension so as to output information features in the different dimensions. By processing the interaction information into information features in different dimensions and combining them for intention recognition, the recognition process can mix multi-modal information and output an accurate intention recognition result. Specifically, the feature extraction model suitable for preprocessing interaction information of each information type is determined according to the information type corresponding to the interaction information: for interaction information of the text type, a recurrent neural network model can be used as the feature extraction model for preprocessing; for interaction information of the picture type, a convolutional neural network model is used as the feature extraction model for preprocessing.
In practical application, for description information of the text type, an LSTM can be used in preprocessing to perform feature learning on the input vector and output the information features in the text intention dimension and the text emotion dimension; the LSTM can take the order of preceding and following sentences in the text into account. For description information of the picture type, a CNN can be used in preprocessing to perform feature learning on the input vector and output the information features in the face emotion dimension. The information features in the different dimensions are then output through bidirectional GRUs and attention weighting, respectively.
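The choice of preprocessing model by information type could be expressed as a simple dispatch (a sketch under the same PyTorch assumption; module sizes are illustrative):

```python
import torch.nn as nn

def extractor_for(info_type: str, emb_dim: int = 128, hidden: int = 128) -> nn.Module:
    """Pick the preprocessing feature extractor by information type, as the
    text describes: a recurrent model (LSTM) for text, a CNN for pictures."""
    if info_type == "text":
        return nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
    if info_type == "picture":
        return nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(), nn.Flatten())
    raise ValueError(f"unknown information type: {info_type}")
```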
Specifically, in the process of inputting the vectorized interaction information carrying the intention labels in different dimensions into the capsule network model for training, the parameters of the intention recognition model need to be adjusted continuously to improve the accuracy of model construction. Here, the model parameters of the intention recognition model can be adjusted using a preset loss function in combination with the intention labels in different dimensions, so as to update the intention recognition model.
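The patent only says "a preset loss function"; one plausible choice for a capsule head whose outputs are per-label existence probabilities is the margin loss commonly used with capsule networks, sketched here as an assumption:

```python
import torch

def margin_loss(probs: torch.Tensor, target_onehot: torch.Tensor,
                m_pos: float = 0.9, m_neg: float = 0.1, lam: float = 0.5) -> torch.Tensor:
    """Margin loss over per-label existence probabilities of shape
    (batch, num_labels): the true label's probability is pushed above
    m_pos while the other labels' probabilities are pushed below m_neg."""
    pos = target_onehot * torch.clamp(m_pos - probs, min=0).pow(2)
    neg = lam * (1.0 - target_onehot) * torch.clamp(probs - m_neg, min=0).pow(2)
    return (pos + neg).sum(dim=-1).mean()
```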
204. In response to an intention recognition instruction for the target interactive dialog, vectorize the interaction information in the target interactive dialog and input it into the intention recognition model for intention recognition, to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions.
205. For the intention recognition result of the intention labels of the target interactive dialog in different dimensions, screen out, from a preset corpus, the corpus entries matched with the intention labels in different dimensions as the answer corpus, and output it.
In a concrete application scenario, intention recognition can be applied to intelligent customer-service sessions, whose core is matching the user's intention: only when the user's intention is clear can a targeted answer be given. The corresponding corpus entries can be matched in the preset corpus according to the intention labels in different dimensions. For example, for "I do not need the air ticket", the intention recognition result output by the intention recognition model is a refund demand intention carrying an impatient emotion label; the corpus holds entries for the different emotion labels under the refund demand scenario, and the entry matched with the refund demand intention carrying the impatient emotion label is screened out.
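Keying the answer corpus on the multi-dimensional labels could look like the following sketch (the corpus entries and label names are hypothetical examples, not from the patent):

```python
# Hypothetical answer corpus keyed by (demand intention, emotion label).
ANSWER_CORPUS = {
    ("refund", "impatient"): "I understand this is urgent; let me process your refund right away.",
    ("refund", "neutral"): "Sure, I can help with the refund. May I confirm the order number?",
    ("consult", "neutral"): "Happy to help. What would you like to know?",
}

def pick_answer(intention_label: str, emotion_label: str) -> str:
    """Return the corpus entry matching both labels, falling back to the
    neutral variant of the same demand intention."""
    fallback = ANSWER_CORPUS.get((intention_label, "neutral"),
                                 "Could you tell me a bit more about what you need?")
    return ANSWER_CORPUS.get((intention_label, emotion_label), fallback)

print(pick_answer("refund", "impatient"))
```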
Further, as a specific implementation of the method shown in FIG. 1, an embodiment of the present invention provides a capsule network-based intention recognition apparatus. As shown in FIG. 4, the apparatus includes: an acquisition unit 31, a construction unit 32, and a recognition unit 33.
The acquisition unit 31 may be configured to acquire the interaction information involved in an interactive dialog and label the interaction information to form intention labels in different dimensions;
the construction unit 32 may be configured to vectorize the interaction information carrying the intention labels in different dimensions and input it into the capsule network model for training to construct an intention recognition model, wherein the intention recognition model combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels;
the recognition unit 33 may be configured to respond to an intention recognition instruction for a target interactive dialog, vectorize the interaction information in the target interactive dialog, and input it into the intention recognition model for intention recognition, to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions.
The capsule network-based intention recognition apparatus provided by the embodiment of the present invention acquires the interaction information involved in an interactive dialog and labels it to form intention labels in different dimensions; vectorizes the interaction information carrying the intention labels in different dimensions and inputs it into a capsule network model for training to construct an intention recognition model, which combines the interaction information in different dimensions to recognize an intention recognition result fusing the multi-dimensional intention labels; and, in response to an intention recognition instruction for a target interactive dialog, vectorizes the interaction information in the target interactive dialog and inputs it into the intention recognition model for intention recognition, obtaining the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions. Compared with the prior-art approach of recognizing intentions from a single dimension, performing intention recognition on the information features of the interaction information in different dimensions through the intention recognition model allows the interaction information to be examined from multi-dimensional information features, increases the generalization ability of intention recognition, fuses multi-modal information into the recognition process, saves memory and machine resources, and provides a more accurate intention recognition result.
As a further description of the capsule network-based intention recognition apparatus shown in FIG. 4, FIG. 5 is a schematic structural diagram of another capsule network-based intention recognition apparatus according to an embodiment of the present invention. As shown in FIG. 5, the construction unit 32 includes:
the abstraction module 321 may be configured to abstract information of an intention tag in a text intention dimension, information of an intention tag in a text emotion dimension, and information of an intention tag in a face emotion dimension from the interaction information, as interaction information carrying intention tags in different dimensions;
the building module 322 may be configured to quantize the interaction information carrying the different-dimensional intention labels to represent vector matrices of the different-dimensional intention labels, and input the vector matrices into the capsule network model for training to build the intention recognition model.
In a specific application scenario, the capsule network model is composed of a multi-layer structure, and the construction module 322 includes:
the extraction submodule 3221, which may be configured to perform feature extraction, using the encoding layer of the capsule network model, on the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions, to obtain the information features of the intention labels in different dimensions;
the weighting submodule 3222, which may be configured to weight the information features of the intention labels in different dimensions using the decoding layer of the capsule network model, and take the probability values of the intention labels of the interaction information in different dimensions as the intention recognition result to construct the intention recognition model.
In a specific application scenario, the extraction submodule 3221 may be specifically configured to convert, using the encoding layer of the capsule network model, the vector matrices formed by vectorizing the interaction information carrying the intention labels in different dimensions into a description vector of the intention label in the text intention dimension, a description vector of the intention label in the text emotion dimension, and a description vector of the intention label in the face emotion dimension;
the extraction submodule 3221 may be further configured to determine, for the information types corresponding to the interaction information, the feature extraction models suitable for preprocessing interaction information of the different information types;
the extraction submodule 3221 may be further configured to perform feature learning on the three description vectors using the feature extraction models suitable for preprocessing interaction information of the different information types, to obtain the information feature of the intention label in the text intention dimension, the information feature of the intention label in the text emotion dimension, and the information feature of the intention label in the face emotion dimension.
In a specific application scenario, the extraction submodule 3221 may be further configured to use, for interaction information of the text type, a recurrent neural network model as the feature extraction model for preprocessing the interaction information;
the extraction submodule 3221 may be further configured to use, for interaction information of the picture type, a convolutional neural network model as the feature extraction model for preprocessing the interaction information.
In a specific application scenario, the construction module 322 further includes:
the adjustment submodule 3223, which may be configured to, after the decoding layer of the capsule network model weights the information features of the intention labels in different dimensions and the probability values of the interaction information on the intention labels in different dimensions are taken as the intention recognition result to construct the intention recognition model, adjust the model parameters of the intention recognition model using a preset loss function in combination with the intention labels in different dimensions, so as to update the intention recognition model.
In a specific application scenario, the data related to the intention recognition model is stored in a blockchain, and the apparatus further includes:
the screening unit 34, which may be configured to, after the interaction information in the target interactive dialog is vectorized in response to the intention recognition instruction for the target interactive dialog and input into the intention recognition model for intention recognition to obtain the intention recognition result carrying the intention labels of the target interactive dialog in different dimensions, screen out, from the preset corpus, the corpus entries matched with the intention labels in different dimensions as the answer corpus, for the intention recognition result of the intention labels of the target interactive dialog in different dimensions.
It should be noted that, for other corresponding descriptions of the functional units of the capsule network-based intention recognition apparatus provided in this embodiment, reference may be made to the corresponding descriptions of FIG. 1 and FIG. 2, which are not repeated here.
Based on the above methods shown in fig. 1 and fig. 2, correspondingly, the present embodiment further provides a storage medium, on which a computer program is stored, and the program, when executed by a processor, implements the above capsule network-based intention identifying method shown in fig. 1 and fig. 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the implementation scenarios of the present application.
Based on the methods shown in fig. 1 and fig. 2 and the virtual device embodiments shown in fig. 4 and fig. 5, in order to achieve the above object, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, or the like; the physical device includes a storage medium and a processor, the storage medium being configured to store a computer program, and the processor being configured to execute the computer program so as to implement the above capsule network-based intention recognition method shown in fig. 1 and fig. 2.
Optionally, the computer device may also include a user interface, a network interface, a camera, radio frequency (RF) circuitry, sensors, audio circuitry, a Wi-Fi module, and so on. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface may also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth interface or a Wi-Fi interface), and the like.
Those skilled in the art will appreciate that the physical device structure of the capsule network-based intention recognition apparatus provided in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or have a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device described above, supporting the operation of information handling programs and other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and other hardware and software in the entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. Compared with the prior art, the above technical solution performs intention recognition on the information features of the interactive information in different dimensions through the intention recognition model, so that the interactive information can be examined from information features of multiple dimensions, which improves the generalization capability of intention recognition; moreover, multi-modal information is fused in the intention recognition process, which saves memory and machine resources and provides a more accurate intention recognition result.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above serial numbers of the implementation scenarios of the present application are for description only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application; the present application, however, is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present application.

Claims (10)

1. A capsule network-based intention recognition method, characterized in that the method comprises:
acquiring interactive information involved in an interactive dialog, and labeling the interactive information in combination with the intention information that the interactive information expresses in terms of text intention, text emotion and face emotion, so as to form intention labels in different dimensions;
vectorizing the interactive information carrying the intention labels in different dimensions, and inputting it into a capsule network model for training so as to construct an intention recognition model, wherein the intention recognition model recognizes, by combining the interactive information in different dimensions, an intention recognition result that fuses the intention labels in multiple dimensions; the capsule network model is a carrier containing a plurality of neurons, each neuron represents various attributes of a specific entity appearing in the interactive dialog, the attributes are instantiation parameters of different types formed by information features in different dimensions, and the existence probabilities of the intention labels of the interactive dialog in different dimensions are output by using the instantiation parameters of different types; the vectorizing of the interactive information specifically comprises: abstracting features from the interactive information and quantizing them to represent information features in different dimensions, wherein the information features mainly comprise information features in the text intention dimension, information features in the text emotion dimension and information features in the face emotion dimension;
in response to an intention identification instruction for a target interactive dialog, vectorizing the interactive information in the target interactive dialog and inputting it into the intention recognition model for intention identification, so as to obtain an intention identification result carrying the intention labels of the target interactive dialog in different dimensions.
2. The method according to claim 1, wherein the vectorizing the interactive information carrying the intention labels in different dimensions and inputting it into a capsule network model for training so as to construct an intention recognition model specifically comprises:
abstracting, from the interactive information, the information of the intention label in the text intention dimension, the information of the intention label in the text emotion dimension and the information of the intention label in the face emotion dimension, as the interactive information carrying the intention labels in different dimensions;
and quantizing the interactive information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions, and inputting the vector matrices into the capsule network model for training so as to construct the intention recognition model.
3. The method according to claim 2, wherein the capsule network model is composed of a multi-layer structure, and the quantizing the interactive information carrying the intention labels in different dimensions into vector matrices representing the intention labels in different dimensions and inputting the vector matrices into the capsule network model for training so as to construct the intention recognition model specifically comprises:
performing, by using a coding layer of the capsule network model, feature extraction on the vector matrix formed by vectorizing the interactive information carrying the intention labels in different dimensions, so as to obtain the information features of the intention labels in different dimensions;
and weighting, by using a decoding layer of the capsule network model, the information features of the intention labels in different dimensions, and constructing the intention recognition model by taking the probability values of the interactive information on the intention labels in different dimensions as the intention recognition result.
4. The method according to claim 3, wherein the performing, by using the coding layer of the capsule network model, feature extraction on the vector matrix formed by vectorizing the interactive information carrying the intention labels in different dimensions so as to obtain the information features of the intention labels in different dimensions specifically comprises:
converting, by using the coding layer of the capsule network model, the vector matrix formed by vectorizing the interactive information carrying the intention labels in different dimensions into a description vector of the intention label in the text intention dimension, a description vector of the intention label in the text emotion dimension and a description vector of the intention label in the face emotion dimension;
determining, according to the information type corresponding to the interactive information, a feature extraction model suitable for preprocessing interactive information of each information type;
and performing feature learning on the description vector of the intention label in the text intention dimension, the description vector of the intention label in the text emotion dimension and the description vector of the intention label in the face emotion dimension by using the feature extraction models suitable for preprocessing interactive information of the different information types, so as to obtain the information feature of the intention label in the text intention dimension, the information feature of the intention label in the text emotion dimension and the information feature of the intention label in the face emotion dimension.
5. The method according to claim 4, wherein the determining, according to the information type corresponding to the interactive information, a feature extraction model suitable for preprocessing interactive information of each information type specifically comprises:
for interactive information of the text type, using a recurrent neural network model as the feature extraction model for preprocessing the interactive information;
and for interactive information of the picture type, using a convolutional neural network model as the feature extraction model for preprocessing the interactive information.
6. The method according to claim 3, wherein after the weighting, by the decoding layer of the capsule network model, of the information features of the intention labels in different dimensions and the constructing of the intention recognition model by taking the probability values of the interactive information on the intention labels in different dimensions as the intention recognition result, the method further comprises:
adjusting the model parameters of the intention recognition model by using a preset loss function in combination with the intention labels in different dimensions, so as to update the intention recognition model.
7. The method according to any one of claims 1 to 6, wherein the data related to the intention recognition model is stored in a blockchain, and after the vectorizing, in response to the intention identification instruction for the target interactive dialog, of the interactive information in the target interactive dialog and the inputting thereof into the intention recognition model for intention identification so as to obtain the intention identification result carrying the intention labels of the target interactive dialog in different dimensions, the method further comprises:
screening out, from a preset corpus, corpora matched with the intention labels in different dimensions as answer corpora according to the intention identification result of the target interactive dialog in the different dimensions, and outputting the answer corpora.
8. An intention recognition apparatus based on a capsule network, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire interactive information involved in an interactive dialog, and label the interactive information in combination with the intention information that the interactive information expresses in terms of text intention, text emotion and face emotion, so as to form intention labels in different dimensions;
a building unit, configured to vectorize the interactive information carrying the intention labels in different dimensions and input it into a capsule network model for training so as to build an intention recognition model, wherein the intention recognition model recognizes, by combining the interactive information in different dimensions, an intention recognition result that fuses the intention labels in multiple dimensions; the capsule network model is a carrier containing a plurality of neurons, each neuron represents various attributes of a specific entity appearing in the interactive dialog, the attributes are instantiation parameters of different types formed by information features in different dimensions, and the existence probabilities of the intention labels of the interactive dialog in different dimensions are output by using the instantiation parameters of different types; the vectorizing of the interactive information specifically comprises: abstracting features from the interactive information and quantizing them to represent information features in different dimensions, wherein the information features mainly comprise information features in the text intention dimension, information features in the text emotion dimension and information features in the face emotion dimension;
and an identification unit, configured to, in response to an intention identification instruction for a target interactive dialog, vectorize the interactive information in the target interactive dialog and input it into the intention recognition model for intention identification, so as to obtain an intention identification result carrying the intention labels of the target interactive dialog in different dimensions.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202110807780.0A 2021-07-16 2021-07-16 Intention identification method and device based on capsule network Active CN113268994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110807780.0A CN113268994B (en) 2021-07-16 2021-07-16 Intention identification method and device based on capsule network

Publications (2)

Publication Number Publication Date
CN113268994A CN113268994A (en) 2021-08-17
CN113268994B true CN113268994B (en) 2021-10-01

Family

ID=77236564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110807780.0A Active CN113268994B (en) 2021-07-16 2021-07-16 Intention identification method and device based on capsule network

Country Status (1)

Country Link
CN (1) CN113268994B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449085B (en) * 2021-09-02 2021-11-26 华南师范大学 Multi-mode emotion classification method and device and electronic equipment
CN114969293A (en) * 2022-05-31 2022-08-30 支付宝(杭州)信息技术有限公司 Data processing method, device and equipment
CN116383027B (en) * 2023-06-05 2023-08-25 阿里巴巴(中国)有限公司 Man-machine interaction data processing method and server
CN116821691B (en) * 2023-08-28 2024-02-23 清华大学 Method and device for training emotion recognition model based on task fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913039A (en) * 2016-04-26 2016-08-31 北京光年无限科技有限公司 Visual-and-vocal sense based dialogue data interactive processing method and apparatus
CN110188195A (en) * 2019-04-29 2019-08-30 苏宁易购集团股份有限公司 A kind of text intension recognizing method, device and equipment based on deep learning
CN111144124A (en) * 2018-11-02 2020-05-12 华为技术有限公司 Training method of machine learning model, intention recognition method, related device and equipment
CN111625641A (en) * 2020-07-30 2020-09-04 浙江大学 Dialog intention recognition method and system based on multi-dimensional semantic interaction representation model
CN111767729A (en) * 2020-06-30 2020-10-13 北京百度网讯科技有限公司 Text classification method, device, equipment and storage medium
CN112100337A (en) * 2020-10-15 2020-12-18 平安科技(深圳)有限公司 Emotion recognition method and device in interactive conversation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144542A1 (en) * 2018-01-26 2019-08-01 Institute Of Software Chinese Academy Of Sciences Affective interaction systems, devices, and methods based on affective computing user interface
US20190318262A1 (en) * 2018-04-11 2019-10-17 Christine Meinders Tool for designing artificial intelligence systems
US11455527B2 (en) * 2019-06-14 2022-09-27 International Business Machines Corporation Classification of sparsely labeled text documents while preserving semantics
CN111651604B (en) * 2020-06-04 2023-11-10 腾讯科技(深圳)有限公司 Emotion classification method and related device based on artificial intelligence
CN112231477B (en) * 2020-10-20 2023-09-22 淮阴工学院 Text classification method based on improved capsule network
CN112487989B (en) * 2020-12-01 2022-07-15 重庆邮电大学 Video expression recognition method based on capsule-long-and-short-term memory neural network

Also Published As

Publication number Publication date
CN113268994A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN113268994B (en) Intention identification method and device based on capsule network
CN109101537B (en) Multi-turn dialogue data classification method and device based on deep learning and electronic equipment
CN110990543A (en) Intelligent conversation generation method and device, computer equipment and computer storage medium
US20200097820A1 (en) Method and apparatus for classifying class, to which sentence belongs, using deep neural network
CN111695352A (en) Grading method and device based on semantic analysis, terminal equipment and storage medium
CN114357973B (en) Intention recognition method and device, electronic equipment and storage medium
CN112100337B (en) Emotion recognition method and device in interactive dialogue
CN112732911A (en) Semantic recognition-based conversational recommendation method, device, equipment and storage medium
KR102315830B1 (en) Emotional Classification Method in Dialogue using Word-level Emotion Embedding based on Semi-Supervised Learning and LSTM model
CN108470188B (en) Interaction method based on image analysis and electronic equipment
CN110765294B (en) Image searching method and device, terminal equipment and storage medium
CN114298121A (en) Multi-mode-based text generation method, model training method and device
CN110704586A (en) Information processing method and system
CN111666400B (en) Message acquisition method, device, computer equipment and storage medium
CN114972823A (en) Data processing method, device, equipment and computer medium
CN111159358A (en) Multi-intention recognition training and using method and device
CN115713797A (en) Method for training emotion recognition model, emotion recognition method and device
CN112966568A (en) Video customer service quality analysis method and device
CN117521675A (en) Information processing method, device, equipment and storage medium based on large language model
CN116226785A (en) Target object recognition method, multi-mode recognition model training method and device
CN113128284A (en) Multi-mode emotion recognition method and device
CN110795531B (en) Intention identification method, device and storage medium
CN117349402A (en) Emotion cause pair identification method and system based on machine reading understanding
CN116543798A (en) Emotion recognition method and device based on multiple classifiers, electronic equipment and medium
CN113869068A (en) Scene service recommendation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant