WO2021042904A1 - Conversation intent recognition method and apparatus, computer device, and storage medium - Google Patents
Conversation intent recognition method and apparatus, computer device, and storage medium
- Publication number
- WO2021042904A1 (PCT/CN2020/104674, CN2020104674W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- conversation
- text
- feature
- intent
- message
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Definitions
- This application relates to the field of artificial intelligence technology, and in particular to a conversation intent recognition method and apparatus, a computer device, and a storage medium.
- A virtual user object is a software-implemented virtual object that can respond to user demands and communicate with the user.
- The inventor realized that the traditional way to recognize conversation intent mainly relies on keyword-matching tools from natural language processing (NLP) to recognize the intent of conversation messages.
- However, this method depends on a large amount of keyword labeling work and does not apply to non-text conversation messages such as drawings.
- A conversation intent recognition method, comprising: obtaining a conversation message; when the conversation message includes a conversation picture, extracting a graphic feature of the conversation picture; determining, according to the graphic feature, category label text corresponding to the conversation picture; fusing the graphic feature and the corresponding category label text to obtain a comprehensive feature; and performing intent recognition on the conversation message based on the comprehensive feature.
- A conversation intent recognition apparatus, comprising: a feature extraction module, configured to obtain a conversation message and, when the conversation message includes a conversation picture, extract a graphic feature of the conversation picture; a feature fusion module, configured to determine, according to the graphic feature, category label text corresponding to the conversation picture, and fuse the graphic feature and the corresponding category label text to obtain a comprehensive feature; and an intent recognition module, configured to perform intent recognition on the conversation message based on the comprehensive feature.
- A computer device includes a memory and a processor connected to each other, wherein the memory is used to store a computer program comprising program instructions, and the processor is used to execute the program instructions in the memory to: obtain a conversation message; when the conversation message includes a conversation picture, extract a graphic feature of the conversation picture; determine, according to the graphic feature, category label text corresponding to the conversation picture; fuse the graphic feature and the corresponding category label text to obtain a comprehensive feature; and perform intent recognition on the conversation message based on the comprehensive feature.
- A computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, implement the following steps: obtaining a conversation message; when the conversation message includes a conversation picture, extracting a graphic feature of the conversation picture; determining, according to the graphic feature, category label text corresponding to the conversation picture; fusing the graphic feature and the corresponding category label text to obtain a comprehensive feature; and performing intent recognition on the conversation message based on the comprehensive feature.
- With the above conversation intent recognition method, apparatus, computer device, and storage medium, the category label text corresponding to a conversation picture can be obtained quickly and accurately from the extracted graphic feature of the picture.
- The graphic feature and the corresponding category label text are cross-modally fused into a comprehensive feature, and the conversation intent of the conversation message is then identified based on that comprehensive feature.
- In this way, intent recognition both fully uses the graphic features of the conversation picture itself and incorporates the category information to which the picture belongs; the features of the conversation picture are thus used in detail and in full.
- When the conversation picture is understood, the dual guidance of the graphic feature and the category label text is obtained, which greatly improves the accuracy of understanding the conversation picture.
- Fig. 1 is an application scenario diagram of a conversation intent recognition method in an embodiment.
- Fig. 2 is a schematic flowchart of a conversation intent recognition method in an embodiment.
- Fig. 3 is a structural block diagram of a conversation intent recognition apparatus in an embodiment.
- Fig. 4 is a diagram of the internal structure of a computer device in an embodiment.
- The conversation intent recognition method provided in this application can be applied to the application environment shown in FIG. 1.
- the terminal 102 and the server 104 communicate through the network.
- the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
- the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
- A conversational application runs on the terminal 102. Through the conversational application, users can converse with virtual user objects.
- The conversation message processing method can be implemented on the terminal 102 or the server 104.
- When the user submits a conversation message through the conversational application, the terminal 102 may directly perform intent recognition on the conversation message, or may send the conversation message to the server 104 after obtaining it, so that the server 104 performs intent recognition on the conversation message.
- the conversation message submitted by the user is used to reply to the conversation message sent by the virtual user object.
- For ease of description, the conversation message sent by the virtual user object is hereinafter recorded as the above conversation message,
- and the conversation message submitted by the user is hereinafter recorded as the following conversation message.
- A conversation intent recognition method is provided; it is described using its application to the server in FIG. 1 as an example, and includes the following steps.
- Step 202: Obtain a conversation message.
- A conversational application may be an application in which a user exchanges conversation messages with other users or virtual user objects to achieve different social purposes.
- the conversational application may specifically be an instant messaging application, an intelligent customer service application, a skill sparring application, and so on.
- The skill sparring application is an application in which a virtual user object, acting as a user in one role, conducts a simulated conversation with a user in another role who is to be trained, so as to improve the skills of the trainee.
- For example, the virtual user object acts as a customer conversing with a salesperson to improve the salesperson's service ability, or as a student or parent conversing with a teacher to improve the teacher's teaching level.
- the skill sparring application includes multiple dialogue components such as narration dialogue, fixed dialogue, fixed question and answer, intention dialogue and scoring dialogue, and supports multi-branch dialogue. Users can freely drag and drop multiple dialog components to quickly create dialog flow tasks, and publish pre-configured dialog flow tasks to users to be trained for practice. Specifically, by dragging and dropping different dialog components, practice dialogs of different conversation types can be generated. For example, based on the dialogue component "intent dialogue”, the conversation type can be realized as "intent recognition”; based on the dialogue component “scoring dialogue”, the conversation type can be realized as "professional scoring” and so on.
- Each group of practice dialogues includes the preset above conversation messages and the corresponding following reference messages.
- The user can configure the avatar and facial expressions of the virtual object that delivers the above conversation message.
- Users can also configure the conversation mode of each group of practice dialogues. The conversation mode is the designated way for the user to reply to the above conversation message, such as oral explanation or graphic explanation.
- When configuring the following reference message of an above conversation message whose conversation mode is "graphic explanation", the user needs to pre-configure a corresponding reference explanation diagram.
- the reference explanation diagram is divided into multiple explanation steps. The whole reference explanation diagram is disassembled into multiple step diagrams according to the explanation steps.
- Multiple groups of practice dialogues are arranged in a certain order to form a dialogue flow task. A dialogue flow task may have one or more conversation branches; that is, after the practice dialogue of the current order ends, there are multiple practice dialogues of the next order. According to the conversation type of the current practice dialogue, analysis such as intent recognition or scoring is performed on the current practice conversation, and the analysis result determines which conversation branch to jump to.
- the virtual user object displays the previous conversation messages in the current sequence of practice conversations in the conversation window.
- the user can enter the following conversation messages in the conversation window by means of oral explanations or graphic explanations.
- For a practice dialogue whose conversation type is "professional scoring" and whose conversation mode is "graphic explanation", the user needs to draw and explain according to the prompts, and enter a following conversation message in picture format in the conversation window (recorded as the following conversation picture).
- Listening for the following conversation message used to reply to the above conversation message includes: displaying the above conversation message of the current conversation branch; determining the conversation mode of the current conversation branch; when the conversation mode is graphic explanation, displaying a drawing page; and listening for drawing operations on the drawing page to obtain the following conversation picture.
- If the conversation type of the current practice dialogue is "professional scoring" and its conversation mode is "graphic explanation", the terminal displays a drawing explanation prompt in the conversation window and displays the drawing page.
- the drawing page can be the conversation message entry area in the conversation window, or it can be another page different from the conversation window.
- Step 204: When the conversation message includes a conversation picture, extract the graphic feature of the conversation picture.
- the server extracts the feature of the conversation picture based on the pre-trained first model.
- the model is a model composed of an artificial neural network.
- The neural network model can specifically be a CNN (Convolutional Neural Network) model such as the VGG (Visual Geometry Group) network model, the GoogLeNet model, or a ResNet (Residual Network) model; a DNN (Deep Neural Network) model; or an RNN (Recurrent Neural Network) model such as an LSTM (Long Short-Term Memory) network model.
- The graphic feature may specifically be data that the computer device extracts from the following conversation picture to represent the shape of, or spatial relationships within, the picture, yielding a "non-picture" representation or description of the picture, such as a value, a vector, or a symbol.
- the first model may specifically be a convolutional neural network model, such as ResNet-80.
- the computer device can input the following conversation picture into the first model, and extract the graphic features of the following conversation picture through the first model.
- For example, the computer device can input the following conversation picture into the convolutional neural network model, perform convolution processing on it through the convolutional layers, and extract the feature map of the following conversation picture, which is the graphic feature in this embodiment.
- In one embodiment, the first model is trained on a large number of hand-drawn pictures from a graphics library (ImageNet) and their corresponding category labels, yielding a model for classifying the following conversation pictures.
- The computer device inputs the hand-drawn picture into the first model, extracts the graphic feature of the hand-drawn picture through the convolutional layer structure of the first model, and determines the corresponding category label text of the hand-drawn picture through the pooling layer structure and/or fully connected layer structure of the first model.
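- As a concrete illustration of this extraction step, below is a minimal PyTorch sketch. It assumes a recent torchvision and uses a standard ResNet-50 backbone as a stand-in for the pre-trained first model (the "ResNet-80" named above is not a standard torchvision variant), with a placeholder tensor in place of a real conversation picture.

```python
import torch
import torchvision.models as models

# Stand-in for the pre-trained "first model": a ResNet backbone whose
# convolutional layers produce the feature map (the graphic feature).
# weights=None keeps the sketch self-contained; in practice the model
# trained on the hand-drawn corpus would be loaded here.
backbone = models.resnet50(weights=None)
backbone.eval()

# Keep all layers up to (but excluding) global pooling and the FC head.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

def extract_graphic_feature(picture: torch.Tensor) -> torch.Tensor:
    """picture: (1, 3, H, W) normalized tensor of the conversation picture."""
    with torch.no_grad():
        return feature_extractor(picture)  # (1, 2048, H/32, W/32)

conversation_picture = torch.randn(1, 3, 224, 224)  # placeholder input
graphic_feature = extract_graphic_feature(conversation_picture)
print(graphic_feature.shape)  # torch.Size([1, 2048, 7, 7])
```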
- Step 206: Determine the category label text corresponding to the conversation picture according to the graphic feature.
- the category label text is the label text corresponding to the category to which the conversation picture below belongs.
- Specifically, the computer device may extract the graphic feature through the first model, classify the extracted graphic feature to obtain the category of the following conversation picture, and then determine the corresponding category label text of that picture.
- the first model may specifically be a convolutional neural network model.
- The computer device can input the following conversation picture into the convolutional neural network model to extract its graphic feature. The graphic feature is then processed through the pooling layer and the fully connected layer to obtain the probability values of the categories to which the following conversation picture may belong. The category label corresponding to the maximum probability value is used as the category label corresponding to the following conversation picture.
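- A minimal sketch of this classification head follows, continuing in PyTorch. The label vocabulary is invented for illustration (it borrows the example labels used further below); the real first model carries its own label set learned from the hand-drawn training data.

```python
import torch

# Hypothetical label set; the patent's examples include "expenditure",
# "income", "lifetime", and "house".
category_labels = ["expenditure", "income", "lifetime", "house"]

pool = torch.nn.AdaptiveAvgPool2d(1)               # pooling layer
fc = torch.nn.Linear(2048, len(category_labels))   # fully connected layer

def classify(feature_map: torch.Tensor) -> str:
    pooled = pool(feature_map).flatten(1)          # (1, 2048)
    probs = torch.softmax(fc(pooled), dim=-1)      # per-category probabilities
    return category_labels[int(probs.argmax(dim=-1))]  # max-probability label

feature_map = torch.randn(1, 2048, 7, 7)  # graphic feature from the backbone
label_text = classify(feature_map)
```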
- Step 208: Fuse the graphic feature and the corresponding category label text to obtain a comprehensive feature.
- the server extracts the text features of the category label text based on the pre-trained natural language model, and performs cross-modal fusion of the graphic features and the text features.
- cross-modal fusion is the fusion of data with different modalities.
- the data of different modalities specifically refer to the graphic features corresponding to the following conversation pictures and the text data corresponding to the category label text.
- the computer device can map the extracted graphic features and the corresponding category label text to data in the same space, and then perform fusion processing on the mapped data to obtain comprehensive features.
- the graphic features of the following conversational pictures are extracted through the first model.
- the computer equipment can extract the text features of the category label text through the cyclic neural network.
- Both the graphic feature and the text feature can be expressed in vector form.
- Before fusing the graphic feature and the text feature, the computer device can convert both into a standard form so that the two feature vectors lie in the same range.
- For example, the graphic feature and the text feature can each be normalized. Commonly used normalization algorithms include the function method and the probability density method.
- Function methods include, for example, the max-min function, the mean-variance function (which normalizes features to a consistent range, such as one with mean 0 and variance 1), and the sigmoid (S-shaped growth curve) function.
- the computer device can perform a fusion operation on the normalized graphic feature and the text feature corresponding to the corresponding category label text to obtain a comprehensive feature.
- the algorithm for fusing graphic features and text features can specifically adopt algorithms based on Bayesian decision theory, algorithms based on sparse representation theory, or algorithms based on deep learning theory.
- Alternatively, the computer device may perform a weighted summation of the two normalized vectors to fuse the graphic feature and the text feature into the comprehensive feature.
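- The following sketch illustrates this normalize-then-weighted-sum fusion. The vector dimensionality and the weighting coefficient are arbitrary assumptions; the patent fixes neither.

```python
import torch

def z_normalize(v: torch.Tensor) -> torch.Tensor:
    # Mean-variance normalization: shift/scale to mean 0, variance 1.
    return (v - v.mean()) / (v.std() + 1e-8)

def fuse_by_weighted_sum(graphic_vec: torch.Tensor,
                         text_vec: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of the normalized graphic and text features.
    Assumes both vectors were already projected to the same dimension."""
    return alpha * z_normalize(graphic_vec) + (1.0 - alpha) * z_normalize(text_vec)

graphic_vec = torch.randn(512)  # pooled graphic feature (placeholder)
text_vec = torch.randn(512)     # RNN feature of the label text (placeholder)
comprehensive = fuse_by_weighted_sum(graphic_vec, text_vec)
```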
- In one embodiment, the computer device can extract the text feature of the category label text through a recurrent neural network, and perform attention distribution processing (attention processing) on the graphic feature and the text feature to obtain attention distribution weights (attention values); the attention values are then combined with the graphic feature to obtain the comprehensive feature.
- Attention processing can be understood as selectively filtering a small amount of important information out of a large amount of information and focusing on it while ignoring the mostly unimportant remainder. The focusing process is reflected in the calculation of the attention distribution weights: the larger an attention distribution weight, the more the model focuses on the corresponding graphic feature.
- Step 210: Perform intent recognition on the conversation message based on the comprehensive feature.
- the server processes the comprehensive features through the second model, and outputs the conversation intention of the conversation picture, such as recognizing the objects in the conversation picture below, understanding the relationship between the objects, and so on.
- Conversational intention can be represented in the form of a word, a whole sentence, or paragraph text.
- the second model may specifically be a recurrent neural network model, such as an LSTM model.
- In one embodiment, performing intent recognition on the conversation message based on the comprehensive feature includes: obtaining intent pre-description text corresponding to the conversation picture; generating a predicted feature of the conversation picture based on each word vector of the intent pre-description text; and inputting the comprehensive feature and the predicted feature into the pre-trained model, which outputs the conversation intent of the drawing picture.
- the intention pre-description text is the text that describes the following conversation pictures in advance.
- The intent pre-description text can be regarded as an initial, rough description text obtained after the following conversation picture has been understood.
- the computer device may obtain the intent pre-description text corresponding to the following conversation picture, and obtain each word vector of the intent pre-description text.
- The computer device can adopt an encoder-decoder approach: the comprehensive feature is input at the first time step, each word vector is input at the subsequent time steps, and the second model processes the sequentially input comprehensive feature and word vectors to output the conversation intent of the conversation message.
- In this way, the second model can combine the comprehensive feature with the intent pre-description text, so that the output conversation intent better matches the real intent expressed by the following conversation picture, greatly improving the accuracy of graphic understanding.
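- A minimal sketch of this decoding scheme follows, assuming an LSTM as the second model; all dimensions, the vocabulary size, and the greedy argmax decoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

feature_dim, hidden_dim, vocab_size = 512, 512, 10000  # assumed sizes

lstm = nn.LSTM(input_size=feature_dim, hidden_size=hidden_dim, batch_first=True)
to_vocab = nn.Linear(hidden_dim, vocab_size)

comprehensive = torch.randn(1, 1, feature_dim)   # input at the first time step
word_vectors = torch.randn(1, 8, feature_dim)    # pre-description word vectors

# The comprehensive feature leads the sequence; the word vectors follow.
inputs = torch.cat([comprehensive, word_vectors], dim=1)  # (1, 9, feature_dim)
outputs, _ = lstm(inputs)                                 # (1, 9, hidden_dim)
logits = to_vocab(outputs)                                # per-step vocab scores
predicted_tokens = logits.argmax(dim=-1)  # greedy decode of the intent words
```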
- In the above conversation intent recognition method, the category label text corresponding to the conversation picture can be obtained quickly and accurately from the extracted graphic feature.
- The graphic feature and the corresponding category label text are cross-modally fused into a comprehensive feature, and the conversation intent of the conversation message is then identified based on that comprehensive feature.
- In this way, intent recognition fully uses the graphic features of the conversation picture itself while also drawing on the category information to which the picture belongs, so the features of the conversation picture are used in detail and in full.
- In one embodiment, obtaining the conversation message includes: listening for the following conversation message used to reply to the above conversation message in the current conversation branch; calculating the message data amount of the following conversation message; when the conversation mode of the current conversation branch is intent recognition, obtaining the intent level of the above conversation message; and determining the intent recognition strategy for the following conversation message according to the message data amount and the intent level. When the conversation message includes a conversation picture, extracting the graphic feature of the conversation picture includes: when the intent recognition strategy is model recognition and the following conversation message includes a conversation picture, extracting the graphic feature of the conversation picture through the pre-trained model.
- Each practice dialogue whose conversation mode is "intent recognition" has a corresponding intent level. If the practice dialogue preceding an "intent recognition" practice dialogue uses another conversation mode, the intent level of that "intent recognition" practice dialogue is the first level. If the preceding practice dialogue is also in the "intent recognition" conversation mode, the intent level of the current practice dialogue is one level below that of the preceding one. For example, if the preceding "intent recognition" practice dialogue has the second intent level, the current "intent recognition" practice dialogue has the third intent level, and so on.
- the computer equipment is preset with a variety of intent recognition strategies, and different intent recognition strategies can be used in different situations to recognize the intent of the following conversation messages according to requirements.
- the intention recognition strategy of this embodiment includes rule matching and model recognition.
- Rule matching may perform intent recognition by checking whether the following conversation message contains preset keywords that can represent a certain conversation intent.
- Model recognition may be the above-mentioned way of intent recognition based on the first model and the second model. It is easy to understand that more intent recognition strategies can also be preset, such as intent recognition based on the LDA model, which is not limited.
- Each intent recognition strategy has corresponding usage conditions. The usage condition may be that one or more indicators of the following conversation messages reach the threshold respectively.
- The indicators specifically include the message data amount, the intent level of the current conversation branch, and the business scenario to which the message belongs.
- the amount of message data can be determined according to the length of the included text or the size of the picture involved. For example, when the amount of message data in the following conversation messages is large or the intent level is relatively low, rule matching may be preferred.
- In the above embodiment, intent recognition is first attempted with the computationally simple rule-matching method, and model-based intent recognition is performed only when rule matching is not applicable; this both saves the computing resources of the computer device and ensures the accuracy of intent recognition.
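- The dispatch between the two strategies can be sketched as follows. The keyword rules, thresholds, and level cut-off are all invented for illustration; the patent only states that each strategy has usage conditions on indicators such as the message data amount and intent level.

```python
# Invented keyword rules standing in for the preset rule-matching table.
KEYWORD_RULES = {"refund": "request_refund", "price": "ask_price"}

DATA_AMOUNT_THRESHOLD = 1024  # assumed threshold on the message data amount
LOW_INTENT_LEVEL = 2          # assumed cut-off for "relatively low" levels

def match_keywords(text: str):
    """Cheap rule matching: return an intent if any keyword is present."""
    for keyword, intent in KEYWORD_RULES.items():
        if keyword in text:
            return intent
    return None

def choose_strategy(message_data_amount: int, intent_level: int) -> str:
    # Prefer rule matching for large messages or low intent levels.
    if message_data_amount >= DATA_AMOUNT_THRESHOLD or intent_level <= LOW_INTENT_LEVEL:
        return "rule_matching"
    return "model_recognition"

strategy = choose_strategy(message_data_amount=2048, intent_level=1)
if strategy == "rule_matching":
    intent = match_keywords("how much is the price of this policy?")
    print(intent or "no rule hit: fall back to model recognition")
```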
- In one embodiment, cross-modally fusing the graphic feature and the corresponding category label text to obtain the comprehensive feature includes: determining the coded data corresponding to the category label text; performing attention distribution processing on the graphic feature according to the coded data to obtain attention weights; and performing weighted splicing on the graphic feature based on the attention weights to obtain the comprehensive feature.
- the encoded data is data obtained by encoding the text of the category label.
- unipolar codes, polar codes, bipolar codes, return-to-zero codes, bi-phase codes, non-return-to-zero codes, Manchester encoding, differential Manchester encoding, multi-level encoding, etc. can be used for encoding.
- In one embodiment, the computer device may preset a mapping relationship between category label texts and coded data, and determine the coded data corresponding to a category label text according to that mapping. For example, it can be preset that the category label text "expenditure" corresponds to the coded data "0001", "income" to "0002", "lifetime" to "0003", "house" to "0101", and so on.
- When the computer device determines that the category label corresponding to the graphic feature is "expenditure", it can thus determine the corresponding coded data "0001".
- the computer device can extract the text feature of the category label text through the cyclic neural network, and use the corresponding text feature as the coded data corresponding to the category label text.
- The computer device can perform attention distribution processing on the graphic feature according to the coded data to obtain the attention weights.
- Specifically, the computer device can map the coded data and the graphic feature into standard vectors in the same space according to preset standardization rules, and then perform a dot-product operation on the two standard vectors to obtain an intermediate result.
- The intermediate result is then sequentially subjected to pooling (such as sum pooling) and regression (such as softmax) to obtain the attention weights.
- the computer equipment can combine the attention weight with the corresponding graphic feature to obtain the weighted comprehensive feature.
- the computer device can use the attention model to realize the step of cross-modal fusion of graphic features and corresponding category label text to obtain comprehensive features.
- The graphic feature and the corresponding category label text are input into the attention model, which automatically learns weights through its network structure to obtain the attention weights; the attention weights are then combined with the graphic feature to obtain the comprehensive feature. In the resulting comprehensive feature, the regions on which the attention model focuses more carry larger weights.
- In the above embodiment, attention distribution processing is performed on the graphic feature and the corresponding coded data to obtain the attention weights, which are then combined with the graphic feature to obtain the comprehensive feature. The more important an element is in the comprehensive feature, the larger its weight, so target elements can be focused on during graphics processing; this greatly improves the accuracy of graphic understanding and the computer device's ability to understand conversation graphics.
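- A sketch of this attention fusion follows. The projection dimensions and the number of feature-map regions are assumptions; the dot product followed by sum pooling and softmax mirrors the procedure described above.

```python
import torch

d_model = 256
n_regions = 49                          # e.g. a 7x7 feature map, flattened

graphic = torch.randn(n_regions, 512)   # per-region graphic features
coded = torch.randn(64)                 # coded data of the category label text

# Map both modalities into standard vectors in the same space.
map_graphic = torch.nn.Linear(512, d_model)
map_coded = torch.nn.Linear(64, d_model)
g = map_graphic(graphic)                # (49, d_model)
t = map_coded(coded)                    # (d_model,)

# Element-wise product summed over the feature dimension: a per-region dot
# product (the sum pooling of the intermediate result), then softmax.
scores = (g * t).sum(dim=-1)            # (49,)
weights = torch.softmax(scores, dim=0)  # attention weights, one per region

# Weighted splicing: weight each region's graphic feature and aggregate.
comprehensive = (weights.unsqueeze(-1) * graphic).sum(dim=0)  # (512,)
```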
- In one embodiment, the above conversation intent recognition method further includes: when the conversation message includes conversation text, determining the part of speech of each word segment in the conversation text; obtaining the intent pre-description text associated with each word segment whose part of speech belongs to a target part of speech; and when the intent pre-description texts associated with multiple word segments are the same, generating the intent description text of the conversation text according to that intent pre-description text.
- the computer device pre-stores a variety of parts of speech, and each part of speech is associated with a corresponding intent pre-description text.
- Part of speech refers to classifying words into categories according to their characteristics.
- the part of speech specifically includes word slots, characteristic words, wildcards, and so on.
- A word slot is a query condition under a scenario reply intent, such as the time and place in a weather intent or the date and destination in a ticket-booking intent; it can be used as a condition for managing dialogue logic.
- Feature words are words bearing a certain type of feature; any word meeting that feature condition can be represented by the feature word. Wildcards are special expressions used for fuzzy matching. For example, in the conversation text "Book a train ticket from Beijing to Shanghai, thanks", "Beijing" and "Shanghai" are word slots, "book", "to", and "train ticket" are feature words, and "thanks" is a wildcard.
- The computer device checks whether the part of speech of each word segment in the conversation text matches a pre-stored part of speech (recorded as a target part of speech). If only one word segment has a target part of speech, or multiple word segments have target parts of speech that are all the same, the computer device directly determines the intent pre-description text associated with that target part of speech as the final intent description text of the conversation text. If multiple word segments have target parts of speech and those target parts of speech differ, the computer device can perform intent recognition in the model-based manner described above.
- In another embodiment, when multiple word segments have differing target parts of speech, the computer device may instead determine the intent pre-description text associated with the word segment that appears first in the conversation text as the final intent description text of the conversation text.
- In this embodiment, intent recognition is performed preferentially based on the association between parts of speech and the different intent pre-description texts; only when the intent cannot be accurately recognized from this association is model-based intent recognition performed, which simplifies the intent recognition logic and saves the computing resources of the computer device.
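- This part-of-speech shortcut can be sketched as below; the tags and associated pre-description texts are invented placeholders.

```python
# Hypothetical association table: target part of speech -> intent pre-description text.
INTENT_BY_TARGET_POS = {
    "word_slot/destination": "book a ticket to a destination",
    "word_slot/time": "query the weather at a time",
}

def intent_from_pos(segments):
    """segments: list of (word, part_of_speech) pairs from the conversation text.
    Returns the shared pre-description text, or None to fall back to the model."""
    texts = {INTENT_BY_TARGET_POS[pos] for _, pos in segments
             if pos in INTENT_BY_TARGET_POS}
    if len(texts) == 1:   # one target POS hit, or all hits agree
        return texts.pop()
    return None           # no hit, or conflicting hits: use model recognition

print(intent_from_pos([("Beijing", "word_slot/destination"),
                       ("book", "feature_word")]))
```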
- In one embodiment, the above conversation intent recognition method further includes: when the intent description texts associated with multiple word segments differ, generating a part-of-speech vector for each word segment according to its part of speech; inputting the word vectors of the multiple word segments into a third model to obtain the topic vector corresponding to the conversation text; fusing the word vectors, part-of-speech vectors, and topic vector of the multiple word segments to obtain the feature vector corresponding to the conversation text; and processing the feature vector through a fourth model to obtain the intent description text of the conversation text.
- the intention dialogue component has a corresponding intention recognition mode.
- Intent recognition modes include "fixed intent" and "custom intent". By dragging and dropping different intent dialogue components, practice dialogues with the conversation mode "intent recognition" containing one or more intent recognition modes can be generated.
- With a fixed intent, the user configures the dialogue flow task by selecting from the standard intents provided by the conversational application, and intent recognition is performed based on the intent recognition model.
- With a custom intent, the user configures the dialogue flow task with self-defined, non-standard intents, and intent recognition is performed based on the semantic analysis model.
- the semantic analysis model is a language model with natural language processing (NLP, natural language processing) capabilities after training.
- For example, the text in a preset corpus can be used as training data to learn a language model for extracting the semantics of text, such as the word2vec (word to vector) model used to generate word vectors, the ELMo (Embeddings from Language Models) model, or the BERT (Bidirectional Encoder Representations from Transformers) model.
- the pre-trained semantic analysis model has a fixed model structure and model parameters.
- the semantic analysis model includes a text feature extraction model and a similarity calculation model.
- Specifically, the computer device performs word segmentation on the text and removes, from the resulting word segments, stop words, punctuation marks, and other tokens that contribute little to representing the text's semantics, thereby improving the efficiency of subsequent text feature extraction.
- Stop words are words that appear in text more often than a preset threshold but carry little actual meaning, such as "I", "of", and "he".
- In one embodiment, the computer device may also perform synonym expansion on the obtained word segments. Synonyms are words whose meaning is the same as or close to the original segment; for example, if the original word is "awesome", synonyms can be "amazing", "remarkable", "outstanding", and so on.
- the computer device inputs the processed word segmentation into the pre-trained text feature extraction model to obtain the text feature of the conversational text.
- the text feature is a feature that represents the semantics of the text.
- the expression form of the text feature can be a vector form.
- The computer device computes, with the same logic, the text feature of the following reference message corresponding to the above conversation message in the current conversation branch.
- the computer device scores the current conversation text based on the similarity between the text feature of the conversation text and the text feature of the corresponding reference message below.
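- A sketch of this similarity scoring follows. The feature extractor is a deliberately trivial placeholder that derives a pseudo-random vector from the tokens; in the patent it is a pre-trained model such as word2vec, ELMo, or BERT.

```python
import torch

def extract_text_feature(tokens):
    """Placeholder text feature extractor; a real implementation would run a
    pre-trained language model over the word segments."""
    gen = torch.Generator().manual_seed(hash(" ".join(tokens)) % (2**31))
    return torch.randn(256, generator=gen)

answer = extract_text_feature(["the", "premium", "covers", "accidents"])
reference = extract_text_feature(["premium", "covers", "accident", "claims"])

# Cosine similarity between the reply text and the reference message is used
# as the basis of the score.
score = torch.cosine_similarity(answer, reference, dim=0)
print(float(score))  # in [-1, 1]; higher means closer to the reference
```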
- the intention recognition model includes the above-mentioned first model and the second model.
- the first model and the second model can perform intent recognition on conversation messages in a picture format.
- the intention recognition model further includes a third model and a fourth model, and the third model and the fourth model can perform intent recognition on conversation messages in text format.
- The computer device splices the word vectors of the multiple word segments in the conversation text according to their order of appearance, obtaining a first representation vector corresponding to the conversation text.
- The computer device inputs the first representation vector into the third model to obtain the topic vector corresponding to the conversation text.
- The third model may be a pre-trained LDA model or the like.
- The computer device generates a part-of-speech vector for each word segment in the conversation text according to the part of speech of that segment.
- The computer device splices the word vectors and part-of-speech vectors of the multiple word segments according to their order of appearance in the conversation text, obtaining a second representation vector corresponding to the conversation text.
- The computer device performs feature fusion of the second representation vector and the topic vector to obtain the feature vector corresponding to the conversation text.
- the computer device inputs the feature vector into the pre-trained classification model to obtain the matching probability of the conversational text and each preset intent.
- The computer device checks whether the highest matching probability reaches a threshold. If it does, the preset intent with the highest matching probability is determined as the conversation intent of the reply. If it does not, the classification result of the classification model is judged inaccurate, and the computer device performs part-of-speech tagging in the manner described above and determines the conversation intent from the intent pre-description text associated with the target part of speech.
- Alternatively, when the intent cannot be accurately identified based on the classification model, the user may be prompted to re-reply to the above conversation message; no limitation is imposed on this.
- In the above embodiment, intent recognition fully combines the part-of-speech features and topic features of each word segment in the conversation text, which can improve the accuracy of the intent recognition result.
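- The text-side pipeline can be sketched end to end as follows. All dimensions, the linear classifier standing in for the fourth model, the random topic vector standing in for the third (LDA-style) model's output, and the probability threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn

n_words, word_dim, pos_dim, topic_dim, n_intents = 6, 128, 16, 32, 10

word_vecs = torch.randn(n_words, word_dim)   # word vectors of the segments
pos_vecs = torch.randn(n_words, pos_dim)     # part-of-speech vectors
topic_vec = torch.randn(topic_dim)           # topic vector from the third model

# Splice word and POS vectors in order of appearance (second representation),
# then fuse with the topic vector to form the feature vector.
second_repr = torch.cat([word_vecs, pos_vecs], dim=-1).flatten()
feature_vec = torch.cat([second_repr, topic_vec])

classifier = nn.Linear(feature_vec.numel(), n_intents)  # stand-in fourth model
probs = torch.softmax(classifier(feature_vec), dim=-1)  # match probabilities

THRESHOLD = 0.5  # assumed cut-off
best_prob, best_intent = probs.max(dim=-1)
if float(best_prob) >= THRESHOLD:
    print("conversation intent:", int(best_intent))
else:
    print("classification unreliable: fall back to part-of-speech tagging")
```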
- Although the steps in the flowchart of FIG. 2 are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
- a conversation intention recognition device which includes: a feature extraction module 302, a feature fusion module 304, and an intention recognition module 306.
- the feature extraction module 302 is used to obtain a conversation message; when the conversation message includes a conversation picture, extract the graphic characteristics of the conversation picture.
- the feature fusion module 304 is used to determine the category label text corresponding to the conversation picture according to the graphic feature; merge the graphic feature and the corresponding category label text to obtain a comprehensive feature.
- the intention recognition module 306 is configured to perform intention recognition on the conversation message based on the comprehensive feature.
- In one embodiment, the feature extraction module 302 is further configured to listen for the following conversation message used to reply to the above conversation message in the current conversation branch; calculate the message data amount of the following conversation message; when the conversation mode of the current conversation branch is intent recognition, obtain the intent level of the above conversation message; determine the intent recognition strategy for the following conversation message according to the message data amount and the intent level; and, when the intent recognition strategy is model recognition and the following conversation message includes a conversation picture, extract the graphic feature of the conversation picture through the pre-trained model.
- In one embodiment, the feature fusion module 304 is further configured to determine the coded data corresponding to the category label text; perform attention distribution processing on the graphic feature according to the coded data to obtain the attention weights; and perform weighted splicing on the graphic feature based on the attention weights to obtain the comprehensive feature.
- In one embodiment, the intent recognition module 306 is further configured to obtain the intent pre-description text corresponding to the conversation picture; generate the predicted feature of the conversation picture based on each word vector of the intent pre-description text; and input the comprehensive feature and the predicted feature into the pre-trained model, which outputs the conversation intent of the drawing picture.
- In one embodiment, the intent recognition module 306 is further configured to determine, when the conversation message includes conversation text, the part of speech of each word segment in the conversation text; obtain the intent pre-description text associated with each word segment whose part of speech belongs to a target part of speech; and, when the intent pre-description texts associated with multiple word segments are the same, generate the intent description text of the conversation text according to that intent pre-description text.
- In one embodiment, the intent recognition module 306 is further configured to generate, when the intent description texts associated with multiple word segments differ, a part-of-speech vector for each word segment according to its part of speech; input the word vectors of the multiple word segments into the third model to obtain the topic vector corresponding to the conversation text; fuse the word vectors, part-of-speech vectors, and topic vector of the multiple word segments to obtain the feature vector corresponding to the conversation text; and process the feature vector through the fourth model to obtain the intent description text of the conversation text.
- Each module in the apparatus for recognizing the above-mentioned conversation intention may be implemented in whole or in part by software, hardware, and a combination thereof.
- The above modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
- a computer device is provided.
- the computer device may be a server, and its internal structure diagram may be as shown in FIG. 4.
- The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, a computer program, and a database.
- the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
- the database of the computer equipment is used to store dialog flow task information.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer program is executed by the processor to realize a method for recognizing the intent of a conversation.
- FIG. 4 is only a block diagram of the partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied.
- A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the method for identifying a conversational intention provided in any one of the embodiments of the present application are realized.
- Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Machine Translation (AREA)
Abstract
This application relates to a conversation intent recognition method and apparatus, a computer device, and a storage medium, which can be implemented in artificial intelligence. The method includes: obtaining a conversation message; when the conversation message includes a conversation picture, extracting a graphic feature of the conversation picture; determining, according to the graphic feature, category label text corresponding to the conversation picture; fusing the graphic feature and the corresponding category label text to obtain a comprehensive feature; and performing intent recognition on the conversation message based on the comprehensive feature. This method can accurately recognize the intent expressed by conversation messages in picture format.
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 6, 2019, with application number 201910842789.8 and invention title "Conversation intent recognition method and apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
本申请涉及人工智能技术领域,特别是涉及一种会话意图方法、装置、计算机设备和存储介质。
随着通信技术的发展,出现了很多可以发起会话的应用,用户可通过这些应用实现与真实的用户或虚拟用户对象之间的通信交流。其中,虚拟用户对象是通过软件实现的可以响应用户诉求的、且与用户进行交流的虚拟的用户对象。基于专业培训、服务质量监控、信息安全保证等需求,有时需要对用户与虚拟用户对象之间的会话消息进行监控。比如,在专业培训场景中,需要对用户答复虚拟用户对象的会话消息所表达意图进行监控。
发明人意识到,传统的进行会话意图识别的方式主要是利用神经语言程序学NLP(Neuro-Linguistic Programming)的关键词匹配工具识别会话消息的意图。但这种方法依赖大量的关键词标注工作,且对于绘图等非文本会话消息不再适用。
基于此,有必要针对上述技术问题,提供一种能够准确图片格式会话消息所表达意图的会话意图识别方法、装置、计算机设备和存储介质。
一种会话意图识别方法,所述方法包括:获取会话消息;当所述会话消息包括会话图片时,提取会话图片的图形特征;根据所述图形特征,确定与所述会话图片相应的类别标签文本;将所述图形特征和相应的类别标签文本进行融合,得到综合特征;基于所述综合特征对所述会话消息进行意图识别。
一种会话意图识别装置,所述装置包括:特征提取模块,用于获取会话消息;当所述会话消息包括会话图片时,提取会话图片的图形特征;特征融合模块,用于根据所述图形特征,确定与所述会话图片相应的类别标签文本;将图形特征和相应的类别标签文本进行融合,得到综合特征;意图识别模块,用于基于所述综合特征对所述会话消息进行意图识别。
一种计算机设备,包括存储器和处理器,所述处理器、和所述存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器用于执行所述存储器的所述程序指令,其中:获取会话消息;当所述会话消息包括会话图片时,提取会话图片的图形特征;根据所述图形特征,确定与所述会话图片相应的类别标签文本;将所述图形特征和相应的类别标签文本进行融合,得到综合特征;基于所述综合特征对所述会话消息进行意图识别。
一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行时,用于实现以下步骤:获取会话消息;当所述会话消息包括会话图片时,提取会话图片的图形特征;根据所述图形特征,确定与所述会话图片相应的类别标签文本;将所述图形特征和相应的类别标签文本进行融合,得到综合特征;基于所述综合特征对所述会话消息进行意图识别。
上述会话意图识别方法、装置、计算机设备和存储介质,根据提取得到的会话图片的图形特征,可以快速准确地获得会话图片相应的类别标签文本。将图形特征和相应的类别标签文本进行跨模态融合,得到综合特征,再根据综合特征,识别得到会话消息的会话意图。这样,可以使得在意图识别过程中既能充分利用会话图片本身的图形特征,又能结合会话图片所属的类别信息。这样细致且充分地利用了会话图片的特征,在对会话图片进行理解时,得到了图形特征和类别标签文本的双重指导,大大提高了会话图片理解信息的准确性。
图1为一个实施例中会话意图识别方法的应用场景图。
图2为一个实施例中会话意图识别方法的流程示意图。
图3为一个实施例中会话意图识别装置的结构框图。
图4为一个实施例中计算机设备的内部结构图。
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的会话意图识别方法,可以应用于如图1所示的应用环境中。其中,终端102与服务器104通过网络进行通信。其中,终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。终端102上运行了会话应用。基于会话应用,用户可以与虚拟用户对象进行会话。该会话消息处理方法可以在终端102或服务器104完成。当用户基于终端102上的会话应用提交会话消息时,终端102可以直接对会话消息进行意图识别,也可以在获取会话消息之后将会话消息发送至服务器104,由服务器104对会话消息进行意图识别。用户提交的会话消息用于答复虚拟用户对象发送的会话消息。为了描述方便,下文将虚拟用户对象发送的会话消息记作上文会话消息,将用户提交的会话消息记作上文会话消息。
在一个实施例中,如图2所示,提供了一种会话意图识别方法,以该方法应用于图1中的服务器为例进行说明,包括以下步骤。
步骤202,获取会话消息。
终端上运行了会话应用。会话应用可以是用户通过与其他用户或虚拟用户对象之间发送会话消息,以实现不同社交用途的应用。会话应用具体可以是即时通讯应用、智能客服应用、技能陪练应用等。其中,技能陪练应用是由虚拟用户对象充当某种角色的用户与待培训的另一种角色的用户进行模拟会话,以提高待培训用户技能的应用程序。比如,虚拟用户对象充当客户与业务员进行会话,以提高业务员服务能力;或者,虚拟用户对象充当学生或家长与老师进行会话,以提高老师教学水平等。
技能陪练应用包括旁白对话、固定对话、固定问答、意图对话和评分对话等多个对话组件,支持多分支对话。用户可以通过自由拖拽多个对话组件的方式快速创建对话流任务,并发布预配置的对话流任务给待培训用户进行练习。具体地,通过拖拽不同对话组件可以生成不同会话类型的练习对话。比如,基于对话组件“意图对话”可以实现会话类型为“意图识别”;基于对话组件“评分对话”可以实现会话类型为“专业评分”等。
每组练习对话包括预置的上文会话消息以及对应的下文参考消息。用户可以对讲述该上文会话消息的虚拟对象的模特形象和表情等进行配置。用户还可以对每组练习对话的会话模式进行配置。会话模式是指定的用户答复上文会话消息的方式,比如口头讲解、图文讲解等。用户在配置每个会话模式为“图文讲解”的上文会话消息的下文参考消息时,需要预先配置对应的参考讲解图。参考讲解图被划分为多个讲解步骤。将整个参考讲解图按讲解步骤拆解为多个步骤图。
多组练习对话按照一定顺序排列形成一个对话流任务。一个对话流任务可能存在一个或多个会话分支,即在当前顺序的练习对话结束后,存在多个下一顺序的练习对话,可以根据当前顺序练习对话的会话类型对当前顺序练习会话进行意图识别或评分等分析处理,根据分析结果确定具体跳转至哪一会话分支。
当基于技能陪练应用完成不同的对话流任务时,虚拟用户对象在会话窗口展示当前顺序练习对话中的上文会话消息,用户可以采用口头讲解或图文讲解的方式在会话窗口录入下文会话消息,以答复上文会话消息。对于会话类型为“专业评分”、会话模式为“图文讲解”的练习对话,用户需要按照提示进行绘图及讲解,在会话窗口录入图片格式的下文会话消息(记作下文会话图片)。
在一个实施例中,监听用于答复上文会话消息的下文会话消息包括:展示当前会话分支的上文会话消息;确定当前会话分支的会话模式;当会话模式为图文讲解时,展示绘图页面;监听绘图页面的绘图操作,得到下文会话图片。
若当前顺序练习对话的会话类型为“专业评分”、会话模式为“图文讲解”,终端在会话窗口展示绘图讲解提示,并展示绘图页面。绘图页面可以是会话窗口中的会话消息录入区域,也可以是区别于会话窗口的其他页面。
步骤204,当会话消息包括会话图片时,提取会话图片的图形特征。
服务器基于预训练的第一模型提取会话图片特征。其中,模型是由人工神经网络构成的模型。神经网络模型具体可以是VGG(Visual Geometry Group 视觉集合组)网络模型、GoogleNet(谷歌网络)模型或ResNet(能效评估系统)网络模型等CNN(Convolutional Neural Network,卷积神经网络)模型,也可以是DNN(Deep Neural Network,深度神经网络)模型,还可以是LSTM(Long Short-Term Memory Neural Network,长短时记忆神经网络)模型等RNN(Recurrent Neural Network,循环神经网络)模型等。
图形特征具体可以是计算机设备从下文会话图片中提取出的可以表示图片的形状或空间关系等数据,得到图片的“非图片”的表示或描述,如数值、向量或符号等。
在本实施例中,第一模型具体可以是卷积神经网络模型,比如ResNet-80。计算机设备可将下文会话图片输入至第一模型中,通过第一模型提取下文会话图片的图形特征。比如,计算机设备可将下文会话图片输入至卷积神经网络模型中,通过卷积神经网络的卷积层对下文会话图片进行卷积处理,提取下文会话图片的feature map(特征图),即本实施例中的图形特征。
在一个实施例中,第一模型是以图形库(ImageNet)中大量的手绘图片和相应的类别标签作为训练数据,进行学习训练得到的用于对下文会话图片进行分类的模型。计算机设备在获取到手绘图片后,将手绘图片输入第一模型,通过第一模型的卷积层结构提取手绘图片的图形特征,通过第一模型的池化层结构和/或全连接层结构确定手绘图片相应的类别标签文本。
步骤206,根据图形特征,确定与会话图片相应的类别标签文本。
其中,类别标签文本是下文会话图片所属的类别对应的标签文本。具体地,计算机设备可通过第一模型提取图形特征,再对提取的图形特征进行分类处理,得到下文会话图片的类别,进而确定下文会话图片相应的类别标签文本。
在一个实施例中,第一模型具体可以是卷积神经网络模型。计算机设备可将下文会话图片输入至卷积神经网络模型中,以提取下文会话图片的图形特征。再通过池化层和全连接层对图形特征进行处理,得到下文会话图片所属类别的概率值。将最大概率值所对应的类别标签作为与下文会话图片相应的类别标签。
步骤208,将图形特征和相应的类别标签文本进行融合,得到综合特征。
服务器基于预训练的自然语言模型提取类别标签文本的文本特征,并将图形特征与文本特征进行跨模态融合。其中,跨模态融合是将具有不同模态的数据进行融合。在本实施例中,不同模态的数据具体是指与下文会话图片对应的图形特征、以及与类别标签文本对应的文本数据。具体地,计算机设备可将提取的图形特征和相应的类别标签文本映射至同一空间内的数据,再对映射后的数据进行融合处理,得到综合特征。
在一个实施例中,通过第一模型提取下文会话图片的图形特征。计算机设备可通过循环神经网络提取类别标签文本的文本特征。其中,图形特征和文本特征的表现形式都可以是向量形式。计算机设备在对图形特征和文本特征进行融合之前,可将图形特征和文本特征分别转换成标准形式,使两者的特征向量都处于同一范围内。比如,可分别对图形特征和文本特征进行归一化处理。常用的归一化算法有函数法和概率密度法。其中,函数法,比如最大-最小函数、均值-方差函数(将特征都归一化到了一个一致的区间,比如均值为0,方差为1的区间)或双曲sigmoid(S型生长曲线)函数等。
进一步地,计算机设备可对归一化处理后的图形特征和相应的类别标签文本对应的文本特征,执行融合操作,得到综合特征。其中,将图形特征和文本特征进行融合的算法具体可采用基于贝叶斯决策理论的算法、基于稀疏表示理论的算法或基于深度学习理论算法等。或者,计算机设备可对归一化处理后的两个向量进行加权求和,已将图形特征和文本特征进行融合,得到综合特征。
在一个实施例中,计算机设备可通过循环神经网络提取类别标签文本的文本特征,对图形特征和文本特征做注意力分配处理,也就是attention处理,得到注意力分配权值,也就是注意力权值(attention
value),再将attention value和图形特征结合,得到综合特征。其中,attention处理,可以理解为从大量信息中有选择地筛选出少量重要信息并聚焦到这些重要信息上,忽略大多不重要的信息。聚焦的过程体现在注意力分配权值的计算上,注意力分配权值越大越,则越聚焦于其对应的图形特征上。
步骤210,基于综合特征对会话消息进行意图识别。
服务器通过第二模型处理综合特征,输出得到会话图片的会话意图,比如识别下文会话图片中的物体、理解物体间的关系等。会话意图具体可以一个词、一个整句或段落文本等的形式表征。第二模型具体可以是循环神经网络模型,如LSTM模型。
在一个实施例中,基于综合特征对会话消息进行意图识别包括:获取与会话图片对应的意图预描述文本;基于意图预描述文本各个词向量,生成会话图片的预测特征;将综合特征以及预测特征输入预训练模型,输出得到绘图图片的会话意图。
其中,意图预描述文本是预先对下文会话图片进行描述的文本。意图预描述文本具体可以是认为对下文会话图片进行理解后,得到的初始的较为粗糙的描述文本。
在一个实施例中,计算机设备可获取与下文会话图片对应的意图预描述文本,并获取意图预描述文本的各个词向量。计算机设备可以采用编码-解码的方式,将综合特征作为第一时刻输入,将各个词向量分别作为后续时刻的输入,通过第二模型处理依次输入的综合特征和词向量,输出会话消息的会话意图。这样,第二模型可以结合综合特征和意图预描述文本,使得输出的会话意图更贴合下文会话图片所表达真实意图,大大提高了图形理解信息的准确性。
上述会话意图识别方法中,根据提取得到的会话图片的图形特征,可以快速准确地获得会话图片相应的类别标签文本。将图形特征和相应的类别标签文本进行跨模态融合,得到综合特征,再根据综合特征,识别得到会话消息的会话意图。这样,可以使得在意图识别过程中既能充分利用会话图片本身的图形特征,又能结合会话图片所属的类别信息。这样细致且充分地利用了会话图片的特征,在对会话图片进行理解时,得到了图形特征和类别标签文本的双重指导,大大提高了会话图片理解信息的准确性。
在一个实施例中,获取会话消息包括:监听用于答复当前会话分支中上文会话消息的下文会话消息;计算下文会话消息的消息数据量;在当前会话分支的会话模式为意图识别时,获取上文会话消息的意图层级;根据消息数据量及意图层级确定对下文会话消息的意图识别策略;所述当会话消息包括会话图片时,提取会话图片的图形特征包括:当意图识别策略为模型识别,且下文会话消息包括会话图片时,通过预训练模型提取会话图片的图形特征。
对话流任务中每个会话模式为“意图识别”的练习对话具有对应的意图层级。若“意图识别”的练习对话的前一顺序练习对话为其他会话模式,则该“意图识别”的练习对话的意图层级为第一层级。若“意图识别”的练习对话的前一顺序练习对话也为“意图识别”会话模式,则该“意图识别”的练习对话的意图层级为前一顺序“意图识别”的练习对话对应意图层级的下一级。比如,前一顺序“意图识别”的练习对话的意图层级为第二层级,则当前顺序“意图识别”的练习对话的意图层级为第三层级,依次类推。
计算机设备预置了多种意图识别策略,可以根据需求在不同情况下采用不同的意图识别策略识别下文会话消息的意图。本实施例意图识别策略包括规则匹配和模型识别。其中,规则匹配可以是通过识别下文会话消息中是否存在预设的能够表征某种会话意图的关键词进行意图识别的方式。模型识别可以是上述基于第一模型和第二模型进行意图识别的方式。容易理解,还可以预置更多的意图识别策略,如基于LDA模型进行意图识别等,对此不作限制。每种意图识别策略具有对应的使用条件。使用条件可以是下文会话消息的一项或多项指标分别达到阈值。其中,指标具体包括消息数据量、当前会话分值的意图层级、所属业务场景等。消息数据量可以根据所包含文本长度或者所涉及图片大小等确定。比如,当下文会话消息的消息数据量大,或意图层级比较低的时候,可以优先采用规则匹配。
上述实施例中,先基于计算逻辑简单的规则匹配方式进行意图识别,只有在规则配不适用时才基于模型进行意图识别,既可以计算机设备计算资源,又可以保证意图识别准确性。
在一个实施例中,将图形特征和相应的类别标签文本进行跨模态融合,得到综合特征包括:确定与类别标签文本相应的编码数据;根据编码数据对图形特征进行注意力分配处理,得到注意力权值;基于注意力权值对图形特征进行加权拼接,得到综合特征。
其中,编码数据是对类别标签文本进行编码处理得到的数据。具体可以采用单极性码、极性码、双极性码、归零码、双相码、不归零码、曼彻斯特编码、差分曼彻斯特编码、多电平编码等方式进行编码。
在一个实施例中,计算机设备可预先设置类别标签文本和编码数据的映射关系。根据映射关系,确定与类别标签文本相应的编码数据。举例说明,比如可预先设置类别标签文本“支出”对应于编码数据“0001”、类别标签文本“收入”对应于编码数据“0002”、类别标签文本“终生”对应于编码数据“0003”、
类别标签文本“房子”对应于编码数据“0101”等。当计算机设备确定与图像特征相应的类别标签为“支出”时,则可确定相应的编码数据“0001”。在另一个实施例中,计算机设备可通过循环神经网络提取类别标签文本的文本特征,将相应的文本特征作为与类别标签文本相应的编码数据。
计算机设备可以根据编码数据,对图像特征进行注意力分配处理,得到注意力权值。具体地,计算机设备可将编码数据和图形特征按预设标准规则分别映射成同一空间内的标准向量。再对分别与编码数据和图形特征相应的标准向量进行点乘操作,得到中间结果。对中间结果依次进行池化处理(比如sum pooling处理)和回归处理(比如softmax处理),得到注意力权值。
计算机设备可将注意力权值和相应的图形特征结合,得到加权后的综合特征。在一个实施例中,计算机设备可通过注意力模型来实现将图形特征和相应的类别标签文本进行跨模态融合,得到综合特征的步骤。将图形特征和相应的类别标签文本输入至注意力模型中,注意力模型可通过网络结构自动的学习权重,得到注意力权值。再将注意力权值和图形特征进行结合,得到综合特征。在得到的综合特征中,注意力模型越聚焦的地方,所占的权重就越大。
上述实施例中,通过对图形特征和相应的编码数据进行注意力分配处理,得到注意力权值,再将注意力权值和图像特征相结合,得到综合特征,使得综合特征中越重要的元素所占的权重越大,可使得在图形处理过程中能聚焦到目标元素,大大提高了图形理解信息的准确性,提高了计算机设备对会话图形的理解能力。
在一个实施例中,上述会话意图识别方法还包括:当会话消息包括会话文本时,确定会话文本中每个分词的词性;获取词性属于目标词性的每个分词所关联的意图预描述文本;当多个分词关联的意图预表述文本相同时,根据意图预描述文本生成会话文本的意图描述文本。
计算机设备预存储了多种词性,每种词性关联有对应的意图预描述文本。其中,词性是指指以词的特点作为划分词类的根据。在本实施例中,词性具体包括词槽、特征词和通配符等。词槽是场景答复意图下的查询条件,例如天气意图里的时间和地点,订票意图里的日期和终点等;可以作为条件来管理对话逻辑。特征词是具有某类特征的词,只要符合这个特征条件,都可以用特征词来表示。通配符是指用于模糊搜索的特殊语句。比如,会话文本“订一下北京到上海的火车票,多谢了”中,“北京”和“上海”的词性为词槽、“订一下”、“到”“火车票”为特征词、“多谢了”为通配符。
计算机设备查询会话文本中每个分词对应的词性是否包含预存储的词性(记作目标词性)。若仅存在一个分词的词性为目标词性,或存在多个分词的词性为目标词性,且对应同一目标词性时,计算机设备直接将该目标词性关联的意图预描述文本确定为相应会话文本对应的最终的意图描述文本。若存在多个分词的词性为目标词性,且所对应多个目标词性不同,则计算机设备可以按照上述方式进行意图识别。在另一个实施例中,当存在多个分词的词性为目标词性,且所对应多个目标词性不同时,计算机设备也可以将会话文本中第一出现顺序的分词关联的意图预描述文本确定为相应会话文本对应的最终的意图描述文本。
本实施例中,优先基于词性与不同意图预描述文本的关联关系进行意图识别,只有在基于这种关联关系无法准确识别意图时,才基于模型进行意图识别,简化意图识别逻辑,进而节约计算机设备计算资源。
在一个实施例中,上述会话意图识别方法还包括:当多个分词关联的意图描述文本不同时,根据词性生成每个分词对应的词性向量;将多个分词的词向量输入第三模型,得到会话文本对应的主题向量;将多个分词的词向量、词性向量及主题向量进行融合,得到会话文本对应的特征向量;通过第四模型对特征向量进行处理,得到会话文本的意图描述文本。
意图对话组件具有对应的意图识别模式。意图识别模式包括 “固定意图”和“自定义意图”。通过拖拽不同的意图对话组件可以生成包含一个或多个具有不同意图识别模式的会话模式为“意图识别”的练习对话。其中,固定意图是用户通过选定会话应用提供的多种标准意图来配置对话流任务,基于意图识别模型进行意图识别。自定义意图是用户通过自定义的非标准意图来配置对话流任务,基于语义分析模型进行意图识别。
语义分析模型是经过训练后具有自然语言处理(NLP,natural language processing)能力的语言模型,具体可以是以预设语料库中文本作为训练数据,进行学习训练得到的用于提取文本语义的语言模型。比如word2vector模型word2vec模型(word to vector,用于产生词向量的模型)、elmo模型(Embeddings from Language Models,文本嵌入模型)、bert模型(Bidirectional Encoder Representations from Transformers,双向编码变换模型)等。预训练的语义分析模型具有固定的模型结构和模型参数。语义分析模型包括文本特征提取模型和相似度计算模型。
具体地,计算机设备对文本进行分词,并将得到的多个分词中的停用词、标点符号等对表征文本语义作用小的词语,从而提高后续文本特征提取的效率。停用词是指文本中出现频率超过预设阈值但实际意义不大的词,如我、的、他等。在一个实施例中,计算机设备还可以对得到的多个分词进行同义词扩展。同义词是指与原始分词含义相同或相近的词语,如原始词语为“真棒”,同义词可为“厉害了”“了不起”“优秀”等。计算机设备将进行上述处理后的分词输入预训练的文本特征提取模型,得到会话文本的文本特征。文本特征是表示文本的语义的特征。文本特征的表现形式可以是向量形式。计算机设备按照相同逻辑计算当前会话分支中上文会话消息对应下文参考消息的文本特征。计算机设备将该会话文本的文本特征与相应下文参考消息的文本特征的相似度,根据相似度对当前的会话文本进行评分。
意图识别模型包括上述第一模型和第二模型,第一模型与第二模型可以对图片格式的会话消息进行意图识别。在本实施例中,意图识别模型还包括第三模型和第四模型,第三模型与第四模型可以对文本格式的会话消息进行意图识别。具体地,计算机设备根据多个分词在会话文本中的出现顺序,对会话文本对应多个分词的词向量进行拼接,得到会话文本对应的第一表征向量。计算机设备将第一表征向量输入第三模型,得到会话文本对应的主题向量。第三模型可以是预训练的LDA模型等。计算机设备根据每个分词对应的词性,生成会话文本中每个分词对应的词性向量。计算机设备按照多个分词在会话文本中的出现顺序,对会话文本对应多个分词的词向量及词性向量进行拼接,得到会话文本对应的第二表征向量。计算机设备将第二表征向量与主题向量进行特征融合,得到会话文本对应的特征向量。
计算机设备将特征向量输入预训练的分类模型,得到会话文本与每个预设意图的匹配概率。计算机设备比较最高的匹配概率值是否达到阈值。若是,将匹配概率最高的预设意图确定为应答会话的会话意图。若否,则判定基于分类模型的分类结果不准确,计算机设备按照上述方式进行词性标注,根据目标词性关联的意图预描述文本确定会话意图。当然,也可以在基于分类模型无法准确识别意图时,提示用户重新答复上文会话消息,对此不作限制。
在上述实施例中,充分结合会话文本中每个分词的词性特征以及主题特征进行意图识别,可以提高意图识别结果准确性。
应该理解的是,虽然图2的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图3所示,提供了一种会话意图识别装置,包括:特征提取模块302、特征融合模块304和意图识别模块306,其中。
特征提取模块302,用于获取会话消息;当会话消息包括会话图片时,提取会话图片的图形特征。
特征融合模块304,用于根据图形特征,确定与会话图片相应的类别标签文本;将图形特征和相应的类别标签文本进行融合,得到综合特征。
意图识别模块306,用于基于所述综合特征对所述会话消息进行意图识别。
在一个实施例中,特征提取模块302还用于监听用于答复当前会话分支中上文会话消息的下文会话消息;计算下文会话消息的消息数据量;在当前会话分支的会话模式为意图识别时,获取上文会话消息的意图层级;根据消息数据量及意图层级确定对下文会话消息的意图识别策略;当意图识别策略为模型识别,且下文会话消息包括会话图片时,通过预训练模型提取会话图片的图形特征。
在一个实施例中,特征融合模块304还用于确定与类别标签文本相应的编码数据;根据编码数据对图形特征进行注意力分配处理,得到注意力权值;基于所述注意力权值对所述图形特征进行加权拼接,得到综合特征。
在一个实施例中,意图识别模块306还用于获取与会话图片对应的意图预描述文本;基于所述意图预描述文本各个词向量,生成所述会话图片的预测特征;将所述综合特征以及所述预测特征输入预训练模型,输出得到所述绘图图片的会话意图。
在一个实施例中,意图识别模块306还用于当会话消息包括会话文本时,确定会话文本中每个分词的词性;获取词性属于目标词性的每个分词所关联的意图预描述文本;当多个分词关联的意图预表述文本相同时,根据意图预描述文本生成会话文本的意图描述文本。
在一个实施例中,意图识别模块306还用于当多个分词关联的意图描述文本不同时,根据词性生成每个分词对应的词性向量;将多个分词的词向量输入第三模型,得到会话文本对应的主题向量;将多个分词的词向量、词性向量及主题向量进行融合,得到会话文本对应的特征向量;通过第四模型对特征向量进行处理,得到会话文本的意图描述文本。
关于会话意图识别装置的具体限定可以参见上文中对于会话意图识别方法的限定,在此不再赘述。上述会话意图识别装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图4所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储对话流任务信息。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种会话意图识别方法。
本领域技术人员可以理解,图4中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现本申请任意一个实施例中提供的会话意图识别方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,的计算机程序可存储于计算机可读取存储介质中,其中,所述计算机可读存储介质可以是非易失性,也可以是易失性的。该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink) DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any change or replacement that can be easily conceived by a person skilled in the art within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (20)
- A conversation intent recognition method, the method comprising: obtaining a conversation message; when the conversation message includes a conversation picture, extracting a graphic feature of the conversation picture; determining, according to the graphic feature, a category label text corresponding to the conversation picture; fusing the graphic feature with the corresponding category label text to obtain a comprehensive feature; and performing intent recognition on the conversation message based on the comprehensive feature.
- The method according to claim 1, wherein the obtaining a conversation message comprises: monitoring for a following conversation message that replies to a preceding conversation message in a current conversation branch; computing a message data size of the following conversation message; when a conversation mode of the current conversation branch is intent recognition, obtaining an intent level of the preceding conversation message; and determining, according to the message data size and the intent level, an intent recognition strategy for the following conversation message; and wherein the extracting a graphic feature of the conversation picture when the conversation message includes a conversation picture comprises: when the intent recognition strategy is model recognition and the following conversation message includes a conversation picture, extracting the graphic feature of the conversation picture through a pre-trained model.
- The method according to claim 1, wherein the fusing the graphic feature with the corresponding category label text to obtain a comprehensive feature comprises: determining encoded data corresponding to the category label text; performing attention allocation processing on the graphic feature according to the encoded data to obtain attention weights; and performing weighted concatenation on the graphic feature based on the attention weights to obtain the comprehensive feature.
- The method according to claim 1, wherein the performing intent recognition on the conversation message based on the comprehensive feature comprises: obtaining an intent pre-description text corresponding to the conversation picture; generating a prediction feature of the conversation picture based on word vectors of the intent pre-description text; and inputting the comprehensive feature and the prediction feature into a pre-trained model, which outputs the conversation intent of the conversation picture.
- The method according to claim 1, wherein the method further comprises: when the conversation message includes conversation text, determining a part of speech of each token in the conversation text; obtaining an intent pre-description text associated with each token whose part of speech is a target part of speech; and when the intent pre-description texts associated with multiple tokens are identical, generating an intent description text of the conversation text from the intent pre-description text.
- The method according to claim 5, wherein the method further comprises: when the intent pre-description texts associated with multiple tokens differ, generating a part-of-speech vector for each token according to its part of speech; inputting word vectors of the multiple tokens into a third model to obtain a topic vector of the conversation text; fusing the word vectors, part-of-speech vectors, and topic vector of the multiple tokens to obtain a feature vector of the conversation text; and processing the feature vector through a fourth model to obtain the intent description text of the conversation text.
- A conversation intent recognition apparatus, the apparatus comprising: a feature extraction module configured to obtain a conversation message and, when the conversation message includes a conversation picture, extract a graphic feature of the conversation picture; a feature fusion module configured to determine, through a first model and according to the graphic feature, a category label text corresponding to the conversation picture, and to fuse the graphic feature with the corresponding category label text to obtain a comprehensive feature; and an intent recognition module configured to perform intent recognition on the conversation message based on the comprehensive feature.
- The apparatus according to claim 7, wherein the feature fusion module is further configured to determine encoded data corresponding to the category label text; perform attention allocation processing on the graphic feature according to the encoded data to obtain attention weights; and perform weighted concatenation on the graphic feature based on the attention weights to obtain the comprehensive feature.
- A computer device, comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to execute the program instructions of the memory to: obtain a conversation message; when the conversation message includes a conversation picture, extract a graphic feature of the conversation picture; determine, according to the graphic feature, a category label text corresponding to the conversation picture; fuse the graphic feature with the corresponding category label text to obtain a comprehensive feature; and perform intent recognition on the conversation message based on the comprehensive feature.
- The computer device according to claim 9, wherein the processor is configured to: monitor for a following conversation message that replies to a preceding conversation message in a current conversation branch; compute a message data size of the following conversation message; when a conversation mode of the current conversation branch is intent recognition, obtain an intent level of the preceding conversation message; and determine, according to the message data size and the intent level, an intent recognition strategy for the following conversation message; wherein the extracting a graphic feature of the conversation picture when the conversation message includes a conversation picture comprises: when the intent recognition strategy is model recognition and the following conversation message includes a conversation picture, extracting the graphic feature of the conversation picture through a pre-trained model.
- The computer device according to claim 9, wherein the processor is configured to: determine encoded data corresponding to the category label text; perform attention allocation processing on the graphic feature according to the encoded data to obtain attention weights; and perform weighted concatenation on the graphic feature based on the attention weights to obtain the comprehensive feature.
- The computer device according to claim 9, wherein the processor is configured to: obtain an intent pre-description text corresponding to the conversation picture; generate a prediction feature of the conversation picture based on word vectors of the intent pre-description text; and input the comprehensive feature and the prediction feature into a pre-trained model, which outputs the conversation intent of the conversation picture.
- The computer device according to claim 9, wherein the processor is configured to: when the conversation message includes conversation text, determine a part of speech of each token in the conversation text; obtain an intent pre-description text associated with each token whose part of speech is a target part of speech; and when the intent pre-description texts associated with multiple tokens are identical, generate an intent description text of the conversation text from the intent pre-description text.
- The computer device according to claim 13, wherein the processor is configured to: when the intent pre-description texts associated with multiple tokens differ, generate a part-of-speech vector for each token according to its part of speech; input word vectors of the multiple tokens into a third model to obtain a topic vector of the conversation text; fuse the word vectors, part-of-speech vectors, and topic vector of the multiple tokens to obtain a feature vector of the conversation text; and process the feature vector through a fourth model to obtain the intent description text of the conversation text.
- A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions, and when the program instructions are executed by a processor, they are used to implement the following steps: obtaining a conversation message; when the conversation message includes a conversation picture, extracting a graphic feature of the conversation picture; determining, according to the graphic feature, a category label text corresponding to the conversation picture; fusing the graphic feature with the corresponding category label text to obtain a comprehensive feature; and performing intent recognition on the conversation message based on the comprehensive feature.
- The computer-readable storage medium according to claim 15, wherein when the program instructions are executed by the processor, they are further used to implement the following steps: monitoring for a following conversation message that replies to a preceding conversation message in a current conversation branch; computing a message data size of the following conversation message; when a conversation mode of the current conversation branch is intent recognition, obtaining an intent level of the preceding conversation message; and determining, according to the message data size and the intent level, an intent recognition strategy for the following conversation message; wherein the extracting a graphic feature of the conversation picture when the conversation message includes a conversation picture comprises: when the intent recognition strategy is model recognition and the following conversation message includes a conversation picture, extracting the graphic feature of the conversation picture through a pre-trained model.
- The computer-readable storage medium according to claim 15, wherein when the program instructions are executed by the processor, they are further used to implement the following steps: determining encoded data corresponding to the category label text; performing attention allocation processing on the graphic feature according to the encoded data to obtain attention weights; and performing weighted concatenation on the graphic feature based on the attention weights to obtain the comprehensive feature.
- The computer-readable storage medium according to claim 15, wherein when the program instructions are executed by the processor, they are further used to implement the following steps: obtaining an intent pre-description text corresponding to the conversation picture; generating a prediction feature of the conversation picture based on word vectors of the intent pre-description text; and inputting the comprehensive feature and the prediction feature into a pre-trained model, which outputs the conversation intent of the conversation picture.
- The computer-readable storage medium according to claim 15, wherein when the program instructions are executed by the processor, they are further used to implement the following steps: when the conversation message includes conversation text, determining a part of speech of each token in the conversation text; obtaining an intent pre-description text associated with each token whose part of speech is a target part of speech; and when the intent pre-description texts associated with multiple tokens are identical, generating an intent description text of the conversation text from the intent pre-description text.
- The computer-readable storage medium according to claim 19, wherein when the program instructions are executed by the processor, they are further used to implement the following steps: when the intent pre-description texts associated with multiple tokens differ, generating a part-of-speech vector for each token according to its part of speech; inputting word vectors of the multiple tokens into a third model to obtain a topic vector of the conversation text; fusing the word vectors, part-of-speech vectors, and topic vector of the multiple tokens to obtain a feature vector of the conversation text; and processing the feature vector through a fourth model to obtain the intent description text of the conversation text.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910842789.8A CN110717514A (zh) | 2019-09-06 | 2019-09-06 | 会话意图识别方法、装置、计算机设备和存储介质 |
CN201910842789.8 | 2019-09-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021042904A1 true WO2021042904A1 (zh) | 2021-03-11 |
Family
ID=69210299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/104674 WO2021042904A1 (zh) | 2019-09-06 | 2020-07-25 | 会话意图识别方法、装置、计算机设备和存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110717514A (zh) |
WO (1) | WO2021042904A1 (zh) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110717514A (zh) * | 2019-09-06 | 2020-01-21 | 平安国际智慧城市科技股份有限公司 | 会话意图识别方法、装置、计算机设备和存储介质 |
CN111737458B (zh) * | 2020-05-21 | 2024-05-21 | 深圳赛安特技术服务有限公司 | 基于注意力机制的意图识别方法、装置、设备及存储介质 |
CN111737987B (zh) * | 2020-06-24 | 2023-01-20 | 深圳前海微众银行股份有限公司 | 意图识别方法、装置、设备及存储介质 |
CN111813899A (zh) * | 2020-08-31 | 2020-10-23 | 腾讯科技(深圳)有限公司 | 基于多轮会话的意图识别方法及装置 |
CN112231027A (zh) * | 2020-09-27 | 2021-01-15 | 中国建设银行股份有限公司 | 一种任务型会话配置方法和系统 |
CN112347247B (zh) * | 2020-10-29 | 2023-10-13 | 南京大学 | 基于LDA和Bert的特定类别文本标题二分类方法 |
CN112817604B (zh) * | 2021-02-18 | 2022-08-05 | 北京邮电大学 | 安卓系统控件意图识别方法、装置、电子设备及存储介质 |
CN113157887B (zh) * | 2021-04-20 | 2024-08-30 | 中国平安人寿保险股份有限公司 | 知识问答意图识别方法、装置、及计算机设备 |
CN113204638B (zh) * | 2021-04-23 | 2024-02-23 | 上海明略人工智能(集团)有限公司 | 基于工作会话单元的推荐方法、系统、计算机和存储介质 |
CN114400005A (zh) * | 2022-01-18 | 2022-04-26 | 平安科技(深圳)有限公司 | 语音消息生成方法和装置、计算机设备、存储介质 |
CN116911314B (zh) * | 2023-09-13 | 2023-12-19 | 北京中关村科金技术有限公司 | 意图识别模型的训练方法、会话意图识别方法及系统 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109002852B (zh) * | 2018-07-11 | 2023-05-23 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、计算机可读存储介质和计算机设备 |
- 2019-09-06: CN application CN201910842789.8A filed (publication CN110717514A (zh), status: active, Pending)
- 2020-07-25: PCT application PCT/CN2020/104674 filed (publication WO2021042904A1 (zh), status: active, Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110004829A1 (en) * | 2008-12-23 | 2011-01-06 | Microsoft Corporation | Method for Human-Centric Information Access and Presentation |
CN107612814A (zh) * | 2017-09-08 | 2018-01-19 | 北京百度网讯科技有限公司 | 用于生成候选回复信息的方法和装置 |
CN108549850A (zh) * | 2018-03-27 | 2018-09-18 | 联想(北京)有限公司 | 一种图像识别方法及电子设备 |
CN110163220A (zh) * | 2019-04-26 | 2019-08-23 | 腾讯科技(深圳)有限公司 | 图片特征提取模型训练方法、装置和计算机设备 |
CN110717514A (zh) * | 2019-09-06 | 2020-01-21 | 平安国际智慧城市科技股份有限公司 | 会话意图识别方法、装置、计算机设备和存储介质 |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113157876A (zh) * | 2021-03-18 | 2021-07-23 | 平安普惠企业管理有限公司 | 信息反馈方法、装置、终端和存储介质 |
CN113139055A (zh) * | 2021-04-22 | 2021-07-20 | 康键信息技术(深圳)有限公司 | 对话文本的行为倾向识别方法、装置、设备及存储介质 |
CN113239701A (zh) * | 2021-05-07 | 2021-08-10 | 京东数字科技控股股份有限公司 | 数据分析处理方法和装置 |
CN113505607A (zh) * | 2021-06-15 | 2021-10-15 | 北京三快在线科技有限公司 | 一种意图识别方法、装置、电子设备及可读存储介质 |
CN113408619A (zh) * | 2021-06-21 | 2021-09-17 | 江苏苏云信息科技有限公司 | 语言模型预训练方法、装置 |
CN113408619B (zh) * | 2021-06-21 | 2024-02-13 | 江苏苏云信息科技有限公司 | 语言模型预训练方法、装置 |
CN113268610A (zh) * | 2021-06-22 | 2021-08-17 | 中国平安人寿保险股份有限公司 | 基于知识图谱的意图跳转方法、装置、设备及存储介质 |
CN113268610B (zh) * | 2021-06-22 | 2023-10-03 | 中国平安人寿保险股份有限公司 | 基于知识图谱的意图跳转方法、装置、设备及存储介质 |
CN115249017B (zh) * | 2021-06-23 | 2023-12-19 | 马上消费金融股份有限公司 | 文本标注方法、意图识别模型的训练方法及相关设备 |
CN115249017A (zh) * | 2021-06-23 | 2022-10-28 | 马上消费金融股份有限公司 | 文本标注方法、意图识别模型的训练方法及相关设备 |
CN113641803A (zh) * | 2021-06-30 | 2021-11-12 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置、设备及存储介质 |
CN113641803B (zh) * | 2021-06-30 | 2023-06-06 | 腾讯科技(深圳)有限公司 | 一种数据处理方法、装置、设备及存储介质 |
CN114140127A (zh) * | 2022-01-27 | 2022-03-04 | 广州卓远虚拟现实科技有限公司 | 一种基于区块链的支付处理方法及系统 |
CN114661877A (zh) * | 2022-03-17 | 2022-06-24 | 广州荔支网络技术有限公司 | 中心词提取方法、装置、计算机设备和存储介质 |
CN114648027A (zh) * | 2022-05-23 | 2022-06-21 | 每日互动股份有限公司 | 一种文本信息的处理方法、装置、计算机设备及存储介质 |
CN115022268A (zh) * | 2022-06-24 | 2022-09-06 | 深圳市六度人和科技有限公司 | 一种会话识别方法及装置、可读存储介质、计算机设备 |
CN115022268B (zh) * | 2022-06-24 | 2023-05-12 | 深圳市六度人和科技有限公司 | 一种会话识别方法及装置、可读存储介质、计算机设备 |
CN115955451B (zh) * | 2023-03-09 | 2023-07-14 | 广东维信智联科技有限公司 | 一种在线会话信息安全监控系统 |
CN115955451A (zh) * | 2023-03-09 | 2023-04-11 | 广东维信智联科技有限公司 | 一种在线会话信息安全监控系统 |
CN116580408A (zh) * | 2023-06-06 | 2023-08-11 | 上海任意门科技有限公司 | 一种图像生成方法、装置、电子设备及存储介质 |
CN116580408B (zh) * | 2023-06-06 | 2023-11-03 | 上海任意门科技有限公司 | 一种图像生成方法、装置、电子设备及存储介质 |
CN118228740A (zh) * | 2024-05-24 | 2024-06-21 | 北京中关村科金技术有限公司 | 一种会话信息处理方法、装置、设备、存储介质及产品 |
CN118445401A (zh) * | 2024-07-02 | 2024-08-06 | 厦门鹅卵石网络科技有限公司 | 多模态的对话数据痛点洞察方法、装置、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN110717514A (zh) | 2020-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021042904A1 (zh) | 会话意图识别方法、装置、计算机设备和存储介质 | |
US11568855B2 (en) | System and method for defining dialog intents and building zero-shot intent recognition models | |
WO2019153522A1 (zh) | 智能交互方法、电子装置及存储介质 | |
WO2021068321A1 (zh) | 基于人机交互的信息推送方法、装置和计算机设备 | |
CN111128394B (zh) | 医疗文本语义识别方法、装置、电子设备及可读存储介质 | |
WO2021190259A1 (zh) | 一种槽位识别方法及电子设备 | |
US20220222441A1 (en) | Machine learning based named entity recognition for natural language processing | |
US7860705B2 (en) | Methods and apparatus for context adaptation of speech-to-speech translation systems | |
CN110704576B (zh) | 一种基于文本的实体关系抽取方法及装置 | |
CN110825867B (zh) | 相似文本推荐方法、装置、电子设备和存储介质 | |
CN113254637B (zh) | 一种融合语法的方面级文本情感分类方法及系统 | |
US20230394247A1 (en) | Human-machine collaborative conversation interaction system and method | |
Banerjee et al. | A dataset for building code-mixed goal oriented conversation systems | |
US11636272B2 (en) | Hybrid natural language understanding | |
CN109857865B (zh) | 一种文本分类方法及系统 | |
US20230072171A1 (en) | System and method for training and refining machine learning models | |
CN113723105A (zh) | 语义特征提取模型的训练方法、装置、设备及存储介质 | |
WO2023173554A1 (zh) | 坐席违规话术识别方法、装置、电子设备、存储介质 | |
US20210248473A1 (en) | Attention neural networks with linear units | |
WO2023226239A1 (zh) | 对象情绪的分析方法、装置和电子设备 | |
CN112699686A (zh) | 基于任务型对话系统的语义理解方法、装置、设备及介质 | |
WO2021114682A1 (zh) | 会话任务生成方法、装置、计算机设备和存储介质 | |
CN114417851A (zh) | 一种基于关键词加权信息的情感分析方法 | |
CN112036186A (zh) | 语料标注方法、装置、计算机存储介质及电子设备 | |
CN117272977A (zh) | 人物描写语句的识别方法、装置、电子设备及存储介质 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20861182; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/07/2022)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20861182; Country of ref document: EP; Kind code of ref document: A1