CN109241255B - Intention identification method based on deep learning - Google Patents

Intention identification method based on deep learning

Info

Publication number
CN109241255B
CN109241255B (Application No. CN201810945991.9A)
Authority
CN
China
Prior art keywords
intention
vector
category
word
deep learning
Prior art date
Legal status
Active
Application number
CN201810945991.9A
Other languages
Chinese (zh)
Other versions
CN109241255A (en)
Inventor
何婷婷
潘敏
汤丽
王逾凡
孙博
Current Assignee
Central China Normal University
Original Assignee
Central China Normal University
Priority date
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN201810945991.9A
Publication of CN109241255A
Application granted
Publication of CN109241255B
Status: Active
Anticipated expiration

Landscapes

  • Machine Translation (AREA)

Abstract

A deep learning based dialog system intention recognition method first extracts keywords from a dialogue corpus by word frequency weight to serve as intention recognition rules, and matches the dialog D to be recognized against these rules to obtain an intention classification result P_A. A deep learning model CNN-BLSTM, which fuses a convolutional neural network CNN and a bidirectional long short-term memory network BLSTM, is trained with the dialogue corpus, and the trained deep learning model CNN-BLSTM recognizes the dialog D to obtain an intention classification result P_B. Finally, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention of the dialog D. The invention can effectively improve the accuracy of user intention recognition.

Description

Intention identification method based on deep learning
Technical Field
The invention belongs to the technical field of man-machine conversation systems, and particularly relates to an intention identification method based on deep learning.
Background
The man-machine dialogue system is one of the core technologies in the field of artificial intelligence, is poised to become a new mode of human-computer interaction, and has great research value. People have long sought to communicate with computers in natural language, because doing so has great significance: people could use computers in the language they know best and use most, without spending a great deal of time learning and adapting to a computer language. With the advent of the Internet era, the demand for man-machine interactive systems has grown greatly. For example, intelligent customer service systems are widely used in online shopping; they not only greatly improve the efficiency of communication between people and computers, but also make people's life and work more convenient. Major technology companies have joined the research on intelligent dialogue systems and released related products, such as Apple's Siri and Microsoft's Cortana, among others. Perhaps in the near future people will no longer use today's mainstream input devices, and natural language will instead become the most widely used mode of human-computer interaction. The main steps of human-computer natural language interaction include: speech recognition, natural language understanding, dialogue state tracking, natural language generation, and speech synthesis.
Natural language understanding is a key module in a man-machine dialogue system; it converts the natural language that the user speaks to the computer into a semantic representation that the computer can understand, thereby achieving the goal of understanding the user's natural language. To understand the user's words, it is necessary to know the domain of the user's natural language or the intention the user wants to express, and user intention recognition adopts a classification method to achieve this purpose. Improving the accuracy of user intention recognition greatly helps the dialogue system generate reasonable replies.
In a man-machine dialogue system, correct recognition of the user's intention is the basis on which the dialogue system generates a reasonable reply. If the user's intention is misjudged, the dialogue system will generate an irrelevant reply, which is meaningless. Therefore, accurately recognizing the user's intention is very important for improving the performance of the dialogue system and enhancing the user experience. In addition, by accurately judging the user's intention, a commercial intelligent dialogue system can provide useful recommendations for consumption, entertainment, products and so on, which has great commercial value. In conclusion, user intention recognition has important research value and significance.
Disclosure of Invention
The invention aims to improve the accuracy of user intention recognition by means of deep learning technology.
The technical scheme of the invention provides a deep learning based dialog system intention recognition method: keywords are first extracted from the dialogue corpus by word frequency weight as intention recognition rules, and the dialog D to be recognized is matched against these rules to obtain an intention classification result P_A; a deep learning model CNN-BLSTM, which fuses a convolutional neural network CNN and a bidirectional long short-term memory network BLSTM, is trained with the dialogue corpus, and the trained deep learning model CNN-BLSTM recognizes the dialog D to obtain an intention classification result P_B; finally, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention of the dialog D.
Furthermore, extracting keywords from the dialogue corpus by word frequency weight as intention recognition rules includes performing the following processing for each category:
word segmentation is performed, the number N of all entries in the category is counted, all entries are combined into a word list, and the number of occurrences W_i of the i-th word in the category, the total number of sentences M, and the length L_j of the j-th sentence are counted, i = 1, 2, 3, ..., N, j = 1, 2, 3, ..., M;
The average sentence length AveL of the category is calculated according to the following formula,
$$\mathrm{AveL} = \frac{1}{M}\sum_{j=1}^{M} L_j$$
The word frequency weight F_i of the i-th word is calculated as follows,
$$F_i = \frac{W_i}{N} \times \frac{W_i \times \mathrm{AveL}}{\sum S}$$
where ΣS is the accumulated length of all sentences in the category in which the i-th word appears;
after the word frequency weight of each word in the category is obtained, all entries in the word list are sorted in descending order of word frequency weight, and a number of top-ranked entries are selected as keywords, which serve as the rules of the category.
Moreover, training the deep learning model CNN-BLSTM with the dialogue corpus is realized as follows:
after word segmentation of the training corpus, the vector of each word in each sentence is denoted x_b, and these vectors are combined into the sentence representation X = [x_1, x_2, x_3, ..., x_l], where l is the sequence length of the vector and b = 1, 2, 3, ..., l;
the sentence vector X is input into the convolution layer of the convolutional neural network for calculation, and all feature maps s_a are combined into the output S = [s_1, s_2, s_3, ..., s_n], where n denotes the total number of feature maps and a = 1, 2, 3, ..., n;
the structure of S is rearranged, and the rearranged vector V is input into the BLSTM neural network model, the BLSTM being a bidirectional long short-term memory neural network consisting of a forward long short-term memory neural network and a backward long short-term memory neural network; for each time step t, the forward network outputs the hidden state $\overrightarrow{h_t}$ and the backward network outputs the hidden state $\overleftarrow{h_t}$; the two hidden-state vectors are combined into the vector h_t; the vector representations of all time steps form the vector H corresponding to the whole sentence, which implicitly contains the context semantic information;
a max pooling operation is performed on S to obtain the vector O, which contains the most important semantic features and category feature information of the sentence;
the vector H and the vector O are concatenated into the vector T,
T is taken as the final feature vector of the dialog sentence, and all sentence features T are connected to obtain the intermediate quantity y_c; the probability of each category is obtained from y_c, and the intention category with the highest probability is selected as the intention recognition result P_B.
Further, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention category probability distribution P, and the intention category corresponding to the largest probability in P is selected as the final intention recognition result.
On the one hand, keywords are extracted from the dialogue corpus by word frequency weight as intention recognition rules, and the dialog D to be recognized is matched against these rules to obtain an intention classification result P_A; on the other hand, the dialogue corpus is used to train a deep learning model (CNN-BLSTM) that combines a convolutional neural network (CNN) with a bidirectional long short-term memory network (BLSTM), and the trained model recognizes the dialog D to obtain an intention classification result P_B; finally, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention of the dialog D. The CNN-BLSTM based intention recognition method provided by the invention can effectively overcome the defect that basic deep learning models only consider temporal information and ignore locally important information in intention recognition. In addition, sentence length is incorporated into the word frequency weight calculation used to obtain the rules, which effectively measures the importance of each word and selects more representative words as rules. Finally, the classification result of the CNN-BLSTM model is combined with the classification result of rule matching, so that the user's intention can be obtained accurately. Comparative experiments against several state-of-the-art models on several standard intention recognition data sets show that the proposed method, which combines the combined deep learning model with rule matching that incorporates sentence length information, achieves a notable improvement in recognition accuracy. The invention can effectively improve the accuracy of user intention recognition; correct recognition of the user's intention is the basis for an intelligent dialogue system to generate reasonable replies, and accurately recognizing the user's intention can improve the performance of the dialogue system and enhance the user experience, which has great value and research significance.
Drawings
Fig. 1 is a flow chart of intent recognition in an embodiment of the present invention.
Detailed description of the invention
The invention provides a combined deep learning model that integrates a convolutional neural network (CNN) with a bidirectional long short-term memory network (BLSTM) to perform intention classification, incorporates sentence length information in the corpus into rule extraction as an influence factor, and further improves the accuracy of intention recognition by combining the classification result of rule matching.
The combined deep learning model provided by the invention, called the CNN-BLSTM model, is used for intention recognition. Traditional deep learning models usually adopt a recurrent neural network (RNN) or its variant, the long short-term memory network (LSTM), for the intention recognition task; such networks capture the temporal information of a sentence well but miss locally important information. On this basis, the invention fuses a convolutional neural network (CNN) into the traditional model to acquire the locally important semantic information in the sentence. The combined model can capture the user's intention with more information.
Addressing the unreasonable rule extraction in the classical rule matching method, the invention proposes to take sentence length information in the corpus as an influence factor in the word frequency weight calculation, so as to obtain more reasonable keywords as rules. Generally, a word is more important in a short sentence than in a long one (for example, the word "song" is more important in "I want to listen to a song" than in "playing a song now might disturb my roommate's sleep"); making effective use of sentence length information therefore captures the user's real intention better.
The invention combines the results of the rule matching method and the combined deep learning model method, so that the traditional method and the deep learning method complement each other and the accuracy of user intention recognition is further improved. First, keywords are extracted from the dialogue corpus by word frequency weight as intention recognition rules, and the dialog D to be recognized is matched against the rules to obtain the intention classification result P_A; then a deep learning model (CNN-BLSTM) fusing a convolutional neural network (CNN) and a bidirectional long short-term memory network (BLSTM) is trained with the dialogue corpus, and the trained model recognizes the dialog D to obtain the intention classification result P_B; finally, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention of the dialog D.
Referring to fig. 1, the specific implementation process of the embodiment is as follows:
step 1, for the dialog corpus (training corpus) labeled with the category, the following processing is respectively carried out for each category to obtain the rule of each category:
Word segmentation is performed with the Jieba word segmentation toolkit, the number N of all entries in the category is counted, and all entries form a vocabulary. Within the category, the number of occurrences W_i of the i-th word (i = 1, 2, 3, ..., N), the total number of sentences M, and the length L_j of the j-th sentence (j = 1, 2, 3, ..., M) are counted. The Jieba word segmentation toolkit is an existing software tool and is not described in detail here.
The average length of sentences in the category is calculated: the lengths of all sentences of the category are accumulated and then divided by the total number of sentences to obtain the average sentence length AveL of the category, calculated as
$$\mathrm{AveL} = \frac{1}{M}\sum_{j=1}^{M} L_j \qquad (1)$$
The word frequency weight F_i of the i-th word can then be calculated from formula (1), as shown in formula (2):
$$F_i = \frac{W_i}{N} \times \frac{W_i \times \mathrm{AveL}}{\sum S} \qquad (2)$$
In the above formula, F_i denotes the word frequency weight of the i-th word of the category, W_i denotes the number of occurrences of word i in the category, N denotes the number of all entries in the category, AveL is the average sentence length of the category, S denotes the length of a sentence in the category containing the i-th word, and ΣS is the accumulated length of all sentences in which the i-th word occurs in the category;
and (3) obtaining the word frequency weight of each word in the category according to a formula (2), sequencing each entry in the word list from large to small according to the word frequency weight, and taking the entry with the top ranking of 1% as a keyword which is taken as a rule of the category. In specific implementation, a user can preset a selection ratio, and the preferred ratio is 1% in the embodiment.
Step 2: based on the rules of each category obtained in step 1, rules that are repeated across different categories are deleted, and the remaining rules are taken as the final rule extraction result.
Step 3: the dialog D to be recognized is matched one by one against the rules obtained in step 2 (once a rule matches successfully, matching terminates). If a rule is contained in D, the probability of the intention category corresponding to that rule is set to 1 and the probabilities of all other intention categories are set to 0, giving the probability distribution of D over all intention categories P_A = [p_1, p_2, p_3, ..., p_d], where d denotes the total number of intention categories and p_1, p_2, p_3, ..., p_d are the probabilities of the 1st to d-th intention categories. For ease of understanding, an example: if the word "song" is a rule of the music intention category and the dialog sentence D is "what kind of song do you like to listen to", then D contains the rule "song", so the probability of the music intention category of D is set to 1 and the probabilities of the other intention categories to 0.
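A short Python sketch of this matching step follows; the function name match_rules and the layout of rules_by_category are assumptions made for illustration.

```python
# Sketch: match dialog D against the extracted rules; the first rule that hits
# fixes a one-hot probability distribution P_A over the intention categories.
def match_rules(dialog, rules_by_category):
    """rules_by_category: {category_index: [keyword, ...]} built in steps 1-2."""
    d = len(rules_by_category)            # total number of intention categories
    p_a = [0.0] * d
    for cat, keywords in rules_by_category.items():
        if any(kw in dialog for kw in keywords):
            p_a[cat] = 1.0                # probability 1 for the matched category
            break                         # matching terminates at the first hit
    return p_a                            # all zeros if no rule matches
```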
Step 4: a word vector set is obtained by training on the large-scale Chinese Wikipedia corpus with the word2vec tool. The word vector set contains vector representations of a large number of words. The word2vec tool is an existing software tool for converting words into vector form and is not described further here.
Step 5: after the training corpus is segmented with the Jieba word segmentation toolkit, the vector x_b of each word in each sentence is looked up in the word vector set obtained in step 4, and the vectors are combined into the sentence representation X = [x_1, x_2, x_3, ..., x_l], where l is the sequence length of the vector (in this embodiment l = 40 is chosen based on length statistics of sentences in the corpus) and b = 1, 2, 3, ..., l. Sentences longer than l are truncated to l, and shorter sentences are padded with zeros. In a specific implementation, the user can preset the value of l.
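A minimal preprocessing sketch follows, assuming a gensim word2vec model trained as in step 4; the file name, the 300-dimensional embedding size, and the helper name sentence_to_matrix are assumptions.

```python
# Sketch: turn a raw sentence into the padded/truncated matrix X = [x_1, ..., x_l].
import numpy as np
import jieba
from gensim.models import KeyedVectors

# Hypothetical path to the word vectors trained on Chinese Wikipedia (step 4).
wv = KeyedVectors.load_word2vec_format("zhwiki_word2vec.bin", binary=True)

def sentence_to_matrix(sentence, seq_len=40, dim=300):
    tokens = list(jieba.cut(sentence))
    X = np.zeros((seq_len, dim), dtype=np.float32)   # zero padding for short sentences
    for b, word in enumerate(tokens[:seq_len]):      # truncate sentences longer than l
        if word in wv:
            X[b] = wv[word]                          # x_b looked up in the word vector set
    return X
```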
The sentence vector X is input into the convolution layer of the convolutional neural network. In the convolution process, sliding windows are convolved along the dimension of the sentence sequence; the lengths of the sliding windows, i.e. the convolution kernel sizes, are set to 2, 3 and 5, and the number of kernels of each size is set to 128. Each convolution kernel slides over the sentence vector to perform the convolution operation, producing feature maps s_a (each kernel size generates 128 feature maps). All s_a (a = 1, 2, 3, ..., n) are combined into the output S = [s_1, s_2, s_3, ..., s_n], where n denotes the total number of feature maps (in this embodiment n = 3 × 128 = 384).
To maintain the relative order of the sentence, the structure of S is rearranged into a result V as follows:
$$S = [s_1, s_2, \ldots, s_n] \qquad (3)$$
$$s_a = [s_a^1, s_a^2, \ldots, s_a^l] \qquad (4)$$
$$v_b = [s_1^b, s_2^b, \ldots, s_n^b] \qquad (5)$$
In formula (3), S is the convolution result, composed of several feature maps; s_a (a = 1, 2, 3, ..., n) denotes the a-th feature map obtained after the convolution operation, each feature map being a vector. In formulas (4) and (5), s_a^b denotes the element of the a-th feature map corresponding to the b-th dimension (b = 1, 2, 3, ..., l) of the convolution result. v_b is the vector obtained by rearranging the s_a, and the v_b combine into the final rearrangement result V:
$$V = [v_1, v_2, \ldots, v_l] \qquad (6)$$
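The rearrangement amounts to a transpose of the feature-map matrix, as the following numpy illustration shows; the concrete sizes are assumptions matching this embodiment.

```python
# Sketch of formulas (3)-(6): regroup the n feature maps of length l so that v_b
# gathers, for sentence position b, the b-th value of every feature map.
import numpy as np

n, l = 384, 40                 # 3 kernel sizes x 128 feature maps, sequence length 40
S = np.random.rand(n, l)       # stand-in for S = [s_1, ..., s_n], each s_a of length l
V = S.T                        # V = [v_1, ..., v_l], with v_b = [s_1^b, ..., s_n^b]
assert V.shape == (l, n)       # one n-dimensional vector per time step for the BLSTM
```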
The rearranged vector V is input into the BLSTM neural network model. The BLSTM is a bidirectional long short-term memory neural network consisting of a forward long short-term memory network and a backward long short-term memory network. For each time step t (t = 1, 2, 3, ..., l; each word of a sentence is input as one time step), the forward network outputs the hidden state $\overrightarrow{h_t}$ and the backward network outputs the hidden state $\overleftarrow{h_t}$. The two hidden-state vectors are combined into the vector h_t:
$$h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}] \qquad (7)$$
$$H = [h_1, h_2, h_3, \ldots, h_l] \qquad (8)$$
where H is the vector representation of all time steps, i.e. the representation of the whole sentence, and implicitly contains the context semantic information.
Step 6: a pooling operation is performed on the convolution result S obtained in step 5. The pooling operation samples features from the output of the convolution layer and can combine the features extracted by convolution windows of different sizes. The invention adopts the max-pooling method, which keeps the largest feature so as to extract the most significant features. The output of max pooling is defined as
$$s_{max} = \max_{a \in [1, n]} s_a \qquad (9)$$
$$O = h(W \cdot s_{max} + b) \qquad (10)$$
In formula (9), s_a denotes a feature map output by the convolution layer and s_max denotes the largest feature selected from the s_a. In formula (10), h(·) is a nonlinear activation function (the LeakyReLU function is used), and W and b are parameters of the convolutional network whose initial values are chosen randomly between 0 and 1. After the max pooling operation the vector O is obtained, which contains the most important semantic features and category feature information of the sentence;
and 7, splicing the vector H obtained in the step 5 and the vector O obtained in the step 6 to form a new vector T, wherein the method for splicing the vector H and the vector O comprises the following steps:
t ═ O, H formula (11)
T is taken as the final feature vector representation of the dialog sentence, and the sentence features T are passed through a fully connected layer to obtain y_c; the connection formula is
$$y_c = h(W_c \times T + b_c) \qquad (12)$$
y_c is input into the softmax function to obtain the probability of each category, computed as
$$p_c = \frac{e^{y_c}}{\sum_{k=1}^{d} e^{y_k}} \qquad (13)$$
In formulas (12) and (13), c (c = 1, 2, 3, ..., d) denotes the c-th intention category, y_c is an intermediate quantity, W_c is a parameter of the fully connected layer of the convolutional neural network, b_c is a bias term parameter, d denotes the total number of intention categories, h(·) is a nonlinear activation function (the tanh function is used in this embodiment), e is the base of the natural logarithm, and p_c denotes the probability that the user sentence belongs to the c-th intention category. The probabilities of all intention categories form P_B = [p_1, p_2, p_3, ..., p_d], where p_1, p_2, p_3, ..., p_d are the probabilities of the 1st to d-th intention categories.
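The following Keras sketch puts steps 5-7 together. The kernel sizes 2, 3, 5 and the 128 feature maps per size follow the text, while the ReLU convolution activation, the LSTM width of 128, the use of the final bidirectional states as the sentence summary H, the 300-dimensional embeddings, and the class count are simplifying assumptions, not the claimed architecture.

```python
# Sketch of a CNN-BLSTM classifier in the spirit of steps 5-7.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cnn_blstm(seq_len=40, dim=300, num_classes=10):
    inputs = layers.Input(shape=(seq_len, dim))            # X = [x_1, ..., x_l]
    # Convolution layer: one branch per kernel size; 'same' padding keeps length l,
    # so the concatenated output is already in the (time step, feature) layout of V.
    branches = [layers.Conv1D(128, k, padding="same", activation="relu")(inputs)
                for k in (2, 3, 5)]
    S = layers.Concatenate(axis=-1)(branches)              # all feature maps, shape (l, 384)
    # BLSTM: forward and backward hidden states are concatenated; the final states
    # serve as the sentence representation H with context information.
    H = layers.Bidirectional(layers.LSTM(128))(S)
    # Max pooling over the convolution output keeps the most salient features (vector O).
    O = layers.GlobalMaxPooling1D()(S)                     # s_max, formula (9)
    O = layers.LeakyReLU()(layers.Dense(128)(O))           # O = h(W·s_max + b), formula (10)
    T = layers.Concatenate()([O, H])                       # T = [O; H], formula (11)
    outputs = layers.Dense(num_classes, activation="softmax")(T)  # P_B, formulas (12)-(13)
    return Model(inputs, outputs)

model = build_cnn_blstm()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```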
Step 8: steps 5-7 are repeated for each sentence in the training corpus to obtain its intention category probability distribution; the intention category with the highest probability is selected as the predicted intention category and compared with the true intention of the sentence (the dialogue corpus provides the true intention of each sentence) to train the CNN-BLSTM model and continually iterate and optimize its parameters. The trained CNN-BLSTM model is then applied to the dialog D to be recognized (computed in the manner of steps 5-7) to obtain the probability distribution P_B of D over all intention categories.
Step 9: for the dialog D to be recognized, the intention category probability distribution P_A obtained in step 3 and the intention category probability distribution P_B obtained in step 8 are linearly fused to obtain the final intention category probability distribution P, as follows:
$$P = \alpha \times P_A + \beta \times P_B \qquad (14)$$
In formula (14), α + β = 1, and α, β ∈ {0.0, 0.1, 0.2, ..., 1.0}; the values of α and β are preset as needed, for example α = 0.5 and β = 0.5.
Finally, the intention category corresponding to the largest probability in P is selected as the final intention recognition result.
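The linear fusion of step 9 reduces to the following sketch; the helper name fuse is an assumption, and the weights default to the example values α = β = 0.5.

```python
# Sketch of formula (14): linear fusion of the rule-based and model-based distributions.
import numpy as np

def fuse(p_a, p_b, alpha=0.5, beta=0.5):
    p = alpha * np.asarray(p_a) + beta * np.asarray(p_b)   # P = α·P_A + β·P_B
    return int(np.argmax(p))                               # index of the final intention category
```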
Steps 1-3 realize classification based on rule matching, steps 4-8 realize classification based on the CNN-BLSTM model, and step 9 combines the two.
In a specific implementation, a person skilled in the art can automate the above processes using software technology. Accordingly, an intention recognition method based on deep learning technology that includes a computer or server on which the above processes are executed, so as to realize intention recognition by combining the CNN-BLSTM model with rule matching, also falls within the scope of the present invention.

Claims (3)

1. A dialog system intention recognition method based on deep learning, characterized in that: keywords are first extracted from the dialogue corpus by word frequency weight as intention recognition rules, and the dialog D to be recognized is matched against the rules to obtain an intention classification result P_A; a deep learning model CNN-BLSTM is trained with the dialogue corpus, the deep learning model CNN-BLSTM fusing a convolutional neural network CNN and a bidirectional long short-term memory network BLSTM, and the trained deep learning model CNN-BLSTM recognizes the dialog D to obtain an intention classification result P_B; finally, the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention of the dialog D;
the deep learning model CNN-BLSTM is trained by dialogue corpus in the following way,
after word segmentation of the training corpus, the vector of each word in each sentence is denoted x_b, and these vectors are combined into the sentence representation X = [x_1, x_2, x_3, ..., x_l], where l is the sequence length of the vector and b = 1, 2, 3, ..., l;
the sentence vector X is input into the convolution layer of the convolutional neural network for calculation, and all feature maps s_a are combined into the output S = [s_1, s_2, s_3, ..., s_n], where n denotes the total number of feature maps and a = 1, 2, 3, ..., n;
the structure of S is rearranged, and the rearranged vector V is input into the BLSTM neural network model, the BLSTM being a bidirectional long short-term memory neural network consisting of a forward long short-term memory neural network and a backward long short-term memory neural network; for each time step t, the forward network outputs the hidden state $\overrightarrow{h_t}$ and the backward network outputs the hidden state $\overleftarrow{h_t}$; the two hidden-state vectors are combined into the vector h_t; the vector representations of all time steps form the vector H corresponding to the whole sentence, which implicitly contains the context semantic information;
a max pooling operation is performed on S to obtain the vector O, which contains the most important semantic features and category feature information of the sentence;
the vector H and the vector O are concatenated into the vector T,
T is taken as the final feature vector of the dialog sentence, and all sentence features T are connected to obtain the intermediate quantity y_c; the probability of each category is obtained from y_c, and the intention category with the highest probability is selected as the intention recognition result P_B.
2. The deep learning based dialog system intention recognition method of claim 1, characterized in that: extracting keywords from the dialogue corpus by word frequency weight as intention recognition rules comprises performing the following processing for each category:
word segmentation is performed, the number N of all entries in the category is counted, all entries are combined into a word list, and the number of occurrences W_i of the i-th word in the category, the total number of sentences M, and the length L_j of the j-th sentence are counted, i = 1, 2, 3, ..., N, j = 1, 2, 3, ..., M;
the average sentence length AveL of the category is calculated according to the following formula,
$$\mathrm{AveL} = \frac{1}{M}\sum_{j=1}^{M} L_j$$
the word frequency weight F_i of the i-th word is calculated as follows,
$$F_i = \frac{W_i}{N} \times \frac{W_i \times \mathrm{AveL}}{\sum S}$$
where ΣS is the accumulated length of all sentences in the category in which the i-th word appears;
after the word frequency weight of each word in the category is obtained, all entries in the word list are sorted in descending order of word frequency weight, and a number of top-ranked entries are selected as keywords, which serve as the rules of the category.
3. The deep learning based dialog system intention recognition method of claim 1 or 2, characterized in that: the intention classification result P_A and the intention classification result P_B are linearly fused to obtain the final intention category probability distribution P, and the intention category corresponding to the largest probability in P is selected as the final intention recognition result.
CN201810945991.9A 2018-08-20 2018-08-20 Intention identification method based on deep learning Active CN109241255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810945991.9A CN109241255B (en) 2018-08-20 2018-08-20 Intention identification method based on deep learning


Publications (2)

Publication Number Publication Date
CN109241255A CN109241255A (en) 2019-01-18
CN109241255B (en) 2021-05-18

Family

ID=65071796






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant