CN111177381A - Slot filling and intention detection joint modeling method based on context vector feedback
- Publication number
- CN111177381A (application number CN201911331309.8A)
- Authority
- CN
- China
- Prior art keywords
- vector
- slot
- intention
- matrix
- context
- Prior art date
- Legal status
- Pending
Classifications
- G06F16/35 Information retrieval of unstructured textual data: Clustering; Classification
- G06F18/2411 Pattern recognition: classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
- G06F18/2415 Pattern recognition: classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Abstract
The invention discloses a joint modeling method for slot filling and intention detection based on context vector feedback. The method comprises the following steps: inputting a sentence into a word representation generation network pre-trained with a language model to obtain a word representation matrix of the current sentence; inputting the word representation matrix into a bidirectional long short-term memory network and obtaining a slot feature matrix and an intention feature vector using an attention mechanism; processing the slot feature matrix and the intention feature vector to obtain a context vector; and concatenating the context vector with the slot feature matrix and with the intention feature vector respectively, and inputting the results into fully connected networks to obtain a slot label matrix and an intention weight vector, from which the intention and slot information of the sentence are obtained. With the embodiments of the invention, global context information can be added to assist the intention recognition and slot filling tasks, improving the accuracy of both tasks; the model clearly outperforms other language understanding models and has great practical value.
Description
Technical Field
The invention relates to the field of natural language understanding, and in particular to a joint modeling method for slot filling and intention detection based on context vector feedback.
Background
Human-machine dialogue systems have been under development for more than 50 years, have made considerable progress, and currently face huge development opportunities. The main goal of natural language understanding (NLU) in a traditional human-computer dialogue system is to recognize the intent of an input utterance and to obtain task-related semantic information (also called semantic slot filling). The most common approaches at present are slot filling with sequence labeling methods and intent recognition with classification methods such as support vector machines.
For a long time, slot filling and intent recognition have been handled as two independent subtasks of natural language understanding, or have been jointly modeled only implicitly through a joint loss function, which does not make full use of the correlation between the two. In real scenarios, the intention expressed by the user is correlated with the slot information in the sentence. In a ticket-booking dialogue, a sentence whose intention is to inform travel information often contains slot values for the departure place and the destination; conversely, if a departure place and destination are mentioned in a sentence, the sentence is more likely intended to inform travel information.
In a traditional deep network structure, the intent classification task finally generates an intention feature vector used to classify the intention of the sentence, while in the slot filling task each word generates a slot feature vector used to classify its slot attribute. However, the prior art lacks a method that jointly models slot filling and intent recognition, which results in low accuracy on the natural language understanding task.
Disclosure of Invention
The invention provides a joint modeling method for slot filling and intention detection based on context vector feedback. The method processes the intention feature vector and the slot feature vectors through a series of steps to obtain a context vector that contains the task-oriented features of the current sentence for both tasks; the context vector is then used as additional information for the original intention classification and slot filling tasks, so that the two tasks are jointly modeled in an explicit manner and the accuracy of both tasks is improved.
The technical scheme of the invention is realized as follows:
a slot filling and intention detection joint modeling method based on context vector feedback comprises the following steps:
(1) inputting a sentence into a word representation generation network (ELMo) pre-trained with a language model to obtain a word representation matrix of the current sentence;
(2) inputting the word representation matrix into a bidirectional long short-term memory network to obtain a hidden state matrix;
(3) processing the hidden state matrix with an attention mechanism to obtain a slot feature matrix and an intention feature vector;
(4) processing the slot feature matrix and the intention feature vector to obtain a context vector;
(5) concatenating the context vector with the slot feature matrix and inputting the result into a fully connected network to obtain a slot label matrix;
(6) concatenating the context vector with the intention feature vector and inputting the result into a fully connected network to obtain an intention weight vector;
(7) obtaining the slot information and the intention of the input sentence from the slot label matrix and the intention weight vector.
As a preferred embodiment of the present invention, the step (3) specifically includes:
(31) calculating the similarity between each row vector of the hidden state matrix and the other row vectors;
(32) normalizing the similarities with softmax to obtain a similarity weight vector;
(33) multiplying the hidden state matrix by the similarity weight vector to obtain the slot feature vector of the row vector referred to in step (31);
(34) merging the slot feature vectors by rows into the slot feature matrix;
(35) calculating the similarity between the last row vector of the hidden state matrix and the other row vectors using a single-layer feedforward neural network, and normalizing with softmax to obtain the intention feature vector.
As a preferred embodiment of the present invention, the step (4) specifically includes:
(41) multiplying each row of the slot feature matrix element-wise by the intention feature vector, and flattening the result into a row vector;
(42) inputting the row vector output by (41) into a fully connected network and outputting a vector with the same dimension as the intention feature vector;
(43) adding the vector output by (42) to the intention feature vector to obtain the context vector.
As a preferred embodiment of the present invention, the step (5) specifically includes:
(51) concatenating each row of the slot feature matrix with the context vector;
(52) inputting the output vector of step (51) into a fully connected network and normalizing with softmax to obtain a slot label vector;
(53) merging the slot label vectors by rows into the slot label matrix.
As a preferred embodiment of the present invention, the step (6) specifically includes:
(61) concatenating the intention feature vector with the context vector;
(62) inputting the output vector of step (61) into a fully connected network and normalizing with softmax to obtain the intention weight vector.
The beneficial effects of the invention are as follows: by introducing the context vector, global context information can be added to the intention recognition and slot filling tasks, and the model can constrain each intention recognition or slot filling decision according to the joint probability distribution of intentions and slots, thereby improving the accuracy of both tasks. Experiments show that this structure outperforms other intention recognition and slot filling models.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of a method for jointly modeling slot filling and intent detection based on context vector feedback according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As shown in FIG. 1, the invention discloses a joint modeling method for slot filling and intention detection based on context vector feedback, which comprises the following steps:
(1) inputting a sentence into a word representation generation network (ELMo) pre-trained with a language model to obtain a word representation matrix of the current sentence;
(2) inputting the word representation matrix into a bidirectional long short-term memory network to obtain a hidden state matrix;
(3) processing the hidden state matrix with an attention mechanism to obtain a slot feature matrix and an intention feature vector;
the step (3) specifically includes the following steps:
(31) calculating the similarity between each row vector of the hidden state matrix and the other row vectors;
(32) normalizing the similarities with softmax to obtain a similarity weight vector;
(33) multiplying the hidden state matrix by the similarity weight vector to obtain the slot feature vector of the row vector referred to in step (31);
(34) merging the slot feature vectors by rows into the slot feature matrix;
(35) calculating the similarity between the last row vector of the hidden state matrix and the other row vectors using a single-layer feedforward neural network, and normalizing with softmax to obtain the intention feature vector.
(4) processing the slot feature matrix and the intention feature vector to obtain a context vector;
the step (4) specifically includes the following steps:
(41) multiplying each row of the slot feature matrix element-wise by the intention feature vector, and flattening the result into a row vector;
(42) inputting the row vector output by (41) into a fully connected network and outputting a vector with the same dimension as the intention feature vector;
(43) adding the vector output by (42) to the intention feature vector to obtain the context vector.
(5) concatenating the context vector with the slot feature matrix and inputting the result into a fully connected network to obtain a slot label matrix;
the step (5) specifically includes the following steps:
(51) concatenating each row of the slot feature matrix with the context vector;
(52) inputting the output vector of step (51) into a fully connected network and normalizing with softmax to obtain a slot label vector;
(53) merging the slot label vectors by rows into the slot label matrix.
(6) concatenating the context vector with the intention feature vector and inputting the result into a fully connected network to obtain an intention weight vector;
the step (6) specifically includes the following steps:
(61) concatenating the intention feature vector with the context vector;
(62) inputting the output vector of step (61) into a fully connected network and normalizing with softmax to obtain the intention weight vector;
(7) obtaining the slot information and the intention of the input sentence from the slot label matrix and the intention weight vector.
One embodiment of the present invention will be described below by way of example.
In step (1), the user sentence t = (t_1, …, t_T) is input into a word representation generation network (ELMo) pre-trained with a language model to obtain the word representation matrix X = (x_1, …, x_T) ∈ ℝ^{T×M}, where t_i is the i-th word of the user sentence after word segmentation, x_i is the word representation vector of the i-th word, T is the number of words in the sentence, and M is the dimension of the word representation.
In step (2), the word representation matrix X is input into a bidirectional LSTM to obtain the hidden state matrix H = (h_1, …, h_T) ∈ ℝ^{T×N}, where h_i is the hidden state vector of the bidirectional long short-term memory network after the i-th word has been input and N is the dimension of the hidden state vector.
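For illustration only, steps (1) and (2) could be sketched as follows, assuming PyTorch; a plain embedding layer stands in for the pre-trained ELMo network, and all dimensions, names, and values are hypothetical choices for this sketch rather than part of the claimed method.

```python
import torch
import torch.nn as nn

T, M, N = 8, 256, 128   # sentence length, word representation dim M, hidden dim N (all hypothetical)

# Stand-in for the pre-trained word representation generation network (ELMo in the patent);
# a plain embedding layer is used here only so that the sketch runs end to end.
word_repr = nn.Embedding(num_embeddings=10000, embedding_dim=M)

# Bidirectional LSTM of step (2); N//2 units per direction so the concatenated state has dimension N.
bilstm = nn.LSTM(input_size=M, hidden_size=N // 2, batch_first=True, bidirectional=True)

token_ids = torch.randint(0, 10000, (1, T))   # one tokenized sentence t = (t_1, ..., t_T)
X = word_repr(token_ids)                      # word representation matrix X, shape (1, T, M)
H, _ = bilstm(X)                              # hidden state matrix H, shape (1, T, N)
```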
In step (3), the hidden state matrix H is processed with an attention mechanism to obtain the slot feature matrix c^S and the intention feature vector c^I. The step specifically includes:
(31) For each row vector h_i of the hidden state matrix H, calculate its similarity sim_{ik} with the other row vectors h_k:
sim_{ik} = F(h_i, h_k)
Here F denotes a similarity function; a feed-forward neural network can be used, as can other methods such as the vector dot product or cosine similarity.
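As an illustration, one feed-forward realization of F, continuing the sketch above, could look like this; the hidden size and the concatenation-based scorer are assumptions, not requirements of the method.

```python
# One possible feed-forward similarity function F(h_i, h_k): score a concatenated pair of states.
ff_sim = nn.Sequential(nn.Linear(2 * N, N), nn.Tanh(), nn.Linear(N, 1))

def F(h_i: torch.Tensor, h_k: torch.Tensor) -> torch.Tensor:
    """Return a scalar similarity score for a pair of hidden state vectors."""
    return ff_sim(torch.cat([h_i, h_k], dim=-1)).squeeze(-1)
```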
(32) Normalize the similarities with softmax to obtain the similarity weight vector w_i:
w_i = softmax(sim_{i1}, …, sim_{ik}, …, sim_{iT})
(33) Multiply the hidden state matrix H by the similarity weight vector w_i to obtain the slot feature vector c^S_i = Σ_k w_{ik} h_k for the row vector h_i referred to in (31); the vectors c^S_i are merged by rows into the slot feature matrix c^S.
(35) Calculate the similarity between the last row vector of the hidden state matrix and the other row vectors using a single-layer feedforward neural network, and normalize with softmax to obtain the intention feature vector c^I.
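A sketch of the attention computation of step (3), continuing the PyTorch example above; the dot product is used as the similarity function F (one of the options mentioned), and the intention vector is assumed to be an attention-weighted sum of the hidden states guided by the last hidden state, which is one reading of step (35).

```python
# Step (3): attention over the hidden states (dot product used as the similarity function F here).
sim = torch.matmul(H, H.transpose(1, 2))     # sim[:, i, k] = h_i . h_k, shape (1, T, T)
w = torch.softmax(sim, dim=-1)               # similarity weight vectors w_i, one per row
c_S = torch.matmul(w, H)                     # slot feature matrix: c_S[:, i] = sum_k w[:, i, k] * h_k

# Intention feature vector c_I, guided by the last hidden state.
h_last = H[:, -1, :]                                          # last row of H, shape (1, N)
sim_I = torch.matmul(H, h_last.unsqueeze(-1)).squeeze(-1)     # similarity to every h_k, shape (1, T)
w_I = torch.softmax(sim_I, dim=-1)                            # normalized weights
c_I = torch.matmul(w_I.unsqueeze(1), H).squeeze(1)            # assumed weighted sum, shape (1, N)
```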
In step (4), the slot feature matrix c^S and the intention feature vector c^I are processed to obtain the context vector g. The step specifically includes:
(41) Multiply each row of the slot feature matrix element-wise by the intention feature vector, and flatten the result into a row vector;
(42) input the row vector output by (41) into a fully connected network and output a vector with the same dimension as the intention feature vector;
(43) add the vector output by (42) to the intention feature vector to obtain the context vector:
g = W_C · flatten(c^S ⊙ [c^I; …; c^I]) + c^I
where W_C denotes the parameters of the fully connected network; flatten is a matrix flattening operation, e.g. the matrix [[1,2,3],[4,5,6],[7,8,9]] is flattened into [1,2,3,4,5,6,7,8,9]; ⊙ denotes element-wise multiplication; and [c^I; …; c^I] is the matrix formed by stacking T copies of c^I as rows.
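The formula above could be sketched as follows, continuing the PyTorch example; W_C is realized as a bias-free linear layer whose input size T*N and output size N follow the shapes used earlier and are assumptions of this sketch.

```python
# Step (4): context vector g = W_C * flatten(c_S ⊙ [c_I; ...; c_I]) + c_I
W_C = nn.Linear(T * N, N, bias=False)            # fully connected network of step (42) (sketch)

c_I_rows = c_I.unsqueeze(1).expand(-1, T, -1)    # [c_I; ...; c_I]: T copies of c_I stacked as rows
flat = (c_S * c_I_rows).reshape(c_S.size(0), -1) # element-wise product, flattened to length T*N
g = W_C(flat) + c_I                              # residual addition of step (43), shape (1, N)
```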
In step (5), the context vector is concatenated with the slot feature matrix and input into a fully connected network to obtain the slot label matrix Y ∈ ℝ^{T×k}, where k is the number of slot label types. The step specifically includes:
(51) Each row of the slot feature matrix is concatenated with the context vector;
(52) The output vector of step (51) is input into a fully connected network and normalized with softmax to obtain the slot label vector:
y_i = softmax(W_S · [c^S_i, g])
where W_S denotes the parameters of the fully connected network;
(53) The slot label vectors are merged by rows into the slot label matrix Y = (y_1, …, y_T).
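Steps (51) to (53) could then be sketched as a single batched operation, continuing the example; the number of slot label types is a hypothetical value and W_S a linear layer over the concatenation [c^S_i, g].

```python
# Step (5): slot label matrix Y.
num_slot_labels = 20                             # k: number of slot label types (hypothetical)
W_S = nn.Linear(2 * N, num_slot_labels)          # fully connected network over [c_S_i, g]

g_rows = g.unsqueeze(1).expand(-1, T, -1)        # repeat g for every word position
slot_in = torch.cat([c_S, g_rows], dim=-1)       # step (51): concatenate each row of c_S with g
Y = torch.softmax(W_S(slot_in), dim=-1)          # steps (52)-(53): Y, shape (1, T, num_slot_labels)
```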
In step (6), the context vector is concatenated with the intention feature vector and input into a fully connected network to obtain the intention weight vector y^I ∈ ℝ^l, where l is the number of intention classes. The step specifically includes:
(61) The intention feature vector is concatenated with the context vector;
(62) The output vector of step (61) is input into a fully connected network and normalized with softmax to obtain the intention weight vector:
y^I = softmax(W_I · [c^I, g])
where W_I denotes the parameters of the fully connected network.
In step (7), the slot information and the intention of the input sentence are obtained from the slot label matrix Y and the intention weight vector y^I.
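Finally, steps (6) and (7) could be sketched as follows, continuing the example; the number of intention classes is a hypothetical value, and the argmax decoding in step (7) is an assumption about how the label matrix and weight vector are read out.

```python
# Step (6): intention weight vector y_I.
num_intents = 10                                 # l: number of intention classes (hypothetical)
W_I = nn.Linear(2 * N, num_intents)              # fully connected network over [c_I, g]
y_I = torch.softmax(W_I(torch.cat([c_I, g], dim=-1)), dim=-1)      # shape (1, num_intents)

# Step (7): read out slot labels and intention (argmax decoding assumed here).
slot_labels = Y.argmax(dim=-1)                   # one slot label index per word, shape (1, T)
intent = y_I.argmax(dim=-1)                      # predicted intention class index, shape (1,)
```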
The embodiments of the proposed joint modeling method for slot filling and intention detection based on context vector feedback, and its modules, have been described above with reference to the accompanying drawings. The method combines slot filling and intention recognition, which are handled independently in the prior art, into a joint model. By introducing the context vector, global context information can be added to the intention recognition and slot filling tasks, and the model can constrain each intention recognition or slot filling decision according to the joint probability distribution of intentions and slots, thereby improving the accuracy of both tasks. Experiments show that this structure outperforms other intention recognition and slot filling models. From the above description of the embodiments, it is clear to those skilled in the art that the invention can be implemented by software together with a necessary general-purpose hardware platform, and can of course also be implemented in hardware, the former being the preferred embodiment.
The above technical solution discloses the improvements of the invention; technical content not disclosed in detail can be implemented by a person skilled in the art using the prior art.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (5)
1. A slot filling and intention detection joint modeling method based on context vector feedback is characterized by comprising the following steps:
(1) inputting a sentence into a word representation generation network pre-trained with a language model to obtain a word representation matrix of the current sentence;
(2) inputting the word representation matrix into a bidirectional long short-term memory network to obtain a hidden state matrix;
(3) processing the hidden state matrix with an attention mechanism to obtain a slot feature matrix and an intention feature vector;
(4) processing the slot feature matrix and the intention feature vector to obtain a context vector;
(5) concatenating the context vector with the slot feature matrix and inputting the result into a fully connected network to obtain a slot label matrix;
(6) concatenating the context vector with the intention feature vector and inputting the result into a fully connected network to obtain an intention weight vector;
(7) obtaining the slot information and the intention of the input sentence from the slot label matrix and the intention weight vector.
2. The method according to claim 1, wherein the step (3) comprises the following steps:
(31) calculating the similarity between each row vector of the hidden state matrix and the other row vectors;
(32) normalizing the similarities with softmax to obtain a similarity weight vector;
(33) multiplying the hidden state matrix by the similarity weight vector to obtain the slot feature vector of the row vector referred to in step (31);
(34) merging the slot feature vectors by rows into the slot feature matrix;
(35) calculating the similarity between the last row vector of the hidden state matrix and the other row vectors using a single-layer feedforward neural network, and normalizing with softmax to obtain the intention feature vector.
3. The method for jointly modeling slot filling and intent detection based on context vector feedback according to claim 1, wherein the step (4) specifically comprises:
(41) multiplying each row of the slot feature matrix element-wise by the intention feature vector, and flattening the result into a row vector;
(42) inputting the row vector output by (41) into a fully connected network and outputting a vector with the same dimension as the intention feature vector;
(43) adding the vector output by (42) to the intention feature vector to obtain the context vector.
4. The method for jointly modeling slot filling and intent detection based on context vector feedback according to claim 1, wherein the step (5) specifically comprises:
(51) concatenating each row of the slot feature matrix with the context vector;
(52) inputting the output vector of step (51) into a fully connected network and normalizing with softmax to obtain a slot label vector;
(53) merging the slot label vectors by rows into the slot label matrix.
5. The method for jointly modeling slot filling and intent detection based on context vector feedback according to claim 1, wherein said step (6) comprises:
(61) concatenating the intention feature vector with the context vector;
(62) inputting the output vector of step (61) into a fully connected network and normalizing with softmax to obtain the intention weight vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911331309.8A CN111177381A (en) | 2019-12-21 | 2019-12-21 | Slot filling and intention detection joint modeling method based on context vector feedback |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911331309.8A CN111177381A (en) | 2019-12-21 | 2019-12-21 | Slot filling and intention detection joint modeling method based on context vector feedback |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111177381A true CN111177381A (en) | 2020-05-19 |
Family
ID=70654134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911331309.8A Pending CN111177381A (en) | 2019-12-21 | 2019-12-21 | Slot filling and intention detection joint modeling method based on context vector feedback |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111177381A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625634A (en) * | 2020-05-25 | 2020-09-04 | 泰康保险集团股份有限公司 | Word slot recognition method and device, computer-readable storage medium and electronic device |
CN112800190A (en) * | 2020-11-11 | 2021-05-14 | 重庆邮电大学 | Intent recognition and slot value filling joint prediction method based on Bert model |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785833A (en) * | 2019-01-02 | 2019-05-21 | 苏宁易购集团股份有限公司 | Human-computer interaction audio recognition method and system for smart machine |
US20190244603A1 (en) * | 2018-02-06 | 2019-08-08 | Robert Bosch Gmbh | Methods and Systems for Intent Detection and Slot Filling in Spoken Dialogue Systems |
CN110263160A (en) * | 2019-05-29 | 2019-09-20 | 中国电子科技集团公司第二十八研究所 | A kind of Question Classification method in computer question answering system |
CN110309514A (en) * | 2019-07-09 | 2019-10-08 | 北京金山数字娱乐科技有限公司 | A kind of method for recognizing semantics and device |
CN110321418A (en) * | 2019-06-06 | 2019-10-11 | 华中师范大学 | A kind of field based on deep learning, intention assessment and slot fill method |
CN110364251A (en) * | 2019-06-14 | 2019-10-22 | 南京理工大学 | It is a kind of to read the intelligent interaction hospital guide's consulting system understood based on machine |
- 2019-12-21: application CN201911331309.8A filed in China (CN); published as CN111177381A, status Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190244603A1 (en) * | 2018-02-06 | 2019-08-08 | Robert Bosch Gmbh | Methods and Systems for Intent Detection and Slot Filling in Spoken Dialogue Systems |
CN109785833A (en) * | 2019-01-02 | 2019-05-21 | 苏宁易购集团股份有限公司 | Human-computer interaction audio recognition method and system for smart machine |
CN110263160A (en) * | 2019-05-29 | 2019-09-20 | 中国电子科技集团公司第二十八研究所 | A kind of Question Classification method in computer question answering system |
CN110321418A (en) * | 2019-06-06 | 2019-10-11 | 华中师范大学 | A kind of field based on deep learning, intention assessment and slot fill method |
CN110364251A (en) * | 2019-06-14 | 2019-10-22 | 南京理工大学 | It is a kind of to read the intelligent interaction hospital guide's consulting system understood based on machine |
CN110309514A (en) * | 2019-07-09 | 2019-10-08 | 北京金山数字娱乐科技有限公司 | A kind of method for recognizing semantics and device |
Non-Patent Citations (1)
Title |
---|
CHIH-WEN GOO: "Slot-Gated Modeling for Joint Slot Filling and Intent Prediction" *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625634A (en) * | 2020-05-25 | 2020-09-04 | 泰康保险集团股份有限公司 | Word slot recognition method and device, computer-readable storage medium and electronic device |
CN111625634B (en) * | 2020-05-25 | 2023-08-22 | 泰康保险集团股份有限公司 | Word slot recognition method and device, computer readable storage medium and electronic equipment |
CN112800190A (en) * | 2020-11-11 | 2021-05-14 | 重庆邮电大学 | Intent recognition and slot value filling joint prediction method based on Bert model |
CN112800190B (en) * | 2020-11-11 | 2022-06-10 | 重庆邮电大学 | Intent recognition and slot value filling joint prediction method based on Bert model |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200519 |