CN110334340B - Semantic analysis method and device based on rule fusion and readable storage medium - Google Patents


Info

Publication number
CN110334340B
Authority
CN
China
Prior art keywords
word
vectors
vector
rule
text data
Prior art date
Legal status
Active
Application number
CN201910372887.XA
Other languages
Chinese (zh)
Other versions
CN110334340A (en)
Inventor
崔燕红
竺成浩
Current Assignee
Beijing Teddy Bear Mobile Technology Co ltd
Beijing Teddy Future Technology Co ltd
Original Assignee
Beijing Teddy Bear Mobile Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Teddy Bear Mobile Technology Co., Ltd.
Priority to CN201910372887.XA
Publication of CN110334340A
Application granted
Publication of CN110334340B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/30: Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a semantic analysis method and device based on rule fusion, and a readable storage medium. The method comprises the following steps: acquiring text data; preprocessing the text data and pre-training it on a data set to obtain character vectors and/or word vectors; obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine; combining each character vector and/or word vector with the corresponding rule vector to obtain a corresponding combined vector; and coding all the obtained combined vectors through a bidirectional recurrent neural network and fusing the result with the rule vectors again to obtain a feature representation of the text data for tasks such as intention analysis. By adding a rule engine on top of an existing deep learning model, the accuracy of the model's intention recognition and sequence labeling is improved.

Description

Semantic analysis method and device based on rule fusion and readable storage medium
Technical Field
The invention relates to the technical field of natural language processing, in particular to a semantic analysis method and device based on rule fusion and a readable storage medium.
Background
Natural language understanding is one of the core problems of artificial intelligence, and it is currently also a core problem of intelligent voice interaction and human-machine dialogue.
In the field of natural language understanding, the technology has gone through an iterative process, alternating from rule engines to deep learning engines. However, many problems arise in this process. For example, a deep learning engine requires annotated data of considerable scale, whereas a rule-based engine does not; meanwhile, a rule-based engine requires experts to construct a rule system, which is time-consuming and labor-intensive and has limited effect.
In the prior art, the development from rule engines to deep learning engines has been a disruptive one: the former is completely abandoned and effort is concentrated on the latter. As a result, the deep learning engine cannot draw on the advantages of the former's expert-system-based rule engine. A solution that combines the advantages of both is therefore needed.
Disclosure of Invention
In order to effectively solve the problems in the prior art, the embodiments of the present invention creatively provide a semantic analysis method and apparatus based on rule fusion, and a readable storage medium.
The invention provides a semantic analysis method based on rule fusion, which comprises the following steps: acquiring text data; pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors; obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine; combining each character vector and/or word vector with the corresponding rule vector to obtain a corresponding combined vector; and sequentially taking all the obtained combined vectors as the input of a recurrent neural network to obtain intention data representing the text data.
Preferably, the pre-training of the text data on a data set to obtain a plurality of character vectors and/or word vectors includes: performing word segmentation processing on the text data to obtain a word segmentation processing result; and pre-training the word segmentation processing result on a data set to obtain a plurality of character vectors and/or word vectors.
preferably, the sequentially using all the obtained combination vectors as the input of the recurrent neural network to obtain the intention data for characterizing the text data includes: sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combining all the rule vectors obtained by matching and performing feature coding to obtain a second coding result; fusing the obtained first coding result and the second coding result to obtain a fused coding result; and adding the fusion coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
Preferably, all the matched rule vectors are combined, and feature coding is performed after a pooling operation.
Preferably, in the process of sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding, the method further comprises the step of taking the first coding results obtained in sequence as the input of a conditional random field CRF, so as to obtain the sequence labels corresponding to the character and/or word vectors.
Another aspect of the present invention provides a semantic analysis apparatus based on rule fusion, where the apparatus includes: a data acquisition module, used for acquiring text data; a character and/or word vector generation module, used for pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors; a rule vector generation module, used for obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine; a combination module, used for combining the character vectors and/or word vectors with the corresponding rule vectors to obtain combined vectors; and an intention recognition module, used for sequentially taking all the obtained combined vectors as the input of a recurrent neural network to obtain intention data representing the text data.
Preferably, the character and/or word vector generation module is specifically configured to: perform word segmentation processing on the text data to obtain a word segmentation processing result; and pre-train the word segmentation processing result on a data set to obtain a plurality of character vectors and/or word vectors.
Preferably, the intention recognition module is specifically configured to: sequentially input the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combine all the rule vectors obtained by matching and perform feature coding after a pooling operation to obtain a second coding result; fuse the obtained first coding result and second coding result to obtain a fused coding result; and feed the fused coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
Preferably, the apparatus further comprises a sequence recognition module, which takes the first coding results obtained in sequence as the input of a conditional random field CRF, so as to obtain the sequence labels corresponding to the character and/or word vectors.
In another aspect, the present invention further provides a computer-readable storage medium, which comprises a set of computer-executable instructions that, when executed, perform the rule-fusion-based semantic analysis method.
According to the semantic analysis method and device based on rule fusion and the readable storage medium of the embodiments of the invention, text data is pre-trained on a data set to obtain a plurality of character vectors and/or word vectors; the rule vector corresponding to each character vector and/or word vector is matched by a rule engine; each character vector and/or word vector is then combined with the corresponding rule vector to form a combined vector; and the combined vectors are used as the input of a recurrent neural network to obtain intention data representing the text data. By combining a rule engine with the deep model, the output data can be more accurate than in the prior art.
It is to be understood that the teachings of the present invention need not achieve all of the above-described benefits, but rather that specific embodiments may achieve specific technical results, and that other embodiments of the present invention may achieve benefits not mentioned above.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a schematic diagram of an implementation flow of a semantic analysis method based on rule fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a specific implementation flow of intent recognition and sequence recognition in a semantic analysis method based on rule fusion according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a semantic analysis device based on rule fusion according to an embodiment of the present invention.
In the figure:
101. collecting data; 102. preprocessing text data; 103. loading a rule engine; 104. vector preprocessing; 105. intention recognition; 301. data acquisition module; 302. character and/or word vector generation module; 303. rule vector generation module; 304. combination module; 305. intention recognition module; 306. sequence recognition module.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a semantic analysis method based on rule fusion, where the method includes:
step 101, data acquisition: text data is acquired.
Step 102, preprocessing text data: pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors.
Step 103, loading a rule engine: obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine.
Step 104, vector preprocessing: combining each character vector and/or word vector with the corresponding rule vector to obtain a corresponding combined vector.
Step 105, intention recognition: sequentially taking all the obtained combined vectors as the input of a recurrent neural network to obtain intention data representing the text data, specifically: sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combining all the rule vectors obtained by matching and performing feature coding to obtain a second coding result; fusing the obtained first coding result and second coding result to obtain a fused coding result; and feeding the fused coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
In the embodiment of the present invention, text data is first obtained in step 101. The text data is text data of a specific field, where a specific field refers to the same type of data or resource and the services provided around that data or resource, such as "restaurant", "hotel", "airplane ticket", "train ticket", or "yellow pages". The text data may be a third-party corpus such as wiki, or data crawled from the Internet.
In step 102, the obtained text data is processed by a word segmenter such as jieba, or by a statistics-based machine learning algorithm, to obtain at least one character and/or word; the obtained characters and/or words are then pre-trained on a data set for word vectors to obtain the corresponding character and/or word vectors, wherein the data set is preferably an encyclopedia data set.
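By way of illustration only, the following is a minimal sketch of one possible way to implement the segmentation and pre-training of step 102 using the jieba segmenter and gensim's word2vec (gensim 4.x API); the sample corpus, vector dimension, and function name are assumptions and are not taken from the patent.

```python
# Hypothetical sketch of step 102: segment domain text with jieba and
# pre-train word vectors with gensim word2vec. Sizes are illustrative.
import jieba
from gensim.models import Word2Vec

def build_vectors(corpus_lines, dim=100):
    # Segment each raw text line into words; single characters could be kept
    # alongside words if character vectors are wanted as well.
    segmented = [list(jieba.cut(line.strip())) for line in corpus_lines if line.strip()]
    # Pre-train word vectors on the segmented corpus.
    model = Word2Vec(sentences=segmented, vector_size=dim, window=5, min_count=1, sg=1)
    return segmented, model.wv  # model.wv maps each token to its vector

if __name__ == "__main__":
    corpus = ["我想订一张去北京的火车票", "帮我找一家附近的酒店"]
    segmented, wv = build_vectors(corpus)
    print(segmented[0], wv[segmented[0][0]].shape)
```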
In step 103, a rule engine is loaded, the characters and/or words in the corresponding text are matched against rules by the rule engine, and all the matched rules are represented as one-hot codes.
Further, a rule vector matrix is initialized, and a rule vector of a specified rule is obtained by multiplying the one-hot vector by the initialized rule vector matrix.
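As a hypothetical numpy illustration of this lookup (the number of rules, the rule vector dimension, and the random initialization are assumptions introduced only for the example), the rule vector of a matched rule can be obtained as follows.

```python
# Minimal sketch of steps 103-104's rule vector lookup: a matched rule id is
# one-hot encoded and multiplied by an initialized rule vector matrix.
import numpy as np

num_rules, rule_dim = 50, 16
rng = np.random.default_rng(0)
rule_matrix = rng.normal(size=(num_rules, rule_dim))  # initialized rule vector matrix

def rule_vector(rule_id):
    one_hot = np.zeros(num_rules)
    one_hot[rule_id] = 1.0
    return one_hot @ rule_matrix  # equivalent to selecting row rule_id

print(rule_vector(3).shape)  # (16,)
```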
In step 104, the obtained character and/or word vectors are combined with their corresponding rule vectors to obtain combined vectors, the specific combination mode being concatenation. For example, if a character and/or word vector is w = [a1, a2] and the corresponding rule vector is r = [b1, b2], the concatenated combined vector is X = [w, r] = [a1, a2, b1, b2].
In step 105, shown in connection with FIG. 2, X_{t-1}, X_t and X_{t+1} denote the input values of the combined vector at times t-1, t and t+1, respectively; h_{t-1}, h_t and h_{t+1} denote the hidden state vector values of the combined vector at times t-1, t and t+1; y_{t-1}, y_t and y_{t+1} denote the output values at times t-1, t and t+1; w1 and w2 are weight values; and the hidden state vector at time t is: h_t = f(w1·X_t + w2·h_{t-1}).
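A tiny sketch of this recurrence is shown below; the matrix shapes and the choice of tanh for the activation f are assumptions for illustration only.

```python
# Minimal sketch of the recurrence h_t = f(w1*X_t + w2*h_{t-1}) with tanh as f.
import numpy as np

def run_rnn(X, w1, w2):
    # X: sequence of combined vectors, shape (T, input_dim)
    h = np.zeros(w2.shape[0])
    for x_t in X:
        h = np.tanh(w1 @ x_t + w2 @ h)  # h_t = f(w1*X_t + w2*h_{t-1})
    return h  # hidden state at the last time step (the first coding result)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))             # 4 time steps, combined-vector dim 8
w1 = rng.normal(size=(16, 8)) * 0.1     # input-to-hidden weights
w2 = rng.normal(size=(16, 16)) * 0.1    # hidden-to-hidden weights
print(run_rnn(X, w1, w2).shape)         # (16,)
```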
The obtained combined vectors X are multiplied by the weight w1, and the multiplication results are taken in turn as the input of the recurrent neural network; in this embodiment, the recurrent neural network is preferably a bidirectional recurrent neural network, and the first coding result, i.e. the hidden layer state vector value h_n at the last moment, is obtained.
All the matched rule vectors, e.g. r1, r2, ..., are combined by concatenation; for example, the vector R obtained by combining r1, r2 and r3 is R = [r1, r2, r3]. Average pooling or max pooling is then performed on the concatenated vector R to obtain a feature code R', which is the second coding result. The first coding result, i.e. the last-moment hidden state vector h_n, is fused with the second coding result by concatenation, in the same concatenation mode as above, to obtain a fused coding result F. Finally, the fused coding result F is input into the Softmax layer for intention recognition to obtain intention data representing the text data.
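To make the data flow concrete, the following is a minimal PyTorch sketch of this encode, pool, fuse and Softmax pipeline; the GRU cell, mean pooling, all layer sizes, and the class and variable names are assumptions introduced for illustration and are not prescribed by the patent.

```python
# Hedged sketch: a bidirectional GRU encodes the combined vectors, the matched
# rule vectors are mean-pooled into a feature code R', both codes are
# concatenated (fused coding result F) and fed to a Softmax classifier.
import torch
import torch.nn as nn

class RuleFusionIntentModel(nn.Module):
    def __init__(self, combined_dim, rule_dim, hidden_dim, num_intents):
        super().__init__()
        self.rnn = nn.GRU(combined_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim + rule_dim, num_intents)

    def forward(self, combined_seq, rule_vectors):
        # combined_seq: (batch, seq_len, combined_dim), i.e. character/word
        # vectors already concatenated with their rule vectors;
        # rule_vectors: (batch, n_rules, rule_dim)
        _, h_n = self.rnn(combined_seq)                    # h_n: (2, batch, hidden_dim)
        first_code = torch.cat([h_n[0], h_n[1]], dim=-1)   # last-moment hidden state
        second_code = rule_vectors.mean(dim=1)             # average pooling -> R'
        fused = torch.cat([first_code, second_code], dim=-1)  # fused coding result F
        return torch.softmax(self.classifier(fused), dim=-1)  # intent distribution

model = RuleFusionIntentModel(combined_dim=24, rule_dim=16, hidden_dim=32, num_intents=5)
probs = model(torch.randn(2, 7, 24), torch.randn(2, 3, 16))
print(probs.shape)  # (2, 5)
```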
Further, after the recurrent neural network, the method further includes a step 106 of sequence labeling: the first coding results obtained in sequence are passed through a conditional random field CRF to obtain the sequence labels of the corresponding character and/or word vectors.
In step 106, the first coding result generated at each time step in step 105 is input into the conditional random field CRF, and the sequence label corresponding to each character and/or word vector is obtained.
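Purely as an illustration of how per-step encodings can feed a CRF for sequence labeling (the third-party pytorch-crf package, the tag count, and the layer names are assumptions, not part of the patent), consider the following sketch.

```python
# Hedged sketch of step 106: per-time-step hidden states are projected to tag
# emissions and decoded by a CRF layer from the pytorch-crf package.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

hidden_dim, num_tags = 64, 9           # e.g. BIO-style tags; sizes are illustrative
emit = nn.Linear(hidden_dim, num_tags)
crf = CRF(num_tags, batch_first=True)

# rnn_outputs: per-time-step first coding results, shape (batch, seq_len, hidden_dim)
rnn_outputs = torch.randn(2, 7, hidden_dim)
emissions = emit(rnn_outputs)          # (batch, seq_len, num_tags)
best_paths = crf.decode(emissions)     # list of predicted tag-id sequences
print(best_paths)
```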
Through the above steps, the existing deep learning model is combined with a rule engine, so that the output intention recognition and sequence labeling are more accurate.
Based on the above mentioned semantic analysis method based on rule fusion, the invention further provides a semantic analysis device based on rule fusion.
As shown in fig. 3, the apparatus includes:
and the data acquisition module 301 is configured to acquire text data.
A character and/or word vector generation module 302, configured to pre-train the text data on a data set to obtain a plurality of character vectors and/or word vectors.
A rule vector generation module 303, configured to obtain a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine.
A combination module 304, configured to combine each character vector and/or word vector with the corresponding rule vector to obtain a combined vector.
An intention recognition module 305, configured to sequentially take all the obtained combined vectors as the input of a recurrent neural network and obtain intention data representing the text data, specifically: sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combining all the rule vectors obtained by matching and performing feature coding after a pooling operation to obtain a second coding result; fusing the obtained first coding result and second coding result to obtain a fused coding result; and feeding the fused coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
In the embodiment of the present invention, first, the data acquisition module 301 acquires text data in a specific field.
Through the character and/or word vector generation module 302, the obtained text data is processed by a word segmenter such as jieba, or by a statistics-based machine learning algorithm, to obtain at least one character and/or word; the obtained characters and/or words are then pre-trained on a data set for word vectors to obtain the corresponding character and/or word vectors, wherein the data set is preferably an encyclopedia data set.
A rule engine is loaded through the rule vector generation module 303, the characters and/or words in the corresponding text are matched against rules by the rule engine, and all the matched rules are represented as one-hot codes.
Further, a rule vector matrix is initialized, and a rule vector of a specified rule is obtained by multiplying the one-hot vector by the initialized rule vector matrix.
The obtained character and/or word vectors are combined with their corresponding rule vectors by the combination module 304, the combination mode being concatenation, to obtain combined vectors.
The intention recognition module 305 sequentially takes the obtained combined vectors as the input of a recurrent neural network; in this embodiment, the recurrent neural network is a bidirectional recurrent neural network, and a first coding result, that is, the hidden state vector value at the last moment, is obtained.
All the rule vectors obtained by matching, e.g. r1, r2, ..., are combined by concatenation; for example, the vector R obtained by combining r1, r2 and r3 is R = [r1, r2, r3]. Average pooling or max pooling is performed on the concatenated vector R to obtain a feature code R', which is the second coding result. The first coding result at the last moment is then fused with the second coding result by concatenation, in the same concatenation mode as above, to obtain a fused coding result F. Finally, the fused coding result F is input into the Softmax layer for intention recognition to obtain intention data representing the text data.
Further, the apparatus also includes a sequence recognition module 306: the obtained first coding results are sequentially input into a conditional random field CRF so as to obtain the sequence labels corresponding to the character and/or word vectors.
The first coding result obtained after each cycle is input into the conditional random field CRF by the sequence recognition module 306, so as to obtain the sequence label corresponding to each character and/or word vector.
Through the above modules, a rule engine is combined with the existing deep learning model, so that the output intention recognition and sequence labeling are more accurate.
Based on the above-mentioned method and apparatus for rule-fusion-based semantic analysis, the present invention additionally provides a computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform any one of the rule-fusion-based semantic analysis methods described above.
According to the semantic analysis method and device based on rule fusion and the readable storage medium of the embodiments of the invention, text data is first pre-trained on a data set to obtain a plurality of character vectors and/or word vectors; the rule vectors corresponding to the character vectors and/or word vectors are matched by a rule engine; each character vector and/or word vector is then combined with the corresponding rule vector to form a combined vector; and the combined vectors are used as the input of a recurrent neural network to obtain intention data representing the text data. By combining a rule engine with the deep model, the output data can be more accurate than in the prior art.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A semantic analysis method based on rule fusion is characterized by comprising the following steps:
acquiring text data;
pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors;
obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine;
combining each character vector and/or word vector with the corresponding rule vector to obtain a corresponding combined vector;
sequentially taking all the obtained combined vectors as the input of a recurrent neural network to obtain intention data representing the text data, specifically comprising: sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combining all the rule vectors obtained by matching and performing feature coding to obtain a second coding result; fusing the obtained first coding result and second coding result to obtain a fused coding result; and feeding the fused coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
2. The method of claim 1, wherein pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors comprises:
performing word segmentation processing on the text data to obtain a word segmentation processing result;
and pre-training the word segmentation processing result on a data set to obtain a plurality of character vectors and/or word vectors.
3. The method of claim 1, wherein all the matched rule vectors are combined, and feature coding is performed after a pooling operation.
4. The method of claim 1, wherein, in the process of sequentially inputting the obtained combined vectors into the recurrent neural network layer for coding, the method further comprises:
and taking the first coding result obtained in sequence as the input of a conditional random field CRF, thereby obtaining the sequence labels corresponding to the characters and/or word vectors.
5. A semantic analysis apparatus based on rule fusion, the apparatus comprising:
the data acquisition module is used for acquiring text data;
the character and/or word vector generation module is used for pre-training the text data on a data set to obtain a plurality of character vectors and/or word vectors;
the rule vector generation module is used for obtaining a rule vector corresponding to each character vector and/or word vector through matching by a system rule engine;
the combination module is used for combining the character vectors and/or word vectors with the corresponding rule vectors to obtain combined vectors;
the intention recognition module is used for sequentially taking all the obtained combined vectors as the input of a recurrent neural network to obtain intention data representing the text data; the intention recognition module is specifically used for sequentially inputting the obtained combined vectors into a recurrent neural network layer for coding to obtain a first coding result; combining all the rule vectors obtained by matching and performing feature coding after a pooling operation to obtain a second coding result; fusing the obtained first coding result and second coding result to obtain a fused coding result; and feeding the fused coding result to a Softmax layer for intention recognition, thereby obtaining intention data representing the text data.
6. The apparatus of claim 5, wherein
the character and/or word vector generation module is specifically configured to perform word segmentation processing on the text data to obtain a word segmentation processing result, and to pre-train the word segmentation processing result on a data set to obtain a plurality of character vectors and/or word vectors.
7. The apparatus of claim 5, further comprising
And the sequence recognition module is used for taking the first coding result obtained in sequence as the input of a conditional random field CRF so as to obtain the sequence label corresponding to the character and/or word vector.
8. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the method for rule fusion based semantic analysis of any one of claims 1-4.
CN201910372887.XA 2019-05-06 2019-05-06 Semantic analysis method and device based on rule fusion and readable storage medium Active CN110334340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910372887.XA CN110334340B (en) 2019-05-06 2019-05-06 Semantic analysis method and device based on rule fusion and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910372887.XA CN110334340B (en) 2019-05-06 2019-05-06 Semantic analysis method and device based on rule fusion and readable storage medium

Publications (2)

Publication Number Publication Date
CN110334340A (en) 2019-10-15
CN110334340B (en) 2021-08-03

Family

ID=68140083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910372887.XA Active CN110334340B (en) 2019-05-06 2019-05-06 Semantic analysis method and device based on rule fusion and readable storage medium

Country Status (1)

Country Link
CN (1) CN110334340B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737999A (en) * 2020-06-24 2020-10-02 深圳前海微众银行股份有限公司 Sequence labeling method, device and equipment and readable storage medium
CN112906380A (en) * 2021-02-02 2021-06-04 北京有竹居网络技术有限公司 Method and device for identifying role in text, readable medium and electronic equipment
CN113256459A (en) * 2021-04-30 2021-08-13 深圳市鹰硕教育服务有限公司 Micro-course video management method, device, system and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415923A (en) * 2017-10-18 2018-08-17 北京邮电大学 The intelligent interactive system of closed domain
CN109697282A (en) * 2017-10-20 2019-04-30 阿里巴巴集团控股有限公司 A kind of the user's intension recognizing method and device of sentence
CN107871138A (en) * 2017-11-01 2018-04-03 电子科技大学 A kind of target intention recognition methods based on improvement D S evidence theories
CN109241255A (en) * 2018-08-20 2019-01-18 华中师范大学 A kind of intension recognizing method based on deep learning
CN109376847A (en) * 2018-08-31 2019-02-22 深圳壹账通智能科技有限公司 User's intension recognizing method, device, terminal and computer readable storage medium
CN109063221A (en) * 2018-11-02 2018-12-21 北京百度网讯科技有限公司 Query intention recognition methods and device based on mixed strategy
CN109543190A (en) * 2018-11-29 2019-03-29 北京羽扇智信息科技有限公司 A kind of intension recognizing method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dialogue Intent Classification with Long Short-Term Memory Networks; Lian Meng et al.; 《National CCF Conference on Natural Language Processing and》; 2018-01-05; pp. 42-50 *
Query Intent Recognition Based on Multi-Class Features; LIRONG QIU et al.; 《SPECIAL SECTION ON MULTIMEDIA ANALYSIS FOR INTERNET-OF-THINGS》; 2018-09-10; pp. 52195-52204 *

Also Published As

Publication number Publication date
CN110334340A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110609891B (en) Visual dialog generation method based on context awareness graph neural network
CN108073711B (en) Relation extraction method and system based on knowledge graph
CN104598611B (en) The method and system being ranked up to search entry
CN110298043B (en) Vehicle named entity identification method and system
CN110334340B (en) Semantic analysis method and device based on rule fusion and readable storage medium
CN109214006B (en) Natural language reasoning method for image enhanced hierarchical semantic representation
CN113127624B (en) Question-answer model training method and device
CN114298121B (en) Multi-mode-based text generation method, model training method and device
CN109933792A (en) Viewpoint type problem based on multi-layer biaxially oriented LSTM and verifying model reads understanding method
CN112699686B (en) Semantic understanding method, device, equipment and medium based on task type dialogue system
CN113035311A (en) Medical image report automatic generation method based on multi-mode attention mechanism
CN111368066B (en) Method, apparatus and computer readable storage medium for obtaining dialogue abstract
CN113392209A (en) Text clustering method based on artificial intelligence, related equipment and storage medium
CN115438215A (en) Image-text bidirectional search and matching model training method, device, equipment and medium
CN113408287A (en) Entity identification method and device, electronic equipment and storage medium
CN112948505A (en) Entity relationship classification model construction method, device and storage medium
CN115470232A (en) Model training and data query method and device, electronic equipment and storage medium
CN112349294A (en) Voice processing method and device, computer readable medium and electronic equipment
CN115409038A (en) Natural language processing method and device, electronic equipment and storage medium
CN112380861B (en) Model training method and device and intention recognition method and device
CN113486174A (en) Model training, reading understanding method and device, electronic equipment and storage medium
CN117828024A (en) Plug-in retrieval method, device, storage medium and equipment
CN116578671A (en) Emotion-reason pair extraction method and device
CN114519353B (en) Model training method, emotion message generation method and device, equipment and medium
CN114416941B (en) Knowledge graph-fused dialogue knowledge point determination model generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: East of 1st floor, No.36 Haidian Street, Haidian District, Beijing, 100080

Patentee after: Beijing Teddy Future Technology Co.,Ltd.

Address before: East of 1st floor, No.36 Haidian Street, Haidian District, Beijing, 100080

Patentee before: Beijing Teddy Bear Mobile Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: East of 1st floor, No.36 Haidian Street, Haidian District, Beijing, 100080

Patentee after: Beijing Teddy Bear Mobile Technology Co.,Ltd.

Address before: 100085 07a36, block D, 7 / F, No.28, information road, Haidian District, Beijing

Patentee before: BEIJING TEDDY BEAR MOBILE TECHNOLOGY Co.,Ltd.