CN110516035A - A hybrid-module human-computer interaction method and system - Google Patents

A hybrid-module human-computer interaction method and system

Info

Publication number
CN110516035A
Authority
CN
China
Prior art keywords
network
user
word
input
man
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910605120.7A
Other languages
Chinese (zh)
Inventor
赵生捷
张冰
张林
史清江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201910605120.7A priority Critical patent/CN110516035A/en
Publication of CN110516035A publication Critical patent/CN110516035A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The present invention relates to a hybrid-module human-computer interaction method and system. The interaction method specifically includes the following steps: obtaining the Chinese corpus data input by the user; performing word segmentation on the Chinese corpus and obtaining word vectors; classifying the user's intent from the word vectors with an LSTM network, judging whether the user wants to chat or to complete a specific task; if the intent is judged to be chatting, a non-task-oriented Seq2Seq network generates and processes the response; if the intent is judged to be completing a specific task, a task-oriented Mem2Seq network generates and processes the response. Compared with the prior art, the present invention can both complete the specific tasks the user requests and chat with the user, and therefore offers better practicality and completeness.

Description

A hybrid-module human-computer interaction method and system
Technical field
The present invention relates to the field of human-computer interaction, and more particularly to a hybrid-module human-computer interaction method and system.
Background art
In today's information age, human-computer interaction is a fundamental technology with a significant impact on how people produce and live. It studies the mutual influence between people and computing devices, and its goal is to let machines help people complete their tasks efficiently, comfortably, and safely. Dialogue systems are one of the core areas of human-computer interaction technology: by giving computers the ability to understand human natural language, complete specific tasks, and reply in natural language, they can greatly improve convenience in daily life.
Current human-computer interaction methods usually rely on either a single task-oriented dialogue system or a single non-task-oriented dialogue system. A task-oriented dialogue system focuses on completing the user's specific task, usually has no chatting ability, and converses with the user only within a specific domain; a non-task-oriented dialogue system, on the other hand, can only chat with the user and cannot complete tasks, which narrows the scope in which human-computer interaction can be applied.
Summary of the invention
The object of the present invention is to overcome the above-mentioned drawbacks of the prior art and to provide a hybrid-module human-computer interaction method and system.
The object of the present invention can be achieved through the following technical solution:
A hybrid-module human-computer interaction method, specifically comprising the following steps:
S1: obtaining the Chinese corpus data input by the user;
S2: performing word segmentation on the Chinese corpus and obtaining word vectors;
S3: classifying the user's intent from the word vectors with an LSTM network, judging whether the user wants to chat or to complete a specific task; if the intent is judged to be chatting, a non-task-oriented Seq2Seq network generates and processes the response; if the intent is judged to be completing a specific task, a task-oriented Mem2Seq network generates and processes the response.
Further, in step S2, a Chinese word-segmentation module is used to cut the Chinese character sequence into individual words, and the word vectors are obtained with a word2vec model.
Further, in step S3, an LSTM recurrent neural network model is used to analyze the user input and output the predicted user intent.
Further, the LSTM recurrent neural network model is expressed as:
x = w_1, ..., w_n, <EOS>
y = i
y = LSTM(x)
p(y|x) = p(i|w_1, ..., w_n)
where x denotes the input sequence of n+1 tokens, consisting of the words w_1, ..., w_n and the sentence terminator <EOS>; y is the intent i output by the LSTM given the input x; and p(y|x) is the probability that the output is y when the input is x.
Further, the model of the Seq2Seq network is expressed as:
p(y_1, ..., y_{T'} | x_1, x_2, ..., x_T) = ∏_{t=1}^{T'} p(y_t | c, y_1, ..., y_{t-1})
where x_1, x_2, ..., x_T is the user input sequence containing T words, y_1, y_2, ..., y_{T'} is the response sequence of T' words generated by the Seq2Seq network, and c is the context vector.
Further, in step S3, when the Mem2Seq network generates and processes the response, slot filling is used to obtain the task keywords from the user input; the external knowledge base is then searched with these keywords to obtain a candidate knowledge set, which is fed into the Mem2Seq network as dialogue history data, and the system reply is generated from it.
Further, the Mem2Seq network is responsible for completing task-oriented dialogue; it uses a memory to record the past dialogue history and sets a classifier to judge whether the reply word currently being generated should be extracted from the dialogue history or generated with a language model.
A hybrid-module human-computer interaction system, comprising:
an input module for obtaining the Chinese corpus data input by the user;
a preprocessing module for performing word segmentation on the Chinese corpus and obtaining word vectors;
a classification module for classifying the user's intent from the word vectors;
a Seq2Seq network for responding when the user's intent is classified as chatting;
a Mem2Seq network for responding when the user's intent is classified as completing a specific task.
Compared with the prior art, the present invention has the following advantages:
1. The present invention adopts Mem2Seq, an end-to-end method for task-oriented dialogue systems, and Seq2Seq, a neural generation model for non-task-oriented dialogue systems, and builds on these two neural network structures a Chinese dialogue system that combines task-oriented and non-task-oriented dialogue. By recognizing the user's intent, the dialogue system can both complete the specific tasks the user requests and chat with the user, and therefore offers better practicality and completeness.
2. With Google's word2vec model, the segmented Chinese corpus can easily be turned into vectors, so that sentences in natural language form are represented in a form the machine can understand, giving strong processing capability.
3. The present invention analyzes the input sequence with an LSTM recurrent neural network structure and classifies the intent of the user input, so that the system can select the Mem2Seq or the Seq2Seq model to handle it. LSTM, i.e. long short-term memory, is a model obtained by improving the traditional RNN recurrent neural network; it can keep long-term memory and context, which solves the long-range dependency problem encountered when processing long text.
Detailed description of the invention
Fig. 1 is a schematic diagram of the overall framework of the invention;
Fig. 2 is a schematic diagram of the skip-gram model;
Fig. 3 is a diagram of the internal workings of an LSTM network neuron;
Fig. 4 is an overall schematic diagram of the LSTM network structure used for user intent recognition;
Fig. 5 is a schematic diagram of the Seq2Seq network that handles non-task-oriented dialogue;
Fig. 6 is a schematic diagram of the Mem2Seq network that handles task-oriented dialogue.
Specific embodiment
The present invention is described in detail below with specific embodiments in conjunction with the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and a specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.
As shown in Fig. 1, this embodiment provides a hybrid-module human-computer interaction method that mainly comprises three parts: a text preprocessing part, a user intent classification part, and a reply generation part. Text preprocessing converts the text input by the user into a form the computer can understand; intent classification is then responsible for detecting the user's intent; after that, according to the detection result, the corresponding neural network model in the reply generation part is selected to handle task-oriented dialogue or non-task-oriented dialogue respectively.
The method specifically includes the following steps:
Step S1: obtain the Chinese corpus data input by the user;
Step S2: perform word segmentation on the Chinese corpus and obtain word vectors;
Step S3: classify the user's intent from the word vectors and judge whether the user wants to chat or to complete a specific task; if the intent is judged to be chatting, the non-task-oriented Seq2Seq network generates and processes the response; if the intent is judged to be completing a specific task, the task-oriented Mem2Seq network generates and processes the response. A minimal sketch of this dispatch logic follows.
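The following Python sketch only illustrates the control flow of these three steps. The stub functions (segment_and_embed, classify_intent, seq2seq_reply, mem2seq_reply) are hypothetical placeholders for the components described in the rest of this embodiment, not code from the patent.

def segment_and_embed(utterance):
    # Placeholder for jieba segmentation + word2vec lookup (step S2);
    # here it just returns one dummy 100-dimensional vector per character.
    return [[0.0] * 100 for _ in utterance]

def classify_intent(word_vectors):
    # Placeholder for the LSTM intent classifier (step S3): 0 = chat, 1 = task.
    return 0

def seq2seq_reply(word_vectors):
    return "chit-chat reply"        # non-task-oriented branch

def mem2seq_reply(word_vectors):
    return "task-oriented reply"    # task-oriented branch

def respond(utterance):
    vectors = segment_and_embed(utterance)   # S1 + S2
    intent = classify_intent(vectors)        # S3: intent classification
    return seq2seq_reply(vectors) if intent == 0 else mem2seq_reply(vectors)

print(respond("你今天心情好"))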
1. Text preprocessing part
Machines cannot directly understand human natural language, so the text must be converted into a form the computer can understand, i.e. the text is vectorized so that each word is represented by a low-dimensional, continuous vector. In English sentences, spaces between words serve as word separators, whereas a Chinese text is one continuous string of characters; the Chinese input therefore has to be segmented first, i.e. the Chinese character sequence is cut into individual words, for which an existing Chinese word-segmentation module, jieba, can be used. The word vectors are then obtained with Google's popular word2vec model, which yields a low-dimensional vector representation for each Chinese word. Specifically, this embodiment uses the skip-gram model of word2vec to pre-train the word vectors; the skip-gram model is shown in Fig. 2. A minimal preprocessing sketch follows.
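A minimal preprocessing sketch in Python, assuming the third-party jieba and gensim packages; the toy corpus, vector size, and window are illustrative values, not parameters fixed by the patent.

import jieba
from gensim.models import Word2Vec

corpus = ["今天天气真好", "帮我预订一家西餐厅"]
tokenized = [list(jieba.cut(sentence)) for sentence in corpus]   # cut each sentence into words

# Train skip-gram word vectors (sg=1 selects the skip-gram architecture of word2vec).
model = Word2Vec(tokenized, vector_size=100, window=5, min_count=1, sg=1)
vector = model.wv["天气"]   # low-dimensional, continuous representation of one word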
2. User intent classification part
To realize intent classification, the present invention uses an LSTM network structure for prediction. After text preprocessing of the raw Chinese sequence input by the user, a word-vector sequence representing the user input is obtained; it is used as the input variable of the LSTM network, which predicts and outputs the user's intent.
The LSTM network controls its long-term state through a forget gate, an input gate, and an output gate, which lets the network keep memories and gives it the ability to interpret input sequences that depend on earlier information and context.
Forget gate: it determines how much of the network's cell state c_{t-1} at the previous moment is retained in the current state c_t, which gives the LSTM network the ability to keep earlier information for a long time. Its calculation formula is:
f_t = σ(W_f·[h_{t-1}, x_t] + b_f)
where W_f is the weight matrix of the forget gate, h_{t-1} is the network output at the previous moment, x_t is the network input at the current moment, b_f is the bias term of the forget gate, and σ is the sigmoid function.
Input gate: it determines how much of the network's current input x_t is saved into the current state c_t, which gives the LSTM network the ability to keep currently unimportant content out of its memory. The input gate is computed as:
i_t = σ(W_i·[h_{t-1}, x_t] + b_i)
The candidate state corresponding to the current input, c̃_t, is computed as:
c̃_t = tanh(W_c·[h_{t-1}, x_t] + b_c)
where W_i and W_c are the corresponding weight matrices and b_i and b_c are the corresponding bias terms. The current state c_t is then obtained as:
c_t = f_t ∘ c_{t-1} + i_t ∘ c̃_t
Output gate: it controls how much of the current state c_t is output to the current output value h_t of the LSTM network, which gives the LSTM network the ability to control how long-term memory affects the current output. The output gate is computed as:
o_t = σ(W_o·[h_{t-1}, x_t] + b_o)
where W_o is the weight matrix of the output gate and b_o is its bias term. The network output at the current moment is then:
h_t = o_t ∘ tanh(c_t)
The internal workings of an LSTM network neuron are shown in Fig. 3. A worked numerical sketch of these gate equations follows.
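The numpy sketch below implements the gate equations above for a single time step; the dimensions and random weights are illustrative only, not values from the patent.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    # One LSTM step; every weight matrix acts on the concatenation [h_{t-1}, x_t].
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])        # input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate state
    c_t = f_t * c_prev + i_t * c_tilde        # new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate
    h_t = o_t * np.tanh(c_t)                  # new hidden state / output
    return h_t, c_t

hidden_dim, input_dim = 4, 3
rng = np.random.default_rng(0)
W = {gate: rng.standard_normal((hidden_dim, hidden_dim + input_dim)) for gate in "fico"}
b = {gate: np.zeros(hidden_dim) for gate in "fico"}
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
h, c = lstm_cell(rng.standard_normal(input_dim), h, c, W, b)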
For the present invention, when training the LSTM network, an input sample is a user utterance such as "你今天心情好" ("you are in a good mood today"), so that the network inputs at the successive moments are x_1 = 你, x_2 = 今, x_3 = 天, x_4 = 心, x_5 = 情, x_6 = 好, x_7 = <EOS>, and the output sample is the judgment h_7 = 0 (chat) or h_7 = 1 (task). That is, the training samples consist of pairs such as <"hello", 0> or <"book a western restaurant", 1>.
The optimization objective of the LSTM network structure is to maximize the conditional probability of the intent given the input sequence, formulated as follows:
x = w_1, ..., w_n, <EOS>
y = i
y = LSTM(x)
p(y|x) = p(i|w_1, ..., w_n)
where x denotes the input sequence of n+1 tokens, consisting of the words w_1, ..., w_n and the sentence terminator <EOS>; y is the intent i output by the LSTM given the input x; and p(y|x) is the probability that the output is y when the input is x.
The LSTM network structure used is shown in Fig. 4. A minimal classifier sketch follows.
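A minimal PyTorch sketch of such a binary intent classifier (0 = chat, 1 = task), assuming pre-computed word vectors are fed in; the layer sizes, the random example input, and the single training step are illustrative, not the patent's configuration.

import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    def __init__(self, embed_dim=100, hidden_dim=128, num_intents=2):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_intents)

    def forward(self, word_vectors):              # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(word_vectors)     # final hidden state summarises the sequence
        return self.fc(h_n[-1])                   # logits over {chat, task}

model = IntentClassifier()
x = torch.randn(1, 7, 100)                        # one segmented, embedded utterance + <EOS>
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0]))   # label 0 = chat
loss.backward()                                   # trained by backpropagation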
3. Reply generation part
This part contains two network structures, a Seq2Seq network and a Mem2Seq network. According to the intent classification result, if the user intent is "chat", the input is handed to the Seq2Seq network, which generates a reasonable reply; if the user intent is "task", the input is handed to the Mem2Seq network, which completes the specific task requested by the user and generates the corresponding reply.
Seq2Seq network
As shown in Fig. 5, the Seq2Seq network is responsible for completing non-task-oriented dialogue. It consists of two parts, an encoder and a decoder. The encoder is an RNN network of several layers; the input sequence is processed by the encoder from left to right, and the hidden state at the last moment of each layer is taken as the context vector c. The decoder is an RNN network with exactly the same structure as the encoder; it takes the context vector c produced by the encoder as input and predicts the current output symbol. The specific RNN structure used in the present invention is the LSTM recurrent neural network.
Expressed in symbols: given an input sequence X = (x_1, x_2, ..., x_T) of T words and a target sequence Y = (y_1, y_2, ..., y_{T'}) of length T', the Seq2Seq network maximizes the conditional probability p(y_1, ..., y_{T'} | x_1, x_2, ..., x_T).
Combining the encoder process and the decoder process: the encoder process uses an LSTM recurrent network to generate the semantic vector:
h_t = LSTM(x_t, h_{t-1})
c = φ(h_1, ..., h_T)
where h_{t-1} is the output of the previous hidden node, x_t is the input at the current moment, and the context vector c is usually the last hidden node of the LSTM. The decoder process uses another LSTM and predicts the current output symbol y_t from the current state h_t.
The objective function of the Seq2Seq network is therefore defined as:
p(y_1, ..., y_{T'} | x_1, ..., x_T) = ∏_{t=1}^{T'} p(y_t | c, y_1, ..., y_{t-1})
A minimal encoder-decoder sketch follows.
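A minimal PyTorch encoder-decoder sketch in the spirit of the description above: the encoder's final LSTM states serve as the context passed to the decoder. The vocabulary size, dimensions, and random index tensors are illustrative only, and teacher forcing is used instead of step-by-step decoding for brevity.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        _, context = self.encoder(self.embed(src))    # context c: final (h, c) states of the encoder
        dec_out, _ = self.decoder(self.embed(tgt), context)
        return self.out(dec_out)                      # logits for p(y_t | c, y_1, ..., y_{t-1})

model = Seq2Seq(vocab_size=5000)
src = torch.randint(0, 5000, (1, 7))   # user input sequence x_1, ..., x_T
tgt = torch.randint(0, 5000, (1, 9))   # response sequence y_1, ..., y_{T'}
logits = model(src, tgt)               # shape (1, 9, 5000)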
Mem2Seq network
As shown in Fig. 6, the Mem2Seq network is responsible for completing task-oriented dialogue. It uses a memory structure to record the past dialogue history and sets a classifier to judge whether the reply word currently being generated should be extracted from the dialogue history or generated by the language network; it is thus a network that combines retrieval and generation.
When the Mem2Seq network is used to complete a dialogue task, slot filling is applied first to obtain the keywords of the user's task. Relevant entries are then retrieved from the knowledge base according to the keywords and stored in the memory structure as the initial dialogue history. After that, whenever a new utterance is produced, it is also stored in the memory as the updated dialogue history. A minimal sketch of this lookup step follows.
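A toy Python sketch of this slot-filling and retrieval step, assuming a small in-memory knowledge base and simple keyword matching; the slot names, entries, and example utterance are made up for illustration.

KNOWLEDGE_BASE = [
    {"name": "餐厅A", "cuisine": "西餐", "area": "徐汇"},
    {"name": "餐厅B", "cuisine": "中餐", "area": "杨浦"},
]
SLOT_VALUES = {"cuisine": ["西餐", "中餐"], "area": ["徐汇", "杨浦"]}

def fill_slots(tokens):
    # Extract task keywords (slot values) mentioned in the segmented user input.
    return {slot: value for slot, values in SLOT_VALUES.items()
            for value in values if value in tokens}

def retrieve(slots):
    # Candidate knowledge set: knowledge-base entries matching every filled slot.
    return [entry for entry in KNOWLEDGE_BASE
            if all(entry.get(slot) == value for slot, value in slots.items())]

tokens = ["帮", "我", "订", "一家", "徐汇", "的", "西餐", "厅"]
initial_history = retrieve(fill_slots(tokens))   # stored in memory as the initial dialogue history
print(initial_history)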
Like Seq2Seq, the Mem2Seq network consists of two parts, an encoder and a decoder.
For the encoder: it contains a MemNN structure, which uses the user input vector as a query vector to map over the dialogue history in the memory; after mapping layer by layer, the history is expressed as a memory vector, which is then read by the decoder to generate the response. Expressed in symbols: the memory is represented as a set of trainable embedding matrices C = {C^1, ..., C^{K+1}}, where each C^k maps an input vector q^k to a new vector. The operation of the input vector q^k on C^k is formulated as:
p_i^k = softmax((q^k)^T C_i^k)
o^k = Σ_i p_i^k C_i^{k+1}
where C_i^k is the column vector at position i of C^k, p_i^k is the soft memory selector reflecting the correlation between the input vector q^k and C_i^k, and o^k is the resulting output vector. The query vector for C^{k+1} is then q^{k+1} = q^k + o^k. The memory vector finally obtained by the encoder is o^K. A worked sketch of this multi-hop read follows.
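The numpy sketch below performs such a K-hop memory read; the number of hops, the memory size, and the embedding dimension are arbitrary illustrative values, and the embedding matrices are random stand-ins for trained ones.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memnn_encode(query, memories):
    # memories[k] is the k-th embedding matrix C^k with shape (num_items, dim).
    q, o, attn = query, None, None
    for k in range(len(memories) - 1):        # K hops over C^1, ..., C^{K+1}
        attn = softmax(memories[k] @ q)       # p^k: soft memory selector
        o = memories[k + 1].T @ attn          # o^k: attention-weighted read-out
        q = q + o                             # q^{k+1} = q^k + o^k
    return o, attn                            # final memory vector o^K and last-hop attention

dim, num_items, hops = 8, 5, 3
rng = np.random.default_rng(1)
C = [rng.standard_normal((num_items, dim)) for _ in range(hops + 1)]   # C^1, ..., C^{K+1}
o_K, p_K = memnn_encode(rng.standard_normal(dim), C)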
For the decoder: it contains an RNN structure and a MemNN structure, and the memory in the MemNN structure copies the content of the memory in the encoder. Specifically, the RNN structure of the decoder uses a GRU recurrent neural network, which serves as the dynamic query generator of the MemNN: at each time step, the GRU takes the word generated at the previous moment and the query of the previous moment as input, produces a new query vector, and passes it to the MemNN structure, which generates the new response word. Expressed in symbols: the memory is the same as in the encoder, i.e. C = {C^1, ..., C^{K+1}}, and
h_t = GRU(C^1(ŷ_{t-1}), h_{t-1})
where ŷ_{t-1} is the response word generated at the previous moment, h_{t-1} is the query of the previous moment, and h_0 is defined to be the memory vector o^K obtained by the encoder. The query h_t generated at the current moment is then passed to the MemNN structure to generate the next response token. At each time step, the response token produced through the MemNN may come from the memory, i.e. the dialogue history data, or it may be generated by the language network; these correspond to two distributions, P_ptr and P_vocab, formulated as:
P_ptr = p_t^K
P_vocab(ŷ_t) = softmax(W_1[h_t; o^1])
where p_t^K is the soft memory selector computed by the MemNN at the last hop, h_t is the query generated by the GRU network at the current moment, o^1 is the output vector corresponding to C^1 of the MemNN, and W_1 is a trainable parameter. A minimal sketch of one decoding step follows.
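A numpy sketch of one decoding step producing the two distributions; to keep it short, a tanh update stands in for the GRU query generator, so this only mirrors the data flow (query, first-hop read-out o^1, last-hop attention) rather than the exact Mem2Seq cell, and all sizes are illustrative.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_hops(query, memories):
    # Multi-hop read over embedding matrices C^1, ..., C^{K+1} (shape: items x dim).
    q, o, attn = query, None, None
    for k in range(len(memories) - 1):
        attn = softmax(memories[k] @ q)       # soft memory selector at hop k
        o = memories[k + 1].T @ attn          # read-out vector o^k
        q = q + o
    return o, attn

def decode_step(prev_word_vec, h_prev, memories, W1):
    h_t = np.tanh(prev_word_vec + h_prev)              # stand-in for the GRU query update
    o_first, _ = memory_hops(h_t, memories[:2])        # single hop gives o^1
    _, p_ptr = memory_hops(h_t, memories)              # last-hop attention gives P_ptr
    p_vocab = softmax(W1 @ np.concatenate([h_t, o_first]))   # P_vocab = softmax(W_1[h_t; o^1])
    return p_vocab, p_ptr, h_t

vocab, dim, items, hops = 50, 8, 5, 3
rng = np.random.default_rng(2)
C = [rng.standard_normal((items, dim)) for _ in range(hops + 1)]
W1 = rng.standard_normal((vocab, 2 * dim))
p_vocab, p_ptr, h_1 = decode_step(rng.standard_normal(dim), np.zeros(dim), C, W1)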
The preferred embodiments of the present invention have been described in detail above. It should be understood that those skilled in the art can make many modifications and variations according to the concept of the present invention without creative work. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experimentation under the concept of the present invention shall fall within the protection scope determined by the claims.

Claims (8)

1. A hybrid-module human-computer interaction method, characterized in that it specifically comprises the following steps:
S1: obtaining the Chinese corpus data input by the user;
S2: performing word segmentation on the Chinese corpus and obtaining word vectors;
S3: classifying the user's intent from the word vectors with an LSTM network, judging whether the user wants to chat or to complete a specific task; if the intent is judged to be chatting, a non-task-oriented Seq2Seq network generates and processes the response; if the intent is judged to be completing a specific task, a task-oriented Mem2Seq network generates and processes the response.
2. The hybrid-module human-computer interaction method according to claim 1, characterized in that in step S2 a Chinese word-segmentation module is used to cut the Chinese character sequence into individual words, and the word vectors are obtained with a word2vec model.
3. The hybrid-module human-computer interaction method according to claim 1, characterized in that in step S3 an LSTM recurrent neural network model is used to analyze the user input and output the predicted user intent.
4. The hybrid-module human-computer interaction method according to claim 3, characterized in that the LSTM recurrent neural network model is expressed as:
x = w_1, ..., w_n, <EOS>
y = i
y = LSTM(x)
p(y|x) = p(i|w_1, ..., w_n)
where x denotes the input sequence of n+1 tokens, consisting of the words w_1, ..., w_n and the sentence terminator <EOS>; y is the intent i output by the LSTM given the input x; and p(y|x) is the probability that the output is y when the input is x.
5. The hybrid-module human-computer interaction method according to claim 1, characterized in that the model of the Seq2Seq network is expressed as:
p(y_1, ..., y_{T'} | x_1, x_2, ..., x_T) = ∏_{t=1}^{T'} p(y_t | c, y_1, ..., y_{t-1})
where x_1, x_2, ..., x_T is the user input sequence containing T words, y_1, y_2, ..., y_{T'} is the response sequence of T' words generated by the Seq2Seq network, and c is the context vector.
6. The hybrid-module human-computer interaction method according to claim 1, characterized in that in step S3, when the Mem2Seq network generates and processes the response, slot filling is used to obtain the task keywords from the user input, the external knowledge base is searched with these keywords to obtain a candidate knowledge set, and the candidate knowledge set is fed into the Mem2Seq network as dialogue history data, from which the system reply is generated.
7. The hybrid-module human-computer interaction method according to claim 1, characterized in that the Mem2Seq network is responsible for completing task-oriented dialogue, uses a memory to record the past dialogue history, and sets a classifier to judge whether the reply word currently being generated should be extracted from the dialogue history or generated with a language model.
8. A hybrid-module human-computer interaction system, characterized in that it comprises:
an input module for obtaining the Chinese corpus data input by the user;
a preprocessing module for performing word segmentation on the Chinese corpus and obtaining word vectors;
a classification module for classifying the user's intent from the word vectors;
a Seq2Seq network for responding when the user's intent is classified as chatting;
a Mem2Seq network for responding when the user's intent is classified as completing a specific task.
CN201910605120.7A 2019-07-05 2019-07-05 A hybrid-module human-computer interaction method and system Pending CN110516035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910605120.7A CN110516035A (en) 2019-07-05 2019-07-05 A hybrid-module human-computer interaction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910605120.7A CN110516035A (en) 2019-07-05 2019-07-05 A hybrid-module human-computer interaction method and system

Publications (1)

Publication Number Publication Date
CN110516035A true CN110516035A (en) 2019-11-29

Family

ID=68622370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910605120.7A Pending CN110516035A (en) 2019-07-05 2019-07-05 A hybrid-module human-computer interaction method and system

Country Status (1)

Country Link
CN (1) CN110516035A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108228559A (en) * 2016-12-22 2018-06-29 苏宁云商集团股份有限公司 A kind of human-computer interaction realization method and system for customer service
CN107193865A (en) * 2017-04-06 2017-09-22 上海奔影网络科技有限公司 Natural language is intended to understanding method and device in man-machine interaction
CN108197167A (en) * 2017-12-18 2018-06-22 深圳前海微众银行股份有限公司 Human-computer dialogue processing method, equipment and readable storage medium storing program for executing
CN108920622A (en) * 2018-06-29 2018-11-30 北京奇艺世纪科技有限公司 A kind of training method of intention assessment, training device and identification device
CN109145100A (en) * 2018-08-24 2019-01-04 深圳追科技有限公司 A kind of the Task customer service robot system and its working method of customizable process

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANDREA MADOTTO et al.: "Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems", Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177339A (en) * 2019-12-06 2020-05-19 百度在线网络技术(北京)有限公司 Dialog generation method and device, electronic equipment and storage medium
CN111177339B (en) * 2019-12-06 2023-07-25 百度在线网络技术(北京)有限公司 Dialogue generation method and device, electronic equipment and storage medium
CN111274374A (en) * 2020-01-19 2020-06-12 出门问问信息科技有限公司 Data processing method and device, computer storage medium and electronic equipment
CN111274374B (en) * 2020-01-19 2023-05-23 出门问问信息科技有限公司 Data processing method and device, computer storage medium and electronic equipment
CN111881280A (en) * 2020-07-28 2020-11-03 南方电网深圳数字电网研究院有限公司 Intelligent man-machine interaction system and method for power industry
CN112800204A (en) * 2021-02-24 2021-05-14 浪潮云信息技术股份公司 Construction method of intelligent dialogue system
CN113220856A (en) * 2021-05-28 2021-08-06 天津大学 Multi-round dialogue system based on Chinese pre-training model

Similar Documents

Publication Publication Date Title
US11631007B2 (en) Method and device for text-enhanced knowledge graph joint representation learning
CN108920622B (en) Training method, training device and recognition device for intention recognition
CN109241255B (en) Intention identification method based on deep learning
CN107798140B (en) Dialog system construction method, semantic controlled response method and device
CN110516035A (en) A kind of man-machine interaction method and system of mixing module
Su et al. LSTM-based text emotion recognition using semantic and emotional word vectors
CN109871538A (en) A kind of Chinese electronic health record name entity recognition method
CN109858041B (en) Named entity recognition method combining semi-supervised learning with user-defined dictionary
CN108829719A (en) The non-true class quiz answers selection method of one kind and system
CN107330011A (en) The recognition methods of the name entity of many strategy fusions and device
CN110647612A (en) Visual conversation generation method based on double-visual attention network
CN110502753A (en) A kind of deep learning sentiment analysis model and its analysis method based on semantically enhancement
CN109657054A (en) Abstraction generating method, device, server and storage medium
CN108628935A (en) A kind of answering method based on end-to-end memory network
Cai et al. Intelligent question answering in restricted domains using deep learning and question pair matching
CN111597341B (en) Document-level relation extraction method, device, equipment and storage medium
CN110347831A (en) Based on the sensibility classification method from attention mechanism
CN109635080A (en) Acknowledgment strategy generation method and device
Fung et al. Empathetic dialog systems
CN107679225A (en) A kind of reply generation method based on keyword
CN105955953A (en) Word segmentation system
CN110597968A (en) Reply selection method and device
CN111222330A (en) Chinese event detection method and system
Guo et al. Who is answering whom? Finding “Reply-To” relations in group chats with deep bidirectional LSTM networks
CN114462420A (en) False news detection method based on feature fusion model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191129

RJ01 Rejection of invention patent application after publication