CN108153913B - Training method of reply information generation model, reply information generation method and device - Google Patents

Training method of reply information generation model, reply information generation method and device

Info

Publication number
CN108153913B
Authority
CN
China
Prior art keywords
state vector
hidden state
reply information
input
encoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810068600.XA
Other languages
Chinese (zh)
Other versions
CN108153913A (en)
Inventor
蒋宏飞
王萌萌
李健铨
崔培君
晋耀红
杨凯程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Science and Technology (Beijing) Co., Ltd.
Original Assignee
Dingfu Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dingfu Intelligent Technology Co Ltd filed Critical Dingfu Intelligent Technology Co Ltd
Priority to CN201810068600.XA
Publication of CN108153913A
Application granted
Publication of CN108153913B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates

Abstract

The embodiments of the invention disclose a training method for a reply information generation model and a reply information generation method. The training method comprises the following steps: acquiring a first input sequence; encoding the first input sequence with a first encoder, using the first decoding hidden state vector as the initial value of the first encoder's hidden layer, to obtain a second encoding hidden state vector; decoding the second encoding hidden state vector with a decoder to obtain a first output sequence; calculating the error between a standard output sequence and the first output sequence; and, if the error is above a preset end threshold, updating the parameters of the first encoder and the decoder according to the error. When reply information is generated by a reply information generation model trained with this method, the conversation information of one or more previous rounds is introduced into the generation of the reply information for the next round of conversation, so that more accurate reply information is generated and the natural language interaction habits of users are better matched.

Description

Training method of reply information generation model, reply information generation method and device
Technical Field
The invention relates to the technical field of recurrent neural networks, and in particular to a training method and a training device for a reply information generation model. In addition, the application also relates to a reply information generation method and a reply information generation device.
Background
Intelligent question answering is a technology that provides information services to users in a question-and-answer form through human-computer interaction. According to the answer generation and feedback mechanism, intelligent question-answering systems can be divided into retrieval-based question-answering systems and generation-based question-answering systems. A generation-based answer feedback mechanism mainly uses a large number of interactive data pairs (question-answer pairs) to train a reply information generation model; user input information is then fed into the trained reply information generation model, which automatically generates reply information consisting of a word sequence.
An intelligent question-answering system based on a Recurrent Neural Network (RNN) model is one kind of generation-based question-answering system, and mainly comprises an encoding-decoding model, namely the reply information generation model. Referring to fig. 1, the encoding-decoding model generally includes two parts, an Encoder and a Decoder. The input information of a user is regarded as a sequence represented character by character (input sequence x); the encoder is an RNN model whose function is to encode the input sequence to obtain an encoded hidden state vector h. The decoder is also an RNN model, whose function is to decode the encoded hidden state vector h and convert it into another sequence represented character by character (output sequence y).
In the prior art, the intelligent question-answering system, including the above-mentioned intelligent question-answering system based on RNN model, is directed to a single round of conversation, and such a conversation form does not actually conform to the natural language interaction habit of the user. When people adopt natural language interaction, one topic often needs multiple rounds of interaction. In the process of multiple rounds of interaction, the session information of the current round often depends on the session information of the previous round, for example, the aforementioned information may be omitted from the session information of the current round. The existing intelligent question-answering system only depends on the input information of the current turn to generate the reply information, so that the accuracy of the reply information fed back to the user is low.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a training method for a reply information generation model and a method for generating reply information with the trained model, so that the generated reply information is more accurate and better matches the natural language interaction habits of users.
In a first aspect, a method for training a reply information generation model is provided, including:
acquiring a first input sequence, wherein the first input sequence is obtained by converting input information of a current round of conversation in a training corpus;
encoding the first input sequence by using a first encoder, with the first decoding hidden state vector taken as an initial value of a hidden layer of the first encoder, to obtain a second encoding hidden state vector, wherein the first encoder is an encoder based on an RNN (Recurrent Neural Network) model, and the first decoding hidden state vector is a state value of the last step in the hidden layer of a decoder in the previous round of conversation;
decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence, wherein the decoder is based on an RNN model;
calculating the error between a standard output sequence and the first output sequence, wherein the standard output sequence is obtained by converting reply information of the current round of conversation in the training corpus;
updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold.
With reference to the first aspect, in a first possible implementation manner of the first aspect, before the step of encoding the first input sequence by using the first encoder with the first decoding hidden state vector as an initial value of a hidden layer of the first encoder, the method further includes:
obtaining a semantic guide input sequence, wherein the semantic guide input sequence is obtained by converting semantic guide words, and the semantic guide words are words representing the semantics of the reply information of the current round in the training corpus;
encoding the semantic guidance input sequence by adopting a second encoder to obtain a semantic guidance hidden state vector, wherein the second encoder is an encoder based on an RNN (Recurrent Neural Network) model, and the semantic guidance hidden state vector is a state value of the last step in a hidden layer of the second encoder;
horizontally connecting the semantic guidance hidden state vector with the first decoding hidden state vector to obtain a first controlled hidden state vector;
the method for coding the first input sequence by using the first encoder to obtain the second coding hidden state vector by using the first decoding hidden state vector as the initial value of the hidden layer of the first encoder specifically comprises the following steps:
and coding the first input sequence by adopting the first coder by taking the first controlled hidden state vector as an initial value of a hidden layer of the first coder to obtain a second coded hidden state vector.
With reference to the first implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the step of obtaining the semantic guidance input sequence includes:
acquiring reply information of the current round of conversation in the training corpus;
performing word segmentation on the reply information;
extracting semantic guide words from the word segmentation result;
and converting the semantic guide words into semantic guide input sequences.
With reference to the first or second implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the step of decoding the second encoded hidden state vector by using a decoder to obtain a first output sequence includes:
calculating attention assignment weights α_ji of the decoding hidden state vector s_j at the j-th time in the decoder with respect to the semantic guidance hidden state vector h_0 in the second encoder and with respect to all encoded hidden state vectors {h_1, ..., h_i, ..., h_n} in the first encoder, wherein i = 0, 1, 2, ..., n; j = 1, 2, ..., m; n is the total number of encoded hidden state vectors h_i in the first encoder, and m is the total number of output values y_j in the output sequence of the decoder;
calculating, using the softmax function, the weighted average c_j = Σ_{i=0}^{n} α_ji h_i, wherein j = 0, 1, 2, ..., m;
decoding the second encoded hidden state vector with the decoder to obtain a first output sequence {y_1', ..., y_j', ..., y_m'}, where y_j' = g(y_{j-1}, s_j, c_j) and s_j = f(y_{j-1}, s_{j-1}, c_j), wherein f is a nonlinear activation function, g is a softmax function, y_{j-1} is the input value of the input layer of the decoder at the j-th time, and s_j is the decoding hidden state vector of the hidden layer of the decoder at the j-th time.
With reference to the first aspect and the first to third implementation manners, in a fourth possible implementation manner of the first aspect, the training method further includes:
and if the error is lower than or equal to a preset end threshold value, determining current parameters of the first encoder and the decoder as parameters of a reply information generation model.
In a second aspect, a reply information generation method is provided, including:
inputting a second input sequence into a reply information generation model for encoding and decoding by taking a third decoding hidden state vector as an initial value of a hidden layer of a first encoder to obtain a second output sequence; the third decoding hidden state vector is a state value of the last step in a hidden layer of a decoder of a reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by a user in the current round of conversation, and the reply information generation model is obtained by training by adopting the training method of the reply information generation model according to any one of claims 1 to 5;
and converting the second output sequence into a second reply message.
With reference to the second aspect, in a first possible implementation manner of the second aspect, before the step of inputting the second input sequence into the reply information generation model for encoding and decoding by using the third decoding hidden state vector as an initial value of the hidden layer of the first encoder to obtain the second output sequence, the method further includes:
acquiring a keyword input sequence, wherein the keyword input sequence is obtained by converting a preset second keyword;
encoding the keyword input sequence by adopting a second encoder to obtain a keyword hidden state vector, wherein the second encoder is an encoder based on an RNN (Recurrent Neural Network) model, and the keyword hidden state vector is a state value of the last step in a hidden layer of the second encoder;
horizontally connecting the keyword hidden state vector with the third decoding hidden state vector to obtain a second controlled hidden state vector;
and inputting a second input sequence into the reply information generation model for encoding and decoding by taking the third decoding hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence, wherein the method specifically comprises the following steps of:
and inputting the second input sequence into the reply information generation model for encoding and decoding by taking the second controlled hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence.
With reference to the first implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the step of obtaining the keyword input sequence includes:
acquiring second input information input by a user in the current round of conversation;
extracting a first keyword from the second input information, wherein the first keyword is a content word in the second input information;
acquiring a second keyword associated with the first keyword from a preset statistical library, wherein the statistical library is constructed on the basis of input information and reply information in the training corpus;
and converting the second keyword into a keyword input sequence.
In a third aspect, a training apparatus for generating a model from reply information is provided, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first input sequence, and the first input sequence is obtained by converting input information of a current round of conversation in a training corpus;
the training unit is used for coding the first input sequence by adopting a first encoder by taking a first decoding hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second coding hidden state vector; decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence; calculating an error of a standard output sequence from the first output sequence; and updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold; the first encoder is an RNN model-based encoder, the first decoding hidden state vector is a state value of the last step in a hidden layer of a decoder in the previous round of conversation, the decoder is an RNN model-based decoder, and the standard output sequence is obtained by converting reply information of the current round of conversation in the corpus.
In a fourth aspect, there is provided a reply information generation apparatus including:
the generating unit is used for inputting the second input sequence into the reply information generating model for encoding and decoding by taking the third decoding hidden state vector as an initial value of the hidden layer of the first encoder to obtain a second output sequence; the third decoding hidden state vector is a state value of the last step in a hidden layer of a decoder of a reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by a user in the current round of conversation, and the reply information generation model is obtained by training by adopting any one of the training methods of the reply information generation model in the first aspect;
and the conversion unit is used for converting the second output sequence into second reply information.
According to the method for training the reply information generation model, the reply information generation model is trained by adopting the training corpora of multiple rounds of conversations, the reply information of the previous round of conversations in the training corpora is introduced into the reply information generation process of the next round of conversations, and the reply information generation model is obtained through training. And then generating reply information by using the trained reply information generation model. Similarly to the training process, when generating the reply information, the state value of the last step in the hidden layer of the decoder in the previous round of conversation is used as the initial value of the hidden layer of the first encoder in the next round of conversation, the second input sequence of the next round is encoded, and then the decoder is used for decoding. Therefore, the conversation information of the previous round or the previous rounds is introduced into the generation process of the reply information of the next round of conversation, so that more accurate reply information is generated, and the natural language interaction habit of the user is better met.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a reply information generation model (encoding-decoding model) in the prior art.
FIG. 2 is a flowchart illustrating a first embodiment of a training method for a reply information generation model according to the present application.
Fig. 3 is a schematic diagram in which the encoding and decoding steps of the first encoder and decoder based on the simple RNN model, in the first embodiment of the training method for the reply information generation model of the present application, are expanded in time sequence.
Fig. 4 is a schematic diagram illustrating encoding and decoding steps of a first encoder and decoder based on a bidirectional RNN model in a time sequence according to the first embodiment of the training method for a reply information generation model of the present application.
FIG. 5 is a flowchart illustrating a second embodiment of the training method for the reply information generation model according to the present application.
Fig. 6 is a schematic diagram illustrating encoding and decoding steps of a first encoder and decoder based on a bidirectional RNN model in a time sequence according to a second embodiment of the training method for a reply information generation model of the present application.
Fig. 7 is a flowchart of one implementation manner of step S710 in the second embodiment of the training method for a reply information generation model according to the present application.
Fig. 8 is a flowchart of a reply message generation method according to a first embodiment of the present application.
Fig. 9 is a schematic diagram illustrating the encoding and decoding steps of the first encoder and decoder based on the simple RNN model in a time sequence according to the first embodiment of the reply information generation method of the present application.
Fig. 10 is a flowchart of a reply message generation method according to a second embodiment of the present application.
Fig. 11 is a flowchart of one implementation manner of step S1010 in the second embodiment of the reply message generation method according to the present application.
Fig. 12 is a schematic structural diagram of one embodiment of a training apparatus for generating a model according to the reply information of the present application.
Fig. 13 is a schematic structural diagram of one embodiment of a reply information generation apparatus according to the present application.
Detailed Description
The following provides a detailed description of the embodiments of the present application.
In order to solve the problem of low accuracy of reply information generated only by input information of the current round, a reply information generation method based on a time sliding window has been developed, that is, a time sliding window is set to extract a session record within a certain time range before the current session, characteristics are extracted from the session record and supplemented to the session information of the current round, and then reply information of the current round is generated according to the session information after the current round is supplemented. However, such methods have several problems: first, the length of the time sliding window is difficult to determine; secondly, there is a great difficulty in extracting features from the session records and complementing the features into the session information of the current round. This is because spoken language phenomena are common in natural language expressions, for example, sentence components are omitted, and therefore, there is no satisfactory solution to how to extract effective features from the conversation records and how to accurately complement the features to appropriate positions in the current round of conversation information. Therefore, the reply information generation method based on the time sliding window has the problem of low accuracy of reply information.
Therefore, the application provides a new reply information generation method, which mainly utilizes a reply information generation model comprising a first encoder and a decoder, takes the state value of the last step in the hidden layer of the decoder in the previous round of conversation as the initial value of the hidden layer of the first encoder in the next round of conversation, encodes the input sequence of the next round, and then decodes by the decoder, thereby introducing the conversation information of the previous round or the previous rounds into the generation process of the reply information of the next round of conversation, and further generating more accurate reply information.
In this application, the reply information generation model includes a first encoder and a decoder, and the first encoder is an RNN model-based encoder and the decoder is an RNN model-based decoder, that is, the first encoder and the decoder are both implemented by an RNN model.
Classified according to the cells of the RNN model, RNN models include the simple RNN model, the LSTM (Long Short-Term Memory) model, the GRU (Gated Recurrent Unit) model, the Bidirectional RNN (Bidirectional Recurrent Neural Network) model and the like, wherein the Bidirectional RNN model includes the Bidirectional simple RNN model, the Bidirectional LSTM model and the Bidirectional GRU model.
The first encoder and decoder in the reply information generation model in the present application may be based on any of the RNN models described above. Wherein the first encoder comprises an input layer and a hidden layer, and the decoder comprises an input layer, a hidden layer and an output layer. It should be noted here that the first encoder may not include the output layer, and may also include the output layer, which is not limited in this application. Even if the first encoder comprises an output layer, the output values of its output layer are not used in the scheme of the present application, so in the most basic case, the first encoder in the present application may comprise only an input layer and a hidden layer.
Experiments show that, compared with a unidirectional RNN model, when the first encoder and the decoder adopt a bidirectional RNN model, the reply information generated by the resulting reply information generation model is more accurate.
For convenience of understanding, the present embodiment first describes a training process of the reply information generation model, and then describes a process of generating reply information using the trained reply information generation model.
The training phase is mainly used for determining parameters in the reply information generation model. Taking a simple RNN model as an example, the main parameters determined in the training phase include: the weight U from the input layer to the hidden layer, the weight W of the hidden state vector transfer between the hidden layers, and the weight V from the hidden layer to the output layer.
Referring to fig. 2 and 3, in a first embodiment of the present application, a method for training a reply information generation model is provided, which includes steps S100-S500.
S100: and acquiring a first input sequence, wherein the first input sequence is obtained by converting input information of the current round of conversation in the training corpus.
In the present application, the corpus includes at least one set of multi-turn corpus, the multi-turn corpus includes at least two turns of session information, and the session information of each turn is a data pair of "input information-reply information". For example, a set of multi-turn corpora may be:
the first round of input information: i want to eat the mango in Yunnan.
First round reply information: The mangoes of Yunnan are quite good.
And inputting information in the second round: can you buy it on the network?
Second round reply information: help you find an internet shop bar selling yunnan mango?
The third round of input information: preferably o.
The third round replies with the message: the sales of XX [ shop name ] are high.
In the actual training process, the training corpus usually contains many groups of multiple rounds of corpuses, for example, the training corpus may include one thousand groups, ten thousand groups, or even more.
In step S100, the first input sequence is obtained by converting input information of a current turn of conversation in the corpus. Specifically, each word of the input information may be converted into a word vector by using a preset dictionary, where the preset dictionary includes a corresponding relationship between each word and the word vector. The word vector for each word can be trained in advance using existing methods, for example, word2vec, which is a tool for word vector calculation.
For example, assume that the current round is the second round, and the input information of the second round is: can you buy it on the network?
Converting the input information into a first input sequence:
Each character of the input information is mapped to a word vector in order: the first character → x_1, the second character → x_2, ..., the seventh character (the question mark) → x_7. The first input sequence {x_1, x_2, x_3, x_4, x_5, x_6, x_7} is obtained.
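As a rough, non-authoritative sketch of this conversion step (the token list, dictionary contents and vector dimension below are invented for illustration and are not the actual corpus or preset dictionary), the lookup can be pictured as follows:

import numpy as np

# Hypothetical preset dictionary: each token maps to a pre-trained word vector.
# In practice the vectors would come from a tool such as word2vec; random
# placeholder values are used here only to make the sketch runnable.
VECTOR_DIM = 4
rng = np.random.default_rng(0)
preset_dictionary = {tok: rng.random(VECTOR_DIM)
                     for tok in ["can", "you", "buy", "it", "online", "?"]}

def to_input_sequence(tokens, dictionary):
    # Look up each token of the input information in order to obtain {x_1, ..., x_n}.
    return [dictionary[tok] for tok in tokens if tok in dictionary]

first_input_sequence = to_input_sequence(["can", "you", "buy", "it", "online", "?"],
                                         preset_dictionary)
print(len(first_input_sequence))  # 6 word vectors x_1 ... x_6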
S200: and coding the first input sequence by adopting a first coder to obtain a second coding hidden state vector by taking the first decoding hidden state vector as an initial value of a hidden layer of the first coder, wherein the first coder is a coder based on an RNN (radio network node) model, and the first decoding hidden state vector is a state value of the last step in the hidden layer of a decoder in the previous round of conversation.
A hidden state is the state of the hidden layer of an RNN-based encoder or decoder at a certain moment, and the value of that state, namely the hidden state vector, can be represented by a vector.
Here, a process of training the reply information generation model using the input information and the reply information of the previous session in the corpus is described. For example, the previous corpus example is assumed to be the first round of conversation.
A randomly set vector is taken as the initial value of the hidden layer of the first encoder, the word vector of the first character "I" of the first round of input information is taken as the input value of the input layer of the first encoder, and the encoded hidden state vector at the first moment is calculated. In this step, the initial value of the first encoder's hidden layer may be set randomly or preset by the user, for example preset to [0, 0, 0, 0, ..., 0]. When it is set randomly, the values may be drawn according to rules such as a uniform distribution or a truncated normal distribution. Then, the encoded hidden state vector at the first moment is taken as the hidden-layer state of the first encoder, the word vector of the second character "want" of the first round of input information is taken as the input value of the input layer of the first encoder, and the encoded hidden state vector at the second moment is calculated. The calculation proceeds in this way until the word vector of the eighth (last) character of the first round of input information is used as the input value of the input layer of the first encoder and the encoded hidden state vector at the eighth moment, namely the first encoded hidden state vector, is calculated. At this point, the first encoder completes the encoding process for the input sequence of the first round of the session.
The first encoded hidden state vector is used as the initial value of the decoder's hidden layer, the word vector of the terminator "." is used as the input value of the decoder's input layer, and the decoding hidden state vector at the first moment is calculated. Then, taking the decoding hidden state vector at the first moment as the hidden-layer state of the decoder and the word vector of the first character of the first round of reply information as the input value of the decoder's input layer, the decoding hidden state vector at the second moment is calculated. The calculation proceeds in this way until the word vector of the eighth (last) character of the first round of reply information is used as the input value of the decoder's input layer, and the decoding hidden state vector at the ninth moment, namely the first decoding hidden state vector, is calculated; this is the state value of the last step in the hidden layer of the decoder in the first round of conversation. At this point, the decoder completes the decoding process for the first round of the session.
In step S200, the first encoder encodes the first input sequence similarly to the process of encoding the input information of the first session in the corpus. The difference is that the random vector is not used as the initial value of the first encoder hidden layer, but the first decoding hidden state vector is used as the initial value of the first encoder hidden layer. And coding the first input sequence by adopting a first coder to obtain a second coding hidden state vector.
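A minimal sketch of step S200, assuming a simple RNN cell with a tanh activation and randomly initialised stand-in weights U and W (the dimensions and values are invented; the real parameters are learned during training): the hidden layer is seeded with the previous round's final decoder state instead of a random vector.

import numpy as np

rng = np.random.default_rng(1)
INPUT_DIM, HIDDEN_DIM = 4, 8

# Stand-in parameters; in the trained model U and W are learned.
U = rng.standard_normal((HIDDEN_DIM, INPUT_DIM)) * 0.1   # input layer -> hidden layer
W = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1  # hidden layer -> hidden layer

def encode(input_sequence, initial_hidden):
    # Simple-RNN encoding: h_i = tanh(U x_i + W h_{i-1}); returns h_1 ... h_n.
    h = initial_hidden
    hidden_states = []
    for x in input_sequence:
        h = np.tanh(U @ x + W @ h)
        hidden_states.append(h)
    return hidden_states

# Placeholder for s_z, the state value of the last step in the decoder's hidden
# layer in the previous round of conversation.
s_z = rng.standard_normal(HIDDEN_DIM)

# Step S200: the first input sequence of the current round is encoded with the
# first encoder's hidden layer initialised to s_z instead of a random vector.
first_input_sequence = [rng.random(INPUT_DIM) for _ in range(7)]
encoded_states = encode(first_input_sequence, s_z)
second_encoding_hidden_state = encoded_states[-1]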
S300: and decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence, wherein the decoder is based on an RNN model.
In step S300, the process of decoding the second encoded hidden state vector by the decoder is similar to the process of decoding the reply message of the first round session in the corpus. Firstly, taking a second coding hidden state vector as an initial value of a hidden layer of a decoder, taking a word vector of an end character of current round input information as an input value of an input layer of the decoder, and calculating to obtain a decoding hidden state vector at a first moment; and calculating to obtain an output vector of the first moment by using the decoding hidden state vector of the first moment. Then, the decoding hidden state vector at the first moment is taken as the input value of the hidden layer of the decoder, the word vector of the first word of the current turn reply information is taken as the input value of the input layer of the decoder, and the decoding hidden state vector at the second moment is obtained through calculation; and calculating the output vector of the second moment by using the decoding hidden state vector of the second moment. According to the calculation method, until the word vector of the last word of the current round reply information is used as the input value of the decoder input layer, the decoding hidden state vector at the last moment is obtained through calculation, namely the second decoding hidden state vector; and calculating by using the second decoding hidden state vector to obtain an output vector of the last moment. At this point, the decoder completes the decoding process for the second encoded hidden state vector of the current round. The set of output vectors at all time instants is the first output sequence.
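The training-time decoding pass can be sketched in the same spirit (again with invented dimensions and untrained stand-in weights): the second encoded hidden state vector initialises the decoder's hidden layer, and the word vectors of the terminator and of the current round's reply characters are fed to the decoder's input layer step by step.

import numpy as np

rng = np.random.default_rng(2)
INPUT_DIM, HIDDEN_DIM, VOCAB_SIZE = 4, 8, 50

# Stand-in decoder parameters (learned in the real model).
U_dec = rng.standard_normal((HIDDEN_DIM, INPUT_DIM)) * 0.1   # input layer -> hidden layer
W_dec = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1  # hidden layer -> hidden layer
V_dec = rng.standard_normal((VOCAB_SIZE, HIDDEN_DIM)) * 0.1  # hidden layer -> output layer

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_teacher_forced(initial_hidden, gold_input_vectors):
    # Training-time decoding: s_j = tanh(U y_{j-1} + W s_{j-1}), y_j' = softmax(V s_j).
    # gold_input_vectors holds the word vectors of [terminator, reply char 1, ...].
    s = initial_hidden
    outputs = []
    for y_prev in gold_input_vectors:
        s = np.tanh(U_dec @ y_prev + W_dec @ s)
        outputs.append(softmax(V_dec @ s))   # distribution over the vocabulary
    return outputs, s                        # first output sequence, last hidden state

second_encoding_hidden_state = rng.standard_normal(HIDDEN_DIM)   # from the encoder
gold_inputs = [rng.random(INPUT_DIM) for _ in range(6)]          # terminator + reply chars
first_output_sequence, last_decoder_state = decode_teacher_forced(
    second_encoding_hidden_state, gold_inputs)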
Referring to fig. 3, fig. 3 is a schematic diagram illustrating steps of encoding and decoding of the reply information generation model in a training process according to a time sequence, wherein a first encoder in the reply information generation model is a simple RNN model, and a decoder is the simple RNN model.
The first encoder comprises an input layer and a hidden layer. The set of input values in the input layer is denoted {x_1, ..., x_i, x_{i+1}, ...}; the set of hidden state vectors in the hidden layer is denoted {h_1, ..., h_i, h_{i+1}, ...}.
The decoder comprises an input layer, a hidden layer and an output layer. The set of input values in the input layer is denoted {y_1, ..., y_j, y_{j+1}, ...}; the set of hidden state vectors in the hidden layer is denoted {s_1, ..., s_j, s_{j+1}, ...}; the set of output values in the output layer is denoted {y_1', ..., y_j', y_{j+1}', ...}.
x_i is the input value of the input layer of the first encoder at the i-th moment.
h_i is the encoded hidden state vector of the hidden layer of the first encoder at the i-th moment, where h_i = f(x_i, h_{i-1}) and f is generally a nonlinear activation function such as tanh or ReLU. One common calculation of h_i is: h_i = f(U x_i + W h_{i-1}).
y_{j-1} is the input value of the input layer of the decoder at the j-th moment.
s_j is the decoding hidden state vector of the hidden layer of the decoder at the j-th moment, s_j = f(y_{j-1}, s_{j-1}), where f is generally a nonlinear activation function such as tanh or ReLU. One common calculation of s_j is: s_j = f(U y_{j-1} + W s_{j-1}).
y_j' is the output value of the output layer of the decoder at the j-th moment, y_j' = g(s_j), where g is a softmax function. More specifically, one common calculation of y_j' is: y_j' = softmax(V s_j).
h_z is the first encoded hidden state vector.
s_z is the first decoding hidden state vector.
U is the weight from the input layer to the hidden layer in the first encoder/decoder and can be represented by a matrix.
W is the weight from the previous hidden-layer state to the next hidden-layer state in the first encoder/decoder and can be represented by a matrix.
V is the weight from the hidden layer to the output layer in the decoder, and can be represented by a matrix.
U, W and V are the parameters of the reply information generation model. As shown in FIG. 3, after the model is unrolled in the time dimension, the values of U, W and V remain unchanged at every moment; that is, the parameters U, W and V are shared across time steps in the first encoder and the decoder.
At the beginning of training the reply information generation model, the initial values of the parameters U, W and V can be randomly selected or manually preset. During training, the values of U, W and V are continuously updated. When training of the reply information generation model is completed, U, W and V are determined such that the error between the standard output sequence and the first output sequence is minimized.
S400: and calculating the error between a standard output sequence and the first output sequence, wherein the standard output sequence is obtained by converting reply information of the current round of conversation in the training corpus.
The standard output sequence is obtained by converting the reply information of the current round of conversation in the training corpus, and the first output sequence is obtained by converting the reply information predicted by the reply information generation model under the current parameters. That is, the standard output sequence can be understood as the correct reply information and the first output sequence as the predicted reply information; by calculating the error between the two, the gap between the reply information predicted by the model under the current parameters and the correct reply information can be observed, so that the parameters of the reply information generation model can be adjusted according to the error.
Specifically, each word of the reply information of the current round of session in the corpus may be converted into a word vector by using a preset dictionary, where the preset dictionary may be the preset dictionary in the step S100.
The error between the standard output sequence and the first output sequence can be calculated with known error calculation methods commonly used when training RNN models, for example, a cross-entropy loss function.
Specifically, taking the cross-entropy loss function as an example and referring to fig. 3, the standard output sequence is {y_1, y_2, y_3, y_4} and the first output sequence is {y_1', y_2', y_3', y_4'}. Assuming that the first input sequence has a total of n word vectors, the total error between the standard output sequence and the first output sequence can be expressed as:
L(y, y') = -1/n Σ_{j∈N} (y_j log y_j')
where y_j denotes the j-th value in the standard output sequence and y_j' denotes the j-th output value in the first output sequence.
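Assuming each standard output value y_j is a one-hot vector over the vocabulary and each predicted output y_j' is a softmax distribution, the error above can be computed roughly as in the following sketch (illustrative values, not the patented implementation):

import numpy as np

def cross_entropy_error(standard_outputs, predicted_outputs):
    # Average cross-entropy between one-hot targets y_j and predictions y_j'.
    n = len(standard_outputs)
    total = 0.0
    for y, y_pred in zip(standard_outputs, predicted_outputs):
        total += -np.sum(y * np.log(y_pred + 1e-12))  # small epsilon avoids log(0)
    return total / n

# Toy example with a 4-symbol vocabulary and two output steps.
standard = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])]
predicted = [np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.2, 0.6, 0.1, 0.1])]
print(cross_entropy_error(standard, predicted))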
S500: updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold.
Here, in the step of updating the parameters of the first encoder and the decoder according to the error, an existing deep-learning parameter updating method may be employed; for example, the Back-Propagation Through Time (BPTT) algorithm may be used to update the parameters of the first encoder and the decoder.
The parameters of the specific update differ according to the RNN model used by the first encoder and the decoder, respectively. For example, if the first encoder and decoder are implemented by a simple RNN model, the parameters updated here may include U, W and V.
For another example, if the first encoder and decoder are implemented by a bidirectional RNN model, referring to fig. 4, the parameters updated here may include U, W, V, U', W' and V'. Here, U is the weight from the input layer to the hidden layer during forward encoding or forward decoding; W is the weight from the previous hidden-layer state to the next hidden-layer state during forward encoding or forward decoding; V is the weight from the hidden layer to the output layer during forward encoding or forward decoding. U' is the weight from the input layer to the hidden layer during backward encoding or backward decoding; W' is the weight from the previous hidden-layer state to the next hidden-layer state during backward encoding or backward decoding; V' is the weight from the hidden layer to the output layer during backward encoding or backward decoding. U, W, V, U', W' and V' may each be represented by a matrix.
Optionally, the training method for the reply information generation model of this embodiment further includes:
s600: and if the error is lower than or equal to a preset end threshold value, determining current parameters of the first encoder and the decoder as parameters of a reply information generation model.
Here, if the error is less than or equal to a preset ending threshold, that is, the reply information generation model is trained, the current parameters of the first encoder and the decoder are determined as the parameters of the reply information generation model, so that the reply information generation model with determined parameter values, that is, the trained reply information generation model, is obtained. The model can then be used to make predictions and generate reply messages.
Referring to fig. 5 and 6, in a second embodiment of the present application, a training method of a reply information generation model is provided, including steps S100, S710, S720, S730, S201, S300, S400, and S500.
S100: and acquiring a first input sequence, wherein the first input sequence is obtained by converting input information of the current round of conversation in the training corpus.
In this embodiment, the step S100 in the first embodiment may refer to the description of the step S100 in the first embodiment, and is not described herein again.
S710: and acquiring a semantic guide input sequence, wherein the semantic guide input sequence is obtained by converting semantic guide words, and the semantic guide words are words representing the semantics of the reply information of the current round in the training corpus. The semantic guide input sequence may be represented as z1,…zk,zk+1,…}。
Specifically, referring to fig. 7, the step of obtaining the semantic guidance input sequence may include:
s711: acquiring reply information of the current round of conversation in the training corpus;
s712: performing word segmentation on the reply information;
s713: extracting semantic guide words from the word segmentation result;
s714: and converting the semantic guide words into semantic guide input sequences.
In step S712, the word segmentation may be implemented with an existing word segmentation method or word segmentation tool, for example, a word segmentation method based on string matching, on understanding, or on statistics, or a word segmentation tool such as jieba or THULAC, which is not limited in this application.
For example, the obtained reply information of the current round of conversation in the corpus is "help you find an internet shop bar selling yunnan mango?". Segmenting this reply information gives the word segmentation result: help/you/find/sell/yunnan/mango/internet shop/bar. Here, the parts of speech of "help", "find" and "sell" are verbs, "you" is a pronoun, "yunnan", "mango" and "internet shop" are nouns, and "bar" is a modal particle.
In step S713, as described above, the semantic guide words are words that represent the semantics of the reply information of the current round in the corpus. In general, the content words (e.g., verbs, nouns and adjectives) in a sentence can characterize the semantics of the sentence to some extent. Thus, in one implementation, the semantic guide words may be extracted from the word segmentation result according to the part of speech of each segment.
In the step of S714, the semantic guide word is converted into a semantic guide input sequence, and each word of the semantic guide word may be converted into a word vector by using a preset dictionary, thereby obtaining the semantic guide input sequence. The preset dictionary may be the preset dictionary in the step of S100 in the first embodiment.
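A rough sketch of steps S711 to S714, assuming the jieba toolkit and its part-of-speech tagger are installed and that content words (nouns, verbs, adjectives) are kept as semantic guide words; the reply sentence and the placeholder dictionary below are illustrative only and are not taken from the corpus:

import numpy as np
import jieba.posseg as pseg   # word segmentation with part-of-speech tags

CONTENT_POS_PREFIXES = ("n", "v", "a")   # noun, verb, adjective tags

def extract_semantic_guide_words(reply_text):
    # S711-S713: segment the current round's reply and keep content words.
    return [pair.word for pair in pseg.cut(reply_text)
            if pair.flag.startswith(CONTENT_POS_PREFIXES)]

def to_guide_input_sequence(guide_words, dictionary, dim=4):
    # S714: convert each semantic guide word into a word vector via a preset
    # dictionary; unknown words fall back to a zero vector in this sketch.
    return [dictionary.get(w, np.zeros(dim)) for w in guide_words]

reply = "今天的天气很好"                                    # illustrative reply text
guide_words = extract_semantic_guide_words(reply)
dictionary = {w: np.random.rand(4) for w in guide_words}   # placeholder vectors
semantic_guide_sequence = to_guide_input_sequence(guide_words, dictionary)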
S720: and coding the semantic guidance input sequence by adopting a second coder to obtain a semantic guidance hidden state vector, wherein the second coder is a coder based on an RNN (radio network node) model, and the semantic guidance hidden state vector is a state value of the last step in a hidden layer of the second coder.
In step S720, the second encoder is an RNN model-based encoder, and may be based on any one of a simple RNN model, an LSTM model, a GRU model, and a bidirectional RNN model, which is not limited in this application. The encoded hidden state vector in the hidden layer of the second encoder at the k-th moment is denoted r_k. The semantic guidance hidden state vector is r_k at the maximum value of k, and it is also denoted h_0.
Referring to fig. 6, the second encoder in fig. 6 adopts a bidirectional RNN model, which includes forward encoding and backward encoding processes, so that the encoded hidden state vector in the hidden layer of the second encoder at the k-th moment is the concatenation of the forward hidden state and the backward hidden state at that moment. The semantic guidance hidden state vector, taken at the maximum value of k, is likewise this concatenated vector and is also denoted h_0.
Assume that the semantic guide word is "online store eos", where eos is an end character. The process by which the second encoder encodes the semantic guidance input sequence is exemplified below with a second encoder based on a simple RNN model.
A randomly set vector is taken as the initial value of the hidden layer of the second encoder, the word vector of the first word "net" of the semantic guidance input sequence is taken as the input value of the input layer of the second encoder, and the encoded hidden state vector r_1 at the first moment is calculated. Then, taking r_1 as the hidden-layer state and the word vector of the second word "store" of the semantic guidance input sequence as the input value of the input layer of the second encoder, the encoded hidden state vector r_2 at the second moment is calculated. Then, taking r_2 as the hidden-layer state and the word vector of the end character of the semantic guidance input sequence as the input value of the input layer of the second encoder, the encoded hidden state vector r_3 at the third moment is calculated. Here, r_3 is the semantic guidance hidden state vector, which is also denoted h_0. At this point, the second encoder completes the encoding process for the semantic guidance input sequence.
The process of encoding the semantic guidance input sequence by using the second encoder based on the bi-directional RNN model is similar to this, except that the encoding process includes a forward encoding process and a reverse encoding process, and reference may be made to the schematic diagram of the second encoder in fig. 6, which is not described herein again.
S730: and horizontally connecting the semantic guidance hidden state vector with the first decoding hidden state vector to obtain a first controlled hidden state vector.
Following the example in step S720, the semantic guidance hidden state vector is h_0 and the first decoding hidden state vector is s_z. Connecting the two horizontally yields the first controlled hidden state vector [h_0, s_z]. Here, h_0 and s_z are both expressed as matrices; assuming h_0 is represented as a 3 × 3 matrix and s_z as a 3 × 4 matrix, the first controlled hidden state vector [h_0, s_z] is represented as a 3 × 7 matrix.
If the second encoder and the first encoder are both implemented by a bidirectional RNN model, the semantic guidance hidden state vector and the first decoding hidden state vector each consist of the concatenation of a forward part and a backward part, and connecting the two horizontally in the same way yields the first controlled hidden state vector.
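The "horizontal connection" in step S730 is just concatenation along the column direction; a minimal numpy sketch with the 3 × 3 and 3 × 4 shapes used in the example above (random placeholder values):

import numpy as np

rng = np.random.default_rng(3)
h0 = rng.standard_normal((3, 3))    # semantic guidance hidden state vector
s_z = rng.standard_normal((3, 4))   # first decoding hidden state vector

# Horizontal connection: place the two matrices side by side (3x3 and 3x4 -> 3x7).
first_controlled_hidden_state = np.concatenate([h0, s_z], axis=1)
print(first_controlled_hidden_state.shape)   # (3, 7)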
S201: and coding the first input sequence by adopting the first coder by taking the first controlled hidden state vector as an initial value of a hidden layer of the first coder to obtain a second coded hidden state vector.
In the step of S201, the process of the first encoder encoding the first input sequence is similar to the process in the step of S200. The difference is that the step S200 uses the first decoding hidden state vector as the initial value of the hidden layer of the first encoder, and the step S201 uses the first controlled hidden state vector as the initial value of the hidden layer of the first encoder.
In this embodiment, through steps S710, S720, S730 and S201, the semantic guide words extracted from the reply information in the corpus are introduced into the training process of the reply information generation model. Therefore, when reply information is generated with the trained reply information generation model, a second keyword preset by the user or obtained according to a certain rule can be introduced to guide the generation of the reply information, so that better and more accurate reply information is obtained.
For example, if no semantic guide word is introduced for training, the training results in the reply information generation model M1.
The input information input by the user is: The person in the photo is my girlfriend.
Then, through the prediction of model M1 and the corresponding conversion, the obtained reply information is: Yes.
If a semantic guide word is introduced for training, the reply information generation model M2 is obtained.
The input information input by the user is: The person in the photo is my girlfriend.
The second keyword acquired for semantic guidance is: beautiful.
Then, through the prediction of model M2 and the corresponding conversion, the obtained reply information is: So beautiful!
As can also be seen from this example, when a semantic guide word is introduced to train the reply information generation model and reply information is generated using the trained model, more accurate reply information can be obtained.
S300: and decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence, wherein the decoder is based on an RNN model.
S400: and calculating the error between a standard output sequence and the first output sequence, wherein the standard output sequence is obtained by converting reply information of the current round of conversation in the training corpus.
S500: updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold.
In this embodiment, the steps S300 to S500 may refer to the descriptions of the steps S300 to S500 in the first embodiment, and are not described herein again.
Since the encoding and decoding processes of the reply information generation model are linked only through the second encoding hidden state vector, each word vector x_i in the first input sequence makes the same contribution to every decoded output value y_j in the decoder. In such an approach, the first encoder compresses the information of the entire first input sequence into a fixed-length vector, which easily causes two problems. First, the second encoding hidden state vector cannot fully convey the information of the entire first input sequence. Second, information fed to the encoder input layer at the beginning of the first input sequence is easily overwritten by information fed later, so that much detail is lost; this is especially apparent when the first input sequence is long.
Therefore, an Attention Mechanism is introduced into the decoding process of the training method of the reply information generation model, so that the different contributions of each word vector x_i in the first input sequence to each decoded output value y_j can be reflected, and each decoded output value y_j can focus more on the word vectors in the first input sequence that are most relevant to y_j, thereby improving the accuracy of the obtained reply information.
Specifically, in one implementation, the decoding, by the decoder, of the second encoded hidden state vector in step S300 to obtain the first output sequence may include steps S301 to S303.
S301: Calculating the attention assignment weights α_ji of the decoding hidden state vector s_j at the j-th moment in the decoder with respect to the semantic guidance hidden state vector h_0 in the second encoder and with respect to all encoded hidden state vectors {h_1, ..., h_i, ..., h_n} in the first encoder, wherein i = 0, 1, 2, ..., n; j = 1, 2, ..., m; n is the total number of encoded hidden state vectors h_i in the first encoder, and m is the total number of output values y_j in the output sequence of the decoder.
When i = 0, α_j0 represents the attention assignment weight of the output value y_j at the j-th moment in the decoder on the semantic guidance hidden state vector h_0 in the second encoder. The higher the value of α_j0, the more attention the output value y_j at the j-th moment in the decoder allocates to the semantic guidance input sequence in the second encoder, and the greater the effect of the semantic guidance hidden state vector h_0 when y_j is generated in the decoding process.
When i = 1, 2, ..., n, α_ji represents the attention assignment weight of the output value y_j at the j-th moment in the decoder on the i-th encoded hidden state vector in {h_1, ..., h_i, h_{i+1}, ...} of the first encoder. The higher the value of α_ji, the more attention the output value y_j at the j-th moment in the decoder allocates to the i-th encoded hidden state vector h_i in the first encoder. It can also be understood that the more attention y_j allocates to the i-th input value x_i of the first encoder, the greater the effect of x_i when y_j is generated in the decoding process.
In one implementation, α_ji can be calculated by the following formulas:
α_ji = exp(e_ji) / Σ_{i'=0}^{n} exp(e_ji')
e_ji = v_a tanh(W_a s_{j-1} + U_a h_i)
where v_a, W_a and U_a are parameter values, each of which may be represented by a matrix.
Here, e_ji is in effect an alignment model, which is a feedforward neural network nested in the RNN model and is trained together with the reply information generation model.
S302: Calculating, using the softmax function, the weighted average c_j, wherein j = 0, 1, 2, ..., m.
Here, c_j is a weighted sum of the semantic guidance hidden state vector h_0 from the second encoder and the set of hidden state vectors {h_1, ..., h_i, h_{i+1}, ...} produced by the first encoder during encoding.
Specifically, c_j = Σ_{i=0}^{n} α_ji h_i.
S303: Decoding the second encoded hidden state vector with the decoder to obtain the first output sequence {y_1', ..., y_j', ..., y_m'}, where y_j' = g(y_{j-1}, s_j, c_j) and s_j = f(y_{j-1}, s_{j-1}, c_j); f is generally a nonlinear activation function, such as tanh or ReLU, and g is a softmax function.
In one implementation, s_j = f(U y_{j-1} + W s_{j-1} + c_j).
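Steps S301 to S303 can be sketched as follows, with invented dimensions and untrained stand-in matrices W_a, U_a and vector v_a; the alignment score e_ji, the softmax-normalised weight α_ji and the context vector c_j follow the formulas above.

import numpy as np

rng = np.random.default_rng(4)
HIDDEN_DIM, ATT_DIM = 8, 6

# Stand-in attention parameters (trained jointly with the model in reality).
W_a = rng.standard_normal((ATT_DIM, HIDDEN_DIM)) * 0.1
U_a = rng.standard_normal((ATT_DIM, HIDDEN_DIM)) * 0.1
v_a = rng.standard_normal(ATT_DIM) * 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def context_vector(s_prev, attendable_states):
    # attendable_states = [h_0, h_1, ..., h_n]: the semantic guidance hidden state
    # vector followed by all encoded hidden state vectors of the first encoder.
    # e_ji = v_a tanh(W_a s_{j-1} + U_a h_i); alpha_ji = softmax over i; c_j = weighted sum.
    e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in attendable_states])
    alpha = softmax(e)
    c = sum(a * h for a, h in zip(alpha, attendable_states))
    return c, alpha

s_prev = rng.standard_normal(HIDDEN_DIM)                        # s_{j-1}
states = [rng.standard_normal(HIDDEN_DIM) for _ in range(5)]    # h_0 ... h_4
c_j, alpha_j = context_vector(s_prev, states)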
The use phase of the reply information generation model, that is, the process of generating the reply information using the reply information generation model, will be described below by way of the third embodiment and the fourth embodiment.
Referring to fig. 8 and 9, in a third embodiment of the present application, a reply information generation method is provided, which includes steps S800 and S900.
S800: and inputting the second input sequence into the reply information generation model for encoding and decoding by taking the third decoding hidden state vector as an initial value of the hidden layer of the first encoder to obtain a second output sequence.
Here, the third decoding hidden state vector is the state value of the last step in the hidden layer of the decoder of the reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by the user in the current round of conversation, and the reply information generation model is obtained by training with the training method of any one of the foregoing first embodiment and second embodiment.
In step S800, the second input sequence is converted from the second input information input by the user in the current round of session. Specifically, each word of the input information may be converted into a word vector by using a preset dictionary, where the preset dictionary includes a corresponding relationship between each word and the word vector. The word vector for each word can be trained in advance using existing methods, for example, word2vec, which is a tool for word vector calculation. The preset dictionary may be the same as the preset dictionary used for training the reply information generation model.
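As a minimal sketch of this conversion, the Python snippet below maps a segmented sentence to a sequence of word vectors using a toy preset dictionary; the words, the 3-dimensional vectors and the function name are illustrative assumptions, and in practice the vectors would be pre-trained (for example with word2vec) and would cover the whole vocabulary.

import numpy as np

# Toy preset dictionary: word -> word vector (values are illustrative assumptions)
preset_dict = {
    "I": np.array([0.1, 0.3, -0.2]),
    "want": np.array([0.0, -0.1, 0.4]),
    "to eat": np.array([0.5, 0.2, 0.1]),
    "litchis": np.array([-0.3, 0.6, 0.0]),
}

def to_input_sequence(words, dictionary):
    # Convert each segmented word into its word vector, skipping out-of-dictionary words
    return [dictionary[w] for w in words if w in dictionary]

second_input_sequence = to_input_sequence(["I", "want", "to eat", "litchis"], preset_dict)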
The process of encoding and decoding the second input sequence input to the reply information generation model is similar to the process of encoding and decoding when the reply information generation model is trained. The difference is that in the training process, the word vector of each word of the current round reply information in the training corpus is used as the input value of the input layer of the decoder during decoding; in the use process, the output value decoded at the previous moment in the decoder is used as the input value of the input layer at the next moment in the decoding process.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating steps of encoding and decoding of the reply information generation model in a time sequence during use, wherein a first encoder in the reply information generation model is a simple RNN model, and a decoder in the reply information generation model is a simple RNN model.
The first encoder comprises an input layer and a hidden layer; the set of input values in the input layer is denoted {x_1, ..., x_i, x_{i+1}, ...}, and the set of hidden state vectors in the hidden layer is denoted {h_1, ..., h_i, h_{i+1}, ...}.
The decoder comprises an input layer, a hidden layer and an output layer; the set of hidden state vectors in the hidden layer is denoted {s_1, ..., s_j, s_{j+1}, ...}, and the set of output values in the output layer is denoted {y_1', ..., y_j', y_{j+1}', ...}.
x_i is the input value of the input layer of the first encoder at the i-th time instant.
h_i is the encoded hidden state vector of the hidden layer of the first encoder at the i-th time instant, where h_i = f(x_i, h_{i-1}); f is generally a nonlinear activation function, such as tanh or ReLU. One common calculation of h_i is: h_i = f(U·x_i + W·h_{i-1}).
y_{j-1}' is the input value of the input layer of the decoder at the j-th time instant.
s_j is the decoding hidden state vector of the hidden layer of the decoder at the j-th time instant, where s_j = f(y_{j-1}, s_{j-1}); f is generally a nonlinear activation function, such as tanh or ReLU. One common calculation of s_j is: s_j = f(U·y_{j-1} + W·s_{j-1}).
y_j' is the output value of the output layer of the decoder at the j-th time instant, y_j' = g(s_j), where g is a softmax function. More specifically, one common calculation of y_j' is: y_j' = softmax(V·s_j).
h_z is the third encoded hidden state vector.
s_z is the third decoding hidden state vector.
U is the weight from the input layer to the hidden layer in the first encoder/decoder and can be represented by a matrix.
W is the weight from the hidden layer at the previous time instant to the hidden layer at the current time instant in the first encoder/decoder, and can be represented by a matrix.
V is the weight from the hidden layer to the output layer in the decoder, and can be represented by a matrix.
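For illustration only, the sketch below strings the above recurrences together for the use phase: the first encoder is initialized with s_z, the third decoding hidden state vector carried over from the previous round, and the decoder feeds the output decoded at the previous moment back in as the next input. The separate weight matrices, greedy argmax decoding, fixed step count and random initialization are illustrative assumptions rather than the patented implementation.

import numpy as np

d, vocab = 4, 5                          # illustrative hidden size and vocabulary size (assumptions)
rng = np.random.default_rng(1)
U_enc, W_enc = rng.normal(size=(d, d)), rng.normal(size=(d, d))   # first encoder weights
U_dec, W_dec = rng.normal(size=(d, d)), rng.normal(size=(d, d))   # decoder weights
V = rng.normal(size=(vocab, d))          # hidden-to-output weights of the decoder
E = rng.normal(size=(vocab, d))          # word vectors used to feed outputs back as inputs

def encode(xs, h_init):
    # h_i = tanh(U x_i + W h_{i-1}); h_init is the third decoding hidden state vector s_z
    h = h_init
    for x in xs:
        h = np.tanh(U_enc @ x + W_enc @ h)
    return h                             # the third encoded hidden state vector h_z

def decode(h_z, steps=3):
    # In the use phase the output decoded at the previous moment is fed back as the next input
    s, y_prev, out = h_z, np.zeros(d), []
    for _ in range(steps):
        s = np.tanh(U_dec @ y_prev + W_dec @ s)      # s_j = f(U y_{j-1} + W s_{j-1})
        p = np.exp(V @ s); p /= p.sum()              # y_j' = softmax(V s_j)
        k = int(p.argmax())
        out.append(k)
        y_prev = E[k]
    return out

s_z = rng.normal(size=d)                 # state carried over from the previous round's decoder
xs = [rng.normal(size=d) for _ in range(3)]
print(decode(encode(xs, s_z)))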
S900: and converting the second output sequence into a second reply message.
The conversion of the second output sequence into the second reply message may also be performed using a preset dictionary. As described above, the preset dictionary includes the correspondence between each word and its word vector; each word vector in the second output sequence is converted in turn into the corresponding word, which yields the second reply message. The preset dictionary here may be the same as the preset dictionary used in step S800.
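A minimal sketch of this reverse conversion is given below; it maps each output word vector to the closest entry of a toy preset dictionary. The dictionary contents, the Euclidean-distance matching and the function name are illustrative assumptions (the actual correspondence could equally be a direct index lookup over the vocabulary).

import numpy as np

# Toy preset dictionary (same structure as in step S800); values are illustrative assumptions
preset_dict = {"tasty": np.array([0.2, 0.1, 0.0]),
               "litchis": np.array([-0.3, 0.6, 0.0]),
               "online": np.array([0.5, 0.2, 0.1])}

def to_reply(output_sequence, dictionary):
    # Map each output word vector to the nearest word vector in the preset dictionary
    words = []
    for vec in output_sequence:
        word = min(dictionary, key=lambda w: np.linalg.norm(dictionary[w] - vec))
        words.append(word)
    return " ".join(words)

second_output_sequence = [np.array([-0.25, 0.55, 0.05]), np.array([0.45, 0.25, 0.1])]
print(to_reply(second_output_sequence, preset_dict))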
In the reply information generation method of this embodiment, a trained reply information generation model is used: the state value of the last step in the hidden layer of the decoder in the previous round of conversation serves as the initial value of the hidden layer of the first encoder in the next round, the second input sequence of the next round is encoded by the first encoder, and the encoded result is decoded by the decoder. In this way, the conversation information of the previous round or previous rounds is introduced into the generation process of the reply information of the next round of conversation, so that more accurate reply information is generated.
For example,
The user input information in the first round of conversation is: I want to eat litchis.
The reply information generated in the first round of conversation is: Guangdong litchis are pretty good.
The user input information in the second round of conversation is: Can they be bought online?
If a conventional intelligent question-answering system is adopted, reply information can only be generated from the information input by the user in the second round of conversation. Because the information available in the second round is limited, the system can only produce a preset safe reply as the reply information of the second round, for example: What an interesting question!
With the method of the present application, the state value of the last step in the hidden layer of the decoder in the previous round of conversation contains all or most of the semantics of the reply information of that round. Using this state value as the initial value of the hidden layer of the first encoder in the next round introduces the conversation information of the previous round or previous rounds into the generation process of the reply information of the next round, which plays a supplementary role, so that the reply information generation model can generate more accurate reply information in the second round of conversation.
For example, with the method of the present application, the reply information generated in the second round of conversation is: Shall I help you find an online shop that sells Guangzhou litchis?
Referring to fig. 10, in a fourth embodiment of the present application, a reply information generation method is provided, including the steps of S1010, S1020, S1030, S801, and S900.
S1010: and acquiring a keyword input sequence, wherein the keyword input sequence is obtained by converting a preset second keyword.
Here, the second keyword may be preset directly by the user, or may be preset in another manner. The second keyword is used for performing semantic guidance on the reply information generation process, so that the generated reply information can be more accurate and better conforms to the natural language communication habit of the user.
Referring to fig. 11, in one implementation, the step of obtaining the keyword input sequence may specifically include:
S1011: acquiring second input information input by a user in the current round of conversation;
S1012: extracting a first keyword from the second input information, wherein the first keyword is a real word in the second input information;
S1013: acquiring a second keyword associated with the first keyword from a preset statistical library, wherein the statistical library is constructed based on the input information and reply information in the training corpus;
S1014: converting the second keyword into a keyword input sequence.
In step S1012, the second input information may first be segmented into words, and the part of speech of each segmented word may be labeled during segmentation. Then, at least one real word is extracted from the word segmentation result as the first keyword; such a real word can, to a certain extent, represent the semantics of the second input information input by the user in the current round of conversation.
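One possible way to implement this segmentation and real-word extraction is sketched below with the open-source jieba library; the choice of jieba, the set of part-of-speech flags treated as real words and the function name are all assumptions made for illustration, not requirements of the method.

import jieba.posseg as pseg

# Part-of-speech flags treated as "real words" (content words); this set is an assumption
REAL_WORD_FLAGS = {"n", "nz", "v", "vn", "a"}

def extract_first_keywords(second_input_info):
    # Segment the input, tag parts of speech, and keep the content words as first keywords
    return [pair.word for pair in pseg.cut(second_input_info)
            if pair.flag in REAL_WORD_FLAGS]

print(extract_first_keywords("网上能买到荔枝吗"))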
In the step of S1013, a second keyword corresponding to the first keyword is acquired from a preset statistical library;
the preset statistical library is constructed based on the input information and the reply information in the corpus. Specifically, the method comprises the steps of segmenting words of an input information-reply information pair in a training corpus, extracting at least one first real word from input information, extracting at least one second real word from reply information, and establishing association between each first real word and each second real word. And after all the training corpora are extracted and the association is established, counting the second real words associated with each first real word to obtain statistical data. The probability that when the first real word appears in the input information, the second real word associated with the first real word appears in the reply information can be obtained from the statistical data.
Therefore, the statistical library includes at least one first real word, at least one second real word, and the probability of the second real word associated with each first real word appearing in the reply message.
For example, suppose the first real word is "eat" and the second real words associated with it are "tasty", "rice" and "noodles". When "eat" appears in the input information, the probability of the second real word "tasty" appearing in the reply information is 0.6, the probability of "rice" appearing in the reply information is 0.2, and the probability of "noodles" appearing in the reply information is 0.2.
If the first keyword extracted from the second input information in step S1012 is "eat", then in this step a second real word associated with "eat" is randomly selected from the statistical library according to the probability distribution for "eat" as the first real word, and the selected word is used as the second keyword.
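The sketch below reproduces this step for the "eat" example above: the statistical library maps each first real word to its associated second real words and their probabilities, and a second keyword is drawn at random according to that distribution. The dictionary layout and the function name are illustrative assumptions.

import random

# Statistical library: first real word -> {associated second real word: probability in replies}
stat_lib = {"eat": {"tasty": 0.6, "rice": 0.2, "noodles": 0.2}}

def pick_second_keyword(first_keyword, library):
    # Randomly select an associated second real word according to its probability
    candidates = library.get(first_keyword)
    if not candidates:
        return None
    words, probs = zip(*candidates.items())
    return random.choices(words, weights=probs, k=1)[0]

print(pick_second_keyword("eat", stat_lib))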
In the step of S1014, the second keyword is converted into a keyword input sequence, and each word of the second keyword may be converted into a word vector by using a preset dictionary, so as to obtain the keyword input sequence. The preset dictionary may be the same as the preset dictionary used for training the reply information generation model.
S1020: and coding the keyword input sequence by adopting a second coder to obtain a keyword hidden state vector, wherein the second coder is a coder based on an RNN model, and the keyword hidden state vector is a state value of the last step in a hidden layer of the second coder.
S1030: and horizontally connecting the keyword hidden state vector with the third decoding hidden state vector to obtain a second controlled hidden state vector.
In steps S1020 and S1030, the process of encoding the keyword input sequence by the second encoder is similar to the process of encoding the semantic guidance input sequence by the second encoder in the training process, and reference may be made to the description related to step S720 in the second embodiment; the process of horizontally connecting the keyword hidden state vector and the third decoding hidden state vector is similar to the process of horizontally connecting the semantic guidance hidden state vector and the first decoding hidden state vector in the training process, and reference may be made to the description related to step S730 in the second embodiment, which is not described herein again.
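For illustration, the snippet below reads the "horizontal connection" as a simple vector concatenation of the keyword hidden state vector and the third decoding hidden state vector; the projection back to the hidden size of the first encoder is an added assumption, since the text does not specify how the doubled dimensionality is handled.

import numpy as np

d = 4                                       # illustrative hidden size (assumption)
rng = np.random.default_rng(2)
keyword_hidden = rng.normal(size=d)         # keyword hidden state vector from the second encoder
third_decoding_hidden = rng.normal(size=d)  # last decoder state of the previous round of conversation

# Horizontal connection interpreted as concatenation (an interpretation, not confirmed by the text)
second_controlled = np.concatenate([keyword_hidden, third_decoding_hidden])

# Assumed projection back to size d so it can serve as the initial hidden state of the first encoder
W_proj = rng.normal(size=(d, 2 * d))
initial_hidden = np.tanh(W_proj @ second_controlled)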
S801: and inputting the second input sequence into the reply information generation model for encoding and decoding by taking the second controlled hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence.
Here, the second input sequence is obtained by converting second input information input by a user in a current round of session, and the reply information generation model is obtained by training in the method for training the reply information generation model according to any one of the first embodiment or the second embodiment.
In the step of S801, the process of the first encoder encoding the second input sequence is similar to that in the step of S800 in the third embodiment. The difference is that the third decoding hidden state vector is used as the initial value of the hidden layer of the first encoder in the step S800, and the second controlled hidden state vector is used as the initial value of the hidden layer of the first encoder in the step S801.
S900: and converting the second output sequence into a second reply message.
In this embodiment, step S900 may refer to the description of step S900 in the third embodiment, and is not described herein again.
In this embodiment, through the steps of S1010, S1020, S1030, and S801, a second keyword preset by a user or a second keyword obtained according to a certain rule is introduced to guide generation of reply information, so that better and more accurate reply information can be obtained.
Referring to fig. 12, in a fifth embodiment of the present application, there is provided a training apparatus for a reply information generation model, including:
the device comprises an acquisition unit 1, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first input sequence, and the first input sequence is obtained by converting input information of a current round of conversation in a training corpus;
a training unit 2, configured to use a first decoding hidden state vector as an initial value of a hidden layer of a first encoder, and encode the first input sequence with the first encoder to obtain a second encoding hidden state vector; decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence; calculating an error of a standard output sequence from the first output sequence; and updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold; the first encoder is an RNN model-based encoder, the first decoding hidden state vector is a state value of the last step in a hidden layer of a decoder in the previous round of conversation, the decoder is an RNN model-based decoder, and the standard output sequence is obtained by converting reply information of the current round of conversation in the corpus.
Optionally, the obtaining unit 1 is further configured to obtain a semantic guidance input sequence, where the semantic guidance input sequence is obtained by converting a semantic guidance word, and the semantic guidance word is a word representing the semantics of the reply information of the current round in the training corpus;
the training unit 2 is further configured to encode the semantic guidance input sequence by using a second encoder to obtain a semantic guidance hidden state vector; horizontally connecting the semantic guidance hidden state vector with the first decoding hidden state vector to obtain a first controlled hidden state vector; and coding the first input sequence by using the first encoder to obtain a second coded hidden state vector by taking the first controlled hidden state vector as an initial value of a hidden layer of the first encoder; wherein the second encoder is an RNN model-based encoder, and the semantic guidance hidden state vector is a state value of a last step in a hidden layer of the second encoder.
Optionally, the obtaining unit 1 is further configured to obtain reply information of a current round of session in the corpus; performing word segmentation on the reply information; extracting semantic guide words from the word segmentation result; and converting the semantic guide words into a semantic guide input sequence.
Optionally, the training unit 2 is further configured to: calculate the attention assignment weights α_ji of the decoding hidden state vector s_j at the j-th time instant in the decoder, separately with respect to the semantic guidance hidden state vector h_0 in the second encoder and with respect to all encoded hidden state vectors {h_1, ..., h_i, ..., h_n} in the first encoder, wherein i = 0, 1, 2, …, n; j = 1, 2, …, m; n is the total number of encoded hidden state vectors h_i in the first encoder, and m is the total number of output values y_j in the output sequence of the decoder; using the softmax function, obtain the weighted average c_j = Σ_{i=0}^{n} α_ji · h_i, wherein j = 0, 1, 2, …, m; and decode the second encoded hidden state vector with the decoder to obtain a first output sequence {y_1', ..., y_j', ..., y_m'}, wherein y_j' = g(y_{j-1}, s_j, c_j) and s_j = f(y_{j-1}, s_{j-1}, c_j), f is a nonlinear activation function, g is a softmax function, y_{j-1} is the input value of the input layer of the decoder at the j-th time instant, and s_j is the decoding hidden state vector of the hidden layer of the decoder at the j-th time instant.
Optionally, the training unit 2 is further configured to determine, if the error is lower than or equal to a preset end threshold, current parameters of the first encoder and the decoder as parameters of a reply information generation model.
Referring to fig. 13, in a sixth embodiment of the present application, there is provided a reply information generating apparatus, including:
the generating unit 3 is used for inputting the second input sequence into the reply information generating model for encoding and decoding by taking the third decoding hidden state vector as an initial value of the first encoder hidden layer to obtain a second output sequence; the third decoding hidden state vector is a state value of the last step in a hidden layer of a decoder of a reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by a user in the current round of conversation, and the reply information generation model is obtained by training by adopting any one of the training methods of the reply information generation model;
and the conversion unit 4 is configured to convert the second output sequence into second reply information.
Optionally, the generating unit 3 is further configured to obtain a keyword input sequence; coding the keyword input sequence by adopting a second coder to obtain a keyword hidden state vector; horizontally connecting the keyword hidden state vector with the third decoding hidden state vector to obtain a second controlled hidden state vector; inputting a second input sequence into the reply information generation model for encoding and decoding by taking a second controlled hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence, wherein the keyword input sequence is obtained by converting a preset second keyword; the second encoder is an RNN model-based encoder, and the keyword hidden state vector is a state value of the last step in a hidden layer of the second encoder.
Optionally, the generating unit 3 is further configured to obtain second input information input by the user in the current round session; extracting a first keyword from the second input information; acquiring a second keyword associated with the first keyword from a preset statistical library; converting the second keyword into a keyword input sequence; the first keyword is a real word in the second input information, and the statistical library is constructed based on the input information and the reply information in the training corpus.
It should be noted that, in China, persons skilled in the art do not use a unified Chinese translation for terms such as "cell" (as in the "cell" of an RNN model) and "word2vec", but instead refer to the original English terms. Therefore, in order to avoid ambiguity in translation, the present embodiments also use these English terms, which persons skilled in the art will understand.
The same and similar parts in the various embodiments in this specification may be referred to each other. The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (9)

1. A training method for a reply information generation model is characterized by comprising the following steps:
acquiring a first input sequence, wherein the first input sequence is obtained by converting input information of a current round of conversation in a training corpus;
coding the first input sequence by using a first coder to obtain a second coding hidden state vector by taking the first decoding hidden state vector as an initial value of a hidden layer of the first coder, wherein the first coder is a coder based on an RNN model, and the first decoding hidden state vector is a state value of the last step in the hidden layer of a decoder in the previous round of conversation;
decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence, wherein the decoder is based on an RNN model;
calculating the error between a standard output sequence and the first output sequence, wherein the standard output sequence is obtained by converting reply information of the current round of conversation in the training corpus;
updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold;
before the step of encoding the first input sequence by using the first encoder with the first decoding hidden state vector as an initial value of a hidden layer of the first encoder, the method further includes:
obtaining a semantic guide input sequence, wherein the semantic guide input sequence is obtained by converting semantic guide words, and the semantic guide words are words representing the semantics of the reply information of the current round in the training corpus;
encoding the semantic guidance input sequence by adopting a second encoder to obtain a semantic guidance hidden state vector, wherein the second encoder is an RNN model-based encoder, and the semantic guidance hidden state vector is a state value of the last step in a hidden layer of the second encoder;
horizontally connecting the semantic guidance hidden state vector with the first decoding hidden state vector to obtain a first controlled hidden state vector;
the method for coding the first input sequence by using the first encoder to obtain the second coding hidden state vector by using the first decoding hidden state vector as the initial value of the hidden layer of the first encoder specifically comprises the following steps:
and coding the first input sequence by adopting the first coder by taking the first controlled hidden state vector as an initial value of a hidden layer of the first coder to obtain a second coded hidden state vector.
2. The method for training a reply information generation model according to claim 1, wherein the step of obtaining the semantic guidance input sequence comprises:
acquiring reply information of the current round of conversation in the training corpus;
performing word segmentation on the reply information;
extracting semantic guide words from the word segmentation result;
and converting the semantic guide words into semantic guide input sequences.
3. The method for training a reply information generation model according to claim 1, wherein the step of decoding the second encoded hidden state vector by a decoder to obtain a first output sequence comprises:
calculating the attention assignment weights α_ji of the decoding hidden state vector s_j at the j-th time instant in the decoder, separately with respect to the semantic guidance hidden state vector h_0 in the second encoder and with respect to all encoded hidden state vectors {h_1, ..., h_i, ..., h_n} in the first encoder;
wherein i = 0, 1, 2, …, n; j = 1, 2, …, m; n is the total number of encoded hidden state vectors h_i in the first encoder, and m is the total number of output values y_j in the output sequence of the decoder;
using the softmax function, obtaining the weighted average c_j = Σ_{i=0}^{n} α_ji · h_i, wherein j = 0, 1, 2, …, m;
decoding the second encoded hidden state vector with the decoder to obtain a first output sequence {y_1', ..., y_j', ..., y_m'}, wherein y_j' = g(y_{j-1}, s_j, c_j) and s_j = f(y_{j-1}, s_{j-1}, c_j), f is a nonlinear activation function, g is a softmax function, y_{j-1} is the input value of the input layer of the decoder at the j-th time instant, and s_j is the decoding hidden state vector of the hidden layer of the decoder at the j-th time instant.
4. The method for training a reply information generation model according to claim 1, further comprising:
and if the error is lower than or equal to a preset end threshold value, determining current parameters of the first encoder and the decoder as parameters of a reply information generation model.
5. A reply information generating method, comprising:
inputting a second input sequence into a reply information generation model for encoding and decoding by taking a third decoding hidden state vector as an initial value of a hidden layer of a first encoder to obtain a second output sequence; the third decoding hidden state vector is a state value of the last step in a hidden layer of a decoder of a reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by a user in the current round of conversation, and the reply information generation model is obtained by training by adopting the training method of the reply information generation model according to any one of claims 1 to 4;
and converting the second output sequence into a second reply message.
6. The method as claimed in claim 5, wherein before the step of inputting the second input sequence into the reply information generation model for encoding and decoding to obtain the second output sequence with the third decoding hidden state vector as an initial value of the hidden layer of the first encoder, the method further comprises:
acquiring a keyword input sequence, wherein the keyword input sequence is obtained by converting a preset second keyword;
coding the keyword input sequence by adopting a second coder to obtain a keyword hidden state vector, wherein the second coder is a coder based on an RNN model, and the keyword hidden state vector is a state value of the last step in a hidden layer of the second coder;
horizontally connecting the keyword hidden state vector with the third decoding hidden state vector to obtain a second controlled hidden state vector;
and inputting a second input sequence into the reply information generation model for encoding and decoding by taking the third decoding hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence, wherein the method specifically comprises the following steps of:
and inputting the second input sequence into the reply information generation model for encoding and decoding by taking the second controlled hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second output sequence.
7. The reply message generation method according to claim 6, wherein the step of obtaining the keyword input sequence includes:
acquiring second input information input by a user in the current round of conversation;
extracting a first keyword from second input information, wherein the first keyword is a real word in the second input information;
acquiring a second keyword associated with the first keyword from a preset statistical library, wherein the statistical library is constructed on the basis of input information and reply information in the training corpus;
and converting the second keyword into a keyword input sequence.
8. A training apparatus for replying to a message generation model, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first input sequence, and the first input sequence is obtained by converting input information of a current round of conversation in a training corpus;
the training unit is used for coding the first input sequence by adopting a first encoder by taking a first decoding hidden state vector as an initial value of a hidden layer of the first encoder to obtain a second coding hidden state vector; decoding the second coding hidden state vector by adopting a decoder to obtain a first output sequence; calculating an error of a standard output sequence from the first output sequence; and updating parameters of the first encoder and the decoder according to the error if the error is above a preset end threshold; the first encoder is an RNN model-based encoder, the first decoding hidden state vector is a state value of the last step in a hidden layer of a decoder in the previous round of conversation, the decoder is an RNN model-based decoder, and the standard output sequence is obtained by converting reply information of the current round of conversation in the corpus;
the acquisition unit is further configured to acquire a semantic guidance input sequence, where the semantic guidance input sequence is obtained by converting a semantic guidance word, and the semantic guidance word is a word representing the semantics of the reply information of the current round in the training corpus;
the training unit is also used for encoding the semantic guidance input sequence by adopting a second encoder to obtain a semantic guidance hidden state vector; horizontally connecting the semantic guidance hidden state vector with the first decoding hidden state vector to obtain a first controlled hidden state vector; and coding the first input sequence by adopting the first coder by taking the first controlled hidden state vector as an initial value of a hidden layer of the first coder to obtain a second coded hidden state vector.
9. A reply information generation apparatus, comprising:
the generating unit is used for inputting the second input sequence into the reply information generating model for encoding and decoding by taking the third decoding hidden state vector as an initial value of the hidden layer of the first encoder to obtain a second output sequence; the third decoding hidden state vector is a state value of the last step in a hidden layer of a decoder of a reply information generation model in the previous round of conversation, the second input sequence is obtained by converting second input information input by a user in the current round of conversation, and the reply information generation model is obtained by training by adopting the training method of the reply information generation model according to any one of claims 1 to 4;
and the conversion unit is used for converting the second output sequence into second reply information.
CN201810068600.XA 2018-01-24 2018-01-24 Training method of reply information generation model, reply information generation method and device Active CN108153913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810068600.XA CN108153913B (en) 2018-01-24 2018-01-24 Training method of reply information generation model, reply information generation method and device

Publications (2)

Publication Number Publication Date
CN108153913A CN108153913A (en) 2018-06-12
CN108153913B true CN108153913B (en) 2020-08-07

Family

ID=62458988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810068600.XA Active CN108153913B (en) 2018-01-24 2018-01-24 Training method of reply information generation model, reply information generation method and device

Country Status (1)

Country Link
CN (1) CN108153913B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002500A (en) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 Talk with generation method, device, equipment and computer-readable medium
CN108932342A (en) * 2018-07-18 2018-12-04 腾讯科技(深圳)有限公司 A kind of method of semantic matches, the learning method of model and server
KR20200023664A (en) * 2018-08-14 2020-03-06 삼성전자주식회사 Response inference method and apparatus
CN109325109B (en) * 2018-08-27 2021-11-19 中国人民解放军国防科技大学 Attention encoder-based extraction type news abstract generating device
CN110874402A (en) * 2018-08-29 2020-03-10 北京三星通信技术研究有限公司 Reply generation method, device and computer readable medium based on personalized information
CN110968775A (en) * 2018-09-30 2020-04-07 北京京东尚科信息技术有限公司 Training method of commodity attribute generation model, generation method, search method and system
CN109408630B (en) * 2018-10-17 2021-10-29 杭州世平信息科技有限公司 Method for automatically generating court opinions according to description of crime facts
CN109472031B (en) * 2018-11-09 2021-05-04 电子科技大学 Aspect level emotion classification model and method based on double memory attention
CN109543017B (en) * 2018-11-21 2022-12-13 广州语义科技有限公司 Legal question keyword generation method and system
CN109558605B (en) * 2018-12-17 2022-06-10 北京百度网讯科技有限公司 Method and device for translating sentences
CN110297895B (en) * 2019-05-24 2021-09-17 山东大学 Dialogue method and system based on free text knowledge
CN111079945B (en) 2019-12-18 2021-02-05 北京百度网讯科技有限公司 End-to-end model training method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787560A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Dialogue data interaction processing method and device based on recurrent neural network
CN105868829A (en) * 2015-02-06 2016-08-17 谷歌公司 Recurrent neural networks for data item generation
CN106528858A (en) * 2016-11-29 2017-03-22 北京百度网讯科技有限公司 Lyrics generating method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2807786C (en) * 2012-03-08 2016-06-21 Research In Motion Limited Motion vector sign bit hiding

Also Published As

Publication number Publication date
CN108153913A (en) 2018-06-12


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20190904

Address after: Room 630, 6th floor, Block A, Wanliu Xingui Building, 28 Wanquanzhuang Road, Haidian District, Beijing

Applicant after: China Science and Technology (Beijing) Co., Ltd.

Address before: Room 601, Block A, Wanliu Xingui Building, 28 Wanquanzhuang Road, Haidian District, Beijing

Applicant before: Beijing Shenzhou Taiyue Software Co., Ltd.

CB02 Change of applicant information
CB02 Change of applicant information

Address after: 230000 zone B, 19th floor, building A1, 3333 Xiyou Road, hi tech Zone, Hefei City, Anhui Province

Applicant after: Dingfu Intelligent Technology Co., Ltd

Address before: Room 630, 6th floor, Block A, Wanliu Xingui Building, 28 Wanquanzhuang Road, Haidian District, Beijing

Applicant before: DINFO (BEIJING) SCIENCE DEVELOPMENT Co.,Ltd.

GR01 Patent grant
GR01 Patent grant