US20180285348A1 - Dialog generation method, apparatus, and device, and storage medium - Google Patents
Dialog generation method, apparatus, and device, and storage medium
- Publication number
- US20180285348A1; U.S. application Ser. No. 15/997,912
- Authority
- US
- United States
- Prior art keywords
- term
- round
- query sentence
- latent vector
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F 40/30 — Handling natural language data: semantic analysis
- G06F 40/35 — Discourse or dialogue representation
- G06F 40/56 — Natural language generation
- G06F 16/3329 — Natural language query formulation or dialogue systems
- G06F 16/3344 — Query execution using natural language analysis
- G06F 16/3347 — Query execution using vector based model
- G06N 3/044 — Recurrent networks, e.g. Hopfield networks
- G06N 3/045 — Combinations of networks
- G06N 3/084 — Backpropagation, e.g. using gradient descent
- Legacy codes: G06F 17/2785; G06F 17/30654; G06F 17/30684; G06F 17/3069
Definitions
- Exemplary embodiments relate to the field of computer technologies and machine learning.
- a robot may understand meanings of human natural languages through a multi-round dialog, and generate a corresponding reply sentence.
- how to improve the correlation of an automatically generated reply sentence in a multi-round dialog, and how to reduce the generation probability of high-frequency answers so as to generate a high-quality dialog, are problems to be resolved by researchers in this field.
- the technical solutions provided in this exemplary embodiment not only avoid the low generalization capability of a rule-based dialog system and the low recall of a search-algorithm-based dialog system, but also effectively alleviate the problem of a high generation probability of high-frequency reply sentences in mainstream dialog generation systems based on statistical learning, thereby improving the practicality of the dialog generation algorithm.
- a single sentence is encoded by using a GRU unit, to prevent vanishing gradients; dialog topic information based on the BTM algorithm is added to the intention layer 22 and used as supervision information for dialog generation, so as to reduce the generation probability of high-frequency answers to some extent; and a bidirectional attention mechanism (the attention layer 24) is used at the decoding layer 23, to capture key information in the context, so that the generated dialog has higher correlation.
- the method includes two processes, training and prediction.
- An input of the multi-round dialog generation model is the query and reply pairs of the first four rounds of dialog together with the current round of query sentence; an output of the multi-round dialog generation model is the current round of reply sentence generated by the algorithm according to the preceding text.
- The real reply sentence of the last round is selected as supervision information for the training algorithm; a loss function is calculated by using the generated reply sentence, and the neural network is trained until it converges, as sketched below.
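The patent does not spell out the loss function, so the following is a minimal sketch of how such supervised training is commonly wired up; the per-step cross-entropy form and the helper name `cross_entropy_loss` are illustrative assumptions rather than the patent's specification:

```python
import numpy as np

def cross_entropy_loss(step_probs, target_ids):
    """Negative log-likelihood of the real reply under the model.

    step_probs: one dictionary-sized probability vector per decoding step
        (the decoding layer's softmax outputs).
    target_ids: dictionary indices of the terms of the real reply sentence
        (the supervision information).
    """
    eps = 1e-12  # guard against log(0)
    return -sum(np.log(p[t] + eps) for p, t in zip(step_probs, target_ids))
```

The loss is then back-propagated through the decoding, intention, and encoding layers (for example, by backpropagation through time), and the parameters are updated until the loss converges.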
- a query sentence and a reply sentence in each round are embedded in a single-round dialog generation model.
- the multi-round dialog generation may be considered as expansion of the single-round dialog generation in time sequence.
- the processing includes three parts: the processing at an encoding layer, an intention layer, and a decoding layer.
- Encoding layer: This layer maps an input natural-language sentence to a vector with a fixed dimension. The input of the layer is therefore a sentence in natural-language form, and the output is a fixed-length vector. Specifically, the process includes the following steps:
- $\overrightarrow{h}_t^{(en)} = \overrightarrow{f}_{en}(x_t^{(en)}, \overrightarrow{h}_{t-1}^{(en)})$ (positive, head-to-tail scan)
- $\overleftarrow{h}_t^{(en)} = \overleftarrow{f}_{en}(x_t^{(en)}, \overleftarrow{h}_{t+1}^{(en)})$ (negative, tail-to-head scan)
- $h_t^{(en)} = [\overrightarrow{h}_t^{(en)}; \overleftarrow{h}_t^{(en)}]$ (splicing of the two latent vectors)
- When the latent state of this bidirectional structure is used as an input of the intention layer, key information in the context may be described more accurately, which effectively alleviates the problem that, in a unidirectional structure, key information is concentrated near the end of the sentence. The bidirectional latent state gives each term a degree of global information, avoiding the unidirectional bias whereby terms closer to the end carry more information, so that the generated reply sentence has higher correlation.
- Intention layer: This layer encodes the topic transfer process of a multi-round dialog. An input of the intention layer is the final encoder state $\overrightarrow{h}_T^{(en,k)}$ output in step 1), the last latent state $h_T^{(de,k-1)}$ of the decoding layer in the previous round of query and reply, the output $h^{(in,k-1)}$ of the intention layer in the previous round of query and reply, and the topic $E^{(k)}$ of the current round of query sentence; the output is a vector $h^{(in,k)}$ obtained by comprehensively encoding the current topic and the context information.
- the process includes the following steps:
- $h^{(in,k)} = \sigma(W^{(in,in)} h^{(in,k-1)} + W^{(in,de)} h_T^{(de,k-1)} + W^{(in,en)} \overrightarrow{h}_T^{(en,k)} + W^{(in,e)} E^{(k)})$, where $W^{(in,in)}$, $W^{(in,de)}$, $W^{(in,en)}$, and $W^{(in,e)}$ respectively denote network parameters.
- $\sigma$ is used to compress the initial latent vector $h^{(in,k)}$ into the interval [0, 1], to improve the nonlinear representation capability of the model; $h^{(in,k)}$ is then used as an input of the decoding layer.
- the topic of the current query sentence enters this calculation, which is equivalent to adding supervision information to the calculation process, so that the generation of the reply sentence in the next step is constrained by the topic, thereby reducing the generation probability of some generic high-frequency reply sentences.
- Decoding layer: This layer outputs a probability distribution over the next term in the dictionary by analyzing the output vectors of the encoding layer and the intention layer. Its input is the output $h^{(in,k)}$ of the intention layer and the outputs $h_t^{(en)}$ of the encoding layer, and its output is a probability distribution over the next term in the dictionary (a sketch of one decoding step follows the formula below).
- the process includes the following steps:
- $\alpha_{jt} = \frac{\exp(g_{jt})}{\sum_m \exp(g_{jm})}$
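To make the decoding-layer steps concrete, the following minimal numpy sketch runs one decoding step: it scores each encoder state, normalizes the scores into the weights $\alpha_{jt}$, forms the context vector, updates the decoder state, and outputs a distribution over the dictionary. The parameter names, and the use of a plain `tanh` update in place of the patent's GRU, are simplifying assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_step(h_prev, x_prev, H_enc, P):
    """One decoding step with attention over the spliced encoder states.

    h_prev: previous decoder latent vector
    x_prev: word vector of the previously output term
    H_enc:  list of spliced encoder latent vectors h_t^{(en)}
    P:      parameter dict (illustrative names)
    """
    # Importance degree g_jt for every query term t.
    g = np.array([P["v"] @ np.tanh(P["W_dd"] @ h_prev + P["W_de"] @ h_t)
                  for h_t in H_enc])
    alpha = softmax(g)                        # weights alpha_jt
    C = alpha @ np.asarray(H_enc)             # context vector C_j
    # Decoder state update (a GRU in the patent; tanh here for brevity).
    h = np.tanh(P["W_x"] @ x_prev + P["W_h"] @ h_prev + P["W_c"] @ C)
    probs = softmax(P["W_out"] @ h)           # distribution over dictionary
    return h, probs
```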
- the technical solutions provided in this exemplary embodiment derive from a translation model.
- Establishing the translation model is equivalent to a conversion from the semantic space of one language to that of another, and therefore the semantic space is relatively fixed.
- In dialog generation, by contrast, a mapping to a multi-semantic space must be performed, because different people provide different replies to the same query sentence.
- as a result, generic, formulaic replies such as "Oh, OK" become mainstream in corpora.
- a robot trained on such corpora therefore tends to use these high-frequency replies.
- in the present solution, the semantic space of sentence generation is narrowed by using the topic information of a semantic segment, thereby suppressing the generation of high-frequency, meaningless reply sentences to some extent.
- a bidirectional attention model is used to capture key semantic information more accurately, thereby better ensuring the correlation of the generated sentences.
- the technical solutions provided in this exemplary embodiment may be implemented by using the deep learning framework MXNet 0.5.0, and training and prediction may be performed on a Tesla K40 GPU.
- the technical solutions provided in this exemplary embodiment may be applied to service scenarios such as a chat robot, automatic email reply, and automatic generation of candidate reply sentences in social software, so as to automatically generate, in real time, several appropriate reply sentences according to the first few rounds of dialog.
- the generation process is controlled by the algorithm, without requiring control by a user.
- the chat robot may reply automatically and directly according to a user's input, thereby providing a function of emotional companionship.
- in the service of automatically generating candidate reply sentences, several candidate reply sentences are generated for a user according to the state of the first few rounds of chat; when it is not convenient for the user to type a reply, the service may provide a rapid reply.
- the latent calculation section 301 is configured to: convert each term in a Kth round of a query sentence into a first word vector, and calculate a positive latent vector and a negative latent vector of each term according to the first word vector, K being a positive integer greater than or equal to 2.
- the topic determining section 302 is configured to: obtain a content topic of the Kth round of the query sentence, and convert the content topic into a second word vector.
- the vector calculation section 303 is configured to determine an initial latent vector output for the Kth round of the query sentence according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, a latent vector of the last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence.
- the reply output section 304 is configured to generate a reply sentence for the Kth round of the query sentence according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence. The implementation of each section follows the corresponding steps of the method described in this disclosure.
- the processor 401 is configured to perform the steps of the dialog generation method described above.
- an exemplary embodiment further provides a computer storage medium, computer executable instructions being stored in the computer storage medium, and the computer executable instructions being used to perform the dialog generation method according to any of the exemplary embodiments described above.
Description
- This application is a continuation of International Application No. PCT/CN2017/093417, filed on Jul. 18, 2017, which is based on and claims priority from Chinese Patent Application No. 2016105675040, filed in the Chinese Patent Office on Jul. 19, 2016, the disclosures of which are incorporated herein by reference in their entirety.
- The present disclosure relates to the field of speech processing, and in particular, to a dialog generation method, apparatus, and device, and a storage medium.
- In recent years, human-machine interaction manners have changed rapidly. As a new interaction mode, dialog robots have attracted increasing attention. In the field of natural language processing, research emphasis is on how to improve the correlation of an automatically generated reply sentence in a multi-round dialog, and how to reduce the generation probability of high-frequency answers, in order to generate a high-quality dialog. A dialog system is an important application direction of natural language processing.
- In related art technical solutions, the dialog system may include a rule-based dialog system, a search-based dialog system, or a generation-type dialog system. The rule-based dialog system has a simple structure and high accuracy, but has a relatively poor generalization capability. The search-based dialog system requires a relatively large, high-quality corpus; otherwise, problems such as low recall easily occur. The generation-type dialog system can establish a language model relatively well, and may generate a reply sentence corresponding to any input sentence. A modeling manner of the generation-type dialog system may include single-round modeling and multi-round modeling. In a single-round generation-type dialog model, modeling is performed only on a query and reply pair; when a multi-round dialog is processed, the contexts are directly spliced into one long query sentence. However, when there are a relatively large number of dialog rounds and a relatively large amount of context content, information is easily confused during compression, causing problems such as relatively low quality of the generated reply sentence. In a multi-round generation-type dialog model, modeling is performed on the multi-round query and reply transfer process. However, such a model tends to generate high-frequency answers, and therefore has low accuracy.
- It is an aspect to provide a dialog generation method, apparatus, and device, and a storage medium, so as to resolve a technical problem of low accuracy of dialog generation.
- According to an aspect of one or more exemplary embodiments, there is provided a method including converting, by at least one processor, each term in a Kth round of a query sentence into a first word vector, and calculating a positive latent vector and a negative latent vector of each term according to the first word vector, K being a positive integer greater than or equal to 2; obtaining, by the at least one processor, a content topic of the Kth round of the query sentence, and converting the content topic into a second word vector; determining an initial latent vector output for the Kth round of the query sentence according to the second word vector, the positive latent vector of a last term in the Kth round of the query sentence, a latent vector of a last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence; and generating, by the at least one processor, a reply sentence for the Kth round of the query sentence according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence.
- According to other aspects of one or more exemplary embodiments, there is also provided an apparatus and a computer readable storage medium consistent with the method.
- Exemplary embodiments will now be described with reference to the accompanying drawings, in which:
- FIG. 1 is a schematic flowchart of a dialog generation method according to an exemplary embodiment;
- FIG. 2A is a schematic architectural diagram of a dialog generation system according to an exemplary embodiment;
- FIG. 2B is a schematic flowchart of a dialog generation method according to an exemplary embodiment;
- FIG. 3 is a schematic structural diagram of a dialog generation apparatus according to an exemplary embodiment; and
- FIG. 4 is a schematic structural diagram of another dialog generation apparatus according to an exemplary embodiment.
- The following clearly and completely describes the technical solutions in the exemplary embodiments with reference to the accompanying drawings in which the exemplary embodiments are illustrated. The described exemplary embodiments are some but not all of the possible exemplary embodiments. All other exemplary embodiments obtained by a person of ordinary skill in the art based on these exemplary embodiments without creative effort shall fall within the protection scope of the present disclosure and its accompanying claims.
- During implementation of the exemplary embodiments, each term in a Kth round of a query sentence is converted into a first word vector, and a positive latent vector and a negative latent vector of each term is calculated according to the first word vector; a content topic of the Kth round of the query sentence is obtained, and the content topic is converted into a second word vector; an initial latent vector output for the Kth round of the query sentence is determined according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, a latent vector of the last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence; and a reply sentence for the Kth round of the query sentence is generated according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence. Topic content is added to a dialog generation process, so as to effectively suppress generation of a cross-topic general high-frequency reply sentence, and to improve accuracy of dialog generation.
- To better understand exemplary embodiments, the following provides meanings of some technical terms.
- Recurrent neural network (RNN): A neural network that may be used to model time-sequence behavior.
- Long short-term memory (LSTM): A recurrent cell structure that includes an input gate, an output gate, and a forgetting gate, and that is suited to processing and predicting important events with long intervals and delays in a time sequence.
- Gated recurrent unit (GRU): A variant RNN cell that combines the forgetting gate and the input gate into a single update gate and merges the cell state with the hidden state, so that information is exposed directly through the output. The GRU behaves similarly to the LSTM and is likewise suited to modeling long-range dependencies, but has a simpler cell structure (a minimal sketch of a GRU cell is given after this list).
- One-hot: A vector whose dimension number equals the size of a dictionary; each dimension corresponds to a word in the dictionary, and the vector is 1 only at the corresponding position and 0 at all other positions.
- Word vector: A fixed-length, low-dimension vector (for example, usually 200 to 300 dimensions) used to represent a term; semantically related terms have a small vector distance.
- Softmax: The generalization of the logistic regression model to multi-class classification problems.
- Biterm topic model (BTM): A topic model whose main idea is to count co-occurring word pairs (biterms) formed by any two words, and to model at the level of such co-occurring word pairs, thereby alleviating the feature-sparsity problem of the corpus.
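To make the GRU definition concrete, here is a minimal numpy sketch of one GRU cell update; the weight names in the parameter dict are illustrative, not symbols from the patent:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h_prev, P):
    """One GRU step: a single update gate z takes the place of the LSTM's
    separate forgetting and input gates, and there is no separate cell state."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h_prev)              # update gate
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h_prev)              # reset gate
    h_cand = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand                   # new hidden state
```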
- Referring to FIG. 1, which is a schematic flowchart of a dialog generation method according to an exemplary embodiment, the method in this exemplary embodiment includes:
- S101: Convert each term in a Kth round of a query sentence into a first word vector, and calculate a positive latent vector and a negative latent vector of each term according to the first word vector, K being a positive integer greater than or equal to 2.
- During implementation, a multi-round dialog model may be established. As shown in FIG. 2A, each round of query sentence and the corresponding reply sentence may be embedded in a single-round dialog model. The multi-round dialog model may be considered as an expansion of the single-round dialog model. The single-round dialog model may include an encoding layer, an intention layer, and a decoding layer.
- At the encoding layer, the Kth round of the query sentence input by a user may be obtained, and word segmentation is performed on the Kth round of the query sentence by using a term as a unit. The word vector of each term in the query is first represented by using one-hot encoding, and is then converted into a vector $x_t^{(en)}$ of a preset dimension by using an embedding space matrix (ESM). The dimension number of the one-hot encoding is the size of a preset dictionary; each dimension corresponds to a term in the dictionary, and the encoding is 1 only at the corresponding position and 0 at all other positions. The Kth round of the query sentence is scanned from head to tail; the word vector of each term is input to a positive gated recurrent unit in sequence; and the positive latent vector after each term is input is recorded. In addition, the Kth round of the query sentence is scanned from tail to head; the word vector of each term is input to a negative gated recurrent unit; and the negative latent vector after each term is input is recorded. A sketch of the one-hot-to-embedding conversion follows.
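The one-hot-to-embedding conversion can be sketched as follows; the toy dictionary, the embedding dimension, and the random embedding space matrix are illustrative assumptions:

```python
import numpy as np

dictionary = ["ni", "kan", "guo", "dian", "ying", "ma"]   # toy preset dictionary
embed_dim = 4                                             # preset dimension
rng = np.random.default_rng(0)
ESM = rng.standard_normal((embed_dim, len(dictionary)))   # embedding space matrix

def one_hot(term):
    v = np.zeros(len(dictionary))
    v[dictionary.index(term)] = 1.0    # 1 only at the corresponding position
    return v

# Word vector x_t^{(en)} of a preset dimension for each term of the query.
x = [ESM @ one_hot(term) for term in "ni kan guo dian ying ma".split()]
```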
- A positive latent vector of a target term in the Kth round of the query sentence may be calculated according to the first word vector of the target term and the positive latent vector of the previous term of the target term. The positive latent vector of the target term may be represented as
- $\overrightarrow{h}_t^{(en)} = \overrightarrow{f}_{en}(x_t^{(en)}, \overrightarrow{h}_{t-1}^{(en)})$.
- A negative latent vector of a target term in the Kth round of the query sentence may be calculated according to the first word vector of the target term and the negative latent vector of the next term of the target term. The negative latent vector of the target term may be represented as
- $\overleftarrow{h}_t^{(en)} = \overleftarrow{f}_{en}(x_t^{(en)}, \overleftarrow{h}_{t+1}^{(en)})$.
- For example, the Kth round of the query sentence may be “ni kan guo dian ying ma?”. “ni kan guo dian ying ma” may be positively encoded, to convert each term in the query sentence into a word vector; the word vectors are respectively $x_1^{(en)}$, $x_2^{(en)}$, $x_3^{(en)}$, $x_4^{(en)}$, $x_5^{(en)}$, and $x_6^{(en)}$. A positive latent vector of the first term “ni” is determined according to the word vector $x_1^{(en)}$ of the first term “ni”; a positive latent vector of the second term “kan” is determined according to the word vector $x_2^{(en)}$ of the second term “kan” and the positive latent vector of the first term “ni”; a positive latent vector of the third term “guo” is determined according to the word vector $x_3^{(en)}$ of the third term “guo” and the positive latent vector of the second term “kan”. The process is repeated, so as to respectively calculate the positive latent vectors of the fourth term “dian”, the fifth term “ying”, and the sixth term “ma”.
- In addition, “ni kan guo dian ying ma” may be negatively encoded. A negative latent vector of the sixth term “ma” is determined according to the word vector $x_6^{(en)}$ of the sixth term “ma”; a negative latent vector of the fifth term “ying” is determined according to the word vector $x_5^{(en)}$ of the fifth term “ying” and the negative latent vector of the sixth term “ma”; a negative latent vector of the fourth term “dian” is determined according to the word vector $x_4^{(en)}$ of the fourth term “dian” and the negative latent vector of the fifth term “ying”. The process is repeated, so as to respectively calculate the negative latent vectors of the third term “guo”, the second term “kan”, and the first term “ni”. A sketch of this bidirectional scan follows.
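Putting S101 together, the bidirectional scan may be sketched as below, reusing `gru_cell` and the word vectors `x` from the two sketches above; the separate parameter dicts `P_fwd` and `P_bwd` stand in for the trained positive and negative GRUs, and the hidden size is an arbitrary choice whose shapes must agree with the parameters:

```python
import numpy as np

def encode_bidirectional(x, P_fwd, P_bwd, hidden_dim):
    """Return the spliced latent vector h_t^{(en)} for every term."""
    T = len(x)
    h_fwd, h_bwd = [None] * T, [None] * T
    h = np.zeros(hidden_dim)
    for t in range(T):                      # head-to-tail (positive) scan
        h = gru_cell(x[t], h, P_fwd)
        h_fwd[t] = h
    h = np.zeros(hidden_dim)
    for t in reversed(range(T)):            # tail-to-head (negative) scan
        h = gru_cell(x[t], h, P_bwd)
        h_bwd[t] = h
    # Latent vector of each term: splice of positive and negative vectors.
    return [np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)]
```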
- S102: Obtain a content topic of the Kth round of the query sentence, and convert the content topic into a second word vector.
- During implementation, a plurality of words may be trained by using the BTM algorithm, and a probability distribution over content topics may be determined for each word. The Kth round of the query sentence is matched against the plurality of words, to determine the content topic having the highest probability in the Kth round of the query sentence. The content topic having the highest probability may be represented by using one-hot encoding, and an embedding space matrix of the content topic is established, so as to obtain the word vector $E^{(k)}$ of the content topic, as sketched below.
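The topic-selection step may be sketched as follows, assuming the per-word topic distributions have already been estimated by the BTM algorithm; the accumulation rule and variable names are illustrative assumptions:

```python
import numpy as np

def query_topic_one_hot(query_terms, word_topic_probs, num_topics):
    """Return the one-hot vector of the highest-probability content topic.

    word_topic_probs: dict mapping a word to its BTM-estimated probability
        distribution over the num_topics content topics.
    """
    scores = np.zeros(num_topics)
    for term in query_terms:
        if term in word_topic_probs:
            scores += word_topic_probs[term]    # accumulate topic evidence
    one_hot = np.zeros(num_topics)
    one_hot[int(np.argmax(scores))] = 1.0       # topic with highest probability
    return one_hot   # multiplied by the topic embedding matrix to give E^{(k)}
```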
- S103: Determine an initial latent vector output for the Kth round of the query sentence according to the second word vector, a positive latent vector of the last term in the Kth round of the query sentence, a latent vector of the last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence.
- During implementation, as shown in FIG. 2A, at the intention layer, the positive latent vector of the last term in the Kth round of the query sentence output by the encoding layer 21, the word vector $E^{(k)}$ of the content topic, the latent vector of the last term in the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence, and the initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence may be input to a simple RNN, to calculate the initial latent vector output for the Kth round of the query sentence. The initial latent vector may be represented as:
- $h^{(in,k)} = \sigma(W^{(in,in)} h^{(in,k-1)} + W^{(in,de)} h_T^{(de,k-1)} + W^{(in,en)} \overrightarrow{h}_T^{(en,k)} + W^{(in,e)} E^{(k)})$, where $W^{(in,in)}$, $W^{(in,de)}$, $W^{(in,en)}$, and $W^{(in,e)}$ respectively denote parameters in the simple RNN. $\sigma$ is used to compress the initial latent vector $h^{(in,k)}$ into the interval [0, 1], so as to improve the nonlinear representation capability of the model.
- S104: Generate a reply sentence for the Kth round of the query sentence according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence.
- During implementation, the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence are spliced to obtain a latent vector of each term in the Kth round of the query sentence, where the latent vector of each term is given by
-
- A second latent vector output for the Kth round of the query sentence is determined according to the initial latent vector output for the Kth round of the query sentence and a word vector of a preset identification character, and the first reply term to be output for the Kth round of the query sentence is determined according to the second latent vector; contribution of each term in the Kth round of the query sentence to generation of the second reply term is calculated according to the second latent vector and the latent vector of each term in the Kth round of the query sentence; a third latent vector is calculated according to the contribution of each term in the Kth round of the query sentence to the generation of the second reply term, the second latent vector, and a word vector of the first reply term; and the second reply term for the Kth round of the query sentence is generated according to the third latent vector, and the process is repeated to generate the reply sentence for the Kth round of the query sentence.
- It should be noted that, key information in a context may be described more accurately by using a latent state of a bidirectional structure as an input of the attention layer, thereby effectively alleviating a problem that key information is close to the end in a unidirectional structure. Because the latent state of the bidirectional structure may increase global information of each term to some extent, a problem that a term closer to the end includes more information in the unidirectional structure, so that correlation of a generated reply sentence is higher.
- In another exemplary embodiment, a weight of each term in the Kth round of the query sentence for the generation of the second reply term is calculated according to the second latent vector and the latent vector of each term in the Kth round of the query sentence; a weighted sum of the latent vector of each term in the Kth round of the query sentence is calculated according to the weight of each term in the Kth round of the query sentence for the generation of the second reply term, and the weighted sum is used as the contribution of each term in the Kth round of the query sentence to the generation of the second reply term.
- In another exemplary embodiment, a probability distribution of each term in the preset dictionary may be calculated according to the third latent vector; a term having a highest probability in the preset dictionary may be selected as the second reply term for output, and a third reply term, a fourth reply term, a fifth reply term, and the like may be output in sequence. Each time, the first 50 terms having a higher probability may be selected by using a beam search, a reply sentence is generated term by term, and the first five sentences having the highest probability are selected.
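- The selection procedure described above is a beam-style search. The following Python sketch illustrates a generic beam search of this kind; the function names and the toy scoring model are assumptions made for this example, and a real system would score each prefix by using the decoding layer:

```python
import numpy as np

def beam_search(step_logprobs, eos_id, beam_width=50, n_best=5, max_len=10):
    # Each beam is a (term-id prefix, cumulative log-probability) pair.
    beams = [([], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            logp = step_logprobs(prefix)  # log-probabilities over the dictionary
            for t in np.argsort(logp)[-beam_width:]:
                candidates.append((prefix + [int(t)], score + float(logp[t])))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_width]:
            if prefix[-1] == eos_id:
                finished.append((prefix, score))  # sentence completed by _EOS_
            else:
                beams.append((prefix, score))
        if not beams:
            break
    finished.sort(key=lambda c: c[1], reverse=True)
    return (finished or beams)[:n_best]

rng = np.random.default_rng(1)

def toy_step(prefix):
    # Placeholder scorer: a random log-distribution over a 6-term dictionary;
    # term 5 plays the role of _EOS_ here.
    logits = rng.normal(size=6)
    return logits - np.log(np.exp(logits).sum())

print(beam_search(toy_step, eos_id=5, beam_width=3, n_best=2))
```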
- For example, an importance degree gjt of each term in the query sentence to the generation of a reply term is calculated by using the latent vector of the previous term of the reply term and the latent vector of each term in the query sentence, where gjt = vT tanh(W(de,de)hj−1(de) + W(de,en)ht(en)). hj−1(de) denotes the latent vector of the previous reply term, ht(en) denotes the latent vector of each term in the query sentence, and W(de,de) and W(de,en) denote respectively parameters in a neural network. The importance degree gjt is normalized by using softmax, to calculate a weight αjt = exp(gjt)/Σt′ exp(gjt′) of the latent vector of each term in the Kth round of the query sentence. A weighted sum Cj = Σt αjt ht(en) of the latent vectors of the terms in the Kth round of the query sentence is calculated, so as to generate, according to hj(de) = ƒde(xj(de), hj−1(de), Cj), a latent vector of the reply sentence term by term. xj(de) denotes the word vector of the previous reply term, and hj−1(de) denotes the latent vector of the previous reply term.
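- For illustration, the following Python sketch computes the importance degrees gjt, the normalized weights αjt, and the weighted sum Cj for random placeholder vectors; the dimensions and parameter values are assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(2)
d, T = 8, 6                        # latent size and number of query terms (illustrative)
H_en = rng.normal(size=(T, d))     # latent vector ht(en) of each query term
h_de_prev = rng.normal(size=d)     # latent vector h(j-1)(de) of the previous reply term
W_de_de = rng.normal(size=(d, d))  # assumed neural-network parameters
W_de_en = rng.normal(size=(d, d))
v = rng.normal(size=d)

# Importance degree g_jt = v^T tanh(W(de,de) h(j-1)(de) + W(de,en) h_t(en))
g = np.array([v @ np.tanh(W_de_de @ h_de_prev + W_de_en @ h_t) for h_t in H_en])

# Softmax normalization yields the attention weights alpha_jt
alpha = np.exp(g - g.max())
alpha /= alpha.sum()

# Weighted sum C_j of the query-term latent vectors
C_j = alpha @ H_en
print(alpha, C_j)
```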
- For the dialog generation method provided in the foregoing exemplary embodiment, refer to
FIG. 2B. The following describes implementation steps of the method by using detailed examples:
- At the encoding layer 21, word segmentation is performed on the query sentence “ni kan guo dian ying ma” 25 by using a term as a unit, to obtain “ni”, “kan”, “guo”, “dian”, “ying”, and “ma”. Positive encoding is performed from “ni” to “ma”, to form positive latent vectors →h1, →h2, . . . , and →h6 of the 6 terms, that is, vectors from left to right at the attention layer 24. Negative encoding is performed from “ma” to “ni”, to form negative latent vectors ←h1, ←h2, . . . , and ←h6 of the 6 terms, that is, vectors from right to left at the attention layer. The positive latent vector and the negative latent vector are serially spliced, to form the latent vector of a term. For example, the latent vector of “ni” in the query sentence is h1 = [→h1; ←h1].
- At the
intention layer 22, a content topic of the query sentence “ni kan guo dian ying ma” 25 is calculated as “dian ying”, and the content topic “dian ying” is encoded to obtain a topic vector. An output vector of the intention layer in a previous round, an output vector of the decoding layer in the previous round, an output vector of theencoding layer 21 in this round, and the topic vector are all input to the intention layer. An initial latent vector is calculated and output by using a neural network. The initial latent vector may be used to determine the first term of a reply sentence at the decoding layer. - At the
decoding layer 23, the process may be considered as the reverse of the process at the encoding layer 21. The word vectors and the latent vectors may be decoded into a natural language. A reply sentence “wo xi huan ou mei dian ying” may be generated according to the initial latent vector output by the intention layer and the latent vector of each term in the query sentence at the attention layer. It is assumed that the dictionary contains ten thousand terms; each time the decoding layer 23 performs decoding, the decoding layer 23 generates a probability distribution over the ten thousand terms, and selects the term having the highest probability for output. The process is as follows:
- The
intention layer 22 outputs the initial latent vector; the initial latent vector and the word vector of the identification character “_EOS_” are input to the decoding layer 23, and the latent vector is updated by using a neural network to obtain a second latent vector. The second latent vector generates a probability distribution over the ten thousand terms by using a softmax regression. The term “wo” has the highest probability, and therefore the reply term “wo” is output. The second latent vector and the word vector of the reply term “wo” are used as an input, to generate a third latent vector. A probability distribution of the next term is calculated according to the third latent vector, and the term “xi” having the highest probability is selected for output. The foregoing process is repeated until the special symbol _EOS_ is output. A reply sentence “wo xi huan ou mei dian ying _EOS_” 26 may be generated.
- In this exemplary embodiment, each term in the Kth round of a query sentence is converted into the first word vector, and the positive latent vector and the negative latent vector of each term are calculated according to the first word vector; the content topic of the Kth round of the query sentence is obtained, and the content topic is converted into the second word vector; the initial latent vector output for the Kth round of the query sentence is determined according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, the latent vector of the last term in the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence, and the initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence; and the reply sentence for the Kth round of the query sentence is generated according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence. Topic content is added to the dialog generation process, so as to effectively suppress the generation of cross-topic general high-frequency reply sentences, and to improve the accuracy of dialog generation.
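- For illustration, the term-by-term decoding loop described in the walkthrough above may be sketched in Python as follows; the toy vocabulary, the tanh state update standing in for the GRU, and the random parameters are assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = ["_EOS_", "wo", "xi", "huan", "ou", "mei", "dian", "ying"]
V, d = len(vocab), 8

E = rng.normal(size=(V, d))      # word-embedding matrix (illustrative)
W_h = rng.normal(size=(d, d))    # assumed state-update parameters
W_x = rng.normal(size=(d, d))
W_out = rng.normal(size=(V, d))  # projects a latent state to vocabulary logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode(h_init, max_len=10):
    """Greedy decoding: update the state, emit the argmax term, stop at _EOS_."""
    h, term, out = h_init, 0, []  # decoding starts from the _EOS_ identification character
    for _ in range(max_len):
        h = np.tanh(W_h @ h + W_x @ E[term])       # toy stand-in for the GRU update
        term = int(np.argmax(softmax(W_out @ h)))  # probability distribution over the vocab
        if vocab[term] == "_EOS_":
            break
        out.append(vocab[term])
    return out

print(decode(rng.normal(size=d)))  # h_init would come from the intention layer
```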
- In recent years, human-machine interaction manners have changed rapidly. As a new interaction mode, dialog robots attract increasing attention. The exemplary embodiments relate to the field of computer technologies and machine learning. By means of deep learning technologies, a robot may understand the meanings of human natural languages through a multi-round dialog, and generate a corresponding reply sentence. However, how to improve the correlation of an automatically generated reply sentence in a multi-round dialog, and how to reduce the generation probability of high-frequency answers so as to generate a high-quality dialog, are problems to be resolved. The technical solutions provided in this exemplary embodiment not only may avoid the low generalization capability of a rule-based dialog system and the low recall capability of a search-algorithm-based dialog system, but also may effectively alleviate the problem of a high generation probability of high-frequency reply sentences in mainstream dialog generation systems based on statistical learning, thereby improving the practicality of a dialog generation algorithm.
- Referring to
FIG. 2A, based on a multi-round dialog model, a single sentence is encoded by using a GRU unit at the encoding layer 21, to alleviate gradient dispersion; dialog topic information based on the BTM algorithm is added to the intention layer 22 and used as dialog generation supervision information, so as to reduce the generation probability of high-frequency answers to some extent; and a bidirectional attention mechanism (the attention layer 24) is used at the decoding layer 23, to capture key information in a context, so that a generated dialog has higher correlation.
- The dialog generation method is based on a multi-round dialog generation model, and includes two processes: training and prediction. An input of the multi-round dialog generation model is the query and reply pairs of the first four rounds of dialogs and the current round of query sentence, and an output of the multi-round dialog generation model is the current round of reply sentence generated by the algorithm according to the preceding text.
- In a training process, if there are, for example, five real rounds of query and reply pairs, the real reply sentence of the last round is selected as the supervision information of the training algorithm, a loss function is calculated by using the generated reply sentence, and the neural network is trained until it converges. The query sentence and the reply sentence in each round are embedded in a single-round dialog generation model. In this case, the multi-round dialog generation may be considered as an expansion of the single-round dialog generation in time sequence. In the single-round generation model, processing includes three parts: the processing processes of an encoding layer, an intention layer, and a decoding layer.
- 1) Encoding layer: The layer is used to map an input natural language to a vector with a fixed dimension. Therefore, an input of the layer is a sentence in a form of a natural language, and an output is a vector with a fixed length. Specifically, the process includes the following steps:
- I>. performing word segmentation on the sentence by using a term as unit, and converting a one-hot expression of each term to a word vector xt (en) of 200 dimensions by using an embedding space matrix;
- II>. scanning the sentence from head to tail, inputting the word vector of each term in sequence to a positive GRU network, and recording the latent state →ht(en) after each term is input;
- III>. scanning the sentence from tail to head, inputting the word vector of each term in sequence to a negative GRU network, and recording the latent state ←ht(en) after each term is input;
- IV>. using the last state in II> as the fixed-length vector expression of the entire sentence, that is, the sentence embedding, and using the sentence embedding as an input of the intention layer; and
- V>. serially splicing the positive latent vector and the negative latent vector that are respectively obtained in II> and III>, that is, ht(en) = [→ht(en); ←ht(en)], using the obtained expression as the expression of the term, and using the expression as an input of the decoding layer (a sketch of steps I> to V> follows this list). Compared with a unidirectional structure, when the latent state of a bidirectional structure is used as an input of the attention layer, key information in a context may be described more accurately, thereby effectively alleviating the problem that key information is concentrated near the end in the unidirectional structure. The latent state of the bidirectional structure enables each term to carry global information to some extent, thereby avoiding the problem that a term closer to the end in the unidirectional structure carries more information, so that a generated reply sentence has higher correlation.
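- For illustration, the following Python sketch covers steps I> to V>; a plain tanh recurrence stands in for the GRU networks, and the vocabulary, dimensions, and random parameters are assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(4)
V, d_emb, d_h = 10, 200, 8  # vocabulary size, word-vector size (200 per step I>), latent size
E = rng.normal(size=(V, d_emb))  # embedding space matrix mapping one-hot ids to word vectors
Wf = rng.normal(size=(d_h, d_emb)); Uf = rng.normal(size=(d_h, d_h))  # positive-scan parameters
Wb = rng.normal(size=(d_h, d_emb)); Ub = rng.normal(size=(d_h, d_h))  # negative-scan parameters

def rnn_scan(term_ids, W, U):
    """Simple tanh recurrence standing in for the GRU described in the text."""
    h, states = np.zeros(d_h), []
    for t in term_ids:
        h = np.tanh(W @ E[t] + U @ h)
        states.append(h)
    return states

sentence = [1, 4, 2, 7, 3, 5]                 # toy term ids for a 6-term query sentence
fwd = rnn_scan(sentence, Wf, Uf)              # II>: head-to-tail (positive) scan
bwd = rnn_scan(sentence[::-1], Wb, Ub)[::-1]  # III>: tail-to-head (negative) scan, re-aligned

latent = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]  # V>: serial splice per term
sentence_embedding = fwd[-1]                  # IV>: last positive state = sentence embedding
print(latent[0].shape, sentence_embedding.shape)
```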
- 2) Intention layer: The layer is used to encode the topic transfer process of a multi-round dialog. An input of the intention layer is the sentence embedding obtained in 1), the last latent state hT(de,k−1) of the decoding layer in the previous round of query and reply, the output h(in,k−1) of the intention layer in the previous round of query and reply, and the topic E(k) of the current round of query sentence, and an output is a vector h(in,k) obtained by comprehensively encoding the current topic and context information. Specifically, the process includes the following steps:
- I>. calculating the topic of the current query sentence: performing offline training by using the BTM algorithm to obtain a topic distribution for each word, calculating online the number of the topic of the current query sentence having the highest probability, where the number may be considered as a one-hot expression of the topic, and establishing a topic embedding matrix, to obtain the word vector E(k) of the topic; and
- II>. calculating topic transfer by using a simple-RNN, where h(in,k) = σ(W(in,in)h(in,k−1) + W(in,en)hT(en,k) + W(in,de)hT(de,k−1) + W(topic)E(k)), W(in,in), W(in,en), W(in,de), and W(topic) denote respectively parameters in the simple-RNN, and σ is used to compress the initial latent vector h(in,k) into the interval [0, 1], to improve the nonlinear representation capability of the model; and using h(in,k) as an input of the decoding layer. In this process, the topic of the current query is calculated and added, which is equivalent to adding supervision information to the calculation process, so that the generation of the reply sentence in the next step is limited by the topic, thereby reducing the generation probability of some general high-frequency reply sentences.
- 3) Decoding layer: The layer is used to output a probability distribution of the next term in a dictionary by analyzing the output vectors of the encoding layer and the intention layer. An input is the output h(in,k) of the intention layer and the output ht(en) of the encoding layer, and an output is a probability distribution of the next term in the dictionary. Specifically, the process includes the following steps:
- I>. calculating attention by using ht(en): calculating an importance degree of each term in a query sentence by using the latent vector of the previous term of a reply term and the latent vector of each term in the query sentence, where gjt = vT tanh(W(de,de)hj−1(de) + W(de,en)ht(en)), hj−1(de) denotes the latent vector of the previous reply term, ht(en) denotes the latent vector of each term in the query sentence, and W(de,de) and W(de,en) denote respectively parameters in a neural network; performing normalization by using softmax, to obtain the weight of the attention layer, αjt = exp(gjt)/Σt′ exp(gjt′), that is, determining which elements in the query sentence contribute most to the generation of the reply term; and calculating the weighted sum of the latent vectors of the terms in the query sentence, that is, Cj = Σt αjt ht(en); and
- II>. generating the next latent state term by term by using a GRU unit according to hj(de) = ƒde(xj(de), hj−1(de), Cj), feeding each latent state into a fully connected layer, and calculating the probability distribution of the next term in the dictionary by using softmax. During training, a loss is calculated as the negative log likelihood, under the predicted distributions, of the corresponding terms in the standard reply sentence; the total loss over the standard reply sentence is used as the loss of the current round, and error back propagation is performed by using the back propagation through time (BPTT) algorithm of a recurrent neural network. During prediction, the first 50 terms having a higher probability are selected by using a beam search algorithm, a reply sentence is generated term by term, and the first 5 sentences having the highest probability are output.
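- For illustration, the negative log likelihood loss over a standard reply may be computed along the following lines; the predicted distributions here are random placeholders rather than outputs of a trained decoding layer:

```python
import numpy as np

rng = np.random.default_rng(5)
V, L = 10000, 7  # dictionary size and standard-reply length (illustrative)

# Predicted probability distribution over the dictionary at each reply position
logits = rng.normal(size=(L, V))
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

standard_reply = rng.integers(0, V, size=L)  # term ids of the standard reply (placeholder)

# Loss of the current round: total negative log likelihood over the standard reply;
# BPTT would then propagate gradients of this scalar through the recurrent network.
nll = -np.log(probs[np.arange(L), standard_reply]).sum()
print(nll)
```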
- The technical solutions provided in this exemplary embodiment derive from translation models. A translation model establishes an equivalent space conversion from one language to another, and therefore the semantic space is relatively fixed. In a dialog model, a mapping to a multi-semantic space is needed, because different people provide different replies to the same query sentence. However, in a large amount of data, some general but formulaic replies such as “Oh, OK” become mainstream in the corpora. As a result, a trained robot tends to use these high-frequency replies. According to the technical solutions provided by the exemplary embodiments, the semantic space of sentence generation is reduced by using the topic information of a semantic section, thereby suppressing the generation of high-frequency meaningless reply sentences to some extent. In addition, a bidirectional attention model is used, so as to capture key semantic information more accurately, thereby better ensuring the correlation of sentence generation.
- During implementation, the technical solutions provided in this exemplary embodiment may be implemented by using the deep learning framework MXNet 0.5.0, and training and prediction may be performed on a Tesla K40. The technical solutions provided in this exemplary embodiment may be applied to service scenarios such as a chat robot, automatic email reply, and automatic generation of candidate reply sentences in social software, so as to automatically generate, in real time, several proper reply sentences according to the first few rounds of dialogs. The generation process is controlled by the algorithm, without requiring control by a user. For example, the chat robot may automatically reply directly according to an input of a user, thereby providing emotional companionship. For another example, in the service of automatically generating candidate reply sentences, several candidate reply sentences are generated for a user according to the status of the first few rounds of chats, and when it is not convenient for the user to enter a reply, the service may provide a rapid reply for the user.
- Referring to
FIG. 3, FIG. 3 is a schematic structural diagram of a dialog generation apparatus according to an exemplary embodiment. Each part included in the apparatus may be implemented by using a dialog generation device, for example, a processor in a terminal such as a mobile phone, a tablet computer, or a personal computer. Certainly, a function implemented by the processor may also be implemented by using a logic circuit. During implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like. As shown in FIG. 3, the apparatus in this exemplary embodiment includes: a latent calculation section 301, a topic determining section 302, a vector calculation section 303, and a reply output section 304.
- The latent calculation section 301 is configured to: convert each term in a Kth round of a query sentence into a first word vector, and calculate a positive latent vector and a negative latent vector of each term according to the first word vector, K being a positive integer greater than or equal to 2.
- During implementation, a multi-round dialog model may be established. As shown in
FIG. 2A, each round of query sentence and the corresponding reply sentence may be embedded in a single-round dialog model. The multi-round dialog model may be considered as an expansion of the single-round dialog model. The single-round dialog model may include an encoding layer, an intention layer, and a decoding layer.
- At the encoding layer, the Kth round of the query sentence input by a user may be obtained, and word segmentation is performed on the Kth round of the query sentence by using a term as a unit. The word vector of each term in the query is represented by using one-hot encoding. The word vector of each term is converted into a vector xt(en) of a preset dimension by using an embedding space matrix (ESM). The dimension of the one-hot encoding is the size of a preset dictionary. Each dimension corresponds to a term in the dictionary. The one-hot encoding is 1 only at the corresponding position, and is 0 at other positions. The Kth round of the query sentence is scanned from head to tail; the word vector of each term is input to a positive gated recurrent unit in sequence; and the positive latent vector after each term is input is recorded. In addition, the Kth round of the query sentence is scanned from tail to head; the word vector of each term is input to a negative gated recurrent unit; and the negative latent vector after each term is input is recorded.
- A positive latent vector of a target term in the Kth round of the query sentence may be calculated according to the first word vector of the target term and the positive latent vector of the previous term of the target term. The positive latent vector of the target term may be represented as →ht(en) = ƒen(xt(en), →ht−1(en)).
- A negative latent vector of a target term in the Kth round of the query sentence may be calculated according to the first word vector of the target term and the negative latent vector of the next term of the target term. The negative latent vector of the target term may be represented as ←ht(en) = ƒen(xt(en), ←ht+1(en)).
- For example, the Kth round of the query sentence may be “ni kan guo dian ying ma?”. “ni kan guo dian ying ma” may be positively encoded: each term in the query sentence is converted into a word vector, and the word vectors are respectively x1(en), x2(en), x3(en), x4(en), x5(en), and x6(en). The positive latent vector →h1(en) of the first term “ni” is determined according to the word vector of the first term “ni”; the positive latent vector →h2(en) of the second term “kan” is determined according to the word vector of the second term “kan” and the positive latent vector of the first term “ni”; the positive latent vector →h3(en) of the third term “guo” is determined according to the word vector of the third term “guo” and the positive latent vector of the second term “kan”. The process is repeated, so as to respectively calculate the positive latent vector of the fourth term “dian”, the positive latent vector of the fifth term “ying”, and the positive latent vector of the sixth term “ma”.
- In addition, “ni kan guo dian ying ma” may be negatively encoded: each term in the query sentence is converted into a word vector, and the word vectors are respectively x1(en), x2(en), x3(en), x4(en), x5(en), and x6(en). The negative latent vector ←h6(en) of the sixth term “ma” is determined according to the word vector of the sixth term “ma”; the negative latent vector ←h5(en) of the fifth term “ying” is determined according to the word vector of the fifth term “ying” and the negative latent vector of the sixth term “ma”; the negative latent vector ←h4(en) of the fourth term “dian” is determined according to the word vector of the fourth term “dian” and the negative latent vector of the fifth term “ying”. The process is repeated, so as to respectively calculate the negative latent vector of the third term “guo”, the negative latent vector of the second term “kan”, and the negative latent vector of the first term “ni”.
- The topic determining section 302 is configured to: obtain a content topic of the Kth round of the query sentence, and convert the content topic into a second word vector.
- During implementation, a topic distribution for each word of a plurality of words may be obtained through training by using the BTM algorithm. The Kth round of the query sentence is matched against the plurality of words, to determine the content topic having the highest probability for the Kth round of the query sentence. The content topic having the highest probability may be represented by using one-hot encoding, and an embedding space matrix of the content topic is established, so as to obtain the word vector E(k) of the content topic.
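- For illustration, the following Python sketch shows the online topic-assignment step under the assumption that a BTM-style topic distribution for each word has already been trained offline; the distributions, vocabulary, and topic embedding matrix are placeholders made for this example:

```python
import numpy as np

rng = np.random.default_rng(6)
n_topics = 5
vocab = {"ni": 0, "kan": 1, "guo": 2, "dian": 3, "ying": 4, "ma": 5}

# Offline BTM training would yield P(topic | word); here it is a random placeholder.
word_topic = rng.dirichlet(np.ones(n_topics), size=len(vocab))
topic_embedding = rng.normal(size=(n_topics, 8))  # embedding space matrix of the topics

def topic_vector(query_terms):
    """Pick the highest-probability topic for the query, then look up E(k)."""
    scores = sum(word_topic[vocab[t]] for t in query_terms)
    k = int(np.argmax(scores))    # topic number, i.e., a one-hot topic expression
    return k, topic_embedding[k]  # word vector E(k) of the content topic

k, E_k = topic_vector(["ni", "kan", "guo", "dian", "ying", "ma"])
print(k, E_k.shape)
```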
- The vector calculation section 303 is configured to determine an initial latent vector output for the Kth round of the query sentence according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, a latent vector of the last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence.
- During implementation, as shown in
FIG. 2A, at the intention layer, the positive latent vector of the last term in the Kth round of the query sentence output by the encoding layer 21, the word vector E(k) of the content topic, the latent vector of the last term in the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence, and the initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence may be input to a simple-RNN, to calculate the initial latent vector output for the Kth round of the query sentence. The initial latent vector may be represented as:
- h(in,k) = σ(W(in,in)h(in,k−1) + W(in,en)hT(en,k) + W(in,de)hT(de,k−1) + W(topic)E(k)), where W(in,in), W(in,en), W(in,de), and W(topic) denote respectively parameters in the simple-RNN. σ is used to compress the initial latent vector h(in,k) into the interval [0, 1], so as to improve the nonlinear representation capability of the model.
- It should be noted that, in the process of calculating the initial latent vector, the content topic of the Kth round of the query sentence is added to the intention layer for calculation. This is equivalent to adding supervision information to the calculation process, so that a generated reply sentence may be limited within the range of the content topic, thereby reducing the generation probability of some general high-frequency reply sentences.
- The reply output section 304 is configured to generate a reply sentence for the Kth round of the query sentence according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence.
- During implementation, the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence are spliced to obtain a latent vector of each term in the Kth round of the query sentence, where the latent vector of the tth term is given by ht(en) = [→ht(en); ←ht(en)], that is, the serial splice of the positive latent vector →ht(en) and the negative latent vector ←ht(en) of the term.
- A second latent vector output for the Kth round of the query sentence is determined according to the initial latent vector output for the Kth round of the query sentence and a word vector of a preset identification character, and the first reply term to be output for the Kth round of the query sentence is determined according to the second latent vector; contribution of each term in the Kth round of the query sentence to generation of the second reply term is calculated according to the second latent vector and the latent vector of each term in the Kth round of the query sentence; a third latent vector is calculated according to the contribution of each term in the Kth round of the query sentence to the generation of the second reply term, the second latent vector, and a word vector of the first reply term; and the second reply term for the Kth round of the query sentence is generated according to the third latent vector, and the process is repeated to generate the reply sentence for the Kth round of the query sentence.
- It should be noted that key information in a context may be described more accurately by using the latent state of a bidirectional structure as an input of the attention layer, thereby effectively alleviating the problem that, in a unidirectional structure, key information is concentrated near the end. Because the latent state of the bidirectional structure adds global information to each term to some extent, the problem that a term closer to the end carries more information in the unidirectional structure is avoided, so that the correlation of a generated reply sentence is higher.
- In another exemplary embodiment, a weight of each term in the Kth round of the query sentence for the generation of the second reply term is calculated according to the second latent vector and the latent vector of each term in the Kth round of the query sentence; a weighted sum of the latent vector of each term in the Kth round of the query sentence is calculated according to the weight of each term in the Kth round of the query sentence for the generation of the second reply term, and the weighted sum is used as the contribution of each term in the Kth round of the query sentence to the generation of the second reply term.
- In another exemplary embodiment, a probability distribution of each term in the preset dictionary may be calculated according to the third latent vector; a term having a highest probability in the preset dictionary may be selected as the second reply term for output, and a third reply term, a fourth reply term, a fifth reply term, and the like are output in sequence. Each time, the first 50 terms having a higher probability may be selected by using a beam search, a reply sentence is generated term by term, and the first five sentences having the highest probability are selected.
- For example, an importance degree gjt of each term in the query sentence to the generation of a reply term is calculated by using the latent vector of the previous term of the reply term and the latent vector of each term in the query sentence, where gjt = vT tanh(W(de,de)hj−1(de) + W(de,en)ht(en)). hj−1(de) denotes the latent vector of the previous reply term, ht(en) denotes the latent vector of each term in the query sentence, and W(de,de) and W(de,en) denote respectively parameters in a neural network. The importance degree gjt is normalized by using softmax, to calculate a weight αjt = exp(gjt)/Σt′ exp(gjt′) of the latent vector of each term in the Kth round of the query sentence. A weighted sum Cj = Σt αjt ht(en) of the latent vectors of the terms in the Kth round of the query sentence is calculated, so as to generate, according to hj(de) = ƒde(xj(de), hj−1(de), Cj), a latent vector of the reply sentence term by term. xj(de) denotes the word vector of the previous reply term, and hj−1(de) denotes the latent vector of the previous reply term.
- For the dialog generation apparatus provided in the foregoing exemplary embodiment, the following describes implementation steps of the method by using detailed examples:
- At the encoding layer, word segmentation is performed on a query sentence “ni kan guo dian ying ma” by using a term as a unit, to obtain “ni”, “kan”, “guo”, “dian”, “ying”, and “ma”. Positive encoding is performed from “ni” to “ma”, to form positive latent vectors →h1, →h2, . . . , and →h6 of the 6 terms, that is, vectors from left to right at the attention layer 24. Negative encoding is performed from “ma” to “ni”, to form negative latent vectors ←h1, ←h2, . . . , and ←h6 of the 6 terms, that is, vectors from right to left at the attention layer. The positive latent vector and the negative latent vector are serially spliced, to form the latent vector of a term. For example, the latent vector of “ni” in the query sentence is h1 = [→h1; ←h1].
- At the intention layer, a content topic of the query sentence “ni kan guo dian ying ma” is calculated as “dian ying”, and the content topic “dian ying” is encoded to obtain a topic vector. An output vector of the intention layer in a previous round, an output vector of the decoding layer in the previous round, an output vector of the encoding layer in this round, and the topic vector are all input to the intention layer. An initial latent vector is calculated and output by using a neural network. The initial latent vector may be used to determine the first term of a reply sentence at the decoding layer.
- At the decoding layer, the process may be considered as the reverse of the process at the encoding layer. The word vectors and the latent vectors may be decoded into a natural language. A reply sentence “wo xi huan ou mei dian ying” may be generated according to the initial latent vector output by the intention layer and the latent vector of each term in the query sentence at the attention layer. It is assumed that the dictionary contains ten thousand terms; each time the decoding layer performs decoding, the decoding layer generates a probability distribution over the ten thousand terms, and selects the term having the highest probability for output. The process is as follows: the intention layer outputs the initial latent vector; the initial latent vector and the word vector of the identification character “_EOS_” are input to the decoding layer, and the latent vector is updated by using a neural network to obtain a second latent vector. The second latent vector generates a probability distribution over the ten thousand terms by using a softmax regression. The term “wo” has the highest probability, and therefore the reply term “wo” is output. The second latent vector and the word vector of the reply term “wo” are used as an input, to generate a third latent vector. A probability distribution of the next term is calculated according to the third latent vector, and the term “xi” having the highest probability is selected for output. The foregoing process is repeated until the special symbol _EOS_ is output. A reply sentence “wo xi huan ou mei dian ying _EOS_” may be generated.
- In this exemplary embodiment, each term in the Kth round of the query sentence is converted into the first word vector, and the positive latent vector and the negative latent vector of each term are calculated according to the first word vector; the content topic of the Kth round of the query sentence is obtained, and the content topic is converted into the second word vector; next, the initial latent vector output for the Kth round of the query sentence is determined according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, the latent vector of the last term in the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence, and the initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence; and the reply sentence for the Kth round of the query sentence is generated according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence. Topic content is added to the dialog generation process, so as to effectively suppress the generation of cross-topic general high-frequency reply sentences, and to improve the accuracy of dialog generation.
- Referring to
FIG. 4, FIG. 4 is a schematic structural diagram of a dialog generation device according to an exemplary embodiment. As shown in the drawing, the device may include: at least one processor 401, such as a CPU, at least one interface circuit 402, at least one memory 403, and at least one communications bus 404.
- The communications bus 404 is configured to implement connection and communication between the components.
- The
interface circuit 402 in this exemplary embodiment may be a wired sending port, or may be a wireless device, for example, an antenna apparatus, and is configured to perform signal or data communication with another node device.
memory 403 may be a high-speed RAM memory, or may be a non-volatile memory, for example, at least one magnetic disk memory. In some exemplary embodiments, the memory 403 may alternatively be at least one storage apparatus that is located far away from the processor 401. A group of program code may be stored in the memory 403, and the processor 401 may be configured to: invoke the program code stored in the memory, and perform the following operations:
- converting each term in a Kth round of a query sentence into a first word vector, and calculating a positive latent vector and a negative latent vector of each term according to the first word vector, K being a positive integer greater than or equal to 2;
- obtaining a content topic of the Kth round of the query sentence, and converting the content topic into a second word vector;
- determining an initial latent vector output for the Kth round of the query sentence according to the second word vector, the positive latent vector of the last term in the Kth round of the query sentence, a latent vector of the last term in a (K−1)th round of a reply sentence output for a (K−1)th round of the query sentence, and an initial latent vector of the (K−1)th round of the reply sentence output for the (K−1)th round of the query sentence; and
- generating a reply sentence for the Kth round of the query sentence according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence.
- The
processor 401 is configured to perform the following steps: -
- calculating a positive latent vector of a target term in the Kth round of the query sentence according to a first word vector of the target term and a positive latent vector of a previous term of the target term; or
- calculating a negative latent vector of a target term in the Kth round of the query sentence according to a first word vector of the target term and a negative latent vector of a next term of the target term.
- The
processor 401 is configured to perform the following steps: -
- splicing the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence to obtain a latent vector of each term in the Kth round of the query sentence; and
- generating the reply sentence for the Kth round of the query sentence according to the initial latent vector output for the Kth round of the query sentence and the latent vector of each term in the Kth round of the query sentence.
- The
processor 401 is configured to perform the following steps: -
- determining a second latent vector output for the Kth round of the query sentence according to the initial latent vector output for the Kth round of the query sentence and a word vector of a preset identification character, and determining the first reply term to be output for the Kth round of the query sentence according to the second latent vector;
- calculating contribution of each term in the Kth round of the query sentence to generation of the second reply term according to the second latent vector and the latent vector of each term in the Kth round of the query sentence;
- calculating a third latent vector according to the contribution of each term in the Kth round of the query sentence to the generation of the second reply term, the second latent vector, and a word vector of the first reply term; and
- generating the second reply term for the Kth round of the query sentence according to the third latent vector, and repeating the process to generate the reply sentence for the Kth round of the query sentence.
- The
processor 401 is configured to perform the following steps: -
- calculating, according to the second latent vector and the latent vector of each term in the Kth round of the query sentence, a weight of each term in the Kth round of the query sentence for the generation of the second reply term; and
- calculating a weighted sum of the latent vector of each term in the Kth round of the query sentence according to the weight of each term in the Kth round of the query sentence for the generation of the second reply term, and using the weighted sum as the contribution of each term in the Kth round of the query sentence to the generation of the second reply term.
- The
processor 401 is configured to perform the following steps: -
- calculating a probability distribution of each term in a preset dictionary according to the third latent vector; and
- selecting a term having a highest probability in the preset dictionary as the second reply term for output.
- In the exemplary embodiments, when implemented in the form of a software functional part and sold or used as an independent product, the foregoing dialog generation method may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the exemplary embodiments essentially, or the part contributing to the related art, may be implemented in the form of a software product. The software product may be stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the methods described in the exemplary embodiments. The foregoing storage medium includes: any medium that may store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Therefore, the exemplary embodiments are not limited to any combination of particular hardware and software.
- Correspondingly, an exemplary embodiment further provides a computer storage medium, computer executable instructions being stored in the computer storage medium, and the computer executable instructions being used to perform the dialog generation method according to any of the exemplary embodiments described above.
- It should be noted that, for ease of description, the foregoing method exemplary embodiments are described as a series of action combinations. However, a person skilled in the technology should understand that the present disclosure is not limited to the described sequence of the actions, because some steps may be performed in another sequence or performed at the same time according to the present disclosure. In addition, a person skilled in the technology should also know that all the exemplary embodiments described in this specification are exemplary embodiments, and the related actions and modules are not necessarily required in the present disclosure.
- In the foregoing exemplary embodiments, the description of each exemplary embodiment has respective focuses. For a part that is not described in detail in an exemplary embodiment, refer to related descriptions in other exemplary embodiments.
- A person of ordinary skill in the technology may understand that all or some of the steps of the methods in the foregoing exemplary embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium. The storage medium may include: a flash drive, a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.
- The dialog generation method, apparatus, and device provided in the exemplary embodiments are described in detail above. Principles and implementations of the present disclosure have been explained herein with reference to specific exemplary embodiments. The exemplary embodiments are used only to help understand the method and core idea of the present disclosure. Meanwhile, a person of ordinary skill in the technology may make variations to the specific implementations and the application scope based on the ideas of the present disclosure. In conclusion, the content of this specification should not be regarded as a limitation on the present disclosure.
- In the exemplary embodiments, the reply sentence for the Kth round of a query sentence is generated according to the positive latent vector and the negative latent vector of each term in the Kth round of the query sentence and the initial latent vector output for the Kth round of the query sentence. Topic content is added to the dialog generation process, so as to effectively suppress generation of a cross-topic general high-frequency reply sentence, and to improve accuracy of dialog generation.
Claims (20)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610567504 | 2016-07-19 | ||
CN201610567504.0A CN107632987B (en) | 2016-07-19 | 2016-07-19 | A kind of dialogue generation method and device |
CN201610567504.0 | 2016-07-19 | ||
PCT/CN2017/093417 WO2018014835A1 (en) | 2016-07-19 | 2017-07-18 | Dialog generating method, device, apparatus, and storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/093417 Continuation WO2018014835A1 (en) | 2016-07-19 | 2017-07-18 | Dialog generating method, device, apparatus, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180285348A1 true US20180285348A1 (en) | 2018-10-04 |
US10740564B2 US10740564B2 (en) | 2020-08-11 |
Family
ID=60991987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/997,912 Active 2038-01-07 US10740564B2 (en) | 2016-07-19 | 2018-06-05 | Dialog generation method, apparatus, and device, and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US10740564B2 (en) |
CN (1) | CN107632987B (en) |
WO (1) | WO2018014835A1 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558585A (en) * | 2018-10-26 | 2019-04-02 | 深圳点猫科技有限公司 | A kind of answer Automatic-searching method and electronic equipment based on educational system |
CN109726394A (en) * | 2018-12-18 | 2019-05-07 | 电子科技大学 | Short text Subject Clustering method based on fusion BTM model |
CN109933809A (en) * | 2019-03-15 | 2019-06-25 | 北京金山数字娱乐科技有限公司 | A kind of interpretation method and device, the training method of translation model and device |
CN109992785A (en) * | 2019-04-09 | 2019-07-09 | 腾讯科技(深圳)有限公司 | Content calculation method, device and equipment based on machine learning |
US20190317955A1 (en) * | 2017-10-27 | 2019-10-17 | Babylon Partners Limited | Determining missing content in a database |
CN111091011A (en) * | 2019-12-20 | 2020-05-01 | 科大讯飞股份有限公司 | Domain prediction method, domain prediction device and electronic equipment |
CN111428014A (en) * | 2020-03-17 | 2020-07-17 | 北京香侬慧语科技有限责任公司 | Non-autoregressive conversational speech generation method and model based on maximum mutual information |
WO2020225446A1 (en) * | 2019-05-09 | 2020-11-12 | Genpact Luxembourg S.À R.L | Method and system for training a machine learning system using context injection |
CN112925896A (en) * | 2021-04-04 | 2021-06-08 | 河南工业大学 | Topic extension emotional dialogue generation method based on joint decoding |
CN113076408A (en) * | 2021-03-19 | 2021-07-06 | 联想(北京)有限公司 | Session information processing method and device |
US20210303606A1 (en) * | 2019-01-24 | 2021-09-30 | Tencent Technology (Shenzhen) Company Limited | Dialog generation method and apparatus, device, and storage medium |
US20220043985A1 (en) * | 2020-10-14 | 2022-02-10 | Beijing Baidu Netcom Science Technology Co., Ltd. | Role labeling method, electronic device and storage medium |
US11270084B2 (en) * | 2018-10-12 | 2022-03-08 | Johnson Controls Tyco IP Holdings LLP | Systems and methods for using trigger words to generate human-like responses in virtual assistants |
US11373642B2 (en) * | 2019-08-29 | 2022-06-28 | Boe Technology Group Co., Ltd. | Voice interaction method, system, terminal device and medium |
CN115293132A (en) * | 2022-09-30 | 2022-11-04 | 腾讯科技(深圳)有限公司 | Conversation processing method and device of virtual scene, electronic equipment and storage medium |
US11494564B2 (en) * | 2020-03-27 | 2022-11-08 | Naver Corporation | Unsupervised aspect-based multi-document abstractive summarization |
CN116226356A (en) * | 2023-05-08 | 2023-06-06 | 深圳市拓保软件有限公司 | NLP-based intelligent customer service interaction method and system |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309275A (en) * | 2018-03-15 | 2019-10-08 | 北京京东尚科信息技术有限公司 | A kind of method and apparatus that dialogue generates |
CN108491514B (en) * | 2018-03-26 | 2020-12-01 | 清华大学 | Method and device for questioning in dialog system, electronic equipment and computer readable medium |
CN109241262B (en) * | 2018-08-31 | 2021-01-05 | 出门问问信息科技有限公司 | Method and device for generating reply sentence based on keyword |
CN109241265B (en) * | 2018-09-17 | 2022-06-03 | 四川长虹电器股份有限公司 | Multi-round query-oriented field identification method and system |
CN109376222B (en) * | 2018-09-27 | 2021-05-25 | 国信优易数据股份有限公司 | Question-answer matching degree calculation method, question-answer automatic matching method and device |
CN109635093B (en) * | 2018-12-17 | 2022-05-27 | 北京百度网讯科技有限公司 | Method and device for generating reply statement |
CN109597884B (en) * | 2018-12-28 | 2021-07-20 | 北京百度网讯科技有限公司 | Dialog generation method, device, storage medium and terminal equipment |
IT201900000526A1 (en) * | 2019-01-11 | 2020-07-11 | Userbot S R L | ARTIFICIAL INTELLIGENCE SYSTEM FOR BUSINESS PROCESSES |
CN110134790B (en) * | 2019-05-17 | 2022-09-30 | 中国科学技术大学 | Method and device for matching context set and reply set |
CN110413729B (en) * | 2019-06-25 | 2023-04-07 | 江南大学 | Multi-turn dialogue generation method based on clause-context dual attention model |
US11176330B2 (en) * | 2019-07-22 | 2021-11-16 | Advanced New Technologies Co., Ltd. | Generating recommendation information |
CN110598206B (en) * | 2019-08-13 | 2023-04-07 | 平安国际智慧城市科技股份有限公司 | Text semantic recognition method and device, computer equipment and storage medium |
CN111597339B (en) * | 2020-05-22 | 2023-06-30 | 北京慧闻科技(集团)有限公司 | Document-level multi-round dialogue intention classification method, device, equipment and storage medium |
CN114238549A (en) * | 2021-12-15 | 2022-03-25 | 平安科技(深圳)有限公司 | Training method and device of text generation model, storage medium and computer equipment |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5974412A (en) * | 1997-09-24 | 1999-10-26 | Sapient Health Network | Intelligent query system for automatically indexing information in a database and automatically categorizing users |
US6654735B1 (en) * | 1999-01-08 | 2003-11-25 | International Business Machines Corporation | Outbound information analysis for generating user interest profiles and improving user productivity |
US6347313B1 (en) * | 1999-03-01 | 2002-02-12 | Hewlett-Packard Company | Information embedding based on user relevance feedback for object retrieval |
US7567958B1 (en) * | 2000-04-04 | 2009-07-28 | Aol, Llc | Filtering system for providing personalized information in the absence of negative data |
ATE297588T1 (en) * | 2000-11-14 | 2005-06-15 | Ibm | ADJUSTING PHONETIC CONTEXT TO IMPROVE SPEECH RECOGNITION |
US8566102B1 (en) * | 2002-03-28 | 2013-10-22 | At&T Intellectual Property Ii, L.P. | System and method of automating a spoken dialogue service |
US7590603B2 (en) * | 2004-10-01 | 2009-09-15 | Microsoft Corporation | Method and system for classifying and identifying messages as question or not a question within a discussion thread |
JP4476786B2 (en) * | 2004-11-10 | 2010-06-09 | 株式会社東芝 | Search device |
CN1952928A (en) * | 2005-10-20 | 2007-04-25 | 梁威 | Computer system to constitute natural language base and automatic dialogue retrieve |
US9129300B2 (en) * | 2010-04-21 | 2015-09-08 | Yahoo! Inc. | Using external sources for sponsored search AD selection |
US10331785B2 (en) * | 2012-02-17 | 2019-06-25 | Tivo Solutions Inc. | Identifying multimedia asset similarity using blended semantic and latent feature analysis |
US9465833B2 (en) * | 2012-07-31 | 2016-10-11 | Veveo, Inc. | Disambiguating user intent in conversational interaction system for large corpus information retrieval |
US20140236578A1 (en) * | 2013-02-15 | 2014-08-21 | Nec Laboratories America, Inc. | Question-Answering by Recursive Parse Tree Descent |
US9298757B1 (en) * | 2013-03-13 | 2016-03-29 | International Business Machines Corporation | Determining similarity of linguistic objects |
US20140280088A1 (en) * | 2013-03-15 | 2014-09-18 | Luminoso Technologies, Inc. | Combined term and vector proximity text search |
US9514753B2 (en) * | 2013-11-04 | 2016-12-06 | Google Inc. | Speaker identification using hash-based indexing |
US20150169772A1 (en) * | 2013-12-12 | 2015-06-18 | Microsoft Corporation | Personalizing Search Results Based on User-Generated Content |
CN104050256B (en) * | 2014-06-13 | 2017-05-24 | 西安蒜泥电子科技有限责任公司 | Initiative study-based questioning and answering method and questioning and answering system adopting initiative study-based questioning and answering method |
CN104462064B (en) * | 2014-12-15 | 2017-11-03 | 陈包容 | A kind of method and system of information of mobile terminal communication prompt input content |
CN104615646A (en) * | 2014-12-25 | 2015-05-13 | 上海科阅信息技术有限公司 | Intelligent chatting robot system |
US20160189556A1 (en) * | 2014-12-29 | 2016-06-30 | International Business Machines Corporation | Evaluating presentation data |
CN105095444A (en) * | 2015-07-24 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Information acquisition method and device |
US10102206B2 (en) * | 2016-03-31 | 2018-10-16 | Dropbox, Inc. | Intelligently identifying and presenting digital documents |
US10133949B2 (en) * | 2016-07-15 | 2018-11-20 | University Of Central Florida Research Foundation, Inc. | Synthetic data generation of time series data |
US11113732B2 (en) * | 2016-09-26 | 2021-09-07 | Microsoft Technology Licensing, Llc | Controlling use of negative features in a matching operation |
CN107885756B (en) * | 2016-09-30 | 2020-05-08 | 华为技术有限公司 | Deep learning-based dialogue method, device and equipment |
US10540967B2 (en) * | 2016-11-14 | 2020-01-21 | Xerox Corporation | Machine reading method for dialog state tracking |
US10635733B2 (en) * | 2017-05-05 | 2020-04-28 | Microsoft Technology Licensing, Llc | Personalized user-categorized recommendations |
-
2016
- 2016-07-19 CN CN201610567504.0A patent/CN107632987B/en active Active
-
2017
- 2017-07-18 WO PCT/CN2017/093417 patent/WO2018014835A1/en active Application Filing
-
2018
- 2018-06-05 US US15/997,912 patent/US10740564B2/en active Active
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190317955A1 (en) * | 2017-10-27 | 2019-10-17 | Babylon Partners Limited | Determining missing content in a database |
US11270084B2 (en) * | 2018-10-12 | 2022-03-08 | Johnson Controls Tyco IP Holdings LLP | Systems and methods for using trigger words to generate human-like responses in virtual assistants |
CN109558585A (en) * | 2018-10-26 | 2019-04-02 | 深圳点猫科技有限公司 | A kind of answer Automatic-searching method and electronic equipment based on educational system |
CN109726394A (en) * | 2018-12-18 | 2019-05-07 | 电子科技大学 | Short text Subject Clustering method based on fusion BTM model |
US20210303606A1 (en) * | 2019-01-24 | 2021-09-30 | Tencent Technology (Shenzhen) Company Limited | Dialog generation method and apparatus, device, and storage medium |
CN109933809A (en) * | 2019-03-15 | 2019-06-25 | 北京金山数字娱乐科技有限公司 | A kind of interpretation method and device, the training method of translation model and device |
CN109992785A (en) * | 2019-04-09 | 2019-07-09 | 腾讯科技(深圳)有限公司 | Content calculation method, device and equipment based on machine learning |
US11604962B2 (en) | 2019-05-09 | 2023-03-14 | Genpact Luxembourg S.à r.l. II | Method and system for training a machine learning system using context injection |
WO2020225446A1 (en) * | 2019-05-09 | 2020-11-12 | Genpact Luxembourg S.À R.L | Method and system for training a machine learning system using context injection |
US11373642B2 (en) * | 2019-08-29 | 2022-06-28 | Boe Technology Group Co., Ltd. | Voice interaction method, system, terminal device and medium |
CN111091011A (en) * | 2019-12-20 | 2020-05-01 | 科大讯飞股份有限公司 | Domain prediction method, domain prediction device and electronic equipment |
CN111428014A (en) * | 2020-03-17 | 2020-07-17 | 北京香侬慧语科技有限责任公司 | Non-autoregressive conversational speech generation method and model based on maximum mutual information |
US11494564B2 (en) * | 2020-03-27 | 2022-11-08 | Naver Corporation | Unsupervised aspect-based multi-document abstractive summarization |
US11907671B2 (en) * | 2020-10-14 | 2024-02-20 | Beijing Baidu Netcom Science Technology Co., Ltd. | Role labeling method, electronic device and storage medium |
US20220043985A1 (en) * | 2020-10-14 | 2022-02-10 | Beijing Baidu Netcom Science Technology Co., Ltd. | Role labeling method, electronic device and storage medium |
CN113076408A (en) * | 2021-03-19 | 2021-07-06 | 联想(北京)有限公司 | Session information processing method and device |
CN112925896A (en) * | 2021-04-04 | 2021-06-08 | 河南工业大学 | Topic extension emotional dialogue generation method based on joint decoding |
CN115293132A (en) * | 2022-09-30 | 2022-11-04 | 腾讯科技(深圳)有限公司 | Conversation processing method and device of virtual scene, electronic equipment and storage medium |
CN116226356A (en) * | 2023-05-08 | 2023-06-06 | 深圳市拓保软件有限公司 | NLP-based intelligent customer service interaction method and system |
Also Published As
Publication number | Publication date |
---|---|
CN107632987A (en) | 2018-01-26 |
US10740564B2 (en) | 2020-08-11 |
WO2018014835A1 (en) | 2018-01-25 |
CN107632987B (en) | 2018-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10740564B2 (en) | Dialog generation method, apparatus, and device, and storage medium | |
CN108319599B (en) | Man-machine conversation method and device | |
US10431205B2 (en) | Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network | |
US20190228070A1 (en) | Deep learning based dialog method, apparatus, and device | |
CN110134971B (en) | Method and device for machine translation and computer readable storage medium | |
WO2019174450A1 (en) | Dialogue generation method and apparatus | |
CN110489567B (en) | Node information acquisition method and device based on cross-network feature mapping | |
US11355097B2 (en) | Sample-efficient adaptive text-to-speech | |
US11475225B2 (en) | Method, system, electronic device and storage medium for clarification question generation | |
CN110991165A (en) | Method and device for extracting character relation in text, computer equipment and storage medium | |
WO2020151689A1 (en) | Dialogue generation method, device and equipment, and storage medium | |
EP3885966B1 (en) | Method and device for generating natural language description information | |
CN108959388B (en) | Information generation method and device | |
CN109582970B (en) | Semantic measurement method, semantic measurement device, semantic measurement equipment and readable storage medium | |
CN115309877B (en) | Dialogue generation method, dialogue model training method and device | |
CN112214591A (en) | Conversation prediction method and device | |
WO2016173326A1 (en) | Subject based interaction system and method | |
CN111767697B (en) | Text processing method and device, computer equipment and storage medium | |
CN111797220B (en) | Dialog generation method, apparatus, computer device and storage medium | |
CN116821306A (en) | Dialogue reply generation method and device, electronic equipment and storage medium | |
Wang et al. | Emily: Developing An Emotion-affective Open-Domain Chatbot with Knowledge Graph-based Persona | |
Shah et al. | Chatbot Analytics Based on Question Answering System Movie Related Chatbot Case Analytics. pdf | |
CN113901841A (en) | Translation method, translation device and storage medium | |
CN115169367B (en) | Dialogue generating method and device, and storage medium | |
CN117521674B (en) | Method, device, computer equipment and storage medium for generating countermeasure information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHU, YUE;LU, YAN XIONG;LIN, FEN;REEL/FRAME:045990/0137 Effective date: 20180403 Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHI Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHU, YUE;LU, YAN XIONG;LIN, FEN;REEL/FRAME:045990/0137 Effective date: 20180403 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |