CN112163067A - Sentence reply method, sentence reply device and electronic equipment - Google Patents


Info

Publication number
CN112163067A
Authority
CN
China
Prior art keywords
input
vector
intention
sentence
word slot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011013920.9A
Other languages
Chinese (zh)
Inventor
阎守卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Zhitong Consulting Co Ltd Shanghai Branch
Original Assignee
Ping An Zhitong Consulting Co Ltd Shanghai Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Zhitong Consulting Co Ltd Shanghai Branch filed Critical Ping An Zhitong Consulting Co Ltd Shanghai Branch
Priority to CN202011013920.9A priority Critical patent/CN112163067A/en
Publication of CN112163067A publication Critical patent/CN112163067A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33: Querying
    • G06F16/3331: Query processing
    • G06F16/334: Query execution
    • G06F16/3344: Query execution using natural language analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/205: Parsing
    • G06F40/211: Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The application belongs to the technical field of artificial intelligence and discloses a sentence reply method, a sentence reply device and electronic equipment. The method comprises the following steps: receiving an input sentence; obtaining the intention and word slots of the input sentence through a trained intention recognition model and a trained word slot filling model; performing feature expression on the intention and word slots of the input sentence based on a preset feature structure to obtain a state vector of the input sentence; acquiring a feature vector of each historical sentence from a preset historical dialogue table, where the feature vector is the feature expression of the intention, word slots and feedback information of the historical sentence; inputting the feature vector of each historical sentence and the state vector of the input sentence into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model; and outputting a reply to the input sentence based on the target feedback information. Through this scheme, the extensibility and flexibility of replies can be improved while the workload of dialogue design is reduced.

Description

Sentence reply method, sentence reply device and electronic equipment
Technical Field
The present application belongs to the technical field of artificial intelligence, and in particular, relates to a sentence replying method, a sentence replying apparatus, an electronic device, and a computer-readable storage medium.
Background
Question-answering robots are increasingly widely applied in industry, and their importance continues to grow; the artificial intelligence and Natural Language Processing (NLP) technologies behind them have become the core factors determining a question-answering robot's quality. The NLP technologies involved mainly comprise intention recognition and word slot filling, as well as dialogue management in multi-turn dialogues; these technologies determine the reasonableness of the dialogue and the user experience. Currently, a widely used multi-turn dialogue management mode is to give the answer to an input sentence based on a configured template; although its design difficulty is low, its extensibility and flexibility are weak, and the generated answers easily become rigid.
Disclosure of Invention
The application provides a sentence reply method, a sentence reply device, an electronic device and a computer-readable storage medium, which can improve the extensibility and flexibility of replies.
In a first aspect, the present application provides a sentence reply method, including:
receiving an input sentence;
inputting the input sentence into a trained intention recognition model and a trained word slot filling model, and obtaining the intention of the input sentence output by the intention recognition model and the word slot of the input sentence output by the word slot filling model;
performing feature expression on the intention and word slots of the input sentence based on a preset feature structure to obtain a state vector of the input sentence;
constructing a feature vector of each historical sentence through a preset historical dialogue table and the feature structure, wherein the feature vector is the feature expression of the intention, word slots and feedback information of the historical sentence;
inputting the feature vector of each historical sentence and the state vector of the input sentence into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model;
outputting a reply to the input sentence based on the target feedback information.
In a second aspect, the present application provides a sentence replying apparatus comprising:
a receiving unit for receiving an input sentence;
an intention acquisition unit configured to input the input sentence into a trained intention recognition model and acquire an intention of the input sentence output by the intention recognition model;
a word slot obtaining unit, configured to input the input sentence into a trained word slot filling model, and obtain a word slot of the input sentence output by the word slot filling model;
a first vector obtaining unit, configured to perform feature expression on the intention and word slots of the input sentence based on a preset feature structure, and obtain a state vector of the input sentence;
a second vector obtaining unit, configured to construct a feature vector of each historical sentence through a preset historical dialogue table and the feature structure, wherein the feature vector is the feature expression of the intention, word slots and feedback information of the historical sentence;
a target feedback information obtaining unit, configured to input the feature vector of each historical sentence and the state vector of the input sentence into a trained dialogue strategy model, and obtain target feedback information output by the dialogue strategy model;
a reply output unit for outputting a reply to the input sentence based on the target feedback information.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
As can be seen from the above, in the present application, after an input sentence is received, it is first input into a trained intention recognition model and a trained word slot filling model to obtain the intention output by the intention recognition model and the word slots output by the word slot filling model. The intention and word slots of the input sentence are then feature-expressed based on a preset feature structure to obtain the state vector of the input sentence; at the same time, a feature vector of each historical sentence is constructed through a preset historical dialogue table and the feature structure, where the feature vector is the feature expression of the intention, word slots and feedback information of that historical sentence. Finally, the feature vector of each historical sentence and the state vector of the input sentence are input into a trained dialogue strategy model to obtain the target feedback information output by the dialogue strategy model, and a reply to the input sentence is output based on the target feedback information. In this scheme, both the historical sentences and the currently received input sentence are feature-expressed according to the preset feature structure to obtain vectors for all sentences, and the feedback information for the input sentence is predicted from these vectors; compared with giving the answer to an input sentence based on a configured template, this scheme has stronger extensibility and flexibility. It is understood that the beneficial effects of the second to fifth aspects can be found in the related description of the first aspect and are not repeated here.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a sentence reply method provided by an embodiment of the present application;
fig. 2 is a schematic network structure diagram of a dialogue policy model in a sentence reply method provided in an embodiment of the present application;
FIG. 3 is a flow framework diagram of a sentence reply method provided by an embodiment of the present application;
fig. 4 is a block diagram of a sentence replying apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Example one
Referring to fig. 1, a sentence replying method provided in an embodiment of the present application is described below, where the sentence replying method in the embodiment of the present application includes:
step 101, receiving an input statement;
in the embodiment of the present application, firstly, a sentence input by a user this time, that is, an input sentence, may be received. The input sentence is the object of the current sentence reply, that is, the purpose of the current sentence reply is to identify the intention and word slot of the input sentence, and then give a reply to the input sentence based on the intention, word slot and context of the input sentence. Specifically, the user can input the input sentence in a text input mode; alternatively, the user may input the input term by voice input, and the input term is not limited to this.
Step 102, inputting the input sentence into a trained intention recognition model and a trained word slot filling model, and obtaining the intention of the input sentence output by the intention recognition model and the word slot of the input sentence output by the word slot filling model;
In the embodiment of the present application, an intention recognition model and a word slot filling model are trained in advance, where the intention recognition model is used for predicting the intention of the input sentence, and the word slot filling model is used for extracting each entity in the input sentence and determining the corresponding word slot based on the extracted entities. Specifically, the intention recognition model and the word slot filling model can be trained based on a Natural Language Understanding (NLU) file. For ease of understanding, a specific example of the contents of an NLU file is given below:
# ask weather
- How is the weather today
- I want to look up {Shanghai:city} weather today
- Is the weather comfortable today
- How is the air quality today
# tell city
- Shanghai
- I am in Beijing
- Tianjin
In the above example, a line beginning with the symbol "#" represents an intention: "# ask weather" represents the intention "ask weather", and "# tell city" represents the intention "tell city". A line beginning with the symbol "-" represents a specific sentence (an example) under the corresponding intention; that is, lines beginning with "-" can be regarded as training samples for that intention. Entities and their word slots are labeled in the format {entity word:word slot name}, and any example containing this format is word slot filling training data. In the contents above, "- I want to look up {Shanghai:city} weather today" is word slot filling training data, where {Shanghai:city} indicates that when the entity word is "Shanghai", the corresponding word slot name is "city". The NLU file thus provides a standard template that is convenient and well suited for developing the natural language understanding model of a human-machine dialogue system, and realizes a structured representation of intentions and word slots. It should be noted that the contents of the NLU file are filled in manually by corpus annotators; that is, the corpus annotators can fill in the NLU file, following its format, according to their development requirements, thereby forming the training samples for training the intention recognition model and the word slot filling model. In other words, the NLU file enumerates the intentions and word slots of the corpus as exhaustively as possible.
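As an illustration of how such a file can be consumed, the following is a minimal sketch (function and variable names are hypothetical; the patent does not specify a parser) that reads the format described above into per-intention training samples and word slot filling annotations:

```python
import re

def parse_nlu_file(text):
    """Parse an NLU file in the format shown above: "# intent" lines start a
    new intention, "- example" lines are training samples, and {entity:slot}
    spans are word slot filling annotations."""
    intents = {}       # intent name -> list of plain example sentences
    annotations = []   # (intent, entity word, word slot name) triples
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):                  # new intention
            current = line[1:].strip()
            intents[current] = []
        elif line.startswith("-") and current is not None:
            example = line[1:].strip()
            # collect {entity:slot} annotations, then strip the markup
            for entity, slot in re.findall(r"\{([^:}]+):([^}]+)\}", example):
                annotations.append((current, entity.strip(), slot.strip()))
            plain = re.sub(r"\{([^:}]+):[^}]+\}", r"\1", example)
            intents[current].append(plain)
    return intents, annotations
```

For example, feeding the file contents shown above into `parse_nlu_file` yields the sample "I want to look up Shanghai weather today" under the intention "ask weather", plus the annotation ("ask weather", "Shanghai", "city").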
In some embodiments, the training of the intention recognition model may be as follows: extract the training samples under a target intention from the NLU file, where the target intention is any intention in the NLU file; then calculate the Term Frequency-Inverse Document Frequency (TF-IDF) index of each term in each training sample under the target intention, and vectorize each training sample based on the TF-IDF indexes to obtain the word vector expression of each training sample. Alternatively, word vectorization can be performed directly on each training sample under the target intention based on the word2vec technique; the way the word vector expression is obtained is not limited here. The word vector expression of each training sample is then input into a multi-classification model, the intention output by the multi-classification model is compared with the correct intention (that is, the target intention), and the model parameters of the multi-classification model are adjusted, finally obtaining the trained intention recognition model.
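To make the TF-IDF step concrete, the following sketch computes TF-IDF vectors for tokenized training samples using one common weighting variant (TF = term count / sample length, IDF = log(N / document frequency)); the exact variant used by the patent is not specified:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of tokenized training samples (lists of terms).
    Returns (vocab, vectors): a sorted vocabulary and one TF-IDF vector
    per sample, aligned with the vocabulary order."""
    n_docs = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vec = [tf[t] / len(doc) * math.log(n_docs / df[t]) for t in vocab]
        vectors.append(vec)
    return vocab, vectors
```

Note that a term appearing in every sample gets IDF = log(1) = 0, so it contributes nothing to the word vector expression, which is exactly the discriminative behaviour the TF-IDF index is chosen for.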
In some embodiments, the training of the word slot filling model may be as follows: extract the word slot filling training data from the NLU file, characterize the word slot filling training data with a specified labeling scheme (such as the BIO labeling scheme, or another scheme), and perform word segmentation, part-of-speech tagging and named entity recognition on the characterized data through a BiLSTM-CRF model; compare the word segmentation, part-of-speech tagging and named entity recognition results output by the BiLSTM-CRF model with the correct results for the training data, and adjust the model parameters of the BiLSTM-CRF model, finally obtaining the trained word slot filling model.
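The BIO characterization step can be illustrated as follows: each {entity word:word slot name} annotation is turned into B-/I- tags over the tokens of the sentence, and all other tokens are tagged O. This is a minimal sketch with hypothetical names, not the patent's exact implementation:

```python
def to_bio(tokens, entities):
    """tokens: tokenized sentence; entities: list of (entity_tokens, slot)
    pairs taken from the {entity:slot} annotations.
    Returns one BIO tag per token, e.g. ["O", "B-city", "I-city"]."""
    tags = ["O"] * len(tokens)
    for span, slot in entities:
        # find the first occurrence of the entity's token span
        for i in range(len(tokens) - len(span) + 1):
            if tokens[i:i + len(span)] == span:
                tags[i] = "B-" + slot                  # beginning of entity
                for j in range(i + 1, i + len(span)):
                    tags[j] = "I-" + slot              # inside of entity
                break
    return tags
```

The BiLSTM-CRF model is then trained to predict exactly these tag sequences, from which the word slots of a new sentence can be read off.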
It should be noted that, the more intentions and word slots stored in the NLU file, the better the training effect of the intention recognition model and the word slot filling model; that is, although not absolutely exhaustive, there are still many possible examples of possible (and non-repeating) intents and word slots that may be suggested by a speaker.
Step 103, performing feature expression on the intention and word slots of the input sentence based on a preset feature structure to obtain a state vector of the input sentence;
In the embodiment of the present application, the input sentence is feature-expressed to obtain its state vector; the state vector can be regarded as characterizing the intention and word slots of the input sentence.
Specifically, the step 103 may include:
a1, creating an intention vector of the input sentence according to the intention of the input sentence;
In some embodiments, the intention vector of the input sentence may be generated from the intention output by the intention recognition model; that is, the intention vector represents the intention of the input sentence. In the embodiment of the present application, the intention vector may be a one-hot encoded vector, that is, it may be generated by the one-hot encoding method, and the specific process is as follows:
a11, acquiring the number of intentions contained in a preset NLU file;
In some embodiments, as can be seen from the above description of the NLU file, the corpus annotators propose as many different intentions as possible and store them in the preset NLU file; that is, the NLU file stores the various possible intentions given by the corpus annotators, such as a "listen to music" intention, an "ask weather" intention and a "navigate" intention, and the corpus annotators can set the intentions contained in the NLU file themselves, which is not limited here. By counting the intentions contained in the NLU file, the number of intentions can be obtained.
A12, initializing the intention vector of the input sentence based on the intention quantity;
in some embodiments, an intention vector of an input sentence may be initialized according to a counted number of intentions, where dimensions of the intention vector of the input sentence are the same as the number of intentions, and there is a one-to-one correspondence between the dimensions in the intention vector of the input sentence and the intentions in the NLU file. For example, assuming that there are M intents in the NLU file, an intention vector of M dimensions can be obtained through initialization, and the initial values of all dimensions of the intention vector are preset first values, specifically, the first values are "0".
And A13, changing the value of the dimension corresponding to the intention of the input sentence in the intention vector of the input sentence into a preset second value.
In some embodiments, the second value is specifically "1". Since each dimension in the intent vector corresponds to an intent, and the intentions corresponding to different dimensions are different, the value in the dimension corresponding to the intent of the input sentence in the intent vector of the input sentence can be changed from the first value "0" to the second value "1" based on the one-hot encoding method. Taking three items of intentions including weather asking, music listening and navigation in the NLU file as examples, the initialized intention vector is [0,0,0], wherein the first dimension corresponds to the intention of weather asking, the second dimension corresponds to the intention of music listening and the third dimension corresponds to the intention of navigation; if the intention of the input sentence obtained by the intention recognition model is to ask weather, a 3-dimensional vector [1,0,0] can be obtained, and the 3-dimensional vector is the intention vector of the input sentence.
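Steps A11-A13 amount to standard one-hot encoding; a minimal sketch, assuming the list of intentions has already been read from the NLU file:

```python
def intent_vector(intent, all_intents):
    """One-hot encode an intention against the intentions in the NLU file.
    A11/A12: count the intentions and initialize an M-dimensional vector of
    first values "0"; A13: set the dimension of this intention to "1"."""
    vec = [0] * len(all_intents)
    vec[all_intents.index(intent)] = 1
    return vec
```

With the three intentions [ask weather, listen to music, navigate] from the example above, `intent_vector("ask weather", ...)` yields [1, 0, 0].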
A2, creating a word slot vector of the input sentence according to the word slot of the input sentence;
In some embodiments, the word slot vector of the input sentence may be generated from the word slots output by the word slot filling model; that is, the word slot vector represents the word slots contained in the input sentence. It should be noted that the input sentence may contain more than one word slot. In the embodiment of the present application, the word slot vector may likewise be generated by the one-hot encoding method, and the specific process is as follows:
a21, acquiring the number of word slots contained in a preset natural language understanding file;
In some embodiments, similar to step A11 above, as can be seen from the description of the NLU file, the corpus annotators propose as many word slots as possible and store them in the preset NLU file; that is, the NLU file stores the various possible word slots given by the corpus annotators, such as a "city" word slot, a "singer" word slot, a "song" word slot and an "album" word slot, and the corpus annotators can set the word slots contained in the NLU file themselves, which is not limited here. The number of word slots can be obtained by counting the word slots contained in the NLU file.
A22, initializing word slot vectors of the input sentences based on the number of the word slots, wherein the dimensions of the word slot vectors of the input sentences are the same as the number of the word slots, and the dimensions in the word slot vectors of the input sentences and the word slots in the natural language understanding file are in one-to-one correspondence;
in some embodiments, similar to the step a12, a word slot vector of an input sentence may be initialized according to the counted number of word slots, where the dimension of the word slot vector of the input sentence is the same as the number of word slots, and there is a one-to-one correspondence between the dimension of the word slot vector of the input sentence and the word slot in the NLU file. For example, assuming that N word slots are shown in the NLU file, an N-dimensional word slot vector can be obtained through initialization, and the initial values of all dimensions of the word slot vector are preset first values, specifically, the first values are "0".
And A23, changing the dimension value corresponding to any word slot of the input sentence in the word slot vector of the input sentence into a preset second value.
In some embodiments, similar to step A13 above, the second value is specifically "1". Because each dimension in the word slot vector corresponds to a word slot, and different dimensions correspond to different word slots, the value of the dimension corresponding to each word slot of the input sentence can be changed from the first value "0" to the second value "1" based on the one-hot encoding method. It should be noted that, since a sentence usually expresses only one intention, in step A13 typically only one dimension of the intention vector is changed to the second value "1"; in step A23, since the sentence may contain more than one word slot, the values of multiple dimensions may be changed to the second value "1" according to the number of word slots contained in the input sentence.
For example, when the input sentence is "I want to listen to Jay Chou's Love Confession", the entity word "Jay Chou" may be extracted to obtain the word slot "singer", and the entity word "Love Confession" may be extracted to obtain the word slot "song"; that is, the input sentence can be represented with word slots as "I want to listen to {Jay Chou:singer}'s {Love Confession:song}", so the input sentence contains two word slots, "singer" and "song". Taking the four word slots [city, singer, song, album] contained in the NLU file as an example, the initialized word slot vector is [0,0,0,0], where the first dimension corresponds to the "city" word slot, the second to "singer", the third to "song" and the fourth to "album"; based on the input sentence, the 4-dimensional vector [0,1,1,0] can be obtained, which is the word slot vector of the input sentence.
And A3, splicing the intention vector and the word slot vector of the input sentence based on the splicing sequence indicated by the characteristic structure to obtain the state vector of the input sentence.
In some embodiments, the electronic device may concatenate the intent vector and the word slot vector of the input sentence according to a preset feature structure to obtain a state vector of the input sentence.
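Steps A2 and A3 can be sketched together: the word slot vector is a multi-hot encoding (several dimensions may be "1"), and the state vector is the concatenation of the intention vector and the word slot vector. The splicing order [intention vector, word slot vector] used here is an assumption for illustration; the actual order is whatever the preset feature structure indicates:

```python
def slot_vector(slots, all_slots):
    """Multi-hot encode the word slots found in the sentence (step A2):
    unlike the intention vector, several dimensions may be set to 1."""
    vec = [0] * len(all_slots)
    for s in slots:
        vec[all_slots.index(s)] = 1
    return vec

def state_vector(intent_vec, slot_vec):
    """Step A3: splice the two vectors in the order indicated by the
    feature structure (assumed here: intention first, then word slots)."""
    return intent_vec + slot_vec
```

With M intentions and N word slots this yields an (M+N)-dimensional state vector, e.g. [1,0,0] spliced with [0,1,1,0] gives [1,0,0,0,1,1,0].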
Step 104, constructing a feature vector of each historical sentence through a preset historical dialogue table and the feature structure;
In the embodiment of the present application, a developer may preset a historical dialogue table in which the historical sentences are recorded; a historical sentence is a sentence input by the user in a previous turn of the current dialogue. Besides the historical sentences themselves, the historical dialogue table may record the intention of each historical sentence, its word slots, and the feedback information (action) predicted by the electronic device for it. The feature vector of each historical sentence may be constructed in a manner similar to step 103. For example, assuming the corpus annotators propose M intentions, N word slots and P pieces of feedback information, then for historical sentence 1 an M-dimensional intention vector, an N-dimensional word slot vector and a P-dimensional feedback information vector can be initialized; the value of the corresponding dimension of the M-dimensional intention vector is set according to the intention of historical sentence 1 recorded in the historical dialogue table, the value of the corresponding dimension of the N-dimensional word slot vector is set according to its recorded word slots, and the value of the corresponding dimension of the P-dimensional feedback information vector is set according to its recorded feedback information. The intention vector, word slot vector and feedback information vector of historical sentence 1 are then spliced in the order indicated by the feature structure to obtain the feature vector of historical sentence 1.
It should be noted that, to facilitate the operation of the subsequent dialogue strategy model, the state vector of the input sentence may be padded from M+N dimensions to M+N+P dimensions, so that it is consistent in dimension with the feature vectors of the historical sentences; the padding position is determined by the feature structure. For example, if the feature vectors of the historical sentences are spliced in the order [intention vector, word slot vector, feedback information vector] indicated by the feature structure, then the (M+N)-dimensional state vector of the input sentence can be padded backwards with the value "0" to M+N+P dimensions. That is, a P-dimensional feedback information vector whose values are all 0 is generated for the input sentence, and its intention vector, word slot vector and feedback information vector are spliced in the order indicated by the feature structure to obtain its (M+N+P)-dimensional state vector.
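The construction and padding described above can be sketched as follows, assuming the splicing order [intention vector, word slot vector, feedback information vector]:

```python
def history_feature_vector(intent_vec, slot_vec, feedback_vec):
    """Feature vector of one historical sentence: its intention, word slot
    and feedback information vectors spliced in the feature structure's
    order, giving M+N+P dimensions."""
    return intent_vec + slot_vec + feedback_vec

def pad_state_vector(state_vec, num_feedback):
    """Pad the (M+N)-dimensional state vector of the current input sentence
    with an all-zero feedback block, so every vector fed to the dialogue
    strategy model has the same M+N+P dimension."""
    return state_vec + [0] * num_feedback
```

After padding, the state vector of the input sentence and the feature vectors of all historical sentences can be stacked into one vector sequence for the model.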
Step 105, inputting the feature vectors of the historical sentences and the state vectors of the input sentences into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model;
In the embodiment of the present application, the dialogue strategy model is specifically an attention-based Long Short-Term Memory (LSTM) network model. Since this type of network model is widely applied in the NLP field, only a brief introduction is given here:
referring to fig. 2, the network model is composed of an input layer, an LSTM layer, an Attention layer, a full link layer, and an output layer.
The input layer is used to feed data into the network model. Specifically, the feature vectors of the history sentences and the state vector of the input sentence serve as the inputs of the network model; each input vector (that is, the feature vector of each history sentence and the state vector of the input sentence) is preprocessed, and a vector sequence is constructed from these vectors. It should be noted that the developer may preset the length L of the vector sequence; when the number of dialogue turns is insufficient (i.e., the total number of feature vectors and state vectors is less than L), the vector sequence is padded with -1.
The LSTM layer is used to extract the hidden state of each input vector in the vector sequence. It should be noted that a mask operation is performed before the vector sequence is input into the LSTM layer, so that the padded positions do not affect the result.
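The padding and mask operations described above can be sketched as follows; whether the padding sits before or after the real turns, and which turns are kept when there are more than L, are assumptions of this sketch:

```python
import numpy as np

def build_vector_sequence(vectors, seq_len, pad_value=-1.0):
    """Stack feature/state vectors into a fixed-length (seq_len, dim) sequence,
    padding with -1 when the number of dialogue turns is insufficient.
    Also returns the boolean mask marking which positions hold real vectors."""
    dim = vectors[0].shape[0]
    seq = np.full((seq_len, dim), pad_value, dtype=np.float32)
    mask = np.zeros(seq_len, dtype=bool)
    n = min(len(vectors), seq_len)
    seq[seq_len - n:] = np.stack(vectors[-n:])  # keep the most recent turns (assumption)
    mask[seq_len - n:] = True
    return seq, mask

# Two turns of history, sequence length preset to 4 by the developer.
history_vectors = [np.ones(3, dtype=np.float32), 2 * np.ones(3, dtype=np.float32)]
seq, mask = build_vector_sequence(history_vectors, seq_len=4)
```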
The Attention layer is used to generate a weight vector from the hidden state of each input vector; the weight vector indicates the attention distribution over the input vectors of the history sentences.
The fully connected layer is activated by a softmax function to compute a classification result based on the attention distribution over the input vectors corresponding to the history sentences, where the classification result refers to the scores of one or more pieces of feedback information predicted by the network model. That is, given the P pieces of feedback information preset by the corpus staff, activating the fully connected layer with the softmax classification function yields a score for each piece of feedback information, where the score indicates the probability of that piece of feedback information being predicted based on the input sentence.
The output layer is used to output the classification result. The electronic device selects an optimal solution based on the classification result; specifically, the feedback information with the highest score is determined as the target feedback information.
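The computation downstream of the LSTM layer (attention weighting, softmax scoring, and optimal-solution selection) can be sketched in plain NumPy. Here the LSTM hidden states and all weight matrices are random stand-ins; this is a schematic of the data flow, not the claimed implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
L, H, P = 5, 8, 4  # sequence length, hidden size, number of feedback actions

# Stand-in for the LSTM layer's output: one hidden state per input vector.
hidden_states = rng.normal(size=(L, H))

# Attention layer: score each hidden state, then normalize the scores into
# the attention distribution over the history/input vectors.
w_att = rng.normal(size=(H,))
attn_weights = softmax(hidden_states @ w_att)  # shape (L,)
context = attn_weights @ hidden_states         # attention-weighted sum, shape (H,)

# Fully connected layer activated by softmax: one score per piece of
# feedback information, interpretable as a probability.
W_fc = rng.normal(size=(H, P))
scores = softmax(context @ W_fc)               # shape (P,)

# Optimal solution: the feedback information with the highest score.
target_action = int(np.argmax(scores))
```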
In some embodiments, the above-described dialogue strategy model may be trained based on a preset story file. For ease of understanding, a specific example of the contents of a story file is given below:
## ask weather scene
# inquire weather
+ask_for_city
# notify city
+return_answer_weather
## ask address scene
……
Here, text beginning with the symbol "##" represents a scene name, which has no actual effect and serves only as an annotation for the scene; text beginning with "#" is consistent with its meaning in the NLU file above and represents an intention; text beginning with "+" represents feedback information of the electronic device, i.e., a response to the sentence (i.e., the corresponding intention) input by the user. Such a set of "intention - feedback information - intention - feedback information …" constitutes a scene. It should be noted that the data in the NLU file and the story file are structured data that can be converted into the data format required by the model, and the corpus staff (i.e., the dialogue designers) only need to write them according to simple rules, which reduces the workload. In this way, corpus staff who are familiar with the business or dialogue scenarios, even without a machine-learning background, can assist the electronic device in completing the training of intention recognition by editing the NLU file, and can edit typical dialogue scenes by editing the story file.
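A minimal parser for the story-file markers described above might look as follows; any details of the format beyond the "##"/"#"/"+" markers, and the sample scene content, are assumptions for illustration:

```python
def parse_story_file(text):
    """Parse a story file into a list of scenes, where each scene is a list of
    ("intent", name) / ("action", name) steps, per the '##'/'#'/'+' markers."""
    scenes = []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("##"):   # scene name: annotation only, starts a new scene
            current = []
            scenes.append(current)
        elif line.startswith("#") or line.startswith("+"):
            if current is None:     # tolerate a file with no leading scene name
                current = []
                scenes.append(current)
            kind = "intent" if line.startswith("#") else "action"
            current.append((kind, line[1:].strip()))
    return scenes

sample = """\
## ask weather scene
# inquire weather
+ask_for_city
# notify city
+return_answer_weather
"""
scenes = parse_story_file(sample)
```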
In some embodiments, the training of the above dialogue strategy model is as follows: strategy-model training data are extracted from a preset story file; the current sentence and the history sentences in the training data are characterized to obtain the state vector of the current sentence and the feature vectors of the history sentences; the attention-based LSTM network model is trained on these state vectors and feature vectors, and its model parameters are adjusted through back propagation according to the training output, finally yielding the trained dialogue strategy model.
It should be noted that the more feedback information stored in the story file, the better the training effect of the dialogue strategy model. That is, although the possible feedback information cannot be exhausted, the corpus staff should still provide as many examples as possible, covering the feedback information that may occur in a variety of scenes.
And 106, outputting a reply to the input sentence based on the target feedback information.
In the embodiment of the present application, a reply to the input sentence may be output based on the target feedback information. To further enhance the richness of the replies, a reply template may be randomly selected from the reply templates associated with the target feedback information as the target reply template, and the reply to the input sentence may be output based on the target reply template.
Optionally, the step 106 specifically includes:
B1, detecting whether the reply template associated with the target feedback information has content to be filled;
In some embodiments, there may be content to be filled in the reply template associated with certain feedback information. For example, if the target feedback information is determined to be "return _ answer _ weather", the associated reply template may be "weather of your city is … …", where "… …" is the content to be filled, which the electronic device needs to fill in according to the actual situation.
B2, if there is content to be filled, filling the content to be filled in the reply template through networking;
In some embodiments, if there is content to be filled in the reply template associated with the target feedback information, the electronic device may obtain the information required for the content to be filled through networking. For example, when the reply template is "weather of your city is … …", the electronic device first needs to obtain the weather information of the user's location through the network and fill it into the content to be filled, so as to obtain a complete reply sentence.
B3, outputting the reply to the input sentence based on the filled reply template.
In some embodiments, the reply to the input sentence may be output based on the filled reply template. For example, after the reply template "weather of your city is … …" is filled, the obtained complete reply sentence is "weather of your city is cloudy, temperature is 18 to 24 degrees, humidity is 60%, southeast wind of force 1-3", and the complete reply sentence is output as the reply to the input sentence.
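Steps B1 to B3 can be sketched as follows. The template placeholder syntax ("{}"), the template contents, and the weather-lookup stub are illustrative assumptions, not part of the claimed method:

```python
import random

# Hypothetical reply templates associated with each piece of feedback information;
# "{}" marks the content to be filled.
REPLY_TEMPLATES = {
    "return_answer_weather": ["weather of your city is {}"],
    "ask_for_city": ["Which city are you in?"],
}

def fetch_weather():
    # Stand-in for the networked lookup of the user's local weather (B2).
    return "cloudy, temperature 18 to 24 degrees"

def reply_for(target_action):
    # Randomly select a reply template associated with the target feedback information.
    template = random.choice(REPLY_TEMPLATES[target_action])
    if "{}" in template:                         # B1: is there content to be filled?
        return template.format(fetch_weather())  # B2: fill via networking
    return template                              # B3: output the reply

reply = reply_for("return_answer_weather")
```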
Optionally, after step 105, the sentence reply method further includes:
updating the history dialogue table based on the input sentence, the intention of the input sentence, the word slot of the input sentence, and the target feedback information.
In the embodiment of the present application, as explained above, the history dialogue table contains elements such as a sentence, the intention of the sentence, the word slots of the sentence, and the feedback information based on the sentence, and the feature vectors of the history sentences can be constructed from this table. Considering that multiple turns of a dialogue depend on what came before, at the end of each turn, that is, each time target feedback information is predicted based on the input sentence of the current turn, the relevant content of the current turn (including the input sentence itself, the intention of the input sentence, the word slots of the input sentence, and the target feedback information predicted based on the input sentence) needs to be added to the history dialogue table, so that no information is lost in the history sentences used by the next turn.
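A minimal in-memory version of this update might look as follows; the record fields mirror the table elements named above, and the data values are hypothetical:

```python
history_dialogue_table = []

def update_history(table, sentence, intent, slots, target_action):
    """Append the current turn's content so that the next turn's
    history sentences lose no information."""
    table.append({
        "sentence": sentence,   # the input sentence itself
        "intent": intent,       # intention of the input sentence
        "slots": slots,         # word slots of the input sentence
        "action": target_action,  # target feedback information predicted this turn
    })

update_history(history_dialogue_table,
               "What's the weather like?", "inquire weather", [], "ask_for_city")
update_history(history_dialogue_table,
               "I'm in Shanghai", "notify city", ["city"], "return_answer_weather")
```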
Referring to fig. 3, fig. 3 is a schematic diagram of the flow framework of a sentence reply method according to an embodiment of the present application. As can be seen from fig. 3, after an input sentence is received, the intention of the input sentence and the word slot of the input sentence can be obtained through the intention recognition model and the word slot filling model, which are arranged in parallel; the feature expression of the input sentence (namely the state vector of the input sentence) is obtained based on the intention and the word slot of the input sentence; meanwhile, the feature expression of the history sentences (namely the feature vectors of the history sentences) stored in the preset history dialogue table can be obtained; then, the state vector of the input sentence and the feature vectors of the history sentences are input together into the dialogue strategy model to obtain the target feedback information of the input sentence output by the dialogue strategy model; finally, the history dialogue table is updated (i.e., the dotted line portion in fig. 3) based on the input sentence, the intention of the input sentence, the word slot of the input sentence, and the target feedback information, and the reply to the input sentence is output based on the target feedback information.
As can be seen from the above, in the sentence reply method provided in the present application, the user's current input sentence and history sentences are characterized by a preset feature structure to obtain the state vector of the input sentence and the feature vectors of the history sentences, and the feedback information of the input sentence is predicted from these vectors. Furthermore, configuration files written in a preset format are adopted in the model training process; the training data can be conveniently converted into structured data through these configuration files, and the configuration files are easy for corpus staff to write and modify, so that the workload of the corpus staff can be reduced.
It should be emphasized that the configuration files written in the preset format, the structured data, and the like disclosed above may be stored in a blockchain to enhance their security. The blockchain referred to in the present invention is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Example two
A second embodiment of the present application provides a sentence replying apparatus, which can be integrated in an electronic device, as shown in fig. 4, the sentence replying apparatus 400 in the second embodiment of the present application includes:
a receiving unit 401, configured to receive an input sentence;
an intention acquisition unit 402 configured to input the input sentence into a trained intention recognition model, and acquire an intention of the input sentence output by the intention recognition model;
a word slot obtaining unit 403, configured to input the input sentence into a trained word slot filling model, and obtain a word slot of the input sentence output by the word slot filling model;
a first vector obtaining unit 404, configured to perform characteristic expression on the intention and word slot of the input sentence based on a preset feature structure, so as to obtain a state vector of the input sentence;
a second vector obtaining unit 405, configured to construct a feature vector of each history statement according to a preset history dialogue table and the feature structure, where the feature vector is a feature expression of an intention, a word slot, and feedback information of the history statement;
a target feedback information obtaining unit 406, configured to input the feature vector of each history statement and the state vector of the input statement into a trained dialog policy model, and obtain target feedback information output by the dialog policy model;
a reply output unit 407, configured to output a reply to the input sentence based on the target feedback information.
Optionally, the first vector obtaining unit 404 includes:
an intention vector creating subunit, configured to create an intention vector of the input sentence according to an intention of the input sentence;
a word slot vector creating subunit, configured to create a word slot vector of the input sentence according to the word slot of the input sentence;
and the vector splicing subunit is used for splicing the intention vector and the word slot vector of the input statement based on the splicing sequence indicated by the characteristic structure to obtain the state vector of the input statement.
Optionally, the intention vector is a one-hot encoded vector, and the intention vector creating subunit includes:
an intention quantity acquiring subunit, configured to acquire an intention quantity included in a preset natural language understanding file;
an intention vector initialization subunit, configured to initialize intention vectors of the input sentences based on the number of intentions, where dimensions of the intention vectors of the input sentences are the same as the number of intentions, and the dimensions of the intention vectors of the input sentences and intentions in the natural language understanding file are in a one-to-one correspondence relationship, and a value of each dimension is a preset first value;
and an intention vector setting subunit which changes the value of the dimension corresponding to the intention of the input sentence in the intention vector of the input sentence to a preset second value.
Optionally, the word slot vector is a one-hot encoded vector, and the word slot vector creating subunit includes:
the word slot number acquiring subunit is used for acquiring the number of word slots contained in a preset natural language understanding file;
a word slot vector initialization subunit, configured to initialize the word slot vector of the input sentence based on the number of word slots, where the number of dimensions of the word slot vector of the input sentence is the same as the number of word slots, the dimensions of the word slot vector of the input sentence are in one-to-one correspondence with the word slots in the natural language understanding file, and the value of each dimension is a preset first value;
and the word slot vector setting subunit is used for changing the dimension value corresponding to any word slot of the input sentence in the word slot vector of the input sentence into a preset second value.
Optionally, the reply output unit includes:
a content detection subunit, configured to detect whether there is content to be filled in the reply template associated with the target feedback information;
a content filling subunit, configured to, if there is content to be filled, fill the content to be filled in the reply template through networking;
and the reply output subunit is used for outputting the reply to the input sentence based on the filled reply template.
Optionally, the target feedback information obtaining unit includes:
a score obtaining subunit, configured to input the feature vector of each history statement and the state vector of the input statement to a trained dialogue strategy model, and obtain a score of one or more pieces of feedback information output by the dialogue strategy model;
and the target feedback information determining subunit is used for determining the feedback information with the highest score as the target feedback information.
Optionally, the sentence replying device further comprises:
and a history dialogue table updating subunit configured to update the history dialogue table based on the input sentence, the intention of the input sentence, the word slot of the input sentence, and the target feedback information.
As can be seen from the above, in the sentence replying apparatus provided in the present application, the user's current input sentence and history sentences are characterized by a preset feature structure to obtain the state vector of the input sentence and the feature vectors of the history sentences, and the feedback information of the input sentence is predicted from these vectors. Furthermore, configuration files written in a preset format are adopted in the model training process; the training data can be conveniently converted into structured data through these configuration files, and the configuration files are easy for corpus staff to write and modify, so that the workload of the corpus staff can be reduced.
EXAMPLE III
Referring to fig. 5, an electronic device 5 in the embodiment of the present application includes: a memory 501, one or more processors 502 (only one shown in fig. 5), and a computer program stored on the memory 501 and executable on the processors. Wherein: the memory 501 is used for storing software programs and modules, and the processor 502 executes various functional applications and data processing by running the software programs and units stored in the memory 501, so as to acquire resources corresponding to the preset events. Specifically, the processor 502 realizes the following steps by running the above-mentioned computer program stored in the memory 501:
receiving an input sentence;
inputting the input sentence into a trained intention recognition model and a trained word slot filling model, and obtaining the intention of the input sentence output by the intention recognition model and the word slot of the input sentence output by the word slot filling model;
performing characteristic expression on the intention and word slot of the input sentence based on a preset characteristic structure to obtain a state vector of the input sentence;
constructing a feature vector of each historical statement through a preset historical dialogue table and the feature structure, wherein the feature vector is the characteristic expression of the intention, word slot and feedback information of the historical statement;
inputting the feature vector of each historical statement and the state vector of the input statement into a trained conversation strategy model to obtain target feedback information output by the conversation strategy model;
outputting a reply to the input sentence based on the target feedback information.
Assuming that the above is the first possible embodiment, in a second possible embodiment based on the first possible embodiment, the obtaining the state vector of the input sentence by characterizing the intention and the word slot of the input sentence based on a preset feature structure includes:
creating an intention vector of the input sentence according to the intention of the input sentence;
creating a word slot vector of the input sentence according to the word slot of the input sentence;
and splicing the intention vector and the word slot vector of the input statement based on the splicing sequence indicated by the characteristic structure to obtain the state vector of the input statement.
In a third possible implementation form based on the second possible implementation form, the intention vector is a one-hot encoded vector, and the creating of the intention vector of the input sentence according to the intention of the input sentence includes:
acquiring the number of intentions contained in a preset natural language understanding file;
initializing an intention vector of the input sentence based on the number of intentions, wherein dimensions of the intention vector of the input sentence are the same as the number of intentions, the dimensions in the intention vector of the input sentence and intentions in the natural language understanding file are in one-to-one correspondence, and numerical values of each dimension are preset first numerical values;
and changing the value of the dimension corresponding to the intention of the input statement in the intention vector of the input statement into a preset second value.
In a fourth possible embodiment based on the second possible embodiment, the word slot vector is a one-hot encoded vector, and the creating of the word slot vector of the input sentence according to the word slot of the input sentence includes:
acquiring the number of word slots contained in a preset natural language understanding file;
initializing word slot vectors of the input sentences based on the number of the word slots, wherein the dimensionalities of the word slot vectors of the input sentences are the same as the number of the word slots, the dimensionalities of the word slot vectors of the input sentences correspond to the word slots in the natural language understanding file one by one, and the numerical value of each dimensionality is a preset first numerical value;
and changing the dimension value corresponding to any word slot of the input sentence into a preset second value in the word slot vector of the input sentence.
In a fifth possible implementation form based on the first possible implementation form, the outputting a response to the input sentence based on the target feedback information includes:
detecting whether the reply template associated with the target feedback information has content to be filled;
if there is content to be filled, filling the content to be filled in the reply template through networking;
and outputting the reply to the input sentence based on the filled reply template.
In a sixth possible embodiment based on the first possible embodiment, the inputting feature vectors of the respective history sentences and state vectors of the input sentences into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model includes:
inputting the feature vectors of the history sentences and the state vector of the input sentence into a trained dialogue strategy model to obtain the scores of one or more pieces of feedback information output by the dialogue strategy model;
and determining the feedback information with the highest score as the target feedback information.
In a seventh possible implementation form based on the first possible implementation form, the second possible implementation form, the third possible implementation form, the fourth possible implementation form, the fifth possible implementation form, or the sixth possible implementation form, after the feature vectors of the history statements and the state vectors of the input statements are input into a trained dialogue strategy model and target feedback information output by the dialogue strategy model is obtained, the processor 502 implements the following steps when operating the computer program stored in the memory 501:
updating the history dialogue table based on the input sentence, the intention of the input sentence, the word slot of the input sentence, and the target feedback information.
It should be understood that in the embodiments of the present application, the processor 502 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Memory 501 may include both read-only memory and random access memory and provides instructions and data to processor 502. Some or all of the memory 501 may also include non-volatile random access memory. For example, the memory 501 may also store device class information.
As can be seen from the above, in the electronic device provided in the present application, the user's current input sentence and history sentences are characterized by a preset feature structure to obtain the state vector of the input sentence and the feature vectors of the history sentences, and the feedback information of the input sentence is predicted from these vectors. Furthermore, configuration files written in a preset format are adopted in the model training process; the training data can be conveniently converted into structured data through these configuration files, and the configuration files are easy for corpus staff to write and modify, so that the workload of the corpus staff can be reduced.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of external device software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the above-described computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer readable Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable storage medium may contain other contents which can be appropriately increased or decreased according to the requirements of the legislation and the patent practice in the jurisdiction, for example, in some jurisdictions, the computer readable storage medium does not include an electrical carrier signal and a telecommunication signal according to the legislation and the patent practice.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A sentence response method, comprising:
receiving an input sentence;
inputting the input sentence into a trained intention recognition model and a trained word slot filling model, and obtaining the intention of the input sentence output by the intention recognition model and the word slot of the input sentence output by the word slot filling model;
performing feature representation on the intention and the word slot of the input sentence based on a preset feature structure to obtain a state vector of the input sentence;
constructing a feature vector of each historical sentence through a preset historical dialogue table and the feature structure, wherein the feature vector is a feature representation of the intention, the word slot and the feedback information of the historical sentence;
inputting the feature vector of each historical sentence and the state vector of the input sentence into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model;
outputting a reply to the input sentence based on the target feedback information.
2. The sentence reply method of claim 1, wherein the performing feature representation on the intention and the word slot of the input sentence based on the preset feature structure to obtain the state vector of the input sentence comprises:
creating an intention vector of the input sentence according to the intention of the input sentence;
creating a word slot vector of the input sentence according to the word slot of the input sentence; and
splicing the intention vector and the word slot vector of the input sentence based on a splicing order indicated by the feature structure to obtain the state vector of the input sentence.
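The construction in claims 2 to 4 can be sketched in a few lines. This is an illustrative Python sketch, not the patented implementation: the intent and slot inventories (`INTENTS`, `SLOTS`) stand in for the "preset natural language understanding file", and the splicing order (intent vector first, then slot vector) is an assumption about the feature structure.

```python
# Hypothetical label inventories standing in for the patent's
# "natural language understanding file" (names are illustrative).
INTENTS = ["greet", "ask_weather", "book_ticket"]
SLOTS = ["city", "date", "passenger_name"]

def one_hot(label: str, inventory: list) -> list:
    """One-hot encode a label: every dimension starts at the preset
    first value (0.0); the dimension matching the label is set to the
    preset second value (1.0)."""
    vec = [0.0] * len(inventory)
    vec[inventory.index(label)] = 1.0
    return vec

def state_vector(intent: str, filled_slots: list) -> list:
    """Splice the one-hot intention vector with a multi-hot word-slot
    vector, intent first, to form the sentence's state vector."""
    intent_vec = one_hot(intent, INTENTS)
    slot_vec = [0.0] * len(SLOTS)
    for slot in filled_slots:          # any filled slot flips its dimension
        slot_vec[SLOTS.index(slot)] = 1.0
    return intent_vec + slot_vec

print(state_vector("ask_weather", ["city", "date"]))
# [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]
```

The resulting dimension is the number of intents plus the number of word slots, so every sentence maps to a fixed-length vector regardless of its wording.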
3. The sentence reply method of claim 2, wherein the intention vector is a one-hot encoded vector, and the creating the intention vector of the input sentence according to the intention of the input sentence comprises:
acquiring the number of intentions contained in a preset natural language understanding file;
initializing the intention vector of the input sentence based on the number of intentions, wherein the dimension of the intention vector of the input sentence equals the number of intentions, the dimensions of the intention vector of the input sentence correspond one-to-one to the intentions in the natural language understanding file, and the value of each dimension is a preset first value; and
changing the value of the dimension corresponding to the intention of the input sentence in the intention vector of the input sentence to a preset second value.
4. The sentence reply method of claim 2, wherein the word slot vector is a one-hot encoded vector, and the creating the word slot vector of the input sentence according to the word slot of the input sentence comprises:
acquiring the number of word slots contained in a preset natural language understanding file;
initializing the word slot vector of the input sentence based on the number of word slots, wherein the dimension of the word slot vector of the input sentence equals the number of word slots, the dimensions of the word slot vector of the input sentence correspond one-to-one to the word slots in the natural language understanding file, and the value of each dimension is a preset first value; and
changing, in the word slot vector of the input sentence, the value of the dimension corresponding to any word slot of the input sentence to a preset second value.
5. The sentence reply method of claim 1, wherein the outputting of the reply to the input sentence based on the target feedback information comprises:
detecting whether the reply template associated with the target feedback information has content to be filled;
if the content to be filled exists, acquiring the content to be filled through a network and filling the acquired content into the reply template; and
outputting a reply to the input sentence based on the populated reply template.
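The template-filling step of claim 5 can be sketched as follows. The templates, placeholder syntax, and the values dict are all illustrative assumptions; in the claim the missing content is obtained (e.g. over a network) before the template is populated.

```python
import re

# Hypothetical reply templates keyed by feedback information; the
# {placeholder} syntax is an assumption, not the patent's format.
TEMPLATES = {
    "inform_weather": "The weather in {city} on {date} is {forecast}.",
    "greet": "Hello! How can I help you?",
}

def fill_template(feedback: str, values: dict) -> str:
    """Detect whether the template tied to the feedback has content to
    fill; if so, populate it, otherwise return it unchanged."""
    template = TEMPLATES[feedback]
    placeholders = re.findall(r"\{(\w+)\}", template)
    if not placeholders:              # no content to be filled
        return template
    return template.format(**{p: values[p] for p in placeholders})

print(fill_template("inform_weather",
                    {"city": "Shanghai", "date": "Monday", "forecast": "sunny"}))
# The weather in Shanghai on Monday is sunny.
```

A template with no placeholders (like "greet" above) is emitted as-is, which matches the claim's branch where no content to be filled is detected.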
6. The sentence reply method of claim 1, wherein the inputting the feature vector of each historical sentence and the state vector of the input sentence into the trained dialogue strategy model to obtain the target feedback information output by the dialogue strategy model comprises:
inputting the feature vector of each historical sentence and the state vector of the input sentence into the trained dialogue strategy model to obtain a score for each of one or more pieces of feedback information output by the dialogue strategy model; and
determining the feedback information with the highest score as the target feedback information.
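The selection step of claim 6 reduces to an argmax over the model's scores. The scores below are illustrative stand-ins for real dialogue-strategy-model output, not values from the patent.

```python
def select_target_feedback(scores: dict) -> str:
    """Return the feedback information with the highest score, as in
    claim 6. `scores` maps each candidate feedback to its model score."""
    return max(scores, key=scores.get)

# Hypothetical scores emitted by the dialogue strategy model.
scores = {"inform_weather": 0.72, "ask_city": 0.21, "greet": 0.07}
print(select_target_feedback(scores))
# inform_weather
```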
7. The sentence reply method according to any one of claims 1 to 6, wherein after the feature vector of each historical sentence and the state vector of the input sentence are input into the trained dialogue strategy model and the target feedback information output by the dialogue strategy model is obtained, the sentence reply method further comprises:
updating the historical dialogue table based on the input sentence, the intention of the input sentence, the word slot of the input sentence, and the target feedback information.
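The update in claim 7 appends the finished turn to the historical dialogue table so that later turns can build feature vectors from it. The field names below are assumptions chosen for illustration, not the patent's schema.

```python
# Hypothetical in-memory historical dialogue table; each row records
# one turn: the sentence, its intent, its word slots, and the chosen
# target feedback information.
history_table: list = []

def update_history(sentence: str, intent: str,
                   word_slots: dict, feedback: str) -> None:
    """Append one completed turn to the historical dialogue table."""
    history_table.append({
        "sentence": sentence,
        "intent": intent,
        "word_slots": word_slots,
        "feedback": feedback,
    })

update_history("What's the weather in Shanghai?", "ask_weather",
               {"city": "Shanghai"}, "ask_date")
print(len(history_table))
# 1
```

Keeping intent, slots, and feedback together per row is what lets the feature-vector construction of claim 1 encode all three for each historical sentence.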
8. A sentence reply device characterized by comprising:
a receiving unit for receiving an input sentence;
an intention acquisition unit, configured to input the input sentence into a trained intention recognition model, and obtain an intention of the input sentence output by the intention recognition model;
a word slot obtaining unit, configured to input the input sentence into a trained word slot filling model, and obtain a word slot of the input sentence output by the word slot filling model;
a first vector acquisition unit, configured to perform feature representation on the intention and the word slot of the input sentence based on a preset feature structure to obtain a state vector of the input sentence;
a second vector acquisition unit, configured to construct a feature vector of each historical sentence through a preset historical dialogue table and the feature structure, wherein the feature vector is a feature representation of the intention, the word slot and the feedback information of the historical sentence;
a target feedback information acquisition unit, configured to input the feature vector of each historical sentence and the state vector of the input sentence into a trained dialogue strategy model to obtain target feedback information output by the dialogue strategy model; and
a reply output unit for outputting a reply to the input sentence based on the target feedback information.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202011013920.9A 2020-09-24 2020-09-24 Sentence reply method, sentence reply device and electronic equipment Pending CN112163067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011013920.9A CN112163067A (en) 2020-09-24 2020-09-24 Sentence reply method, sentence reply device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011013920.9A CN112163067A (en) 2020-09-24 2020-09-24 Sentence reply method, sentence reply device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112163067A true CN112163067A (en) 2021-01-01

Family

ID=73863631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011013920.9A Pending CN112163067A (en) 2020-09-24 2020-09-24 Sentence reply method, sentence reply device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112163067A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112908315A (en) * 2021-03-10 2021-06-04 北京思图场景数据科技服务有限公司 Question-answer intention judgment method based on voice characteristics and voice recognition
CN113139816A (en) * 2021-04-26 2021-07-20 北京沃东天骏信息技术有限公司 Information processing method, device, electronic equipment and storage medium
CN113139816B (en) * 2021-04-26 2024-07-16 北京沃东天骏信息技术有限公司 Information processing method, apparatus, electronic device and storage medium
CN113326365A (en) * 2021-06-24 2021-08-31 中国平安人寿保险股份有限公司 Reply statement generation method, device, equipment and storage medium
CN113326365B (en) * 2021-06-24 2023-11-07 中国平安人寿保险股份有限公司 Reply sentence generation method, device, equipment and storage medium
CN113617036A (en) * 2021-08-06 2021-11-09 网易(杭州)网络有限公司 Game dialogue processing method, device, equipment and storage medium
CN114490985A (en) * 2022-01-25 2022-05-13 北京百度网讯科技有限公司 Dialog generation method and device, electronic equipment and storage medium
CN114490985B (en) * 2022-01-25 2023-01-31 北京百度网讯科技有限公司 Dialogue generation method and device, electronic equipment and storage medium
CN116595995A (en) * 2023-07-17 2023-08-15 通号通信信息集团有限公司 Determination method of action decision, electronic equipment and computer readable storage medium
CN116595995B (en) * 2023-07-17 2023-10-24 通号通信信息集团有限公司 Determination method of action decision, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
JP6799574B2 (en) Method and device for determining satisfaction with voice dialogue
CN112163067A (en) Sentence reply method, sentence reply device and electronic equipment
CN108847241B (en) Method for recognizing conference voice as text, electronic device and storage medium
US20220337538A1 (en) Customized message suggestion with user embedding vectors
CN111460115B (en) Intelligent man-machine conversation model training method, model training device and electronic equipment
CN111046667B (en) Statement identification method, statement identification device and intelligent equipment
CN111090728A (en) Conversation state tracking method and device and computing equipment
CN107798123A (en) Knowledge base and its foundation, modification, intelligent answer method, apparatus and equipment
CN111046653B (en) Statement identification method, statement identification device and intelligent equipment
CN112463942A (en) Text processing method and device, electronic equipment and computer readable storage medium
CN113569017A (en) Model processing method and device, electronic equipment and storage medium
CN117591663B (en) Knowledge graph-based large model promt generation method
CN111402864A (en) Voice processing method and electronic equipment
CN112487813B (en) Named entity recognition method and system, electronic equipment and storage medium
CN111898363B (en) Compression method, device, computer equipment and storage medium for long and difficult text sentence
CN111508481B (en) Training method and device of voice awakening model, electronic equipment and storage medium
CN113268593A (en) Intention classification and model training method and device, terminal and storage medium
CN116680387A (en) Dialogue reply method, device, equipment and storage medium based on retrieval enhancement
US20230351098A1 (en) Custom display post processing in speech recognition
CN111401069A (en) Intention recognition method and intention recognition device for conversation text and terminal
CN113392190B (en) Text recognition method, related equipment and device
WO2022262080A1 (en) Dialogue relationship processing method, computer and readable storage medium
US11823671B1 (en) Architecture for context-augmented word embedding
US20230140480A1 (en) Utterance generation apparatus, utterance generation method, and program
CN112328751A (en) Method and device for processing text

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination