CN111694941B - Reply information determining method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN111694941B CN111694941B CN202010443989.9A CN202010443989A CN111694941B CN 111694941 B CN111694941 B CN 111694941B CN 202010443989 A CN202010443989 A CN 202010443989A CN 111694941 B CN111694941 B CN 111694941B
- Authority
- CN
- China
- Prior art keywords
- reply
- information
- sample information
- target
- reference text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/3329—Natural language query formulation or dialogue systems (G06F16/00 Information retrieval; G06F16/33 Querying; G06F16/332 Query formulation)
- G06F16/3343—Query execution using phonetics (G06F16/33 Querying; G06F16/334 Query execution)
- G06F40/30—Semantic analysis (G06F40/00 Handling natural language data)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Machine Translation (AREA)
Abstract
The application provides a reply information determining method and apparatus, a storage medium, and an electronic device. It belongs to the field of computer technology and relates to artificial intelligence and natural language processing. The method includes: obtaining a reply information candidate set corresponding to target context information; determining a reference text vector for each piece of reply sample information in the candidate set, where a reference text vector represents the semantics of the corresponding reply sample information; dividing the reply sample information into a plurality of semantic class groups according to the distances between the reference text vectors; and selecting at least one piece of reply sample information from each semantic class group to form a reply information set. Because the semantic class groups are obtained by dividing the reply sample information according to its semantics, different class groups represent different semantic facets. Selecting reply sample information from each class group therefore yields multiple replies with different semantics, so the reply information output for the same context information is richer, and the quality of dialogue replies is effectively improved.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for determining reply information, a storage medium, and an electronic device.
Background
In recent years, dialogue systems built on natural language processing technology have been increasingly used in electronic products; for example, they power chat robots, intelligent assistants, and the like. A dialogue system enables communication between people and machines through natural language processing, and falls into three types: question-answering, task-oriented, and chit-chat. All of these dialogue systems typically need to respond to a question entered by the user (also referred to as the dialogue context).
In the related art, a dialogue system will generally give the same reply to the same dialogue context. For example, each time a user sends "are you there" to a chat robot, the robot replies "I'm here, I'm always here". This single-reply approach results in lower-quality dialogue replies.
Disclosure of Invention
The embodiments of the present application provide a reply information determining method and apparatus, a storage medium, and an electronic device, which can obtain multiple replies with different semantics for the same context information, thereby improving the quality of dialogue replies.
In order to achieve the above purpose, the technical solution of the embodiments of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a reply information determining method, including:
obtaining a reply information candidate set corresponding to target context information;
respectively determining reference text vectors of the reply sample information in the reply information candidate set, wherein the reference text vectors are used for representing the semantics of the corresponding reply sample information;
dividing the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the distances between the obtained reference text vectors;
at least one reply sample information is selected from each semantic class group to form a reply information set.
In a second aspect, an embodiment of the present application provides a reply information determining apparatus, including:
the information retrieval unit is used for obtaining a reply information candidate set corresponding to the target context information;
the information determining unit is used for respectively determining reference text vectors of the reply sample information in the reply information candidate set, wherein the reference text vectors are used for representing the semantics of the corresponding reply sample information; dividing the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the obtained distance between each reference text vector; at least one reply sample information is selected from each semantic class group to form a reply information set.
In an alternative embodiment, the information determining unit is specifically configured to:
selecting a preset number of reference text vectors from the reference text vectors as initial class-group centers, and assigning the reply sample information corresponding to each reference text vector to the same semantic class group as its nearest initial class-group center, where a reference text vector represents the semantics of the corresponding reply sample information;
and repeatedly performing the steps of determining the class-group center of each semantic class group and assigning the reply sample information corresponding to each reference text vector to the class group of its nearest center, until the change amplitude of every class-group center is within an expected amplitude, thereby obtaining a plurality of semantic class groups divided according to the semantics of the reply sample information.
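The iterative division described above is essentially k-means clustering over the reference text vectors. A minimal pure-Python sketch, using toy 2-D vectors in place of real text embeddings; the stopping tolerance `tol` is an assumed stand-in for the "expected amplitude":

```python
import random

def kmeans(vectors, k, max_iter=100, tol=1e-6):
    # Pick k initial class-group centers from the reference text vectors.
    centers = random.sample(vectors, k)
    for _ in range(max_iter):
        # Assign each vector to its nearest center (one semantic class group per center).
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centers[j])))
            groups[i].append(v)
        # Recompute each class-group center as the mean of its members.
        new_centers = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
        # Stop once every center's movement is within the tolerance.
        shift = max(sum((a - b) ** 2 for a, b in zip(c, n)) ** 0.5
                    for c, n in zip(centers, new_centers))
        centers = new_centers
        if shift < tol:
            break
    return groups, centers
```

Here `k` plays the role of the preset number of initial class-group centers; real reference text vectors would be high-dimensional.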
In an alternative embodiment, the information determining unit is specifically configured to:
for each piece of reply sample information, obtaining the word vector of each segmented word contained in the reply sample information;
and performing a weighted average of the word vectors of the segmented words to obtain the reference text vector of the reply sample information.
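The weighted average above can be sketched as follows. The patent does not specify the weights, so uniform weights are the default here, and the per-word weights (e.g. IDF-style scores) are only an illustrative option:

```python
def text_vector(word_vectors, weights=None):
    """Average a sentence's word vectors into one reference text vector.

    weights: optional per-word weights (hypothetical, e.g. IDF scores);
    uniform weights are used if omitted.
    """
    if weights is None:
        weights = [1.0] * len(word_vectors)
    total = sum(weights)
    dim = len(word_vectors[0])
    return tuple(
        sum(w * vec[d] for w, vec in zip(weights, word_vectors)) / total
        for d in range(dim)
    )
```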
In an alternative embodiment, the information retrieval unit is specifically configured to:
determining similar context sample information of the target context information in a dialogue data set, where the dialogue data set includes a plurality of pieces of context sample information and the reply sample information corresponding to each piece;
and obtaining the reply information candidate set based on the reply sample information corresponding to each piece of similar context sample information.
In an alternative embodiment, the information retrieval unit is specifically configured to:
performing word segmentation on the target context information and removing stop words to obtain the target words in the target context information;
for the context sample information in the dialogue data set, determining the similarity between the target context information and each piece of context sample information according to the word frequency of each target word in that piece of context sample information;
and taking the context sample information whose similarity meets a preset similarity condition as the similar context sample information of the target context information.
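These three retrieval steps can be sketched as below. Whitespace tokenization and a tiny English stop-word list stand in for Chinese word segmentation and a real stop-word list, and the threshold is one hypothetical form of the "preset similarity condition":

```python
STOP_WORDS = {"the", "is", "a", "to"}  # illustrative stop-word list

def tokenize(text):
    # Stand-in for word segmentation plus stop-word removal.
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def tf_similarity(target, sample):
    # Score a context sample by how often the target's words occur in it.
    target_words = set(tokenize(target))
    sample_words = tokenize(sample)
    if not sample_words:
        return 0.0
    hits = sum(1 for w in sample_words if w in target_words)
    return hits / len(sample_words)

def similar_samples(target, samples, threshold=0.5):
    # Keep samples whose similarity meets the preset condition (a threshold here).
    return [s for s in samples if tf_similarity(target, s) >= threshold]
```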
In an alternative embodiment, the information retrieval unit is specifically configured to:
respectively determining a target text vector of the target context information and a reference text vector of each reply sample information;
and according to the correlation between each reference text vector and the target text vector, forming the reply information candidate set from the reply sample information whose reference text vectors meet a preset correlation condition.
In an alternative embodiment, the information retrieval unit is specifically configured to:
determining cosine similarity between the target text vector and a reference text vector;
and taking the cosine similarity between the target text vector and the reference text vector as the correlation between the target text vector and the reference text vector.
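The cosine similarity used as the correlation here is the standard formula cos(u, v) = u·v / (|u|·|v|); a minimal sketch:

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|); 0.0 if either vector is all zeros.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```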
In an alternative embodiment, the information determining unit is specifically configured to:
and selecting, from each semantic class group, the piece of reply sample information with the highest correlation to the target context information, and forming a reply information set from all the selected reply sample information.
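Picking the most relevant reply from each semantic class group can be sketched as below, assuming each group holds (reply text, reference vector) pairs and correlation is cosine similarity:

```python
import math

def pick_per_group(groups, target_vec):
    """From each semantic class group, keep the reply whose reference vector
    is most correlated (cosine similarity) with the target text vector."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        n = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / n if n else 0.0

    reply_set = []
    for group in groups:  # group: list of (reply_text, reference_vector)
        best = max(group, key=lambda item: cos(item[1], target_vec))
        reply_set.append(best[0])
    return reply_set
```

Because one reply comes from each class group, the resulting reply information set spans the different semantic facets of the candidate set.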
In an alternative embodiment, the reply message determining apparatus further includes:
and an information feedback unit, configured to output at least one piece of reply sample information in the reply information set as feedback information for the target context information.
In an alternative embodiment, the information retrieval unit is further configured to:
receiving voice data;
converting the voice data into the target context information;
the information feedback unit is further configured to:
and converting one reply sample information in the reply information set into reply voice data, and playing the reply voice data.
In a third aspect, embodiments of the present application further provide a computer readable storage medium, where a computer program is stored, where the computer program, when executed by a processor, implements the reply information determining method of the first aspect.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a memory and a processor, where the memory stores a computer program that can be executed on the processor, and when the computer program is executed by the processor, the reply information determining method of the first aspect is implemented.
According to the reply information determining method and apparatus, storage medium, and electronic device described above, a reply information candidate set corresponding to target context information is obtained, and the reply sample information in the candidate set is divided into a plurality of semantic class groups according to the distances between the reference text vectors, where a reference text vector represents the semantics of the corresponding reply sample information. The reply information is determined by selecting at least one piece of reply sample information from each semantic class group to form a reply information set. Because the semantic class groups are obtained by dividing the reply sample information according to its semantics, different class groups represent different semantic facets. Selecting reply sample information from each class group therefore yields multiple replies with different semantics, so the reply information output for the same context information is richer, and the quality of dialogue replies is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is an application scenario schematic diagram of a reply message determining method provided in an embodiment of the present application;
fig. 2 is a flow chart of a reply message determining method provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of determining similar context sample information of target context information according to an embodiment of the present application;
fig. 4 is a schematic flow chart of determining a reply information candidate set corresponding to target context information according to an embodiment of the present application;
fig. 5 is an application interface schematic diagram of a reply message determining method provided in an embodiment of the present application;
fig. 6 is a flowchart of another reply message determining method provided in the embodiment of the present application;
fig. 7 is a schematic diagram of an implementation process of another reply message determining method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a reply message determining apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another reply message determining apparatus according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, wherein it is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprises" and "comprising," along with their variants, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Some of the terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Text vector: a vector describing the semantic features expressed by a natural language text; it is the numeric representation of that text in a machine. In the embodiments of the present application, the target context information, the context sample information stored in the dialogue data set, and the corresponding reply sample information are all natural language text. The target context information may be a sentence containing one or more words entered by a user during human-computer interaction, or may be obtained by converting voice data input by the user during human-computer interaction.
(2) Word vector: also called a word embedding or word vector feature, a word vector describes the semantic features of a word contained in natural language text, usually as a dense vector (or matrix) that a machine can process; it is the numeric representation of the word in a machine. If a natural language text contains multiple words, the text vector of that text can be determined from the word vectors of those words.
(3) Word2Vec model: Google's open-source word vector tool, which converts words into word vectors by exploiting the semantic relations between words in text data; semantically related words are mapped to word vectors that lie close to each other.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms "first," "second," and the like herein are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. A feature qualified by "first" or "second" may explicitly or implicitly include one or more such features; in the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
Embodiments of the present application relate to artificial intelligence (AI) and machine learning (ML) techniques, and are designed based on natural language processing (NLP) and machine learning within artificial intelligence.
Artificial intelligence is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision. Artificial intelligence techniques mainly include computer vision techniques, natural language processing techniques, speech processing techniques (Speech Technology), machine learning/deep learning, and other directions.
With research and progress in artificial intelligence technology, AI has been studied and applied in many fields, such as smart home, intelligent customer service, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, robots, and smart healthcare. It is believed that, as the technology develops, artificial intelligence will be applied in more fields and deliver increasingly important value.
Machine learning is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and more. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning. In the embodiments of the present application, a machine-learning-based word vector model is trained on text data: words are represented as word vectors according to the semantic relations between words in the training text. The trained word vector model is then used to convert the segmented words contained in the target context information and in the reply sample information of the dialogue data set into word vectors.
Speech processing techniques include automatic speech recognition, speech synthesis, and voiceprint recognition. Enabling computers to listen, see, speak, and feel is a development direction of future human-computer interaction, and voice interaction is one of its main modes.
Natural language processing technology is an important direction in the fields of computer science and artificial intelligence. It studies various theories and methods that enable effective communication between people and computers in natural language. Natural language processing is a science integrating linguistics, computer science, and mathematics; research in this field involves natural language, the language people use daily, so it is closely related to linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like.
Text processing is a main process in natural language processing technology and is widely applicable to various scenarios, such as dialogue systems and other human-computer interaction systems. For example, in the embodiments of the present application, the computer performs text processing on the natural language text input by the user (i.e., a question or dialogue context), converts it into digitized vectors the machine can understand, determines the reply information corresponding to the text according to the obtained vectors, and feeds the reply information back to the client.
In the related art, when a computer replies to a dialogue context input by a user, it generally gives the same reply for the same dialogue context. Some techniques return multiple replies for the same dialogue context, but those replies are semantically homogeneous. This single-reply approach results in lower-quality dialogue replies.
In view of the above problems, embodiments of the present application provide a reply information determining method, an apparatus, a storage medium, and an electronic device that can improve the quality of dialogue replies. The method first retrieves, from a dialogue data set, reply sample information related to the target context information to build a reply information candidate set, where the target context information may be a dialogue context input by a user. The reply sample information in the candidate set is then divided into a plurality of semantic class groups according to its semantics, and at least one piece of reply sample information is selected from each class group to form a reply information set. Because the semantic class groups are divided according to the semantics of the reply sample information, selecting replies from each group yields multiple replies with different semantics, making the reply information output for the same dialogue context richer and effectively improving the quality of dialogue replies.
In order to better understand the technical solution provided by the embodiments of the present application, a few simple descriptions are provided below for application scenarios applicable to the technical solution provided by the embodiments of the present application, and it should be noted that the application scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Fig. 1 shows an application scenario of the reply information determining method provided in the embodiments of the present application. Referring to fig. 1, the scenario includes a plurality of terminal devices 11 and a server 12. The terminal device 11 and the server 12 may be connected and exchange data over a wired or wireless connection (such as a communication network). For example, they may be connected by a data line or a wired network, or through a radio frequency module, a Bluetooth module, or a wireless network.
The terminal device 11 may be a mobile phone, a palm computer (Personal Digital Assistant, PDA), a computer, a notebook, a tablet computer, a smart wearable device (e.g. a smart watch and a smart helmet), etc. The server 12 may be a server or a server cluster or a cloud computing center composed of several servers, or a virtualization platform, or may be a personal computer, a mainframe computer, a computer cluster, or the like. Any number of terminal devices and servers can be provided in the application scenario in the embodiments of the present application according to implementation requirements. The present application is not particularly limited thereto.
The reply information determining method provided in the embodiments of the present application may be executed by the server 12, or executed cooperatively by the terminal device 11 and the server 12. For example, a voice assistant client or a story generation client is installed on the terminal device 11. The terminal device 11 receives voice data input by a user through the client, converts the voice data into target context information, and sends it to the server 12. After receiving the target context information, the server 12 first retrieves reply sample information related to it from the dialogue data set to build a reply information candidate set. It then divides the reply sample information in the candidate set into a plurality of semantic class groups according to the semantics of each piece, and selects at least one piece from each class group to form a reply information set. The server 12 sends the reply information set to the terminal device 11, which feeds back reply information to the user based on it, effectively improving the quality of dialogue replies.
The reply information determining method provided in the embodiment of the present application may also be executed by the terminal device 11, for example, the terminal device may be an electronic device such as a chat robot or an intelligent small assistant, or a mobile terminal installed with a chat robot client or an intelligent small assistant client.
To further explain the technical solutions provided in the embodiments of the present application, details are described below with reference to the accompanying drawings and specific embodiments. Although the embodiments provide the method operation steps shown below or in the figures, the method may include more or fewer steps obtained through routine, non-inventive effort. For steps between which there is no logically necessary causal relationship, the execution order is not limited to that provided in the embodiments; in an actual process or apparatus, the steps may be performed sequentially or in parallel as shown in the embodiments or drawings.
Fig. 2 shows a reply information determining method provided in an embodiment of the present application, which may be performed by the server 12 in fig. 1, or by the terminal device 11 or another electronic device. Illustratively, the following describes a specific implementation of the reply information determining method with the server 12 as the executing entity. As shown in fig. 2, the reply information determining method includes the following steps:
step S210, obtaining a reply information candidate set corresponding to the target context information.
The server stores a dialogue data set comprising a plurality of pieces of context sample information and the reply sample information corresponding to each piece. One piece of context sample information may correspond to one piece of reply sample information, or to several.

The target context information, as well as the context sample information and reply sample information in the dialogue data set, may each be natural language text. For example, a piece of context sample information may be "smoking is unhealthy", and its corresponding reply sample information may be "the frequency of smoking should be reduced" or "smoking should be prohibited".

After receiving the target context information sent by the terminal device, the server searches the dialogue data set for context sample information semantically similar to the target context information. One or more pieces of such similar context sample information may be found, and the reply sample information corresponding to each of them yields the reply information candidate set for the target context information.
Step S220, determining reference text vectors of the reply sample information in the reply information candidate set respectively.
The reference text vector is used for representing the semantics of the corresponding reply sample information.
In step S230, the reply sample information in the reply information candidate set is divided into a plurality of semantic class groups according to the obtained distances between the respective reference text vectors.
Each reply sample information has its own semantics, which can be reflected by its corresponding reference text vector. Through clustering, reply sample information with the same or similar semantics can be placed in the same semantic class group, and reply sample information with different semantics in different semantic class groups. The reply sample information in the reply information candidate set is thereby divided into a plurality of semantic class groups, each representing one semantic facet, with different class groups representing different facets.
Step S240, at least one reply sample information is selected from each semantic class group to form a reply information set.
In some embodiments, one reply sample information may be selected from each semantic class group to make up a reply information set. For example, one reply sample information can be randomly selected from each semantic class group, and all obtained reply sample information is formed into a reply information set; or, one reply sample information with the highest correlation degree with the target context information can be selected from each semantic class group, and all obtained reply sample information forms a reply information set.
In other embodiments, multiple pieces of reply sample information may be selected from each semantic class group to make up the reply information set. For example, reply sample information meeting a set condition may be selected from each semantic class group, and all of it formed into the reply information set. The set condition may be that the correlation with the target context information reaches a set threshold; alternatively, the reply sample information in each semantic class group may be sorted from high to low by correlation with the target context information, with the top N pieces regarded as meeting the set condition.
With the reply information determining method above, the reply sample information in the candidate set corresponding to the target context information is divided into a plurality of semantic class groups according to the semantics of each piece, and the reply information is determined by selecting at least one piece from each semantic class group to form a reply information set. Because each semantic class group represents one semantic facet, selecting reply sample information from every class group yields multiple replies covering different semantic facets, making the reply information output for the same context richer and effectively improving the quality of dialogue replies.
In some embodiments, in step S210 above, searching the dialogue data set for similar context sample information of the target context information may be implemented by the method shown in fig. 3, which includes the following steps:
step S310, word segmentation processing is carried out on the target context information, stop words are removed, and target words in the target context information are obtained.
After receiving the target context information, the server may perform word segmentation on it to obtain a plurality of tokens; for example, the Jieba segmenter or another general-purpose word segmentation method may be used. Some tokens consist of a single character and others of several characters. Taking the target context information "smoking is harmful to health" as an example, segmenting the original Chinese sentence yields six tokens, including "smoking", "health", and "harmful" along with function words rendered literally as "pair" and "yes".
The tokens obtained after segmentation may include content words with actual meaning, such as nouns, verbs, and adjectives, as well as function words without actual meaning, such as prepositions, conjunctions, modal particles, auxiliaries, and interjections. For example, the token rendered as "pair" above has no actual meaning by itself; in the computing field such function words are called stop words, and ignoring them does not affect the meaning of the whole text. Therefore, stop words can be removed from the segmented tokens, and the remaining tokens with actual meaning used as the target words.
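As a rough illustration of step S310, the sketch below tokenizes an (English) target context and filters out stop words; the stop-word set is a tiny illustrative sample rather than a real lexicon, and for Chinese text a segmenter such as Jieba would replace the simple regular-expression split.

```python
import re

# Minimal sketch of step S310: tokenize the target context, drop stop words,
# and keep the remaining content words as target words.
STOP_WORDS = {"is", "to", "the", "a", "an", "for", "of", "your"}  # illustrative only

def target_words(text: str) -> list[str]:
    # Naive tokenization; a Chinese pipeline would use a segmenter instead.
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(target_words("Smoking is harmful to your health"))
```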
In step S320, for the context sample information in the dialogue data set, the similarity between the target context information and each piece of context sample information is determined according to the word frequency of each target word in that piece.
Alternatively, for each piece of context sample information in the dialogue data set, the BM25 (Best Matching 25) algorithm may be used to determine its similarity with the target context information. BM25 is a retrieval algorithm based on the bag-of-words model: it considers only the frequency of each target word within the context sample information, ignoring word order, grammar, and discourse relations. The calculation formula is as follows:
$$\omega_T=\sum_{i=1}^{n}\mathrm{IDF}(t_i)\cdot\frac{f_i\,(k_1+1)}{f_i+k_1\left(1-b+b\,\frac{L_d}{L_{ave}}\right)}$$

where ω_T is the similarity between the target context information and the context sample information T; the higher the value of ω_T, the more similar the two are. n is the number of target words t_i contained in the target context information; if the target context information is an English sentence, a target word t_i may be a single word, while for a Chinese sentence it may be a token of one or more characters. f_i is the number of times t_i occurs in T, i.e. its word frequency. L_d is the length of T, which may be expressed as the number of words T contains, and L_ave is the average length of all context sample information in the dialogue data set.

The two remaining parameters k_1 and b may take set values used for tuning, e.g. k_1 = 2 and b = 0.75. IDF(t_i) denotes the weight of the target word t_i.
Specifically, IDF (Inverse Document Frequency) may represent the weight of a target word. For example, the weight IDF(t) of a target word t may be calculated as:

$$\mathrm{IDF}(t)=\log\frac{N-f_t+0.5}{f_t+0.5}$$

where N is the total number of context sample information in the dialogue data set, and f_t is the number of context sample information in which the target word t occurs.
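The BM25 scoring of step S320 can be sketched as follows; the Robertson-style IDF smoothing and the toy corpus are assumptions of this sketch, not part of the patent text, and documents are represented as already-segmented token lists.

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=2.0, b=0.75):
    """BM25 similarity of a query against one document (sketch of step S320).

    corpus: list of all context-sample token lists.
    The IDF smoothing below is the common Robertson variant (an assumption).
    """
    N = len(corpus)
    L_ave = sum(len(d) for d in corpus) / N          # average document length
    score = 0.0
    for t in query_terms:
        f_t = sum(1 for d in corpus if t in d)       # documents containing t
        idf = math.log((N - f_t + 0.5) / (f_t + 0.5) + 1)
        f = doc_terms.count(t)                       # term frequency in this doc
        denom = f + k1 * (1 - b + b * len(doc_terms) / L_ave)
        score += idf * f * (k1 + 1) / denom
    return score

corpus = [["smoking", "unhealthy"], ["tea", "good"], ["smoking", "harmful", "health"]]
print(bm25_score(["smoking", "harmful"], corpus[2], corpus))
```

A document sharing no target words scores zero, so ranking the context samples by this score directly yields the most similar ones.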
In step S330, the context sample information whose similarity with the target context information satisfies the preset similarity condition is used as the similar context sample information of the target context information.
In one embodiment, context sample information whose similarity with the target context information reaches a set similarity threshold may be regarded as similar context sample information of the target context information; the threshold may be, for example, 80% or 60%. In another embodiment, the context sample information in the dialogue data set may be sorted from high to low by similarity with the target context information, and the top K1 pieces taken as the similar context sample information of the target context information.
In some embodiments, after determining similar context sample information of the target context information in the dialogue data set, reply sample information corresponding to each determined similar context sample information may be added to the reply information candidate set corresponding to the target context information.
In other embodiments, it is considered that among the reply sample information corresponding to the similar context sample information, some pieces may have low semantic consistency with the target context information, so adding all of them to the reply information candidate set may introduce noise data. For example, for the target context "smoking is harmful to health", the reply sample information "smoking should be prohibited" and "what do you think can be done to stop smoking" have high semantic consistency with the target context, while the reply sample information "I am maintaining the building" has low semantic consistency and can be regarded as noise data. To ensure consistency between the reply sample information in the candidate set and the target context information and to reduce noise data, after the similar context sample information of the target context information is determined in the dialogue data set, the reply information candidate set corresponding to the target context information may be constructed by the method shown in fig. 4, which includes the following steps:
In step S410, a target text vector of the target context information and a reference text vector of each reply sample information are determined respectively.
The reply sample information in this step is the reply sample information corresponding to the similar context sample information of the target context information; these pieces may be formed into an initial candidate set.
Alternatively, word vectors of the individual word segments contained in the target context information may be weighted-averaged to determine a target text vector of the target context information; similarly, word vectors of the segmented words contained in the reply sample information are weighted and averaged, and reference text vectors of the reply sample information are determined.
Since the determination of the target text vector is the same as the determination of the reference text vector, the method of determining the reference text vector will be described below as an example.
For any reply sample information in the initial candidate set, in one embodiment, word segmentation may be performed on it, and the word vector of each token determined through a word vector recognition model such as Word2Vec. The Word2Vec model determines the vector of a token based on its context (i.e., the neighboring tokens in the reply sample information). In another embodiment, after segmenting the reply sample information, the word vector of each token may instead be looked up in a pre-built word vector table.
Before or after determining the word vector of each word segment, a weight corresponding to each word segment in the reply sample information may also be determined. For example, a TF-IDF (term frequency-inverse document frequency) value of a term may be used as the weight of the term. The TF-IDF may evaluate the importance of a word to any sample information in the dialog data set. The importance of a word to the reply sample information is proportional to its number of occurrences in the reply sample information and inversely proportional to its number of occurrences in the conversation data set. In the embodiment of the application, if the frequency TF of occurrence of a certain word in one reply sample information is high and the occurrence of the word in other sample information is rare, the word is considered to have good distinguishing capability, so that the TF-IDF value is larger, i.e. the weight is larger.
For any token, for example the i-th token, its TF-IDF value may be determined by the following procedure. First, its term frequency tf_i is determined:

$$tf_i=\frac{n_{ij}}{\sum_k n_{k,j}}$$

where n_{ij} is the number of times the i-th token appears in the j-th reply sample information, and Σ_k n_{k,j} is the total number of occurrences of all tokens in the j-th reply sample information.

Next, its inverse document frequency idf_i is determined:

$$idf_i=\log\frac{|D|}{|\{j:t_i\in d_j\}|}$$

where |D| is the number of reply sample information in the dialogue data set, and |{j : t_i ∈ d_j}| is the number of reply sample information containing the i-th token.

The TF-IDF value of the i-th token can then be expressed as TF-IDF_i = tf_i × idf_i.
For each reply sample information, after the word vector and weight of every token are determined, the reference text vector B is obtained as the weighted average:

$$B=\frac{\sum_{i=1}^{m}F_i\,B_i}{\sum_{i=1}^{m}F_i}$$

where m is the number of tokens contained in the reply sample information, B_i is the word vector of the i-th token, and F_i is the weight of the i-th token, for which the TF-IDF value above may be used.
The process of determining the target text vector may be performed with reference to the above process of determining the reference text vector, and will not be described again.
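The TF-IDF-weighted averaging described above can be sketched as follows. The word vectors are supplied here as a plain dictionary standing in for a Word2Vec model or vector table, and the fallback to a plain mean when every weight is zero (which happens when each token appears in every document) is an assumption of this sketch.

```python
import math
import numpy as np

def tfidf_weight(term, doc, docs):
    """TF-IDF weight of `term` within token list `doc`, over collection `docs`."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)       # documents containing the term
    return tf * math.log(len(docs) / df)

def text_vector(doc, docs, word_vecs):
    """Reference/target text vector: TF-IDF-weighted average of word vectors."""
    weights = np.array([tfidf_weight(t, doc, docs) for t in doc])
    vecs = np.array([word_vecs[t] for t in doc])
    total = weights.sum()
    if total == 0:                               # every token occurs everywhere
        return vecs.mean(axis=0)
    return (weights[:, None] * vecs).sum(axis=0) / total

docs = [["a", "b"], ["a", "c"]]
wv = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
print(text_vector(["a", "b"], docs, wv))
```

Token "a" appears in both documents, so its IDF (and weight) is zero and the resulting vector is dominated by the rarer token "b".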
Step S420: according to the correlation between each reference text vector and the target text vector, the reply sample information whose reference text vectors meet the preset correlation condition is formed into a reply information candidate set.
The correlation between the reference text vector of a reply sample information and the target text vector can characterize the coherence between that reply sample information and the target context information: the higher the correlation, the better the reply sample information follows from the target context information.
For the reference text vector corresponding to each reply sample information in the initial candidate set, the cosine similarity between the reference text vector and the target text vector may be used as their correlation, calculated as:

$$\cos(A,B)=\frac{A\cdot B}{\|A\|\,\|B\|}$$

where A is the target text vector corresponding to the target context information, and B is the reference text vector corresponding to the reply sample information.
The reply sample information whose reference text vectors have correlations meeting the preset correlation condition is then formed into the reply information candidate set.
In one embodiment, the reply sample information whose reference text vector's correlation with the target text vector reaches a set correlation threshold may be formed into the reply information candidate set. In another embodiment, the reference text vectors of the reply sample information in the initial candidate set may be sorted from high to low by correlation with the target text vector, and the reply sample information corresponding to the top K2 reference text vectors formed into the reply information candidate set.
The reply sample information in the reply information candidate set obtained in this way is more consistent with the target context information, and the candidate set contains less noise.
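The relevance filtering of step S420 can be sketched compactly using the cosine similarity defined above; the candidate dictionary and Top-N cutoff below are illustrative stand-ins.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two text vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_candidates(target_vec, candidates, top_n=2):
    """Keep the top-N reply samples whose reference vectors correlate best
    with the target text vector. `candidates` maps reply text -> vector."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(target_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_n]]

cands = {"close": [2.0, 0.1], "mid": [1.0, 1.0], "far": [0.0, 1.0]}
print(filter_candidates([1.0, 0.0], cands, top_n=2))
```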
Optionally, in step S230 above, dividing the reply sample information in the reply information candidate set into semantic class groups may be implemented as follows. A preset number of reference text vectors are selected from among the reference text vectors as initial class group centers, and the reply sample information corresponding to each reference text vector is assigned to the same semantic class group as the initial class group center closest to that vector; the reference text vector represents the semantics of the corresponding reply sample information. The steps of determining the class group center of each semantic class group and assigning the reply sample information corresponding to each reference text vector to the semantic class group of its closest class group center are then repeated until the change in each class group center falls within an expected amplitude, yielding a plurality of semantic class groups divided according to the semantics of the reply sample information.
For example, the K-Means clustering algorithm, an iteratively solved clustering method, may be used to divide the reply sample information in the reply information candidate set into K semantic class groups, where K may be a set value such as 3, 4, or 5. The specific procedure is as follows. K reference text vectors are randomly selected from those corresponding to the reply sample information to serve as initial class group centers; the distance between each reference text vector and each class group center is then calculated, and each reference text vector is assigned to the class group center closest to it. A class group center together with the reference text vectors assigned to it represents one semantic class group. After every reference text vector has been assigned, each semantic class group recalculates its class group center from all the reference text vectors it currently contains. This process repeats until a termination condition is met: for example, the change in each class group center is within an expected amplitude, or the sum of squared distances between the reference text vectors in each semantic class group and its class group center is minimized. The termination criterion, which may also be regarded as the clustering objective, can be expressed as:

$$\min\sum_{i=1}^{K}\sum_{x\in S_i}\|x-\mu_i\|^2$$

where there are K semantic class groups, each denoted S_i; μ_i is the class group center of the semantic class group S_i; and x is the reference text vector of a reply sample information belonging to S_i.
After the plurality of semantic class groups divided according to the semantics of each reply sample information is obtained by the above method, at least one reply sample information can be selected from each semantic class group to form the reply information set. The resulting set contains reply sample information from different semantic facets, making the reply information output for the target context information richer. Specifically, the one reply sample information with the highest correlation to the target context information can be selected from each semantic class group, and all selected pieces formed into the reply information set; the pieces in a set obtained this way differ in semantics, and each is highly consistent with the target context information.
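The class-group division of step S230 can be sketched with a hand-rolled K-Means loop mirroring the procedure above: random initial centers, nearest-center assignment, and center recomputation until the centers stop moving. The vectors below are synthetic stand-ins for reference text vectors.

```python
import numpy as np

def kmeans_groups(vectors, k=2, iters=20, seed=0):
    """Minimal K-Means over reference text vectors: returns a group label
    per vector. Follows the iterative procedure described in the text."""
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initial class group centers
    for _ in range(iters):
        # Assign each vector to its nearest class group center.
        labels = np.array([np.argmin(np.linalg.norm(centers - x, axis=1)) for x in X])
        # Recompute each center from the vectors currently assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):   # centers stopped moving
            break
        centers = new_centers
    return labels

vecs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(kmeans_groups(vecs, k=2))
```

The two tight pairs of vectors end up in two different semantic class groups, after which one representative per group would be selected for the reply information set.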
The reply information determining method provided in the present application can be applied in products such as chat robots, intelligent assistants, and social applications. For example, in some embodiments, after the reply information set is obtained, at least one reply sample information in it may be output as feedback information for the target context information.
For example, in a social application, user A sends the dialogue context "hello, are you there?" to user B via a first terminal device. The server receives the dialogue context sent by the first terminal device and forwards it to the second terminal device used by user B. Meanwhile, the server takes the received dialogue context as the target context information and obtains the corresponding reply information set through the reply information determining method above. For example, the reply information set may include the reply sample information "I'm here, tell me what you need", "Wait a moment, I'll get back to you", and "Yes, happy to help you". The server sends all the reply sample information in the set to the second terminal device as feedback information for the target context information. As shown in fig. 5, when user B clicks a chat dialog box of the social application on the second terminal device, the device may display all the feedback information, and user B can select one piece from it to reply quickly to user A. In other embodiments, if the reply information set contains many pieces, the server may instead select a set number of them and send those as the feedback information to the second terminal device. In this application scenario, the reply information determining method provided in the embodiments of the present application can help a user reply quickly to other users' chat messages without manual input.
In other embodiments, for example in a chat robot application, the terminal device receives a voice signal input by the user, generates voice data, and sends it to the server. The server converts the received voice data into text data as the target context information, obtains the corresponding reply information set through the reply information determining method above, randomly selects one reply sample information from the set, converts it into reply voice data, and sends the reply voice data to the terminal device for playback. Alternatively, the server sends all the reply sample information in the set to the terminal device as feedback information for the target context information, and the terminal device randomly selects one piece, converts it into reply voice data, and plays it.
In still other embodiments, the reply information determining method may be performed by the terminal device itself. For example, assuming the terminal device is a chat robot, the chat robot receives the user's voice data and converts it into text data as the target context information. Through the reply information determining method, the chat robot obtains the corresponding reply information set, randomly selects one reply sample information from it, converts it into reply voice data, and plays it. For instance, when chatting with a user it accompanies, the chat robot might play the reply "Yes, I'm always here" the first time it receives the voice data "are you there?", and "Of course I am, I was getting bored" the second time. Thus the user can receive different replies to the same utterance, which adds freshness, makes the chat more engaging, and improves the user experience.
For easier understanding of the reply information determining method provided in the embodiments of the present application, fig. 6 shows a specific implementation of it. As shown in fig. 6, the method includes:
in step S610, target context information is received.
In step S620, similar context sample information of the target context information is determined in the dialogue data set.
In step S630, an initial candidate set is generated from the reply sample information corresponding to the similar context sample information.
Illustratively, as shown in fig. 7, the server receives the target context information "smoking is harmful to your physical health" sent by the terminal device, searches similar context sample information of the target context information in the dialogue data set through the BM25 algorithm, and adds reply sample information corresponding to the searched similar context sample information to the initial candidate set, and the resulting initial candidate set is shown in (a) in fig. 7.
In step S640, the target text vector of the target context information and the reference text vector of each reply sample information in the initial candidate set are determined respectively.
In step S650, according to the correlation between each reference text vector and the target text vector, the reply sample information corresponding to the Top-N reference text vectors is selected to form a reply information candidate set.
The cosine similarity may be used to determine the correlation between each reference text vector and the target text vector, and the reply sample information corresponding to the N reference text vectors with the highest correlation to the target text vector is selected to form the reply information candidate set. As shown in fig. 7, after the reply sample information in the initial candidate set (a) is screened by comparing correlations, pieces with low relevance to the target context information, such as "I am maintaining the building" and "I am too critical", are removed. The resulting reply information candidate set is shown in (b) of fig. 7; the correlation between the reference text vector of each remaining reply sample information and the target text vector is above 55%, with the reply sample information "I am thinking about how to stop smoking" having the highest correlation, 77.7%.
The correlation between a reference text vector and the target text vector reflects the coherence between the reply sample information and the target context information, so the reply sample information in the reply information candidate set obtained through this step follows well from the target context information.
Step S660, dividing the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the semantics of each reply sample information.
Step S670, selecting at least one reply sample information from each semantic class group to form a reply information set.
As shown in fig. 7, the K-Means algorithm is used to group the reply sample information in the reply information candidate set (b), yielding four semantic class groups numbered "0", "1", "2", and "3", as shown in (c) of fig. 7. The final grouping result is also shown in table 1 below: the reply sample information "what do you think can be done to stop smoking" and "smoking should be prohibited" belong to the semantic class group numbered "3", indicating that these two pieces express similar semantics and belong to the same semantic facet, whereas "what do you think can be done to stop smoking" and "junk food is harmful to health" belong to different semantic class groups, their semantics being different.

One reply sample information can be selected from each semantic class group to form the reply information set; the finally obtained set comprises four pieces of reply sample information, each representing one semantic facet.
Fig. 7 shows the processing of Chinese information using the reply information determining method provided in the embodiments of the present application. The method is equally applicable to processing English information, with the same specific procedure as for Chinese. Table 1 is a comparison of the Chinese and English information provided in the embodiments of the present application.
TABLE 1
If the target context information "Smoking is harmful for your health" in table 1 is processed, an initial candidate set corresponding to it is determined in the dialogue data set, including reply sample information such as "Sure" and "But if the tea is good and well prepared, it will do much good" shown in table 1. By comparing the correlation of each reply sample information with the target context information, part of the reply sample information is screened from the initial candidate set to form the reply information candidate set, which includes sample information such as "What in your opinion can be done to stop smoking" and "Sure". Grouping the reply sample information in the reply information candidate set gives the result shown in table 1, comprising four semantic class groups: Cluster 0, Cluster 1, Cluster 2, and Cluster 3. One reply sample information is selected from each semantic class group to form the reply information set, which finally comprises four pieces of reply sample information.
Specifically, in some embodiments, the reply information determining method may be implemented by using the following program code:
Wherein, the reply information set is initialized as an empty set. D is the dialogue data set, c is a piece of context sample information in the dialogue data set, and x is the reply sample information corresponding to that context sample information in the dialogue data set. P is the similar context sample information of the target context information, R is the initial candidate set corresponding to the target context information, and the screened subset of R is the reply information candidate set corresponding to the target context information. s is the target text vector corresponding to the target context information, and r is the reference text vector corresponding to a piece of reply sample information. K clusters represents the K semantic class groups obtained by grouping the reply information candidate set. r_j represents the reference text vector with the highest correlation with the target text vector within a semantic class group, and the final output is the set obtained by selecting, from each semantic class group, the piece of reply sample information with the highest correlation with the target context information.
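As an illustrative, non-authoritative sketch of this flow, the screening, grouping, and selection steps can be written in Python; every function name, threshold, and toy vector below is our own rather than taken from the application, and the reference text vectors are assumed to be precomputed:

```python
import math
import random


def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def build_reply_set(target_vec, candidates, k, min_sim=0.0, iters=20, seed=0):
    """candidates: list of (reply_text, reference_vector) pairs.

    1. Screen candidates by correlation with the target text vector.
    2. Group the survivors into k semantic class groups (k-means style).
    3. Pick the reply most correlated with the target from each group.
    """
    pool = [(t, v) for t, v in candidates if cosine(target_vec, v) >= min_sim]
    rng = random.Random(seed)
    # Initial class-group centers: a preset number (k) of reference vectors.
    centers = [v for _, v in rng.sample(pool, min(k, len(pool)))]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for t, v in pool:
            # Assign each reply to the class group with the closest center.
            best = max(range(len(centers)), key=lambda i: cosine(centers[i], v))
            groups[best].append((t, v))
        # Recompute each center as the mean of its group's reference vectors.
        centers = [
            [sum(col) / len(g) for col in zip(*[v for _, v in g])] if g else c
            for g, c in zip(groups, centers)
        ]
    # One reply per non-empty group: the one most correlated with the target.
    return [max(g, key=lambda tv: cosine(target_vec, tv[1]))[0]
            for g in groups if g]
```

With four candidate replies falling into two semantic groups and k = 2, the sketch returns one semantically distinct reply per group.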
Based on the same inventive concept, the embodiment of the present application also provides a reply information determining device, which can be arranged in a server or a terminal device. Because the device corresponds to the reply information determining method in the embodiment of the present application, and the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
Fig. 8 is a schematic structural diagram of a reply information determining device provided in an embodiment of the present application. As shown in fig. 8, the reply information determining device includes an information retrieval unit 81 and an information determination unit 82; wherein,
an information retrieval unit 81 for obtaining a reply information candidate set corresponding to the target context information;
an information determining unit 82, configured to respectively determine reference text vectors of the reply sample information in the reply information candidate set, where the reference text vectors are used to characterize the semantics of the corresponding reply sample information; divide the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the obtained distances between the reference text vectors; and select at least one piece of reply sample information from each semantic class group to form a reply information set.
In an alternative embodiment, the information retrieval unit 81 may be specifically configured to:
determining similar context sample information of the target context information in the dialogue data set, wherein the dialogue data set comprises a plurality of pieces of context sample information and reply sample information corresponding to each piece of context sample information;
and obtaining a reply information candidate set based on the reply sample information corresponding to each piece of similar context sample information.
In an alternative embodiment, the information retrieval unit 81 may be specifically configured to: perform word segmentation processing on the target context information and remove stop words to obtain target words in the target context information; for the context sample information in the dialogue data set, determine the similarity between the target context information and each piece of context sample information according to the word frequency of each target word in that context sample information; and take the context sample information whose similarity meets the preset similarity condition as the similar context sample information of the target context information.
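A minimal sketch of this word-frequency retrieval step, under stated assumptions: the whitespace tokenizer and the tiny stop-word list are stand-ins of our own (a real word segmenter would be required for Chinese), and the threshold `min_score` plays the role of the preset similarity condition:

```python
from collections import Counter

# Illustrative stop-word list; a real system would use a full one.
STOP_WORDS = {"is", "for", "your", "the", "a", "to"}


def target_words(text):
    # Word segmentation (whitespace here) followed by stop-word removal.
    return [w for w in text.lower().split() if w not in STOP_WORDS]


def similarity(target, context_sample):
    # Score a context sample by the frequency of the target words it contains.
    counts = Counter(context_sample.lower().split())
    return sum(counts[w] for w in target_words(target))


def similar_contexts(target, samples, min_score=1):
    # Keep context samples whose similarity meets the preset condition.
    return [s for s in samples if similarity(target, s) >= min_score]
```

For the target "Smoking is harmful for your health", samples sharing the content words "smoking", "harmful", or "health" survive the screen while unrelated ones do not.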
In an alternative embodiment, the information retrieval unit 81 may be specifically configured to: respectively determine the target text vector of the target context information and the reference text vector of each piece of reply sample information; and, according to the correlation between each reference text vector and the target text vector, form a reply information candidate set from the reply sample information corresponding to reference text vectors whose correlation meets the preset correlation condition.
In an alternative embodiment, the information retrieval unit is specifically configured to:
determining cosine similarity between the target text vector and a reference text vector;
and taking the cosine similarity between the target text vector and the reference text vector as the correlation between the target text vector and the reference text vector.
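The cosine-similarity correlation can be written directly from its definition; this sketch assumes plain Python lists as vectors (the function name is ours):

```python
import math


def cosine_similarity(s, r):
    # Correlation between the target text vector s and a reference text
    # vector r: dot product divided by the product of the vector norms.
    dot = sum(a * b for a, b in zip(s, r))
    norm = math.sqrt(sum(a * a for a in s)) * math.sqrt(sum(b * b for b in r))
    return dot / norm if norm else 0.0
```

Identical directions score 1, orthogonal directions score 0, so a higher value indicates a stronger semantic correlation with the target.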
In an alternative embodiment, the information determining unit 82 may be specifically configured to: select a preset number of reference text vectors from the reference text vectors as initial class group centers, and divide the reply sample information corresponding to each reference text vector into the same semantic class group as the initial class group center closest to that reference text vector, where the reference text vectors are used to characterize the semantics of the corresponding reply sample information; and repeatedly perform the steps of determining the class group center of each semantic class group and dividing the reply sample information corresponding to each reference text vector into the same semantic class group as its closest class group center, until the change amplitude of each class group center is within an expected amplitude, thereby obtaining a plurality of semantic class groups divided according to the semantics of the reply sample information.
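The iterative grouping described here is essentially k-means with a center-shift stopping rule. A sketch under that assumption, with names of our own: initial centers are picked at random from the reference vectors, and `tol` plays the role of the expected change amplitude:

```python
import math
import random


def kmeans_until_stable(vectors, k, tol=1e-4, seed=0):
    # Cluster reference text vectors into k semantic class groups; stop when
    # every center moves less than tol between iterations (the "change
    # amplitude within the expected amplitude" criterion).
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    while True:
        groups = [[] for _ in range(k)]
        for v in vectors:
            # Assign each vector to its nearest class-group center.
            i = min(range(k), key=lambda j: math.dist(v, centers[j]))
            groups[i].append(v)
        # Recompute each center as the mean of its group (keep it if empty).
        new_centers = [
            [sum(c) / len(g) for c in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
        shift = max(math.dist(a, b) for a, b in zip(centers, new_centers))
        centers = new_centers
        if shift < tol:
            return groups
```

Standard k-means monotonically reduces its objective, so the center shift eventually falls below `tol` and the loop terminates with the final grouping.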
In an alternative embodiment, the information determining unit 82 may be specifically configured to:
for each piece of reply sample information, obtain the word vector of each segmented word contained in the reply sample information;
and perform a weighted average on the word vectors of the segmented words contained in the reply sample information to obtain the reference text vector of that reply sample information.
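The weighted-average construction of a reference text vector can be sketched as follows; the per-word weights are left to the caller, since the application does not specify them here (uniform or IDF-style weights are both plausible choices), and the function name is our own:

```python
def reference_text_vector(word_vecs, weights):
    # word_vecs: one word vector per segmented word in the reply;
    # weights: one weight per word. Returns the weighted average vector,
    # which serves as the reference text vector of the reply.
    total = sum(weights)
    dim = len(word_vecs[0])
    return [
        sum(w * v[i] for v, w in zip(word_vecs, weights)) / total
        for i in range(dim)
    ]
```

With uniform weights this reduces to the plain mean of the word vectors; raising a word's weight pulls the reference vector toward that word's direction.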
In an alternative embodiment, the information determining unit 82 may be specifically configured to: select, from each semantic class group, the one piece of reply sample information with the highest correlation with the target context information, and form the reply information set from all the obtained reply sample information.
In an alternative embodiment, as shown in fig. 9, the reply information determining apparatus may further include an information feedback unit 91.
An information feedback unit 91, configured to take at least one reply sample information in the reply information set as feedback information for the target context information, and output the feedback information.
In an alternative embodiment, the information retrieval unit may be further configured to: receive voice data and convert the voice data into the target context information; the information feedback unit 91 may be further configured to: convert one piece of reply sample information in the reply information set into reply voice data and play the reply voice data.
According to the above reply information determining device, the reply sample information in the reply information candidate set corresponding to the target context information is divided into a plurality of semantic class groups according to the semantics of each piece of reply sample information, and a reply information set is formed by selecting at least one piece of reply sample information from each semantic class group, so as to determine the reply information. Because the plurality of semantic class groups are obtained by division according to the semantics of the reply sample information, determining the reply information by selecting reply sample information from each semantic class group yields multiple pieces of reply information with different semantics, so that the reply information output for the same context information is richer and the conversation reply quality is effectively improved.
Corresponding to the method embodiment, the embodiment of the present application also provides an electronic device. The electronic device may be a server, such as the server 12 shown in fig. 1, or a terminal device such as a mobile terminal, a chat robot, or a computer, and includes at least a memory for storing data and a processor for data processing. The processor for data processing may be implemented by a microprocessor, a CPU, a GPU (Graphics Processing Unit), a DSP, or an FPGA. The memory stores operation instructions, which may be computer-executable code, through which each step in the flow of the reply information determining method in the embodiment of the present application is implemented.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application; as shown in fig. 10, the electronic device 100 in the embodiment of the present application includes: a processor 101, a display 102, a memory 103, an input device 106, a bus 105, and a communication module 104; the processor 101, memory 103, input device 106, display 102, and communication module 104 are all coupled via a bus 105, and the bus 105 is used to transfer data between the processor 101, memory 103, display 102, communication module 104, and input device 106.
The memory 103 may be used to store software programs and modules, such as the program instructions/modules corresponding to the reply information determining method in the embodiment of the present application; the processor 101 executes the software programs and modules stored in the memory 103, thereby performing various functional applications and data processing of the electronic device 100, such as the reply information determining method provided in the embodiment of the present application. The memory 103 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application program, and the like; the data storage area may store data created according to the use of the electronic device 100, such as a dialogue data set. In addition, the memory 103 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 101 is a control center of the electronic device 100, connects various parts of the entire electronic device 100 using the bus 105 and various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing software programs and/or modules stored in the memory 103, and invoking data stored in the memory 103. Alternatively, the processor 101 may include one or more processing units, such as a CPU, GPU, digital processing unit, or the like.
Processor 101 may present reply information corresponding to the target context information to the user via display 102.
The processor 101 may also be connected to a network through the communication module 104 to obtain target context information, etc.
The input device 106 is mainly used to obtain input operations by a user, and the input device 106 may be different when the electronic devices are different. For example, when the electronic device is a computer, the input device 106 may be an input device such as a mouse, keyboard, etc.; when the electronic device is a portable device such as a smart phone or a tablet computer, the input device 106 may be a touch screen. When the electronic device is a chat robot, the input device 106 may be a key, microphone, or the like.
The embodiment of the application also provides a computer storage medium, and the computer storage medium stores computer executable instructions for implementing the reply information determining method described in any embodiment of the application.
In some possible embodiments, aspects of the reply information determining method provided herein may also be implemented in the form of a program product including program code; when the program product runs on a computer device, the program code causes the computer device to perform the steps of the reply information determining method according to the various exemplary embodiments of the present application described above. For example, the computer device may perform the flow of the reply information determining method shown in fig. 2, steps S201 to S203.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.
Claims (11)
1. A reply information determining method, characterized by comprising:
determining the similarity between the target context information and corresponding context sample information according to the word frequency of each target word in the target context information in each piece of context sample information included in a dialogue data set, and taking the context sample information whose similarity meets a preset similarity condition as similar context sample information of the target context information;
respectively determining a target text vector of the target context information and reference text vectors of the reply sample information corresponding to the similar context sample information;
according to the correlation between each reference text vector and the target text vector, forming a reply information candidate set from the reply sample information corresponding to reference text vectors whose correlation meets a preset correlation condition;
respectively determining reference text vectors of the reply sample information in the reply information candidate set, wherein the reference text vectors are used for representing the semantics of the corresponding reply sample information;
dividing the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the obtained distances between the reference text vectors;
and selecting at least one piece of reply sample information from each semantic class group to form a reply information set.
2. The method of claim 1, wherein dividing the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the obtained distances between the respective reference text vectors, comprises:
selecting a preset number of reference text vectors from the reference text vectors as initial class group centers, and dividing the reply sample information corresponding to each reference text vector into the same semantic class group as the initial class group center closest to that reference text vector;
and repeatedly performing the steps of determining the class group center of each semantic class group and dividing the reply sample information corresponding to each reference text vector into the same semantic class group as its closest class group center, until the change amplitude of each class group center is within an expected amplitude, thereby obtaining a plurality of semantic class groups divided according to the semantics of the reply sample information.
3. The method of claim 1, wherein the separately determining reference text vectors for each reply sample information in the reply information candidate set comprises:
for each piece of reply sample information, obtaining the word vector of each segmented word contained in the reply sample information;
and performing a weighted average on the word vectors of the segmented words contained in the reply sample information to obtain the reference text vector of the reply sample information.
4. The method of claim 1, wherein the determining the similarity between the target context information and the corresponding context sample information according to the word frequency of each target word in each piece of context sample information included in the dialogue data set comprises:
performing word segmentation processing on the target context information and removing stop words to obtain target words in the target context information;
and for the context sample information in the dialogue data set, determining the similarity between the target context information and each piece of context sample information according to the word frequency of each target word in that context sample information.
5. The method of claim 1, wherein the relevance of the target text vector to the reference text vector is determined by:
determining cosine similarity between the target text vector and a reference text vector;
and taking the cosine similarity between the target text vector and the reference text vector as the correlation between the target text vector and the reference text vector.
6. The method of claim 1, wherein the selecting at least one piece of reply sample information from each semantic class group to form a reply information set comprises:
selecting, from each semantic class group, one piece of reply sample information with the highest correlation with the target context information, and forming the reply information set from all the obtained reply sample information.
7. The method according to any one of claims 1-6, wherein after selecting at least one reply sample information from each semantic class group to compose a reply information set, the method further comprises:
taking at least one piece of reply sample information in the reply information set as feedback information for the target context information, and outputting the feedback information.
8. The method according to any one of claims 1-6, wherein before obtaining the reply message candidate set corresponding to the target context information, the method further comprises:
receiving voice data;
converting the voice data into the target context information;
after selecting at least one reply sample information from each semantic class group to compose a reply information set, the method further comprises:
and converting one reply sample information in the reply information set into reply voice data, and playing the reply voice data.
9. A reply information determining apparatus, characterized by comprising:
an information retrieval unit, configured to: determine the similarity between the target context information and corresponding context sample information according to the word frequency of each target word in the target context information in each piece of context sample information included in a dialogue data set, and take the context sample information whose similarity meets a preset similarity condition as similar context sample information of the target context information;
respectively determine a target text vector of the target context information and reference text vectors of the reply sample information corresponding to the similar context sample information;
and, according to the correlation between each reference text vector and the target text vector, form a reply information candidate set from the reply sample information corresponding to reference text vectors whose correlation meets the preset correlation condition;
an information determining unit, configured to respectively determine reference text vectors of the reply sample information in the reply information candidate set, wherein the reference text vectors are used to characterize the semantics of the corresponding reply sample information; divide the reply sample information in the reply information candidate set into a plurality of semantic class groups according to the obtained distances between the reference text vectors; and select at least one piece of reply sample information from each semantic class group to form a reply information set.
10. A computer-readable storage medium having a computer program stored therein, characterized in that: the computer program, when executed by a processor, implements the method of any of claims 1-8.
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, the computer program, when executed by the processor, implementing the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010443989.9A CN111694941B (en) | 2020-05-22 | 2020-05-22 | Reply information determining method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111694941A CN111694941A (en) | 2020-09-22 |
CN111694941B true CN111694941B (en) | 2024-01-05 |
Family
ID=72477414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010443989.9A Active CN111694941B (en) | 2020-05-22 | 2020-05-22 | Reply information determining method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111694941B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114691815A (en) * | 2020-12-25 | 2022-07-01 | 科沃斯商用机器人有限公司 | Model training method and device, electronic equipment and storage medium |
CN112597292B (en) * | 2020-12-29 | 2024-04-26 | 招联消费金融股份有限公司 | Question reply recommendation method, device, computer equipment and storage medium |
CN113822034B (en) * | 2021-06-07 | 2024-04-19 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for replying text |
CN116226356B (en) * | 2023-05-08 | 2023-07-04 | 深圳市拓保软件有限公司 | NLP-based intelligent customer service interaction method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446287A (en) * | 2016-11-08 | 2017-02-22 | 北京邮电大学 | Answer aggregation method and system facing crowdsourcing scene question-answering system |
CN109783704A (en) * | 2019-01-03 | 2019-05-21 | 中国科学院自动化研究所 | Man-machine mixed answer method, system, device |
CN109947909A (en) * | 2018-06-19 | 2019-06-28 | 平安科技(深圳)有限公司 | Intelligent customer service answer method, equipment, storage medium and device |
CN110555094A (en) * | 2018-03-30 | 2019-12-10 | 北京金山安全软件有限公司 | information recommendation method and device, electronic equipment and storage medium |
CN110750629A (en) * | 2019-09-18 | 2020-02-04 | 平安科技(深圳)有限公司 | Robot dialogue generation method and device, readable storage medium and robot |
CN110795542A (en) * | 2019-08-28 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Dialogue method and related device and equipment |
CN111104799A (en) * | 2019-10-16 | 2020-05-05 | 中国平安人寿保险股份有限公司 | Text information representation method and system, computer equipment and storage medium |
CN111125335A (en) * | 2019-12-27 | 2020-05-08 | 北京百度网讯科技有限公司 | Question and answer processing method and device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9171542B2 (en) * | 2013-03-11 | 2015-10-27 | Nuance Communications, Inc. | Anaphora resolution using linguisitic cues, dialogue context, and general knowledge |
WO2016167424A1 (en) * | 2015-04-16 | 2016-10-20 | 주식회사 플런티코리아 | Answer recommendation device, and automatic sentence completion system and method |
- 2020-05-22 CN CN202010443989.9A patent/CN111694941B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111694941A (en) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI732271B (en) | Human-machine dialog method, device, electronic apparatus and computer readable medium | |
CN111694941B (en) | Reply information determining method and device, storage medium and electronic equipment | |
CN112131366B (en) | Method, device and storage medium for training text classification model and text classification | |
JP5825676B2 (en) | Non-factoid question answering system and computer program | |
CN110377916B (en) | Word prediction method, word prediction device, computer equipment and storage medium | |
KR20190019962A (en) | Architectures and processes for computer learning and understanding | |
JP2019504413A (en) | System and method for proposing emoji | |
CN111291549B (en) | Text processing method and device, storage medium and electronic equipment | |
EP2867802A1 (en) | Learning-based processing of natural language questions | |
US20180309874A1 (en) | System and method for a cognitive system plug-in answering subject matter expert questions | |
US20210374168A1 (en) | Semantic cluster formation in deep learning intelligent assistants | |
CN110795542A (en) | Dialogue method and related device and equipment | |
JP6076425B1 (en) | Interactive interface | |
US20230306205A1 (en) | System and method for personalized conversational agents travelling through space and time | |
CN111368555B (en) | Data identification method and device, storage medium and electronic equipment | |
JP6366749B2 (en) | Interactive interface | |
CN111026840A (en) | Text processing method, device, server and storage medium | |
CN113392640B (en) | Title determination method, device, equipment and storage medium | |
CN110069613A (en) | A kind of reply acquisition methods and device | |
JP6034459B1 (en) | Interactive interface | |
WO2024007119A1 (en) | Training method for text processing model, and text processing method and device | |
Otani et al. | Large-scale acquisition of commonsense knowledge via a quiz game on a dialogue system | |
JP6080137B1 (en) | Interactive interface | |
Bandyopadhyay et al. | Identification of depression strength for users of online platforms: a comparison of text retrieval approaches | |
Lee | N-Gram Language Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40028111; Country of ref document: HK |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |