CN112527998A - Reply recommendation method, reply recommendation device and intelligent device - Google Patents


Info

Publication number
CN112527998A
Authority
CN
China
Prior art keywords
target
user
history
statement
answer
Prior art date
Legal status
Pending
Application number
CN202011528963.0A
Other languages
Chinese (zh)
Inventor
罗沛鹏
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202011528963.0A priority Critical patent/CN112527998A/en
Publication of CN112527998A publication Critical patent/CN112527998A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition

Abstract

The application discloses a reply recommendation method, a reply recommendation apparatus, an intelligent device, and a computer-readable storage medium. The reply recommendation method includes: in a multi-turn conversation process, if a user sentence input in the current round is received, extracting entity words and relation information of the user sentence; inputting a history spliced sentence, the user sentence, and the entity words into a trained natural language processing model to obtain at least one candidate reply output by the natural language processing model, where the history spliced sentence is generated based on each historical user sentence in the multi-turn conversation process and the historical reply corresponding to each historical user sentence; screening a target reply from the at least one candidate reply according to the entity words, the relation information, and a preset knowledge graph; and outputting the target reply. Through the scheme of the application, meaningful replies can be recommended to the user on the basis of fully understanding the historical conversation.

Description

Reply recommendation method, reply recommendation device and intelligent device
Technical Field
The present application belongs to the technical field of artificial intelligence, and in particular, relates to a reply recommendation method, a reply recommendation apparatus, an intelligent device, and a computer-readable storage medium.
Background
With the rapid development of artificial intelligence, intelligent devices are now able to interact with users. However, smart devices still perform poorly in man-machine interaction; in particular, when chatting with a user, a smart device has difficulty actively associating information from the historical conversation, and this lack of understanding of the historical conversation makes its replies stiff and may even lead it to answer beside the question.
Disclosure of Invention
The application provides a reply recommendation method, a reply recommendation device, an intelligent device and a computer readable storage medium, which can recommend meaningful replies to a user on the basis of fully understanding historical conversations.
In a first aspect, the present application provides a response recommendation method, including:
in the multi-turn conversation process, if a user sentence input in the current round is received, extracting entity words and relation information of the user sentence;
inputting a history spliced statement, the user statement and the entity word into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model, wherein the history spliced statement is generated based on each history user statement and a history answer corresponding to each history user statement in the multi-turn conversation process;
screening a target answer from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph;
and outputting the target reply.
In a second aspect, the present application provides a response recommendation apparatus, including:
the extraction unit is used for extracting the entity words and the relation information of the user sentences if the user sentences input in the current round are received in the multi-round conversation process;
an obtaining unit, configured to input a history concatenation statement, the user statement, and the entity word into a trained natural language processing model to obtain at least one candidate response output by the natural language processing model, where the history concatenation statement is generated based on each history user statement and a history response corresponding to each history user statement in the multi-round conversation process;
the screening unit is used for screening a target answer from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph;
and the output unit is used for outputting the target reply.
In a third aspect, the present application provides a smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
Compared with the prior art, the application has the beneficial effects that: in the multi-turn conversation process, when a user sentence input in the current turn is received, entity words and relation information of the user sentence are extracted, then a history spliced sentence, the user sentence and the entity words are input into a trained natural language processing model, at least one candidate answer output by the natural language processing model is obtained, wherein the history spliced sentence is generated based on each history user sentence in the multi-turn conversation process and the history answer corresponding to each history user sentence, then a target answer is screened from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph, and finally the target answer is output. The process takes both historical conversation and current user sentences into consideration during multiple rounds of conversation, helps the intelligent device to obtain better candidate answers on the basis of understanding the historical conversation, and screens the candidate answers through a knowledge graph to obtain more meaningful target answers. It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a schematic diagram illustrating an implementation flow of a reply recommendation method provided in an embodiment of the present application;
FIG. 2-1 is an exemplary diagram of a current concatenation statement obtained after receiving a first round of user statements provided by an embodiment of the present application;
FIG. 2-2 is an exemplary diagram of a history concatenation statement obtained after the first round of conversation ends, provided by an embodiment of the present application;
FIG. 2-3 is an exemplary diagram of a current concatenation statement obtained after receiving the second round's user statement, provided by an embodiment of the present application;
FIG. 2-4 is an exemplary diagram of a history concatenation statement obtained after the second round of conversation ends, provided by an embodiment of the present application;
FIG. 2-5 is an exemplary diagram of an ended multi-turn conversation, provided by an embodiment of the present application;
FIG. 2-6 is an exemplary diagram of the input data of the GPT2 model when the second word is generated in the process of generating a candidate reply by the GPT2 model, provided by an embodiment of the present application;
FIG. 2-7 is an exemplary diagram of the input data of the GPT2 model when the third word is generated in the process of generating a candidate reply by the GPT2 model, provided by an embodiment of the present application;
FIG. 2-8 is an exemplary diagram of the input data of the GPT2 model when the fourth word is generated in the process of generating a candidate reply by the GPT2 model, provided by an embodiment of the present application;
FIG. 2-9 is an exemplary diagram of a history concatenation statement provided by an embodiment of the present application;
FIG. 3 is an exemplary graph of the results predicted by the GPT2 model provided by embodiments of the present application;
FIG. 4 is a schematic diagram illustrating a training process of a natural language processing model according to an embodiment of the present application;
FIG. 5-1 is an exemplary diagram of input data of a natural language processing model based on training samples provided by an embodiment of the present application;
FIG. 5-2 is an exemplary diagram of input data of a natural language processing model based on a new sample provided by an embodiment of the present application;
fig. 6 is a block diagram of a structure of a reply recommendation device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an intelligent device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution proposed in the present application, the following description will be given by way of specific examples.
A reply recommendation method provided in an embodiment of the present application is described below. Referring to fig. 1, the reply recommendation method includes:
step 101, in a multi-turn conversation process, if a user sentence input in the turn is received, extracting entity words and relationship information of the user sentence.
In this embodiment of the present application, in the process of conducting multiple rounds of conversation with a user, once the currently input user sentence (the newly input user sentence, that is, the user sentence input in the current round) is received, the key information contained in that user sentence can be extracted, where the key information specifically refers to the entity words and relation information contained in the user sentence. The smart device generally assumes that the current round's reply to the user sentence should be related to this key information. Illustratively, a sequence labeling model such as BiLSTM + CRF can be used to extract the entity words and relation information in a user sentence; the extraction manner is not limited here.
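As an illustration of the key-information extraction described above, the following sketch replaces the BiLSTM + CRF sequence labeling model with a toy dictionary lookup; the vocabularies, example names, and function name are hypothetical, not part of the patent:

```python
# Toy stand-in for the BiLSTM + CRF extractor: look up known entity words and
# relation words in the user sentence. A real system would use a trained
# sequence labeling model instead of these hard-coded vocabularies.
KNOWN_ENTITIES = {"Liu Shan", "Beethoven"}    # assumed entity vocabulary
KNOWN_RELATIONS = {"father", "mother"}        # assumed relation vocabulary

def extract_key_info(user_sentence):
    """Return (entity_word, relation_info); either may be None (i.e. null)."""
    entity = next((e for e in KNOWN_ENTITIES if e in user_sentence), None)
    relation = next((r for r in KNOWN_RELATIONS if r in user_sentence), None)
    return entity, relation
```

A sentence with no matching entity simply yields a null entity word, mirroring the handling described below.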
For a user sentence from which no entity word can be extracted, the entity word of that user sentence may simply be set to null. For an entity word whose relation information cannot be extracted, it may be directly determined that no target word can be found in the knowledge graph for that user sentence.
And 102, inputting the history spliced sentences, the user sentences and the entity words into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model.
In the embodiment of the present application, the multi-turn conversation process generally involves a historical conversation, that is, historical user sentences (user sentences input in previous rounds) and the historical replies output by the smart device for each historical user sentence (replies output in previous rounds). It should be noted that, if the current round is the first round of the multi-turn conversation process, the history spliced sentence may be considered empty.
In some embodiments, the smart device may first splice the history spliced sentence, the user sentence, and the entity words, and input the result obtained after splicing (denoted as the current spliced sentence) into the trained Natural Language Processing (NLP) model for processing, so as to obtain at least one candidate reply output by the natural language processing model for the current splicing result.
In some embodiments, the history spliced sentence is generated based on each historical user sentence and the historical reply corresponding to each historical user sentence in the multi-turn conversation process, specifically: when the user sentence input in the next round is received, the history spliced sentence is updated based on the current spliced sentence and the target reply finally determined for that current spliced sentence. That is, when the user sentence input in the (i + 1)-th round is received, the history spliced sentence of the (i + 1)-th round is obtained based on the current spliced sentence of the i-th round and the target reply finally output in the i-th round.
Illustratively, when the history spliced sentence, the user sentence, and the entity words are spliced, the splicing order adopted is: history spliced sentence, entity words, user sentence, where different identifiers are used during splicing to mark the type of each spliced part and to separate them. Assume that in a multi-turn conversation the user sentence input in the first round is "Who is Liu Shan's father?"; since the current round is the first round, the history spliced sentence is empty and the entity word is "Liu Shan", so the current spliced sentence of this round is as shown in FIG. 2-1. Assume the target reply output by the smart device in the first round for the current spliced sentence is "You guess"; after the first round ends, the history spliced sentence can be updated based on the current spliced sentence of the first round and the target reply of the first round, as shown in FIG. 2-2. Assume the user sentence input in the second round is "I can't guess", and the entity word of this round's user sentence is null, so the current spliced sentence obtained by splicing in this round is as shown in FIG. 2-3. Assume the target reply output by the smart device in the second round for the current spliced sentence is "So silly"; after the second round ends, the history spliced sentence can again be updated based on the current spliced sentence of the second round and the target reply of the second round, as shown in FIG. 2-4. This continues by analogy until the multi-turn conversation ends, as shown in FIG. 2-5.
In FIG. 2-1 through FIG. 2-5, the identifier "B" indicates the start of a new multi-turn conversation; the identifier "S" indicates the start of the entity identified in a round's user sentence, which may be empty; the identifier "Q" indicates the start of a round's user sentence; the identifier "A" indicates the start and end of the reply obtained for a round's user sentence (i.e., the portion between the 2 "A" identifiers after that round's user sentence is the reply obtained for it); and the identifier "E" indicates the end of the multi-turn conversation. It should be noted that the identifiers in FIG. 2-1 to FIG. 2-5 are merely examples; in practical applications, the identifiers are not limited to these letters.
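The splicing scheme of FIG. 2-1 to FIG. 2-5 can be sketched as follows. The function names are hypothetical, and the single-letter identifiers are the example markers from the figures; a real system may use other marker tokens:

```python
def build_current_splice(history_splice, entity_word, user_sentence):
    """Build the current spliced sentence: the history (or "B" when a new
    conversation begins), then "S" + entity word (possibly empty), then
    "Q" + the user sentence of this round."""
    prefix = history_splice if history_splice else "B"
    return prefix + "S" + (entity_word or "") + "Q" + user_sentence

def update_history_splice(current_splice, target_reply):
    """After a round ends, wrap the round's target reply between two "A"
    identifiers and append it, giving the next round's history splice."""
    return current_splice + "A" + target_reply + "A"
```

For the first round of the example dialogue this yields a string laid out like FIG. 2-1, and appending the reply reproduces the layout of FIG. 2-2.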
In some embodiments, the natural language processing model may be a GPT2 model. The GPT2 model is a very large language model based on the Transformer's decoder, trained on very large data sets (e.g., data sets built from microblogs, Wikipedia, news portals, etc.). The embodiment of the present application makes use of the core idea of the GPT2 model, namely language modeling, which is briefly introduced below:
Language modeling performs unsupervised distribution estimation over a set of examples (x_1, x_2, ..., x_n), where each example is a text composed of characters and can be represented as a variable-length symbol sequence (s_1, s_2, ..., s_m). Since language has a natural sequential ordering, the joint probability over the symbols is usually factorized into a product of conditional probabilities, as in the following formula:

p(x) = ∏_{i=1}^{m} p(s_i | s_1, ..., s_{i-1})

For example, for x = "贝多芬" (Beethoven), the characters are x_1 = "贝", x_2 = "多", x_3 = "芬"; taking s_1 = "贝多" and s_2 = "芬", the formula above gives p(贝多芬) = p(贝多) × p(芬 | 贝多).
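The chain-rule factorization can be illustrated numerically; the probabilities below are made-up values used only to show that the joint probability is the product of the conditionals:

```python
def joint_probability(conditionals):
    """Chain rule of language modeling: p(x) = prod_i p(s_i | s_1 .. s_{i-1}).
    `conditionals[i]` holds the i-th conditional probability."""
    p = 1.0
    for c in conditionals:
        p *= c
    return p

# e.g. p("贝多芬") = p("贝多") * p("芬" | "贝多")
```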
Given the sequence formed by the words at the first n - 1 positions of a sentence, the GPT2 model can predict, for the n-th position, the probability of every Chinese character in the vocabulary. Through pre-training on TB-scale corpora, the GPT2 model has learned the regularities of the language, so that when predicting it can find the word that best fits the language, knowledge, and common sense. As shown in FIG. 3, at the position i = n, the word "nation" is clearly the one that best fits common sense.
Based on this idea, when the smart device conducts a man-machine conversation with the user, it only needs to treat the user sentence input by the user as the known sequence and predict the reply from it. When generating a reply, the GPT2 model generates a plurality of candidate replies to form a candidate set, in order to ensure diversity and accuracy. Assuming the candidate set contains batch_size candidate replies, the GPT2 model generates one candidate reply for each batch element based on the following generation strategy:
First, when generating a candidate reply, the GPT2 model follows the TopK sampling method, which means: when the GPT2 model generates the word at the current position, the K words with the highest probability are selected first, and these K words are normalized through Softmax to obtain their normalized probabilities, which sum to 1. Then, the normalized probability of each word is used as its sampling weight, yielding a weight distribution; sampling over the K words is performed according to this weight distribution, so that a word with a large weight (i.e., a large normalized probability) is easier to draw and a word with a small weight (i.e., a small normalized probability) is harder to draw, though not impossible to draw. Finally, the drawn word is the word generated at the current position.
For example, let K = 3. Assume that when the word at the position i = n is generated under one batch element, the 3 words with the highest probability are word 1, word 2, and word 3, and the sum of their probabilities is not 1. After the probabilities of the three words are normalized, the normalized probability of word 1 is 0.5, that of word 2 is 0.3, and that of word 3 is 0.2; that is, the weights of word 1, word 2, and word 3 are 0.5, 0.3, and 0.2 respectively. Correspondingly, a weight distribution [0.5, 0.3, 0.2] can be generated, indicating that word 1 is the most likely to be drawn during sampling.
Each batch element generates word by word based on the above generation strategy to finally obtain one candidate reply. Eventually, batch_size candidate replies are obtained.
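The TopK generation strategy above can be sketched as follows. For brevity the sketch renormalizes the K probabilities by their sum rather than running them through a full Softmax, and the function name is an assumption:

```python
import random

def top_k_sample(word_probs, k, rng=None):
    """Keep the k highest-probability words, renormalize their probabilities
    so they sum to 1, and draw one word using those weights: a high-weight
    word is drawn more often, a low-weight word rarely, but not never."""
    rng = rng or random.Random()
    top = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    words = [w for w, _ in top]
    weights = [p / total for _, p in top]
    return rng.choices(words, weights=weights, k=1)[0]
```

With raw probabilities {word1: 0.35, word2: 0.21, word3: 0.14, word4: 0.01} and K = 3, the normalized weights are [0.5, 0.3, 0.2] as in the example above, and word4 can never be drawn.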
In some embodiments, a temperature parameter T may be added to the Softmax; that is, the formula for calculating the normalized probability is changed from

p_i = exp(z_i) / Σ_j exp(z_j)

to

p_i = exp(z_i / T) / Σ_j exp(z_j / T)

where z_i denotes the score of the i-th word among the TopK words.
It is noted that the sum of the normalized probabilities of the TopK words remains 1 after increasing the temperature parameter T. The temperature parameter T can be configured according to actual requirements to increase or decrease the randomness and diversity of the generated words. Generally, the larger T, the more random the generated word, i.e., the more likely it is to generate an unexpected word.
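A sketch of the temperature-scaled Softmax described above, where the scores z_i are raw model scores and T is the temperature:

```python
import math

def softmax_with_temperature(scores, T=1.0):
    """p_i = exp(z_i / T) / sum_j exp(z_j / T). The probabilities always sum
    to 1; a larger T flattens the distribution (more random word choices),
    while a smaller T sharpens it."""
    exps = [math.exp(z / T) for z in scores]
    total = sum(exps)
    return [e / total for e in exps]
```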
In the process of generating candidate answers based on the GPT2 model, because the GPT2 model is generated word by word, the currently generated word has influence on the subsequent words; thus, each newly generated word is spliced sequentially after the current splice statement, and the latest splice result is input again into the GPT2 model after each splice is completed to generate the next word, and so on until the identifiers "A", "E" are generated or the longest producible length is reached.
For example, assume the user sentence in the first round is "Who is Liu Shan's father?" and the entity word is "Liu Shan"; the user sentence and the entity word are spliced and input into the GPT2 model. Under one batch element, the first word "you" generated by the GPT2 model is spliced after the current spliced sentence, as shown in FIG. 2-6; then the latest splicing result (i.e., FIG. 2-6) is input into the GPT2 model again to obtain the second generated word "feel", which is spliced after the word "you" on the basis of FIG. 2-6, as shown in FIG. 2-7; then the latest splicing result (i.e., FIG. 2-7) is input into the GPT2 model again to obtain the third generated word "like", which is spliced after the word "feel" on the basis of FIG. 2-7, as shown in FIG. 2-8; then the latest splicing result (i.e., FIG. 2-8) is input into the GPT2 model again to obtain the next generated word. In this way, each newly generated word is spliced in turn after the current spliced sentence and the latest splicing result is fed back into the GPT2 model, until the identifier "A" or "E" is generated or the longest producible length is reached; the candidate reply finally generated under this batch element is "you feel like" (i.e., "What do you think?"). Thereafter, if this reply is determined to be the target reply, the history spliced sentence can be updated based on the user sentence "Who is Liu Shan's father?", the entity word "Liu Shan", and the target reply "you feel like", as shown in FIG. 2-9, and used as the history spliced sentence of the next round.
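The word-by-word loop described above can be sketched as follows; `model_step` is a hypothetical stand-in for one GPT2 forward pass that returns the next word (or an end identifier) given the latest splicing result:

```python
END_IDENTIFIERS = {"A", "E"}  # example end markers from FIG. 2-1 to FIG. 2-5

def generate_candidate(current_splice, model_step, max_len=32):
    """Generate one candidate reply word by word: each new word is spliced
    after the latest splicing result and fed back in, until an end identifier
    is produced or the longest producible length is reached."""
    reply = []
    splice = current_splice
    for _ in range(max_len):
        word = model_step(splice)
        if word in END_IDENTIFIERS:
            break
        reply.append(word)
        splice = splice + word   # the new word joins the splice for the next step
    return " ".join(reply)
```

Driving it with a canned word sequence ("you", "feel", "like", "A") reproduces the example candidate reply from the figures.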
And 103, screening a target answer from the at least one candidate answer according to the entity words, the relationship information and a preset knowledge graph.
In this embodiment, only one of the at least one candidate reply obtained in step 102 can be output as the target reply, so the target reply needs to be screened from the at least one candidate reply according to the entity words and relation information of the current round's user sentence and a preset knowledge graph. A knowledge graph is essentially a network structure formed by describing the relationships among a number of entities. An entity is usually a node (Node) in the knowledge graph; for example, an entity may be an object that exists in the real world and can be distinguished from other objects, such as a person name, a place name, or an organization name. Different nodes can be connected through relationships (Relations) to form a network structure, and this network structure is the knowledge graph. Knowledge can therefore be searched and inferred through the knowledge graph, and the present application can screen the target reply based on knowledge graphs established for the various fields.
In some embodiments, the smart device may first find a target term in the knowledge graph, where a relationship between the target term and the entity term should match the relationship information, and then may screen the target response from the at least one candidate response according to the target term. To increase the filtering speed, it is possible to detect whether each candidate response includes the target word, and then determine the candidate response including the target word as the target response. Among these, three situations may arise:
the first case is that all candidate responses do not contain the target word. In this case, the target word may be directly determined as the target response; that is, the knowledge found in the knowledge graph is used as the target response of the current round for the user sentence.
The second case is that only one of all the candidate replies contains the target word. In this case, that single candidate reply containing the target word may be directly determined as the target reply.
The third case is that of all candidate responses, more than two candidate responses contain the target term. In this case, the final target response may be selected according to the response scores of the two or more candidate responses containing the target word determined by the natural language processing model, specifically: of the two or more candidate responses that include the target term, the candidate response with the highest response score may be determined as the target response. Where the response score may be the mean of the weights (i.e., normalized probabilities) assigned to the words in the candidate response at the time of generation.
It should be noted that, since the smart device cannot exhaust all knowledge maps and the user's problems vary, in practical applications, the target word may not be found in the knowledge map. In this case, the smart device may directly obtain the reply score of each candidate reply, and determine the candidate reply with the highest reply score as the target reply.
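The three screening cases of step 103, plus the fallback used when no target word can be found in the knowledge graph, can be sketched as follows; the function name and the (candidates, scores) representation are assumptions:

```python
def screen_target_reply(candidates, reply_scores, target_word):
    """candidates: the candidate replies; reply_scores: the model's reply
    score for each (mean of the generation weights); target_word: the word
    found in the knowledge graph, or None when the graph lookup failed."""
    if target_word is None:
        # Fallback: no target word found, take the highest-scoring candidate.
        return max(zip(candidates, reply_scores), key=lambda cs: cs[1])[0]
    hits = [(c, s) for c, s in zip(candidates, reply_scores) if target_word in c]
    if not hits:
        return target_word                     # case 1: no candidate contains it
    if len(hits) == 1:
        return hits[0][0]                      # case 2: exactly one candidate does
    return max(hits, key=lambda cs: cs[1])[0]  # case 3: highest reply score wins
```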
And 104, outputting the target reply.
In the embodiment of the present application, the smart device may output the target reply determined in step 103 as feedback for the current round's user sentence. Illustratively, the output mode of the target reply is kept consistent with the input mode of the current round's user sentence; that is, if the user sentence was input as text, the target reply may be output as text, and if the user sentence was input as voice, the target reply may be output as voice.
In some embodiments, the smart device further needs to train the natural language processing model before performing the above steps 101-104. Referring to fig. 4, the training process of the natural language processing model is as follows:
step 401, extracting entity words and sample relation information of a sample aiming at a training sample of any multi-turn conversation;
in the embodiment of the present application, a sample entity word refers to an entity word included in a certain question in a training sample, sample relationship information refers to relationship information included in the question, and a training sample is obtained from at least one question and a corresponding answer. For example, the preset corpus of multiple rounds of conversations is "who is the father of Q-liu who is a question and who does not know a-stupid", wherein Q is a question and a is an answer, the corpus of the multiple rounds of conversations can be used as a training sample, and "liu he" is an entity word (i.e., sample entity word) extracted based on the first question of the training sample, and "father" is relationship information (i.e., sample relationship information) extracted based on the first question of the training sample. It should be noted that the sample entity words and sample relationship information considered in the embodiments of the present application need to be derived from the same problem.
Step 402, finding out target knowledge according to the sample entity words, the sample relation information and the knowledge graph;
in the embodiment of the application, the target knowledge is an answer matched with the sample entity words and the sample relation information, which is found from the knowledge map. Typically, the target knowledge is also a physical word. For example, in the training sample shown above, "liu chester" is a sample entity word, "father" is sample relationship information, and the target knowledge found in the knowledge graph is "liu cheng".
Step 403, generating a new sample based on the target knowledge and the training sample;
in the embodiment of the present application, on the premise of finding out the target knowledge, a new sample may be generated based on the found target knowledge and the original training sample, and the specific process is as follows: in the training sample, all sentences after the question are deleted, and the target knowledge is used as a new answer of the question, so that a new sample can be obtained. For example, for the original training sample, since the target knowledge "liu qi" has been found out based on the sample entity word "liu qi" extracted from the question "who is the father of liu qi" and the sample relationship information "father", all sentences after the question "who is the father of liu qi" will be replaced with the target knowledge "liu qi" in the original training sample. That is, the original training sample "who the father of Q-liu chester is a-you feel like Q-do not know a-really stupid" after being replaced, a new linguistic data of multiple rounds of conversations "who the father of Q-liu chester is a-liu chester" is obtained.
And step 404, training the natural language processing model to be trained through the training sample and the new sample respectively.
In the embodiment of the application, both the original training sample and the new sample generated from it can be used as input data of the natural language processing model to be trained during the training process. In other words, steps 401 to 403 can greatly expand the original multi-turn dialogue training corpus.
It should be noted that, as in the application process of the natural language processing model described above, the entity words in the training sample are also used as an important feature and input into the natural language processing model together with the training sample itself. That is, during training, the natural language processing model to be trained is trained based on the entity words of the questions, the questions and their answers, and the preset identifiers in the training sample. For example, for the training sample shown above, the input data of the natural language processing model to be trained is shown in FIG. 5-1; for the new sample shown above, the input data is shown in FIG. 5-2. In the training process, the model therefore receives not only the questions and answers but also the entity words of the questions; using the entity words contained in the questions of the multi-turn dialogue as an additional feature can greatly improve the robustness and accuracy of the finally obtained trained natural language processing model.
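One plausible way to serialize such a training example, with the entity words prepended as a feature and preset identifiers separating the fields, is sketched below. The [ENT]/[Q]/[A] identifiers are placeholders, since the application does not spell out its actual separators here.

```python
SEP = {"entity": "[ENT]", "question": "[Q]", "answer": "[A]"}  # assumed identifiers

def build_training_input(entity_words, turns):
    """Serialize the question's entity words plus the dialogue turns into
    one input string for the model, tagging each field with its identifier."""
    pieces = [SEP["entity"] + " " + " ".join(entity_words)]
    for speaker, text in turns:
        tag = SEP["question"] if speaker == "Q" else SEP["answer"]
        pieces.append(tag + " " + text)
    return " ".join(pieces)

example = build_training_input(
    ["Liu Qi"],
    [("Q", "who is the father of Liu Qi"), ("A", "Liu Heng")],
)
# example == "[ENT] Liu Qi [Q] who is the father of Liu Qi [A] Liu Heng"
```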
In some embodiments, where the GPT2 model is used as the natural language processing model, the GPT2 model also uses a masked Self-Attention mechanism during training, so that when the GPT2 model generates the word at position n, the Self-Attention mechanism may only see the words at positions i ≤ n. The reason is that the GPT2 model is expected to predict unknown information step by step from known information; if the unknown information could be seen during training, it would confound the generation of the GPT2 model. Therefore, when generating the word at position n, the words at positions i > n are masked, so that they cannot interfere with the GPT2 model.
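The visibility rule of this masked Self-Attention can be sketched framework-free: position n may attend only to positions i ≤ n, and every position i > n is masked. This mirrors GPT-2's causal mask in spirit only, without any deep-learning library.

```python
def causal_mask(seq_len):
    """mask[n][i] is True when the word at position n may see position i,
    i.e. exactly when i <= n; future positions (i > n) are masked out."""
    return [[i <= n for i in range(seq_len)] for n in range(seq_len)]

mask = causal_mask(4)
# Row n exposes n + 1 positions; everything to the right of the diagonal is hidden.
```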
As can be seen from the above, according to the embodiment of the application, during a multi-turn conversation, when the user sentence input in the current turn is received, the entity word and the relationship information of the user sentence are extracted. The history spliced sentence, the user sentence, and the entity word are input into a trained natural language processing model to obtain at least one candidate reply output by the model, where the history spliced sentence is generated based on each history user sentence in the multi-turn conversation and the history reply corresponding to each history user sentence. A target reply is then screened from the at least one candidate reply according to the entity word, the relationship information, and a preset knowledge graph, and the target reply is output. This process takes both the historical conversation and the current user sentence into consideration, which helps the intelligent device obtain better candidate replies on the basis of understanding the historical conversation; screening the candidate replies through the knowledge graph then yields a more meaningful target reply.
Corresponding to the reply recommendation method provided in the foregoing, an embodiment of the present application provides a reply recommendation apparatus, which is integrated in an intelligent device. Referring to fig. 6, a reply recommendation apparatus 600 in the embodiment of the present application includes:
an extracting unit 601, configured to, in a multi-turn conversation process, if a user statement input in a current turn is received, extract an entity word and relationship information of the user statement;
an obtaining unit 602, configured to input a history concatenation statement, the user statement, and the entity word into a trained natural language processing model to obtain at least one candidate response output by the natural language processing model, where the history concatenation statement is generated based on each history user statement and a history response corresponding to each history user statement in the multi-round conversation process;
a screening unit 603, configured to screen a target response from the at least one candidate response according to the entity word, the relationship information, and a preset knowledge graph;
an output unit 604, configured to output the target reply.
Optionally, the obtaining unit 602 includes:
the splicing subunit is used for splicing the historical splicing sentences, the user sentences and the entity words based on a preset splicing sequence to obtain current splicing sentences;
and the processing subunit is used for processing the current spliced statement through the natural language processing model to obtain at least one candidate answer output by the natural language processing model.
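The splicing subunit's behaviour can be sketched as follows, assuming the preset splicing order is history → current user sentence → entity words; both the order and the separator are illustrative assumptions.

```python
def splice(history, user_sentence, entity_words, sep=" | "):
    """Concatenate in the assumed preset order, skipping an empty history
    (e.g. in the first round of the conversation)."""
    parts = [history, user_sentence, " ".join(entity_words)]
    return sep.join(p for p in parts if p)

current_splice = splice("Q1 A1", "who is the father of Liu Qi", ["Liu Qi"])
# current_splice == "Q1 A1 | who is the father of Liu Qi | Liu Qi"
```

The resulting current spliced sentence is what the natural language processing model consumes to produce candidate replies.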
Optionally, the reply recommending apparatus 600 further includes:
and the updating unit is used for updating the historical spliced sentences based on the current spliced sentences and the target answers when receiving the user sentences input in the next round.
Optionally, the screening unit 603 includes:
a searching subunit, configured to search a target word from the knowledge graph, where a relationship between the target word and the entity word is matched with the relationship information;
and the screening subunit is used for screening the target answer from the at least one candidate answer according to the target word.
Optionally, the screening subunit includes:
the detection subunit is used for detecting whether each candidate answer contains the target word or not;
and the determining subunit is used for determining the candidate response containing the target word as the target response.
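Taken together, the detection and determination subunits amount to keeping the candidate replies that contain the target word; a minimal sketch with illustrative candidates:

```python
def screen_target_replies(candidate_replies, target_word):
    """Detect whether each candidate contains the target word and keep
    the ones that do (they become the target replies)."""
    return [reply for reply in candidate_replies if target_word in reply]

candidates = ["maybe ask someone else", "his father is Liu Heng", "I am not sure"]
target_replies = screen_target_replies(candidates, "Liu Heng")
# target_replies == ["his father is Liu Heng"]
```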
Optionally, the reply recommending apparatus 600 further includes:
a training extraction unit, configured to extract sample entity words and sample relationship information for a training sample of any multi-turn conversation, where the sample entity words are entity words included in a question in the training sample, the sample relationship information is relationship information included in the question, and one training sample is obtained from at least one question and a corresponding answer;
the training search unit is used for searching out target knowledge according to the sample entity words, the sample relation information and the knowledge graph;
a training generation unit for generating a new sample based on the target knowledge and the training sample;
and the model training unit is used for training the natural language processing model to be trained through the training sample and the new sample respectively.
Optionally, the training generating unit is specifically configured to delete all sentences after the question in the training sample, and obtain the new sample by using the target knowledge as a new answer to the question.
An embodiment of the present application further provides an intelligent device. Referring to FIG. 7, the intelligent device 7 in the embodiment of the present application includes: a memory 701, one or more processors 702 (only one is shown in FIG. 7), and a computer program stored in the memory 701 and executable on the processors. The memory 701 is used for storing software programs and units, and the processor 702 executes various functional applications and data processing by running the software programs and units stored in the memory 701. Specifically, the processor 702 implements the following steps by running the above computer program stored in the memory 701:
in the multi-turn conversation process, if a user sentence input in the turn is received, extracting entity words and relationship information of the user sentence;
inputting a history spliced statement, the user statement and the entity word into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model, wherein the history spliced statement is generated based on each history user statement and a history answer corresponding to each history user statement in the multi-turn conversation process;
screening a target answer from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph;
and outputting the target reply.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided based on the first possible implementation manner, the inputting the history concatenation sentence, the user sentence, and the entity word into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model includes:
splicing the historical spliced sentences, the user sentences and the entity words based on a preset splicing sequence to obtain current spliced sentences;
and processing the current spliced sentence through the natural language processing model to obtain at least one candidate reply output by the natural language processing model.
In a third possible implementation manner provided on the basis of the second possible implementation manner, the processor 702 implements the following steps by running the above computer program stored in the memory 701:
and when receiving the user sentence input in the next round, updating the historical spliced sentence based on the current spliced sentence and the target reply.
In a fourth possible implementation manner provided on the basis of the first possible implementation manner, the screening a target response from the at least one candidate response according to the entity words, the relationship information, and a preset knowledge graph includes:
finding out a target word from the knowledge graph, wherein the relation between the target word and the entity word is matched with the relation information;
and screening the target answer from the at least one candidate answer according to the target words.
In a fifth possible implementation manner provided as the basis of the fourth possible implementation manner, the screening the target response from the at least one candidate response according to the target word includes:
detecting whether each candidate answer contains the target word or not;
and determining the candidate response containing the target word as the target response.
In a sixth possible implementation manner provided on the basis of any one of the first to fifth possible implementation manners, the reply recommendation method further includes:
extracting sample entity words and sample relation information aiming at a training sample of any multi-turn conversation, wherein the sample entity words are entity words contained in questions in the training sample, the sample relation information is relation information contained in the questions, and one training sample is obtained by at least one question and a corresponding answer;
finding out target knowledge according to the sample entity words, the sample relation information and the knowledge graph;
generating a new sample based on the target knowledge and the training sample;
and training the natural language processing model to be trained through the training sample and the new sample respectively.
In a seventh possible implementation manner provided on the basis of the sixth possible implementation manner, the generating a new sample based on the target knowledge and the training sample includes:
and deleting all sentences after the question in the training sample, and taking the target knowledge as a new answer of the question to obtain the new sample.
It should be understood that in the embodiments of the present application, the processor 702 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Memory 701 may include both read-only memory and random access memory and provides instructions and data to processor 702. Some or all of memory 701 may also include non-volatile random access memory. For example, memory 701 may also store information for device classes.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as combinations of computer software and electronic hardware. Whether such functions are implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the above-described computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer readable Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable storage medium may contain other contents which can be appropriately increased or decreased according to the requirements of the legislation and the patent practice in the jurisdiction, for example, in some jurisdictions, the computer readable storage medium does not include an electrical carrier signal and a telecommunication signal according to the legislation and the patent practice.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A response recommendation method, comprising:
in the multi-turn conversation process, if a user sentence input in the turn is received, extracting entity words and relationship information of the user sentence;
inputting a history splicing statement, the user statement and the entity word into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model, wherein the history splicing statement is generated based on each history user statement and a history answer corresponding to each history user statement in the multi-turn conversation process;
screening a target answer from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph;
and outputting the target reply.
2. A response recommendation method in accordance with claim 1, wherein said inputting history concatenation statements, said user statements, and said entity words into a trained natural language processing model to obtain at least one candidate response output by said natural language processing model comprises:
splicing the historical spliced sentences, the user sentences and the entity words based on a preset splicing sequence to obtain current spliced sentences;
and processing the current spliced statement through the natural language processing model to obtain at least one candidate reply output by the natural language processing model.
3. A response recommendation method in accordance with claim 2, said response recommendation method further comprising:
and when a next round of input user sentences are received, updating the historical spliced sentences based on the current spliced sentences and the target answers.
4. The response recommendation method of claim 1, wherein said screening target responses from said at least one candidate response based on said entity terms, said relationship information and a predetermined knowledge-graph comprises:
finding out a target word in the knowledge graph, wherein the relation between the target word and the entity word is matched with the relation information;
and screening the target answer from the at least one candidate answer according to the target word.
5. The response recommendation method in accordance with claim 4, wherein said screening said target response from said at least one candidate response based on said target term comprises:
detecting whether each candidate answer contains the target word or not;
determining a candidate response containing the target term as the target response.
6. A reply recommendation method according to any one of claims 1 to 5, further comprising:
extracting sample entity words and sample relation information aiming at a training sample of any multi-turn conversation, wherein the sample entity words are entity words contained in questions in the training sample, the sample relation information is relation information contained in the questions, and one training sample is obtained by at least one question and a corresponding answer;
finding out target knowledge according to the sample entity words, the sample relation information and the knowledge graph;
generating a new sample based on the target knowledge and the training sample;
and training the natural language processing model to be trained through the training sample and the new sample respectively.
7. An answer recommendation method according to claim 6, wherein said generating a new sample based on said target knowledge and said training samples comprises:
and deleting all sentences after the question in the training sample, and taking the target knowledge as a new answer of the question to obtain the new sample.
8. A reply recommendation apparatus, comprising:
the extraction unit is used for extracting the entity words and the relation information of the user sentences if the user sentences input in the current round are received in the multi-round conversation process;
an obtaining unit, configured to input a history concatenation statement, the user statement, and the entity word into a trained natural language processing model to obtain at least one candidate answer output by the natural language processing model, where the history concatenation statement is generated based on each history user statement and a history answer corresponding to each history user statement in the multi-round conversation process;
the screening unit is used for screening a target answer from the at least one candidate answer according to the entity words, the relation information and a preset knowledge graph;
an output unit for outputting the target reply.
9. A smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202011528963.0A 2020-12-22 2020-12-22 Reply recommendation method, reply recommendation device and intelligent device Pending CN112527998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011528963.0A CN112527998A (en) 2020-12-22 2020-12-22 Reply recommendation method, reply recommendation device and intelligent device

Publications (1)

Publication Number Publication Date
CN112527998A true CN112527998A (en) 2021-03-19

Family

ID=75002416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011528963.0A Pending CN112527998A (en) 2020-12-22 2020-12-22 Reply recommendation method, reply recommendation device and intelligent device

Country Status (1)

Country Link
CN (1) CN112527998A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111665A (en) * 2021-04-16 2021-07-13 清华大学 Personalized dialogue rewriting method and device
CN113127626A (en) * 2021-04-22 2021-07-16 广联达科技股份有限公司 Knowledge graph-based recommendation method, device and equipment and readable storage medium
CN113342945A (en) * 2021-05-11 2021-09-03 北京三快在线科技有限公司 Voice session processing method and device
CN113673257A (en) * 2021-08-18 2021-11-19 山东新一代信息产业技术研究院有限公司 Multi-turn question and answer semantic generation method, equipment and medium
CN113761157A (en) * 2021-05-28 2021-12-07 腾讯科技(深圳)有限公司 Response statement generation method and device
CN113806508A (en) * 2021-09-17 2021-12-17 平安普惠企业管理有限公司 Multi-turn dialogue method and device based on artificial intelligence and storage medium
CN114638231A (en) * 2022-03-21 2022-06-17 马上消费金融股份有限公司 Entity linking method and device and electronic equipment
CN114639489A (en) * 2022-03-21 2022-06-17 广东莲藕健康科技有限公司 Mutual learning-based inquiry quick reply recommendation method and device and electronic equipment
WO2023051021A1 (en) * 2021-09-30 2023-04-06 阿里巴巴达摩院(杭州)科技有限公司 Human-machine conversation method and apparatus, device, and storage medium
CN116955575A (en) * 2023-09-20 2023-10-27 深圳智汇创想科技有限责任公司 Information intelligent replying method and cross-border E-commerce system
CN113127626B (en) * 2021-04-22 2024-04-30 广联达科技股份有限公司 Recommendation method, device, equipment and readable storage medium based on knowledge graph

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107193978A (en) * 2017-05-26 2017-09-22 武汉泰迪智慧科技有限公司 A kind of many wheel automatic chatting dialogue methods and system based on deep learning
CN109086329A (en) * 2018-06-29 2018-12-25 出门问问信息科技有限公司 Dialogue method and device are taken turns in progress based on topic keyword guidance more
CN110188248A (en) * 2019-05-28 2019-08-30 新华网股份有限公司 Data processing method, device and electronic equipment based on news question and answer interactive system
CN110309283A (en) * 2019-06-28 2019-10-08 阿里巴巴集团控股有限公司 A kind of answer of intelligent answer determines method and device
CN110737763A (en) * 2019-10-18 2020-01-31 成都华律网络服务有限公司 Chinese intelligent question-answering system and method integrating knowledge map and deep learning
CN111291172A (en) * 2020-03-05 2020-06-16 支付宝(杭州)信息技术有限公司 Method and device for processing text
CN111339283A (en) * 2020-05-15 2020-06-26 支付宝(杭州)信息技术有限公司 Method and device for providing customer service answers aiming at user questions
WO2020143186A1 (en) * 2019-01-10 2020-07-16 平安科技(深圳)有限公司 Recommendation system training method and apparatus, and computer device and storage medium
CN111428483A (en) * 2020-03-31 2020-07-17 华为技术有限公司 Voice interaction method and device and terminal equipment
CN111639171A (en) * 2020-06-08 2020-09-08 吉林大学 Knowledge graph question-answering method and device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111665A (en) * 2021-04-16 2021-07-13 清华大学 Personalized dialogue rewriting method and device
CN113111665B (en) * 2021-04-16 2022-10-04 清华大学 Personalized dialogue rewriting method and device
CN113127626A (en) * 2021-04-22 2021-07-16 广联达科技股份有限公司 Knowledge graph-based recommendation method, device and equipment and readable storage medium
CN113127626B (en) * 2021-04-22 2024-04-30 广联达科技股份有限公司 Recommendation method, device, equipment and readable storage medium based on knowledge graph
CN113342945A (en) * 2021-05-11 2021-09-03 北京三快在线科技有限公司 Voice session processing method and device
CN113761157A (en) * 2021-05-28 2021-12-07 腾讯科技(深圳)有限公司 Response statement generation method and device
CN113673257A (en) * 2021-08-18 2021-11-19 山东新一代信息产业技术研究院有限公司 Multi-turn question and answer semantic generation method, equipment and medium
CN113806508A (en) * 2021-09-17 2021-12-17 平安普惠企业管理有限公司 Multi-turn dialogue method and device based on artificial intelligence and storage medium
WO2023051021A1 (en) * 2021-09-30 2023-04-06 阿里巴巴达摩院(杭州)科技有限公司 Human-machine conversation method, apparatus, device, and storage medium
CN114638231A (en) * 2022-03-21 2022-06-17 马上消费金融股份有限公司 Entity linking method and device and electronic equipment
CN114639489B (en) * 2022-03-21 2023-03-24 广东莲藕健康科技有限公司 Mutual learning-based inquiry quick reply recommendation method and device and electronic equipment
CN114638231B (en) * 2022-03-21 2023-07-28 马上消费金融股份有限公司 Entity linking method and device and electronic equipment
CN114639489A (en) * 2022-03-21 2022-06-17 广东莲藕健康科技有限公司 Mutual learning-based inquiry quick reply recommendation method and device and electronic equipment
CN116955575A (en) * 2023-09-20 2023-10-27 深圳智汇创想科技有限责任公司 Intelligent information reply method and cross-border e-commerce system
CN116955575B (en) * 2023-09-20 2023-12-22 深圳智汇创想科技有限责任公司 Intelligent information reply method and cross-border e-commerce system

Similar Documents

Publication Publication Date Title
CN112527998A (en) Reply recommendation method, reply recommendation device and intelligent device
CN110309283B (en) Answer determination method and device for intelligent question answering
CN110727779A (en) Question-answering method and system based on multi-model fusion
CN110705301B (en) Entity relationship extraction method and device, storage medium and electronic equipment
CN112487173B (en) Man-machine conversation method, device and storage medium
CN111967224A (en) Method and device for processing dialogue text, electronic device, and storage medium
CN113672708A (en) Language model training method, question and answer pair generation method, device and equipment
CN112784066A (en) Information feedback method, device, terminal and storage medium based on knowledge graph
CN110597968A (en) Reply selection method and device
CN112668333A (en) Named entity recognition method and device, and computer-readable storage medium
CN113392197A (en) Question-answer reasoning method and device, storage medium and electronic equipment
CN116542297A (en) Method and device for generating countermeasure network based on text data training
CN113901837A (en) Intention understanding method, device, equipment and storage medium
CN117056479A (en) Intelligent question-answering interaction system based on semantic analysis engine
CN116610781A (en) Task model training method and device
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium
CN116069876A (en) Knowledge graph-based question and answer method, device, equipment and storage medium
CN113535930B (en) Model training method, device and storage medium
CN110502741B (en) Chinese text recognition method and device
CN114416941A (en) Generation method and device of dialogue knowledge point determination model fusing knowledge graph
CN113761152A (en) Question-answer model training method, device, equipment and storage medium
CN113095082A (en) Text processing method and device based on a multi-task model, computer device, and computer-readable storage medium
CN112286900A (en) Data processing method, device, equipment and storage medium
CN116431779B (en) FAQ question-answering matching method and device in legal field, storage medium and electronic device
CN116955579B (en) Chat reply generation method and device based on keyword knowledge retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination