CN111339280A - Question and answer sentence processing method, device, equipment and storage medium - Google Patents

Question and answer sentence processing method, device, equipment and storage medium

Info

Publication number
CN111339280A
CN111339280A
Authority
CN
China
Prior art keywords
sentence
data
scene
inquiry
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010209322.2A
Other languages
Chinese (zh)
Inventor
蒋沪珍
徐宁
胡一川
汪冠春
褚瑞
李玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Benying Network Technology Co ltd
Original Assignee
Shanghai Benying Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Benying Network Technology Co ltd filed Critical Shanghai Benying Network Technology Co ltd
Priority to CN202010209322.2A priority Critical patent/CN111339280A/en
Publication of CN111339280A publication Critical patent/CN111339280A/en
Priority to CN202011135020.1A priority patent/CN112241450A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a question and answer sentence processing method, device, equipment and storage medium. The method comprises the following steps: receiving a query sentence input by a user; determining a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene; extracting a query object from the query sentence, and extracting data of the query object from the data table corresponding to the target scene; and generating an answer sentence according to the data of the query object, and pushing the answer sentence to the user. By generating answer sentences through lookups in the data tables of the plurality of preset scenes, the method and device improve the reply accuracy of the conversation robot during conversational interaction.

Description

Question and answer sentence processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing question and answer sentences.
Background
With the continuous development of robot technology, more and more platforms, such as e-commerce customer service systems, WeChat platforms and short message service platforms, adopt conversation robots to provide customer service for users. The conversation robot outputs a corresponding answer based on the user's question.
Generally, the conversation robot interacts with the user based on a pre-configured knowledge base. The knowledge base stores questions and their corresponding answers, and each question corresponds to at least one answer. When receiving a question asked by a user, the conversation robot searches the knowledge base for an answer corresponding to the question and returns the answer to the user.
However, if all possible user questions and their corresponding answers are configured in the knowledge base, the knowledge base accumulates large amounts of highly similar data, making the robot prone to returning wrong answers to the user.
Disclosure of Invention
The embodiments of the application provide a question and answer sentence processing method, device, equipment and storage medium, aiming to solve the problem of poor reply accuracy of a conversation robot.
In a first aspect, an embodiment of the present application provides a question and answer sentence processing method, including:
receiving a query sentence input by a user;
determining a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene;
extracting a query object from the query sentence, and extracting data of the query object from the data table corresponding to the target scene;
and generating an answer sentence according to the data of the query object, and pushing the answer sentence to the user.
In one possible embodiment, each scene corresponds to at least one preset sentence pattern;
determining a target scene corresponding to the query sentence from a plurality of preset scenes comprises:
determining the preset sentence pattern that the query sentence conforms to;
and determining, according to the correspondence between preset scenes and preset sentence patterns, the scene corresponding to the preset sentence pattern that the query sentence conforms to, and taking that scene as the target scene.
In a possible implementation, each preset sentence pattern includes position information of the object in the preset sentence pattern;
extracting a query object from the query sentence comprises:
determining position information of the query object in the query sentence according to the preset sentence pattern that the query sentence conforms to;
and extracting the query object from the query sentence according to the position information of the query object.
In one possible embodiment, the method further comprises:
acquiring a plurality of query sentence samples under each scene;
for each scene, counting the number of query sentence samples belonging to the same sentence pattern among all query sentence samples in the scene, sorting the sentence patterns in descending order of sample count, and selecting at least one top-ranked sentence pattern as a preset sentence pattern corresponding to the scene.
In a possible implementation, extracting the data of the query object from the data table corresponding to the target scene comprises:
matching the query object with each object in the data table corresponding to the target scene;
and determining the data of the object matching the query object as the data of the query object.
In a possible embodiment, before the determining the data of the object matching the query object as the data of the query object, the method further comprises:
determining at least one condition element required to perform the data table matching;
generating and outputting a question sentence asking for any condition element that has not been acquired;
and analyzing the received reply sentence to acquire the unacquired condition elements;
the determining the data of the object matching the query object as the data of the query object comprises:
determining, among the data of the object matched with the query object, the data corresponding to the at least one condition element as the data of the query object.
In one possible embodiment, the method further comprises:
acquiring a current personality attribute from preconfigured personality attributes, wherein each personality attribute corresponds to at least one welcome sentence;
and pushing a welcome sentence corresponding to the current personality attribute to the user.
In one possible embodiment, the method further comprises:
when no target scene corresponding to the query sentence exists among the plurality of preset scenes, searching for a question corresponding to the query sentence in a preset question-answer knowledge base, wherein the question-answer knowledge base comprises a plurality of questions and their corresponding answer texts;
and pushing the answer text of the question corresponding to the query sentence to the user.
In a second aspect, an embodiment of the present application provides a question-answer sentence processing apparatus, including:
a receiving module, configured to receive a query sentence input by a user;
a processing module, configured to determine a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene;
an extraction module, configured to extract a query object from the query sentence and extract data of the query object from the data table corresponding to the target scene;
and a sending module, configured to generate an answer sentence according to the data of the query object and push the answer sentence to the user.
In a third aspect, an embodiment of the present application provides a computer device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the question-answer sentence processing method according to the first aspect and its various possible implementations.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the question-answer sentence processing method according to the first aspect and its various possible implementations.
The question-answer sentence processing method, device, equipment and storage medium provided by the embodiments of the application receive a query sentence input by a user; determine a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene; extract a query object from the query sentence, and extract data of the query object from the data table corresponding to the target scene; and generate an answer sentence according to the data of the query object and push it to the user. Because answer sentences are generated by searching the data tables of the plurality of preset scenes, the reply accuracy of the conversation robot during conversational interaction is improved, a group of flow-style conversations can be completed, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic view of a scenario of a question-answer sentence processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a dialog interface provided in an embodiment of the present application;
fig. 3 is a schematic view of a scenario of a question-answer sentence processing method according to another embodiment of the present application;
fig. 4 is a schematic flowchart of a question-answer sentence processing method according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating a question-answer sentence processing method according to yet another embodiment of the present application;
fig. 6 is a schematic structural diagram of a question-answer sentence processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a scene schematic diagram of a question-answer sentence processing method according to an embodiment of the present application. The scenario may include a terminal 11 and a server 12. The terminal 11 may be a mobile phone, a desktop computer, a vehicle-mounted terminal or a tablet computer. An application containing a conversation robot may be installed on the terminal 11, and the user may provide input via the conversation interface of that application. The application may be a program dedicated to the conversation robot, such as an intelligent assistant on a mobile phone, or the robot may exist as a function or plug-in of an application, such as a robot customer service in a shopping application or a robot plug-in in a social application. The terminal 11 may also be a physical robot, such as a self-service robot, a game companion robot, a sweeping robot, a children's learning machine or a smart speaker. The user may provide input through an input interface or through voice.
For example, in the dialog interface shown in fig. 2, after entering the conversation robot application, the user may provide input in text mode or voice mode: the user may click the text input box on the dialog interface to type a query sentence, or click the voice input control to speak one. For each query sentence input by the user, the conversation robot gives a corresponding answer sentence. Taking a pregnancy knowledge service as an example, when the user inputs "can apples be eaten", the conversation robot replies after acquiring the query sentence, and the terminal 11 displays the answer sentence on the dialog interface as shown in fig. 2, for example "apples can be eaten during pregnancy". After checking the reply, the user can input further query sentences, and the conversation robot replies to each of them, thereby carrying on a conversation with the user.
The terminal 11 can run the application, display its dialog interface through a display panel, and collect query sentences input by the user through a microphone, touch panel or keys. Correspondingly, the terminal 11 can display the answer sentences pushed by the conversation robot on the display panel, and can also output them as voice through a loudspeaker. After acquiring a query sentence input by the user, the terminal 11 may transmit it to the server 12. The server 12 may execute the computer-executable instructions of the question and answer sentence processing method provided in the embodiments of the present application to generate an answer sentence and push it to the terminal 11. The terminal 11 may then output the answer sentence to the user.
Fig. 3 is a scene schematic diagram of a question-answer sentence processing method according to another embodiment of the present application. The scene may include a conversation robot 30, which may be a self-service robot, a game companion robot, a sweeping robot, a voice robot or the like, and which is a device with sufficient computing power. The conversation robot 30 may display a dialog interface through a display panel and collect query sentences input by the user through a microphone, touch panel or keys. Correspondingly, the conversation robot 30 may display the generated answer sentences on the display panel, and may also output them as voice through a speaker. The conversation robot 30 may execute the computer-executable instructions of the question and answer sentence processing method provided in the embodiments of the present application, so that after the user inputs a query sentence, an answer sentence is generated and output to the user.
It should be noted that the method provided in the embodiments of the present application is not limited to the application scenarios shown in fig. 1 and fig. 3, and may also be used in other possible application scenarios, which are not limited herein.
Fig. 4 is a schematic flow chart of a question-answer sentence processing method according to an embodiment of the present application. The execution subject of the method is a computer device, which may be the server in fig. 1, the conversation robot in fig. 3, or the like, and is not limited herein. As shown in fig. 4, the method includes:
s401, receiving an inquiry statement input by a user.
In this embodiment, in an application scenario in which the server executes the method, the terminal may collect an inquiry statement input by the user in the form of text, voice, or the like, and send the inquiry statement to the server, and the server receives the inquiry statement sent by the terminal. In an application scenario in which the method is executed by a dialog robot, the dialog robot may capture an inquiry sentence input by a user in the form of text or speech, etc.
S402, determining a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene.
In this embodiment, the division into the preset scenes may be set according to actual requirements and is not limited herein. Each scene corresponds to a data table in which the data of the objects belonging to that scene is recorded, and the data of each object is used to determine the answer sentence to reply with when that object is queried.
For example, taking a pregnancy knowledge service, the scenes may be divided into a diet consultation scene, a behavior consultation scene and a recipe recommendation scene. The data table corresponding to the diet consultation scene records whether various foods can be eaten at different stages of pregnancy, together with corresponding cautions; each food is an object in that scene. The data table corresponding to the behavior consultation scene records whether various activities are advisable at different stages of pregnancy, together with corresponding cautions; each activity is an object in that scene. The data table corresponding to the recipe recommendation scene records the preparation methods and cautions of various dishes; each dish is an object in that scene.
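The per-scene data tables described above can be sketched as plain lookup tables. This is a minimal illustration only; the scene names, objects and field names below are invented for the example and are not taken from the patent.

```python
# Hypothetical per-scene data tables: scene -> { object -> data }.
# All names and values here are illustrative assumptions.
DATA_TABLES = {
    "diet_consultation": {
        "apple":  {"edible": "all stages of pregnancy", "caution": "wash before eating"},
        "coffee": {"edible": "in moderation", "caution": "limit caffeine intake"},
    },
    "behavior_consultation": {
        "swimming": {"advisable": "second trimester", "caution": "avoid crowded pools"},
    },
    "recipe_recommendation": {
        "steamed fish": {"method": "steam for 10 minutes", "caution": "use fresh fish"},
    },
}

def lookup(scene: str, obj: str):
    """Return the data recorded for obj in the data table of the given scene."""
    return DATA_TABLES.get(scene, {}).get(obj)
```

A lookup for an object that is not in the scene's table simply returns nothing, which is the situation handled later by the knowledge-base fallback.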
The target scene of a query sentence can be determined from its sentence pattern and keywords, or by performing semantic analysis on the query sentence. For example, when the query sentence is "can apples be eaten", the target scene is determined to be the diet consultation scene; when the query sentence is "can disinfectant be used", the target scene is determined to be the behavior consultation scene.
S403, extracting the query object from the query sentence, and extracting the data of the query object from the data table corresponding to the target scene.
In this embodiment, the query object is the object the user wants to query about. The query sentence contains this object, and it can be extracted through sentence-pattern analysis, keyword extraction and the like, after which its data is looked up in the data table corresponding to the target scene. For example, when the query sentence is "can apples be eaten", "apple" is determined to be the query object, and the data of "apple" is extracted from the data table corresponding to the diet consultation scene; that data may be, for example: apples can be eaten from week X of pregnancy.
Optionally, extracting the data of the query object from the data table corresponding to the target scene may include: matching the query object with each object in the data table corresponding to the target scene, and determining the data of the matching object as the data of the query object. By matching the query object against each object in the table, its data can be determined accurately. For example, the data table corresponding to the diet consultation scene records whether foods such as "apple", "pear" and "banana" can be eaten at different stages of pregnancy, and the data of "apple" can be found by matching "apple" against each object in the table.
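The matching step above can be sketched as follows. The loose substring fallback is an assumption added for illustration; the patent only says the query object is matched against each object in the table.

```python
def match_object(query_object: str, data_table: dict):
    """Match the query object against each object in a scene's data table."""
    # Exact match first.
    if query_object in data_table:
        return data_table[query_object]
    # Hypothetical fallback: loose substring matching in either direction.
    for obj, data in data_table.items():
        if query_object in obj or obj in query_object:
            return data
    return None  # no object in the table matches
```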
Optionally, before determining the data of the object matching the query object as the data of the query object, the method may further include:
determining at least one condition element required to perform the data table matching;
generating and outputting a question sentence asking for any condition element that has not been acquired;
analyzing the received reply sentence to acquire the unacquired condition elements;
determining the data of the object matching the query object as the data of the query object may then include:
determining, among the data of the object matched with the query object, the data corresponding to the at least one condition element as the data of the query object.
In this embodiment, the data table is pre-configured with at least one condition element, and data table matching can only determine the data of the query object after all condition elements have been acquired. Therefore, before performing data table matching, the condition elements required for the matching are first determined, and it is checked whether any of them has not yet been acquired. If so, a question sentence asking for the unacquired condition element is generated and output to the user, and after the user's reply sentence is received, it is analyzed to acquire that condition element. Finally, among the data of the object matched with the query object, the data corresponding to the condition elements is determined as the data of the query object.
For example, a baby complementary-food data table may have two condition elements: the baby's age in months and the baby's sex. When a user asks what complementary food a baby should eat, it is first determined whether these two condition elements have been acquired. If the baby's age has not been acquired, the question "How many months old is your baby?" may be generated and output; if the user's reply is "15 months", the condition element is set to 15 months. If the baby's sex has not been acquired, the question "Is your baby a boy or a girl?" may be generated and output; if the user's reply is "a girl", the condition element is set to female. Among the data of the object matched with the query object, the data for a 15-month-old girl is then determined as the data of the query object.
By determining the condition elements required for data table matching and asking the user about any that have not been acquired, the missing condition elements can be parsed from the user's reply sentences and the data of the query object can be located accurately in the data table, improving reply accuracy.
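The condition-element flow above is essentially slot filling; a minimal sketch follows, in which the element names, the ask() callback and the table rows are all assumptions for illustration, not details from the patent.

```python
REQUIRED_ELEMENTS = ["baby_month", "baby_sex"]  # hypothetical condition elements

def match_with_conditions(rows, acquired, ask):
    """Ask for each unacquired condition element, then return the matching row.

    rows: data of the object matched with the query object (one dict per row)
    acquired: condition elements already known, element -> value
    ask: callback that asks the user for one element and returns the parsed reply
    """
    for element in REQUIRED_ELEMENTS:
        if element not in acquired:
            # e.g. ask("baby_month") produces "How many months old is your baby?"
            acquired[element] = ask(element)
    for row in rows:
        if all(row.get(e) == acquired[e] for e in REQUIRED_ELEMENTS):
            return row
    return None
```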
S404, generating an answer sentence according to the data of the query object, and pushing the answer sentence to the user.
In this embodiment, after the data of the query object is acquired, an answer sentence can be generated from it and pushed to the user. In an application scenario where the server executes the method, the server sends the answer sentence to the terminal, and the terminal displays it on the dialog interface or plays it as voice. In an application scenario where the conversation robot executes the method, the conversation robot displays the answer sentence on the dialog interface of its display panel or plays it as voice.
The embodiments of the application receive a query sentence input by a user; determine a target scene corresponding to the query sentence from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to a scene comprises data of a plurality of objects belonging to that scene; extract a query object from the query sentence, and extract data of the query object from the data table corresponding to the target scene; and generate an answer sentence according to the data of the query object and push it to the user. Because answer sentences are generated by searching the data tables of the plurality of preset scenes, the reply accuracy of the conversation robot during conversational interaction is improved, a group of flow-style conversations can be completed, and the user experience is improved.
Optionally, the method may further include:
acquiring a current personality attribute from preconfigured personality attributes, wherein each personality attribute corresponds to at least one welcome sentence;
and pushing a welcome sentence corresponding to the current personality attribute to the user.
In this embodiment, when the application enters the dialog interface, a welcome sentence may be pushed proactively on the dialog interface. To make the conversation robot's replies more anthropomorphic and improve the user experience, personality attributes of the conversation robot can be configured in advance. A personality attribute selected by the user from several preset personality attributes can be configured as the robot's current personality attribute; alternatively, the current personality attribute can be determined from the user's own attributes, which are obtained in advance and matched against the robot's personality attributes. Several personality attributes are preset, and each corresponds to one or more welcome sentences generated in the conversational style matching that personality. Taking the pregnancy knowledge service as an example, the personality attributes may include a mentor with rich pregnancy experience, a pregnant woman at the same stage of pregnancy, and so on. When entering the dialog interface, the current personality attribute is obtained from the preset personality attributes and the corresponding welcome sentence is pushed; for example, if the current personality attribute is a mentor with rich pregnancy experience, that mentor's welcome sentence is pushed.
Alternatively, when there are multiple welcome sentences, one may be output at random, or several may be output in sequence. A welcome sentence may describe the capabilities and scope of the conversation robot, and may also guide the user in how to use it.
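The personality-to-welcome-sentence mapping described above can be sketched as a simple table with random selection. The attribute names and sentences below are invented for illustration.

```python
import random

# Hypothetical mapping: personality attribute -> welcome sentences.
WELCOME_SENTENCES = {
    "experienced_mentor": ["Hello! I have guided many mothers-to-be; ask me anything about pregnancy."],
    "fellow_pregnant_woman": ["Hi! We are at the same stage, let's figure things out together."],
}

def welcome_sentence(personality: str) -> str:
    """Pick one welcome sentence at random for the current personality attribute."""
    sentences = WELCOME_SENTENCES.get(personality, ["Hello!"])
    return random.choice(sentences)
```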
Optionally, the method may further include:
when no target scene corresponding to the query sentence exists among the plurality of preset scenes, searching for a question corresponding to the query sentence in a preset question-answer knowledge base, wherein the question-answer knowledge base comprises a plurality of questions and their corresponding answer texts;
and pushing the answer text of the question corresponding to the query sentence to the user.
In this embodiment, if no target scene corresponds to the query sentence, the query sentence is matched against the questions in a preset question-answer knowledge base to determine the corresponding question, and the answer text of that question is obtained and pushed to the user, so that the user's question is still answered. A question in the knowledge base may correspond to one answer text, or to several answer texts written in the styles of different personality attributes, in which case the answer text matching the current personality attribute may be selected and pushed to the user.
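The fallback path can be sketched as follows. determine_scene(), scene_answer() and the knowledge-base contents are placeholders; the patent does not specify how questions are matched, so a simple exact lookup stands in here.

```python
def reply(query, determine_scene, scene_answer, knowledge_base):
    """Data-table path when a target scene exists; QA knowledge base otherwise."""
    scene = determine_scene(query)
    if scene is not None:
        return scene_answer(scene, query)      # normal scene/data-table path
    answers = knowledge_base.get(query)        # fallback: preset QA knowledge base
    if answers:
        return answers[0]  # e.g. the text styled for the current personality
    return "Sorry, I cannot answer that yet."  # hypothetical default reply
```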
Fig. 5 is a schematic flow chart of a question-answer sentence processing method according to still another embodiment of the present application. This embodiment describes in detail a specific implementation of determining the target scene corresponding to a query sentence. As shown in fig. 5, the method includes:
s501, receiving an inquiry statement input by a user.
In this embodiment, S501 is similar to S401 in the embodiment of fig. 4, and is not described here again.
And S502, determining a preset sentence pattern which the inquiry sentence accords with.
S503, according to the corresponding relation between the preset scene and the preset sentence pattern, determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with, and determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with as the target scene, wherein each scene corresponds to a data table, and the data table corresponding to one scene comprises the data of a plurality of objects belonging to the scene.
In this embodiment, the preset sentence pattern that the query sentence conforms to may be determined by performing sentence pattern analysis on the query sentence, or by matching the query sentence against each preset sentence pattern. Each scene corresponds to one or more preset sentence patterns, and after the preset sentence pattern that the query sentence conforms to is determined, the scene corresponding to that preset sentence pattern is taken as the target scene. The preset sentence patterns corresponding to a scene are the query sentence patterns commonly used by users in that scene. For example, the preset sentence patterns corresponding to the diet consultation scene may include "XX cannot be eaten", "XX can be eaten", and the like. When the query sentence input by the user is "apple cannot be eaten", it is determined that the query sentence conforms to the preset sentence pattern "XX cannot be eaten", and the diet consultation scene is further determined as the target scene of the query sentence.
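One simple way to realize the pattern-matching variant above is to encode each preset sentence pattern as a regular expression, with the "XX" slot as a capture group. The scene names and patterns below are assumptions for illustration, not fixed by the patent:

```python
import re

# Each scene maps to regex forms of its preset sentence patterns;
# "XX" from the patent's examples becomes a capture group "(.+)".
SCENE_PATTERNS = {
    "diet_consultation": [r"^can (.+) be eaten$", r"^(.+) cannot be eaten$"],
    "weather_inquiry": [r"^what is the weather in (.+)$"],
}

def find_target_scene(query):
    """Return (scene, matched pattern) for the first pattern the query fits,
    or (None, None) if no preset sentence pattern matches."""
    for scene, patterns in SCENE_PATTERNS.items():
        for pattern in patterns:
            if re.match(pattern, query):
                return scene, pattern
    return None, None
```

A query that fits no pattern yields `(None, None)`, which is the case where the knowledge-base fallback described earlier would be used instead.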
Optionally, the preset sentence pattern corresponding to the scene may be determined through sample analysis, and the method may further include:
acquiring a plurality of inquiry statement samples under each scene;
counting the number of query statement samples belonging to the same sentence pattern in all query statement samples in each scene, sorting the number of query statement samples of each sentence pattern from large to small, and selecting at least one sentence pattern with the top sorting as a preset sentence pattern corresponding to the scene.
In this embodiment, a plurality of query statement samples in each scene may be obtained by crawling or searching a corpus, so that the sentence patterns used at high frequency in each scene can be selected from the query statement samples. Specifically, for each scene, the number of query statement samples belonging to the same sentence pattern among all query statement samples in the scene may be counted to obtain the number of query statement samples corresponding to each sentence pattern; all sentence patterns are then sorted by this number, and the N sentence patterns with the largest number of corresponding query statement samples are selected as the preset sentence patterns corresponding to the scene, where N is an integer greater than zero that may be set according to actual requirements.
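The count-sort-select step maps directly onto a frequency counter. In this sketch, `pattern_of` stands in for whatever sentence-pattern analysis is used to assign each sample to a pattern (the patent does not fix a method), so it is passed in as an assumed callable:

```python
from collections import Counter

def top_patterns(samples, pattern_of, n=3):
    """Count how many query statement samples share each sentence pattern
    and return the N most frequent patterns for a scene.

    `pattern_of` maps a sample sentence to its sentence pattern and is an
    illustrative assumption here.
    """
    counts = Counter(pattern_of(s) for s in samples)
    # most_common(n) sorts patterns by sample count, largest first.
    return [pattern for pattern, _ in counts.most_common(n)]
```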
S504, extracting the query object from the query statement, and extracting the data of the query object from the data table corresponding to the target scene.
In this embodiment, S504 is similar to S403 in the embodiment of fig. 4, and is not described here again.
Optionally, extracting the query object from the query statement in S504 may include:
determining the position information of an inquiry object in the inquiry statement according to a preset sentence pattern which the inquiry statement conforms to; each preset sentence pattern comprises position information of an object in the preset sentence pattern;
the query object is extracted from the query sentence based on the position information of the query object.
In this embodiment, after determining the preset sentence pattern that the query sentence conforms to, the position information of the query object in the query sentence may be determined. For example, the preset sentence pattern may include the position information of the object, or the query sentence may be compared with the preset sentence pattern to determine the position information of the query object, or other determination methods may be used, which are not limited herein. Then, the query object is extracted from the query sentence based on the position information of the query object. For example, when the query sentence is "apple cannot be eaten", it conforms to the preset sentence pattern of "XX cannot be eaten", in which the position of "XX" is the position of the query object, and the query object "apple" can be extracted from the query sentence.
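If the sentence patterns are already expressed as regular expressions, the "position of XX" in the pattern is naturally recorded as a capture group, and extracting the query object reduces to reading that group. The pattern string below is an illustrative stand-in for the "XX cannot be eaten" example:

```python
import re

def extract_query_object(query, pattern):
    """Return the query object at the pattern's object position, or None
    if the query does not conform to the pattern."""
    match = re.match(pattern, query)
    return match.group(1) if match else None
```

For the patent's example, matching "apple cannot be eaten" against the pattern for "XX cannot be eaten" yields the query object "apple".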
And S505, generating an answer sentence according to the data of the inquiry object, and pushing the answer sentence to the user.
In this embodiment, S505 is similar to S404 in the embodiment of fig. 4, and is not described here again.
According to the embodiment, the target scene of the inquiry sentence is determined through the corresponding relation between the preset scene and the preset sentence pattern, the target scene can be accurately identified, and the reply accuracy of the conversation robot in the conversation interaction process is further improved.
Optionally, after the preset sentence pattern that the query sentence conforms to is determined, the target scene of the query sentence may be determined, and a task flow corresponding to the target scene may be entered. One task flow may include one or more preset turns of dialogue sentences; through these turns of dialogue with the user, the query object of the user is determined, the data of the query object is queried from the data table corresponding to the target scene, and an answer sentence is generated and pushed. For example, if the query sentence input by the user is "What cannot be eaten", the target scene may be determined as the diet consultation scene according to the preset sentence pattern "XX cannot be eaten", and the corresponding task flow is triggered. According to the task flow, a further turn of dialogue may ask "What would you like to ask about?"; if the user replies "fruit", the dialogue robot may ask again "Which specific fruit would you like to ask about, such as apple or banana?"; if the user then replies "pear", the query object is determined to be "pear", whether pears can be eaten is looked up, and the corresponding answer sentence is generated and pushed to the user. The task flow is not limited herein and may be set according to actual requirements.
In this embodiment, by setting a task flow corresponding to each scene and triggering the corresponding task flow after the preset sentence pattern that the query sentence conforms to is determined, when the user's query sentence is incomplete and cannot be answered directly, the user's query object can be determined through multiple rounds of questioning and the corresponding answer sentence can then be generated, which improves the reply accuracy of the dialogue robot in the dialogue interaction process.
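A minimal sketch of such a task flow is shown below, under the assumption that clarifying prompts are asked in order until a reply names an object present in the scene's data table. The prompts and table contents are invented for illustration:

```python
# Illustrative data table for the diet consultation scene.
DIET_TABLE = {
    "pear": "Pears can be eaten in moderation.",
    "apple": "Apples can be eaten.",
}

def run_task_flow(replies, table=DIET_TABLE):
    """Run preset turns of dialogue until the query object is resolved.

    `replies` is an iterator of the user's follow-up answers; returns the
    answer sentence once a known object is named, else a fallback.
    """
    prompts = ["What would you like to ask about?",
               "Which specific food, such as apple or banana?"]
    for prompt, reply in zip(prompts, replies):
        # In a real robot `prompt` would be pushed to the user before the
        # reply is read; here replies are supplied directly for brevity.
        obj = reply.strip().lower()
        if obj in table:
            return table[obj]
    return "Sorry, I could not determine what you are asking about."
```

Following the patent's example: a first reply of "fruit" resolves nothing, the second prompt is asked, and the reply "pear" resolves the query object and produces the answer sentence.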
Optionally, after the dialogue interface is entered, a plurality of preset recommended questions may be displayed, and after a recommended question selected by the user is received, the answer to that recommended question is pushed to the user. By setting recommended questions, when the user has no explicit query object but wants to learn about content related to common questions, the user can conveniently select a question, which improves the convenience of operation.
Fig. 6 is a schematic structural diagram of a question-answer sentence processing apparatus according to an embodiment of the present application. As shown in fig. 6, the question-answering sentence processing apparatus 60 includes: a receiving module 601, a processing module 602, an extracting module 603 and a sending module 604.
A receiving module 601, configured to receive an inquiry statement input by a user;
a processing module 602, configured to determine a target scene corresponding to an inquiry statement from a plurality of preset scenes, where each scene corresponds to a data table, and a data table corresponding to a scene includes data of a plurality of objects belonging to the scene;
an extracting module 603, configured to extract an inquiry object from the inquiry statement, and extract data of the inquiry object from a data table corresponding to the target scene;
the sending module 604 is configured to generate an answer sentence according to the data of the query object, and push the answer sentence to the user.
Optionally, each scene corresponds to at least one preset sentence pattern;
the processing module 602 is specifically configured to:
determining a preset sentence pattern which the inquiry sentence accords with;
and determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with according to the corresponding relation between the preset scene and the preset sentence pattern, and determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with as the target scene.
Optionally, each preset sentence pattern includes position information of an object in the preset sentence pattern;
the extracting module 603 is specifically configured to:
determining the position information of an inquiry object in the inquiry statement according to a preset sentence pattern which the inquiry statement conforms to;
the query object is extracted from the query sentence based on the position information of the query object.
Optionally, the processing module 602 is further configured to:
acquiring a plurality of inquiry statement samples under each scene;
counting the number of query statement samples belonging to the same sentence pattern in all query statement samples in each scene, sorting the number of query statement samples of each sentence pattern from large to small, and selecting at least one sentence pattern with the top sorting as a preset sentence pattern corresponding to the scene.
Optionally, the processing module 602 is specifically configured to:
matching the query object with each object in the data table corresponding to the target scene;
the data of the object matching the query object is determined as the data of the query object.
Optionally, the processing module 602 is specifically configured to:
determining at least one condition element required to perform the data table match;
generating and outputting an inquiry statement of the condition element which is not acquired;
analyzing the received reply statement to acquire the unacquired condition elements;
and determining data corresponding to the at least one condition element in the data of the object matched with the query object as the data of the query object.
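The condition-element steps above can be sketched as follows. This is a hedged illustration: the field name `age_group`, the table layout, and the `ask` callback that stands in for "generating and outputting an inquiry statement and analyzing the reply" are all assumptions, not the patent's prescribed design:

```python
# Illustrative table: the matched object's data is further keyed by a
# condition element (here, the user's age group).
TABLE = {
    "apple": {
        "adult": "Adults may eat apples freely.",
        "infant": "Infants should only have apple puree.",
    },
}

def lookup(obj, known, ask, required=("age_group",), table=TABLE):
    """Fill missing condition elements via `ask`, then return the data
    of the matched object that corresponds to those condition elements."""
    for element in required:
        if element not in known:
            # Generate and output a question for the missing condition
            # element; `ask` supplies the parsed reply.
            known[element] = ask(f"Please tell me the {element}.")
    row = table.get(obj)
    return row.get(known["age_group"]) if row else None
```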
Optionally, the processing module 602 is further configured to:
acquiring a current personality attribute from preconfigured personality attributes, wherein the personality attribute corresponds to at least one welcome statement;
and pushing a welcome sentence corresponding to the current personality attribute to the user.
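The welcome-sentence step amounts to selecting one of the sentences configured for the current personality attribute. The attribute names and sentences below are invented for illustration:

```python
import random

# Each preconfigured personality attribute maps to one or more
# welcome sentences (contents are illustrative assumptions).
WELCOME = {
    "cheerful": ["Hi there! Great to see you!", "Hello! Ask me anything!"],
    "formal": ["Good day. How may I assist you?"],
}

def welcome_sentence(personality, table=WELCOME):
    """Pick one welcome sentence for the current personality attribute."""
    options = table.get(personality)
    return random.choice(options) if options else None
```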
Optionally, the processing module 602 is further configured to:
when a target scene corresponding to an inquiry statement does not exist in a plurality of preset scenes, searching a question corresponding to the inquiry statement from a preset question-answer knowledge base, wherein the question-answer knowledge base comprises a plurality of questions and corresponding answer texts thereof;
and pushing answer text of the question corresponding to the inquiry sentence to the user.
The question-answer sentence processing device provided in the embodiment of the present application can be used to execute the above method embodiments, and the implementation principle and technical effect thereof are similar, and this embodiment is not described herein again.
Fig. 7 is a schematic hardware structure diagram of a computer device according to an embodiment of the present application. As shown in fig. 7, the present embodiment provides a computer device 70 including: at least one processor 701 and a memory 702. The computer device 70 further comprises a communication component 703. The processor 701, the memory 702, and the communication component 703 are connected by a bus 704.
In a specific implementation process, the at least one processor 701 executes computer-executable instructions stored in the memory 702, so that the at least one processor 701 executes the question-answer sentence processing method as described above.
For a specific implementation process of the processor 701, reference may be made to the above method embodiments, which implement principles and technical effects similar to each other, and details of this embodiment are not described herein again.
In the embodiment shown in fig. 7, it should be understood that the Processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The application also provides a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the question-answer sentence processing method is realized.
The readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A question-answer sentence processing method is characterized by comprising the following steps:
receiving an inquiry statement input by a user;
determining a target scene corresponding to the query statement from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to one scene comprises data of a plurality of objects belonging to the scene;
extracting an inquiry object from the inquiry statement, and extracting data of the inquiry object from a data table corresponding to the target scene;
and generating an answer sentence according to the data of the query object, and pushing the answer sentence to the user.
2. The method of claim 1, wherein each scene corresponds to at least one preset sentence pattern;
determining a target scene corresponding to the query statement from a plurality of preset scenes, including:
determining a preset sentence pattern which the inquiry sentence accords with;
and determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with according to the corresponding relation between the preset scene and the preset sentence pattern, and determining the scene corresponding to the preset sentence pattern which the inquiry sentence accords with as the target scene.
3. The method of claim 2, wherein each of the predetermined sentence patterns includes position information of the object in the predetermined sentence pattern;
extracting a query object from the query statement, comprising:
determining the position information of an inquiry object in the inquiry statement according to a preset sentence pattern which the inquiry statement conforms to;
and extracting the query object from the query statement according to the position information of the query object.
4. The method of claim 2, further comprising:
acquiring a plurality of inquiry statement samples under each scene;
counting the number of query statement samples belonging to the same sentence pattern in all query statement samples in each scene, sorting the number of query statement samples of each sentence pattern from large to small, and selecting at least one sentence pattern with the top sorting as a preset sentence pattern corresponding to the scene.
5. The method of claim 1, wherein extracting the data of the query object from the data table corresponding to the target scene comprises:
matching the query object with each object in a data table corresponding to the target scene;
and determining the data of the object matched with the query object as the data of the query object.
6. The method of claim 5, wherein determining the data of the object matching the query object as the data of the query object is preceded by:
determining at least one condition element required to perform the data table match;
generating and outputting an inquiry statement of the condition element which is not acquired;
analyzing the received reply statement to acquire the unacquired condition elements;
the determining data of the object matching the query object as the data of the query object comprises:
and determining data corresponding to the at least one condition element in the data of the object matched with the query object as the data of the query object.
7. The method according to any one of claims 1-6, further comprising:
acquiring a current personality attribute from preconfigured personality attributes, wherein the personality attribute corresponds to at least one welcome statement;
and pushing a welcome sentence corresponding to the current personality attribute to the user.
8. The method according to any one of claims 1-6, further comprising:
when a target scene corresponding to the inquiry statement does not exist in the plurality of preset scenes, searching a question corresponding to the inquiry statement from a preset question-answer knowledge base, wherein the question-answer knowledge base comprises a plurality of questions and corresponding answer texts thereof;
and pushing an answer text of the question corresponding to the inquiry sentence to the user.
9. A question-answer sentence processing apparatus characterized by comprising:
the receiving module is used for receiving an inquiry statement input by a user;
the processing module is used for determining a target scene corresponding to the inquiry statement from a plurality of preset scenes, wherein each scene corresponds to a data table, and the data table corresponding to one scene comprises data of a plurality of objects belonging to the scene;
the extraction module is used for extracting an inquiry object from the inquiry statement and extracting data of the inquiry object from a data table corresponding to the target scene;
and the sending module is used for generating an answer sentence according to the data of the inquiry object and pushing the answer sentence to the user.
10. A computer device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the question-answer sentence processing method according to any one of claims 1-8.
11. A computer-readable storage medium, in which computer-executable instructions are stored, which, when executed by a processor, implement the question-answer sentence processing method according to any one of claims 1 to 8.
CN202010209322.2A 2020-03-23 2020-03-23 Question and answer sentence processing method, device, equipment and storage medium Pending CN111339280A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010209322.2A CN111339280A (en) 2020-03-23 2020-03-23 Question and answer sentence processing method, device, equipment and storage medium
CN202011135020.1A CN112241450A (en) 2020-03-23 2020-10-21 Question and answer sentence processing method, device, equipment and storage medium combining RPA and AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010209322.2A CN111339280A (en) 2020-03-23 2020-03-23 Question and answer sentence processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111339280A true CN111339280A (en) 2020-06-26

Family

ID=71180357

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010209322.2A Pending CN111339280A (en) 2020-03-23 2020-03-23 Question and answer sentence processing method, device, equipment and storage medium
CN202011135020.1A Pending CN112241450A (en) 2020-03-23 2020-10-21 Question and answer sentence processing method, device, equipment and storage medium combining RPA and AI

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011135020.1A Pending CN112241450A (en) 2020-03-23 2020-10-21 Question and answer sentence processing method, device, equipment and storage medium combining RPA and AI

Country Status (1)

Country Link
CN (2) CN111339280A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000786A (en) * 2020-06-30 2020-11-27 北京来也网络科技有限公司 Dialogue robot problem processing method, device and equipment combining RPA and AI
CN112395399A (en) * 2020-11-13 2021-02-23 四川大学 Specific personality dialogue robot training method based on artificial intelligence

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220852B (en) * 2021-05-06 2023-04-25 支付宝(杭州)信息技术有限公司 Man-machine dialogue method, device, equipment and storage medium
CN113769395B (en) * 2021-09-28 2023-11-14 腾讯科技(深圳)有限公司 Virtual scene interaction method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5152314B2 (en) * 2010-12-16 2013-02-27 沖電気工業株式会社 Dialog management apparatus, method and program, and consciousness extraction system
CN107861961A (en) * 2016-11-14 2018-03-30 平安科技(深圳)有限公司 Dialog information generation method and device
CN110471538B (en) * 2018-05-10 2023-11-03 北京搜狗科技发展有限公司 Input prediction method and device
CN109977405A (en) * 2019-03-26 2019-07-05 北京博瑞彤芸文化传播股份有限公司 A kind of intelligent semantic matching process
CN110245224B (en) * 2019-06-20 2021-08-10 网易(杭州)网络有限公司 Dialog generation method and device


Also Published As

Publication number Publication date
CN112241450A (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN111339280A (en) Question and answer sentence processing method, device, equipment and storage medium
CN107797984B (en) Intelligent interaction method, equipment and storage medium
JP6431993B2 (en) Automatic response method, automatic response device, automatic response device, automatic response program, and computer-readable storage medium
CN109360550B (en) Testing method, device, equipment and storage medium of voice interaction system
CN109949071A (en) Products Show method, apparatus, equipment and medium based on voice mood analysis
CN110597952A (en) Information processing method, server, and computer storage medium
CN110536166B (en) Interactive triggering method, device and equipment of live application program and storage medium
US20180075014A1 (en) Conversational artificial intelligence system and method using advanced language elements
CN109979450B (en) Information processing method and device and electronic equipment
CN109615009B (en) Learning content recommendation method and electronic equipment
CN112084305A (en) Search processing method, device, terminal and storage medium applied to chat application
CN116501960B (en) Content retrieval method, device, equipment and medium
CN110956016A (en) Document content format adjusting method and device and electronic equipment
CN114625855A (en) Method, apparatus, device and medium for generating dialogue information
CN112417107A (en) Information processing method and device
CN112036164A (en) Sample generation method and device, computer-readable storage medium and electronic device
CN109582780B (en) Intelligent question and answer method and device based on user emotion
CN112532507A (en) Method and device for presenting expression image and method and device for sending expression image
CN116775815B (en) Dialogue data processing method and device, electronic equipment and storage medium
CN113378037B (en) Tariff configuration acquisition method and tariff configuration acquisition device
CN115221303A (en) Dialogue processing method and dialogue processing device
WO2021240673A1 (en) Conversation program, device, and method
CN111556096B (en) Information pushing method, device, medium and electronic equipment
CN111161706A (en) Interaction method, device, equipment and system
CN110535749A (en) Talk with method for pushing, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200626