CN115221303A - Dialogue processing method and dialogue processing device

Info

Publication number
CN115221303A
Authority
CN
China
Prior art keywords
service
robot
conversation
business
intention
Legal status
Pending
Application number
CN202210901239.0A
Other languages
Chinese (zh)
Inventor
蔡坤祥
张宏伟
庞立敏
李凯凯
邹紫城
Current Assignee
CLP Jinxin Software Shanghai Co., Ltd.
Original Assignee
CLP Jinxin Software Shanghai Co., Ltd.
Application filed by CLP Jinxin Software Shanghai Co., Ltd.
Priority to CN202210901239.0A
Publication of CN115221303A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems
    • G06F 16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 - Ontology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The application provides a dialogue processing method and a dialogue processing apparatus. The method includes: acquiring a query sentence received by a service window; identifying the user's real business intention from the query sentence by using a routing dialogue robot; determining, by the routing dialogue robot and according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots; and connecting the target business dialogue robot to the service window, so that the target business dialogue robot responds to the user's current round of dialogue. Through the cooperation of the routing dialogue robot and the business dialogue robots, a user can consult the businesses covered by several business dialogue robots through a single service window, which makes the dialogue between the user and the dialogue robots more convenient and efficient and avoids the problem of the user choosing the wrong service window or the wrong dialogue robot.

Description

Dialogue processing method and dialogue processing device
Technical Field
The present application relates to the field of intelligent dialogue robots, and in particular, to a dialogue processing method and a dialogue processing apparatus.
Background
With the development of artificial intelligence technology, intelligent dialogue robots are increasingly widely used, and more and more human-staffed service windows are being replaced by them.
Taking an enterprise as an example, an enterprise often provides users with multiple service windows, and each service window is served by a corresponding intelligent dialogue robot that provides specific business content. The user must first find the right service window among them, based on the user's own business need and on the business content each intelligent dialogue robot provides, and then enter the consultation information in that window to obtain the service. This dialogue mode is obviously cumbersome: the user easily picks the wrong service window, is connected to the wrong intelligent dialogue robot, and an incorrect work order is then generated, which wastes both the resources of the intelligent dialogue robots and the user's time.
Disclosure of Invention
In view of the above, an object of the present application is to provide a dialogue processing method and a dialogue processing apparatus in which a routing dialogue robot recognizes the user's real business intention, and the business dialogue robot corresponding to that intention is then connected to the service window to continue the dialogue with the user. Through the cooperation of the routing dialogue robot and the business dialogue robots, a user can consult the businesses covered by several business dialogue robots through a single service window, which makes the dialogue between the user and the dialogue robots more convenient and efficient and avoids the problem of the user choosing the wrong service window or the wrong dialogue robot.
An embodiment of the present application provides a dialogue processing method, which includes the following steps:
acquiring a query sentence received by a service window;
identifying the user's real business intention from the query sentence by using a routing dialogue robot;
determining, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots;
and connecting the target business dialogue robot to the service window, and responding to the user's current round of dialogue through the target business dialogue robot.
Further, the step of determining, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots includes:
if the real business intention is an intention unique to one business dialogue robot, determining that business dialogue robot as the target business dialogue robot;
if the real business intention is an intention shared by several business dialogue robots, extracting, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence;
and determining the business dialogue robot indicated by the entity condition as the target business dialogue robot.
Further, after the step of extracting, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence, the method further includes:
if no entity condition that can indicate a business dialogue robot is extracted from the query sentence, outputting a guidance script through the service window;
receiving an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, after the step of acquiring the query sentence received by the service window, the method further includes:
if the routing dialogue robot cannot identify the user's real business intention from the query sentence, outputting a guidance script through the service window;
receiving an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, after the step of connecting the target business dialogue robot to the service window and responding to the user's current round of dialogue through the target business dialogue robot, the method further includes:
identifying, by the target business dialogue robot, the next query sentence received by the service window;
if the next real business intention that the target business dialogue robot identifies from the next query sentence indicates that the user requests to call another business dialogue robot, determining the other business dialogue robot that the user requests to call as the target business dialogue robot;
and connecting that target business dialogue robot to the service window, and continuing to respond to the current round of dialogue through it.
Further, the step of identifying the user's real business intention from the query sentence by using the routing dialogue robot includes:
identifying the query sentence by the routing dialogue robot, and determining, for each business intention, the likelihood score that the intention indicated by the query sentence is that business intention;
if the highest of these likelihood scores is greater than a preset score threshold, determining the business intention corresponding to the highest likelihood score as the real business intention;
and if the highest likelihood score is less than or equal to the preset score threshold but the difference between the highest and the second-highest likelihood scores is greater than a preset difference threshold, determining the business intention corresponding to the highest likelihood score as the real business intention.
Further, the step of constructing each business dialogue robot includes:
determining, according to the business content in each business scene, the business intentions in that scene, at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
for each business scene, constructing the business knowledge graph corresponding to the scene from the business intentions in the scene, the at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
training an initial business natural language understanding model with the business intentions in the scene, expression sentences that can reflect each business intention, and the entity conditions associated with each business intention, to obtain the business natural language understanding model corresponding to the scene;
and combining the business knowledge graph corresponding to the scene with the business natural language understanding model corresponding to the scene to obtain the business dialogue robot corresponding to the scene.
Further, the step of constructing the routing dialogue robot includes:
determining, according to the business dialogue robots corresponding to the business scenes, the business intentions unique to a single business dialogue robot and the business intentions shared by several business dialogue robots;
configuring a call instruction for each business intention unique to a single business dialogue robot;
configuring a guidance script for each business intention shared by several business dialogue robots;
constructing a routing knowledge graph from the call instructions configured for the unique business intentions and the guidance scripts configured for the shared business intentions;
training an initial routing natural language understanding model with the unique and shared business intentions of all the business dialogue robots, with expression sentences extracted from the expression sentences used to train the business natural language understanding models, and with the entity conditions that can indicate the business dialogue robots, to obtain the routing natural language understanding model;
and combining the routing knowledge graph with the routing natural language understanding model to obtain the routing dialogue robot.
An embodiment of the present application further provides a dialogue processing apparatus, which includes:
an acquisition module, configured to acquire a query sentence received by a service window;
a first identification module, configured to identify the user's real business intention from the query sentence by using a routing dialogue robot;
a determination module, configured to determine, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots;
and an access module, configured to connect the target business dialogue robot to the service window and respond to the user's current round of dialogue through the target business dialogue robot.
Further, when determining, by the routing dialogue robot and according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots, the determination module is configured to:
if the real business intention is an intention unique to one business dialogue robot, determine that business dialogue robot as the target business dialogue robot;
if the real business intention is an intention shared by several business dialogue robots, extract, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence;
and determine the business dialogue robot indicated by the entity condition as the target business dialogue robot.
Further, after extracting, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence, the determination module is further configured to:
if no entity condition that can indicate a business dialogue robot is extracted from the query sentence, output a guidance script through the service window;
receive an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determine the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, the apparatus also includes an output module; the output module is configured to:
if the routing dialogue robot cannot identify the user's real business intention from the query sentence, output a guidance script through the service window;
receive an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determine the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, the apparatus also includes a second identification module; the second identification module is configured to:
identify, by the target business dialogue robot, the next query sentence received by the service window;
if the next real business intention that the target business dialogue robot identifies from the next query sentence indicates that the user requests to call another business dialogue robot, determine the other business dialogue robot that the user requests to call as the target business dialogue robot;
and connect that target business dialogue robot to the service window, and continue to respond to the current round of dialogue through it.
Further, when identifying the user's real business intention from the query sentence by using the routing dialogue robot, the first identification module is configured to:
identify the query sentence by the routing dialogue robot, and determine, for each business intention, the likelihood score that the intention indicated by the query sentence is that business intention;
if the highest of these likelihood scores is greater than a preset score threshold, determine the business intention corresponding to the highest likelihood score as the real business intention;
and if the highest likelihood score is less than or equal to the preset score threshold but the difference between the highest and the second-highest likelihood scores is greater than a preset difference threshold, determine the business intention corresponding to the highest likelihood score as the real business intention.
Further, the dialogue processing apparatus also includes a construction module; the construction module is configured to construct the business dialogue robots by:
determining, according to the business content in each business scene, the business intentions in that scene, at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
for each business scene, constructing the business knowledge graph corresponding to the scene from the business intentions in the scene, the at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
training an initial business natural language understanding model with the business intentions in the scene, expression sentences that can reflect each business intention, and the entity conditions associated with each business intention, to obtain the business natural language understanding model corresponding to the scene;
and combining the business knowledge graph corresponding to the scene with the business natural language understanding model corresponding to the scene to obtain the business dialogue robot corresponding to the scene.
Further, the construction module is further configured to construct the routing dialogue robot by:
determining, according to the business dialogue robots corresponding to the business scenes, the business intentions unique to a single business dialogue robot and the business intentions shared by several business dialogue robots;
configuring a call instruction for each business intention unique to a single business dialogue robot;
configuring a guidance script for each business intention shared by several business dialogue robots;
constructing a routing knowledge graph from the call instructions configured for the unique business intentions and the guidance scripts configured for the shared business intentions;
training an initial routing natural language understanding model with the unique and shared business intentions of all the business dialogue robots, with expression sentences extracted from the expression sentences used to train the business natural language understanding models, and with the entity conditions that can indicate the business dialogue robots, to obtain the routing natural language understanding model;
and combining the routing knowledge graph with the routing natural language understanding model to obtain the routing dialogue robot.
An embodiment of the present application further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the dialogue processing method described above are performed.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the dialogue processing method described above are performed.
The embodiments of the present application provide a dialogue processing method and a dialogue processing apparatus. The method includes: acquiring a query sentence received by a service window; identifying the user's real business intention from the query sentence by using a routing dialogue robot; determining, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots; and connecting the target business dialogue robot to the service window, and responding to the user's current round of dialogue through the target business dialogue robot.
In this way, through the cooperation of the routing dialogue robot and the business dialogue robots, a user can consult the businesses covered by several business dialogue robots through a single service window, which makes the dialogue between the user and the dialogue robots more convenient and efficient and avoids the problem of the user choosing the wrong service window or the wrong dialogue robot.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be regarded as limiting the scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a dialogue processing method provided in an embodiment of the present application;
Fig. 2 is a first schematic structural diagram of a dialogue processing apparatus provided in an embodiment of the present application;
Fig. 3 is a second schematic structural diagram of a dialogue processing apparatus provided in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Every other embodiment obtained by a person skilled in the art without inventive effort, based on the embodiments of the present application, falls within the protection scope of the present application.
Research shows that, with the development of artificial intelligence technology, intelligent dialogue robots are increasingly widely used, and more and more human-staffed service windows are being replaced by them.
Taking an enterprise as an example, an enterprise often provides users with multiple service windows, and each service window is served by a corresponding intelligent dialogue robot that provides specific business content. The user must first find the right service window among them, based on the user's own business need and on the business content each intelligent dialogue robot provides, and then enter the consultation information in that window to obtain the service. This dialogue mode is obviously cumbersome: the user easily picks the wrong service window, is connected to the wrong intelligent dialogue robot, and an incorrect work order is then generated, which wastes both robot resources and the user's time.
Therefore, the embodiments of the present application provide a dialogue processing method and a dialogue processing apparatus, which make the dialogue between the user and the dialogue robots more convenient and efficient and avoid the problem of the user choosing the wrong service window or the wrong dialogue robot.
Referring to Fig. 1, Fig. 1 is a flowchart of a dialogue processing method provided in an embodiment of the present application. As shown in Fig. 1, the dialogue processing method includes:
S101, acquiring the query sentence received by the service window.
In this step, a virtual service-window interface may be provided on the electronic device as the channel through which the user interacts with the dialogue robots. For example, the user holds a consultation dialogue with an intelligent dialogue robot through the service window and can enter different query sentences in the window to consult the business the user wants to transact. The query sentence entered by the user is therefore received through the service window, and the routing dialogue robot is connected to the service window first to converse with the user.
S102, determining whether the user's real business intention can be identified from the query sentence by using the routing dialogue robot.
It should be noted that current dialogue robots usually rely on a natural language understanding (NLU) model to recognize the user intention indicated by a dialogue in each application scene. Natural language understanding is the general term for the models and tasks that enable a machine to understand text content: a computer is used to simulate the human process of language communication, so that it can understand and use the natural languages of human society, such as Chinese and English, and communicate with people in natural language. An NLU model is a trained model that can recognize the intention indicated by natural language and is mainly used to identify the intention corresponding to a piece of natural language; the BERT model is one example.
In this step, therefore, the routing dialogue robot can identify the query sentence through its routing natural language understanding model and determine whether the user's real business intention can be identified from the query sentence.
The real business intention is the meaning actually expressed by the query sentence entered by the user, and it indicates the business the user wants to transact.
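As an illustration only, the sketch below shows how a routing NLU model of this kind might score a query sentence against the configured business intentions. The Hugging Face transformers pipeline and the model name "your-org/routing-intent-bert" are assumptions made for the example; the embodiment does not prescribe any particular library or checkpoint.

```python
# Minimal sketch: score a query sentence against the routing robot's intention labels.
# The model name is a hypothetical fine-tuned checkpoint, not part of this application.
from transformers import pipeline

intent_scorer = pipeline(
    "text-classification",
    model="your-org/routing-intent-bert",  # hypothetical BERT fine-tuned on business intentions
    top_k=None,                            # return a likelihood score for every intention label
)

def score_intentions(query_sentence: str) -> dict[str, float]:
    """Return {intention label: likelihood score} for one query sentence."""
    results = intent_scorer([query_sentence])[0]
    return {item["label"]: item["score"] for item in results}
```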
In one possible implementation, step S102 may include the following steps:
and S1021, identifying the query sentence by using the shunting conversation robot, and determining the probability score of the intention indicated by the query sentence as each service intention.
In a specific implementation, the routing dialogue robot can identify the query sentence in any manner known in the prior art and determine, for each business intention, the likelihood score that the intention indicated by the query sentence is that business intention. The likelihood score represents how likely it is that the query sentence points to the intention corresponding to each intention label: the larger the likelihood score, the more likely it is that the query sentence points to that intention.
Then, whether the user's real business intention can be identified from the query sentence can be determined from these likelihood scores. It should be noted that even the business intention with the highest likelihood score is not necessarily a real business intention that accurately characterizes the specific business indicated by the query sentence. To further improve the accuracy of intention recognition, it is therefore necessary to judge whether the business intention with the highest likelihood score is credible, that is, to decide from the likelihood scores whether the user's real business intention can be identified from the query sentence.
S1022, if the highest of the likelihood scores is greater than a preset score threshold, determining the business intention corresponding to the highest likelihood score as the real business intention.
S1023, if the highest likelihood score is less than or equal to the preset score threshold but the difference between the highest and the second-highest likelihood scores is greater than a preset difference threshold, determining the business intention corresponding to the highest likelihood score as the real business intention.
Specifically, the business intention with the highest likelihood score is considered credible, and is determined as the real business intention, if its likelihood score is greater than the preset score threshold, or if its likelihood score is less than or equal to the preset score threshold but the difference between it and the second-highest likelihood score is greater than the preset difference threshold.
The preset score threshold may be a fixed constant, for example 0.7; similarly, the preset difference threshold may be a fixed constant, for example 0.25. As an example of how these two thresholds can be obtained: after the routing dialogue robot is constructed, it is tested with a large test set (containing test consultation sentences and the test intention label corresponding to each sentence). The test results mainly consist of, for each test consultation sentence, the business intentions with the highest and the second-highest likelihood scores determined by the routing dialogue robot, their likelihood scores, and the difference between those scores. If the business intention with the highest likelihood score is the same as the intention indicated by the test intention label, the recognition is correct; otherwise it is a misrecognition.
The statistics show that when the highest likelihood score is greater than or equal to 0.7, the proportion of credible intention results is high enough for the result to be treated as credible. When the highest likelihood score is less than 0.7, the difference between the highest and the second-highest likelihood scores is further calculated for each group of test samples, and the calculated differences are counted and binned to derive an effective difference threshold. Two binning schemes are used: one by cumulative range, such as [0, 1), [0.1, 1), and [0.2, 1); the other by interval, such as [0, 0.1), [0.1, 0.2), and [0.2, 0.3).
The statistics show that when the difference between the highest and the second-highest scores lies in the range 0.2 to 0.3, the number of results that are actually not credible but are misjudged as credible can be kept within 3 per group of test samples, while the number of correctly recognized results is at least 42; for a group of test samples the positive benefit therefore far outweighs the negative effect, and the negative effect stays within an acceptable range. Accordingly, when the highest likelihood score is less than 0.7 but the difference between the highest and the second-highest likelihood scores is greater than or equal to 0.25, the highest-scoring intention is also treated as a credible result.
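A minimal sketch of this credibility check is given below, using the example thresholds from the statistics above (score threshold 0.7, difference threshold 0.25); both values are configuration choices rather than fixed parts of the method.

```python
SCORE_THRESHOLD = 0.7   # preset score threshold from the example statistics
DIFF_THRESHOLD = 0.25   # preset difference threshold from the example statistics

def pick_real_intention(scores: dict[str, float]) -> str | None:
    """Return the credible business intention, or None if no result is credible."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked:
        return None
    top_label, top_score = ranked[0]
    second_score = ranked[1][1] if len(ranked) > 1 else 0.0
    if top_score > SCORE_THRESHOLD:
        return top_label                          # the highest score alone is credible
    if top_score - second_score >= DIFF_THRESHOLD:
        return top_label                          # large enough gap to the runner-up
    return None                                   # not credible: fall back to a guidance script
```

Combined with the score_intentions() sketch above, pick_real_intention(score_intentions(query_sentence)) yields either the real business intention or None, in which case the fallback of steps S108 to S110 applies.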
Further, if the routing dialogue robot does identify the user's real business intention from the query sentence, step S103 is executed: determining, by the routing dialogue robot and according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots.
It should be noted that a business dialogue robot is usually constructed for a specific business scene, and the business scenes corresponding to different business dialogue robots are independent of one another. A constructed business dialogue robot can recognize the various intentions related to its business scene, and these intentions indicate the business the user wants to transact. For example, business dialogue robot A is constructed for the business scene "businesses related to the data transmission system", and the business intentions it can recognize include "unable to log in to the system", "unable to download the announcement", and so on. As another example, business dialogue robot B is constructed for the business scene "businesses related to the merchandise display system", and the business intentions it can recognize include "unable to log in to the system", "the product price is wrong", and so on.
In one possible implementation, the step of constructing each business dialogue robot includes:
Step 1: determining, according to the business content in each business scene, the business intentions in that scene, at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition.
Step 2: for each business scene, constructing the business knowledge graph corresponding to the scene from the business intentions in the scene, the at least one candidate business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition.
Step 3: training an initial business natural language understanding model with the business intentions in the scene, expression sentences that can reflect each business intention, and the entity conditions associated with each business intention, to obtain the business natural language understanding model corresponding to the scene.
Step 4: combining the business knowledge graph corresponding to the scene with the business natural language understanding model corresponding to the scene to obtain the business dialogue robot corresponding to the scene.
It should be noted that, in some cases, after a specific business intention has been identified by the business natural language understanding model, querying the business knowledge graph shows that the real business intention corresponds to several candidate business reply results. At that point the final, accurate business reply result to feed back to the user cannot yet be determined, so a guidance script is fed back to the user first; the user supplements the entity conditions according to the guidance script, and the actual business reply result for the intention is then determined from those entity conditions. For every business intention that corresponds to several candidate business reply results, the entity conditions associated with that intention are what select the actual business reply result from among the candidates. For example, after recognizing that the user's business intention is "unable to download the announcement", the robot feeds back the guidance script "May I ask which browser model you use?"; after receiving the guidance script, the user may enter "I use browser xx" in the current consultation dialogue, and the business natural language understanding model can then recognize the entity condition "browser model: xx". In this way, after one or more rounds of interaction, the user has supplemented, according to the guidance scripts, all the entity conditions required to obtain the business reply result; no further guidance script needs to be fed back, and the actual business reply result to return to the user can be determined, for example "the solution for 'unable to download the announcement' is: xx". In other cases, a real business intention corresponds to only one candidate business reply result and has no associated entity conditions or guidance scripts; after the business natural language understanding model identifies such an intention, querying the business knowledge graph directly shows that it corresponds to a single candidate reply, and that candidate is determined as the final, accurate business reply result to feed back to the user.
A business dialogue robot constructed in this way therefore works as follows: after its business natural language understanding model has identified a business intention in its business scene, it first uses the business knowledge graph to determine whether a business reply result for the real intention can already be obtained. If so, the business reply result is fed back to the user, and the user transacts the business as the reply indicates. If not, the business knowledge graph is used to determine the guidance script corresponding to each entity condition still required to obtain the business reply result, and the guidance scripts are fed back to the user so that the user supplements the required entity conditions; finally, the business reply result for the business intention is determined from the supplemented entity conditions and output.
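The following sketch illustrates this reply flow with a toy, dictionary-based business knowledge graph; the field names and example replies are illustrative assumptions, not the data model of the embodiment.

```python
# Toy business knowledge graph for business dialogue robot A (illustrative only).
BUSINESS_GRAPH = {
    "unable to download the announcement": {
        "required_entity": "browser model",
        "guidance_script": "May I ask which browser model you use?",
        "replies": {"xx": "Solution when browser xx cannot download the announcement: ..."},
    },
    "unable to log in to the system": {
        "required_entity": None,      # only one candidate reply, no entity condition needed
        "guidance_script": None,
        "replies": {None: "Please reset your password and try logging in again."},
    },
}

def business_reply(intention: str, entities: dict[str, str]) -> str:
    """Return either the business reply result or the guidance script still needed."""
    node = BUSINESS_GRAPH[intention]
    needed = node["required_entity"]
    if needed is None:
        return node["replies"][None]          # single candidate reply can be returned directly
    if needed not in entities:
        return node["guidance_script"]        # ask the user to supplement the entity condition
    return node["replies"][entities[needed]]  # select the actual reply by the entity value
```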
In one possible implementation, the step of constructing the routing dialogue robot includes:
Step 1: determining, according to the business dialogue robots corresponding to the business scenes, the business intentions unique to a single business dialogue robot and the business intentions shared by several business dialogue robots.
Step 2: configuring a call instruction for each business intention unique to a single business dialogue robot.
The call instruction is used to call the business dialogue robot corresponding to the unique business intention, so that the corresponding business dialogue robot is connected to the service window.
Step 3: configuring a guidance script for each business intention shared by several business dialogue robots.
The guidance script is used to guide the user into supplementing an entity condition that can indicate a business dialogue robot.
Step 4: constructing a routing knowledge graph from the call instructions configured for the unique business intentions and the guidance scripts configured for the shared business intentions.
Step 5: training an initial routing natural language understanding model with the unique and shared business intentions of all the business dialogue robots, with expression sentences extracted from the expression sentences used to train the business natural language understanding models, and with the entity conditions that can indicate the business dialogue robots, to obtain the routing natural language understanding model.
Step 6: combining the routing knowledge graph with the routing natural language understanding model to obtain the routing dialogue robot.
A routing dialogue robot constructed in this way therefore works as follows: after its routing natural language understanding model has identified a real business intention, it first uses the routing knowledge graph to determine whether that intention is unique to a single business dialogue robot or shared by two or more business dialogue robots. If the intention is unique to a business dialogue robot, the call instruction configured for that unique intention is used to call the corresponding business dialogue robot and connect it to the service window. If the intention is shared by two or more business dialogue robots, the guidance script configured for the shared intention is fed back to the user; the entity condition is then identified from the user's answer, the target business dialogue robot that the user requests to access is determined from that entity condition, and the call instruction is used to call the corresponding business dialogue robot and connect it to the service window.
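A minimal sketch of this routing decision is shown below; the robot identifiers, the dictionary layout of the routing knowledge graph, and the simple keyword-based entity extraction are all illustrative assumptions.

```python
# Toy routing knowledge graph (illustrative only): unique intentions carry a call
# instruction, shared intentions carry a guidance script and an entity-to-robot map.
ROUTING_GRAPH = {
    "unable to download the announcement": {"kind": "unique", "call": "robot_A"},
    "unable to log in to the system": {
        "kind": "shared",
        "guidance_script": "May I ask which system's business you want to consult?",
        "by_entity": {
            "data transmission system": "robot_A",
            "merchandise display system": "robot_B",
        },
    },
}

def extract_entity(query_sentence: str) -> str | None:
    """Placeholder for the routing NLU model's entity extraction."""
    for name in ("data transmission system", "merchandise display system"):
        if name in query_sentence:
            return name
    return None

def route(intention: str, query_sentence: str) -> tuple[str, str]:
    """Return ("call", robot id) or ("ask", guidance script)."""
    node = ROUTING_GRAPH[intention]
    if node["kind"] == "unique":
        return ("call", node["call"])                  # call instruction of the owning robot
    entity = extract_entity(query_sentence)
    if entity in node["by_entity"]:
        return ("call", node["by_entity"][entity])     # shared intention resolved by the entity
    return ("ask", node["guidance_script"])            # entity missing: output the guidance script
```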
In one possible implementation, step S103 may include the following steps:
S1031, determining whether the real business intention is unique to a single business dialogue robot.
S1032, if the real business intention is unique to a single business dialogue robot, determining that business dialogue robot as the target business dialogue robot.
S1033, if the real business intention is shared by several business dialogue robots, extracting, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence.
S1034, determining the business dialogue robot indicated by the entity condition as the target business dialogue robot.
For steps S1031 to S1034, continuing the example above: if the real business intention recognized by the routing dialogue robot is "unable to download the announcement", this intention is unique to business dialogue robot A, so robot A can be determined directly as the target business dialogue robot.
If the real business intention recognized by the routing dialogue robot is "unable to log in to the system", this intention is shared by several business dialogue robots, including robot A and robot B, so the target business dialogue robot that the user requests to access cannot be determined directly. In that case the routing dialogue robot must identify the query sentence further, extract from it an entity condition that can indicate a business dialogue robot, and determine the business dialogue robot indicated by that entity condition as the target business dialogue robot.
Further, if no entity condition indicating a business dialogue robot is extracted from the query sentence in step S1033, the target business dialogue robot that the user requests to access is determined by the following steps:
First, outputting the guidance script through the service window.
Second, receiving an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot.
Third, determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
S104, connecting the target business dialogue robot to the service window, and responding to the user's current round of dialogue through the target business dialogue robot.
In a possible implementation, after step S104, the method further includes:
and S105, identifying the next query statement received by the service window by using the target business conversation robot.
S106, if the next real service intention identified by the target service dialogue robot from the next query statement indicates that the user requests to call other service dialogue robots, determining the other service dialogue robots requested to be called by the user as the target service dialogue robots.
And S107, accessing the target business conversation robot into the service window, and continuously responding to the current conversation through the target business conversation robot.
Here, the business intentions that each business dialogue robot can recognize include intentions of requesting access to any of the other business dialogue robots, and the business reply result for such an intention is the call instruction that calls the other business dialogue robot. For example, besides business intentions within robot A such as "unable to log in to the system" and "unable to download the announcement", the business intentions that target business dialogue robot A can recognize from a query sentence also include "the user requests to call business dialogue robot B", "the user requests to call business dialogue robot C", and so on. When, in the user's current round of dialogue, the business intention that target robot A identifies from the next query sentence received by the service window indicates that the user requests to call another business dialogue robot C, a call instruction is sent to robot C, the robot C that the user requests to call is connected to the service window, and robot C then continues the dialogue with the user through the service window and continues to respond to the current round of dialogue.
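The hand-off can be pictured with the small sketch below, in which one service window keeps a single active robot and swaps it whenever the recognized intention is a request to call another robot; the class names and the "call:" label convention are assumptions made for the example, not part of the embodiment.

```python
class StubRobot:
    """Illustrative stand-in for a business dialogue robot."""
    def __init__(self, name: str, call_phrases: dict[str, str]):
        self.name = name
        self.call_phrases = call_phrases          # query sentence -> "call:<robot id>"

    def recognize(self, query_sentence: str) -> str:
        return self.call_phrases.get(query_sentence, "answer")

    def respond(self, query_sentence: str) -> str:
        return f"[{self.name}] reply to: {query_sentence}"

class ServiceWindow:
    """One service window whose active robot can be swapped mid-conversation."""
    def __init__(self, robots: dict[str, StubRobot], active: str):
        self.robots = robots
        self.active = robots[active]

    def handle(self, query_sentence: str) -> str:
        intention = self.active.recognize(query_sentence)
        if intention.startswith("call:"):                      # e.g. "call:robot_C"
            self.active = self.robots[intention.split(":", 1)[1]]
        return self.active.respond(query_sentence)             # the (possibly new) robot answers
```

For instance, if the active robot A recognizes "call:robot_C" from the next query sentence, the window swaps its active robot to robot C, and robot C answers within the same round of dialogue.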
In one possible implementation, after step S102 determines whether the user's real business intention can be identified from the query sentence by the routing dialogue robot, if the real business intention cannot be identified, step S108 is executed: outputting a guidance script through the service window.
S109, receiving an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot.
S110, determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
The target business dialogue robot determined from the supplemented entity condition is then connected to the service window, and the target business dialogue robot responds to the user's current round of dialogue.
Here, when the user's real business intention cannot be recognized from the query sentence, a guidance script is fed back, for example "May I ask which system's business you want to consult?". The entity condition contained in the user's answer is then acquired: for example, the answer "I want to consult a business in the merchandise display system" contains the entity condition "merchandise display system". From the entity condition "merchandise display system", the target business dialogue robot that the user requests to access is determined to be business dialogue robot B, and robot B is connected to the service window.
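The fallback of steps S108 to S110 can be sketched as a small lookup from the entity condition in the user's answer to a target robot; the mapping below simply reuses the example robots and is not an exhaustive configuration.

```python
ENTITY_TO_ROBOT = {
    "merchandise display system": "robot_B",   # illustrative mapping only
    "data transmission system": "robot_A",
}

def fallback_route(answer_to_guidance: str) -> str | None:
    """Map the user's answer to the guidance script onto a target business dialogue robot."""
    for entity, robot in ENTITY_TO_ROBOT.items():
        if entity in answer_to_guidance:
            return robot
    return None   # still no indicative entity: output the guidance script again

# e.g. fallback_route("I want to consult a business in the merchandise display system") -> "robot_B"
```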
The dialogue processing method provided by the embodiments of the present application therefore includes: acquiring a query sentence received by a service window; identifying the user's real business intention from the query sentence by using a routing dialogue robot; determining, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots; and connecting the target business dialogue robot to the service window, and responding to the user's current round of dialogue through the target business dialogue robot.
In this way, through the cooperation of the routing dialogue robot and the business dialogue robots, a user can consult the businesses covered by several business dialogue robots through a single service window, which makes the dialogue between the user and the dialogue robots more convenient and efficient and avoids the problem of the user choosing the wrong service window or the wrong dialogue robot.
Referring to Fig. 2 and Fig. 3: Fig. 2 is a first schematic structural diagram of a dialogue processing apparatus provided in an embodiment of the present application, and Fig. 3 is a second schematic structural diagram of the apparatus. As shown in Fig. 2, the dialogue processing apparatus 200 includes:
an acquisition module 210, configured to acquire a query sentence received by a service window;
a first identification module 220, configured to identify the user's real business intention from the query sentence by using a routing dialogue robot;
a determination module 230, configured to determine, by the routing dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots;
and an access module 240, configured to connect the target business dialogue robot to the service window and respond to the user's current round of dialogue through the target business dialogue robot.
Further, when determining, by the routing dialogue robot and according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots, the determination module 230 is configured to:
if the real business intention is an intention unique to one business dialogue robot, determine that business dialogue robot as the target business dialogue robot;
if the real business intention is an intention shared by several business dialogue robots, extract, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence;
and determine the business dialogue robot indicated by the entity condition as the target business dialogue robot.
Further, after extracting, by the routing dialogue robot, an entity condition that can indicate a business dialogue robot from the query sentence, the determination module 230 is further configured to:
if no entity condition that can indicate a business dialogue robot is extracted from the query sentence, output a guidance script through the service window;
receive an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determine the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, the dialogue processing apparatus 200 also includes an output module 250; the output module 250 is configured to:
if the routing dialogue robot cannot identify the user's real business intention from the query sentence, output a guidance script through the service window;
receive an entity condition, supplemented by the user according to the guidance script, that can indicate a business dialogue robot;
and determine the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
Further, the dialogue processing apparatus 200 also includes a second identification module 260; the second identification module 260 is configured to:
identify, by the target business dialogue robot, the next query sentence received by the service window;
if the next real business intention that the target business dialogue robot identifies from the next query sentence indicates that the user requests to call another business dialogue robot, determine the other business dialogue robot that the user requests to call as the target business dialogue robot;
and connect that target business dialogue robot to the service window, and continue to respond to the current round of dialogue through it.
Further, when identifying the user's real business intention from the query sentence by using the routing dialogue robot, the first identification module 220 is configured to:
identify the query sentence by the routing dialogue robot, and determine, for each business intention, the likelihood score that the intention indicated by the query sentence is that business intention;
if the highest of these likelihood scores is greater than a preset score threshold, determine the business intention corresponding to the highest likelihood score as the real business intention;
and if the highest likelihood score is less than or equal to the preset score threshold but the difference between the highest and the second-highest likelihood scores is greater than a preset difference threshold, determine the business intention corresponding to the highest likelihood score as the real business intention.
Further, the dialog processing device 200 further includes a construction module 270; the construction module 270 is configured to build each business dialogue robot by:
determining, from the business content of each business scene, the business intentions under that scene, at least one optional business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
for each business scene, constructing a business knowledge graph corresponding to that scene from its business intentions, the at least one optional business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
training an initial business natural language understanding model with the business intentions under that scene, expression sentences reflecting each business intention, and the entity conditions associated with each business intention, to obtain the business natural language understanding model corresponding to that scene;
and combining the business knowledge graph and the business natural language understanding model of that scene to obtain the business dialogue robot corresponding to that scene.
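Assembling one business dialogue robot per business scene can be sketched as follows. The data shapes, the dataclass names, and the `train_nlu` callable are assumptions for illustration; the application does not prescribe a particular model type or storage format.

```python
# Illustrative sketch: a business dialogue robot is the combination of a
# business knowledge graph and a business NLU model trained for one scene.

from dataclasses import dataclass

@dataclass
class BusinessKnowledgeGraph:
    replies: dict            # business intention -> optional business reply results
    entity_conditions: dict  # business intention -> associated entity conditions
    guidance_scripts: dict   # entity condition -> guidance script used to elicit it

@dataclass
class BusinessDialogueRobot:
    scene: str
    graph: BusinessKnowledgeGraph
    nlu_model: object        # trained business NLU model for this scene

def build_business_robot(scene, intentions, replies, entity_conditions,
                         guidance_scripts, expression_sentences, train_nlu):
    graph = BusinessKnowledgeGraph(replies, entity_conditions, guidance_scripts)
    # Train the scene's NLU model on its intentions, the expression sentences
    # reflecting them, and the associated entity conditions.
    nlu_model = train_nlu(intentions, expression_sentences, entity_conditions)
    return BusinessDialogueRobot(scene=scene, graph=graph, nlu_model=nlu_model)
```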
Further, the construction module 270 is further configured to build the shunting dialogue robot by:
determining, from the business dialogue robots corresponding to the business scenes, which business intentions are unique to a single business dialogue robot and which are common to several of them;
configuring an invocation instruction for each business intention that is unique to one business dialogue robot;
configuring a guidance script for each business intention that is common to several business dialogue robots;
constructing a shunting knowledge graph from the invocation instructions configured for the unique business intentions and the guidance scripts configured for the common business intentions;
training an initial shunting natural language understanding model with the unique and common business intentions of all the business dialogue robots, with expression sentences extracted from those used to train the business natural language understanding models, and with the entity conditions that can indicate a business dialogue robot, to obtain the shunting natural language understanding model;
and combining the shunting knowledge graph with the shunting natural language understanding model to obtain the shunting dialogue robot.
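Deriving the shunting knowledge graph from the per-scene robots amounts to partitioning business intentions by how many robots own them. The sketch below shows that partition; the tuple-based graph representation and `make_guidance_script` are assumptions for the example only.

```python
# Sketch of building the shunting knowledge graph: intentions owned by exactly
# one business dialogue robot get an invocation instruction, intentions shared
# by several robots get a guidance script asking the user to choose.

from collections import defaultdict

def build_shunting_knowledge_graph(robot_intentions, make_guidance_script):
    """`robot_intentions` maps robot id -> set of business intentions.
    Returns {intention: ("invoke", robot_id)} for unique intentions and
    {intention: ("guide", script)} for common ones."""
    owners = defaultdict(list)
    for robot_id, intentions in robot_intentions.items():
        for intention in intentions:
            owners[intention].append(robot_id)

    graph = {}
    for intention, robot_ids in owners.items():
        if len(robot_ids) == 1:
            graph[intention] = ("invoke", robot_ids[0])            # unique intention
        else:
            graph[intention] = ("guide", make_guidance_script(robot_ids))
    return graph
```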
The embodiment of the application provides a dialogue processing device configured to: acquire a query statement received by a service window; identify the real business intention of the user from the query statement by using the shunting dialogue robot; determine, by using the shunting dialogue robot and according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots; and access the target business dialogue robot into the service window, so that it responds to the current round of the user's conversation.
Therefore, through the cooperation of the shunting dialogue robot and the business dialogue robots, a user can consult the businesses of several business dialogue robots through a single service window, which makes the conversation between the user and the dialogue robots more convenient and efficient and keeps the user from picking the wrong service window or the wrong dialogue robot.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 4, the electronic device 400 includes a processor 410, a memory 420, and a bus 430.
The memory 420 stores machine-readable instructions executable by the processor 410. When the electronic device 400 runs, the processor 410 communicates with the memory 420 through the bus 430, and when the machine-readable instructions are executed by the processor 410, the steps of the dialog processing method in the method embodiment shown in fig. 1 may be performed.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the dialog processing method in the method embodiment shown in fig. 1.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some communication interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the scope of protection of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments and shall all be covered by the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of dialog processing, the method comprising:
acquiring a query statement received by a service window;
identifying the real business intention of the user from the query statement by using a shunting dialogue robot;
determining, by using the shunting dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots;
and accessing the target business dialogue robot into the service window, and responding to the current round of the user's conversation through the target business dialogue robot.
2. The method according to claim 1, wherein the step of using the shunting dialogue robot to determine, according to the real business intention, the target business dialogue robot that the user requests to access from among the business dialogue robots comprises:
if the real business intention is unique to one business dialogue robot, determining that business dialogue robot as the target business dialogue robot;
if the real business intention is common to a plurality of business dialogue robots, extracting from the query statement, by using the shunting dialogue robot, an entity condition capable of indicating a business dialogue robot;
and determining the business dialogue robot indicated by the entity condition as the target business dialogue robot.
3. The method of claim 2, wherein after the step of extracting from the query statement, by using the shunting dialogue robot, an entity condition capable of indicating a business dialogue robot, the method further comprises:
if no entity condition capable of indicating a business dialogue robot is extracted from the query statement, outputting a guidance script through the service window;
receiving an entity condition capable of indicating a business dialogue robot and supplemented according to the guidance script;
and determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
4. The method of claim 1, wherein after the step of obtaining the query statement received by the service window, the method further comprises:
if the real business intention of the user is not identified from the query statement by using the shunting dialogue robot, outputting a guidance script through the service window;
receiving an entity condition capable of indicating a business dialogue robot and supplemented according to the guidance script;
and determining the business dialogue robot indicated by the supplemented entity condition as the target business dialogue robot.
5. The method of claim 1, wherein after the step of accessing the target business dialogue robot into the service window and responding to the current round of the user's conversation through the target business dialogue robot, the method further comprises:
identifying a next query statement received by the service window by using the target business dialogue robot;
if the next real business intention that the target business dialogue robot identifies from the next query statement indicates that the user requests to invoke another business dialogue robot, determining that other business dialogue robot as the new target business dialogue robot;
and accessing the new target business dialogue robot into the service window, and continuing to respond to the current conversation through it.
6. The method of claim 1, wherein the step of identifying the real business intention of the user from the query statement by using the shunting dialogue robot comprises:
recognizing the query statement by using the shunting dialogue robot, and determining, for each business intention, a likelihood score that the query statement indicates that business intention;
if the highest of these likelihood scores is greater than a preset score threshold, determining the business intention corresponding to the highest likelihood score as the real business intention;
and if the highest likelihood score is less than or equal to the preset score threshold but exceeds the second highest likelihood score by more than a preset difference threshold, determining the business intention corresponding to the highest likelihood score as the real business intention.
7. The method of claim 1, wherein the step of building each business dialogue robot comprises:
determining, from the business content of each business scene, the business intentions under that scene, at least one optional business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
for each business scene, constructing a business knowledge graph corresponding to that scene from its business intentions, the at least one optional business reply result corresponding to each business intention, the entity conditions associated with each business intention, and the guidance script corresponding to each entity condition;
training an initial business natural language understanding model with the business intentions under that scene, expression sentences reflecting each business intention, and the entity conditions associated with each business intention, to obtain the business natural language understanding model corresponding to that scene;
and combining the business knowledge graph and the business natural language understanding model of that scene to obtain the business dialogue robot corresponding to that scene.
8. The method of claim 7, wherein the step of building the shunting dialogue robot comprises:
determining, from the business dialogue robots corresponding to the business scenes, which business intentions are unique to a single business dialogue robot and which are common to several of them;
configuring an invocation instruction for each business intention that is unique to one business dialogue robot;
configuring a guidance script for each business intention that is common to several business dialogue robots;
constructing a shunting knowledge graph from the invocation instructions configured for the unique business intentions and the guidance scripts configured for the common business intentions;
training an initial shunting natural language understanding model with the unique and common business intentions of all the business dialogue robots, with expression sentences extracted from those used to train the business natural language understanding models, and with the entity conditions capable of indicating a business dialogue robot, to obtain the shunting natural language understanding model;
and combining the shunting knowledge graph with the shunting natural language understanding model to obtain the shunting dialogue robot.
9. A conversation processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the query statement received by the service window;
the first identification module is used for identifying the real business intention of the user from the query statement by using a shunting dialogue robot;
the determining module is used for determining, by using the shunting dialogue robot and according to the real business intention, a target business dialogue robot that the user requests to access from among the business dialogue robots;
and the access module is used for accessing the target business dialogue robot into the service window and responding to the current round of the user's conversation through the target business dialogue robot.
10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operated, and the machine-readable instructions, when executed by the processor, performing the steps of the dialog processing method according to any one of claims 1 to 8.
CN202210901239.0A 2022-07-28 2022-07-28 Dialogue processing method and dialogue processing device Pending CN115221303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210901239.0A CN115221303A (en) 2022-07-28 2022-07-28 Dialogue processing method and dialogue processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210901239.0A CN115221303A (en) 2022-07-28 2022-07-28 Dialogue processing method and dialogue processing device

Publications (1)

Publication Number Publication Date
CN115221303A true CN115221303A (en) 2022-10-21

Family

ID=83614332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210901239.0A Pending CN115221303A (en) 2022-07-28 2022-07-28 Dialogue processing method and dialogue processing device

Country Status (1)

Country Link
CN (1) CN115221303A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117301074A (en) * 2023-11-17 2023-12-29 浙江孚宝智能科技有限公司 Control method and chip of intelligent robot
CN117301074B (en) * 2023-11-17 2024-04-30 浙江孚宝智能科技有限公司 Control method and chip of intelligent robot

Similar Documents

Publication Publication Date Title
CN111914568B (en) Method, device and equipment for generating text sentence and readable storage medium
KR102256240B1 (en) Non-factoid question-and-answer system and method
CN110148416A (en) Audio recognition method, device, equipment and storage medium
CN110597952A (en) Information processing method, server, and computer storage medium
US10755595B1 (en) Systems and methods for natural language processing for speech content scoring
CN108920450B (en) Knowledge point reviewing method based on electronic equipment and electronic equipment
CN109508441B (en) Method and device for realizing data statistical analysis through natural language and electronic equipment
CN114547274B (en) Multi-turn question and answer method, device and equipment
CN112686051B (en) Semantic recognition model training method, recognition method, electronic device and storage medium
CN109615009B (en) Learning content recommendation method and electronic equipment
CN114625855A (en) Method, apparatus, device and medium for generating dialogue information
CN113468894A (en) Dialogue interaction method and device, electronic equipment and computer-readable storage medium
CN115221303A (en) Dialogue processing method and dialogue processing device
CN114297359A (en) Dialog intention recognition method and device, electronic equipment and readable storage medium
CN112541109B (en) Answer abstract extraction method and device, electronic equipment, readable medium and product
CN113626441A (en) Text management method, device and equipment based on scanning equipment and storage medium
CN113434653A (en) Method, device and equipment for processing query statement and storage medium
CN109273004B (en) Predictive speech recognition method and device based on big data
US8666987B2 (en) Apparatus and method for processing documents to extract expressions and descriptions
CN111639160A (en) Domain identification method, interaction method, electronic device and storage medium
CN114490986B (en) Computer-implemented data mining method, device, electronic equipment and storage medium
CN113505293B (en) Information pushing method and device, electronic equipment and storage medium
CN112735465B (en) Invalid information determination method and device, computer equipment and storage medium
CN113642334A (en) Intention recognition method and device, electronic equipment and storage medium
CN113806475A (en) Information reply method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination