CN110543556A - Dialogue configuration method, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110543556A
Authority
CN
China
Prior art keywords
dialog, conversation, component, text, components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910838538.2A
Other languages
Chinese (zh)
Inventor
任清卉
谷博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volkswagen China Investment Co Ltd
Mobvoi Innovation Technology Co Ltd
Original Assignee
Mobvoi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mobvoi Information Technology Co Ltd
Priority claimed from CN201910838538.2A
Publication of CN110543556A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems

Abstract

A dialog configuration method, a storage medium, and an electronic device are disclosed. The method acquires the current dialog information, determines the target dialog component that will execute the current dialog, sends the dialog text to that component to obtain a dialog execution result, generates a final interactive execution result from the received dialog execution results, and sends it to the corresponding dialog channel. Simultaneous interaction across multiple dialog types can therefore be realized, improving the fluency and completeness of intelligent dialogs.

Description

Dialogue configuration method, storage medium and electronic equipment
Technical Field
The present invention relates to the field of intelligent dialog technologies, and in particular, to a dialog configuration method, a storage medium, and an electronic device.
Background
In recent years, with the development of artificial intelligence technology, more and more intelligent dialog robots have entered people's lives and brought great convenience. Intelligent dialog robots are efficient, intelligent, low-cost, and tireless, and are therefore widely applied across many industries.
Existing intelligent dialog robots are basically based on a manually curated voice dialog database that stores the text of each input utterance together with the text of the corresponding output. The robot converts the input voice into text through speech recognition, looks up the matching entry in the voice dialog database, and outputs the matched text as the response, thereby realizing a dialog.
However, dialog types include multi-turn task dialogs, question-and-answer dialogs, emotion-soothing dialogs, and so on, while an existing intelligent dialog robot can realize interaction based on only one dialog type. If a group of dialogs involves multiple dialog types, wrong or missed answers may occur, and the fluency and completeness of the dialog suffer.
Disclosure of Invention
In view of this, embodiments of the present invention provide a dialog configuration method, a storage medium, and an electronic device, which can improve fluency and integrity of an intelligent dialog.
In a first aspect, an embodiment of the present invention provides a dialog configuration method, where the method includes:
acquiring current dialog information, where the dialog information includes a dialog text, a dialog timestamp, and a dialog channel identifier;
determining a target dialog component set, where the target dialog component set includes at least one target dialog component, and a target dialog component is a dialog component that executes the current dialog;
sending the dialog text to the target dialog component;
receiving a dialog execution result returned by the target dialog component;
determining an interactive execution result according to the received dialog execution result; and
sending the interactive execution result to the corresponding dialog channel according to the dialog channel identifier.
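The six steps of the first aspect can be sketched end to end as follows. This is a minimal illustrative sketch in Python; all names (DialogInfo, handle_dialog, EchoComponent) are invented for the example, and the fusion of component results is simplified to concatenation rather than the patent's actual central control policy.

```python
from dataclasses import dataclass

@dataclass
class DialogInfo:
    text: str          # dialog text
    timestamp: float   # dialog timestamp
    channel_id: str    # dialog channel identifier

class EchoComponent:
    """Toy dialog component that answers any dialog text."""
    def can_execute(self, text):
        return True
    def execute(self, text):
        return f"echo: {text}"

def handle_dialog(info, components, channels):
    # Determine the target dialog component set for the current dialog.
    targets = [c for c in components if c.can_execute(info.text)]
    # Send the dialog text to each target component; collect execution results.
    results = [c.execute(info.text) for c in targets]
    # Determine the interactive execution result from the received results.
    interactive_result = " ".join(results)
    # Send it to the channel named by the dialog channel identifier.
    channels[info.channel_id].append(interactive_result)
    return interactive_result

channels = {"web": []}
out = handle_dialog(DialogInfo("hello", 0.0, "web"), [EchoComponent()], channels)
```

The selection of target components and the fusion step are elaborated in the preferred embodiments below; here both are stubbed out.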
Preferably, determining the set of target dialog components comprises:
acquiring historical dialog data;
acquiring the enabled state of each dialog component;
determining a set of intermediate dialog components according to the historical dialog data and the enabled states of the dialog components, the set of intermediate dialog components including at least one dialog component;
sending the dialog text to the dialog components in the set of intermediate dialog components;
receiving an analysis result of the dialog text returned by each dialog component in the intermediate dialog component set, where the analysis result includes a dialog type, a corresponding dialog intention name, and a confidence; and
determining the target dialog component set according to the analysis results of the dialog text.
Preferably, the dialog types include question-and-answer dialogs, multi-turn dialogs, and emotion-soothing dialogs.
Preferably, determining a set of intermediate dialog components based on the historical dialog data and the enabled states of the dialog components comprises:
determining whether the current dialog is associated with the historical dialog according to the historical dialog data and the current dialog information;
in response to the current dialog being associated with the historical dialog, determining the set of intermediate dialog components according to the dialog components that executed the historical dialog and the dialog component enabled states; and
in response to the current dialog not being associated with the historical dialog, determining the set of intermediate dialog components according to the dialog component enabled states.
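The branch on history association can be sketched as a small filter over an enabled-state lookup table. The function and variable names below are illustrative assumptions, not the patent's implementation.

```python
def intermediate_set(associated, history_components, all_components, enabled):
    """Select the intermediate dialog component set.

    associated:         whether the current dialog relates to the history
    history_components: components that executed the historical dialog
    all_components:     every registered component
    enabled:            dict mapping component name -> enabled state
    """
    # If the dialog continues a history, only the components that handled
    # that history are candidates; otherwise every component is.
    pool = history_components if associated else all_components
    # Either way, keep only components whose enabled state is on.
    return [c for c in pool if enabled.get(c, False)]

enabled = {"qa": True, "multiturn": False, "emotion": True}
all_components = list(enabled)

# Current dialog continues a history handled by qa and multiturn:
followup = intermediate_set(True, ["qa", "multiturn"], all_components, enabled)
# Unrelated new dialog: every enabled component is a candidate:
fresh = intermediate_set(False, [], all_components, enabled)
```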
Preferably, determining the target dialog component set according to the analysis results of the dialog text comprises:
obtaining, for each intention name, the dialog component with the highest confidence in the intermediate dialog component set according to the analysis results of the dialog text;
acquiring the priorities of the dialog components; and
determining the target dialog component set among the highest-confidence dialog components for each intention name according to the dialog timestamp, the dialog channel identifier, and the priorities of the dialog components.
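The highest-confidence-then-priority selection can be sketched as follows. The tuple layout and the tie-break by a priority dict are assumptions for illustration; the timestamp and channel identifier mentioned in the claim are omitted for brevity.

```python
from collections import defaultdict

def select_targets(parses, priority):
    """parses:   list of (component, intention_name, confidence) tuples
    priority: dict component -> rank, higher rank wins ties
    Returns one target component per intention name."""
    by_intent = defaultdict(list)
    for comp, intent, conf in parses:
        by_intent[intent].append((comp, conf))
    targets = {}
    for intent, cands in by_intent.items():
        best_conf = max(conf for _, conf in cands)
        best = [comp for comp, conf in cands if conf == best_conf]
        # A unique highest-confidence component is the target; otherwise
        # break the tie by the configured component priority.
        targets[intent] = best[0] if len(best) == 1 else max(best, key=lambda c: priority[c])
    return targets

parses = [("qa", "ask_weather", 0.9),
          ("multiturn", "ask_weather", 0.9),
          ("emotion", "comfort", 0.7)]
priority = {"qa": 1, "multiturn": 2, "emotion": 1}
targets = select_targets(parses, priority)
```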
In a second aspect, an embodiment of the present invention provides a dialog configuration method, where the method includes:
receiving a dialog text;
obtaining an analysis result of the dialog text, where the analysis result includes a dialog type, a corresponding dialog intention name, and a confidence; and
sending the analysis result of the dialog text to a central control engine.
Preferably, the dialog types include question-and-answer dialogs, multi-turn dialogs, and emotion-soothing dialogs.
Preferably, the method further comprises:
obtaining a dialog execution result according to the dialog text; and
sending the dialog execution result to the central control engine.
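A component implementing the second aspect might look like the following sketch: it receives a dialog text, produces an analysis result (type, intention name, confidence), and reports both the analysis and the execution result to a central engine object. The keyword-based parser and the CentralEngine stub are invented stand-ins for trained models and the real engine interface.

```python
class DialogComponent:
    def __init__(self, dialog_type, keyword, intent):
        self.dialog_type = dialog_type
        self.keyword = keyword  # toy trigger in place of a trained model
        self.intent = intent

    def parse(self, text):
        # Analysis result: dialog type, intention name, confidence.
        conf = 1.0 if self.keyword in text else 0.0
        return {"type": self.dialog_type, "intent": self.intent, "confidence": conf}

    def execute(self, text):
        # Dialog execution result for the received dialog text.
        return f"[{self.dialog_type}] reply to: {text}"

class CentralEngine:
    """Stub that only collects what components send it."""
    def __init__(self):
        self.parses, self.results = [], []
    def receive_parse(self, p):
        self.parses.append(p)
    def receive_result(self, r):
        self.results.append(r)

engine = CentralEngine()
comp = DialogComponent("qa", "weather", "ask_weather")
engine.receive_parse(comp.parse("what's the weather"))
engine.receive_result(comp.execute("what's the weather"))
```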
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory is used to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the methods according to the first aspect and the second aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium on which computer program instructions are stored, which, when executed by a processor, implement the methods according to the first and second aspects.
According to the technical scheme of the embodiments of the present invention, the current dialog information is acquired, the target dialog component for executing the current dialog is determined, the dialog text is sent to the target dialog component to obtain a dialog execution result, and the received dialog execution results are fused into a final interactive execution result and sent to the corresponding dialog channel. Simultaneous interaction across multiple dialog types can therefore be realized, improving the fluency and completeness of intelligent dialogs.
Drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic structural diagram of a dialog configuration system according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a dialog configuration method of an embodiment of the present invention;
FIG. 3 is a flowchart of a method for a central control engine to perform dialog configuration according to an embodiment of the present invention;
FIG. 4 is a flow diagram of a determine target dialog component of an embodiment of the present invention;
FIG. 5 is a flow diagram of a dialog component performing a dialog configuration method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of an electronic device of an embodiment of the invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout this specification, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Fig. 1 is a schematic structural diagram of a dialog configuration system according to an embodiment of the present invention. As shown in fig. 1, the dialog configuration system of the embodiment of the present invention includes a dialog channel unit 1, a central control engine 2, and a dialog component unit 3. The dialog channel unit 1 is configured to obtain dialog information, where the dialog information includes a dialog text, a dialog timestamp, and a dialog channel identifier. The central control engine 2 is used to determine the target dialog component for executing the current dialog and to send the dialog text to that component. The dialog component unit 3 is used to acquire a dialog execution result for the dialog text and to send the dialog execution result to the central control engine 2.
In this embodiment, the dialog channel unit 1 performs human-computer interaction: it obtains user input through a dialog channel, where the user input may be voice, text, or video information. Further, the dialog channel unit 1 includes a plurality of dialog channels, which may be web pages 11, telephones 12, and WeChat 13, or other applications capable of human-computer interaction.
further, the conversation channel unit 1 may be a smart phone, a tablet computer, a notebook computer, a desktop computer or other special equipment provided with a conversation channel.
Further, the conversation channel unit 1 is further configured to obtain an interaction execution result, and display the interaction execution result through a corresponding conversation channel. The interaction execution result may be voice information, text information, or video information, etc.
Further, the conversation channel unit 1 further includes an analysis module (not shown in the figure) for converting the received voice information or video information input by the user into text information and then sending the text information to the central control engine 2, and converting the received interaction execution result of the central control engine 2 into corresponding voice information, text information or video information.
In this embodiment, the central control engine 2 is configured to obtain a dialog text, determine the target dialog component, and send the dialog text to that component; the dialog component unit is configured to obtain a dialog execution result for the dialog text and send it to the central control engine 2. The central control engine 2 receives the dialog execution result returned by the target dialog component, determines an interactive execution result according to the received dialog execution result, and sends the interactive execution result to the corresponding dialog channel according to the dialog channel identifier.
Fig. 2 is a flowchart of a dialog configuration method according to an embodiment of the present invention. As shown in fig. 2, the dialog configuration method according to the embodiment of the present invention includes the following steps:
step S201, the dialogue channel acquires user input information.
In the present embodiment, the user input information may be voice information, text information, video information, or the like. Further, the dialog channel can be a web page, a telephone, WeChat, or another application capable of human-computer interaction.
Further, the user may input the user input information through a smart phone, a tablet computer, a notebook computer, a desktop computer, or other special devices provided with a conversation channel.
Step S202, analyzing the input information of the user through the conversation channel.
In this embodiment, the dialog channel parses the user input information to obtain the dialog text.
Step S203, the dialog channel sends the dialog text to the central control engine.
In this embodiment, the central control engine receives the dialog text and obtains a corresponding dialog timestamp and a dialog channel identifier.
Further, the conversation channel identification may be a name or a corresponding number of the conversation channel.
Step S204, the central control engine acquires historical dialog data.
In this embodiment, the central control engine acquires historical dialog data to determine whether the current dialog is related to the historical dialog according to the historical dialog data.
Step S205, the dialog component sends its enabled state to the central control engine.
In this embodiment, the central control engine obtains an enabling state of each dialog component, where the enabling state represents whether each dialog component can execute a dialog task at the current time.
Step S206, the central control engine determines an intermediate dialog component set.
In this embodiment, the central control engine determines an intermediate dialog component set, where the intermediate dialog component set includes at least one dialog component, and a dialog component in the intermediate dialog component set is used to obtain a parsing result of a dialog text, where the parsing result includes a dialog type and a corresponding dialog intention name and confidence.
Further, the central control engine determines the set of intermediate dialog components according to the historical dialog data and the enabled states of the dialog components. Specifically, the central control engine judges whether the current dialog is associated with the historical dialog according to the historical dialog data. If the current dialog is associated with the historical dialog, it takes the dialog components that executed the historical dialog, selects those whose enabled state is on, and determines them as the intermediate dialog component set. If the current dialog is not associated with the historical dialog, it selects all dialog components whose enabled state is on and determines them as the intermediate dialog component set.
Step S207, the central control engine sends the dialog text to the dialog components in the intermediate dialog component set.
In this embodiment, the dialog channels and the dialog components are not directly interfaced. The central control engine serves as a unified entry that receives the dialog texts from the different dialog channels and interfaces with the dialog components; based on the configured central control policy, it selects the dialog component(s) that will execute the component's embedded dialog policy corresponding to the user input.
Step S208, the dialog component obtains the analysis result of the dialog text.
In this embodiment, the dialog components in the intermediate dialog component set obtain the analysis result of the dialog text, where the analysis result includes the dialog type and the corresponding dialog intention name and confidence.
Further, intelligent dialogs include multiple types of dialogs, such as question-and-answer dialogs, multi-turn task dialogs, and emotion-soothing dialogs. The dialog intentions are preset at design time; through model training, semantic understanding and text mining can be performed on the dialog text, so that the language-level meaning of the content expressed by the user is recognized as a well-defined dialog intention. Specifically, a dialog intention is the matched standard question in a question-and-answer dialog, the intent recognition result in a multi-turn dialog, and the textual emotion recognition result in an emotion-soothing dialog.
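As a toy illustration of how one dialog text can yield different intention kinds per dialog type (matched standard question, task intent, emotion label), consider the keyword rules below. The rules are invented for the example; as the description notes, a real system relies on model training rather than keyword matching.

```python
def parse_by_type(text):
    """Return (dialog_type, intention_name, confidence) triples for a text."""
    results = []
    lowered = text.lower()
    # Question-and-answer: intention is a matched standard question.
    if "?" in text or lowered.startswith(("what", "how", "why")):
        results.append(("question-and-answer", "standard_question:faq_match", 0.8))
    # Multi-turn task: intention is a task intent recognition result.
    if any(w in lowered for w in ("book", "order", "cancel")):
        results.append(("multi-turn task", "intent:manage_order", 0.9))
    # Emotion-soothing: intention is a textual emotion recognition result.
    if any(w in lowered for w in ("sad", "angry", "upset")):
        results.append(("emotion-soothing", "emotion:negative", 0.7))
    return results

parses = parse_by_type("I'm upset, how do I cancel my order?")
```

Here a single utterance triggers all three dialog types, which is exactly the situation the central control engine must arbitrate.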
Step S209, the dialog component sends the analysis result to the central control engine.
In this embodiment, the dialog component sends the analysis result to the central control engine, and after receiving the analysis result, the central control engine determines the target dialog component according to the analysis result.
Step S210, the central control engine acquires the priorities of the dialog components.
Step S211, the central control engine determines the target dialog components.
In this embodiment, the central control engine determines the target dialog component set according to the analysis results of the dialog text and the priorities of the dialog components, where the target dialog component set includes at least one target dialog component, and a target dialog component is a dialog component that executes the current dialog.
Further, the analysis result of the dialog text includes the dialog type and the corresponding dialog intention name and confidence. Because the dialog configuration system of the embodiment of the present invention includes a large number of dialog components, a dialog text is generally analyzed by several dialog components, and their analysis results differ. The central control engine collects the analyzed dialog intentions and, among the components whose analysis results carry the same dialog intention, selects the dialog component with the highest confidence. If there is only one such component, it is determined as the target dialog component for that dialog intention. If several components share the highest confidence, the one with the highest priority is selected according to the priority order of the dialog components and determined as the target dialog component for that dialog intention.
Step S212, the central control engine sends the dialog text to the target dialog component.
Step S213, the target dialog component determines the dialog execution result of the dialog text.
In this embodiment, after receiving the dialog text, the target dialog component determines a dialog execution result according to a preset dialog policy; the execution actions of each dialog component include parameter operations, API (Application Programming Interface) calls, event forwarding, natural language generation, and the like.
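The listed execution actions can be modeled as a dispatch table. The action names and payload shapes below are assumptions for illustration, not the patent's actual policy format.

```python
def run_actions(actions, state):
    """Run a dialog policy's action list against mutable dialog state."""
    outputs = []
    handlers = {
        # Parameter operation: mutate the dialog state.
        "set_param": lambda p: state.update(p),
        # API call: stubbed as a log line instead of a real request.
        "api_call": lambda p: outputs.append(f"called {p['endpoint']}"),
        # Event forwarding: stubbed the same way.
        "forward_event": lambda p: outputs.append(f"event {p['name']} forwarded"),
        # Natural language generation: fill a template from the state.
        "nlg": lambda p: outputs.append(p["template"].format(**state)),
    }
    for name, payload in actions:
        handlers[name](payload)
    return outputs

state = {}
out = run_actions(
    [("set_param", {"city": "Berlin"}),
     ("nlg", {"template": "Weather for {city} coming up."})],
    state,
)
```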
Step S214, the target dialog component sends the dialog execution result to the central control engine.
Step S215, the central control engine determines the interactive execution result.
In this embodiment, the interactive execution result is the execution action that is finally fed back to the user.
Furthermore, after the dialog text is distributed to the dialog components for execution, each dialog component returns a dialog execution result to the central control engine according to the condition groups and execution actions in its dialog policy. Because the dialog components are not directly connected to the dialog channels, the interaction with the user is ultimately controlled by the central control engine in a unified way. Based on the execution result of each dialog component, the central control engine judges whether the central control policy condition groups are met and, from these judgments and the components' execution action results, fuses a final interactive execution decision; the interaction with the user then follows this final interactive execution result.
Furthermore, the central control engine serves as the sole decision engine connected to the dialog channels and directly interacting with each dialog component; all user input information, condition group judgment results, execution action results of each dialog component, and final interactive execution decisions in a group of dialogs are recorded in the central control engine in chronological order.
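One way to sketch the fusion of component results under a condition group, assuming admissibility is a predicate and ties are resolved by a component rank (the patent does not fix a concrete formula, so these semantics are an assumption):

```python
def fuse(results, condition_group, rank):
    """Fuse component execution results into one interactive result.

    results:         dict component -> reply text
    condition_group: predicate deciding which replies are admissible
                     under the central control policy
    rank:            dict component -> rank, higher rank wins
    """
    admissible = {c: r for c, r in results.items() if condition_group(c, r)}
    if not admissible:
        return None
    # The winning component's reply becomes the final interactive result.
    winner = max(admissible, key=lambda c: rank[c])
    return admissible[winner]

results = {"qa": "It is sunny.", "emotion": "Take a deep breath."}
final = fuse(results, lambda c, r: bool(r), {"qa": 2, "emotion": 1})
```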
Step S216, the central control engine sends the interactive execution result to the dialog channel.
Step S217, the interactive execution result is displayed through the dialog channel.
In this embodiment, the dialog channel displays the interaction execution result to the user through corresponding text information, voice information, video information, or the like.
The embodiment of the present invention acquires the current dialog information, determines the target dialog component that executes the current dialog, sends the dialog text to the target dialog component to obtain a dialog execution result, fuses the received dialog execution results into a final interactive execution result, and sends it to the corresponding dialog channel. Simultaneous interaction across multiple dialog types can therefore be realized, improving the fluency and completeness of intelligent dialogs.
Fig. 3 is a flowchart of a method for executing dialog configuration by the central control engine according to an embodiment of the present invention. As shown in fig. 3, the method for the central control engine to execute the dialog configuration includes the following steps:
Step S310, acquiring current dialog information.
In this embodiment, the dialog information includes a dialog text, a dialog timestamp, and a dialog channel identifier.
Step S320, determining a target dialog component set, where the target dialog component set includes at least one target dialog component, and a target dialog component is a dialog component that executes the current dialog.
Further, FIG. 4 is a flow diagram of determining the target dialog component according to an embodiment of the present invention. As shown in fig. 4, the central control engine determining the target dialog component includes the following steps:
Step S410, acquiring historical dialog data.
In this embodiment, the central control engine acquires historical dialog data to determine whether the current dialog is related to the historical dialog.
Step S420, acquiring the enabled states of the dialog components.
In this embodiment, the central control engine obtains the enabled state of each dialog component, where the enabled state represents whether the dialog component can execute a dialog task at the current time.
Step S430, determining the intermediate dialog component set according to the historical dialog data and the enabled states of the dialog components.
In this embodiment, the set of intermediate dialog components includes at least one dialog component.
Further, the association state of the current dialog and the historical dialog is determined according to the historical dialog data and the current dialog information; in response to the current dialog being associated with the historical dialog, the set of intermediate dialog components is determined according to the dialog components that executed the historical dialog and the dialog component enabled states; and in response to the current dialog not being associated with the historical dialog, the set of intermediate dialog components is determined according to the dialog component enabled states.
Specifically, the central control engine judges whether the current dialog is associated with the historical dialog according to the historical dialog data. If it is, the engine takes the dialog components that executed the historical dialog, selects those whose enabled state is on, and determines them as the intermediate dialog component set. If it is not, the engine selects all dialog components whose enabled state is on and determines them as the intermediate dialog component set.
Step S440, sending the dialog text to the dialog components in the intermediate dialog component set.
Step S450, receiving the analysis results of the dialog text returned by the dialog components in the intermediate dialog component set.
In this embodiment, the parsing result includes a dialog type and a corresponding dialog intention name and confidence.
Further, the dialog types include question-and-answer dialogs, multi-turn dialogs, emotion-soothing dialogs, and the like.
Step S460, determining the target dialog component set according to the analysis results of the dialog text.
In this embodiment, the central control engine acquires, from the intermediate dialog component set and according to the analysis results of the dialog text, the dialog component with the highest confidence for each intention name; it then acquires the priority order of the dialog components and determines the target dialog component set among those highest-confidence dialog components according to the dialog timestamp, the dialog channel identifier, and the priority order of the dialog components.
Further, the analysis result of the dialog text includes the dialog type and the corresponding dialog intention name and confidence. Because the dialog configuration system of the embodiment of the present invention includes a large number of dialog components, a dialog text is generally analyzed by several dialog components, and their analysis results differ. The central control engine collects the analyzed dialog intentions and, among the components whose analysis results carry the same dialog intention, selects the dialog component with the highest confidence. If there is only one such component, it is determined as the target dialog component for that dialog intention. If several components share the highest confidence, the one with the highest priority is selected according to the priority order of the dialog components and determined as the target dialog component for that dialog intention.
Step S330, sending the dialog text to the target dialog component.
Step S340, receiving the dialog execution result returned by the target dialog component.
In this embodiment, after receiving the dialog text, the target dialog component determines a dialog execution result according to a preset dialog policy; the execution actions of each dialog component include parameter operations, API calls, event forwarding, natural language generation, and the like. The target dialog component then sends the dialog execution result to the central control engine.
Step S350, determining an interaction execution result according to the received dialog execution result.
In this embodiment, after the central control engine distributes the dialog text to each dialog component for execution, each dialog component returns a dialog execution result to the central control engine according to the condition groups and execution actions in its dialog policy. However, because the dialog components are not directly connected to the dialog channel, the interaction with the user is ultimately controlled by the central control engine in a unified manner. Therefore, based on the execution result of each dialog component, the central control engine judges whether the condition group of the central control policy is satisfied, fuses the condition-group judgment result and the execution action results of the dialog components into a final interaction execution decision, and interacts with the user according to the final interaction execution result.
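The fusion step might look like the following sketch, assuming a condition group is a list of predicates over execution results; the field names and the fallback reply are illustrative assumptions:

```python
def fuse_results(results, condition_group):
    """Sketch of the central engine's fusion step: keep only the
    execution results that satisfy every predicate in the central
    policy's condition group, then emit the highest-priority one as
    the interaction execution result."""
    eligible = [r for r in results
                if all(cond(r) for cond in condition_group)]
    if not eligible:
        # No component result passed the condition group: fall back.
        return {"reply": "Sorry, I didn't catch that."}
    eligible.sort(key=lambda r: r.get("priority", 99))
    return {"reply": eligible[0]["reply"]}
```

For example, a predicate like `lambda r: r["status"] == "ok"` would filter out components whose execution actions failed before the decision is made.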
Furthermore, the central control engine serves as the decision engine that is uniquely connected to the conversation channel and interacts directly with each conversation component; all user input, condition-group judgment results, execution action results of each conversation component, and final interaction execution decisions within a group of conversations are recorded in the central control engine in the order in which they occur.
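A time-ordered session record of this kind could be sketched as follows; the event kinds and field names are assumptions for illustration:

```python
import time

class SessionLog:
    """Sketch: the central engine appends every event (user input,
    condition-group verdicts, component action results, final
    decisions) with a timestamp, so a whole group of conversations
    can be replayed in the order the events occurred."""

    def __init__(self):
        self.events = []

    def record(self, kind, payload):
        self.events.append({"ts": time.time(),
                            "kind": kind,
                            "payload": payload})

    def replay(self):
        # sorted() is stable, so events with equal timestamps keep
        # their insertion order.
        return sorted(self.events, key=lambda e: e["ts"])
```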
Step S360, sending the interaction execution result to the corresponding conversation channel according to the conversation channel identifier.
In this embodiment, the central control engine sends the interaction execution result to a corresponding conversation channel according to the conversation channel identifier, and the conversation channel displays the interaction execution result to the user through corresponding text information, voice information, video information, or the like.
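The channel dispatch can be illustrated with a simple lookup; the channel map, its keys, and the callable-channel interface are hypothetical:

```python
def dispatch(channels, channel_id, result):
    """Sketch: route the interaction execution result to the dialog
    channel named by the channel identifier; the channel itself then
    renders the result as text, voice, or video for the user."""
    channel = channels.get(channel_id)
    if channel is None:
        raise KeyError(f"unknown dialog channel: {channel_id}")
    return channel(result)

# Illustrative usage: a text channel that just collects what it is sent.
sent = []
channels = {"app_text": lambda r: sent.append(("app_text", r))}
dispatch(channels, "app_text", {"reply": "Hello"})
```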
The embodiment of the invention obtains the current conversation information, determines the target conversation component executing the current conversation, sends the conversation text to the target conversation component to obtain the conversation execution result, and generates the final interaction execution result from the received conversation execution result and sends the final interaction execution result to the corresponding conversation channel. Therefore, simultaneous interaction of multiple types of conversations can be realized, and the fluency and integrity of intelligent conversations are improved.
Fig. 5 is a flowchart of a dialog configuration method executed by a dialog component according to an embodiment of the present invention. As shown in fig. 5, the method includes the following steps:
Step S510, receiving the dialog text.
In this embodiment, the dialog component receives the dialog text sent by the central control engine.
Step S520, obtaining the parsing result of the dialog text.
In this embodiment, the dialog component obtains the parsing result of the dialog text, which includes the dialog type and the corresponding dialog intention name and confidence.
Step S530, sending the parsing result of the dialog text to the central control engine.
Step S540, obtaining a dialog execution result according to the dialog text.
In this embodiment, the dialog configuration method can implement multiple types of human-computer dialogs. Each dialog type is implemented by a dialog component, including question-and-answer dialogs, multi-turn task dialogs, emotion-soothing dialogs, abnormal-flow pull-back dialogs, and the like, and in practical applications each dialog component must be designed with a corresponding built-in dialog policy, according to which the dialog execution result is generated. Specifically, the dialog intentions that each dialog component can understand are preset at design time, and through model training the dialog component can perform semantic understanding and text-information mining on the text expressed by the user, so that the language-level meaning of the user's utterance is recognized as a well-defined dialog intention. A dialog intention is the matched standard question in a question-and-answer dialog, the intent-recognition result in a multi-turn dialog, and the textual emotion-recognition result in an emotion-soothing dialog. After the current dialog is judged to satisfy the conditions, the execution action corresponding to the dialog policy is executed; the execution actions of each dialog component include parameter operations, API calls, event forwarding, natural language generation, and the like. After recognizing the dialog intention, the dialog component interacts according to the dialog policy, and the robot generates corresponding natural language according to the policy settings to communicate with the user.
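The component-side flow — intent recognition, condition-group check, execution action — can be sketched as below. Keyword matching stands in for the model-based intent recognition described above, and all intention names, conditions, and replies are illustrative assumptions:

```python
def run_policy(text, policy):
    """Toy component-side policy loop: recognize an intention from the
    text, check the policy's condition group for that intention, and
    run the mapped execution action; otherwise fall back."""
    for intent, entry in policy.items():
        if any(k in text for k in entry["keywords"]):        # toy intent recognition
            if all(cond(text) for cond in entry["conditions"]):  # condition group
                return entry["action"](text)                 # execution action
    return "fallback reply"

# Illustrative built-in policy with a single greeting intention.
policy = {
    "greet": {
        "keywords": ["hello", "hi"],
        "conditions": [lambda t: len(t) < 50],
        "action": lambda t: "Hello! How can I help?",
    },
}
```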
Step S550, sending the dialog execution result to the central control engine.
In this embodiment, the dialog component sends the dialog execution result to the central control engine. After the central control engine distributes the dialog text to the dialog components for execution, each dialog component returns its dialog execution result to the central control engine according to the condition groups and execution actions in its dialog policy. However, because the dialog components are not directly connected to a dialog channel, the interaction with the user is ultimately controlled by the central control engine in a unified manner. Therefore, based on the execution result of each dialog component, the central control engine judges whether the condition group of the central control policy is satisfied, fuses the condition-group judgment result and the execution action results into a final interaction execution decision, and interacts with the user according to the final interaction execution result.
The embodiment of the invention obtains the current conversation information, determines the target conversation component that executes the current conversation, sends the conversation text to the target conversation component to obtain a conversation execution result, generates the final interaction execution result from the received conversation execution results, and sends it to the corresponding conversation channel. In this way, simultaneous interaction across multiple conversation types can be realized, improving the fluency and completeness of intelligent conversations.
Fig. 6 is a schematic diagram of an electronic device of an embodiment of the invention. The electronic device shown in fig. 6 is an automatic question answering apparatus built on a general-purpose computer hardware structure that includes at least a processor 61 and a memory 62. The processor 61 and the memory 62 are connected via a bus 63. The memory 62 is adapted to store instructions or programs executable by the processor 61. The processor 61 may be a stand-alone microprocessor or a collection of one or more microprocessors. Thus, by executing the instructions stored in the memory 62, the processor 61 implements the processing of data and the control of other devices so as to perform the method flows of the embodiments of the present invention described above. The bus 63 connects the above components together and also connects them to a display controller 64, a display device, and input/output (I/O) devices 65. The input/output (I/O) devices 65 may be a mouse, keyboard, modem, network interface, touch input device, motion-sensing input device, printer, or other devices known in the art. Typically, the input/output devices 65 are connected to the system through an input/output (I/O) controller 66.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, various aspects of embodiments of the invention may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "circuit," "module," or "system." Furthermore, various aspects of embodiments of the invention may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer-readable media may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of embodiments of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of embodiments of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, C++ and the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention described above describe various aspects of embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A dialog configuration method, characterized in that the method comprises:
Acquiring current conversation information, wherein the conversation information comprises a conversation text, a conversation timestamp and a conversation channel identifier;
determining a target dialog component set, wherein the target dialog component set comprises at least one target dialog component, and the target dialog component is a dialog component for executing the current dialog;
sending the dialog text to the target dialog component;
Receiving a dialog execution result returned by the target dialog component;
Determining an interactive execution result according to the received conversation execution result; and
sending the interaction execution result to a corresponding conversation channel according to the conversation channel identifier.
2. The method of claim 1, wherein determining a set of target dialog components comprises:
Acquiring historical dialogue data;
Acquiring the enabling state of the dialog components;
Determining a set of intermediate dialog components according to the historical dialog data and the enabling state of the dialog components, the set of intermediate dialog components including at least one dialog component;
Sending the dialog text to dialog components in the set of intermediate dialog components;
Receiving an analysis result of a dialog text returned by the dialog components in the intermediate dialog component set, wherein the analysis result comprises a dialog type, a corresponding dialog intention name, and a confidence level; and
determining the target dialog component set according to the analysis result of the dialog text.
3. The method of claim 2, wherein the conversation types include a question-and-answer conversation, a plurality of rounds of conversation, and an emotional comfort conversation.
4. The method of claim 2, wherein determining a set of intermediate conversation components based on the historical conversation data and the enablement states of the conversation components comprises:
Determining the association state of the current conversation and the historical conversation according to the historical conversation data and the current conversation information;
In response to the current conversation being associated with the historical conversation, determining the set of intermediate conversation components according to the conversation components executing the historical conversation and the conversation component enabling state; and
In response to a current dialog not being associated with a historical dialog, determining the set of intermediate dialog components in accordance with the dialog component enablement state.
5. The method of claim 2, wherein determining the set of target dialog components based on the parsing of the dialog text comprises:
Obtaining a dialog component with highest confidence level in each intention name in the intermediate dialog component set according to the analysis result of the dialog text;
Acquiring the priority of the conversation components; and
determining the target dialog component set from the dialog components with the highest confidence level in each intention name according to the dialog timestamp, the dialog channel identification, and the priority of the dialog components.
6. A dialog configuration method, characterized in that the method comprises:
receiving a dialog text;
Obtaining an analysis result of the dialog text, wherein the analysis result comprises a dialog type, a corresponding dialog intention name and a confidence level; and
sending the analysis result of the dialog text to a central control engine.
7. The method of claim 6, wherein the conversation types include a question-and-answer conversation, a plurality of rounds of conversation, and an emotional comfort conversation.
8. The method of claim 6, further comprising:
Obtaining a conversation execution result according to the conversation text; and
sending the conversation execution result to a central control engine.
9. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-8.
10. A computer-readable storage medium on which computer program instructions are stored, which, when executed by a processor, implement the method of any one of claims 1-8.
CN201910838538.2A 2019-09-05 2019-09-05 Dialogue configuration method, storage medium and electronic equipment Pending CN110543556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838538.2A CN110543556A (en) 2019-09-05 2019-09-05 Dialogue configuration method, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910838538.2A CN110543556A (en) 2019-09-05 2019-09-05 Dialogue configuration method, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110543556A true CN110543556A (en) 2019-12-06

Family

ID=68712553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838538.2A Pending CN110543556A (en) 2019-09-05 2019-09-05 Dialogue configuration method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110543556A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966805A (en) * 2020-08-13 2020-11-20 贝壳技术有限公司 Method, device, medium and electronic equipment for assisting in realizing session

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107943998A (en) * 2017-12-05 2018-04-20 竹间智能科技(上海)有限公司 A kind of human-machine conversation control system and method for knowledge based collection of illustrative plates
CN109145104A (en) * 2018-09-29 2019-01-04 北京百度网讯科技有限公司 For talking with interactive method and apparatus
CN109616108A (en) * 2018-11-29 2019-04-12 北京羽扇智信息科技有限公司 More wheel dialogue interaction processing methods, device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN109760041B (en) Chat robot-based cloud management system and operation method thereof
CN108416041B (en) Voice log analysis method and system
CN106407178A (en) Session abstract generation method and device
CN109429522A (en) Voice interactive method, apparatus and system
CN105391730A (en) Information feedback method, device and system
CN112334892A (en) Selectively generating extended responses for directing continuation of a human-machine conversation
JP2017215931A (en) Conference support system, conference support device, conference support method, and program
CN110288995B (en) Interaction method and device based on voice recognition, storage medium and electronic equipment
CN111680517B (en) Method, apparatus, device and storage medium for training model
CN112331213A (en) Intelligent household equipment control method and device, electronic equipment and storage medium
WO2019060520A1 (en) Method, apparatus, and computer-readable media for customer interaction semantic annotation and analytics
CN109271503A (en) Intelligent answer method, apparatus, equipment and storage medium
CN116521841A (en) Method, device, equipment and medium for generating reply information
CN111460124A (en) Intelligent interaction method and device and robot
CN112286485B (en) Method and device for controlling application through voice, electronic equipment and storage medium
CN111695360B (en) Semantic analysis method, semantic analysis device, electronic equipment and storage medium
CN110543556A (en) Dialogue configuration method, storage medium and electronic equipment
CN113591463A (en) Intention recognition method and device, electronic equipment and storage medium
CN109147792A (en) A kind of voice resume system
CN110111793B (en) Audio information processing method and device, storage medium and electronic device
CN110740212A (en) Call answering method and device based on intelligent voice technology and electronic equipment
CN105979394A (en) Smart television browser operation method and smart television
CN114724561A (en) Voice interruption method and device, computer equipment and storage medium
CN114880498A (en) Event information display method and device, equipment and medium
CN115101053A (en) Emotion recognition-based conversation processing method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220525

Address after: 210038 8th floor, building D11, Hongfeng science and Technology Park, Nanjing Economic and Technological Development Zone, Jiangsu Province

Applicant after: New Technology Co.,Ltd.

Applicant after: VOLKSWAGEN (CHINA) INVESTMENT Co.,Ltd.

Address before: 100190 1001, 10th floor, office building a, 19 Zhongguancun Street, Haidian District, Beijing

Applicant before: MOBVOI INFORMATION TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191206
