CN109670025B - Dialogue management method and device - Google Patents


Info

Publication number
CN109670025B
CN109670025B (application CN201811566830.5A)
Authority
CN
China
Prior art keywords
jump, state, semantic information, app, current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811566830.5A
Other languages
Chinese (zh)
Other versions
CN109670025A (en)
Inventor
甘艺萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201811566830.5A
Publication of CN109670025A
Application granted
Publication of CN109670025B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure relates to a dialogue management method and device. The method comprises the following steps: determining current user semantic information; determining the context of the targeted application (APP) according to the current user semantic information; and inputting the user's current dialogue state, the user semantic information, the determined APP context and a pre-stored history operation record into a state machine, which determines the next dialogue state whose jump condition is satisfied and performs the jump. For each jump, the state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, where the jump condition comprises user semantic information, the context of the APP targeted by that semantic information and the history operation record. This technical scheme greatly simplifies the structure of the state machine, making it easier to design and maintain.

Description

Dialogue management method and device
Technical Field
The present disclosure relates to the field of natural language processing technologies, and in particular, to a method and an apparatus for dialog management.
Background
With the development of artificial intelligence technology, more and more man-machine dialogue systems have appeared. Such systems let users interact with a computer through natural language. The interaction flow is as follows: a speech signal input by the user is received and converted into text by speech recognition; the semantics and context are extracted through semantic understanding; reply information is determined from the semantics and context; and natural speech is produced through language generation and speech synthesis, then played back.
Disclosure of Invention
The embodiment of the disclosure provides a dialogue management method and device. The technical scheme is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a dialog management method, including:
determining current user semantic information;
determining the targeted application (APP) context according to the current user semantic information;
inputting the user's current dialogue state, the user semantic information, the determined APP context and a pre-stored history operation record into a state machine, determining the next dialogue state corresponding to the jump condition that is satisfied, and performing the jump;
wherein the state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, the jump condition comprising user semantic information, the context of the APP targeted by the user semantic information, and the history operation record.
In one embodiment, after making the jump, the method further comprises:
and sending action data to the APP according to the next dialogue state and the current user semantic information, and outputting response data.
In one embodiment, the method further comprises:
reading configuration information in a configuration file;
the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition and the post-jump dialogue state, wherein the jump condition comprises user semantic information, APP context and a history operation record;
initializing the state machine according to the configuration information.
In one embodiment, the method further comprises:
storing the information of the current jump from the current dialogue state to the next dialogue state as a history operation record;
wherein the current-jump information comprises: the pre-jump dialogue state, the user semantic information, the response data and the post-jump dialogue state.
In one embodiment, the method further comprises:
and controlling the APP to execute the operation corresponding to the action data.
According to a second aspect of the embodiments of the present disclosure, there is provided a dialog management device, including:
the first determining module is used for determining the current user semantic information;
the second determining module is used for determining the targeted application (APP) context according to the current user semantic information;
the jump module is used for inputting the user's current dialogue state, the user semantic information, the determined APP context and the pre-stored history operation record into the state machine, determining the next dialogue state corresponding to the jump condition that is satisfied, and performing the jump;
wherein the state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, the jump condition comprising user semantic information, the context of the APP targeted by the user semantic information, and the history operation record.
In one embodiment, the apparatus further comprises:
and the output module is used for sending action data to the APP according to the next dialogue state and the current user semantic information and outputting response data.
In one embodiment, the apparatus further comprises:
the reading module is used for reading the configuration information in the configuration file;
the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition and the post-jump dialogue state, wherein the jump condition comprises user semantic information, APP context and a history operation record;
and the initialization module is used for initializing the state machine according to the configuration information.
In one embodiment, the apparatus further comprises:
the storage module is used for storing the information of the current jump from the current dialogue state to the next dialogue state as a history operation record;
wherein the current-jump information comprises: the pre-jump dialogue state, the user semantic information, the response data and the post-jump dialogue state.
In one embodiment, the apparatus further comprises:
and the control module is used for controlling the APP to execute the operation corresponding to the action data.
According to a third aspect of the embodiments of the present disclosure, there is provided a dialog management device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the above method.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps in the above-described method.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: the embodiment determines the current user semantic information; determines the targeted application (APP) context according to that semantic information; and inputs the user's current dialogue state, the user semantic information, the determined APP context and the pre-stored history operation record into a state machine, which determines the next dialogue state whose jump condition is satisfied and performs the jump. Because the state machine models the three-party interaction among the user, the terminal and the third-party APP, and each state jump depends on the history operation record and the APP context, the same complex state management can be realized with a much simpler network structure. The structure of the state machine is therefore greatly simplified, and since the states of the state machine correspond to APP contexts, the dialogue system can track the APP's state more easily.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a method of dialog management, according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of dialog management, according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Fig. 4 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Fig. 5 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Fig. 6 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Fig. 7 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Fig. 8 is a block diagram illustrating a dialog management device in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Currently, dialogue management is mainly tree-based or state-machine-based, modeling a dialogue as a path through a tree or a finite state machine. This approach incorporates context and can model a dialogue with a finite set of information-exchange templates. The dialogue is treated as a process of jumping among finite states, each state having a corresponding action and reply; if the flow proceeds smoothly from a start node to an end node, the task is completed. However, such a finite state machine only considers interaction between the user and the dialogue machine, and its state jumps depend only on the previous state. If history information must be encoded into the first-order structure of the state machine, the machine becomes difficult to design and maintain. Moreover, each state corresponds to one reply, so different replies must be designed even when their meanings are similar, which makes the state machine's structure complex.
To solve the above problems, the present embodiment configures a new state machine in which, for each jump, a pre-jump dialogue state, a jump condition and a post-jump dialogue state are set, the jump condition comprising user semantic information, APP context and a history operation record. In this way, after obtaining the user semantic information and determining the targeted application (APP) context from it, the terminal can jump from the current dialogue state to the next dialogue state based on the state machine and the stored history operation record. Because the dialogue process models the three-party interaction among the user, the terminal and the third-party APP, and each state jump of the state machine depends on the history operation record, the structure of the state machine can be greatly simplified.
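As an illustration only (the patent does not publish code, so all names here are assumptions), one transition entry of such a state machine could be modeled as a record combining the pre-jump state, a three-part jump condition (user semantics, APP context, and a predicate over the history operation records) and the post-jump state:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HistoryRecord:
    """One stored jump: pre-state, user semantics, response, post-state (names assumed)."""
    pre_state: str
    semantics: str
    response: str
    post_state: str

@dataclass
class Transition:
    """One configured jump: condition = (semantics, APP context, history predicate)."""
    pre_state: str
    semantics: str                                      # user semantic information
    app_context: str                                    # context of the targeted APP
    history_ok: Callable[[List[HistoryRecord]], bool]   # predicate over the history records
    post_state: str

# Example transition matching the WeChat red-packet scenario in the description.
t = Transition(
    pre_state="chat_with_zhangsan",
    semantics="send_red_packet",
    app_context="wechat:chat_open",
    history_ok=lambda h: True,          # this particular jump accepts any history
    post_state="red_packet_interface",
)
```

Because the history predicate is part of the condition rather than being unrolled into extra states, history-dependent behavior does not multiply the number of states.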
Fig. 1 is a flowchart illustrating a dialogue management method according to an exemplary embodiment. The method is used in a terminal and, as shown in fig. 1, includes the following steps 101-103:
in step 101, current user semantic information is determined.
Here, when the man-machine dialogue starts, the user may input a speech signal to the terminal. After receiving the speech signal, the terminal performs speech recognition on it, converts it into text, and performs semantic analysis on the text to determine the current user semantic information.
In step 102, the targeted APP context is determined according to the current user semantic information.
After determining the current user semantic information, the terminal can determine which application the semantic information targets and then acquire the context of that application (APP) from the APP's storage space. For example, if the user semantic information targets the WeChat application, the terminal acquires the context of the WeChat application.
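A minimal sketch of this step, under assumed names (the patent does not specify how the targeted APP is resolved or how its context is stored), might map recognized semantics to an APP and then look up that APP's stored context:

```python
# Hypothetical registry: which APP each kind of user semantics targets.
SEMANTIC_TO_APP = {
    "send_red_packet": "wechat",
    "play_music": "music_player",
}

# Hypothetical per-APP context store (e.g. which screen the APP is showing).
APP_CONTEXT_STORE = {
    "wechat": {"screen": "chat_with_zhangsan"},
    "music_player": {"screen": "home"},
}

def resolve_app_context(semantics: str) -> dict:
    """Determine the targeted APP from the semantics, then fetch its context."""
    app = SEMANTIC_TO_APP.get(semantics)
    if app is None:
        raise KeyError(f"no APP registered for semantics {semantics!r}")
    return {"app": app, **APP_CONTEXT_STORE[app]}
```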
In step 103, the user's current dialogue state, the user semantic information, the determined APP context and the pre-stored history operation record are input into a state machine, the next dialogue state corresponding to the satisfied jump condition is determined, and the jump is performed.
Here, for each jump, the state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, where the jump condition comprises user semantic information, the context of the APP targeted by that semantic information, and a history operation record. Therefore, after the terminal acquires the user semantic information and the targeted APP context, it can input them, together with the previously stored history operation records (such as historical man-machine dialogues and historical dialogue-state jumps), into the state machine. The state machine then determines which jump condition is satisfied and completes one state jump, moving from the current dialogue state to the next dialogue state.
For example, suppose the user wants to send a red packet to Zhang San through man-machine dialogue. The current dialogue state is that WeChat is showing the chat interface with Zhang San, and the acquired user semantic information is "send red packet". Based on the user semantic information "send red packet", the context of the WeChat application, and the previously stored history operation record (such as past man-machine dialogues and dialogue-state jumps from when the user sent red packets in WeChat), the state machine can jump from the current dialogue state to the next dialogue state, i.e., the state in which WeChat shows the red-packet interface for sending a red packet to Zhang San. Thus, one state jump is achieved.
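The matching step in 103 can be sketched as a scan over the configured transitions, selecting the one whose jump condition (pre-state, semantics, APP context, history predicate) is satisfied. This is an illustrative assumption about the lookup, not the patented implementation; all names are invented:

```python
from typing import Optional

def step(transitions: list, current_state: str, semantics: str,
         app_context: str, history: list) -> Optional[str]:
    """Return the next dialogue state, or None if no jump condition is met."""
    for t in transitions:
        if (t["pre_state"] == current_state
                and t["semantics"] == semantics
                and t["app_context"] == app_context
                and t["history_ok"](history)):    # history-dependent part of the condition
            return t["post_state"]
    return None

# One configured transition, mirroring the red-packet example above.
transitions = [{
    "pre_state": "chat_with_zhangsan",
    "semantics": "send_red_packet",
    "app_context": "wechat:chat_open",
    "history_ok": lambda h: True,
    "post_state": "red_packet_interface",
}]
```

Usage: `step(transitions, "chat_with_zhangsan", "send_red_packet", "wechat:chat_open", [])` selects the transition and yields the post-jump state `"red_packet_interface"`.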
It should be noted that the history operation record may include the history of N rounds of dialogue between the user and the terminal, that is, N rounds of user instructions and the terminal's corresponding response operations, where N is greater than or equal to 0: N is 0 at the beginning of the dialogue and increases as the dialogue proceeds. The state machine has at least one initial state and at least one final state, and by inputting speech with different semantics the user can jump to different final states.
The embodiment configures a new state machine in which, for each jump, a pre-jump dialogue state, a jump condition and a post-jump dialogue state are set, the jump condition comprising user semantic information, APP context and a history operation record. After obtaining the user semantic information and determining the targeted application (APP) context from it, the terminal can jump from the current dialogue state to the next dialogue state based on the state machine and the stored history operation record. Because the dialogue process models the three-party interaction among the user, the terminal and the third-party APP, and each state jump depends on the history operation record, the structure of the state machine can be greatly simplified.
In one possible implementation manner, after the jump, the dialogue management method may further include the following step A1.
In step A1, according to the next dialogue state and the current user semantic information, sending action data to the APP, and outputting response data.
After the state machine performs the state jump, the terminal can generate the action data the APP needs in order to execute the user's intent, according to the next dialogue state and the user semantic information. Continuing the example above, after the state machine jumps to the next dialogue state, i.e., the state in which WeChat shows the red-packet interface for sending a red packet to Zhang San, the terminal can generate action data instructing the WeChat application to enter the red-packet interface for sending a red packet to Zhang San, according to that state and the user semantic information "send red packet".
Here, the terminal may also generate response data for the user according to the next dialogue state and the user semantic information. Still taking the above example, the terminal may generate response data such as "Preparing to send Zhang San a WeChat red packet; please enter the amount". The terminal can output the response data through speech, display it on the application's user interface, and so on, thus completing one round of dialogue between the user and the terminal.
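One way step A1 could look in code, as a hedged sketch (the template tables and function names below are assumptions, not part of the patent): both the action data for the APP and the response data for the user are derived from the pair (next dialogue state, user semantics), entirely outside the state machine.

```python
# Hypothetical lookup tables keyed by (next_state, semantics).
ACTION_TEMPLATES = {
    ("red_packet_interface", "send_red_packet"):
        {"app": "wechat", "action": "open_red_packet_ui", "peer": "Zhang San"},
}

RESPONSE_TEMPLATES = {
    ("red_packet_interface", "send_red_packet"):
        "Preparing to send Zhang San a WeChat red packet; please enter the amount.",
}

def generate(next_state: str, semantics: str):
    """Produce (action data for the APP, response data for the user)."""
    key = (next_state, semantics)
    return ACTION_TEMPLATES[key], RESPONSE_TEMPLATES[key]
```

Keeping these tables separate from the transition table means similar replies can share a template instead of each needing its own state.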
This embodiment strips the generation of response data out of the state machine, which greatly simplifies the state machine's structure and makes it easier to design and maintain.
In a possible implementation manner, the dialogue management method may further include the following steps A2 to A3.
In step A2, the configuration information in the configuration file is read.
In step A3, the state machine is initialized according to the configuration information.
Here, a developer may store the configuration information of the state machine in a configuration file on the terminal in advance. The configuration information includes, for each jump of the state machine, the pre-jump dialogue state, the jump condition and the post-jump dialogue state, the jump condition comprising user semantic information, APP context and a history operation record. The configuration file may define the structure of the state machine in the form of an adjacency table, as shown in Table 1 below:
[Table 1 is reproduced only as images in the original publication; per the surrounding text, each row lists a pre-jump dialogue state, a jump condition and a post-jump dialogue state.]
TABLE 1
Here, a configuration parser in the terminal reads the table row by row, saves all jump conditions together with the pre-jump and post-jump dialogue states into the state machine's data structure, and initializes the state machine.
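Since Table 1 itself is only available as an image, the following parser is a sketch under assumed column names: it reads an adjacency-table configuration row by row and builds the state machine's data structure, as steps A2-A3 describe.

```python
import csv
import io

# Hypothetical configuration text; the real Table 1 columns are not published.
CONFIG = """pre_state,semantics,app_context,history_key,post_state
chat_with_zhangsan,send_red_packet,wechat:chat_open,any,red_packet_interface
red_packet_interface,enter_amount,wechat:red_packet,any,confirm_interface
"""

def init_state_machine(text: str) -> dict:
    """Read the adjacency table row by row and initialize the state machine."""
    transitions = list(csv.DictReader(io.StringIO(text)))
    states = ({r["pre_state"] for r in transitions}
              | {r["post_state"] for r in transitions})
    return {"states": states, "transitions": transitions}

sm = init_state_machine(CONFIG)
```

A design note: because the machine is data-driven, changing the dialogue flow means editing the configuration file rather than the code, which is what makes the state machine "easier to design and maintain".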
The embodiment reads the configuration information from the configuration file and initializes the state machine accordingly. Because the configured jump conditions include the history operation record and the APP context, the same complex state management can be realized with a simpler network structure, and the APP context allows the actual state of the APP to be tracked.
In one possible implementation manner, the dialogue management method may further include the following step B1.
In step B1, the information of the current jump from the current dialogue state to the next dialogue state is stored as a history operation record.
Here, the current-jump information includes: the pre-jump dialogue state, the user semantic information, the response data and the post-jump dialogue state. In this way, the terminal stores every historical state jump, together with the dialogue between the user and the terminal, in the history operation record, so that the state machine can use a preset number of recent history operation records for subsequent jumps.
According to this embodiment, the historical dialogue-state jumps from the current dialogue state to the next dialogue state, the user semantic information and the response data are all stored in the history operation record, which facilitates the state machine's subsequent jumps based on that record.
In one possible implementation manner, the dialogue management method may further include the following step C1.
In step C1, the APP is controlled to execute an operation corresponding to the action data.
Here, after receiving the action data, the APP executes the operation corresponding to it. Continuing the example above, after the terminal generates action data instructing the WeChat application to enter the red-packet interface for sending a red packet to Zhang San, it can control the APP to execute the corresponding operation and enter that red-packet interface.
According to this embodiment, after the action data is received, the APP is controlled to execute the corresponding operation, completing the execution of the user's intent.
The implementation is described in detail below by way of several embodiments.
Fig. 2 is a flowchart of a dialogue management method according to an exemplary embodiment; the method may be implemented by a terminal or the like and, as shown in fig. 2, includes steps 201-208.
In step 201, configuration information in a configuration file is read.
The configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition and the post-jump dialogue state, wherein the jump condition comprises user semantic information, APP context and a history operation record.
in step 202, the state machine is initialized according to the configuration information.
In step 203, the current user semantic information is determined.
In step 204, the targeted APP context is determined according to the current user semantic information.
In step 205, the user's current dialogue state, the user semantic information, the determined APP context and the pre-stored history operation record are input into the state machine, the next dialogue state corresponding to the satisfied jump condition is determined, and the jump is performed.
The state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, wherein the jump condition comprises user semantic information, the context of the APP targeted by the user semantic information, and the history operation record.
In step 206, according to the next dialogue state and the current user semantic information, sending action data to the APP, and outputting response data.
In step 207, the information of the current jump from the current dialogue state to the next dialogue state is stored as a history operation record.
The current-jump information comprises: the pre-jump dialogue state, the user semantic information, the response data and the post-jump dialogue state.
In step 208, the APP is controlled to execute an operation corresponding to the action data.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure.
Fig. 3 is a block diagram of a dialog management device that may be implemented as part or all of an electronic device by software, hardware, or a combination of both, according to an example embodiment. As shown in fig. 3, the dialog management device comprises:
a first determining module 301, configured to determine current user semantic information;
a second determining module 302, configured to determine the targeted application (APP) context according to the current user semantic information;
a jump module 303, configured to input the user's current dialogue state, the user semantic information, the determined APP context and a pre-stored history operation record into a state machine, determine the next dialogue state corresponding to the jump condition that is satisfied, and perform the jump;
wherein the state machine is configured with a pre-jump dialogue state, a jump condition and a post-jump dialogue state, the jump condition comprising user semantic information, the context of the APP targeted by the user semantic information, and the history operation record.
As a possible embodiment, fig. 4 is a block diagram of a dialog management device according to an exemplary embodiment, where, as shown in fig. 4, the above-disclosed dialog management device may also be configured to include an output module 304, where:
and the output module 304 is configured to send action data to the APP according to the next dialog state and the current user semantic information, and output response data.
As a possible embodiment, fig. 5 is a block diagram of a dialog management device according to an exemplary embodiment, where, as shown in fig. 5, the dialog management device disclosed above may also be configured to include a reading module 305 and an initialization module 306, where:
a reading module 305, configured to read configuration information in the configuration file; the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition and the post-jump dialogue state, wherein the jump condition comprises user semantic information, APP context and a history operation record;
an initialization module 306, configured to initialize the state machine according to the configuration information.
As a possible embodiment, fig. 6 is a block diagram of a dialog management device according to an exemplary embodiment, as shown in fig. 6, where the above disclosed dialog management device may also be configured to include a storage module 307, where:
a storage module 307, configured to store, as a history operation record, the information of the current jump from the current dialogue state to the next dialogue state;
wherein the current-jump information comprises: the pre-jump dialogue state, the user semantic information, the response data and the post-jump dialogue state.
As a possible embodiment, fig. 7 is a block diagram of a dialog management device according to an exemplary embodiment, where, as shown in fig. 7, the above-disclosed dialog management device may also be configured to include a control module 308, where:
and the control module 308 is used for controlling the APP to execute the operation corresponding to the action data.
The specific manner in which the various modules perform operations in the apparatuses of the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
Fig. 8 is a block diagram illustrating a device for dialog management, which is suitable for use in a terminal device, according to an exemplary embodiment. For example, apparatus 800 may be a mobile phone, a game console, a computer, a tablet device, a personal digital assistant, or the like.
The apparatus 800 may include one or more of the following components: a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, an input/output (I/O) interface 806, a sensor component 807, and a communication component 808.
The processing component 801 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 801 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 801 may include one or more modules that facilitate interactions between the processing component 801 and other components. For example, processing component 801 may include multimedia modules to facilitate interactions between multimedia component 804 and processing component 801.
Memory 802 is configured to store various types of data to support operations at apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 802 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 803 provides power to the various components of the apparatus 800. The power components 803 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 804 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 804 includes a front-facing camera and/or a rear-facing camera. The front and/or rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each camera may have a fixed optical lens system or focal-length and optical-zoom capability.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 802 or transmitted via the communication component 808. In some embodiments, the audio component 805 further comprises a speaker for outputting audio signals.
The I/O interface 806 provides an interface between the processing component 801 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 807 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 807 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 807 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 807 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 807 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 807 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 808 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 808 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 808 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 802, including instructions executable by the processor 820 of the apparatus 800 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Also provided is a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of the apparatus 800, enable the apparatus 800 to perform the above-described dialogue management method, the method comprising:
determining current user semantic information;
determining the targeted application (APP) context according to the current user semantic information;
inputting the user's current dialogue state, the current user semantic information, the determined APP context, and a pre-stored historical operation record into a state machine; determining the next dialogue state whose jump condition is met; and performing the jump;
wherein the state machine is configured with, for each jump, a pre-jump dialogue state, a jump condition, and a post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context targeted by the user semantic information, and the historical operation record.
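The jump logic described above can be sketched in a few lines. This is a minimal illustration only; the class names, field names, and string values below are assumptions for the example, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen -> hashable, so it can serve as a dict key
class JumpCondition:
    """A jump fires only when all three inputs are satisfied."""
    semantic: str           # user semantic information
    app_context: str        # context of the APP the utterance targets
    history: Optional[str]  # required historical operation record (None = any)

class DialogStateMachine:
    def __init__(self):
        # (pre-jump dialogue state, jump condition) -> post-jump dialogue state
        self.transitions = {}

    def add_jump(self, pre, cond, post):
        self.transitions[(pre, cond)] = post

    def jump(self, current, semantic, app_context, history=None):
        """Return the next dialogue state whose jump condition is met, else None."""
        for (pre, cond), post in self.transitions.items():
            if (pre == current
                    and cond.semantic == semantic
                    and cond.app_context == app_context
                    and cond.history in (None, history)):
                return post
        return None  # no jump condition satisfied; stay in the current state

sm = DialogStateMachine()
sm.add_jump("idle", JumpCondition("play music", "music_app", None), "playing")
print(sm.jump("idle", "play music", "music_app"))  # -> playing
```

Making the historical operation record part of the jump condition, as the patent describes, lets the same utterance lead to different states depending on what the user did previously.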
In one embodiment, after making the jump, the method further comprises:
sending action data to the APP according to the next dialogue state and the current user semantic information, and outputting response data.
In one embodiment, the method further comprises:
reading configuration information in a configuration file;
wherein the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition, and the post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context, and the historical operation record;
initializing the state machine according to the configuration information.
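A configuration-driven initialization along these lines might look as follows. The JSON schema and all key names here are assumptions for illustration; the patent does not specify a configuration format:

```python
import json

# Hypothetical configuration file contents; the schema is illustrative only.
CONFIG_TEXT = """
{
  "jumps": [
    {"pre": "idle",
     "condition": {"semantic": "play music", "app_context": "music_app",
                   "history": null},
     "post": "playing"}
  ]
}
"""

def init_state_machine(config_text):
    """Parse the configuration and build the state machine's transition table."""
    config = json.loads(config_text)
    transitions = {}
    for jump in config["jumps"]:
        cond = jump["condition"]
        # key: (pre-jump state, semantic info, APP context, required history)
        key = (jump["pre"], cond["semantic"], cond["app_context"], cond["history"])
        transitions[key] = jump["post"]
    return transitions

table = init_state_machine(CONFIG_TEXT)
print(table[("idle", "play music", "music_app", None)])  # -> playing
```

Keeping the jumps in a configuration file, as the embodiment describes, means new dialogue flows can be added without changing the state-machine code.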
In one embodiment, the method further comprises:
storing the information of the current jump from the current dialogue state to the next dialogue state as a historical operation record;
wherein the current jump information comprises: the pre-jump dialogue state, the user semantic information, the response data, and the post-jump dialogue state.
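Recording each jump as a history entry could be sketched as below; the record type and field names are assumed for illustration, but the four fields mirror the ones the embodiment lists:

```python
from dataclasses import dataclass

@dataclass
class JumpRecord:
    """Information about one jump, kept as a historical operation record."""
    pre_state: str   # dialogue state before the current jump
    semantic: str    # user semantic information that triggered the jump
    response: str    # response data that was output
    post_state: str  # dialogue state after the jump

def record_jump(history, pre_state, semantic, response, post_state):
    """Append the current jump's information to the stored history."""
    history.append(JumpRecord(pre_state, semantic, response, post_state))

history = []
record_jump(history, "idle", "play music", "Now playing.", "playing")
print(history[-1].post_state)  # -> playing
```

Because jump conditions can reference the historical operation record, entries stored this way feed back into later state-machine decisions.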
In one embodiment, the method further comprises:
controlling the APP to execute the operation corresponding to the action data.
This embodiment also provides a dialogue management device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining current user semantic information;
determining the targeted application (APP) context according to the current user semantic information;
inputting the user's current dialogue state, the current user semantic information, the determined APP context, and a pre-stored historical operation record into a state machine; determining the next dialogue state whose jump condition is met; and performing the jump;
wherein the state machine is configured with, for each jump, a pre-jump dialogue state, a jump condition, and a post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context targeted by the user semantic information, and the historical operation record.
In one embodiment, the processor may be further configured to:
after the jump is performed, sending action data to the APP according to the next dialogue state and the current user semantic information, and outputting response data.
In one embodiment, the processor may be further configured to:
reading configuration information from a configuration file;
wherein the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition, and the post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context, and the historical operation record;
initializing the state machine according to the configuration information.
In one embodiment, the processor may be further configured to:
storing the information of the current jump from the current dialogue state to the next dialogue state as a historical operation record;
wherein the current jump information comprises: the pre-jump dialogue state, the user semantic information, the response data, and the post-jump dialogue state.
In one embodiment, the processor may be further configured to:
controlling the APP to execute the operation corresponding to the action data.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A dialogue management method, applied to a terminal, comprising:
determining current user semantic information;
determining the targeted application (APP) context according to the current user semantic information, comprising: determining the application (APP) targeted by the user semantic information, and acquiring the APP context corresponding to the user semantic information from a storage space of the APP;
inputting the user's current dialogue state, the current user semantic information, the determined APP context, and a pre-stored historical operation record into a state machine; determining the next dialogue state whose jump condition is met; and performing the jump;
wherein jumps between states in the state machine represent a human-machine dialogue process, and the state machine is configured with, for each jump, a pre-jump dialogue state, a jump condition, and a post-jump dialogue state, the jump condition comprising the user semantic information, the APP context targeted by the user semantic information, and the historical operation record.
2. The method of claim 1, wherein after making the jump, the method further comprises:
sending action data to the APP according to the next dialogue state and the current user semantic information, and outputting response data.
3. The method according to claim 1, wherein the method further comprises:
reading configuration information in a configuration file;
wherein the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition, and the post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context, and the historical operation record;
initializing the state machine according to the configuration information.
4. The method according to claim 2, wherein the method further comprises:
storing the information of the current jump from the current dialogue state to the next dialogue state as a historical operation record;
wherein the current jump information comprises: the pre-jump dialogue state, the user semantic information, the response data, and the post-jump dialogue state.
5. The method according to claim 2, wherein the method further comprises:
controlling the APP to execute the operation corresponding to the action data.
6. A dialog management device for use with a terminal, comprising:
the first determining module is used for determining the current user semantic information;
the second determining module is configured to determine the targeted application (APP) context according to the current user semantic information, comprising: determining the application (APP) targeted by the user semantic information, and acquiring the APP context corresponding to the user semantic information from a storage space of the APP;
the jump module is configured to input the user's current dialogue state, the current user semantic information, the determined APP context, and a pre-stored historical operation record into the state machine, determine the next dialogue state whose jump condition is met, and perform the jump;
wherein jumps between states in the state machine represent a human-machine dialogue process, and the state machine is configured with, for each jump, a pre-jump dialogue state, a jump condition, and a post-jump dialogue state, the jump condition comprising the user semantic information, the APP context targeted by the user semantic information, and the historical operation record.
7. The apparatus of claim 6, wherein the apparatus further comprises:
the output module is configured to send action data to the APP according to the next dialogue state and the current user semantic information, and to output response data.
8. The apparatus of claim 6, wherein the apparatus further comprises:
the reading module is configured to read configuration information from a configuration file;
wherein the configuration information comprises, for each jump in the state machine, the pre-jump dialogue state, the jump condition, and the post-jump dialogue state, and the jump condition comprises the user semantic information, the APP context, and the historical operation record;
and the initialization module is configured to initialize the state machine according to the configuration information.
9. The apparatus of claim 7, wherein the apparatus further comprises:
the storage module is configured to store the information of the current jump from the current dialogue state to the next dialogue state as a historical operation record;
wherein the current jump information comprises: the pre-jump dialogue state, the user semantic information, the response data, and the post-jump dialogue state.
10. The apparatus of claim 7, wherein the apparatus further comprises:
the control module is configured to control the APP to execute the operation corresponding to the action data.
11. A dialog management device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 5.
12. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 5.
CN201811566830.5A 2018-12-19 2018-12-19 Dialogue management method and device Active CN109670025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811566830.5A CN109670025B (en) 2018-12-19 2018-12-19 Dialogue management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811566830.5A CN109670025B (en) 2018-12-19 2018-12-19 Dialogue management method and device

Publications (2)

Publication Number Publication Date
CN109670025A CN109670025A (en) 2019-04-23
CN109670025B true CN109670025B (en) 2023-06-16

Family

ID=66145059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811566830.5A Active CN109670025B (en) 2018-12-19 2018-12-19 Dialogue management method and device

Country Status (1)

Country Link
CN (1) CN109670025B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104502A (en) * 2019-12-24 2020-05-05 携程计算机技术(上海)有限公司 Dialogue management method, system, electronic device and storage medium for outbound system
CN111652001B (en) * 2020-06-04 2023-01-17 联想(北京)有限公司 Data processing method and device
CN113779214B (en) * 2021-08-17 2022-10-18 深圳市人马互动科技有限公司 Automatic generation method and device of jump condition, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106575396A (en) * 2014-08-15 2017-04-19 微软技术许可有限责任公司 Quick navigation of message conversation history
CN106911812A (en) * 2017-05-05 2017-06-30 腾讯科技(上海)有限公司 A kind of processing method of session information, server and computer-readable recording medium
CN108846030A (en) * 2018-05-28 2018-11-20 苏州思必驰信息科技有限公司 Access method, system, electronic equipment and the storage medium of official website

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7257537B2 (en) * 2001-01-12 2007-08-14 International Business Machines Corporation Method and apparatus for performing dialog management in a computer conversational interface
CN104796392B (en) * 2014-01-22 2019-01-08 腾讯科技(北京)有限公司 One kind jumping context synchronizing device, method and client
CN105589848A (en) * 2015-12-28 2016-05-18 百度在线网络技术(北京)有限公司 Dialog management method and device
CN105845137B (en) * 2016-03-18 2019-08-23 中国科学院声学研究所 A kind of speech dialog management system
CN106874259B (en) * 2017-02-23 2019-07-16 腾讯科技(深圳)有限公司 A kind of semantic analysis method and device, equipment based on state machine

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106575396A (en) * 2014-08-15 2017-04-19 微软技术许可有限责任公司 Quick navigation of message conversation history
CN106911812A (en) * 2017-05-05 2017-06-30 腾讯科技(上海)有限公司 A kind of processing method of session information, server and computer-readable recording medium
CN108846030A (en) * 2018-05-28 2018-11-20 苏州思必驰信息科技有限公司 Access method, system, electronic equipment and the storage medium of official website

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Survey of spoken dialogue management; Wang Jinghua, Zhong Yixin, Wang Cong, Liu Jianyi; Application Research of Computers (10); full text *

Also Published As

Publication number Publication date
CN109670025A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
US10152207B2 (en) Method and device for changing emoticons in a chat interface
EP3188066B1 (en) A method and an apparatus for managing an application
CN107423106B (en) Method and apparatus for supporting multi-frame syntax
CN107767864B (en) Method and device for sharing information based on voice and mobile terminal
EP3176709A1 (en) Video categorization method and apparatus, computer program and recording medium
EP3104587B1 (en) Method and apparatus for displaying a conversation interface
CN107341509B (en) Convolutional neural network training method and device and readable storage medium
CN109670025B (en) Dialogue management method and device
US9959487B2 (en) Method and device for adding font
CN111461304B (en) Training method of classified neural network, text classification method, device and equipment
CN109063101B (en) Video cover generation method and device
CN107945806B (en) User identification method and device based on sound characteristics
CN107463372B (en) Data-driven page updating method and device
CN108270661B (en) Information reply method, device and equipment
CN111985635A (en) Method, device and medium for accelerating neural network inference processing
CN115273831A (en) Voice conversion model training method, voice conversion method and device
CN108766427B (en) Voice control method and device
CN109992754B (en) Document processing method and device
CN106447747B (en) Image processing method and device
CN105786561B (en) Method and device for calling process
CN107885464B (en) Data storage method, device and computer readable storage medium
CN107864263B (en) Recording method and device of application layer audio data
CN111667827B (en) Voice control method and device for application program and storage medium
CN114462410A (en) Entity identification method, device, terminal and storage medium
CN113923517A (en) Background music generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant