CN111290677B - Self-service equipment navigation method and navigation system thereof - Google Patents

Self-service equipment navigation method and navigation system thereof

Info

Publication number
CN111290677B
CN111290677B CN201811496702.8A
Authority
CN
China
Prior art keywords
menu
task
dialogue
service
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811496702.8A
Other languages
Chinese (zh)
Other versions
CN111290677A (en)
Inventor
唐嵩
易恒柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronics Great Wall Changsha Information Technology Co ltd
Original Assignee
China Electronics Great Wall Changsha Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronics Great Wall Changsha Information Technology Co ltd
Priority to CN201811496702.8A
Publication of CN111290677A
Application granted
Publication of CN111290677B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

Abstract

The invention discloses a self-service equipment navigation method and a navigation system thereof, which realize two-stage dialogue function navigation based on a dialogue management mechanism: one stage identifies the customer's service menu selection, and the other drives the step tasks of the business function through a natural man-machine dialogue mode, so that business handling is completed through whole-course dialogue interaction and the man-machine interaction of the self-service equipment is improved. A rule matching mode is combined with the dialogue management mechanism to identify the menu intention, which improves the reliability of the menu matching result.

Description

Self-service equipment navigation method and navigation system thereof
Technical Field
The invention belongs to the technical field of self-service equipment, and particularly relates to a self-service equipment navigation method and a navigation system thereof.
Background
With the continuous development of science and technology, networking, intelligence and self-service are development trends in industries such as finance, securities and e-government, and self-service equipment now provides all-round services for customers. Meanwhile, the application functions deeply integrated into such equipment grow increasingly rich. To carry these rich services, the function menus of the equipment's application service systems have more levels and the step-flow interaction becomes more and more complex; how to use an intelligent function navigation scheme to guide customers to complete their business efficiently and conveniently is an important subject in the field of self-service man-machine interaction. Natural language processing (NLP) is an important direction in the fields of computer science and artificial intelligence; it studies algorithms that take language as the object of computation, so that people can interact with computer systems in natural language, making information management more convenient and effective.
At present, the conventional means is to perform simple keyword query matching on the function menu through voice recognition and guide the user to a service entrance; this function navigation mode suffers from low recognition accuracy, a low degree of humanized dialogue interaction and other problems.
Disclosure of Invention
The invention aims to provide a self-service equipment navigation method and navigation system with higher accuracy in navigating to the user's intended menu node and a higher degree of man-machine interaction.
In one aspect, the invention provides a self-service device navigation method, comprising the following steps:
S1: when the starting information is received, waking up a self-service equipment navigation system;
s2: starting a menu function node navigation dialogue based on a finite state machine or a task tree dialogue mode to identify the menu function node intention of a user;
the menu function node navigation dialogue is a man-machine interaction rule preset with the user's current menu function node as the target; if menu navigation information input by the user's voice or text is received during the dialogue, menu function node intention recognition is carried out by rule matching based on the input menu navigation information;
wherein each menu function consists of a plurality of steps, and each step corresponds to a task;
S3: linking to the first step of the menu function node acquired in the step S2, and starting the service dialogue of the menu function node based on a finite state machine or a task tree dialogue mode until the service flow is completed;
the service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
The invention adopts a preset menu function node navigation dialogue to recognize the user's menu function node intention, further improving the man-machine interaction of the self-service equipment. A finite state machine or task tree dialogue mode may be selected to realize the man-machine dialogue; that is, two alternative schemes are provided, giving developers more choice. The task tree dialogue mode developed by the invention suits complex dialogue systems: it is strongly logical, fits situations with many levels and complex service relevance, supports rapid switching of dialogue scenes, and greatly improves the flexibility of the dialogue system. The finite state machine of the prior art can flexibly capture any type of interaction, suits simple dialogue systems, and is easy to develop and maintain; it is offered as an extended choice that gives the self-service equipment navigation system more flexibility.
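As an illustration of the finite state machine option, a minimal navigation FSM might look as follows; the state names and transition triggers are illustrative assumptions, not taken from the invention:

```python
# Minimal finite-state-machine dialogue sketch: a table of
# (state, trigger) -> next-state transitions drives the navigation flow.
class DialogueFSM:
    def __init__(self):
        self.state = "greet"
        self.transitions = {
            ("greet", "start"): "ask_service",
            ("ask_service", "menu_matched"): "run_business",
            ("ask_service", "no_match"): "ask_service",   # re-prompt on failure
            ("run_business", "flow_done"): "finish",
        }

    def step(self, trigger):
        key = (self.state, trigger)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

fsm = DialogueFSM()
assert fsm.step("start") == "ask_service"
assert fsm.step("no_match") == "ask_service"      # unmatched input loops back
assert fsm.step("menu_matched") == "run_business"
assert fsm.step("flow_done") == "finish"
```

Each dialogue scene is one table of transitions, which is why this mode stays easy to develop and maintain for simple flows but scales poorly to deeply nested business logic.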
In addition, in the menu function node navigation dialogue, if the user inputs menu navigation information through voice or text, the method identifies the menu function node intention by rule matching; the text similarity matching and regular-expression matching within rule matching are more accurate than simple keyword recognition. Machine-learning classifier algorithms, such as SVM classifiers, may also be employed for menu function node intention recognition.
Further preferably, the procedure for identifying the menu function node intention by rule matching in step S2, based on the input menu navigation information, is as follows:
S21: word segmentation is carried out on the input menu navigation information based on the domain named-entity dictionary, the function menu dictionary and the stop-word dictionary; if the menu navigation information input by the user is voice information, the voice is first converted into text information, and the text is then segmented;
the domain named-entity dictionary contains industry terms of the business domain; the function menu dictionary comprises the ID, marker-word set and hierarchical path of every function menu node, and two function menu nodes with the same name have different IDs; the stop-word dictionary comprises the filter words;
S22: calculating the similarity between each segmented word and each function menu node by a text similarity comparison method, based on the marker-word set of each function menu node in the function menu dictionary, and selecting the function menu nodes whose similarity exceeds a threshold value;
S23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of the selected function menu nodes is greater than or equal to 2, starting menu function node filtering dialogue on the selected function menu nodes based on a finite state machine or task tree dialogue mode to obtain the current function menu nodes of the user;
if a plurality of function menu nodes with the same function menu name are matched in S22, the menu function node filtering dialogue is carried out based on the hierarchical paths corresponding to those function menu nodes to eliminate the ambiguity;
the menu function node filtering dialogue is a preset man-machine interaction rule for screening the candidate function menu nodes;
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: if the number of the selected function menu nodes is 0, continuing the menu function node navigation dialogue based on the finite state machine or task tree dialogue mode.
It should be noted that the marker words include the name of a function menu node together with synonyms, related words and the like that are mainly related to that name. Function menu nodes with the same name therefore share the same marker-word set even though their hierarchical paths differ, which is why the text similarity comparison in S22 can return several same-named function menu nodes; in that case a dialogue based on the hierarchical paths is needed to disambiguate. An example of a hierarchical path is: main interface/card service (sub-interface)/card transfer (menu function node). The segmented words obtained by text word segmentation are divided according to dictionaries; for example, the invention segments at least according to the domain named-entity dictionary and/or the function menu dictionary, and may also segment according to other dictionaries, such as the Xinhua dictionary. Text similarity measures include, for example, cosine similarity and word-displacement similarity.
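The flow of S21-S23 can be sketched as follows; the menu entries, marker words, threshold, and the placeholder character-overlap similarity are all illustrative assumptions (in practice the invention's word-displacement similarity would be substituted for the placeholder):

```python
# Sketch of S21-S23: score each function menu node by the best similarity
# between any segmented word and the node's marker words, then branch on
# how many candidates exceed the threshold.
FUNCTION_MENU = {
    "card_transfer": {"markers": ["card transfer", "transfer"],
                      "path": "main interface/card service/card transfer"},
    "acct_transfer": {"markers": ["account transfer", "transfer"],
                      "path": "main interface/account service/account transfer"},
}

def similarity(a, b):
    # placeholder: shared-character ratio (stand-in for the invention's
    # word-displacement similarity algorithm)
    return len(set(a) & set(b)) / max(len(a), len(b))

def match_menu(segmented_words, threshold=0.5):
    selected = [
        node_id
        for node_id, node in FUNCTION_MENU.items()
        if max(similarity(w, m)
               for w in segmented_words for m in node["markers"]) > threshold
    ]
    if len(selected) >= 2:                 # case a: start a filtering dialogue
        return ("disambiguate", selected)
    if len(selected) == 1:                 # case b: node identified
        return ("matched", selected[0])
    return ("continue_dialogue", None)     # case c: keep the navigation dialogue
```

For the input word "transfer", both sample nodes match and the filtering dialogue would disambiguate using their hierarchical paths; an unrelated input falls through to case c and the navigation dialogue continues.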
Further preferably, the similarity between a segmented word and a function menu node in step S22 is obtained from the similarities between the segmented word and the marker words in the node's marker-word set;
the similarity between the segmented word and a marker word in the marker-word set is calculated by a word-displacement similarity algorithm, with the following calculation formula:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
In the formula, S is the similarity between the segmented word and the marker word, Wa is the same-word-number difference scaling coefficient, Wb is the word-displacement difference scaling coefficient, M is the number of words the segmented word and the marker word have in common, N is the number of common words at the same position, and Length(A) and Length(B) are the text lengths of the segmented word and the marker word.
Preferably, the similarity between a segmented word and a function menu node is the maximum of the similarities calculated between the word and each marker word in the node's marker-word set.
Further preferably, the value range of the same-word-number difference scaling coefficient Wa is [0.6, 0.8], and the value range of the word-displacement difference scaling coefficient Wb is [0.2, 0.4].
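A direct sketch of the formula above follows; interpreting M as the count of distinct shared characters and N as the count of characters matching at the same position is one plausible reading of the description, not a definitive implementation:

```python
# S = X*Wa + Y*Wb with X = M/max(Length(A), Length(B)),
# Y = N/max(Length(A), Length(B)) and Wb = 1 - Wa; Wa defaults to 0.7,
# inside the suggested [0.6, 0.8] range.
def word_shift_similarity(a, b, wa=0.7):
    wb = 1.0 - wa
    longest = max(len(a), len(b))
    m = len(set(a) & set(b))                    # M: characters in common
    n = sum(1 for x, y in zip(a, b) if x == y)  # N: same character, same position
    return (m / longest) * wa + (n / longest) * wb

# identical strings score 1.0; shifted but overlapping strings score by Wa alone
assert abs(word_shift_similarity("转账", "转账") - 1.0) < 1e-9
```

Because N rewards characters in matching positions, "卡转账" vs "转账" scores lower than an exact match but higher than an unrelated pair, which is the position sensitivity the description claims over plain keyword matching.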
Further preferably, the word segmentation in step S21 is also performed according to a preset negative word dictionary, and the word segmentation retains the negative words and the segmented words while removing the filter words. Before step S23 is executed, the method further comprises the following step: filtering the function menu nodes selected in step S22 based on the negative word dictionary;
judging whether a negative word exists before the segmented word corresponding to a function menu node selected in S22, and if a negative word exists, judging whether the number of negative words is even or odd;
if no negative word exists or the number of negative words is even, the corresponding function menu node is retained, and S23 is executed; if the number of negative words is odd, the function menu node is removed, and S23 is executed.

Further preferably, the process of conducting a dialogue based on the task tree dialogue mode is as follows:
A: pushing the dialogue task in the task tree corresponding to the current dialogue service into a task stack, and taking the root task as a stack top task of the task stack;
B: judging whether all tasks in the task stack are executed and completed; if so, completing the current dialogue service; if not, executing step C;
C: executing the stack-top task in the task stack;
if the current stack-top task needs user input, target key information is extracted from the user input after it is acquired and bound to the matching record item in the expected-data table;
the order of the record items in the expected-data table corresponds to the execution order of the tasks in the task stack;
D: clearing the completed task from the task stack, taking the next task as the stack-top task of the task stack according to the task tree, and returning to step B.
The task tree dialogue mode is carried out in a task stack mode, so the tasks are managed in a more orderly way and dialogue scenes are switched more conveniently. If it is detected that the user has switched from the current scene into another dialogue scene, the dialogue tasks of the other scene are quickly pushed onto the task stack; when they are completed, the unfinished dialogue tasks already in the task stack continue.
In addition, the expected-data table is used to store data collected from outside or used in the conversation; the order of its record items corresponds to the execution order of the tasks in the task stack, and its hierarchical structure lets subtasks conveniently access the data of upper-level tasks, preserving the dialogue context.
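Steps A-D and the expected-data table can be sketched as a small loop; the task fields (`needs_input`, `slot`) and the sample transfer task tree are illustrative assumptions:

```python
# Task-stack dialogue loop: push the task tree (A), execute the top task and
# bind extracted key information to the expected-data table (B/C), then pop
# the finished task and continue (D).
class TaskStackDialogue:
    def __init__(self, task_tree):
        self.stack = list(reversed(task_tree))  # A: push tasks, root on top
        self.expected_data = {}                 # ordered record items

    def run(self, get_input):
        while self.stack:                       # B: loop until all tasks done
            task = self.stack[-1]               # C: execute the stack-top task
            if task.get("needs_input"):
                value = get_input(task["slot"])
                self.expected_data[task["slot"]] = value  # bind key info
            self.stack.pop()                    # D: clear the finished task
        return self.expected_data

    def switch_scene(self, other_tasks):
        # scene switch: push another scene's tasks; the old scene resumes after
        self.stack.extend(reversed(other_tasks))

transfer_tree = [
    {"name": "root"},
    {"name": "ask_payee", "needs_input": True, "slot": "payee"},
    {"name": "ask_amount", "needs_input": True, "slot": "amount"},
]
dlg = TaskStackDialogue(transfer_tree)
data = dlg.run(lambda slot: {"payee": "62220001", "amount": "100"}[slot])
```

Because a scene switch is just a push, the interrupted scene's tasks remain on the stack and resume automatically once the new scene's tasks are popped.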
Further preferably, the task execution process in the task stack further includes monitoring whether an error dialogue occurs in real time, and if the error dialogue occurs, starting an error processing task;
wherein the decision mechanism of the error-dialogue monitoring identifies whether an error dialogue is present based on user-defined rules or a unified configuration.
User-defined rules or unified configuration means that a user can customize a decision algorithm to decide whether to enter the error-handling process, or define rules, for example entering the error-handling process when the confidence of the information extraction is less than 80%. The confidence can be obtained with an existing text similarity calculation method.
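The decision mechanism might be expressed as a single predicate, with a user-defined rule taking precedence over the unified threshold; the function shape is an assumption, though the 80% example follows the description:

```python
# Error-dialogue decision rule: enter error handling when extraction
# confidence falls below a configured threshold (the description's example
# uses 80%), unless a user-defined rule overrides the unified configuration.
def should_enter_error_handling(confidence, threshold=0.8, custom_rule=None):
    if custom_rule is not None:          # user-defined decision algorithm
        return custom_rule(confidence)
    return confidence < threshold        # unified configuration

assert should_enter_error_handling(0.75) is True
assert should_enter_error_handling(0.9) is False
# a user-defined rule overrides the unified threshold
assert should_enter_error_handling(0.9, custom_rule=lambda c: c < 0.95) is True
```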
In another aspect, the invention provides a self-service equipment navigation system based on the method, which at least comprises a voice recognition service module, a semantic/intention recognition module, a dialogue management engine module, a functional navigation agent module and a self-service equipment application operation platform;
The voice recognition service module, the semantic/intention recognition module and the dialogue management engine module are all connected with the functional navigation agent module; the functional navigation agent module is connected with the self-service equipment application operation platform;
the voice recognition service module is used for converting collected user voice into text information;
the semantic/intention recognition module is used for recognizing menu function node intention matched with menu navigation information by adopting a rule matching method;
the dialogue management engine module is used for conducting dialogue task management and supports both the finite state machine and the task tree mode;
the dialogue at least comprises a menu function node navigation dialogue and a menu function node business dialogue;
the self-service equipment application running platform is used for responding to the navigation requirement of the functional navigation agent to execute tasks.
The functional navigation agent module provides a function navigation interface and an execution delegation service, so that the self-service application running platform is decoupled from the service modules.
The system further preferably comprises an equipment service module, a transaction service module and a flow interface collection module, wherein the equipment service module, the transaction service module and the flow interface collection module are all connected with the self-service equipment application operation platform;
The self-service equipment application operation platform responds to the navigation need of the functional navigation agent to call and execute transaction service, equipment driving service and flow page jump service.
Further preferably, the system further comprises a voice synthesis service module for converting text information into natural voice.
Advantageous effects
1. Aiming at the service characteristics of existing self-service application systems (rich service functions, many menu levels, relatively complex service flows), the invention adds a humanized, intelligent function navigation system and method that help customers transact business conveniently and efficiently. Two-stage dialogue function navigation is realized on a dialogue management mechanism: one stage identifies the customer's service menu selection, and the other drives the step tasks of the business function through a natural man-machine dialogue mode, so that business handling is completed through whole-course dialogue interaction and the man-machine interaction of the self-service equipment is improved. The invention lets developers choose the finite state machine or the task tree dialogue mode, providing two alternative schemes for man-machine dialogue and supporting both simple and complex dialogue scenes, so the shortcomings of either scheme are masked. The task tree dialogue mode developed by the invention suits complex dialogue systems: it is strongly logical, fits situations with many levels and complex service relevance, supports rapid switching of dialogue scenes, and greatly improves the flexibility of the dialogue system.
2. The invention uses text similarity matching to identify the user's menu function node intention: the information input by voice or text is segmented, the similarity between the segmented words and the function menu nodes in a preset function menu dictionary is calculated, and the user's menu intention is then determined. Compared with simple keyword matching, the reliability of the matching result is higher. On one hand, each segmented word is matched against the several marker words of a menu function node, and a combined similarity is taken as the similarity between the word and the node, which is necessarily more accurate than simple keyword recognition. On the other hand, the chosen word-displacement similarity algorithm considers both the common words and their positions when describing the similarity of two words, so the resulting similarity describes their degree of correlation more accurately. Hence the function menu node obtained with text similarity, especially with the word-displacement similarity algorithm, conforms better to the user's actual intention, and the accuracy of the result is higher.
3. The task tree dialogue mode adopted by the invention is realized on a task stack: dialogue tasks are managed in the stack, which ensures orderly and effective execution of tasks and allows faster switching of dialogue scenes.
Drawings
FIG. 1 is a diagram of a multi-level functional interface provided by the present invention;
FIG. 2 is an organizational chart of a self-service equipment navigation system provided by the invention;
FIG. 3 is a flow chart of a self-service device navigation method provided by the invention;
FIG. 4 is a flow chart of a task tree dialog mode provided by the present invention;
fig. 5 is a dialogue flow chart of a transfer service task tree mode provided by the invention.
Detailed Description
The invention will be further illustrated with reference to examples.
As shown in FIG. 1, the main interface of the self-service equipment provided by the invention comprises sub-interfaces and menu function nodes, hierarchically nested in a tree structure. A menu function node is composed of a plurality of steps consisting of interactive pages, transaction service calls and equipment service call logic, and steps may also nest sub-steps. A step may be understood as a task in the present invention. The scheme performs function navigation mainly for this typical structure.
As shown in FIG. 2, the self-service device navigation system of the present invention comprises a speech recognition service module, a speech synthesis service module, a semantic/intention recognition module, a dialogue management engine module, a function navigation agent, a self-service device application running platform, a device service module, a transaction service module and a flow interface aggregation module.
The voice recognition service module, the voice synthesis service module, the semantic/intention recognition module and the dialogue management engine module are all connected with the functional navigation agent module; the functional navigation agent module, the equipment service module, the transaction service module and the flow interface collection module are all connected with the self-service equipment application operation platform. The functional navigation agent module thus serves as the interaction interface and message-conversion middle layer between the platform and each NLP subsystem, indirectly realizing the interactive communication and control logic of each core navigation module; it provides the function navigation interface and execution delegation service, so that the self-service application operation platform is decoupled from the NLP service modules, which comprise the voice recognition service module, the voice synthesis service module, the semantic/intention recognition module and the dialogue management engine module.
The Automatic Speech Recognition (ASR) service module is used for converting collected user voice into text information, realizing the automatic voice recognition function.
The Text-To-Speech (TTS) service module converts text into natural speech, providing the voice-broadcast function of the self-service application system.
The semantic/intention recognition module is used for recognizing the menu function node intention matched with the menu navigation information by a rule matching method; the semantic/intention understanding service module also provides basic service interfaces for natural language understanding, such as Chinese word segmentation, part-of-speech tagging, entity recognition and dependency syntactic analysis, and can be called as a RESTful service.
The dialogue management engine module provides dialogue task management functions independent of the application services, ensuring that the dialogue proceeds correctly by executing the appropriate services. Dialogue management (DM) is the process that controls man-machine dialogue: it implements task-driven multi-round dialogue, maintains dialogue state (dialogue state tracking) and generates system decisions (dialogue policy), while acting as the interface to backend/task models and providing expected values for semantic expressions. The dialogue management engine described herein supports dialogue management models of the finite state machine and the task tree.
The self-service equipment application operation platform is the software operation framework that realizes the self-service application services; its core performs logic assembly of interactive pages, transaction service calls and equipment driver services, organically combines the modules through parameter configuration and an event-driven mode, and provides interface extension modules and a configuration mechanism for realizing customized service functions.
As shown in fig. 3, based on the self-service device navigation system, the self-service device navigation method provided by the invention comprises the following steps:
s1: and when the starting information is received, waking up the self-service equipment navigation system.
An active inquiry/detection mechanism or a passive trigger is adopted to start the NLP function navigation. In this embodiment the starting information is generated in the following three ways, but the invention is not limited to these three ways.
The first mode is that the self-service application operation platform starts an infrared sensor to detect that a person approaches to the equipment, and generates starting information to trigger a function navigation agent to prompt a customer to conduct service selection.
The second mode is when the user clicks a navigation button of the application interface to generate starting information, and the functional navigation agent of the self-service device is activated.
The third mode is to wake up the self-service device application system, the function navigation agent, using voice command generation initiation information.
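The three start-up modes can be sketched as a single wake-up dispatcher; the event names are illustrative assumptions:

```python
# Wake-up dispatcher for the three start-up triggers described above.
def wake_navigation(event):
    triggers = {
        "infrared_person_detected",   # mode 1: sensor detects an approaching customer
        "navigation_button_clicked",  # mode 2: user clicks the navigation button
        "voice_wake_command",         # mode 3: a voice command wakes the system
    }
    return event in triggers  # True: generate starting information, wake the agent

assert wake_navigation("infrared_person_detected")
assert not wake_navigation("card_inserted")
```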
S2: starting a menu function node navigation dialogue based on a finite state machine or a task tree dialogue mode to identify the menu function node intention of a user;
the menu function node navigation dialogue is a man-machine interaction rule preset by taking the current menu function node of a user as a target.
Such as the following menu function node navigation dialog:
the infrared human body sensing module service of the application operation platform detects that a person approaches to the equipment, triggers the function navigation agent system and starts a menu function node navigation dialogue, and the dialogue content comprises:
the function navigation system broadcasts that you get good and welcome you to use the XXX self-service system.
The function navigation system broadcasts asking what business you need to transact?
User input: i want to handle the xxx tasks. "
In this dialogue process, the information input by the user is the received menu navigation information input by voice or text, and the invention carries out menu function node intention recognition by rule matching based on it. If the menu navigation information input by the user cannot be matched to a menu function node, the menu function node navigation dialogue continues, for example as follows:
Function navigation system: "Sorry, we could not correctly recognize the business you want to handle; perhaps our system does not support it, or we did not understand what you meant."
Function navigation system: "Do you need an introduction to the business supported by our system?" (answer "OK" if needed; otherwise no answer is required)
User input: "OK."
The application system pops up a clickable function list (shown as a displayed list in the figure),
and simultaneously broadcasts: "Please select one of the following services."
User input: "transfer service", or the user selects the transfer service on the interface.
If the user inputs menu navigation information, such as "transfer service" in the dialogue above, menu function node intention recognition continues in the rule matching mode; if the user selects the transfer service on the interface, the selected menu function node is directly known to be the transfer service, and the menu function node navigation dialogue ends.
Aiming at menu navigation information input by a user, the method adopts a rule matching mode to identify the intention of the menu function node, and the implementation process is as follows:
S21: word segmentation is carried out on the input menu navigation information based on the domain named-entity dictionary, the function menu dictionary, the stop-word dictionary and the negative word dictionary; if the menu navigation information input by the user is voice information, the voice is first converted into text information, and the text is then segmented.
In this embodiment, the text initiates a JSON-format text semantic request to the semantic/intention understanding service through a communication protocol (such as HTTP), and the service performs Chinese word segmentation on the text (for example with IKAnalyzer).
Regarding the domain named-entity dictionary: a word segmentation configuration file is defined in JSON format, and extended word entries are defined according to the industry the self-service application belongs to; for the securities field, for example, nouns such as XX securities exchange, double-record, position holding, three-party depository and one-account-pass are defined.
Regarding the function menu dictionary: the ID, marker-word set and hierarchical path of each function menu node are defined, where the marker words comprise the function menu name, synonyms, expansion words and the like. A hierarchical path is, for example: main interface/card business (sub-interface)/card transfer (menu function node). Hierarchical paths are included mainly because the same menu function can exist under different business paths of the business system.
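A function menu dictionary entry in JSON might look as follows; the field names and values are illustrative assumptions, not the invention's actual schema:

```json
{
  "menu_nodes": [
    {
      "id": "M001",
      "markers": ["card transfer", "transfer", "remittance"],
      "path": "main interface/card business/card transfer"
    },
    {
      "id": "M002",
      "markers": ["card transfer", "transfer"],
      "path": "main interface/passbook business/card transfer"
    }
  ]
}
```

The two entries share a menu name (and hence overlapping marker words) but carry different IDs and hierarchical paths, which is exactly what the disambiguation dialogue relies on.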
Regarding the stop word dictionary: modal particles, adverbs and the like are placed in the stop word dictionary as filter words, used to filter the text.
Regarding the negative word dictionary: negation words such as not, no and without form the content of the negative word dictionary.
In summary, the word segmentation process retains negative words and removes the filter words.
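A minimal sketch of that filtering step, assuming a whitespace tokenizer stands in for a real segmenter such as IKAnalyzer, with toy English stop-word and negative-word sets (the real dictionaries hold Chinese modal particles, adverbs and negations):

```python
STOP_WORDS = {"i", "do", "want", "the", "to", "please"}   # illustrative filter words
NEGATIVE_WORDS = {"not", "no", "without"}                 # illustrative negatives

def segment(text):
    """Tokenize and drop filter words, but always keep negative words so
    the later negation check still sees them."""
    tokens = text.lower().split()   # stand-in for IKAnalyzer-style segmentation
    return [t for t in tokens if t in NEGATIVE_WORDS or t not in STOP_WORDS]
```

For "I do not want the transfer" this keeps `["not", "transfer"]`: the stop words are filtered, while "not" survives for the negation parity check in S22.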
s22: calculating the similarity between each word and each function menu node by adopting a text similarity comparison method based on the mark word set of each function menu node in the function menu dictionary, selecting the function menu node with the similarity exceeding a threshold value, and filtering the selected function menu node based on the negative word dictionary.
In this embodiment, preferably, the similarity between a segmented word and a function menu node is the maximum of the similarities calculated between that word and each marker word in the node's marker word set. Also preferably, the similarity between a segmented word and a marker word is calculated with a word displacement similarity algorithm. The calculation formula is as follows:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
In the formula, S is the similarity between a segmented word and a marker word, Wa is the same-word-count difference scaling factor, Wb is the word displacement difference scaling factor, M is the number of words the segmented word and the marker word have in common, N is the number of identical words at identical positions, and Length(A) and Length(B) are the text lengths of the segmented word and the marker word. In this embodiment, the preferred value range of the same-word-count difference scaling factor Wa is [0.6, 0.8], and that of the word displacement difference scaling factor Wb is [0.2, 0.4]. The values of Wa and Wb can be confirmed, for example, as follows:
analyze text samples that were successfully matched to a defined menu, and determine the scaling factors by computing the average proportions of the same-word-count difference and the word displacement difference over those samples. These reflect the average weights of the two differences in sample texts that are considered correct matches to a function menu.
In other possible embodiments, cosine similarity may be used instead to calculate the similarity between the segmented word and the marker word.
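The formula above can be sketched directly in Python. Treating M as the count of characters the two strings share (with multiplicity) is one reasonable reading of "same word number"; the function itself follows the patent's S = X*Wa + Y*Wb with Wb = 1 - Wa:

```python
def word_shift_similarity(a, b, wa=0.7):
    """Word displacement similarity: S = X*Wa + Y*Wb, Wb = 1 - Wa,
    X = M / max(len(A), len(B)), Y = N / max(len(A), len(B)),
    M = characters common to both strings, N = same character at same position."""
    wb = 1.0 - wa
    longest = max(len(a), len(b))
    if longest == 0:
        return 0.0
    m = sum(min(a.count(c), b.count(c)) for c in set(a))  # shared characters
    n = sum(1 for x, y in zip(a, b) if x == y)            # same char, same position
    return (m / longest) * wa + (n / longest) * wb
```

With Wa = 0.7, identical strings score 1.0, while "abc" vs "acb" scores 0.8: all three characters are shared (X = 1) but only the first is in place (Y = 1/3), which is exactly the displacement penalty the algorithm is designed to apply.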
Regarding how function menu nodes whose similarity exceeds a threshold are selected, the invention determines the returned result by setting a threshold, for example an agreed threshold >= 0.8. The result of function navigation intention recognition can be described in JSON, in the form {retCode: 2, result: [{order: 'function menu 1', similarity: 0.8}, {order: 'function menu 2', similarity: 0.9}]}. A retCode of -1 means no matching function menu was returned; 0 means a fully matching function menu with similarity 1 was returned; 2 means one or more fuzzy function menu matches with similarity greater than or equal to the threshold were returned.
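A sketch of that selection and return-value convention; the JSON shape and retCode meanings follow the description above, and the input is assumed to be a mapping from menu name to its best similarity score:

```python
def recognition_result(similarities, threshold=0.8):
    """Build the recognition result: retCode -1 = no match,
    0 = exact match (similarity 1), 2 = one or more fuzzy matches."""
    hits = [{"order": name, "similarity": score}
            for name, score in similarities.items() if score >= threshold]
    exact = [h for h in hits if h["similarity"] == 1]
    if not hits:
        return {"retCode": -1, "result": []}
    if exact:
        return {"retCode": 0, "result": exact}
    return {"retCode": 2, "result": hits}
```

Serializing the returned dict (e.g. with `json.dumps`) yields the JSON text handed back to the function navigation agent.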
In the negative word dictionary filtering step, for each function menu node selected above, it is judged whether a negative word appears before the segmented word corresponding to that node, and if so, whether the number of negative words is even or odd;
if no negative word exists, or the number of negative words is even, the corresponding function menu node is retained and S23 is executed; if the number of negative words is odd, the function menu node is removed and S23 is executed.
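The even/odd rule can be sketched as follows; the token list and match position come from the segmentation step, and the negative-word set is illustrative:

```python
def keep_after_negation(tokens, match_index, negatives=frozenset({"not", "no"})):
    """Keep a matched menu word when the number of negative words before it
    is even (double negation cancels out); drop it when the number is odd."""
    preceding = sum(1 for t in tokens[:match_index] if t in negatives)
    return preceding % 2 == 0
```

So "not transfer" drops the transfer node, while "not no transfer" (a double negation) keeps it.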
S23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of the selected function menu nodes is greater than or equal to 2, starting menu function node filtering dialogue for the selected function menu nodes based on a finite state machine or task tree dialogue mode to obtain the current function menu nodes of the user.
The menu function node filtering dialogue is a preset man-machine interaction rule for the function menu node. If the return result is that two function menus are completely matched, the service A or the transfer service B can be fetched, the dialogue can be initiated, whether the user wants to transact the service A or the service B is generated by voice consultation, and the service selection intention of the user is further defined.
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: if the number of the selected function menu nodes is 0, continuing the menu function node navigation dialogue based on the finite state machine or task tree dialogue mode.
S3: linking to the first step of the menu function node acquired in the step S2, and starting the service dialogue of the menu function node based on a finite state machine or a task tree dialogue mode until the service flow is completed;
The service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
As shown in fig. 4, when the execution condition of the sub-step of the present service is satisfied, the sub-step is continued until the present service is completed.
In this embodiment, preferably, the finite state machine dialogue mode is used in S2 for the menu function node navigation dialogue that recognizes the user's menu function node intention, and the task tree dialogue mode is used in S3 for the service dialogue of the menu function node.
The dialogue management of the invention comprises two parts: the dialogue tasks and the dialogue management engine module. Taking the task tree dialogue mode as an example:
(1) Business dialogue tasks when a task tree dialogue mode is adopted.
The business system designs dialogue tasks according to its dialogue requirements. By decomposing a conversation into tasks, the business system specifies the content and logic of the conversation to the dialogue management engine; specific business dialogues must be designed according to business needs. A dialogue is expressed as tasks, and a task can be further decomposed into subtasks, so a complete business dialogue is in fact a task tree that represents the dialogue content and dialogue logic of the business. By default, the dialogue management engine executes the tasks on the task tree in pre-order traversal to complete the entire dialogue. If the dialogue process involves dialogue scene switching, the dialogue management engine manages the whole process and can flexibly execute tasks in the task tree as required. The business system only needs to provide the dialogue task tree for its business.
Dialogue tasks are divided into intermediate tasks and actual tasks. Intermediate tasks perform no concrete work themselves; they provide the business logic of the dialogue by managing subtasks. For example, the transfer information task of the system is an intermediate task comprising two subtasks, the transfer information input task and the transfer information confirmation task. Actual tasks correspond to leaf nodes in the dialogue task tree, such as the transfer information input task and the transfer information confirmation task. An actual task performs concrete work, such as prompting the user with a message, waiting for user input, or invoking a device function to fulfil the user's request in the dialogue, for example notifying the system to open the transfer interface.
The dialogue management system provides different dialogue task base classes; the business system extends the corresponding base class to define the concrete task classes of the business dialogue. By overriding or implementing the corresponding methods of the task base class, the business system can specify a task's completion condition, its start execution condition, the data the task needs to obtain from the user, and its subtasks, and can define the concrete operations the task must execute, such as notifying the business system to open the transfer interface.
(1) Task start execution conditions: only if this condition is true will the dialog management engine start executing the task.
(2) Task completion conditions: when the condition is true, the dialog management engine considers that the task has been successfully executed to completion.
(3) Data required by the task: data that must be obtained from the user, extracted from the user's input and bound to the task's variables.
All of the above are conditions set so that tasks are executed in order.
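These hooks can be sketched as a small base class; the method names are assumptions, standing in for whatever the dialogue management system actually exposes, but the three conditions above map onto them directly:

```python
class DialogTask:
    """Sketch of a dialogue-task base class a business system would extend."""

    def __init__(self, name, subtasks=None):
        self.name = name
        self.subtasks = subtasks or []   # non-empty => intermediate task
        self.data = {}                   # data bound from user input
        self.done = False

    def can_start(self):
        """Task start execution condition; subclasses override as needed."""
        return True

    def is_complete(self):
        """Task completion condition."""
        if self.subtasks:                # intermediate task: done when children are
            return all(t.is_complete() for t in self.subtasks)
        return self.done                 # actual (leaf) task

    def execute(self):
        """Actual work, e.g. prompting the user; leaf subclasses override."""
        self.done = True
```

An intermediate "transfer information" task would then hold "input" and "confirm" leaf subtasks, and becomes complete only once both have executed.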
(2) A dialog management engine module when using a dialog tree mode:
the dialogue management engine module provides dialogue task management function independent of specific service, and completes dialogue with user by executing dialogue task tree provided by dialogue service system. The dialog management engine can support dialog scene switching and also provides an error handling mechanism to ensure that the dialog can be smoothly conducted in the event that the user's speaking content is not accurately understood.
Regarding the task tree dialogue mode, it uses the task stack mode to execute dialogue tasks, and the procedure is as follows:
a: pushing the dialogue task in the task tree corresponding to the current dialogue service into a task stack, and taking the root task as a stack top task of the task stack;
B: judging whether all tasks in the task stack are executed and completed, if so, completing the current dialogue service; if not, executing the step C;
c: executing a stack top task in a task stack;
if the current top-of-stack task needs user input, then after the user input is obtained, the target key information is extracted from it and bound to the matching record items in the expected data table;
the order of record items in the expected data table corresponds to the execution order of tasks in the task stack;
d: and C, clearing the completed task in the task stack, taking the next task as the stack top task of the task stack according to the task tree, and returning to the step B.
Regarding the task stack:
The dialogue tasks to be executed are stored in the task stack and are managed, by default, in stack fashion. The top task of the task stack is the task currently being executed, called the focus task; the other tasks in the stack are tasks waiting to be executed. In each execution round, the dialogue management system first takes the top task off the stack and executes it. Executing it may put tasks in the stack or elsewhere in the dialogue task tree into a completed state, or may make some task the new focus task. The dialogue management system clears completed dialogue tasks from the stack, pushes the new focus task from the dialogue task tree onto the stack, and starts the next round, until the dialogue is complete.
Regarding the next focus task: if the current task is an intermediate task, its first executable subtask is pushed onto the task stack; if it is an actual task, its concrete execution process is invoked, such as opening the transfer interface or prompting the user with a message. During execution, if the system fails to understand the user's input, the dialogue management system enters an error processing task, which becomes the new focus task and is pushed onto the stack; by asking the user to speak again or to confirm, the dialogue proceeds smoothly, the dialogue system obtains the data it needs, and successful completion of the dialogue task is ensured. Likewise, if the user requests a dialogue scene switch, the dialogue task of the new scene is pushed onto the task stack and executed as the focus task; when it finishes, the previously unfinished task continues.
The input information processing stage in the step C is as follows:
generating an expected data table according to tasks in a task stack; extracting information from the user input; binding the extracted information into the expected data table.
The information extraction in the input stage can be implemented with existing techniques, and the invention does not limit or describe it here. Regarding the expected data table that records the extracted information: it is mainly used to store and collect data input by the user or from external sources during the dialogue. The dialogue engine generates a hierarchical expected data table structure by traversing the task stack, and manages and saves the input data of the dialogue context; a subtask may access the expected data of tasks above it in the hierarchy, which helps with contextual analysis of the dialogue. Extracted information includes data obtained from the user, such as the account number and amount in the transfer information input task, and data obtained externally, such as card number data read from the card reader peripheral or transaction information returned by the service host. One record in the expected data table may contain multiple data variables, because one dialogue task may expect several data inputs; the transfer information input task, for example, expects both an account number and an amount. The record items of the expected data table are ordered according to the task order in the task tree, corresponding to the task order in the task stack. When two different record items each expect the same kind of data, the data is bound to the record items in order. Binding user or external data to a record item in the expected data table may trigger a task completion condition, putting the task into the completed state, or trigger a task start condition, starting its execution.
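The binding step can be sketched like this; the record layout (`task`, `slots`) is an illustrative assumption. Entries are kept in task order, and a value goes to the first unfilled entry that expects it, matching the in-order binding rule above:

```python
def bind_inputs(expected_table, extracted):
    """Bind extracted slot values into the expected data table, in order."""
    for slot, value in extracted.items():
        for entry in expected_table:      # entries ordered like the task stack
            slots = entry["slots"]
            if slot in slots and slots[slot] is None:
                slots[slot] = value       # first unfilled matching record wins
                break
    return expected_table

def entry_complete(entry):
    """A task's completion condition can fire once all its slots are bound."""
    return all(v is not None for v in entry["slots"].values())
```

Once both slots of the transfer-information-input record are bound, `entry_complete` going true is what would trigger that task's completion condition.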
If the binding fails, the error processing task of the dialogue system becomes the focus task and is pushed onto the stack; by asking the user to re-enter or confirm, the dialogue flow continues and the task obtains the data it needs to complete.
Regarding the aforementioned error handling task, the present invention also monitors whether an error dialogue occurs in the dialogue process in real time, and if so, starts the error handling task to ensure that the dialogue task is completed successfully.
The decision mechanism for error dialogue monitoring supports user-defined rules or a unified mode to decide whether to enter an error processing flow. The dialogue management system supports a user-defined decision algorithm; for example, a rule can be defined so that the system enters an error processing task whenever the confidence of information extraction falls below 80%.
In this embodiment, both explicit and implicit error processing tasks are provided. Explicit error processing directly asks the user to repeat the description, or directly asks the user for the value of the corresponding expected data. Implicit error processing lets the user confirm the value of the corresponding expected data through a follow-up question, without interrupting the current dialogue flow. For example, when the confidence with which the dialogue system recognized "transfer" in the user's utterance is low, the system may respond:
"A transfer, is that right? OK, the transfer interface is being opened for you, please wait." Here "A transfer, is that right? OK" is the implicit error handling, letting the user confirm whether the transfer service should be executed, while "the transfer interface is being opened for you, please wait" is the normal prompt of the transfer task.
As shown in fig. 5, for the above, the present invention provides a specific example as an explanation:
Step 1: the infrared human body sensing module service of the application operation platform detects that a person approaches the equipment; the function navigation agent is triggered to execute a dialogue state request, the dialogue management system executes the function menu selection task, and its subtasks are pushed into the task stack.
The dialogue content:
The function navigation system broadcasts: hello, and welcome to the XXX self-service system.
Then the task prompting the user continues to execute, and the dialogue content is:
The function navigation system broadcasts: what business would you like to transact?
User input: is not known.
Step 2: the semantic/intention understanding service is invoked, but the dialogue management system fails to recognize the user's intention, so the dialogue system makes an error processing decision and executes an explicit error processing task, while the function menu task becomes the focus task, is pushed into the task stack, and is executed. The dialogue and system content at this point:
Functional navigation system: sorry, we failed to correctly recognize the operation you want to perform; possibly our system does not support that service, or we did not understand your meaning.
Functional navigation system: would you like an introduction to the services our system supports? (answer "OK" if yes; otherwise answer "no")
User input: good.
The application system pops up a clickable function list;
simultaneously broadcast: please select the following service.
User input: the transfer service or selecting the transfer service on the interface.
Step 3: the semantic/intention understanding service is invoked and the user's service selection intention is successfully recognized through rule-based matching; the task of linking to the service flow page is executed, and when it finishes, the first-stage dialogue ends. The next dialogue starts at the same time, and the transfer service task and its subtasks are pushed onto the stack.
Step 4: the transfer service's account information input task is executed; the word slots preset by this task are the payee account number and the transfer amount. The user is asked to input the account number and amount; the dialogue is initiated and the input filled in by the user is acquired. The dialogue content:
a functional navigation system: please input the account number for transfer.
User input: the account number XXXXXXX is transferred.
A functional navigation system: please input the transfer amount.
User input: 800
After the step's execution condition is met, the business sub-step task is executed and the flow moves to the next business step, executing the transfer information confirmation task.
Step 5: all subtasks of the transfer information task are now complete, so the transfer information task is also in the completed state. The password verification subtask is pushed into the task stack.
Performing a verification password task, the dialog content:
a functional navigation system: please input a password;
meanwhile, the system opens the PIN pad device service.
User (password keyboard) input: a code XXXXX;
and after the service system successfully verifies the password, notifying the dialogue management system that the password verification task is completed.
Step 6: the transfer transaction, transaction receipt printing and card return are executed in sequence. The dialogue and system content:
A functional navigation system: the transfer transaction is in progress, please wait.
And the application system executes the transfer transaction action, and the execution is successful.
A functional navigation system: the transfer operation has been completed for you, a transaction receipt is being printed for you, please wait.
The application system executes the operation of calling the receipt printing equipment service, printing the transaction receipt, and the execution is successful.
A functional navigation system: the transaction is complete; please take your card.
And the application system executes and calls the card reader equipment service to finish the card withdrawing operation.
Step 7: after the dialogue management system has executed all tasks of the task tree, the second-stage dialogue ends.
In summary, for the service characteristics of existing self-service application systems (rich service functions, many menu levels and relatively complex service flows), the invention adds a humanized, intelligent function navigation system and method that helps customers transact business conveniently and efficiently. Combining the semantic/intention understanding service with the dialogue management service is clearly superior to the conventional approach of guiding and completing operations with voice recognition and simple keyword matching, and effectively improves recognition accuracy and the humanization of the interaction. Guidance is divided into two dialogue stages: one recognizes the customer's service menu selection, the other is driven by natural man-machine dialogue over the service function's step tasks, so that business is transacted through dialogue interaction throughout. Furthermore, the system supports both finite state machine dialogue and task tree dialogue management, covering simple and complex dialogue scenarios while offsetting each mode's drawbacks. The finite state machine mode suits simple, flexible dialogue scenes; task tree dialogue management supports scene switching. For example, to switch from withdrawal to a financial service consultation, the consultation simply becomes the focus task, and when it finishes, the withdrawal task resumes. This is well suited to services with complex function steps and multiple layers of sub-steps.
Therefore, the method of the invention innovatively improves the man-machine interaction mode in the self-service application field, improves the intelligent degree of the self-service equipment in the business handling process, adopts a natural language processing mechanism, realizes menu functions and business processes driven by multi-round conversations, and improves the convenience and the use efficiency of the self-service equipment.
It should be emphasized that the examples described herein are illustrative rather than limiting, and the invention is not limited to the examples given in the specific embodiments. Other embodiments derived by those skilled in the art from the technical solution of the invention, whether by modification or substitution, likewise fall within the protection scope of the invention as long as they do not depart from its spirit and scope.

Claims (8)

1. A self-service equipment navigation method is characterized in that: the method comprises the following steps:
s1: when the starting information is received, waking up a self-service equipment navigation system;
s2: starting a menu function node navigation dialogue based on a finite state machine or a task tree dialogue mode to identify the menu function node intention of a user;
the menu function node navigation dialogue is a man-machine interaction rule preset by taking the current menu function node of a user as a target; if the menu navigation information input by the voice or the text of the user is received in the dialogue process, carrying out menu function node intention recognition in a rule matching mode based on the input menu navigation information;
the rule matching mode comprises text similarity pattern matching and regular expression pattern matching, each menu function consists of a plurality of steps, and each step corresponds to a task;
In step S2, the process of recognizing the intention of the menu function node by using rule matching based on the input menu navigation information is as follows:
s21: word segmentation is carried out on the input menu navigation information based on the domain naming entity dictionary, the function menu dictionary and the word stopping dictionary; if the menu navigation information input by the user is voice information, converting the voice information into text information, and then segmenting the text information;
the domain naming entity dictionary contains industry terms of the business domain; the function menu dictionary comprises IDs of all function menu nodes, a mark word set and a hierarchical path thereof, the IDs corresponding to the two function menu nodes with the same name are different, and the word stopping dictionary comprises filtering words;
s22: calculating the similarity between each segmented word and each function menu node by adopting a text similarity comparison method based on a mark word set of each function menu node in the function menu dictionary, and selecting the function menu node with the similarity exceeding a threshold value;
the similarity between the cut word and the function menu node in the step S22 is obtained based on the similarity between the cut word and the mark words in the mark word set of the function menu node;
The similarity between the cut words and the marker words in the marker word set is calculated by adopting a word displacement similarity algorithm, and the calculation formula is as follows:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
wherein S is the similarity of a word and a mark word, wa is the same word number difference proportionality coefficient, wb is the word displacement difference proportionality coefficient, M is the same word number between the word and the mark word, N is the same word number at the same position, and Length (A) and Length (B) are the text lengths of the word and the mark word;
s23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of the selected function menu nodes is greater than or equal to 2, starting menu function node filtering dialogue on the selected function menu nodes based on a finite state machine or task tree dialogue mode to obtain the current function menu nodes of the user;
the menu function node filtering dialogue is a preset man-machine interaction rule for the selected function menu nodes;
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: if the number of the selected function menu nodes is 0, continuing menu function node navigation dialogue based on a finite state machine or task tree dialogue mode;
S3: linking to the first step of the menu function node acquired in the step S2, and starting the service dialogue of the menu function node based on a finite state machine or a task tree dialogue mode until the service flow is completed;
the service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
2. The method according to claim 1, characterized in that: the value range of the same word number difference proportionality coefficient Wa is as follows: [0.6,0.8]; the value range of the word displacement difference proportion coefficient Wb is as follows: [0.2,0.4].
3. The method according to claim 1, characterized in that: the word segmentation process of step S21 is further performed according to a preset negative word dictionary, the word segmentation process retaining negative words and removing the filter words, and step S23 further comprises, before being executed: filtering the function menu nodes selected in step S22 based on the negative word dictionary;
judging whether a negative word exists before the segmented word corresponding to a function menu node selected in S22, and if a negative word exists, judging whether the number of negative words is even or odd;
if no negatives exist or the number of the negatives is even, reserving corresponding function menu nodes, and executing S23; and if the number of the negative words is odd, removing the function menu node, and executing S23.
4. The method according to claim 1, characterized in that: the process of conducting a conversation based on the task tree conversation model is as follows:
a: pushing the dialogue task in the task tree corresponding to the current dialogue service into a task stack, and taking the root task as a stack top task of the task stack;
b: judging whether all tasks in the task stack are executed and completed, if so, completing the current dialogue service; if not, executing the step C;
c: executing a stack top task in a task stack;
if the current top-of-stack task needs user input, then after the user input is obtained, target key information is extracted from it and bound to the matching record items in the expected data table;
the order of the record items in the expected data table corresponds to the execution order of the tasks in the task stack;
d: and C, clearing the completed task in the task stack, taking the next task as the stack top task of the task stack according to the task tree, and returning to the step B.
5. The method according to claim 4, wherein: the task executing process in the task stack also comprises the steps of monitoring whether an error dialogue occurs in real time, and if the error dialogue occurs, starting an error processing task;
Wherein the decision mechanism for error dialogue monitoring is based on user defined rules or unified configuration to identify if an error dialogue is present.
6. A self-service device navigation system based on the method of any one of claims 1-5, characterized by: the system at least comprises a voice recognition service module, a semantic/intention recognition module, a dialogue management engine module, a functional navigation agent module and a self-service equipment application operation platform;
the voice recognition service module, the semantic/intention recognition module and the dialogue management engine module are all connected with the functional navigation agent module; the functional navigation agent module is connected with the self-service equipment application operation platform;
the voice recognition service module is used for converting collected user voice into text information;
the semantic/intention recognition module is used for recognizing menu function node intention matched with menu navigation information by adopting a rule matching method;
the dialogue management engine module is used for conducting dialogue task management and comprises a finite state machine and a task tree mode;
the dialogue at least comprises a menu function node navigation dialogue and a menu function node business dialogue;
the self-service equipment application operation platform is used for executing tasks in response to the navigation requirements of the functional navigation agent.
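The module connections recited in claim 6 can be pictured as a pipeline through the functional navigation agent. The sketch below is a hypothetical illustration in which each module is stubbed by a plain callable; the real modules are services, and none of the names come from the patent:

```python
class FunctionNavigationAgent:
    """Hypothetical wiring of the claim-6 modules: the agent sits between the
    voice recognition service, the semantic/intention recognition module, the
    dialogue management engine, and the application operation platform."""

    def __init__(self, voice_recognition, intent_recognition,
                 dialogue_engine, platform):
        self.voice_recognition = voice_recognition    # audio -> text
        self.intent_recognition = intent_recognition  # text -> menu node intent
        self.dialogue_engine = dialogue_engine        # intent -> next action
        self.platform = platform                      # action -> executed task

    def handle(self, audio):
        text = self.voice_recognition(audio)
        intent = self.intent_recognition(text)
        action = self.dialogue_engine(intent)
        return self.platform(action)

# Usage with trivial stubs in place of the real services:
agent = FunctionNavigationAgent(
    lambda audio: "withdraw cash",
    lambda text: "menu:withdraw",
    lambda intent: "exec:" + intent,
    lambda action: "done:" + action,
)
```

Because every module reaches the platform only through the agent, replacing any one service (for example the rule-matching intent recognizer) leaves the rest of the pipeline untouched.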
7. The system according to claim 6, wherein: the system further comprises a device service module, a transaction service module, and a flow interface collection module, all of which are connected to the self-service equipment application operation platform;
the self-service equipment application operation platform invokes and executes transaction services, device driver services, and flow page-jump services in response to the navigation requirements of the functional navigation agent.
8. The system according to claim 6, wherein: the system further comprises a voice synthesis service module, which is used for converting text information into natural speech.
CN201811496702.8A 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof Active CN111290677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811496702.8A CN111290677B (en) 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof

Publications (2)

Publication Number Publication Date
CN111290677A CN111290677A (en) 2020-06-16
CN111290677B true CN111290677B (en) 2023-09-19

Family

ID=71021267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811496702.8A Active CN111290677B (en) 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof

Country Status (1)

Country Link
CN (1) CN111290677B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590235A (en) * 2021-07-27 2021-11-02 京东科技控股股份有限公司 Business process execution method and device, electronic equipment and storage medium
CN113571069A (en) * 2021-08-03 2021-10-29 北京房江湖科技有限公司 Information processing method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6356285B1 (en) * 1997-12-17 2002-03-12 Lucent Technologies, Inc System for visually representing modification information about an characteristic-dependent information processing system
DE102005024638A1 (en) * 2005-05-30 2006-12-07 Siemens Ag Word/text inputs navigation method, for mobile telephone, involves displacing menu based on requirements of electronic device movement found by image recording device, where relative position of cursor and menu entry is found by device
US7197460B1 (en) * 2002-04-23 2007-03-27 At&T Corp. System for handling frequently asked questions in a natural language dialog service
JP2009032118A (en) * 2007-07-27 2009-02-12 Nec Corp Information structuring device, information structuring method, and program
CN104536588A (en) * 2014-12-15 2015-04-22 沈阳美行科技有限公司 Keyboard associating method for navigation equipment using map data
CN105162996A (en) * 2014-07-18 2015-12-16 上海触乐信息科技有限公司 Intelligent service interaction platform apparatus, system, and implementing method
WO2015188454A1 (en) * 2014-06-11 2015-12-17 中兴通讯股份有限公司 Method and device for quickly accessing ivr menu

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7392185B2 (en) * 1999-11-12 2008-06-24 Phoenix Solutions, Inc. Speech based learning/training system using semantic decoding
RU2014111971A (en) * 2014-03-28 2015-10-10 Юрий Михайлович Буров METHOD AND SYSTEM OF VOICE INTERFACE
US10452233B2 (en) * 2014-07-18 2019-10-22 Shanghai Chule (Cootek) Information Technology Co., Ltd. Information interactive platform, system and method
TWI670639B (en) * 2017-05-18 2019-09-01 美商愛特梅爾公司 Techniques for identifying user interface elements and systems and devices using the same

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Myounghoon Jeon. Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload. IEEE. 2014, pp. 1-16. *
甘厚勇. Self-service voice system based on speech recognition. Financial Computerizing. 2012, No. 12, pp. 56-58. *
缪淮扣; 陈圣波; 曾红卫. Model-based testing of Web applications. Chinese Journal of Computers. 2011, No. 06, pp. 64-80. *

Also Published As

Publication number Publication date
CN111290677A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
US11087094B2 (en) System and method for generation of conversation graphs
US11030412B2 (en) System and method for chatbot conversation construction and management
US8639517B2 (en) Relevance recognition for a human machine dialog system contextual question answering based on a normalization of the length of the user input
US9639601B2 (en) Question answering system adapted to style of user requests
US8738384B1 (en) Method and system for creating natural language understanding grammars
KR101279738B1 (en) Dialog analysis
CN112632961B (en) Natural language understanding processing method, device and equipment based on context reasoning
KR100818979B1 (en) Dialog management apparatus and method for chatting agent
KR20080020649A (en) Diagnosing recognition problems from untranscribed data
WO2013134871A1 (en) System and method for conversation-based information search
CN110268472B (en) Detection mechanism for automated dialog system
US20220068279A1 (en) Automatic extraction of conversation highlights
CN111290677B (en) Self-service equipment navigation method and navigation system thereof
CN114691852B (en) Man-machine conversation system and method
CN111930912A (en) Dialogue management method, system, device and storage medium
CN115392264A (en) RASA-based task-type intelligent multi-turn dialogue method and related equipment
Armentano et al. Plan recognition for interface agents: state of the art
CN110246494A (en) Service request method, device and computer equipment based on speech recognition
US11380306B2 (en) Iterative intent building utilizing dynamic scheduling of batch utterance expansion methods
US11669697B2 (en) Hybrid policy dialogue manager for intelligent personal assistants
CN115129865A (en) Work order classification method and device, electronic equipment and storage medium
Tsai et al. Command management system for next-generation user input
RU2759090C1 (en) Method for controlling a dialogue and natural language recognition system in a platform of virtual assistants
CN113822506A (en) Multi-round voice interaction intelligent retrieval system and method for electric power regulation
Hattimare et al. Maruna Bot: An extensible retrieval-focused framework for task-oriented dialogues

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant