CN111290677A - Self-service equipment navigation method and navigation system thereof - Google Patents


Info

Publication number
CN111290677A
CN111290677A (application CN201811496702.8A)
Authority
CN
China
Prior art keywords
menu
task
service
word
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811496702.8A
Other languages
Chinese (zh)
Other versions
CN111290677B (en)
Inventor
唐嵩
易恒柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronics Great Wall Changsha Information Technology Co ltd
Original Assignee
China Electronics Great Wall Changsha Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronics Great Wall Changsha Information Technology Co ltd filed Critical China Electronics Great Wall Changsha Information Technology Co ltd
Priority to CN201811496702.8A priority Critical patent/CN111290677B/en
Publication of CN111290677A publication Critical patent/CN111290677A/en
Application granted granted Critical
Publication of CN111290677B publication Critical patent/CN111290677B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus

Abstract

The invention discloses a self-service equipment navigation method and navigation system that realize function navigation in two dialogue stages based on a dialogue management mechanism: one stage identifies the customer's service menu selection, and the other is driven by natural human-machine dialogue over the step-flow tasks of the service function, so that service handling is completed through whole-course dialogue interaction and the human-machine interaction of the self-service equipment is improved. Menu intention recognition combines a rule matching mode with the dialogue management mechanism, improving the reliability of the menu matching result.

Description

Self-service equipment navigation method and navigation system thereof
Technical Field
The invention belongs to the technical field of self-service equipment, and particularly relates to a self-service equipment navigation method and a navigation system thereof.
Background
With the continuous development of science and technology, networking, intelligence and self-service are development trends across industries such as finance, securities and e-government, and self-service equipment now provides all-round services for customers. At the same time, the application functions of deeply converged services on such equipment are increasingly rich. To carry these rich services, the function menus of an equipment application service system have many levels and their step-flow interactions are numerous and complex; how to use an intelligent function navigation scheme to guide a customer to complete services efficiently and conveniently is therefore an important subject in the field of self-service human-computer interaction. Natural language processing (NLP) is an important direction in computer science and artificial intelligence: it takes language as the object of computation and studies algorithms that let people interact with computer systems in natural language, making information management more convenient and effective.
At present, the conventional means is to perform a simple keyword query and match against the function menu through voice recognition in order to point the user to a service entrance. This mode of function navigation suffers from low recognition accuracy and a low degree of humanization in the dialogue interaction.
Disclosure of Invention
The invention aims to provide a self-service equipment navigation method and navigation system with higher accuracy in navigating to the user's intended menu node and a higher degree of human-computer interaction.
In one aspect, the invention provides a self-service equipment navigation method, which comprises the following steps:
s1: when the starting information is received, the self-service equipment navigation system is awakened;
s2: starting a menu function node navigation dialogue to identify the menu function node intention of the user based on a finite state machine or a task tree dialogue mode;
the menu function node navigation dialogue is a preset human-computer interaction rule with the current menu function node of the user as a target; if menu navigation information input by voice or characters of a user is received in the conversation process, performing menu function node intention identification by adopting a rule matching mode based on the input menu navigation information;
each menu function consists of a plurality of steps, and each step corresponds to one task;
s3: linking to the first step of the menu function node acquired in the step S2, and starting a service session of the menu function node based on a finite state machine or a task tree session mode until a service flow is completed;
the service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
The invention realizes recognition of the user's menu function node intention by means of a preset menu function node navigation dialogue, and thereby improves the human-machine interaction of the self-service equipment. The invention allows either a finite state machine or a task tree dialogue mode to realize the man-machine dialogue; that is, two selectable schemes are provided, giving developers more choice. The task tree dialogue mode developed by the invention suits complex dialogue systems: it is strongly logical, handles many levels and complex business relevance, and supports quick switching of dialogue scenes, greatly improving the flexibility of the dialogue system. The prior-art finite state machine, meanwhile, can flexibly capture any type of interaction, suits simple dialogue systems, and is easy to develop and maintain; the invention retains it as an extended choice to give the self-service equipment navigation system more selectivity.
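As an illustration only, a finite-state-machine navigation dialogue of the kind the invention retains as an extended choice might be sketched as follows; the states, events and transition table here are hypothetical and not part of the disclosed design:

```python
# Hypothetical FSM for the navigation stage. Unknown events fall back to
# a per-state wildcard ("any") or leave the state unchanged.
TRANSITIONS = {
    ("greet", "any"): "ask_service",
    ("ask_service", "menu_matched"): "service_flow",  # unique menu node found
    ("ask_service", "no_match"): "ask_service",       # keep the navigation dialogue going
    ("service_flow", "flow_done"): "end",
}

def step(state: str, event: str) -> str:
    """Advance the FSM one transition; stay put on unknown (state, event) pairs."""
    return TRANSITIONS.get((state, event), TRANSITIONS.get((state, "any"), state))
```

Such a table is easy to develop and maintain for a simple dialogue, which matches the trade-off the text describes; deep menu hierarchies would make it grow quickly, which is where the task tree mode takes over.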
In addition, in the menu function node navigation dialogue, if the user inputs menu navigation information by voice or text, menu function node intention recognition is performed by rule matching, where the text similarity matching and regular expression matching modes within rule matching are more accurate than simple keyword recognition. Machine learning classifier algorithms, such as SVM classifiers, may also be used for menu function node intention recognition.
Further preferably, the process of performing the menu function node intention recognition based on the input menu navigation information in the step S2 by adopting the rule matching manner is as follows:
s21: segmenting the input menu navigation information based on a domain named entity dictionary, a function menu dictionary and a stop-word dictionary; if the menu navigation information input by the user is voice information, converting the voice information into text information and then segmenting the text information;
the domain named entity dictionary comprises industry terms of the business domain; the function menu dictionary comprises the ID, marker word set and hierarchical path of every function menu node, where function menu nodes with the same name have different IDs; and the stop-word dictionary comprises filter words;
s22: calculating the similarity between each segmented word and each function menu node by adopting a text similarity comparison method based on the marker word set of each function menu node in the function menu dictionary, and selecting the function menu node with the similarity exceeding a threshold value;
s23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of the selected function menu nodes is more than or equal to 2, starting menu function node filtering conversations for the selected function menu nodes based on a finite state machine or a task tree conversation mode to obtain the current function menu nodes of the user;
if a plurality of function menu nodes with the same function menu name are matched in the step S22, performing menu function node filtering dialogue based on the hierarchical path corresponding to each function menu node to eliminate ambiguity;
the menu function node filtering dialogue is a preset human-computer interaction rule for the function menu node;
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: and if the number of the selected function menu nodes is 0, continuing the menu function node navigation dialogue based on the finite state machine or the task tree dialogue mode.
It should be noted that the marker words include the names of function menu nodes, their synonyms, related words and the like, all centered on the function menu name; thus function menu nodes with the same name have the same marker word set but different hierarchical paths, which is why, when the text similarity comparison of S22 yields several same-named function menu nodes, a dialogue based on the hierarchical paths is needed to eliminate the ambiguity. A hierarchical path looks like: main interface/card service (sub-interface)/card transfer (menu function node). The cut words obtained by text segmentation are divided according to dictionaries; in the invention they are divided according to at least the domain named entity dictionary and/or the function menu dictionary, and other dictionaries, such as the Xinhua dictionary, may also be used. Text similarity here includes cosine similarity and word-position similarity.
Further preferably, the similarity between a cut word and a function menu node in step S22 is obtained from the similarities between the cut word and the marker words in that node's marker word set;
the similarity between a cut word and a marker word is calculated by a word-position similarity algorithm with the following formula:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
in the formula, S is the similarity between the cut word and the marker word, Wa is the same-word-count difference proportionality coefficient, Wb is the word-position difference proportionality coefficient, M is the number of words the cut word and the marker word have in common, N is the number of words identical in both word and position, and Length(A) and Length(B) are the text lengths of the cut word and the marker word.
The similarity between a cut word and a function menu node is preferably the maximum of the similarities calculated between the cut word and each marker word in the node's marker word set.
Further preferably, the value range of the same-word-count difference proportionality coefficient Wa is [0.6, 0.8], and that of the word-position difference proportionality coefficient Wb is [0.2, 0.4].
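The word-position similarity formula can be transcribed directly; Wa defaults to 0.7 from the preferred range above, and counting M and N at character level is our reading of "same words" for short Chinese menu texts:

```python
def word_position_similarity(a: str, b: str, wa: float = 0.7) -> float:
    """S = X*Wa + Y*Wb, with Wb = 1 - Wa.

    X = M / max(Length(A), Length(B))  -- M: characters shared by A and B
    Y = N / max(Length(A), Length(B))  -- N: characters equal at the same position
    """
    wb = 1.0 - wa
    longest = max(len(a), len(b))
    if longest == 0:
        return 0.0
    m = sum(min(a.count(ch), b.count(ch)) for ch in set(a))  # same-word count M
    n = sum(1 for x, y in zip(a, b) if x == y)               # same word AND position, N
    return (m / longest) * wa + (n / longest) * wb
```

As the text argues, N rewards matching positions, so "card transfer" scores higher against "transfer card" on M alone than on the combined S, separating near-anagrams from true matches.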
Further preferably, the word segmentation of step S21 is additionally performed according to a preset negative-word dictionary, the segmentation retaining the negative words and cut words after the filter words have been removed; and before step S23 is executed, the method further comprises filtering the function menu nodes selected in step S22 on the basis of the negative-word dictionary:
it is judged whether a negative word precedes the cut word corresponding to a function menu node selected in step S22, and if so, whether the number of negative words is even or odd;
if there is no negative word, or the number of negative words is even, the corresponding function menu node is retained and S23 is then executed; if the number of negative words is odd, the function menu node is removed and S23 is then executed.
Further preferably, the process of conducting a dialogue based on the task tree dialogue mode is as follows:
a: pressing a conversation task in a task tree corresponding to the current conversation service into a task stack, and taking a root task as a stack top task of the task stack;
b: judging whether all tasks in the task stack are executed completely, and if the execution is completed, completing the current conversation service; if not, executing step C;
c: executing a stack top task in a task stack;
if the current stack-top task requires user input, then after the user input is acquired, target key information is extracted from it and bound to the matching record item in the expectation data table;
the order of the record items in the expectation data table corresponds to the execution order of the tasks in the task stack;
d: and C, clearing the completed tasks in the task stack, taking the next task as the stack top task of the task stack according to the task tree, and returning to the step B.
The invention carries out the task tree dialogue mode by means of a task stack: on one hand the tasks are managed in a more orderly way, and on the other hand scenes can be switched more conveniently and quickly. If it is detected that the user has switched from the current scene into another dialogue scene, the dialogue tasks of the other scene are quickly pushed onto the task stack, and after they are completed the unfinished dialogue tasks already on the stack are resumed.
In addition, the expectation data table stores and collects the data input or obtained from outside during the dialogue; the order of its record items corresponds to the execution order of the tasks in the task stack, and a hierarchical structure is adopted so that subtasks can conveniently access the data of upper-layer tasks and the dialogue context data is preserved.
Preferably, the task execution process in the task stack further includes monitoring whether an error conversation occurs in real time, and if an error conversation occurs, starting an error processing task;
the decision mechanism of the error dialogue monitoring identifies whether an error dialogue has occurred on the basis of either a user-defined rule or a unified configuration.
A user-defined rule or unified configuration means that the user can either supply a custom decision algorithm for entering the error handling process, or define a rule, for example entering error handling when the confidence of the information extraction falls below 80%. The confidence can be obtained with an existing text similarity calculation method.
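The two decision mechanisms (a custom decision algorithm, or the unified 80%-confidence rule from the example) might look like this; the function name and structure are our assumptions:

```python
def should_enter_error_handling(confidence: float, custom_rule=None) -> bool:
    """Unified rule: enter error handling when extraction confidence < 0.8,
    unless a user-defined decision function is supplied instead."""
    if custom_rule is not None:
        return custom_rule(confidence)   # user-defined rule takes precedence
    return confidence < 0.8              # unified configuration from the example
```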
On the other hand, the invention provides a self-service equipment navigation system based on the method, which at least comprises a voice recognition service module, a semantic/intention recognition module, a dialogue management engine module, a functional navigation agent module and a self-service equipment application running platform;
the voice recognition service module, the semantic/intention recognition module and the dialogue management engine module are all connected with the functional navigation agent module; the functional navigation agent module is connected with the self-service equipment application running platform;
the voice recognition service module is used for converting the collected user voice into text information;
the semantic/intention identification module is used for identifying the menu function node intention matched with the menu navigation information by adopting a rule matching method;
the dialogue management engine module is used for carrying out dialogue task management and comprises a finite state machine and a task tree mode;
the dialogue at least comprises a menu function node navigation dialogue and a service dialogue of the menu function node;
and the self-service equipment application running platform is used for responding to the navigation requirement of the functional navigation agent to execute the task.
The function navigation agent module provides a function navigation interface and executes entrusted services, decoupling the self-service application running platform from the service modules.
Preferably, the system also comprises an equipment service module, a transaction service module and a flow interface assembly module, wherein the equipment service module, the transaction service module and the flow interface assembly module are all connected with the self-service equipment application running platform;
and the self-service equipment application running platform responds to the navigation requirement of the functional navigation agent and calls transaction execution service, equipment driving service and flow page jump service.
Further preferably, the system further comprises a speech synthesis service module, which is used for converting the text information into natural speech.
Advantageous effects
1. Aimed at the service characteristics of conventional self-service application systems (rich service functions, many menu levels, relatively complex service flows), the invention adds a humanized, intelligent function navigation system and method that helps customers handle services conveniently and efficiently. Function navigation is realized in two dialogue stages based on a dialogue management mechanism: one identifies the customer's service menu selection, the other is driven by natural human-machine dialogue over the step-flow tasks of the service function, so that service handling is completed through whole-course dialogue interaction and the human-machine interaction of the self-service equipment is improved. Either a finite state machine or a task tree dialogue mode can realize the man-machine dialogue; the two selectable schemes give developers more choice, support both simple and complex dialogue scenes, and shield each mode's respective shortcomings. The task tree dialogue mode developed by the invention suits complex dialogue systems in particular: it is strongly logical, handles many levels and complex business relevance, and supports quick switching of dialogue scenes, greatly improving the flexibility of the dialogue system.
2. The method selects text similarity matching to identify the user's menu function node intention: the information input by voice or text is segmented, the similarity between the cut words and the function menu nodes of a preset function menu dictionary is calculated, and the user's menu intention is determined from that similarity. Compared with simple keyword matching, the resulting match is more reliable. On one hand, each cut word is matched against the several marker words of a menu function node and a comprehensive similarity is derived from those results as the similarity between the cut word and the node, which is more accurate than simple keyword recognition. On the other hand, the research behind the invention found that the chosen word-position similarity algorithm considers not only which words two texts share but also where those words sit, so the obtained similarity describes the degree of correlation between two words more accurately. In conclusion, the function menu nodes obtained using text similarity, and the word-position similarity algorithm in particular, conform better to the user's actual intention, and the results are more accurate.
3. The task tree dialogue mode adopted by the invention is realized on a task stack, that is, the dialogue tasks are managed in stack fashion, which guarantees orderly and effective task execution and makes dialogue scene switching faster.
Drawings
FIG. 1 is a diagram of a multi-level functional interface provided by the present invention;
FIG. 2 is an organizational chart of a self-service device navigation system provided by the present invention;
FIG. 3 is a flow chart of a method for self-service device navigation provided by the present invention;
FIG. 4 is a flow diagram of a task tree dialog schema provided by the present invention;
FIG. 5 is a flow chart of a dialog of a task tree approach to transfer service provided by the present invention.
Detailed Description
The present invention will be further described with reference to the following examples.
As shown in fig. 1, the main interface of the self-service device provided by the invention contains sub-interfaces and menu function nodes, nested hierarchically in a tree structure. A menu function node consists of several steps; each step is composed of interaction pages, transaction service calls and equipment service call logic, and steps may themselves nest sub-steps. A step can be understood as a task in the sense of the invention. The scheme performs function navigation mainly against this typical structure.
As shown in fig. 2, the self-service device navigation system of the present invention includes a voice recognition service module, a voice synthesis service module, a semantic/intention recognition module, a session management engine module, a functional navigation agent, a self-service device application running platform, a device service module, a transaction service module, and a flow interface aggregation module.
The voice recognition service module, the voice synthesis service module, the semantic/intention recognition module and the dialogue management engine module are all connected with the function navigation agent module; the function navigation agent module, the equipment service module, the transaction service module and the flow interface assembly module are all connected with the self-service equipment application running platform. The function navigation agent module thus serves as the interactive interface and message-conversion intermediate layer between the NLP subsystems, indirectly realizing the interactive communication and control logic of the core function navigation modules; it provides the function navigation interface and executes entrusted services, decoupling the self-service application running platform from the NLP service modules, namely the speech recognition service module, the speech synthesis service module, the semantic/intention recognition module and the dialogue management engine module.
The speech recognition (automatic speech recognition, ASR) service module is used for converting the collected user speech into text information, realizing the automatic speech recognition function.
The speech synthesis (text to speech, TTS) service module converts text into natural speech and provides the self-service application system's voice broadcasting function.
The semantic/intention identification module is used for identifying the menu function node intention matched with the menu navigation information by adopting a rule matching method; the semantic/intention understanding service module also provides basic service interfaces of Chinese word segmentation, part of speech tagging, entity recognition, dependency syntax analysis and the like for natural language understanding, and can provide RESTFUL mode calling service.
The dialogue management engine module provides dialogue task management functions independent of the application services and ensures that the dialogue proceeds correctly by executing the appropriate services. Dialogue management is the process of controlling a man-machine dialogue: it realizes task-driven multi-turn dialogues, maintains the dialogue state (dialogue state tracking) and generates system decisions (dialogue policy), while acting as the interface to the backend/task model and supplying expectations for the semantic expressions. The dialogue management engine described herein supports both the finite state machine and the task tree dialogue management models.
The self-service equipment application running platform is the software framework on which the self-service application services run: its core logically assembles the interaction pages, transaction service calls and equipment driver services, combines the modules organically through parameter configuration and event driving, and provides an interface extension module and configuration mechanism for realizing customized service functions.
As shown in fig. 3, based on the self-service device navigation system, the self-service device navigation method provided by the invention comprises the following steps:
s1: and awakening the self-service equipment navigation system after receiving the starting information.
Here, either an active inquiry/detection mechanism or a passive trigger starts the NLP function navigation. In this embodiment the start-up information is generated in the following three ways, although the invention is not limited to them.
The first mode is that the self-service application running platform starts an infrared sensor to detect that a person approaches the equipment, and generates starting information to trigger a functional navigation agent to prompt a customer to select a service.
The second mode is that when the user clicks the navigation button of the application interface to generate the starting information, the functional navigation agent of the self-service equipment is activated.
The third mode is to use voice commands to generate startup information to wake up the self-service device application system, the functional navigation agent.
S2: starting a menu function node navigation dialogue to identify the menu function node intention of the user based on a finite state machine or a task tree dialogue mode;
the menu function node navigation dialogue is a man-machine interaction rule preset by aiming at obtaining the current menu function node of a user.
Such as the following menu function nodes to navigate part of the dialog:
"The infrared human-body sensing module of the application running platform detects that someone is approaching the equipment and triggers the function navigation agent system to open the menu function node navigation dialogue. The dialogue content at this point includes:
Function navigation system broadcasts: Hello, welcome to the XXX self-service system.
Function navigation system broadcasts: What business do you need to transact?
User input: I want to handle the xxx task."
During the dialogue, the information input by the user is the received menu navigation information input by voice or text. If the menu navigation information input by the user cannot be matched to a menu function node, the menu function node navigation dialogue continues, for example as follows:
functional navigation system-in failure, we cannot correctly identify the operation to be executed, and it may be that our system does not support the service or we cannot successfully understand your meaning.
Functional navigation system-do you need to introduce the services supported by our system? ((answer good, answer not required))
And (3) user input: good results are obtained.
The application system pops up the clickable function list (listed by displaying the list in the figure for the sake of space)
Simultaneously broadcast: please select the following services.
And (3) user input: transfer service or select transfer service on the interface. "
If the user provides input menu navigation information, such as "transfer service" in the dialogue above, menu function node intention recognition continues by rule matching; if the user selects the transfer service on the interface, the selected menu function node is known to be the transfer service and the menu function node navigation dialogue ends.
Aiming at menu navigation information input by a user, the invention adopts a rule matching mode to identify the intention of menu function nodes, and the implementation process is as follows:
s21: the input menu navigation information is segmented based on the domain named entity dictionary, the function menu dictionary, the stop-word dictionary and the negative-word dictionary; if the menu navigation information input by the user is voice, it is first converted into text and the text is then segmented.
In this embodiment, the text information initiates a text semantic request in JSON format to the semantic/intention understanding service over a communication protocol (e.g., HTTP), and the semantic/intention understanding service performs Chinese word segmentation, for example IKAnalyzer segmentation of the text.
Regarding the domain named entity dictionary: based on JSON format definition participle configuration files, according to the industry of the field of self-service application, the extension times of the participle are defined, such as defining terms like XX stock exchange, double recording, taking a position, three-party storage management, one-user communication and the like aiming at the field of the securities industry.
Regarding the function menu dictionary: the ID, tag word set and hierarchical path of each function menu node are defined, where the tag words include the function menu name and its synonyms, expansion words, etc. A hierarchical path is, for example, (main interface/card service (sub-interface)/card transfer (menu function node)). Hierarchical paths are added mainly to handle the situation where the same menu function exists under different service paths of the service system.
Regarding the stop-word dictionary: modal particles, adverbs and the like are incorporated into the stop-word dictionary as filter words for filtering the text.
Regarding the negative-word dictionary: negative words such as no, not, don't, none, etc. are used as the content of the negative-word dictionary.
In summary, the word segmentation process removes the filter words and retains the negative words and the segmented words.
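The filtering step of S21 can be sketched as follows. This is a minimal illustration assuming the input has already been tokenized (e.g. by IKAnalyzer); the dictionary contents are illustrative assumptions, not the patent's actual dictionaries.

```python
STOP_WORDS = {"um", "please", "the", "to"}          # stop-word dictionary (filter words)
NEGATIVE_WORDS = {"no", "not", "don't", "none"}     # negative-word dictionary

def filter_tokens(tokens):
    """Remove filter words; retain negative words and content words."""
    return [t for t in tokens if t in NEGATIVE_WORDS or t not in STOP_WORDS]

print(filter_tokens(["um", "I", "don't", "want", "to", "transfer"]))
# -> ['I', "don't", 'want', 'transfer']
```

Note that a negative word is kept even if it would otherwise look like a filter word, so that the later odd/even negation check still sees it.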
S22: calculating the similarity between each segmented word and each function menu node by a text similarity comparison method, based on the tag word set of each function menu node in the function menu dictionary; selecting the function menu nodes whose similarity exceeds a threshold; and filtering the selected function menu nodes based on the negative-word dictionary.
In this embodiment, the similarity between a segmented word and a function menu node is preferably the maximum of the similarities between the segmented word and each tag word in that node's tag word set. Preferably, the similarity between a segmented word and a tag word is calculated by a word-position similarity algorithm with the following formula:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
In the formula, S is the similarity between the segmented word and the tag word; Wa is the same-word-count difference proportionality coefficient; Wb is the word-displacement difference proportionality coefficient; M is the number of identical characters shared by the segmented word and the tag word; N is the number of characters that are identical and in the same position; and Length(A) and Length(B) are the text lengths of the segmented word and the tag word. In this embodiment, the preferred value range of Wa is [0.6, 0.8] and the value range of Wb is [0.2, 0.4]. The values of Wa and Wb can be confirmed, for example, as follows:
Text sample data that successfully matched a defined menu is analyzed, and the proportionality coefficients are determined by counting the average proportions of the same-word-count difference and the word-displacement difference over the samples. These reflect the average weights of shared character counts and word displacement in texts that are deemed to correctly identify a matching function menu.
In other possible embodiments, cosine similarity may be used to calculate the similarity between the segmented word and the tag word.
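The similarity calculation above can be sketched directly from the formula. This is an illustrative implementation, not the patent's code: it takes M as the number of shared characters (counted with multiplicity) and defaults Wa to 0.7, inside the preferred range [0.6, 0.8].

```python
from collections import Counter

def word_position_similarity(a, b, wa=0.7):
    """S = X*Wa + Y*Wb, with X = M/MAX(Length(A), Length(B)) and
    Y = N/MAX(Length(A), Length(B)), per the formula above."""
    wb = 1.0 - wa
    n_max = max(len(a), len(b))
    m = sum((Counter(a) & Counter(b)).values())      # M: shared characters
    same_pos = sum(x == y for x, y in zip(a, b))     # N: same char, same place
    return (m / n_max) * wa + (same_pos / n_max) * wb

def node_similarity(segment, tag_words):
    """Node similarity = maximum similarity over the node's tag words."""
    return max(word_position_similarity(segment, t) for t in tag_words)

print(word_position_similarity("transfer", "transfer"))   # -> 1.0
```

Identical strings score 1.0; strings sharing characters but in shifted positions lose the positional (Wb-weighted) component.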
As to how the function menu nodes whose similarity exceeds the threshold are selected, the invention determines the returned result by setting a threshold. For example, with 0.8 as a fixed threshold, function menus with similarity of at least 0.8 may be returned in the result, and the result of function navigation intention recognition may be described in JSON, e.g. a return of { retCode: 2, result: [ { order: 'function menu 1', similarity: 0.8 }, { order: 'function menu 2', similarity: 0.9 } ] }. A return code of -1 indicates that no matching function menu was returned; 0 indicates that an exactly matching function menu with similarity 1 was returned; and 2 indicates that one or more fuzzy function menu matches with similarity greater than or equal to the threshold were returned.
The filtering step based on the negative-word dictionary judges whether a negative word exists before the segmented word corresponding to a selected function menu node, and if so, whether the number of negative words is even or odd:
if no negative word exists, or the number of negative words is even, the corresponding function menu node is retained and S23 is then executed; if the number of negative words is odd, the function menu node is removed and S23 is then executed.
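The odd/even negation rule might be sketched as follows (token values and the helper name are assumptions for illustration):

```python
NEGATIVE_WORDS = {"no", "not", "don't"}

def keep_menu_node(tokens, hit_index):
    """Retain the matched function-menu node when the negative words
    before it number zero or an even count (double negation cancels);
    remove it when the count is odd."""
    negatives = sum(1 for t in tokens[:hit_index] if t in NEGATIVE_WORDS)
    return negatives % 2 == 0

print(keep_menu_node(["not", "transfer"], 1))            # -> False (removed)
print(keep_menu_node(["not", "not", "transfer"], 2))     # -> True (retained)
```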
S23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of selected function menu nodes is greater than or equal to 2, a menu function node filtering dialogue is started for the selected function menu nodes, based on the finite state machine or task tree dialogue mode, to obtain the user's current function menu node.
The menu function node filtering dialogue is a preset human-computer interaction rule for the function menu nodes. Multiple rounds of dialogue are initiated to further lock in the user intention; for example, if the returned result contains two exactly matching function menus, withdrawal service A and transfer service B, a dialogue is initiated to generate speech asking whether the user wants to transact service A or service B, further clarifying the user's service selection intention.
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: and if the number of the selected function menu nodes is 0, continuing the menu function node navigation dialogue based on the finite state machine or the task tree dialogue mode.
S3: linking to the first step of the menu function node acquired in step S2, and starting the service dialogue of the menu function node, based on the finite state machine or task tree dialogue mode, until the service flow is completed;
the service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
As shown in fig. 3, when the execution condition of each step of the current service is satisfied, the steps are executed in sequence until the current service is completed.
In this embodiment, the finite state machine dialogue mode is preferably used for the menu function node navigation dialogue when identifying the user's menu function node intention in S2, and the task tree dialogue mode is used for the service dialogue of the menu function node in S3.
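The finite state machine navigation dialogue of S2 might be sketched as a small transition table; the states and events below are illustrative assumptions, not the patent's actual state set.

```python
# Hypothetical states for the menu navigation dialogue of S2.
TRANSITIONS = {
    ("greet", "person_detected"): "ask_service",
    ("ask_service", "intent_matched"): "done",
    ("ask_service", "intent_unmatched"): "offer_menu_list",
    ("offer_menu_list", "intent_matched"): "done",
}

def step(state, event):
    """Advance the FSM; unknown input keeps the current state."""
    return TRANSITIONS.get((state, event), state)

state = "greet"
for event in ["person_detected", "intent_unmatched", "intent_matched"]:
    state = step(state, event)
print(state)   # -> done
```

This shape suits the simple, flexible navigation dialogue, whereas the task tree mode (below) handles nested business steps and scene switching.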
The dialogue management system of the invention comprises dialogue tasks and a dialogue management engine module. Taking the task tree dialogue mode as an example:
(1) Carrying out business dialogue tasks in the task tree dialogue mode.
The business system designs dialogue tasks according to its dialogue requirements. By decomposing a dialogue into tasks, the business system hands the content and logic of the dialogue to the dialogue management engine; a specific business dialogue is designed according to the business requirements. A dialogue is expressed as tasks, and a task can be further decomposed into subtasks, so the whole business dialogue is in fact a task tree representing the dialogue content and dialogue logic of the business. By default, the dialogue management engine executes the tasks on the task tree by pre-order traversal to complete the whole dialogue. If the dialogue process involves dialogue scene switching, the whole dialogue process is managed by the dialogue management engine, which can then flexibly execute the tasks in the dialogue tree as required. The business system only needs to provide the business-related dialogue task tree according to its business requirements.
Dialogue tasks are divided into intermediate tasks and actual tasks. An intermediate task does not perform concrete work; it provides the business logic of the dialogue by managing its subtasks. For example, the transfer information task of the system is an intermediate task with two subtasks: a transfer information input task and a transfer information confirmation task. Actual tasks, such as the transfer information input task and the transfer information confirmation task, correspond to leaf nodes in the dialogue task tree. An actual task performs concrete work, such as prompting the user in the dialogue, waiting for user input, or invoking a device function to fulfil the user's request in the dialogue, such as notifying the system to open the transfer interface.
The dialogue management system provides different dialogue task base classes; the business system extends the corresponding base class to define the concrete task classes of its business dialogue. By rewriting or implementing the corresponding methods of a task base class, the business system can specify the conditions for completing a dialogue task, the conditions for starting its execution, the data the task needs to obtain from the user, the subtasks of the task, and so on, and can also define the concrete operations the task executes, such as notifying the business system to open the transfer interface.
① Task start-execution condition: the dialogue management engine starts executing the task only when this condition is true.
② Task completion condition: when this condition is true, the dialogue management engine considers the task to have completed successfully.
③ Data required by the task: the task needs to obtain data from the user; it is extracted from the user's input and bound to the task's variables.
The above are the conditions set so that the tasks execute in sequence.
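A dialogue-task base class with the hooks ①–③ might look like the sketch below. The class and method names are assumptions for illustration; the patent does not specify an API.

```python
class DialogTask:
    """Sketch of a dialogue-task base class; a business system
    subclasses this and overrides the hooks."""
    def __init__(self, subtasks=None):
        self.subtasks = subtasks or []   # empty for actual (leaf) tasks
        self.data = {}                   # ③ variables bound from user input

    def can_start(self):                 # ① start-execution condition
        return True

    def is_complete(self):               # ② completion condition
        if self.subtasks:                # intermediate: all subtasks done
            return all(t.is_complete() for t in self.subtasks)
        return bool(self.data)           # leaf: required data was bound

    def execute(self):                   # concrete work, e.g. open a page
        pass

class TransferInfoInput(DialogTask):
    """Actual task: complete once both word slots are bound."""
    def is_complete(self):
        return {"account", "amount"} <= self.data.keys()
```

A subclass overrides only what differs from the defaults, as `TransferInfoInput` does for its completion condition.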
(2) The dialogue management engine module in the task tree dialogue mode:
The dialogue management engine module provides a dialogue task management function independent of specific services, and completes the dialogue with the user by executing the dialogue task tree provided by the business system. The dialogue management engine supports dialogue scene switching, and when the user's utterance cannot be accurately understood, it also provides an error handling mechanism to ensure that the dialogue proceeds smoothly.
Regarding the task tree dialogue mode: it executes dialogue tasks with a task stack, as follows:
a: pressing a conversation task in a task tree corresponding to the current conversation service into a task stack, and taking a root task as a stack top task of the task stack;
b: judging whether all tasks in the task stack are executed completely, and if the execution is completed, completing the current conversation service; if not, executing step C;
c: executing a stack top task in a task stack;
if the current stack-top task requires user input, then after the user input is acquired, the target key information is extracted from it and bound to the matching record item in the expectation data table;
the sequence of the record items in the expectation data table corresponds to the execution sequence of the tasks in the task stack;
d: and C, clearing the completed tasks in the task stack, taking the next task as the stack top task of the task stack according to the task tree, and returning to the step B.
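Steps A–D above can be sketched as a loop over a task stack. The classes and method names below are assumptions for illustration; `Leaf` and `Node` stand in for actual and intermediate tasks.

```python
def run_dialog(root):
    """A: push the root; B: loop until the stack empties; C: run the
    stack-top (focus) task; D: pop completed tasks and push the next
    executable subtask from the task tree."""
    stack = [root]                                   # A
    while stack:                                     # B
        top = stack[-1]
        if top.is_complete():                        # D: clear finished task
            stack.pop()
            continue
        pending = next((s for s in top.subtasks if not s.is_complete()), None)
        if pending is not None:
            stack.append(pending)                    # intermediate: push child
        else:
            top.execute()                            # C: run the focus task

class Leaf:
    """Minimal actual task: completes when executed."""
    subtasks = ()
    def __init__(self, name, log):
        self.name, self.log, self.done = name, log, False
    def is_complete(self):
        return self.done
    def execute(self):
        self.log.append(self.name)
        self.done = True

class Node:
    """Minimal intermediate task: complete when all subtasks are."""
    def __init__(self, subtasks):
        self.subtasks = subtasks
    def is_complete(self):
        return all(s.is_complete() for s in self.subtasks)

log = []
run_dialog(Node([Leaf("input transfer info", log), Leaf("confirm", log)]))
print(log)   # -> ['input transfer info', 'confirm']
```

The loop reproduces the pre-order traversal: a parent stays on the stack until all of its children, pushed one at a time as focus tasks, have completed.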
With respect to the task stack:
The dialogue tasks to be executed are saved in the task stack and managed there in the default stack manner. The stack-top task of the task stack is the task currently being executed, referred to as the focus task; the other tasks in the stack are tasks waiting to be executed. In each execution round, the dialogue management system takes the stack-top task from the task stack and executes it. Executing the task may put other tasks in the task stack or dialogue task tree into a completed state, or make another task the new focus task; the dialogue management system clears completed dialogue tasks from the task stack, pushes the new focus task from the dialogue task tree onto the stack, and then starts the next round of execution, continuing until the dialogue is completed.
Regarding the next focus task: if the current task is an intermediate task, its first executable subtask is pushed onto the task stack; if the current task is an actual task, the actual execution process of the task is invoked, such as opening the transfer interface or prompting the user with a message. During task execution, if the system fails to understand the user's input, the dialogue management system enters an error handling task, which becomes the new focus task and is pushed onto the stack; by asking the user to speak again or to confirm, it ensures that the dialogue proceeds smoothly, that the dialogue system obtains the required data, and that the dialogue task completes successfully. Alternatively, if the user asks to switch the dialogue scene, the dialogue task of the new scene is pushed onto the task stack and executed as the focus task, and the previously unfinished tasks continue executing after it completes.
The input information processing stage in step C is:
generating an expectation data table according to the tasks in the task stack; extracting information from the user input; and binding the extracted information to the expectation data table.
The information extraction in the input stage can be realized with existing technology, which the invention does not specifically limit or describe. The dialogue engine generates a hierarchical expectation data table structure by traversing the task stack, and manages and saves the input data of the dialogue context; a subtask can access the expectation data of tasks higher in the hierarchy, which facilitates contextual analysis of the dialogue. Extracted information includes data obtained from the user, such as the account number and amount in the transfer information input task, as well as other externally obtained data, such as a card number read from a card reader peripheral or transaction information returned from the service host. A record item in the expectation data table may contain multiple data variables, because a single round of dialogue may expect multiple data inputs; for example, the transfer information input task contains both an account data variable and an amount data variable. The record items of the expectation data table are ordered according to the task order in the task tree and correspond to the task order in the task stack. When two different record items expect the same data item, the data is bound to the record items in order. Binding user or external data to a record item in the expectation data table can trigger a task completion condition, putting the task in the completed state, or trigger a task start condition, starting its execution. If binding fails, the error handling task of the dialogue system becomes the focus task and is pushed onto the stack; by asking the user to re-input or confirm, it ensures that the dialogue process keeps running and that the task obtains the required data and completes.
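The expectation data table might be sketched as an ordered list of record items, each with its expected data variables. Field names and values below are illustrative assumptions.

```python
# Record items follow the task order in the task stack; one record may
# expect several data variables (word slots).
expectation_table = [
    {"task": "transfer info input", "slots": {"account": None, "amount": None}},
    {"task": "verify password",     "slots": {"password": None}},
]

def bind(name, value):
    """Bind an extracted value to the first record still expecting it;
    a False return would hand control to the error handling task."""
    for record in expectation_table:
        if name in record["slots"] and record["slots"][name] is None:
            record["slots"][name] = value
            return True
    return False

bind("account", "62220001")
bind("amount", 800)
print(expectation_table[0]["slots"])
# -> {'account': '62220001', 'amount': 800}
```

Binding in record order is what resolves the case where two different record items expect the same data item.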
Regarding the error handling task mentioned above, the invention also monitors in real time whether an erroneous dialogue occurs during the dialogue process, and if so, starts the error handling task to ensure that the dialogue task completes successfully.
The decision mechanism of error dialogue monitoring supports user-defined rules, or a uniform mode, to decide whether to enter the error handling flow. The dialogue management system supports user-defined decision algorithms; for example, a rule can be defined so that the system enters the error handling task when the confidence of information extraction is below 80%.
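A user-defined decision rule of the kind described might be as simple as a confidence threshold; the function name and the 0.8 default are assumptions matching the 80% example above.

```python
def needs_error_handling(confidence, threshold=0.8):
    """Enter the error handling flow when extraction confidence
    drops below the configured threshold (80% here)."""
    return confidence < threshold

print(needs_error_handling(0.65))   # -> True  (start error handling task)
print(needs_error_handling(0.92))   # -> False (dialogue proceeds normally)
```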
In this embodiment, two error handling tasks, explicit and implicit, are provided. Explicit error handling directly asks the user to repeat what was said, or directly queries the value of the corresponding expected data. Implicit error handling has the user confirm the value of the corresponding expected data through a confirming question, without interrupting the current dialogue flow. For example, when the user says "transfer" and the dialogue system recognizes the transfer intention with low confidence, it may respond:
"Transfer, is it? OK, opening the transfer interface for you, please wait." Here "Transfer, is it?" is the implicit error handling that lets the user confirm whether the transfer service is intended, and "Opening the transfer interface for you, please wait" is the prompt of the normal transfer task.
As shown in fig. 5, the invention provides a specific example of the above as an explanation:
Step 1: the infrared human body sensing module of the application operation platform detects a person approaching the equipment, triggering the function navigation agent to execute a dialogue state request; the dialogue management system executes the function menu selection task and pushes its subtasks onto the task stack. The dialogue content at this point is:
The function navigation system broadcasts: hello, welcome to the XXX self-service system.
It then continues executing the task of prompting the user, with the dialogue content:
The function navigation system broadcasts: what business do you need to transact?
User input: I don't know.
Step 2: the semantic/intention understanding service is invoked; the dialogue management system fails to identify the user intention, so the error handling decision of the dialogue system determines to execute the explicit error handling task, which is pushed onto the task stack as the focus task and executed. The dialogue and system content at this point is:
Function navigation system: sorry, we could not correctly identify the operation to be executed; either our system does not support that service, or we could not understand your meaning.
Function navigation system: would you like an introduction to the services our system supports? (answer: yes / answer: no need)
User input: OK.
The application system pops up a clickable function list;
and simultaneously broadcasts: please select from the following services.
User input: transfer service (or the user selects the transfer service on the interface).
Step 3: the semantic/intention understanding service is invoked and successfully identifies the user's service selection intention by rule matching; the task of linking to the service flow page is executed. When it finishes, the first-stage dialogue ends, the next round of dialogue starts, and the transfer service task and its subtasks are pushed onto the stack.
Step 4: the transfer service proceeds to the account information input task, which presets two word slots: account number and transfer amount. The user is asked to input the transfer account and amount; a dialogue is initiated and the input filled in by the user is acquired. The dialogue content is:
The function navigation system: please enter the transfer account number.
User input: transfer account xxxxxx.
The function navigation system: please enter the transfer amount.
User input: 800
When the step execution conditions are satisfied, the business sub-step task is executed, the flow moves to the next step of the business, and the transfer information completion confirmation task is executed.
Step 5: all subtasks of the transfer information task are now complete, so the transfer information task also enters the completed state. The password verification subtask is pushed onto the task stack.
The password verification task is executed; the dialogue content:
The function navigation system: please enter your password;
The application system opens the keypad device service.
User (keypad) input: password XXXXX;
When the business system successfully verifies the password, it notifies the dialogue management system that the password verification task is complete.
Step 6: the transfer, transaction receipt printing and card return tasks are executed in sequence. The dialogue and system content:
The function navigation system: the transfer transaction is in progress, please wait.
The application system executes the transfer transaction action, which succeeds.
The function navigation system: the transfer has been completed for you; a transaction slip is being printed for you, please wait.
The application system invokes the receipt printing device service to print the transaction receipt, which succeeds.
The function navigation system: the transaction is complete, please take your card.
The application system invokes the card reader device service to complete the card return operation.
Step 7: after the dialogue management system has executed the tasks of the task tree, the second-stage dialogue ends.
In summary, for the service characteristics of existing self-service application systems (rich service functions, many menu levels, relatively complex service flows), the invention adds a humanized and intelligent navigation system and method that helps customers transact business conveniently and efficiently. The combination of the semantic/intention understanding service and the dialogue management service is clearly superior to the conventional approach of guiding and completing functional operations through voice recognition with simple keyword matching, effectively improving recognition accuracy and the humanization of the interaction. The system completes business transactions through whole-course dialogue interaction with two stages of function navigation: the first identifies the customer's business menu selection, and the second is driven in a natural human-machine dialogue manner by the step-flow tasks of the business function. In addition, the system supports both finite state machine dialogue management and task tree dialogue management, covering both simple and complex dialogue scenes while shielding their respective shortcomings. The finite state machine mode suits simple, flexible dialogue scenes, while task-tree dialogue management supports scene switching: for example, if you need to switch from cash withdrawal to financial service consultation, the financial service consultation simply becomes the focus task, and after it executes, the withdrawal task resumes. This is well suited to businesses with complex function steps and multiple layers of sub-function steps.
Therefore, the method innovatively improves the human-machine interaction mode in the self-service application field, raises the degree of intelligence of self-service equipment during business transactions, and, by adopting a natural language processing mechanism, realizes menu functions and service flows driven by multi-round dialogue, improving the convenience and efficiency of self-service equipment.
It should be emphasized that the examples described herein are illustrative and not restrictive, and thus the invention is not to be limited to the examples described herein, but rather to other embodiments that may be devised by those skilled in the art based on the teachings herein, and that various modifications, alterations, and substitutions are possible without departing from the spirit and scope of the present invention.

Claims (10)

1. A self-service equipment navigation method is characterized in that: the method comprises the following steps:
s1: when the starting information is received, the self-service equipment navigation system is awakened;
s2: starting a menu function node navigation dialogue to identify the menu function node intention of the user based on a finite state machine or a task tree dialogue mode;
the menu function node navigation dialogue is a preset human-computer interaction rule targeting the user's current menu function node; if menu navigation information input by the user by voice or text is received during the dialogue, menu function node intention identification is performed by rule matching based on the input menu navigation information;
the rule matching mode comprises text similarity pattern matching and regular expression pattern matching, each menu function comprises a plurality of steps, and each step corresponds to one task;
s3: linking to the first step of the menu function node acquired in the step S2, and starting a service session of the menu function node based on a finite state machine or a task tree session mode until a service flow is completed;
the service dialogue of the menu function node is a man-machine interaction rule constructed according to the execution steps of the menu function node.
2. The method of claim 1, wherein: the process of performing the menu function node intention recognition based on the input menu navigation information in the step S2 by adopting the rule matching manner is as follows:
s21: segmenting the input menu navigation information based on a domain named entity dictionary, a function menu dictionary and a stop-word dictionary; if the menu navigation information input by the user is voice information, converting the voice information into text information, and then segmenting the text information;
the domain named entity dictionary comprises industry terms of the business domain; the function menu dictionary comprises the ID, tag word set and hierarchical path of each function menu node, the IDs corresponding to function menu nodes with the same name being different; and the stop-word dictionary comprises filter words;
s22: calculating the similarity between each segmented word and each function menu node by a text similarity comparison method, based on the tag word set of each function menu node in the function menu dictionary, and selecting the function menu nodes whose similarity exceeds a threshold;
s23: acquiring the number of the selected function menu nodes, and executing the following processes according to the number:
a: if the number of the selected function menu nodes is more than or equal to 2, starting menu function node filtering conversations for the selected function menu nodes based on a finite state machine or a task tree conversation mode to obtain the current function menu nodes of the user;
the menu function node filtering dialogue is a preset human-computer interaction rule for the function menu node;
b: if the number of the selected function menu nodes is equal to 1, taking the selected function menu nodes as the current function menu nodes of the user;
c: and if the number of the selected function menu nodes is 0, continuing the menu function node navigation dialogue based on the finite state machine or the task tree dialogue mode.
3. The method of claim 2, wherein: the similarity between a segmented word and a function menu node in step S22 is obtained based on the similarities between the segmented word and the tag words in the tag word set of the function menu node;
the similarity between a segmented word and a tag word in the tag word set is calculated by a word-position similarity algorithm with the following formula:
S=X*Wa+Y*Wb
wherein:
X=M/MAX(Length(A),Length(B))
Y=N/MAX(Length(A),Length(B))
Wb=1-Wa;
in the formula, S is the similarity between the segmented word and the tag word; Wa is the same-word-count difference proportionality coefficient; Wb is the word-displacement difference proportionality coefficient; M is the number of identical characters shared by the segmented word and the tag word; N is the number of characters that are identical and in the same position; and Length(A) and Length(B) are the text lengths of the segmented word and the tag word.
4. The method of claim 3, wherein: the value range of the same-word-count difference proportionality coefficient Wa is [0.6, 0.8], and the value range of the word-displacement difference proportionality coefficient Wb is [0.2, 0.4].
5. The method of claim 2, wherein: the segmentation of step S21 is further performed according to a preset negative-word dictionary, the segmentation process removing the filter words and retaining the negative words and segmented words, and before step S23 is executed, the method further comprises: filtering the function menu nodes selected in step S22 based on the negative-word dictionary;
judging whether a negative word exists before the segmented word corresponding to a function menu node selected in step S22, and if so, judging whether the number of negative words is even or odd;
if no negative word exists or the number of negative words is even, the corresponding function menu node is retained and S23 is then executed; if the number of negative words is odd, the function menu node is removed and S23 is then executed.
6. The method of claim 1, wherein: the process of conducting a dialog based on the task tree dialog mode is as follows:
a: pressing a conversation task in a task tree corresponding to the current conversation service into a task stack, and taking a root task as a stack top task of the task stack;
b: judging whether all tasks in the task stack are executed completely, and if the execution is completed, completing the current conversation service; if not, executing step C;
c: executing a stack top task in a task stack;
if the current stack-top task requires user input, then after the user input is acquired, extracting target key information from it and binding the target key information to the matching record item in the expectation data table;
the sequence of the record items in the expectation data table corresponds to the execution sequence of the tasks in the task stack;
d: and C, clearing the completed tasks in the task stack, taking the next task as the stack top task of the task stack according to the task tree, and returning to the step B.
7. The method of claim 6, wherein: execution of the tasks in the task stack further comprises monitoring in real time whether an erroneous dialog occurs, and starting an error handling task if one does;
the decision mechanism of the error dialog monitoring identifies whether an erroneous dialog has occurred based on user-defined rules or on a unified configuration.
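The rule-based branch of claim 7's monitoring could look like the following. This is not claim language; the rule predicates and callback interface are assumptions for illustration only.

```python
# Illustrative sketch of rule-based error-dialog monitoring (claim 7).
ERROR_RULES = [
    lambda reply: reply.strip() == "",       # assumed rule: empty user reply
    lambda reply: "error" in reply.lower(),  # assumed rule: error keyword present
]

def monitor(reply, start_error_task):
    """Check each user-defined rule; start the error handling task on a match."""
    if any(rule(reply) for rule in ERROR_RULES):
        return start_error_task(reply)
    return None  # no erroneous dialog detected

print(monitor("error: card not read", lambda r: "error handling task started"))
```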
8. A self-service device navigation system based on the method of any one of claims 1-7, characterized by: the system comprising at least a voice recognition service module, a semantic/intent recognition module, a dialog management engine module, a function navigation agent module and a self-service device application running platform;
the voice recognition service module, the semantic/intent recognition module and the dialog management engine module are all connected to the function navigation agent module; the function navigation agent module is connected to the self-service device application running platform;
the voice recognition service module is used for converting collected user speech into text information;
the semantic/intent recognition module is used for identifying, by rule matching, the menu function node intent matching the menu navigation information;
the dialog management engine module is used for managing dialog tasks and supports both a finite state machine mode and a task tree mode;
the dialogs comprise at least a menu function node navigation dialog and a service dialog of the menu function node;
and the self-service device application running platform is used for executing tasks in response to the navigation requests of the function navigation agent.
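The module chain of claim 8 amounts to a speech-to-action pipeline. The sketch below is not claim language; every module interface and the stub behaviors are assumptions for illustration only.

```python
# Illustrative sketch of the claim 8 pipeline:
# speech -> text -> intent -> dialog management -> platform execution.
def navigate(audio, asr, intent_recognizer, dialog_engine, platform):
    text = asr(audio)                  # voice recognition service module
    intent = intent_recognizer(text)   # semantic/intent recognition (rule matching)
    action = dialog_engine(intent)     # dialog management engine (FSM / task tree)
    return platform(action)            # application running platform executes the task

# Stubs standing in for the real modules:
result = navigate(
    b"raw-audio-bytes",
    lambda audio: "withdraw cash",
    lambda text: {"menu_node": "withdrawal"},
    lambda intent: {"task": "run_withdrawal", "node": intent["menu_node"]},
    lambda action: f"executing {action['task']}",
)
print(result)  # executing run_withdrawal
```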
9. The system of claim 8, wherein: the system further comprises a device service module, a transaction service module and a flow interface aggregation module, all of which are connected to the self-service device application running platform;
and the self-service device application running platform, in response to the navigation requests of the function navigation agent, invokes the transaction execution service, the device driver service and the flow page jump service.
10. The system of claim 8, wherein: the system further comprises a voice synthesis service module, the voice synthesis service module being used for converting text information into natural speech.
CN201811496702.8A 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof Active CN111290677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811496702.8A CN111290677B (en) 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof


Publications (2)

Publication Number Publication Date
CN111290677A true CN111290677A (en) 2020-06-16
CN111290677B CN111290677B (en) 2023-09-19

Family

ID=71021267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811496702.8A Active CN111290677B (en) 2018-12-07 2018-12-07 Self-service equipment navigation method and navigation system thereof

Country Status (1)

Country Link
CN (1) CN111290677B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113571069A (en) * 2021-08-03 2021-10-29 北京房江湖科技有限公司 Information processing method, device and storage medium
CN113590235A (en) * 2021-07-27 2021-11-02 京东科技控股股份有限公司 Business process execution method and device, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6356285B1 (en) * 1997-12-17 2002-03-12 Lucent Technologies, Inc System for visually representing modification information about an characteristic-dependent information processing system
US20040030556A1 (en) * 1999-11-12 2004-02-12 Bennett Ian M. Speech based learning/training system using semantic decoding
DE102005024638A1 (en) * 2005-05-30 2006-12-07 Siemens Ag Word/text inputs navigation method, for mobile telephone, involves displacing menu based on requirements of electronic device movement found by image recording device, where relative position of cursor and menu entry is found by device
US7197460B1 (en) * 2002-04-23 2007-03-27 At&T Corp. System for handling frequently asked questions in a natural language dialog service
JP2009032118A (en) * 2007-07-27 2009-02-12 Nec Corp Information structuring device, information structuring method, and program
CN104536588A (en) * 2014-12-15 2015-04-22 沈阳美行科技有限公司 Keyboard associating method for navigation equipment using map data
US20150279366A1 (en) * 2014-03-28 2015-10-01 Cubic Robotics, Inc. Voice driven operating system for interfacing with electronic devices: system, method, and architecture
CN105162996A (en) * 2014-07-18 2015-12-16 上海触乐信息科技有限公司 Intelligent service interaction platform apparatus, system, and implementing method
WO2015188454A1 (en) * 2014-06-11 2015-12-17 中兴通讯股份有限公司 Method and device for quickly accessing ivr menu
US20170228109A1 (en) * 2014-07-18 2017-08-10 Shanghai Chule (Cootek) Information Technology Co., Ltd. Information Interactive Platform, System and Method
US20180335849A1 (en) * 2017-05-18 2018-11-22 Atmel Corp Techniques for identifying user interface elements and systems and devices using the same


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MYOUNGHOON JEON: "Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload" *
GAN Houyong: "Self-service voice system based on speech recognition" *
MIAO Huaikou; CHEN Shengbo; ZENG Hongwei: "Model-based Web application testing" *


Also Published As

Publication number Publication date
CN111290677B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
EP3491533B1 (en) Providing command bundle suggestions for an automated assistant
US8335690B1 (en) Method and system for creating natural language understanding grammars
US8095544B2 (en) Method, system, and apparatus for validation
US20220129556A1 (en) Systems and Methods for Implementing Smart Assistant Systems
CN114424185A (en) Stop word data augmentation for natural language processing
US20220147707A1 (en) Unsupervised induction of user intents from conversational customer service corpora
US20040083092A1 (en) Apparatus and methods for developing conversational applications
CN107424601A (en) A kind of information interaction system based on speech recognition, method and its device
KR100818979B1 (en) Dialog management apparatus and method for chatting agent
US9607102B2 (en) Task switching in dialogue processing
Huang et al. Improving event coreference resolution by learning argument compatibility from unlabeled data
WO2018161048A1 (en) Developer platform for providing automated assistant in new domains
CN111159375A (en) Text processing method and device
CN109816231A (en) Workflow processing method, electronic device and readable storage medium storing program for executing
CN111290677A (en) Self-service equipment navigation method and navigation system thereof
CN115392264A (en) RASA-based task-type intelligent multi-turn dialogue method and related equipment
CN116635862A (en) Outside domain data augmentation for natural language processing
WO2015188454A1 (en) Method and device for quickly accessing ivr menu
Xie et al. Converse: A Tree-Based Modular Task-Oriented Dialogue System
Li et al. Question answering for technical customer support
WO2021063524A1 (en) Unsupervised induction of user intents from conversational customer service corpora
CN115129865A (en) Work order classification method and device, electronic equipment and storage medium
EP3590050A1 (en) Developer platform for providing automated assistant in new domains
CN207818190U (en) A kind of information interaction system based on speech recognition
CA3218841A1 (en) System and method of automatic topic detection in text

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant