CN112307177A - Generating a process flow model using an unstructured conversational robot - Google Patents

Generating a process flow model using an unstructured conversational robot

Info

Publication number
CN112307177A
Authority
CN
China
Prior art keywords
process flow
flow model
computer
recorded
unstructured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010701615.2A
Other languages
Chinese (zh)
Inventor
R.阿比特波尔
E.瓦塞克鲁格
H.J.希普
J.巴纳亚胡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CN112307177A publication Critical patent/CN112307177A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06316Sequencing of tasks or work
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0633Workflow analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/02Methods for producing synthetic speech; Speech synthesisers
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/08Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/222Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Mathematical Physics (AREA)
  • Game Theory and Decision Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

In an example computer-implemented method, unstructured interactions between an unstructured conversational robot and a plurality of users are recorded. A process flow model is generated based on the recorded unstructured interactions. Instructions based on the process flow model are presented to the user in real-time by the conversational robot.

Description

Generating a process flow model using an unstructured conversational robot
Background
The present technology relates to process flows. More particularly, the techniques relate to optimizing the order of process flows.
Disclosure of Invention
According to embodiments described herein, a system may include a processor to record unstructured interactions between an unstructured conversational robot and a plurality of users. The processor may further generate a process flow model based on the recorded unstructured interactions. The processor may also present instructions to a user in real-time, via the conversational robot, based on the process flow model.
According to another embodiment described herein, a computer-implemented method may include recording, by a processor, unstructured interactions between an unstructured conversational robot and a plurality of users. The computer-implemented method may further include generating, by the processor, a process flow model based on the recorded unstructured interactions. The computer-implemented method may also further include presenting, via the conversational robot, instructions to the user in real-time based on the process flow model.
According to another embodiment described herein, a computer program product for generating a process flow model may include a computer readable storage medium having program code embodied therein. The computer readable storage medium is not a transitory signal per se. The program code is executable by a processor to cause the processor to record unstructured interactions between an unstructured conversational robot and a plurality of users. The program code may also cause the processor to generate a process flow model based on the recorded unstructured interactions. The program code may also cause the processor to present instructions to the user in real-time, via the conversational robot, based on the process flow model.
Drawings
FIG. 1 is a block diagram of an example system for generating a process flow model based on recorded unstructured interactions;
FIG. 2 is a process flow diagram of an example process by which a process flow model may be generated based on recorded unstructured interactions;
FIG. 3 is a process flow diagram of an example method that may generate a process flow model based on recorded unstructured interactions;
FIG. 4 is a block diagram of an example computing device that may generate a process flow model based on recorded unstructured interactions;
FIG. 5 is a flow diagram of an example cloud computing environment, according to embodiments described herein;
FIG. 6 is a flow diagram of an example abstraction model layer according to embodiments described herein; and
FIG. 7 is an example tangible, non-transitory computer-readable medium that may generate a process flow model based on recorded unstructured interactions.
Detailed Description
Workers in different industries and environments may participate in many processes each day. These processes are typically characterized by a regular and repeating sequence of micro-operations that are performed in a predetermined order using paper forms or spreadsheets. One example of such a process is an inspection workflow, in which a user performs a series of tests and checks of system conditions, records statuses and sensor readings, and applies maintenance routines to system components.
However, the particular order or flow in which these processes are performed (referred to herein as a process flow) may be artificially limited by the interfaces used to perform the process flow. For example, checklists for performing these processes are designed and built by subject matter experts in the relevant field, and may be organized either in a logical hierarchical order or in an order that aggregates data in a manner that is easier for a supervisor to review. These forms can then be converted directly into some form of screen-based interface. However, the logical order imposed by this process may be inconsistent with, or even contradictory to, the natural order of the business process. For example, the natural order may be the optimal order in which a worker would perform the process if the worker were not constrained by the order of the forms. In some examples, the natural order may reflect the physical or actual layout of the subject matter, but may also be influenced by the worker's skill in a particular task, the worker's habits and convenience, and other objective and subjective factors. Thus, workers may have to follow an unnatural order to complete the process. As a result, their performance may not be optimal because of the gap that must be closed between the guided procedure and the actual procedure that may be most efficient for them. For example, workers may either use the forms as-is and compromise their efficiency, or perform the processes in their natural order and compromise the accuracy of the task.
In accordance with embodiments of the present disclosure, an unstructured conversational robot may be used to generate a process flow model based on recorded unstructured interactions. An example system includes a processor to record unstructured interactions between an unstructured conversational robot and a plurality of users. For example, the unstructured interactions may not have a particular order. Thus, the unstructured interactions may not follow a specified conversation pattern or structure, and the user may not follow any guidance path imposed by the conversational robot. The processor may generate a process flow model based on the recorded unstructured interactions. The processor may also present instructions to a user in real-time, via the conversational robot, based on the process flow model. In some examples, the processor may modify the process flow model to accommodate the user's preferences based on unstructured interactions with that user. Thus, embodiments of the present disclosure enable the generation of process flow models that are tailored to the personal preferences of a particular worker or group of workers. Further, by using unstructured conversational robot data, the generated process flow model may be customized to a particular process by the actual set of workers engaged in the task. Thus, the techniques may be used to generate process flow models that improve efficiency while maintaining accuracy as the process is executed. In some examples, the system may even continually learn, using new process flow models, to optimize the efficiency of the work process. The system may thus generate recommendations for workers with individual preferred orders to change their habits and to examine alternative orders that are popular among the worker population and may be superior to their own, thereby also serving as an educational tool. In this manner, the techniques may enable conversational robots to efficiently identify, and gradually change, inefficient habits of a worker population while maintaining the accuracy of process flows.
Referring now to FIG. 1, a block diagram illustrates an example system for generating a process flow model based on recorded unstructured interactions. The example system 100 may be used to implement the process 200 or the method 300 of fig. 2 and 3. The system 100 may also be implemented using the computing device 400 of FIG. 4 or the computer-readable medium 700 of FIG. 7.
The system 100 of FIG. 1 includes a plurality of users 102 that are shown interacting with an unstructured conversational robot 104. For example, the unstructured conversational robot may be implemented on a number of mobile devices. The mobile devices may include voice-based personal assistants that provide unstructured feedback. The system 100 includes an interaction recorder 106 to record interactions between the users 102 and the unstructured conversational robot 104. As one example, the interaction recorder 106 may be a subunit of the unstructured conversational robot 104. The system 100 includes a process flow model generator and updater 108. For example, the process flow model generator and updater 108 may be implemented on a server (e.g., a cloud server).
In the example of FIG. 1, many users 102 may be involved in performing a particular process. For example, the process may include performing a series of tests and checks of system conditions, recording statuses and sensor readings, and applying maintenance routines to system components. In various examples, the unstructured conversational robot 104 may include a voice-based interface. For example, the voice-based interface may allow a user to ignore any artificial constraints imposed by conventional interfaces and perform the process in the manner most efficient for that user. The system 100 may thus allow a user to complete a process using natural-language, natural-order speech interactions. The order of the process is unstructured and allows the user to complete the process in an unguided manner.
Still referring to FIG. 1, the system 100 may record the information entered by the users in the order in which each user performed the process. Recording user interactions allows the system to learn, and develop from this data, a generic model that describes the optimal sequence of the related business process. The system may further learn the personal preference order of each worker and apply this personalization to the generic model to create an individual model for each worker and each business process type. The interaction recorder 106 may be any logging device used while performing the process that allows items, time, and location to be recorded as the process is performed. As one example, a worker may use a mobile handheld device with a microphone, a speaker, and an internet connection. The mobile device may include a voice interface application through which the worker performs the process. Thus, in various examples, the collection of data is based on an online, voice-enabled tracking and recording device, such as a personal assistant running on a mobile or radio handheld device. In some examples, the logging device records the workflow process and transmits the log data online to a central database. In some examples, the logging device may transmit the log data offline to the central database after the process ends.
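For illustration only, the following Python sketch shows one way such logged interaction events and a buffering recorder could be structured. The field names (item_id, utterance, location, timestamp) and the flush-based online/offline transmission are assumptions made for the example, not a definitive implementation of the interaction recorder 106.

```python
# Illustrative sketch: event record and recorder with online/offline flush.
# Field names and the transport abstraction are assumptions for this example.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional
import json

@dataclass
class InteractionEvent:
    user_id: str
    process_type: str          # e.g. "equipment_inspection"
    item_id: str               # the checklist item the utterance refers to
    utterance: str             # raw natural-language input from the worker
    location: Optional[tuple]  # (lat, lon) supplied by the handheld device
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class InteractionRecorder:
    """Buffers events locally and ships them to a central log database."""
    def __init__(self, transport):
        self.transport = transport   # any object with a send(payload) method
        self.buffer: List[InteractionEvent] = []

    def record(self, event: InteractionEvent) -> None:
        self.buffer.append(event)

    def flush(self) -> None:
        # Online mode could call this after every event; offline mode would
        # call it once after the whole process has finished.
        if self.buffer:
            self.transport.send(json.dumps([asdict(e) for e in self.buffer]))
            self.buffer.clear()
```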
The process flow model generator and updater 108 may include a learning algorithm that processes the data in the database to sort out the best paths, or the best order, in which to perform the process, thereby producing a recommended optimal order. In various examples, the system may learn an optimized model over time as the recorded events accumulate. For example, the logged events may describe the performance of each process item as well as the timing and location data associated with each item. Timing and location data may be collected by the end-user input device. In various examples, the optimization model may include a statistical analysis of the correlation between the traversal path of a session (indicated by the order of the session nodes) and the successful or efficient completion of the process flow. This analysis may highlight the best paths and trace back their common attributes to generate the recommended optimal order.
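As a minimal sketch of such an analysis, assuming each recorded session has been reduced to a (traversal order, completion time) pair, a recommended order could be derived as follows. Scoring purely by mean completion time is an assumption made for the example, since the description leaves the exact metric open.

```python
# Illustrative sketch: correlate each recorded traversal order with how
# quickly the process completed, then recommend the order with the best mean.
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Sequence, Tuple

def recommend_order(sessions: Sequence[Tuple[Tuple[str, ...], float]]
                    ) -> Tuple[List[str], float]:
    """sessions: (traversal_order, completion_time_seconds) pairs."""
    times_by_order: Dict[Tuple[str, ...], List[float]] = defaultdict(list)
    for order, duration in sessions:
        times_by_order[order].append(duration)
    # The "best" path here is simply the one with the lowest mean duration.
    best_order, durations = min(times_by_order.items(),
                                key=lambda kv: mean(kv[1]))
    return list(best_order), mean(durations)

# Usage: three recorded sessions over checklist items A, B, C.
best, avg = recommend_order([
    (("A", "B", "C"), 610.0),
    (("B", "A", "C"), 540.0),
    (("B", "A", "C"), 520.0),
])
print(best, avg)   # ['B', 'A', 'C'] 530.0
```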
In some examples, the recommended optimal order may be obtained using various machine learning models. For example, a system may be used that includes a random forest model that takes as input attributes describing the steps in the session, listed in their order of execution. In various examples, the random forest model also receives performance indicators such as the quality, speed, and efficiency of completing the task. Over time, the random forest model may learn the set of ideal attributes, and their order, for achieving optimal results.
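A hedged sketch of this idea follows, using the scikit-learn library (a choice assumed here; the description does not name a library). Each recorded session is encoded as the positions at which its steps were performed plus a contextual attribute, and completion time is regressed so candidate orderings can be ranked.

```python
# Illustrative sketch: random forest over session attributes.
# The feature encoding and the target (completion time) are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row encodes one recorded session: the position at which each of
# three checklist items was performed, plus the hour the session started.
X = np.array([
    [0, 1, 2, 8],    # item A first, B second, C third, started at 08:00
    [1, 0, 2, 9],
    [1, 0, 2, 14],
    [2, 1, 0, 8],
])
y = np.array([610.0, 540.0, 520.0, 700.0])  # completion time in seconds

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict how long a candidate ordering would take, to rank alternatives.
candidate = np.array([[1, 0, 2, 10]])
print(model.predict(candidate))
```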
After learning the best model for each business process type, the system 100 may generate a process flow model based on the learned models, which may be used for subsequent processes. The system 100 can continually learn, even with new process flow models, to optimize the efficiency of the work process. In various examples, the system 100 may recommend that workers with an individual preferred order change their habits, and may enable workers to examine an alternative order that benefits the general population of workers and may be better than their own. Thus, the system 100 can improve worker efficiency by feeding the learned order back into the various logging devices for subsequent workflows.
In some examples, any individual deviations from the generated process flow model are recorded as individual preferences and analyzed to determine cluster behavior and to quantify whether these new routes or sequences are better than the currently recommended solution of the process flow model. If a new route or sequence is an improvement, the system 100 may add the new, improved route or sequence to the generic model and update the process flow model accordingly. The resulting processes and sequences are organized based on the actual sequences and performance information of many workflow instances, and are the most appropriate for the type of task being performed by the worker. In various examples, with respect to the original default order of the process (which may be structured more hierarchically to suit a supervisor's view), the system 100 can still translate the new order of items back to the original logical default order to suit the viewer's perspective and preferences, depending on the particular application.
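The following Python sketch illustrates, under assumed data shapes, how a deviating order might be promoted into the generic model only when it is both shared by several workers (cluster behavior) and more efficient than the current recommendation. The cluster-size threshold and the use of mean completion time are illustrative assumptions.

```python
# Illustrative sketch: adopt a deviation only if it is widely shared and faster.
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Sequence, Tuple

def maybe_update_model(current_order: Tuple[str, ...],
                       current_avg_time: float,
                       deviations: Sequence[Tuple[str, Tuple[str, ...], float]],
                       min_cluster_size: int = 3) -> Tuple[str, ...]:
    """deviations: (user_id, deviating_order, completion_time) records."""
    times_by_order: Dict[Tuple[str, ...], List[float]] = defaultdict(list)
    users_by_order: Dict[Tuple[str, ...], set] = defaultdict(set)
    for user, order, duration in deviations:
        times_by_order[order].append(duration)
        users_by_order[order].add(user)
    for order, durations in times_by_order.items():
        shared_widely = len(users_by_order[order]) >= min_cluster_size
        faster = mean(durations) < current_avg_time
        if shared_widely and faster:
            return order           # adopt the improved route
    return current_order           # keep the existing recommendation
```

In this sketch, a deviation followed by only one or two workers, or one that is slower on average, leaves the current recommendation unchanged.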
It should be understood that the block diagram of FIG. 1 is not intended to indicate that the system 100 will include all of the components shown in FIG. 1. Rather, system 100 may include fewer or additional components not shown in fig. 1 (e.g., additional client devices or additional resource servers, etc.).
FIG. 2 is a schematic diagram of an example process by which a process flow model may be generated based on recorded unstructured interactions. Process 200 may be implemented with any suitable computing device, such as computing device 400 of FIG. 4, and is described with reference to system 100 of FIG. 1. For example, the methods described below may be implemented by processor 402 or processor 702 of fig. 4 and 7.
FIG. 2 includes a set of different users 202 that interact with a process flow through an unstructured conversational robot using speech. The different users 202 include users 204 that follow a default structure of the process flow. For example, a user 204 may follow the process verbatim as presented on a form, or as presented via spoken commands or prompts, or visual commands or prompts. The different users 202 include users 206 that partially follow the default structure of the process flow. The different users 202 also include free-flow users 208 that do not follow the default structure of the process flow. For example, such a free-flow user 208 may deviate entirely from the sequence of the default structure of the process flow.
At block 210, the interactions of the users 202 are recorded. For example, the unstructured interactions may be recorded and sent to a central database.
At block 212, the model generator generates a process flow model based on the recorded interactions and analyzes each user's deviation from the process flow model. For example, the model generator may determine whether any user's deviation from the process flow model exceeds the process flow model in efficiency. For example, the time it takes to execute the process flow may be used to measure efficiency. In some examples, a secondary indicator of efficiency may be the quality of completion as measured by an external observer. Further, in some examples, user frustration with the process can be measured by surveys, the physical distance traversed during the process, and other frustration indicators, where applicable.
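As one hypothetical way to combine these indicators into a single comparison, the sketch below scores an execution by time, completion quality, and frustration. The weights and the time normalization are assumptions introduced only for illustration; the description names the indicators but not how they are combined.

```python
# Illustrative sketch: weighted efficiency score over time, quality, frustration.
def efficiency_score(duration_s: float, quality: float, frustration: float,
                     w_time: float = 0.6, w_quality: float = 0.3,
                     w_frustration: float = 0.1) -> float:
    """Higher is better. quality and frustration are assumed to be in [0, 1]."""
    # Shorter durations score higher; one hour is an arbitrary scale factor.
    time_component = 1.0 / (1.0 + duration_s / 3600.0)
    return (w_time * time_component
            + w_quality * quality
            + w_frustration * (1.0 - frustration))

# A deviation is considered an improvement only if it beats the model's score.
baseline = efficiency_score(610.0, quality=0.9, frustration=0.2)
deviation = efficiency_score(520.0, quality=0.9, frustration=0.1)
print(deviation > baseline)   # True
```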
At block 214, the system provides each user with a process flow model having that user's personal preferences applied. For example, the generated process flow model may be used as a generic process flow model with which a worker may interact through the unstructured conversational robot. In some examples, the order or sequence of the generic process flow model may be modified according to the personal preferences of a user and presented to that user as an individual process flow model, as shown in the sketch below.
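A minimal sketch of applying such personal preferences on top of a generic order follows. Representing a worker's preferences as pairwise "perform X before Y" constraints, and the checklist items themselves, are assumptions made only for the example.

```python
# Illustrative sketch: reorder the generic sequence per individual preferences.
from typing import List, Tuple

def personalize(generic_order: List[str],
                preferences: List[Tuple[str, str]]) -> List[str]:
    """Reorder the generic sequence so each (earlier, later) preference holds."""
    order = list(generic_order)
    for earlier, later in preferences:
        if earlier in order and later in order:
            if order.index(earlier) > order.index(later):
                order.remove(earlier)
                order.insert(order.index(later), earlier)
    return order

generic = ["Check oil level", "Check coolant", "Test horn"]
# This worker prefers to test the horn before opening the engine bay.
print(personalize(generic, [("Test horn", "Check oil level")]))
# ['Test horn', 'Check oil level', 'Check coolant']
```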
At block 216, the system iteratively updates the process flow model based on additional recorded interactions with the process flow model. For example, the system may identify cluster behavior in additional recorded unstructured interactions and update the process flow model based on the cluster behavior in response to detecting that the cluster behavior contains improved routes or sequences. In this way, the process flow model may continually improve over time based on the type of process and the preferences of the workers performing the process. Therefore, efficiency can be improved while maintaining process execution accuracy.
The diagram of fig. 2 is not intended to indicate that the operations of process 200 are to be performed in any particular order, or that all of the operations of process 200 are included in each case. Further, process 200 may include any suitable number of additional operations.
FIG. 3 is a process flow diagram of an example method that may generate a process flow model based on recorded unstructured interactions. The method 300 may be implemented with any suitable computing device, such as the computing device 400 of FIG. 4, and is described with reference to the system 100 of FIG. 1. For example, the method described below may be implemented by the processor 402 or the processor 702 of FIGS. 4 and 7.
In block 302, unstructured interactions between an unstructured conversational robot and a plurality of users are recorded. In some examples, the unstructured interactions are recorded and sent to a database. For example, as discussed with respect to FIG. 1, the recorded interactions may be sent to the central database in an offline or online manner.
In block 304, a process flow model is generated based on the recorded unstructured interactions. For example, a generic process flow model may be generated for a particular type of process flow based on the interactions of users executing that type of process flow. In some examples, generating the process flow model may include comparing the execution paths of multiple users and selecting a path that is more efficient than the other paths.
At block 306, instructions based on the process flow model are presented to the user via the conversational robot in real-time. In various examples, views of the process flow model may be generated and presented from different angles. In some examples, a top-down model view or a bottom (ground) view may be generated and presented. A top-down view of a model is a view that describes the arrangement of a particular logical hierarchy of data. The top-down model allows recursively drilling down from top-level data elements to their subordinate data elements. Alternatively, the top-down model allows low-level data elements to be collapsed by grouping them into higher-level data elements, one level at a time. In various examples, a top-down view of the model may be presented to a supervisor. The top-down model view may be generated based on the default process flow model. In various examples, the bottom view may be generated based on the process flow model. The bottom view is a view depicting the actual arrangement of data elements as reflected by the person performing the manual operation. The bottom view does not necessarily follow any logical hierarchy, but rather follows the natural traversal order in which the individual and the environment are most likely, or best suited, to perform the process. For example, the bottom view may be presented to the user.
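To make the two views concrete, the following sketch renders an invented checklist both ways: hierarchically, as a supervisor's top-down view built from the default form structure, and flat, as a ground view following a learned natural order. The checklist items and the ordering are hypothetical.

```python
# Illustrative sketch: a hierarchical top-down view versus a flat ground view.
from typing import Dict, List

default_hierarchy: Dict[str, List[str]] = {
    "Engine bay": ["Check oil level", "Check coolant"],
    "Cabin":      ["Test horn", "Check dashboard warnings"],
}

learned_order: List[str] = [   # natural order from the process flow model
    "Test horn", "Check dashboard warnings", "Check oil level", "Check coolant",
]

def top_down_view(hierarchy: Dict[str, List[str]]) -> List[str]:
    """Supervisor view: sections with their items, following the form layout."""
    lines = []
    for section, items in hierarchy.items():
        lines.append(section)
        lines.extend(f"  - {item}" for item in items)
    return lines

def ground_view(order: List[str]) -> List[str]:
    """Worker view: items in the order the model recommends performing them."""
    return [f"{i + 1}. {item}" for i, item in enumerate(order)]

print("\n".join(top_down_view(default_hierarchy)))
print("\n".join(ground_view(learned_order)))
```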
In block 308, deviations from the process flow model are recorded and the process flow model is iteratively updated based on deviations that are more effective than other deviations of the recorded deviations. In some examples, individual deviations from the process flow model may be recorded for each of a plurality of users, and an individual model may be generated for each user based on that user's recorded individual deviations. In various examples, cluster behavior is identified in additional recorded unstructured interactions, and the process flow model is updated based on the cluster behavior in response to detecting that the cluster behavior includes an improved route or order. As used herein, cluster behavior refers to similar or identical patterns in an entire process flow or in portions of a process flow. In some examples, the patterns may be session steps of a series of process flow actions that occur in a certain order.
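One assumed way to detect such shared patterns is to look for fixed-length step subsequences that recur across many users' recorded orders, as in the sketch below. The n-gram formulation and the thresholds are illustrative choices, not part of the described method.

```python
# Illustrative sketch: find step subsequences shared by at least min_users workers.
from collections import Counter
from typing import Dict, List, Sequence, Tuple

def shared_patterns(orders_by_user: Dict[str, Sequence[str]],
                    length: int = 2, min_users: int = 3
                    ) -> List[Tuple[str, ...]]:
    seen_by_pattern: Counter = Counter()
    for order in orders_by_user.values():
        grams = {tuple(order[i:i + length])
                 for i in range(len(order) - length + 1)}
        seen_by_pattern.update(grams)   # count each pattern once per user
    return [p for p, n in seen_by_pattern.items() if n >= min_users]

# Usage: three of four workers test the horn right before checking the oil level.
print(shared_patterns({
    "u1": ["Test horn", "Check oil level", "Check coolant"],
    "u2": ["Test horn", "Check oil level", "Check coolant"],
    "u3": ["Check coolant", "Test horn", "Check oil level"],
    "u4": ["Check oil level", "Check coolant", "Test horn"],
}))
```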
The process flow diagram of fig. 3 is not intended to indicate that the operations of method 300 are to be performed in any particular order, or that all of the operations of method 300 are to be included in each case. Additionally, method 300 may include any suitable number of additional operations.
FIG. 4 is a block diagram of an example computing device that may generate a process flow model based on recorded unstructured interactions. Computing device 400 may be, for example, a server, a desktop computer, a laptop computer, a tablet computer, or a mobile device such as a smartphone. In some examples, computing device 400 may be a cloud computing node. Computing device 400 may be described in the general context of computer-system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. Computing device 400 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
The computing device 400 may include a processor 402 to execute stored instructions, and a memory device 404 to provide temporary memory space for operations of the instructions during operation. The processor may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. The memory 404 may include Random Access Memory (RAM), read-only memory, flash memory, or any other suitable memory system.
The processor 402 may be connected through a system interconnect 406 (e.g., PCI®, PCI-Express®, etc.) to an input/output (I/O) device interface 408 adapted to connect the computing device 400 to one or more I/O devices 410. The I/O devices 410 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 410 may be built-in components of the computing device 400, or may be devices that are externally connected to the computing device 400.
The processor 402 may also be connected through the system interconnect 406 to a display interface 412 adapted to connect the computing device 400 to a display device 414. Display device 414 may include a display screen as a built-in component of computing device 400. Display device 414 may also include a computer monitor, television, or projector, etc., that is externally connected to computing device 400. In addition, a Network Interface Controller (NIC) 416 may be adapted to connect computing device 400 to a network 418 via the system interconnect 406. In some embodiments, the NIC 416 may transmit data using any suitable interface or protocol (e.g., the Internet Small Computer System Interface, etc.). The network 418 may be a cellular network, a wireless network, a Wide Area Network (WAN), a Local Area Network (LAN), the internet, or the like. An external computing device 420 may be connected to computing device 400 through the network 418. In some examples, the external computing device 420 may be an external web server 420. In some examples, the external computing device 420 may be a cloud computing node.
The processor 402 may also be connected through the system interconnect 406 to a memory device 422, which may include a hard disk drive, an optical disk drive, a USB flash drive, a drive array, or any combination thereof. In some examples, the memory device may include an interaction recorder module 424, a model generator module 426, a presenter module 428, and a model updater module 430. The interaction recorder module 424 can record unstructured interactions between the unstructured conversational robot and a plurality of users. For example, the interaction recorder module 424 may be a voice-based interface on a mobile device. In some examples, the data contained in the recorded unstructured interactions includes location data and timestamps. The interaction recorder module 424 can also record individual deviations from the process flow model for each of the plurality of users. The model generator module 426 may generate a process flow model based on the recorded unstructured interactions. In some examples, the model generator module 426 may generate an individual model for each user based on the recorded individual deviations. The presenter module 428 may present instructions to a user in real-time, based on the process flow model, through the conversational robot. In various examples, the presenter module 428 may generate and present views of the process flow model from different angles. For example, the views may include a bottom view and a hierarchical top-down view. The model updater module 430 may iteratively update the process flow model based on deviations that are more effective than other deviations of the recorded deviations.
The memory device 422 may also include a log database 432 to store recorded interactions with various users. For example, log database 432 may store location data, time, and sequence of interactions with a user. For example, the log database 432 may store location data, interaction time and duration, interaction content including user input and system output, system interpretation of the interaction, and the resulting action of the interaction. The sequence order and additional information may also be inferred from the logged data and may also be stored in the log database 432.
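For illustration, a log database of this kind could be sketched with SQLite as below. The schema simply mirrors the fields listed above (location, time and duration, user input, system output, interpretation, and resulting action) and is an assumption, not the actual storage layout of the log database 432.

```python
# Illustrative sketch: a log table mirroring the fields described in the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE interaction_log (
        id               INTEGER PRIMARY KEY,
        user_id          TEXT NOT NULL,
        process_type     TEXT NOT NULL,
        started_at       TEXT NOT NULL,   -- ISO-8601 timestamp
        duration_s       REAL,
        latitude         REAL,
        longitude        REAL,
        user_input       TEXT,            -- raw utterance
        system_output    TEXT,            -- the robot's reply
        interpretation   TEXT,            -- how the system understood the input
        resulting_action TEXT             -- e.g. which checklist item was closed
    )
""")
conn.execute(
    "INSERT INTO interaction_log (user_id, process_type, started_at, duration_s,"
    " latitude, longitude, user_input, system_output, interpretation,"
    " resulting_action) VALUES (?,?,?,?,?,?,?,?,?,?)",
    ("worker-17", "equipment_inspection", "2020-07-20T08:03:11Z", 4.2,
     32.07, 34.79, "oil level is fine", "Noted, oil level OK.",
     "status:ok item:oil_level", "close_item:oil_level"))
print(conn.execute("SELECT COUNT(*) FROM interaction_log").fetchone()[0])  # 1
```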
It should be understood that the block diagram of FIG. 4 is not intended to indicate that computing device 400 is to include all of the components shown in FIG. 4. Rather, computing device 400 may include fewer or more components (e.g., additional memory components, embedded controllers, modules, additional network interfaces, etc.) not shown in fig. 4. Further, any of the functionality of the interaction recorder module 424, the model generator module 426, the presenter module 428, and the model updater module 430 may be partially or fully implemented in hardware and/or in the processor 402. For example, the functions may be implemented using application specific integrated circuits, logic implemented in an embedded controller, logic implemented in the processor 402, or the like. In some embodiments, the functionality of the interaction recorder module 424, the model generator module 426, the presenter module 428, and the model updater module 430 may be implemented in logic, where the logic referred to herein may comprise any suitable hardware (e.g., processors, etc.), software (e.g., applications), firmware, or any suitable combination of hardware, software, and firmware.
In some cases, the techniques described herein may be implemented in a cloud computing environment. As discussed in more detail below with reference to at least fig. 5-6, a computing device configured to generate a process flow model based on recorded unstructured interactions may be implemented in a cloud computing environment. It should be understood at the outset that although the present disclosure may include a detailed description with respect to cloud computing, implementation of the techniques set forth therein is not limited to a cloud computing environment, but may be implemented in connection with any other type of computing environment, whether now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with the service provider. Such a cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically and without requiring human interaction with the service provider.
Broad network access: computing capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and Personal Digital Assistants (PDAs)).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. Typically, the consumer has no control or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center), and thus there is a sense of location independence.
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be obtained in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the service provider and the consumer.
Service models are as follows:
software as a service (SaaS): the capability provided to the consumer is to use the provider's applications running on the cloud infrastructure. Applications may be accessed from various client devices through a thin client interface (e.g., web-based email) such as a web browser. The consumer does not manage nor control the underlying cloud infrastructure including networks, servers, operating systems, storage, or even individual application capabilities, except for limited user-specific application configuration settings.
Platform as a service (PaaS): the ability provided to the consumer is to deploy consumer-created or acquired applications on the cloud infrastructure, which are created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, or storage, but has control over the applications that are deployed, and possibly also the application hosting environment configuration.
Infrastructure as a service (IaaS): the capabilities provided to the consumer are the processing, storage, network, and other underlying computing resources in which the consumer can deploy and run any software, including operating systems and applications. The consumer does not manage nor control the underlying cloud infrastructure, but has control over the operating system, storage, and applications deployed thereto, and may have limited control over selected network components (e.g., host firewalls).
Deployment models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
Cloud computing environments are service-oriented with features focused on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that contains a network of interconnected nodes.
Referring now to FIG. 5, an exemplary cloud computing environment 500 is shown. As shown, cloud computing environment 500 includes one or more cloud computing nodes 502 with which local computing devices used by cloud consumers, such as Personal Digital Assistants (PDAs) or mobile phones 504A, desktops 504B, notebooks 504C, and/or automotive computer systems 504N may communicate. Cloud computing nodes 502 may communicate with each other. Cloud computing nodes may be physically or virtually grouped (not shown) in one or more networks including, but not limited to, private, community, public, or hybrid clouds, or a combination thereof, as described above. In this way, cloud consumers can request infrastructure as a service (IaaS), platform as a service (PaaS), and/or software as a service (SaaS) provided by the cloud computing environment 500 without maintaining resources on the local computing devices. It should be appreciated that the types of computing devices 504A-N shown in fig. 5 are merely illustrative and that cloud computing node 502, as well as cloud computing environment 500, may communicate with any type of computing device over any type of network and/or network addressable connection (e.g., using a web browser).
Referring now to FIG. 6, therein is shown a set of functional abstraction layers provided by cloud computing environment 500 (FIG. 5). It should be understood at the outset that the components, layers, and functions illustrated in FIG. 6 are illustrative only and that embodiments of the present invention are not limited thereto. As shown in FIG. 6, the following layers and corresponding functions are provided:
the hardware and software layer 600 includes hardware and software components. Examples of hardware components include mainframes, which in one example are mainframes
Figure BDA0002591378620000101
Provided is a system. In one example
Figure BDA0002591378620000102
In the system, a server based on a RISC (reduced instruction set computer) architecture;
Figure BDA0002591378620000104
a system; IBM
Figure BDA0002591378620000103
A system; a storage device; networks and network components. Examples of software components include web application server software, in one example IBM
Figure BDA0002591378620000105
Application server software; and database software, IBM in one example
Figure BDA0002591378620000106
Database software. (IBM, zSeries, pSeries, xSeries, BladeCENTer, WebSphere, and DB2 are trademarks of International Business Machines Corporation (International Business Machines Corporation) registered in many jurisdictions worldwide).
The virtualization layer 602 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers, virtual storage, virtual networks (including virtual private networks), virtual applications and operating systems, and virtual clients. In one example, the management layer 604 may provide the following functions. Resource provisioning: dynamic procurement of computing resources and other resources that are used to perform tasks within the cloud computing environment. Metering and pricing: cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security: identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal: access to the cloud computing environment for consumers and system administrators. Service level management: allocation and management of cloud computing resources such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment: pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workload layer 606 provides an example of the functionality that a cloud computing environment may implement. In this layer, examples of workloads or functions that can be provided include: mapping and navigating; software development and lifecycle management; virtual classroom education delivery; analyzing and processing data; transaction processing; and process flow modeling.
The present techniques may be a system, a method, and/or a computer program product at any possible level of technical detail. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to carry out aspects of the present techniques.
The computer readable storage medium may be a tangible device that can retain and store the instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device, such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present technology may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine related instructions, microcode, firmware instructions, state setting data, integrated circuit configuration data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry may execute computer-readable program instructions to implement aspects of the present technology by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
Various aspects of the present technology are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
Referring now to FIG. 7, a block diagram of an example tangible, non-transitory computer-readable medium 700 is depicted, which example tangible, non-transitory computer-readable medium 700 can generate a process flow model based on recorded unstructured interactions. The processor 702 may access the tangible, non-transitory computer-readable medium 700 through a computer interconnect 704. Further, the tangible, non-transitory, computer-readable medium 700 may include code that directs the processor 702 to perform the operations of the method 300 of fig. 3.
As shown in fig. 7, the various software components discussed herein may be stored on a tangible, non-transitory computer-readable medium 700. For example, the interaction recorder module 706 includes code for recording unstructured interactions between the conversational robot and a plurality of users. The interaction recorder module 706 also includes code for recording deviations from the process flow model. In some examples, the interaction recorder module 706 includes code for recording individual deviations from the process flow model for each of the plurality of users. The model generator module 708 includes code for generating a process flow model based on the recorded unstructured interactions. The model generator module 708 includes code to generate a generic process flow model for a particular type of process flow based on interactions of users executing the particular type of process flow. For example, the model generator module 708 may include code for comparing execution paths of multiple users and selecting a path that is more efficient than other paths. The model generator module 708 also includes code for generating an individual model for each user based on the recorded individual deviations. The presenter module 710 includes code for presenting instructions to a user via the conversational robot in real-time based on the process flow model. The presenter module 710 also includes code for generating and presenting views of the process flow model from different angles. The model updater module 712 includes code for iteratively updating the process flow model based on deviations that are more effective than other deviations of the recorded deviations. It should be appreciated that any number of additional software components not shown in fig. 7 may be included within the tangible, non-transitory computer-readable medium 700 depending on the particular application.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present technology, the foregoing description is intended to be illustrative, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

1. A system comprising a processor to:
recording unstructured interactions between an unstructured conversational robot and a plurality of users;
generating a process flow model based on the recorded unstructured interactions; and
presenting, by the conversation robot, instructions to a user in real-time based on the process flow model.
2. The system of claim 1, wherein the conversation robot comprises a voice-based interface on a mobile device.
3. The system of claim 1, wherein the processor is to generate and present views of the process flow model from different angles.
4. The system of claim 3, wherein the views comprise a bottom view and a hierarchical top-down view.
5. The system of claim 1, wherein the data included in the recorded unstructured interactions includes location data and a timestamp.
6. The system of claim 1, wherein the processor is to record individual deviations from the process flow model for each of the plurality of users and generate an individual model for each of the users based on the recorded individual deviations.
7. The system of claim 1, wherein the processor is to record deviations from the process flow model and iteratively update the process flow model based on deviations that are more effective than other deviations of the recorded deviations.
8. A computer-implemented method, comprising:
recording, by a processor, unstructured interactions between an unstructured conversational robot and a plurality of users;
generating, by a processor, a process flow model based on the recorded unstructured interactions; and
presenting, by the conversational robot, instructions to a user in real time based on the process flow model.
9. The computer-implemented method of claim 8, further comprising: deviations from the process flow model are recorded and the process flow model is iteratively updated based on deviations that are more effective than other deviations of the recorded deviations.
10. The computer-implemented method of claim 8, comprising: recording individual deviations from the process flow model for each of the plurality of users, and generating an individual model for each user based on the recorded individual deviations for each of the plurality of users.
11. The computer-implemented method of claim 8, wherein generating the process flow model comprises generating a generic process flow model for a particular type of process flow based on interactions of a user executing the particular type of process flow.
12. The computer-implemented method of claim 8, wherein generating the process flow model comprises comparing execution paths of the plurality of users and selecting a path that is more efficient than the other paths.
13. The computer-implemented method of claim 8, comprising: generating and presenting views of the process flow model from different angles.
14. The computer-implemented method of claim 8, further comprising: identifying a cluster behavior in additional recorded unstructured interactions, and, in response to detecting that the cluster behavior comprises an improved route or order, updating the process flow model based on the cluster behavior.
15. A computer program product for generating a process flow model, the computer program product comprising a computer readable storage medium having program code embodied therein, the program code being executable by a processor to cause the processor to perform the steps of the method of one of claims 8 to 14.
16. A system comprising means for performing the steps of the method of one of claims 8-14.
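As a rough, non-limiting illustration of the cluster-behavior update recited in claim 14, the following Python sketch groups newly recorded execution paths, treats the largest group as the observed cluster behavior, and updates the model only when that behavior represents an improved route or order. Grouping identical paths as a stand-in for clustering, and using average duration as the measure of improvement, are assumptions made for this sketch only and are not recited in the claims.

    # Hypothetical sketch only; the grouping heuristic and duration-based
    # improvement test are illustrative assumptions.
    from collections import defaultdict
    from statistics import mean
    from typing import Dict, List, Tuple

    Path = Tuple[str, ...]

    def identify_cluster_behavior(recorded: List[Tuple[Path, float]]) -> Tuple[Path, float]:
        # Group identical execution paths; the largest group is treated as the
        # observed cluster behavior, returned with its average duration.
        groups: Dict[Path, List[float]] = defaultdict(list)
        for path, duration in recorded:
            groups[path].append(duration)
        path = max(groups, key=lambda p: len(groups[p]))
        return path, mean(groups[path])

    def update_if_improved(model_path: Path, model_duration: float,
                           recorded: List[Tuple[Path, float]]) -> Tuple[Path, float]:
        # Update the process flow model only when the cluster behavior comprises
        # an improved route or order (here: a shorter average duration).
        cluster_path, cluster_duration = identify_cluster_behavior(recorded)
        if cluster_duration < model_duration:
            return cluster_path, cluster_duration
        return model_path, model_duration

    current_path = ("open form", "fill fields", "review", "submit")
    new_interactions = [
        (("open form", "fill fields", "submit"), 90.0),
        (("open form", "fill fields", "submit"), 95.0),
        (("open form", "review", "fill fields", "submit"), 130.0),
    ]
    print(update_if_improved(current_path, 120.0, new_interactions))
    # -> (('open form', 'fill fields', 'submit'), 92.5)

A production system could instead cluster on step order or other recorded features, and could fold the updated path back into the generic or individual models described above.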
CN202010701615.2A 2019-08-01 2020-07-20 Generating a process flow model using an unstructured conversational robot Pending CN112307177A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/528,681 US11121986B2 (en) 2019-08-01 2019-08-01 Generating process flow models using unstructure conversation bots
US16/528,681 2019-08-01

Publications (1)

Publication Number Publication Date
CN112307177A true CN112307177A (en) 2021-02-02

Family

ID=74260604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010701615.2A Pending CN112307177A (en) 2019-08-01 2020-07-20 Generating a process flow model using an unstructured conversational robot

Country Status (2)

Country Link
US (1) US11121986B2 (en)
CN (1) CN112307177A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4268118A1 (en) 2020-12-22 2023-11-01 Liveperson, Inc. Conversational bot evaluation and reinforcement using meaningful automated connection scores
US20230403244A1 (en) * 2021-06-15 2023-12-14 Meta Platforms, Inc. Methods, mediums, and systems for responding to a user service prompt

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102223363A (en) * 2010-04-16 2011-10-19 阿瓦雅公司 System and method for generating persistent sessions in a graphical interface for managing communication sessions
US20170118336A1 (en) * 2015-10-21 2017-04-27 Genesys Telecommunications Laboratories, Inc. Dialogue flow optimization and personalization
CN108109689A (en) * 2017-12-29 2018-06-01 李向坤 Diagnosis and treatment session method and device, storage medium, electronic equipment
CN110070862A (en) * 2018-01-19 2019-07-30 国际商业机器公司 The method and system guided automatically based on ontology of conversational system based on state

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003238619A1 (en) * 2002-05-14 2003-11-11 Mark J. Conway Interactive web collaboration systems and methods
US8302019B2 (en) * 2002-11-05 2012-10-30 International Business Machines Corporation System and method for visualizing process flows
US20090287528A1 (en) * 2008-05-19 2009-11-19 Robert Strickland Dynamic selection of work flows based on environmental conditions to facilitate data entry
US8654963B2 (en) 2008-12-19 2014-02-18 Genesys Telecommunications Laboratories, Inc. Method and system for integrating an interaction management system with a business rules management system
US20170109676A1 (en) 2011-05-08 2017-04-20 Panaya Ltd. Generation of Candidate Sequences Using Links Between Nonconsecutively Performed Steps of a Business Process
US10476971B2 (en) 2014-06-18 2019-11-12 Alfresco Software, Inc. Configurable and self-optimizing business process applications
US10401839B2 (en) * 2016-09-26 2019-09-03 Rockwell Automation Technologies, Inc. Workflow tracking and identification using an industrial monitoring system
US20180121841A1 (en) 2016-10-28 2018-05-03 ThinkProcess Inc. Computer-based end-to-end process modeler for designing and testing business processes
US10469665B1 (en) * 2016-11-01 2019-11-05 Amazon Technologies, Inc. Workflow based communications routing
US10554817B1 (en) * 2018-12-12 2020-02-04 Amazon Technologies, Inc. Automation of contact workflow and automated service agents in contact center system

Also Published As

Publication number Publication date
US11121986B2 (en) 2021-09-14
US20210036974A1 (en) 2021-02-04

Similar Documents

Publication Publication Date Title
US9736199B2 (en) Dynamic and collaborative workflow authoring with cloud-supported live feedback
US10248546B2 (en) Intelligent device selection for mobile application testing
US10572819B2 (en) Automated intelligent data navigation and prediction tool
US20190068630A1 (en) Cognitive Security for Workflows
US10977443B2 (en) Class balancing for intent authoring using search
US11144607B2 (en) Network search mapping and execution
US11144879B2 (en) Exploration based cognitive career guidance system
US11978060B2 (en) Dynamic categorization of it service tickets using natural language description
US10977247B2 (en) Cognitive online meeting assistant facility
CN112307177A (en) Generating a process flow model using an unstructured conversational robot
US10878804B2 (en) Voice controlled keyword generation for automated test framework
US11671385B1 (en) Automated communication exchange programs for attended robotic process automation
US20170075895A1 (en) Critical situation contribution and effectiveness tracker
US20210011947A1 (en) Graphical rendering of automata status
WO2023077989A1 (en) Incremental machine learning for a parametric machine learning model
US20230064112A1 (en) Lifecycle management in collaborative version control
CN117716373A (en) Providing a machine learning model based on desired metrics
US20220301087A1 (en) Using a machine learning model to optimize groupings in a breakout session in a virtual classroom
US11360763B2 (en) Learning-based automation machine learning code annotation in computational notebooks
US11182727B2 (en) Automatically detecting inconsistencies between a business process model and a corresponding tutorial video
US11138273B2 (en) Onboarding services
US20230267278A1 (en) Context-based response generation
US20210209531A1 (en) Requirement creation using self learning mechanism
US20220138614A1 (en) Explaining machine learning based time series models
US11811626B1 (en) Ticket knowledge graph enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination