WO2019083603A1 - Robust replay of digital assistant operations - Google Patents

Robust replay of digital assistant operations

Info

Publication number
WO2019083603A1
Authority
WO
WIPO (PCT)
Prior art keywords
digital assistant
instructions
assistant device
unexpected behavior
particular function
Application number
PCT/US2018/048066
Other languages
French (fr)
Inventor
Kiran Bindu HEMARAJ
Rajat Mukherjee
Original Assignee
Aiqudo, Inc.
Priority claimed from US15/984,122
Priority claimed from US16/110,108
Application filed by Aiqudo, Inc.
Publication of WO2019083603A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G06F 40/35 Discourse or dialogue representation

Definitions

  • Digital assistants have become ubiquitously integrated into a variety of consumer electronic devices. Modern day digital assistants employ speech recognition technologies to provide a conversational interface between users and electronic devices. These digital assistants can employ various algorithms, such as natural language processing, to improve interpretations of commands received from a user. Consumers have expressed various frustrations with conventional digital assistants due to privacy concerns, constant misinterpretations of spoken commands, unavailability of services due to weak signals or a lack of signal, and the general requirement that the consumer must structure their spoken command in a dialect that is uncomfortable for them.
  • the present disclosure generally describes embodiments that are directed towards systems and methods relating to a crowd-sourced digital assistant for computing devices.
  • embodiments facilitate the intuitive creation and distribution of action datasets that include reproducible computing events and associated commands that can be employed to invoke a reproduction of the computing events on computing devices having the crowd-sourced digital assistant installed and/or executing thereon.
  • a digital assistant system and application can perform any operation on a computing device by way of a received command, whereby the operations are limited only by the various operations executable on the computing device.
  • techniques are described to facilitate the detection and mitigation of unexpected or unanticipated events that may occur during the replay of operations performed by a digital assistant described in accordance with the present disclosure.
  • the described digital assistant and corresponding system provides an ever-growing and evolving library of dialects that enables the digital assistant to learn from its users, in contrast to the frustrating and limited interpretation features provided by conventional digital assistants. Further, because the digital assistant and corresponding system is configured with a framework for distributing improvements to its collection of actionable operations and understandable commands, and because the digital assistant utilizes applications existing on the computing device of each user, privacy concerns typically associated with conventional digital assistants are significantly reduced.
  • the digital assistant described in accordance with the present disclosure can reproduce or "replay" interpretable instructions corresponding to operations that were once recorded by a digital assistant device and stored in a data structure described herein as an action dataset.
  • interpretable instructions or reproducible operations can be associated with installed applications or other features of a digital assistant device's operating system, among other things.
  • unexpected behaviors can be detected on the digital assistant device while these instructions are being interpreted.
  • installed applications or operating systems can generate unexpected events (e.g., pop-up advertisements, errors, notifications, login/authentication prompts) when or while the instructions or operations of an action dataset are being respectively interpreted or reproduced.
  • installed applications can have different features or operations available for invocation depending on various behaviors, such as whether the application is a paid version or a free or trial version, among other things.
  • Such unexpected behaviors can, in many cases, interrupt and potentially terminate or cause an error in the replay process.
  • FIG. 1 is a block diagram of an exemplary computing environment for a crowd-sourced digital assistant system, in accordance with embodiments of the present invention.
  • FIG. 2 is a block diagram of an exemplary digital assistant device, in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a block diagram of an exemplary digital assistant server, in accordance with an embodiment of the present disclosure.
  • FIG. 4 is an exemplary data structure of an action dataset, in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a flow diagram showing a method for mitigating unexpected events during replay of reproducible operations by a digital assistant device of a crowd-sourced digital assistant network, according to various embodiments of the present invention.
  • FIG. 6 is a block diagram of an exemplary computing environment suitable for use in implementing embodiments of the present invention.
  • aspects of the technology described herein are generally directed towards systems and methods for generating, distributing, and replaying a set of reproducible operations via a digital assistant device.
  • Various embodiments can facilitate the creation, on- boarding, and interpretation or execution of action datasets, by any number of computing devices having an instance of the digital assistant installed in and/or executing thereon (hereinafter referenced as a "digital assistant device"). More specifically, the described embodiments relate to a digital assistant device that can generate, distribute, or interpret/execute an action dataset to facilitate a robust replay of recorded operations reproducible by the digital assistant device, such that unexpected events occurring during the replay are seamlessly accounted for.
  • an "operation” can correspond to a final result, output, or computing operation(s) that is generated, executed, or performed by a digital assistant device based on one or more action datasets selected and interpreted for execution by the digital assistant device, each action dataset being comprised of one or more reproducible computing events that can be invoked in response to a received command determined to correspond to the action dataset.
  • an “action” is described in reference to operation(s) that is performed in response to an action dataset selected and interpreted for execution.
  • an action can be performed, invoked, initiated, or executed, among other things, and any reference to the foregoing can imply that a corresponding action dataset is selected and interpreted for execution by the digital assistant device to perform the corresponding operation(s).
  • actions can be created, by the digital assistant device, which can record a series of detected events (e.g., inputs) that are typically provided by a user of the digital assistant device when manually invoking the desired operation (e.g., with manual inputs via a touchscreen or other input method of the digital assistant device). That is, to create a new action dataset, the digital assistant device can invoke a recording mode where a user can simply perform a series of computing operations (e.g., manual touches, click inputs) within one or more applications to achieve a desired result, execute a specific operation, and/or initiate a particular feature associated with one or more of the applications.
  • actions can include additional and/or alternative events (e.g., inputs) that can be provided by a user of the digital assistant device or defined by a developer or administrator of the digital assistant system described herein.
  • additional and/or alternative events can also perform a corresponding series of computing operations within one or more applications to achieve the desired result, execute a specific operation, and/or initiate a particular feature associated with one or more of the applications.
  • a command representation can be received as speech data and converted to text (e.g., by a speech engine of the digital assistant device), or received as text input data.
  • a "command” is referenced herein to describe data, received as speech data or as text data.
  • a "command representation,” on the other hand is referenced to describe text data that is received, based on inputs (e.g., keyboard), received speech data converted to text data, or received text data communicated from another computing device.
  • a "command template” is referenced herein to describe a portion of a command representation having defined parameter fields in place of variable terms.
  • one or more terms or keywords in the received command can be defined as a parameter based on input(s) received from the user.
  • a parameter in accordance with the present disclosure, can be referenced as corresponding to one of a plurality of predefined parameter types, such as but not limited to, genre, artist, title, location, name or contact, phone number, address, city, state, country, day, week, month, year, and more. It is also contemplated that the digital assistant device can access from a memory, or retrieve (e.g., from a server), a set of predefined parameter types that are known or determined to correspond to the application or applications for which an action dataset is being created.
  • the set of predefined parameter types can be determined based at least in part on corresponding application identifying information.
  • the digital assistant device can extract, based on the defined parameters, the corresponding keywords and generate a command template based on the remaining terms and the defined parameters.
  • a command template may include the originally received command terms if no parameters are defined, or may include a portion of the originally received command terms with parameter fields defined therein, the defined parameters corresponding to variable terms of a command.
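  • By way of illustration only, the following minimal Python sketch (with a hypothetical template syntax and names that are not drawn from this disclosure) shows how a command template with parameter fields might be matched against a received command representation, extracting the variable terms as parameters:

```python
import re

# Hypothetical template syntax: literal terms with "{parameter}" fields
# standing in for variable terms, e.g. "play {title} by {artist}".
PARAM_FIELD = re.compile(r"\{(\w+)\}")

def template_to_regex(template: str) -> re.Pattern:
    """Compile a command template into a regex with a named group per parameter field."""
    parts = PARAM_FIELD.split(template)
    pattern = "".join(
        re.escape(part) if i % 2 == 0 else f"(?P<{part}>.+)"
        for i, part in enumerate(parts)
    )
    return re.compile(f"^{pattern}$", re.IGNORECASE)

def match_command(template: str, command_representation: str) -> dict | None:
    """Return the extracted parameter terms if the representation fits the template."""
    m = template_to_regex(template).match(command_representation)
    return m.groupdict() if m else None

print(match_command("play {title} by {artist}", "play hey jude by the beatles"))
# {'title': 'hey jude', 'artist': 'the beatles'}
```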
  • the digital assistant device can receive, among other things, application identifying information, a recorded series of events, and a set of command templates, to generate a new action dataset that can be retrieved, interpreted, and/or invoked by the digital assistant device, simply based on a determination, by the digital assistant device, that a received command or command representation is associated with the action dataset.
  • the digital assistant device can reproduce (e.g., emulate, invoke, execute, perform) the recorded series of events associated with the corresponding action dataset, thereby performing the desired operation.
  • the digital assistant device can anticipate, based on the invoked action, various application(s) or operating system conditions (e.g., displayed GUI elements, active or interactive GUI elements, notifications, pop-ups) that occur or are presented as the recorded series of events is being reproduced.
  • the digital assistant device can select an alternative series of events of the action dataset to address the unexpected behavior and proceed with reproduction of the alternative series of events to perform the desired operation.
  • the parameter term can be employed, by the digital assistant device, to perform custom operations while performing the action. For instance, the digital assistant device can input the parameter term as a text input into a field of the application.
  • an action dataset once created by the digital assistant device, can be uploaded (hereinafter also referenced as "on-boarded") to a remote server for storage thereby.
  • the action dataset can be on-boarded automatically upon its generation or on-boarded manually based on a received instruction, by the digital assistant device. It is contemplated that individuals may want to keep their actions or command templates private, and so an option to keep an action dataset limited to local storage may be provided to the user (e.g., via a GUI element).
  • the server upon receiving an on-boarded action dataset, can analyze the action dataset and generate an associated action signature based on the characteristics and/or contents of the action dataset.
  • Contents of an action dataset can include, among other things, application identifying information, corresponding command templates and parameters, and a recorded series of events.
  • the action signature can be generated by various operations, such as hashing the on-boarded action dataset with a hashing algorithm, by way of example. It is also contemplated that the action signature can be generated by the on-boarding digital assistant device, the generated action signature then being stored in or appended to the action dataset before it is uploaded to the server.
  • the server can determine that the on-boarded action dataset already exists on the server, based on a determination that the action signature corresponds to the action signature of another action dataset already stored on the server.
  • the server can either dispose of the on-boarded action dataset or merge the on-boarded action dataset (or determined differing portion(s) thereof) with an existing action dataset stored thereby, preventing redundancy and saving storage space.
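  • By way of a non-limiting sketch of this on-boarding flow (the disclosure mentions hashing but does not fix an algorithm; SHA-256 and the field layout below are assumptions):

```python
import hashlib
import json

def action_signature(action_dataset: dict) -> str:
    """Hash a canonical serialization of the action dataset's contents."""
    canonical = json.dumps(action_dataset, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

stored_signatures: set[str] = set()  # signatures of datasets already on the server

def on_board(action_dataset: dict) -> bool:
    """Keep the on-boarded dataset only if its signature is not already stored."""
    signature = action_signature(action_dataset)
    if signature in stored_signatures:
        return False  # duplicate: dispose of it, or merge any differing portions
    stored_signatures.add(signature)
    return True
```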
  • the server can analyze the on-boarded action dataset to determine if its contents (e.g., the recorded events, command templates, metadata) comply with one or more defined policies (e.g., inappropriate language, misdirected operations, incomplete actions) associated with general usage of the digital assistant system.
  • the server can employ machine learning algorithms, among other things, to perform a variety of tasks, such as determining relevant parameter types, generating additional command templates for association with an on-boarded or stored action dataset, comparing similarity of events between on-boarded action datasets to identify and select more efficient routes for invoking an operation, and more.
  • the server can distribute one or more stored actions datasets to a plurality of digital assistant devices in communication with the server.
  • each digital assistant device can receive action datasets or portions thereof (e.g., command templates) from the server.
  • the action datasets can be distributed to the digital assistant devices in a variety of ways.
  • the server can freely distribute any or all determined relevant action datasets to digital assistant devices.
  • an application profile including a list of applications installed on a digital assistant device can be communicated to the server. Based on the application profile for the digital assistant device, the server can distribute any or all determined relevant action datasets to the digital assistant device.
  • because digital assistant devices can include a variety of operating systems, and versions of applications installed thereon can also vary, it is contemplated that the application profile communicated by a digital assistant device to the server may include operating system and application version information, among other things, so that appropriate and relevant action datasets are identified by the server for distribution to the digital assistant device.
  • an action dataset profile including a list of action datasets or action signatures stored on the digital assistant device can be communicated to the server. In this way, only missing or updated action datasets can be distributed to the digital assistant device.
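  • For example, server-side selection against these two profiles could be sketched as follows (a hypothetical reading; the field names are illustrative only):

```python
def datasets_to_distribute(server_datasets: list[dict],
                           application_profile: list[dict],
                           device_signatures: set[str]) -> list[dict]:
    """Pick datasets matching the device's installed apps and versions,
    skipping any whose action signatures the device already holds."""
    installed = {(app["name"], app["version"], app["os_version"])
                 for app in application_profile}
    return [
        d for d in server_datasets
        if (d["app_name"], d["app_version"], d["os_version"]) in installed
        and d["signature"] not in device_signatures
    ]
```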
  • a user can simply announce a command to the digital assistant device, and if a corresponding action dataset is not stored on the digital assistant device, the digital assistant device can send the command (representation) to the server for determination and selection of a set of relevant action datasets, which can then be communicated to the digital assistant device.
  • the digital assistant device can retrieve, from the server, a set of determined most relevant action datasets, without additional configuration or interaction by the user, also reducing server load and saving bandwidth by inhibiting extraneous transfer of irrelevant action datasets.
  • a retrieved set of relevant action datasets can be received from the server for invocation by the digital assistant device. It is further contemplated that if two or more action datasets are determined equally relevant to a received command, each action dataset may be retrieved from the server, and the digital assistant device can provide for display a listing of the determined relevant action datasets for selection and execution.
  • a user of a digital assistant device can customize command templates associated with an action dataset corresponding to an application installed on their digital assistant device.
  • a user can employ the digital assistant (or a GUI thereof) to select an action dataset from a list of action datasets stored on the computing device, select an option to add a new command to the action dataset, and define a new command and any associated parameters for storage in the action dataset.
  • the user can add any custom command and parameter that can later be understood by the digital assistant device to invoke the action.
  • the custom command and/or modified action can be on-boarded to the server for analysis and storage, as noted above.
  • the server can distribute the custom command and/or at least a portion of the modified action dataset to a plurality of other digital assistant devices.
  • the list of understandable commands and corresponding actions can continue to grow and evolve, and be automatically provided to any other digital assistant device.
  • an example operating environment 100 in which some embodiments of the present disclosure may be employed is depicted. It should be understood that this and other arrangements and/or features described by the enclosed document are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) or features can be used in addition to or instead of those described, and some elements or features may be omitted altogether for the sake of clarity. Further, many of the elements or features described in the enclosed document may be implemented in one or more components, or as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, some functions may be carried out by a processor executing instructions stored in memory.
  • the system in FIG. 1 includes one or more digital assistant devices 110, 115a, 115b, 115c, ... 115n, in communication with a server 120 via a network 130 (e.g., the Internet).
  • the server 120 also in communication with the network 130, is in communication with each of the digital assistant devices 110, 115a-115n, and can also be in communication with a database 140.
  • the database 140 can be directly coupled to the server 120 or coupled to the server 120 via the network 130.
  • the digital assistant device 110, representative of other digital assistant devices 115a-115n, is a computing device comprising one or more applications 112 and a digital assistant module 114 installed and/or executing thereon.
  • the one or more applications 112 includes any application that is executable on the digital assistant device 110, and can include applications installed via an application marketplace, custom applications, web applications, side-loaded applications, applications included in the operating system of the digital assistant device 110, or any other application that can be reasonably considered to fit the general definition of an application or mobile application.
  • the digital assistant module 114 can provide digital assistant services installed on the digital assistant device 110 or provided by the server 120 via the network 130, or can be implemented at least partially into an operating system of the digital assistant device 110.
  • the digital assistant module 114 provides an interface between a digital assistant device 110 and an associated user (not shown), generally via a speech-based exchange, although any other method of exchange between user and digital assistant device 110 (e.g., keyboard input, communication from another digital assistant device or computing device) remains within the purview of the present disclosure.
  • voice commands are received by the digital assistant device 110
  • the digital assistant module 114 can convert the speech command to text utilizing a speech-to- text engine (not shown) to extract identified terms and generate a command representation.
  • the digital assistant module 114 can receive the command representation, and determine that the command representation corresponds to at least one command template of at least one action dataset stored on the digital assistant device.
  • the digital assistant module can generate an index of all command templates stored on the digital assistant device 110 for faster searching and comparison of the received command representation to identify a corresponding command template, and thereby a corresponding action dataset.
  • Each indexed command template can be mapped to a corresponding action dataset, which can be interpreted for execution in response to a determination of a confirmed match with the received command representation.
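  • A minimal sketch of such an index follows (illustrative only; it ignores parameter fields and the fuzzy matching contemplated elsewhere herein):

```python
class CommandTemplateIndex:
    """Maps normalized command templates to the ids of their action datasets,
    so a received command representation can be resolved without a full scan."""

    def __init__(self) -> None:
        self._index: dict[str, str] = {}

    def add(self, command_template: str, action_dataset_id: str) -> None:
        self._index[command_template.strip().lower()] = action_dataset_id

    def lookup(self, command_representation: str) -> str | None:
        return self._index.get(command_representation.strip().lower())
```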
  • a command template can include one or more keywords and/or one or more parameters that each have a corresponding parameter type.
  • Each command template generally corresponds to an operation that can be performed on one or more applications 112 installed on a digital assistant device 110.
  • a plurality of command templates can correspond to a single operation, such that there are multiple equivalent commands that can invoke the same operation.
  • commands such as "check in,” check into flight,” “please check in,” “check into flight now,” “check in to flight 12345,” and the like, can all invoke the same operation that, by way of example only, directs the digital assistant module 114 to execute an appropriate airline application on the digital assistant device 110 and perform a predefined set of events or computer operations to achieve the same result.
  • the aforementioned commands may lack appropriate information (e.g., the specific airline).
  • a user may have multiple applications 112 from various vendors (e.g., airlines) associated with a similar service (e.g., checking into flights).
  • a digital assistant device 110 in accordance with embodiments described herein can provide features that can determine contextual information associated with the digital assistant device 110, or its associated user, based on historical use of the digital assistant device 110, profile information stored on the digital assistant device 110 or server 120, stored parameters from previous interactions or received commands, indexed messages (e.g., email, text messages) stored on the digital assistant device, and a variety of other types of data stored locally or remotely on a server, such as server 120, to identify a most relevant parameter and supplement a command to select a most relevant action dataset. More specific commands, such as "check into Friendly Airline flight" or "Friendly Airline check in," and the like, where a parameter is specifically defined in the command, can be recognized by the digital assistant module 114.
  • One or more recognizable commands and corresponding action datasets can be received by the digital assistant device 110 from the server 120 at any time, including upon installation, initialization, or invocation of the digital assistant module 114, after or upon receipt of a speech command by the digital assistant module 114, after or upon installation of a new application 112, periodically (e.g., once a day), when pushed to the digital assistant device 110 from the server 120, among many other configurations. It is contemplated that the action datasets received by the digital assistant device 110 from the server 120 can be limited based at least in part on the applications 112 installed on the digital assistant device 110, although configurations where a larger or smaller set of action datasets received are contemplated.
  • digital assistant module 114 can either redirect the user to a marketplace (e.g., launch an app marketplace application) to install the appropriate application determined by the server 120 based on the received command, or can invoke an action training program that prompts a user to manually perform tasks on one or more applications to achieve the desired result, the tasks being recorded and stored into a new action dataset by the digital assistant device 110.
  • the digital assistant module 114 can also receive one or more commands from the user (e.g., via speech or text) to associate with the action dataset being generated.
  • the action training program can facilitate a definition of such parameters and corresponding parameter types to generate command templates for inclusion in the action dataset being generated.
  • a command template(s) is associated with at least the particular application designated by the user and also corresponds to the one or more tasks manually performed by the user, associating the generated command template to the task(s) and thus the desired resulting operation.
  • the server 120 can provide a determined most-relevant action dataset to the digital assistant device 110 based on the received command.
  • the server 120 can store and index a constantly-growing and evolving plurality of crowd-sourced action datasets submitted by or received from digital assistant devices 115a-115n also independently having a digital assistant module 114 and any number of applications 112 installed thereon.
  • the digital assistant devices 115a-115n may have any combination of applications 112 installed thereon, and any generation of action datasets performed on any digital assistant device 110, 115a-115n can be communicated to the server 120 to be stored and indexed for mass or selective deployment, among other things.
  • the server 120 can include various machine-learned algorithms to provide a level of quality assurance on command templates included in on-boarded action datasets and/or the tasks and operations performed before they are distributed to other digital assistant devices via the network 130.
  • when the digital assistant module 114 determines an appropriate action dataset (e.g., one or more tasks to achieve a desired result) having one or more command templates that correspond to the received command, the digital assistant module 114 can generate an overlay interface that can mask any or all visual outputs associated with the determined action or the computing device generally.
  • the generation of the overlay interface can include a selection, by the digital assistant module 114, of one or more user interface elements that are stored in a memory of the digital assistant device 110 or server 120, and/or include a dynamic generation of the user interface element(s) by the digital assistant module 114 or server 120 based on one or more portions of the received command and/or obtained contextual data (e.g., determined location data, user profile associated with the digital assistant device 110 or digital assistant module 114, historical data associated with the user profile, etc.) obtained by the digital assistant device 110, digital assistant module 114, and/or server 120.
  • the selected or generated one or more user interface elements can each include content that is relevant to one or more portions (e.g., terms, keywords) of the received command.
  • such elements can be saved locally on the digital assistant device 110 or remotely on the server 120 for subsequent retrieval by the digital assistant device 110, or can be discarded and dynamically regenerated at any time.
  • The example operating environment depicted in FIG. 1 is suitable for use in implementing embodiments of the invention.
  • environment 100 is suitable for creating, on-boarding, storing, indexing, crowd-sourcing (e.g., distributing), and invoking actions or action datasets, among other things.
  • Environment 100 includes digital assistant device 110, action cloud server 120 (hereinafter also referenced as "server") and network 130.
  • Digital assistant device 110 can be any kind of computing device having a digital assistant module installed and/or executing thereon, the digital assistant module being implemented in accordance with at least some of the described embodiments.
  • digital assistant device 110 can be a computing device such as computing device 600, as described below with reference to FIG. 6.
  • digital assistant device 110 can be a personal computer (PC), a laptop computer, a workstation, a mobile computing device, a PDA, a cell phone, a smart watch or wearable, or the like. Any digital assistant device described in accordance with the present disclosure can include features described with respect to the digital assistant device 110.
  • a digital assistant device can include one or more applications 112 installed and executable thereon.
  • the one or more applications 112 includes any application that is executable on the digital assistant device, and can include applications installed via an application marketplace, custom applications, web applications, side-loaded applications, applications included in the operating system of the digital assistant device, or any other application that can be reasonably considered to fit the general definition of an application.
  • the digital assistant module can be an application, a service accessible via an application installed on the digital assistant device or via the network 130, or implemented into a layer of an operating system of the digital assistant device 110.
  • the digital assistant module 114 can provide an interface between a digital assistant device 110 and a user (not shown), generally via a speech-based exchange, although any other method of exchange between user and digital assistant device may be considered within the purview of the present disclosure.
  • action cloud server 120 can be any kind of computing device capable of facilitating the on-boarding, storage, management, and distribution of crowd-sourced action datasets.
  • action cloud server 120 can be a computing device such as computing device 600, as described below with reference to FIG. 6.
  • action cloud server 120 comprises one or more server computers, whether distributed or otherwise.
  • any of the components of environment 100 may communicate with each other via a network 130, which may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs).
  • the server 120 can include or be in communication with a data source 140, which may comprise data sources and/or data systems, configured to make data available to any of the various constituents of the operating environment 100.
  • Data sources 140 may be discrete from the illustrated components, or any combination thereof, or may be incorporated and/or integrated into at least one of those components.
  • digital assistant device 210 (also depicted as digital assistant device 110 of FIG. 1) is suitable for receiving commands or command representations, selecting action datasets to execute by matching received commands to command templates of action datasets, or determining that no action datasets correspond to received commands, interpreting a selected action dataset to execute the associated operation, generating new action datasets, and sending action datasets to or receiving action datasets from a digital assistant server, such as server 120.
  • Digital assistant device 210 can include, among other things, a command receiving component 220, an action matching component 230, an action executing component 240, a behavior detecting component 250, an event sequence selection component 260, a training component 270, and a server interfacing component 280.
  • the command receiving component 220 can receive a command, either in the form of speech data or text data.
  • the speech data can be received via a microphone of the digital assistant device 210, or another computing device paired to or in communication with the digital assistant device 210.
  • the command receiving component 220 after receiving the speech data, can employ a speech-to-text engine of the digital assistant device 210 to generate a command representation (e.g., a text string of the command).
  • Text data received by command receiving component 220 can be received via a virtual keyboard or other input method of the digital assistant device 210, and similarly, can be received from another computing device paired to or in communication with the digital assistant device 210. Received text data is already in the form of a command representation, and is treated as such.
  • command receiving component 220 can be invoked manually by a user (e.g., via an input to begin listening for or receiving the command), or can be in an always-listening mode.
  • action matching component 230 can determine whether one or more action datasets stored on the digital assistant device 210 include a command template that corresponds to or substantially corresponds (e.g., at least 90% similar) to the received command representation.
  • if a corresponding command template is identified, the action dataset in which that command template is stored is selected for interpretation by action executing component 240.
  • if a corresponding command template cannot be identified, either the training component 270 can be invoked, or the received command is communicated to the digital assistant server (depicted as server 120 of FIG. 1 and digital assistant server 310 of FIG. 3) via the server interfacing component 280.
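  • One way to approximate the "substantially corresponds" test above is a similarity ratio against each stored template, as in this sketch (difflib is merely one possible similarity measure, not the disclosed one):

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.90  # "at least 90% similar," per the text above

def best_matching_template(command_representation: str,
                           command_templates: list[str]) -> str | None:
    """Return the most similar template, or None if nothing clears the threshold."""
    if not command_templates:
        return None
    score, template = max(
        (SequenceMatcher(None, command_representation.lower(), t.lower()).ratio(), t)
        for t in command_templates
    )
    return template if score >= SIMILARITY_THRESHOLD else None
```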
  • the action executing component 240 can receive a selected action dataset, either selected by digital assistant device 210 from local storage, by the digital assistant server from storage accessible thereto, or selected from a list presented by digital assistant device 210.
  • the action executing component 240 can, from the received action dataset, interpret a set of instructions otherwise known as event data, which may include executable code, links, deep links, references to GUI elements, references to screen coordinates, field names, or other pieces of data that can correspond to one or more tasks or events stored in the selected action dataset.
  • the action executing component 240 can reproduce the events that were recorded when the action dataset was initially generated, by any digital assistant device such as digital assistant device 210.
  • the event data can include time delays, URLs, deep links to application operations, or any other operation that can be accessed, processed, emulated, or executed by the action executing component 240.
  • events such as clicks or touch inputs, can be reproduced on the digital assistant device 210, based on the interpreted event data stored in an action dataset.
  • an action dataset can include more than one set of event data (e.g., different sets of interpretable instructions), whereby each set can be interpreted by action executing component 240 to reproduce (e.g., initiate) a corresponding set of events.
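  • An interpreter for one such set of event data might look like the following sketch (the event kinds and input-injection stubs are hypothetical; a real implementation would drive the platform's input and accessibility APIs):

```python
import time

def emulate_tap(element_id: str) -> None:
    print(f"tap {element_id}")            # stand-in for real input injection

def emulate_text(field_name: str, text: str) -> None:
    print(f"type {text!r} into {field_name}")

def open_deep_link(url: str) -> None:
    print(f"open {url}")                  # stand-in for launching a deep link

def reproduce(event_data: list[dict]) -> None:
    """Interpret a set of event data, reproducing each recorded event in order."""
    for event in event_data:
        kind = event["kind"]
        if kind == "tap":
            emulate_tap(event["element_id"])      # reference to a GUI element
        elif kind == "text_input":
            emulate_text(event["field"], event["text"])
        elif kind == "deep_link":
            open_deep_link(event["url"])
        elif kind == "delay":
            time.sleep(event["seconds"])          # recorded time delay
        else:
            raise ValueError(f"uninterpretable event kind: {kind}")
```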
  • events can be recorded or programmed by any digital assistant device, such as digital assistant device 210, or by another computing device such as one operated by an administrator of the digital assistant system described herein.
  • because application(s) executing on the digital assistant device or the operating system of the digital assistant device could generate or present unexpected events or behaviors as event data is being interpreted (or as events are being reproduced), the digital assistant device 210 can include a behavior detecting component 250 in accordance with some embodiments described herein.
  • the behavior detecting component 250 can include, among other things, an event listener that actively listens to computing events that are generated by the digital assistant device 210. In some aspects, the behavior detecting component 250 can also determine whether anticipated events or behaviors are present or are generated by the digital assistant device 210. By way of example, the behavior detecting component 250 can parse graphical user interface elements or element trees generated or provided by applications executing on the digital assistant device 210 or by the operating system thereof, and determine whether certain GUI elements required to reproduce an event of an action dataset are present (e.g., displayed or queued for display). For instance, a required GUI element may need to be present for a particular event to be reproduced thereon.
  • an unexpected behavior can include, without limitation, pop-ups, login or authentication prompts, notifications, the presence or lack of presence of a particular GUI element, or the interactability (e.g., ability to receive input) or non-interactability (e.g., inability to receive input) of a particular GUI element, among other things.
  • the event sequence selection component 260 can, among other things, select one or more alternative sets of event data of an action dataset to interpret or initiate based on a determination by the digital assistant device 210 that an unexpected behavior is present (e.g., is displayed or not displayed, has occurred or has not occurred, was generated or not generated) while another set of event data is being interpreted.
  • an action dataset may include a primary set of event data that is interpretable to reproduce events that initiate a particular feature of an application.
  • the primary set of event data can include instructions that correspond to an ideal scenario where no unexpected behaviors are determined or detected as present throughout a duration of reproducing the events corresponding to the particular application feature.
  • the action dataset may also include at least a secondary set of event data that is also interpretable to reproduce or initiate a particular feature of the application, but includes instructions that correspond to a non-ideal scenario where one or more unexpected behaviors are detected or determined present while the primary set of event data is being interpreted.
  • the secondary set of event data can include instructions that are interpretable only to address the unexpected behavior. For instance, if a particular notification or pop-up is displayed as the primary set of event data is being interpreted, the behavior detecting component 250 can detect the unexpected behavior and cause the event sequence selection component 260 to select the secondary set of event data to, among other things, dismiss, terminate, close, or swipe away the notification or pop-up.
  • the primary set of event data can correspond to reproducible events for initiating a particular feature of a fully-paid version of an application that includes certain GUI elements, such as GUI elements that are not displayed or active in a free version of the application.
  • a determination by the behavior detecting component 250—that the GUI element is not displayed or active— can cause the sequence selection component 260 to select the secondary set of event data for interpretation, whereby the secondary set of event data includes instructions that are interpretable to reproduce an alternative route or set of events that also initiate the particular feature of the application.
  • the secondary set of event data or any other set of event data can include a reference or identifier that corresponds to one or more unexpected behaviors. That is, in some instances, the behavior detecting component 250 can determine a type (e.g., pop-up, notification, login/authentication prompt, GUI element state) of unexpected behavior that is present while another set of event data is being interpreted. In this regard, based on the determined type, the event sequence selection component 260 can select one or more corresponding sets of event data that can be interpreted to address the determined types of unexpected behaviors.
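  • A simplified sketch of such detection over a GUI element tree follows (the tree shape, roles, and type labels are assumptions for illustration only):

```python
def find_element(node: dict, element_id: str) -> dict | None:
    """Depth-first search for a required GUI element in an element tree."""
    if node.get("id") == element_id:
        return node
    for child in node.get("children", []):
        found = find_element(child, element_id)
        if found is not None:
            return found
    return None

def detect_unexpected_behavior(root: dict, required_element_id: str) -> str | None:
    """Return a behavior type (e.g., 'popup'), or None if replay can proceed."""
    for child in root.get("children", []):
        if child.get("role") in ("popup", "notification", "login_prompt"):
            return child["role"]
    element = find_element(root, required_element_id)
    if element is None:
        return "missing_element"
    if not element.get("interactable", True):
        return "non_interactable_element"
    return None
```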
  • the training component 270 can facilitate the generation of an action dataset, among other things.
  • an indication, such as a GUI element, indicating that an action recording session has begun, may be presented for display.
  • a prompt to provide the tasks or events required to perform the desired operation can also be presented for display.
  • a user can begin by first launching an application with which the operation is associated, and proceed with providing inputs to the application (i.e., performing the requisite tasks).
  • the inputs can be recorded by the digital assistant device 210, and the training component 270 can listen for, parse, identify, and record a variety of attributes of the received inputs, such as long or short presses, time delays between inputs, references to GUI elements interacted with, field identifiers, application links activated based on received inputs (e.g., deep links), and the like.
  • the recorded inputs and attributes (e.g., event data) can be stored into the new action dataset being generated.
  • the application launched is also identified, and any application identifying information, such as operating system, operating system version, application version, paid or free version status, and more, can be determined from associated metadata and also stored into the new action dataset.
  • a user can activate a training termination button, which can be presented as a floating button or other input mechanism that is preferably positioned away from an active portion of the display.
  • Other termination methods are also contemplated, such as voice activated termination, or motion activated termination, without limitation.
  • the training component 270 can further request that the user provide a set of commands that correspond to the desired operation.
  • a command can be received via speech data and converted to a command representation by a speech to text engine, or received via text input as a command representation, among other ways.
  • the training component 270 can further prompt the user to define any relevant parameters or variables in the command representations, which can correspond to keywords or values that may change whenever the command is spoken.
  • a user may select one or more terms included in the received command representations, and define them with a corresponding parameter type selected from a list of custom, predefined, or determined parameter types, as described herein.
  • the training component 270 can then extract the selected one or more terms from a command representation defined as parameter(s), replacing them with parameter field identifier(s) of a corresponding parameter type, and store the resulting data as a command template.
  • the training component 270 can then generate the action dataset from the recorded event sequence, the application identifying information, and the one or more defined command templates.
  • the training component 270 can generate an action signature or unique hash based on the generated action dataset or one or more portions of data included therein.
  • the action signature can be employed by the digital assistant server to determine whether the action dataset or data included therein is redundant, among other things.
  • digital assistant server 310 (also depicted as server 120 of FIG. 1) is suitable for establishing connections with digital assistant device(s) 210, receiving generated action datasets, maintaining or indexing received action datasets, receiving commands or command representations to determine one or more most relevant action datasets for selection and communication to a sending digital assistant device of the command or command representation, and distributing action datasets to other digital assistant devices, such as digital assistant device 210.
  • Digital assistant server 310 can include, among other things, an on-boarding component 320, an indexing component 330, a maintenance component 340, a relevance component 350, and a distribution component 360.
  • the on-boarding component 320 can receive action datasets generated by one or more digital assistant devices 210 in communication therewith.
  • the on-boarding component can generate an action signature for a received action dataset, similar to how a digital assistant device may, as described herein above.
  • the action signature can be searched utilizing the indexing component 330, which maintains an index of all action datasets stored by the digital assistant server 310.
  • the indexing component 330 facilitates quick determination of uniqueness of received action datasets, and reduces redundancy and processing load of the digital assistant server 310.
  • the maintenance component 340 can determine whether any portion of a received action dataset is different than action datasets already stored on or by the server (e.g., in a database), and extract such portions for merging into the existing corresponding action datasets. Such portions may be identified in circumstances where only command templates are hashed in the action signature, or where each portion of the action dataset (e.g., application identifying information, command template(s), event sequence) is independently hashed, either by training component 270 of FIG. 2 or on-boarding component 320 of FIG. 3, to more easily identify changes or differences between action datasets.
  • a received action dataset can include separate hashes for its application identifying information, event sequence, and command template(s).
  • the digital assistant server 310 can employ the indexing component 330 and maintenance component 340 to quickly identify, for instance, that the received action data corresponds to a particular application and operation, or that the command template(s) are different than those stored in the stored action dataset by virtue of the command template hashes being different.
  • the independent hash signatures for each portion of data included in an action dataset can facilitate efficient determination of changes or differences between any combination of data portions in a received action dataset and a stored action dataset.
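  • The per-portion comparison described above could be sketched as follows (the portion names are taken from the data portions discussed herein; the hash choice is an assumption):

```python
import hashlib
import json

PORTIONS = ("application_identifying_information", "event_sequence", "command_templates")

def portion_hashes(action_dataset: dict) -> dict[str, str]:
    """Independently hash each data portion of an action dataset."""
    return {
        portion: hashlib.sha256(
            json.dumps(action_dataset[portion], sort_keys=True).encode("utf-8")
        ).hexdigest()
        for portion in PORTIONS
    }

def changed_portions(received: dict, stored: dict) -> list[str]:
    """List the portions whose hashes differ, e.g. newly added command templates."""
    received_h, stored_h = portion_hashes(received), portion_hashes(stored)
    return [p for p in PORTIONS if received_h[p] != stored_h[p]]
```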
  • Relevance component 350 can determine, based on commands or command representations received from a digital assistant device 210, a likelihood that a particular command template corresponds to the received command or command representation. While a variety of relevance determining methods may be employed, a machine learning implementation may be preferable, though a ranking of determined most similar command templates to a command or command representation received from a digital assistant device 210 can also facilitate a determination of relevance and therefore one or more most relevant command templates. Determined most-relevant command templates can thereby facilitate the selection of a most relevant action dataset to be distributed to the command-sending digital assistant device 210.
  • the distribution component 360 can distribute or communicate to one or more digital assistant devices 210, determined relevant or most relevant action datasets, determined new action datasets, determined updated action datasets, any portion and/or combination of the foregoing, or generated notifications corresponding to any portion and/or combination of the foregoing, among other things, based on a variety of factors.
  • the distribution component 360 can include features that determine, among other things, which applications are installed on a digital assistant device 210. Such features can enable the digital assistant server 310 to determine which action datasets or portions thereof are relevant to the digital assistant device 210, and should be distributed to the digital assistant device 210.
  • a digital assistant device 210 profile (not shown), describing all applications currently installed on or executable by a digital assistant device 210, can be maintained (e.g., stored, updated) by the digital assistant server 310.
  • the profile can be updated periodically, manually, or dynamically by a server interfacing component 280 of the digital assistant device 210 (e.g., whenever the digital assistant is in communication with and sends a command to the digital assistant server 310, or whenever an application is installed or updated on the digital assistant device 210).
  • the distribution component 360 can distribute or communicate notifications, action datasets, or portions thereof, in a variety of ways, such as pushing, sending in response to received requests for updates, sending in response to established communications with a digital assistant device 210, or by automatic wide scale (e.g., all digital assistant devices) or selective scale (e.g., region, location, app type, app name, app version) distribution, among other things.
  • Referring now to FIG. 4, a data structure 400 of an exemplary action dataset 410, in accordance with some of the described embodiments, is illustrated.
  • the depicted data structure is not intended to be limiting in any way, and any configuration of the depicted data portions of information may be within the purview of the present disclosure. Further, additional data portions or less data portions may be included in an action dataset 410 also remaining within the purview of the present disclosure.
  • the action dataset 410 includes application identifying information 420, recorded event sequence data 430, and command templates 440.
  • the action dataset 410 further includes hash(es) 450, which can include a hash value generated based on the entire action dataset 410, or hash values generated based on any portion of the aforementioned data portions 420, 430, 440, among other things.
  • the action dataset 410 can be generated by training component 270 of digital assistant device 210 of FIG. 2 and/or received from distribution component 360 of digital assistant server 310 of FIG. 3.
  • the application identifying information 420 can include information about a particular application whose execution is required to perform the particular operation for which the action dataset 410 was created. Exemplary pieces of application identifying information 420 are depicted in identifying information 425, which can include any one or more of: a name of the operating system (OS) on which the particular application is executed, a version of that OS, a defined native language of that OS, a name of the particular application, a version of the particular application, and the like. It is contemplated that the application identifying information 420 is required and checked (e.g., by the digital assistant server 310 of FIG. 3) before an action dataset 410 is distributed to a digital assistant device (e.g., digital assistant device 210 of FIG. 2) and employed by the digital assistant device, to ensure that the action dataset 410 is compatible with, or can be correctly interpreted by, action executing component 240 of FIG. 2, so that the corresponding and desired operation is performed by the digital assistant device 210.
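  • Purely as an illustrative mirror of data structure 400 (not a disclosed storage format), the data portions could be modeled as:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationIdentifyingInfo:      # identifying information 425
    os_name: str
    os_version: str
    os_language: str
    app_name: str
    app_version: str

@dataclass
class ActionDataset:                   # action dataset 410
    identifying_info: ApplicationIdentifyingInfo              # data portion 420
    event_sequences: list[list[dict]] = field(default_factory=list)  # 430, one or more sets
    command_templates: list[str] = field(default_factory=list)       # 440
    hashes: dict[str, str] = field(default_factory=dict)             # 450
```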
  • the recorded event sequence data 430 can include any or all task or event- related data that was obtained, received, or determined by the digital assistant device (e.g., via training component 270 of FIG. 2) responsible for generating the action dataset 410.
  • the recorded event sequence data 430 can also include task or event-related data that is defined (e.g., programmed) by a computing device, such as one operated by an administrator of the digital assistant system described herein.
  • one or more sets of task or event-related data can be stored within the action dataset 410, each corresponding to at least a portion of reproducible events that, when reproduced, can initiate a particular function of one or more applications associated with the action dataset.
  • the recorded event sequence data can include timing attributes of received inputs (e.g., delays before or in between successive inputs, durations of inputs), GUI elements interacted with, relative positions of GUI elements, labels or metadata of GUI elements, scroll inputs and distances, links or URLs accessed or activated, detected activation of application deep links in response to received inputs, and more.
  • the recorded event sequence data 430 may include conditions that require actual user intervention before subsequent events or tasks are resumed. For instance, secured login screens may require that a user input username and password information before an application is executed.
  • the recorded event sequence data 430 may include a condition corresponding to when user authentication has occurred, and instructions (e.g., interpretable by action executing component 240) to proceed with the tasks or events in the recorded event sequence data 430 based upon an occurrence of the condition.
  • the action executing component 240 of FIG. 2 can parse metadata, GUI elements, or other information from an executing application to determine when certain events occur or conditions are met.
  • additional conditions may be included in the recorded event sequence data 430 that require prior events or tasks to be completed, or certain GUI elements be displayed or interacted with, or any other conditions to be met, before subsequent events or tasks are performed by the action executing component 240 of FIG. 2.
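  • One hedged way to realize such a condition is to poll for it before resuming the remaining events, as sketched below (the element-presence check in the usage comment reuses the hypothetical helpers above):

```python
import time

def wait_for_condition(condition_met, timeout_s: float = 120.0,
                       poll_s: float = 0.5) -> bool:
    """Pause replay until a condition holds (e.g., the user completed a login),
    giving up after a timeout so the replay can fail gracefully."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition_met():
            return True
        time.sleep(poll_s)
    return False

# Usage sketch: resume the recorded events only once a post-login element exists.
# if wait_for_condition(lambda: find_element(current_gui_tree(), "home_screen")):
#     reproduce(remaining_event_data)
```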
  • a digital assistant device such as digital assistant device 210 of FIG. 2 can invoke the execution of an application that is referenced by an action dataset selected by the digital assistant device or a digital assistant server, such as digital assistant server 310, the action dataset being based on a command received by the digital assistant device.
  • Execution (e.g., start up, initialization, opening) of the application can be invoked in a variety of manners, such as events reproduced by the digital assistant device based on interpreted instructions of the action dataset, or activation of deep links included in the action dataset, among other things.
  • the action dataset can include at least a first set of instructions and a second set of instructions that are each interpretable by the digital assistant device to initiate a particular function of a corresponding application, or a portion of the operations required to initiate the particular function.
  • the first set of instructions can correspond to a primary set of instructions that can be interpreted to initiate the particular function in an ideal scenario where no unexpected behaviors are generated, displayed, or provided by the application(s) or operating system of the digital assistant device.
  • the first set of instructions can correspond to a default set of instructions that is selected for interpretation in response to the selection of the corresponding action dataset.
  • the first set of instructions can correspond to a set of instructions that is selected for interpretation in response to a determination that an unexpected event has occurred or is present.
  • the second set of instructions can be one of a set of additional sets of instructions that can be interpreted independently of, in combination with, and/or in addition to the first set of instructions to ultimately initiate the particular function.
  • the second set of instructions can correspond to reproducible events that address unexpected events that are detected, are determined to occur, or are determined to be present, while the first set of instructions is being interpreted or the corresponding events are being reproduced by the digital assistant device.
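To make the relationship between these sets of instructions concrete, here is a hypothetical sketch of an action dataset carrying a primary (default) set of instructions plus alternative sets keyed by the situation they address; the structure and key names are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ActionDataset:
    """Sketch: a primary instruction set for the ideal scenario, plus
    alternative sets interpretable when unexpected behavior arises."""
    command_templates: List[str]
    primary_instructions: List[dict]                 # first/default set
    alternative_instructions: Dict[str, List[dict]] = field(default_factory=dict)

dataset = ActionDataset(
    command_templates=["play music by <artist>"],
    primary_instructions=[{"event": "tap", "target": "search_box"}],
    alternative_instructions={
        "login_prompt": [{"event": "input_saved_credentials"}],
        "popup": [{"event": "close_prompt"}],
    },
)
```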
  • an interruption of the initiation of an application function can correspond to a determination, by the digital assistant device, that an expected or anticipated condition of one or more applications or the operating system of the digital assistant device did not occur or is not present.
  • an interruption of the initiation of the application function can correspond to a determination, by the digital assistant device, that an unexpected behavior of one or more applications or the operating system of the digital assistant device has occurred or is present, in accordance with various embodiments described herein.
  • the digital assistant device can determine a type of unexpected behavior that is detected while the first set of instructions is being interpreted or the corresponding events are being reproduced. In this regard, the digital assistant device can select one of a set of instructions or reproducible events included in the action dataset to appropriately address the detected unexpected behavior.
  • Each set of instructions can include, among other things, a reference or identifier that corresponds to a type of unexpected behavior.
  • the digital assistant device can select a set of instructions that is determined to correspond to the determined type of unexpected behavior, and interpret, at step 530, the selected set of instructions, thereby reproducing one or more events or operations to address the unexpected behavior.
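A minimal selection routine consistent with this description, assuming each alternative set of instructions is keyed by the type of unexpected behavior it addresses (the type names below are hypothetical):

```python
def select_instructions(alternative_instructions, behavior_type):
    """Select the set of instructions whose reference/identifier matches
    the determined type of unexpected behavior (cf. step 530)."""
    selected = alternative_instructions.get(behavior_type)
    if selected is None:
        raise LookupError(f"no instructions for behavior type: {behavior_type!r}")
    return selected

handlers = {
    "popup": [{"event": "close_prompt"}],
    "notification": [{"event": "clear_notification"}],
    "login_prompt": [{"event": "input_saved_credentials"}],
}
print(select_instructions(handlers, "popup"))  # [{'event': 'close_prompt'}]
```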
  • the interpreted set of instructions can, among other things, generate input events, close prompts, accept prompts, close windows or other GUI elements, input saved or predefined credentials, enforce a time delay, clear notifications, or reproduce an entirely different set of events, among other things.
  • the selected set of instructions can be interpreted to merely address the unexpected event, such that the digital assistant device can continue interpreting the first set of instructions at a point just before, when, or just after the unexpected event was determined to occur.
  • the digital assistant device can continue interpreting the first set of instructions so that each instruction is interpreted once (e.g., each corresponding event is reproduced once).
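The resume-where-interrupted behavior could be realized with an index into the first set of instructions, as in the following sketch; detect_behavior and reproduce are assumed device-specific callbacks, and the handler set is assumed to actually clear the behavior it addresses:

```python
def replay_with_recovery(primary, handlers, detect_behavior, reproduce):
    """Interpret the first (primary) set of instructions one at a time;
    on an unexpected behavior, interpret the matching handler set, then
    resume the primary set at the interrupted instruction so that each
    instruction is interpreted once."""
    i = 0
    while i < len(primary):
        behavior = detect_behavior()          # None if nothing unexpected
        if behavior is not None:
            for step in handlers.get(behavior, []):
                reproduce(step)               # address the unexpected event
            continue                          # re-check, then resume at index i
        reproduce(primary[i])
        i += 1
```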
  • the selected set of instructions can be interpreted to not only address the unexpected event, but to reproduce its corresponding set of events to initiate the particular function of the action dataset.
  • implementations having an entirely different set of events can be more probable when an action dataset is associated with multiple application versions, or when variable access rights to an application are available, such as a free or limited version of an application versus a paid version of the application.
  • different versions or different access rights of an application can present different GUI elements that can be interacted with, or different operation pathways (e.g., a series of operations that need to be performed) to initiate a particular function of the application.
  • a different set of events can be selected for interpretation to initiate the particular function when the digital assistant device detects predefined conditions that correspond to the unexpected event.
  • one or more sets of instructions or reproducible events can be incorporated into all action datasets or provided to the digital assistant device as a catch-all or universal action dataset, to merely address unexpected behaviors that may be applicable to a plurality of applications.
  • unexpected behaviors that are applicable to a plurality of applications can include, for example, operating system notifications (e.g., a low-battery notification).
  • when the digital assistant device detects such an unexpected event, such as the operating system notification, the digital assistant device can select a set of instructions from the catch-all or universal action dataset to address the unexpected event and carry on with its interpretation of the first set of instructions, as described hereinabove.
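A catch-all table of this kind might be shared across all action datasets; the following sketch is illustrative only, with hypothetical behavior-type keys and event names:

```python
# Handlers for unexpected behaviors applicable to many applications,
# e.g., dismissing an operating system low-battery notification, so that
# interpretation of the first set of instructions can then carry on.
UNIVERSAL_HANDLERS = {
    "os_notification": [{"event": "swipe_away_notification"}],
    "low_battery_popup": [{"event": "tap", "target": "dismiss_button"}],
}

def lookup_handler(dataset_handlers, behavior_type):
    """Prefer a dataset-specific handler; fall back to the universal set."""
    return dataset_handlers.get(behavior_type,
                                UNIVERSAL_HANDLERS.get(behavior_type))
```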
  • an exemplary computing environment suitable for implementing embodiments of the invention is now described.
  • an exemplary computing device is provided and referred to generally as computing device 600.
  • the computing device 600 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 600 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
  • Embodiments of the invention may be described in the general context of computer code or machine-useable instructions, including computer-useable or computer-executable instructions, such as program modules, being executed by a computer or other machine, such as a personal data assistant, a smartphone, a tablet PC, or other handheld device.
  • program modules including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types.
  • Embodiments of the invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialized computing devices, etc.
  • Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • computing device 600 includes a bus 610 that directly or indirectly couples the following devices: memory 612, one or more processors 614, one or more presentation components 616, one or more input/output (I/O) ports 618, one or more I/O components 620, and an illustrative power supply 622.
  • Bus 610 represents what may be one or more busses (such as an address bus, data bus, or combination thereof).
  • FIG. 6 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope of FIG. 6 and with reference to “computing device.”
  • Computer-readable media can be any available media that can be accessed by computing device 600 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 600.
  • Computer storage media does not comprise signals per se.
  • Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
  • Memory 612 includes computer storage media in the form of volatile and/or nonvolatile memory.
  • the memory may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc.
  • Computing device 600 includes one or more processors 614 that read data from various entities such as memory 612 or I/O components 620.
  • Presentation component(s) 616 presents data indications to a user or other device.
  • Exemplary presentation components include a display device, speaker, printing component, vibrating component, and the like.
  • the I/O ports 618 allow computing device 600 to be logically coupled to other devices, including I/O components 620, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
  • the I/O components 620 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing.
  • NUI may implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 600.
  • the computing device 600 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 600 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 600 to render immersive augmented reality or virtual reality.
  • computing device 600 may include one or more radio(s) 624 (or similar wireless communication components).
  • the radio 624 transmits and receives radio or wireless communications.
  • the computing device 600 may be a wireless terminal adapted to receive communications and media over various wireless networks.
  • Computing device 600 may communicate via wireless protocols, such as code division multiple access (“CDMA”), global system for mobiles (“GSM”), or time division multiple access (“TDMA”), as well as others, to communicate with other devices.
  • the radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection.
  • a short-range connection may include, by way of example and not limitation, a Wi-Fi® connection to a device (e.g., a mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol; a Bluetooth connection to another computing device; or a near-field communication connection.
  • a long-range connection may include a connection using, by way of example and not limitation, one or more of CDMA, GPRS, GSM, TDMA, and 802.16 protocols.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments described herein facilitate the robust replay of reproducible computing events or tasks when an associated command is received by a digital assistant device. The digital assistant device can determine when a received command corresponds to one of a plurality of action datasets, select the corresponding action dataset to interpret instructions included therein, which can thereby initiate a particular feature of an application associated with the corresponding action dataset. During the process of initiating the particular feature, the digital assistant device can determine when unexpected behaviors of the associated application or the digital assistant device's operating system occur. In this way, the digital assistant device can dynamically switch to a different set of instructions included in the corresponding action dataset to address the unexpected behaviors and successfully initiate the particular feature associated with the received command.

Description

ROBUST REPLAY OF DIGITAL ASSISTANT OPERATIONS
BACKGROUND OF THE INVENTION
Digital assistants have become ubiquitously integrated into a variety of consumer electronic devices. Modern day digital assistants employ speech recognition technologies to provide a conversational interface between users and electronic devices. These digital assistants can employ various algorithms, such as natural language processing, to improve interpretations of commands received from a user. Consumers have expressed various frustrations with conventional digital assistants due to privacy concerns, constant misinterpretations of spoken commands, unavailability of services due to weak signals or a lack of signal, and the general requirement that the consumer must structure their spoken command in a dialect that is uncomfortable for them.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
The present disclosure generally describes embodiments that are directed towards systems and methods relating to a crowd-sourced digital assistant for computing devices. In particular, embodiments facilitate the intuitive creation and distribution of action datasets that include reproducible computing events and associated commands that can be employed to invoke a reproduction of the computing events on computing devices having the crowd-sourced digital assistant installed and/or executing thereon. In various embodiments, a digital assistant system and application can perform any operation on a computing device by way of a received command, whereby the operations are limited only by the various operations executable on the computing device. Techniques are also described to facilitate the detection and mitigation of unexpected or unanticipated events that may occur during the replay of operations performed by a digital assistant in accordance with the present disclosure. In accordance with embodiments described herein, the described digital assistant and corresponding system provides an ever-growing and evolving library of dialects that enables the digital assistant to learn from its users, in contrast to the frustrating and limited interpretation features provided by conventional digital assistants. Further, because the digital assistant and corresponding system is configured with a framework for distributing improvements to its collection of actionable operations and understandable commands, and because the digital assistant utilizes applications existing on the computing device of each user, privacy concerns typically associated with conventional digital assistants are significantly reduced.
The digital assistant described in accordance with the present disclosure can reproduce or "replay" interpretable instructions corresponding to operations that were once recorded by a digital assistant device and stored in a data structure described herein as an action dataset. Such interpretable instructions or reproducible operations can be associated with installed applications or other features of a digital assistant device's operating system, among other things. In some instances, however, unexpected behaviors can be detected on the digital assistant device while these instructions are being interpreted. For instance, installed applications or operating systems can generate unexpected events (e.g., pop-up advertisements, errors, notifications, login/authentication prompts) when or while the instructions or operations of an action dataset are being respectively interpreted or reproduced. In some instances, installed applications can have different features or operations available for invocation depending on various behaviors, such as whether the application is a paid version or a free or trial version, among other things. Such unexpected behaviors can, in many cases, interrupt and potentially terminate or cause an error in the replay process. In this regard, it would be beneficial to provide a digital assistant or features thereof that can interpret the instructions of an action dataset in a robust manner, to overcome any unexpected behaviors, and seamlessly reproduce the operations of the action dataset without interruption or error.
BRIEF DESCRIPTION OF THE DRAWING
The present invention is described in detail below with reference to the attached drawing figures, wherein: FIG. 1 is a block diagram of an exemplary computing environment for a crowd- sourced digital assistant system, in accordance with embodiments of the present invention;
FIG. 2 is a block diagram of an exemplary digital assistant device, in accordance with an embodiment of the present disclosure;
FIG. 3 is a block diagram of an exemplary digital assistant server, in accordance with an embodiment of the present disclosure;
FIG. 4 is an exemplary data structure of an action dataset, in accordance with an embodiment of the present disclosure;
FIG. 5 is a flow diagram showing a method for mitigating unexpected events during replay of reproducible operations by a digital assistant device of a crowd-sourced digital assistant network, according to various embodiments of the present invention;
FIG. 6 is a block diagram of an exemplary computing environment suitable for use in implementing embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
Aspects of the technology described herein are generally directed towards systems and methods for generating, distributing, and replaying a set of reproducible operations via a digital assistant device. Various embodiments can facilitate the creation, on-boarding, and interpretation or execution of action datasets, by any number of computing devices having an instance of the digital assistant installed and/or executing thereon (hereinafter referenced as a "digital assistant device"). More specifically, the described embodiments relate to a digital assistant device that can generate, distribute, or interpret/execute an action dataset to facilitate a robust replay of recorded operations reproducible by the digital assistant device, such that unexpected events occurring during the replay are seamlessly accounted for.
In accordance with the present disclosure, an "operation" can correspond to a final result, output, or computing operation(s) that is generated, executed, or performed by a digital assistant device based on one or more action datasets selected and interpreted for execution by the digital assistant device, each action dataset being comprised of one or more reproducible computing events that can be invoked in response to a received command determined to correspond to the action dataset. In accordance with embodiments described herein, an "action" is described in reference to operation(s) that is performed in response to an action dataset selected and interpreted for execution. In this regard, an action can be performed, invoked, initiated, or executed, among other things, and any reference to the foregoing can imply that a corresponding action dataset is selected and interpreted for execution by the digital assistant device to perform the corresponding operation(s).
In some embodiments, actions (or the action datasets corresponding thereto) can be created, by the digital assistant device, which can record a series of detected events (e.g., inputs) that are typically provided by a user of the digital assistant device when manually invoking the desired operation (e.g., with manual inputs via a touchscreen or other input method of the digital assistant device). That is, to create a new action dataset, the digital assistant device can invoke a recording mode where a user can simply perform a series of computing operations (e.g., manual touches, click inputs) within one or more applications to achieve a desired result, execute a specific operation, and/or initiate a particular feature associated with one or more of the applications. In some further embodiments, actions (or corresponding action datasets) can include additional and/or alternative events (e.g., inputs) that can be provided by a user of the digital assistant device or defined by a developer or administrator of the digital assistant system described herein. In various implementations, such additional and/or alternative events can also perform a corresponding series of computing operations within one or more applications to achieve the desired result, execute a specific operation, and/or initiate a particular feature associated with one or more of the applications.
After the recording is stopped by the user, via a terminating input, the action dataset can store and be associated with a set of command templates corresponding to commands that the user would preferably announce to the digital assistant device when an invocation of the operation is desired. In various embodiments, a command representation can be received as speech data and converted to text (e.g., by a speech engine of the digital assistant device), or received as text input data. In accordance with embodiments described herein, a "command" is referenced herein to describe data received as speech data or as text data. A "command representation," on the other hand, is referenced to describe text data that is received based on inputs (e.g., keyboard), received speech data converted to text data, or received text data communicated from another computing device. A "command template" is referenced herein to describe a portion of a command representation having defined parameter fields in place of variable terms.
In more detail, one or more terms or keywords in the received command can be defined as a parameter based on input(s) received from the user. A parameter, in accordance with the present disclosure, can be referenced as corresponding to one of a plurality of predefined parameter types, such as but not limited to, genre, artist, title, location, name or contact, phone number, address, city, state, country, day, week, month, year, and more. It is also contemplated that the digital assistant device can access from a memory, or retrieve (e.g., from a server), a set of predefined parameter types that are known or determined to correspond to the application or applications for which an action dataset is being created. In some embodiments, the set of predefined parameter types can be determined based at least in part on corresponding application identifying information. The digital assistant device can extract, based on the defined parameters, the corresponding keywords and generate a command template based on the remaining terms and the defined parameters. By way of example only, if the command was originally received as "play music by Coldplay," and the term "Coldplay" is defined as a parameter of type "artist," a resulting command template generated by the digital assistant device may appear as "play music by <artist>". In this regard, a command template may include the originally received command terms if no parameters are defined, or may include a portion of the originally received command terms with parameter fields defined therein, the defined parameters corresponding to variable terms of a command.
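The template mechanics described above lend themselves to a simple pattern-matching treatment. Below is an illustrative sketch (not the disclosed implementation) of converting a command template such as "play music by &lt;artist&gt;" into a matcher that extracts parameter terms from a received command representation:

```python
import re

def template_to_regex(template: str) -> re.Pattern:
    """Convert a command template such as "play music by <artist>" into
    a regular expression with one named group per parameter field."""
    pattern = ""
    for part in re.split(r"(<\w+>)", template):
        if part.startswith("<") and part.endswith(">"):
            pattern += f"(?P<{part[1:-1]}>.+)"   # parameter field
        else:
            pattern += re.escape(part)            # literal command terms
    return re.compile(f"^{pattern}$", re.IGNORECASE)

match = template_to_regex("play music by <artist>").match("play music by Coldplay")
print(match.groupdict())   # {'artist': 'Coldplay'}
```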
The digital assistant device can receive, among other things, application identifying information, a recorded series of events, and a set of command templates to generate a new action dataset that can be retrieved, interpreted, and/or invoked by the digital assistant device, simply based on a determination, by the digital assistant device, that a received command or command representation is associated with the action dataset. When an action is invoked based on a determination that a received command or command representation corresponds to an action dataset, the digital assistant device can reproduce (e.g., emulate, invoke, execute, perform) the recorded series of events associated with the corresponding action dataset, thereby performing the desired operation. In some aspects, the digital assistant device can anticipate, based on the invoked action, various application(s) or operating system conditions (e.g., displayed GUI elements, active or interactive GUI elements, notifications, pop-ups) that occur or are presented as the recorded series of events is being reproduced. In the event that the digital assistant device determines that an anticipated condition does not occur or is not present, or in other words detects an unexpected behavior, the digital assistant device can select an alternative series of events of the action dataset to address the unexpected behavior and proceed with reproduction of the alternative series of events to perform the desired operation.
Moreover, in circumstances where a received command or command representation includes a parameter term, and a determination is made that the received command or command representation corresponds to an action dataset having a parameter field that also corresponds to the parameter term, the parameter term can be employed, by the digital assistant device, to perform custom operations while performing the action. For instance, the digital assistant device can input the parameter term as a text input into a field of the application.
In some further embodiments, an action dataset, once created by the digital assistant device, can be uploaded (hereinafter also referenced as "on-boarded") to a remote server for storage thereby. The action dataset can be on-boarded automatically upon its generation or on-boarded manually based on a received instruction, by the digital assistant device. It is contemplated that individuals may want to keep their actions or command templates private, and so an option to keep an action dataset limited to local storage may be provided to the user (e.g., via a GUI element). The server, upon receiving an on-boarded action dataset, can analyze the action dataset and generate an associated action signature based on the characteristics and/or contents of the action dataset. Contents of an action dataset can include, among other things, application identifying information, corresponding command templates and parameters, and a recorded series of events. The action signature can be generated by various operations, such as hashing the on-boarded action dataset with a hashing algorithm, by way of example. It is also contemplated that the action signature can be generated by the on-boarding digital assistant device, the generated action signature then being stored in or appended to the action dataset before it is uploaded to the server.
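By way of illustration only (SHA-256 over a canonical JSON serialization is an assumption; the disclosure merely names hashing as one example), an action signature might be computed as:

```python
import hashlib
import json

def action_signature(action_dataset: dict) -> str:
    """Generate a signature by hashing a canonical serialization of the
    action dataset's contents (application identifying information,
    command templates, recorded event sequence)."""
    canonical = json.dumps(action_dataset, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

sig = action_signature({
    "app": {"name": "ExampleApp", "version": "2.4"},   # hypothetical values
    "command_templates": ["play music by <artist>"],
    "events": [{"event": "tap", "target": "search_box"}],
})
print(sig)  # identical datasets yield identical signatures
```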
In one aspect, the server can determine that the on-boarded action dataset already exists on the server, based on a determination that the action signature corresponds to the action signature of another action dataset already stored on the server. The server can either dispose of the on-boarded action dataset or merge the on-boarded action dataset (or determined differing portion(s) thereof) with an existing action dataset stored thereby, preventing redundancy and saving storage space. In another aspect, the server can analyze the on-boarded action dataset to determine if its contents (e.g., the recorded events, command templates, metadata) comply with one or more defined policies (e.g., inappropriate language, misdirected operations, incomplete actions) associated with general usage of the digital assistant system. In another aspect, the server can employ machine learning algorithms, among other things, to perform a variety of tasks, such as determining relevant parameter types, generating additional command templates for association with an on-boarded or stored action dataset, comparing similarity of events between on-boarded action datasets to identify and select more efficient routes for invoking an operation, and more.
In some further embodiments, the server can distribute one or more stored actions datasets to a plurality of digital assistant devices in communication with the server. In this way, each digital assistant device can receive action datasets or portions thereof (e.g., command templates) from the server. The action datasets can be distributed to the digital assistant devices in a variety of ways. For instance, in an embodiment, the server can freely distribute any or all determined relevant action datasets to digital assistant devices. In an embodiment, an application profile including a list of applications installed on a digital assistant device can be communicated to the server. Based on the application profile for the digital assistant device, the server can distribute any or all determined relevant action datasets to the digital assistant device. As digital assistant devices can include a variety of operating systems, and versions of applications installed thereon can also vary, it is contemplated that the application profile communicated by a digital assistant device to the server may include operating system and application version information, among other things, so that appropriate and relevant action datasets are identified by the server for distribution to the digital assistant device. For a more granular implementation, an action dataset profile including a list of action datasets or action signatures stored on the digital assistant device can be communicated to the server. In this way, only missing or updated action datasets can be distributed to the digital assistant device.
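Server-side filtering by application profile could be as simple as the following sketch, where the profile entries and dataset fields are hypothetical names; a real implementation would, per the description above, also match operating system name and version information:

```python
def relevant_datasets(all_datasets, application_profile):
    """Distribute only action datasets whose application identifying
    information matches an application installed on the device."""
    installed = {(app["name"], app["version"]) for app in application_profile}
    return [d for d in all_datasets
            if (d["app"]["name"], d["app"]["version"]) in installed]

profile = [{"name": "ExampleApp", "version": "2.4"}]
datasets = [{"app": {"name": "ExampleApp", "version": "2.4"}, "events": []},
            {"app": {"name": "OtherApp", "version": "1.0"}, "events": []}]
print(len(relevant_datasets(datasets, profile)))  # 1
```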
In some embodiments, a user can simply announce a command to the digital assistant device, and if a corresponding action dataset is not stored on the digital assistant device, the digital assistant device can send the command (representation) to the server for determination and selection of a set of relevant action datasets, which can then be communicated to the digital assistant device. Provided that the digital assistant device has the corresponding application installed thereon, the digital assistant device can retrieve, from the server, a set of determined most relevant action datasets, without additional configuration or interaction by the user, also reducing server load and saving bandwidth by inhibiting extraneous transfer of irrelevant action datasets. A retrieved set of relevant action datasets can be received from the server for invocation by the digital assistant device. It is further contemplated that if two or more action datasets are determined equally relevant to a received command, each action dataset may be retrieved from the server, and the digital assistant device can provide for display a listing of the determined relevant action datasets for selection and execution.
In some further embodiments, a user of a digital assistant device can customize command templates associated with an action dataset corresponding to an application installed on their digital assistant device. Put simply, a user can employ the digital assistant (or a GUI thereof) to select an action dataset from a list of action datasets stored on the computing device, select an option to add a new command to the action dataset, and define a new command and any associated parameters for storage in the action dataset. In this regard, the user can add any custom command and parameter that can later be understood by the digital assistant device to invoke the action. In some aspects, the custom command and/or modified action can be on-boarded to the server for analysis and storage, as noted above. In some further aspects, based on the analysis, the server can distribute the custom command and/or at least a portion of the modified action dataset to a plurality of other digital assistant devices. In this regard, the list of understandable commands and corresponding actions can continue to grow and evolve, and be automatically provided to any other digital assistant device.
Accordingly, at a high level and with reference to FIG. 1, an example operating environment 100 in which some embodiments of the present disclosure may be employed is depicted. It should be understood that this and other arrangements and/or features described by the enclosed document are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) or features can be used in addition to or instead of those described, and some elements or features may be omitted altogether for the sake of clarity. Further, many of the elements or features described in the enclosed document may be implemented in one or more components, or as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, some functions may be carried out by a processor executing instructions stored in memory.
The system in FIG. 1 includes one or more digital assistant devices 110, 115a, 115b, 115c, ... 115n, in communication with a server 120 via a network 130 (e.g., the Internet). In this example, the server 120, also in communication with the network 130, is in communication with each of the digital assistant devices 110, 115a-115n, and can also be in communication with a database 140. The database 140 can be directly coupled to the server 120 or coupled to the server 120 via the network 130. The digital assistant device 110, representative of other digital assistant devices 115a-115n, is a computing device comprising one or more applications 112 and a digital assistant module 114 installed and/or executing thereon.
The one or more applications 112 includes any application that is executable on the digital assistant device 110, and can include applications installed via an application marketplace, custom applications, web applications, side-loaded applications, applications included in the operating system of the digital assistant device 110, or any other application that can be reasonably considered to fit the general definition of an application or mobile application. On the other hand, the digital assistant module 114 can provide digital assistant services installed on the digital assistant device 110 or provided by the server 120 via the network 130, or can be implemented at least partially into an operating system of the digital assistant device 110. In accordance with embodiments described herein, the digital assistant module 114 provides an interface between a digital assistant device 110 and an associated user (not shown), generally via a speech-based exchange, although any other method of exchange between user and digital assistant device 110 (e.g., keyboard input, communication from another digital assistant device or computing device) remains within the purview of the present disclosure. When voice commands are received by the digital assistant device 110, the digital assistant module 114 can convert the speech command to text utilizing a speech-to-text engine (not shown) to extract identified terms and generate a command representation. The digital assistant module 114 can receive the command representation, and determine that the command representation corresponds to at least one command template of at least one action dataset stored on the digital assistant device. In some embodiments, the digital assistant module can generate an index of all command templates stored on the digital assistant device 110 for faster searching and comparison of the received command representation to identify a corresponding command template, and thereby a corresponding action dataset. Each indexed command template can be mapped to a corresponding action dataset, which can be interpreted for execution in response to a determination of a confirmed match with the received command representation.
By way of brief overview, a command template can include one or more keywords and/or one or more parameters that each have a corresponding parameter type. Each command template generally corresponds to an operation that can be performed on one or more applications 112 installed on a digital assistant device 110. Moreover, a plurality of command templates can correspond to a single operation, such that there are multiple equivalent commands that can invoke the same operation. By way of example only, commands such as "check in," "check into flight," "please check in," "check into flight now," "check in to flight 12345," and the like, can all invoke the same operation that, by way of example only, directs the digital assistant module 114 to execute an appropriate airline application on the digital assistant device 110 and perform a predefined set of events or computer operations to achieve the same result.
The aforementioned commands, however, may lack appropriate information (e.g., the specific airline). As one of ordinary skill may appreciate, a user may have multiple applications 112 from various vendors (e.g., airlines) associated with a similar service (e.g., checking into flights). A digital assistant device 110 in accordance with embodiments described herein can provide features that can determine contextual information associated with the digital assistant device 110, or its associated user, based on historical use of the digital assistant device 110, profile information stored on the digital assistant device 110 or server 120, stored parameters from previous interactions or received commands, indexed messages (e.g., email, text messages) stored on the digital assistant device, and a variety of other types of data stored locally or remotely on a server, such as server 120, to identify a most relevant parameter and supplement a command to select a most relevant action dataset. More specific commands, such as "check into Friendly Airline flight," or "Friendly Airline check in," and the like, where a parameter is specifically defined in the command, can be recognized by the digital assistant module 114.
One or more recognizable commands and corresponding action datasets can be received by the digital assistant device 110 from the server 120 at any time, including upon installation, initialization, or invocation of the digital assistant module 114, after or upon receipt of a speech command by the digital assistant module 114, after or upon installation of a new application 112, periodically (e.g., once a day), when pushed to the digital assistant device 110 from the server 120, among many other configurations. It is contemplated that the action datasets received by the digital assistant device 110 from the server 120 can be limited based at least in part on the applications 112 installed on the digital assistant device 110, although configurations where a larger or smaller set of action datasets received are contemplated.
In the event an action dataset is determined not available for a particular application 112 installed on the digital assistant device 110, digital assistant module 114 can either redirect the user to a marketplace (e.g., launch an app marketplace application) to install the appropriate application determined by the server 120 based on the received command, or can invoke an action training program that prompts a user to manually perform tasks on one or more applications to achieve the desired result, the tasks being recorded and stored into a new action dataset by the digital assistant device 110. The digital assistant module 114 can also receive one or more commands from the user (e.g., via speech or text) to associate with the action dataset being generated. If the command includes variable parameters (e.g., optional fields), the action training program can facilitate a definition of such parameters and corresponding parameter types to generate command templates for inclusion in the action dataset being generated. In this way, a command template(s) is associated with at least the particular application designated by the user and also corresponds to the one or more tasks manually performed by the user, associating the generated command template to the task(s) and thus the desired resulting operation.
In some instances, the server 120 can provide a determined most-relevant action dataset to the digital assistant device 110 based on the received command. The server 120 can store and index a constantly-growing and evolving plurality of crowd-sourced action datasets submitted by or received from digital assistant devices 115a-115n also independently having a digital assistant module 114 and any number of applications 112 installed thereon. The digital assistant devices 115a-115n may have any combination of applications 112 installed thereon, and any generation of action datasets performed on any digital assistant device 110, 115a-115n can be communicated to the server 120 to be stored and indexed for mass or selective deployment, among other things. In some aspects, the server 120 can include various machine-learned algorithms to provide a level of quality assurance on command templates included in on-boarded action datasets and/or the tasks and operations performed before they are distributed to other digital assistant devices via the network 130.
When the digital assistant module 114 determines an appropriate action dataset (e.g., one or more tasks to achieve a desired result) having one or more command templates that corresponds to the received command, the digital assistant module 114 can generate an overlay interface that can mask any or all visual outputs associated with the determined action or the computing device generally. The generation of the overlay interface can include a selection, by the digital assistant module 114, of one or more user interface elements that are stored in a memory of the digital assistant device 110 or server 120, and/or include a dynamic generation of the user interface element(s) by the digital assistant module 114 or server 120 based on one or more portions of the received command and/or obtained contextual data (e.g., determined location data, user profile associated with the digital assistant device 110 or digital assistant module 114, historical data associated with the user profile, etc.) obtained by the digital assistant device 110, digital assistant module 114, and/or server 120. The selected or generated one or more user interface elements can each include content that is relevant to one or more portions (e.g., terms, keywords) of the received command. In the event of dynamic generation of user interface elements, such elements can be saved locally on the digital assistant device 110 or remotely on the server 120 for subsequent retrieval by the digital assistant device 110, or can be discarded and dynamically regenerated at any time.
Example operating environment depicted in FIG. 1 is suitable for use in implementing embodiments of the invention. Generally, environment 100 is suitable for creating, on-boarding, storing, indexing, crowd-sourcing (e.g., distributing), and invoking actions or action datasets, among other things. Environment 100 includes digital assistant device 110, action cloud server 120 (hereinafter also referenced as "server") and network 130. Digital assistant device 110 can be any kind of computing device having a digital assistant module installed and/or executing thereon, the digital assistant module being implemented in accordance with at least some of the described embodiments. For example, in an embodiment, digital assistant device 110 can be a computing device such as computing device 600, as described below with reference to FIG. 6. In embodiments, digital assistant device 110 can be a personal computer (PC), a laptop computer, a workstation, a mobile computing device, a PDA, a cell phone, a smart watch or wearable, or the like. Any digital assistant device described in accordance with the present disclosure can include features described with respect to the digital assistant device 110. In this regard, a digital assistant device can include one or more applications 112 installed and executable thereon. The one or more applications 112 includes any application that is executable on the digital assistant device, and can include applications installed via an application marketplace, custom applications, web applications, side-loaded applications, applications included in the operating system of the digital assistant device, or any other application that can be reasonably considered to fit the general definition of an application. On the other hand, the digital assistant module can be an application, a service accessible via an application installed on the digital assistant device or via the network 130, or implemented into a layer of an operating system of the digital assistant device 110. In accordance with embodiments described herein, the digital assistant module 114 can provide an interface between a digital assistant device 110 and a user (not shown), generally via a speech-based exchange, although any other method of exchange between user and digital assistant device may be considered within the purview of the present disclosure.
Similarly, action cloud server 120 ("server") can be any kind of computing device capable of facilitating the on-boarding, storage, management, and distribution of crowd-sourced action datasets. For example, in an embodiment, action cloud server 120 can be a computing device such as computing device 600, as described below with reference to FIG. 6. In some embodiments, action cloud server 120 comprises one or more server computers, whether distributed or otherwise. Generally, any of the components of environment 100 may communicate with each other via a network 130, which may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. The server 120 can include or be in communication with a data source 140, which may comprise data sources and/or data systems, configured to make data available to any of the various constituents of the operating environment 100. Data sources 140 may be discrete from the illustrated components, or any combination thereof, or may be incorporated and/or integrated into at least one of those components.
Referring now to FIG. 2, a block diagram 200 of an exemplary digital assistant device 210 suitable for use in implementing embodiments of the invention is shown. Generally, digital assistant device 210 (also depicted as digital assistant device 110 of FIG. 1) is suitable for receiving commands or command representations, selecting action datasets to execute by matching received commands to command templates of action datasets, or determining that no action datasets correspond to received commands, interpreting a selected action dataset to execute the associated operation, generating new action datasets, and sending action datasets to or receiving action datasets from a digital assistant server, such as server 120.
Digital assistant device 210 can include, among other things, a command receiving component 220, an action matching component 230, an action executing component 240, a behavior detecting component 250, an event sequence selection component 260, a training component 270, and a server interfacing component 280. The command receiving component 220 can receive a command, either in the form of speech data or text data. The speech data can be received via a microphone of the digital assistant device 210, or another computing device paired to or in communication with the digital assistant device 210. The command receiving component 220, after receiving the speech data, can employ a speech-to-text engine of the digital assistant device 210 to generate a command representation (e.g., a text string of the command). Text data received by command receiving component 220, on the other hand, can be received via a virtual keyboard or other input method of the digital assistant device 210, and similarly, can be received from another computing device paired to or in communication with the digital assistant device 210. Received text data is already in the form of a command representation, and is treated as such. In various embodiments, command receiving component 220 can be invoked manually by a user (e.g., via an input to begin listening for or receiving the command), or can be in an always-listening mode.
Based on a command representation being received, action matching component 230 can determine whether one or more action datasets stored on the digital assistant device 210 include a command template that corresponds to or substantially corresponds (e.g., at least 90% similar) to the received command representation. In some aspects, a corresponding command template can be identified, and the action dataset in which the corresponding command template is stored is selected for interpretation by action executing component 240. In some other aspects, a corresponding command template cannot be identified, and either the training component 270 can be invoked, or the received command is communicated to the digital assistant server (depicted as server 120 of FIG. 1 and digital assistant server 310 of FIG. 3) via the server interfacing component 280.
The action executing component 240 can receive a selected action dataset, either selected by digital assistant device 210 from local storage, by the digital assistant server from storage accessible thereto, or selected from a list presented by digital assistant device 210. The action executing component 240 can, from the received action dataset, interpret a set of instructions, otherwise known as event data, which may include executable code, links, deep links, references to GUI elements, references to screen coordinates, field names, or other pieces of data that can correspond to one or more tasks or events stored in the selected action dataset. When the event data (e.g., set of instructions) is interpreted, the action executing component 240 can reproduce the events that were recorded when the action dataset was initially generated, by any digital assistant device such as digital assistant device 210. In some aspects, the event data can include time delays, URLs, deep links to application operations, or any other operation that can be accessed, processed, emulated, or executed by the action executing component 240. In some aspects, events, such as clicks or touch inputs, can be reproduced on the digital assistant device 210, based on the interpreted event data stored in an action dataset.
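An interpreter loop consistent with this description might dispatch on the event type as sketched below; emulate_input stands in for a device-specific input-injection callback, the event vocabulary is hypothetical, and webbrowser.open merely approximates link activation on a real device:

```python
import time
import webbrowser

def interpret(event_data, emulate_input):
    """Dispatch each interpreted instruction to an appropriate operation."""
    for step in event_data:
        kind = step["event"]
        if kind == "delay":
            time.sleep(step["seconds"])     # enforce a recorded time delay
        elif kind in ("tap", "click", "long_press"):
            emulate_input(step)             # reproduce the recorded input
        elif kind in ("url", "deep_link"):
            webbrowser.open(step["url"])    # activate a link or deep link
        else:
            raise ValueError(f"unknown event type: {kind!r}")
```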
In some embodiments, an action dataset can include more than one set of event data (e.g., different sets of interpretable instructions), whereby each set can be interpreted by action executing component 240 to reproduce (e.g., initiate) a corresponding set of events. In various embodiments, events can be recorded or programmed by any digital assistant device, such as digital assistant device 210, or by another computing device such as one operated by an administrator of the digital assistant system described herein. As it is contemplated that application(s) executing on the digital assistant device or the operating system of the digital assistant device could generate or present unexpected events or behaviors as event data is being interpreted (or as events are being reproduced), the digital assistant device 210 can include a behavior detecting component 250 in accordance with some embodiments described herein.
The behavior detecting component 250 can include, among other things, an event listener that actively listens to computing events that are generated by the digital assistant device 210. In some aspects, the behavior detecting component 250 can also determine whether anticipated events or behaviors are present or are generated by the digital assistant device 210. By way of example, the behavior detecting component 250 can parse graphical user interface elements or element trees generated or provided by applications executing on the digital assistant device 210 or by the operating system thereof, and determine whether certain GUI elements required to reproduce an event of an action dataset are present (e.g., displayed or queued for display). For instance, a required GUI element may need to be present for a particular event to be reproduced thereon. If a particular GUI element is determined not to be present (e.g., not displayed or queued for display), then the behavior detecting component 250 can determine that an unexpected behavior has occurred. In other words, the behavior detecting component 250 can determine that the events corresponding to the action dataset cannot be reproduced to completion. In such circumstances, the behavior detecting component 250 may initialize or signal an event sequence selection component 260 to address the unexpected behavior. In various embodiments, an unexpected behavior can include, without limitation, pop-ups, login or authentication prompts, notifications, the presence or lack of presence of a particular GUI element, or the interactability (e.g., ability to receive input) or non-interactability (e.g., inability to receive input) of a particular GUI element, among other things.
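Presence checks over a parsed GUI element tree could look like the following sketch, assuming nodes are dicts with 'label' and 'children' keys (a simplification of whatever representation the device actually parses):

```python
def find_element(element_tree, label):
    """Depth-first search of a parsed GUI element tree for an element
    carrying the given label or metadata."""
    if element_tree.get("label") == label:
        return element_tree
    for child in element_tree.get("children", []):
        found = find_element(child, label)
        if found is not None:
            return found
    return None

def detect_unexpected_behavior(element_tree, required_label):
    """If the GUI element required by the next reproducible event is not
    present (displayed or queued for display), report an unexpected
    behavior so an alternative event sequence can be selected."""
    return find_element(element_tree, required_label) is None
```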
The event sequence selection component 260 can, among other things, select one or more alternative sets of event data of an action dataset to interpret or initiate, based on a determination by the digital assistant device 210 that an unexpected behavior is present (e.g., is displayed or not displayed, has occurred or has not occurred, was generated or not generated) while another set of event data is being interpreted. By way of example, an action dataset may include a primary set of event data that is interpretable to reproduce events that initiate a particular feature of an application. The primary set of event data can include instructions that correspond to an ideal scenario, where no unexpected behaviors are determined or detected as present throughout a duration of reproducing the events corresponding to the particular application feature. The action dataset may also include at least a secondary set of event data that is also interpretable to reproduce or initiate the particular feature of the application, but that includes instructions corresponding to a non-ideal scenario, where one or more unexpected behaviors are detected or determined present while the primary set of event data is being interpreted. It is further contemplated that, in some embodiments, the secondary set of event data can include instructions that are interpretable only to address the unexpected behavior. For instance, if a particular notification or pop-up is displayed as the primary set of event data is being interpreted, the behavior detecting component 250 can detect the unexpected behavior and cause the event sequence selection component 260 to select the secondary set of event data to, among other things, dismiss, terminate, close, or swipe away the notification or pop-up. Moreover, a variety of scenarios can present an unexpected behavior in relation to a primary set of event data. For instance, the primary set of event data can correspond to reproducible events for initiating a particular feature of a fully-paid version of an application that includes certain GUI elements, such as GUI elements that are not displayed or active in a free version of the application. In this regard, a determination by the behavior detecting component 250 that the GUI element is not displayed or active can cause the event sequence selection component 260 to select the secondary set of event data for interpretation, whereby the secondary set of event data includes instructions that are interpretable to reproduce an alternative route, or set of events, that also initiates the particular feature of the application.
In some aspects, the secondary set of event data or any other set of event data can include a reference or identifier that corresponds to one or more unexpected behaviors. That is, in some instances, the behavior detecting component 250 can determine a type (e.g., pop-up, notification, login/authentication prompt, GUI element state) of unexpected behavior that is present while another set of event data is being interpreted. In this regard, based on the determined type, the event sequence selection component 260 can select one or more corresponding sets of event data that can be interpreted to address the determined types of unexpected behaviors.
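The selection-by-type mechanism can be illustrated with a minimal sketch in which each alternative event set is keyed by an unexpected-behavior type; the keys and handler names below are hypothetical assumptions, not identifiers prescribed by the disclosure.

```python
# A hypothetical action dataset with a primary event set and per-behavior
# alternative sets keyed by the determined type of unexpected behavior.
ACTION_DATASET = {
    "primary": ["open_compose", "type_message", "tap_send"],
    "handlers": {
        "pop_up":       ["dismiss_pop_up"],
        "login_prompt": ["enter_saved_credentials", "tap_sign_in"],
        "notification": ["swipe_away_notification"],
    },
}

def select_event_set(dataset, behavior_type):
    """Pick the event set whose identifier matches the detected behavior type."""
    try:
        return dataset["handlers"][behavior_type]
    except KeyError:
        raise LookupError(f"no event set addresses behavior: {behavior_type}")

print(select_event_set(ACTION_DATASET, "login_prompt"))
# ['enter_saved_credentials', 'tap_sign_in']
```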
The training component 270 can facilitate the generation of an action dataset, among other things. When the training component 270 is invoked, an indication, such as a GUI element, indicating that an action recording session has begun may be presented for display. A prompt to provide the tasks or events required to perform the desired operation can also be presented for display. In this regard, a user can begin by first launching the application with which the operation is associated, and proceed with providing inputs to the application (i.e., performing the requisite tasks). The inputs can be recorded by the digital assistant device 210, and the training component 270 can listen for, parse, identify, and record a variety of attributes of the received inputs, such as long or short presses, time delays between inputs, references to GUI elements interacted with, field identifiers, application links activated based on received inputs (e.g., deep links), and the like. The recorded inputs and attributes (e.g., event data) can be stored sequentially, as an event sequence, into a new action dataset. The launched application is also identified, and any application identifying information, such as operating system, operating system version, application version, paid or free version status, and more, can be determined from associated metadata and also stored into the new action dataset. When the desired operation is completed (i.e., all requisite tasks/events performed), a user can activate a training termination button, which can be presented as a floating button or other input mechanism that is preferably positioned away from an active portion of the display. Other termination methods are also contemplated, such as voice-activated or motion-activated termination, without limitation.
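As a non-limiting sketch of such a recording session, the following Python fragment captures inputs together with the time delay preceding each one; the input kinds and attribute names are illustrative assumptions.

```python
import time

# A minimal recorder that stores inputs and timing attributes, in order,
# into an event sequence, then assembles a new action dataset.
class TrainingRecorder:
    def __init__(self):
        self.events = []
        self._last_input_at = None

    def on_input(self, kind, **attributes):
        """Record one input plus the delay since the previous input."""
        now = time.monotonic()
        delay = 0.0 if self._last_input_at is None else now - self._last_input_at
        self._last_input_at = now
        self.events.append({"type": kind, "delay_before": round(delay, 3), **attributes})

    def finish(self, app_info, command_templates):
        """Assemble the new action dataset once training is terminated."""
        return {"app": app_info, "events": self.events, "commands": command_templates}

recorder = TrainingRecorder()
recorder.on_input("click", element_id="btn_search")
recorder.on_input("text_input", field_name="query", value="coffee near me")
dataset = recorder.finish({"name": "ExampleApp", "version": "2.1"}, [])
print(dataset["events"])
```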
The training component 270 can further request that the user provide a set of commands that correspond to the desired operation. A command can be received via speech data and converted to a command representation by a speech-to-text engine, or received via text input as a command representation, among other ways. When the set of commands is provided and stored as command representations, the training component 270 can further prompt the user to define any relevant parameters or variables in the command representations, which can correspond to keywords or values that may change whenever the command is spoken. In this regard, a user may select one or more terms included in the received command representations, and define them with a corresponding parameter type selected from a list of custom, predefined, or determined parameter types, as described herein. The training component 270 can then extract, from a command representation, the selected one or more terms defined as parameter(s), replace them with parameter field identifier(s) of a corresponding parameter type, and store the resulting data as a command template. The training component 270 can then generate the action dataset from the recorded event sequence, the application identifying information, and the one or more defined command templates. In some embodiments, the training component 270 can generate an action signature or unique hash based on the generated action dataset or one or more portions of data included therein. The action signature can be employed by the digital assistant server to determine whether the action dataset or data included therein is redundant, among other things.
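The parameterization and signature steps can be sketched as follows; the brace-delimited parameter field syntax and the use of SHA-256 over a canonical JSON serialization are assumptions made for the example, not requirements of the disclosure.

```python
import hashlib
import json

def make_command_template(command_text, parameters):
    """Replace user-selected terms with typed parameter field identifiers."""
    template = command_text
    for term, param_type in parameters.items():
        template = template.replace(term, "{" + param_type + "}")
    return template

def action_signature(action_dataset):
    """Unique hash over the dataset, usable for server-side redundancy checks."""
    canonical = json.dumps(action_dataset, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

template = make_command_template("send a message to Alice", {"Alice": "contact_name"})
print(template)  # send a message to {contact_name}
print(action_signature({"commands": [template], "events": []})[:16])
```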
Looking now to FIG. 3, a block diagram 300 of an exemplary digital assistant server 310 suitable for use in implementing embodiments of the invention is shown. Generally, digital assistant server 310 (also depicted as server 120 of FIG. 1) is suitable for establishing connections with digital assistant device(s) 210, receiving generated action datasets, maintaining or indexing received action datasets, receiving commands or command representations to determine one or more most relevant action datasets for selection and communication to the digital assistant device that sent the command or command representation, and distributing action datasets to other digital assistant devices, such as digital assistant device 210. Digital assistant server 310 can include, among other things, an on-boarding component 320, an indexing component 330, a maintenance component 340, a relevance component 350, and a distribution component 360.
The on-boarding component 320 can receive action datasets generated by one or more digital assistant devices 210 in communication therewith. In some aspects, the on-boarding component can generate an action signature for a received action dataset, in a manner similar to that described herein above for a digital assistant device. Before the received action dataset is stored, the action signature can be searched utilizing the indexing component 330, which maintains an index of all action datasets stored by the digital assistant server 310. The indexing component 330 facilitates quick determination of the uniqueness of received action datasets, and reduces redundancy and the processing load of the digital assistant server 310.
On a similar note, the maintenance component 340 can determine whether any portion of a received action dataset is different than the action datasets already stored on or by the server (e.g., in a database), and extract such portions for merging into the existing corresponding action datasets. Such portions may be identified in circumstances where only command templates are hashed in the action signature, or where each portion of the action dataset (e.g., application identifying information, command template(s), event sequence) is independently hashed, either by training component 270 of FIG. 2 or by on-boarding component 320 of FIG. 3, to more easily identify changes or differences between action datasets. By way of example, in some embodiments, a received action dataset can include separate hashes for its application identifying information, event sequence, and command template(s). In this regard, the digital assistant server 310 can employ the indexing component 330 and maintenance component 340 to quickly identify, for instance, that the received action dataset corresponds to a particular application and operation, or that the command template(s) are different than those in the stored action dataset by virtue of the command template hashes being different. Similarly, the independent hash signatures for each portion of data included in an action dataset can facilitate efficient determination of changes or differences between any combination of data portions in a received action dataset and a stored action dataset.
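A minimal sketch of this per-portion hashing, under the assumption of three JSON-serializable portions, might look like the following.

```python
import hashlib
import json

def portion_hashes(dataset):
    """Hash each data portion separately (app info, events, command templates)."""
    return {
        portion: hashlib.sha256(
            json.dumps(dataset[portion], sort_keys=True).encode("utf-8")
        ).hexdigest()
        for portion in ("app", "events", "commands")
    }

def changed_portions(received, stored):
    """Compare hash maps and report only the portions that actually differ."""
    received_h, stored_h = portion_hashes(received), portion_hashes(stored)
    return [p for p in received_h if received_h[p] != stored_h[p]]

stored = {"app": {"name": "ExampleApp"}, "events": ["tap_send"], "commands": ["send it"]}
received = {"app": {"name": "ExampleApp"}, "events": ["tap_send"],
            "commands": ["send it", "fire away"]}
print(changed_portions(received, stored))  # ['commands']
```

In this sketch, only the differing portion (here, the command templates) would need to be extracted and merged into the stored action dataset.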
The relevance component 350 can determine, based on commands or command representations received by a digital assistant device 210, a likelihood that a particular command template corresponds to the received command or command representation. While a variety of relevance-determining methods may be employed, a machine learning implementation may be preferable; alternatively, ranking stored command templates by their determined similarity to the received command or command representation can also facilitate a determination of relevance and, therefore, of one or more most relevant command templates. The determined most-relevant command templates can thereby facilitate the selection of a most relevant action dataset to be distributed to the command-sending digital assistant device 210.
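By way of illustration only, the following sketch ranks command templates by simple token overlap; as noted above, a machine learning model may be preferable in practice, and the scoring rule here is an assumption for the example.

```python
def score(template, command):
    """Jaccard similarity over lowercase tokens, ignoring parameter fields."""
    t_tokens = {t for t in template.lower().split() if not t.startswith("{")}
    c_tokens = set(command.lower().split())
    return len(t_tokens & c_tokens) / max(len(t_tokens | c_tokens), 1)

def most_relevant(templates, command, top_n=1):
    """Return the top-N templates ranked by similarity to the command."""
    return sorted(templates, key=lambda t: score(t, command), reverse=True)[:top_n]

templates = [
    "send a message to {contact_name}",
    "play music by {artist}",
    "get directions to {place}",
]
print(most_relevant(templates, "send message to Bob"))
# ['send a message to {contact_name}']
```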
The distribution component 360 can distribute or communicate, to one or more digital assistant devices 210, determined relevant or most relevant action datasets, determined new action datasets, determined updated action datasets, any portion and/or combination of the foregoing, or generated notifications corresponding to any portion and/or combination of the foregoing, among other things, based on a variety of factors. For instance, the distribution component 360 can include features that determine, among other things, which applications are installed on a digital assistant device 210. Such features can enable the digital assistant server 310 to determine which action datasets or portions thereof are relevant to the digital assistant device 210 and should be distributed to the digital assistant device 210. For instance, a digital assistant device 210 profile (not shown) describing all applications currently installed or executable by a digital assistant device 210 can be maintained (e.g., stored, updated) by the digital assistant server 310. The profile can be updated periodically, manually, or dynamically by a server interfacing component 280 of the digital assistant device 210 (e.g., whenever the digital assistant is in communication with and sends a command to the digital assistant server 310, or whenever an application is installed or updated on the digital assistant device 210). The distribution component 360 can distribute or communicate notifications, action datasets, or portions thereof, in a variety of ways, such as pushing, sending in response to received requests for updates, sending in response to established communications with a digital assistant device 210, or by automatic wide-scale (e.g., all digital assistant devices) or selective-scale (e.g., region, location, app type, app name, app version) distribution, among other things.
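A minimal sketch of the relevance-filtering step of distribution, assuming a device profile keyed by (app name, version) pairs, might look like the following; the profile layout is an illustrative assumption.

```python
# Hypothetical device profiles maintained by the server, mapping device
# identifiers to the set of applications installed on each device.
DEVICE_PROFILES = {
    "device-123": {"installed_apps": {("ExampleApp", "2.1"), ("MapsApp", "5.0")}},
}

ACTION_DATASETS = [
    {"id": "a1", "app": ("ExampleApp", "2.1")},
    {"id": "a2", "app": ("OtherApp", "1.0")},
]

def datasets_for_device(device_id, datasets, profiles):
    """Select only the action datasets relevant to the device's installed apps."""
    installed = profiles[device_id]["installed_apps"]
    return [d for d in datasets if d["app"] in installed]

print([d["id"] for d in datasets_for_device("device-123", ACTION_DATASETS, DEVICE_PROFILES)])
# ['a1']
```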
Turning now to FIG. 4, a data structure 400 of an exemplary action dataset 410 in accordance with some of the described embodiments is illustrated. The depicted data structure is not intended to be limiting in any way, and any configuration of the depicted data portions of information may be within the purview of the present disclosure. Further, additional or fewer data portions may be included in an action dataset 410 while remaining within the purview of the present disclosure.

In the depicted data structure 400, the action dataset 410 includes application identifying information 420, recorded event sequence data 430, and command templates 440. In some embodiments, the action dataset 410 further includes hash(es) 450, which can include a hash value generated based on the entire action dataset 410, or hash values generated based on any portion of the aforementioned data portions 420, 430, 440, among other things. The action dataset 410 can be generated by training component 270 of digital assistant device 210 of FIG. 2 and/or received from distribution component 360 of digital assistant server 310 of FIG. 3.
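For illustration, the depicted portions of data structure 400 can be sketched as a typed record; the exact field names and types are assumptions, since the disclosure does not prescribe a serialization.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplicationIdentifyingInfo:
    os_name: str
    os_version: str
    os_language: str
    app_name: str
    app_version: str

@dataclass
class ActionDataset:
    app_info: ApplicationIdentifyingInfo
    event_sequences: list          # one or more sets of recorded event data
    command_templates: list        # parameterized command representations
    hashes: Optional[dict] = None  # whole-dataset and/or per-portion hashes

dataset = ActionDataset(
    app_info=ApplicationIdentifyingInfo("Android", "8.1", "en", "ExampleApp", "2.1"),
    event_sequences=[["open_compose", "type_message", "tap_send"]],
    command_templates=["send a message to {contact_name}"],
)
print(dataset.app_info.app_name)
```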
The application identifying information 420 can include information about a particular application that is required for execution to perform a particular operation for which the action dataset 410 was created. Exemplary pieces of application identifying information 420 are depicted in identifying information 425, which can include any one or more of a name of the operating system (OS) on which the particular application executes, an OS version of the aforementioned OS, a defined native language of the aforementioned OS, a name of the particular application, a version of the particular application, and the like. It is contemplated that the application identifying information 420 is required and checked (e.g., by the digital assistant server 310 of FIG. 3) before an action dataset 410 is distributed to a digital assistant device (e.g., digital assistant device 210 of FIG. 2) and employed by the digital assistant device, to ensure that the action dataset 410 is compatible with, or can be correctly interpreted by, action executing component 240 of FIG. 2, so that the corresponding and desired operation is performed by the digital assistant device 210.
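A compatibility check of this kind might be sketched as follows, under the assumption that an exact match of OS name, application name, and application version is required; looser matching rules are equally plausible.

```python
def is_compatible(app_info, device):
    """Require matching OS, app name, and app version before replay."""
    return (
        app_info["os_name"] == device["os_name"]
        and app_info["app_name"] in device["installed_apps"]
        and device["installed_apps"][app_info["app_name"]] == app_info["app_version"]
    )

device = {"os_name": "Android", "installed_apps": {"ExampleApp": "2.1"}}
app_info = {"os_name": "Android", "app_name": "ExampleApp", "app_version": "2.1"}
print(is_compatible(app_info, device))  # True
```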
The recorded event sequence data 430 can include any or all task or event-related data that was obtained, received, or determined by the digital assistant device (e.g., via training component 270 of FIG. 2) responsible for generating the action dataset 410. In some aspects, the recorded event sequence data 430 can also include task or event-related data that is defined (e.g., programmed) by a computing device, such as one operated by an administrator of the digital assistant system described herein. In various embodiments, one or more sets of task or event-related data can be stored within the action dataset 410, each corresponding to at least a portion of reproducible events that, when reproduced, can initiate a particular function of one or more applications associated with the action dataset. As noted herein, the recorded event sequence data can include timing attributes of received inputs (e.g., delays before or between successive inputs, durations of inputs), GUI elements interacted with, relative positions of GUI elements, labels or metadata of GUI elements, scroll inputs and distances, links or URLs accessed or activated, detected activation of application deep links in response to received inputs, and more. In some instances, the recorded event sequence data 430 may include conditions that require actual user intervention before subsequent events or tasks are resumed. For instance, secured login screens may require that a user input username and password information before an application is executed. In this regard, the recorded event sequence data 430 may include a condition corresponding to when user authentication has occurred, and instructions (e.g., interpretable by action executing component 240) to proceed with the tasks or events in the recorded event sequence data 430 based upon an occurrence of the condition. In various implementations, it is contemplated that the action executing component 240 of FIG. 2 can parse metadata, GUI elements, or other information from an executing application to determine when certain events occur or conditions are met. In this regard, additional conditions may be included in the recorded event sequence data 430 that require prior events or tasks to be completed, certain GUI elements to be displayed or interacted with, or any other conditions to be met, before subsequent events or tasks are performed by the action executing component 240 of FIG. 2.
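Such a user-intervention condition might be sketched as a polled predicate that gates the remainder of the event sequence; the authentication check below is a stand-in assumption for whatever GUI-state inspection a real implementation would perform.

```python
import time

def wait_for_condition(check, timeout_s=60.0, poll_s=0.5):
    """Poll a condition predicate; resume replay only once it holds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(poll_s)
    return False

def user_is_authenticated():
    # Stand-in: a real implementation might parse GUI metadata to detect
    # that the login screen has been dismissed.
    return True

if wait_for_condition(user_is_authenticated):
    print("condition met: resuming remaining tasks in the event sequence")
else:
    print("condition never met: replay abandoned")
```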
Turning now to FIG. 5, an embodiment of a process flow or method 500 for mitigating unexpected events during replay of reproducible operations by a digital assistant device of a crowd-sourced digital assistant network is provided. At step 510, a digital assistant device, such as digital assistant device 210 of FIG. 2, can invoke the execution of an application that is referenced by an action dataset selected by the digital assistant device or a digital assistant server, such as digital assistant server 310, the action dataset being based on a command received by the digital assistant device. Execution (e.g., start up, initialization, opening) of the application can be invoked in a variety of manners, such as events reproduced by the digital assistant device based on interpreted instructions of the action dataset, or activation of deep links included in the action dataset, among other things. The action dataset can include at least a first set of instructions and a second set of instructions that are each interpretable by the digital assistant device to initiate a particular function of a corresponding application, or a portion of the operations required to initiate the particular function.
In some aspects, the first set of instructions can correspond to a primary set of instructions that can be interpreted to initiate the particular function in an ideal scenario, where no unexpected behaviors are generated, displayed, or provided by the application(s) or operating system of the digital assistant device. In other words, the first set of instructions can correspond to a default set of instructions that is selected for interpretation in response to the selection of the corresponding action dataset. However, in some instances, the first set of instructions can correspond to a set of instructions that is selected for interpretation in response to a determination that an unexpected event has occurred or is present. In another aspect, the second set of instructions can be one of a set of additional sets of instructions that can be interpreted independently, in combination, and/or in addition to the first set of instructions to ultimately initiate the particular function. In accordance with some embodiments described herein, the second set of instructions can correspond to reproducible events that address unexpected events that are detected, are determined to occur, or are determined to be present, while the first set of instructions is being interpreted or the corresponding events are being reproduced by the digital assistant device.
At step 520, while the first set of instructions is being interpreted to initiate the particular function of the application(s) corresponding to the selected action dataset, the digital assistant device can detect an unexpected behavior that interrupts the initiation of the particular function. In some aspects, an interruption of the initiation of an application function can correspond to a determination, by the digital assistant device, that an expected or anticipated condition of one or more applications or the operating system of the digital assistant device did not occur or is not present. In some other aspects, an interruption of the initiation of the application function can correspond to a determination, by the digital assistant device, that an unexpected behavior of one or more applications or the operating system of the digital assistant device has occurred or is present, in accordance with various embodiments described herein.
In some embodiments, the digital assistant device can determine a type of unexpected behavior that is detected while the first set of instructions is being interpreted or the corresponding events are being reproduced. In this regard, the digital assistant device can select one of a set of instructions or reproducible events included in the action dataset to appropriately address the detected unexpected behavior. Each set of instructions can include, among other things, a reference or identifier that corresponds to a type of unexpected behavior. To address the detected unexpected behavior, the digital assistant device can select a set of instructions that is determined to correspond to the determined type of unexpected behavior, and interpret, at step 530, the selected set of instructions, thereby reproducing one or more events or operations to address the unexpected behavior. To address an unexpected behavior, the interpreted set of instructions can, among other things, generate input events, close prompts, accept prompts, close windows or other GUI elements, input saved or predefined credentials, enforce a time delay, clear notifications, or reproduce an entirely different set of events, among other things. In some aspects, the selected set of instructions can be interpreted to merely address the unexpected event, such that the digital assistant device can continue interpreting the first set of instructions at a point just before, when, or just after the unexpected event was determined to occur. Ideally, the digital assistant device can continue interpreting the first set of instructions so that each instruction is interpreted once (e.g., each corresponding event is reproduced once). In some other aspects, the selected set of instructions can be interpreted not only to address the unexpected event, but also to reproduce its corresponding set of events to initiate the particular function of the action dataset. Implementations having an entirely different set of events can be more probable when an action dataset is associated with multiple application versions, or when variable access rights to an application are available, such as a free or limited version of an application versus a paid version of the application. More particularly, different versions or different access rights of an application can present different GUI elements that can be interacted with, or different operation pathways (e.g., a series of operations that need to be performed) to initiate a particular function of the application. As such, a different set of events can be selected for interpretation to initiate the particular function when the digital assistant device detects predefined conditions that correspond to the unexpected event. It is further contemplated that one or more sets of instructions or reproducible events can be incorporated into all action datasets, or provided to the digital assistant device as a catch-all or universal action dataset, to merely address unexpected behaviors that may be applicable to a plurality of applications. For instance, operating system notifications (e.g., a low battery notification) can be generated and presented regardless of the application invoked by the digital assistant device. Provided that the digital assistant device detects the unexpected event, such as the operating system notification, the digital assistant device can select a set of instructions from the catch-all or universal action dataset to address the unexpected event and carry on with its interpretation of the first set of instructions, as described herein above.
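Tying steps 510 through 530 together, the following non-limiting sketch interprets a first set of instructions, consults dataset-specific handler sets and then a catch-all set when an unexpected behavior is detected, and resumes replay so that each instruction of the first set is interpreted once; the detector and handler names are illustrative assumptions.

```python
# Catch-all handlers playing the role of the universal action dataset.
UNIVERSAL_HANDLERS = {"os_notification": ["clear_notification"]}

def detect_behavior(step):
    """Stand-in detector; a real one inspects GUI state during replay."""
    return "os_notification" if step == "tap_send" else None

def interpret(step):
    print(f"reproducing: {step}")

def replay(first_set, dataset_handlers):
    i = 0
    while i < len(first_set):
        behavior = detect_behavior(first_set[i])
        if behavior is not None:
            handlers = dataset_handlers.get(behavior) or UNIVERSAL_HANDLERS.get(behavior)
            if handlers is None:
                raise RuntimeError(f"unhandled unexpected behavior: {behavior}")
            for step in handlers:      # address the behavior...
                interpret(step)
        interpret(first_set[i])        # ...then resume; each instruction runs once
        i += 1

replay(["open_compose", "type_message", "tap_send"], {})
```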
Having described various embodiments of the invention, an exemplary computing environment suitable for implementing embodiments of the invention is now described. With reference to FIG. 6, an exemplary computing device is provided and referred to generally as computing device 600. The computing device 600 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 600 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
Embodiments of the invention may be described in the general context of computer code or machine-useable instructions, including computer-useable or computer-executable instructions, such as program modules, being executed by a computer or other machine, such as a personal data assistant, a smartphone, a tablet PC, or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 6, computing device 600 includes a bus 610 that directly or indirectly couples the following devices: memory 612, one or more processors 614, one or more presentation components 616, one or more input/output (I/O) ports 618, one or more I/O components 620, and an illustrative power supply 622. Bus 610 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks of FIG. 6 are shown with lines for the sake of clarity, in reality, these blocks represent logical, not necessarily actual, components. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. The inventors hereof recognize that such is the nature of the art and reiterate that the diagram of FIG. 6 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as "workstation," "server," "laptop," "handheld device," etc., as all are contemplated within the scope of FIG. 6 and with reference to "computing device."
Computing device 600 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 600 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 600. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 612 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 600 includes one or more processors 614 that read data from various entities such as memory 612 or I/O components 620. Presentation component(s) 616 presents data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, and the like.
The I/O ports 618 allow computing device 600 to be logically coupled to other devices, including I/O components 620, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components 620 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 600. The computing device 600 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 600 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 600 to render immersive augmented reality or virtual reality.
Some embodiments of computing device 600 may include one or more radio(s) 624 (or similar wireless communication components). The radio 624 transmits and receives radio or wireless communications. The computing device 600 may be a wireless terminal adapted to receive communications and media over various wireless networks. Computing device 600 may communicate via wireless protocols, such as code division multiple access ("CDMA"), global system for mobiles ("GSM"), or time division multiple access ("TDMA"), as well as others, to communicate with other devices. The radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection. When we refer to "short" and "long" types of connections, we do not mean to refer to the spatial relation between two devices. Instead, we are generally referring to short range and long range as different categories, or types, of connections (i.e., a primary connection and a secondary connection). A short-range connection may include, by way of example and not limitation, a Wi-Fi® connection to a device (e.g., mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol; a Bluetooth connection or a near-field communication connection to another computing device is a second example of a short-range connection. A long-range connection may include a connection using, by way of example and not limitation, one or more of CDMA, GPRS, GSM, TDMA, and 802.16 protocols.
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments of the present invention have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations and are contemplated within the scope of the claims.

Claims

What is claimed is:
1. A computer-implemented method for mitigating unexpected events during a replay of operations by a digital assistant device, the method comprising:
invoking, by the digital assistant device, execution of an application referenced by an action dataset that is selected based on a received command, wherein the selected action dataset corresponds to a particular function of the referenced application and includes at least a first set and a second set of instructions that are each interpretable by the digital assistant device to initiate at least a corresponding portion of the particular function;
as the first set of instructions is being interpreted to initiate the particular function, detecting, by the digital assistant device, an unexpected behavior that interrupts the initiation of the particular function; and
interpreting, by the digital assistant device, the second set of instructions to resume the initiation of the particular function based at least in part on the unexpected behavior having been detected as the first set of instructions was being interpreted.
2. The computer-implemented method of claim 1, wherein at least a portion of the second set of instructions is associated with the detected unexpected behavior.
3. The computer-implemented method of claim 2, wherein the interpretation of at least a portion of the second set of instructions addresses the detected unexpected behavior.
4. The computer-implemented method of claim 2, wherein the second set of instructions is selected for interpretation based on a determination that at least the portion of the second set of instructions is associated with the detected unexpected behavior.
5. The computer-implemented method of claim 4, wherein the detected unexpected behavior includes at least one of a displayed pop-up, a displayed login prompt, a displayed notification, a displayed graphical user interface (GUI) element, a missing GUI element, or an inactive GUI element.
6. The computer-implemented method of claim 4, wherein the detected unexpected behavior is determined not associated with the first set of instructions.
7. The computer-implemented method of claim 1, wherein the unexpected behavior is included in a set of unexpected behaviors that are each detectable within a plurality of applications.
8. The computer-implemented method of claim 1, wherein the unexpected behavior is determined to interrupt the initiation of the particular function based on another determination that at least one operation corresponding to an interpreted instruction of the first set of instructions cannot be reproduced.
9. The computer-implemented method of claim 1, wherein the unexpected behavior is determined to interrupt the initiation of the particular function based on another determination that an expected GUI element associated with an interpreted instruction of the first set of instructions is unavailable.
10. The computer-implemented method of claim 1, wherein the interpretation of each instruction in the first set of instructions initiates the particular function.
11. The computer-implemented method of claim 10, wherein the interpretation of the second set of instructions initiates at least one of a termination operation, a delay operation, a close operation, an accept operation, or any other operation to address the unexpected behavior.
12. A non-transitory computer storage medium storing computer-useable instructions that, when used by a digital assistant device, cause the digital assistant device to perform operations comprising:
invoking execution of an application referenced by an action dataset that is selected based on a received command, wherein the selected action dataset corresponds to a particular function of the referenced application and includes at least a first set and a second set of instructions that are each interpretable by the digital assistant device to initiate at least a corresponding portion of the particular function;
as the first set of instructions is being interpreted to initiate the particular function, detecting an unexpected behavior that interrupts the initiation of the particular function; and
interpreting the second set of instructions to resume the initiation of the particular function based at least in part on the unexpected behavior having been detected as the first set of instructions was being interpreted.
13. The non-transitory computer storage medium of claim 12, wherein the first set of instructions is interpreted in response to the invoked execution of the referenced application.
14. The non-transitory computer storage medium of claim 12, wherein at least a portion of the second set of instructions is associated with the detected unexpected behavior.
15. The non-transitory computer storage medium of claim 14, wherein the interpretation of at least a portion of the second set of instructions addresses the detected unexpected behavior.
16. The non-transitory computer storage medium of claim 12, wherein the second set of instructions is selected for interpretation based on a determination that at least a portion of the second set of instructions is associated with the detected unexpected behavior.
17. The non-transitory computer storage medium of claim 16, wherein the detected unexpected behavior includes at least one of a pop-up, a login prompt, a notification, an unassociated graphical user interface (GUI) element, a determined missing GUI element, or a determined inactive GUI element.
18. A digital assistant device comprising:
one or more processors; and
one or more non-transitory computer storage media storing computer-usable instructions that, when used by the one or more processors, cause the one or more processors to:
invoke execution of an application referenced by an action dataset that is selected based on a received command, wherein the selected action dataset corresponds to a particular function of the referenced application and includes at least a first set and a second set of instructions that are each interpretable by the digital assistant device to initiate at least a corresponding portion of the particular function;
as the first set of instructions is being interpreted to initiate the particular function, detect an unexpected behavior that interrupts the initiation of the particular function; and
interpret the second set of instructions to resume the initiation of the particular function based at least in part on the unexpected behavior having been detected as the first set of instructions was being interpreted.
19. The digital assistant device of claim 18, wherein the first set of instructions is interpreted in response to the invoked execution of the referenced application, and wherein at least a portion of the second set of instructions is associated with the detected unexpected behavior.
20. The digital assistant device of claim 19, wherein the second set of instructions is selected for interpretation based on a determination that at least the portion of the second set of instructions is associated with the detected unexpected behavior.
PCT/US2018/048066 2017-10-25 2018-08-27 Robust replay of digital assistant operations WO2019083603A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762576766P 2017-10-25 2017-10-25
US62/576,766 2017-10-25
US15/984,122 US11520610B2 (en) 2017-05-18 2018-05-18 Crowdsourced on-boarding of digital assistant operations
US15/984,122 2018-05-18
US16/110,108 US20180366113A1 (en) 2017-05-18 2018-08-23 Robust replay of digital assistant operations
US16/110,108 2018-08-23

Publications (1)

Publication Number Publication Date
WO2019083603A1 true WO2019083603A1 (en) 2019-05-02

Family

ID=66247526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/048066 WO2019083603A1 (en) 2017-10-25 2018-08-27 Robust replay of digital assistant operations

Country Status (1)

Country Link
WO (1) WO2019083603A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090249206A1 (en) * 2008-03-28 2009-10-01 Nokia Corporation Method, apparatus and computer program product for presenting a media history
US20140074483A1 (en) * 2012-09-10 2014-03-13 Apple Inc. Context-Sensitive Handling of Interruptions by Intelligent Digital Assistant
EP2760015A1 (en) * 2013-01-23 2014-07-30 BlackBerry Limited Event-triggered hands-free multitasking for media playback


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18871751

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18871751

Country of ref document: EP

Kind code of ref document: A1