US20220129639A1 - Conditional responses to application commands in a client-server system - Google Patents
- Publication number
- US20220129639A1 (application US 17/569,433)
- Authority
- US
- United States
- Prior art keywords
- response
- client device
- application
- natural language
- package
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/565—Conversion or adaptation of application format or content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Definitions
- the disclosed embodiments relate generally to execution of applications distributed over a computer network, and more specifically, to server-based determination and provision of possible responses to requests for client devices.
- client devices support different types and degrees of application functionality. For example, users may use their laptops or tablets in some contexts, their smartphones in other contexts, specialized devices (e.g., smart watches or intelligent appliances) in other contexts, and the like.
- the various client devices may delegate requests (e.g., in natural language form) for application functionality to a server. This delegation is useful in cases where the client device itself is not capable of interpreting the request and thus relies on the server to perform the interpretation and provide the client device with a more structured form of the request that will enable the client device to respond to the request.
- the server does not have knowledge of the functionality of the various client devices making the requests, and/or cannot realistically know all the possible responses of which the various devices are capable. In such cases, the server cannot with certainty predict which, of the many potential types of responses, the client device should give in response to the application request.
- a client device receives a user request to execute a command of an application.
- the application operates in a specific type of domain, such as calendaring/scheduling, email/communications, or the like.
- the request may be specified, for example, in natural language form (e.g., as text or voice).
- the client device packages the request into a message and transmits the message to a response-processing server to interpret.
- the response-processing server determines the various possible responses that client devices could make in response to the request based on, for example, the capabilities of the client devices and the state of the application data (e.g., whether particular referenced data exists or not).
- the response-processing server accordingly generates a response package that encodes a number of different conditional responses that client devices could have to the request based on the various possible circumstances and transmits the response package to the client device.
- the client device selects the appropriate response from the response package based on the circumstances as determined by the client device. Then, using data from the response package about the requested command, the client device executes the command (if possible), and provides the user with some representation of the response (e.g., written or spoken) as supported by the capabilities of the client device.
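- The selection step described above can be sketched as follows (a minimal illustration; the field names, the device representation, and the condition-flag scheme are assumptions for this sketch, not the patent's actual package format):

```python
def select_response(package, device):
    """Pick the conditional response matching the client's circumstances.

    The client evaluates each candidate's condition locally (device
    capabilities, application-data state) and falls back to the package's
    default response when nothing else applies.
    """
    # A device that cannot handle this command type at all uses the default.
    if package["CommandKind"] not in device["supported_commands"]:
        return package["DefaultResponse"]
    for candidate in package["ConditionalResponses"]:
        # e.g. "item_found" / "item_missing": does the referenced
        # appointment actually exist in this device's application data?
        if device["conditions"].get(candidate["Condition"], False):
            return candidate
    return package["DefaultResponse"]
```

Because every branch is already present in the package, the client resolves the outcome without a further round trip to the server.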
- FIG. 1 shows a system environment in which command processing takes place, according to one embodiment.
- FIG. 2 is a data flow diagram illustrating interactions of the response processing server, a developer, the developer system, the client device, a user, and an application server, according to one embodiment.
- FIG. 3 illustrates a set of graphical regions that for most client devices serve as a baseline user interface.
- FIG. 4 illustrates in more detail the components of the response-processing server of FIGS. 1 and 2 , according to one embodiment.
- FIG. 5 is a sequence diagram illustrating interactions between the different entities of FIGS. 1 and 2 , according to one embodiment.
- FIG. 6 is a high-level block diagram illustrating physical components of a computer used as part or all of the response-processing server, client device, or developer host from FIG. 1 , according to one embodiment.
- FIG. 1 shows a system environment in which command processing takes place, according to one embodiment.
- Users of the client devices 110 use the client devices to specify requests for applications 111 executing partially or entirely on the client devices.
- the applications 111 are made up of executable computer code and implement functionality for one or more application domains, such as calendaring/scheduling, event reminders, email/voice or other communications, mapping, music players, and the like.
- the applications 111 can include both “back-end” code (e.g., creating a calendar item) and front-end processing code (e.g., interpreting a user natural language command to create a calendar item).
- the user command code may be authored by the same developer(s) that authored the “back-end” code, or a separate developer(s) may extend the user interface of an existing application by authoring new user command code that interprets user commands and implements them by calling the back-end code of the existing application (e.g., its application programming interface (API)) as appropriate for the given user command.
- the code of the application 111 may be located entirely on client devices 110 , or it may be located on an application server 130 , or it may be some combination thereof (e.g., a thin client application on the client device 110 that makes calls to an application server 130 ).
- the client devices delegate the processing of the request to a response-processing server 100 by creating a message that encodes the request and transmitting that message to the response-processing server 100 .
- the response-processing server 100 parses or otherwise programmatically analyzes the request to determine the meaning of the request, and if the parsing/analysis indicates that the request represents a valid command for the application, the server generates a set of conditional responses.
- the set of conditional responses hereinafter referred to as a “response package”—represents the various possible responses that the client device 110 could make in response to the command, depending on the circumstances.
- the response-processing server 100 provides the response package to the client device 110 .
- the client device 110 selects and displays the appropriate response from the response package based on its own capabilities and/or on the state of the application data itself. If the requested functionality of the application 111 is implemented on an application server 130 , rather than wholly by the local application 111 , the client device 110 may also use data from the response package to make a request of the application server 130 to carry out the command.
- the server creates a response package that includes a set of conditional responses, which enables the recipient client device 110 to programmatically select the appropriate response from the package without further network transmissions and the associated delays due to network traffic.
- this also avoids the need to account for and track multiple intermediate dialog states.
- Multiple intermediate dialog states are represented at the various branches of the above decision tree. For example, each “Y(es)” or “N(o)” answer leads to two separate states, one reflecting the Yes answer, and the other the No answer. Each of these possibilities would need to be reflected in the dialog logic and/or the state variables.
- an application developer authors an application 111 on a developer system and makes it available (e.g., via an app store) to the client devices 110 , which install it locally.
- the application developer, having the domain knowledge for the application domain, additionally specifies a semantic grammar that defines a set of natural language expressions that can be interpreted as valid commands to the application.
- separate developers could extend the user interface of the application 111 by defining the semantic grammars for the application 111 . Additional details of the semantic grammars of this embodiment are provided below with respect to FIG. 4 .
- the client devices 110 of FIG. 1 have one or more software applications 111 developed by a developer on the developer system 120 and made available to users.
- the application 111 includes command handling code 112 that responds to the receipt of a request from a user and delegates processing of the request to the response processing server.
- the command-handling code 112 is not part of the application 111 itself, but rather supplements the application.
- the request may be received by the command handling code 112 in different forms in different embodiments, such as natural language (either as speech or text), designations of user interface components, or the like.
- the request is processed by the response module 104 of the response processing server 100 .
- the response-processing server 100 includes a domain knowledge repository 102 that enables the response module 104 to provide the response package that is appropriate for the particular application and command.
- the domain knowledge repository 102 is implemented wholly or in part by semantic grammars that define a set of natural language expressions that are valid commands, as well as a set of corresponding actions that assemble the appropriate response package.
- the response module 104 is specially programmed code that is not available in a generic computer system, and implements the algorithms described herein. As will be apparent from the following discussion, the algorithms and processes described herein require implementation on a computer system, and are not performed by humans using mental steps.
- the response processing server 100 , client device 110 , and developer system are connected by a network 140 .
- the network 140 may be any suitable communications network for data transmission.
- the network 140 is the Internet and uses standard communications technologies and/or protocols.
- FIG. 2 is a data flow diagram illustrating interactions of the response processing server 100 , a developer 201 , the developer system 120 , the client device 110 , a user 210 , and an application server 130 , according to one embodiment.
- a developer 201 provides a software application 111 via a developer host system 120 .
- the developer's application 111 is configured to receive requests to execute commands, e.g., specified in natural language form, from users 210 .
- Each application 111 may have different possible responses when executed by the different client devices 110 that execute the application 111 .
- one particular application 111 is a calendar application allowing a user to make appointments by sending appointment creation messages to an application server 130 storing and controlling access to the actual calendar data.
- the laptop might contain sufficient application 111 logic to implement the deletion of the desired appointment (e.g., by sending a message to the application server 130 in the format required by the portion of the application 111 executing on the application server 130 ), assuming that such an appointment existed.
- the smart appliance might lack sufficient application 111 logic to implement the creation of the appointment.
- the appropriate response by the laptop might be to send the properly-formatted calendar appointment deletion request to the application server 130 and to display the message “Appointment deleted” within a user interface of the laptop. (If the calendar appointment in question does not exist, the proper response might be to note that the appointment does not exist and to display the message “Error: No such appointment.”)
- the proper response might be for the smart appliance to determine that it lacks the logic to handle appointment requests and to display or speak the response, “Error: Can't handle appointment requests.”
- the response-processing server 100 may be used to interpret many different command requests from many different client devices 110 on behalf of many different applications 111 .
- each application developer 201 provides the response-processing server 100 with semantic grammars (described in more detail below with respect to FIG. 4 ) that define corresponding sets of natural language expressions that constitute valid commands, as well as actions to take in response to successfully parsing the commands, such as code to generate the appropriate response package to provide to the client device.
- the semantic grammars may also be defined by other developers, separate from those that developed the application 111 , as a way to extend the user interface capabilities of the application (e.g., by adding natural language commands).
- the developer 201 also provides a set of response schemas to the response processing server 100 .
- the response schemas together with the semantic grammars, make up the domain knowledge repository 102 , in that they enable the response-processing server 100 to identify valid commands and to generate the appropriate corresponding response packages.
- the response schemas describe the format of the response package for corresponding application commands.
- the schemas have at least: a type that identifies the type of command to which the conditional responses of the response package correspond; the set of conditional responses themselves containing at least one default response, each response having at least one textual message that represents that particular response; and any needed command parameter values for implementing the command, assuming that the client device 110 supports the command.
- the command parameter values are derived from the command request and make it simpler for the client device 110 to execute the requested command, such as the value “14” (representing 2 PM) derived from the command request “Cancel my meeting with Chris tomorrow at 2 pm”.
- the response package includes a required features descriptor that indicates capabilities that the client device 110 must have in order to execute the command.
- the response package additionally contains an ordered list of view types (e.g., “Native”, or “None”) that specify the ways in which the response should be displayed, with the highest-ordered view type being used if the client device 110 and/or the application 111 supports it, the next-highest if they do not support the highest-ordered (but do support the next-highest-ordered), and so on.
- a value of “Native” indicates that the response should be rendered in a native graphical user interface of the application; a value of “None” (or an equivalent thereof) indicates that the response should be simply displayed in a text field of the client device 110 (e.g., a general-purpose text area on a smart appliance, where an application 111 might lack a native graphical user interface).
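- The ordered fallback over view types can be sketched as follows (a minimal illustration; the helper name and the device representation are assumptions for this sketch, not the patent's actual code):

```python
def choose_view_type(ordered_view_types, supported_view_types):
    """Return the highest-ordered view type the client/application supports."""
    for view_type in ordered_view_types:
        if view_type in supported_view_types:
            return view_type
    # "None" (plain display in a generic text field) is always renderable,
    # so it serves as the last resort.
    return "None"
```

A full-featured client would thus render natively, while a smart appliance lacking a native GUI would fall through to plain text.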
- each response of the set of different possible responses specified by the response package may contain an optional output state indicator that indicates a state that the application will be in after the response is provided, as well as an optional input state for the response to be applicable.
- an “initiate alarm set” response from the response package might have an associated output state indicator indicating that the application 111 is now in the state of waiting for the user to specify a time for the alarm
- an “alarm successfully set” response might have an associated input state requiring that the application 111 be in the “initiate alarm set” state before it can be selected.
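- The optional input and output state indicators can be sketched as follows, using the alarm example above (the field names and helper functions are illustrative assumptions):

```python
def applicable(response, current_state):
    """A response with no input state indicator is always applicable;
    otherwise the application must currently be in the required state."""
    required = response.get("InputState")
    return required is None or required == current_state

def apply_response(response, current_state):
    """Select a response and return the application's resulting state."""
    if not applicable(response, current_state):
        raise ValueError("response not applicable in state %r" % current_state)
    # Absent an output state indicator, the application state is unchanged.
    return response.get("OutputState", current_state)
```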
- each command-response package has a corresponding schema.
- the schemas may be specified as being derived from each other; for example, there may be a base schema for a generic response package that simply indicates that the command is not supported (supplied, e.g., by the response-processing server 100 ), and application developers may inherit from the base schema and add application-specific information.
- the response package of Listing 1 above specifies that the command type (“CommandKind”) is “CalendarCommand”.
- the response package also specifies a spoken and a written response corresponding to various conditional responses; these values indicate text for the client device 110 to either speak using text-to-speech conversion or to display textually within a user interface of the client device, respectively.
- the response package includes a default conditional response (corresponding to the “SpokenResponse” and “WrittenResponse” in the second and third top-level items) to be used when the client device 110 does not support the feature at all; a “ClientActionSucceededResult” conditional response that is used when the calendar cancellation command is successful; a “ClientActionFailedResult” conditional response that is used when the calendar cancellation command fails; a “CalendarPreferenceIsNotSetResult” conditional response that is used when no specific preferred calendar has been chosen yet; and a “NoMatchResult” that is used when no matching calendar item (e.g., an item with “Chris” at time “tomorrow at 2 pm”) can be found on the calendar.
- the response package also includes a descriptor of the specific command (“CancelItem”, the value of the “CalendarCommandKind” field), and command parameter values that the client can use to carry out the command (namely, the data in the hash corresponding to the “Query” field, such as the value “14” for the field “Hour”, indicating 2 PM).
- the response package of Listing 1 additionally contains a view type field (indicated by the “ViewType” field name) containing the ordered values “Native” and “None”, indicating that the responses, if displayed textually, should preferably be rendered graphically in a native user interface, and failing that, the “WrittenResponse” field of the appropriate response should simply be displayed in a default text area of a user interface.
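- Listing 1 itself is not reproduced in this excerpt. Based on the fields described above, a package of the shape being discussed might look roughly as follows (the exact keys, nesting, and message strings are assumptions reconstructed from the description, not the patent's actual listing):

```python
response_package = {
    "CommandKind": "CalendarCommand",
    # Default conditional response: client does not support the feature.
    "SpokenResponse": "Sorry, I can't handle appointment requests.",
    "WrittenResponse": "Error: Can't handle appointment requests.",
    "ClientActionSucceededResult": {
        "SpokenResponse": "Appointment deleted",
        "WrittenResponse": "Appointment deleted",
    },
    "ClientActionFailedResult": {
        "WrittenResponse": "Error: The appointment could not be canceled.",
    },
    "CalendarPreferenceIsNotSetResult": {
        "WrittenResponse": "Please choose a preferred calendar first.",
    },
    "NoMatchResult": {
        "WrittenResponse": "Error: No such appointment.",
    },
    # Descriptor of the specific command, plus parameter values derived
    # from "Cancel my meeting with Chris tomorrow at 2 pm".
    "CalendarCommandKind": "CancelItem",
    "Query": {"Attendee": "Chris", "Day": "tomorrow", "Hour": 14},
    # Ordered view types: prefer a native GUI, fall back to plain text.
    "ViewType": ["Native", "None"],
}
```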
- FIG. 3 illustrates a set of graphical regions that for most client devices 110 serve as a baseline user interface 300 .
- the user interface 300 includes a transcription text area 312 in which the application 111 of the client device 110 presents a textual version of a user command request, assuming that the user command request was initially given in the form of speech.
- the user interface 300 also includes a written response area 314 in which the application 111 of the client device 110 displays a textual version of the response, such as the value of the “WrittenResponse” field discussed above.
- the user interface also includes a graphical rendering area 316 in which the application 111 of the client device 110 displays a richer graphical rendering of a result of executing the command, such as a portion of the application data that was implicated by the command.
- the application 111 might display a graphical view of a portion of the user's calendar including the time representing “tomorrow at 2 pm”, showing that the meeting was canceled, as well as any other calendar appointments around that time that are still scheduled.
- the example user interface 300 may include different user interface portions on different client devices 110 .
- a client device 110 on which the command request is entered as text may omit the transcription area 312 .
- less full-featured devices such as a smart appliance, might not include graphical rendering capabilities and thus might omit the graphical rendering area 316 .
- the response-processing server 100 includes a response module 104 that processes user command requests and generates the appropriate response package.
- the response module 104 includes a grammar application module 205 that applies the semantic grammar that is appropriate for the given command request.
- the response module 104 also includes a response package builder module 206 , which generates the appropriate response package for the given command request.
- the grammar application module 205 is implemented by parsing logic 402 , described below in more detail with respect to FIG. 4 .
- the package builder module 206 is implemented by action code specified by the applied semantic grammar, as described in more detail below with respect to the actions 408 B of FIG. 4 .
- the response module 104 can then provide the response package to the client device 110 .
- a response selector 212 of the application code 111 executing on the client device 110 selects the response that is appropriate given the circumstances.
- the relevant circumstances include the capabilities of the client device 110 itself (e.g., whether the application code is configured to handle the particular type of command, as indicated by the “CommandKind” field of the response package) and the state of the application data (e.g., whether data for a specified appointment does in fact exist, as in the above example).
- the application code 111 may perform actions such as speaking the response text of the selected response (e.g., the value of the “SpokenResponse” field described above), displaying the response text of the selected response (e.g., the value of the “WrittenResponse” field) in the written response area 314 of FIG. 3 , and/or executing the command in the application 111 .
- the execution of the command may use values specified within the response, e.g., values within the value of the “Query” field of Listing 1.
- the execution of the command may involve only local actions within the client device 110 , or—if the application 111 is implemented in part using a remote application server 130 —may involve sending a request to the application server 130 in the format required by the application server, e.g., to delete a calendar item as in the above example.
- FIG. 4 illustrates in more detail the components of the response-processing server 100 of FIGS. 1 and 2 , according to one embodiment.
- the configuration of the logic and data of the response-processing server 100 results in a particular, non-generic computer system that is specifically structured to perform the functions described herein.
- the response-processing server 100 stores domain knowledge data 102 , which includes a set of semantic grammars 406 .
- Each semantic grammar 406 implicitly defines a set of natural language expressions (hereinafter “expressions”) that it is able to parse and interpret, such as commands requesting the creation or modification of calendar appointments, commands that set alarms, and the like.
- the semantic grammars 406 are typically submitted by the various developers 201 of the applications 111 , although it is possible for the semantic grammars to be submitted by other parties as well, such as the entity responsible for the response-processing server 100 .
- Each semantic grammar 406 has associated metadata 407 , such as a name, a description, an author, and/or a date of submission. It also has one or more rules 408 defined in a programming language. Each semantic grammar rule 408 defines: a pattern 408 A for the natural language expressions that the semantic grammar 406 can parse (for example, a natural language expression that specifies a date or a range of dates); optionally empty programming actions 408 B that may include semantic constraints, and that specify the construction of an interpretation or meaning (e.g., by assigning values to output variables based on a natural language request that the semantic grammar interprets); and output variables 408 C that represent the meaning of the natural language request, as discussed below.
- a semantic grammar 406 is said to be able to interpret a natural language request if the request can be successfully parsed according to the pattern 408 A of one of the rules 408 of the semantic grammar 406 , and the semantic constraints (if any) defined by the corresponding actions 408 B are all satisfied; the meaning representation of the expression is then defined by the output variables 408 C of the rule 408 .
- a natural language expression (or entire request) can be successfully parsed and interpreted by a given semantic grammar 406 , the interpretation yields an associated meaning defined by values of the output variables 408 C, and (in embodiments that have a weighting module 404 , as described below) an associated strength measure. If a natural language expression cannot be successfully interpreted by the semantic grammar, in one embodiment the interpretation returns a NULL value or other value signifying that the natural language expression could not be interpreted by the semantic grammar; in another embodiment, the failure of a semantic grammar to interpret an expression does not result in any return value, but instead in a failure to complete parsing and interpretation.
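- The interpret-or-fail contract described above can be sketched as follows (names and data shapes are illustrative assumptions; this follows the embodiment in which failure to interpret returns a null value):

```python
def interpret(grammar_rules, expression):
    """Try each rule: match its pattern, check its semantic constraints,
    and return the rule's output variables (the meaning) on success.
    Returns None when no rule can interpret the expression."""
    for pattern, constraint, outputs in grammar_rules:
        match = pattern(expression)
        if match is not None and constraint(match):
            return outputs(match)
    return None
```

For instance, a toy rule whose pattern strips a fixed prefix and whose semantic constraint requires a numeric hour would interpret “set alarm at 14” but fail on “set alarm at noon”.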
- An expression pattern 408 A corresponds to one of the ways of describing the right-hand side of a context-free grammar rule, such as Backus-Naur Form (BNF) or Extended Backus-Naur Form (EBNF), or a regular expression, or a finite state automaton (FSA), or the like.
- the expression pattern 408 A may be specified in terms of other semantic grammars, thereby allowing for hierarchical relationships between semantic grammars, and possibly the parsing of recursive expressions.
- the programming actions 408 B may be used to assign values to the output variables 408 C based on the natural language request that the semantic grammar 406 interprets.
- the programming language is a general-purpose procedural programming language that is extended with additional programming constructs for specifying natural language interpretation, such as the patterns 408 A of the expressions to be matched.
- the semantic grammars 406 are defined by an extended procedural programming language based on C++ in which “interpret” blocks define the expression pattern 408 A to be matched, the semantic grammar parameter list defines the output variables 408 C, and standard C++ statements define the programming actions 408 B. It is appreciated that languages other than C++ could equally be used in other embodiments.
- a semantic grammar 406 is implemented by a ‘code block’ with the general form “block outputVariablesList blockName (pattern programmingAction)+”, where “outputVariablesList” is the output variables 408 C, “blockName” is the given name of the block that implements the semantic grammar 406 , and each pair of a “pattern” expression 408 A and a “programmingAction” 408 B represents a particular action to be taken in response to the existence of a particular corresponding pattern.
- the expression (pattern programmingAction)+ indicates that there are one or more pairings of a pattern with a programming action.
- [Fragment of Listing 2; the surrounding listing is garbled in the source. The recoverable portion of the pattern reads: [“tell me”] . [“the”] . “number of points played when announcer says” . n1 VALID_TENNIS_NUMBER( ) . [“to”] . …]
- the semantic grammar 406 named TENNIS_POINTS( ) is able to interpret an expression embodying a request to determine how many points have been played in a game based on the game score in a tennis match, with the meaning of the expression represented by output variables 408 C in the parameter list (namely, the integer “totalPoints”).
- the numbers that are valid in tennis are in turn represented by their own semantic grammar 406 , named “VALID_TENNIS_NUMBER”, which accepts the scores 15, 30, 40, or “love”.
- the expression pattern 408 A of this example semantic grammar can interpret the expression “what is the score when announcer says fifteen love” (in which case its meaning is represented by the value 1 for the totalPoints output variable), but not “what is the score when announcer says fifteen eighty” because the former expression can be successfully parsed according to the expression pattern 408 A, but the latter expression cannot (because “eighty” is not a valid tennis number).
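- The meaning computed by this example can be sketched in executable form (the score-to-points mapping follows standard tennis scoring; the function name is illustrative, not the patent's actual action code):

```python
# Points already played for each announced score word, mirroring the
# scores VALID_TENNIS_NUMBER( ) accepts (love, 15, 30, 40).
POINTS_PLAYED = {"love": 0, "fifteen": 1, "thirty": 2, "forty": 3}

def tennis_points(score_phrase):
    """Return totalPoints for a phrase like 'fifteen love', or None when a
    word is not a valid tennis number (i.e., a parse failure)."""
    total = 0
    for word in score_phrase.split():
        if word not in POINTS_PLAYED:
            return None  # e.g. "eighty" cannot be parsed
        total += POINTS_PLAYED[word]
    return total
```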
- semantic grammar INTEGER_COUNT( ) is not defined in Listing 2, but (for example) is provided by a separate developer and submitted for inclusion in the domain knowledge data 102 for use by others, such as the author of the TENNIS_POINTS( ) semantic grammar of Listing 2.
- the programming actions 408 B may accomplish different types of purposes.
- semantic grammars 406 could also be created for other purposes, such as specifying commands like the above example cancellation of a calendar appointment.
- a developer of the application 111 that includes a command for canceling a calendar appointment could author a top-level semantic grammar 406 for handling user requests for the command.
- One or more semantic grammars 406 and their rules 408 would specify the different manners in which a user could indicate a request for the command, and the command specifics, using natural language.
- different patterns 408 A could accept different forms of a request, such as “Cancel my meeting with Chris tomorrow at 2 pm”, “Delete appointment tomorrow at 1400 ”, “Remove meeting with Chris on June 21st”, and the like.
- the actions 408 B of the semantic grammar 406 could result in the generation of the particular response package that has a format corresponding to the command, as specified by the schema for that command (e.g., one response for client devices 110 that don't support the feature, one response for successful cancellation of the appointment, one response for non-existence of the reference appointment, etc.) and that has the values specified by the command request (e.g., “June 21 st ” as the appointment day, “2 pm” as the appointment time, etc.).
- the schema for that command e.g., one response for client devices 110 that don't support the feature, one response for successful cancellation of the appointment, one response for non-existence of the reference appointment, etc.
- the values specified by the command request e.g., “June 21 st ” as the appointment day, “2 pm” as the appointment time, etc.
- the response-processing server 100 comprises parsing logic 402 that takes a natural language request as input, determines whether a given semantic grammar 406 can interpret that request, and, if so, (optionally) computes a match strength measure between the given semantic grammar and the expression.
- a semantic grammar 406 is considered to be able to interpret a natural language request if the request can be parsed according to the semantic grammar's pattern 408 A and satisfies the semantic grammar's semantic constraints (if any).
- the parsing logic 402 has a meaning parser module 403 that accepts textual input for a natural language request and determines whether a given semantic grammar 406 can interpret that textual input.
- the parsing logic 402 also has a weighting module 404 that assigns values to successful interpretations of an expression as a way to rank the relative strength of the successful interpretations.
- the weighting module 404 computes a match strength measure corresponding to a measure of strength of a parse obtained by a semantic grammar 406 parsing the given request, such as the estimated likelihood that the parse produces the request.
- the match strength measure is determined as a function of weights assigned to sub-expressions within the given expression that ultimately matches the input.
- the weights of sub-expressions represent likelihoods of the sub-expressions, and the weight of the matching expression is calculated as the product of the weights of the sub-expressions.
- the expression pattern 408 A for the TENNIS_POINTS semantic grammar is composed of sub-expressions such as “what is”, “tell me”, “the”, “number of points when announcer says”, VALID_TENNIS_NUMBER( ) (a separate semantic grammar), and “to”.
- the value 10 serves as a weight corresponding to the sub-expression “tell me”.
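One plausible reading of the weighting described above can be sketched in code: relative weights attached to alternatives (such as the 10 attached to “tell me”, against an implicit 1 for a sibling alternative) are normalized into probabilities, and the match strength of the whole expression is the product of its sub-expression likelihoods. The normalization step is an assumption for illustration; the specification states only that the weight of the matching expression is the product of the weights of the sub-expressions.

```python
from math import prod

def alternative_probability(weight: float, sibling_weights: list) -> float:
    """Normalize one alternative's weight against all alternatives in its disjunction."""
    return weight / sum(sibling_weights)

def match_strength(subexpression_likelihoods: list) -> float:
    """Match strength measure: product of the sub-expression likelihoods."""
    return prod(subexpression_likelihoods)
```

For example, with weights 10 (“tell me”) and 1 (“what is”), a request matching “tell me” contributes a likelihood of 10/11 to the product, so requests phrased with “tell me” yield a higher match strength than those phrased with “what is”.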
- the parsing logic 402 implements the grammar application module 205 of FIG. 2 . That is, when a particular command request is received by the response-processing server 100 from a client device 110 , the parsing logic 402 tests and weights some or all of the stored semantic grammars 406 against the received command request, selecting the semantic grammar producing the highest score as the semantic grammar to use when handling the command request.
- the response-processing server 100 includes a schema repository 420 storing the package schemas 420 that correspond to the different commands/semantic grammars 406 .
- the schemas 420 may be provided by the developers 201 of FIG. 2 , for example.
- the package schemas describe the structure that a response package for a particular command must have.
- a schema compiler 421 compiles the schemas 420 to produce corresponding response package interface code 422 .
- the response package interface code 422 includes procedural code for creating data structures embodying the corresponding response package, for getting and setting fields of those data structures, and for serializing and de-serializing those data structures when transmitting the data structures over the network 140 between the server 100 and the client devices 110 .
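A minimal sketch of what the schema compiler 421 might emit is given below. The class, its field names (taken from Listing 1), and its JSON-based serialization are illustrative assumptions, not the actual generated response package interface code 422.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CalendarCancelResponsePackage:
    """Hypothetical generated data structure for one response package."""
    CommandKind: str = "CalendarCommand"
    CalendarCommandKind: str = "CancelItem"
    SpokenResponse: str = ""
    WrittenResponse: str = ""

    def serialize(self) -> str:
        """Serialize for transmission over the network between server and client."""
        return json.dumps(asdict(self))

    @classmethod
    def deserialize(cls, payload: str) -> "CalendarCancelResponsePackage":
        """Reconstruct the data structure on the receiving side."""
        return cls(**json.loads(payload))
```

The generated code thus gives both server and client a shared, typed view of the package: the server fills in the fields via the setters, and the client reads them back after deserialization.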
- FIG. 5 is a sequence diagram illustrating interactions between the different entities of FIGS. 1 and 2 , according to one embodiment.
- An application developer 201 provides 410 domain knowledge data to the response processing server 100 , for each command of the application for which the application developer 201 wishes to use the response-processing server 100 for command interpretation.
- the domain knowledge includes, for each command, a semantic grammar 406 and (optionally) a package schema 420 .
- the domain knowledge may also be provided by a developer separate from the developer that authored the application 111 itself. In such cases, the separate developer also provides code for the response selector 212 of FIG. 2 .
- a user submits 430 a command request on the user's client device 110 (e.g., as spoken or typed natural language), and the client device 110 delegates 435 the request to the response-processing server 100 .
- the response-processing server 100 identifies 440 a semantic grammar for the command request.
- the parsing logic 402 of FIG. 4 , which parses and weights the command request using each of the semantic grammars 406 in the domain knowledge data and selects the semantic grammar that produces the highest score, is one example of a means of performing this function.
- the response-processing server executes 445 the action code 408 B of the semantic grammar, and the action code generates a response package for the command.
- the action code 408 B generates the response package by identifying and applying a package schema 420 that corresponds to the identified semantic grammar, e.g., by calling the response package interface code 422 previously generated by the schema compiler 421 .
- the response-processing server 100 sends 450 the response package for delivery to the client device 110 .
- the response selector 212 for the application 111 determines 460 whether it can execute the command.
- the response selector 212 may determine that the command is not executable by reading a field from the response package (e.g., the value of the “CommandKind” field, such as “CalendarCommand”) and determining that it is not one of the commands that the client device 110 and/or the application 111 can support.
- the particular application 111 implemented on the client device 110 might not support the “CalendarCommand” command type, or its specific sub-type “CancelItem” indicated by the “CalendarCommandKind” field.
- the response selector 212 may also determine that the command is not executable based on the state of the application data, e.g., that the appointment for which cancellation is requested doesn't exist, or the state of the application itself, as indicated by the required input states, if any, specified by the various possible responses of the response package.
- the response selector 212 executes 465 the command, to the extent that the command can be successfully executed.
- the response selector 212 identifies which of the potential responses of the response package is appropriate and provides 470 an appropriate indication of the outcome of the command using the identified response, such as displaying textually a value from the identified response in the response package (e.g., in the case of appointment cancellation success, the sub-value “I've canceled that item on your calendar” of the “ClientActionSucceededResult” value from Listing 1), speaking a value of the identified response from the response package using text to speech technology, or the like.
- the response selector 212 may choose the indication of the outcome of the command according to a view type attribute of the response package, determining which is the “best” of the indicated view types that the client device 110 can successfully provide.
- the response selector 212 may also set a state of the application 111 based on the output state (if any) specified by the selected response from the response package.
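The client-side selection steps 460-470 can be sketched as follows. The field names come from Listing 1, but the selection logic, the capability table, and the helper name are hypothetical illustrations of the response selector 212, not its actual implementation.

```python
from typing import Optional

# Assumed client capability table: command kinds this client/application supports.
SUPPORTED_COMMANDS = {"CalendarCommand"}

def select_response(package: dict, outcome: Optional[str]) -> str:
    """Pick the written response appropriate to this client and outcome.

    `outcome` is determined by the client after attempting the command,
    e.g. "ClientActionSucceededResult" or "NoMatchResult"; None means the
    command was not attempted.
    """
    # Step 460: an unsupported command falls back to the package's default response.
    if package["CommandKind"] not in SUPPORTED_COMMANDS:
        return package["WrittenResponse"]
    # Step 470: select the conditional response matching the determined outcome.
    conditional = package.get(outcome) if outcome else None
    if conditional is None:
        return package["WrittenResponse"]
    return conditional["WrittenResponse"]
```

For example, a client that supports calendar commands and succeeds in canceling the appointment would select the “ClientActionSucceededResult” response, while a client lacking calendar support would fall back to the default “not supported” text without contacting the server again.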
- FIG. 6 is a high-level block diagram illustrating physical components of a computer 600 used as part or all of the query-processing server 100 or client device 110 , or developer host 120 from FIG. 1 , according to one embodiment. Illustrated are at least one processor 602 coupled to a chipset 604 . Also coupled to the chipset 604 are a memory 606 , a storage device 608 , a keyboard 610 , a graphics adapter 612 , a pointing device 614 , and a network adapter 616 . A display 618 is coupled to the graphics adapter 612 . In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622 . In another embodiment, the memory 606 is coupled directly to the processor 602 instead of the chipset 604 .
- the storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device.
- the memory 606 holds instructions and data used by the processor 602 .
- the pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer 600 .
- the graphics adapter 612 displays images and other information on the display 618 .
- the network adapter 616 couples the computer 600 to a local or wide area network.
- a computer 600 can have different and/or other components than those shown in FIG. 6 .
- the computer 600 can lack certain illustrated components.
- a computer 600 acting as a server may lack a keyboard 610 , pointing device 614 , graphics adapter 612 , and/or display 618 .
- the storage device 608 can be local and/or remote from the computer 600 (such as embodied within a storage area network (SAN)).
- the computer 600 is adapted to execute computer program modules for providing functionality described herein.
- module refers to computer program logic utilized to provide the specified functionality.
- a module can be implemented in hardware, firmware, and/or software.
- program modules are stored on the storage device 608 , loaded into the memory 606 , and executed by the processor 602 .
- process steps and instructions are embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
- Appendix 4 Response package for command request “Set an alarm”
  {
    “Format”: “SoundHoundVoiceSearchResult”,
    “FormatVersion”: “1.0”,
    “Status”: “OK”,
    “NumToReturn”: 1,
    “AllResults”: [
      {
        “CommandKind”: “AlarmCommand”,
        “SpokenResponse”: “This client does not support setting an alarm.”,
        “SpokenResponseLong”: “This client does not support setting an alarm.”,
        “WrittenResponse”: “This client does not support setting an alarm.”,
        “WrittenResponseLong”: “This client does not support setting an alarm.”,
        “AutoListen”: false,
        “ConversationState”: {
          “ConversationStateTime”: 1431127801,
          “CommandKind”: “AlarmCommand”,
          “AlarmCommandKind”: “AlarmStartCommand”,
          “Mode”: “Default”
        }
      }
    ]
  }
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 16/791,421, filed on Feb. 14, 2020, which is a continuation of U.S. patent application Ser. No. 16/550,174, filed on Aug. 24, 2019, which is in turn a continuation of U.S. patent application Ser. No. 14/799,382, filed on Jul. 14, 2015, all of which are hereby incorporated by reference.
- The disclosed embodiments relate generally to execution of applications distributed over a computer network, and more specifically, to server-based determination and provision of possible responses to requests for client devices.
- Different types of client devices support different types and degrees of application functionality. For example, users may use their laptops or tablets in some contexts, their smartphones in other contexts, specialized devices (e.g., smart watches or intelligent appliances) in other contexts, and the like. The various client devices may delegate requests (e.g., in natural language form) for application functionality to a server. This delegation is useful in cases where the client device itself is not capable of interpreting the request and thus relies on the server to perform the interpretation and provide the client device with a more structured form of the request that will enable the client device to respond to the request. However, in many cases the server does not have knowledge of the functionality of the various client devices making the requests, and/or cannot realistically know all the possible responses of which the various devices are capable. In such cases, the server cannot with certainty predict which, of the many potential types of responses, the client device should give in response to the application request.
- A client device receives a user request to execute a command of an application. The application operates in a specific type of domain, such as calendaring/scheduling, email/communications, or the like. The request may be specified, for example, in natural language form (e.g., as text or voice). The client device packages the request into a message and transmits the message to a response-processing server to interpret. Using programmed code that represents domain knowledge of the application's domain, the response-processing server determines the various possible responses that client devices could make in response to the request based on, for example, the capabilities of the client devices and the state of the application data (e.g., whether particular referenced data exists or not). The response-processing server accordingly generates a response package that encodes a number of different conditional responses that client devices could have to the request based on the various possible circumstances and transmits the response package to the client device. The client device selects the appropriate response from the response package based on the circumstances as determined by the client device. Then, using data from the response package about the requested command, the client device executes the command (if possible), and provides the user with some representation of the response (e.g., written or spoken) as supported by the capabilities of the client device.
-
FIG. 1 shows a system environment in which command processing takes place, according to one embodiment. -
FIG. 2 is a data flow diagram illustrating interactions of the response processing server, a developer, the developer system, the client device, a user, and an application server, according to one embodiment. -
FIG. 3 illustrates a set of graphical regions that for most client devices serve as a baseline user interface. -
FIG. 4 illustrates in more detail the components of the response-processing server of FIGS. 1 and 2 , according to one embodiment. -
FIG. 5 is a sequence diagram illustrating interactions between the different entities of FIGS. 1 and 2 , according to one embodiment. -
FIG. 6 is a high-level block diagram illustrating physical components of a computer used as part or all of the query-processing server, client device, or developer host from FIG. 1 , according to one embodiment. - The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that other alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
-
FIG. 1 shows a system environment in which command processing takes place, according to one embodiment. Users of the client devices 110 use the client devices to specify requests for applications 111 executing partially or entirely on the client devices. The applications 111 are made up of executable computer code and implement functionality for one or more application domains, such as calendaring/scheduling, event reminders, email/voice or other communications, mapping, music players, and the like. The applications 111 can include both “back-end” code (e.g., creating a calendar item) and front-end processing code (e.g., interpreting a user natural language command to create a calendar item). The user command code may be authored by the same developer(s) that authored the “back-end” code, or a separate developer(s) may extend the user interface of an existing application by authoring new user command code that interprets user commands and implements them by calling the back-end code of the existing application (e.g., its application programming interface (API)) as appropriate for the given user command. The code of the application 111 may be located entirely on client devices 110, or it may be located on an application server 130, or it may be some combination thereof (e.g., a thin client application on the client device 110 that makes calls to an application server 130). - In cases where the applications 111 (or add-on software thereto) cannot themselves programmatically interpret the requests for the applications (e.g., when the requests are specified in a natural language, such as via a virtual assistant program that performs request interpretation on a server), the client devices delegate the processing of the request to a response-
processing server 100 by creating a message that encodes the request and transmitting that message to the response-processing server 100. The response-processing server 100 parses or otherwise programmatically analyzes the request to determine the meaning of the request, and if the parsing/analysis indicates that the request represents a valid command for the application, the server generates a set of conditional responses. The set of conditional responses—hereinafter referred to as a “response package”—represents the various possible responses that the client device 110 could make in response to the command, depending on the circumstances. The response-processing server 100 provides the response package to the client device 110. The client device 110 selects and displays the appropriate response from the response package based on its own capabilities and/or on the state of the application data itself. If the requested functionality of the application 111 is implemented on an application server 130, rather than wholly by the local application 111, the client device 110 may also use data from the response package to make a request of the application server 130 to carry out the command. - Providing the
client device 110 with the response package—which provides the set of possible responses—immediately allows the client device to quickly select the proper response. This greatly reduces the number of client-server interactions required to respond to the command request, and accordingly also simplifies the protocol logic that would be required to implement the different possible responses. For example, designing a protocol for canceling a meeting in a calendaring application would require more back-and-forth interactions between the client and the server, and hence would also be more complex to design, in order to account for the various states of the interactions. For instance, consider the user request “Cancel my meeting with Chris tomorrow at 2 pm”. A conventional protocol using a single question/response at a time might result in the following multi-step dialog process: -
- a. Server asks client if it has calendar capabilities
- b. Client answers server with Y or N (along with client ID and dialog state)
- c. Server receives client's Y or N answers and current dialog state,
- i. In case of N answer, send a final response to the client
- ii. In case of Y answer, checks if a calendar is available for this user
- 1. Client answers Y or N (with dialog state and user ID)
- 2. Server receives Y or N answers and current dialog state,
- a. In case of N answer, has a final answer
- b. In case of Y, server further inquires if the user has a calendar entry matching the request (time, person)
- i. Client answers Y or N
- 1. . . .
- In contrast, the server creates a response package that includes a set of conditional responses, which enables the
recipient client device 110 to programmatically select the appropriate response from the package without further network transmissions and the associated delays due to network traffic. Importantly, this also avoids the need to account for and track multiple intermediate dialog states. (Multiple intermediate dialog states are represented at the various branches of the above decision tree. For example, each “Y(es)” or “N(o)” answer leads to two separate states, one reflecting the Yes answer, and the other the No answer. Each of these possibilities would need to be reflected in the dialog logic and/or the state variables.) - In one embodiment, an application developer authors an
application 111 on a developer system and makes it available (e.g., via an app store) to the client devices 110, which install it locally. In this embodiment, the application developer, having the domain knowledge for the application domain, additionally specifies a semantic grammar that defines a set of natural language expressions that it is able to interpret as valid commands to the application. Alternatively, as noted above, separate developers could extend the user interface of the application 111 by defining the semantic grammars for the application 111. Additional details of the semantic grammars of this embodiment are provided below with respect to FIG. 4 . - Thus, the
client devices 110 of FIG. 1 have one or more software applications 111 developed by a developer on the developer system 120 and made available to users. The application 111 includes command handling code 112 that responds to the receipt of a request from a user and delegates processing of the request to the response processing server. (In an embodiment in which separate developers develop the semantic grammars for defining a natural language user interface, the command-handling code 112 is not part of the application 111 itself, but rather supplements the application.) The request may be received by the command handling code 112 in different forms in different embodiments, such as natural language (either as speech or text), designations of user interface components, or the like. - The request is processed by the
response module 104 of the response processing server 100. The response-processing server 100 includes a domain knowledge repository 102 that enables the response module 104 to provide the response package that is appropriate for the particular application and command. In one embodiment, the domain knowledge repository 102 is implemented wholly or in part by semantic grammars that define a set of natural language expressions that are valid commands, as well as a set of corresponding actions that assemble the appropriate response package. The response module 104 is specially programmed code that is not available in a generic computer system, and implements the algorithms described herein. As will be apparent from the following discussion, the algorithms and processes described herein require implementation on a computer system, and are not performed by humans using mental steps. - The
response processing server 100, client device 110, and developer system are connected by a network 140. The network 140 may be any suitable communications network for data transmission. In one embodiment, the network 140 is the Internet and uses standard communications technologies and/or protocols. -
FIG. 2 is a data flow diagram illustrating interactions of the response processing server 100, a developer 201, the developer system 120, the client device 110, a user 210, and an application server 130, according to one embodiment. Initially, a developer 201 provides a software application 111 via a developer host system 120. The developer's application 111 is configured to receive requests to execute commands, e.g., specified in natural language form, from users 210. (The ability to receive and execute commands may be provided directly by the developer(s) 201 that developed the application 111, or by separate developers that author command-handling code that then conceptually forms part of the original application 111.) Each application 111, and each distinct command within the application, may have different possible responses when executed by the different client devices 110 that execute the application 111. For example, assume that one particular application 111 is a calendar application allowing a user to make appointments by sending appointment creation messages to an application server 130 storing and controlling access to the actual calendar data. If a user specified (e.g., via speech input) the request “Cancel my meeting with Chris tomorrow at 2 pm” on his laptop, the laptop might contain sufficient application 111 logic to implement the deletion of the desired appointment (e.g., by sending a message to the application server 130 in the format required by the portion of the application 111 executing on the application server 130), assuming that such an appointment existed. In contrast, if the user specified the same request on a smart appliance (e.g., by speaking to a smart television), the smart appliance might lack sufficient application 111 logic to implement the cancellation of the appointment.
Thus, in the example of the laptop, the appropriate response by the laptop might be to send the properly-formatted calendar appointment deletion request to the application server 130 and to display the message “Appointment deleted” within a user interface of the laptop. (If the calendar appointment in question does not exist, the proper response might be to note that the appointment does not exist and to display the message “Error: No such appointment.”) In the example of the smart appliance, the proper response might be for the smart appliance to determine that it lacks the logic to handle appointment requests and to display or speak the response, “Error: Can't handle appointment requests.” - The response-
processing server 100 may be used to interpret many different command requests from many different client devices 110 on behalf of many different applications 111. In order to better allow the response-processing server 100 to be able to interpret all of these different requests, in one embodiment each application developer 201 provides the response-processing server 100 with semantic grammars (described in more detail below with respect to FIG. 4 ) that define corresponding sets of natural language expressions that constitute valid commands, as well as actions to take in response to successfully parsing the commands, such as code to generate the appropriate response package to provide to the client device. As noted above, the semantic grammars may also be defined by other developers, separate from those that developed the application 111, as a way to extend the user interface capabilities of the application (e.g., by adding natural language commands). - Similarly, in one embodiment the developer 201 (or whichever developer provides the semantic grammars) also provides a set of response schemas to the
response processing server 100. The response schemas, together with the semantic grammars, make up the domain knowledge repository 102, in that they enable the response-processing server 100 to identify valid commands and to generate the appropriate corresponding response packages. The response schemas describe the format of the response package for corresponding application commands. In one embodiment, the schemas have at least: a type that identifies the type of command to which the conditional responses of the response package correspond; the set of conditional responses themselves, containing at least one default response, each response having at least one textual message that represents that particular response; and any needed command parameter values for implementing the command, assuming that the client device 110 supports the command. The command parameter values are derived from the command request and make it simpler for the client device 110 to execute the requested command, such as the value “14” (representing 2 PM) derived from the command request “Cancel my meeting with Chris tomorrow at 2 pm”. In some embodiments, the response package includes a required features descriptor that indicates capabilities that the client device 110 must have in order to execute the command. In some embodiments, the response package additionally contains an ordered list of view types (e.g., “Native”, or “None”) that specify the ways in which the response should be displayed, with the highest-ordered view type being used if the client device 110 and/or the application 111 supports it, the next-highest if the client device 110 and/or the application 111 does not support the highest-ordered (but supports the next-highest-ordered), and so on.
A value of “Native” (or an equivalent thereof), for example, indicates that the response should be rendered in a native graphical user interface of the application; a value of “None” (or an equivalent thereof) indicates that the response should be simply displayed in a text field of the client device 110 (e.g., a general-purpose text area on a smart appliance, where an application 111 might lack a native graphical user interface). - In some embodiments, each response of the set of different possible responses specified by the response package may contain an optional output state indicator that indicates a state that the application will be in after the response is provided, as well as an optional input state for the response to be applicable. For example, for a response package corresponding to a user request for a “set alarm” command, an “initiate alarm set” response from the response package might have an associated output state indicator indicating that the application 111 is now in the state of waiting for the user to specify a time for the alarm, and an “alarm successfully set” response might have an associated input state requiring that the application 111 be in the “initiate alarm set” state before it can be selected.
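The input/output state gating described above can be sketched as follows. The field names “InputState” and “OutputState” and the helper functions are hypothetical illustrations, assumed for this sketch rather than taken from the specification's listings.

```python
def applicable(response: dict, current_state: str) -> bool:
    """A response is selectable only if its required input state (if any) matches."""
    required = response.get("InputState")
    return required is None or required == current_state

def apply_response(response: dict, current_state: str) -> str:
    """Select a response and return the application state after providing it."""
    if not applicable(response, current_state):
        raise ValueError("response not applicable in current state")
    # The output state indicator (if any) becomes the new application state.
    return response.get("OutputState", current_state)
```

In the "set alarm" example, selecting the "initiate alarm set" response moves the application into a waiting-for-time state, and only then does the "alarm successfully set" response become applicable.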
-
Listing 1

    {
      "CommandKind": "CalendarCommand",
      "SpokenResponse": "Canceling a calendar item is not supported by this client.",
      "WrittenResponse": "Canceling a calendar item is not supported by this client.",
      "ViewType": [ "Native", "None" ],
      "ClientActionSucceededResult": {
        "SpokenResponse": "I've canceled that item on your calendar.",
        "WrittenResponse": "I've canceled that item on your calendar."
      },
      "ClientActionFailedResult": {
        "SpokenResponse": "That calendar item cancelation could not be made.",
        "WrittenResponse": "That calendar item cancelation could not be made."
      },
      "CalendarCommandKind": "CancelItem",
      "Query": {
        "SelectionTarget": "First",
        "RecurringTarget": "SpecifiedOnly",
        "ItemType": "Meeting",
        "StartTimeRange": {
          "RangeStart": {
            "Date": { "Symbolic": "unknown" },
            "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
          },
          "RangeEnd": {
            "Date": { "Symbolic": "unknown" },
            "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
          }
        }
      },
      "CalendarPreferenceIsNotSetResult": {
        "SpokenResponse": "Calendar items cannot currently be canceled because no preferred calendar has been chosen.",
        "WrittenResponse": "Calendar items cannot currently be canceled because no preferred calendar has been chosen."
      },
      "NoMatchResult": {
        "SpokenResponse": "No matching calendar item was found.",
        "WrittenResponse": "No matching calendar item was found."
      }
    }

- The response package of Listing 1 above specifies that the command type (“CommandKind”) is “CalendarCommand”. The response package also specifies a spoken and a written response corresponding to various conditional responses; these values indicate text for the
client device 110 to either speak using text-to-speech conversion or to display textually within a user interface of the client device, respectively. For example, the response package includes a default conditional response (corresponding to the “SpokenResponse” and “WrittenResponse” in the second and third top-level items) to be used when the client device 110 does not support the feature at all; a “ClientActionSucceededResult” conditional response that is used when the calendar cancellation command is successful; a “ClientActionFailedResult” conditional response that is used when the calendar cancellation command fails; a “CalendarPreferenceIsNotSetResult” conditional response that is used when no specific preferred calendar has been chosen yet; and a “NoMatchResult” that is used when no matching calendar item (e.g., an item with “Chris” at time “tomorrow at 2 pm”) can be found on the calendar. The response package also includes a descriptor of the specific command (“CancelItem”, the value of the “CalendarCommandKind” field), and command parameter values that the client can use to carry out the command (namely, the data in the hash corresponding to the “Query” field, such as the value “14” for the field “Hour”, indicating 2 PM). The response package of Listing 1 additionally contains a view type field (indicated by the “ViewType” field name) containing the ordered values “Native” and “None”, indicating that the responses, if displayed textually, should preferably be rendered graphically in a native user interface, and failing that, the “WrittenResponse” field of the appropriate response should simply be displayed in a default text area of a user interface. -
FIG. 3 illustrates a set of graphical regions that for most client devices 110 serve as a baseline user interface 300. The user interface 300 includes a transcription text area 312 in which the application 111 of the client device 110 presents a textual version of a user command request, assuming that the user command request was initially given in the form of speech. The user interface 300 also includes a written response area 314 in which the application 111 of the client device 110 displays a textual version of the response, such as the value of the “WrittenResponse” field discussed above. The user interface also includes a graphical rendering area 316 in which the application 111 of the client device 110 displays a richer graphical rendering of a result of executing the command, such as a portion of the application data that was implicated by the command. For example, in the case of the above example command “Cancel my meeting with Chris tomorrow at 2 pm”, the application 111 might display a graphical view of a portion of the user's calendar including the time representing “tomorrow at 2 pm”, showing that the meeting was canceled, as well as any other calendar appointments around that time that are still scheduled. - The
example user interface 300 may include different user interface portions on different client devices 110. For example, a client device 110 on which the command request is entered as text may omit the transcription area 312. As another example, less full-featured devices, such as a smart appliance, might not include graphical rendering capabilities and thus might omit the graphical rendering area 316. - Returning again to
FIG. 2, the response-processing server 100 includes a response module 104 that processes user command requests and generates the appropriate response package. The response module 104 includes a grammar application module 205 that applies the semantic grammar that is appropriate for the given command request. The response module 104 also includes a response package builder module 206, which generates the appropriate response package for the given command request. (In one embodiment, the grammar application module 205 is implemented by parsing logic 402, described below in more detail with respect to FIG. 4. In one embodiment, the package builder module 206 is implemented by action code specified by the applied semantic grammar, as described in more detail below with respect to the actions 408B of FIG. 4.) The response module 104 can then provide the response package to the client device 110. - Given the response package, a
response selector 212 of the application code 111 executing on the client device 110 then selects the response that is appropriate given the circumstances. (In cases where the semantic grammars and schemas are authored by a separate developer, other than the developer of the original application 111, the separate developer also authors the response selector 212, and in these cases the response selector may be considered external to the application 111 itself.) The relevant circumstances include the capabilities of the client device 110 itself (e.g., whether the application code is configured to handle the particular type of command, as indicated by the “CommandKind” field of the response package) and the state of the application data (e.g., whether data for a specified appointment does in fact exist, as in the above example). The application code 111 may perform actions such as speaking the response text of the selected response (e.g., the value of the “SpokenResponse” field described above), displaying the response text of the selected response (e.g., the value of the “WrittenResponse” field) in the written response area 314 of FIG. 3, and/or executing the command in the application 111. The execution of the command may use values specified within the response, e.g., values within the value of the “Query” field of Listing 1. The execution of the command may involve only local actions within the client device 110, or—if the application 111 is implemented in part using a remote application server 130—may involve sending a request to the application server 130 in the format required by the application server, e.g., to delete a calendar item as in the above example. -
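One way a response selector 212 might traverse a package like that of Listing 1 can be sketched in Python. The outcome labels and the SUPPORTED_COMMANDS set are hypothetical illustrations; the patent does not prescribe an implementation language or these helper names.

```python
SUPPORTED_COMMANDS = {"CalendarCommand"}  # what this hypothetical client handles

def select_response(package: dict, outcome: str) -> str:
    """Pick the written response appropriate to the client's circumstances.

    outcome is one of "succeeded", "failed", "no_preference", or "no_match",
    determined by the client after attempting the command locally.
    """
    if package["CommandKind"] not in SUPPORTED_COMMANDS:
        return package["WrittenResponse"]  # default: command not supported
    key = {
        "succeeded": "ClientActionSucceededResult",
        "failed": "ClientActionFailedResult",
        "no_preference": "CalendarPreferenceIsNotSetResult",
        "no_match": "NoMatchResult",
    }[outcome]
    return package[key]["WrittenResponse"]

# Abbreviated version of the Listing 1 package:
package = {
    "CommandKind": "CalendarCommand",
    "WrittenResponse": "Canceling a calendar item is not supported by this client.",
    "ClientActionSucceededResult": {"WrittenResponse": "I've canceled that item on your calendar."},
    "ClientActionFailedResult": {"WrittenResponse": "That calendar item cancelation could not be made."},
    "CalendarPreferenceIsNotSetResult": {"WrittenResponse": "Calendar items cannot currently be canceled because no preferred calendar has been chosen."},
    "NoMatchResult": {"WrittenResponse": "No matching calendar item was found."},
}
```

Note that the default top-level response is reached without consulting the outcome at all: a client that does not support the command type never attempts execution.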
FIG. 4 illustrates in more detail the components of the response-processing server 100 of FIGS. 1 and 2, according to one embodiment. As can be appreciated by one of skill in the art, the configuration of the logic and data of the response-processing server 100 results in a particular, non-generic computer system that is specifically structured to perform the functions described herein. - The response-
processing server 100 stores domain knowledge data 102, which includes a set of semantic grammars 406. Each semantic grammar 406 implicitly defines a set of natural language expressions (hereinafter “expressions”) that it is able to parse and interpret, such as commands requesting the creation or modification of calendar appointments, commands that set alarms, and the like. The semantic grammars 406 are typically submitted by the various developers 201 of the applications 111, although it is possible for the semantic grammars to be submitted by other parties as well, such as the entity responsible for the response-processing server 100. - Each
semantic grammar 406 has associated metadata 407, such as a name, a description, an author, and/or a date of submission. It also has one or more rules 408 defined in a programming language. Each semantic grammar rule 408 defines: a pattern 408A for the natural language expressions that the semantic grammar 406 can parse (for example, a natural language expression that specifies a date or a range of dates); optionally empty programming actions 408B that may include semantic constraints, and that specify the construction of an interpretation or meaning (e.g., by assigning values to output variables based on a natural language request that the semantic grammar interprets); and output variables 408C that represent the meaning of the natural language request, as discussed below. - A
semantic grammar 406 is said to be able to interpret a natural language request if the request can be successfully parsed according to the pattern 408A of one of the rules 408 of the semantic grammar 406, and the semantic constraints (if any) defined by the corresponding actions 408B are all satisfied; the meaning representation of the expression is then defined by the output variables 408C of the rule 408. - If a natural language expression (or entire request) can be successfully parsed and interpreted by a given
semantic grammar 406, the interpretation yields an associated meaning defined by values of the output variables 408C, and (in embodiments that have a weighting module 404, as described below) an associated strength measure. If a natural language expression cannot be successfully interpreted by the semantic grammar, in one embodiment the interpretation returns a NULL value or other value signifying that the natural language expression could not be interpreted by the semantic grammar; in another embodiment, the failure of a semantic grammar to interpret an expression does not result in any return value, but instead in a failure to complete parsing and interpretation. - An
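The NULL-return convention described above can be sketched with a toy grammar. The Interpretation class, the grammar, and its semantic constraint are illustrative assumptions, not the patent's actual grammar machinery.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interpretation:
    meaning: dict          # values of the output variables (408C analogue)
    strength: float = 1.0  # optional match strength measure

def alarm_grammar(expr: str) -> Optional[Interpretation]:
    """Toy grammar for "set alarm for <hour>"; returns None (the NULL-value
    convention) when the expression cannot be parsed according to the
    pattern, or when a semantic constraint (hour in 0-23) is violated."""
    words = expr.lower().split()
    if len(words) == 4 and words[:3] == ["set", "alarm", "for"] and words[3].isdigit():
        hour = int(words[3])
        if 0 <= hour <= 23:  # semantic constraint
            return Interpretation(meaning={"hour": hour})
    return None
```

A successful interpretation yields a meaning (and a strength), while both a pattern mismatch and a constraint violation collapse to the same "could not interpret" signal.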
expression pattern 408A corresponds to one of the ways of describing the right-hand side of a context-free grammar rule, such as Backus-Naur Form (BNF) or Extended Backus-Naur Form (EBNF), or a regular expression, or a finite state automaton (FSA), or the like. The expression pattern 408A may be specified in terms of other semantic grammars, thereby allowing for hierarchical relationships between semantic grammars, and possibly the parsing of recursive expressions. The programming actions 408B may be used to assign values to the output variables 408C based on the natural language request that the semantic grammar 406 interprets. - In one embodiment, the programming language is a general-purpose procedural programming language that is extended with additional programming constructs for specifying natural language interpretation, such as the
patterns 408A of the expressions to be matched. In one embodiment, the semantic grammars 406 are defined by an extended procedural programming language based on C++ in which “interpret” blocks define the expression pattern 408A to be matched, the semantic grammar parameter list defines the output variables 408C, and standard C++ statements define the programming actions 408B. It is appreciated that languages other than C++ could equally be used in other embodiments. - In one embodiment a
semantic grammar 406 is implemented by a ‘code block’ with the general form “block outputVariablesList blockName (pattern programmingAction)+”, where “outputVariablesList” is the output variables 408C, “blockName” is the given name of the block that implements the semantic grammar 406, and each pair of a “pattern” expression 408A and a “programmingAction” 408B represents a particular action to be taken in response to the existence of a particular corresponding pattern. The expression (pattern programmingAction)+ indicates that there are one or more pairings of a pattern with a programming action. One simple, concrete example of the use of this extended procedural programming language follows in Listing 2, below: -
Listing 2

    extern block (int totalPoints) TENNIS_POINTS() {
      interpret {
        (["what is" | 10 "tell me"] . ["the"] .
         "number of points played when announcer says" .
         n1=VALID_TENNIS_NUMBER() . ["to"] . n2=VALID_TENNIS_NUMBER())
      } as {
        totalPoints = (Points(n1->count) + Points(n2->count));
      }
    }

    static block (int count) VALID_TENNIS_NUMBER() {
      interpret {
        n_love=LOVE() | n=INTEGER_COUNT()
      } as {
        if (n_love) {count = 0;}
        else {
          if ( (n->count != 15) && (n->count != 30) && (n->count != 40) ) {
            excludethis();
          }
          else {count = n->count;}
        }
      }
    }

    static block () LOVE() {
      interpret {
        ( "love" | "zero" )
      } as { }
    }

    int Points (int score) {
      switch (score) {
        case 0:  return 0; break;
        case 15: return 1; break;
        case 30: return 2; break;
        case 40: return 3; break;
        default: return -1;
      }
    }

- In the above example, the
semantic grammar 406 named TENNIS_POINTS() is able to interpret an expression embodying a request to determine how many points have been played in a game based on the game score in a tennis match, with the meaning of the expression represented by output variables 408C in the parameter list (namely, the integer “totalPoints”). The expression pattern 408A
- (["what is" | 10 "tell me"] . ["the"] . "number of points played when announcer says" . n1=VALID_TENNIS_NUMBER() . ["to"] . n2=VALID_TENNIS_NUMBER())
defines a set of possible expressions that a user may input (either in text or speech) to request the current number of points so far in a tennis game. The expressions optionally begin with either the phrase “what is” or “tell me”, optionally followed by the word “the”, followed by the phrase “number of points played when announcer says”, followed by two numbers that are valid in tennis and that are optionally separated by the word “to”. Note that by constructing a more flexible expression pattern 408A for expressions in the domain of interest—one that includes a greater range of alternative forms of expression—a developer can create a “broader spectrum” request interpretation system for users.
- The numbers that are valid in tennis are in turn represented by their own
semantic grammar 406, named “VALID_TENNIS_NUMBER”, which accepts the scores 15, 30, 40, or “love”. Thus, the expression pattern 408A of this example semantic grammar can interpret the expression “what is the number of points played when announcer says fifteen love” (in which case its meaning is represented by the value 1 for the totalPoints output variable), but not “what is the number of points played when announcer says fifteen eighty”, because the former expression can be successfully parsed according to the expression pattern 408A, but the latter expression cannot (because “eighty” is not a valid tennis number). - Note that the semantic grammar INTEGER_COUNT() is not defined in Listing 2, but (for example) is provided by a separate developer and submitted for inclusion in the
domain knowledge data 102 for use by others, such as the author of the TENNIS_POINTS() semantic grammar of Listing 2. - Note also that the
programming actions 408B may accomplish different types of purposes. For example, the programming actions 408B of the block TENNIS_POINTS() (i.e., the code “totalPoints = (Points(n1->count) + Points(n2->count));”) simply converts the given two scores to point values and sums them, whereas the programming actions 408B of the VALID_TENNIS_NUMBER() block (i.e., the code beginning with “if (n_love) {count = 0;}”) specifies the semantic constraint that the score be “15”, “30”, or “40”; if this semantic constraint is not satisfied, the excludethis() function will be called to abort the parse as unsuccessful. - The above example of Listing 2 is a simple example explaining the different aspects of a
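The arithmetic and the semantic constraint of Listing 2 can be mirrored in ordinary Python to check the behavior; this is a plain re-implementation for illustration, not the patented grammar language.

```python
# Announcer score -> number of points played (the Points() function analogue).
POINTS = {0: 0, 15: 1, 30: 2, 40: 3}

def valid_tennis_number(word: str):
    """Analogue of VALID_TENNIS_NUMBER(): returns the numeric score, or
    None (the excludethis() analogue) when the semantic constraint fails."""
    if word in ("love", "zero"):
        return 0
    try:
        n = int(word)
    except ValueError:
        return None
    return n if n in (15, 30, 40) else None

def total_points(score1: str, score2: str):
    """Analogue of TENNIS_POINTS(): sum of points played for both scores,
    or None when either score is excluded as invalid."""
    n1, n2 = valid_tennis_number(score1), valid_tennis_number(score2)
    if n1 is None or n2 is None:
        return None  # the parse is excluded, as with excludethis()
    return POINTS[n1] + POINTS[n2]
```

As in the text, “fifteen love” yields 1 point played, while “fifteen eighty” is rejected because eighty is not a valid tennis number.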
semantic grammar 406 in the context of a particular example problem. Note that another use of the actions 408B of the semantic grammars 406 is the generation of response packages. For example, as one or more semantic grammars 406 successfully parse a given user command, their corresponding actions 408B can generate the response packages to be provided to the client devices 110. - In the same way,
semantic grammars 406 could also be created for other purposes, such as specifying commands like the above example cancellation of a calendar appointment. For example, a developer of the application 111 that includes a command for canceling a calendar appointment could author a top-level semantic grammar 406 for handling user requests for the command. One or more semantic grammars 406 and their rules 408 would specify the different manners in which a user could indicate a request for the command, and the command specifics, using natural language. For example, different patterns 408A could accept different forms of a request, such as “Cancel my meeting with Chris tomorrow at 2 pm”, “Delete appointment tomorrow at 1400”, “Remove meeting with Chris on June 21st”, and the like. Similarly, the actions 408B of the semantic grammar 406 could result in the generation of the particular response package that has a format corresponding to the command, as specified by the schema for that command (e.g., one response for client devices 110 that don't support the feature, one response for successful cancellation of the appointment, one response for non-existence of the referenced appointment, etc.) and that has the values specified by the command request (e.g., “June 21st” as the appointment day, “2 pm” as the appointment time, etc.). - The response-
processing server 100 comprises parsing logic 402 that takes a natural language request as input and determines whether a given semantic grammar 406 can interpret that request and, if so, (optionally) a match strength measure between the given semantic grammar and the expression. A semantic grammar 406 is considered to be able to interpret a natural language request if the request can be parsed according to the semantic grammar's pattern 408A and satisfies the semantic grammar's semantic constraints (if any). - The parsing
logic 402 has a meaning parser module 403 that accepts textual input for a natural language request and determines whether a given semantic grammar 406 can interpret that textual input. - In one embodiment, the parsing
logic 402 also has a weighting module 404 that assigns values to successful interpretations of an expression as a way to rank the relative strength of the successful interpretations. Specifically, the weighting module 404 computes a match strength measure corresponding to a measure of strength of a parse obtained by a semantic grammar 406 parsing the given request, such as the estimated likelihood that the parse produces the request. The match strength measure is determined as a function of weights assigned to sub-expressions within the given expression that ultimately matches the input. In one embodiment, the weights of sub-expressions represent likelihoods of the sub-expressions, and the weight of the matching expression is calculated as the product of the weights of the sub-expressions. For instance, referring to the above code example of Listing 2, the expression pattern 408A for the TENNIS_POINTS() semantic grammar is composed of sub-expressions such as “what is”, “tell me”, “the”, “number of points played when announcer says”, VALID_TENNIS_NUMBER() (a separate semantic grammar), and “to”. In the disjunction for the optional sub-expression [“what is” | 10 “tell me”], the value 10 serves as a weight corresponding to the sub-expression “tell me”. This causes the weighting module 404 to assign a likelihood value of 1/(1+10)=1/11 to the alternative “what is”, and 10/(1+10)=10/11 to the alternative “tell me”, reflecting an expectation that a user will be 10 times as likely to say “tell me” as the user would be to say “what is” when expressing a request for a tennis score. Accordingly, expressions that are interpreted by the TENNIS_POINTS() semantic grammar 406 and that begin with “tell me” will obtain higher match strength measures (0.9090) than those that match but begin with “what is” (0.0909). - In one embodiment, the parsing
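The normalization and product rule described above can be checked numerically with a short sketch; the function names are hypothetical, and the weighting module's actual implementation is not disclosed at this level of detail.

```python
def alternative_likelihoods(weights):
    """Normalize the raw weights of a disjunction's alternatives into
    likelihoods. For ["what is" | 10 "tell me"], the implicit weight of
    "what is" is 1 and the explicit weight of "tell me" is 10."""
    total = sum(weights)
    return [w / total for w in weights]

def match_strength(likelihoods):
    """Match strength of a full parse: the product of the likelihoods of
    the sub-expressions that matched the input."""
    strength = 1.0
    for p in likelihoods:
        strength *= p
    return strength

likes = alternative_likelihoods([1, 10])  # [1/11, 10/11], i.e. ~0.0909 and ~0.9090
```

A parse that matched “tell me” (likelihood 10/11) thus outscores an otherwise identical parse that matched “what is” (likelihood 1/11) by a factor of ten.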
logic 402 implements the grammar application module 205 of FIG. 2. That is, when a particular command request is received by the response-processing server 100 from a client device 110, the parsing logic 402 tests and weights some or all of the stored semantic grammars 406 against the received command request, selecting the semantic grammar producing the highest score as the semantic grammar to use when handling the command request. - In one embodiment, the response-
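The grammar application step can be sketched as a straightforward maximum over the stored grammars. The helper names and the toy grammars below are illustrative assumptions, not the patented parsing logic.

```python
def best_grammar(grammars, request):
    """Test every stored semantic grammar against the request and keep the
    successful interpretation with the highest match strength. Each grammar
    returns a (meaning, strength) pair, or None if it cannot interpret."""
    best = None
    for name, grammar in grammars.items():
        result = grammar(request)
        if result is None:
            continue
        meaning, strength = result
        if best is None or strength > best[2]:
            best = (name, meaning, strength)
    return best  # None when no grammar can interpret the request

# Two toy grammars standing in for the stored semantic grammars 406:
grammars = {
    "alarm": lambda r: ({"cmd": "set_alarm"}, 0.3) if "alarm" in r else None,
    "calendar": lambda r: ({"cmd": "cancel_item"}, 0.8) if "meeting" in r else None,
}
```

For the request “cancel my meeting tomorrow”, only the calendar grammar succeeds, so its interpretation is selected to handle the command.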
processing server 100 includes a schema repository 420 storing the package schemas 420 that correspond to the different commands/semantic grammars 406. As noted above, the schemas 420 may be provided by the developers 201 of FIG. 2, for example. The package schemas describe the structure that a response package for a particular command must have. In one embodiment, a schema compiler 421 compiles the schemas 420 to produce corresponding response package interface code 422. The response package interface code 422 includes procedural code for creating data structures embodying the corresponding response package, for getting and setting fields of those data structures, and for serializing and de-serializing those data structures when transmitting the data structures over the network 140 between the server 100 and the client devices 110. -
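The kind of interface code a schema compiler 421 might emit can be sketched with a small data class; the class and field names are hypothetical, and the actual generated code is not disclosed.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CancelItemPackage:
    """Hypothetical generated interface code for one package schema: typed
    fields (get/set via attribute access) plus serialize/deserialize helpers
    for transmitting the package between server and client."""
    CommandKind: str
    WrittenResponse: str
    SpokenResponse: str

    def serialize(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def deserialize(cls, data: str) -> "CancelItemPackage":
        return cls(**json.loads(data))
```

Generating such code from a schema, rather than writing it by hand, keeps the server's package-building code and the client's package-reading code structurally consistent.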
FIG. 5 is a sequence diagram illustrating interactions between the different entities of FIGS. 1 and 2, according to one embodiment. - An
application developer 201 provides 410 domain knowledge data to the response-processing server 100, for each command of the application for which the application developer 201 wishes to use the response-processing server 100 for command interpretation. The domain knowledge includes, for each command, a semantic grammar 406 and (optionally) a package schema 420. The domain knowledge may also be provided by a developer separate from the developer that authored the application 111 itself. In such cases, the separate developer also provides code for the response selector 212 of FIG. 2. - A user submits 430 a command request on the user's client device 110 (e.g., as spoken or typed natural language), and the
client device 110 delegates 435 the request to the response-processing server 100. The response-processing server 100 identifies 440 a semantic grammar for the command request. The parsing logic 402 of FIG. 4, which parses and weights the command request using each of the semantic grammars 406 in the domain knowledge data and selects the semantic grammar that produces the highest score, is one example of a means of performing this function. With a semantic grammar 406 for the command request identified, the response-processing server executes 445 the action code 408B of the semantic grammar, and the action code generates a response package for the command. The response package builder 206 of FIG. 2 (e.g., as implemented by the executing action code 408B) is one example of a means of performing the function of generating the response package. In one embodiment, the action code 408B generates the response package by identifying and applying a package schema 420 that corresponds to the identified semantic grammar, e.g., by calling the response package interface code 422 previously generated by the schema compiler 421. - With the response package generated, the response-
processing server 100 sends 450 the response package for delivery to the client device 110. - When the
client device 110 receives the response package, the response selector 212 for the application 111—which is one example of a means of performing the function of selecting an appropriate response and executing the command—determines 460 whether it can execute the command. The response selector 212 may determine that the command is not executable by reading a field from the response package, e.g., the value of the “CommandKind” field, such as “CalendarCommand”, and determining that it is not one of the commands that the client device 110 and/or the application 111 can support. For the example of Listing 1, the particular application 111 implemented on the client device 110 might not support the “CalendarCommand” command type, or its specific sub-type “CancelItem” indicated by the “CalendarCommandKind” field. The response selector 212 may also determine that the command is not executable based on the state of the application data, e.g., that the appointment for which cancellation is requested doesn't exist, or the state of the application itself, as indicated by the required input states, if any, specified by the various possible responses of the response package. - If the command can be executed, then the
response selector 212 executes 465 the command, to the extent that the command can be successfully executed. The response selector 212 identifies which of the potential responses of the response package is appropriate and provides 470 an appropriate indication of the outcome of the command using the identified response, such as displaying textually a value from the identified response in the response package (e.g., in the case of appointment cancellation success, the sub-value “I've canceled that item on your calendar” of the “ClientActionSucceededResult” value from Listing 1), speaking a value of the identified response from the response package using text-to-speech technology, or the like. The response selector 212 may choose the indication of the outcome of the command according to a view type attribute of the response package, determining which is the “best” of the indicated view types that the client device 110 can successfully provide. The response selector 212 may also set a state of the application 111 based on the output state (if any) specified by the selected response from the response package. -
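Because the “ViewType” list is ordered by preference, choosing the “best” view type the client can provide reduces to taking the first supported entry. This is a minimal sketch, with a hypothetical capability set standing in for the client device's actual abilities.

```python
def choose_view_type(view_types, device_capabilities):
    """Return the 'best' view type: the first entry of the package's ordered
    ViewType list that the client device can actually provide."""
    for view_type in view_types:
        if view_type in device_capabilities:
            return view_type
    return None  # no listed view type is supported

# A smart appliance without a native GUI falls back from "Native" to "None"
# (i.e., plain display of the WrittenResponse in a default text area).
```

For the ["Native", "None"] list of Listing 1, a full-featured device renders natively, while a text-only device falls back to displaying the written response.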
FIG. 6 is a high-level block diagram illustrating physical components of a computer 600 used as part or all of the response-processing server 100, client device 110, or developer host 120 from FIG. 1, according to one embodiment. Illustrated are at least one processor 602 coupled to a chipset 604. Also coupled to the chipset 604 are a memory 606, a storage device 608, a keyboard 610, a graphics adapter 612, a pointing device 614, and a network adapter 616. A display 618 is coupled to the graphics adapter 612. In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622. In another embodiment, the memory 606 is coupled directly to the processor 602 instead of the chipset 604. - The
storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 606 holds instructions and data used by the processor 602. The pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer 600. The graphics adapter 612 displays images and other information on the display 618. The network adapter 616 couples the computer 600 to a local or wide area network. - As is known in the art, a
computer 600 can have different and/or other components than those shown in FIG. 6. In addition, the computer 600 can lack certain illustrated components. In one embodiment, a computer 600 acting as a server may lack a keyboard 610, pointing device 614, graphics adapter 612, and/or display 618. Moreover, the storage device 608 can be local and/or remote from the computer 600 (such as embodied within a storage area network (SAN)). - As is known in the art, the
computer 600 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device 608, loaded into the memory 606, and executed by the processor 602. - Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- It should be noted that the process steps and instructions are embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
- The operations herein may also be performed by an apparatus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.
- While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention.
- Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims below.
-
Appendix 1. Response package for command request “Cancel my meeting with Chris tomorrow at 2 pm”

    {
      "CommandKind": "CalendarCommand",
      "SpokenResponse": "Canceling a calendar item is not supported by this client.",
      "SpokenResponseLong": "Canceling a calendar item is not supported by this client.",
      "WrittenResponse": "Canceling a calendar item is not supported by this client.",
      "WrittenResponseLong": "Canceling a calendar item is not supported by this client.",
      "AutoListen": false,
      "ViewType": [ "Native" ],
      "ClientActionSucceededResult": {
        "SpokenResponse": "I've canceled that item on your calendar.",
        "SpokenResponseLong": "I've canceled that item on your calendar.",
        "WrittenResponse": "I've canceled that item on your calendar.",
        "WrittenResponseLong": "I've canceled that item on your calendar."
      },
      "ClientActionFailedResult": {
        "SpokenResponse": "That calendar item cancelation could not be made.",
        "SpokenResponseLong": "That calendar item cancelation could not be made.",
        "WrittenResponse": "That calendar item cancelation could not be made.",
        "WrittenResponseLong": "That calendar item cancelation could not be made."
      },
      "CalendarCommandKind": "CancelItem",
      "Query": {
        "SelectionTarget": "First",
        "RecurringTarget": "SpecifiedOnly",
        "ItemType": "Meeting",
        "StartTimeRange": {
          "RangeStart": {
            "Date": { "Symbolic": "unknown" },
            "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
          },
          "RangeEnd": {
            "Date": { "Symbolic": "unknown" },
            "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
          }
        }
      },
      "NativeData": {
        "Query": {
          "SelectionTarget": "First",
          "RecurringTarget": "SpecifiedOnly",
          "ItemType": "Meeting",
          "StartTimeRange": {
            "RangeStart": {
              "Date": { "Symbolic": "unknown" },
              "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
            },
            "RangeEnd": {
              "Date": { "Symbolic": "unknown" },
              "Time": { "Hour": 14, "AmPmUnknown": false, "Minute": 0, "Second": 0 }
            }
          }
        }
      },
      "CalendarPreferenceIsNotSetResult": {
        "SpokenResponse": "Calendar items cannot currently be canceled because no preferred calendar has been chosen.",
        "SpokenResponseLong": "Calendar items cannot currently be canceled because no preferred calendar has been chosen.",
        "WrittenResponse": "Calendar items cannot currently be canceled because no preferred calendar has been chosen.",
        "WrittenResponseLong": "Calendar items cannot currently be canceled because no preferred calendar has been chosen."
      },
      "NoMatchResult": {
        "SpokenResponse": "No matching calendar item was found.",
        "SpokenResponseLong": "No matching calendar item was found.",
        "WrittenResponse": "No matching calendar item was found.",
        "WrittenResponseLong": "No matching calendar item was found."
      }
    }

-
Appendix 2. Response package for command request "dial 123 456 7890"

{
  "CommandKind": "PhoneCommand",
  "SpokenResponse": "Placing phone calls is not supported by this client.",
  "SpokenResponseLong": "Placing phone calls is not supported by this client.",
  "WrittenResponse": "Placing phone calls is not supported by this client.",
  "WrittenResponseLong": "Placing phone calls is not supported by this client.",
  "AutoListen": false,
  "ViewType": ["Native", "None"],
  "ClientActionSucceededResult": {
    "SpokenResponse": "Calling one two three four five six seven eight nine zero",
    "SpokenResponseLong": "Calling one two three four five six seven eight nine zero",
    "WrittenResponse": "Calling 1234567890.",
    "WrittenResponseLong": "Calling 1234567890."
  },
  "ClientActionFailedResult": {
    "SpokenResponse": "Failed trying to call one two three four five six seven eight nine zero",
    "SpokenResponseLong": "Failed trying to call one two three four five six seven eight nine zero",
    "WrittenResponse": "Failed trying to call 1234567890.",
    "WrittenResponseLong": "Failed trying to call 1234567890."
  },
  "PhoneCommandKind": "CallNumber",
  "Number": "1234567890",
  "NativeData": { "Number": "1234567890" }
}
Appendix 3. Response package for command request "Set an alarm for 6am"

{
  "CommandKind": "AlarmCommand",
  "SpokenResponse": "This client does not support setting an alarm.",
  "SpokenResponseLong": "This client does not support setting an alarm.",
  "WrittenResponse": "This client does not support setting an alarm.",
  "WrittenResponseLong": "This client does not support setting an alarm.",
  "AutoListen": false,
  "ConversationState": {
    "ConversationStateTime": 1431126349,
    "CommandKind": "AlarmCommand",
    "AlarmCommandKind": "AlarmSetCommand"
  },
  "ViewType": ["Native", "None"],
  "ClientActionSucceededResult": {
    "SpokenResponse": "Setting the alarm for 6 a m.",
    "SpokenResponseLong": "Setting the alarm for 6 a m.",
    "WrittenResponse": "Setting the alarm for 6:00 am.",
    "WrittenResponseLong": "Setting the alarm for 6:00 am.",
    "ConversationState": {
      "ConversationStateTime": 1431126349,
      "CommandKind": "AlarmCommand",
      "AlarmCommandKind": "AlarmSetCommand"
    }
  },
  "ClientActionFailedResult": {
    "SpokenResponse": "The alarm could not be created.",
    "SpokenResponseLong": "The alarm could not be created.",
    "WrittenResponse": "The alarm could not be created.",
    "WrittenResponseLong": "The alarm could not be created."
  },
  "AlarmCommandKind": "AlarmSetCommand",
  "NativeData": {
    "Alarms": [
      { "Hour": 6, "Minute": 0, "DaysOfWeek": [], "InvalidDates": [] }
    ]
  }
}
Appendix 4. Response package for command request "Set an alarm"

{
  "Format": "SoundHoundVoiceSearchResult",
  "FormatVersion": "1.0",
  "Status": "OK",
  "NumToReturn": 1,
  "AllResults": [
    {
      "CommandKind": "AlarmCommand",
      "SpokenResponse": "This client does not support setting an alarm.",
      "SpokenResponseLong": "This client does not support setting an alarm.",
      "WrittenResponse": "This client does not support setting an alarm.",
      "WrittenResponseLong": "This client does not support setting an alarm.",
      "AutoListen": false,
      "ConversationState": {
        "ConversationStateTime": 1431127801,
        "CommandKind": "AlarmCommand",
        "AlarmCommandKind": "AlarmStartCommand",
        "Mode": "Default"
      },
      "ViewType": ["Native", "None"],
      "RequiredFeatures": ["AlarmSet"],
      "RequiredFeaturesSupportedResult": {
        "SpokenResponse": "What time should the alarm go off?",
        "SpokenResponseLong": "What time should the alarm go off?",
        "WrittenResponse": "What time should the alarm go off?",
        "WrittenResponseLong": "What time should the alarm go off?",
        "AutoListen": true,
        "ConversationState": {
          "ConversationStateTime": 1431127801,
          "Mode": "Alarm",
          "CommandKind": "AlarmCommand",
          "AlarmCommandKind": "AlarmStartCommand"
        }
      },
      "AlarmCommandKind": "AlarmStartCommand",
      "NativeData": {}
    }
  ],
  "Disambiguation": {
    "NumToShow": 1,
    "ChoiceData": [
      { "Transcription": "set an alarm", "ConfidenceScore": 1 }
    ]
  },
  "ResultsAreFinal": [true],
  "BuildInfo": {
    "User": "knightly",
    "Date": "Fri May 8 13:10:36 PDT 2015",
    "Machine": "s10766724502414.pnp.melodis.com",
    "SVNRevision": "23020",
    "SVNBranch": "dev_multi_machine",
    "BuildNumber": "1438",
    "Kind": "Low Fat",
    "Variant": "release"
  },
  "ServerGeneratedId": "925dae85-f746-4821-bf37-c1ec3a3d3fd4"
}
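The packages above each carry several alternative responses, and the client picks one at render time: the top-level responses when the command is unsupported, RequiredFeaturesSupportedResult when all listed RequiredFeatures are available, and ClientActionSucceededResult or ClientActionFailedResult after attempting the native action. The following is a minimal illustrative sketch of that selection logic; the function name, the CLIENT_FEATURES set, and the trimmed package are assumptions for demonstration, not part of the disclosure.

```python
# Sketch of client-side selection among the conditional responses in a
# response package shaped like Appendices 1-4. Illustrative only.

CLIENT_FEATURES = {"AlarmSet"}  # capabilities this hypothetical client supports


def select_response(package, action_succeeded=None):
    """Return the response block the client should render.

    action_succeeded is None before the client attempts the native
    action, True/False afterward.
    """
    required = set(package.get("RequiredFeatures", []))
    if required and not required <= CLIENT_FEATURES:
        # Missing a required feature: fall back to the package's
        # top-level "not supported" responses.
        return package
    if action_succeeded is None and "RequiredFeaturesSupportedResult" in package:
        # Features are available but no action taken yet, e.g. a
        # follow-up question such as "What time should the alarm go off?"
        return package["RequiredFeaturesSupportedResult"]
    if action_succeeded is True and "ClientActionSucceededResult" in package:
        return package["ClientActionSucceededResult"]
    if action_succeeded is False and "ClientActionFailedResult" in package:
        return package["ClientActionFailedResult"]
    return package


# Example against a trimmed version of Appendix 4's alarm package:
alarm_package = {
    "CommandKind": "AlarmCommand",
    "SpokenResponse": "This client does not support setting an alarm.",
    "RequiredFeatures": ["AlarmSet"],
    "RequiredFeaturesSupportedResult": {
        "SpokenResponse": "What time should the alarm go off?",
        "AutoListen": True,
    },
}
chosen = select_response(alarm_package)
print(chosen["SpokenResponse"])  # prints "What time should the alarm go off?"
```

Because every branch's wording ships in the package, the client never synthesizes text itself; it only tests conditions it alone can evaluate (its feature set, the action's outcome) and picks the matching pre-built response.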
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/569,433 US20220129639A1 (en) | 2015-07-14 | 2022-01-05 | Conditional responses to application commands in a client-server system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201514799382A | 2015-07-14 | 2015-07-14 | |
US201916550174A | 2019-08-24 | 2019-08-24 | |
US16/791,421 US11250217B1 (en) | 2015-07-14 | 2020-02-14 | Conditional responses to application commands in a client-server system |
US17/569,433 US20220129639A1 (en) | 2015-07-14 | 2022-01-05 | Conditional responses to application commands in a client-server system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/791,421 Continuation US11250217B1 (en) | 2015-07-14 | 2020-02-14 | Conditional responses to application commands in a client-server system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220129639A1 true US20220129639A1 (en) | 2022-04-28 |
Family
ID=80249597
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/791,421 Active 2035-09-09 US11250217B1 (en) | 2015-07-14 | 2020-02-14 | Conditional responses to application commands in a client-server system |
US17/569,433 Abandoned US20220129639A1 (en) | 2015-07-14 | 2022-01-05 | Conditional responses to application commands in a client-server system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/791,421 Active 2035-09-09 US11250217B1 (en) | 2015-07-14 | 2020-02-14 | Conditional responses to application commands in a client-server system |
Country Status (1)
Country | Link |
---|---|
US (2) | US11250217B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11797497B2 (en) * | 2022-02-25 | 2023-10-24 | Snowflake Inc. | Bundle creation and distribution |
Citations (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020032900A1 (en) * | 1999-10-05 | 2002-03-14 | Dietrich Charisius | Methods and systems for generating source code for object oriented elements |
US20020143529A1 (en) * | 2000-07-20 | 2002-10-03 | Schmid Philipp H. | Method and apparatus utilizing speech grammar rules written in a markup language |
US20030005174A1 (en) * | 2001-06-29 | 2003-01-02 | Coffman Daniel M. | System and method for providing dialog management and arbitration in a multi-modal environment |
US6523061B1 (en) * | 1999-01-05 | 2003-02-18 | Sri International, Inc. | System, method, and article of manufacture for agent-based navigation in a speech-based data navigation system |
US20040024652A1 (en) * | 2002-07-31 | 2004-02-05 | Willms Buhse | System and method for the distribution of digital products |
US20040098292A1 (en) * | 2002-11-18 | 2004-05-20 | Miller Lynn R. | System and method for enabling supplier manufacturing integration |
US20040139119A1 (en) * | 2002-11-08 | 2004-07-15 | Matt Clark | Feature/concept based local data service request formulation for client-server data services |
US20080282172A1 (en) * | 2007-05-09 | 2008-11-13 | International Business Machines Corporation | Dynamic gui rendering by aggregation of device capabilities |
US20090300076A1 (en) * | 2008-05-30 | 2009-12-03 | Novell, Inc. | System and method for inspecting a virtual appliance runtime environment |
US7873654B2 (en) * | 2005-01-24 | 2011-01-18 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US20110307398A1 (en) * | 2010-06-15 | 2011-12-15 | Tilo Reinhardt | Managing Consistent Interfaces for Request for Information, Request for Information Response, Supplier Assessment Profile, Supplier Questionnaire Assessment, and Supplier Transaction Assessment Business Objects across Heterogeneous Systems |
US20120016678A1 (en) * | 2010-01-18 | 2012-01-19 | Apple Inc. | Intelligent Automated Assistant |
US8370146B1 (en) * | 2010-08-31 | 2013-02-05 | Google Inc. | Robust speech recognition |
US20130132084A1 (en) * | 2011-11-18 | 2013-05-23 | Soundhound, Inc. | System and method for performing dual mode speech recognition |
US20130290203A1 (en) * | 2012-04-27 | 2013-10-31 | Thomas Purves | Social Checkout Widget Generation and Integration Apparatuses, Methods and Systems |
US20130293587A1 (en) * | 2012-01-19 | 2013-11-07 | Zumobi, Inc. | System and Method for Adaptive and Persistent Media Presentations |
US20130346302A1 (en) * | 2012-06-20 | 2013-12-26 | Visa International Service Association | Remote Portal Bill Payment Platform Apparatuses, Methods and Systems |
US20140095583A1 (en) * | 2012-09-28 | 2014-04-03 | Disney Enterprises, Inc. | Client-side web site selection according to device capabilities |
US20140095538A1 (en) * | 2007-02-02 | 2014-04-03 | Facebook, Inc. | Digital File Distribution in a Social Network System |
US20140222433A1 (en) * | 2011-09-19 | 2014-08-07 | Personetics Technologies Ltd. | System and Method for Evaluating Intent of a Human Partner to a Dialogue Between Human User and Computerized System |
US20150066479A1 (en) * | 2012-04-20 | 2015-03-05 | Maluuba Inc. | Conversational agent |
US8983858B2 (en) * | 2011-12-30 | 2015-03-17 | Verizon Patent And Licensing Inc. | Lifestyle application for consumers |
US8996376B2 (en) * | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US9043278B1 (en) * | 2012-05-09 | 2015-05-26 | Bertec Corporation | System and method for the merging of databases |
US9043204B2 (en) * | 2012-09-12 | 2015-05-26 | International Business Machines Corporation | Thought recollection and speech assistance device |
US9053089B2 (en) * | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US9071855B1 (en) * | 2014-01-03 | 2015-06-30 | Google Inc. | Product availability notifications |
US9134952B2 (en) * | 2013-04-03 | 2015-09-15 | Lg Electronics Inc. | Terminal and control method thereof |
US20160078866A1 (en) * | 2014-09-14 | 2016-03-17 | Speaktoit, Inc. | Platform for creating customizable dialog system engines |
US20160179464A1 (en) * | 2014-12-22 | 2016-06-23 | Microsoft Technology Licensing, Llc | Scaling digital personal assistant agents across devices |
US9582467B2 (en) * | 2010-05-05 | 2017-02-28 | Microsoft Technology Licensing, Llc | Normalizing data for fast superscalar processing |
US9891907B2 (en) * | 2014-07-07 | 2018-02-13 | Harman Connected Services, Inc. | Device component status detection and illustration apparatuses, methods, and systems |
US9912494B2 (en) * | 2015-08-12 | 2018-03-06 | Cisco Technology, Inc. | Distributed application hosting environment to mask heterogeneity |
US9921665B2 (en) * | 2012-06-25 | 2018-03-20 | Microsoft Technology Licensing, Llc | Input method editor application platform |
US9971542B2 (en) * | 2015-06-09 | 2018-05-15 | Ultrata, Llc | Infinite memory fabric streams and APIs |
US20180144064A1 (en) * | 2016-11-21 | 2018-05-24 | Accenture Global Solutions Limited | Closed-loop natural language query pre-processor and response synthesizer architecture |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6775023B1 (en) | 1999-07-30 | 2004-08-10 | Canon Kabushiki Kaisha | Center server, information processing apparatus and method, and print system |
JP2002279141A (en) | 2001-03-19 | 2002-09-27 | Ricoh Co Ltd | Information display system, information display method, information display server and information display program |
DE60229338D1 (en) | 2001-09-07 | 2008-11-27 | Canon Kk | Image processing method and apparatus |
JP2003216310A (en) | 2001-09-28 | 2003-07-31 | Ricoh Co Ltd | Key operation monitoring method, plotting information acquiring method, key operation reproducing method, program for making computer execute the same method and image forming device |
US7230731B2 (en) | 2001-11-16 | 2007-06-12 | Ricoh Company, Ltd. | Image formation apparatus and method with password acquisition |
US7532349B2 (en) | 2002-01-25 | 2009-05-12 | Canon Kabushiki Kaisha | Image processing method, image processing apparatus, storage medium, program, and color image forming system |
US7584203B2 (en) | 2002-05-14 | 2009-09-01 | Canon Kabushiki Kaisha | Information processing system, information processing apparatus, archive information management method, storage medium which stores information-processing-apparatus-readable program that implements the method, and program |
US7466446B2 (en) | 2002-07-31 | 2008-12-16 | Canon Kabushiki Kaisha | Information processing apparatus and method |
JP4241053B2 (en) | 2003-01-14 | 2009-03-18 | 株式会社日立製作所 | Communication system and terminal device thereof |
JP4096892B2 (en) | 2004-02-19 | 2008-06-04 | セイコーエプソン株式会社 | Color matching profile creation device, color matching system, color matching method, color matching program, and electronic device |
US20050259280A1 (en) | 2004-05-05 | 2005-11-24 | Kodak Polychrome Graphics, Llc | Color management of halftone prints |
KR100686158B1 (en) | 2004-07-16 | 2007-02-26 | 엘지전자 주식회사 | Apparatus for displaying data broadcasting contents and method thereof |
US7376645B2 (en) | 2004-11-29 | 2008-05-20 | The Intellection Group, Inc. | Multimodal natural language query system and architecture for processing voice and proximity-based queries |
US8150872B2 (en) | 2005-01-24 | 2012-04-03 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US20060212906A1 (en) | 2005-03-18 | 2006-09-21 | Cantalini James C | System and method for digital media navigation and recording |
JP4604877B2 (en) | 2005-06-24 | 2011-01-05 | 富士ゼロックス株式会社 | Display image control program, image distribution apparatus, display image control apparatus, and display image control method |
JP4079169B2 (en) | 2005-10-06 | 2008-04-23 | コニカミノルタビジネステクノロジーズ株式会社 | Image processing apparatus, image processing system including the apparatus, image processing method, and program for causing computer to function as image processing apparatus |
US8112513B2 (en) | 2005-11-30 | 2012-02-07 | Microsoft Corporation | Multi-user display proxy server |
JP4756070B2 (en) | 2006-03-30 | 2011-08-24 | 富士通株式会社 | Information processing apparatus, management method, and management program |
JP4710722B2 (en) | 2006-06-05 | 2011-06-29 | 富士ゼロックス株式会社 | Image forming system and program |
KR100725186B1 (en) | 2006-07-27 | 2007-06-04 | 삼성전자주식회사 | Method and apparatus for photography during video call in mobile phone |
JP5109439B2 (en) | 2007-03-29 | 2012-12-26 | 富士ゼロックス株式会社 | Display control apparatus and program |
US20090067805A1 (en) | 2007-09-10 | 2009-03-12 | Victor Company Of Japan, Limited | Recording and reproducing device |
CN101918979B (en) | 2008-01-15 | 2012-05-23 | 佩特媒体株式会社 | Mosaic image generation device and method |
JP5574394B2 (en) | 2008-02-20 | 2014-08-20 | 日本電気株式会社 | OS image reduction program and recording medium recorded with OS image reduction program |
JP2009251229A (en) | 2008-04-04 | 2009-10-29 | Canon Inc | Color image forming apparatus, and image forming condition setting method for color image forming apparatus |
JP5304478B2 (en) | 2008-08-07 | 2013-10-02 | 株式会社リコー | Image forming apparatus, operation screen updating method and program |
JP5347590B2 (en) | 2009-03-10 | 2013-11-20 | 株式会社リコー | Image forming apparatus, data management method, and program |
EP2593857A4 (en) | 2010-07-16 | 2016-05-25 | Hewlett Packard Development Co | Adjusting the color output of a display device based on a color profile |
US8694537B2 (en) | 2010-07-29 | 2014-04-08 | Soundhound, Inc. | Systems and methods for enabling natural language processing |
KR101980173B1 (en) | 2012-03-16 | 2019-05-20 | 삼성전자주식회사 | A collaborative personal assistant system for delegating providing of services supported by third party task providers and method therefor |
2020
- 2020-02-14: US 16/791,421 filed; patent US11250217B1, status Active
2022
- 2022-01-05: US 17/569,433 filed; publication US20220129639A1, status Abandoned
Patent Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6523061B1 (en) * | 1999-01-05 | 2003-02-18 | Sri International, Inc. | System, method, and article of manufacture for agent-based navigation in a speech-based data navigation system |
US20020032900A1 (en) * | 1999-10-05 | 2002-03-14 | Dietrich Charisius | Methods and systems for generating source code for object oriented elements |
US20020143529A1 (en) * | 2000-07-20 | 2002-10-03 | Schmid Philipp H. | Method and apparatus utilizing speech grammar rules written in a markup language |
US20030005174A1 (en) * | 2001-06-29 | 2003-01-02 | Coffman Daniel M. | System and method for providing dialog management and arbitration in a multi-modal environment |
US20040024652A1 (en) * | 2002-07-31 | 2004-02-05 | Willms Buhse | System and method for the distribution of digital products |
US20040139119A1 (en) * | 2002-11-08 | 2004-07-15 | Matt Clark | Feature/concept based local data service request formulation for client-server data services |
US20040098292A1 (en) * | 2002-11-18 | 2004-05-20 | Miller Lynn R. | System and method for enabling supplier manufacturing integration |
US7873654B2 (en) * | 2005-01-24 | 2011-01-18 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US20140095538A1 (en) * | 2007-02-02 | 2014-04-03 | Facebook, Inc. | Digital File Distribution in a Social Network System |
US20080282172A1 (en) * | 2007-05-09 | 2008-11-13 | International Business Machines Corporation | Dynamic gui rendering by aggregation of device capabilities |
US9053089B2 (en) * | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8996376B2 (en) * | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US20090300076A1 (en) * | 2008-05-30 | 2009-12-03 | Novell, Inc. | System and method for inspecting a virtual appliance runtime environment |
US20120016678A1 (en) * | 2010-01-18 | 2012-01-19 | Apple Inc. | Intelligent Automated Assistant |
US9582467B2 (en) * | 2010-05-05 | 2017-02-28 | Microsoft Technology Licensing, Llc | Normalizing data for fast superscalar processing |
US20110307398A1 (en) * | 2010-06-15 | 2011-12-15 | Tilo Reinhardt | Managing Consistent Interfaces for Request for Information, Request for Information Response, Supplier Assessment Profile, Supplier Questionnaire Assessment, and Supplier Transaction Assessment Business Objects across Heterogeneous Systems |
US8370146B1 (en) * | 2010-08-31 | 2013-02-05 | Google Inc. | Robust speech recognition |
US20140222433A1 (en) * | 2011-09-19 | 2014-08-07 | Personetics Technologies Ltd. | System and Method for Evaluating Intent of a Human Partner to a Dialogue Between Human User and Computerized System |
US20130132084A1 (en) * | 2011-11-18 | 2013-05-23 | Soundhound, Inc. | System and method for performing dual mode speech recognition |
US8983858B2 (en) * | 2011-12-30 | 2015-03-17 | Verizon Patent And Licensing Inc. | Lifestyle application for consumers |
US20130293587A1 (en) * | 2012-01-19 | 2013-11-07 | Zumobi, Inc. | System and Method for Adaptive and Persistent Media Presentations |
US20150066479A1 (en) * | 2012-04-20 | 2015-03-05 | Maluuba Inc. | Conversational agent |
US9953378B2 (en) * | 2012-04-27 | 2018-04-24 | Visa International Service Association | Social checkout widget generation and integration apparatuses, methods and systems |
US20130290203A1 (en) * | 2012-04-27 | 2013-10-31 | Thomas Purves | Social Checkout Widget Generation and Integration Apparatuses, Methods and Systems |
US9043278B1 (en) * | 2012-05-09 | 2015-05-26 | Bertec Corporation | System and method for the merging of databases |
US20130346302A1 (en) * | 2012-06-20 | 2013-12-26 | Visa International Service Association | Remote Portal Bill Payment Platform Apparatuses, Methods and Systems |
US9921665B2 (en) * | 2012-06-25 | 2018-03-20 | Microsoft Technology Licensing, Llc | Input method editor application platform |
US9043204B2 (en) * | 2012-09-12 | 2015-05-26 | International Business Machines Corporation | Thought recollection and speech assistance device |
US20140095583A1 (en) * | 2012-09-28 | 2014-04-03 | Disney Enterprises, Inc. | Client-side web site selection according to device capabilities |
US9134952B2 (en) * | 2013-04-03 | 2015-09-15 | Lg Electronics Inc. | Terminal and control method thereof |
US9071855B1 (en) * | 2014-01-03 | 2015-06-30 | Google Inc. | Product availability notifications |
US9891907B2 (en) * | 2014-07-07 | 2018-02-13 | Harman Connected Services, Inc. | Device component status detection and illustration apparatuses, methods, and systems |
US20160078866A1 (en) * | 2014-09-14 | 2016-03-17 | Speaktoit, Inc. | Platform for creating customizable dialog system engines |
US20160179464A1 (en) * | 2014-12-22 | 2016-06-23 | Microsoft Technology Licensing, Llc | Scaling digital personal assistant agents across devices |
US9971542B2 (en) * | 2015-06-09 | 2018-05-15 | Ultrata, Llc | Infinite memory fabric streams and APIs |
US9912494B2 (en) * | 2015-08-12 | 2018-03-06 | Cisco Technology, Inc. | Distributed application hosting environment to mask heterogeneity |
US20180144064A1 (en) * | 2016-11-21 | 2018-05-24 | Accenture Global Solutions Limited | Closed-loop natural language query pre-processor and response synthesizer architecture |
Also Published As
Publication number | Publication date |
---|---|
US11250217B1 (en) | 2022-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10733983B2 (en) | Parameter collection and automatic dialog generation in dialog systems | |
US11232155B2 (en) | Providing command bundle suggestions for an automated assistant | |
JP6686226B2 (en) | Call the appropriate agent automation assistant | |
JP7118056B2 (en) | Personalize your virtual assistant | |
EP3513324B1 (en) | Computerized natural language query intent dispatching | |
KR20140014200A (en) | Conversational dialog learning and correction | |
JP2023530423A (en) | Entity-Level Data Augmentation in Chatbots for Robust Named Entity Recognition | |
US11676602B2 (en) | User-configured and customized interactive dialog application | |
CN110730953A (en) | Customizing interactive dialog applications based on creator-provided content | |
CN116724305A (en) | Integration of context labels with named entity recognition models | |
JP2023519713A (en) | Noise Data Augmentation for Natural Language Processing | |
CN116802629A (en) | Multi-factor modeling for natural language processing | |
US11003426B1 (en) | Identification of code for parsing given expressions | |
Mitrevski et al. | Getting started with wit.ai
CN116490879A (en) | Method and system for over-prediction in neural networks | |
US20220129639A1 (en) | Conditional responses to application commands in a client-server system | |
JP2024503519A (en) | Multiple feature balancing for natural language processors | |
CN110114754B (en) | Computing system, computer-implemented method, and storage medium for application development | |
US11941345B2 (en) | Voice instructed machine authoring of electronic documents | |
US20230367602A1 (en) | Speculative execution of dataflow program nodes | |
US20220366331A1 (en) | Persona-driven assistive suite | |
WO2023219686A1 (en) | Speculative execution of dataflow program nodes | |
Kuzmin | Kentico Voice Interface (KEVIN) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: SOUNDHOUND, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOHAJER, KEYVAN;WILSON, CHRISTOPHER S.;KHOV, KHENG;AND OTHERS;REEL/FRAME:059230/0065 Effective date: 20150807 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: ACP POST OAK CREDIT II LLC, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:SOUNDHOUND, INC.;SOUNDHOUND AI IP, LLC;REEL/FRAME:063349/0355 Effective date: 20230414 |
|
AS | Assignment |
Owner name: SOUNDHOUND AI IP HOLDING, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SOUNDHOUND, INC.;REEL/FRAME:064083/0484 Effective date: 20230510 |
|
AS | Assignment |
Owner name: SOUNDHOUND AI IP, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SOUNDHOUND AI IP HOLDING, LLC;REEL/FRAME:064205/0676 Effective date: 20230510 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |