US20150039292A1 - Method and system of classification in a natural language user interface - Google Patents
Method and system of classification in a natural language user interface
- Publication number
- US20150039292A1 (application Ser. No. 14/233,640)
- Authority
- US
- United States
- Prior art keywords
- query
- user
- computer-implemented method
- clarification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
-
- G06F17/30424—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
- G06F16/285—Clustering or classification
-
- G06F17/28—
-
- G06F17/30598—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Definitions
- the present disclosure relates to natural language processing in a speech-based user interface and more particularly to classifying speech inputs.
- User interfaces for electronic and other devices are evolving to include speech-based inputs in a natural language such as English.
- a user may voice a command to control the operation of a device such as a smartphone, appliance, robot or other device.
- Natural language processing, a type of machine learning using statistics, may be used to interpret and act upon speech inputs. Speech recognition may convert the input to text. The text may be analyzed for meaning to determine the command to be performed.
- Speech inputs in a natural language for a command may be ambiguous and require clarification. More than one speech input may be occasioned to complete a specific command. Thus, sequential speech inputs may relate to a same command or to different commands.
- Classifying a speech input in relation to a current command or a new command may be useful to processing the command.
- a method and system are provided for processing natural language user queries for commanding a user interface to perform functions.
- Individual user queries are classified in accordance with the types of functions and a plurality of user queries may be related to define a particular command.
- a query type for each user query is determined where the query type is one of a functional query requesting a particular new command to perform a particular type of function, an entity query relating to an entity associated with the particular new command having the particular type of function and a clarification query responding to a clarification question posed to clarify a prior user query having the particular type of function.
- Functional queries may be processed using a plurality of natural language processing techniques and scores from each technique combined to determine which type of function is commanded.
- a computer-implemented method of processing user queries comprising natural language for a natural language-based user interface for performing one or more functions.
- the method comprises: receiving at a computing device a plurality of user queries for defining one or more commands for controlling the user interface to perform particular types of functions; and classifying, via the computing device, individual user queries in accordance with the types of functions to relate a subset of the plurality of user queries to define a particular command for invoking a particular type of function, determining a query type for each user query, the query type selected from a group comprising a functional query, an entity query and a clarification query; wherein the functional query comprises a request for a particular new command to perform a particular type of function; the entity query relates to an entity associated with the particular new command having the particular type of function; and the clarification query is responsive to a clarification question posed to clarify a prior user query having the particular type of function.
- the computer-implemented method may further comprise further processing the user queries in response to the particular type of function to define the particular command.
- the computer-implemented method may further comprise providing the particular command to invoke the function.
- Classifying may comprise, for a user query received following a posing of a clarification question: performing keyword analysis on the user query to determine whether the user query is responsive to the clarification question; and classifying the user query as a clarification query having the particular type of function in response to the keyword analysis.
- Keyword analysis may be performed in accordance with term frequency-inverse document frequency (TF-IDF) techniques to identify keywords in the user query which are associated with the clarification question posed.
- The computer-implemented method may comprise, for a user query received following a posing of a clarification question which is unresponsive to the question posed, or for a user query received other than following a posing of a clarification question: determining whether the user query is an entity query or a functional query and, in response, performing one of: classifying the user query as an entity query having the particular type of function of the particular command to which it relates; or classifying the user query as a functional query and analyzing the user query to determine the particular type of function for the particular new command. Determining whether the user query is an entity query or a functional query may be performed using a support vector machine.
- Analyzing the user query to determine the particular type of function may comprise: performing a plurality of natural language processing techniques to determine a rank of candidate types of functions and selecting the type of function in response.
- The natural language processing techniques may include one or more of random forest processing, naïve Bayes classifier processing, a plurality of support vector machines processing, and previous query score processing.
- the rank may be derived from the plurality of natural language processing techniques via a two layer neural network responsive to an output of each of the plurality of natural language processing techniques.
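The score-combination step above can be sketched as a small two-layer network whose inputs are the per-function scores of the individual classifiers. This is an illustrative sketch only: the weights are untrained placeholders and the function names are invented for the example.

```python
# Hypothetical sketch: combine per-function scores from several classifiers
# (e.g. random forest, naive Bayes, SVM, previous-query score) with a small
# two-layer neural network, as described above. Weights are placeholders.

def relu(x):
    return max(0.0, x)

def two_layer_rank(scores_per_function, w1, b1, w2, b2):
    """scores_per_function: {function: [rf, nb, svm, prev_query]} -> ranked function list."""
    ranked = []
    for fn, feats in scores_per_function.items():
        # Layer 1: hidden activations over the four classifier scores.
        hidden = [relu(sum(wi * f for wi, f in zip(row, feats)) + bias)
                  for row, bias in zip(w1, b1)]
        # Layer 2: single output score used for ranking.
        out = sum(wi * h for wi, h in zip(w2, hidden)) + b2
        ranked.append((out, fn))
    ranked.sort(reverse=True)
    return [fn for _, fn in ranked]

# Illustrative weights: 2 hidden units over 4 classifier scores.
W1 = [[0.5, 0.3, 0.4, 0.2], [0.1, 0.6, 0.2, 0.5]]
B1 = [0.0, 0.0]
W2 = [0.7, 0.6]
B2 = 0.0

scores = {
    "email":    [0.9, 0.8, 0.7, 0.6],
    "calendar": [0.2, 0.3, 0.1, 0.2],
}
print(two_layer_rank(scores, W1, B1, W2, B2))  # email should outrank calendar
```

In a trained system the weights would be fit so that the network learns how much to trust each underlying classifier for each type of function.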
- Previous query score processing may comprise: performing statistical analysis to provide candidate types of functions for the user query, the analysis responsive to keywords of the user query and prior user queries having associated respective types of functions previously determined for each of the prior user queries.
- the computer-implemented method may comprise maintaining a data store of prior user queries and respective types of functions. The prior user queries may be responsive to individual users to provide user-centric preferences for commands.
- the computer-implemented method may comprise posing a clarification question in response to a previous user query, the clarification question associated with a type of function.
- Processing the user queries in response to the particular type of function may comprise extracting entities from the user queries for the particular command using statistical modeling methods.
- A genetic algorithm may be used to define optimized feature sets with which to extract the entities for particular types of functions.
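As a rough illustration of this idea, a genetic algorithm can evolve a binary mask over candidate extraction features. The candidate feature names and the stand-in fitness function below are assumptions of the sketch; a real system would score each mask by extraction accuracy on held-out queries.

```python
import random

random.seed(0)

# Hypothetical candidate features for entity extraction.
FEATURES = ["word", "pos_tag", "prev_word", "next_word", "is_capitalized",
            "is_digit", "clarification_context", "query_category"]

def fitness(mask):
    # Stand-in objective: pretend odd-indexed features matter most, with a
    # small penalty per selected feature. A real fitness would measure
    # extraction accuracy with the masked feature set.
    return sum(m * (1.0 if i % 2 else 0.2) for i, m in enumerate(mask)) - 0.1 * sum(mask)

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(FEATURES))     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):                  # point mutation
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [f for f, m in zip(FEATURES, best) if m]

print(evolve())
```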
- the statistical modeling methods may comprise using conditional random fields.
- the user queries may comprise voice signals and the method may further comprise converting the voice signals to text.
- a system comprising one or more processors and memory storing instructions and data for performing a method in accordance with an aspect described.
- a computer program product comprising a storage medium (e.g. a memory or other storage device) storing instructions and data for performing a method in accordance with an aspect described.
- FIG. 1 is a block diagram of a top level architecture of a communication system including a smartphone and a cloud-based service in accordance with one example embodiment.
- FIG. 2 is a block diagram that shows software architecture of the cloud-based service in accordance with one embodiment.
- FIG. 3 illustrates a block diagram of modules performing operations (methods) of the service of FIGS. 1 and 2 .
- FIG. 4 illustrates a block diagram of modules performing operations (methods) of question type classification.
- FIG. 5 illustrates a block diagram of modules performing operations (methods) of keyword identification.
- FIG. 6 illustrates a block diagram of modules performing operations (methods) of answer ranking.
- FIG. 7 illustrates a block diagram of modules of an entity extraction pipeline performing operations (methods) of entity extraction.
- FIG. 8 illustrates a general overview flow of selected operations of capturing clarification questions/dialog within feature sets according to one example embodiment.
- FIG. 9 illustrates a general overview flow of selected operations for defining optimal feature sets (i.e. feature vector(s)) using a genetic algorithm according to one embodiment.
- FIG. 1 is a block diagram of a top level architecture, in accordance with one example embodiment, of a communication system 100 including a smartphone 102 and components of a cloud-based service infrastructure 104 providing a voice-based interface to one or more services.
- FIG. 2 is a block diagram that shows software architecture of the cloud-based service infrastructure 104 in accordance with one embodiment.
- cloud-based service infrastructure 104 is configured to permit a user of smartphone 102 to provide speech inputs defining commands to obtain one or more services.
- a command may comprise an action and associated parameters or other data.
- A command such as “I want to book a meeting” indicates a calendar-related action but does not include associated parameters such as date, time, location, invitees, etc.
- Services in this context may be internal services or external services.
- Internal services relate to one or more functions of the user's communication device (e.g. smartphone 102 ) such as voice and data communication services, personal information management (PIM) by way of example, telephone, email, Instant Messaging (IM), text or short message service (SMS), calendar, contacts, notes, and other services.
- External services relate to those provided by another party, typically via a web connection, such as a travel booking service, weather information service, taxi service, shopping service, information retrieval service, social networking service, etc.
- the user input may be a speech input, but responses (output) from the service for presenting by smartphone 102 need not be speech (e.g. synthesized automated voice) responses. Output may include text or other types of response (e.g. image, sounds, etc).
- a user may also provide other inputs via the smartphone 102 . For example, a speech input such as “Send an email to Bob” defining a command to email a particular contact may initiate a draft email on smartphone 102 . The user may manually edit the email using a keyboard (not shown) or other input means of smartphone 102 .
- components of cloud-based service infrastructure 104 include cloudfront server 106 , delegate service 108 , event notification service 110 , speech service 112 , NLP service 114 , conversation service 116 , external dependent service interfaces 118 providing access to one or more external services such as flight provider service 118 A, taxi service 118 B and weather service 118 C. It is apparent that there may be a plurality of each of these respective service components within the infrastructure to scalably and reliably handle service requests from a plurality of communication devices, of which only one is illustrated. Though shown as a client (smartphone) and server model, certain functions and features may be performed on the client.
- Cloudfront server 106 provides connection, load balancing and other communication related services to a plurality of communication devices such as smartphone 102 .
- Delegate service 108 is chiefly responsible for handling and/or coordinating processing of the speech input, the resulting commands for the applicable services and any applicable responses.
- Event notification service 110 provides event-related messages to smartphone 102 , for example, data communications such as calendar reminders, recommendation, previously used external services, follow-ups, survey requests, etc.
- Speech service 112 performs speech-to-text conversion, receiving speech input for defining a command, such as in the form of a digital audio recording, from smartphone 102 and provides text output. In examples discussed herein with reference to FIGS. 3-7 , such text output is a user query 302 .
- NLP service 114 analyzes the user query to determine meaning and specific commands with which to provide the services.
- Conversation service 116 assists with the user interface between the user and the services, for example, engaging in natural language dialogue with the user.
- the dialogue may include questions clarifying one or more aspects of a specific command as discussed further herein below.
- the service's responses to speech inputs from smartphone 102 need not be in a spoken word format but may be in a text-based or other format as previously mentioned.
- Interfaces 118 are interfaces to particular web-based services (e.g. Web Services) or other external services. External services typically utilize well-defined interfaces for receiving requests and returning responses. Cloud-based service infrastructure 104 provides a manner for receiving natural language commands for such services, determining the applicable external service request and any associated data (parameters) to make the request and invoking the request. Cloud-based service infrastructure 104 is also configured to receive the applicable response and provide same to smartphone 102 . Similar operations may be performed to invoke internal services.
- Non-service-call passive mechanisms can also be used. In this case, data is placed at a digital location that is accessible by the invoked service. The invoked service checks this digital location. This passive mechanism is also effective as an invocation mechanism.
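A minimal sketch of this passive mechanism, assuming a file-based drop location and a JSON payload (both illustrative choices, not details from the disclosure):

```python
import json
import os
import tempfile

# Agreed "digital location" where the caller drops command data.
drop_dir = tempfile.mkdtemp()
drop_path = os.path.join(drop_dir, "command.json")

# Caller side: place the command data where the invoked service can find it.
with open(drop_path, "w") as f:
    json.dump({"function": "book_meeting",
               "entities": {"day": "Friday", "time": "2pm"}}, f)

# Service side: check the digital location and consume any pending command.
def poll_for_command(path):
    if os.path.exists(path):
        with open(path) as f:
            command = json.load(f)
        os.remove(path)  # consume so the command is not processed twice
        return command
    return None

print(poll_for_command(drop_path))  # the booked-meeting command
print(poll_for_command(drop_path))  # None: already consumed
```

A production system would more likely use a message queue or shared datastore, but the shape is the same: the service polls the location rather than being called directly.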
- Software components 200 further include template service 202 to assist with the conversation service 116 , persistence memcache service/relational database management service (RDBMS) 204 for storing and managing data and application server and business code components 206 such as components of an object oriented JBoss Server and Enterprise Java Beans® (EJB) container service in accordance with an example implementation.
- Smartphone 102 is configured, such as via one or more applications, to send language information to cloud-based service infrastructure 104 and receive a response based on language understanding. Smartphone 102 is also configured to receive notifications from event notification service 110 .
- Smartphone 102 may be configured to perform language understanding without the use of cloud-based service infrastructure 104 , for example, when understanding requires sensitive information or information unique to the phone (e.g. contact information entities).
- User devices need not be limited to smartphones. Other communication devices such as dumb phones can be supported via any communication protocol, including TTY and SMS.
- Non-phone clients like laptops, set top boxes, TV's and kiosks, etc. can be supported as well.
- FIG. 3 illustrates a general overview flow of selected operations (methods) 300 of the service of FIGS. 1 and 2 .
- a user query 302 is input to such operations 300 and provides output 304 discussed further herein below.
- Dialogue driver 306 receives user query 302 for processing, providing same to question type classification determiner 314 .
- User query 302 is also provided to keyword expansion unit 308 .
- the user query and expanded keywords are provided to previous query score determiner 310 which references prior queries (not shown) stored to query database 312 .
- Previous query score determiner 310 performs statistical analysis and provides candidate answers (commands) for ranking by answer ranking unit 316 .
- Previous query score determiner 310 may be useful in determining that a particular user query likely relates to a particular command as well as determining that a particular user query likely does not relate to a particular command.
- Previous query score 602 may be used as an input to 2 layer neural network 610 as shown in FIG. 6 (as well as to other methods for combining statistical classifiers such as a reciprocal rank fusion method).
- Previous query score 602 may also be employed in post-processing of the rank of answers 612 generated by 2 layer neural network 610 to eliminate some candidate answers and/or to select some candidate answers as the command likely intended by the user.
- Alternatively, previous query score 602 may be used only in post-processing of the rank of answers 612 rather than as an input to 2 layer neural network 610 .
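One plausible way to compute a previous query score, sketched here with an invented keyword-overlap measure against a toy query history (the actual statistical analysis used by the system may differ):

```python
# Toy query database: prior user queries with their resolved commands,
# standing in for query database 312. Contents are illustrative.
QUERY_DB = [
    ("tell bob i want a meeting", "email"),
    ("tell alice the report is late", "email"),
    ("call bob about the demo", "telephone"),
    ("book a meeting for friday", "calendar"),
]

def previous_query_scores(user_query):
    """Score each candidate command by keyword overlap with prior queries."""
    words = set(user_query.lower().split())
    scores = {}
    for prior, command in QUERY_DB:
        overlap = len(words & set(prior.split()))
        scores[command] = scores.get(command, 0) + overlap
    return scores

print(previous_query_scores("tell bob about a meeting"))
# "email" accumulates the largest overlap with this user's history
```

Because the history is per-user, the same wording can score differently for different users, which is how the user-centric preferences described below would arise.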
- Query database 312 may store, such as in a machine learning manner, a history of user queries and the associated commands and additional data such as keywords determined by cloud-based service infrastructure 104 .
- The query database 312 may store a complete history (or subset) of a particular user's queries and associated commands to build user-centric preferences. For example, a particular user's user query “Tell Bob I want a meeting” may result in a command to telephone Bob or email Bob. The resulting command to telephone or email, as applicable, may be associated with the query term “tell” on behalf of the particular user.
- query database 312 may also be useful to store and provide access to user queries, commands etc. from all users, such as via an aggregated subset of queries and associated commands.
- the aggregated data may define a broader corpus from which statistics and other data may be gleaned and be useful when determining expanded keywords and/or the classification of a user query.
- Question type classification determiner 314 evaluates user query 302 to determine whether it is a function type query, entity type query, or a clarification type query.
- a function type query establishes a new command.
- An example of a function type query is “Book a meeting for next Friday at 2:00 pm” or “Send a message to Bob”.
- An entity type query is in relation to a current command and adds or changes an entity in such command. For example, “Actually, move that to 3:00 pm” or “Add James to the message”.
- a clarification type query is in relation to a current command and is responsive to a clarification question (output 304 ) posed by dialogue driver 306 .
- Clarification type queries occur only when the dialogue driver asks the user a clarification-style question. For example, for a user query “Tell Bob I want a meeting”, an output 304 comprising a clarification question from dialogue driver 306 may be “Did you want to text or email Bob?”.
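The keyword check that decides whether a follow-up is responsive to the pending clarification question might look like the following sketch; the question identifier and keyword sets are illustrative assumptions:

```python
# Keywords associated with each pending clarification question (invented
# for this example; a real system would derive these, e.g. via TF-IDF).
CLARIFICATION_KEYWORDS = {
    "text_or_email": {"text", "email", "sms", "message"},
}

def classify_followup(user_query, pending_question):
    """Classify a follow-up query while a clarification question is pending."""
    words = set(user_query.lower().split())
    if pending_question and words & CLARIFICATION_KEYWORDS[pending_question]:
        return "clarification"        # responsive to the question posed
    return "entity_or_functional"     # fall through to the SVM stage

print(classify_followup("Email him please", "text_or_email"))      # clarification
print(classify_followup("Actually book a taxi", "text_or_email"))  # entity_or_functional
```

The fall-through case matches the flow of FIG. 4: an unresponsive follow-up is handed to the binary entity-versus-function classifier.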
- Function type queries are directed by question type classification determiner 314 to answer ranking unit 316 for determining the new command, if possible.
- Question type classification determiner 314 directs entity type queries and clarification type queries to template system 318 for additional processing to obtain further meaning from the user query with a view to also initiating appropriate output.
- Template system 318 may also receive function type queries from answer ranking unit 316 .
- Template system 318 may access template memory store 320 to define or refine a command and to define applicable output 304 .
- Extraction pipeline 322 receives the user query and conversation features and extracts entities from the user query to build up the command and its associated data as described further herein below with reference to FIG. 7 .
- Dialogue driver 306 provides output 304 for smartphone 102 also as described below.
- FIG. 4 illustrates a flow chart of a method 400 of question type classification for question type classification determiner 314 in accordance with an example embodiment.
- User query 302 is received.
- A determination is made whether a clarification type question was initiated (i.e. the question was previously posed (e.g. provided as output 304 ) to the smartphone via dialogue driver 306 ). If no, a question is not pending and operations continue at 404 . If yes, operations continue at 406 .
- user query 302 is subjected to binary classification such as via a support vector machine (SVM) for analysis.
- The SVM analyzes the user query to determine whether it is an entity type query related to the current function or, if not, a function type query.
- Function type queries are passed ( 408 ) to answer ranking unit 316 .
- Entity type queries are passed ( 410 ) to template system 318 .
- An SVM is configured using a set of input data or training examples where each is identified as belonging to one of the two query types.
- a training algorithm builds a model for assigning new queries to one of the two types.
- An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap (about a separating hyperplane) that is as wide as possible. New queries are then mapped into that same space and predicted to belong to a category based on the side of the gap on which each respective query falls.
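A dependency-free sketch of this binary classification step follows. A perceptron stands in for the SVM here (both learn a separating hyperplane, though the SVM additionally maximizes the margin), and the binary features and training examples are invented for illustration:

```python
# Hypothetical binary features describing a user query.
FEATURES = ["starts_with_verb", "mentions_person", "refers_to_prior", "has_time"]

def train(data, epochs=10):
    """Learn weights w and bias b separating entity (+1) from function (-1) queries."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the hyperplane
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

training_data = [
    ([0, 0, 1, 1], +1),  # "Actually, move that to 3:00 pm" (entity)
    ([0, 1, 1, 0], +1),  # "Add James to the message" (entity)
    ([1, 1, 0, 1], -1),  # "Book a meeting for next Friday at 2:00 pm" (function)
    ([1, 1, 0, 0], -1),  # "Send a message to Bob" (function)
]
w, b = train(training_data)

def classify(x):
    return "entity" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "functional"

print(classify([0, 0, 1, 0]))  # refers to the prior command -> "entity"
print(classify([1, 1, 0, 0]))  # new imperative naming a person -> "functional"
```

New queries land on one side or the other of the learned hyperplane, which is exactly the prediction rule described above.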
- It may be helpful to select and provide certain words, terms, metadata or other features related to the query. Using all words from a query may be problematic because common words may skew results incorrectly. Services, application programming interfaces or other means which perform entity extraction may be useful to extract entities such as people, places, dates, specific things, etc. Such extracted features may be determined and provided to the SVM.
- Keyword identification may be performed in the context of operations 406 to assist with the determination of whether the user query is an answer to the clarification question posed.
- Statistics may be defined for particular terms to identify their relative frequency of appearance in user queries associated with a particular category (e.g. each respective categories may represent a specific command).
- FIG. 5 illustrates a flow chart of a method 500 of keyword identification such as may be useful for processing a user query to determine a set of keywords related to the command and/or entities in the query.
- a database of queries and associated categories may be defined. For example, in a smartphone communication context relevant to internal services, a subset of categories may represent smartphone functions/commands such as “email”, “telephone”, “book meeting”, “Short Message Service (SMS)/Text” among others.
- the user queries grouped by associated categories are represented generically as Category “A” queries 502 , Category “B” queries 504 , Category “C” queries 506 , and Category “D” queries 508 . It is understood that more categories may exist in an actual implementation.
- the relative frequency of a term in a category is comparatively determined in relation to the term's infrequency in the other categories as well.
- term frequency-inverse document frequency (TF-IDF) word scoring is used to determine keywords for each category.
- a document is defined as the set of queries that have the same category (e.g. 508 ).
- the corpus (within query database 312 ) is the set of queries ( 502 , 504 and 506 etc.) that are not in the category for which the keywords are being found.
- a term (keyword) which is relatively unique to category “D” will be less frequently occurring in the corpus of category “A”, “B” and “C” queries.
- This database and associated statistics may be maintained (e.g. pre-calculated) so that the statistics are available for use in real-time when processing the user query.
- a word ranking for words in the current user query may be determined (at 512 ) to identify unique words indicative of keyword status.
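The keyword statistics described above, with each category's pooled queries playing the role of a "document", can be sketched as follows. The categories and query strings are illustrative assumptions.

```python
# TF-IDF keyword scoring per category: a term scores highly when it
# is frequent in its own category's pooled queries but rare in the
# queries of the other categories.
import math
from collections import Counter

categories = {
    "email":   ["send an email to bob", "email the report"],
    "sms":     ["text bob hello", "send a text message"],
    "meeting": ["book a meeting thursday", "schedule a meeting"],
}

# Each "document" is the concatenation of a category's queries.
docs = {c: Counter(" ".join(qs).split()) for c, qs in categories.items()}
n_docs = len(docs)

def tfidf(term, category):
    tf = docs[category][term] / sum(docs[category].values())
    df = sum(1 for counts in docs.values() if counts[term] > 0)
    idf = math.log(n_docs / df) if df else 0.0
    return tf * idf

# A term relatively unique to the "meeting" category ranks highly there.
meeting_score = tfidf("meeting", "meeting")
```

These per-category scores can be pre-calculated and stored so they are available in real time when a user query arrives, as the surrounding text notes.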
- the user query may be analyzed for keywords from the category or categories associated to the user query. Given that a clarification type question elicited the current user query, one or more categories may have been previously associated with the current user query as determined from processing the prior user query which occasioned the clarification. These one or more categories relate to the current command. It is understood that because individual user queries may be vague and/or ambiguous, more than one category (command) may be associated with the user query, e.g. as respective candidate commands. As the dialogue develops, a specific command can be determined.
- the classification of the query type is useful to initiate a new command (via answer ranking unit 316 and template system 318 ) or to further process a current command (via template system 318 ).
- Answer ranking may be performed when a user query is identified as a function type query indicating a new command. Answer ranking may be performed to assist with the identification of the specific command to which the user query relates.
- answer ranking method 600 performs four types of analyses ( 602 , 604 , 606 and 608 ) of user query 302 and combines the results of same (via two-layer neural network 610 ) to drive a rank of answers 612 .
- While four natural language processing techniques are used in the example embodiment for this analysis, fewer or additional techniques may be used and the respective results of same combined to drive a rank of answers.
- a means other than a two-layer neural network may be used to combine such results.
- user history is examined to define a previous query score ( 602 ). Keywords are extracted from the user query such as by TF-IDF techniques. Previous user queries and their respective associated commands form the corpus for comparative purposes.
- Keywords may be expanded with related words (e.g. synonyms) such as via WordNet™ expansion (WordNet is a registered trademark of Princeton University <http://wordnet.princeton.edu>).
- the extracted and expanded keywords may form the basis of a comparison or search applied against the query corpus and a relevance score calculated (e.g. retrieval and ranking functions) to rate search results.
- the search results (i.e. the respective associated command and the ranking score) may be provided for further processing.
- the ranking function applied at 602 may comprise a BM25 or similar ranking function (e.g. BM25-F taking into account document format, length, etc.).
- BM25 relies upon IDF statistics to determine relevance of keywords in a document set.
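A compact BM25 scorer of the kind referred to above is sketched below. The constants k1 and b are the commonly used defaults, and the tiny corpus of command "documents" is an illustrative assumption, not content from the patent.

```python
# BM25 ranking: score a set of query keywords against each document
# (here, one document per candidate command), using IDF statistics to
# weight keyword relevance across the document set.
import math
from collections import Counter

corpus = {
    "email":   "send an email message to bob about the report".split(),
    "flight":  "book a flight to calgary on friday".split(),
    "meeting": "book a meeting with bob on thursday".split(),
}

N = len(corpus)
avgdl = sum(len(d) for d in corpus.values()) / N

def bm25(query_terms, doc_id, k1=1.5, b=0.75):
    doc = corpus[doc_id]
    counts = Counter(doc)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus.values() if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = counts[term]
        # Term-frequency saturation (k1) and length normalization (b).
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

scores = {d: bm25(["flight", "calgary"], d) for d in corpus}
best = max(scores, key=scores.get)
```

The BM25-F variant mentioned in the text additionally weights fields of structured documents (title, body, etc.), which this sketch omits.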
- the user query is applied to a set of decision trees where each decision tree assigns (determines) a command to which the user query relates.
- a rank (or mode) may be calculated to determine which command (or commands) results most frequently by the decision trees.
- Let N(c) represent the number of decision trees that classify the user query as command c.
- R(c) is the score for class c, calculated as N(c) divided by the sum of N(c′) over all commands c′ derived by the decision trees.
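The vote ratio R(c) = N(c) / Σ N(c′) just described can be computed directly from the per-tree classifications; the tree votes below are hypothetical.

```python
# Compute R(c) from the commands assigned by each decision tree in
# the forest: N(c) counts the trees voting for command c, and R(c)
# normalizes those counts so the scores sum to 1.
from collections import Counter

# Hypothetical per-tree classifications of one user query.
tree_votes = ["email", "email", "sms", "email", "meeting", "email", "sms"]

counts = Counter(tree_votes)          # N(c) for each command c
total = sum(counts.values())          # sum of N(c) over all commands
R = {c: n / total for c, n in counts.items()}

top_command = max(R, key=R.get)       # the mode of the tree votes
```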
- the scores and associated candidate commands are made available to operations of two-layer neural network 610 .
- a two layer neural network (see 610 , discussed below) may be trained in order to determine the probability that the query was relevant. From this probability, a rank for each of the classes can be determined.
- each SVM is a binary classifier configured to determine whether the user query is associated with a particular command or any of the other commands (i.e. a one-versus-all determination).
- an SVM is configured for each pair of commands to determine whether the user query is associated with one of two particular commands (e.g. email vs. telephone) (i.e. a one-versus-one determination). It is noted that in a one-versus-one embodiment, SVMs may be configured for a pairing of particular commands to a null class.
- a winner takes all approach is often adopted, selecting the highest score from the SVMs.
- the SVMs require calibration to produce comparable scores.
- a command selected most frequently by the set of SVMs is the candidate command if the SVM approach is the sole classifier.
- scores for each candidate command are provided for operations of two-layer neural network 610 .
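The one-versus-all arrangement with winner-takes-all selection can be sketched as below. scikit-learn is an assumption here (its linear SVM happens to implement exactly this one-vs-rest scheme), and the toy feature vectors and command names are illustrative.

```python
# One binary SVM per command (one-versus-all); the command whose SVM
# produces the highest decision score wins.
from sklearn.svm import LinearSVC

# Toy feature vectors: email queries near (0, 0-1), sms near (1, 0-1),
# meeting near (2, 2-3). All assumed data for illustration.
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3]]
y = ["email", "email", "sms", "sms", "meeting", "meeting"]

# LinearSVC trains one binary classifier per class by default
# (one-vs-rest) and predicts by winner-takes-all over the per-class
# decision scores -- the calibration caveat in the text applies here.
clf = LinearSVC(C=1.0).fit(X, y)
scores = clf.decision_function([[2.1, 2.4]])[0]
winner = clf.classes_[scores.argmax()]
```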
- the user query is provided to a Bayes-theorem based classifier with strong independence assumptions to perform document classification.
- the naïve Bayes classifier determines a probability that a particular user query (set of features) belongs to (e.g. is associated with) a particular class (i.e. command).
- the classifier may be trained using a training set of known queries and commands. It is assumed that words of a user query are independent. Frequency of appearance (count) of a word in a given class (command) may be used to determine a probability that a particular word is in a particular class.
- the score for a particular class is the product of the scores (probabilities) for each word in the query relative to the particular class. Care must be taken when a word never appears in a particular class to avoid multiplying by zero.
- a smoothing technique can be used to eliminate the effects of zero probabilities in the data.
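A word-count naive Bayes classifier with add-one (Laplace) smoothing, one standard way to eliminate the zero-probability problem noted above, can be sketched as follows. The training queries and class names are assumptions; log-probabilities replace the raw product to avoid underflow.

```python
# Naive Bayes over query words, with add-one smoothing so that a word
# never seen in a class contributes a small probability instead of
# zeroing out the whole product.
import math
from collections import Counter

training = [
    ("send an email to bob", "email"),
    ("email the report to alice", "email"),
    ("text bob i am late", "sms"),
    ("send a text to alice", "sms"),
]

class_words = {}
for q, cls in training:
    class_words.setdefault(cls, Counter()).update(q.split())

vocab = {w for counts in class_words.values() for w in counts}

def log_score(query, cls):
    counts = class_words[cls]
    total = sum(counts.values())
    # Each word gets a pseudo-count of 1 (Laplace smoothing); summing
    # logs is equivalent to the product of per-word probabilities.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in query.split()
    )

best = max(class_words, key=lambda c: log_score("email the report", c))
```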
- candidate commands and scores from each of the analyses are available to a two layer neural network to drive a result, tying the four individual predictions (scores) for each class (command) together to define a single score for each command.
- the scores from the classifiers are used as input nodes to a two layer neural network which represents a rank function.
- the set of classifier scores for a single class represents a single input vector. This vector is scored, via the neural network, according to its relevance to the user query. Here a score of 1 is considered highly relevant to the user's query and a score of 0 is considered irrelevant.
- Each of the vectors for each category is scored via the rank function and sorted according to its score.
- the scores are normalized by dividing each of the scores by the maximum of the scores.
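The combination step above can be sketched as a forward pass through a small two-layer network followed by max-normalization. The weights and the per-command score vectors below are illustrative assumptions, not trained values from the patent.

```python
# Combine the four classifier scores per command into one relevance
# score via a two-layer network, then normalize by the maximum score.
import math

# (previous-query, random-forest, SVM, naive-Bayes) scores per command.
class_scores = {
    "email":   [0.9, 0.8, 0.7, 0.85],
    "sms":     [0.2, 0.3, 0.1, 0.25],
    "meeting": [0.5, 0.4, 0.6, 0.45],
}

W1 = [[0.5, 0.5, 0.5, 0.5], [1.0, -1.0, 1.0, -1.0]]  # hidden layer
W2 = [1.5, 0.2]                                      # output layer

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rank_score(v):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, v))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

raw = {c: rank_score(v) for c, v in class_scores.items()}
peak = max(raw.values())
normalized = {c: s / peak for c, s in raw.items()}   # divide by maximum
ranked = sorted(normalized, key=normalized.get, reverse=True)
```

In the patent's embodiment the weights would be trained so that a score near 1 marks a command as highly relevant to the user query.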
- Template system 318 thus has available from answer ranking unit 316 data identifying the user query as a function type query and candidate commands and rankings for the new function. Template system may initiate processing to identify a particular command. From question type classification unit 314 , template system 318 has data identifying the user query as an entity type or a clarification type. Thus template system may continue previous processing to derive a particular command. Template system 318 may comprise a state machine having states: new function, continue prior dialogue, undo last action/negation (e.g. a command may be to stop or change).
- Each command may be associated with a respective template in template memory store 320 .
- Templates may be used to build up a command and its parameters (data).
- Extraction pipeline 322 may analyze the user query for entities and relationships among entities in the current user query and prior related user queries, working with the template system 318 to populate the template. Anaphora techniques may be used to relate specific words in a user query with entities in a prior user query. For example, given the user queries "I'd like a ticket from New York" followed by "Change that to San Jose", the word "that" will be associated with both "ticket" and "New York", and New York will be an entity initially defining the from location for the ticket in the template. Extracted template entities are provided back to the template system 318 . Control is passed back to the dialogue manager, for example, to produce dialogue.
- Hobbs' algorithm is used to determine the entity(ies) in a previous user query that relate to pronouns in the current user query.
- Example processing may involve determining whether one or more pronouns is present in the current user query. If one or more pronouns is present in the current user query, then Hobbs' algorithm may be used for each pronoun to determine the words in previous user queries that are referenced by each pronoun.
- a second user query may be created by substituting the pronoun with the word referenced by the pronoun and named entity recognition may be performed on the second user query.
- a user previously uttered the user query “Find me a flight from Calgary” and subsequently says “Change that to New York” which is the current user query.
- the current user query may be analyzed to determine if a pronoun is present; in this example, the pronoun “that” is identified.
- Hobbs' algorithm may be employed to determine which word(s) in the previous user queries are likely referenced by the pronoun “that”.
- a second user query is created by substituting the pronoun with the word the pronoun likely references which results in a second user query of “Change Calgary to New York”. Entity extraction may then be performed on the second user query as described herein in order to perform the command intended by the user.
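The substitution step above can be sketched as follows. This is deliberately not Hobbs' algorithm itself (which walks syntactic parse trees to find antecedents); the resolved antecedent is simply assumed here to illustrate how the second user query is formed.

```python
# Simplified stand-in for the pronoun-resolution flow above: detect a
# pronoun in the current query, substitute the antecedent it is
# assumed to reference, and form the second user query on which
# entity extraction would then run.
previous_query = "Find me a flight from Calgary"
current_query = "Change that to New York"

# Assume the resolver (e.g. Hobbs' algorithm over the parse of the
# previous query) has linked "that" to the entity "Calgary".
resolved_antecedent = "Calgary"

if "that" in current_query.split():
    second_query = current_query.replace("that", resolved_antecedent, 1)
else:
    second_query = current_query
```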
- a genetic algorithm 704 working on a general feature set 706 determined from a labeled corpus 708 generates (e.g. off-line, in advance of its use in a particular classification instance) optimized feature sets 702 for respective specific classes (e.g. types of functions).
- Each class may have its own extraction pipeline 322 for extracting entities for the specific class.
- Each pipeline instance 322 receives the user query 302 and its conversation features 710 .
- conversational features 710 include question ID, results of pronoun resolution with previous sentences, and other related information.
- a feature extraction module 712 expands the features associated with the user query 302 and conversation features 710 . Examples include date lists, number lists, city lists, time lists, name lists, among others.
- the expanded user query and its specific conversation features are fed through the filter created by the genetic algorithm and provided to a previously defined conditional random field (CRF) or another sequential classifier.
- CRF is a statistical modeling method applied for pattern recognition. Optimized feature sets are used to define the filter and to train the CRF.
- the CRF is trained with specific features decided by the genetic algorithm. To train a CRF, it is required to obtain training data, which includes a set of labeled test queries relating to a particular domain. Labeling a set of training data may include labeling entities found in the test queries (such as departure_city) by marking up the text queries using a predefined mark-up language or format. After it is trained with specific features it will expect those features in the future. The system ensures that the CRF only gets the features it is expecting.
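The "ensures that the CRF only gets the features it is expecting" step can be sketched as a simple pruning of each token's expanded feature dict against the optimized feature set fixed at training time. The feature names below are hypothetical.

```python
# Prune run-time features to the optimized set chosen by the genetic
# algorithm, so the trained CRF sees exactly the features it was
# trained with and no others.
optimized_features = {"word.lower", "is_city", "is_date", "prev_word"}

def filter_features(token_features):
    return {k: v for k, v in token_features.items() if k in optimized_features}

expanded = {
    "word.lower": "calgary",
    "is_city": True,
    "is_number": False,      # not in the optimized set; dropped
    "prev_word": "from",
}
filtered = filter_features(expanded)
```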
- a first layer determines general entities (e.g. an entity extraction CRF 714 ). For example, in a travel booking user interface, general entities may include date, place, time.
- a second layer determines more specific template entities (e.g. a template filler extraction CRF 716 ) such as destination, departure location, departure date to fill templates of the template system 318 .
- a single CRF layer may be employed.
- Template system 318 may store (e.g. to template memory store 320 ) the filled or partially filled template for a particular command as user queries are processed.
- the first CRF may be used to determine general entities as described above, and these entities may be used as features in the second CRF which then determines more specific entities.
- a genetic algorithm assists to make the extraction pipeline adaptable to new domains, defining new optimized feature sets as directed.
- Dialogue driver 306 maintains conversation/system state and generates responses (output 304 ) based on the state of the conversation.
- Dialogue driver 306 may be configured as a finite state machine. Markov decision process (MDP) or partially observable MDP (POMDP) techniques may be used for determining actions of the dialogue driver 306 . States may comprise entity, clarification, speech error, NLP error, unknown request, informative response.
- Clarification type questions may be generated.
- Each class has a predefined description.
- Dialogue driver 306 generates a question providing specific alternatives among the classes, e.g. "Did you want to <class 1>, <class 2>, <class 3>?" For a user query "Tell Bob I want a meeting Thursday", a question in response is "Did you want to text, email or book a meeting?" The dialogue driver passes the desired command and extracted entities to the delegate service 108 , for example, to invoke a particular function.
- FIG. 8 illustrates a general overview flow of selected operations (methods) 800 of capturing clarification questions/dialog within feature sets according to one example embodiment.
- the operations 800 may be used to increase the accuracy of the service of FIGS. 1 and 2 by incorporating clarification questions and/or user queries responsive to clarification questions into a feature set used to extract entities.
- a feature set is created for each general domain of knowledge.
- the calendar domain may have a feature set
- the news domain may have a feature set
- the knowledge domain may have a feature set, and so forth.
- Each feature set may be created and fine-tuned using one of several techniques, for example, by using one or more genetic algorithms, examples of which are described herein.
- a given feature set may include one or more elements that represent whether a clarification question was initiated by the system and/or whether a given user query was responsive to a clarification question posed.
- a particular feature may be present in a feature set for each clarification question/dialog statement that may be initiated by the system and presented to the user.
- the feature associated with the particular clarification question may be set to ‘1’ and all other features related to the other clarification questions (i.e. the clarification questions not posed to the user) may be assigned a ‘0’ or NULL value.
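The '1'/'0' assignment just described is a one-hot encoding over the system's possible clarification questions; a minimal sketch follows, with the question texts assumed for illustration.

```python
# One feature per possible clarification question: the question that
# was actually posed gets 1, every other question's feature gets 0.
clarification_questions = [
    "Which city would you like to leave from?",
    "What day would you like to leave?",
    "Who should the email go to?",
]

def clarification_features(posed_question):
    return [1 if q == posed_question else 0 for q in clarification_questions]

features = clarification_features("What day would you like to leave?")
```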
- the system includes a plurality of possible clarification questions that may be initiated and presented to the user on smartphone 102 in order to elicit entity information from the user.
- the particular clarification question posed to a user depends at least in part on the entities that have not been provided by the user's query 302 or extracted by the system.
- the system maintains a linear mapping between all possible filled or unfilled entity states, and predefined questions related to each entity state.
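The mapping from unfilled-entity states to predefined questions can be sketched as a lookup table. The entity names and question wordings below are assumptions for illustration.

```python
# Map each possible set of missing entities to a predefined
# clarification question that elicits exactly those entities.
entity_state_questions = {
    frozenset(["departure_city"]): "Which city would you like to leave from?",
    frozenset(["departure_date"]): "What day would you like to leave?",
    frozenset(["departure_city", "departure_date"]):
        "Where and when would you like to leave from?",
}

required = {"departure_city", "departure_date", "destination_city"}
filled = {"destination_city", "departure_date"}       # extracted so far
missing = frozenset(required - filled)

question = entity_state_questions.get(missing)
```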
- a user utters the user query of “Get me a flight to Calgary leaving on Friday”.
- the system may classify the sentence in accordance with FIG. 4-6 and extract entities (“Calgary” and “Friday”) according to FIG. 7 .
- the system may further determine that a departure city is required in order to perform the command desired by the user. Several techniques may be used to determine the departure city. In some embodiments, the system may use a default rule that automatically selects the closest city with an airport. In other embodiments, the system initiates a clarification question to elicit the departure city from the user, for example "Which city would you like to leave from?"
- In another exemplary interaction, the user utters the user query "Get me a flight from Toronto to Calgary". In one embodiment, the system may process the user query 302 in accordance with FIGS. 3-7 and determine that a departure date and possibly a return date is required in order to execute the desired command (i.e. find a flight). Continuing with the example, the system may present a clarification question to the user on smartphone 102 to elicit the departure date such as "What day would you like to leave?"
- a user query 302 is received and the output of operations 800 is provided to the template system as described herein.
- answer ranking is performed on the user query to identify the command desired by the user.
- a determination is made at step 804 whether a clarification type question was recently initiated by the service and presented to the smartphone via dialogue driver 306 . If a clarification question was recently initiated, then at 806 A, a feature vector is created that represents the user query 302 and other relevant information. Other relevant information may include the clarification question that was initiated by the system.
- the feature vector created for a given user query will generally only indicate the particular clarification question that was initiated (although multiple clarifications may be concatenated into a single dialog statement in other embodiments and the multiple clarification questions will be captured in the feature set).
- the other clarification questions i.e. the clarification questions that were not initiated by the system
- the feature vector created at step 806 A or 806 B is then applied to one or more conditional random fields to extract the entities that relate to the identified command.
- the flow of operations is transferred to the template system 318 at step 814 so that the command may be performed.
- a clarification question may also be presented at 812 to confirm the request. If all of the entities for a particular command are not known, however, then the system will identify the proper clarification question to present to the user to elicit the unknown entities and will present the selected clarification question to the user at step 812 .
- a clarification question and/or dialog is presented to the user after every user query 302 is received.
- the dialog is selected at 810 and presented to the user at 812 .
- Any new entities that have been extracted at 808 are also provided to the template system at 814 .
- a general overview flow of selected operations (methods) 900 is illustrated for defining optimal feature sets (i.e. feature vector(s)) using a genetic algorithm according to one embodiment.
- one or more initial features sets are defined as the starting point in the genetic algorithm.
- the initial feature set(s) may be generated randomly or may be selected by an expert at least partly based on the subject matter of the domain to which the feature set is directed (for example, weather).
- a set of random permutations of the initial feature set(s) is generated.
- the number of random permutations generated is up to an administrator, but may also be preset depending on the number of features available in a given feature set. For example, if a given feature set has hundreds of thousands of features available then it may be desirable to run the genetic algorithm with thousands of random permutations being generated at step 904 .
- each of the random permutations of feature sets will be tested against a test dataset that contains test user queries.
- each user query in the test dataset will be applied to each random permutation in order to evaluate the performance (i.e. accuracy, speed, etc.) of each random permutation feature set.
- a performance measure is calculated for each random permutation.
- the performance measure is calculated using a function that includes an “f-measure+log(n)” relationship so that random permutations having a combination of accuracy and speed are favored by the system, although other performance measures may be used at step 908 .
- Step 910 is an optional step in which the performance measure of each random permutation is compared against a predetermined threshold. If one or more of the random permutations has a performance measure greater than the predetermined threshold, then the random permutation with the most favorable performance measure may be selected as the genetic algorithm is being applied. If none of the random permutations have a performance measure greater than the predetermined threshold, then a subset of the random permutations with the most favorable performance measures (or all of the random permutations) may be set as the initial feature sets at 914 and the genetic algorithm may be run again beginning at step 902 .
- the flow of operations shown in FIG. 9 is one round of genetic selection. In some embodiments, the operations of FIG. 9 may be run several times to increase the performance of the final feature set selected at step 912 .
- the process of FIG. 9 begins by an administrator selecting the initial feature vector as well as tuning parameters X and Y.
- X refers to the number of permutations that are to be generated at 904 and Y refers to the number of times the genetic algorithm is run (i.e. the number of rounds of genetic selection to be performed).
- decision step 910 is not executed, but rather, the algorithm is run 10,000 times whether or not a performance measure is calculated.
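One round of the genetic selection loop described above can be sketched end to end. The mutation scheme, the placeholder performance function (of the "f-measure + log(n)" shape, favoring accuracy and smaller, faster feature sets), and the feature names are all assumptions for illustration.

```python
# One round of genetic selection: mutate the initial feature set X
# times, score each permutation, and keep the best-scoring set (or
# seed the next round with the top performers).
import math
import random

random.seed(0)

all_features = [f"f{i}" for i in range(20)]
initial_set = set(all_features[:8])

def mutate(feature_set):
    child = set(feature_set)
    for f in random.sample(all_features, 3):   # flip 3 random features
        child.symmetric_difference_update({f})
    return child

def performance(feature_set):
    # Stand-in for evaluating the feature set against the test
    # queries: a placeholder f-measure plus a log term rewarding
    # smaller (hence faster) feature sets.
    f_measure = random.random()
    return f_measure + math.log(1 + 1 / max(len(feature_set), 1))

X = 50                                          # permutations per round
permutations = [mutate(initial_set) for _ in range(X)]
scored = sorted(permutations, key=performance, reverse=True)
best_set = scored[0]
```

Running Y such rounds, feeding each round's top performers back in as initial sets, corresponds to the multi-round flow the text describes.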
Abstract
Description
- The present disclosure relates to natural language processing in a speech-based user interface and more particularly to classifying speech inputs.
- User interfaces for electronic and other devices are evolving to include speech-based inputs in a natural language such as English. A user may voice a command to control the operation of a device such as a smartphone, appliance, robot or other device. Natural language processing, a type of machine learning using statistics, may be used to interpret and act upon speech inputs. Speech recognition may convert the input to text. The text may be analyzed for meaning to determine the command to be performed.
- Speech inputs in a natural language for a command may be ambiguous and require clarification. More than one speech input may be occasioned to complete a specific command. Thus, sequential speech inputs may relate to a same command or to different commands.
- Classifying a speech input in relation to a current command or a new command may be useful to processing the command.
- A method and system are provided for processing natural language user queries for commanding a user interface to perform functions. Individual user queries are classified in accordance with the types of functions and a plurality of user queries may be related to define a particular command. To assist with classification, a query type for each user query is determined where the query type is one of a functional query requesting a particular new command to perform a particular type of function, an entity query relating to an entity associated with the particular new command having the particular type of function and a clarification query responding to a clarification question posed to clarify a prior user query having the particular type of function. Functional queries may be processed using a plurality of natural language processing techniques and scores from each technique combined to determine which type of function is commanded.
- In one example aspect, there is provided a computer-implemented method of processing user queries comprising natural language for a natural language-based user interface for performing one or more functions. The method comprises: receiving at a computing device a plurality of user queries for defining one or more commands for controlling the user interface to perform particular types of functions; and classifying, via the computing device, individual user queries in accordance with the types of functions to relate a subset of the plurality of user queries to define a particular command for invoking a particular type of function, determining a query type for each user query, the query type selected from a group comprising a functional query, an entity query and a clarification query; wherein the functional query comprises a request for a particular new command to perform a particular type of function; the entity query relates to an entity associated with the particular new command having the particular type of function; and the clarification query is responsive to a clarification question posed to clarify a prior user query having the particular type of function.
- The computer-implemented method may further comprise further processing the user queries in response to the particular type of function to define the particular command. The computer-implemented method may further comprise providing the particular command to invoke the function.
- Classifying may comprise, for a user query received following a posing of a clarification question: performing keyword analysis on the user query to determine whether the user query is responsive to the clarification question; and classifying the user query as a clarification query having the particular type of function in response to the keyword analysis. Keyword analysis may be performed in accordance with term frequency-inverse document frequency (TF-IDF) techniques to identify keywords in the user query which are associated with the clarification question posed.
- The computer-implemented method may comprise, for a user query received following a posing of a clarification question which is unresponsive to the question posed, or for a user query received other than following a posing of a clarification question: determining whether the user query is an entity query or a functional query and, in response, performing one of: classifying the user query as an entity query having the particular type of function of the particular command to which it relates; or classifying the user query as a functional query and analyzing the user query to determine the particular type of function for the particular new command. Determining whether the user query is an entity query or a functional query may be performed using a support vector machine.
- Analyzing the user query to determine the particular type of function may comprise: performing a plurality of natural language processing techniques to determine a rank of candidate types of functions and selecting the type of function in response. The natural language processing techniques may include one or more of random forest processing, naïve Bayes classifier processing, a plurality of support vector machines processing, and previous query score processing. The rank may be derived from the plurality of natural language processing techniques via a two layer neural network responsive to an output of each of the plurality of natural language processing techniques. Previous query score processing may comprise: performing statistical analysis to provide candidate types of functions for the user query, the analysis responsive to keywords of the user query and prior user queries having associated respective types of functions previously determined for each of the prior user queries. The computer-implemented method may comprise maintaining a data store of prior user queries and respective types of functions. The prior user queries may be responsive to individual users to provide user-centric preferences for commands.
- The computer-implemented method may comprise posing a clarification question in response to a previous user query, the clarification question associated with a type of function.
- Processing the user queries in response to the particular type of function may comprise extracting entities from the user queries for the particular command using statistical modeling methods. A genetic algorithm may be used to define optimized features sets with which to extract the entities for particular types of functions. The statistical modeling methods may comprise using conditional random fields.
- The user queries may comprise voice signals and the method may further comprise converting the voice signals to text.
- In one example aspect, there is provided a system comprising one or more processors and memory storing instructions and data for performing a method in accordance with an aspect described. In one example aspect, there is provided a computer program product comprising a storage medium (e.g. a memory or other storage device) storing instructions and data for performing a method in accordance with an aspect described.
- FIG. 1 is a block diagram of a top level architecture of a communication system including a smartphone and a cloud-based service in accordance with one example embodiment.
- FIG. 2 is a block diagram that shows software architecture of the cloud-based service in accordance with one embodiment.
- FIG. 3 illustrates a block diagram of modules performing operations (methods) of the service of FIGS. 1 and 2 .
- FIG. 4 illustrates a block diagram of modules performing operations (methods) of question type classification.
- FIG. 5 illustrates a block diagram of modules performing operations (methods) of keyword identification.
- FIG. 6 illustrates a block diagram of modules performing operations (methods) of answer ranking.
- FIG. 7 illustrates a block diagram of modules of an entity extraction pipeline performing operations (methods) of entity extraction.
- FIG. 8 illustrates a general overview flow of selected operations of capturing clarification questions/dialog within feature sets according to one example embodiment.
- FIG. 9 illustrates a general overview flow of selected operations for defining optimal feature sets (i.e. feature vector(s)) using a genetic algorithm according to one embodiment.
- Like reference numerals indicate like parts throughout the diagrams.
- FIG. 1 is a block diagram of a top level architecture, in accordance with one example embodiment, of a communication system 100 including a smartphone 102 and components of a cloud-based service infrastructure 104 providing a voice-based interface to one or more services. FIG. 2 is a block diagram that shows software architecture of the cloud-based service infrastructure 104 in accordance with one embodiment. In the present example embodiment, cloud-based service infrastructure 104 is configured to permit a user of smartphone 102 to provide speech inputs defining commands to obtain one or more services.
- A command may comprise an action and associated parameters or other data. For example, a command such as "I want to book a meeting" indicates a calendar related action but does not include associated parameters such as date, time, location, invitees etc. A command "I want to fly to San Francisco next Tuesday" indicates a travel related action and provides some associated parameters such as destination and travel date.
- Services in this context may be internal services or external services. Internal services relate to one or more functions of the user's communication device (e.g. smartphone 102) such as voice and data communication services, personal information management (PIM) by way of example, telephone, email, Instant Messaging (IM), text or short message service (SMS), calendar, contacts, notes, and other services. External services relate to those provided by another party, typically via a web connection, such as a travel booking service, weather information service, taxi service, shopping service, information retrieval service, social networking service, etc.
- In some contexts, the user input may be a speech input, but responses (output) from the service for presentation by
smartphone 102 need not be speech (e.g. synthesized automated voice) responses. Output may include text or other types of response (e.g. images, sounds, etc.). In addition to speech inputs, a user may also provide other inputs via the smartphone 102. For example, a speech input such as “Send an email to Bob” defining a command to email a particular contact may initiate a draft email on smartphone 102. The user may manually edit the email using a keyboard (not shown) or other input means of smartphone 102. - With reference to
FIGS. 1 and 2, components of cloud-based service infrastructure 104 include cloudfront server 106, delegate service 108, event notification service 110, speech service 112, NLP service 114, conversation service 116, and external dependent service interfaces 118 providing access to one or more external services such as flight provider service 118A, taxi service 118B and weather service 118C. It is apparent that there may be a plurality of each of these respective service components within the infrastructure to scalably and reliably handle service requests from a plurality of communication devices, of which only one is illustrated. Though shown as a client (smartphone) and server model, certain functions and features may be performed on the client. -
Cloudfront server 106 provides connection, load balancing and other communication related services to a plurality of communication devices such as smartphone 102. Delegate service 108 is chiefly responsible for handling and/or coordinating processing of the speech input, the resulting commands for the applicable services and any applicable responses. -
Event notification service 110 provides event-related messages to smartphone 102, for example, data communications such as calendar reminders, recommendations, previously used external services, follow-ups, survey requests, etc. -
Speech service 112 performs speech-to-text conversion: it receives speech input defining a command, such as in the form of a digital audio recording, from smartphone 102 and provides text output. In examples discussed herein with reference to FIGS. 3-7, such text output is a user query 302. -
NLP service 114 analyzes the user query to determine meaning and specific commands with which to provide the services. Conversation service 116 assists with the user interface between the user and the services, for example, engaging in natural language dialogue with the user. The dialogue may include questions clarifying one or more aspects of a specific command as discussed further herein below. The service's responses to speech inputs from smartphone 102 need not be in a spoken word format but may be in a text-based or other format as previously mentioned. -
Interfaces 118 are interfaces to particular web-based services (e.g. Web Services) or other external services. External services typically utilize well-defined interfaces for receiving requests and returning responses. Cloud-based service infrastructure 104 provides a manner for receiving natural language commands for such services, determining the applicable external service request and any associated data (parameters) to make the request, and invoking the request. Cloud-based service infrastructure 104 is also configured to receive the applicable response and provide same to smartphone 102. Similar operations may be performed to invoke internal services. - Internal services such as via
interfaces 118 can be invoked in a number of ways. Any service call mechanism can be used; examples include, but are not limited to, REST, SOAP and CORBA. Non-service call, passive mechanisms can also be used: data is placed at a digital location that is accessible by the invoked service, and the invoked service checks this digital location. This passive mechanism is also effective as an invocation mechanism. - For simplicity, components appearing in
FIG. 2 that also appear in FIG. 1 are identically numbered. Software components 200 further include template service 202 to assist with the conversation service 116, persistence memcache service/relational database management service (RDBMS) 204 for storing and managing data, and application server and business code components 206 such as components of an object oriented JBoss Server and Enterprise Java Beans® (EJB) container service in accordance with an example implementation. -
Smartphone 102 is configured, such as via one or more applications, to send language information to cloud-based service infrastructure 104 and receive a response based on language understanding. Smartphone 102 is also configured to receive notifications from event notification service 110. In some embodiments, smartphone 102 may be configured to perform language understanding without the use of cloud-based service infrastructure 104, for example, when understanding requires sensitive information or information unique to the phone (e.g. contact information entities). In some embodiments (not shown), user devices need not be limited to smartphones only. Other communication devices can be supported, such as dumb phones, via any communication protocol including TTY and SMS. Non-phone clients, like laptops, set top boxes, TVs, kiosks, etc., can be supported as well. -
FIG. 3 illustrates a general overview flow of selected operations (methods) 300 of the service of FIGS. 1 and 2. A user query 302 is input to such operations 300 and provides output 304 discussed further herein below. -
Dialogue driver 306 receives user query 302 for processing, providing same to question type classification determiner 314. User query 302 is also provided to keyword expansion unit 308. The user query and expanded keywords (not shown) are provided to previous query score determiner 310, which references prior queries (not shown) stored to query database 312. Previous query score determiner 310 performs statistical analysis and provides candidate answers (commands) for ranking by answer ranking unit 316. - Previous
query score determiner 310 may be useful in determining that a particular user query likely relates to a particular command, as well as determining that a particular user query likely does not relate to a particular command. Previous query score 602 may be used as an input to 2 layer neural network 610 as shown in FIG. 6 (as well as to other methods for combining statistical classifiers, such as a reciprocal rank fusion method). Previous query score 602 may also be employed in post-processing of the rank of answers 612 generated by 2 layer neural network 610 to eliminate some candidate answers and/or to select some candidate answers as the command likely intended by the user. In some embodiments, previous query score 602 is used only in post-processing of the rank of answers 612 instead of as an input to 2 layer neural network 610. -
Query database 312 may store, such as in a machine learning manner, a history of user queries and the associated commands and additional data such as keywords determined by cloud-based service infrastructure 104. The query database 312 may store a complete history (or subset) of a particular user's queries and associated commands to build user-centric preferences. For example, a particular user's user query “Tell Bob I want a meeting” may result in a command to telephone Bob or email Bob. The resulting command to telephone or email, as applicable, may be associated with the user query “tell” on behalf of the particular user. - In addition to providing a source of user-centric preferences,
query database 312 may also be useful to store and provide access to user queries, commands etc. from all users, such as via an aggregated subset of queries and associated commands. The aggregated data may define a broader corpus from which statistics and other data may be gleaned and be useful when determining expanded keywords and/or the classification of a user query. - Question
type classification determiner 314 evaluates user query 302 to determine whether it is a function type query, entity type query, or a clarification type query. A function type query establishes a new command. An example of a function type query is “Book a meeting for next Friday at 2:00 pm” or “Send a message to Bob”.
- A clarification type query is in relation to a current command and is responsive to a clarification question (output 304) posed by
dialogue driver 306. Clarification type queries only occur when the dialogue driver asks the user a clarification style question. For example, for a user query “Tell Bob I want a meeting”, an output 304 comprising a clarification question from dialogue driver 306 may be “Did you want to text or email Bob?”. - Function type queries are directed by question
type classification determiner 314 to answer ranking unit 316 for determining the new command, if possible. Question type classification determiner 314 directs entity type queries and clarification type queries to template system 318 for additional processing to obtain further meaning from the user query with a view to also initiating appropriate output. Template system 318 may also receive function type queries from answer ranking unit 316. Template system 318 may access template memory store 320 to define or refine a command and to define applicable output 304. -
Extraction pipeline 322 receives the user query and conversation features and extracts entities from the user query to build up the command and its associated data as described further herein below with reference to FIG. 7. -
Dialogue driver 306 provides output 304 for smartphone 102 also as described below. -
FIG. 4 illustrates a flow chart of a method 400 of question type classification for question type classification determiner 314 in accordance with an example embodiment. User query 302 is received. At 402, a determination is made whether a clarification type question was initiated (i.e. the question was previously posed (e.g. provided as output 304) to the smartphone via dialogue driver 306). If no (a question is not pending), operations continue at 404. If yes, operations continue at 406. - At
step 404, user query 302 is subjected to binary classification, such as via a support vector machine (SVM), for analysis. The SVM performs analysis of the user query to determine whether the query is an entity type query, related to the current function, or not (i.e. that it is a function type query). Function type queries are passed (408) to answer ranking unit 316. Entity type queries are passed (410) to template system 318. An SVM is configured using a set of input data or training examples where each is identified as belonging to one of the two query types. A training algorithm builds a model for assigning new queries to one of the two types. An SVM model is a representation of the examples as points in space (hyperplane), mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New queries are then mapped into that same space and predicted to belong to a category based on the side of the gap on which each respective query falls. When preparing the SVM and when passing new queries in for classification, it may be assistive to select and provide certain words, terms and metadata or other features related to the query. Using all words from a query may be problematic because common words may skew results incorrectly. Services, application programming interfaces or other means which perform entity extraction may be useful to extract entities such as people, places, dates, specific things, etc. For example, the following are features which may be determined and provided for the SVM:
- Presence of Keywords: TF-IDF scores for each domain are calculated for each word in the entire corpus. The words are then sorted and a selection from the words with the top 50 highest scores is taken. This is done in the same way as mentioned earlier in the patent.
- Question type Keywords: This represents the words that begin questions (how, where, when, why, what), followed by obvious keywords that relate to the domains (e.g. commands etc. related to functions provided by a user interface) such as call, email, text, message, book, etc.
- Presence of Key Entities: Places/Addresses, Person Names, Restaurant Types, Food Dish Names, Dates, etc. (This list is not complete. As new domains are added, new key entities may be added). These key entities may be retrieved using named entity extraction.
- Potential Features: The current action that the user is performing on the device. The previous domain the user requested.
- Presence of Regular Expressions: Whether the query matches a pattern known to be found in data for each domain (patterns may have been partly handcrafted and partly learned from data for each domain).
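A minimal sketch of the binary classification step described above, using scikit-learn's TF-IDF vectorizer and a linear SVM in place of the full hand-built feature set. The training queries and labels below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: 1 = entity type query (relates to the current
# function), 0 = function type query (establishes a new command).
queries = [
    "actually move that to 3 pm",        # entity
    "add james to the message",          # entity
    "change that to friday",             # entity
    "make it an hour later",             # entity
    "book a meeting for next friday",    # function
    "send a message to bob",             # function
    "get me a flight to calgary",        # function
    "what is the weather tomorrow",      # function
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Pipeline: words/bigrams -> TF-IDF features -> linear SVM decision boundary
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(queries, labels)

# Likely classified as entity type, given the overlapping "move that" phrasing
print(clf.predict(["move that to thursday"])[0])
```

In practice the patent's richer features (key entities, question-type keywords, regular-expression matches) would be appended to or substituted for the raw TF-IDF vector.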
- When a clarification question has been posed, at 406, a determination is made whether the
user query 302 contains keywords related to the clarification question posed. If yes, then the query is a clarification type query and the classification of the user query (and its association with the current command) is passed to template system 318 for further processing. If such keywords are not present, the user query may comprise a new function type query or an entity type query (such as where the entity/entities were not the focus of the clarification question posed). The user query is forwarded to step 404 via the no branch from 406. - Keyword identification may be performed in the context of
operations 406 to assist with the determination of whether the user query is an answer to the clarification question posed. Statistics may be defined for particular terms to identify their relative frequency of appearance in user queries associated with a particular category (e.g. each respective category may represent a specific command). FIG. 5 illustrates a flow chart of a method 500 of keyword identification such as may be useful for processing a user query to determine a set of keywords related to the command and/or entities in the query. A database of queries and associated categories may be defined. For example, in a smartphone communication context relevant to internal services, a subset of categories may represent smartphone functions/commands such as “email”, “telephone”, “book meeting”, “Short Message Service (SMS)/Text”, among others. In FIG. 5, the user queries grouped by associated categories are represented generically as Category “A” queries 502, Category “B” queries 504, Category “C” queries 506, and Category “D” queries 508. It is understood that more categories may exist in an actual implementation. - The relative frequency of a term in a category is comparatively determined in relation to the term's infrequency in the other categories as well. As per 510, term frequency-inverse document frequency (TF-IDF) word scoring is used to determine keywords for each category. A document is defined as the set of queries that have the same category (e.g. 508). The corpus (within query database 312) is the set of queries (502, 504, 506, etc.) that are not in the category where we are finding the keywords. A term (keyword) which is relatively unique to category “D” will occur less frequently in the corpus of category “A”, “B” and “C” queries. This database and associated statistics may be maintained (e.g. pre-calculated) so that the statistics are available for use in real-time when processing the user query.
A word ranking for words in the current user query may be determined (at 512) to identify unique words indicative of keyword status.
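The per-category scoring of FIG. 5 can be sketched as follows, treating each category's queries as the "document" and the other categories' queries as the corpus. The category names and queries are invented, and the exact TF-IDF weighting is an assumption:

```python
import math
from collections import Counter

def category_keywords(queries_by_cat, top_n=5):
    """Score words per category: term frequency within the category's
    queries, discounted by presence in the other categories (the corpus)."""
    keywords = {}
    n_cats = len(queries_by_cat)
    for cat, queries in queries_by_cat.items():
        tf = Counter(w for q in queries for w in q.lower().split())
        scores = {}
        for word, count in tf.items():
            # number of *other* categories whose queries contain the word
            df = sum(1 for other, qs in queries_by_cat.items()
                     if other != cat and any(word in q.lower().split() for q in qs))
            scores[word] = count * math.log(n_cats / (1 + df))
        keywords[cat] = [w for w, _ in sorted(scores.items(),
                                              key=lambda kv: -kv[1])][:top_n]
    return keywords

kw = category_keywords({
    "email":   ["email bob the report", "send an email to alice"],
    "sms":     ["text bob hello", "send a text to alice"],
    "meeting": ["book a meeting friday", "book a meeting with bob"],
})
print(kw["meeting"])  # "book"/"meeting" outrank words shared across categories
```

A word like "bob" that appears in every category scores zero, while a word unique to one category (e.g. "meeting") keeps its full term frequency — mirroring the category "D" versus "A"/"B"/"C" example above.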
- The user query may be analyzed for keywords from the category or categories associated with the user query. Given that a clarification type question elicited the current user query, one or more categories may have been previously associated with the current user query as determined from processing the prior user query which occasioned the clarification. These one or more categories relate to the current command. It is understood that because individual user queries may be vague and/or ambiguous, more than one category (command) may be associated with the user query, e.g. as respective candidate commands. As the dialogue develops, a specific command can be determined.
- The classification of the query type is useful to initiate a new command (via
answer ranking unit 316 and template system 318) or to further process a current command (via template system 318). - With reference to
FIG. 6 there is illustrated a flow chart of a method 600 of answer ranking. Answer ranking may be performed when a user query is identified as a function type query indicating a new command. Answer ranking may be performed to assist with the identification of the specific command to which the user query relates. In the present example embodiment, answer ranking method 600 performs four types of analyses (602, 604, 606 and 608) of user query 302 and combines the results of same (via two-layer neural network 610) to drive a rank of answers 612. Though four natural language processing techniques are used in the example embodiment for this analysis, fewer or additional techniques may be used and the respective results of same combined to drive a rank of answers. In some embodiments, a means other than a two-layer neural network may be used to combine such results. - In one of the analyses, user history is examined to define a previous query score (602). Keywords are extracted from the user query such as by TF-IDF techniques. Previous user queries and their respective associated commands form the corpus for comparative purposes.
- Keywords may be expanded with related words (e.g. synonyms) such as via WordNet™ expansion (WordNet is a registered trademark of Princeton University <http://wordnet.princeton.edu>).
- The extracted and expanded keywords may form the basis of a comparison or search applied against the query corpus and a relevance score calculated (e.g. retrieval and ranking functions) to rate search results. The search results (i.e. the respective associated command and the ranking score) are made available to operations of two-layer
neural network 610. - The ranking function applied at 602 may comprise a BM25 or similar ranking function (e.g. BM25-F taking into account document format, length, etc.). BM25 relies upon IDF statistics to determine relevance of keywords in a document set.
- In one of the analyses (random forest 604), the user query is applied to a set of decision trees where each decision tree assigns (determines) a command to which the user query relates. A rank (or mode) may be calculated to determine which command (or commands) results most frequently by the decision trees. Let N(c) represent the number decision trees that classify the user query as command c. R(c) is the score for class c calculated as N(c) divided by the sum of N(c) for all c's derived by the decision trees. The scores and associated candidate commands are made available to operations of two-layer
neural network 610. A two layer neural network (see 610 discussed below) may be trained in order to determine the probability that the query was relevant. From this a rank for each of the classes can be determined according to this probability. - In one of the analyses (multiclass Support Vector Machines 606), the query is applied to a set of SVMs to determine a command. In one embodiment, each SVM is a binary classifier configured to determine whether the user query is associated with a particular command or any of the other commands (i.e. a one-versus-all determination). In another embodiment, a SVM is configured for each pair of commands to determine whether the user query is associated with one of two particular commands (e.g. email vs. telephone) (i.e. a one-versus-one determination). It is noted that in a one-versus-one embodiment, SVMs may be configured for a pairing of particular commands to a null class.
- In a one-versus-all determination, if the SVM approach is the sole classifier, a winner takes all approach is often adopted, selecting the highest score from the SVMs. The SVMs require calibration to produce comparable scores. In the one-versus-one approach, a command selected most frequently by the set of SVMs is the candidate command if the SVM approach is the sole classifier. In this example embodiment where the SVM approach is one of four inputs, scores for each candidate command are provided for operations of two-layer
neural network 610. - In one of the analyses (naïve Bayes classifier 608), the user query is provided to a Bayes-theorem based classifier with strong independence assumptions to perform document classification. The naïve Bayes classifier determines a probability that a particular user query (set of features) belongs (e.g. is associated with) a particular class (i.e. command). The classifier may be trained using a training set of known queries and commands. It is assumed that words of a user query are independent. Frequency of appearance (count) of a word in a given class (command) may be used to determine a probability that a particular word is in a particular class. The score for a particular class is a multiplier of the score (probability) for each word in the query relative to the particular class. Care must be taken when a word never appears in a particular class to avoid multiplying by zero. A smoothing technique can be used to eliminate the effects of zero probabilities in the data.
- At two-layer
neural network 610, candidate commands and scores from each of the analyses (602, 604, 606 and 608) are available to a two layer neural network to drive a result, tying the four individual predictions (scores) for each class (command) together to define a single score for each command. More particularly, the scores from the classifiers are used as input nodes to a two layer neural network which represents a rank function. The set of classifier scores for a single class represents a single input vector. This vector is scored, via the neural network, according to its relevance to the user query. Here, a score of 1 is considered highly relevant to the user's query and a score of 0 is considered irrelevant. Each of the vectors for each category is scored via the rank function and sorted according to its score. Finally, the scores are normalized by dividing each of the scores by the maximum of the scores. -
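The combination step can be illustrated with a toy stand-in for the trained two-layer network — fixed weights rather than learned ones — followed by the max-normalization described above. The command names, scores and weights are all invented:

```python
def rank_commands(scores_by_command, weights, hidden=lambda x: max(0.0, x)):
    """Toy stand-in for the two-layer rank network: each command's four
    classifier scores form one input vector; a tiny fixed-weight network
    maps it to a single relevance score, then scores are max-normalized."""
    combined = {}
    for command, vec in scores_by_command.items():
        # layer 1: one ReLU unit per input; layer 2: weighted sum
        combined[command] = sum(w * hidden(x) for w, x in zip(weights, vec))
    top = max(combined.values())
    return {c: s / top for c, s in sorted(combined.items(), key=lambda kv: -kv[1])}

# One vector per command: (previous-query, random-forest, SVM, naive-Bayes)
scores = {"email":        (0.82, 0.60, 0.71, 0.55),
          "telephone":    (0.40, 0.25, 0.33, 0.30),
          "book_meeting": (0.15, 0.10, 0.22, 0.12)}
ranked = rank_commands(scores, weights=(0.3, 0.2, 0.3, 0.2))
print(ranked)  # "email" normalizes to 1.0 and sorts first
```

A real implementation would learn the weights from labeled query/command pairs; only the shape of the computation — one vector in, one relevance score out, then divide by the maximum — is taken from the passage above.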
Template system 318 thus has available from answer ranking unit 316 data identifying the user query as a function type query and candidate commands and rankings for the new function. The template system may initiate processing to identify a particular command. From question type classification unit 314, template system 318 has data identifying the user query as an entity type or a clarification type. Thus the template system may continue previous processing to derive a particular command. Template system 318 may comprise a state machine having states: new function, continue prior dialogue, undo last action/negation (e.g. a command may be to stop or change). - Each command may be associated with a respective template in
template memory store 320. Templates may be used to build up a command and its parameters (data). Extraction pipeline 322 may analyze the user query for entities and relationships among entities in the current user query and prior related user queries, working with the template system 318 to populate the template. Anaphora techniques may be used to relate specific words in a user query with entities in a prior user query. For example, given the user queries “I'd like a ticket from New York” followed by “Change that to San Jose”, the word “that” will be associated with both “ticket” and “New York”, and New York will be an entity initially defining the from location for the ticket in the template. Extracted template entities are provided back to the template system 318. Control is passed back to the dialogue manager, for example, to produce dialogue. - In one embodiment, Hobbs' algorithm is used to determine the entity(ies) in a previous user query that relate to pronouns in the current user query. Example processing may involve determining whether one or more pronouns are present in the current user query. If one or more pronouns are present in the current user query, then Hobbs' algorithm may be used for each pronoun to determine the words in previous user queries that are referenced by each pronoun. A second user query may be created by substituting the pronoun with the word referenced by the pronoun, and named entity recognition may be performed on the second user query.
- By way of an exemplary user interaction, say a user previously uttered the user query “Find me a flight from Calgary” and subsequently says “Change that to New York” which is the current user query. The current user query may be analyzed to determine if a pronoun is present; in this example, the pronoun “that” is identified. Next, Hobbs' algorithm may be employed to determine which word(s) in the previous user queries are likely referenced by the pronoun “that”. In the exemplary interaction, it is determined that the word “that” likely refers to the city Calgary. In one embodiment, a second user query is created by substituting the pronoun with the word the pronoun likely references which results in a second user query of “Change Calgary to New York”. Entity extraction may then be performed on the second user query as described herein in order to perform the command intended by the user.
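The substitution step — building the second user query once each pronoun's antecedent is known — can be sketched as below. The antecedents are supplied directly here; in the embodiment above they come from Hobbs' algorithm:

```python
def resolve_and_rewrite(current_query, antecedent_by_pronoun):
    """Rewrite the current query by substituting each pronoun with the
    antecedent found in earlier queries, producing the 'second user query'
    on which named entity recognition is then performed."""
    words = current_query.split()
    rewritten = [antecedent_by_pronoun.get(w.lower(), w) for w in words]
    return " ".join(rewritten)

# "Find me a flight from Calgary" ... "Change that to New York"
second_query = resolve_and_rewrite("Change that to New York", {"that": "Calgary"})
print(second_query)  # Change Calgary to New York
```

As the passage notes, an alternative embodiment skips the rewritten query entirely and binds the pronoun straight to a template entity such as departure_city.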
- In one embodiment, once Hobbs' algorithm makes the association between “that” and Calgary, further processing is performed to make the association between “that” and an entity such as departure_city. The user query “Change that to New York” may then be interpreted as meaning change the entity named departure_city to New York which is performed by the system without creating a second user query and performing entity extraction on the second user query. In such an embodiment, the system assigns New York as the new departure_city and sends the new entity to the
template system 318. - In more detail and with reference to
FIG. 7, a genetic algorithm 704 working on a general feature set 706 determined from a labeled corpus 708 generates (e.g. off-line, in advance of its use in a particular classification instance) optimized feature sets 702 for respective specific classes (e.g. types of functions). Each class may have its own extraction pipeline 322 for extracting entities for the specific class. Each pipeline instance 322 receives the user query 302 and its conversation features 710. Examples of conversational features 710 include question ID, results of pronoun resolution with previous sentences, and other related information. - A
feature extraction module 712 expands the features associated with the user query 302 and conversation features 710. Examples include date lists, number lists, city lists, time lists, name lists, among others. -
- In the illustrated embodiment, two layers of CRF are employed. A first layer determines general entities (e.g. an entity extraction CRF 714). For example, in a travel booking user interface, general entities may include date, place, time. A second layer determines more specific template entities (e.g. an template filler extraction CRF 716) such as destination, departure location, departure date to fill templates of the
template system 318. In some embodiments, a single CRF layer may be employed.Template system 318 may store (e.g. to template memory store 320) the filled or partially filled template for a particular command as user queries are processed. In embodiments in which two layers of CRF are employed, the first CRF may be used to determine general entities as described above, and these entities may be used as features in the second CRF which then determines more specific entities. - A genetic algorithm assists to make the extraction pipeline adaptable to new domains, defining new optimized feature sets as directed.
-
Dialogue driver 306 maintains conversation/system state and generates responses (output 304) based on the state of the conversation. Dialogue driver 306 may be configured as a finite state machine. Markov decision process (MDP) or partially observable MDP (POMDP) techniques may be used for determining actions of the dialogue driver 306. States may comprise entity, clarification, speech error, NLP error, unknown request, informative response. - Clarification type questions may be generated. Each class has a predefined descriptive.
Dialogue driver 306 generates a question providing specific alternatives among the classes, e.g. “Did you want to <class 1>, <class 2>, <class 3>?” For a user query “Tell Bob I want a meeting Thursday”, a question in response is “Did you want to text, email or book a meeting?” The dialogue driver passes the desired command and extracted entities to the delegate service 108, for example, to invoke a particular function. -
FIG. 8 illustrates a general overview flow of selected operations (methods) 800 of capturing clarification questions/dialog within feature sets according to one example embodiment. The operations 800 may be used to increase the accuracy of the service of FIGS. 1 and 2 by incorporating clarification questions and/or user queries responsive to clarification questions into a feature set used to extract entities. - In some embodiments, a feature set is created for each general domain of knowledge. For example, the calendar domain may have a feature set, the news domain may have a feature set, the knowledge domain may have a feature set, and so forth. Each feature set may be created and fine-tuned using one of several techniques, for example, by using one or more genetic algorithms, examples of which are described herein. A given feature set may include one or more elements that represent whether a clarification question was initiated by the system and/or whether a given user query was responsive to a clarification question posed. A particular feature may be present in a feature set for each clarification question/dialog statement that may be initiated by the system and presented to the user. For example, if a database of the system contains 1000 possible clarification questions, then 1000 features will be present in the feature set, each of which is associated with a particular clarification question. When a particular clarification question is posed, the feature associated with the particular clarification question may be set to ‘1’ and all other features related to the other clarification questions (i.e. the clarification questions not posed to the user) may be assigned a ‘0’ or NULL value.
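The per-question features described above amount to a one-hot encoding over the clarification-question repository. The question identifiers below are invented for illustration:

```python
def clarification_features(posed_question_id, question_ids):
    """One feature per clarification question in the repository: 1 for the
    question just posed (if any), 0 for all others."""
    return [1 if qid == posed_question_id else 0 for qid in question_ids]

# Hypothetical repository of clarification-question IDs
QUESTION_IDS = ["which_city_depart", "which_day_leave", "text_or_email"]

# The system just asked "Which city would you like to leave from?"
vec = clarification_features("which_city_depart", QUESTION_IDS)
print(vec)  # [1, 0, 0]

# No clarification question pending: all zeros
print(clarification_features(None, QUESTION_IDS))  # [0, 0, 0]
```

With a 1000-question repository this yields the 1000-element block described above; the block is then concatenated with the query's other features before entity extraction.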
In one embodiment, the system includes a plurality of possible clarification questions that may be initiated and presented to the user on smartphone 102 in order to elicit entity information from the user. The particular clarification question posed to a user depends at least in part on the entities that have not been provided by the user's query 302 or extracted by the system. In one approach, the system maintains a linear mapping between all possible filled or unfilled entity states and predefined questions related to each entity state. In an exemplary interaction, a user utters the user query "Get me a flight to Calgary leaving on Friday". The system may classify the sentence in accordance with FIGS. 4-6 and extract entities ("Calgary" and "Friday") according to FIG. 7. The system may further determine that a departure city is required in order to perform the command desired by the user. Several techniques may be used to determine the departure city. In some embodiments, the system may use a default rule that automatically selects the closest city with an airport. In other embodiments, the system initiates a clarification question to elicit the departure city from the user, for example "Which city would you like to leave from?" In another exemplary interaction, the user utters the user query "Get me a flight from Toronto to Calgary". In one embodiment, the system may process the user query 302 in accordance with FIGS. 3-7 and determine that a departure date and possibly a return date are required in order to execute the desired command (i.e. find a flight). Continuing with the example, the system may present a clarification question to the user on smartphone 102 to elicit the departure date, such as "What day would you like to leave?"

Referring to
FIG. 8, a user query 302 is received and the output of operations 800 is provided to the template system as described herein. At 802, answer ranking is performed on the user query to identify the command desired by the user. A determination is made at step 804 whether a clarification type question was recently initiated by the service and presented to the smartphone via dialogue driver 306. If a clarification question was recently initiated, then at 806A a feature vector is created that represents the user query 302 and other relevant information. Other relevant information may include the clarification question that was initiated by the system. Given that the system may include a repository of clarification questions in a database, each of which is designed to elicit specific entity information in relation to a particular command, the feature vector created for a given user query will generally only indicate the particular clarification question that was initiated (although multiple clarifications may be concatenated into a single dialog statement in other embodiments, in which case the multiple clarification questions will be captured in the feature set). The other clarification questions (i.e. the clarification questions that were not initiated by the system) will be represented in the feature vector as not being relevant. If a clarification question was not initiated by the system recently, the feature vector created at step 806B will not indicate any clarification questions as being relevant to the user query 302 (i.e. the features representing clarification questions may be set to 0 or NULL). At 808, the feature vector created at step 806A or 806B is used to extract entities from the user query. At step 810, a determination is made about which clarification questions/dialog will be displayed to the user. This determination may involve cross-referencing the entities already filled in by template system 318 with the entities required by a particular command.
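The control flow of operations 800 can be sketched end to end. Everything below is a toy stand-in: `rank_answers` and `extract_entities` replace the trained classifier and feature-vector-based extractor, and the entity names, question strings, and return conventions are assumptions for illustration only.

```python
# A minimal, self-contained sketch of the FIG. 8 flow (steps 802-814),
# using hard-coded stand-ins for the trained components.

def rank_answers(query):
    return "find_flight"                      # 802: identify the command

def extract_entities(query, recent_question):
    # 804-808: the feature vector built at 806A/806B would flag which
    # clarification question (if any) was just posed; we fake the effect.
    found = {}
    if "Calgary" in query:
        found["destination_city"] = "Calgary"
    if recent_question == "What day would you like to leave?":
        found["departure_date"] = query       # the reply answers the question
    return found

def handle_user_query(query, recent_question, required, filled, questions):
    command = rank_answers(query)                            # 802
    filled.update(extract_entities(query, recent_question))  # 804-808
    missing = [e for e in required if e not in filled]       # 810
    if missing:
        return None, questions[missing[0]]                   # 812: clarify
    return command, None                                     # 814: execute

required = ("destination_city", "departure_date")
questions = {"destination_city": "Where would you like to fly to?",
             "departure_date": "What day would you like to leave?"}
filled = {}
# First turn asks for the missing date; the follow-up completes the command.
cmd, q = handle_user_query("Get me a flight to Calgary", None,
                           required, filled, questions)
cmd2, q2 = handle_user_query("Friday", q, required, filled, questions)
```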
In some embodiments, if all the entities have been elicited and/or assumed by the system, a clarification question may be presented confirming the command that is about to be performed. The command is performed if the user confirms the instructions.

If all the entities required by the command have been identified, then the flow of operations is transferred to the template system 318 at step 814 so that the command may be performed. A clarification question may also be presented at 812 to confirm the request. If all of the entities for a particular command are not known, however, then the system will identify the proper clarification question to present to the user to elicit the unknown entities and will present the selected clarification question to the user at step 812.

In some embodiments, a clarification question and/or dialog is presented to the user after every
user query 302 is received. In such implementations, the dialog is selected at 810 and presented to the user at 812. Any new entities that have been extracted at 808 are also provided to the template system at 814.

Referring next to
FIG. 9, a general overview flow of selected operations (methods) 900 is illustrated for defining optimal feature sets (i.e. feature vector(s)) using a genetic algorithm according to one embodiment. At step 902, one or more initial feature sets are defined as the starting point of the genetic algorithm. The initial feature set(s) may be generated randomly or may be selected by an expert, at least partly based on the subject matter of the domain to which the feature set is directed (for example, weather). At step 904, a set of random permutations of the initial feature set(s) is generated. The number of random permutations generated is up to an administrator, but may also be preset depending on the number of features available in a given feature set. For example, if a given feature set has hundreds of thousands of features available, then it may be desirable to run the genetic algorithm with thousands of random permutations being generated at step 904.

At step 906, each of the random permutations of feature sets is tested against a test dataset that contains test user queries. To perform the testing of 906, each user query in the test dataset is applied to each random permutation in order to evaluate the performance (i.e. accuracy, speed, etc.) of each random permutation feature set. At 908, a performance measure is calculated for each random permutation. In some embodiments, the performance measure is calculated using a function that includes an "f-measure+log(n)" relationship so that random permutations having a combination of accuracy and speed are favored by the system, although other performance measures may be used at step 908.
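A round of this selection can be sketched under stated assumptions: a feature set is modeled as a 0/1 mask over candidate features, a "random permutation" toggles one randomly chosen feature, and the "f-measure+log(n)" relationship (whose exact form the text leaves open) is approximated by rewarding accuracy while penalizing large, slower feature sets. `evaluate_f_measure` is a stand-in for running the test queries; every name here is illustrative, not from the patent.

```python
import math
import random

def permute(feature_set, rng=random):
    """904: generate a variant by toggling one randomly chosen feature."""
    variant = list(feature_set)
    i = rng.randrange(len(variant))
    variant[i] = 1 - variant[i]
    return variant

def score(feature_set, evaluate_f_measure):
    """908: assumed reading of 'f-measure + log(n)' — favor accurate
    feature sets while penalizing large (slow) ones."""
    n = max(1, sum(feature_set))          # assumption: n = active features
    return evaluate_f_measure(feature_set) - math.log(n)

def genetic_selection(initial, evaluate_f_measure, x=1000, rounds=1):
    """Run `rounds` rounds of x permutations each (steps 904-908); the
    best-scoring feature set of each round seeds the next (914/902)."""
    best = initial
    for _ in range(rounds):
        candidates = [best] + [permute(best) for _ in range(x)]   # 904
        best = max(candidates,
                   key=lambda fs: score(fs, evaluate_f_measure))  # 906-908
    return best
```

With the administrator example described below (X=1000, Y=10,000), the call would be `genetic_selection(initial, f, x=1000, rounds=10000)`.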
Step 910 is an optional step in which the performance measure of each random permutation is compared against a predetermined threshold. If one or more of the random permutations has a performance measure greater than the predetermined threshold, then the random permutation with the most favorable performance measure may be selected at that point in the application of the genetic algorithm. If none of the random permutations has a performance measure greater than the predetermined threshold, then a subset of the random permutations with the most favorable performance measures (or all of the random permutations) may be set as the initial feature sets at 914 and the genetic algorithm may be run again beginning at step 902.

The flow of operations shown in
FIG. 9 is one round of genetic selection. In some embodiments, the operations of FIG. 9 may be run several times to increase the performance of the final feature set selected at step 912.

In one embodiment, the process of
FIG. 9 begins with an administrator selecting the initial feature vector as well as tuning parameters X and Y. X refers to the number of permutations to be generated at 904, and Y refers to the number of times the genetic algorithm is run (i.e. the number of rounds of genetic selection to be performed). For example, an administrator may set X=1000 and Y=10,000, meaning that 1000 random permutations will be generated from the initial feature vector(s) and the genetic algorithm will be run 10,000 times. In such an embodiment, decision step 910 is not executed; rather, the algorithm is run 10,000 times regardless of the calculated performance measures.

The scope of the claims should not be limited by the specific embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
Claims (26)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/233,640 US10387410B2 (en) | 2011-07-19 | 2012-07-19 | Method and system of classification in a natural language user interface |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2,747,153 | 2011-07-19 | ||
CA2747153 | 2011-07-19 | ||
CA2747153A CA2747153A1 (en) | 2011-07-19 | 2011-07-19 | Natural language processing dialog system for obtaining goods, services or information |
US201261596407P | 2012-02-08 | 2012-02-08 | |
PCT/CA2012/000685 WO2013010262A1 (en) | 2011-07-19 | 2012-07-19 | Method and system of classification in a natural language user interface |
US14/233,640 US10387410B2 (en) | 2011-07-19 | 2012-07-19 | Method and system of classification in a natural language user interface |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2012/000685 A-371-Of-International WO2013010262A1 (en) | 2011-07-19 | 2012-07-19 | Method and system of classification in a natural language user interface |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/410,641 Continuation US12072877B2 (en) | 2011-07-19 | 2019-05-13 | Method and system of classification in a natural language user interface |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150039292A1 true US20150039292A1 (en) | 2015-02-05 |
US10387410B2 US10387410B2 (en) | 2019-08-20 |
Family
ID=47553784
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/233,640 Active 2034-03-02 US10387410B2 (en) | 2011-07-19 | 2012-07-19 | Method and system of classification in a natural language user interface |
US16/410,641 Active US12072877B2 (en) | 2011-07-19 | 2019-05-13 | Method and system of classification in a natural language user interface |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/410,641 Active US12072877B2 (en) | 2011-07-19 | 2019-05-13 | Method and system of classification in a natural language user interface |
Country Status (4)
Country | Link |
---|---|
US (2) | US10387410B2 (en) |
EP (1) | EP2734938A4 (en) |
CA (1) | CA2747153A1 (en) |
WO (1) | WO2013010262A1 (en) |
US20220130394A1 (en) * | 2018-09-04 | 2022-04-28 | Newton Howard | Emotion-based voice controlled device |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11347751B2 (en) * | 2016-12-07 | 2022-05-31 | MyFitnessPal, Inc. | System and method for associating user-entered text to database entries |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11373132B1 (en) * | 2022-01-25 | 2022-06-28 | Accenture Global Solutions Limited | Feature selection system |
US11380304B1 (en) * | 2019-03-25 | 2022-07-05 | Amazon Technologies, Inc. | Generation of alternate representions of utterances |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US20220284889A1 (en) * | 2021-03-05 | 2022-09-08 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
US20220292087A1 (en) * | 2019-08-30 | 2022-09-15 | Servicenow Canada Inc. | Decision support system for data retrieval |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11508372B1 (en) * | 2020-06-18 | 2022-11-22 | Amazon Technologies, Inc. | Natural language input routing |
US11514903B2 (en) * | 2017-08-04 | 2022-11-29 | Sony Corporation | Information processing device and information processing method |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11699435B2 (en) * | 2019-09-18 | 2023-07-11 | Wizergos Software Solutions Private Limited | System and method to interpret natural language requests and handle natural language responses in conversation |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11775656B2 (en) * | 2015-05-01 | 2023-10-03 | Micro Focus Llc | Secure multi-party information retrieval |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11854535B1 (en) * | 2019-03-26 | 2023-12-26 | Amazon Technologies, Inc. | Personalization for speech processing applications |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US20240185846A1 (en) * | 2021-06-29 | 2024-06-06 | Amazon Technologies, Inc. | Multi-session context |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9229974B1 (en) | 2012-06-01 | 2016-01-05 | Google Inc. | Classifying queries |
US9122681B2 (en) | 2013-03-15 | 2015-09-01 | Gordon Villy Cormack | Systems and methods for classifying electronic information using advanced active learning techniques |
EP2851808A3 (en) * | 2013-09-19 | 2015-04-15 | Maluuba Inc. | Hybrid natural language processor |
US9965492B1 (en) | 2014-03-12 | 2018-05-08 | Google Llc | Using location aliases |
US10361924B2 (en) | 2014-04-04 | 2019-07-23 | International Business Machines Corporation | Forecasting computer resources demand |
US10043194B2 (en) | 2014-04-04 | 2018-08-07 | International Business Machines Corporation | Network demand forecasting |
US10439891B2 (en) | 2014-04-08 | 2019-10-08 | International Business Machines Corporation | Hyperparameter and network topology selection in network demand forecasting |
US9385934B2 (en) | 2014-04-08 | 2016-07-05 | International Business Machines Corporation | Dynamic network monitoring |
CN103914548B (en) * | 2014-04-10 | 2018-01-09 | 北京百度网讯科技有限公司 | Information search method and device |
US10713574B2 (en) | 2014-04-10 | 2020-07-14 | International Business Machines Corporation | Cognitive distributed network |
WO2015192212A1 (en) * | 2014-06-17 | 2015-12-23 | Maluuba Inc. | Server and method for classifying entities of a query |
JP6051366B2 (en) * | 2014-12-18 | 2016-12-27 | バイドゥ ネットコム サイエンス アンド テクノロジー(ペキン) カンパニー リミテッド | Information retrieval method and device |
US10242001B2 (en) | 2015-06-19 | 2019-03-26 | Gordon V. Cormack | Systems and methods for conducting and terminating a technology-assisted review |
US10949748B2 (en) * | 2016-05-13 | 2021-03-16 | Microsoft Technology Licensing, Llc | Deep learning of bots through examples and experience |
US10224031B2 (en) | 2016-12-30 | 2019-03-05 | Google Llc | Generating and transmitting invocation request to appropriate third-party agent |
WO2018223331A1 (en) * | 2017-06-08 | 2018-12-13 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for text attribute determination using conditional random field model |
KR102389041B1 (en) * | 2017-08-11 | 2022-04-21 | 엘지전자 주식회사 | Mobile terminal and method using machine learning for controlling mobile terminal |
CN111615696B (en) * | 2017-11-18 | 2024-07-02 | 科奇股份有限公司 | Interactive representation of content for relevance detection and review |
CN108376544B (en) * | 2018-03-27 | 2021-10-15 | 京东方科技集团股份有限公司 | Information processing method, device, equipment and computer readable storage medium |
US20200074321A1 (en) * | 2018-09-04 | 2020-03-05 | Rovi Guides, Inc. | Methods and systems for using machine-learning extracts and semantic graphs to create structured data to drive search, recommendation, and discovery |
WO2021061635A1 (en) * | 2019-09-24 | 2021-04-01 | RELX Inc. | Transparent iterative multi-concept semantic search |
WO2021146388A1 (en) * | 2020-01-14 | 2021-07-22 | RELX Inc. | Systems and methods for providing answers to a query |
DE102021109265A1 (en) | 2020-08-31 | 2022-03-03 | Cognigy Gmbh | Procedure for optimization |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010053968A1 (en) * | 2000-01-10 | 2001-12-20 | Iaskweb, Inc. | System, method, and computer program product for responding to natural language queries |
US20070244853A1 (en) * | 2002-06-14 | 2007-10-18 | Stacey Schneider | Method and computer for responding to a query |
US20090112605A1 (en) * | 2007-10-26 | 2009-04-30 | Rakesh Gupta | Free-speech command classification for car navigation system |
US20090112604A1 (en) * | 2007-10-24 | 2009-04-30 | Scholz Karl W | Automatically Generating Interactive Learning Applications |
US20090204409A1 (en) * | 2008-02-13 | 2009-08-13 | Sensory, Incorporated | Voice Interface and Search for Electronic Devices including Bluetooth Headsets and Remote Systems |
US20100082510A1 (en) * | 2008-10-01 | 2010-04-01 | Microsoft Corporation | Training a search result ranker with automatically-generated samples |
US20100094854A1 (en) * | 2008-10-14 | 2010-04-15 | Omid Rouhani-Kalleh | System for automatically categorizing queries |
US20110246076A1 (en) * | 2004-05-28 | 2011-10-06 | Agency For Science, Technology And Research | Method and System for Word Sequence Processing |
US20120254143A1 (en) * | 2011-03-31 | 2012-10-04 | Infosys Technologies Ltd. | Natural language querying with cascaded conditional random fields |
US8812509B1 (en) * | 2007-05-18 | 2014-08-19 | Google Inc. | Inferring attributes from search queries |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7036128B1 (en) * | 1999-01-05 | 2006-04-25 | Sri International Offices | Using a community of distributed electronic agents to support a highly mobile, ambient computing environment |
US6665666B1 (en) * | 1999-10-26 | 2003-12-16 | International Business Machines Corporation | System, method and program product for answering questions using a search engine |
US7392185B2 (en) | 1999-11-12 | 2008-06-24 | Phoenix Solutions, Inc. | Speech based learning/training system using semantic decoding |
US6999963B1 (en) | 2000-05-03 | 2006-02-14 | Microsoft Corporation | Methods, apparatus, and data structures for annotating a database design schema and/or indexing annotations |
US6785651B1 (en) * | 2000-09-14 | 2004-08-31 | Microsoft Corporation | Method and apparatus for performing plan-based dialog |
US7158935B1 (en) * | 2000-11-15 | 2007-01-02 | At&T Corp. | Method and system for predicting problematic situations in a automated dialog |
US7246062B2 (en) | 2002-04-08 | 2007-07-17 | Sbc Technology Resources, Inc. | Method and system for voice recognition menu navigation with error prevention and recovery |
US20030216923A1 (en) * | 2002-05-15 | 2003-11-20 | Gilmore Jeffrey A. | Dynamic content generation for voice messages |
US20040148170A1 (en) * | 2003-01-23 | 2004-07-29 | Alejandro Acero | Statistical classifiers for spoken language understanding and command/control scenarios |
US20050165607A1 (en) * | 2004-01-22 | 2005-07-28 | At&T Corp. | System and method to disambiguate and clarify user intention in a spoken dialog system |
US7747601B2 (en) * | 2006-08-14 | 2010-06-29 | Inquira, Inc. | Method and apparatus for identifying and classifying query intent |
KR100655491B1 (en) * | 2004-12-21 | 2006-12-11 | 한국전자통신연구원 | Two stage utterance verification method and device of speech recognition system |
US8150872B2 (en) * | 2005-01-24 | 2012-04-03 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US7437297B2 (en) | 2005-01-27 | 2008-10-14 | International Business Machines Corporation | Systems and methods for predicting consequences of misinterpretation of user commands in automated systems |
US8204751B1 (en) * | 2006-03-03 | 2012-06-19 | At&T Intellectual Property Ii, L.P. | Relevance recognition for a human machine dialog system contextual question answering based on a normalization of the length of the user input |
US8219407B1 (en) * | 2007-12-27 | 2012-07-10 | Great Northern Research, LLC | Method for processing the output of a speech recognizer |
US8812493B2 (en) * | 2008-04-11 | 2014-08-19 | Microsoft Corporation | Search results ranking using editing distance and document information |
US8825472B2 (en) * | 2010-05-28 | 2014-09-02 | Yahoo! Inc. | Automated message attachment labeling using feature selection in message content |
2011
- 2011-07-19 CA CA2747153A patent/CA2747153A1/en not_active Abandoned

2012
- 2012-07-19 US US14/233,640 patent/US10387410B2/en active Active
- 2012-07-19 EP EP12814991.1A patent/EP2734938A4/en not_active Withdrawn
- 2012-07-19 WO PCT/CA2012/000685 patent/WO2013010262A1/en active Application Filing

2019
- 2019-05-13 US US16/410,641 patent/US12072877B2/en active Active
Non-Patent Citations (1)
Title |
---|
Asif Ekbal, Sriparna Saha, "Weighted Vote-Based Classifier Ensemble for Named Entity Recognition: A Genetic Algorithm-Based Approach," Vol. 10, Issue 2, Article No. 9, June 2011, ACM, New York, NY, USA. Pertinent pages: all. URL: http://delivery.acm.org/10.1145/1970000/1967296/a9-ekbal.pdf * |
Cited By (394)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US11928604B2 (en) | 2005-09-08 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US10904178B1 (en) | 2010-07-09 | 2021-01-26 | Gummarus, Llc | Methods, systems, and computer program products for processing a request for a resource in a communication |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US9575963B2 (en) * | 2012-04-20 | 2017-02-21 | Maluuba Inc. | Conversational agent |
US20150066479A1 (en) * | 2012-04-20 | 2015-03-05 | Maluuba Inc. | Conversational agent |
US20170228367A1 (en) * | 2012-04-20 | 2017-08-10 | Maluuba Inc. | Conversational agent |
US9971766B2 (en) * | 2012-04-20 | 2018-05-15 | Maluuba Inc. | Conversational agent |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US20140039877A1 (en) * | 2012-08-02 | 2014-02-06 | American Express Travel Related Services Company, Inc. | Systems and Methods for Semantic Information Retrieval |
US9805024B2 (en) | 2012-08-02 | 2017-10-31 | American Express Travel Related Services Company, Inc. | Anaphora resolution for semantic tagging |
US9280520B2 (en) * | 2012-08-02 | 2016-03-08 | American Express Travel Related Services Company, Inc. | Systems and methods for semantic information retrieval |
US9424250B2 (en) | 2012-08-02 | 2016-08-23 | American Express Travel Related Services Company, Inc. | Systems and methods for semantic information retrieval |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10838588B1 (en) | 2012-10-18 | 2020-11-17 | Gummarus, Llc | Methods, and computer program products for constraining a communication exchange |
US10841258B1 (en) | 2012-10-18 | 2020-11-17 | Gummarus, Llc | Methods and computer program products for browsing using a communicant identifier |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US12009007B2 (en) | 2013-02-07 | 2024-06-11 | Apple Inc. | Voice trigger for a digital assistant |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US20180336188A1 (en) * | 2013-02-22 | 2018-11-22 | The Directv Group, Inc. | Method And System For Generating Dynamic Text Responses For Display After A Search |
US10878200B2 (en) * | 2013-02-22 | 2020-12-29 | The Directv Group, Inc. | Method and system for generating dynamic text responses for display after a search |
US11741314B2 (en) | 2013-02-22 | 2023-08-29 | Directv, Llc | Method and system for generating dynamic text responses for display after a search |
US12136417B2 (en) | 2013-03-11 | 2024-11-05 | Amazon Technologies, Inc. | Domain and intent name feature identification and processing |
US10629186B1 (en) * | 2013-03-11 | 2020-04-21 | Amazon Technologies, Inc. | Domain and intent name feature identification and processing |
US10083226B1 (en) | 2013-03-14 | 2018-09-25 | Google Llc | Using web ranking to resolve anaphora |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US9466294B1 (en) * | 2013-05-21 | 2016-10-11 | Amazon Technologies, Inc. | Dialog management system |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12073147B2 (en) | 2013-06-09 | 2024-08-27 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US20150088511A1 (en) * | 2013-09-24 | 2015-03-26 | Verizon Patent And Licensing Inc. | Named-entity based speech recognition |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US20170011742A1 (en) * | 2014-03-31 | 2017-01-12 | Mitsubishi Electric Corporation | Device and method for understanding user intent |
US10037758B2 (en) * | 2014-03-31 | 2018-07-31 | Mitsubishi Electric Corporation | Device and method for understanding user intent |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US9966065B2 (en) * | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US20150348551A1 (en) * | 2014-05-30 | 2015-12-03 | Apple Inc. | Multi-command single utterance input method |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US20170185582A1 (en) * | 2014-09-14 | 2017-06-29 | Google Inc. | Platform for creating customizable dialog system engines |
US10546067B2 (en) * | 2014-09-14 | 2020-01-28 | Google Llc | Platform for creating customizable dialog system engines |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US20160155442A1 (en) * | 2014-11-28 | 2016-06-02 | Microsoft Technology Licensing, Llc | Extending digital personal assistant action providers |
US10192549B2 (en) * | 2014-11-28 | 2019-01-29 | Microsoft Technology Licensing, Llc | Extending digital personal assistant action providers |
US9772816B1 (en) * | 2014-12-22 | 2017-09-26 | Google Inc. | Transcription and tagging system |
US10540387B2 (en) * | 2014-12-23 | 2020-01-21 | Rovi Guides, Inc. | Systems and methods for determining whether a negation statement applies to a current or past query |
US11386268B2 (en) | 2014-12-30 | 2022-07-12 | Microsoft Technology Licensing, Llc | Discriminating ambiguous expressions to enhance user experience |
US9836452B2 (en) * | 2014-12-30 | 2017-12-05 | Microsoft Technology Licensing, Llc | Discriminating ambiguous expressions to enhance user experience |
US20160188565A1 (en) * | 2014-12-30 | 2016-06-30 | Microsoft Technology Licensing , LLC | Discriminating ambiguous expressions to enhance user experience |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US11934372B2 (en) | 2015-04-27 | 2024-03-19 | Rovi Guides, Inc. | Systems and methods for updating a knowledge graph through user input |
US10078651B2 (en) | 2015-04-27 | 2018-09-18 | Rovi Guides, Inc. | Systems and methods for updating a knowledge graph through user input |
US11561955B2 (en) | 2015-04-27 | 2023-01-24 | Rovi Guides, Inc. | Systems and methods for updating a knowledge graph through user input |
US11775656B2 (en) * | 2015-05-01 | 2023-10-03 | Micro Focus Llc | Secure multi-party information retrieval |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11170415B2 (en) * | 2015-05-27 | 2021-11-09 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US20230153876A1 (en) * | 2015-05-27 | 2023-05-18 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11769184B2 (en) * | 2015-05-27 | 2023-09-26 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US20240013269A1 (en) * | 2015-05-27 | 2024-01-11 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US11551273B2 (en) * | 2015-05-27 | 2023-01-10 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US20190279264A1 (en) * | 2015-05-27 | 2019-09-12 | Google Llc | Enhancing functionalities of virtual assistants and dialog systems via plugin marketplace |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US9904916B2 (en) | 2015-07-01 | 2018-02-27 | Klarna Ab | Incremental login and authentication to user portal without username/password |
US9886686B2 (en) * | 2015-07-01 | 2018-02-06 | Klarna Ab | Method for using supervised model to identify user |
US20170004136A1 (en) * | 2015-07-01 | 2017-01-05 | Klarna Ab | Method for using supervised model to identify user |
US11461751B2 (en) | 2015-07-01 | 2022-10-04 | Klarna Bank Ab | Method for using supervised model to identify user |
US10417621B2 (en) | 2015-07-01 | 2019-09-17 | Klarna Ab | Method for using supervised model to configure user interface presentation |
US9355155B1 (en) * | 2015-07-01 | 2016-05-31 | Klarna Ab | Method for using supervised model to identify user |
US10607199B2 (en) | 2015-07-01 | 2020-03-31 | Klarna Bank Ab | Method for using supervised model to identify user |
US10387882B2 (en) | 2015-07-01 | 2019-08-20 | Klarna Ab | Method for using supervised model with physical store |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US20170091318A1 (en) * | 2015-09-29 | 2017-03-30 | Kabushiki Kaisha Toshiba | Apparatus and method for extracting keywords from a single document |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US11068954B2 (en) * | 2015-11-20 | 2021-07-20 | Voicemonk Inc | System for virtual agents to help customers and businesses |
US11995698B2 (en) * | 2015-11-20 | 2024-05-28 | Voicemonk, Inc. | System for virtual agents to help customers and businesses |
US20240311888A1 (en) * | 2015-11-20 | 2024-09-19 | Voicemonk, Inc. | System for virtual agents to help customers and businesses |
US10061762B2 (en) * | 2015-11-24 | 2018-08-28 | Xiaomi Inc. | Method and device for identifying information, and computer-readable storage medium |
US20170147553A1 (en) * | 2015-11-24 | 2017-05-25 | Xiaomi Inc. | Method and device for identifying information, and computer-readable storage medium |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10430407B2 (en) | 2015-12-02 | 2019-10-01 | International Business Machines Corporation | Generating structured queries from natural language text |
US11068480B2 (en) | 2015-12-02 | 2021-07-20 | International Business Machines Corporation | Generating structured queries from natural language text |
US10262062B2 (en) * | 2015-12-21 | 2019-04-16 | Adobe Inc. | Natural language system question classifier, semantic representations, and logical form templates |
US20170177715A1 (en) * | 2015-12-21 | 2017-06-22 | Adobe Systems Incorporated | Natural Language System Question Classifier, Semantic Representations, and Logical Form Templates |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10515086B2 (en) | 2016-02-19 | 2019-12-24 | Facebook, Inc. | Intelligent agent and interface to provide enhanced search |
US20170242886A1 (en) * | 2016-02-19 | 2017-08-24 | Jack Mobile Inc. | User intent and context based search results |
WO2017143338A1 (en) * | 2016-02-19 | 2017-08-24 | Jack Mobile Inc. | User intent and context based search results |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10896676B2 (en) * | 2016-03-23 | 2021-01-19 | Clarion Co., Ltd. | Server system, information system, and in-vehicle apparatus |
US20190115020A1 (en) * | 2016-03-23 | 2019-04-18 | Clarion Co., Ltd. | Server system, information system, and in-vehicle apparatus |
US10963497B1 (en) * | 2016-03-29 | 2021-03-30 | Amazon Technologies, Inc. | Multi-stage query processing |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10303790B2 (en) | 2016-06-08 | 2019-05-28 | International Business Machines Corporation | Processing un-typed triple store data |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US10474439B2 (en) | 2016-06-16 | 2019-11-12 | Microsoft Technology Licensing, Llc | Systems and methods for building conversational understanding systems |
US10324940B2 (en) * | 2016-06-20 | 2019-06-18 | Rovi Guides, Inc. | Approximate template matching for natural language queries |
US11200243B2 (en) | 2016-06-20 | 2021-12-14 | Rovi Guides, Inc. | Approximate template matching for natural language queries |
US12079226B2 (en) | 2016-06-20 | 2024-09-03 | Rovi Guides, Inc. | Approximate template matching for natural language queries |
US10776717B2 (en) | 2016-06-23 | 2020-09-15 | Accenture Global Solutions Limited | Learning based routing of service requests |
AU2017203826B2 (en) * | 2016-06-23 | 2018-07-05 | Accenture Global Solutions Limited | Learning based routing of service requests |
US10573299B2 (en) * | 2016-08-19 | 2020-02-25 | Panasonic Avionics Corporation | Digital assistant and associated methods for a transportation vehicle |
US20180233133A1 (en) * | 2016-08-19 | 2018-08-16 | Panasonic Avionics Corporation | Digital assistant and associated methods for a transportation vehicle |
US11048869B2 (en) | 2016-08-19 | 2021-06-29 | Panasonic Avionics Corporation | Digital assistant and associated methods for a transportation vehicle |
US20180068659A1 (en) * | 2016-09-06 | 2018-03-08 | Toyota Jidosha Kabushiki Kaisha | Voice recognition device and voice recognition method |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10490190B2 (en) | 2016-10-03 | 2019-11-26 | Google Llc | Task initiation using sensor dependent context long-tail voice commands |
US10297254B2 (en) | 2016-10-03 | 2019-05-21 | Google Llc | Task initiation using long-tail voice commands by weighting strength of association of the tasks and their respective commands based on user feedback |
WO2018067260A1 (en) * | 2016-10-03 | 2018-04-12 | Google Llc | Task initiation using long-tail voice commands |
US10754886B2 (en) * | 2016-10-05 | 2020-08-25 | International Business Machines Corporation | Using multiple natural language classifier to associate a generic query with a structured question type |
US20180096058A1 (en) * | 2016-10-05 | 2018-04-05 | International Business Machines Corporation | Using multiple natural language classifiers to associate a generic query with a structured question type |
US11501766B2 (en) * | 2016-11-16 | 2022-11-15 | Samsung Electronics Co., Ltd. | Device and method for providing response message to voice input of user |
US20200058299A1 (en) * | 2016-11-16 | 2020-02-20 | Samsung Electronics Co., Ltd. | Device and method for providing response message to voice input of user |
US10558686B2 (en) * | 2016-12-05 | 2020-02-11 | Sap Se | Business intelligence system dataset navigation based on user interests clustering |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10503744B2 (en) * | 2016-12-06 | 2019-12-10 | Sap Se | Dialog system for transitioning between state diagrams |
US10866975B2 (en) | 2016-12-06 | 2020-12-15 | Sap Se | Dialog system for transitioning between state diagrams |
US11314792B2 (en) | 2016-12-06 | 2022-04-26 | Sap Se | Digital assistant query intent recommendation generation |
US10810238B2 (en) | 2016-12-06 | 2020-10-20 | Sap Se | Decoupled architecture for query response generation |
US20180157739A1 (en) * | 2016-12-06 | 2018-06-07 | Sap Se | Dialog system for transitioning between state diagrams |
US12008002B2 (en) | 2016-12-07 | 2024-06-11 | MyFitnessPal, Inc. | System and method for associating user-entered text to database entries |
US11347751B2 (en) * | 2016-12-07 | 2022-05-31 | MyFitnessPal, Inc. | System and method for associating user-entered text to database entries |
US20180173694A1 (en) * | 2016-12-21 | 2018-06-21 | Industrial Technology Research Institute | Methods and computer systems for named entity verification, named entity verification model training, and phrase expansion |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US20180226076A1 (en) * | 2017-02-06 | 2018-08-09 | Kabushiki Kaisha Toshiba | Spoken dialogue system, a spoken dialogue method and a method of adapting a spoken dialogue system |
US10832667B2 (en) * | 2017-02-06 | 2020-11-10 | Kabushiki Kaisha Toshiba | Spoken dialogue system, a spoken dialogue method and a method of adapting a spoken dialogue system |
US11195516B2 (en) | 2017-02-23 | 2021-12-07 | Microsoft Technology Licensing, Llc | Expandable dialogue system |
US11069340B2 (en) | 2017-02-23 | 2021-07-20 | Microsoft Technology Licensing, Llc | Flexible and expandable dialogue system |
US20180246878A1 (en) * | 2017-02-24 | 2018-08-30 | Microsoft Technology Licensing, Llc | Corpus specific natural language query completion assistant |
US10102199B2 (en) * | 2017-02-24 | 2018-10-16 | Microsoft Technology Licensing, Llc | Corpus specific natural language query completion assistant |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10269351B2 (en) * | 2017-05-16 | 2019-04-23 | Google Llc | Systems, methods, and apparatuses for resuming dialog sessions via automated assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US11264033B2 (en) | 2017-05-16 | 2022-03-01 | Google Llc | Systems, methods, and apparatuses for resuming dialog sessions via automated assistant |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US11817099B2 (en) | 2017-05-16 | 2023-11-14 | Google Llc | Systems, methods, and apparatuses for resuming dialog sessions via automated assistant |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11172063B2 (en) * | 2017-05-22 | 2021-11-09 | Genesys Telecommunications Laboratories, Inc. | System and method for extracting domain model for dynamic dialog control |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US20180358006A1 (en) * | 2017-06-12 | 2018-12-13 | Microsoft Technology Licensing, Llc | Dynamic event processing |
US10902533B2 (en) * | 2017-06-12 | 2021-01-26 | Microsoft Technology Licensing, Llc | Dynamic event processing |
US10417039B2 (en) | 2017-06-12 | 2019-09-17 | Microsoft Technology Licensing, Llc | Event processing using a scorable tree |
US20190027141A1 (en) * | 2017-07-21 | 2019-01-24 | Pearson Education, Inc. | Systems and methods for virtual reality-based interaction evaluation |
US11068043B2 (en) | 2017-07-21 | 2021-07-20 | Pearson Education, Inc. | Systems and methods for virtual reality-based grouping evaluation |
US11514903B2 (en) * | 2017-08-04 | 2022-11-29 | Sony Corporation | Information processing device and information processing method |
US11132499B2 (en) * | 2017-08-28 | 2021-09-28 | Microsoft Technology Licensing, Llc | Robust expandable dialogue system |
US20190074005A1 (en) * | 2017-09-06 | 2019-03-07 | Zensar Technologies Limited | Automated Conversation System and Method Thereof |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
JP2019061532A (en) * | 2017-09-27 | 2019-04-18 | トヨタ自動車株式会社 | Service provision device and service provision program |
US20190096403A1 (en) * | 2017-09-27 | 2019-03-28 | Toyota Jidosha Kabushiki Kaisha | Service providing device and computer-readable non-transitory storage medium storing service providing program |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US20190140995A1 (en) * | 2017-11-03 | 2019-05-09 | Salesforce.Com, Inc. | Action response selection based on communication message analysis |
US11050700B2 (en) * | 2017-11-03 | 2021-06-29 | Salesforce.Com, Inc. | Action response selection based on communication message analysis |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US11100924B2 (en) | 2017-12-11 | 2021-08-24 | Toyota Jidosha Kabushiki Kaisha | Service providing device, non-transitory computer-readable storage medium storing service providing program and service providing method |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US11182565B2 (en) | 2018-02-23 | 2021-11-23 | Samsung Electronics Co., Ltd. | Method to learn personalized intents |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
CN110400561A (en) * | 2018-04-16 | 2019-11-01 | 松下航空电子公司 | Method and system for the vehicles |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US20190348029A1 (en) * | 2018-05-07 | 2019-11-14 | Google Llc | Activation of remote devices in a networked system |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11145300B2 (en) * | 2018-05-07 | 2021-10-12 | Google Llc | Activation of remote devices in a networked system |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11024306B2 (en) * | 2018-05-07 | 2021-06-01 | Google Llc | Activation of remote devices in a networked system |
US11664025B2 (en) | 2018-05-07 | 2023-05-30 | Google Llc | Activation of remote devices in a networked system |
US11011164B2 (en) | 2018-05-07 | 2021-05-18 | Google Llc | Activation of remote devices in a networked system |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11314940B2 (en) * | 2018-05-22 | 2022-04-26 | Samsung Electronics Co., Ltd. | Cross domain personalized vocabulary learning in intelligent assistants |
US20190361978A1 (en) * | 2018-05-22 | 2019-11-28 | Samsung Electronics Co., Ltd. | Cross domain personalized vocabulary learning in intelligent assistants |
US11170763B2 (en) * | 2018-05-31 | 2021-11-09 | Toyota Jidosha Kabushiki Kaisha | Voice interaction system, its processing method, and program therefor |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US12061752B2 (en) | 2018-06-01 | 2024-08-13 | Apple Inc. | Attention aware virtual assistant dismissal |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10720160B2 (en) | 2018-06-01 | 2020-07-21 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US12080287B2 (en) | 2018-06-01 | 2024-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US20220130394A1 (en) * | 2018-09-04 | 2022-04-28 | Newton Howard | Emotion-based voice controlled device |
US11727938B2 (en) * | 2018-09-04 | 2023-08-15 | Newton Howard | Emotion-based voice controlled device |
US20230386474A1 (en) * | 2018-09-04 | 2023-11-30 | Newton Howard | Emotion-based voice controlled device |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11102315B2 (en) * | 2018-12-27 | 2021-08-24 | Verizon Media Inc. | Performing operations based upon activity patterns |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US12136419B2 (en) | 2019-03-18 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
WO2020198319A1 (en) * | 2019-03-25 | 2020-10-01 | Jpmorgan Chase Bank, N.A. | Method and system for implementing a natural language interface to data stores using deep learning |
US11380304B1 (en) * | 2019-03-25 | 2022-07-05 | Amazon Technologies, Inc. | Generation of alternate representions of utterances |
US11880658B2 (en) | 2019-03-25 | 2024-01-23 | Jpmorgan Chase Bank, N.A. | Method and system for implementing a natural language interface to data stores using deep learning |
US11854535B1 (en) * | 2019-03-26 | 2023-12-26 | Amazon Technologies, Inc. | Personalization for speech processing applications |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11360739B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User activity shortcut suggestions |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11256868B2 (en) * | 2019-06-03 | 2022-02-22 | Microsoft Technology Licensing, Llc | Architecture for resolving ambiguous user utterance |
US20220292087A1 (en) * | 2019-08-30 | 2022-09-15 | Servicenow Canada Inc. | Decision support system for data retrieval |
US20210073474A1 (en) * | 2019-09-06 | 2021-03-11 | Accenture Global Solutions Limited | Dynamic and unscripted virtual agent systems and methods |
US20210073253A1 (en) * | 2019-09-06 | 2021-03-11 | Kabushiki Kaisha Toshiba | Analyzing apparatus, analyzing method, and computer program product |
US11709998B2 (en) * | 2019-09-06 | 2023-07-25 | Accenture Global Solutions Limited | Dynamic and unscripted virtual agent systems and methods |
US11615126B2 (en) * | 2019-09-06 | 2023-03-28 | Kabushiki Kaisha Toshiba | Analyzing apparatus, analyzing method, and computer program product |
US11699435B2 (en) * | 2019-09-18 | 2023-07-11 | Wizergos Software Solutions Private Limited | System and method to interpret natural language requests and handle natural language responses in conversation |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11354104B2 (en) * | 2019-09-30 | 2022-06-07 | Capital One Services, Llc | Computer-based systems configured to manage continuous integration/continuous delivery programming pipelines with their associated datapoints and methods of use thereof |
US10728364B1 (en) * | 2019-09-30 | 2020-07-28 | Capital One Services, Llc | Computer-based systems configured to manage continuous integration/continuous delivery programming pipelines with their associated datapoints and methods of use thereof |
US20210157881A1 (en) * | 2019-11-22 | 2021-05-27 | International Business Machines Corporation | Object oriented self-discovered cognitive chatbot |
CN111026856A (en) * | 2019-12-09 | 2020-04-17 | 出门问问信息科技有限公司 | Intelligent interaction method and device and computer readable storage medium |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11508372B1 (en) * | 2020-06-18 | 2022-11-22 | Amazon Technologies, Inc. | Natural language input routing |
US20210398524A1 (en) * | 2020-06-22 | 2021-12-23 | Amazon Technologies, Inc. | Natural language processing |
US12008985B2 (en) * | 2020-06-22 | 2024-06-11 | Amazon Technologies, Inc. | Natural language processing of declarative statements |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
CN111966796A (en) * | 2020-07-21 | 2020-11-20 | 福建升腾资讯有限公司 | Question and answer pair extraction method, device and equipment and readable storage medium |
US11531821B2 (en) * | 2020-08-13 | 2022-12-20 | Salesforce, Inc. | Intent resolution for chatbot conversations with negation and coreferences |
US20220050968A1 (en) * | 2020-08-13 | 2022-02-17 | Salesforce.Com, Inc. | Intent resolution for chatbot conversations with negation and coreferences |
US20210343287A1 (en) * | 2020-12-22 | 2021-11-04 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Voice processing method, apparatus, device and storage medium for vehicle-mounted device |
US20220284889A1 (en) * | 2021-03-05 | 2022-09-08 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
US11605375B2 (en) * | 2021-03-05 | 2023-03-14 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
US20230197068A1 (en) * | 2021-03-05 | 2023-06-22 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
US20240046922A1 (en) * | 2021-03-05 | 2024-02-08 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
US11798540B2 (en) * | 2021-03-05 | 2023-10-24 | Capital One Services, Llc | Systems and methods for dynamically updating machine learning models that provide conversational responses |
CN113409782A (en) * | 2021-06-16 | 2021-09-17 | 云茂互联智能科技(厦门)有限公司 | Method, device and system for noninductive scheduling of BI (business intelligence) large screen |
US20240185846A1 (en) * | 2021-06-29 | 2024-06-06 | Amazon Technologies, Inc. | Multi-session context |
US11373132B1 (en) * | 2022-01-25 | 2022-06-28 | Accenture Global Solutions Limited | Feature selection system |
Also Published As
Publication number | Publication date |
---|---|
WO2013010262A1 (en) | 2013-01-24 |
EP2734938A1 (en) | 2014-05-28 |
EP2734938A4 (en) | 2015-08-19 |
US10387410B2 (en) | 2019-08-20 |
CA2747153A1 (en) | 2013-01-19 |
US20190272269A1 (en) | 2019-09-05 |
US12072877B2 (en) | 2024-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12072877B2 (en) | | Method and system of classification in a natural language user interface
US11347783B2 (en) | | Implementing a software action based on machine interpretation of a language input
US10853582B2 (en) | | Conversational agent
US11948563B1 (en) | | Conversation summarization during user-control task execution for assistant systems
US11062270B2 (en) | | Generating enriched action items
US11200886B2 (en) | | System and method for training a virtual agent to identify a user's intent from a conversation
US11580112B2 (en) | | Systems and methods for automatically determining utterances, entities, and intents based on natural language inputs
US12010268B2 (en) | | Partial automation of text chat conversations
US11482223B2 (en) | | Systems and methods for automatically determining utterances, entities, and intents based on natural language inputs
JP2023531346A (en) | | Using a single request for multi-person calling in assistant systems
WO2020139865A1 (en) | | Systems and methods for improved automated conversations
KR20160147303A (en) | | Method for dialog management based on multi-user using memory capacity and apparatus for performing the method
CN116547676A (en) | | Enhanced logic for natural language processing
WO2021063089A1 (en) | | Rule matching method, rule matching apparatus, storage medium and electronic device
CN116583837A (en) | | Distance-based logit values for natural language processing
US20230100508A1 (en) | | Fusion of word embeddings and word scores for text classification
CN116615727A (en) | | Keyword data augmentation tool for natural language processing
CN111046151B (en) | | Message processing method and device
CN116091076A (en) | | Dynamic dashboard management
WO2021202282A1 (en) | | Systems and methods for automatically determining utterances, entities, and intents based on natural language inputs
EP3161666A1 (en) | | Semantic re-ranking of NLU results in conversational dialogue applications
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: MALUUBA INC., CANADA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SULEMAN, KAHEER;PANTONY, JOSHUA R.;HSU, WILSON;AND OTHERS;SIGNING DATES FROM 20140527 TO 20140907;REEL/FRAME:035348/0465
| AS | Assignment | Owner name: MALUUBA INC., CANADA. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE COUNTRY OF INCORPORATION INSIDE THE ASSIGNMENT DOCUMENT PREVIOUSLY RECORDED AT REEL: 035348 FRAME: 0465. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:SULEMAN, KAHEER;PANTONY, JOSHUA R.;HSU, WILSON;AND OTHERS;SIGNING DATES FROM 20140527 TO 20140907;REEL/FRAME:041834/0022
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
| STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MALUUBA INC.;REEL/FRAME:053116/0878. Effective date: 20200612
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4