WO2015134296A1 - Model based approach for on-screen item selection and disambiguation - Google Patents

Model based approach for on-screen item selection and disambiguation

Info

Publication number
WO2015134296A1
Authority
WO
WIPO (PCT)
Prior art keywords
utterance
items
item
display
identifying
Prior art date
Application number
PCT/US2015/017874
Other languages
English (en)
French (fr)
Inventor
Ruhi Sarikaya
Fethiye Asli CELIKYILMAZ
Zhaleh Feizollahi
Larry Paul Heck
Dilek Z. Hakkani-Tur
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Priority to KR1020167027135A (published as KR20160127810A)
Priority to EP15716197.7A (published as EP3114582A1)
Priority to CN201580012103.2A (published as CN106104528A)
Publication of WO2015134296A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/08: Speech classification or search
    • G10L 15/18: Speech classification or search using natural language modelling
    • G10L 15/1815: Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/226: Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L 2015/228: Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context

Definitions

  • Many computing devices such as smartphones, desktops, laptops, tablets, game consoles, televisions, and the like, include functionality for receiving an input (e.g., voice input) for identifying and selecting items displayed on a screen.
  • For example, a user interacting with an entertainment search application executing on a computing device may wish to request the display of movie titles which share a common theme (e.g., HARRY POTTER movies) or a list of restaurants sharing a common attribute (e.g., Middle Eastern cuisine).
  • Current applications focus on rule-based grammars that cover a very strict set of language constructs comprising a limited number of acceptable commands.
  • Embodiments provide a model based approach for on-screen item selection and disambiguation.
  • An utterance may be received by a computing device in response to displaying items on a display.
  • A disambiguation model may then be applied to the utterance by the computing device.
  • The disambiguation model may be utilized for identifying whether the utterance is directed to at least one of the items on the display, extracting referential features from the utterance, and identifying an item among the displayed items corresponding to the utterance based on the extracted referential features.
  • The computing device may then perform an action associated with the utterance upon identifying the item corresponding to the utterance on the display.
  • FIGURE 1 is a block diagram illustrating a system which utilizes a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 2A shows a screen display of a computing device which includes a user interface for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 2B shows a screen display of a computing device which includes a user interface for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 3 shows a screen display of a computing device which includes a user interface for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 4 is a flow diagram illustrating a routine for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 5 is a flow diagram illustrating a routine for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment
  • FIGURE 6 is a simplified block diagram of a computing device with which various embodiments may be practiced
  • FIGURE 7A is a simplified block diagram of a mobile computing device with which various embodiments may be practiced
  • FIGURE 7B is a simplified block diagram of a mobile computing device with which various embodiments may be practiced.
  • FIGURE 8 is a simplified block diagram of a distributed computing system in which various embodiments may be practiced.
  • FIGURE 1 is a block diagram illustrating a system 100 which utilizes a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment.
  • The system 100, which may comprise a conversational dialog system, includes a computing device 125 which is in communication with a display 110 (it should be understood that the display 110 may be integrated with the computing device 125 or comprise a separate device connected to the computing device 125, in accordance with various embodiments).
  • The computing device 125 may comprise, without limitation, a desktop computer, laptop computer, smartphone, video game console or a television.
  • The computing device 125 may also comprise or be in communication with one or more recording devices (not shown) used to detect speech and receive video/pictures (e.g., MICROSOFT KINECT, microphone(s), and the like).
  • The computing device 125 may store an application 130 which, as will be described in greater detail below, may be configured to receive utterances 135 and 140 from a user in the form of natural language queries to select items 115 which may be shown on the display 110.
  • Each of the items 115 may further comprise metadata 120 which may include additional item data such as text descriptions (e.g., a synopsis of a movie item, year of publication, actors, genre, etc.).
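To make the item/metadata pairing concrete, the following is a minimal illustrative sketch (the record layout and field names are assumptions for exposition, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DisplayItem:
    """One on-screen item plus the metadata the disambiguation model can match against."""
    title: str                  # text shown to the user, e.g., a movie title
    position: int               # index of the item in the on-screen list or grid
    metadata: dict = field(default_factory=dict)  # e.g., synopsis, year, actors, genre

# The kind of list the application might render in response to "find comedies"
items = [
    DisplayItem("Item 1", 0, {"year": 2005, "genre": "comedy"}),
    DisplayItem("Item 10", 9, {"year": 2011, "genre": "comedy"}),
]
```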
  • The application 130 may be configured to display a user interface for querying a list of movies based on a common character (e.g., "HARRY POTTER" movies) or a list of restaurants based in a particular area of a city or town (e.g., restaurants located in northeast Bellevue, Washington), and then making a desired selection therefrom.
  • Utterances comprising natural language queries for other items corresponding to other categories may also be received and displayed utilizing the application 130.
  • The application 130 may also be configured to generate a disambiguation model 150 for receiving referential features 145 (which may include explicit descriptive references, implicit descriptive references, explicit spatial or positional references and implicit spatial or positional references) associated with utterance 140.
  • The disambiguation model 150 may include various sub-models and program modules, including statistical classifier model 155, match scores module 160, semantic parser 165 and semantic location parser 170.
  • The disambiguation model 150 may utilize the aforementioned sub-models and program modules to determine if there is a relationship between a displayed item 115 and the utterance 140, so that the disambiguation model 150 may correctly identify utterances directed to the display 110 of the computing device 125 and choose the correct item in response to a user query.
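As a rough sketch of that two-stage decision (is the utterance directed at the screen at all, and if so, which item does it refer to?), consider the following toy implementation; the class and method names are invented for illustration and the scoring is deliberately naive:

```python
class DisambiguationSketch:
    """Illustrative two-stage flow: (1) screen-directed? (2) which item?"""

    SCREEN_CUES = {"one", "first", "second", "last", "top", "bottom", "this", "that"}

    def is_screen_directed(self, utterance: str, items: list[dict]) -> bool:
        # Stand-in for the statistical classifier model 155: fire if the utterance
        # shares a positional cue or any word of a displayed item's title.
        words = set(utterance.lower().split())
        title_words = {w for it in items for w in it["title"].lower().split()}
        return bool(words & (self.SCREEN_CUES | title_words))

    def resolve_item(self, utterance: str, items: list[dict]) -> dict:
        # Stand-in for referential-feature matching: pick the item whose title
        # overlaps the utterance the most.
        words = set(utterance.lower().split())
        return max(items, key=lambda it: len(words & set(it["title"].lower().split())))

model = DisambiguationSketch()
items = [{"title": "Item 1"}, {"title": "Item 10"}]
if model.is_screen_directed("show me the details of item 10", items):
    print(model.resolve_item("show me the details of item 10", items)["title"])  # Item 10
```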
  • The application 130 may comprise an operating system such as the WINDOWS PHONE and XBOX OS operating systems from MICROSOFT CORPORATION of Redmond, Washington. It should be understood, however, that other operating systems and applications (including those from other manufacturers) may alternatively be utilized in accordance with the various embodiments described herein.
  • FIGURE 2A shows a screen display of the computing device 125 which includes a user interface 205 for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment.
  • The user interface 205, which may be generated by the application 130 on the display 110, may be configured for a user to interact with the computing device 125 to complete several tasks such as browsing, searching, filtering, etc.
  • The user interface 205 may include a first turn or first utterance 207 and a recognition result 209.
  • The first turn utterance 207 may comprise a query posed by a user for a list of items (e.g., "find comedies"), after which the application 130 may return a list of items 220A-220J for the user to choose from, which are shown on the display 110.
  • Each of the items 220A-220J may include accompanying text (e.g., titles of movie comedies) in addition to metadata (not shown to the user) which may include additional information about each item.
  • FIGURE 2B shows a screen display of the computing device 125 which includes the user interface 205 for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment.
  • The user interface 205, which may be generated by the application 130 after displaying the items 220A-220J in response to receiving the first utterance 207 (as shown in FIGURE 2A), may include a second turn or second utterance 210 and a recognition result 215.
  • The recognition result 215 may be determined by applying the disambiguation model 150 to the second utterance 210 in order to identify the correct item requested by the user (e.g., the "last one") from among the displayed items 220A-220J (e.g., "Item 10"). Once an item has been identified, the item may then be highlighted (such as shown surrounding the item 220J) for selection or other action by the user.
  • FIGURE 3 shows a screen display of the computing device 125 which includes a user interface 305 for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with another embodiment.
  • The user interface 305, which may be generated by the application 130 on the display 110, may be configured for a user to interact with the computing device 125 to complete several tasks such as browsing, searching, filtering, etc.
  • The user interface 305 may include an utterance 310 and a recognition result 315.
  • The recognition result 315 may be determined by applying the disambiguation model 150 to the utterance 310 in order to identify the correct item requested by the user (e.g., "the one on Street Name 3") from among displayed items 320-330. Once an item has been identified, the item may then be highlighted (such as shown applied to the item 330) for selection or other action by the user.
  • FIGURE 4 is a flow diagram illustrating a routine 400 for utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment.
  • The logical operations of various embodiments of the present invention are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system.
  • The implementation is a matter of choice dependent on the performance requirements of the computing system implementing the invention. Accordingly, the logical operations illustrated in FIGURES 4-5 and making up the various embodiments described herein are referred to variously as operations, structural devices, acts or modules.
  • The routine 400 begins at operation 405, where the application 130, executing on the computing device 125, may receive an utterance (from a user) in response to a display of items on the display 110.
  • The routine 400 continues to operation 410, where the application 130, executing on the computing device 125, may apply the disambiguation model 150 to identify a displayed item corresponding to the utterance received at operation 405.
  • It should be understood that a single model (e.g., the disambiguation model 150) or multiple models (e.g., two separate models) may be utilized to implement the aforementioned two-stage process: a first model may be utilized to identify whether the user is referring to an item on the display 110, and a second model may be utilized to determine which item the user is referring to.
  • Illustrative operations performed by the disambiguation model 150 for identifying a displayed item corresponding to the utterance will be described in greater detail below with respect to FIGURE 5.
  • The routine 400 continues to operation 415, where the application 130, executing on the computing device 125, may perform an action (or actions) associated with the item identified on the display 110 by the disambiguation model 150.
  • The action may include the user selection of the disambiguated item on the display 110 for viewing additional information about the selected item (e.g., additional information about a selected movie title).
  • Alternatively, an action may include the user selection of the disambiguated item on the display and the execution of an activity associated with the selected item. The activity may include, for example, playing a selected movie, displaying directions to a selected restaurant location, generating an e-mail to a selected contact from a contacts list, etc. From operation 415, the routine 400 then ends.
  • FIGURE 5 is a flow diagram illustrating a routine 500 utilizing a model based approach for on-screen item selection and disambiguation, in accordance with an embodiment.
  • The routine 500 begins at operation 505, where the disambiguation model 150 (generated by the application 130) may determine if the utterance received at operation 405 of FIGURE 4 is directed to any of the items displayed on the display 110.
  • The disambiguation model 150 may be configured to build and apply the statistical classifier model 155 to the utterance.
  • The statistical classifier model 155 may include lexical and semantic features.
  • The lexical and semantic features may include a vocabulary obtained from text in the utterance, a phrase match between the utterance and item metadata associated with the items on the display 110, and locational features (e.g., "top," "second one," etc.). If, at operation 505, the disambiguation model 150 determines that the utterance is directed to at least one of the items displayed on the display 110, then the routine 500 branches to operation 515. If, at operation 505, the disambiguation model 150 is unable to determine that the utterance is directed to any of the displayed items on the display 110 (e.g., there is not a phrase match between the utterance and any of the metadata for the displayed items), then the routine 500 continues to operation 510.
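A hedged sketch of how such lexical and semantic features might be assembled for the classifier follows; the feature names and cue lists are assumptions chosen for illustration:

```python
def extract_classifier_features(utterance: str, items: list[dict]) -> dict:
    """Toy feature extraction for a screen-directed vs. not-screen-directed classifier."""
    words = utterance.lower().split()
    locational_cues = {"top", "bottom", "first", "second", "last", "left", "right"}
    # Phrase match: does any contiguous bigram of the utterance occur in item metadata?
    bigrams = {" ".join(words[i:i + 2]) for i in range(len(words) - 1)}
    metadata_text = " ".join(
        str(value).lower() for item in items for value in item.get("metadata", {}).values()
    )
    return {
        "vocabulary": words,
        "has_locational_cue": bool(set(words) & locational_cues),
        "has_metadata_phrase_match": any(bigram in metadata_text for bigram in bigrams),
    }
```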
  • At operation 510, the application 130 may be configured to request a clarification of the received utterance.
  • For example, the requested clarification may include returning a "no results" message followed by a request to restate the utterance.
  • From operation 510, the routine 500 returns to operation 505.
  • At operation 515, the disambiguation model 150 may extract referential features from the utterance.
  • The disambiguation model 150 may be configured to extract semantic and syntactic features by considering different types of utterances (or utterance classes).
  • The utterance classes may include: (1) Explicit Referential (i.e., explicit mentions of a whole or part of a title, or other textual cues such as underlined text (e.g., "show me the details of the empty chair" when referring to a book title)); (2) Implicit Referential (i.e., an implicit referral of an item using information related to the item such as the name of an author or item image (e.g., "the one released in 2005")); (3) Explicit Positional (i.e., a positional reference or screen location data using information from a list of items displayed as a grid (e.g., "I want to watch the movie on the bottom right corner")); and (4) Implicit Positional (i.e., an implicit reference to an item's position in the list (e.g., "the last one")).
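Using the document's own examples, those four classes can be pictured as below; the keyword rules are a deliberate simplification for exposition, not the patent's actual feature extraction:

```python
# The four referential utterance classes, illustrated with examples from the text
UTTERANCE_CLASS_EXAMPLES = {
    "explicit_referential": "show me the details of the empty chair",  # names the title
    "implicit_referential": "the one released in 2005",                # refers via metadata
    "explicit_positional": "I want to watch the movie on the bottom right corner",
    "implicit_positional": "the last one",                             # position, no coordinates
}

def guess_utterance_class(utterance: str, titles: list[str]) -> str:
    """Crude keyword heuristic for the four classes (illustration only)."""
    u = utterance.lower()
    if any(title.lower() in u for title in titles):
        return "explicit_referential"
    if any(cue in u for cue in ("corner", "row", "column", "top", "bottom")):
        return "explicit_positional"
    if any(cue in u for cue in ("first", "second", "last", "next")):
        return "implicit_positional"
    return "implicit_referential"
```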
  • The routine 500 continues to operation 520, where the disambiguation model 150 may identify an item on the display 110 corresponding to the utterance based on the referential features extracted at operation 515.
  • The disambiguation model 150 may be configured to identify one or more explicit and implicit references in the utterance, determine lexical match scores between the utterance and metadata associated with each of the displayed items, parse the utterance for matching phrases between semantic phrases in the utterance and the metadata, and parse the utterance to capture location indicators for predicting a screen location of the item.
  • The lexical match scores may be based on n-gram matching (i.e., word overlap and word order), Jaccard sentence similarity, etc.
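For instance, Jaccard sentence similarity and an order-preserving n-gram overlap between the utterance and an item's metadata text could be computed as follows (standard formulations, shown here only as one plausible reading of the match scores):

```python
def jaccard_similarity(utterance: str, metadata_text: str) -> float:
    """Jaccard similarity over word sets: |intersection| / |union|."""
    a = set(utterance.lower().split())
    b = set(metadata_text.lower().split())
    return len(a & b) / len(a | b) if (a or b) else 0.0

def ngram_overlap(utterance: str, metadata_text: str, n: int = 2) -> int:
    """Count of shared word n-grams; each n-gram preserves word order within its window."""
    def ngrams(text: str) -> set:
        w = text.lower().split()
        return {tuple(w[i:i + n]) for i in range(len(w) - n + 1)}
    return len(ngrams(utterance) & ngrams(metadata_text))

print(jaccard_similarity("the one released in 2005", "comedy released in 2005"))  # 0.5
print(ngram_overlap("the one released in 2005", "comedy released in 2005"))       # 2
```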
  • Based on the aforementioned parsing and match scores, the disambiguation model 150 may determine that an item corresponds to the utterance made by the user. It should be understood that, in accordance with an embodiment, the disambiguation model 150 may utilize the semantic parser 165 (which may comprise a natural language understanding model) to decode the utterance into semantic tags such as movie-name or actor-name, or into descriptions such as movie or game genre. The disambiguation model 150 may then look for matching phrases between the semantic phrases in the utterance and an item's metadata.
  • The disambiguation model 150 may utilize the semantic location parser 170 to parse the utterance for capturing screen location features (e.g., row and column indicators) depending on a screen layout (e.g., on a smaller display screen, such as a smartphone or handheld gaming device, the displayed items may be listed in a single column, whereas on a larger display screen, such as a laptop, tablet, desktop computer monitor or television, the displayed items may be arranged in a grid structure).
  • In this manner, the disambiguation model 150 may be utilized to determine the predicted location of a displayed item. From operation 520, the routine 500 then ends.
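A minimal sketch of that layout-dependent location parsing (single column on small screens, a row/column grid on larger ones) is shown below; the phrase table and grid arithmetic are assumptions for exposition:

```python
def predict_screen_index(utterance: str, n_items: int, n_columns: int = 1):
    """Map positional phrases to a 0-based index in a column (n_columns == 1) or grid."""
    u = utterance.lower()
    if "first" in u or ("top" in u and n_columns == 1):
        return 0
    if "last" in u:
        return n_items - 1
    if n_columns > 1:  # grid corners, e.g., "the movie on the bottom right corner"
        rows = -(-n_items // n_columns)  # ceiling division
        corners = {
            "top left": 0,
            "top right": n_columns - 1,
            "bottom left": (rows - 1) * n_columns,
            "bottom right": n_items - 1,
        }
        for phrase, index in corners.items():
            if phrase in u:
                return index
    return None  # defer to the other referential features

print(predict_screen_index("the last one", n_items=10))                    # 9
print(predict_screen_index("bottom right corner", n_items=9, n_columns=3)) # 8
```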
  • FIGURES 6-8 and the associated descriptions provide a discussion of a variety of operating environments in which embodiments of the invention may be practiced.
  • The devices and systems illustrated and discussed with respect to FIGURES 6-8 are for purposes of example and illustration and are not limiting of a vast number of computing device configurations that may be utilized for practicing embodiments of the invention described herein.
  • FIGURE 6 is a block diagram illustrating example physical components of a computing device 600 with which various embodiments may be practiced.
  • The computing device 600 may include at least one processing unit 602 and a system memory 604.
  • System memory 604 may comprise, but is not limited to, volatile memory (e.g., random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM)), flash memory, or any combination thereof.
  • System memory 604 may include an operating system 605 and application 130. Operating system 605, for example, may be suitable for controlling the computing device 600's operation and, in accordance with an embodiment, may comprise the WINDOWS operating systems from MICROSOFT CORPORATION of Redmond, Washington.
  • the application 130 (which, in some embodiments, may be included in the operating system 605) may comprise functionality for performing routines including, for example, utilizing a model based approach for on-screen item selection and disambiguation as described above with respect to the operations in routines 400-500 of FIGURES 4-5.
  • The computing device 600 may have additional features or functionality.
  • The computing device 600 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, solid state storage devices ("SSD"), flash memory or tape.
  • Such additional storage is illustrated in FIGURE 6 by a removable storage 609 and a non-removable storage 610.
  • The computing device 600 may also have input device(s) 612 such as a keyboard, a mouse, a pen, a sound input device (e.g., a microphone), a touch input device for receiving gestures, an accelerometer or rotational sensor, etc.
  • Output device(s) 614 such as a display, speakers, a printer, etc. may also be included.
  • The computing device 600 may include one or more communication connections 616 allowing communications with other computing devices 618.
  • Suitable communication connections 616 include, but are not limited to, RF transmitter, receiver, and/or transceiver circuitry; universal serial bus (USB), parallel, and/or serial ports.
  • Various embodiments may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors.
  • Various embodiments may also be practiced via a system-on-a-chip ("SOC") where each or many of the components illustrated in FIGURE 6 may be integrated onto a single integrated circuit.
  • Such an SOC device may include one or more processing units, graphics units, communications units, system virtualization units and various application functionality all of which are integrated (or "burned") onto the chip substrate as a single integrated circuit.
  • The functionality described herein may operate via application-specific logic integrated with other components of the computing device/system 600 on the single integrated circuit (chip).
  • Embodiments may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies.
  • Embodiments may be practiced within a general purpose computer or in any other circuits or systems.
  • Computer readable media may include computer storage media.
  • Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules.
  • The system memory 604, the removable storage device 609, and the non-removable storage device 610 are all examples of computer storage media (i.e., memory storage).
  • Computer storage media may include RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information and which can be accessed by the computing device 600. Any such computer storage media may be part of the computing device 600.
  • Computer storage media does not include a carrier wave or other propagated or modulated data signal.
  • Communication media may be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media.
  • The term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal.
  • Communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
  • FIGURES 7A and 7B illustrate a suitable mobile computing environment, for example, a mobile computing device 750 which may include, without limitation, a smartphone, a tablet personal computer, a laptop computer and the like, with which various embodiments may be practiced.
  • A mobile computing device 750 for implementing the embodiments is illustrated.
  • Mobile computing device 750 is a handheld computer having both input elements and output elements.
  • Input elements may include touch screen display 725 and input buttons 710 that allow the user to enter information into mobile computing device 750.
  • Mobile computing device 750 may also incorporate an optional side input element 720 allowing further user input.
  • Optional side input element 720 may be a rotary switch, a button, or any other type of manual input element.
  • Mobile computing device 750 may incorporate more or fewer input elements.
  • In an embodiment, the mobile computing device is a portable telephone system, such as a cellular phone having display 725 and input buttons 710.
  • Mobile computing device 750 may also include an optional keypad 705.
  • Optional keypad 705 may be a physical keypad or a "soft" keypad generated on the touch screen display.
  • Mobile computing device 750 incorporates output elements, such as display 725, which can display a graphical user interface (GUI). Other output elements include speaker 730 and LED 780. Additionally, mobile computing device 750 may incorporate a vibration module (not shown), which causes mobile computing device 750 to vibrate to notify the user of an event. In yet another embodiment, mobile computing device 750 may incorporate a headphone jack (not shown) for providing another means of providing output signals.
  • Any computer system having a plurality of environment sensors, a plurality of output elements to provide notifications to a user, and a plurality of notification event types may incorporate the various embodiments described herein.
  • FIG. 7B is a block diagram illustrating components of a mobile computing device used in one embodiment, such as the mobile computing device 750 shown in FIG. 7A. That is, mobile computing device 750 can incorporate a system 702 to implement some embodiments.
  • System 702 can be used in implementing a "smartphone" that can run one or more applications similar to those of a desktop or notebook computer.
  • The system 702 is integrated as a computing device, such as an integrated personal digital assistant (PDA) and wireless phone.
  • Application 130 may be loaded into memory 762 and run on or in association with an operating system 764.
  • The system 702 also includes non-volatile storage 768 within the memory 762.
  • Non-volatile storage 768 may be used to store persistent information that should not be lost if system 702 is powered down.
  • The application 130 may use and store information in the non-volatile storage 768.
  • The application 130 may comprise functionality for performing routines including, for example, utilizing a model based approach for on-screen item selection and disambiguation as described above with respect to the operations in routines 400-500 of FIGURES 4-5.
  • A synchronization application (not shown) also resides on system 702 and is programmed to interact with a corresponding synchronization application resident on a host computer to keep the information stored in the non-volatile storage 768 synchronized with corresponding information stored at the host computer.
  • Other applications may also be loaded into the memory 762 and run on the mobile computing device 750.
  • The system 702 has a power supply 770, which may be implemented as one or more batteries.
  • The power supply 770 might further include an external power source, such as an AC adapter or a powered docking cradle that supplements or recharges the batteries.
  • The system 702 may also include a radio 772 (i.e., a radio interface layer) that performs the function of transmitting and receiving radio frequency communications.
  • The radio 772 facilitates wireless connectivity between the system 702 and the "outside world," via a communications carrier or service provider. Transmissions to and from the radio 772 are conducted under control of OS 764. In other words, communications received by the radio 772 may be disseminated to the application 130 via OS 764, and vice versa.
  • The radio 772 allows the system 702 to communicate with other computing devices, such as over a network.
  • The radio 772 is one example of communication media.
  • The embodiment of the system 702 is shown with two types of notification output devices: the LED 780 that can be used to provide visual notifications and an audio interface 774 that can be used with speaker 730 to provide audio notifications. These devices may be directly coupled to the power supply 770 so that, when activated, they remain on for a duration dictated by the notification mechanism even though processor 760 and other components might shut down to conserve battery power.
  • The LED 780 may be programmed to remain on indefinitely until the user takes action to indicate the powered-on status of the device.
  • The audio interface 774 is used to provide audible signals to, and receive audible signals from, the user.
  • The audio interface 774 may also be coupled to a microphone (not shown) to receive audible (e.g., voice) input, such as to facilitate a telephone conversation.
  • The microphone may also serve as an audio sensor to facilitate control of notifications.
  • The system 702 may further include a video interface 776 that enables an operation of on-board camera 740 to record still images, video streams, and the like.
  • A mobile computing device implementing the system 702 may have additional features or functionality.
  • The device may also include additional data storage devices (removable and/or non-removable) such as magnetic disks, optical disks, or tape.
  • Such additional storage is illustrated in FIG. 7B by the non-volatile storage 768.
  • Data/information generated or captured by the mobile computing device 750 and stored via the system 702 may be stored locally on the mobile computing device 750, as described above, or the data may be stored on any number of storage media that may be accessed by the device via the radio 772 or via a wired connection between the mobile computing device 750 and a separate computing device associated with the mobile computing device 750, for example, a server computer in a distributed computing network such as the Internet.
  • Such data/information may be accessed via the mobile computing device 750, via the radio 772, or via a distributed computing network.
  • Similarly, such data/information may be readily transferred between computing devices for storage and use according to well-known data/information transfer and storage means, including electronic mail and collaborative data/information sharing systems.
  • FIGURE 8 is a simplified block diagram of a distributed computing system in which various embodiments may be practiced.
  • The distributed computing system may include a number of client devices, such as a computing device 803, a tablet computing device 805 and a mobile computing device 810.
  • The client devices 803, 805 and 810 may be in communication with a distributed computing network 815 (e.g., the Internet).
  • A server 820 is in communication with the client devices 803, 805 and 810 over the network 815.
  • The server 820 may store the application 130, which may perform routines including, for example, utilizing a model based approach for on-screen item selection and disambiguation as described above with respect to the operations in routines 400-500 of FIGURES 4-5.
  • Content developed, interacted with, or edited in association with the application 130 may be stored in different communication channels or other storage types.
  • Various documents may be stored using a directory service 822, a web portal 824, a mailbox service 826, an instant messaging store 828, or a social networking site 830.
  • The application 130 may use any of these types of systems or the like for enabling data utilization, as described herein.
  • The server 820 may provide the application 130 to clients.
  • The server 820 may be a web server providing the application 130 over the web.
  • The server 820 may provide the application 130 over the web to clients through the network 815.
  • By way of example, the computing device 125 may be implemented as the computing device 803 and embodied in a personal computer, the tablet computing device 805 and/or the mobile computing device 810 (e.g., a smartphone). Any of these embodiments of the computing devices 803, 805 and 810 may obtain content from the store 816.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
PCT/US2015/017874 2014-03-03 2015-02-27 Model based approach for on-screen item selection and disambiguation WO2015134296A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020167027135A KR20160127810A (ko) 2014-03-03 2015-02-27 Model based approach for on-screen item selection and disambiguation
EP15716197.7A EP3114582A1 (de) 2014-03-03 2015-02-27 Model based approach for on-screen item selection and disambiguation
CN201580012103.2A CN106104528A (zh) 2014-03-03 2015-02-27 Model based approach for on-screen item selection and disambiguation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/194,964 US9412363B2 (en) 2014-03-03 2014-03-03 Model based approach for on-screen item selection and disambiguation
US14/194,964 2014-03-03

Publications (1)

Publication Number Publication Date
WO2015134296A1 (en)

Family

ID=52829307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/017874 WO2015134296A1 (en) 2014-03-03 2015-02-27 Model based approach for on-screen item selection and disambiguation

Country Status (5)

Country Link
US (1) US9412363B2 (de)
EP (1) EP3114582A1 (de)
KR (1) KR20160127810A (de)
CN (1) CN106104528A (de)
WO (1) WO2015134296A1 (de)

Families Citing this family (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
KR20150104615A (ko) 2013-02-07 2015-09-15 Apple Inc. Voice trigger for a digital assistant
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3008641A1 (de) 2013-06-09 2016-04-20 Device, method and graphical user interface for conversation persistence across two or more instances of a digital assistant
CN105453026A (zh) 2013-08-06 2016-03-30 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
KR20150128406A (ko) * 2014-05-09 2015-11-18 Samsung Electronics Co., Ltd. Method and apparatus for displaying speech recognition information
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
TWI566107B (zh) 2014-05-30 2017-01-11 Apple Inc. Method, non-transitory computer-readable storage medium, and electronic device for processing multi-part voice commands
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9641919B1 (en) * 2014-09-30 2017-05-02 Amazon Technologies, Inc. Audio assemblies for electronic devices
KR102301880B1 (ko) * 2014-10-14 2021-09-14 Samsung Electronics Co., Ltd. Electronic apparatus and voice conversation method thereof
US9953644B2 (en) * 2014-12-01 2018-04-24 At&T Intellectual Property I, L.P. Targeted clarification questions in speech recognition with concept presence score and concept correctness score
US9792560B2 (en) * 2015-02-17 2017-10-17 Microsoft Technology Licensing, Llc Training systems and methods for sequence taggers
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US9886958B2 (en) 2015-12-11 2018-02-06 Microsoft Technology Licensing, Llc Language and domain independent model based approach for on-screen item selection
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
JP6523974B2 (ja) * 2016-01-05 2019-06-05 Toshiba Corporation Communication support device, communication support method, and program
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
EP3502840B1 (de) * 2016-08-16 2020-11-04 Sony Corporation Information processing device, information processing method and program
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) * 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
KR102412202B1 (ko) * 2017-01-03 2022-06-27 Samsung Electronics Co., Ltd. Refrigerator and method of displaying information thereof
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. USER INTERFACE FOR CORRECTING RECOGNITION ERRORS
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK201770429A1 (en) 2017-05-12 2018-12-14 Apple Inc. LOW-LATENCY INTELLIGENT AUTOMATED ASSISTANT
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK201770411A1 (en) 2017-05-15 2018-12-20 Apple Inc. MULTI-MODAL INTERFACES
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10449440B2 (en) * 2017-06-30 2019-10-22 Electronic Arts Inc. Interactive voice-controlled companion application for a video game
US10515625B1 (en) 2017-08-31 2019-12-24 Amazon Technologies, Inc. Multi-modal natural language processing
US10621317B1 (en) 2017-09-14 2020-04-14 Electronic Arts Inc. Audio-based device authentication system
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US11182122B2 (en) * 2017-12-08 2021-11-23 Amazon Technologies, Inc. Voice control of computing devices
US10503468B2 (en) 2017-12-08 2019-12-10 Amazon Technologies, Inc. Voice enabling applications
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10629192B1 (en) 2018-01-09 2020-04-21 Electronic Arts Inc. Intelligent personalized speech recognition
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. VIRTUAL ASSISTANT OPERATION IN MULTI-DEVICE ENVIRONMENTS
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK179822B1 (da) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
WO2020040780A1 (en) * 2018-08-24 2020-02-27 Hewlett-Packard Development Company, L.P. Identifying digital elements
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. USER ACTIVITY SHORTCUT SUGGESTIONS
DK201970510A1 (en) 2019-05-31 2021-02-11 Apple Inc Voice identification in digital assistant systems
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11227599B2 (en) 2019-06-01 2022-01-18 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US10926173B2 (en) 2019-06-10 2021-02-23 Electronic Arts Inc. Custom voice control of video game character
WO2021056255A1 (en) 2019-09-25 2021-04-01 Apple Inc. Text detection using global geometry estimators
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11043220B1 (en) 2020-05-11 2021-06-22 Apple Inc. Digital assistant hardware abstraction
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
CN114594923A (zh) * 2022-02-16 2022-06-07 Beijing Wutong Chelian Technology Co., Ltd. Control method, apparatus and device for a vehicle-mounted terminal, and storage medium

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6085160A (en) 1998-07-10 2000-07-04 Lernout & Hauspie Speech Products N.V. Language independent speech recognition
US6757718B1 (en) 1999-01-05 2004-06-29 Sri International Mobile navigation of network-based electronic information using spoken input
DE50104533D1 (de) 2000-01-27 2004-12-23 Siemens Ag System and method for gaze-focused speech processing
US6795806B1 (en) 2000-09-20 2004-09-21 International Business Machines Corporation Method for enhancing dictation and command discrimination
US6964023B2 (en) 2001-02-05 2005-11-08 International Business Machines Corporation System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input
JP3919210B2 (ja) 2001-02-15 2007-05-23 Alpine Electronics, Inc. Voice input guidance method and apparatus
AU2002314933A1 (en) 2001-05-30 2002-12-09 Cameronsound, Inc. Language independent and voice operated information management system
KR100457509B1 (ko) 2001-07-07 2004-11-17 Samsung Electronics Co., Ltd. Information terminal operated and controlled through a touch screen and voice recognition, and command execution method thereof
US7324947B2 (en) 2001-10-03 2008-01-29 Promptu Systems Corporation Global speech user interface
US7398209B2 (en) 2002-06-03 2008-07-08 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7881493B1 (en) 2003-04-11 2011-02-01 Eyetools, Inc. Methods and apparatuses for use of eye interpretation information
US7742911B2 (en) 2004-10-12 2010-06-22 At&T Intellectual Property Ii, L.P. Apparatus and method for spoken language understanding by using semantic role labeling
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US8467672B2 (en) 2005-10-17 2013-06-18 Jeffrey C. Konicek Voice recognition and gaze-tracking for a camera
JP4878471B2 (ja) 2005-11-02 2012-02-15 Canon Inc. Information processing apparatus and control method therefor
US8793620B2 (en) 2011-04-21 2014-07-29 Sony Computer Entertainment Inc. Gaze-assisted computer interface
US9250703B2 (en) 2006-03-06 2016-02-02 Sony Computer Entertainment Inc. Interface with gaze detection and voice input
US8375326B2 (en) 2006-05-30 2013-02-12 Dell Products Lp. Contextual-based and overlaid user interface elements
BRPI0712837B8 (pt) 2006-06-11 2021-06-22 Volvo Tech Corporation método para determinação e análise de uma localização de interesse visual
US8224656B2 (en) * 2008-03-14 2012-07-17 Microsoft Corporation Speech recognition disambiguation on mobile devices
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10540976B2 (en) 2009-06-05 2020-01-21 Apple Inc. Contextual voice commands
KR101597289B1 (ko) 2009-07-31 2016-03-08 Samsung Electronics Co., Ltd. Apparatus and method for recognizing speech according to a dynamic screen
US9043206B2 (en) 2010-04-26 2015-05-26 Cyberpulse, L.L.C. System and methods for matching an utterance to a template hierarchy
US8700392B1 (en) 2010-09-10 2014-04-15 Amazon Technologies, Inc. Speech-inclusive device interfaces
US8560321B1 (en) 2011-01-05 2013-10-15 Interactions Corportion Automated speech recognition system for natural language understanding
US20140099623A1 (en) 2012-10-04 2014-04-10 Karmarkar V. Amit Social graphs based on user bioresponse data
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US20120259638A1 (en) 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Apparatus and method for determining relevance of input speech
WO2013033842A1 (en) 2011-09-07 2013-03-14 Tandemlaunch Technologies Inc. System and method for using eye gaze information to enhance interactions
CN103187057A (zh) * 2011-12-29 2013-07-03 Founder International Software (Beijing) Co., Ltd. Cartoon voice control system and method
US9024844B2 (en) 2012-01-25 2015-05-05 Microsoft Technology Licensing, Llc Recognition of image on external display
US9423870B2 (en) 2012-05-08 2016-08-23 Google Inc. Input determination method
US9823742B2 (en) 2012-05-18 2017-11-21 Microsoft Technology Licensing, Llc Interaction and management of devices using gaze detection
US20130346085A1 (en) 2012-06-23 2013-12-26 Zoltan Stekkelpak Mouth click sound based computer-human interaction method, system and apparatus
US8977555B2 (en) 2012-12-20 2015-03-10 Amazon Technologies, Inc. Identification of utterance subjects
CN103885743A (zh) 2012-12-24 2014-06-25 Continental Automotive Investment (Shanghai) Co., Ltd. Speech-to-text input method and system combining gaze tracking technology
US8571851B1 (en) 2012-12-31 2013-10-29 Google Inc. Semantic interpretation using user gaze order
KR20140089876A (ko) * 2013-01-07 2014-07-16 Samsung Electronics Co., Ltd. Interactive interface device and control method thereof
KR20140132246A (ko) 2013-05-07 2014-11-17 Samsung Electronics Co., Ltd. Object selection method and object selection apparatus
US10317992B2 (en) 2014-09-25 2019-06-11 Microsoft Technology Licensing, Llc Eye gaze for spoken language understanding in multi-modal conversational interactions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110054899A1 (en) * 2007-03-07 2011-03-03 Phillips Michael S Command and control utilizing content information in a mobile voice-to-speech application
US20110276944A1 (en) * 2010-05-07 2011-11-10 Ruth Bergman Natural language text instructions
US20120209608A1 (en) * 2011-02-15 2012-08-16 Pantech Co., Ltd. Mobile communication terminal apparatus and method for executing application through voice recognition
EP2533242A1 (de) * 2011-06-07 2012-12-12 Samsung Electronics Co., Ltd. Anzeigevorrichtung und Verfahren zur Ausführung einer Verbindung, und Verfahren zur Spracherkennung dafür

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PUI-YU HUI ET AL: "Cross-Modality Semantic Integration With Hypothesis Rescoring for Robust Interpretation of Multimodal User Interactions", IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, IEEE SERVICE CENTER, NEW YORK, NY, USA, vol. 17, no. 3, 1 March 2009 (2009-03-01), pages 486 - 500, XP011251305, ISSN: 1558-7916, DOI: 10.1109/TASL.2008.2011509 *

Also Published As

Publication number Publication date
EP3114582A1 (de) 2017-01-11
US20150248886A1 (en) 2015-09-03
US9412363B2 (en) 2016-08-09
KR20160127810A (ko) 2016-11-04
CN106104528A (zh) 2016-11-09

Similar Documents

Publication Publication Date Title
US9412363B2 (en) Model based approach for on-screen item selection and disambiguation
US9886958B2 (en) Language and domain independent model based approach for on-screen item selection
US10572602B2 (en) Building conversational understanding systems using a toolset
US10181322B2 (en) Multi-user, multi-domain dialog system
US10055403B2 (en) Rule-based dialog state tracking
EP3183728B1 (de) System und verfahren zur detektion verwaister äusserungen
US10235358B2 (en) Exploiting structured content for unsupervised natural language semantic parsing
US9875237B2 (en) Using human perception in building language understanding models
US20140222422A1 (en) Scaling statistical language understanding systems across domains and intents
US20150179170A1 (en) Discriminative Policy Training for Dialog Systems
US20180089164A1 (en) Entity-specific conversational artificial intelligence
US20150325236A1 (en) Context specific language model scale factors
EP3345100A1 (de) Verteiltes serversystem für sprachverständnis
CN111247778A (zh) 使用web智能的对话式/多回合的问题理解
US20140379323A1 (en) Active learning using different knowledge sources
US10719791B2 (en) Topic-based place of interest discovery feed
US20180196870A1 (en) Systems and methods for a smart search of an electronic document
US20140350931A1 (en) Language model trained using predicted queries from statistical machine translation
US20180322155A1 (en) Search system for temporally relevant social data
US10963641B2 (en) Multi-lingual tokenization of documents and associated queries
US11900926B2 (en) Dynamic expansion of acronyms in audio content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15716197

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
REEP Request for entry into the european phase

Ref document number: 2015716197

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015716197

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20167027135

Country of ref document: KR

Kind code of ref document: A