CN117170780A - Application vocabulary integration through digital assistant - Google Patents

Application vocabulary integration through digital assistant

Info

Publication number
CN117170780A
Authority
CN
China
Prior art keywords: vocabulary, application, vocabulary entry, entry, software application
Prior art date
Legal status
Pending
Application number
CN202310581530.9A
Other languages
Chinese (zh)
Inventor
L·N·珀金斯
P·贝林
D·迪兹曼
K·D·皮托兰
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 17/946,977 (US 11,978,436 B2)
Application filed by Apple Inc
Publication of CN117170780A

Abstract

Systems and processes for operating intelligent automated assistants are provided. For example, the intelligent automated assistant obtains a static vocabulary entry for an application and registers the static vocabulary entry with a knowledge base. When the application is running, the intelligent automated assistant receives a request from the application to register a dynamic vocabulary entry and registers the dynamic vocabulary entry as well. Upon receiving user input, the intelligent automated assistant determines whether a matching vocabulary entry for the application has been registered and causes the application to perform a task based on the matching vocabulary entry.

Description

Application vocabulary integration through digital assistant
Technical Field
This document relates generally to intelligent automated assistants, and more particularly to registering application terms for use with intelligent automated assistants.
Background
An intelligent automated assistant (or digital assistant) may provide an advantageous interface between a human user and an electronic device. Such assistants may allow a user to interact with a device or system using natural language in spoken and/or textual form. For example, a user may provide a voice input containing a user request to a digital assistant running on an electronic device. The digital assistant may interpret the user's intent from the voice input and operationalize that intent into tasks. The tasks may then be performed by executing one or more services of the electronic device, and a relevant output responsive to the user request may be returned to the user.
Electronic devices (e.g., mobile phones, laptops, tablet computers, etc.) implementing digital assistants may be installed with applications (e.g., first-party and third-party software programs) that can greatly extend the content and functionality available to users. However, integrating additional application content and functionality with the digital assistant so that users can interact with the application using natural language can be difficult and inefficient. For example, for a digital assistant's natural language processing system, terms for application content and functionality may be "out of vocabulary". As another example, even though the digital assistant understands natural language input, the digital assistant may not understand how to use application content and functionality to implement user intent.
Disclosure of Invention
Exemplary methods are disclosed herein. An exemplary method includes, at an electronic device having one or more processors, obtaining, from a software application, a first vocabulary entry for the software application; registering the first vocabulary entry with a knowledge base of a digital assistant of the electronic device; and when the software application is running: receiving a request from the software application to register a second vocabulary entry for the software application; and registering the second vocabulary entry with the knowledge base of the digital assistant.
Example non-transitory computer-readable media are disclosed herein. An example non-transitory computer readable storage medium stores one or more programs. The one or more programs include instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: obtaining a first vocabulary entry for a software application from the software application; registering the first vocabulary entry with a knowledge base of a digital assistant of the electronic device; and when the software application is running: receiving a request from the software application to register a second vocabulary entry for the software application; and registering the second vocabulary entry with the knowledge base of the digital assistant.
Example electronic devices are disclosed herein. An example electronic device includes one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for: obtaining a first vocabulary entry for a software application from the software application; registering the first vocabulary entry with a knowledge base of a digital assistant of the electronic device; and when the software application is running: receiving a request from the software application to register a second vocabulary entry for the software application; and registering the second vocabulary entry with the knowledge base of the digital assistant.
An exemplary electronic device includes means for: obtaining a first vocabulary entry for a software application from the software application; registering the first vocabulary entry with a knowledge base of a digital assistant of the electronic device; and when the software application is running: receiving a request from the software application to register a second vocabulary entry for the software application; and registering the second vocabulary entry with the knowledge base of the digital assistant.
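As a concrete illustration of the registration flow recited above, the following Swift sketch models the first (static) registration path. It is a minimal sketch only: the VocabularyEntry and AssistantKnowledgeBase types, the register function, and the example app identifier are hypothetical names invented for this example, since the disclosure does not specify a concrete API.

```swift
import Foundation

// Hypothetical types; the disclosure does not define a concrete API.
struct VocabularyEntry {
    let identifier: String      // app-defined identifier for the entity or action
    let spokenPhrase: String    // canonical phrase a user may say
    let synonyms: [String]      // alternative phrasings usable for matching
}

// Minimal stand-in for the digital assistant's knowledge base.
final class AssistantKnowledgeBase {
    private(set) var entries: [String: [VocabularyEntry]] = [:]   // keyed by app identifier

    // Registers a batch of entries obtained from a software application.
    func register(_ newEntries: [VocabularyEntry], for appID: String) {
        entries[appID, default: []].append(contentsOf: newEntries)
    }
}

// First (static) vocabulary entry obtained from the application, e.g., when the
// application is installed or its declared vocabulary is discovered.
let knowledgeBase = AssistantKnowledgeBase()
let staticEntries = [
    VocabularyEntry(identifier: "playlist",        // a class handled by the app
                    spokenPhrase: "playlist",
                    synonyms: ["mix", "collection"])
]
knowledgeBase.register(staticEntries, for: "com.example.musicapp")
```

The second registration path, in which the running application requests registration of an additional vocabulary entry, is sketched separately in the detailed description below.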
Exemplary methods are disclosed herein. An exemplary method includes, at an electronic device having one or more processors, obtaining an application vocabulary of a software application from the software application, wherein the vocabulary includes at least a first type of vocabulary entry and a second type of vocabulary entry; registering the application vocabulary with a knowledge base of a digital assistant of the electronic device; receiving user input; determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
Example non-transitory computer-readable media are disclosed herein. An example non-transitory computer readable storage medium stores one or more programs. The one or more programs include instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: obtaining an application vocabulary of the software application from the software application, wherein the vocabulary includes at least a first type of vocabulary entry and a second type of vocabulary entry; registering the application vocabulary with a knowledge base of a digital assistant of the electronic device; receiving user input; determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
Example electronic devices are disclosed herein. An example electronic device includes one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for: obtaining an application vocabulary of the software application from the software application, wherein the vocabulary includes at least a first type of vocabulary entry and a second type of vocabulary entry; registering the application vocabulary with a knowledge base of a digital assistant of the electronic device; receiving user input; determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
An exemplary electronic device includes means for: obtaining an application vocabulary of the software application from the software application, wherein the vocabulary includes at least a first type of vocabulary entry and a second type of vocabulary entry; registering the application vocabulary with a knowledge base of a digital assistant of the electronic device; receiving user input; determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
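The matching-and-dispatch method recited above can likewise be illustrated with a small, self-contained Swift sketch. The RegisteredEntry type, the matchingEntry function, and the simple substring-based matching strategy are assumptions made for this example only; the disclosure leaves the actual matching logic (e.g., metadata-driven matching) unspecified.

```swift
import Foundation

// Hypothetical, self-contained sketch: matching a user utterance against
// registered application vocabulary and dispatching to the owning app.
struct RegisteredEntry {
    let appID: String
    let identifier: String
    let phrases: [String]    // canonical phrase plus synonyms
}

func matchingEntry(for utterance: String,
                   in registered: [RegisteredEntry]) -> RegisteredEntry? {
    let lowered = utterance.lowercased()
    // Return the first entry any of whose phrases occurs in the utterance.
    return registered.first { entry in
        entry.phrases.contains { lowered.contains($0.lowercased()) }
    }
}

let registered = [
    RegisteredEntry(appID: "com.example.musicapp",
                    identifier: "playlist.workout",
                    phrases: ["workout mix", "gym playlist"])
]

if let hit = matchingEntry(for: "Play my workout mix", in: registered) {
    // In a real system the assistant would hand the identifier to the app so
    // that the app performs the action; here we only print the dispatch.
    print("Dispatch to \(hit.appID): act on \(hit.identifier)")
} else {
    print("Utterance is out of vocabulary for all registered applications")
}
```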
Registering application terminology for use with a digital assistant, as described herein, provides an efficient and accurate way to integrate application content and functionality with a digital assistant system. For example, registering the first type of application vocabulary entry and the second type of application vocabulary entry separately may allow the application vocabulary to be updated selectively and efficiently. In addition, registering and processing application terms, as described herein, may allow a user to access content and functionality provided by a software application using the intuitive and efficient natural language interface of a digital assistant.
Drawings
Fig. 1 is a block diagram illustrating a system and environment for implementing a digital assistant according to various examples.
Fig. 2A is a block diagram illustrating a portable multifunction device implementing a client-side portion of a digital assistant in accordance with various examples.
FIG. 2B is a block diagram illustrating exemplary components for event processing according to various examples.
Fig. 3 illustrates a portable multifunction device implementing a client-side portion of a digital assistant in accordance with various examples.
FIG. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with various examples.
FIG. 5A illustrates an exemplary user interface of a menu of applications on a portable multifunction device in accordance with various examples.
FIG. 5B illustrates an exemplary user interface of a multi-function device having a touch-sensitive surface separate from a display according to various examples.
Fig. 6A illustrates a personal electronic device according to various examples.
Fig. 6B is a block diagram illustrating a personal electronic device in accordance with various examples.
Fig. 7A is a block diagram illustrating a digital assistant system or server portion thereof according to various examples.
Fig. 7B illustrates the functionality of the digital assistant shown in fig. 7A according to various examples.
Fig. 7C illustrates a portion of an ontology according to various examples.
Fig. 8A-8B illustrate a system for registering application terms for use with a digital assistant according to various examples.
Fig. 9A-9B illustrate a flow chart for registering application terms for use with a digital assistant, according to various examples.
Fig. 10A-10B illustrate systems for implementing application vocabulary by a digital assistant, according to various examples.
Fig. 11A-11B illustrate a flow chart for implementing an application vocabulary by a digital assistant, according to various examples.
Detailed Description
In the following description of the examples, reference is made to the accompanying drawings in which, by way of illustration, specific examples in which the embodiments may be practiced are shown. It is to be understood that other examples may be utilized and structural changes may be made without departing from the scope of the various examples.
The intelligent automated assistant may integrate application content and functionality by obtaining and registering application vocabulary. For example, the intelligent automated assistant may initiate obtaining a first vocabulary entry from the application, such as a static vocabulary entry for a class (e.g., a programming concept) handled by the application, and register the first vocabulary entry with a knowledge base of the intelligent automated assistant. In addition, the intelligent automated assistant may receive a "push" request from the application to register a second vocabulary entry (such as a dynamic vocabulary entry of another type processed by the application) and register the second vocabulary entry with the knowledge base. Upon receiving a user input, the intelligent automated assistant determines whether the user input corresponds to any registered application vocabulary entry, e.g., by finding a match with the user input based on metadata associated with the vocabulary entries in the knowledge base. When a matching vocabulary entry is found, the intelligent automated assistant causes the application to perform an action based on the vocabulary entry.
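A minimal sketch of the "push" path mentioned above, in which the running application asks the assistant to register a dynamic vocabulary entry, might look as follows. The VocabularyRegistrar type, the handleRegistrationRequest function, and the playlist example are hypothetical names chosen for illustration only.

```swift
import Foundation

// Hypothetical sketch of the "push" path: while the application is running, it
// asks the assistant to register a dynamic vocabulary entry (for example, the
// name of a playlist the user just created).
struct DynamicEntry {
    let identifier: String
    let phrase: String
}

final class VocabularyRegistrar {
    private var dynamicEntries: [String: [DynamicEntry]] = [:]   // keyed by app identifier

    // Called in response to a registration request received from the app.
    func handleRegistrationRequest(appID: String, entry: DynamicEntry) {
        dynamicEntries[appID, default: []].append(entry)
        print("Registered dynamic entry '\(entry.phrase)' for \(appID)")
    }
}

let registrar = VocabularyRegistrar()
// The running app pushes a new entry as its content changes.
registrar.handleRegistrationRequest(
    appID: "com.example.musicapp",
    entry: DynamicEntry(identifier: "playlist.roadtrip", phrase: "Road Trip"))
```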
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another element. For example, a first input may be referred to as a second input, and similarly, a second input may be referred to as a first input, without departing from the scope of the various described examples. The first input and the second input are both inputs, and in some cases are independent and different inputs.
The terminology used in the description of the various illustrated examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
1. System and environment
Fig. 1 illustrates a block diagram of a system 100 in accordance with various examples. In some examples, system 100 implements a digital assistant. The terms "digital assistant," "virtual assistant," "intelligent automated assistant," or "automated digital assistant" refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent and performs actions based on the inferred user intent. For example, to act on an inferred user intent, the system performs one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, and the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form.
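The act-on-intent loop in this definition can be summarized in a short sketch: pick a task flow for the inferred intent, fill its parameters, execute it, and produce a response. The InferredIntent, TaskFlow, and WeatherTaskFlow names below are hypothetical placeholders and not part of the disclosure.

```swift
import Foundation

// Hypothetical sketch of the task-flow loop described above.
struct InferredIntent {
    let name: String
    let parameters: [String: String]
}

protocol TaskFlow {
    var intentName: String { get }
    func execute(with parameters: [String: String]) -> String   // returns response text
}

struct WeatherTaskFlow: TaskFlow {
    let intentName = "get_weather"
    func execute(with parameters: [String: String]) -> String {
        let city = parameters["city"] ?? "your location"
        return "Looking up the weather in \(city)."   // stand-in for a service call
    }
}

func respond(to intent: InferredIntent, flows: [any TaskFlow]) -> String {
    // Identify a task flow matching the inferred intent, then execute it with
    // the specific requirements taken from the intent's parameters.
    guard let flow = flows.first(where: { $0.intentName == intent.name }) else {
        return "Sorry, I can't help with that yet."
    }
    return flow.execute(with: intent.parameters)
}

let reply = respond(to: InferredIntent(name: "get_weather",
                                       parameters: ["city": "Cupertino"]),
                    flows: [WeatherTaskFlow()])
print(reply)
```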
In particular, the digital assistant is capable of accepting user requests at least partially in the form of natural language commands, requests, statements, narratives, and/or inquiries. Typically, a user request seeks either an informational answer or the performance of a task by the digital assistant. A satisfactory response to a user request includes providing the requested informational answer, performing the requested task, or a combination of the two. For example, the user asks the digital assistant a question such as "Where am I right now?" Based on the user's current location, the digital assistant answers, "You are in Central Park near the west gate." The user also requests the performance of a task, for example, "Please invite my friends to my girlfriend's birthday party next week." In response, the digital assistant may acknowledge the request by saying "OK, right away," and then send a suitable calendar invitation on behalf of the user to each of the user's friends listed in the user's electronic address book. During performance of the requested task, the digital assistant sometimes interacts with the user in a continuous dialogue involving multiple exchanges of information over an extended period of time. There are many other ways of interacting with a digital assistant to request information or the performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant also provides responses in other visual or audio forms, for example as text, alerts, music, video, animation, and the like.
As shown in fig. 1, in some examples, the digital assistant is implemented according to a client-server model. The digital assistant includes a client-side portion 102 (hereinafter "DA client 102") that executes on a user device 104 and a server-side portion 106 (hereinafter "DA server 106") that executes on a server system 108. DA client 102 communicates with DA server 106 through one or more networks 110. The DA client 102 provides client-side functionality such as user-oriented input and output processing, and communication with the DA server 106. The DA server 106 provides server-side functionality for any number of DA clients 102 each located on a respective user device 104.
In some examples, the DA server 106 includes a client-oriented I/O interface 112, one or more processing modules 114, data and models 116, and an I/O interface 118 to external services. The client-oriented I/O interface 112 facilitates client-oriented input and output processing of the DA server 106. The one or more processing modules 114 process speech input using the data and models 116 and determine user intent based on the natural language input. Further, the one or more processing modules 114 perform task execution based on the inferred user intent. In some examples, the DA server 106 communicates with external services 120 through one or more networks 110 to accomplish tasks or collect information. The I/O interface 118 to external services facilitates such communication.
The user device 104 may be any suitable electronic device. In some examples, the user device 104 is a portable multifunction device (e.g., device 200 described below with reference to fig. 2A), a multifunction device (e.g., device 400 described below with reference to fig. 4), or a personal electronic device (e.g., device 600 described below with reference to figs. 6A-6B). The portable multifunction device is, for example, a mobile phone that also contains other functions, such as PDA and/or music player functions. Specific examples of portable multifunction devices include devices from Apple Inc. (Cupertino, California). Other examples of portable multifunction devices include, but are not limited to, earbuds/headphones, speakers, and laptop or tablet computers. Further, in some examples, the user device 104 is a non-portable multifunction device. In particular, the user device 104 is a desktop computer, a gaming console, a speaker, a television, or a television set-top box. In some examples, the user device 104 includes a touch-sensitive surface (e.g., a touch screen display and/or a touch pad). In addition, the user device 104 optionally includes one or more other physical user interface devices, such as a physical keyboard, a mouse, and/or a joystick. Various examples of electronic devices, such as multifunction devices, are described in more detail below.
Examples of one or more communication networks 110 include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the Internet. One or more of the communication networks 110 are implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
The server system 108 is implemented on one or more standalone data processing devices or a distributed network of computers. In some examples, the server system 108 also employs various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 108.
In some examples, the user device 104 communicates with the DA server 106 via a second user device 122. The second user device 122 is similar or identical to the user device 104. For example, the second user device 122 is similar to the device 200, 400, or 600 described below with reference to fig. 2A, 4, and 6A-6B. The user device 104 is configured to be communicatively coupled to the second user device 122 via a direct communication connection (such as bluetooth, NFC, BTLE, etc.) or via a wired or wireless network (such as a local Wi-Fi network). In some examples, the second user device 122 is configured to act as a proxy between the user device 104 and the DA server 106. For example, the DA client 102 of the user device 104 is configured to transmit information (e.g., user requests received at the user device 104) to the DA server 106 via the second user device 122. The DA server 106 processes this information and returns relevant data (e.g., data content in response to a user request) to the user device 104 via the second user device 122.
In some examples, the user device 104 is configured to send abbreviated requests for data to the second user device 122 to reduce the amount of information transmitted from the user device 104. The second user device 122 is configured to determine supplemental information to be added to the abbreviated request in order to generate a complete request for transmission to the DA server 106. This system architecture may advantageously allow a user device 104 with limited communication capabilities and/or limited battery power (e.g., a watch or similar compact electronic device) to access services provided by the DA server 106 by using a second user device 122 with greater communication capabilities and/or battery power (e.g., a mobile phone, laptop, tablet, etc.) as a proxy to the DA server 106. Although only two user devices 104 and 122 are shown in fig. 1, it should be understood that in some examples, system 100 may include any number and type of user devices configured to communicate with the DA server 106 in this proxy configuration.
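The proxy arrangement just described (an abbreviated request supplemented by the second user device before being forwarded to the DA server) can be sketched as follows. The AbbreviatedRequest and CompleteRequest types and their fields are illustrative assumptions, not a documented wire format.

```swift
import Foundation

// Hypothetical sketch of the proxy configuration: a low-power device sends an
// abbreviated request, and a companion device adds supplemental information
// before forwarding a complete request to the assistant server.
struct AbbreviatedRequest: Codable {
    let utterance: String
}

struct CompleteRequest: Codable {
    let utterance: String
    let locale: String
    let deviceModel: String
}

// Runs on the second user device (e.g., a phone acting as proxy for a watch).
func supplement(_ request: AbbreviatedRequest) -> CompleteRequest {
    CompleteRequest(utterance: request.utterance,
                    locale: Locale.current.identifier,
                    deviceModel: "proxy-device")   // placeholder supplemental data
}

let fromWatch = AbbreviatedRequest(utterance: "What's next on my calendar?")
let toServer = supplement(fromWatch)
print("Forwarding to DA server:", toServer)
```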
Although the digital assistant shown in fig. 1 includes both a client-side portion (e.g., DA client 102) and a server-side portion (e.g., DA server 106), in some examples, the functionality of the digital assistant is implemented as a standalone application installed on a user device. Furthermore, the division of functionality between the client portion and the server portion of the digital assistant may vary in different implementations. For example, in some examples, the DA client is a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the digital assistant to the back-end server.
2. Electronic equipment
Attention is now directed to implementations of an electronic device for implementing the client-side portion of a digital assistant. Fig. 2A is a block diagram illustrating a portable multifunction device 200 with a touch-sensitive display system 212 in accordance with some embodiments. Touch-sensitive display 212 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system". Device 200 includes memory 202 (which optionally includes one or more computer-readable storage media), memory controller 222, one or more processing units (CPUs) 220, peripheral interface 218, RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, input/output (I/O) subsystem 206, other input control devices 216, and external ports 224. The device 200 optionally includes one or more optical sensors 264. The device 200 optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on the device 200 (e.g., a touch-sensitive surface of the device 200 such as the touch-sensitive display system 212). The device 200 optionally includes one or more haptic output generators 267 for generating haptic outputs on the device 200 (e.g., generating haptic outputs on a touch-sensitive surface such as the touch-sensitive display system 212 of the device 200 or the touch pad 455 of the device 400). These components optionally communicate via one or more communication buses or signal lines 203.
As used in this specification and the claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact), or to an alternative to the force or pressure of the contact on the touch-sensitive surface (surrogate). The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., weighted average) to determine an estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch sensitive surface. Alternatively, the size of the contact area and/or its variation detected on the touch-sensitive surface, the capacitance of the touch-sensitive surface and/or its variation in the vicinity of the contact and/or the resistance of the touch-sensitive surface and/or its variation in the vicinity of the contact are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, surrogate measurements of contact force or pressure are directly used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to surrogate measurements). In some implementations, surrogate measurements of contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functions that are not otherwise accessible to the user on a smaller sized device of limited real estate for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, touch-sensitive surface, or physical/mechanical control, such as a knob or button).
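As one illustration of the estimation strategy mentioned above (combining readings from multiple force sensors with a weighted average and comparing the result to an intensity threshold), consider the following sketch; the readings, weights, and threshold values are arbitrary examples chosen only for this illustration.

```swift
import Foundation

// Hypothetical sketch: estimate contact intensity from several force-sensor
// readings using a weighted average, then compare against a threshold.
func estimatedIntensity(readings: [Double], weights: [Double]) -> Double {
    precondition(readings.count == weights.count && !readings.isEmpty)
    let weightedSum = zip(readings, weights).map { $0 * $1 }.reduce(0, +)
    return weightedSum / weights.reduce(0, +)
}

let intensity = estimatedIntensity(readings: [0.42, 0.55, 0.47],
                                   weights: [1.0, 2.0, 1.0])   // center sensor weighted higher
let lightPressThreshold = 0.5   // illustrative threshold, in the same surrogate units
print(intensity > lightPressThreshold ? "light press" : "below threshold")
```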
As used in this specification and in the claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or a displacement of a component relative to the center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other part of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or a component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of the physical actuator button, associated with the touch-sensitive surface, that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface may optionally be interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when the smoothness of the touch-sensitive surface is unchanged. While such interpretations of touch by a user will be subject to the user's individualized sensory perception, many sensory perceptions of touch are common to a large majority of users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate that sensory perception for a typical (or average) user.
It should be understood that the device 200 is only one example of a portable multifunction device, and that the device 200 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 2A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 202 includes one or more computer-readable storage media. These computer-readable storage media are, for example, tangible and non-transitory. Memory 202 includes high-speed random access memory and also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 222 controls access to memory 202 by other components of device 200.
In some examples, the non-transitory computer-readable storage medium of memory 202 is used to store instructions (e.g., for performing aspects of the processes described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing aspects of the processes described below) are stored on a non-transitory computer-readable storage medium (not shown) of the server system 108 or divided between a non-transitory computer-readable storage medium of the memory 202 and a non-transitory computer-readable storage medium of the server system 108.
Peripheral interface 218 is used to couple the input and output peripherals of the device to CPU 220 and memory 202. The one or more processors 220 run or execute various software programs and/or sets of instructions stored in the memory 202 to perform various functions of the device 200 and process data. In some embodiments, peripheral interface 218, CPU 220, and memory controller 222 are implemented on a single chip, such as chip 204. In some other embodiments, they are implemented on separate chips.
The RF (radio frequency) circuitry 208 receives and transmits RF signals, also referred to as electromagnetic signals. RF circuitry 208 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 208 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 208 optionally communicates via wireless communication with networks, such as the internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and with other devices. The RF circuitry 208 optionally includes well-known circuitry for detecting a Near Field Communication (NFC) field, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies, including but not limited to Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 210, speaker 211, and microphone 213 provide an audio interface between the user and device 200. Audio circuit 210 receives audio data from peripheral interface 218, converts the audio data into an electrical signal, and transmits the electrical signal to speaker 211. The speaker 211 converts electrical signals into sound waves that are audible to humans. The audio circuit 210 also receives electrical signals converted from sound waves by the microphone 213. Audio circuitry 210 converts the electrical signals to audio data and transmits the audio data to peripheral interface 218 for processing. The audio data is retrieved from and/or transmitted to the memory 202 and/or the RF circuitry 208 via the peripheral interface 218. In some embodiments, the audio circuit 210 also includes a headset jack (e.g., 312 in fig. 3). The headset jack provides an interface between the audio circuit 210 and a removable audio input/output peripheral, such as an output-only earphone or a headset having both an output (e.g., a monaural earphone or a binaural earphone) and an input (e.g., a microphone).
I/O subsystem 206 couples input/output peripheral devices on device 200, such as touch screen 212 and other input control devices 216 to peripheral interface 218. The I/O subsystem 206 optionally includes a display controller 256, an optical sensor controller 258, an intensity sensor controller 259, a haptic feedback controller 261, and one or more input controllers 260 for other input or control devices. One or more input controllers 260 receive electrical signals from/send electrical signals to other input control devices 216. Other input control devices 216 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative implementations, the input controller 260 is optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. One or more buttons (e.g., 308 in fig. 3) optionally include an up/down button for volume control of speaker 211 and/or microphone 213. The one or more buttons optionally include a push button (e.g., 306 in fig. 3).
A quick press of the push button may disengage the lock of the touch screen 212 or begin a process of unlocking the device using gestures on the touch screen, as described in U.S. Patent Application No. 11/322,549, entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, now U.S. Patent No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 306) causes the device 200 to power on or off. The user is able to customize the functionality of one or more of the buttons. Touch screen 212 is used to implement virtual buttons or soft buttons and one or more soft keyboards.
The touch sensitive display 212 provides an input interface and an output interface between the device and the user. Display controller 256 receives electrical signals from touch screen 212 and/or transmits electrical signals to touch screen 212. Touch screen 212 displays visual output to a user. Visual output includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some implementations, some or all of the visual output corresponds to a user interface object.
Touch screen 212 has a touch-sensitive surface, sensor or set of sensors that receives input from a user based on haptic and/or tactile contact. Touch screen 212 and display controller 256 (along with any associated modules and/or sets of instructions in memory 202) detect contact (and any movement or interruption of the contact) on touch screen 212 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 212. In an exemplary embodiment, the point of contact between touch screen 212 and the user corresponds to a user's finger.
Touch screen 212 uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, but other display technologies may be used in other embodiments. Touch screen 212 and display controller 256 detect contact and any movement or interruption thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 212. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in devices from Apple Inc. (Cupertino, California).
In some implementations, the touch-sensitive display of touch screen 212 is similar to the multi-touch-sensitive touch pads described in the following U.S. Patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is incorporated herein by reference in its entirety. However, touch screen 212 displays visual output from device 200, whereas the touch-sensitive touch pads do not provide visual output.
Touch sensitive displays in some implementations of touch screen 212 are described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, "Multipoint Touch Surface Controller", filed on 5/2/2006; (2) U.S. patent application Ser. No. 10/840,862, "Multipoint Touchscreen", filed 5/6/2004; (3) U.S. patent application Ser. No. 10/903,964, "Gestures For Touch Sensitive Input Devices", filed 7.30.2004; (4) U.S. patent application Ser. No. 11/048,264, "Gestures For Touch Sensitive Input Devices", filed 1/31/2005; (5) U.S. patent application Ser. No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices", filed 1/18/2005; (6) U.S. patent application Ser. No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface", filed 9/16/2005; (7) U.S. patent application Ser. No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface", filed 9/16/2005; (8) U.S. patent application Ser. No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard", filed on 9/16/2005; and (9) U.S. patent application Ser. No. 11/367,749, "Multi-Functional Hand-Held Device," filed 3/2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 212 has, for example, a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of about 160 dpi. The user makes contact with touch screen 212 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily through finger-based contact and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor location or command for performing the action desired by the user.
In some embodiments, the device 200 includes a touch pad (not shown) for activating or deactivating a specific function in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is a touch sensitive surface separate from the touch screen 212 or an extension of the touch sensitive surface formed by the touch screen.
The device 200 also includes a power system 262 for powering the various components. The power system 262 includes a power management system, one or more power sources (e.g., batteries, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The device 200 also includes one or more optical sensors 264. Fig. 2A shows an optical sensor coupled to an optical sensor controller 258 in the I/O subsystem 206. The optical sensor 264 includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 264 receives light projected through one or more lenses from the environment and converts the light into data representing an image. In conjunction with an imaging module 243 (also called a camera module), the optical sensor 264 captures still images or video. In some embodiments, the optical sensor is located at the back of the device 200, opposite the touch screen display 212 at the front of the device, such that the touch screen display is used as a viewfinder for still image and/or video image acquisition. In some embodiments, the optical sensor is located at the front of the device such that the user's image is acquired for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 264 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 264 is used with the touch screen display for both video conferencing and still image and/or video image acquisition.
The device 200 optionally further includes one or more contact strength sensors 265. Fig. 2A shows a contact intensity sensor coupled to an intensity sensor controller 259 in the I/O subsystem 206. The contact strength sensor 265 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). The contact strength sensor 265 receives contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 212). In some embodiments, at least one contact intensity sensor is located on the rear of the device 200, opposite the touch screen display 212 located on the front of the device 200.
The device 200 also includes one or more proximity sensors 266. Fig. 2A shows a proximity sensor 266 coupled to the peripheral interface 218. Alternatively, the proximity sensor 266 is coupled to the input controller 260 in the I/O subsystem 206. The proximity sensor 266 performs as described in the following U.S. patent applications: no.11/241,839, entitled "Proximity Detector In Handheld Device"; no.11/240,788, entitled "Proximity Detector In Handheld Device"; no.11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; no.11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and No.11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some implementations, the proximity sensor turns off and disables the touch screen 212 when the multifunction device is placed near the user's ear (e.g., when the user is making a telephone call).
The device 200 optionally further comprises one or more tactile output generators 267. Fig. 2A illustrates a tactile output generator coupled to a haptic feedback controller 261 in the I/O subsystem 206. The tactile output generator 267 optionally includes one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components for converting electrical signals into tactile outputs on the device). The tactile output generator 267 receives tactile feedback generation instructions from the haptic feedback module 233 and generates tactile outputs on the device 200 that can be perceived by a user of the device 200. In some embodiments, at least one tactile output generator is collocated with or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 212), and optionally generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of the surface of device 200) or laterally (e.g., back and forth in the same plane as the surface of device 200). In some embodiments, at least one tactile output generator sensor is located on the back of the device 200, opposite the touch screen display 212 located on the front of the device 200.
The device 200 also includes one or more accelerometers 268. Fig. 2A shows accelerometer 268 coupled to peripheral interface 218. Alternatively, accelerometer 268 is coupled to input controller 260 in I/O subsystem 206. Accelerometer 268 performs as described in the following U.S. patent publications: U.S. Patent Publication No. 20050190059, "Acceleration-based Theft Detection System for Portable Electronic Devices," and U.S. Patent Publication No. 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some implementations, information is displayed in a portrait view or a landscape view on the touch screen display based on an analysis of data received from the one or more accelerometers. The device 200 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown), in addition to the one or more accelerometers 268, for obtaining information regarding the position and orientation (e.g., portrait or landscape) of the device 200.
In some embodiments, the software components stored in memory 202 include an operating system 226, a communication module (or set of instructions) 228, a contact/motion module (or set of instructions) 230, a graphics module (or set of instructions) 232, a text input module (or set of instructions) 234, a Global Positioning System (GPS) module (or set of instructions) 235, a digital assistant client module 229, and an application program (or set of instructions) 236. In addition, the memory 202 stores data and models, such as user data and models 231. Further, in some embodiments, memory 202 (fig. 2A) or 470 (fig. 4) stores device/global internal state 257, as shown in fig. 2A and 4. The device/global internal state 257 includes one or more of the following: an active application state indicating which applications (if any) are currently active; display status, indicating what applications, views, or other information occupy various areas of the touch screen display 212; sensor status, including information obtained from the various sensors of the device and the input control device 216; and location information relating to the device location and/or pose.
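The device/global internal state enumerated above can be pictured as a simple record; the Swift struct below uses illustrative field names and types that are not drawn from any actual implementation.

```swift
import Foundation

// Hypothetical sketch of the device/global internal state 257 described above.
struct DeviceGlobalInternalState {
    var activeApplications: [String]          // which applications, if any, are active
    var displayState: [String: String]        // which app/view occupies each display region
    var sensorState: [String: Double]         // latest readings from device sensors
    var location: (latitude: Double, longitude: Double)?
    var orientation: String                   // e.g., "portrait" or "landscape"
}

var state = DeviceGlobalInternalState(activeApplications: ["com.example.musicapp"],
                                      displayState: ["main": "nowPlayingView"],
                                      sensorState: ["ambientLight": 120.0],
                                      location: nil,
                                      orientation: "portrait")
state.orientation = "landscape"   // updated as sensors report a rotation
```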
Operating system 226 (e.g., darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware components and software components.
The communication module 228 facilitates communication with other devices through one or more external ports 224 and also includes various software components for processing data received by the RF circuitry 208 and/or the external ports 224. The external port 224 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted to be coupled directly to other devices or indirectly via a network (e.g., the internet, a wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on devices from Apple Inc.
The contact/motion module 230 optionally detects contact with the touch screen 212 (in conjunction with the display controller 256) and other touch-sensitive devices (e.g., a touch pad or physical click wheel). The contact/motion module 230 includes various software components for performing various operations related to contact detection, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining the strength of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 230 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining a speed (magnitude), a velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 230 and the display controller 256 detect contact on a touch pad.
In some implementations, the contact/motion module 230 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of specific physical actuators and may be adjusted without changing the physical hardware of the device 200). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
The contact/motion module 230 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, gestures are optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger press event, and then detecting a finger lift (lift off) event at the same location (or substantially the same location) as the finger press event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then detecting a finger-up (lift-off) event.
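The pattern-based gesture detection described above (a tap as a finger-down followed by a finger-up at roughly the same location; a swipe as a finger-down, one or more drags, then a finger-up) can be sketched as follows. The ContactEvent and Gesture types and the 10-point tap radius are assumptions made only for this illustration.

```swift
import Foundation

// Hypothetical sketch of gesture classification from a sequence of contact events.
struct ContactEvent {
    enum Kind { case down, drag, up }
    let kind: Kind
    let x: Double
    let y: Double
}

enum Gesture { case tap, swipe, unknown }

func classify(_ events: [ContactEvent], tapRadius: Double = 10.0) -> Gesture {
    guard let first = events.first, first.kind == .down,
          let last = events.last, last.kind == .up else { return .unknown }
    // Distance between the finger-down and finger-up locations.
    let distance = ((last.x - first.x) * (last.x - first.x)
                  + (last.y - first.y) * (last.y - first.y)).squareRoot()
    let hasDrag = events.contains { $0.kind == .drag }
    if distance <= tapRadius && !hasDrag { return .tap }      // down then up at same spot
    if hasDrag { return .swipe }                              // down, drag(s), then up
    return .unknown
}

let gesture = classify([ContactEvent(kind: .down, x: 100, y: 200),
                        ContactEvent(kind: .up,   x: 101, y: 201)])
print(gesture)   // tap
```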
Graphics module 232 includes various known software components for rendering and displaying graphics on touch screen 212 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 232 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 232 receives one or more codes designating graphics to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 256.
Haptic feedback module 233 includes various software components for generating instructions for use by one or more haptic output generators 267 to generate haptic output at one or more locations on device 200 in response to user interaction with device 200.
The text input module 234, which in some examples is a component of the graphics module 232, provides a soft keyboard for entering text in various applications (e.g., contacts 237, email 240, IM 241, browser 247, and any other application requiring text input).
The GPS module 235 determines the location of the device and provides this information for use in various applications (e.g., to the phone 238 for use in location-based dialing, to the camera 243 as picture/video metadata, and to applications that provide location-based services, such as weather gadgets, local page gadgets, and map/navigation gadgets).
The digital assistant client module 229 includes various client-side digital assistant instructions to provide client-side functionality of the digital assistant. For example, the digital assistant client module 229 is capable of accepting acoustic input (e.g., voice input), text input, touch input, and/or gesture input through various user interfaces of the portable multifunction device 200 (e.g., microphone 213, one or more accelerometers 268, touch-sensitive display system 212, one or more optical sensors 264, other input control devices 216, etc.). The digital assistant client module 229 is also capable of providing output in audio form (e.g., voice output), visual form, and/or tactile form through various output interfaces of the portable multifunction device 200 (e.g., speaker 211, touch-sensitive display system 212, one or more tactile output generators 267, etc.). For example, the output is provided as voice, sound, an alert, a text message, a menu, graphics, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, the digital assistant client module 229 communicates with the DA server 106 using the RF circuitry 208.
The user data and model 231 includes various data associated with the user (e.g., user-specific vocabulary data, user preference data, user-specified name pronunciations, data from a user electronic address book, backlog, shopping list, etc.) to provide client-side functionality of the digital assistant. Further, the user data and models 231 include various models (e.g., speech recognition models, statistical language models, natural language processing models, ontologies, task flow models, service models, etc.) for processing user inputs and determining user intent.
In some examples, the digital assistant client module 229 utilizes the various sensors, subsystems, and peripherals of the portable multifunction device 200 to gather additional information from the surrounding environment of the portable multifunction device 200 to establish a context associated with a user, current user interaction, and/or current user input. In some examples, the digital assistant client module 229 provides contextual information, or a subset thereof, along with user input to the DA server 106 to help infer user intent. In some examples, the digital assistant also uses the context information to determine how to prepare the output and communicate it to the user. The context information is referred to as context data.
In some examples, the contextual information accompanying the user input includes sensor information such as lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, and the like. In some examples, the contextual information may also include a physical state of the device, such as device orientation, device location, device temperature, power level, speed, acceleration, movement pattern, cellular signal strength, and the like. In some examples, information related to the software state of the portable multifunction device 200, such as running processes, installed programs, past and current network activities, background services, error logs, resource usage, and the like, is provided to the DA server 106 as contextual information associated with the user input.
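For illustration only, the following sketch shows one possible shape of such context data as it might be packaged alongside a user request; the field and type names are assumptions, not the actual format exchanged with the DA server 106:

```swift
import Foundation

// Hypothetical shape of context data accompanying a user request; the field
// names and structure are assumptions for illustration only.
struct DeviceContext: Codable {
    // Sensor information about the surrounding environment.
    var ambientNoiseLevel: Double?
    var ambientLightLevel: Double?
    // Physical state of the device.
    var orientation: String?
    var batteryLevel: Double?
    var cellularSignalStrength: Int?
    // Software state of the device.
    var runningProcesses: [String]
    var installedApplications: [String]
}

struct AssistantRequest: Codable {
    var utterance: String       // the user's natural-language input
    var context: DeviceContext  // contextual information accompanying the input
}

let request = AssistantRequest(
    utterance: "Open my workout playlist",
    context: DeviceContext(ambientNoiseLevel: 0.3,
                           ambientLightLevel: nil,
                           orientation: "portrait",
                           batteryLevel: 0.82,
                           cellularSignalStrength: 3,
                           runningProcesses: ["fitness", "music"],
                           installedApplications: ["fitness", "music", "browser"])
)
// Serialized before being sent alongside the user input.
if let json = try? JSONEncoder().encode(request) {
    print(String(decoding: json, as: UTF8.self))
}
```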
In some examples, the digital assistant client module 229 selectively provides information (e.g., user data 231) stored on the portable multifunction device 200 in response to a request from the DA server 106. In some examples, the digital assistant client module 229 also brings up additional input from the user via a natural language dialog or other user interface upon request by the DA server 106. The digital assistant client module 229 communicates this additional input to the DA server 106 to assist the DA server 106 in intent inference and/or to implement user intent expressed in the user request.
The digital assistant is described in more detail below with reference to fig. 7A-7C. It should be appreciated that the digital assistant client module 229 may include any number of sub-modules of the digital assistant module 726 described below.
The application 236 includes the following modules (or instruction sets) or a subset or superset thereof:
● Contacts module 237 (sometimes referred to as an address book or contact list);
● A telephone module 238;
● A video conference module 239;
● An email client module 240;
● An Instant Messaging (IM) module 241;
● A fitness support module 242;
● A camera module 243 for still and/or video images;
● An image management module 244;
● A video player module;
● A music player module;
● A browser module 247;
● A calendar module 248;
● A desktop applet module 249 that, in some examples, includes one or more of the following: weather desktop applet 249-1, stock market desktop applet 249-2, calculator desktop applet 249-3, alarm clock desktop applet 249-4, dictionary desktop applet 249-5, other desktop applets obtained by the user, and user-created desktop applet 249-6;
● A desktop applet creator module 250 for forming a user-created desktop applet 249-6;
● A search module 251;
● A video and music player module 252 that incorporates the video player module and the music player module;
● A notepad module 253;
● A map module 254; and/or
● An online video module 255.
Examples of other applications 236 stored in the memory 202 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, contacts module 237 is used to manage an address book or contact list (e.g., in application internal state 292 of contacts module 237 stored in memory 202 or memory 470), including: adding one or more names to the address book; deleting one or more names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; sorting and ordering names; providing a telephone number or email address to initiate and/or facilitate communications through telephone 238, video conferencing module 239, email 240, or IM 241; and so forth.
In conjunction with RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, telephone module 238 is used to input a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contact module 237, modify telephone numbers that have been entered, dial a corresponding telephone number, conduct a conversation, and disconnect or hang-up when the conversation is completed. As described above, wireless communication uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, touch screen 212, display controller 256, optical sensor 264, optical sensor controller 258, contact/motion module 230, graphics module 232, text input module 234, contacts module 237, and telephony module 238, videoconferencing module 239 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other parties according to user instructions.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, email client module 240 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 244, the email client module 240 makes it very easy to create and send emails with still or video images captured by the camera module 243.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, instant message module 241 includes executable instructions for: inputting a character sequence corresponding to an instant message, modifying previously inputted characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages include graphics, photographs, audio files, video files, and/or other attachments as supported in MMS and/or Enhanced Messaging Services (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, map module 254, and music player module, workout support module 242 includes executable instructions for: creating workouts (e.g., with time, distance, and/or calorie burn targets); communicate with a fitness sensor (exercise device); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for exercise; and displaying, storing and transmitting the fitness data.
In conjunction with touch screen 212, display controller 256, one or more optical sensors 264, optical sensor controller 258, contact/motion module 230, graphics module 232, and image management module 244, camera module 243 includes executable instructions for: capturing still images or videos (including video streams) and storing them in the memory 202, modifying features of still images or videos, or deleting still images or videos from the memory 202.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, and camera module 243, image management module 244 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, tagging, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, browser module 247 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, email client module 240, and browser module 247, calendar module 248 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with the RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, and browser module 247, the desktop applet module 249 is a mini-application that can be downloaded and used by a user (e.g., weather desktop applet 249-1, stock market desktop applet 249-2, calculator desktop applet 249-3, alarm clock desktop applet 249-4, and dictionary desktop applet 249-5) or created by the user (e.g., user-created desktop applet 249-6). In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applets).
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, and browser module 247, a desktop applet creator module 250 is used by a user to create a desktop applet (e.g., to cause a user-specified portion of a web page to become a desktop applet).
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, search module 251 includes executable instructions for searching memory 202 for text, music, sound, images, video, and/or other files matching one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with the touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuit 210, speaker 211, RF circuit 208, and browser module 247, the video and music player module 252 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on the touch screen 212 or on an external display connected via the external port 224). In some embodiments, the device 200 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple inc.).
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, notepad module 253 includes executable instructions for creating and managing notepads, backlog, etc. in accordance with user instructions.
In conjunction with the RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, and browser module 247, map module 254 is configured to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to shops and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuit 210, speaker 211, RF circuit 208, text input module 234, email client module 240, and browser module 247, online video module 255 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on a touch screen or on a connected external display via external port 224), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats (such as H.264). In some embodiments, the instant messaging module 241 is used instead of the email client module 240 to send links to particular online videos. Additional description of online video applications can be found in U.S. provisional patent application Ser. No.60/936,562, titled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. patent application Ser. No.11/968,067, titled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. For example, the video player module may be combined with the music player module into a single module (e.g., video and music player module 252 in fig. 2A). In some embodiments, memory 202 stores a subset of the modules and data structures described above. Further, the memory 202 stores additional modules and data structures not described above.
In some embodiments, device 200 is a device on which the operation of a predefined set of functions is performed exclusively by a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operation of the device 200, the number of physical input control devices (such as push buttons, dials, etc.) on the device 200 is reduced.
A predefined set of functions performed solely by the touch screen and/or touch pad optionally includes navigation between user interfaces. In some embodiments, the touch pad, when touched by a user, navigates the device 200 from any user interface displayed on the device 200 to a main menu, home menu, or root menu. In such implementations, a touch pad is used to implement a "menu button". In some other embodiments, the menu buttons are physical push buttons or other physical input control devices, rather than touch pads.
Fig. 2B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 202 (fig. 2A) or memory 470 (fig. 4) includes event sorter 270 (e.g., in operating system 226) and corresponding applications 236-1 (e.g., any of the aforementioned applications 237-251, 255, 480-490).
Event classifier 270 receives event information and determines the application 236-1, and the application view 291 of application 236-1, to which to deliver the event information. Event sorter 270 includes event monitor 271 and event dispatcher module 274. In some embodiments, the application 236-1 includes an application internal state 292 that indicates one or more current application views that are displayed on the touch-sensitive display 212 when the application is active or executing. In some embodiments, the device/global internal state 257 is used by the event classifier 270 to determine which application(s) are currently active, and the application internal state 292 is used by the event classifier 270 to determine the application view 291 to which to deliver event information.
In some implementations, the application internal state 292 includes additional information, such as one or more of the following: restoration information to be used when the application 236-1 resumes execution, user interface state information indicating that the information is being displayed or ready for display by the application 236-1, a state queue for enabling the user to return to a previous state or view of the application 236-1, and a repeat/undo queue of previous actions taken by the user.
Event monitor 271 receives event information from peripheral interface 218. The event information includes information about sub-events (e.g., user touches on the touch sensitive display 212 as part of a multi-touch gesture). Peripheral interface 218 transmits information it receives from I/O subsystem 206 or sensors, such as proximity sensor 266, one or more accelerometers 268, and/or microphone 213 (via audio circuitry 210). The information received by the peripheral interface 218 from the I/O subsystem 206 includes information from the touch-sensitive display 212 or touch-sensitive surface.
In some embodiments, event monitor 271 sends requests to peripheral interface 218 at predetermined intervals. In response, peripheral interface 218 transmits the event information. In other embodiments, the peripheral interface 218 transmits event information only if there is a significant event (e.g., an input above a predetermined noise threshold is received and/or an input exceeding a predetermined duration is received).
In some implementations, the event classifier 270 also includes a hit view determination module 272 and/or an active event identifier determination module 273.
When the touch sensitive display 212 displays more than one view, the hit view determination module 272 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected corresponds to a level of programming within the application's programming hierarchy or view hierarchy. For example, the lowest horizontal view in which a touch is detected is referred to as the hit view, and the set of events that are considered to be correct inputs is determined based at least in part on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 272 receives information related to sub-events of touch-based gestures. When an application has multiple views organized in a hierarchy, hit view determination module 272 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in a sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module 272, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
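A minimal sketch of this hit-view determination follows; it finds the lowest view in a simplified hierarchy whose frame contains the touch location. The types are deliberately simplified assumptions (absolute coordinates, no coordinate conversion) rather than the platform's actual view classes:

```swift
// Illustrative sketch of hit-view determination: return the deepest view whose
// frame contains the initial touch location. Simplified assumption: all frames
// are expressed in the same absolute coordinate space.
struct Point { var x: Double; var y: Double }
struct Rect {
    var x: Double, y: Double, width: Double, height: Double
    func contains(_ p: Point) -> Bool {
        p.x >= x && p.x < x + width && p.y >= y && p.y < y + height
    }
}

final class View {
    let name: String
    let frame: Rect
    var subviews: [View] = []
    init(name: String, frame: Rect) { self.name = name; self.frame = frame }
}

func hitView(in root: View, at point: Point) -> View? {
    guard root.frame.contains(point) else { return nil }
    // Prefer the lowest subview that contains the point; otherwise this view is the hit view.
    for subview in root.subviews {
        if let hit = hitView(in: subview, at: point) { return hit }
    }
    return root
}

let root = View(name: "window", frame: Rect(x: 0, y: 0, width: 320, height: 480))
let panel = View(name: "panel", frame: Rect(x: 0, y: 0, width: 320, height: 100))
let button = View(name: "button", frame: Rect(x: 10, y: 10, width: 80, height: 40))
panel.subviews = [button]
root.subviews = [panel]
print(hitView(in: root, at: Point(x: 20, y: 20))?.name ?? "none")  // "button" — the lowest view hit
```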
The activity event recognizer determination module 273 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event identifier determination module 273 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the activity event recognizer determination module 273 determines that all views that include the physical location of the sub-event are actively engaged views and, thus, that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely localized to an area associated with one particular view, the higher view in the hierarchy will remain the actively engaged view.
Event dispatcher module 274 dispatches event information to an event recognizer (e.g., event recognizer 280). In embodiments that include an active event recognizer determination module 273, the event dispatcher module 274 delivers event information to the event recognizer determined by the active event recognizer determination module 273. In some embodiments, the event dispatcher module 274 stores event information in event queues that is retrieved by the corresponding event receiver 282.
In some embodiments, operating system 226 includes event classifier 270. Alternatively, application 236-1 includes event classifier 270. In yet another embodiment, the event classifier 270 is a stand-alone module or part of another module stored in the memory 202 (such as the contact/motion module 230).
In some embodiments, the application 236-1 includes a plurality of event handlers 290 and one or more application views 291, each of which includes instructions for processing touch events that occur within a corresponding view of the user interface of the application. Each application view 291 of the application 236-1 includes one or more event recognizers 280. Typically, the respective application view 291 includes a plurality of event recognizers 280. In other embodiments, one or more of the event recognizers 280 are part of a separate module, such as a user interface toolkit (not shown) or a higher-level object from which the application 236-1 inherits methods and other properties. In some implementations, the respective event handlers 290 include one or more of the following: the data updater 276, the object updater 277, the GUI updater 278, and/or the event data 279 received from the event classifier 270. Event handler 290 utilizes or invokes data updater 276, object updater 277 or GUI updater 278 to update the application internal state 292. Alternatively, one or more of the application views 291 include one or more corresponding event handlers 290. Additionally, in some implementations, one or more of the data updater 276, the object updater 277, and the GUI updater 278 are included in the respective application view 291.
The corresponding event identifier 280 receives event information (e.g., event data 279) from the event classifier 270 and identifies events from the event information. Event recognizer 280 includes event receiver 282 and event comparator 284. In some embodiments, event recognizer 280 further includes at least a subset of metadata 283 and event transfer instructions 288 (which include sub-event transfer instructions).
Event receiver 282 receives event information from event sorter 270. The event information includes information about sub-events such as touches or touch movements. The event information also includes additional information, such as the location of the sub-event, according to the sub-event. When a sub-event relates to the motion of a touch, the event information also includes the rate and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another orientation (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about a current orientation of the device (also referred to as a device pose).
Event comparator 284 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of the event or sub-event. In some embodiments, event comparator 284 includes event definition 286. Event definition 286 includes definitions of events (e.g., a predefined sequence of sub-events), such as event 1 (287-1), event 2 (287-2), and other events. In some embodiments, sub-events in event (287) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (287-1) is a double click on the displayed object. For example, a double click includes a first touch on the displayed object for a predetermined length of time (touch start), a first lift-off on the displayed object for a predetermined length of time (touch end), a second touch on the displayed object for a predetermined length of time (touch start), and a second lift-off on the displayed object for a predetermined length of time (touch end). In another example, the definition of event 2 (287-2) is a drag on the displayed object. For example, dragging includes touching (or contacting) on the displayed object for a predetermined period of time, movement of the touch on the touch-sensitive display 212, and lifting of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 290.
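The sketch below illustrates how a sub-event sequence might be compared against a predefined event definition such as the double tap described above; the sub-event names and the timing limit are assumptions for this example:

```swift
// Sketch of matching a sub-event sequence against a predefined event definition
// (here, a double tap: touch begin/end twice, each phase within a time limit).
// Timing values and names are illustrative assumptions.
enum SubEvent {
    case touchBegan(time: Double)
    case touchEnded(time: Double)
    case touchMoved(time: Double)
}

func isDoubleTap(_ events: [SubEvent], maxPhaseDuration: Double = 0.3) -> Bool {
    // Expect exactly: began, ended, began, ended, with every phase short enough.
    guard events.count == 4,
          case .touchBegan(let t0) = events[0],
          case .touchEnded(let t1) = events[1],
          case .touchBegan(let t2) = events[2],
          case .touchEnded(let t3) = events[3] else { return false }
    return (t1 - t0) <= maxPhaseDuration
        && (t2 - t1) <= maxPhaseDuration
        && (t3 - t2) <= maxPhaseDuration
}

let taps: [SubEvent] = [.touchBegan(time: 0.00), .touchEnded(time: 0.10),
                        .touchBegan(time: 0.25), .touchEnded(time: 0.35)]
print(isDoubleTap(taps))  // true
```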
In some embodiments, event definition 287 includes a definition of an event for a corresponding user interface object. In some implementations, event comparator 284 performs hit testing to determine which user interface object is associated with the sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 212, when a touch is detected on touch-sensitive display 212, event comparator 284 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 290, the event comparator uses the results of the hit test to determine which event handler 290 should be activated. For example, event comparator 284 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (287) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 280 determines that the sequence of sub-events does not match any of the events in the event definition 286, the respective event recognizer 280 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 280 includes metadata 283 having configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to different levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the corresponding event recognizer 280 activates an event handler 290 associated with the event. In some implementations, the respective event identifier 280 delivers event information associated with the event to the event handler 290. Activating the event handler 290 is distinct from sending (and deferring sending of) sub-events to the corresponding hit view. In some embodiments, event recognizer 280 throws a marker associated with the recognized event, and event handler 290 associated with the marker obtains the marker and performs a predefined process.
In some implementations, the event delivery instructions 288 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 276 creates and updates data used in the application 236-1. For example, the data updater 276 updates a telephone number used in the contact module 237, or stores a video file used in the video player module. In some embodiments, object updater 277 creates and updates objects used in application 236-1. For example, the object updater 277 creates a new user interface object or updates the location of the user interface object. GUI updater 278 updates the GUI. For example, the GUI updater 278 prepares display information and sends the display information to the graphics module 232 for display on a touch-sensitive display.
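Illustratively, an event handler's use of the three updaters could be sketched as follows; the protocol and method names are assumptions and are not meant to reflect the actual module interfaces:

```swift
// Minimal sketch of an event handler delegating to data, object, and GUI
// updaters to refresh application state after a recognized event; names assumed.
protocol DataUpdater { func update(with event: String) }
protocol ObjectUpdater { func update(with event: String) }
protocol GUIUpdater { func redraw() }

final class EventHandler {
    let dataUpdater: DataUpdater
    let objectUpdater: ObjectUpdater
    let guiUpdater: GUIUpdater

    init(dataUpdater: DataUpdater, objectUpdater: ObjectUpdater, guiUpdater: GUIUpdater) {
        self.dataUpdater = dataUpdater
        self.objectUpdater = objectUpdater
        self.guiUpdater = guiUpdater
    }

    func handle(event: String) {
        dataUpdater.update(with: event)    // e.g., store a new telephone number or video file
        objectUpdater.update(with: event)  // e.g., create or reposition a user interface object
        guiUpdater.redraw()                // prepare display information for the graphics module
    }
}
```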
In some embodiments, event handler 290 includes or has access to data updater 276, object updater 277, and GUI updater 278. In some embodiments, the data updater 276, the object updater 277, and the GUI updater 278 are included in a single module of the respective application 236-1 or application view 291. In other embodiments, they are included in two or more software modules.
It should be appreciated that the above discussion regarding event handling of user touches on a touch sensitive display also applies to other forms of user inputs that utilize an input device to operate the multifunction device 200, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on the touchpad, such as taps, drags, scrolls, and the like; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally used as inputs corresponding to sub-events that define the event to be recognized.
Fig. 3 illustrates a portable multifunction device 200 with a touch screen 212 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 300. In this and other embodiments described below, a user can select one or more of these graphics by making a gesture on the graphics, for example, with one or more fingers 302 (not drawn to scale in the figures) or one or more styluses 303 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics will occur when a user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up and/or down), and/or scrolling of a finger that has been in contact with the device 200 (right to left, left to right, up and/or down). In some implementations or in some cases, inadvertent contact with the graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over an application icon optionally does not select the corresponding application.
The device 200 also includes one or more physical buttons, such as a "home" or menu button 304. As previously described, menu button 304 is used to navigate to any application 236 in a set of applications executing on device 200. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 212.
In some embodiments, device 200 includes a touch screen 212, menu buttons 304, a press button 306 for powering the device on/off and for locking the device, one or more volume adjustment buttons 308, a Subscriber Identity Module (SIM) card slot 310, a headset jack 312, and a docking/charging external port 224. Pressing button 306 is optionally used to turn on/off the device by pressing the button and holding the button in the pressed state for a predefined time interval; locking the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or unlock the device or initiate an unlocking process. In an alternative embodiment, the device 200 also accepts verbal input through the microphone 213 for activating or deactivating certain functions. The device 200 also optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on the touch screen 212 and/or one or more haptic output generators 267 for generating haptic outputs for a user of the device 200.
FIG. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 400 need not be portable. In some embodiments, the device 400 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). Device 400 typically includes one or more processing units (CPUs) 410, one or more network or other communication interfaces 460, memory 470, and one or more communication buses 420 for interconnecting these components. Communication bus 420 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 400 includes an input/output (I/O) interface 430 with a display 440, typically a touch screen display. The I/O interface 430 also optionally includes a keyboard and/or mouse (or other pointing device) 450 and a touch pad 455, a tactile output generator 457 (e.g., similar to one or more tactile output generators 267 described above with reference to fig. 2A), a sensor 459 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor (similar to one or more contact intensity sensors 265 described above with reference to fig. 2A)), for generating a tactile output on the device 400. Memory 470 includes high-speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 470 optionally includes one or more storage devices located remotely from CPU 410. In some embodiments, memory 470 stores programs, modules, and data structures, or a subset thereof, similar to those stored in memory 202 of portable multifunction device 200 (fig. 2A). In addition, the memory 470 optionally stores additional programs, modules, and data structures not present in the memory 202 of the portable multifunction device 200. For example, the memory 470 of the device 400 optionally stores the drawing module 480, the presentation module 482, the word processing module 484, the website creation module 486, the disk editing module 488, and/or the spreadsheet module 490, while the memory 202 of the portable multifunction device 200 (fig. 2A) optionally does not store these modules.
Each of the above-described elements in fig. 4 is, in some examples, stored in one or more of the previously mentioned memory devices. Each of the above-described modules corresponds to a set of instructions for performing the above-described functions. The above-described modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are combined or otherwise rearranged in various embodiments. In some embodiments, memory 470 stores a subset of the modules and data structures described above. Further, the memory 470 stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces that may be implemented on, for example, portable multifunction device 200.
Fig. 5A illustrates an exemplary user interface of an application menu on the portable multifunction device 200 in accordance with some embodiments. A similar user interface is implemented on device 400. In some embodiments, user interface 500 includes the following elements, or a subset or superset thereof:
● One or more signal strength indicators 502 for one or more wireless communications, such as cellular signals and Wi-Fi signals;
● Time 504;
● A bluetooth indicator 505;
● A battery status indicator 506;
● A tray 508 with icons for commonly used applications such as:
o An icon 516 of the phone module 238 labeled "phone", optionally including an indicator 514 of the number of missed calls or voice messages;
o An icon 518 of the email client module 240 labeled "mail", optionally including an indicator 510 of the number of unread emails;
o An icon 520 of the browser module 247 labeled "browser";
o An icon 522 of the video and music player module 252 (also referred to as iPod (trademark of Apple Inc.) module 252) labeled "iPod"; and
● Icons of other applications, such as:
o An icon 524 of the IM module 241 labeled "message";
o An icon 526 of the calendar module 248 labeled "calendar";
o An icon 528 of the image management module 244 labeled "photo";
o An icon 530 of the camera module 243 labeled "camera";
o An icon 532 of the online video module 255 labeled "online video";
o An icon 534 of the stock market desktop applet 249-2 labeled "stock market";
o An icon 536 of the map module 254 labeled "map";
o An icon 538 of the weather desktop applet 249-1 labeled "weather";
o An icon 540 of the alarm clock desktop applet 249-4 labeled "clock";
o An icon 542 of the fitness support module 242 labeled "fitness support";
o An icon 544 of the notepad module 253 labeled "notepad"; and
o An icon 546 of a settings application or module labeled "settings", which provides access to the settings of the device 200 and its various applications 236.
It should be noted that the icon labels shown in fig. 5A are merely exemplary. For example, the icon 522 of the video and music player module 252 is optionally labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 5B illustrates an exemplary user interface on a device (e.g., device 400 of fig. 4) having a touch-sensitive surface 551 (e.g., tablet or touch pad 455 of fig. 4) separate from a display 550 (e.g., touch screen display 212). The device 400 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 459) for detecting the intensity of contacts on the touch-sensitive surface 551 and/or one or more tactile output generators 457 for generating tactile outputs for a user of the device 400.
While some of the examples that follow will be given with reference to inputs on touch screen display 212 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface that is separate from the display, as shown in fig. 5B. In some implementations, the touch-sensitive surface (e.g., 551 in fig. 5B) has a primary axis (e.g., 552 in fig. 5B) that corresponds to the primary axis (e.g., 553 in fig. 5B) on the display (e.g., 550). According to these embodiments, the device detects contact (e.g., 560 and 562 in fig. 5B) with the touch-sensitive surface 551 at a location (e.g., 560 corresponds to 568 and 562 corresponds to 570 in fig. 5B) corresponding to the respective location on the display. In this way, user inputs (e.g., contacts 560 and 562 and their movements) detected by the device on the touch-sensitive surface (e.g., 551 in fig. 5B) are used by the device to manipulate a user interface on the display (e.g., 550 in fig. 5B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
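The correspondence between a location on a separate touch-sensitive surface and a location on the display can be sketched as a simple per-axis normalization; the sizes, names, and sample values below are assumptions used only to illustrate the mapping:

```swift
// Sketch of mapping a contact location on a separate touch-sensitive surface to
// the corresponding location on the display by preserving the relative position
// along each primary axis. Names and dimensions are illustrative assumptions.
struct Size { var width: Double; var height: Double }
struct Location { var x: Double; var y: Double }

func mapToDisplay(_ touch: Location, surface: Size, display: Size) -> Location {
    Location(x: touch.x / surface.width * display.width,
             y: touch.y / surface.height * display.height)
}

let surface = Size(width: 600, height: 400)     // e.g., a touch pad
let display = Size(width: 1920, height: 1280)   // e.g., the separate display
print(mapToDisplay(Location(x: 300, y: 100), surface: surface, display: display))
// Location(x: 960.0, y: 320.0)
```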
Additionally, while the following examples are primarily given with reference to finger inputs (e.g., finger contacts, single-finger flick gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a flick gesture is optionally replaced by a mouse click (e.g., instead of detection of contact, followed by ceasing to detect contact) when the cursor is over the position of the flick gesture. Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice are optionally used simultaneously, or that the mice and finger contacts are optionally used simultaneously.
Fig. 6A illustrates an exemplary personal electronic device 600. The device 600 includes a body 602. In some embodiments, device 600 includes some or all of the features described with respect to devices 200 and 400 (e.g., fig. 2A-4). In some implementations, the device 600 has a touch sensitive display 604, hereinafter referred to as a touch screen 604. In addition to or in lieu of the touch screen 604, the device 600 has a display and a touch-sensitive surface. As with devices 200 and 400, in some implementations, touch screen 604 (or touch-sensitive surface) has one or more intensity sensors for detecting the intensity of a contact (e.g., touch) being applied. One or more intensity sensors of the touch screen 604 (or touch sensitive surface) provide output data representative of the intensity of the touch. The user interface of device 600 responds to touches based on touch strength, meaning that touches of different strengths may invoke different user interface operations on device 600.
Techniques for detecting and processing touch intensity are described, for example, in the related applications: International Patent Application Serial No. PCT/US2013/040061, titled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed May 8, 2013, and International Patent Application Serial No. PCT/US2013/069483, titled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," filed November 11, 2013, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 600 has one or more input mechanisms 606 and 608. Input mechanisms 606 and 608 (if included) are in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, the device 600 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow for attachment of the device 600 to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watchband, a chain, pants, a belt, a shoe, a purse, a backpack, or the like. These attachment mechanisms allow the user to wear the device 600.
Fig. 6B illustrates an exemplary personal electronic device 600. In some embodiments, the apparatus 600 includes some or all of the components described with respect to fig. 2A, 2B, and 4. The device 600 has a bus 612 that operatively couples an I/O section 614 to one or more computer processors 616 and memory 618. The I/O section 614 is connected to a display 604, which may have a touch sensitive member 622 and optionally also a touch intensity sensitive member 624. In addition, the I/O portion 614 is connected to a communication unit 630 for receiving application and operating system data using Wi-Fi, bluetooth, near Field Communication (NFC), cellular, and/or other wireless communication technologies. The device 600 includes input mechanisms 606 and/or 608. For example, input mechanism 606 is a rotatable input device or a depressible input device and a rotatable input device. In some examples, input mechanism 608 is a button.
In some examples, input mechanism 608 is a microphone. The personal electronic device 600 includes, for example, various sensors, such as a GPS sensor 632, an accelerometer 634, an orientation sensor 640 (e.g., a compass), a gyroscope 636, a motion sensor 638, and/or combinations thereof, all of which are operatively connected to the I/O section 614.
The memory 618 of the personal electronic device 600 is a non-transitory computer-readable storage medium for storing computer-executable instructions that, when executed by the one or more computer processors 616, for example, cause the computer processors to perform the techniques and processes described above. The computer-executable instructions are also stored and/or transmitted, for example, within any non-transitory computer-readable storage medium, for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. The personal electronic device 600 is not limited to the components and configuration of fig. 6B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object displayed, for example, on a display screen of devices 200, 400, and/or 600 (fig. 2A, 4, and 6A-6B). For example, images (e.g., icons), buttons, and text (e.g., hyperlinks) each constitute an affordance.
As used herein, the term "focus selector" refers to an input element for indicating the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when the cursor detects an input (e.g., presses an input) on a touch-sensitive surface (e.g., touch pad 455 in fig. 4 or touch-sensitive surface 551 in fig. 5B) above a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations including a touch screen display (e.g., touch sensitive display system 212 in fig. 2A or touch screen 212 in fig. 5A) that enables direct interaction with user interface elements on the touch screen display, the contact detected on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by a contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without a corresponding movement of the cursor or movement of contact on the touch screen display (e.g., by moving the focus from one button to another using a tab key or arrow key); in these implementations, the focus selector moves according to movement of the focus between different areas of the user interface. Regardless of the particular form that the focus selector takes, the focus selector is typically controlled by the user in order to deliver a user interface element (or contact on the touch screen display) that is interactive with the user of the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touch screen), the position of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (rather than other user interface elements shown on the device display).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to the characteristic of a contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined period of time (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detection of contact, before or after detection of lift-off of contact, before or after detection of start of movement of contact, before or after detection of end of contact, and/or before or after detection of decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: maximum value of contact strength, average value of contact strength, value at the first 10% of contact strength, half maximum value of contact strength, 90% maximum value of contact strength, etc. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds includes a first intensity threshold and a second intensity threshold. In this example, contact of the feature strength that does not exceed the first threshold results in a first operation, contact of the feature strength that exceeds the first strength threshold but does not exceed the second strength threshold results in a second operation, and contact of the feature strength that exceeds the second threshold results in a third operation. In some implementations, a comparison between the feature strength and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform the respective operation or to forgo performing the respective operation) instead of being used to determine whether to perform the first operation or the second operation.
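As an illustration of this threshold comparison, the following sketch derives a characteristic intensity (here, simply the mean of the samples, one of the characterizations mentioned above) and selects among three operations; the threshold values and names are assumptions:

```swift
// Sketch of deriving a characteristic intensity from a window of intensity
// samples and choosing an operation by comparing it against two thresholds.
// Thresholds and names are illustrative assumptions.
enum Operation { case first, second, third }

func characteristicIntensity(of samples: [Double]) -> Double {
    // One possible characterization: the mean of the sampled intensities.
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count)
}

func operation(for samples: [Double],
               firstThreshold: Double = 0.3,
               secondThreshold: Double = 0.7) -> Operation {
    let intensity = characteristicIntensity(of: samples)
    if intensity > secondThreshold { return .third }   // exceeds the second threshold
    if intensity > firstThreshold { return .second }   // exceeds only the first threshold
    return .first                                      // does not exceed the first threshold
}

print(operation(for: [0.1, 0.2, 0.2]))    // first
print(operation(for: [0.4, 0.5, 0.6]))    // second
print(operation(for: [0.8, 0.9, 0.95]))   // third
```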
In some implementations, a portion of the gesture is identified for determining a feature strength. For example, the touch-sensitive surface receives a continuous swipe contact that transitions from a starting position and to an ending position where the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end position is based only on a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., the portion of the swipe contact located only at the end position). In some embodiments, a smoothing algorithm is applied to the intensity of the swipe contact before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of the following: an unweighted moving average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some cases, these smoothing algorithms eliminate narrow spikes or depressions in the intensity of the swipe contact for the purpose of determining the characteristic intensity.
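A sketch of one such smoothing pass, an unweighted moving average over the swipe's intensity samples, is shown below; the window size is an assumption chosen for illustration:

```swift
// Sketch of an unweighted sliding-average smoothing pass over swipe intensity
// samples, which flattens narrow spikes or dips before the characteristic
// intensity is determined. The window size is an illustrative assumption.
func smoothed(_ samples: [Double], window: Int = 3) -> [Double] {
    guard window > 1, samples.count >= window else { return samples }
    return samples.indices.map { i in
        // Average over the trailing window ending at index i (clamped at the start).
        let start = max(0, i - window + 1)
        let slice = samples[start...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

// A narrow spike at index 2 is flattened by the moving average.
print(smoothed([0.2, 0.2, 0.9, 0.2, 0.2]))
```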
The intensity of the contact on the touch-sensitive surface is characterized relative to one or more intensity thresholds, such as a contact detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a touch pad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from the operations typically associated with clicking a button of a physical mouse or a touch pad. In some implementations, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold, below which the contact is no longer detected), the device will move the focus selector according to movement of the contact over the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent across different sets of user interface drawings.
The increase in contact characteristic intensity from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. The increase in contact characteristic intensity from an intensity below the deep-press intensity threshold to an intensity above the deep-press intensity threshold is sometimes referred to as a "deep-press" input. The increase in the contact characteristic intensity from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the contact characteristic intensity from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting a lift-off of contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.
In some implementations described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein a respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some implementations, the respective operation is performed in response to detecting that the intensity of the respective contact increases above a press input intensity threshold (e.g., a "downstroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "upstroke" of the respective press input).
In some implementations, the device employs intensity hysteresis to avoid accidental inputs, sometimes referred to as "jitter," in which the device defines or selects a hysteresis intensity threshold that has a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in the intensity of the respective contact above the press input intensity threshold and a subsequent decrease in the intensity of the contact below the hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting that the intensity of the respective contact subsequently decreases below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and, optionally, a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity, and the corresponding operation is performed in response to detecting the press input (e.g., the increase in contact intensity or the decrease in contact intensity, depending on the circumstances).
For ease of explanation, the description of operations performed in response to a press input associated with a press input intensity threshold, or in response to a gesture comprising the press input, is optionally triggered in response to detecting any of the following: an increase in the intensity of the contact above the press input intensity threshold, an increase in the intensity of the contact from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, a decrease in the intensity of the contact below the press input intensity threshold, and/or a decrease in the intensity of the contact below the hysteresis intensity threshold corresponding to the press input intensity threshold. In addition, in examples where an operation is described as being performed in response to detecting a decrease in the intensity of the contact below the press input intensity threshold, the operation is optionally performed in response to detecting a decrease in the intensity of the contact below a hysteresis intensity threshold that corresponds to, and is less than, the press input intensity threshold.
3. Digital assistant system
Fig. 7A illustrates a block diagram of a digital assistant system 700, according to various examples. In some examples, the digital assistant system 700 is implemented on a standalone computer system. In some examples, digital assistant system 700 is distributed across multiple computers. In some examples, some of the modules and functions of the digital assistant are divided into a server portion and a client portion, where the client portion is located on one or more user devices (e.g., devices 104, 122, 200, 400, 600, or 1002) and communicates with the server portion (e.g., server system 108) over one or more networks, for example, as shown in fig. 1. In some examples, digital assistant system 700 is a specific implementation of server system 108 (and/or DA server 106) shown in fig. 1. It should be noted that digital assistant system 700 is only one example of a digital assistant system, and that digital assistant system 700 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of components. The various components shown in fig. 7A are implemented in hardware, in software instructions for execution by one or more processors, in firmware (including one or more signal processing integrated circuits and/or application specific integrated circuits), or in a combination thereof.
The digital assistant system 700 includes a memory 702, an input/output (I/O) interface 706, a network communication interface 708, and one or more processors 704. These components may communicate with each other via one or more communication buses or signal lines 710.
In some examples, memory 702 includes non-transitory computer-readable media such as high-speed random access memory and/or non-volatile computer-readable storage media (e.g., one or more disk storage devices, flash memory devices, or other non-volatile solid state memory devices).
In some examples, the I/O interface 706 couples input/output devices 716 of the digital assistant system 700, such as a display, a keyboard, a touch screen, and a microphone, to the user interface module 722. The I/O interface 706, along with the user interface module 722, receives user input (e.g., voice input, keyboard input, touch input, etc.) and processes the input accordingly. In some examples, for example, when the digital assistant is implemented on a standalone user device, the digital assistant system 700 includes any of the components and I/O communication interfaces described with respect to the device 200, 400, or 600 in fig. 2A, 4, 6A-6B, respectively. In some examples, digital assistant system 700 represents a server portion of a digital assistant implementation and may interact with a user through a client-side portion located on a user device (e.g., device 104, device 200, device 400, or device 600).
In some examples, the network communication interface 708 includes one or more wired communication ports 712 and/or wireless transmit and receive circuitry 714. One or more wired communication ports receive and transmit communication signals via one or more wired interfaces, such as ethernet, universal Serial Bus (USB), FIREWIRE, etc. The wireless circuitry 714 receives and transmits RF signals and/or optical signals from and to a communication network and other communication devices. The wireless communication uses any of a variety of communication standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, bluetooth, wi-Fi, voIP, wi-MAX, or any other suitable communication protocol. Network communication interface 708 enables communication between digital assistant system 700 and other devices via a network, such as the internet, an intranet, and/or a wireless network, such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN).
In some examples, memory 702 or a computer-readable storage medium of memory 702 stores programs, modules, instructions, and data structures, including all or a subset of the following: an operating system 718, a communication module 720, a user interface module 722, one or more application programs 724, and a digital assistant module 726. In particular, the memory 702 or a computer readable storage medium of the memory 702 stores instructions for performing the processes described above. One or more processors 704 execute these programs, modules, and instructions and read data from and write data to the data structures.
Operating system 718 (e.g., darwin, RTXC, LINUX, UNIX, iOS, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware, firmware, and software components.
The communication module 720 facilitates communication between the digital assistant system 700 and other devices via the network communication interface 708. For example, the communication module 720 communicates with the RF circuitry 208 of an electronic device (such as the device 200, 400, or 600 shown in fig. 2A, 4, 6A-6B, respectively). The communication module 720 also includes various components for processing data received by the wireless circuit 714 and/or the wired communication port 712.
The user interface module 722 receives commands and/or input from a user (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone) via the I/O interface 706 and generates user interface objects on a display. The user interface module 722 also prepares and communicates output (e.g., voice, sound, animation, text, icons, vibration, haptic feedback, illumination, etc.) to the user via the I/O interface 706 (e.g., through a display, audio channel, speaker, touch pad, etc.).
Application programs 724 include programs and/or modules configured to be executed by the one or more processors 704. For example, if the digital assistant system is implemented on a standalone user device, the applications 724 include user applications such as games, calendar applications, navigation applications, or mail applications. If the digital assistant system 700 is implemented on a server, the applications 724 include, for example, a resource management application, a diagnostic application, or a scheduling application.
The memory 702 also stores a digital assistant module 726 (or server portion of the digital assistant). In some examples, digital assistant module 726 includes the following sub-modules, or a subset or superset thereof: an input/output processing module 728, a Speech To Text (STT) processing module 730, a natural language processing module 732, a dialog flow processing module 734, a task flow processing module 736, a services processing module 738, and a speech synthesis processing module 740. Each of these modules has access to one or more of the following systems or data and models of digital assistant module 726, or a subset or superset thereof: ontology 760, vocabulary index 744, user data 748, task flow model 754, service model 756, and ASR system 758.
In some examples, using the processing modules, data, and models implemented in digital assistant module 726, the digital assistant may perform at least some of the following: converting speech input into text; identifying a user intent expressed in natural language input received from the user; actively eliciting and obtaining the information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining a task flow for satisfying the inferred intent; and executing the task flow to satisfy the inferred intent.
In some examples, as shown in fig. 7B, I/O processing module 728 may interact with a user via I/O device 716 in fig. 7A or interact with a user device (e.g., device 104, device 200, device 400, or device 600) via network communication interface 708 in fig. 7A to obtain user input (e.g., voice input) and provide a response to the user input (e.g., as voice output). The I/O processing module 728 optionally obtains contextual information associated with the user input from the user device along with or shortly after receiving the user input. The contextual information includes user-specific data, vocabulary, and/or preferences related to user input. In some examples, the context information further includes software state and hardware state of the user device at the time the user request is received, and/or information related to the user's surroundings at the time the user request is received. In some examples, the I/O processing module 728 also sends follow-up questions related to the user request to the user and receives answers from the user. When a user request is received by the I/O processing module 728 and the user request includes a voice input, the I/O processing module 728 forwards the voice input to the STT processing module 730 (or speech recognizer) for voice-to-text conversion.
The STT processing module 730 includes one or more ASR systems 758. The one or more ASR systems 758 may process speech input received through the I/O processing module 728 to produce recognition results. Each ASR system 758 includes a front-end speech pre-processor. The front-end speech pre-processor extracts representative features from the speech input. For example, the front-end speech pre-processor performs a fourier transform on the speech input to extract spectral features characterizing the speech input as a sequence of representative multidimensional vectors. In addition, each ASR system 758 includes one or more speech recognition models (e.g., acoustic models and/or language models) and implements one or more speech recognition engines. Examples of speech recognition models include hidden Markov models, gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of speech recognition engines include dynamic time warping based engines and Weighted Finite State Transducer (WFST) based engines. The extracted representative features of the front-end speech pre-processor are processed using one or more speech recognition models and one or more speech recognition engines to produce intermediate recognition results (e.g., phonemes, phoneme strings, and sub-words), and ultimately text recognition results (e.g., words, word strings, or symbol sequences). In some examples, the voice input is processed at least in part by a third party service or on a device of the user (e.g., device 104, device 200, device 400, or device 600) to produce the recognition result. Once STT processing module 730 generates a recognition result that includes a text string (e.g., a word, or a sequence of words, or a sequence of symbols), the recognition result is passed to natural language processing module 732 for intent inference. In some examples, the STT processing module 730 generates a plurality of candidate text representations of the speech input. Each candidate text representation is a sequence of words or symbols corresponding to a speech input. In some examples, each candidate text representation is associated with a speech recognition confidence score. Based on the speech recognition confidence scores, the STT processing module 730 ranks the candidate text representations and provides the n best (e.g., the n highest ranked) candidate text representations to the natural language processing module 732 for intent inference, where n is a predetermined integer greater than zero. For example, in one example, only the highest ranked (n=1) candidate text representations are delivered to the natural language processing module 732 for intent inference. As another example, the 5 highest ranked (n=5) candidate text representations are passed to the natural language processing module 732 for intent inference.
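By way of illustration only, the n-best hand-off described above can be sketched as follows; the type and function names are hypothetical and simply show candidate text representations being ranked by speech recognition confidence score before being passed on for intent inference.

```swift
// Hypothetical sketch: ranking ASR candidate text representations and keeping the n best.
struct CandidateTextRepresentation {
    let text: String
    let speechRecognitionConfidence: Double   // e.g., 0.0 ... 1.0
}

func nBestCandidates(_ candidates: [CandidateTextRepresentation],
                     n: Int) -> [CandidateTextRepresentation] {
    // Rank by confidence score and keep only the n highest-ranked candidates.
    Array(candidates
        .sorted { $0.speechRecognitionConfidence > $1.speechRecognitionConfidence }
        .prefix(n))
}

// Example: only the best (n = 1) candidate is passed on for intent inference.
let best = nBestCandidates([
    CandidateTextRepresentation(text: "what novel am I reading",
                                speechRecognitionConfidence: 0.92),
    CandidateTextRepresentation(text: "what level am I reading",
                                speechRecognitionConfidence: 0.41),
], n: 1)
```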
Further details regarding speech-to-text processing are described in U.S. patent application Ser. No. 13/236,942, entitled "Consolidating Speech Recognition Results," filed September 20, 2011, the entire disclosure of which is incorporated herein by reference.
In some examples, the STT processing module 730 includes a vocabulary of recognizable words and/or accesses the vocabulary via the phonetic-to-letter conversion module 731. Each vocabulary word is associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words includes words associated with a plurality of candidate pronunciations. For example, the vocabulary includes the word "tomato" associated with the candidate pronunciations /tə'meɪroʊ/ and /tə'mɑtoʊ/. In addition, vocabulary words are associated with custom candidate pronunciations based on previous speech input from the user. Such custom candidate pronunciations are stored in the STT processing module 730 and are associated with a particular user via a user profile on the device. In some examples, the candidate pronunciations of a word are determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations are generated manually, e.g., based on known standard pronunciations.
In some examples, candidate pronunciations are ranked based on their popularity. For example, the candidate pronunciation /tə'meɪroʊ/ is ranked higher than /tə'mɑtoʊ/ because the former is a more commonly used pronunciation (e.g., among all users, for users in a particular geographic region, or for any other suitable subset of users). In some examples, candidate pronunciations are ranked based on whether a candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations are ranked higher than standard candidate pronunciations. This can be used to recognize proper nouns having unique pronunciations that deviate from the canonical pronunciation. In some examples, a candidate pronunciation is associated with one or more speech characteristics such as geographic origin, nationality, or ethnicity. For example, the candidate pronunciation /tə'meɪroʊ/ is associated with the United States, whereas the candidate pronunciation /tə'mɑtoʊ/ is associated with Great Britain. Further, the ranking of candidate pronunciations is based on one or more characteristics of the user (e.g., geographic origin, nationality, ethnicity, etc.) in a user profile stored on the device. For example, it may be determined from the user profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation /tə'meɪroʊ/ (associated with the United States) is ranked higher than the candidate pronunciation /tə'mɑtoʊ/ (associated with Great Britain). In some examples, one of the ranked candidate pronunciations may be selected as a predicted pronunciation (e.g., the most likely pronunciation).
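A minimal sketch, using hypothetical type names, of how candidate pronunciations might be ranked using custom (user-specific) status, a user-profile characteristic such as geographic region, and overall popularity:

```swift
// Hypothetical sketch: ranking candidate pronunciations for a vocabulary word.
struct CandidatePronunciation {
    let phonemes: String       // pronunciation in a speech recognition phonetic alphabet
    let popularity: Double     // how commonly the pronunciation is used overall
    let isCustom: Bool         // learned from this user's previous speech input
    let region: String?        // e.g., "US" or "GB", if the pronunciation has one
}

func rankPronunciations(_ candidates: [CandidatePronunciation],
                        userRegion: String?) -> [CandidatePronunciation] {
    candidates.sorted { a, b in
        // Custom (user-specific) pronunciations outrank standard ones ...
        if a.isCustom != b.isCustom { return a.isCustom }
        // ... then pronunciations matching a characteristic from the user profile ...
        let aMatches = a.region != nil && a.region == userRegion
        let bMatches = b.region != nil && b.region == userRegion
        if aMatches != bMatches { return aMatches }
        // ... then overall popularity.
        return a.popularity > b.popularity
    }
}
```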
Upon receiving a speech input, the STT processing module 730 is used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model) and then attempt to determine words that match the phonemes (e.g., using a language model). For example, if the STT processing module 730 first identifies the sequence of phonemes /tə'meɪroʊ/ corresponding to a portion of the speech input, it may then determine, based on the vocabulary index 744, that the sequence corresponds to the word "tomato."
In some examples, STT processing module 730 uses fuzzy matching techniques to determine the words in an utterance. Thus, for example, the STT processing module 730 determines that a sequence of phonemes corresponds to the word "tomato," even though that particular phoneme sequence is not one of the candidate phoneme sequences for that word.
The natural language processing module 732 of the digital assistant ("natural language processor") obtains the n best candidate textual representations ("word sequences" or "symbol sequences") generated by the STT processing module 730 and attempts to associate each candidate textual representation with one or more "actionable intents" identified by the digital assistant. "actionable intent" (or "user intent") represents a task that may be executed by a digital assistant and that may have an associated task flow implemented in task flow model 754. An associated task flow is a series of programmed actions and steps taken by the digital assistant to perform a task. The scope of the capabilities of the digital assistant depends on the number and variety of task flows that have been implemented and stored in the task flow model 754, or in other words, the number and variety of "actionable intents" identified by the digital assistant. However, the effectiveness of a digital assistant also depends on the ability of the assistant to infer the correct "one or more actionable intents" from user requests expressed in natural language.
In some examples, the natural language processing module 732 receives contextual information associated with the user request, for example, from the I/O processing module 728, in addition to the sequence of words or symbols obtained from the STT processing module 730. The natural language processing module 732 optionally uses the contextual information to clarify, supplement, and/or further define the information contained in the candidate text representations received from the STT processing module 730. The context information includes, for example, user preferences, hardware and/or software status of the user device, sensor information collected before, during, or shortly after a user request, previous interactions (e.g., conversations) between the digital assistant and the user, and so forth. As described herein, in some examples, the contextual information is dynamic and varies with time, location, content, and other factors of the conversation.
In some examples, natural language processing is based on, for example, ontology 760. Ontology 760 is a hierarchical structure that contains a number of nodes, each representing an "actionable intent" or "attribute" that is related to one or more of the "actionable intents" or other "attributes. As described above, "executable intent" refers to a task that a digital assistant is capable of performing, i.e., that the task is "executable" or can be performed. An "attribute" represents a parameter associated with a sub-aspect of an actionable intent or another attribute. The connections between the actionable intent nodes and the attribute nodes in ontology 760 define how the parameters represented by the attribute nodes pertain to the tasks represented by the actionable intent nodes.
In some examples, ontology 760 is composed of actionable intent nodes and attribute nodes. Within ontology 760, each actionable intent node is connected directly to or through one or more intermediate attribute nodes to one or more attribute nodes. Similarly, each attribute node is connected directly to or through one or more intermediate attribute nodes to one or more actionable intent nodes. For example, as shown in fig. 7C, ontology 760 includes a "restaurant reservation" node (i.e., an actionable intent node). The attribute nodes "restaurant", "date/time" (for reservation) and "party size" are each directly connected to the executable intent node (i.e., the "restaurant reservation" node).
Further, the attribute nodes "cuisine", "price section", "telephone number", and "location" are child nodes of the attribute node "restaurant", and are each connected to the "restaurant reservation" node (i.e., executable intention node) through the intermediate attribute node "restaurant". As another example, as shown in fig. 7C, ontology 760 also includes a "set reminder" node (i.e., another actionable intent node). The attribute nodes "date/time" (for setting reminders) and "topic" (for reminders) are both connected to the "set reminders" node. Since the attribute "date/time" is related to both the task of making a restaurant reservation and the task of setting a reminder, the attribute node "date/time" is connected to both the "restaurant reservation" node and the "set reminder" node in the ontology 760.
The actionable intent node, along with its linked attribute nodes, is described as a "domain". In this discussion, each domain is associated with a respective actionable intent and refers to a set of nodes (and relationships between those nodes) associated with a particular actionable intent. For example, ontology 760 shown in fig. 7C includes an example of restaurant reservation field 762 and an example of reminder field 764 within ontology 760. The restaurant reservation domain includes executable intent nodes "restaurant reservation," attribute nodes "restaurant," date/time, "and" party number, "and sub-attribute nodes" cuisine, "" price range, "" phone number, "and" location. The reminder field 764 includes executable intent nodes "set reminder" and attribute nodes "subject" and "date/time". In some examples, ontology 760 is composed of a plurality of domains. Each domain shares one or more attribute nodes with one or more other domains. For example, in addition to the restaurant reservation field 762 and the reminder field 764, a "date/time" attribute node is associated with many different fields (e.g., a travel reservation field, a movie ticket field, etc.).
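The node-and-domain structure described above can be illustrated with a short sketch; the types below are hypothetical and only mirror the "restaurant reservation" example of fig. 7C.

```swift
// Hypothetical sketch of ontology nodes and domains.
final class OntologyNode {
    enum Kind { case actionableIntent, property }
    let name: String
    let kind: Kind
    var linkedNodes: [OntologyNode] = []

    init(name: String, kind: Kind) {
        self.name = name
        self.kind = kind
    }
}

// A domain groups an actionable-intent node with its linked property nodes.
struct Domain {
    let intent: OntologyNode
    let properties: [OntologyNode]
}

// Building the "restaurant reservation" domain from fig. 7C.
let restaurantReservation = OntologyNode(name: "restaurant reservation", kind: .actionableIntent)
let restaurant = OntologyNode(name: "restaurant", kind: .property)
let dateTime = OntologyNode(name: "date/time", kind: .property)
let partySize = OntologyNode(name: "party size", kind: .property)
restaurantReservation.linkedNodes = [restaurant, dateTime, partySize]
let reservationDomain = Domain(intent: restaurantReservation,
                               properties: [restaurant, dateTime, partySize])
```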
Although fig. 7C shows two exemplary fields within ontology 760, other fields include, for example, "find movie," "initiate phone call," "find direction," "schedule meeting," "send message," and "provide answer to question," "read list," "provide navigation instructions," "provide instructions for task," and so forth. The "send message" field is associated with a "send message" actionable intent node and further includes attribute nodes such as "one or more recipients", "message type", and "message body". The attribute node "recipient" is further defined, for example, by sub-attribute nodes such as "recipient name" and "message address".
In some examples, ontology 760 includes all domains (and thus executable intents) that the digital assistant can understand and work with. In some examples, ontology 760 is modified, such as by adding or removing an entire domain or node, or by modifying relationships between nodes within ontology 760.
In some examples, nodes associated with multiple related actionable intents are clustered under a "superdomain" in ontology 760. For example, a "travel" super domain includes a cluster of travel-related attribute nodes and actionable intent nodes. Executable intent nodes associated with travel include "airline reservations," "hotel reservations," "car rentals," "route planning," "finding points of interest," and so forth. An actionable intent node under the same super domain (e.g., a "travel" super domain) has multiple attribute nodes in common. For example, executable intent nodes for "airline reservations," hotel reservations, "" car rentals, "" get routes, "and" find points of interest "share one or more of the attribute nodes" start location, "" destination, "" departure date/time, "" arrival date/time, "and" party number.
In some examples, each node in ontology 760 is associated with a set of words and/or phrases that are related to the attribute or actionable intent represented by the node. The respective set of words and/or phrases associated with each node is the so-called "vocabulary" associated with the node. The respective set of words and/or phrases associated with each node is stored in the vocabulary index 744 in association with the attribute or actionable intent represented by the node. For example, returning to FIG. 7B, the vocabulary associated with the node for the "restaurant" attribute includes words such as "food," "drink," "cuisine," "hungry," "eat," "pizza," "fast food," "meal," and the like. As another example, the vocabulary associated with the node for the actionable intent of "initiate a phone call" includes words and phrases such as "call," "make a call to ...," "call the number," "make a phone call," and the like. The vocabulary index 744 optionally includes words and phrases in different languages.
The natural language processing module 732 receives the candidate text representations (e.g., one or more text strings or one or more symbol sequences) from the STT processing module 730 and, for each candidate representation, determines which nodes the words in the candidate text representation relate to. In some examples, a word or phrase in the candidate text representation "triggers" or "activates" those nodes if it is found to be associated (via the vocabulary index 744) with one or more nodes in the ontology 760. Based on the number and/or relative importance of activated nodes, the natural language processing module 732 selects one of the executable intents as a task that the user intends the digital assistant to perform. In some examples, the domain with the most "triggered" nodes is selected. In some examples, the domain with the highest confidence (e.g., based on the relative importance of its respective triggered node) is selected. In some examples, the domain is selected based on a combination of the number and importance of triggered nodes. In some examples, additional factors are also considered in selecting the node, such as whether the digital assistant has previously properly interpreted a similar request from the user.
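As an illustration of the triggering behavior described above, the following sketch assumes a toy vocabulary index and selects the domain with the most triggered nodes; the data and names are hypothetical.

```swift
// Hypothetical sketch: a vocabulary index mapping words to ontology node names,
// and a simple "most triggered nodes wins" domain selection.
let vocabularyIndex: [String: [String]] = [
    "food":   ["restaurant"],
    "meal":   ["restaurant"],
    "table":  ["restaurant reservation"],
    "remind": ["set reminder"],
]

// Domain definitions: each domain lists the node names it contains.
let domains: [String: Set<String>] = [
    "restaurant reservation": ["restaurant reservation", "restaurant", "date/time", "party size"],
    "set reminder":           ["set reminder", "subject", "date/time"],
]

func selectDomain(for tokens: [String]) -> String? {
    var triggerCounts: [String: Int] = [:]
    for token in tokens {
        for node in vocabularyIndex[token.lowercased()] ?? [] {
            for (domain, nodes) in domains where nodes.contains(node) {
                triggerCounts[domain, default: 0] += 1
            }
        }
    }
    // Select the domain with the most triggered nodes.
    return triggerCounts.max { $0.value < $1.value }?.key
}

let chosen = selectDomain(for: ["book", "a", "table", "for", "a", "meal"])  // "restaurant reservation"
```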
The user data 748 includes user-specific information such as user-specific vocabulary, user preferences, user addresses, the user's default and secondary languages, the user's contact list, and other short-term or long-term information for each user. In some examples, the natural language processing module 732 uses the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request "invite my friends to my birthday party," the natural language processing module 732 can access the user data 748 to determine who the "friends" are and when and where the "birthday party" will be held, without requiring the user to explicitly provide such information in the request.
It should be appreciated that in some examples, the natural language processing module 732 is implemented with one or more machine learning mechanisms (e.g., a neural network). In particular, the one or more machine learning mechanisms are configured to receive a candidate text representation and context information associated with the candidate text representation. Based on the candidate text representations and the associated context information, the one or more machine learning mechanisms are configured to determine an intent confidence score based on a set of candidate executable intents. The natural language processing module 732 may select one or more candidate actionable intents from a set of candidate actionable intents based on the determined intent confidence scores. In some examples, an ontology (e.g., ontology 760) is also utilized to select one or more candidate actionable intents from a set of candidate actionable intents.
Additional details of searching an ontology based on a symbol string are described in U.S. patent application Ser. No. 12/347,743, entitled "Method and Apparatus for Searching Using An Active Ontology," filed December 22, 2008, the entire disclosure of which is incorporated herein by reference.
In some examples, once the natural language processing module 732 identifies an actionable intent (or domain) based on the user request, the natural language processing module 732 generates a structured query to represent the identified actionable intent. In some examples, the structured query includes parameters for one or more nodes within the domain of the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user says "help me reserve a seat at a sushi restaurant at 7 pm." In this case, the natural language processing module 732 is able to correctly identify the actionable intent as "restaurant reservation" based on the user input. According to the ontology, a structured query for the "restaurant reservation" domain includes parameters such as {cuisine}, {time}, {date}, {party number}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module 730, the natural language processing module 732 generates a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {cuisine = "sushi"} and {time = "7 pm"}. However, in this example, the user utterance contains insufficient information to complete the structured query associated with the domain. Thus, based on the currently available information, other necessary parameters such as {party number} and {date} are not specified in the structured query. In some examples, the natural language processing module 732 populates some parameters of the structured query with the received contextual information. For example, in some examples, if the user requests a sushi restaurant "nearby," the natural language processing module 732 populates the {location} parameter in the structured query with GPS coordinates from the user device.
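A minimal sketch of a partially completed structured query for this example, assuming a hypothetical StructuredQuery type; the GPS coordinates shown are placeholders for context information supplied by the device.

```swift
// Hypothetical sketch: a partially completed structured query for the
// "restaurant reservation" domain, built from "reserve a seat at a sushi restaurant at 7 pm".
struct StructuredQuery {
    let actionableIntent: String
    var parameters: [String: String]

    // Parameters that still need to be supplied (e.g., via a follow-up dialog).
    func missing(required: [String]) -> [String] {
        required.filter { parameters[$0] == nil }
    }
}

var query = StructuredQuery(
    actionableIntent: "restaurant reservation",
    parameters: ["cuisine": "sushi", "time": "7 pm"]
)

// Context information can fill in some parameters, e.g., a GPS-derived location.
query.parameters["location"] = "37.3349,-122.0090"   // assumed device coordinates

let stillMissing = query.missing(required: ["cuisine", "time", "date", "party number"])
// stillMissing == ["date", "party number"]
```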
In some examples, the natural language processing module 732 identifies a plurality of candidate actionable intents for each candidate text representation received from the STT processing module 730. Additionally, in some examples, a respective structured query is generated (partially or wholly) for each identified candidate executable intent. The natural language processing module 732 determines an intent confidence score for each candidate actionable intent and ranks the candidate actionable intents based on the intent confidence scores. In some examples, the natural language processing module 732 communicates the generated one or more structured queries (including any completed parameters) to the task flow processing module 736 ("task flow processor"). In some examples, one or more structured queries for the m best (e.g., m highest ranked) candidate executable intents are provided to the task flow processing module 736, where m is a predetermined integer greater than zero. In some examples, one or more structured queries for the m best candidate actionable intents are provided to the task flow processing module 736 along with the corresponding one or more candidate text representations.
Additional details of inferring user intent based on a plurality of candidate actionable intents determined from a plurality of candidate textual representations of a speech input are described in U.S. patent application Ser. No. 14/298,725, entitled "System and Method for Inferring User Intent From Speech Inputs," filed in June 2014, the entire disclosure of which is incorporated herein by reference.
Task flow processing module 736 is configured to receive one or more structured queries from natural language processing module 732, complete the structured queries (if necessary), and perform the actions required to "complete" the user's final request. In some examples, the various processes necessary to accomplish these tasks are provided in the task flow model 754. In some examples, the task flow model 754 includes a process for obtaining additional information from a user, as well as a task flow for performing actions associated with executable intents.
As described above, to complete a structured query, the task flow processing module 736 needs to initiate additional dialog with the user in order to obtain additional information and/or disambiguate potentially ambiguous utterances. When such interactions are necessary, the task flow processing module 736 invokes the dialog flow processing module 734 to engage in a dialog with the user. In some examples, the dialog flow processing module 734 determines how (and/or when) to request additional information from the user and receives and processes the user responses. Questions are provided to users and answers are received from users through the I/O processing module 728. In some examples, the dialog flow processing module 734 presents dialog output to the user via audio and/or visual output and receives input from the user via spoken or physical (e.g., clicking) responses. Continuing with the example above, when the task flow processing module 736 invokes the dialog flow processing module 734 to determine the "party number" and "date" information for the structured query associated with the domain "restaurant reservation," the dialog flow processing module 734 generates questions such as "For how many people?" and "On which day?" to be passed to the user. Upon receiving answers from the user, the dialog flow processing module 734 populates the structured query with the missing information or passes the information to the task flow processing module 736 to complete the missing information in the structured query.
Once the task flow processing module 736 has completed the structured query for an actionable intent, the task flow processing module 736 proceeds to perform the final task associated with the actionable intent. Accordingly, the task flow processing module 736 executes the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of "restaurant reservation" includes steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party number at a particular time. For example, using a structured query such as {restaurant reservation, restaurant = ABC Cafe, date = 3/12/2012, time = 7 pm, party number = 5}, the task flow processing module 736 can perform the following steps: (1) logging onto a server of the ABC Cafe or a restaurant reservation service such as OPENTABLE, (2) entering the date, time, and party number information in a form on the website, (3) submitting the form, and (4) making a calendar entry for the reservation in the user's calendar.
In some examples, the task flow processing module 736 completes the tasks requested in the user input or provides the informational answers requested in the user input with the aid of a service processing module 738 ("service processing module"). For example, the service processing module 738 initiates a telephone call, sets up a calendar entry, invokes a map search, invokes or interacts with other user applications installed on the user device, and invokes or interacts with third party services (e.g., restaurant reservation portals, social networking sites, banking portals, etc.) on behalf of the task flow processing module 736. In some examples, the protocols and Application Programming Interfaces (APIs) required for each service are specified by a corresponding service model in service models 756. The service processing module 738 accesses an appropriate service model for a service and generates requests for the service according to the service model in accordance with the protocols and APIs required for the service.
For example, if a restaurant has enabled an online booking service, the restaurant submits a service model that specifies the necessary parameters for making a reservation and the APIs for communicating the values of the necessary parameters to the online booking service. Upon request by the task flow processing module 736, the service processing module 738 can use the web address stored in the service model to establish a network connection with the online booking service and send the necessary reservation parameters (e.g., time, date, party number) to the online booking interface in a format according to the API of the online booking service.
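For illustration, a service model and the construction of a request against it might be sketched as follows; the service name, endpoint URL, and parameter names are assumptions, not an actual reservation API.

```swift
import Foundation

// Hypothetical sketch: a service model describing the parameters an online booking
// service expects, as consumed by a service processing module.
struct ServiceModel {
    let serviceName: String
    let endpoint: URL
    let requiredParameters: [String]
}

func makeReservationRequest(model: ServiceModel,
                            values: [String: String]) -> URLRequest? {
    // Verify that every parameter the service model requires has a value.
    guard model.requiredParameters.allSatisfy({ values[$0] != nil }) else { return nil }
    var components = URLComponents(url: model.endpoint, resolvingAgainstBaseURL: false)!
    components.queryItems = values.map { URLQueryItem(name: $0.key, value: $0.value) }
    var request = URLRequest(url: components.url!)
    request.httpMethod = "POST"
    return request
}

let bookingModel = ServiceModel(
    serviceName: "example online reservations",                     // assumed service
    endpoint: URL(string: "https://reservations.example.com/book")!, // assumed endpoint
    requiredParameters: ["date", "time", "party_number"]
)
let request = makeReservationRequest(
    model: bookingModel,
    values: ["date": "2012-03-12", "time": "19:00", "party_number": "5"]
)
```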
In some examples, the natural language processing module 732, the dialog flow processing module 734, and the task flow processing module 736 are used collectively and repeatedly to infer and define a user's intent, to obtain information to further clarify and refine the user's intent, and to ultimately generate a response (i.e., output to the user, or complete a task) to satisfy the user's intent. The generated response is a dialog response to the voice input that at least partially satisfies the user's intent. Additionally, in some examples, the generated response is output as a speech output. In these examples, the generated response is sent to a speech synthesis processing module 740 (e.g., a speech synthesizer), where the generated response can be processed to synthesize the dialog response in speech form. In other examples, the generated response is data content related to satisfying a user request in a voice input.
In examples where the task flow processing module 736 receives a plurality of structured queries from the natural language processing module 732, the task flow processing module 736 first processes a first structured query of the received structured queries in an attempt to complete the first structured query and/or perform one or more tasks or actions represented by the first structured query. In some examples, the first structured query corresponds to the highest ranked executable intent. In other examples, the first structured query is selected from structured queries received based on a combination of the corresponding speech recognition confidence score and the corresponding intent confidence score. In some examples, if task flow processing module 736 encounters an error during processing of the first structured query (e.g., due to an inability to determine the necessary parameters), task flow processing module 736 can continue to select and process a second one of the received structured queries that corresponds to a lower-ranked executable intent. The second structured query is selected, for example, based on a speech recognition confidence score for the corresponding candidate text representation, an intent confidence score for the corresponding candidate actionable intent, a requisite parameter for a miss in the first structured query, or any combination thereof.
The speech synthesis processing module 740 is configured to synthesize speech output for presentation to a user. The speech synthesis processing module 740 synthesizes a speech output based on text provided by the digital assistant. For example, the generated dialog response is in the form of a text string. The speech synthesis processing module 740 converts the text string into audible speech output. The speech synthesis processing module 740 uses any suitable speech synthesis technique to generate speech output from text, including but not limited to: stitching synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, pronunciation synthesis, hidden Markov Model (HMM) based synthesis, and sine wave synthesis. In some examples, the speech synthesis processing module 740 is configured to synthesize individual words based on the phoneme strings corresponding to the words. For example, the phoneme string is associated with a word in the generated dialog response. The phoneme string is stored in metadata associated with the word. The speech synthesis processing module 740 is configured to directly process the phoneme strings in the metadata to synthesize words in speech form.
In some examples, instead of (or in addition to) using the speech synthesis processing module 740, speech synthesis is performed on a remote device (e.g., server system 108) and the synthesized speech is sent to a user device for output to a user. For example, this may occur in some implementations in which the output of the digital assistant is generated at a server system. And since the server system typically has more processing power or more resources than the user equipment, it is possible to obtain a higher quality speech output than would be achieved by the client-side synthesis.
Additional details regarding digital assistants can be found in U.S. patent application Ser. No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, and U.S. patent application Ser. No. 13/251,088, entitled "Generating and Processing Task Items That Represent Tasks to Perform," filed September 30, 2011, the disclosures of which are incorporated herein by reference in their entireties.
4. Procedure for registration
Fig. 8A-8B illustrate a system 800 for registering application terms for use with a digital assistant according to various examples. The system 800 may be implemented, for example, using one or more electronic devices that implement a digital assistant (e.g., the digital assistant system 700). In some embodiments, the system 800 is implemented using a client-server system (e.g., system 100), and the functionality of the system 800 is divided in any manner between one or more server devices (e.g., DA server 106) and the client device. In other embodiments, the functionality of system 800 is divided between one or more servers and multiple client devices (e.g., mobile phones and smart watches). Thus, while some of the functions of system 800 are described herein as being performed by a particular device of a client-server system, it should be understood that system 800 is not so limited. In other examples, system 800 is implemented using only one client device (e.g., user device 104) or only multiple client devices. In system 800, some functions are optionally combined, the order of some functions is optionally changed, and some functions are optionally omitted. In some examples, additional steps may be performed in conjunction with the functionality of the system 800 described.
The system 800 may be implemented using hardware, software, or a combination of hardware and software to perform the principles discussed herein. Further, the system 800 is exemplary, and thus the system 800 may have more or fewer components than illustrated, may combine two or more components, or may have different component configurations or arrangements. Furthermore, while the following discussion describes functions performed at a single component of system 800, it should be understood that these functions may be performed at other components of system 800 and that these functions may be performed at more than one component of system 800. The system 800 may be used to implement the method 900 as described below with respect to fig. 9A-9B.
Referring to fig. 8A-8B, system 800 includes one or more software applications (e.g., first-party and third-party applications) including application 802. In some embodiments, application 802 is installed on or in the process of being installed on an electronic device implementing system 800. Software applications, such as application 802, provide additional content and functionality to a user of an electronic device. For example, a reader application provides content such as books, audio books, or documents, and provides functions such as opening/presenting media, media playback, purchasing media, or searching for media. As another example, a voice recording application provides content such as voice recordings (e.g., voice notes or voice memos) and provides functions such as recording (e.g., file creation and content recording), playback, editing, and organization. Other exemplary applications include productivity applications (having content such as documents, presentations, spreadsheets, etc., and functions such as text editing, formatting, data analysis, etc.), media applications (having content such as music, podcasts, video, etc., and functions such as search, playback, and library organization), food ordering applications (having content such as restaurant listings and menu items, and functions such as searching for restaurant information, placing takeaway orders, and placing delivery orders), etc.
The system 800 also includes a digital assistant knowledge module 806. For example, the digital assistant knowledge module 806 manages a knowledge base of digital assistants (e.g., digital assistant module 726 of digital assistant system 700 as described above) that includes words used by the digital assistants to interpret and implement natural language user inputs (such as spoken or typed user inputs to the digital assistants). The digital assistant knowledge module 806 includes a vocabulary donation module 808, a term database 810, and an Automatic Speech Recognition (ASR) database 812. The vocabulary donation module 808 manages the "donation" (e.g., addition or contribution) of vocabulary items related to application content and functionality from an application (such as application 802) to a digital assistant knowledge base. The term database 810 and Automatic Speech Recognition (ASR) database 812 allow the digital assistant to search for application vocabulary after donation and registration.
The system 800 also includes an application program interface module 804 that serves as an interface to link the digital assistant knowledge module 806 and one or more application programs, including the application program 802. For example, as described in further detail below, the vocabulary donation module 808 may request a vocabulary donation from the application 802 via the application interface module 804, and/or the application 802 may use the application interface module 804 to make a vocabulary donation to a digital assistant.
The application vocabulary may include vocabulary entries for classes handled by one or more applications, including application 802. In some embodiments, a class processed by the one or more applications is a programming concept understood by the one or more applications, such as an entity (e.g., a book entity or a voice memo entity), an enumeration (e.g., a tab or page in the application or a setting of the application), or an action (e.g., a self-contained task executable by the application, such as opening a book entity or creating a voice memo entity). For example, if the application 802 is a reader application, its corresponding application vocabulary may include entries for general entities such as books, audio books, articles, or documents, as well as specific instances of those concepts (e.g., individual books, audio books, articles, and documents in a user-specific or online library), as well as tags such as application tabs (e.g., now reading, library, bookstore) or content tags (e.g., favorites, in progress, or completed tags for book entities). As another example, a word processing application may be associated with vocabulary entries for document entities and enumerations for text formatting such as italics.
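The three classes of application vocabulary described above (entities, enumerations, and actions) can be sketched as follows; the identifiers are illustrative only and do not correspond to an actual application framework.

```swift
// Hypothetical sketch of the classes of application vocabulary an application can expose.
enum VocabularyClass {
    case entity(typeIdentifier: String)              // e.g., a book or voice memo entity type
    case enumeration(name: String, cases: [String])  // e.g., application tabs or content tags
    case action(identifier: String)                  // e.g., a self-contained task the app can run
}

// Examples for a reader application (identifiers are assumptions).
let bookEntity = VocabularyClass.entity(typeIdentifier: "BookEntity")
let appTabs    = VocabularyClass.enumeration(name: "AppTab",
                                             cases: ["Now Reading", "Library", "Bookstore"])
let openBook   = VocabularyClass.action(identifier: "OpenBookEntity")
```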
Application vocabulary may be classified as static or dynamic, referring to how likely the meaning of the vocabulary is to change relative to one or more applications, including the application 802. For example, the application 802 is unlikely to change its understanding (e.g., its processing) of the word "book" except on the timescale of its application update schedule (e.g., days, weeks, or months), so a vocabulary entry for the book entity type may be a static vocabulary entry. On the other hand, a particular book may be added to (or removed from) the application 802 at any time, so a vocabulary entry for the particular book may be a dynamic vocabulary entry. Fig. 8A illustrates the use of the system 800 to donate static vocabulary entries of the application 802 to the digital assistant knowledge module 806, and fig. 8B illustrates the use of the system 800 to donate dynamic vocabulary entries of the application 802 to the digital assistant knowledge module 806, according to some examples.
As shown in fig. 8A, the vocabulary donation module 808 obtains a first (e.g., static) vocabulary entry from the application 802. The first vocabulary entry represents a first class (e.g., a first programming concept) that is processed by the application 802, such as an entity, enumeration, or action understood by the application, and in particular, a first class that tends to remain static in a sense relative to the application 802. For example, the first vocabulary entry may be a static vocabulary entry of a "book" entity type (e.g., a general book, not a specific book) that is processed by the application 802 (e.g., a reader application). While the digital assistant may understand the general meaning of the book, for example, using a non-specialized vocabulary, obtaining the first vocabulary entry may allow the digital assistant to specifically understand the type of book entity being processed by the application 802.
The first vocabulary entry of the application 802 may include a first identifier of the first class processed by the application 802. For example, the first vocabulary entry may include the identifier "BookEntity" that is used by the application 802 to identify the book entity type. The first vocabulary entry obtained from the application 802 may also include at least a first synonym for the first identifier. For example, synonyms may include words and phrases that a human user would understand and speak to refer to the class, such as the synonyms "novel," "volume," or "book" (e.g., as opposed to "BookEntity," which a user is unlikely to speak in a natural language input). In some embodiments, the first vocabulary entry may include other metadata obtained from the application 802, such as related vocabulary entries (e.g., vocabulary for specific instances of books) or related context (e.g., other applications that may process the book entity type, related terms such as "bookmark" or "chapter," etc.).
The first vocabulary entry may be associated with one or more commands of the application 802. For example, vocabulary entries for book entity types may be associated with reader application commands for adding to libraries, adding to collections, opening, deleting, and the like. As discussed above, the associated command may be included in the first vocabulary entry as metadata obtained through the first vocabulary entry.
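A minimal sketch of a static vocabulary entry such as the book entity entry described above, with an identifier, synonyms, associated commands, and metadata; the field and value names are hypothetical.

```swift
// Hypothetical sketch of a static vocabulary entry a reader application might donate.
struct VocabularyEntry {
    let identifier: String            // class identifier used inside the application
    let synonyms: [String]            // words a user might actually say
    let associatedCommands: [String]  // application commands the class participates in
    let metadata: [String: String]    // related terms, related entries, etc.
}

let bookEntityEntry = VocabularyEntry(
    identifier: "BookEntity",
    synonyms: ["book", "novel", "volume"],
    associatedCommands: ["addToLibrary", "addToCollection", "open", "delete"],
    metadata: ["relatedTerms": "bookmark, chapter"]
)
```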
The vocabulary donation module 808 of the digital assistant knowledge module 806 obtains the first vocabulary entry via the application program interface module 804. In some implementations, the digital assistant initiates obtaining the first vocabulary entry, for example, by making a vocabulary donation request (as indicated by arrow 1 shown in fig. 8A) via the application program interface module 804. In response, the application program interface module 804 retrieves the first vocabulary entry from the application program 802 (as indicated by arrow 2) and passes the first vocabulary entry to the vocabulary donation module 808 (as indicated by arrow 3) for registration. For example, the application program interface module 804 can retrieve the first vocabulary entry for the application program 802 from a first data file for the application program 802 (such as a static vocabulary library distributed with installation and/or update materials for the application program 802). Because the static vocabulary library tends to remain unchanged for the application 802, as discussed above, the first data file may include predetermined synonyms or other metadata for application concepts that have been selected (e.g., professionally curated) by the application developer. The application program interface module 804 extracts the vocabulary entries from the first data file to obtain an initial donation of static vocabulary entries from the application program 802 for communication to the vocabulary donation module 808.
In some embodiments, the vocabulary donation module 808 obtains the first vocabulary entry from the application 802 in response to receiving user input related to the application 802, such as a request to install the application 802, a request to update the application 802, or a request to launch the application 802 (e.g., when the application 802 is installed but not currently running). Thus, the vocabulary donation module 808 may begin obtaining the first vocabulary entry prior to installing the application 802 (e.g., as part of an application installation process) or once the application 802 has been installed (e.g., as part of an application refresh process).
Once the first (e.g., static) vocabulary entry has been obtained, the vocabulary donation module 808 registers the first vocabulary entry with the digital assistant knowledge module 806. In some embodiments, as part of registering the first vocabulary entry with the digital assistant knowledge module 806, the vocabulary donation module 808 indexes the first vocabulary entry in the term database 810 (as indicated by arrow 4 shown in fig. 8A). Indexing the first vocabulary entry in the term database 810 may allow the digital assistant to search for the first vocabulary entry, e.g., based on identifiers, synonyms, metadata, and the like.
The vocabulary donation module 808 may register the first metadata in association with the first vocabulary entry. For example, the first metadata may provide a context for the digital assistant to understand the first vocabulary entry. In some embodiments, the first metadata may include metadata obtained from the application 802 (e.g., metadata included in the first data file). In some embodiments, the first metadata may include metadata determined by the vocabulary donation module 808, such as an identity of the application 802, an identity of other applications capable of handling the first class, or related vocabulary entries. For example, the first metadata associated with the vocabulary entry of the book entity type may include an identity of the application 802 (e.g., a donation application), metadata received from the application 802 (e.g., synonyms of the book entity, processing information, related commands), related vocabulary entries (e.g., instances of the book entity, such as a particular book), identities of other applications capable of handling the book entity type (e.g., document viewer applications), and so forth.
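A minimal sketch of such registration follows, assuming a simple in-memory index standing in for the term database 810; the types and field names are illustrative assumptions only.

// Assumed record combining a vocabulary entry with its registered metadata.
struct IndexedVocabularyEntry {
    let identifier: String                  // e.g., "BookEntity"
    let synonyms: [String]                  // e.g., ["novel", "volume", "book"]
    let donatingApplication: String         // e.g., "com.example.reader" (assumed identifier)
    let otherCapableApplications: [String]  // e.g., a document viewer that also handles books
    let associatedCommands: [String]        // e.g., ["open", "addToLibrary"]
}

final class TermDatabase {
    private var index: [String: IndexedVocabularyEntry] = [:]

    // Registers the entry together with its metadata by indexing it under the identifier
    // and under every synonym, so later lookups can match either form.
    func register(_ entry: IndexedVocabularyEntry) {
        index[entry.identifier.lowercased()] = entry
        for synonym in entry.synonyms {
            index[synonym.lowercased()] = entry
        }
    }

    func lookup(_ token: String) -> IndexedVocabularyEntry? {
        index[token.lowercased()]
    }
}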
Registering a first (e.g., static) vocabulary entry with the digital assistant knowledge module 806 may allow the digital assistant to use (e.g., identify and understand) the first (e.g., static) vocabulary entry of the application 802 in processing natural language user input. For example, upon receiving a natural language user input such as "what novel am I currently reading," the digital assistant may search the term database 810 for the token "novel" and determine that it matches a synonym of the book entity of the application 802. Thus, the digital assistant may interpret and implement the user input in the context of the content and functionality of the application 802, such as retrieving information about book-type entities that the user has read through the application 802. In some implementations, once registered, the first vocabulary entry serves as a persistent reference to the first class such that the digital assistant can understand user input related to the class even when the application 802 is not running or active.
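For example, the lookup described above might proceed roughly as in the following sketch, which tokenizes the input with a plain character-set split (a real system would use its ASR/NLP pipeline) and matches tokens against a synonym-to-identifier index; all names are illustrative assumptions.

import Foundation

// synonymIndex maps lowercase synonyms to class identifiers, e.g.,
// ["novel": "BookEntity", "volume": "BookEntity", "book": "BookEntity"]
func matchVocabulary(in userInput: String, synonymIndex: [String: String]) -> String? {
    let tokens = userInput.lowercased()
        .components(separatedBy: CharacterSet.alphanumerics.inverted)
        .filter { !$0.isEmpty }
    for token in tokens {
        if let identifier = synonymIndex[token] {
            return identifier
        }
    }
    return nil
}

// "what novel am I currently reading" resolves to the reader application's book entity type.
let matchedClass = matchVocabulary(
    in: "what novel am I currently reading",
    synonymIndex: ["novel": "BookEntity", "volume": "BookEntity", "book": "BookEntity"])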
As shown in fig. 8B, the vocabulary donation module 808 receives a request from the application 802 to register a second (e.g., dynamic) vocabulary entry (as indicated by arrow 5 shown in fig. 8B). Similar to the first vocabulary entry, the second vocabulary entry represents a second class (e.g., a second programming concept) processed by the application 802, such as an entity, enumeration, or action understood by the application. For example, the second vocabulary entry may be a dynamic vocabulary entry for a particular instance of a book entity, "The Underground Railroad," processed by the application 802.
The second vocabulary entry of the application 802 may include a second identifier of the second class processed by the application 802. For example, the second vocabulary entry may include an identifier "TheUndergroundRailroad" that is used by the application 802 to identify the particular book instance. In some embodiments, the second vocabulary entry may include other metadata related to the class, such as related vocabulary entries or processing information, provided by the application 802. Additionally, the second vocabulary entry may be associated with one or more commands of the application 802. For example, the second vocabulary entry for the particular book instance "The Underground Railroad" may be associated with commands for adding to a library, adding to a collection, opening, deleting, etc. using the reader application. As discussed above, the associated commands may be included in the second vocabulary entry as metadata obtained with the second vocabulary entry.
The vocabulary donation module 808 receives the request from the application 802 via the application program interface module 804 to register the second vocabulary entry (as indicated by arrow 6). In some embodiments, the second vocabulary entry (e.g., including the second identifier and associated metadata) is included in the request from the application 802. For example, the application 802 may make an Application Programming Interface (API) call for donating vocabulary, wherein the second vocabulary entry (e.g., including the second identifier and associated metadata) is provided by the application 802 as a parameter (e.g., an object) of the API call. The API call may be invoked directly by the application 802 or by a daemon of the application 802. In turn, the application program interface module 804 passes the second vocabulary entry to the vocabulary donation module 808.
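The following sketch illustrates what an application-initiated dynamic donation of this kind might look like; the protocol, types, and bundle identifier are assumptions used only to show the shape of the call, not an actual API.

// Assumed shape of a dynamic vocabulary entry provided as a parameter of the donation call.
struct DynamicVocabularyEntry {
    let identifier: String            // e.g., "TheUndergroundRailroad"
    let displayName: String           // e.g., "The Underground Railroad"
    let entityType: String            // e.g., "BookEntity"
    let associatedCommands: [String]  // e.g., ["open", "addToCollection", "delete"]
}

// Stand-in for the donation interface reached through the application program interface module 804.
protocol VocabularyDonating {
    func registerDynamicVocabulary(_ entries: [DynamicVocabularyEntry], fromApplication bundleID: String)
}

// Called by the application (or its daemon) whenever its dynamic corpus changes.
func donateLibraryVocabulary(to assistant: VocabularyDonating) {
    let entry = DynamicVocabularyEntry(
        identifier: "TheUndergroundRailroad",
        displayName: "The Underground Railroad",
        entityType: "BookEntity",
        associatedCommands: ["open", "addToCollection", "delete"])
    assistant.registerDynamicVocabulary([entry], fromApplication: "com.example.reader")
}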
In some implementations, the second (e.g., dynamic) vocabulary entry is included in an ordered set of dynamic vocabulary entries. For example, the application 802 may maintain a dynamic vocabulary corpus, e.g., vocabulary entries for all dynamic classes handled by the application 802, such as specific instances of books, audiobooks, and documents in an online or local library of the reader application (including the second vocabulary entry for "The Underground Railroad"). An update to the dynamic vocabulary corpus by the application 802 (e.g., adding a new book, deleting from the library, updating metadata, etc.) may trigger the application to send a request to register (or re-register) the ordered set of dynamic vocabulary entries (including the second vocabulary entry for "The Underground Railroad").
The collection of dynamic vocabulary entries may be relatively large and may be updated on a frequent or unpredictable basis (e.g., as discussed above with respect to the static/dynamic distinction). Thus, in some implementations, the application 802 orders (e.g., prioritizes) the vocabulary entries included in the collection so that the scope of the overall dynamic vocabulary donation can be controlled by the digital assistant knowledge module 806. For example, the digital assistant knowledge module 806 can limit each of the one or more applications to a vocabulary allocation of 1,000 entries or fewer. Thus, upon receiving a request from the application 802 to register an ordered set of dynamic vocabulary entries, the application program interface module 804 may pass only the vocabulary entries with an order index of 1,000 or less (e.g., only vocabulary entries among the first 1,000 prioritized vocabulary entries) to the vocabulary donation module 808 for registration.
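A minimal sketch of that truncation follows, assuming the donating application has already ordered the entries from highest to lowest priority; the 1,000-entry allocation is taken from the example above.

// Keeps only the vocabulary entries whose order index falls within the allocation.
func entriesWithinAllocation<Entry>(_ orderedEntries: [Entry], allocation: Int = 1_000) -> [Entry] {
    return Array(orderedEntries.prefix(allocation))
}

// e.g., a 5,000-entry ordered donation would be trimmed to its first 1,000 entries before registration.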
Upon receiving a request to register a second (e.g., dynamic) vocabulary entry, the vocabulary donation module 808 registers the second vocabulary entry with the digital assistant knowledge module 806. That is, as with the first (e.g., static) vocabulary entry, the vocabulary donation module 808 may incorporate the second vocabulary entry into the vocabulary of the digital assistant.
As described with respect to the first vocabulary entry, the vocabulary donation module 808 may register second metadata in association with the second vocabulary entry. For example, as with the first metadata, the second metadata may provide a context for the digital assistant to understand the second vocabulary entry, including the identity of the application 802, metadata received from the application 802, related vocabulary entries, and so forth.
In some implementations, the second metadata includes Automatic Speech Recognition (ASR) data that the digital assistant can use to recognize the second vocabulary entry from natural language speech input. For example, unlike the first (e.g., static) vocabulary entry, which may include ASR data provided by the application 802 (e.g., as part of the first data file), the vocabulary donation module 808 determines (e.g., generates) ASR metadata for the second vocabulary entry. In addition to including pronunciation information, the ASR metadata determined for the second vocabulary entry may include context information, e.g., indicating that the term "The Underground Railroad" is more likely to occur near the word "read" than near the word "draw".
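The kind of ASR metadata involved might be represented roughly as follows; the field names and the phonetic spelling are illustrative assumptions rather than any actual ASR format.

// Assumed representation of generated ASR metadata for a dynamic vocabulary entry.
struct ASRMetadata {
    let phrase: String                // e.g., "The Underground Railroad"
    let pronunciationHints: [String]  // assumed phonetic spelling, e.g., ["uhn-der-ground rayl-rohd"]
    let likelyNearbyWords: [String]   // context words that boost recognition, e.g., ["read", "open"]
}

let railroadASRMetadata = ASRMetadata(
    phrase: "The Underground Railroad",
    pronunciationHints: ["uhn-der-ground rayl-rohd"],
    likelyNearbyWords: ["read", "open"])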
In some implementations, as part of registering the second vocabulary entry with the digital assistant knowledge module 806, the vocabulary donation module 808 indexes the second vocabulary entry in the term database 810 (as indicated by arrow 7) and in the ASR database 812 (as indicated by arrow 8). Similar to the first vocabulary entry, indexing the second vocabulary entry in the term database 810 may allow the digital assistant to search for the second vocabulary entry, e.g., based on the second identifier, the second metadata, etc. Indexing the second vocabulary entry in the ASR database may facilitate (e.g., boost) recognition of the second vocabulary entry when performing ASR or NLP processes. For example, indexing the second vocabulary entry in the ASR database may increase the likelihood of recognizing the combined term "The Underground Railroad" and/or words in its vicinity, based on the ASR metadata generated by the vocabulary donation module 808.
Registering the second (e.g., dynamic) vocabulary entry with the digital assistant knowledge module 806 may allow the digital assistant to use (e.g., identify and understand) the second vocabulary entry of the application 802 in processing natural language user input. For example, upon receiving a natural language user input such as "hi Siri, open 'The Underground Railroad'", the digital assistant may use the ASR database 812 to process the user input and/or search the term database 810 for the token "The Underground Railroad" to determine that it matches a particular book entity known to the application 802. Thus, the digital assistant can interpret and implement the user input in the context of the content and functionality of the application 802, such as causing the application 802 to open the particular book entity "The Underground Railroad".
The operations described above with reference to fig. 8A to 8B are optionally implemented by the components depicted in fig. 1 to 4, 6A to 6B, and 7A to 7C. For example, the operations of system 800 may be implemented by one or more electronic devices (e.g., 104, 122, 200, 400, 600) such as the electronic devices implementing system 700. It will be apparent to one of ordinary skill in the art how to implement other processes based on the components depicted in fig. 1-4, 6A-6B, and 7A-7C.
Fig. 9A-9B are flowcharts illustrating a method 900 for registering application terms for use with a digital assistant, according to some embodiments. Method 900 may be performed using one or more electronic devices (e.g., device 104, device 200, device 600) having one or more processors and memory. In some implementations, the method 900 is performed using a client-server system, where the operations of the method 900 are divided between the client devices (e.g., 104, 200, 600) and the servers in any manner. Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
At block 902, a first vocabulary entry for a software application is obtained from the software application. In some embodiments, the first vocabulary entry represents a first class processed by the software application. In some implementations, the first class processed by the software application is a first programming concept understood by the software application, such as entities (e.g., object instances or object types), enumerations (e.g., predefined variables), and/or actions (e.g., for executing commands or tasks). For example, a first vocabulary entry representing a book type entity may be obtained from a reader application; a first vocabulary entry representing an audio recording type entity may be obtained from the voice recording application; or may obtain a first vocabulary entry representing a folder type entity from a file browser application.
In some embodiments, the vocabulary entries may be classified by type as static or dynamic vocabulary, and the first vocabulary entry is a static vocabulary entry. For example, the static vocabulary of the reader application may include vocabulary entries for the book entity type, the library tag enumeration, and the like. As another example, the static vocabulary of a voice recording application may include vocabulary entries for the recording entity type, the application's favorites folder entity, and so forth. In contrast to dynamic vocabulary, static vocabulary is assumed to remain stable on the timescale of the application update schedule. For example, the manner in which a voice recording application processes or understands the recording entity type, or the manner in which a reader application processes or understands the library tag enumeration, will tend to remain unchanged, at least between updates to the application.
In some implementations, obtaining the first vocabulary entry for the software application includes retrieving the first vocabulary entry from the first data file for the software application at block 904. For example, an installation or update package of the software application may include a standardized vocabulary data file of the static application vocabulary, including the first vocabulary entry, which may be read (e.g., by the application program interface module 804) to extract the first vocabulary entry and metadata associated with the first vocabulary entry.
In some embodiments, the first vocabulary entry includes a first identifier of the first class processed by the software application. For example, the first vocabulary entry may include the identifier "RecordingEntity" that is used by the voice recording application to identify the recording entity type. As another example, the first vocabulary entry may include the identifier "LibraryEnum" that is used by the reader application to identify the application's "library" tag.
In some embodiments, the first vocabulary entry further includes at least a first synonym for the first identifier. For example, the first vocabulary entry may include one or more natural language words or phrases (e.g., human-readable and utterable strings) corresponding to the first class, such as "recording," "voice note," "voice memo," and/or "dictation" for the recording entity type of the voice recording application (e.g., as opposed to the first identifier "RecordingEntity," which is unlikely to be included in natural language voice input). As described above, in some embodiments, the first vocabulary entry is a static vocabulary entry. Since the static vocabulary changes relatively infrequently (at least on the timescale of the application update schedule), the one or more synonyms of the first identifier may be developed, for example, professionally curated by the application developer, and included in the first data file obtained from the application for the application vocabulary.
In some implementations, the first vocabulary entry is associated with at least a first command of the software application. For example, the first command may be a task or action executable by the software application for, through, or with respect to the first class. For the recording entity type, the first vocabulary entry may be associated with a first command to create a new instance of the recording entity type using the voice recording application. As another example, for the library tag enumeration, the first vocabulary entry may be associated with a first command to open the reader application to the library tag. In some implementations, the first vocabulary entry can be associated with a plurality of commands of the first application (including the first command of the first application). For example, the first vocabulary entry for the recording entity type may be associated with any or all actions (such as play, pause, open, edit, or delete) that the voice recording application may take with respect to an instance of the recording entity type. In some implementations, the first command associated with the first vocabulary entry can be included in the first data file of the software application, for example, as part of a selected (e.g., professionally curated) library of commands compatible with the first class represented by the first vocabulary entry.
In some implementations, block 902 (obtaining a first vocabulary entry for a software application) is initiated by a digital assistant (e.g., digital assistant module 726 of digital assistant system 700) of an electronic device implementing method 900. For example, the digital assistant may make a request to a digital assistant-application program interface (e.g., application program interface module 804) to read a first data file of the software application and extract a first vocabulary entry (and/or other static vocabulary entries) for registration by the digital assistant.
In some implementations, block 902 (obtaining a first vocabulary entry for the software application) is performed in response to receiving a user input requesting installation of the software application. For example, the first data file of the software application may be included in the installation material and read as part of the installation process to extract an initial vocabulary donation including the first vocabulary entry. In some implementations, obtaining the first vocabulary entry for the software application is performed in response to updating the software application (e.g., with the first data file included in the update material of the software application). As described above, in some embodiments, the first vocabulary entry is a static vocabulary entry, and thus registering (or re-registering) the first vocabulary entry will typically capture any changes to the static vocabulary of the software application as part of the software application installation and update process.
In some embodiments, the software application is already installed on the electronic device implementing the method 900, e.g., before blocks 902-904 are performed. Thus, in some implementations, block 902 (obtaining a first vocabulary entry for the software application) is performed in response to launching the software application. For example, when a user launches the software application, the static vocabulary of the software application (including the first vocabulary entry) may be extracted (e.g., from the first data file) such that an initial launch after installation triggers an initial static vocabulary registration, and subsequent launches trigger a "refresh" of the static vocabulary entries (as described in further detail below).
At block 906, the first vocabulary entry is registered with a knowledge base of the digital assistant. For example, the knowledge base may include the vocabulary (e.g., vocabulary 744 of digital assistant module 726) used by the digital assistant to interpret and implement natural language user input, such as spoken or typed user input to the digital assistant. Thus, once registered with the knowledge base, the digital assistant may interpret and implement natural language user input using the first vocabulary entry. In some implementations, registering the first vocabulary entry includes adding the first vocabulary entry to the knowledge base (e.g., for the first time) if the first class of the first vocabulary entry was not previously known to the digital assistant. In some embodiments, registering the first vocabulary entry includes updating the first vocabulary entry in the knowledge base. For example, registering a version of the first vocabulary entry for the recording entity type obtained when updating the voice recording application may include updating a previous version of the first vocabulary entry (e.g., a version of the first vocabulary entry obtained and registered when the voice recording application was initially installed). As another example, registering a version of the first vocabulary entry for the recording entity type may include updating a version of the first vocabulary entry that was previously obtained from a different application (e.g., a video editing application that also uses the recording entity type).
In some implementations, registering the first vocabulary entry includes registering first metadata in association with the first vocabulary entry at block 908. In some embodiments, the first metadata associated with the first vocabulary entry may include an identifier of the first class; one or more synonyms; at least the first command associated with the first vocabulary entry; an identity of the application (or applications) donating the first vocabulary entry; related vocabulary entries; Automatic Speech Recognition (ASR) data; or any other information that will assist the digital assistant in interpreting and implementing user input relative to the first vocabulary entry. For example, the first metadata associated with the first vocabulary entry for the recording entity type may include the identifier "RecordingEntity", synonyms (e.g., "recording", "voice note", "voice memo", "dictation", etc.), the identities of a voice recording application and a video editing application that process the recording entity type, a command to create a voice recording, etc. In some implementations, at least a portion of the metadata associated with the first (e.g., static) vocabulary entry can be obtained from the software application, for example, as part of a first data file selected (e.g., professionally curated) by the application developer for the static application vocabulary.
In some implementations, registering the first vocabulary entry includes indexing the first vocabulary entry in a searchable database of the digital assistant at block 910. For example, the first vocabulary entry for the recording entity type may be indexed along with its associated first metadata. Thus, the digital assistant may query the searchable database for one or more vocabulary entries matching natural language user input based on the identifiers, synonyms, associated commands, and/or associated metadata of the vocabulary entries.
At block 912, while the software application is running (e.g., after the application has been installed and launched), a request to register a second vocabulary entry for the software application is received from the software application at block 914. As with the first vocabulary entry, in some implementations, the second vocabulary entry represents a second class processed by the software application, such as an entity, enumeration, and/or action. For example, a second vocabulary entry representing the book "The Underground Railroad" may be included in a request from the reader application; a second vocabulary entry representing a recording entitled "voice note 2" may be included in a request from the voice recording application; or a second vocabulary entry representing a folder called "cat photos" may be included in a request from the file browser application.
In embodiments in which the vocabulary entries are classified by type as static or dynamic vocabulary entries, the second vocabulary entry is a dynamic vocabulary entry. As with the first vocabulary entry, in some embodiments, the second class processed by the software application is a second programming concept understood by the software application. In contrast to static vocabulary entries, in some embodiments, the dynamic vocabulary entries of an application may include vocabulary for classes that are created, modified, or deleted frequently or unpredictably. For example, vocabulary entries for books (e.g., specific instances of the book entity type, such as "The Underground Railroad") in a local or online library of the reader application may be added, modified, or deleted at any time by a user or an administrator of the library. As another example, a vocabulary entry for a recording (e.g., a specific instance of the recording entity type, such as "voice note 2") may be created (and named) by the user.
As described with respect to the first vocabulary entry, in some embodiments, the second vocabulary entry includes a second identifier of the second class processed by the software application. For example, the second vocabulary entry may include the identifier "VoiceNote2" to identify the particular recording instance of the voice recording application entitled "voice note 2". As another example, the second vocabulary entry may include the identifier "TheUndergroundRailroad" to identify the particular book of the reader application entitled "The Underground Railroad". In some embodiments, unlike the first (e.g., static) vocabulary entry, the second (e.g., dynamic) vocabulary entry may not include predetermined synonyms (e.g., synonyms selected by an application developer). For example, given the dynamic nature of the vocabulary, the application developer may not be able to professionally curate synonyms (e.g., for a large online library), or there may be no appropriate synonyms for the class (e.g., for user-generated or proper names such as "voice note 2" or "The Underground Railroad").
As described with respect to the first vocabulary entry, in some embodiments, the second vocabulary entry is associated with at least a second command of the software application. For example, the second command may be associated with a task or action that may be performed by the software application for, through, or with respect to the second class. For the recording "voice note 2", the second vocabulary entry may be associated with a second command to play, edit, or delete the recording using the voice recording application.
In some embodiments, the second vocabulary entry (e.g., including the second identifier, the associated command, and/or any other related metadata provided by the application) is included in the request received from (e.g., provided by) the software application. For example, the request from the software application received at block 914 may be an Application Programming Interface (API) call for donating vocabulary. The second vocabulary entry may be provided by the application as a parameter (e.g., an object) of the API call. In some embodiments, the API call that donates the second vocabulary entry may be made directly by the software application or by a daemon of the software application.
In some implementations, the request from the software application received at block 914 is initiated by the software application (e.g., rather than by the digital assistant). For example, as described above, dynamic vocabulary entries for an application may include frequently or unpredictably created, modified, or deleted class vocabularies. Thus, rather than waiting for the digital assistant to extract any new or updated dynamic vocabulary entries (e.g., as part of an application update), the software application may "push" dynamic vocabulary donations to the digital assistant as those additions and updates are made.
In some implementations, the second (e.g., dynamic) vocabulary entry is included in an ordered set of vocabulary entries. For example, rather than donating a separate second vocabulary entry for the recording "voice note 2", the voice recording application may provide a corpus of dynamic vocabulary entries that includes entries for all specific recordings included in the voice recording application's library. As another example, rather than donating a separate second vocabulary entry for the book "The Underground Railroad," the reader application may provide a corpus of dynamic vocabulary entries that includes multiple entries for books, audiobooks, and documents included in a local or online library. In these embodiments, the corpus of dynamic vocabulary entries may be ordered (e.g., prioritized) by the software application so that the digital assistant can control the scope of the dynamic vocabulary donation. For example, the software application may determine a relative priority for each vocabulary entry in the set of vocabulary entries based on user interactions, context data, associated functionality, etc. (e.g., prioritizing entities and enumerations with which the user frequently interacts over entities and enumerations that are rarely used).
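For instance, a donating application might order its dynamic corpus along the lines of the following sketch, which prioritizes by a recent-interaction count with recency as a tie-breaker; the scoring signals are illustrative assumptions.

import Foundation

struct LibraryItem {
    let identifier: String    // e.g., "TheUndergroundRailroad"
    let recentOpenCount: Int  // user-interaction signal
    let lastOpened: Date?
}

// Returns identifiers ordered from highest to lowest priority for donation.
func orderedDonation(from items: [LibraryItem]) -> [String] {
    return items
        .sorted {
            if $0.recentOpenCount != $1.recentOpenCount {
                return $0.recentOpenCount > $1.recentOpenCount
            }
            return ($0.lastOpened ?? .distantPast) > ($1.lastOpened ?? .distantPast)
        }
        .map { $0.identifier }
}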
In some embodiments, it may be determined whether the software application has modified the ordered set of vocabulary entries, and based on the determination that the software application has modified the ordered set of vocabulary entries, the request to register the second vocabulary entry received at block 914 may be received. For example, when the software application adds new vocabulary to the dynamic corpus, modifies vocabulary already included in the dynamic corpus, reorders (e.g., reprioritizes) the dynamic corpus, or removes vocabulary from the dynamic corpus, the software application may "push" the dynamic vocabulary donations in order to update the digital assistant accordingly.
At block 916, the second vocabulary entry is registered with the knowledge base of the digital assistant. As with registering the first vocabulary entry, in some embodiments registering the second vocabulary entry includes adding the second vocabulary entry to the knowledge base (e.g., a first time), or updating the second vocabulary entry in the knowledge base (e.g., if a version of the second vocabulary entry of the second class was previously donated by the software application or a different software application).
In embodiments in which the second vocabulary entry is included in the ordered set of vocabulary entries, a determination may be made as to whether a sequential index of the second vocabulary entry in the ordered set of vocabulary entries (e.g., its priority in the dynamic corpus) meets registration criteria, and block 916 (registering the second vocabulary entry with the knowledge base) may be performed based on a determination that the registration criteria are met. For example, the digital assistant knowledge base may limit each installed software application to a fixed vocabulary allocation (e.g., 1,000 entries or fewer per application). As another example, the digital assistant knowledge base may limit each installed software application to a flexible vocabulary allocation such that, if the voice recording application donates only 100 vocabulary entries, the reader application may use some of the voice recording application's remaining 900 allocated entries in addition to its own 1,000 allocated entries. Thus, the registration criteria may include a requirement that the index of the second vocabulary entry in the ordered set of vocabulary entries does not exceed a threshold of the vocabulary allocation.
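A minimal sketch of such a registration criterion follows, with the flexible-allocation arithmetic assumed from the 100/900/1,000 example above.

// Returns true if the entry's index in the ordered donation fits within the application's
// allocation (optionally extended by unused allocation from other applications).
func meetsRegistrationCriteria(orderIndex: Int,
                               baseAllocation: Int = 1_000,
                               unusedAllocationFromOtherApps: Int = 0) -> Bool {
    return orderIndex < baseAllocation + unusedAllocationFromOtherApps
}

// e.g., with 900 unused entries borrowed from the voice recording application, an entry at
// index 1,500 in the reader application's ordered donation would still meet the criteria.
let registered = meetsRegistrationCriteria(orderIndex: 1_500, unusedAllocationFromOtherApps: 900)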
In some implementations, registering the second vocabulary entry includes registering second metadata in association with the second vocabulary entry at block 918. In some embodiments, the second metadata associated with the second vocabulary entry may include an identifier of the second class; at least the second command associated with the second vocabulary entry; an identity of the application (or applications) that donate or otherwise process the second vocabulary entry; related vocabulary entries; or any other information that will assist the digital assistant in interpreting and implementing user input relative to the second vocabulary entry. For example, the second metadata associated with the second vocabulary entry for the recording "voice note 2" may include the identifier "VoiceNote2", the recording entity type, the identity of the voice recording application (and/or a video editing application capable of handling the recording entity type), the date on which the recording "voice note 2" was created, and so forth. In some embodiments, at least a portion of the second metadata may be provided by the software application, for example, with the request to register the second vocabulary entry.
In some implementations, the second metadata registered in association with the second (e.g., dynamic) vocabulary entry includes Automatic Speech Recognition (ASR) metadata for the second vocabulary entry. For example, the ASR metadata may include pronunciation information and/or context information, e.g., indicating that the term "The Underground Railroad" is more likely to occur near the word "read" than near the word "draw". In some implementations, the method 900 includes determining (e.g., generating) the ASR metadata for the second vocabulary entry. For example, for the second vocabulary entry for the recording "voice note 2," the digital assistant may determine pronunciation data for the identifier and/or context information associated with the particular entity to help identify the user-generated title.
In some implementations, registering the second vocabulary entry includes indexing the second vocabulary entry in one or more searchable databases of the digital assistant at block 920. In some embodiments, as with the first vocabulary entry, the second vocabulary entry may be indexed in a searchable database (e.g., the term database 810), and the digital assistant may query the database for a vocabulary entry match based on an identifier of the vocabulary entry, synonyms, associated commands, and/or associated metadata. In some implementations, the second vocabulary entry can be indexed in an ASR-specific database (e.g., ASR database 812) to facilitate (e.g., boost) recognition of the second vocabulary entry when performing ASR or NLP processes. For example, indexing the second vocabulary entry in the ASR database may increase the likelihood that the combined term "voice note 2" is recognized as an ASR interpretation (e.g., as a candidate transcription of the speech input), as opposed to homophones such as "voice note to" or "voice note too".
Referring to FIG. 9B, in some embodiments, at block 922, the first vocabulary entry and/or the second vocabulary entry of the software application is deregistered from the knowledge base of the digital assistant. In some implementations, deregistering the first vocabulary entry and/or the second vocabulary entry for the software application includes removing the first vocabulary entry and/or the second vocabulary entry from the knowledge base. In some implementations, deregistering the first vocabulary entry and/or the second vocabulary entry for the software application includes updating the first vocabulary entry and/or the second vocabulary entry such that a version of the vocabulary entry specifically associated with the software application is removed from the knowledge base, but a version of the vocabulary entry associated with a different application may remain registered.
In some embodiments, block 922 is performed in response to receiving a request to launch the software application. For example, launching the software application may trigger a "refresh" of the application vocabulary, where the second (e.g., dynamic) vocabulary entry is deregistered from the knowledge base and subsequently re-registered (e.g., re-registering the dynamic application vocabulary) by executing some or all of blocks 912 through 920. In some embodiments, block 922 is performed in accordance with a determination that the software application has been uninstalled from the electronic device implementing method 900. Thus, in these embodiments, the digital assistant will no longer interpret or implement user input that includes the first vocabulary entry and the second vocabulary entry with respect to the uninstalled software application.
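The launch-time "refresh" and the uninstall case described above might be organized roughly as follows; the KnowledgeBase protocol and its methods are assumptions for illustration.

protocol KnowledgeBase {
    func deregisterVocabulary(forApplication bundleID: String)
    func registerVocabulary(_ identifiers: [String], forApplication bundleID: String)
}

// On launch: deregister the application's previously registered entries (block 922),
// then re-register the current vocabulary (blocks 912-920).
func refreshApplicationVocabulary(for bundleID: String,
                                  currentIdentifiers: [String],
                                  in knowledgeBase: KnowledgeBase) {
    knowledgeBase.deregisterVocabulary(forApplication: bundleID)
    knowledgeBase.registerVocabulary(currentIdentifiers, forApplication: bundleID)
}

// On uninstall: the entries are simply removed and not re-registered.
func handleUninstall(of bundleID: String, from knowledgeBase: KnowledgeBase) {
    knowledgeBase.deregisterVocabulary(forApplication: bundleID)
}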
The operations described above with reference to fig. 9A to 9B are optionally implemented by the components depicted in fig. 1 to 4, 6A to 6B, 7A to 7C, 8A to 8B, and 10A to 10B. For example, the operations of method 900 may be implemented in accordance with system 800, which may be implemented on one or more electronic devices, such as a mobile phone. It will be apparent to one of ordinary skill in the art how to implement other processes based on the components depicted in fig. 1-4, 6A-6B, 7A-7C, 8A-8B, and 10A-10B.
Fig. 10A-10B illustrate a system 1000 for implementing an application vocabulary by a digital assistant, according to various examples. The system 1000 may be implemented, for example, using one or more electronic devices that implement a digital assistant (e.g., the digital assistant system 700). In some embodiments, system 1000 is implemented using a client-server system (e.g., system 100), and the functionality of system 1000 is divided in any manner between one or more server devices (e.g., DA server 106) and the client device. In other embodiments, the functionality of system 1000 is divided between one or more servers and multiple client devices (e.g., mobile phones and smart watches). Thus, while some of the functions of system 1000 are described herein as being performed by a particular device of a client-server system, it should be understood that system 1000 is not so limited. In other examples, system 1000 is implemented using only one client device (e.g., user device 104) or only multiple client devices. In the system 1000, some functions are optionally combined, the order of some functions is optionally changed, and some functions are optionally omitted. In some examples, additional steps may be performed in conjunction with the functions of the system 1000 described.
The system 1000 may be implemented using hardware, software, or a combination of hardware and software to perform the principles discussed herein. Further, system 1000 is exemplary, so system 1000 may have more or fewer components than illustrated, may combine two or more components, or may have different component configurations or arrangements. Furthermore, while the following discussion describes functions performed at a single component of system 1000, it should be understood that these functions may be performed at other components of system 1000 and that these functions may be performed at more than one component of system 1000. The system 1000 may be used to implement the method 1100 as described below with respect to fig. 11A-11B.
Referring to fig. 10A-10B, system 1000 includes one or more software applications (e.g., first-party and third-party applications) installed on device 1002, including application 1004. As shown in fig. 10A-10B, the application 1004 is a voice recording application installed on the device 1002 that provides additional content and functionality to a user of the electronic device, such as creating, editing, and playing back voice recordings. The system 1000 also includes a digital assistant system (e.g., as described above with respect to the digital assistant system 700) that includes vocabulary used by the digital assistant to interpret and implement natural language user inputs, such as spoken or typed user inputs to the digital assistant.
To integrate application content and functionality (e.g., to interpret and implement natural language user input via one or more applications), digital assistant systems use application vocabulary of one or more applications. As discussed with respect to fig. 8A-9B, the application vocabulary includes vocabulary entries that represent classes (e.g., programming concepts) handled by one or more applications, including application 1004. For example, a vocabulary entry may represent a type (e.g., an entity type, such as a record entity type) or an object, such as a particular instance of an entity (e.g., a particular record) or enumeration (e.g., a tag of an application). The objects or types represented by the vocabulary entries may be handled by a single application or multiple applications (e.g., a video editing application may be capable of interacting with a record type entity in addition to the voice recording application 1004).
As discussed with respect to fig. 8A-9B, the application vocabulary may be categorized into two types: static and dynamic. For example, the vocabulary entry for the voice recording entity type is static in that the application 1004 will typically understand (e.g., process) the general class of voice recording entities in the same manner on the timescale of the application update schedule. For static vocabulary entries, the donating application may include (e.g., in the first data file) predetermined information (e.g., metadata), such as synonyms selected (e.g., professionally curated) by the application developer. On the other hand, a particular instance of a voice recording may be created, edited, or deleted by a user at any time, so the vocabulary entry for a given instance of the voice recording entity is a dynamic vocabulary entry. Although the donating application may provide some metadata for the dynamic vocabulary entry (e.g., the identity of the user creating the instance, the date or time the instance was created, etc.), this metadata may not be included with the application and/or selected by the developer.
Thus, to integrate the content and functionality of the application 1004, the digital assistant system obtains an application vocabulary from the application 1004 that includes vocabulary entries for both the first type and the second type, and registers the application vocabulary with the digital assistant's knowledge base. The application vocabulary of the application 1004 may be obtained and registered with a digital assistant knowledge base, as described above with respect to fig. 8A-9B. For example, the digital assistant may initiate a donation (e.g., an acquisition) of a first type of application vocabulary (e.g., static vocabulary entry) when the application 1004 is being installed or when a request to update or launch the application 1004 is received, while a second type of application vocabulary (e.g., dynamic vocabulary entry) may be acquired in response to a request to register for a dynamic vocabulary (e.g., "push" donation) made by the application.
FIG. 10A illustrates the use of system 1000 in responding to user input using the static application vocabulary. As shown in fig. 10A, the digital assistant represented by icon 1006 receives user input 1008, i.e., the natural language voice input "create a new voice memo called 'Monday notes'". The digital assistant may receive user input 1008 when the device 1002 is displaying a home screen, for example, when the applications installed on the device 1002 (including the application 1004) are not running or are in an inactive state.
In response to receiving the user input 1008, the digital assistant determines whether the user input 1008 corresponds to a vocabulary entry from the application vocabulary. For example, the digital assistant may parse the user input 1008 to obtain a set of one or more tokens (e.g., words, phrases, sub-word fragments, etc.), e.g., using Automatic Speech Recognition (ASR) or Natural Language Processing (NLP) techniques. The set of tokens may be compared to metadata of vocabulary entries registered with the knowledge base to determine whether one or more tokens representing user input 1008 correspond to (e.g., match) any application vocabulary.
For example, the digital assistant may determine that the user input corresponds to a first vocabulary entry of the first type (e.g., static) included in the knowledge base of the digital assistant, representing the recording entity type of the application 1004. The first vocabulary entry may be associated with one or more commands in the knowledge base, such as a command to create a new entity instance (and/or commands related to an instance of the recording entity type, such as playing, pausing, editing, or deleting a particular recording entity). The first vocabulary entry may include an identifier of the represented type (such as "RecordingEntity") and may be registered in association with first metadata such as synonyms of the entity type (e.g., "recording", "voice note", "audio memo", "dictation", etc.), application information (e.g., the identity of the voice recording application and/or a video editing application), commands, etc.
Based on the tokens representing user input 1008 "create a new voice memo called 'Monday notes'", the digital assistant determines that the token "voice memo" matches a synonym of the recording entity type included in the first metadata of the first vocabulary entry. In some embodiments, because the first vocabulary entry is a static vocabulary entry and predetermined synonyms (e.g., synonyms selected by the application developer) may be included in the first metadata, the digital assistant may require an exact match between the token and the synonym (or other portion of the metadata used to determine the match). The correspondence between the user input 1008 and the first vocabulary entry for the recording entity type may be further reinforced by other natural language understanding and contextual information, e.g., the token "create" matches the action of creating a new entity instance associated with the first vocabulary entry (e.g., compatible with the recording entity type).
Since user input 1008 is received while application 1004 is in an inactive state, this determination may be performed while application 1004 is still in an inactive state (e.g., not running or currently unfocused). Although the application 1004 is in an inactive state, because the first vocabulary entry of the record entity type is registered with the digital assistant knowledge base as part of the application vocabulary, the digital assistant is still able to interpret the user input 1008 using the content and functionality of the application 1004.
Once the digital assistant has determined that the user input 1008 matches the first vocabulary entry of the record entity type, the digital assistant causes the application 1004 to perform a first action based on the first vocabulary entry. The digital assistant may use the first metadata of the first vocabulary entry to identify the application 1004. For example, the first metadata of the first vocabulary entry may include an identity of the application 1004, such as an application that may process the record entity type and/or a command associated with the record entity type. As another example, the first metadata of the first vocabulary entry may identify the application 1004 as an application that donates the first vocabulary entry and/or a particular version of the first vocabulary entry to a digital assistant (e.g., an application that donates a "voice memo" synonym for matching).
The digital assistant may also identify a first action using the first metadata of the first vocabulary entry. For example, the first metadata of the first vocabulary entry may include a command to create an associated new entity instance using the application 1004. The first action may also be identified using other application vocabulary, such as matching vocabulary entries for the command itself, which the digital assistant may determine is compatible with the record entity type identified from the first vocabulary entry.
Thus, in response to user input 1008 "create a new voice memo called 'Monday notes'", the digital assistant causes the application 1004 to create a new instance of the recording entity, the entity type identified by the matching first vocabulary entry. In implementations in which user input 1008 is received and processed while the application 1004 is in an inactive state, the digital assistant may first open (e.g., launch or change to active focus) the application 1004. In some implementations, the digital assistant can also provide other information to the application 1004 for performing the first task, such as a parameter for the name "Monday notes" of the new instance of the recording entity extracted (e.g., using natural language processing techniques) from the user input 1008. In some implementations, to cause the application 1004 to perform the first action based on the first vocabulary entry, the digital assistant uses the identified command to instruct the application 1004 to create the new entity instance, for example, using a digital assistant plug-in or an Application Programming Interface (API) call. In response to receiving the instruction, the application 1004 performs the tasks required to complete the first action, such as instantiating a new instance of the recording entity, naming it "Monday notes", and beginning to record audio.
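The hand-off described above might look roughly like the following sketch; the ApplicationHandle protocol, command name, and parameter key are assumptions standing in for whatever plug-in or API mechanism is used.

protocol ApplicationHandle {
    var isActive: Bool { get }
    func open()
    func perform(command: String, parameters: [String: String])
}

// Opens the application if needed, then issues the identified command with the
// parameters extracted from the utterance.
func performMatchedAction(on app: ApplicationHandle) {
    if !app.isActive {
        app.open()  // launch or change to active focus first
    }
    // e.g., command and parameter identified from "create a new voice memo called 'Monday notes'"
    app.perform(command: "createRecording", parameters: ["name": "Monday notes"])
}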
FIG. 10B illustrates the use of system 1000 in responding to user input using the dynamic application vocabulary. As shown in fig. 10B, the digital assistant represented by icon 1006 receives user input 1010, i.e., the natural language voice input "play 'voice note 2'". The digital assistant may receive user input 1010 when the device 1002 is displaying a home screen, for example, when the applications installed on the device 1002 (including the application 1004) are not running or are in an inactive state.
As described with respect to fig. 10A, the digital assistant determines whether the user input 1010 corresponds to a vocabulary entry from an application vocabulary. For example, the digital assistant may parse the user input 1010 to obtain a set of one or more tokens (e.g., words, phrases, sub-word fragments, etc.) for comparison to the application vocabulary, e.g., using Automatic Speech Recognition (ASR) or Natural Language Processing (NLP) techniques.
For example, the digital assistant may determine that the user input corresponds to a second vocabulary entry of the second type (e.g., dynamic) included in the knowledge base of the digital assistant, representing the recording entity instance "voice note 2" of the application 1004. The second vocabulary entry may be associated with one or more commands in the knowledge base, such as commands to play, edit, pause, or delete the entity. The second vocabulary entry may include an identifier of the represented instance (such as "VoiceNote2") and may be registered in association with second metadata such as application information (e.g., the identity of the voice recording application and/or a video editing application), commands, and the like. In some embodiments, as described above with respect to fig. 8B-9B, the second metadata may include ASR metadata, such as pronunciation data and/or context information, that is determined (e.g., generated) during registration of the second vocabulary entry. In these embodiments, the ASR metadata included in the second metadata may facilitate (e.g., boost) ASR recognition of the second vocabulary entry. For example, the system 1000 can determine pronunciation data for the second vocabulary entry to facilitate recognition of the combined term "voice note 2" as an ASR interpretation (as opposed to homophones such as "voice note to" or "voice note too"), and/or contextual information indicating that "voice note 2" is more likely to appear near the word "play" than near the word "call".
Based on the tokens representing the user input 1010 "play 'voice note 2'", the digital assistant determines that the token "voice note 2" matches the identifier of the second vocabulary entry and/or the ASR metadata. In some embodiments, because the second vocabulary entry is a dynamic vocabulary entry, the second vocabulary entry may represent a proper, automatically generated, or user-generated term and may not include predetermined synonyms. Thus, the digital assistant may require only a partial match between the token and the second metadata to account for slight variations (e.g., an ASR transcription of the input as "voice note to" instead of "voice note 2"). The correspondence between the user input 1010 and the second vocabulary entry may be further reinforced by other natural language understanding and contextual information, such as the token "play" matching a command for playing a voice recording associated with the second vocabulary entry (e.g., compatible with the recording entity instance).
As with the user input 1008, because the second vocabulary entry for the recording "voice note 2" is registered with the digital assistant knowledge base as part of the application vocabulary, the digital assistant is still able to interpret the user input 1010 using the content and functionality of the application 1004, although the application 1004 may be in an inactive state when the user input 1010 is received.
Once the digital assistant has determined that the user input 1010 matches the second vocabulary entry for the recording "voice note 2," the digital assistant causes the application 1004 to perform a second action based on the second vocabulary entry. As described with respect to fig. 10A, the digital assistant can use the second metadata of the second vocabulary entry to identify the application 1004 and/or the second action. For example, the second metadata of the second vocabulary entry may include the identity of the application 1004 (e.g., the application that created the recording "voice note 2") and may include an indication of commands (such as play, edit, move, or delete) compatible with the recording entity.
Thus, in response to user input 1010 "play 'voice note 2'", the digital assistant causes the application 1004 to initiate playback of the recording "voice note 2", the particular entity instance identified by the matching second vocabulary entry. As described with respect to fig. 10A, causing the application 1004 to perform the second action may include opening the application 1004 (e.g., if the application 1004 is in an inactive state) or providing additional parameters of the second action to the application 1004. The digital assistant may use the identified command to instruct the application 1004, for example, using a digital assistant plug-in or an API call. Thus, the application 1004 accesses the recording "voice note 2" and plays it for the user.
The operations described above with reference to fig. 10A to 10B are optionally implemented by the components depicted in fig. 1 to 4, 6A to 6B, and 7A to 7C. For example, the operations of system 1000 may be implemented by one or more electronic devices (e.g., 104, 122, 200, 400, 600, 1002) such as the electronic devices implementing system 700. It will be apparent to one of ordinary skill in the art how to implement other processes based on the components depicted in fig. 1-4, 6A-6B, and 7A-7C.
Fig. 11A-11B are flowcharts illustrating a method 1100 for implementing an application vocabulary by a digital assistant, according to some embodiments. Method 1100 may be performed using one or more electronic devices (e.g., device 104, device 200, device 600, device 1002) having one or more processors and memory. In some implementations, the method 1100 is performed using a client-server system, where the operations of the method 1100 are divided between a client device (e.g., 104, 200, 600, 1002) and a server in any manner. Some operations in method 1100 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
Referring to FIG. 11A, at block 1102, an application vocabulary for a software application is obtained from the software application, including at least a first type of vocabulary entry and a second type of vocabulary entry. In some embodiments, block 1102 is performed as described with respect to blocks 902 and 912-914 of fig. 8A-9B and in particular with respect to fig. 9A. For example, the application vocabulary may represent the content and functionality of a voice recording application (e.g., application 1004), a reader application, or any other application installed on or in the process of being installed on a device implementing method 1100.
In some embodiments, the first type of vocabulary entry is a static vocabulary entry (e.g., as described with respect to the first vocabulary entry of fig. 8A-9B), and the second type of vocabulary entry is a dynamic vocabulary entry (e.g., as described in further detail with respect to the second vocabulary entry of fig. 8A-9B). That is, in some embodiments, the obtained application vocabulary is a hybrid static-dynamic vocabulary of the software application. For example, the application vocabulary of a voice recording application may include static vocabulary entries for general recording entity types and dynamic vocabulary entries for specific recording instances in a user's record base. As another example, the application vocabulary of the reader application may include static vocabulary entries for general book entity types and "library" tag enumeration, as well as dynamic vocabulary entries for specific instances of books in the user's library and/or online library or bookstore.
In some embodiments, obtaining the application vocabulary of the software application includes obtaining at least a first portion of the application vocabulary in response to receiving user input requesting installation of the software application. For example, the first portion of the application vocabulary may be an initial donation of static application vocabulary (e.g., a first type of vocabulary) obtained as part of the application installation process. In some embodiments, the software application is installed on an electronic device implementing the method 1100. In these embodiments, obtaining the application vocabulary of the software application may include obtaining at least a second portion of the application vocabulary in response to launching the software application. For example, the second portion of the application vocabulary may be a "refresh" of the static application vocabulary (e.g., the first type of vocabulary).
In some embodiments, obtaining the application vocabulary of the software application includes receiving a request from the software application and, in response to receiving the request from the software application, obtaining at least a third portion of the application vocabulary that includes vocabulary entries of the second type. For example, as described with respect to block 914 of fig. 8B and 9A, the third portion of the application vocabulary may be a dynamic vocabulary entry or a dynamic vocabulary corpus that the software application "pushes" as a donation. In some embodiments, the request from the software application is received as an Application Programming Interface (API) call. For example, the software application may issue the request directly (e.g., make an API call), or the request from the software application may be received via a daemon of the software application.
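A runtime ("push") donation of the kind just described might resemble the following sketch, in which an application notifies the assistant when a new entity instance is created; the DynamicDonation type and registerDynamicVocabulary function are hypothetical stand-ins for whatever API call or daemon interface an implementation actually exposes.

```swift
import Foundation

// Hypothetical payload for a runtime ("push") vocabulary donation.
struct DynamicDonation {
    let identifier: String   // e.g., the user-chosen title "voice note 2"
    let entityType: String   // e.g., "recordingEntity"
    let donatingApp: String
}

// Stand-in for the assistant-side ingestion API, or the daemon that forwards donations to it.
func registerDynamicVocabulary(_ donations: [DynamicDonation]) {
    for donation in donations {
        print("Registering '\(donation.identifier)' (\(donation.entityType)) from \(donation.donatingApp)")
    }
}

// A voice recording app might push a donation when the user saves a new recording.
func userSavedRecording(named title: String) {
    let donation = DynamicDonation(identifier: title,
                                   entityType: "recordingEntity",
                                   donatingApp: "com.example.voicerecorder")
    registerDynamicVocabulary([donation])
}

// userSavedRecording(named: "voice note 2")
```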
At block 1104, the application vocabulary is registered with a knowledge base of the digital assistant. In some embodiments, block 1104 is performed as described with respect to fig. 8A-9B and in particular with respect to blocks 906-910 and 916-920 of fig. 9A.
In some embodiments, at block 1106, registering the application vocabulary with the knowledge base includes, for each respective vocabulary entry, associating respective metadata with the respective vocabulary entry in the knowledge base. For example, metadata for a given vocabulary entry may include identifiers, synonyms, commands, donating application information, source or context information, Automatic Speech Recognition (ASR) data, or any other information associated with the vocabulary entry that will assist the digital assistant in interpreting and implementing user input relative to the vocabulary entry.
In some embodiments, the method 1100 includes determining ASR metadata for the respective vocabulary entry, and the ASR metadata is included in the respective metadata. In some embodiments, as described with respect to fig. 8B and 10B, dynamic vocabulary entries (such as "voice note 2") may not include predetermined synonyms and may instead be preprocessed to determine (e.g., generate) ASR metadata, such as pronunciation data and/or contextual information. In these implementations, the ASR metadata may be registered in association with the dynamic vocabulary entry to increase the likelihood of identifying the corresponding token in the ASR stage (e.g., block 1112). For example, the ASR metadata may facilitate (e.g., "boost") recognition of "voice note 2" (e.g., as opposed to the homophones "voice note to" or "voice note too"), particularly in combination with, or in proximity to, a contextually related word such as "play" (e.g., as opposed to "call").
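The registration step described above, including the generation of ASR metadata for dynamic entries, can be pictured with the following sketch; the in-memory knowledgeBase dictionary and the naive digit-to-word asrHints helper are illustrative assumptions only, standing in for a full pronunciation-generation pipeline.

```swift
import Foundation

// Hypothetical knowledge-base record associating metadata with a vocabulary entry.
struct RegisteredEntry {
    let identifier: String
    let synonyms: [String]
    let commands: [String]
    let donatingApp: String
    let asrHints: [String]   // alternate spoken forms generated at registration time
}

// Naive ASR-hint generation for a dynamic entry: expand digits into spoken words so that
// "voice note 2" can also be matched against the transcription "voice note two".
// A real system would derive richer pronunciation and context data; this is a toy stand-in.
func asrHints(for term: String) -> [String] {
    let digitWords = ["0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
                      "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"]
    let spoken = term.split(separator: " ").map { digitWords[String($0)] ?? String($0) }
    return [term.lowercased(), spoken.joined(separator: " ").lowercased()]
}

// Register an entry by associating its metadata in a simple in-memory "knowledge base".
var knowledgeBase: [String: RegisteredEntry] = [:]

func register(identifier: String, synonyms: [String], commands: [String], donatingApp: String) {
    let entry = RegisteredEntry(identifier: identifier,
                                synonyms: synonyms,
                                commands: commands,
                                donatingApp: donatingApp,
                                asrHints: asrHints(for: identifier))
    knowledgeBase[identifier.lowercased()] = entry
}

// register(identifier: "voice note 2", synonyms: [], commands: ["play"], donatingApp: "com.example.voicerecorder")
```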
At block 1108, user input is received. In some implementations, the user input is natural language input to the digital assistant, which may be spoken user input or text (e.g., typed) input. Exemplary user inputs may include "create a new voice memo called ' monday note '," play ' voice note 2' "," open ' underground railway ' to my bookmarks ', "open my library", etc. In some implementations, the user input is received while the software application is in an active state, e.g., while the user is viewing the software application on an electronic device implementing the method 1100. In some implementations, user input is received when the software application is not in an active state. For example, as shown in fig. 10A-10B, user input may be received while the software application is not running or in focus (e.g., while the user is viewing a home page on an electronic device implementing method 1100).
At block 1110, a determination is made whether the user input corresponds to a first vocabulary entry for the application vocabulary. For example, the correspondence may be explicit (e.g., user input naming the first vocabulary entry) or implicit (e.g., the first vocabulary entry may be implicitly determined from the context of the user input). In some embodiments, as described with respect to fig. 8A-9B, the first vocabulary entry represents a first class (e.g., a first programming concept) that is processed by the software application. For example, the first vocabulary entry may represent a first object processed by the software application, such as a particular instance of an entity (e.g., record entity "voice note 2", book entity "underground railway") or enumeration (e.g., a "library" tag of the reader application). As another example, the first vocabulary entry may represent a first type handled by the software application, such as a general entity type (e.g., a general record entity type, a general book entity type, etc.). Thus, at block 1110, the digital assistant may determine whether the user input implicitly or explicitly relates to programming concepts that the digital assistant may implement using the content and functionality of the software application.
In some implementations, the first vocabulary entry is associated with at least a first command of the software application. In some implementations, at least the first command can be included as part of the first metadata associated with the first vocabulary entry in the knowledge base (e.g., at block 1106). For example, the first command may be a task or action executable by the software application for, through, or with respect to the first class. The first vocabulary entry of the record entity type may be associated with a command to create an instance of the record entity type by the voice recording application, or a command (such as play, edit, or delete) that may be performed on a particular instance of the record entity type. As another example, a first vocabulary entry of the book instance "underground railway" may be associated with a command to open the book, search the book, place a bookmark in the book, and so forth.
In some implementations, at block 1112, determining whether the user input corresponds to the first vocabulary entry includes parsing the user input to obtain a set of one or more tokens representing the user input. For example, for natural language user input, such as speech or typed text, the digital assistant may initiate Automatic Speech Recognition (ASR) and/or Natural Language Processing (NLP) processing to recognize phonemes, sub-word fragments, words and/or phrases spoken or typed by the user. The identified tokens may then be compared with corresponding metadata of the application vocabulary entries registered with the knowledge base to determine whether one or more tokens representing user input correspond to (e.g., match) any application vocabulary.
In some implementations, at block 1114, determining whether the user input corresponds to a first vocabulary entry includes determining whether at least one token in the set of one or more tokens matches first metadata associated with the first vocabulary entry in the knowledge base. For example, as described with respect to fig. 8A-9B, the application vocabulary may be indexed into one or more searchable databases, which the digital assistant may search using the identified tokens to find matching vocabulary entries. In some embodiments, as discussed above, the first metadata associated with the first vocabulary entry may include any information related to the first vocabulary entry, such as identifiers, synonyms, commands, donating application information, related vocabulary entries, and the like.
For example, user input "open 'underground railway' to my bookmark" may be determined as a vocabulary entry corresponding to the book entity "underground railway" of the reader application based on the token "underground railway" matching the title (e.g., identifier). Additional matches (such as a token "open" match a command to open a book entity "underground railway" and/or a token "bookmark" match an associated vocabulary entry for a book entity type of the reader application) may increase the confidence in determining that the user input corresponds to the first vocabulary entry of the software application. As another example, user input such as "open Colson Whitehead" may be determined to correspond to a vocabulary entry for the book entity "underground railway" based on the token "Colson Whitehead" matching "author" metadata of the book entity. As another example, user input such as "my library" may be determined to explicitly correspond to vocabulary entries for "library" tag enumeration, as well as implicitly correspond to vocabulary entries for the open folder action, as the command to open the folder action is associated with vocabulary entries for "library" tag enumeration.
In some implementations, the first vocabulary entry is a first type of vocabulary entry (e.g., a static vocabulary entry), and determining whether the at least one token matches the first metadata includes determining whether the at least one token exactly matches at least a first portion of the first metadata. For example, as discussed above with respect to fig. 8A and 10A, for static vocabulary, some of the associated metadata (such as synonyms) may be predetermined (e.g., selected by an application developer) and provided by the software application (e.g., as part of the first data file); thus, an exact-match criterion may be used, as the synonyms may reflect expected user behavior. In some embodiments, at least the first portion of the first metadata may include synonyms for the first vocabulary entry. For example, the first vocabulary entry of the record entity type may include the predetermined synonyms "record," "voice memo," and "dictation." Thus, the user input "create a new voice memo called 'monday memo'" may be determined to correspond to the first vocabulary entry because the token "voice memo" exactly matches at least a portion of the first metadata, namely the synonym "voice memo." However, a user input such as "create a note called 'monday note'" may not be determined to correspond to the first vocabulary entry, because the token "note" by itself does not exactly match any synonym of the first vocabulary entry.
In some implementations, the first vocabulary entry is a second type of vocabulary entry (e.g., a dynamic vocabulary entry), and determining whether the at least one token matches the first metadata includes determining whether the at least one token partially matches at least a second portion of the first metadata. For example, as discussed above with respect to fig. 8B and 10B, for dynamic vocabulary, the first vocabulary entry may include proper, automatically generated, or user-generated terms; thus, a partial-match criterion may be used to account for unexpected user behavior or inaccurate ASR parsing. For example, for the first vocabulary entry of the book instance "underground railway," the user input "open 'underground railway'" may be a sufficient match for the identifier "The Underground Railroad." As another example, the user input "play 'voice note 2'" may be parsed into a set of tokens such as "play voice note too," which is not an exact (e.g., character-for-character) match for the identifier "voice note 2" but may be a sufficient match under the partial-match criterion. In some implementations, the second portion of the first metadata can include the ASR metadata determined during registration (e.g., at block 1106), which can reduce uncertainty or inaccuracy in determining the correspondence between the user input and the dynamic vocabulary entry.
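The contrast between the exact-match criterion for static entries and the partial-match criterion for dynamic entries might be sketched as follows; the matcher below is a simplified assumption (substring containment over normalized text) rather than the actual matching logic of any embodiment.

```swift
import Foundation

// Hypothetical matcher contrasting the exact-match criterion used for static entries
// with a looser, normalized partial-match criterion used for dynamic entries.
enum Kind { case staticEntry, dynamicEntry }

struct Entry {
    let identifier: String
    let kind: Kind
    let synonyms: [String]   // predetermined for static entries
    let asrHints: [String]   // generated forms for dynamic entries, e.g., "voice note two"
}

// Lowercase and strip everything except letters and digits, so "voice note 2" and
// "Voice Note 2" normalize to the same string.
func normalize(_ s: String) -> String {
    s.lowercased().filter { $0.isLetter || $0.isNumber }
}

func matches(utterance: String, entry: Entry) -> Bool {
    let text = utterance.lowercased()
    switch entry.kind {
    case .staticEntry:
        // Exact match: the utterance must contain one of the predetermined synonyms verbatim.
        return entry.synonyms.contains { text.contains($0.lowercased()) }
    case .dynamicEntry:
        // Partial match: compare normalized forms so "play voice note two" can match "voice note 2".
        let candidates = [entry.identifier] + entry.asrHints
        return candidates.contains { normalize(text).contains(normalize($0)) }
    }
}

let memoType = Entry(identifier: "recordingEntity", kind: .staticEntry,
                     synonyms: ["recording", "voice memo", "dictation"], asrHints: [])
let note2 = Entry(identifier: "voice note 2", kind: .dynamicEntry,
                  synonyms: [], asrHints: ["voice note two"])

// matches(utterance: "create a new voice memo called monday note", entry: memoType) // true
// matches(utterance: "play voice note two", entry: note2)                           // true
```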
In some implementations, any or all of blocks 1108-1114 are performed (e.g., receiving user input and determining whether the user input corresponds to the first vocabulary entry) when the software application is not in an active state, such as when the software application is not currently running or is currently unfocused (e.g., the software application is running in the background but is not viewed or interacted with by the user). For example, registering the application vocabulary at block 1104 may allow the digital assistant to interpret and implement user input with respect to the software application at any time because the application vocabulary persists in the digital assistant's knowledge base (e.g., until log-off) even when the software application is inactive.
Referring to FIG. 11B, at block 1116, the software application is caused to perform a first action based on the first vocabulary entry. For example, in response to a user input "open 'underground railway' to my bookmark," the digital assistant may cause the reader application to perform an action of opening the book "underground railway," which is the particular book entity represented by the first vocabulary entry corresponding to the user input. As another example, in response to a user input "open my library," the digital assistant may cause the reader application to perform an action of opening to the "library" tab, which is the enumeration represented by the first vocabulary entry corresponding to the user input. In some embodiments, the first action is determined by providing the first vocabulary entry and any associated metadata to a statistical or machine learning model, e.g., allowing the first vocabulary entry to influence the selection of the first action even if a vocabulary entry for the first action is not included in the user input.
In some implementations, causing the software application to perform the first action includes, at block 1118, identifying the first action using second metadata associated with a first vocabulary entry in the knowledge base. In some embodiments, the second metadata may be part of first metadata associated with the first vocabulary entry during registration of the application vocabulary, such as at least a first command associated with the first vocabulary entry in the knowledge base. For example, for a first vocabulary entry of a record entity type, the second metadata may include a command to create a new entity instance, so the digital assistant may identify the first action as creating a new instance of the record entity type. As another example, for a first vocabulary entry for a particular book entity "underground railway," the second metadata may include a command to open the book entity, so the digital assistant may identify the first action as opening the particular book entity "underground railway.
In some embodiments, the first action may be identified using other application vocabulary, such as vocabulary entries for actions determined to be compatible with the first class represented by the first vocabulary entry. For example, the digital assistant may identify the token "create" in the user input "create a voice memo called' monday note" as an action for creating an entity instance compatible with the record entity type represented by the first vocabulary entry, and may thus identify the first action as creating an instance of the record entity type.
In some implementations, causing the software application to perform the first action includes, at block 1120, identifying the software application using third metadata associated with the first vocabulary entry in the knowledge base. In some embodiments, the third metadata may be part of the first metadata associated with the first vocabulary entry during registration of the application vocabulary, such as the identity of the application that donated the first vocabulary entry and/or that handles the first class represented by the first vocabulary entry. For example, for a first vocabulary entry of the record entity type, the third metadata may indicate that the voice recording application can handle the record entity type and/or the command to create a new entity instance, so the digital assistant may recognize that the first action of creating a new instance of the record entity type should be performed by the voice recording application. As another example, for a first vocabulary entry for the particular book entity "underground railway," the third metadata may indicate that the reader application donated the first vocabulary entry, and thus the digital assistant may recognize that the first action of opening the particular book entity "underground railway" should be performed by the reader application.
In some implementations, such as implementations in which some or all of blocks 1108-1114 are performed while the software application is in an inactive state, causing the software application to perform a first action based on the first vocabulary entry at block 1116 includes opening (e.g., launching or focusing) the software application. In some implementations, causing the software application to perform a first action based on the first vocabulary entry at block 1116 includes using the first command to instruct the software application. For example, the first command may correspond to a digital assistant plug-in or an API call that causes the software application to perform a first action. In some implementations, causing the software application to perform the first action based on the first vocabulary entry at block 1116 includes providing the software application with additional information for performing the first action. For example, for user input "create a voice memo called 'monday note', the digital assistant may extract (e.g., using NLP technology) the name" monday note "as a parameter for the first action to create the new recording entity. Thus, using the application vocabulary described above, the digital assistant can use the application content and functionality to coordinate responses to user inputs.
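The dispatch step described above, in which the matched entry's metadata identifies both the action and the target application and any extracted parameters are passed along, might be sketched as follows; the MatchedEntry and AppRequest types and the buildRequest function are hypothetical names introduced for this example.

```swift
import Foundation

// Hypothetical dispatch step: once a matching vocabulary entry is found, its metadata
// identifies both the action (command) and the application that should perform it.
struct MatchedEntry {
    let identifier: String        // e.g., "recordingEntity"
    let commands: [String]        // e.g., ["create", "play", "edit", "delete"]
    let donatingApp: String       // e.g., "com.example.voicerecorder"
}

struct AppRequest {
    let appID: String
    let command: String
    let parameters: [String: String]
}

// Pick the command whose verb appears in the utterance; fall back to a default.
func identifyAction(for utterance: String, entry: MatchedEntry) -> String {
    entry.commands.first { utterance.lowercased().contains($0) } ?? "open"
}

// Build the request the assistant would hand to the app (via a plug-in or API call in a real system).
func buildRequest(utterance: String, entry: MatchedEntry, parameters: [String: String]) -> AppRequest {
    AppRequest(appID: entry.donatingApp,
               command: identifyAction(for: utterance, entry: entry),
               parameters: parameters)
}

let matched = MatchedEntry(identifier: "recordingEntity",
                           commands: ["create", "play", "edit", "delete"],
                           donatingApp: "com.example.voicerecorder")

// For "create a voice memo called 'monday note'", NLP might extract the name as a parameter:
let request = buildRequest(utterance: "create a voice memo called monday note",
                           entry: matched,
                           parameters: ["name": "monday note"])
// The assistant would then launch or focus request.appID (if the app is inactive) and
// deliver request.command together with request.parameters.
```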
The operations described above with reference to fig. 11A to 11B are optionally implemented by the components depicted in fig. 1 to 4, 6A to 6B, 7A to 7C, 8A to 8B, and 10A to 10B. For example, the operations of method 1100 may be implemented in accordance with system 800, which may be implemented on one or more electronic devices, such as a mobile phone. It will be apparent to one of ordinary skill in the art how to implement other processes based on the components depicted in fig. 1-4, 6A-6B, 7A-7C, 8A-8B, and 10A-10B.
According to some implementations, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) is provided that stores one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods or processes described herein.
According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes means for performing any of the methods and processes described herein.
According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes a processing unit configured to perform any of the methods and processes described herein.
According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods and processes described herein.
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications, thereby enabling others skilled in the art to best utilize the techniques and the various embodiments with various modifications as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such variations and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is to collect and use data available from a variety of sources to improve integration of application vocabulary for use by a digital assistant. The present disclosure contemplates that in some examples, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, tweet IDs, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used to populate registered application vocabularies, such as metadata as vocabulary entries. Thus, the use of such personal information data enables the digital assistant to interpret and customize the application response based on user preferences. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, health and fitness data may be used to provide insight into the overall health of a user, or may be used as positive feedback to individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will adhere to established privacy policies and/or privacy practices. In particular, such entities should implement and consistently follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of the data changes. Personal information from users should be collected for legitimate and reasonable uses by the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any steps needed to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different personal data types in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of application vocabulary integrated with a digital assistant, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide mood-associated data for application vocabulary integrated with a digital assistant. In yet another example, users can select to limit the length of time mood-associated data is maintained or entirely prohibit the development of a baseline mood profile. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, the digital assistant may interpret and implement user input using the application vocabulary based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with the user, other non-personal information available to the digital assistant integrating the application vocabulary, or publicly available information.

Claims (62)

1. A method, comprising:
at an electronic device having one or more processors and memory:
obtaining, from a software application, an application vocabulary of the software application, wherein the application vocabulary comprises at least a first type of vocabulary entry and a second type of vocabulary entry;
registering the application vocabulary with a knowledge base of a digital assistant of the electronic device;
receiving user input;
determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and
in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
2. The method of claim 1, wherein the vocabulary entries of the first type are static vocabulary entries and the vocabulary entries of the second type are dynamic vocabulary entries.
3. The method of any of claims 1-2, wherein the first vocabulary entry represents a first object processed by the software application.
4. The method of any of claims 1-2, wherein the first vocabulary entry represents a first type processed by the software application.
5. The method of any of claims 1-2, wherein the first vocabulary entry is associated with a first command of the first action in the knowledge base.
6. The method of any of claims 1-2, wherein determining whether the user input corresponds to the first vocabulary entry comprises:
parsing the user input to obtain a set of one or more tokens representing the user input; and
determining whether at least one token of the set of one or more tokens matches first metadata associated with the first vocabulary entry in the knowledge base.
7. The method according to claim 6, wherein:
the first vocabulary entry is the first type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token exactly matches at least a first portion of the first metadata associated with the first vocabulary entry in the knowledge base.
8. The method of claim 7, wherein the first metadata comprises synonyms for the first vocabulary entry.
9. The method according to claim 6, wherein:
the first vocabulary entry is the second type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token partially matches at least a second portion of the first metadata associated with the first vocabulary entry.
10. The method of claim 9, wherein the first metadata comprises Automatic Speech Recognition (ASR) metadata for the first vocabulary entry, and further comprising:
determining the ASR metadata for the first vocabulary entry.
11. The method of any of claims 1-2, wherein determining whether the user input corresponds to the first vocabulary entry is performed while the software application is not in an active state.
12. The method of any of claims 1-2, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the first action using first metadata associated with the first vocabulary entry in the knowledge base.
13. The method of claim 12, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the software application using second metadata associated with the first vocabulary entry in the knowledge base.
14. The method of any of claims 1-2, wherein obtaining the application vocabulary comprises:
obtaining at least a first portion of the application vocabulary in response to receiving a user input requesting installation of the software application.
15. The method of any of claims 1-2, wherein the software application is installed on the electronic device.
16. The method of any of claims 1-2, wherein obtaining the application vocabulary of the software application comprises:
obtaining at least a second portion of the application vocabulary in response to launching the software application.
17. The method of claim 16, wherein obtaining the application vocabulary of the software application comprises:
receiving a request from the software application; and
in response to receiving the request from the software application, obtaining at least a third portion of the application vocabulary, wherein the third portion of the application vocabulary includes the vocabulary entries of the second type.
18. The method of claim 17, wherein the request from the software application is received as an Application Programming Interface (API) call.
19. The method of claim 17, wherein the request from the software application is received via a daemon.
20. The method of any of claims 1-2, wherein registering the application vocabulary with the knowledge base comprises:
for each respective vocabulary entry included in the application vocabulary, associating respective metadata with the respective vocabulary entry in the knowledge base.
21. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
obtaining, from a software application, an application vocabulary of the software application,
wherein the application vocabulary includes at least a first type of vocabulary entry and a second type of vocabulary entry;
registering the application vocabulary with a knowledge base of a digital assistant of the electronic device;
receiving user input;
determining whether the user input corresponds to a first vocabulary entry for the application vocabulary; and
in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, causing the software application to perform a first action based on the first vocabulary entry.
22. The electronic device of claim 21, wherein the vocabulary entries of the first type are static vocabulary entries and the vocabulary entries of the second type are dynamic vocabulary entries.
23. The electronic device of any of claims 21-22, wherein the first vocabulary entry represents a first object processed by the software application.
24. The electronic device of any of claims 21-22, wherein the first vocabulary entry represents a first type processed by the software application.
25. The electronic device of any of claims 21-22, wherein the first vocabulary entry is associated with a first command of the first action in the knowledge base.
26. The electronic device of any of claims 21-22, wherein determining whether the user input corresponds to the first vocabulary entry comprises:
parsing the user input to obtain a set of one or more tokens representing the user input; and
determining whether at least one token of the set of one or more tokens matches first metadata associated with the first vocabulary entry in the knowledge base.
27. The electronic device of claim 26, wherein:
the first vocabulary entry is the first type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token exactly matches at least a first portion of the first metadata associated with the first vocabulary entry in the knowledge base.
28. The electronic device of claim 27, wherein the first metadata comprises synonyms for the first vocabulary entry.
29. The electronic device of claim 26, wherein:
the first vocabulary entry is the second type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token partially matches at least a second portion of the first metadata associated with the first vocabulary entry.
30. The electronic device of claim 29, wherein the first metadata comprises Automatic Speech Recognition (ASR) metadata of the first vocabulary entry, and further comprising:
determining the ASR metadata for the first vocabulary entry.
31. The electronic device of any of claims 21-22, wherein determining whether the user input corresponds to the first vocabulary entry is performed while the software application is not in an active state.
32. The electronic device of any of claims 21-22, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the first action using first metadata associated with the first vocabulary entry in the knowledge base.
33. The electronic device of claim 32, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the software application using second metadata associated with the first vocabulary entry in the knowledge base.
34. The electronic device of any of claims 21-22, wherein obtaining the application vocabulary comprises:
obtaining at least a first portion of the application vocabulary in response to receiving a user input requesting installation of the software application.
35. The electronic device of any of claims 21-22, wherein the software application is installed on the electronic device.
36. The electronic device of any of claims 21-22, wherein obtaining the application vocabulary of the software application comprises:
obtaining at least a second portion of the application vocabulary in response to launching the software application.
37. The electronic device of claim 36, wherein obtaining the application vocabulary of the software application comprises:
receiving a request from the software application; and
in response to receiving the request from the software application, obtaining at least a third portion of the application vocabulary, wherein the third portion of the application vocabulary includes the vocabulary entries of the second type.
38. The electronic device of claim 37, wherein the request from the software application is received as an Application Programming Interface (API) call.
39. The electronic device of claim 37, wherein the request from the software application is received via a daemon.
40. The electronic device of any of claims 21-22, wherein registering the application vocabulary with the knowledge base comprises:
for each respective vocabulary entry included in the application vocabulary, associating respective metadata with the respective vocabulary entry in the knowledge base.
41. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to:
obtain, from a software application, an application vocabulary of the software application, wherein the application vocabulary comprises at least a first type of vocabulary entry and a second type of vocabulary entry;
register the application vocabulary with a knowledge base of a digital assistant of the first electronic device;
receive user input;
determine whether the user input corresponds to a first vocabulary entry for the application vocabulary; and
in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry, cause the software application to perform a first action based on the first vocabulary entry.
42. The non-transitory computer readable storage medium of claim 41, wherein the vocabulary entries of the first type are static vocabulary entries and the vocabulary entries of the second type are dynamic vocabulary entries.
43. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein the first vocabulary entry represents a first object processed by the software application.
44. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein the first vocabulary entry represents a first type processed by the software application.
45. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein the first vocabulary entry is associated with a first command of the first action in the knowledge base.
46. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein determining whether the user input corresponds to the first vocabulary entry comprises:
parsing the user input to obtain a set of one or more tokens representing the user input; and
determining whether at least one token of the set of one or more tokens matches first metadata associated with the first vocabulary entry in the knowledge base.
47. The non-transitory computer readable storage medium of claim 46, wherein:
the first vocabulary entry is the first type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token exactly matches at least a first portion of the first metadata associated with the first vocabulary entry in the knowledge base.
48. The non-transitory computer-readable storage medium of claim 47, wherein the first metadata comprises synonyms for the first vocabulary entry.
49. The non-transitory computer readable storage medium of claim 46, wherein:
the first vocabulary entry is the second type of vocabulary entry; and
determining whether the at least one token matches the first metadata includes determining whether the at least one token partially matches at least a second portion of the first metadata associated with the first vocabulary entry.
50. The non-transitory computer readable storage medium of claim 49, wherein the first metadata comprises Automatic Speech Recognition (ASR) metadata of the first vocabulary entry, and further comprising:
determining the ASR metadata for the first vocabulary entry.
51. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein determining whether the user input corresponds to the first vocabulary entry is performed when the software application is not in an active state.
52. The non-transitory computer-readable storage medium of any of claims 41-42, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the first action using first metadata associated with the first vocabulary entry in the knowledge base.
53. The non-transitory computer-readable storage medium of claim 52, wherein causing the software application to perform the first action based on the first vocabulary entry comprises:
identifying the software application using second metadata associated with the first vocabulary entry in the knowledge base.
54. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein obtaining the application vocabulary comprises:
obtaining at least a first portion of the application vocabulary in response to receiving a user input requesting installation of the software application.
55. The non-transitory computer readable storage medium of any one of claims 41-42, wherein the software application is installed on the electronic device.
56. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein obtaining the application vocabulary of the software application comprises:
obtaining at least a second portion of the application vocabulary in response to launching the software application.
57. The non-transitory computer readable storage medium of claim 56, wherein obtaining the application vocabulary of the software application comprises:
receiving a request from the software application; and
in response to receiving the request from the software application, obtaining at least a third portion of the application vocabulary, wherein the third portion of the application vocabulary includes the vocabulary entries of the second type.
58. The non-transitory computer readable storage medium of claim 57, wherein the request from the software application is received as an Application Programming Interface (API) call.
59. The non-transitory computer-readable storage medium of claim 57, wherein the request from the software application is received via a daemon.
60. The non-transitory computer-readable storage medium of any one of claims 41-42, wherein registering the application vocabulary with the knowledge base comprises:
for each respective vocabulary entry included in the application vocabulary, associating respective metadata with the respective vocabulary entry in the knowledge base.
61. An electronic device, comprising:
apparatus for performing the method of any one of claims 1 to 2.
62. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component, the one or more programs comprising instructions for performing the method of any of claims 1-2.
CN202310581530.9A 2022-06-03 2023-05-23 Application vocabulary integration through digital assistant Pending CN117170780A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/348,965 2022-06-03
US17/946,977 2022-09-16
US17/946,977 US11978436B2 (en) 2022-09-16 Application vocabulary integration with a digital assistant

Publications (1)

Publication Number Publication Date
CN117170780A true CN117170780A (en) 2023-12-05

Family

ID=88934265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310581530.9A Pending CN117170780A (en) 2022-06-03 2023-05-23 Application vocabulary integration through digital assistant

Country Status (1)

Country Link
CN (1) CN117170780A (en)

Similar Documents

Publication Publication Date Title
CN112567323B (en) User activity shortcut suggestions
CN111901481B (en) Computer-implemented method, electronic device, and storage medium
CN110364148B (en) Natural assistant interaction
US11010561B2 (en) Sentiment prediction from textual data
US10984780B2 (en) Global semantic word embeddings using bi-directional recurrent neural networks
CN108604449B (en) speaker identification
US9865280B2 (en) Structured dictation using intelligent automated assistants
CN110797019B (en) Multi-command single speech input method
CN112567332A (en) Multimodal input of voice commands
CN110692049A (en) Method and system for providing query suggestions
CN116414282A (en) Multi-modal interface
CN110998560A (en) Method and system for customizing suggestions using user-specific information
CN117033578A (en) Active assistance based on inter-device conversational communication
EP4086750A1 (en) Digital assistant handling of personal requests
US20220374109A1 (en) User input interpretation using display representations
US20220374110A1 (en) Contextual action predictions
CN115344119A (en) Digital assistant for health requests
US20230098174A1 (en) Digital assistant for providing handsfree notification management
KR20240027140A (en) Digital assistant interaction in communication sessions
CN110612566B (en) Privacy maintenance of personal information
CN116486799A (en) Generating emoji from user utterances
CN115083414A (en) Multi-state digital assistant for continuous conversation
CN112015873A (en) Speech assistant discoverability through in-device object location and personalization
US11978436B2 (en) Application vocabulary integration with a digital assistant
CN110574023A (en) offline personal assistant

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination