CN110574023A - offline personal assistant - Google Patents
- Publication number
- CN110574023A (application number CN201880028447.6A)
- Authority
- CN
- China
- Prior art keywords
- task
- electronic device
- user
- natural language
- usefulness score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90332—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a system and process for performing tasks using a digital assistant. According to one example, a method includes, at an electronic device with one or more processors: receiving a natural language input; determining a first task and a first usefulness score associated with the first task based on the natural language input; receiving, from another electronic device, a second task and a second usefulness score associated with the second task; determining whether the first usefulness score is higher than the second usefulness score; in accordance with a determination that the first usefulness score is higher than the second usefulness score: performing a first task determined by the electronic device; and providing an output indicating whether the first task has been executed; and in accordance with a determination that the second usefulness score is higher than the first usefulness score: performing a second task received from another electronic device; and providing an output indicating whether the second task has been performed.
Description
Priority requirement
This patent application claims priority from U.S. Provisional Patent Application 62/504,991, entitled "OFFLINE PERSONAL ASSISTANT," filed May 11, 2017, the content of which is hereby incorporated by reference in its entirety for all purposes. This patent application also claims priority from Danish Patent Application PA 201770439, entitled "OFFLINE PERSONAL ASSISTANT," filed in June 2017, the content of which is hereby incorporated by reference in its entirety for all purposes.
Technical Field
The present disclosure relates generally to automated digital assistants, and more particularly, to performing tasks using automated digital assistants.
Background
Automated digital assistants may provide an advantageous interface between a human user and an electronic device. Such digital assistants may allow a user to interact with a device or system using natural language, in spoken and/or textual form. For example, a user may provide a speech input containing a user request to a digital assistant running on an electronic device. The digital assistant can interpret the user's intent from the speech input and operationalize that intent into one or more tasks. The tasks may then be performed by executing one or more services of the electronic device, and a relevant output responsive to the user request may be returned to the user. Typically, however, conventional automated digital assistants for electronic devices must rely on back-end (e.g., server-side) components to operate, often due to the computing limitations of the electronic devices. For example, speech-to-text functions are typically performed and/or verified by a back-end component. As another example, the back-end component is typically responsible for interpreting intent from the speech input and/or operationalizing the intent into tasks.
Disclosure of Invention
Example methods are disclosed herein. One example method includes, at an electronic device with one or more processors: receiving a natural language input; determining a first task and a first usefulness score associated with the first task based on the natural language input; receiving, from another electronic device, a second task and a second usefulness score associated with the second task; determining whether the first usefulness score is higher than the second usefulness score; in accordance with a determination that the first usefulness score is higher than the second usefulness score: performing a first task determined by the electronic device; and providing an output indicating whether the first task has been executed; and in accordance with a determination that the second usefulness score is higher than the first usefulness score: performing a second task received from another electronic device; and providing an output indicating whether the second task has been performed.
Example non-transitory computer-readable media are disclosed herein. An example non-transitory computer readable storage medium stores one or more programs. The one or more programs include instructions that, when executed by the one or more processors of the electronic device, cause the electronic device to receive natural language input; determining a first task and a first usefulness score associated with the first task based on the natural language input; receiving, from another electronic device, a second task and a second usefulness score associated with the second task; determining whether the first usefulness score is higher than the second usefulness score; in accordance with a determination that the first usefulness score is higher than the second usefulness score: performing a first task determined by the electronic device; and providing an output indicating whether the first task has been executed; and in accordance with a determination that the second usefulness score is higher than the first usefulness score: performing a second task received from another electronic device; and providing an output indicating whether the second task has been performed.
Example electronic devices are disclosed herein. An example electronic device includes one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a natural language input; determining a first task and a first usefulness score associated with the first task based on the natural language input; receiving, from another electronic device, a second task and a second usefulness score associated with the second task; determining whether the first usefulness score is higher than the second usefulness score; in accordance with a determination that the first usefulness score is higher than the second usefulness score: performing a first task determined by the electronic device; and providing an output indicating whether the first task has been executed; and in accordance with a determination that the second usefulness score is higher than the first usefulness score: performing a second task received from another electronic device; and providing an output indicating whether the second task has been performed.
An exemplary electronic device comprises means for receiving a natural language input; means for determining a first task and a first usefulness score associated with the first task based on the natural language input; means for receiving a second task and a second usefulness score associated with the second task from another electronic device; means for determining whether the first usefulness score is higher than the second usefulness score; means for, in accordance with a determination that the first usefulness score is higher than the second usefulness score: performing a first task determined by the electronic device; and providing an output indicating whether the first task has been executed; and means for performing the following in accordance with a determination that the second usefulness score is higher than the first usefulness score: performing a second task received from another electronic device; and providing an output indicating whether the second task has been performed.
Determining whether the first usefulness score is higher than the second usefulness score and performing tasks associated with higher usefulness scores enables a digital assistant of the electronic device to more efficiently select and perform tasks determined to best satisfy the user request. Selecting and performing tasks in this manner enhances the operability of the electronic device by allowing a digital assistant of the electronic device to operate more reliably (e.g., by better interpreting and performing tasks in response to user requests), which in turn reduces power usage and improves the battery life of the device by enabling users to use the device more quickly and efficiently.
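For illustration only, the arbitration described above can be sketched in a few lines of Python. This is not the claimed implementation; the ScoredTask structure, task names, and score values are assumptions introduced for the example.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class ScoredTask:
        name: str                    # e.g., a task derived from "set a timer for ten minutes"
        usefulness_score: float      # higher means more likely to satisfy the user request
        perform: Callable[[], bool]  # returns True if the task was performed successfully

    def arbitrate_and_perform(local: ScoredTask, remote: Optional[ScoredTask]) -> str:
        """Perform the task with the higher usefulness score and report whether it was performed."""
        # If no second task arrives from the other device (e.g., no connectivity), fall back to the local one.
        if remote is None or local.usefulness_score >= remote.usefulness_score:
            chosen = local   # first task, determined by the electronic device itself
        else:
            chosen = remote  # second task, received from another electronic device
        performed = chosen.perform()
        return f"{chosen.name}: {'performed' if performed else 'not performed'}"

    # Hypothetical usage: the on-device parse scored 0.8, the remote candidate scored 0.6.
    local_task = ScoredTask("set_timer_10_minutes", 0.8, lambda: True)
    remote_task = ScoredTask("web_search_timer", 0.6, lambda: True)
    print(arbitrate_and_perform(local_task, remote_task))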
Drawings
Fig. 1 is a block diagram illustrating a system and environment for implementing a digital assistant in accordance with various examples.
Fig. 2A is a block diagram illustrating a portable multifunction device implementing a client-side portion of a digital assistant, according to various examples.
Fig. 2B is a block diagram illustrating exemplary components for event processing according to various examples.
Fig. 3 illustrates a portable multifunction device implementing a client-side portion of a digital assistant, in accordance with various examples.
Fig. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with various examples.
Fig. 5A illustrates an exemplary user interface of a menu of applications on a portable multifunction device according to various examples.
Fig. 5B illustrates an exemplary user interface of a multifunction device with a touch-sensitive surface separate from a display, in accordance with various examples.
Fig. 6A illustrates a personal electronic device, according to various examples.
Fig. 6B is a block diagram illustrating a personal electronic device, according to various examples.
Fig. 7A is a block diagram illustrating a digital assistant system or server portion thereof according to various examples.
Fig. 7B illustrates functionality of the digital assistant illustrated in fig. 7A according to various examples.
Fig. 7C illustrates a portion of an ontology according to various examples.
Fig. 8 illustrates a process for performing a task, according to various examples.
Fig. 9 is a flow diagram of a process for performing a task, according to various examples.
Fig. 10 is a flow diagram of a process for performing a task, according to various examples.
Fig. 11A illustrates an exemplary sequence of operations for performing tasks in a privacy-preserving manner, in accordance with various examples.
Fig. 11B illustrates an exemplary sequence of operations for performing tasks in a privacy-preserving manner, in accordance with various examples.
Fig. 12 illustrates a process for performing a task, according to various examples.
FIG. 13 illustrates a process for selectively determining tasks according to various examples.
Detailed Description
In the following description of the examples, reference is made to the accompanying drawings in which are shown, by way of illustration, specific examples that may be implemented. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the various examples.
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first input may be referred to as a second input, and similarly, a second input may be referred to as a first input, without departing from the scope of the various described examples. The first input and the second input are both inputs, and in some cases are separate and distinct inputs.
The terminology used in the description of the various described examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" may be interpreted to mean "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
1. System and environment
Fig. 1 illustrates a block diagram of a system 100 according to various examples. In some examples, system 100 implements a digital assistant. The terms "digital assistant," "virtual assistant," "intelligent automated assistant," or "automatic digital assistant" refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent and performs actions based on the inferred user intent. For example, to act on the inferred user intent, the system performs one or more of the following operations: identifying a task flow with steps and parameters designed to implement the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, or the like; and generating an output response to the user in audible (e.g., speech) and/or visual form.
In particular, the digital assistant is capable of accepting user requests, at least in part, in the form of natural language commands, requests, statements, narratives, and/or inquiries.
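As a rough, non-authoritative sketch of the operations listed above (inferring an intent from natural language input, filling a task flow, executing it, and responding), consider the following toy Python fragment. The rule-based intent matching and the weather response are placeholders invented for illustration; a real digital assistant relies on speech recognition, ontologies, and statistical models, none of which are shown here.

    def infer_intent(text: str) -> dict:
        # Toy intent inference; a real system uses natural language processing models.
        if "weather" in text.lower():
            return {"intent": "get_weather", "params": {"location": "current"}}
        return {"intent": "unknown", "params": {}}

    def execute_task_flow(intent: dict) -> str:
        # Execute the identified task flow by invoking a program, method, service, or API.
        if intent["intent"] == "get_weather":
            return "It is 18 degrees and cloudy."  # stand-in for an actual weather-service call
        return "Sorry, I didn't understand that."

    def respond(text: str) -> str:
        intent = infer_intent(text)        # interpret the natural language input
        return execute_task_flow(intent)   # act on the inferred intent and produce a response

    print(respond("What's the weather like?"))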
As shown in fig. 1, in some examples, the digital assistant is implemented according to a client-server model. The digital assistant includes a client-side portion 102 (hereinafter "DA client 102") executing on a user device 104 and a server-side portion 106 (hereinafter "DA server 106") executing on a server system 108. The DA client 102 communicates with the DA server 106 over one or more networks 110. The DA client 102 provides client-side functionality, such as user-oriented input and output processing, as well as communicating with the DA server 106. DA server 106 provides server-side functionality for any number of DA clients 102, each located on a respective user device 104.
In some examples, DA server 106 includes a client-facing I/O interface 112, one or more processing modules 114, data and models 116, and an I/O interface 118 to external services. The client-facing I/O interface 112 facilitates client-facing input and output processing by the DA Server 106. The one or more processing modules 114 utilize the data and models 116 to process speech input and determine user intent based on natural language input. Further, the one or more processing modules 114 perform task execution based on the inferred user intent. In some examples, DA server 106 communicates with external services 120 over one or more networks 110 to complete tasks or collect information. An I/O interface 118 to external services facilitates such communication.
The user device 104 may be any suitable electronic device. In some examples, the user device is a portable multifunction device (e.g., device 200 described below with reference to fig. 2A), a multifunction device (e.g., device 400 described below with reference to fig. 4), or a personal electronic device (e.g., device 600 described below with reference to figs. 6A-6B). A portable multifunction device is, for example, a mobile phone that also contains other functions, such as PDA and/or music player functions. Specific examples of portable multifunction devices include the iPhone, iPod Touch, and iPad devices from Apple Inc. Other examples of portable multifunction devices include, but are not limited to, earphones/headphones, speakers, and laptop or tablet computers. Further, in some examples, user device 104 is a non-portable multifunction device. In particular, the user device 104 is a desktop computer, a game console, a speaker, a television, or a television set-top box. In some examples, the user device 104 includes a touch-sensitive surface (e.g., a touchscreen display and/or a trackpad). Further, the user device 104 optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick. Various examples of electronic devices, such as multifunction devices, are described in more detail below.
Examples of one or more communication networks 110 include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet. The one or more communication networks 110 are implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), firewire, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi, Voice over Internet protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
The server system 108 is implemented on one or more stand-alone data processing devices or a distributed computer network. In some examples, the server system 108 also employs various virtual devices and/or services of third party service providers (e.g., third party cloud service providers) to provide potential computing resources and/or infrastructure resources of the server system 108.
In some examples, user device 104 communicates with DA server 106 via second user device 122. The second user device 122 is similar to or the same as the user device 104. For example, the second user device 122 is similar to the devices 200, 400, or 600 described below with reference to figs. 2A, 4, and 6A-6B. The user device 104 is configured to communicatively couple to the second user device 122 via a direct communication connection (such as Bluetooth, NFC, BTLE, etc.) or via a wired or wireless network (such as a local Wi-Fi network). In some examples, second user device 122 is configured to act as a proxy between user device 104 and DA server 106. For example, DA client 102 of user device 104 is configured to transmit information (e.g., a user request received at user device 104) to DA server 106 via second user device 122. DA server 106 processes the information and returns relevant data (e.g., data content in response to the user request) to user device 104 via second user device 122.
In some examples, the user device 104 is configured to send an abbreviated request for data to the second user device 122 to reduce the amount of information transmitted from the user device 104. Second user device 122 is configured to determine supplemental information to add to the abbreviated request to generate a complete request to transmit to DA server 106. The system architecture may advantageously allow a user device 104 (e.g., a watch or similar compact electronic device) with limited communication capabilities and/or limited battery power to access services provided by DA server 106 by using a second user device 122 (e.g., a mobile phone, laptop, tablet, etc.) with greater communication capabilities and/or battery power as a proxy to DA server 106. Although only two user devices 104 and 122 are shown in fig. 1, it should be understood that in some examples, system 100 includes any number and type of user devices configured to communicate with DA server system 106 in this proxy configuration.
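A minimal sketch of this proxy arrangement follows, assuming invented field and function names (the actual request format is not described here): the low-power device sends an abbreviated request, the companion device supplements it, and the completed request is forwarded to the DA server.

    def abbreviated_request(utterance_id: str) -> dict:
        # The watch-class device 104 sends only the minimum needed, to save radio use and battery.
        return {"utterance_id": utterance_id}

    def supplement_on_companion(request: dict) -> dict:
        # The second user device 122 adds the context needed to form a complete request.
        request.update({"user_locale": "en_US", "device_location": (37.33, -122.01)})
        return request

    def forward_to_server(request: dict) -> dict:
        # Stand-in for the network call to DA server 106; returns the relevant response data.
        return {"status": "ok", "utterance_id": request["utterance_id"]}

    response = forward_to_server(supplement_on_companion(abbreviated_request("utt-42")))
    print(response)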
While the digital assistant shown in fig. 1 includes both a client-side portion (e.g., DA client 102) and a server-side portion (e.g., DA server 106), in some examples, the functionality of the digital assistant is implemented as a standalone application that is installed on a user device. Moreover, the division of functionality between the client portion and the server portion of the digital assistant may vary in different implementations. For example, in some examples, the DA client is a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the digital assistant to a backend server.
2. Electronic device
Attention is now directed to embodiments of an electronic device for implementing a client-side portion of a digital assistant. FIG. 2A is a block diagram illustrating a portable multifunction device 200 with a touch-sensitive display system 212 in accordance with some embodiments. The touch-sensitive display 212 is sometimes referred to as a "touch screen" for convenience, and is sometimes known as or called a "touch-sensitive display system." Device 200 includes memory 202 (which optionally includes one or more computer-readable storage media), memory controller 222, one or more processing units (CPUs) 220, peripherals interface 218, RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, input/output (I/O) subsystem 206, other input control devices 216, and external ports 224. The device 200 optionally includes one or more optical sensors 264. Device 200 optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on device 200 (e.g., a touch-sensitive surface, such as touch-sensitive display system 212 of device 200). Device 200 optionally includes one or more tactile output generators 267 for generating tactile outputs on device 200 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 212 of device 200 or trackpad 455 of device 400). These components optionally communicate over one or more communication buses or signal lines 203.
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the surrogate measurement of contact force or pressure is converted into an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as a property of the user input, allowing the user to access additional device functionality that is otherwise inaccessible to the user on smaller-sized devices with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls, such as knobs or buttons).
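For example, combining readings from multiple force sensors into an estimated contact force via a weighted average might look like the following one-line sketch (the readings and weights are made up for illustration):

    readings = [0.20, 0.35, 0.25]     # hypothetical per-sensor force readings near the contact
    weights = [0.5, 0.3, 0.2]         # sensors closer to the contact point weighted more heavily
    estimated_force = sum(r * w for r, w in zip(readings, weights)) / sum(weights)
    print(round(estimated_force, 3))  # 0.255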
As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where a device or component of a device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other portion of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of the touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, even when there is no change in the smoothness of the touch sensitive surface, the movement of the touch sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "up click," "down click," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user.
It should be understood that device 200 is merely one example of a portable multifunction device, and that device 200 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 2A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
memory 202 includes one or more computer-readable storage media. These computer-readable storage media are, for example, tangible and non-transitory. The memory 202 comprises high-speed random access memory and also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 222 controls access to memory 202 by other components of device 200.
In some examples, the non-transitory computer-readable storage medium of memory 202 is used to store instructions (e.g., for performing aspects of the processes described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing aspects of the processes described below) are stored on a non-transitory computer-readable storage medium (not shown) of the server system 108 or divided between the non-transitory computer-readable storage medium of the memory 202 and the non-transitory computer-readable storage medium of the server system 108.
Peripherals interface 218 is used to couple the input and output peripherals of the device to CPU 220 and memory 202. The one or more processors 220 run or execute various software programs and/or sets of instructions stored in the memory 202 to perform various functions of the device 200 and to process data. In some embodiments, peripherals interface 218, CPU 220, and memory controller 222 are implemented on a single chip, such as chip 204. In some other embodiments, they are implemented on separate chips.
RF (radio frequency) circuitry 208 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 208 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 208 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 208 optionally communicates via wireless communication with networks, such as the internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks (such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs)), and with other devices. The RF circuitry 208 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol with extensions for Instant Messaging and Presence (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 210, speaker 211, and microphone 213 provide an audio interface between a user and device 200. The audio circuit 210 receives audio data from the peripheral interface 218, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker 211. The speaker 211 converts the electrical signals into sound waves audible to a human. The audio circuit 210 also receives electrical signals converted from sound waves by the microphone 213. The audio circuit 210 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 218 for processing. Audio data is retrieved from and/or transmitted to the memory 202 and/or RF circuitry 208 through the peripherals interface 218. In some embodiments, the audio circuit 210 also includes a headset jack (e.g., 312 in fig. 3). The headset jack provides an interface between the audio circuitry 210 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 206 couples input/output peripheral devices on the device 200, such as the touch screen 212 and other input control devices 216, to a peripheral interface 218. The I/O subsystem 206 optionally includes a display controller 256, an optical sensor controller 258, an intensity sensor controller 259, a haptic feedback controller 261, and one or more input controllers 260 for other input or control devices. One or more input controllers 260 receive/transmit electrical signals from/to other input control devices 216. Other input control devices 216 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, the one or more input controllers 260 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 308 in fig. 3) optionally include an up/down button for volume control of the speaker 211 and/or microphone 213. The one or more buttons optionally include a push button (e.g., 306 in fig. 3).
A quick press of the push button disengages the lock on the touch screen 212 or begins a process of unlocking the device using gestures on the touch screen, as described in U.S. patent application 11/322,549, entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 306) turns power to the device 200 on or off. The user can customize the functionality of one or more of the buttons. The touch screen 212 is used to implement virtual or soft buttons and one or more soft keyboards.
The touch sensitive display 212 provides an input interface and an output interface between the device and the user. The display controller 256 receives electrical signals from the touch screen 212 and/or transmits electrical signals to the touch screen 212. Touch screen 212 displays visual output to a user. Visual output includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object.
Touch screen 212 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile and/or haptic contact. Touch screen 212 and display controller 256 (along with any associated modules and/or sets of instructions in memory 202) detect contact (and any movement or breaking of the contact) on touch screen 212 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 212. In an exemplary embodiment, the point of contact between the touch screen 212 and the user corresponds to a finger of the user.
The touch screen 212 uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 212 and display controller 256 detect contact and any movement or breaking thereof using any of a number of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 212. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone and iPod Touch from Apple Inc.
In some embodiments, the touch-sensitive display of touch screen 212 is similar to the multi-touch sensitive trackpads described in the following U.S. patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or in U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 212 displays visual output from device 200, whereas touch-sensitive trackpads do not provide visual output.
In some embodiments, the touch-sensitive display of touch screen 212 is as described in the following applications: (1) U.S. patent application No. 11/381,313, entitled "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application No. 10/840,862, entitled "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application No. 10/903,964, entitled "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application No. 11/048,264, entitled "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application No. 11/038,590, entitled "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application No. 11/228,758, entitled "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application No. 11/228,700, entitled "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application No. 11/228,737, entitled "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application No. 11/367,749, entitled "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated herein by reference in their entirety.
The touch screen 212 has, for example, a video resolution of over 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user makes contact with the touch screen 212 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 200 includes a touch pad (not shown) for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The trackpad is a touch-sensitive surface separate from the touch screen 212 or an extension of the touch-sensitive surface formed by the touch screen.
The device 200 also includes a power system 262 for powering the various components. Power system 262 includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
the device 200 also includes one or more optical sensors 264. Fig. 2A shows an optical sensor coupled to optical sensor controller 258 in I/O subsystem 206. The optical sensor 264 includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 264 receives light projected through one or more lenses from the environment and converts the light into data representing an image. In conjunction with the imaging module 243 (also called a camera module), the optical sensor 264 captures still images or video. In some embodiments, the optical sensor is located at the rear of the device 200, opposite the touch screen display 212 at the front of the device, such that the touch screen display is used as a viewfinder for still and/or video image acquisition. In some embodiments, the optical sensor is located in the front of the device so that images of the user are acquired for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 264 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 264 is used with a touch screen display for both video conferencing and still image and/or video image capture.
Device 200 optionally further comprises one or more contact intensity sensors 265. FIG. 2A shows a contact intensity sensor coupled to intensity sensor controller 259 in I/O subsystem 206. Contact intensity sensor 265 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 265 receives contact intensity information (e.g., pressure information or a surrogate for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is juxtaposed or adjacent to the touch-sensitive surface (e.g., touch-sensitive display system 212). In some embodiments, at least one contact intensity sensor is located on the back of device 200, opposite touch screen display 212, which is located on the front of device 200.
The device 200 also includes one or more proximity sensors 266. Fig. 2A shows a proximity sensor 266 coupled to the peripherals interface 218. Alternatively, the proximity sensor 266 is coupled to the input controller 260 in the I/O subsystem 206. The proximity sensor 266 performs as described in the following U.S. patent applications: 11/241,839, entitled "Proximity Detector In Handheld Device"; 11/240,788, entitled "Proximity Detector In Handheld Device"; 11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; 11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and 11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables the touch screen 212 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 200 optionally further comprises one or more tactile output generators 267. Fig. 2A shows a tactile output generator coupled to a haptic feedback controller 261 in the I/O subsystem 206. Tactile output generator 267 optionally includes one or more electro-acoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts an electrical signal into a tactile output on the device). Tactile output generator 267 receives tactile feedback generation instructions from haptic feedback module 233 and generates tactile outputs on device 200 that can be sensed by a user of device 200. In some embodiments, at least one tactile output generator is juxtaposed with or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 212), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 200) or laterally (e.g., back and forth in the same plane as the surface of device 200). In some embodiments, at least one tactile output generator sensor is located on the back of device 200, opposite touch screen display 212, which is located on the front of device 200.
The device 200 also includes one or more accelerometers 268. Fig. 2A shows accelerometer 268 coupled to peripherals interface 218. Alternatively, accelerometer 268 is coupled to input controller 260 in I/O subsystem 206. For example, accelerometer 268 performs as described in the following U.S. patent publications: U.S. Patent Publication 20050190059, "Acceleration-based Theft Detection System For Portable Electronic Devices," and U.S. Patent Publication 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are hereby incorporated by reference in their entirety. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from the one or more accelerometers. Device 200 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown), in addition to the one or more accelerometers 268, for obtaining information about the position and orientation (e.g., portrait or landscape) of device 200.
In some embodiments, the software components stored in memory 202 include an operating system 226, a communication module (or set of instructions) 228, a contact/motion module (or set of instructions) 230, a graphics module (or set of instructions) 232, a text input module (or set of instructions) 234, a Global Positioning System (GPS) module (or set of instructions) 235, a digital assistant client module 229, and an application program (or set of instructions) 236. In addition, memory 202 stores data and models, such as user data and models 231. Further, in some embodiments, memory 202 (fig. 2A) or 470 (fig. 4) stores device/global internal state 257, as shown in fig. 2A and 4. Device/global internal state 257 includes one or more of: an active application state indicating which applications (if any) are currently active; a display state indicating what applications, views, or other information occupy various areas of the touch screen display 212; sensor status, including information obtained from the various sensors of the device and the input control device 216; and location information regarding the location and/or pose of the device.
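The device/global internal state enumerated above can be pictured as a simple record; the following sketch uses assumed field names and is not the actual data layout.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DeviceGlobalState:
        active_applications: List[str] = field(default_factory=list)   # which applications are currently active
        display_state: Dict[str, str] = field(default_factory=dict)    # what occupies each area of the display
        sensor_status: Dict[str, float] = field(default_factory=dict)  # latest readings from the device's sensors
        orientation: str = "portrait"                                   # location/pose information

    state = DeviceGlobalState(active_applications=["messages"],
                              display_state={"main": "conversation view"},
                              sensor_status={"proximity": 0.0})
    print(state.orientation)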
The operating system 226 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 228 facilitates communication with other devices via the one or more external ports 224 and also includes various software components for processing data received by the RF circuitry 208 and/or the external ports 224. External port 224 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted to couple directly to other devices or indirectly through a network (e.g., the internet, a wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod (trademark of Apple Inc.) devices.
The contact/motion module 230 optionally detects contact with the touch screen 212 (in conjunction with the display controller 256) and other touch-sensitive devices (e.g., a trackpad or physical click wheel). The contact/motion module 230 includes various software components for performing various operations related to contact detection, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a surrogate for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 230 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., single-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 230 and the display controller 256 detect contact on a trackpad.
In some embodiments, the contact/motion module 230 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 200). For example, the mouse "click" threshold of the trackpad or touchscreen can be set to any one of a wide range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
The contact/motion module 230 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
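By way of a non-limiting illustration, the following Swift sketch shows how a recorded contact pattern might be classified as a tap or a swipe from its sequence of finger-down, finger-drag, and finger-up sub-events. The types, the tap radius, and the duration limit are hypothetical values introduced for illustration only and are not part of the device described above.

```swift
import Foundation

// Hypothetical sub-event record: a finger-down, finger-drag, or finger-up
// sample with a location and a timestamp.
enum SubEventKind { case fingerDown, fingerDrag, fingerUp }

struct SubEvent {
    let kind: SubEventKind
    let x: Double
    let y: Double
    let timestamp: TimeInterval
}

enum Gesture { case tap, swipe, unknown }

/// A tap is a finger-down followed by a finger-up at substantially the same
/// location within a short time; a swipe is a finger-down, one or more
/// finger-drag events, then a finger-up with net movement.
func classify(_ events: [SubEvent],
              tapRadius: Double = 10,
              maxTapDuration: TimeInterval = 0.3) -> Gesture {
    guard let first = events.first, first.kind == .fingerDown,
          let last = events.last, last.kind == .fingerUp else { return .unknown }

    let dragged = events.dropFirst().dropLast().contains { $0.kind == .fingerDrag }
    let distance = ((last.x - first.x) * (last.x - first.x)
                  + (last.y - first.y) * (last.y - first.y)).squareRoot()
    let duration = last.timestamp - first.timestamp

    if !dragged, distance <= tapRadius, duration <= maxTapDuration { return .tap }
    if dragged, distance > tapRadius { return .swipe }
    return .unknown
}
```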
graphics module 232 includes various known software components for rendering and displaying graphics on touch screen 212 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 232 stores data representing the graphics to be used. Each graphic is optionally assigned a corresponding code. The graphics module 232 receives, from an application program or the like, one or more codes specifying the graphics to be displayed, along with coordinate data and other graphic attribute data if necessary, and then generates screen image data to output to the display controller 256.
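As a rough illustration of the code-per-graphic scheme described above, the Swift sketch below registers graphics under assigned codes and combines received codes with coordinate data into a list of draw commands standing in for screen image data. All type and method names are hypothetical, not the actual graphics module interface.

```swift
import Foundation

// Hypothetical stored graphic and draw command; the real graphics module's
// data structures are not specified in this description.
struct Graphic {
    let name: String
    let defaultWidth: Double
    let defaultHeight: Double
}

struct DrawCommand {
    let graphic: Graphic
    let x: Double
    let y: Double
    let opacity: Double
}

final class GraphicsStore {
    private var graphicsByCode: [Int: Graphic] = [:]

    // Each graphic is assigned a corresponding code when it is registered.
    func register(_ graphic: Graphic, code: Int) {
        graphicsByCode[code] = graphic
    }

    // Receive codes plus coordinate data from an application and produce the
    // list of draw commands that stands in for the screen image data.
    func screenImageData(codes: [Int],
                         coordinates: [(x: Double, y: Double)],
                         opacity: Double = 1.0) -> [DrawCommand] {
        var commands: [DrawCommand] = []
        for (code, point) in zip(codes, coordinates) {
            if let graphic = graphicsByCode[code] {
                commands.append(DrawCommand(graphic: graphic, x: point.x,
                                            y: point.y, opacity: opacity))
            }
        }
        return commands
    }
}
```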
Haptic feedback module 233 includes various software components for generating instructions for use by one or more haptic output generators 267 to produce haptic outputs at one or more locations on device 200 in response to user interaction with device 200.
Text input module 234, which in some examples is a component of graphics module 232, provides a soft keyboard for entering text in various applications (e.g., contacts 237, email 240, IM 241, browser 247, and any other application that requires text input).
The GPS module 235 determines the location of the device and provides this information for use in various applications (e.g., to the phone 238 for use in location-based dialing; to the camera 243 as picture/video metadata; and to applications that provide location-based services, such as weather desktop applets, local yellow pages desktop applets, and map/navigation desktop applets).
The digital assistant client module 229 includes various client-side digital assistant instructions to provide the client-side functionality of the digital assistant. For example, the digital assistant client module 229 can accept voice input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of the portable multifunction device 200 (e.g., the microphone 213, the one or more accelerometers 268, the touch-sensitive display system 212, the one or more optical sensors 229, the other input control devices 216, etc.). The digital assistant client module 229 can also provide output in audio form (e.g., speech output), visual form, and/or tactile form through various output interfaces of the portable multifunction device 200 (e.g., the speaker 211, the touch-sensitive display system 212, the one or more tactile output generators 267, etc.). For example, the output may be provided as voice, sound, alerts, text messages, menus, graphics, video, animations, vibrations, and/or a combination of two or more of the foregoing. During operation, digital assistant client module 229 communicates with DA server 106 using RF circuitry 208.
The user data and models 231 include various data associated with the user (e.g., user-specific vocabulary data, user preference data, user-specified name pronunciations, data from the user's electronic address book, to-do lists, shopping lists, etc.) to provide the client-side functionality of the digital assistant. Further, the user data and models 231 include various models (e.g., speech recognition models, statistical language models, natural language processing models, ontologies, task flow models, service models, etc.) for processing user input and determining user intent.
In some examples, the digital assistant client module 229 utilizes various sensors, subsystems, and peripherals of the portable multifunction device 200 to gather additional information from the surroundings of the portable multifunction device 200 to establish a context associated with the user, the current user interaction, and/or the current user input. In some examples, the digital assistant client module 229 provides the context information, or a subset thereof, along with the user input to the DA server 106 to help infer the user intent. In some examples, the digital assistant also uses the context information to determine how to prepare and deliver the output to the user. The context information is referred to as context data.
In some examples, contextual information accompanying the user input includes sensor information, such as lighting, ambient noise, ambient temperature, images or video of the surrounding environment, and the like. In some examples, the context information may also include physical states of the device, such as device orientation, device location, device temperature, power level, velocity, acceleration, motion pattern, cellular signal strength, and the like. In some examples, information related to the software state of the DA server 106, such as the running process of the portable multifunction device 200, installed programs, past and current network activities, background services, error logs, resource usage, etc., is provided to the DA server 106 as contextual information associated with the user input.
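Purely as an illustration of how such contextual information might be packaged alongside a user input, the following Swift sketch assembles sensor, device-state, and software-state fields into a single request payload. The field names and the JSON encoding are assumptions made for this sketch, not the actual schema exchanged between the device and the DA server.

```swift
import Foundation

// Hypothetical context payload accompanying a user request to the assistant
// server; field names are illustrative only.
struct SensorContext: Codable {
    var ambientLightLux: Double? = nil
    var ambientNoiseDb: Double? = nil
    var ambientTemperatureC: Double? = nil
}

struct DeviceStateContext: Codable {
    var orientation: String? = nil      // e.g. "portrait" or "landscape"
    var latitude: Double? = nil
    var longitude: Double? = nil
    var batteryLevel: Double? = nil     // 0.0 ... 1.0
    var cellularSignalStrength: Int? = nil
}

struct SoftwareStateContext: Codable {
    var runningProcesses: [String] = []
    var installedPrograms: [String] = []
    var backgroundServices: [String] = []
}

struct AssistantRequest: Codable {
    let userInput: String
    let sensors: SensorContext
    let deviceState: DeviceStateContext
    let softwareState: SoftwareStateContext
}

// Assemble a request: the user input plus whatever context is available.
func makeRequest(userInput: String,
                 sensors: SensorContext,
                 deviceState: DeviceStateContext,
                 softwareState: SoftwareStateContext) -> Data? {
    let request = AssistantRequest(userInput: userInput,
                                   sensors: sensors,
                                   deviceState: deviceState,
                                   softwareState: softwareState)
    return try? JSONEncoder().encode(request)
}
```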
In some examples, the digital assistant client module 229 selectively provides information (e.g., user data 231) stored on the portable multifunction device 200 in response to a request from the DA server 106. In some examples, the digital assistant client module 229 also elicits additional input from the user via a natural language dialog or other user interface upon request by the DA server 106. The digital assistant client module 229 communicates this additional input to the DA server 106 to assist the DA server 106 in intent inference and/or to satisfy the user intent expressed in the user request.
The digital assistant is described in more detail below with reference to fig. 7A-7C. It should be appreciated that the digital assistant client module 229 may include any number of sub-modules of the digital assistant module 726 described below.
The application programs 236 include the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 237 (sometimes referred to as an address book or contact list);
A phone module 238;
A video conferencing module 239;
An email client module 240;
An Instant Messaging (IM) module 241;
Fitness support module 242;
A camera module 243 for still and/or video images;
An image management module 244;
A video player module;
A music player module;
A browser module 247;
A calendar module 248;
desktop applet modules 249 that, in some examples, include one or more of the following: a weather desktop applet 249-1, a stock desktop applet 249-2, a calculator desktop applet 249-3, an alarm desktop applet 249-4, a dictionary desktop applet 249-5, other desktop applets acquired by a user, and a user-created desktop applet 249-6;
a desktop applet creator module 250 for forming a user-created desktop applet 249-6;
A search module 251;
A video and music player module 252 that incorporates a video player module and a music player module;
A notepad module 253;
A map module 254; and/or
Online video module 255.
Examples of other application programs 236 stored in memory 202 include other word processing application programs, other image editing application programs, drawing application programs, rendering application programs, JAVA-enabled application programs, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, contacts module 237 is used to manage an address book or contact list (e.g., stored in memory 202 or in application internal state 292 of contacts module 237 in memory 470), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, one or more email addresses, one or more physical addresses, or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or email addresses to initiate and/or facilitate communications through the telephone 238, video conferencing module 239, email 240, or IM 241; and so on.
in conjunction with the RF circuitry 208, the audio circuitry 210, the speaker 211, the microphone 213, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the phone module 238 is operable to enter a sequence of characters corresponding to a phone number, access one or more phone numbers in the contacts module 237, modify an already entered phone number, dial a corresponding phone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication uses any of a variety of communication standards, protocols, and technologies.
In conjunction with the RF circuitry 208, the audio circuitry 210, the speaker 211, the microphone 213, the touch screen 212, the display controller 256, the optical sensor 264, the optical sensor controller 258, the contact/motion module 230, the graphics module 232, the text input module 234, the contacts module 237, and the phone module 238, the video conference module 239 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, email client module 240 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 244, the e-mail client module 240 makes it very easy to create and send an e-mail having a still image or a video image photographed by the camera module 243.
in conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the instant message module 241 includes executable instructions for: inputting a sequence of characters corresponding to an instant message, modifying previously input characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for a phone-based instant message or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages include graphics, photos, audio files, video files, and/or other attachments as supported in MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, map module 254, and music player module, fitness support module 242 includes executable instructions for: creating workouts (e.g., with time, distance, and/or calorie-burning goals); communicating with fitness sensors (sports equipment); receiving fitness sensor data; calibrating sensors used to monitor fitness; selecting and playing music for a workout; and displaying, storing, and transmitting fitness data.
In conjunction with the touch screen 212, the display controller 256, the one or more optical sensors 264, the optical sensor controller 258, the contact/motion module 230, the graphics module 232, and the image management module 244, the camera module 243 includes executable instructions for: capturing still images or video (including video streams) and storing them in the memory 202, modifying features of the still images or video, or deleting the still images or video from the memory 202.
In conjunction with the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, and the camera module 243, the image management module 244 includes executable instructions for arranging, modifying (e.g., editing) or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, browser module 247 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, the email client module 240, and the browser module 247, the calendar module 248 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, and the browser module 247, the desktop applet module 249 is a mini-application (e.g., a weather desktop applet 249-1, a stock desktop applet 249-2, a calculator desktop applet 249-3, an alarm desktop applet 249-4, and a dictionary desktop applet 249-5) or a mini-application created by a user (e.g., a user-created desktop applet 249-6) that may be downloaded and used by the user. In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).
in conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, and browser module 247, desktop applet creator module 250 is used by a user to create a desktop applet (e.g., to render a user-specified portion of a web page into a desktop applet).
in conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, search module 251 includes executable instructions for searching memory 202 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuitry 210, speakers 211, RF circuitry 208, and browser module 247, video and music player module 252 includes executable instructions that allow a user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back videos (e.g., on touch screen 212 or on an external display connected via external port 224). In some embodiments, the device 200 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the notepad module 253 includes executable instructions to create and manage notes, to-do lists, and the like according to user instructions.
In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, and browser module 247, map module 254 is used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuit 210, speaker 211, RF circuit 208, text input module 234, email client module 240, and browser module 247, online video module 255 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 224), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to a particular online video is sent using instant messaging module 241 rather than email client module 240. Additional description of online video applications can be found in U.S. provisional patent application 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. patent application 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, which are both hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. For example, a video player module may be combined with a music player module into a single module (e.g., video and music player module 252 in fig. 2A). In some embodiments, memory 202 stores a subset of the modules and data structures described above. Further, memory 202 stores additional modules and data structures not described above.
In some embodiments, device 200 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a trackpad. By using a touch screen and/or touch pad as the primary input control device for operation of device 200, the number of physical input control devices (such as push buttons, dials, etc.) on device 200 is reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the trackpad, when touched by a user, navigates device 200 from any user interface displayed on device 200 to a main, home, or root menu. In such embodiments, a "menu button" is implemented using a touch pad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
fig. 2B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 202 (fig. 2A) or memory 470 (fig. 4) includes event classifier 270 (e.g., in operating system 226) and corresponding application 236-1 (e.g., any of the aforementioned applications 237 through 251, 255, 480 through 490).
The event classifier 270 receives the event information and determines the application 236-1 to which the event information is to be delivered and the application view 291 of the application 236-1. The event classifier 270 includes an event monitor 271 and an event dispatcher module 274. In some embodiments, the application 236-1 includes an application internal state 292 that indicates one or more current application views that are displayed on the touch-sensitive display 212 when the application is active or executing. In some embodiments, device/global internal state 257 is used by event classifier 270 to determine which application(s) are currently active, and application internal state 292 is used by event classifier 270 to determine the application view 291 to which to deliver event information.
In some embodiments, the application internal state 292 includes additional information, such as one or more of the following: resume information to be used when the application 236-1 resumes execution, user interface state information indicating the information being displayed or ready for display by the application 236-1, a state queue for enabling the user to return to a previous state or view of the application 236-1, and a redo/undo queue of previous actions taken by the user.
The event monitor 271 receives event information from the peripheral interface 218. The event information includes information about a sub-event (e.g., a user touch on the touch-sensitive display 212 as part of a multi-touch gesture). Peripherals interface 218 transmits information it receives from I/O subsystem 206 or sensors such as proximity sensor 266, one or more accelerometers 268, and/or microphone 213 (through audio circuitry 210). Information received by peripheral interface 218 from I/O subsystem 206 includes information from touch-sensitive display 212 or a touch-sensitive surface.
In some embodiments, event monitor 271 sends requests to peripheral interface 218 at predetermined intervals. In response, peripheral interface 218 transmits event information. In other embodiments, peripheral interface 218 transmits event information only when there is a significant event (e.g., receiving input above a predetermined noise threshold and/or receiving input for more than a predetermined duration).
in some embodiments, event classifier 270 also includes hit view determination module 272 and/or activity event recognizer determination module 273.
When the touch-sensitive display 212 displays more than one view, the hit view determination module 272 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected corresponds to a programmatic hierarchy of applications or a programmatic level within the view hierarchy. For example, the lowest level view in which a touch is detected is referred to as the hit view, and the set of events considered to be correct inputs is determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
hit view determination module 272 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, hit view determination module 272 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by hit view determination module 272, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
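The hit-view rule described above (the lowest view in the hierarchy that contains the initiating sub-event) can be sketched as a recursive search, as in the Swift example below. The View type is a simplified stand-in introduced for illustration, not the actual application view class.

```swift
import Foundation

// Simplified stand-in for an application view: a rectangle plus subviews,
// ordered front-to-back in `subviews`.
final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    let subviews: [View]

    init(name: String,
         frame: (x: Double, y: Double, width: Double, height: Double),
         subviews: [View] = []) {
        self.name = name
        self.frame = frame
        self.subviews = subviews
    }

    func contains(x: Double, y: Double) -> Bool {
        x >= frame.x && x < frame.x + frame.width &&
        y >= frame.y && y < frame.y + frame.height
    }

    // Returns the lowest (deepest) view in the hierarchy that contains the
    // location of the initiating sub-event, i.e. the hit view.
    func hitView(x: Double, y: Double) -> View? {
        guard contains(x: x, y: y) else { return nil }
        for subview in subviews {
            if let hit = subview.hitView(x: x, y: y) {
                return hit
            }
        }
        return self
    }
}
```

Under this sketch, sub-events of the same touch would then be routed to whatever view `rootView.hitView(x:y:)` returns for the initial finger-down location.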
The active event recognizer determination module 273 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the active event recognizer determination module 273 determines that only the hit view should receive the particular sequence of sub-events. In other embodiments, the active event recognizer determination module 273 determines that all views that include the physical location of the sub-event are actively participating views, and therefore determines that all actively participating views should receive the particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with one particular view, the views higher in the hierarchy still remain actively participating views.
Event dispatcher module 274 dispatches event information to event recognizers (e.g., event recognizer 280). In embodiments that include the activity event recognizer determination module 273, the event dispatcher module 274 delivers the event information to the event recognizer determined by the activity event recognizer determination module 273. In some embodiments, the event dispatcher module 274 stores event information in an event queue, which is retrieved by the respective event receiver 282.
In some embodiments, the operating system 226 includes an event classifier 270. Alternatively, the application 236-1 includes an event classifier 270. In yet another embodiment, the event classifier 270 is a stand-alone module or is part of another module stored in the memory 202 (such as the contact/motion module 230).
In some embodiments, the application 236-1 includes a plurality of event handlers 290 and one or more application views 291, each of which includes instructions for processing touch events that occur within a respective view of the application's user interface. Each application view 291 of the application 236-1 includes one or more event recognizers 280. Typically, a respective application view 291 includes a plurality of event recognizers 280. In other embodiments, one or more of the event recognizers 280 are part of a separate module, such as a user interface toolkit (not shown) or a higher-level object from which the application 236-1 inherits methods and other properties. In some embodiments, the respective event handlers 290 include one or more of: data updater 276, object updater 277, GUI updater 278, and/or event data 279 received from event classifier 270. Event handler 290 updates the application internal state 292 using, or by calling, data updater 276, object updater 277, or GUI updater 278. Alternatively, one or more of the application views 291 include one or more respective event handlers 290. Additionally, in some embodiments, one or more of the data updater 276, the object updater 277, and the GUI updater 278 are included in a respective application view 291.
A respective event recognizer 280 receives event information (e.g., event data 279) from the event classifier 270 and identifies an event from the event information. Event recognizer 280 includes an event receiver 282 and an event comparator 284. In some embodiments, event recognizer 280 also includes metadata 283 and at least a subset of event delivery instructions 288 (which include sub-event delivery instructions).
Event receiver 282 receives event information from the event classifier 270. The event information includes information about a sub-event, such as a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event concerns the motion of a touch, the event information also includes the speed and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 284 compares the event information to predefined event or sub-event definitions and determines an event or sub-event, or determines or updates the state of an event or sub-event, based on the comparison. In some embodiments, event comparator 284 includes an event definition 286. The event definition 286 contains definitions of events (e.g., predefined sub-event sequences), such as event 1(287-1), event 2(287-2), and other events. In some embodiments, sub-events in event (287) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1(287-1) is a double click on the displayed object. For example, a double tap includes a first touch on the displayed object for a predetermined length of time (touch start), a first lift off for a predetermined length of time (touch end), a second touch on the displayed object for a predetermined length of time (touch start), and a second lift off for a predetermined length of time (touch end). In another example, the definition of event 2(287-2) is a drag on the displayed object. For example, the drag includes a predetermined length of time of touch (or contact) on the displayed object, movement of the touch on the touch-sensitive display 212, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 290.
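For illustration only, the following Swift sketch checks a sequence of sub-events against a double-tap-style definition: touch start, lift-off, touch start, lift-off, with each phase occurring within a fixed time of the previous one. The phase names and the 0.3-second limit are assumptions made for this sketch, not the device's actual event definitions.

```swift
import Foundation

// Hypothetical sub-event phases as they might appear in an event definition.
enum TouchPhase { case touchBegan, touchMoved, touchEnded, touchCancelled }

struct TouchSubEvent {
    let phase: TouchPhase
    let timestamp: TimeInterval
}

/// Returns true when the sub-event sequence matches a double-tap-style
/// definition: began, ended, began, ended, each phase within
/// `maxPhaseDuration` of the previous one.
func matchesDoubleTap(_ events: [TouchSubEvent],
                      maxPhaseDuration: TimeInterval = 0.3) -> Bool {
    let expected: [TouchPhase] = [.touchBegan, .touchEnded, .touchBegan, .touchEnded]
    guard events.count == expected.count else { return false }

    for (index, event) in events.enumerated() {
        guard event.phase == expected[index] else { return false }
        if index > 0,
           event.timestamp - events[index - 1].timestamp > maxPhaseDuration {
            return false
        }
    }
    return true
}
```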
In some embodiments, the event definitions 287 include definitions of events for respective user interface objects. In some embodiments, event comparator 284 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on the touch-sensitive display 212, when a touch is detected on the touch-sensitive display 212, the event comparator 284 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 290, the event comparator uses the results of the hit test to determine which event handler 290 should be activated. For example, the event comparator 284 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (287) further comprises a delay action that delays the delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to the event type of the event recognizer.
When the respective event recognizer 280 determines that the sequence of sub-events does not match any event in the event definition 286, the respective event recognizer 280 enters an event impossible, event failed, or event ended state, after which it ignores subsequent sub-events of the touch-based gesture. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 280 includes metadata 283 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 280 activates the event handler 290 associated with the event. In some embodiments, the respective event recognizer 280 delivers event information associated with the event to the event handler 290. Activating an event handler 290 is distinct from sending (and deferring the sending of) sub-events to a respective hit view. In some embodiments, the event recognizer 280 throws a flag associated with the recognized event, and the event handler 290 associated with the flag catches the flag and performs a predefined process.
In some embodiments, the event delivery instructions 288 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, the data updater 276 creates and updates data used in the application 236-1. For example, the data updater 276 updates a phone number used in the contacts module 237 or stores a video file used in the video player module. In some embodiments, the object updater 277 creates and updates objects used in the application 236-1. For example, object updater 277 creates a new user interface object or updates the location of a user interface object. The GUI updater 278 updates the GUI. For example, GUI updater 278 prepares display information and sends the display information to graphics module 232 for display on the touch-sensitive display.
In some embodiments, one or more event handlers 290 include or have access to data updater 276, object updater 277, and GUI updater 278. In some embodiments, the data updater 276, the object updater 277, and the GUI updater 278 are included in a single module of the respective application 236-1 or application view 291. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch-sensitive display also applies to other forms of user input used to operate multifunction device 200 with an input device, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally combined with single or multiple keyboard presses or holds; contact movements on the trackpad, such as taps, drags, scrolls, etc.; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally used as inputs corresponding to the sub-events that define the event to be recognized.
Fig. 3 illustrates a portable multifunction device 200 with a touch screen 212 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 300. In this embodiment, as well as others described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 302 (not drawn to scale in the figure) or one or more styluses 303 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 200. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over the application icon optionally does not select the corresponding application.
The device 200 also includes one or more physical buttons, such as a "home" or menu button 304. As previously described, the menu button 304 is used to navigate to any application 236 in a set of applications executing on the device 200. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 212.
In some embodiments, device 200 includes a touch screen 212, menu buttons 304, a push button 306 for powering the device on/off and for locking the device, one or more volume adjustment buttons 308, a Subscriber Identity Module (SIM) card slot 310, a headset jack 312, and a docking/charging external port 224. Pressing the button 306 optionally serves to turn the device on/off by pressing the button and holding the button in a pressed state for a predefined time interval; locking the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or unlocking the device or initiating an unlocking process. In an alternative embodiment, device 200 also accepts verbal input through microphone 213 for activating or deactivating certain functions. Device 200 also optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on touch screen 212, and/or one or more tactile output generators 267 for generating tactile outputs for a user of device 200.
Fig. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 400 need not be portable. In some embodiments, the device 400 is a laptop computer, desktop computer, tablet computer, multimedia player device, navigation device, educational device (such as a child's learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 400 typically includes one or more processing units (CPUs) 410, one or more network or other communication interfaces 460, memory 470, and one or more communication buses 420 for interconnecting these components. The communication bus 420 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communication between system components. Device 400 includes an input/output (I/O) interface 430 with a display 440, which is typically a touch screen display. The I/O interface 430 also optionally includes a keyboard and/or mouse (or other pointing device) 450 and a trackpad 455, a tactile output generator 457 for generating tactile outputs on the device 400 (e.g., similar to the one or more tactile output generators 267 described above with reference to fig. 2A), and a sensor 459 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor similar to the one or more contact intensity sensors 265 described above with reference to fig. 2A). Memory 470 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 470 optionally includes one or more storage devices located remotely from one or more CPUs 410. In some embodiments, memory 470 stores programs, modules, and data structures similar to, or a subset of, the programs, modules, and data structures stored in memory 202 of portable multifunction device 200 (fig. 2A). In addition, memory 470 optionally stores additional programs, modules, and data structures not present in memory 202 of portable multifunction device 200. For example, memory 470 of device 400 optionally stores drawing module 480, presentation module 482, word processing module 484, website creation module 486, disk editing module 488, and/or spreadsheet module 490, while memory 202 of portable multifunction device 200 (fig. 2A) optionally does not store these modules.
Each of the above-described elements in fig. 4 is stored in one or more of the previously mentioned memory devices in some examples. Each of the above modules corresponds to a set of instructions for performing a function described above. The modules or programs (e.g., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 470 stores a subset of the modules and data structures described above. Further, memory 470 stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces that may be implemented on, for example, portable multifunction device 200.
Fig. 5A illustrates an exemplary user interface of an application menu on a portable multifunction device 200 according to some embodiments. A similar user interface is implemented on the device 400. In some embodiments, the user interface 500 includes the following elements, or a subset or superset thereof:
One or more signal strength indicators 502 for one or more wireless communications (such as cellular signals and Wi-Fi signals);
Time 504;
A bluetooth indicator 505;
A battery status indicator 506;
Tray 508 with icons for commonly used applications, such as:
An icon 516 of the phone module 238 labeled "phone," optionally including an indicator 514 of the number of missed calls or voice messages;
An icon 518 for the e-mail client module 240 labeled "mail", optionally including an indicator 510 of the number of unread e-mails;
An icon 520 labeled "browser" for the browser module 247; and
An icon 522 labeled "iPod" for the video and music player module 252 (also known as iPod (trademark of Apple inc.) module 252); and
icons for other applications, such as:
An icon 524 of the IM module 241 labeled "message";
An icon 526 labeled "calendar" for the calendar module 248;
An icon 528 of the image management module 244 labeled "photo";
An icon 530 labeled "camera" for the camera module 243;
An icon 532 labeled "online video" for online video module 255;
An icon 534 of the stock desktop applet 249-2 labeled "stock market";
An icon 536 of the map module 254 labeled "map";
An icon 538 of the weather desktop applet 249-1 labeled "weather";
An icon 540 labeled "clock" for the alarm desktop applet 249-4;
An icon 542 of the fitness support module 242 labeled "fitness support";
An icon 544 of the notepad module 253 labeled "notepad"; and
An icon 546 labeled "settings" for a settings application or module, which provides access to the settings of the device 200 and its various applications 236.
It should be noted that the icon labels shown in fig. 5A are merely exemplary. For example, icon 522 of video and music player module 252 is optionally labeled "music" or "music player". Other tabs are optionally used for the various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 5B illustrates an exemplary user interface on a device (e.g., device 400 of fig. 4) having a touch-sensitive surface 551 (e.g., tablet or trackpad 455 of fig. 4) separate from a display 550 (e.g., touchscreen display 212). The device 400 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 459) for detecting the intensity of contacts on the touch-sensitive surface 551 and/or one or more tactile output generators 457 for generating tactile outputs for a user of the device 400.
Although some of the examples that follow will be given with reference to input on the touch screen display 212 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 5B. In some implementations, the touch-sensitive surface (e.g., 551 in fig. 5B) has a major axis (e.g., 552 in fig. 5B) that corresponds to a major axis (e.g., 553 in fig. 5B) on the display (e.g., 550). According to these embodiments, the device detects contacts (e.g., 560 and 562 in fig. 5B) with the touch-sensitive surface 551 at locations that correspond to respective locations on the display (e.g., 560 corresponds to 568 and 562 corresponds to 570 in fig. 5B). As such, when the touch-sensitive surface (e.g., 551 in fig. 5B) is separated from the display (e.g., 550 in fig. 5B) of the multifunction device, user inputs (e.g., contacts 560 and 562 and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
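As a simple illustration of this correspondence, the sketch below maps a contact location on a separate touch-sensitive surface to the matching location on the display by scaling along the shared primary axes. It assumes both surfaces are plain rectangles with aligned axes, which is a simplification introduced for this example only.

```swift
import Foundation

struct Size {
    let width: Double
    let height: Double
}

// Map a contact location on a separate touch-sensitive surface (e.g. a
// trackpad) to the corresponding location on the display, assuming the two
// rectangles share aligned primary axes.
func displayLocation(forTouchAt x: Double, _ y: Double,
                     surface: Size, display: Size) -> (x: Double, y: Double) {
    let scaleX = display.width / surface.width
    let scaleY = display.height / surface.height
    return (x * scaleX, y * scaleY)
}

// Example: a contact at (60, 40) on a 120x80 trackpad corresponds to
// (960, 540) on a 1920x1080 display.
let point = displayLocation(forTouchAt: 60, 40,
                            surface: Size(width: 120, height: 80),
                            display: Size(width: 1920, height: 1080))
print(point)
```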
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, a swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is over the location of the tap gesture (e.g., instead of detecting a contact, followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.
fig. 6A illustrates an exemplary personal electronic device 600. The device 600 includes a body 602. In some embodiments, device 600 includes some or all of the features described with respect to devices 200 and 400 (e.g., fig. 2A-4). In some embodiments, device 600 has a touch-sensitive display screen 604, hereinafter referred to as touch screen 604. Instead of or in addition to the touch screen 604, the device 600 has a display and a touch-sensitive surface. As with devices 200 and 400, in some embodiments, touch screen 604 (or touch-sensitive surface) has one or more intensity sensors for detecting the intensity of a contact (e.g., touch) being applied. One or more intensity sensors of touch screen 604 (or touch-sensitive surface) provide output data representing the intensity of a touch. The user interface of device 600 responds to the touch based on the strength of the touch, meaning that different strengths of the touch can invoke different user interface operations on device 600.
Techniques for detecting and processing touch intensity are found, for example, in the related applications: International Patent Application No. PCT/US2013/040061, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed May 8, 2013, and International Patent Application No. PCT/US2013/069483, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," filed November 11, 2013, each of which is hereby incorporated by reference in its entirety.
In some embodiments, device 600 has one or more input mechanisms 606 and 608. Input mechanisms 606 and 608 (if included) are in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 600 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow device 600 to be attached to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watchband, a chain, pants, a belt, a shoe, a purse, a backpack, and the like. These attachment mechanisms allow the user to wear the device 600.
Fig. 6B illustrates an exemplary personal electronic device 600. In some embodiments, the apparatus 600 includes some or all of the components described with respect to fig. 2A, 2B, and 4. The device 600 has a bus 612 that operatively couples an I/O portion 614 with one or more computer processors 616 and a memory 618. I/O portion 614 is connected to display 604, which may have touch sensitive component 622 and optionally also touch intensity sensitive component 624. Further, I/O portion 614 interfaces with communications unit 630 for receiving applications and operating system data using Wi-Fi, bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communications technologies. Device 600 includes input mechanisms 606 and/or 608. For example, input mechanism 606 is a rotatable input device or a depressible and rotatable input device. In some examples, input mechanism 608 is a button.
in some examples, input mechanism 608 is a microphone. The personal electronic device 600 includes, for example, various sensors, such as a GPS sensor 632, an accelerometer 634, an orientation sensor 640 (e.g., a compass), a gyroscope 636, a motion sensor 638, and/or combinations thereof, all of which are operatively connected to the I/O portion 614.
The memory 618 of the personal electronic device 600 is a non-transitory computer-readable storage medium for storing computer-executable instructions that, when executed by the one or more computer processors 616, cause the computer processors to perform the techniques and processes described above, for example. The computer-executable instructions are also stored and/or transmitted, for instance, within any non-transitory computer-readable storage medium, for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. The personal electronic device 600 is not limited to the components and configuration of fig. 6B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object displayed, for example, on a display screen of device 200, 400, and/or 600 (figs. 2A, 4, and 6A-6B). For example, images (e.g., icons), buttons, and text (e.g., hyperlinks) each constitute an affordance.
as used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 455 in fig. 4 or touch-sensitive surface 551 in fig. 5B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 212 in fig. 2A or touch screen 212 in fig. 5A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to communicate the user's intended interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a trackpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting contact, before detecting contact liftoff, before or after detecting contact start movement, before or after detecting contact end, before or after detecting an increase in intensity of contact, and/or before or after detecting a decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensity of the contact, a mean value of the intensity of the contact, an average value of the intensity of the contact, a value at the top 10% of the intensity of the contact, a half-maximum value of the intensity of the contact, a 90% maximum value of the intensity of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds includes a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, the comparison between the feature strengths and the one or more thresholds is used to determine whether to perform the one or more operations (e.g., whether to perform the respective operation or to forgo performing the respective operation), rather than to determine whether to perform the first operation or the second operation.
In some implementations, a portion of a gesture is identified for purposes of determining the characteristic intensity. For example, the touch-sensitive surface receives a continuous swipe contact that transitions from a start location and reaches an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted moving-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining the characteristic intensity.
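The smoothing step can be illustrated with a brief sketch. The following Python fragment is only a minimal illustration, assuming an unweighted moving average over raw intensity samples and taking the maximum smoothed value as the characteristic intensity; the sample values, window size, and the choice of the maximum are assumptions made for illustration and are not specified by this disclosure.

```python
def smooth_intensities(samples, window=3):
    """Unweighted moving-average smoothing over raw intensity samples."""
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return smoothed

def characteristic_intensity(samples):
    """Characteristic intensity taken here as the maximum of the smoothed samples."""
    return max(smooth_intensities(samples))

# A narrow spike in the raw samples is damped before the characteristic
# intensity is computed.
raw = [0.10, 0.12, 0.95, 0.14, 0.13]
print(characteristic_intensity(raw))
```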
The intensity of a contact on the touch-sensitive surface is characterized relative to one or more intensity thresholds, such as a contact detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity that: at which intensity the device will perform the operations typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity that: at which intensity the device will perform a different operation than that typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected whose characteristic intensity is below a light press intensity threshold (e.g., and above a nominal contact detection intensity threshold, a contact below the nominal contact detection intensity threshold is no longer detected), the device will move the focus selector in accordance with movement of the contact on the touch-sensitive surface without performing operations associated with a light press intensity threshold or a deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface drawings.
increasing the contact characteristic intensity from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. Increasing the contact characteristic intensity from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input. Increasing the contact characteristic intensity from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting lift-off of the contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.
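As a rough illustration of how a characteristic intensity may be compared against these thresholds, consider the following sketch. The numeric threshold values and the function name are hypothetical; the disclosure does not prescribe particular values.

```python
CONTACT_DETECTION_THRESHOLD = 0.05   # assumed normalized intensity units
LIGHT_PRESS_THRESHOLD = 0.30
DEEP_PRESS_THRESHOLD = 0.70

def classify_contact(characteristic_intensity: float) -> str:
    """Map a characteristic intensity to the input categories described above."""
    if characteristic_intensity < CONTACT_DETECTION_THRESHOLD:
        return "no contact detected"
    if characteristic_intensity < LIGHT_PRESS_THRESHOLD:
        return "contact (focus selector moves, no press operation)"
    if characteristic_intensity < DEEP_PRESS_THRESHOLD:
        return "light press"
    return "deep press"

print(classify_contact(0.45))  # -> "light press"
```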
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting an increase in intensity of the respective contact above a press input intensity threshold (e.g., a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "up stroke" of the respective press input).
in some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., an increase in intensity of the contact or a decrease in intensity of the contact, depending on the circumstances).
For ease of explanation, optionally, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is triggered in response to detection of any of the following: the intensity of the contact increases above the press input intensity threshold, the intensity of the contact increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreases below the press input intensity threshold, and/or the intensity of the contact decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold.
3. Digital assistant system
Fig. 7A illustrates a block diagram of a digital assistant system 700, according to various examples. In some examples, the digital assistant system 700 is implemented on a standalone computer system. In some examples, the digital assistant system 700 is distributed across multiple computers. In some examples, some of the modules and functionality of the digital assistant are divided into a server portion and a client portion, where the client portion is located on one or more user devices (e.g., device 104, device 122, device 200, device 400, or device 600) and communicates with the server portion (e.g., server system 108) over one or more networks, e.g., as shown in fig. 1. In some examples, digital assistant system 700 is a specific implementation of server system 108 (and/or DA server 106) shown in fig. 1. It should be noted that the digital assistant system 700 is only one example of a digital assistant system, and that the digital assistant system 700 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or layout of components. The various components shown in fig. 7A are implemented in hardware, software instructions for execution by one or more processors, firmware (including one or more signal processing integrated circuits and/or application specific integrated circuits), or a combination thereof.
The digital assistant system 700 comprises a memory 702, an input/output (I/O) interface 706, a network communication interface 708, and one or more processors 704. These components may communicate with each other via one or more communication buses or signal lines 710.
In some examples, the memory 702 includes a non-transitory computer-readable medium, such as high-speed random access memory and/or a non-volatile computer-readable storage medium (e.g., one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices).
In some examples, I/O interface 706 couples input/output devices 716, such as a display, a keyboard, a touch screen, and a microphone, of digital assistant system 700 to user interface module 722. I/O interface 706, in conjunction with user interface module 722, receives user input (e.g., voice input, keyboard input, touch input, etc.) and processes the input accordingly. In some examples, for example, when the digital assistant is implemented on a standalone user device, the digital assistant system 700 includes any of the components and I/O communication interfaces described with respect to the device 200, device 400, or device 600 in fig. 2A, fig. 4, fig. 6A-6B, respectively. In some examples, the digital assistant system 700 represents a server portion of a digital assistant implementation and may interact with a user through a client-side portion located on a user device (e.g., device 104, device 200, device 400, or device 600).
In some examples, the network communication interface 708 includes one or more wired communication ports 712 and/or wireless transmission and reception circuitry 714. The one or more wired communication ports receive and transmit communication signals via one or more wired interfaces, such as ethernet, Universal Serial Bus (USB), firewire, and the like. The wireless circuitry 714 receives and transmits RF and/or optical signals to and from the communication network and other communication devices. The wireless communication uses any of a number of communication standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. Network communication interface 708 enables communication between digital assistant system 700 and other devices via a network, such as the internet, an intranet, and/or a wireless network, such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN).
in some examples, memory 702 or the computer-readable storage medium of memory 702 stores programs, modules, instructions, and data structures, including all or a subset of the following: an operating system 718, a communications module 720, a user interface module 722, one or more application programs 724, and a digital assistant module 726. In particular, memory 702 or the computer-readable storage medium of memory 702 stores instructions for performing the processes described above. The one or more processors 704 execute the programs, modules, and instructions and read data from, or write data to, the data structures.
the operating system 718 (e.g., Darwin, RTXC, LINUX, UNIX, iOS, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware, firmware, and software components.
The communications module 720 facilitates communications between the digital assistant system 700 and other devices via the network communications interface 708. For example, the communication module 720 communicates with the RF circuitry 208 of an electronic device, such as the devices 200, 400, or 600 shown in fig. 2A, 4, 6A-6B, respectively. The communications module 720 also includes various components for processing data received by the wireless circuitry 714 and/or the wired communications port 712.
User interface module 722 receives commands and/or input from a user (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone) via I/O interface 706 and generates user interface objects on the display. User interface module 722 also prepares and delivers output (e.g., voice, sound, animation, text, icons, vibrations, haptic feedback, lighting, etc.) to the user via I/O interface 706 (e.g., through a display, audio channel, speaker, touch pad, etc.).
The application programs 724 include programs and/or modules configured to be executed by the one or more processors 704. For example, if the digital assistant system is implemented on a standalone user device, the applications 724 include user applications such as games, calendar applications, navigation applications, or mail applications. If the digital assistant system 700 is implemented on a server, the application programs 724 include, for example, an asset management application, a diagnostic application, or a scheduling application.
The memory 702 also stores a digital assistant module 726 (or a server portion of a digital assistant). In some examples, digital assistant module 726 includes the following sub-modules, or a subset or superset thereof: an input/output processing module 728, a Speech To Text (STT) processing module 730, a natural language processing module 732, a dialog flow processing module 734, a task flow processing module 736, a service processing module 738, and a speech synthesis processing module 740. Each of these modules has access to one or more of the following systems or data and models, or a subset or superset thereof, of the digital assistant module 726: ontology 760, vocabulary index 744, user data 748, task flow model 754, service model 756, and ASR system 758.
In some examples, using the processing modules, data, and models implemented in the digital assistant module 726, the digital assistant can perform at least some of the following: converting speech input into text; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining a task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent.
In some examples, as shown in fig. 7B, I/O processing module 728 may interact with a user via I/O device 716 in fig. 7A or interact with a user device (e.g., device 104, device 200, device 400, or device 600) via network communication interface 708 in fig. 7A to obtain user input (e.g., voice input) and provide a response to the user input (e.g., as voice output). The I/O processing module 728 optionally obtains contextual information associated with the user input from the user device along with or shortly after receiving the user input. The contextual information includes user-specific data, vocabulary, and/or preferences related to user input. In some examples, the context information also includes software and hardware states of the user device at the time the user request is received, and/or information relating to the user's surroundings at the time the user request is received. In some examples, the I/O processing module 728 also sends follow-up questions to the user regarding the user request and receives answers from the user. When a user request is received by the I/O processing module 728 and the user request includes speech input, the I/O processing module 728 forwards the speech input to the STT processing module 730 (or speech recognizer) for speech-to-text conversion.
STT processing module 730 includes one or more ASR systems 758. The one or more ASR systems 758 can process the speech input received through I/O processing module 728 to produce a recognition result. Each ASR system 758 includes a front-end speech preprocessor. The front-end speech preprocessor extracts representative features from the speech input. For example, the front-end speech preprocessor performs a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system 758 includes one or more speech recognition models (e.g., acoustic models and/or language models) and implements one or more speech recognition engines. Examples of speech recognition models include hidden Markov models, Gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of speech recognition engines include dynamic time warping based engines and weighted finite-state transducer (WFST) based engines. The representative features extracted by the front-end speech preprocessor are processed using the one or more speech recognition models and the one or more speech recognition engines to produce intermediate recognition results (e.g., phonemes, phoneme strings, and sub-words), and ultimately text recognition results (e.g., words, word strings, or sequences of tokens). In some examples, the speech input is processed at least partially by a third-party service or on the user's device (e.g., device 104, device 200, device 400, or device 600) to produce the recognition result. Once STT processing module 730 produces a recognition result containing a text string (e.g., a word, a sequence of words, or a sequence of tokens), the recognition result is passed to natural language processing module 732 for intent inference. In some examples, STT processing module 730 produces multiple candidate text representations of the speech input. Each candidate text representation is a sequence of words or tokens corresponding to the speech input. In some examples, each candidate text representation is associated with a speech recognition confidence score. Based on the speech recognition confidence scores, STT processing module 730 ranks the candidate text representations and provides the n best (e.g., n highest ranked) candidate text representations to natural language processing module 732 for intent inference, where n is a predetermined integer greater than zero. For example, in one example, only the highest ranked (n = 1) candidate text representation is passed to natural language processing module 732 for intent inference. As another example, the five highest ranked (n = 5) candidate text representations are passed to natural language processing module 732 for intent inference.
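The n-best selection described above can be sketched as follows. The candidate strings, confidence values, and class name below are hypothetical; the fragment only illustrates ranking candidate text representations by speech recognition confidence score and forwarding the n highest ranked.

```python
from dataclasses import dataclass

@dataclass
class CandidateText:
    text: str
    confidence: float  # speech recognition confidence score

def n_best(candidates, n=5):
    """Rank candidate text representations and keep the n highest ranked."""
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)[:n]

candidates = [
    CandidateText("call Ross", 0.92),
    CandidateText("call Rose", 0.81),
    CandidateText("all Ross", 0.40),
]
for c in n_best(candidates, n=2):
    print(c.text, c.confidence)   # forwarded for intent inference
```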
More details regarding speech-to-text processing are described in U.S. utility patent application serial No. 13/236,942, entitled "Consolidating Speech Recognition Results," filed September 20, 2011, the entire disclosure of which is incorporated herein by reference.
In some examples, STT processing module 730 includes and/or accesses a vocabulary of recognizable words, e.g., via phonetic alphabet conversion module 731. Each vocabulary word is associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words includes words associated with a plurality of candidate pronunciations. For example, the vocabulary includes the word "tomato" associated with the candidate pronunciations /tə'meɪɾoʊ/ and /tə'mɑtoʊ/. In addition, vocabulary words are associated with custom candidate pronunciations based on previous speech inputs from the user. Such custom candidate pronunciations are stored in STT processing module 730 and associated with a particular user via the user's profile on the device. In some examples, the candidate pronunciations of a word are determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations are generated manually, e.g., based on known canonical pronunciations.
In some examples, the candidate pronunciations are ranked based on their prevalence. For example, the candidate pronunciation /tə'meɪɾoʊ/ is ranked higher than /tə'mɑtoʊ/ because the former is a more commonly used pronunciation (e.g., among all users, for users in a particular geographic region, or for any other appropriate subset of users). In some examples, candidate pronunciations are ranked based on whether a candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations are ranked higher than canonical candidate pronunciations. This can be useful for recognizing proper nouns having a unique pronunciation that deviates from the canonical pronunciation. In some examples, candidate pronunciations are associated with one or more speech characteristics, such as geographic origin, nationality, or ethnicity. For example, the candidate pronunciation /tə'meɪɾoʊ/ is associated with the United States, whereas the candidate pronunciation /tə'mɑtoʊ/ is associated with Great Britain. Further, the rank of a candidate pronunciation is based on one or more characteristics of the user (e.g., geographic origin, nationality, ethnicity, etc.) stored in the user's profile on the device. For example, it can be determined from the user's profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation /tə'meɪɾoʊ/ (associated with the United States) is ranked higher than the candidate pronunciation /tə'mɑtoʊ/ (associated with Great Britain). In some examples, one of the ranked candidate pronunciations is selected as a predicted pronunciation (e.g., the most likely pronunciation).
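A minimal sketch of this ranking is shown below, assuming each candidate pronunciation carries a prevalence value, an optional custom flag, and an associated region that can be matched against the user profile; all field names and values are illustrative.

```python
def rank_pronunciations(pronunciations, user_profile):
    """Order candidate pronunciations: custom pronunciations first, then those
    whose region matches the user's profile, then by prevalence."""
    def key(p):
        region_match = 1 if p.get("region") == user_profile.get("region") else 0
        return (p.get("custom", False), region_match, p.get("prevalence", 0.0))
    return sorted(pronunciations, key=key, reverse=True)

candidates = [
    {"phonemes": "t ah m ey t ow", "prevalence": 0.7, "region": "US"},
    {"phonemes": "t ah m aa t ow", "prevalence": 0.3, "region": "GB"},
]
best = rank_pronunciations(candidates, {"region": "US"})[0]
print(best["phonemes"])  # selected as the predicted pronunciation
```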
Upon receiving a speech input, STT processing module 730 is used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model), and then attempts to determine words that match the phonemes (e.g., using a language model). For example, if STT processing module 730 first identifies the sequence of phonemes /tə'meɪɾoʊ/ corresponding to a portion of the speech input, it can then determine, based on vocabulary index 744, that this sequence corresponds to the word "tomato".
In some examples, STT processing module 730 uses approximate matching techniques to determine words in an utterance. Thus, for example, STT processing module 730 determines that the sequence of phonemes /tə'meɪɾoʊ/ corresponds to the word "tomato", even if that particular sequence of phonemes is not one of the candidate sequences of phonemes for that word.
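One way to realize such approximate matching is sketched below using a generic string-similarity measure; the phoneme notation, the vocabulary entries, and the use of difflib are assumptions made for illustration and are not the specific technique employed by STT processing module 730.

```python
import difflib

VOCABULARY_INDEX = {
    "tomato": ["t ah m ey t ow", "t ah m aa t ow"],
    "potato": ["p ah t ey t ow"],
}

def match_word(phoneme_sequence: str) -> str:
    """Find the vocabulary word whose candidate phoneme sequences are closest
    to the recognized sequence, even when there is no exact match."""
    best_word, best_ratio = None, 0.0
    for word, candidates in VOCABULARY_INDEX.items():
        for cand in candidates:
            ratio = difflib.SequenceMatcher(None, phoneme_sequence, cand).ratio()
            if ratio > best_ratio:
                best_word, best_ratio = word, ratio
    return best_word

print(match_word("t ah m ey d ow"))  # -> "tomato" despite the mismatch
```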
The natural language processing module 732 ("natural language processor") of the digital assistant takes the n best candidate text representations ("word sequences" or "token sequences") generated by STT processing module 730 and attempts to associate each candidate text representation with one or more "actionable intents" recognized by the digital assistant. An "actionable intent" (or "user intent") represents a task that can be performed by the digital assistant and that has an associated task flow implemented in the task flow models 754. The associated task flow is a series of programmed actions and steps that the digital assistant takes in order to perform the task. The scope of the digital assistant's capabilities depends on the number and variety of task flows that have been implemented and stored in the task flow models 754, or, in other words, on the number and variety of "actionable intents" that the digital assistant recognizes. However, the effectiveness of the digital assistant also depends on the assistant's ability to infer the correct "actionable intent(s)" from a user request expressed in natural language.
In some examples, natural language processing module 732 receives context information associated with the user request, for example, from I/O processing module 728, in addition to the sequence of words or tokens obtained from STT processing module 730. The natural language processing module 732 optionally uses the context information to clarify, supplement, and/or further define information contained in the candidate text representation received from the STT processing module 730. Contextual information includes, for example, user preferences, hardware and/or software states of the user device, sensor information collected before, during, or shortly after a user request, previous interactions (e.g., conversations) between the digital assistant and the user, and so forth. As described herein, in some examples, the contextual information is dynamic and varies with time, location, content of the conversation, and other factors.
In some examples, the natural language processing is based on, for example, ontology 760. Ontology 760 is a hierarchical structure that contains many nodes, each node representing an "actionable intent" or "attribute" related to one or more of the "actionable intents" or other "attributes". As described above, an "actionable intent" refers to a task that a digital assistant is capable of performing, i.e., that task is "actionable" or can be performed. "Properties" represent parameters associated with a sub-aspect of an actionable intent or another property. The links between the actionable intent nodes and the property nodes in ontology 760 define how the parameters represented by the property nodes pertain to the tasks represented by the actionable intent nodes.
In some examples, ontology 760 consists of actionable intent nodes and property nodes. Within ontology 760, each actionable intent node is linked directly to one or more property nodes or through one or more intermediate property nodes. Similarly, each property node is linked directly to one or more actionable intent nodes or through one or more intermediate property nodes. For example, as shown in FIG. 7C, ontology 760 includes a "restaurant reservation" node (i.e., an actionable intent node). The property nodes "restaurant," "date/time" (for reservation), and "party size" are all directly linked to the actionable intent node (i.e., "restaurant reservation" node).
Further, the attribute nodes "cuisine", "price interval", "phone number", and "location" are child nodes of the attribute node "restaurant", and are each linked to the "restaurant reservation" node (i.e., actionable intent node) through the intermediate attribute node "restaurant". As another example, as shown in FIG. 7C, ontology 760 also includes a "set reminder" node (i.e., another actionable intent node). The property node "date/time" (for set reminders) and "topic" (for reminders) are both linked to the "set reminders" node. Since the attribute "date/time" is related to both the task of making restaurant reservations and the task of setting reminders, the attribute node "date/time" is linked to both the "restaurant reservation" node and the "set reminders" node in ontology 760.
The actionable intent node, along with its linked property nodes, is described as a "domain". In the present discussion, each domain is associated with a respective executable intent and refers to a set of nodes (and relationships between those nodes) associated with a particular executable intent. For example, ontology 760 shown in FIG. 7C includes an example of a restaurant reservation field 762 and an example of a reminder field 764 within ontology 760. The restaurant reservation domain includes the actionable intent node "restaurant reservation," the attribute nodes "restaurant," date/time, "and" party size, "and the child attribute nodes" cuisine, "" price range, "" phone number, "and" location. The reminder field 764 includes the actionable intent node "set reminder" and property nodes "subject" and "date/time". In some examples, ontology 760 is comprised of multiple domains. Each domain shares one or more attribute nodes with one or more other domains. For example, in addition to the restaurant reservation field 762 and reminder field 764, the "date/time" property node is associated with a number of different fields (e.g., a scheduling field, a travel reservation field, a movie tickets field, etc.).
Although fig. 7C shows two exemplary domains within ontology 760, other domains include, for example, "find movie", "initiate phone call", "find route", "arrange meeting", "send message", and "provide answer to question", "read list", "provide navigation instructions", "provide instructions for task", etc. The "send message" field is associated with a "send message" actionable intent node and further includes attribute nodes such as "one or more recipients", "message type", and "message body". The attribute node "recipient" is further defined, for example, by child attribute nodes such as "recipient name" and "message address".
In some examples, ontology 760 includes all domains (and thus actionable intents) that a digital assistant is able to understand and act upon. In some examples, ontology 760 is modified, such as by adding or removing entire domains or nodes, or by modifying relationships between nodes within ontology 760.
In some examples, nodes associated with multiple related executables are clustered under a "super domain" in ontology 760. For example, a "travel" super-domain includes a cluster of attribute nodes and actionable intent nodes related to travel. Executable intent nodes related to travel include "airline reservation," "hotel reservation," "car rental," "get directions," "find points of interest," and the like. Actionable intent nodes under the same super-domain (e.g., a "travel" super-domain) have multiple attribute nodes in common. For example, executable intent nodes for "airline reservation," hotel reservation, "" car rental, "" directions to acquire, "and" find points of interest "share one or more of the attribute nodes" starting location, "" destination, "" departure date/time, "" arrival date/time, "and" party size.
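The node-and-link structure described above can be sketched as a small data structure. The classes and particular property names below are illustrative; they merely show actionable intent nodes linked to property nodes, with one property node ("date/time") shared across domains.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "intent" (actionable intent) or "property"
    links: set = field(default_factory=set)

def link(a: Node, b: Node):
    a.links.add(b.name)
    b.links.add(a.name)

# Restaurant reservation domain: an actionable intent node linked to properties.
reserve = Node("restaurant reservation", "intent")
restaurant = Node("restaurant", "property")
date_time = Node("date/time", "property")
party_size = Node("party size", "property")
for prop in (restaurant, date_time, party_size):
    link(reserve, prop)

# The same "date/time" property node is shared with the reminder domain.
set_reminder = Node("set reminder", "intent")
link(set_reminder, date_time)
link(set_reminder, Node("subject", "property"))
```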
In some examples, each node in ontology 760 is associated with a set of words and/or phrases that are related to the property or executable intent represented by the node. The respective set of words and/or phrases associated with each node is a so-called "vocabulary" associated with the node. The respective set of words and/or phrases associated with each node is stored in the lexical index 744 associated with the property or actionable intent represented by the node. For example, returning to fig. 7B, the vocabulary associated with the node of the "restaurant" attribute includes words such as "food," "drinks," "cuisine," "hunger," "eating," "pizza," "fast food," "meal," and so forth. As another example, the words associated with the node of the actionable intent of "initiate a phone call" include words and phrases such as "call," "make a call," "dial," "make a call with … …," "call the number," "call to," and so forth. The vocabulary index 744 optionally includes words and phrases in different languages.
The natural language processing module 732 receives candidate text representations (e.g., one or more text strings or one or more token sequences) from the STT processing module 730 and, for each candidate representation, determines which nodes the words in the candidate text representation relate to. In some examples, a word or phrase in a candidate text representation is found to be associated (via lexical index 744) with one or more nodes in ontology 760, and then "triggers" or "activates" those nodes. Based on the number and/or relative importance of the activated nodes, the natural language processing module 732 selects one of the actionable intents as the task that the user intends for the digital assistant to perform. In some examples, the domain with the most "triggered" nodes is selected. In some examples, the domain with the highest confidence (e.g., based on the relative importance of its respective triggered node) is selected. In some examples, the domain is selected based on a combination of the number and importance of triggered nodes. In some examples, additional factors are also considered in selecting a node, such as whether the digital assistant has previously correctly interpreted a similar request from the user.
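A toy sketch of this trigger-and-select step is shown below, using only the count of triggered nodes per domain (the relative importance of nodes is ignored); the vocabulary entries and domain definitions are hypothetical.

```python
VOCAB = {
    "restaurant": {"food", "cuisine", "hungry", "pizza", "meal"},
    "initiate phone call": {"call", "dial", "phone"},
    "date/time": {"tonight", "tomorrow", "7pm"},
}

DOMAINS = {
    "restaurant reservation": {"restaurant", "date/time", "party size"},
    "initiate phone call": {"initiate phone call", "recipient"},
}

def select_domain(tokens):
    """Activate nodes whose vocabulary matches the input, then pick the domain
    with the most triggered nodes."""
    triggered = {node for node, words in VOCAB.items() if words & set(tokens)}
    return max(DOMAINS, key=lambda d: len(DOMAINS[d] & triggered))

print(select_domain("i am hungry for pizza tonight".split()))
# -> "restaurant reservation"
```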
the user data 748 includes user-specific information such as user-specific vocabulary, user preferences, user addresses, a user's default second language, a user's contact list, and other short-term or long-term information for each user. In some examples, natural language processing module 732 uses user-specific information to supplement information contained in the user input to further define the user intent. For example, for a user request "invite my friend to my birthday party," natural language processing module 732 can access user data 748 to determine which people "friends" are and where and when the "birthday party" will be held without the user explicitly providing such information in their request.
It is to be appreciated that in some examples, natural language processing module 732 is implemented with one or more machine learning mechanisms (e.g., neural networks). In particular, one or more machine learning mechanisms are configured to receive candidate text representations and contextual information associated with the candidate text representations. Based on the candidate text representations and the associated context information, one or more machine learning mechanisms are configured to determine an intent confidence score based on a set of candidate actionable intents. The natural language processing module 732 may select one or more candidate actionable intents from a set of candidate actionable intents based on the determined intent confidence scores. In some examples, an ontology (e.g., ontology 760) is also utilized to select one or more candidate actionable intents from a set of candidate actionable intents.
Additional details of searching an ontology based on a token string are described in U.S. utility patent application serial No. 12/341,743, entitled "Method and Apparatus for Searching Using An Active Ontology," filed December 22, 2008, the entire disclosure of which is incorporated herein by reference.
In some examples, once natural language processing module 732 identifies an actionable intent (or domain) based on the user request, natural language processing module 732 generates a structured query to represent the identified actionable intent. In some examples, the structured query includes parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user says "Help me reserve a seat at a sushi shop at 7pm." In this case, natural language processing module 732 can correctly identify the actionable intent to be "restaurant reservation" based on the user input. According to the ontology, a structured query for the "restaurant reservation" domain includes parameters such as {cuisine}, {time}, {date}, {party size}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module 730, natural language processing module 732 generates a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {cuisine = "sushi"} and {time = "7pm"}. However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters such as {party size} and {date} are not specified in the structured query based on the information currently available. In some examples, natural language processing module 732 populates some parameters of the structured query with received contextual information. For example, in some examples, if the user requested a sushi shop "nearby," natural language processing module 732 populates a {location} parameter in the structured query with GPS coordinates from the user device.
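The partially populated structured query can be sketched as follows; the parameter names, the context key, and the GPS values are illustrative assumptions.

```python
def build_structured_query(intent, natural_language_slots, context):
    """Start from the domain's parameter set, fill what the utterance specified,
    and optionally fill remaining parameters from device context."""
    query = {"intent": intent, "cuisine": None, "time": None,
             "date": None, "party size": None, "location": None}
    query.update(natural_language_slots)
    if query["location"] is None and "gps" in context:
        query["location"] = context["gps"]
    return query

partial = build_structured_query(
    "restaurant reservation",
    {"cuisine": "sushi", "time": "7pm"},    # from "...sushi shop at 7pm"
    {"gps": (37.33, -122.01)},              # hypothetical device context
)
print(partial)  # party size and date remain unfilled pending a follow-up dialog
```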
In some examples, natural language processing module 732 identifies multiple candidate actionable intents for each candidate text representation received from STT processing module 730. Further, in some examples, a respective structured query is generated (partially or wholly) for each identified candidate actionable intent. Natural language processing module 732 determines an intent confidence score for each candidate actionable intent and ranks the candidate actionable intents based on the intent confidence scores. In some examples, natural language processing module 732 passes the generated structured query (or queries), including any completed parameters, to task flow processing module 736 ("task flow processor"). In some examples, the structured query (or queries) for the m best (e.g., m highest ranked) candidate actionable intents are provided to task flow processing module 736, where m is a predetermined integer greater than zero. In some examples, the structured query (or queries) for the m best candidate actionable intents are provided to task flow processing module 736 along with the corresponding candidate text representation(s).
Additional details of inferring user intent based on multiple candidate actionable intents determined from multiple candidate text representations of a speech input are described in U.S. utility patent application serial No. 14/298,725, entitled "System and Method for Inferring User Intent From Speech Inputs," filed June 6, 2014, the entire disclosure of which is incorporated herein by reference.
Task flow processing module 736 is configured to receive the structured query (or queries) from natural language processing module 732, complete the structured query (if necessary), and perform the actions required to "complete" the user's ultimate request. In some examples, the various procedures necessary to complete these tasks are provided in task flow models 754. In some examples, task flow models 754 include procedures for obtaining additional information from the user, as well as task flows for performing actions associated with the actionable intent.
As described above, in order to complete a structured query, task flow processing module 736 may need to initiate additional dialog with the user in order to obtain additional information and/or disambiguate potentially ambiguous utterances. When such interaction is necessary, task flow processing module 736 invokes dialog flow processing module 734 to engage in a dialog with the user. In some examples, dialog flow processing module 734 determines how (and/or when) to ask the user for the additional information and receives and processes the user's responses. The questions are provided to the user, and answers are received from the user, through I/O processing module 728.
Once task flow processing module 736 has completed the structured query for an actionable intent, task flow processing module 736 proceeds to perform the ultimate task associated with the actionable intent. Accordingly, task flow processing module 736 executes the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of "restaurant reservation" includes steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as {restaurant reservation, restaurant = ABC Cafe, date = 3/12/2012, time = 7pm, party size = 5}, task flow processing module 736 performs the following steps: (1) logging onto a server of the ABC Cafe or an online restaurant reservation service, (2) entering the date, time, and party size information in a form on the website, (3) submitting the form, and (4) creating a calendar entry for the reservation in the user's calendar.
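A schematic sketch of executing such a task flow is shown below; the step ordering mirrors the example above, but the function and parameter names are illustrative and do not correspond to an actual reservation service.

```python
def restaurant_reservation_flow(query):
    """Ordered steps for the 'restaurant reservation' actionable intent."""
    steps = [
        lambda q: print(f"1. connect to reservation service for {q['restaurant']}"),
        lambda q: print(f"2. submit form: {q['date']} {q['time']}, "
                        f"party of {q['party size']}"),
        lambda q: print("3. confirm submission"),
        lambda q: print(f"4. add calendar entry for {q['date']} {q['time']}"),
    ]
    for step in steps:
        step(query)

restaurant_reservation_flow({
    "restaurant": "ABC Cafe", "date": "3/12/2012",
    "time": "7pm", "party size": 5,
})
```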
In some examples, task flow processing module 736 completes the task requested in the user input, or provides the informational answer requested in the user input, with the assistance of service processing module 738. For example, service processing module 738 acts on behalf of task flow processing module 736 to initiate phone calls, set calendar entries, invoke map searches, invoke or interact with other user applications installed on the user device, and invoke or interact with third-party services (e.g., restaurant reservation portals, social networking websites, banking portals, etc.). In some examples, the protocols and application programming interfaces (APIs) required by each service are specified by a respective service model among service models 756. Service processing module 738 accesses the appropriate service model for a service and generates requests for the service in accordance with the protocols and APIs required by the service according to the service model.
for example, if a restaurant has enabled an online reservation service, the restaurant submits a service model that specifies the necessary parameters to make the reservation and an API to communicate the values of the necessary parameters to the online reservation service. The service processing module 738, when requested by the task flow processing module 736, can establish a network connection with the online booking service using the Web address stored in the service model and send the necessary parameters for booking (e.g., time, date, party size) to the online booking interface in a format according to the API of the online booking service.
in some examples, the natural language processing module 732, the conversation flow processing module 734, and the task flow processing module 736 are used jointly and iteratively to infer and define the user's intent, to obtain information to further clarify and refine the user's intent, and to ultimately generate a response (i.e., output to the user, or complete a task) to satisfy the user's intent. The generated response is a dialog response to the speech input that at least partially satisfies the user intent. Additionally, in some examples, the generated response is output as a speech output. In these examples, the generated response is sent to a speech synthesis processing module 740 (e.g., a speech synthesizer), where the generated response may be processed to synthesize the dialog response in speech form. In other examples, the generated response is data content relevant to satisfying the user request in the voice input.
In examples where the task flow processing module 736 receives multiple structured queries from the natural language processing module 732, the task flow processing module 736 first processes a first structured query of the received structured queries in an attempt to complete the first structured query and/or to perform one or more tasks or actions represented by the first structured query. In some examples, the first structured query corresponds to a highest ranked executable intent. In other examples, the first structured query is selected from structured queries received based on a combination of a corresponding speech recognition confidence score and a corresponding intent confidence score. In some examples, if the task flow processing module 736 encounters an error during processing of the first structured query (e.g., due to an inability to determine the necessary parameters), the task flow processing module 736 may continue to select and process a second structured query of the received structured queries that corresponds to a lower ranked executable intent. The second structured query is selected, for example, based on the speech recognition confidence score of the corresponding candidate text representation, the intent confidence score of the corresponding candidate actionable intent, the missing necessary parameters in the first structured query, or any combination thereof.
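The fallback behavior can be sketched as follows, assuming a hypothetical execution function that raises an error when a necessary parameter is missing; the structured query format is illustrative.

```python
class MissingParameterError(Exception):
    pass

def execute(structured_query):
    """Hypothetical task execution that fails when a necessary parameter is absent."""
    if any(v is None for v in structured_query["params"].values()):
        raise MissingParameterError(structured_query["intent"])
    return f"performed: {structured_query['intent']}"

def process_queries(ranked_queries):
    """Try the highest-ranked structured query first; on error, fall back to the next."""
    for query in ranked_queries:
        try:
            return execute(query)
        except MissingParameterError:
            continue
    return None

queries = [
    {"intent": "restaurant reservation", "params": {"time": "7pm", "date": None}},
    {"intent": "set reminder", "params": {"time": "7pm", "subject": "dinner"}},
]
print(process_queries(queries))   # falls back to the second structured query
```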
The speech synthesis processing module 740 is configured to synthesize speech output for presentation to the user. Speech synthesis processing module 740 synthesizes speech output based on text provided by the digital assistant. For example, the generated dialog response is in the form of a text string. Speech synthesis processing module 740 converts the text string into audible speech output. Speech synthesis processing module 740 uses any appropriate speech synthesis technique to generate speech output from text, including but not limited to: concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sinewave synthesis. In some examples, speech synthesis processing module 740 is configured to synthesize individual words based on phoneme strings corresponding to the words. For example, a phoneme string is associated with a word in the generated dialog response. The phoneme string is stored in metadata associated with the word. Speech synthesis processing module 740 is configured to directly process the phoneme string in the metadata to synthesize the word in speech form.
In some examples, instead of (or in addition to) using speech synthesis processing module 740, speech synthesis is performed on a remote device (e.g., server system 108), and the synthesized speech is sent to the user device for output to the user. This occurs, for example, in some implementations where output for the digital assistant is generated at a server system. And because server systems generally have more processing power or resources than a user device, it is possible to obtain higher quality speech output than would be practical with client-side synthesis.
Additional details regarding digital assistants can be found in U.S. utility patent application No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, and U.S. utility patent application No. 13/251,088, entitled "Generating and Processing Task Items That Represent Tasks to Perform," filed September 30, 2011, the entire disclosures of which are incorporated herein by reference.
4. Offline personal assistant
Fig. 8 illustrates a process 800 for operating a digital assistant, according to various examples. Process 800 is performed, for example, using one or more electronic devices implementing a digital assistant. In some examples, process 800 is performed using a client-server system (e.g., system 100), and the blocks of process 800 are divided in any manner between a server (e.g., DA server 106) and a client device. In other examples, the blocks of process 800 are divided between a server and multiple client devices (e.g., a mobile phone and a smart watch). Thus, while portions of process 800 are described herein as being performed by a particular device of a client-server system, it should be understood that process 800 is not so limited. In other examples, process 800 is performed using only a client device (e.g., user device 104) or only a plurality of client devices. In process 800, some blocks are optionally combined, the order of some blocks is optionally changed, and some blocks are optionally omitted. In some examples, additional steps may be performed in connection with process 800.
In general, process 800 may be implemented, for example, using an automated digital assistant of an electronic device to perform tasks. As described in further detail below, in some examples, an electronic device may receive a plurality of tasks and determine which of the plurality of tasks to perform to satisfy a user request. Each task may be determined based on a score (e.g., a usefulness score) associated with the task. It should be appreciated that the plurality of tasks may be provided by any number of devices. The plurality of tasks may be provided by a single device, or each of the plurality of tasks may be provided by a respective device. For example, a first task may be provided by an electronic device and a second task may be provided by another electronic device (such as a server), and the electronic device may determine which task to perform based on the respective scores of the first and second tasks. The electronic device then performs the determined task and optionally provides an output indicating whether the task was performed.
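A minimal sketch of this selection between a device-determined task and a server-provided task is shown below, assuming each task arrives with its usefulness score; the example tasks, the scores, and the tie-breaking rule in favor of the local task are illustrative assumptions.

```python
def choose_task(local_task, local_score, server_task, server_score):
    """Pick whichever task (device-determined or server-provided) carries the
    higher usefulness score; ties here favor the device-determined task."""
    if local_score >= server_score:
        return local_task
    return server_task

task = choose_task(("call", "Ross (contact)"), 0.84,
                   ("call", "Ross's Diner (business)"), 0.61)
print(task)   # the electronic device then performs the selected task
```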
At block 805, the electronic device receives a natural language input. In some examples, the electronic device is electronic device 904 (fig. 9). In some examples, the electronic device 904 may be the user device 104.
The natural language input is a speech input or a text input. In some examples, the natural language input includes a request for the electronic device and/or another device to perform a task. For example, the natural language input "Call Ross" includes a request, directed to the electronic device and in particular to a digital assistant of the electronic device, to initiate a call (e.g., a telephone call). In some examples, the natural language input further specifies one or more parameters for the requested task. For example, "Ross" may be specified as a parameter corresponding to a contact stored on the electronic device or to a business name. Whether the natural language input pertains to calling a contact or calling a business may be determined by the electronic device, for example, by disambiguating the parameter based on another input and/or based on contextual information of the electronic device, as described in further detail below.
at block 810, the electronic device determines a first task and a first usefulness score associated with the first task. For example, a first task and a first usefulness score are determined based on the natural language input.
In some examples, the natural language input is a natural language speech input. Thus, determining the first task may include providing one or more candidate text representations (e.g., text strings) of the natural language speech input, for example, using STT processing module 730. As described above, each candidate text representation may be associated with a speech recognition confidence score, and the candidate text representations may be ranked accordingly. In other examples, the natural language input is a text input and is provided as a candidate text representation, where n = 1. Text input provided as a candidate text representation in this manner may be assigned a maximum speech recognition confidence score, or any other speech recognition confidence score.
Determining the first task may further include providing one or more candidate intents based on the n-best (e.g., highest ranked) candidate text representations, for example, using natural language processing module 732. Each of the candidate intents may be associated with an intent confidence score, and the candidate intents may be ranked accordingly. In some examples, multiple candidate intents are identified for each candidate text representation. Additionally, in some examples, a structured query is generated (partially or fully) for each candidate intent.
Candidate tasks are then determined, for example, using task flow processing module 736, based on the m best (e.g., highest ranked) candidate intents. In some examples, candidate tasks are identified based on the structured query for each of the m best (e.g., highest ranked) candidate intents. For example, as described above, a structured query can be implemented in accordance with one or more task flows, such as task flow 754, to determine tasks associated with candidate intents.
Once candidate tasks are identified based on the candidate intent, a usefulness score for the candidate intent is determined. In some examples, usefulness scores are determined for all candidate intents, and in other examples, usefulness scores are determined for a subset of the candidate intents. In some examples, a usefulness score for a candidate task is determined based on a speech recognition confidence score of a candidate text representation associated with the candidate task and/or an intent confidence score of a candidate intent associated with the candidate task.
In some examples, the usefulness score is based on context, such as a context of the electronic device. For example, the usefulness score may be based on information stored on the electronic device (e.g., user-specific information) including, but not limited to, contacts (e.g., phone contacts, email contacts), call history, messages, emails, calendars, music, or any combination thereof. For example, consider the natural language input "play my music." A task of playing music stored on the electronic device may be associated with a relatively high usefulness score. As another example, the usefulness score may be based on one or more tasks previously performed by the electronic device. In some cases, tasks performed more recently by the electronic device may be weighted more heavily. Consider the natural language input "send a message to Ted, saying 'arriving soon'." A task of sending the message to a contact named Ted with whom the user exchanges messages frequently may be associated with a relatively high usefulness score, and/or a task of sending the message to a contact named Ted with whom the user rarely exchanges messages may be associated with a relatively low usefulness score. As another example, the usefulness score may be based on a location and/or environment of the electronic device. Consider the natural language input "provide a route to a gas station." A task of retrieving a route to a gas station determined to be near the electronic device may be associated with a relatively high usefulness score, and a task of retrieving a route to a gas station determined to be far from the electronic device may be associated with a relatively low usefulness score.
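One way such context signals could be folded into a usefulness score is sketched below; the weights, the specific signals, and the field names are illustrative assumptions rather than the scoring actually used by process 800.

```python
def usefulness_score(candidate, context):
    """Combine recognition and intent confidence with simple context signals.
    The weights and signals are illustrative only."""
    score = 0.5 * candidate["asr_confidence"] + 0.5 * candidate["intent_confidence"]
    if candidate.get("contact") in context.get("frequent_contacts", set()):
        score += 0.1                                # frequently messaged contact
    if candidate.get("distance_km") is not None:
        score -= 0.01 * candidate["distance_km"]    # prefer nearby results
    return score

print(usefulness_score(
    {"asr_confidence": 0.9, "intent_confidence": 0.8, "contact": "Ted"},
    {"frequent_contacts": {"Ted"}},
))
```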
For example, consider consecutive natural language inputs provided by a user to an automated digital assistant of an electronic device: (1) "Is there a meeting during lunch?" and (2) "How about at 2 o'clock in the afternoon?" In this case, interpreting the second input depends on the conversational context established by the first input, and the usefulness score of a task responsive to the second input may be determined accordingly.
once all of the candidate tasks and the usefulness scores for the candidate tasks are determined, the electronic device selects the candidate task as the first task. Selecting a candidate task as the first task may include selecting the candidate task associated with the best (e.g., highest) usefulness score. In other examples, selecting a candidate task as the first task may include selecting a candidate task first identified by the electronic device.
at block 815, the electronic device receives a second task and a second usefulness score associated with the second task. In some examples, the second task and the second usefulness score are received from another device, such as a DA server. In some examples, the DA server is DA server 906 (fig. 9). In some examples, DA server 906 may be DA server 106 (fig. 1).
In some examples, the second task and the second usefulness score received by the electronic device are based on natural language input. Thus, in some cases, the electronic device provides the natural language input to the DA server, and in response, the DA server provides the electronic device with the second task and the second usefulness score based on the natural language input. In some examples, the electronic device provides the natural language input and/or the context of the electronic device to the DA server prior to determining the first task and the first usefulness score. In other examples, the electronic device provides the natural language input and/or the context of the electronic device to the DA server when determining the first task and the first usefulness score. In other examples, the electronic device provides the natural language input and the first usefulness score to the DA server after determining the first task and the first usefulness score.
In some examples, the electronic device selectively provides the natural language input and/or the context to the DA server. For example, the electronic device may provide the natural language input and the context to the DA server only when the electronic device has established communication with the DA server and/or has determined that the communication between the electronic device and the DA server satisfies a predetermined threshold. A threshold applied in this manner may be based on a maximum transmission rate, an average transmission rate, loss, interference, latency, or any combination thereof.
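A sketch of such a threshold check is shown below; the metrics, threshold values, and function name are hypothetical.

```python
def should_query_server(link_stats, min_rate_kbps=256, max_latency_ms=400):
    """Send the input and context to the DA server only when the connection is
    usable; the metrics and thresholds here are assumed for illustration."""
    return (link_stats.get("avg_rate_kbps", 0) >= min_rate_kbps
            and link_stats.get("latency_ms", float("inf")) <= max_latency_ms
            and link_stats.get("loss", 1.0) < 0.05)

print(should_query_server({"avg_rate_kbps": 1200, "latency_ms": 80, "loss": 0.01}))
```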
the DA server may provide the second task and the second usefulness score by providing (e.g., determining) a candidate text representation, a candidate intent, and a candidate task and selecting the candidate task as the second task, as described above. The DA server may then provide the second task and a second usefulness score corresponding to the second task to the electronic device, for example, as shown in fig. 9.
In some examples, the second task and the second usefulness score may be determined by the DA server based on a context of the electronic device. Thus, in some cases, the electronic device may provide the context of the electronic device to the DA server such that the DA server may provide the second task and the second usefulness score based on the context of the electronic device. In some examples, the context of the electronic device provided in this manner may include a session context of the electronic device. Providing the session context in this manner allows the DA server to determine the second task in cases where the natural language input refers back to one or more other natural language inputs. In other examples, the context of the electronic device provided to the DA server in this manner may indicate a location of the electronic device, one or more tasks previously performed by the electronic device, and/or any other context of the electronic device.
In some examples, the context of the electronic device may relate to personal information and/or private information associated with a user of the electronic device, such that the context is provided in a privacy-preserving manner. The session context provided to the DA server may, for example, be limited to an identification of the domain of the natural language input.
At block 820, the electronic device determines whether the first usefulness score is higher than the second usefulness score. In some examples, the electronic device determines whether the first usefulness score is higher than the second usefulness score by comparing the first usefulness score to the second usefulness score. In other examples, the electronic device receives an indication from another device (such as a DA server) indicating whether the first usefulness score is higher than the second usefulness score.
In some examples, the electronic device adjusts (e.g., increases, decreases) one or more of the first usefulness score and the second usefulness score prior to determining whether the first usefulness score is higher than the second usefulness score. For example, if the electronic device determines a first task before receiving a second task, the electronic device may increase the first usefulness score and/or decrease the second usefulness score. Conversely, if the electronic device receives a second task before the electronic device determines the first task, the electronic device may increase the second usefulness score and/or decrease the first usefulness score. As another example, the usefulness score may be adjusted based on a rate at which tasks associated with the usefulness score are provided in response to the natural language input. As another example, the usefulness score may be increased or decreased based on the device type of the electronic device and/or the device type of the DA server.
In general, in determining whether the first usefulness score is higher than the second usefulness score, the electronic device selects, from the first task and the second task, the task determined to be most appropriate to satisfy the user request. In some examples, the electronic device is associated with a user, and the task may be determined based on personal information and/or private information of the user stored on the electronic device. Conversely, in at least some instances, the DA server may lack access to certain contextual information (e.g., personal and/or private information of the user), but may determine the task using more powerful and/or more thoroughly trained models (e.g., speech recognition models, statistical language models, natural language processing models, ontologies, task flow models, service models, etc.) than those employed by the electronic device. Thus, the electronic device and the DA server may each provide tasks according to their respective advantages.
In some examples, the DA server may not be able to provide the second task and usefulness score within a predetermined period of time (e.g., the DA server times out, loses connection). In response, the electronic device may select the first task as the selected task.
If the electronic device determines that the first usefulness score is higher than the second usefulness score, then at block 825 the electronic device performs the first task determined by the electronic device. The electronic device may also provide an output indicating that the first task has been performed. The output may be any type of output (e.g., visual, audible, tactile), and may specify whether the first task was performed successfully or unsuccessfully.
If the electronic device determines that the second usefulness score is higher than the first usefulness score, then at block 830 the electronic device performs the second task received from the DA server. The electronic device may also provide an output indicating that the second task has been performed. The output may be any type of output (e.g., visual, audible, tactile), and may specify whether the second task was performed successfully or unsuccessfully.
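Blocks 815-830 can be pictured, very loosely, as a race between the on-device result and the server result followed by a score comparison. The TypeScript below is a simplified sketch under stated assumptions: the ScoredTask shape, the 2-second timeout, and the small "arrived first" bonus are all hypothetical stand-ins for the adjustments and timeout behavior described above.

```typescript
// Simplified arbitration between an on-device task and a server-provided task.
// ScoredTask, the adjustment delta, and the timeout value are assumptions.
interface ScoredTask {
  source: "device" | "server";
  usefulness: number;
  perform: () => Promise<boolean>; // resolves true on success
}

async function selectAndPerform(
  deviceTask: ScoredTask,
  serverTaskPromise: Promise<ScoredTask>,
  timeoutMs = 2000
): Promise<void> {
  // If the server does not answer within the (assumed) timeout,
  // fall back to the on-device task.
  const serverTask = await Promise.race<ScoredTask | null>([
    serverTaskPromise,
    new Promise<null>((resolve) => setTimeout(() => resolve(null), timeoutMs)),
  ]);

  let chosen = deviceTask;
  if (serverTask !== null) {
    // Illustrative adjustment: slightly favor the result that was already
    // available (here, the on-device task) before comparing scores.
    const adjustedDeviceScore = deviceTask.usefulness + 0.05;
    chosen = adjustedDeviceScore >= serverTask.usefulness ? deviceTask : serverTask;
  }

  const succeeded = await chosen.perform();
  console.log(
    `${chosen.source} task ${succeeded ? "performed successfully" : "failed"}`
  );
}
```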
As such, the electronic device and the DA server may each provide a task and a usefulness score based on, for example, the natural language input provided to the electronic device. In some examples, the electronic device and the DA server may operate simultaneously (i.e., in parallel) to provide their respective tasks. In other examples, one of the electronic device and the DA server may provide its task before the other device provides its task. The electronic device may then determine (e.g., select) one of the tasks respectively provided by the electronic device and the DA server based on the scores corresponding to the tasks. Accordingly, the electronic device may select the task determined to be most appropriate for satisfying the intent of the natural language input.
In some examples, the device that provided the selected task (of the first task and the second task) provides tasks for the remainder of the session between the user and the automated digital assistant. For example, during the session, a natural language input may be received, and based on the natural language input, a first task is provided by the electronic device and a second task is provided by the DA server. If the first task is selected, the electronic device alone may provide tasks based on subsequent natural language inputs for the remainder of the session. This may include forgoing providing each subsequent natural language input to the DA server. Similarly, if the second task is selected, the DA server alone may provide tasks for the remainder of the session. This may include providing all subsequent natural language inputs of the session to the DA server and forgoing determining tasks with the electronic device.
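This "winner keeps the session" behavior could be tracked with something as small as the hypothetical router below; the class name and its interface are assumptions used purely to illustrate the idea.

```typescript
type Provider = "device" | "server";

// Hypothetical session router: once a provider's task is selected for the
// first turn, that provider handles all subsequent inputs of the session.
class SessionRouter {
  private winner: Provider | null = null;

  recordSelection(selected: Provider): void {
    this.winner = selected;
  }

  // Before a winner exists, both providers are consulted in parallel.
  route(): Provider | "both" {
    return this.winner ?? "both";
  }

  endSession(): void {
    this.winner = null;
  }
}
```

A new session would simply call endSession, so both providers are raced again on the next first turn.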
The operations described above with reference to fig. 8 are optionally implemented by the components depicted in fig. 1-4, 6A-6B, and 7A-7C. For example, the operations of process 800 may be implemented by any device (or component thereof) described herein, including but not limited to devices 104, 200, 400, 600, 904, and 906. Those of ordinary skill in the art will clearly understand how other processes can be implemented based on the components depicted in fig. 1-4, 6A-6B, and 7A-7C.
As described with respect to process 800 of fig. 8 and flowchart 900 of fig. 9, in some examples, an electronic device (e.g., user device 904) receives a task and a usefulness score and selects a task to perform based on the usefulness score.
Conversely, in other examples, the DA server receives the tasks and usefulness scores and selects the task to be performed by the electronic device. Referring to the flowchart 1000 of fig. 10, the electronic device 1004 provides a first task and a first usefulness score, and the DA server 1006 provides a second task and a second usefulness score. The electronic device 1004 then provides the first task and the first usefulness score to the DA server 1006, which in turn selects from the first task and the second task based on the respective usefulness scores, as described above. If the task selected by the DA server 1006 is the second task, the DA server 1006 provides the second task to the electronic device 1004. If the task selected by the DA server 1006 is the first task, the DA server 1006 provides an indication to the electronic device 1004 that the first task is selected. In some examples, providing the indication in this manner includes providing the first task to the electronic device 1004. The electronic device 1004 performs the selected task and optionally provides an output indicating whether the selected task was performed.
5. Performing tasks in a privacy-preserving manner
Fig. 11A illustrates an exemplary sequence 1100 for performing tasks in a privacy-preserving manner, in accordance with various examples. In some examples, one or more operations of sequence 1100 are optionally combined, the order of the operations is optionally changed, and/or some operations are optionally omitted. In some examples, additional steps may be performed in conjunction with sequence 1100. Furthermore, the use of "sequence" is not intended to require a particular order of interaction unless otherwise indicated. An additional exemplary description of performing tasks in a privacy-preserving manner may be found in U.S. provisional patent application 62/505,019, entitled "MAINTAINING PRIVACY OF PERSONAL INFORMATION", filed on May 11, 2017, the contents of which are hereby incorporated by reference in their entirety.
The operations of sequence 1100 may be performed using electronic device 1104 and server 1106, as described herein. The electronic device 1104 can be any of the devices 104, 200, 400, and 600 (fig. 1, 2, 4, and 6A-6B), and the server 1106 can be the DA server 106 (fig. 1). It should be understood that in some examples, one or more alternative or additional devices may be used to perform the operations of sequence 1100. For example, one or more operations of sequence 1100, depicted as being performed by electronic device 1104, may be performed using multiple electronic devices.
In general, the operations of sequence 1100 may be implemented to perform tasks in a privacy-preserving manner. As described in further detail below, in some examples, a domain of the natural language input is identified by a server and a process flow corresponding to the domain is provided from the server to the electronic device such that the electronic device performs tasks according to the process flow. Thus, during operation, data (e.g., sensitive data) of the electronic device corresponding to the task is not exposed to other devices, including the server 1106.
At operation 1110, the electronic device 1104 receives (e.g., via a microphone) a natural language input. The natural language input may be a speech input or a text input. In some examples, the electronic device 1104 receives a natural language input indicating a request for a digital assistant of the electronic device 1104. The natural language input may include any request that may be directed to a digital assistant. For example, the natural language input "provide a route to Starbucks" may request that the digital assistant of electronic device 1104 provide a driving route from the location of the electronic device to the nearest Starbucks location.
As described above, the natural language input may correspond to one or more privacy domains (e.g., a financial domain, a health domain). In some examples, a privacy domain is any domain (or hyper-domain) in which an executable intent or attribute of the domain is associated with private data. In some examples, the private data includes any personal (e.g., user-specific) data that is sensitive (e.g., financial data, health data).
At operation 1112, the electronic device 1104 provides the natural language input to the server 1106. If the natural language input is a speech input, the electronic device 1104 may also provide a textual representation of the natural language input to the server 1106. The textual representation may be provided using a speech-to-text processing module, such as STT processing module 730.
In some examples, the electronic device 1104 also provides the context of the electronic device 1104 to the server 1106. The contextual information may include information stored on the electronic device (e.g., user-specific information) including, but not limited to, contacts (e.g., phone contacts, email contacts), call history, messages, email, calendar, music, or any combination thereof. The contextual information may include information describing the location and/or environment of the electronic device.
In some examples, the contextual information may include information related to one or more tasks previously performed by the electronic device 1104. The context information may, for example, identify each task and/or any executable intent or domain associated with each task. Alternatively, in some examples, the context information may indicate only the domain associated with a previously performed task. The context information may be limited in this manner in cases where the task (or its intent) corresponds to a privacy domain. By indicating only the domain of the previously performed task, the electronic device need not expose the previously performed task and/or any results associated with the task to other devices, such as the server 1106.
At operation 1114, the server 1106 receives the natural language input from the electronic device 1104 and identifies a domain of the natural language input. In general, identifying a domain of the natural language input includes associating a textual representation of the natural language input with an executable intent and identifying a domain associated with the executable intent. As described above, the textual representation may be provided by the electronic device 1104 to the server 1106 and/or may be provided by the server 1106 based on the natural language input. The textual representation may be associated with the executable intent using a natural language processing module, such as natural language processing module 732.
In some examples, multiple candidate domains of the natural language input are identified, and a domain of the natural language input is identified (e.g., selected) from the candidate domains. For example, multiple candidate text representations (e.g., text strings) of the natural language speech input may be provided (by the electronic device 1104 and/or the server 1106) and each of the candidate text representations may be assigned a speech recognition confidence score. The candidate textual representations may be ranked accordingly.
Each of the n-best (e.g., highest ranked) candidate text representations may then be associated with a respective actionable intent. An intent confidence score may be assigned to each executable intent, and the executable intents may be ordered accordingly. In some examples, the intent confidence score may be based on a speech recognition confidence score. The domain of the candidate intent associated with the highest intent confidence score may be identified as the domain of the natural language input.
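The n-best ranking described above can be illustrated with a short sketch. In the TypeScript below, the 0.3/0.7 weighting that combines the speech recognition confidence with the natural language confidence is an assumption made for the example; the disclosure says only that the intent confidence score may be based on the speech recognition confidence score.

```typescript
// Illustrative n-best domain identification; weighting values are assumptions.
interface CandidateIntent {
  domain: string;        // e.g. "health", "messages"
  asrConfidence: number; // speech recognition confidence of its text, 0..1
  nlConfidence: number;  // natural language processing confidence, 0..1
}

function identifyDomain(candidates: CandidateIntent[], nBest = 5): string {
  if (candidates.length === 0) throw new Error("no candidate intents");

  const ranked = [...candidates]
    // Keep only the n-best candidates by speech recognition confidence.
    .sort((a, b) => b.asrConfidence - a.asrConfidence)
    .slice(0, nBest)
    // Intent confidence here is assumed to blend both confidences.
    .map((c) => ({ ...c, intentScore: 0.3 * c.asrConfidence + 0.7 * c.nlConfidence }))
    .sort((a, b) => b.intentScore - a.intentScore);

  // The domain of the highest-scoring candidate intent is identified
  // as the domain of the natural language input.
  return ranked[0].domain;
}
```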
At operation 1116, the server 1106 provides the process flow corresponding to the identified domain to the electronic device 1104. In some examples, the server 1106 generates the process flow and provides the process flow to the electronic device 1104 in response to identifying the domain. In other examples, the server 1106 retrieves the process flow (e.g., from the server 1106 or another device) and provides the process flow to the electronic device 1104.
Generally speaking, a process flow is a set of executable instructions (e.g., JavaScript) that, when executed, allow a device, such as electronic device 1104, to determine (e.g., select, identify) and/or perform a task based on natural language input. Thus, the process flow may include instructions for completing a structured query and/or performing a task corresponding to a natural language input (e.g., a task flow, a dialog flow, a user interface flow). In some examples, the process flow may also include instructions for speech input processing (e.g., speech-to-text models) and/or intent inference. Including speech input processing and intent inference instructions in this manner may, for example, enable electronic device 1104 to locally process subsequent natural language input corresponding to the identified domain.
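Because the disclosure describes a process flow as a set of executable instructions (e.g., JavaScript) bundling a task flow, a dialog flow, a UI flow, and optionally speech and intent processing, it can be pictured roughly as an object with the following shape. The TypeScript below is purely illustrative; every field name is an assumption rather than an API from the disclosure.

```typescript
// Hypothetical shape of a domain-specific process flow delivered to the device.
interface ProcessFlow {
  domain: string; // e.g. "health"
  // Completes a structured query and performs the task locally on the device.
  taskFlow: (structuredQuery: Record<string, string>) => Promise<string>;
  // Prompts the user for a missing or ambiguous parameter.
  dialogFlow: (missingParam: string) => Promise<string>;
  // Describes how a result should be rendered on this device.
  uiFlow: (result: string, deviceType: "phone" | "watch") => string;
  // Optional local models so follow-up inputs in this domain can stay on device.
  speechModelRef?: string;
  intentModelRef?: string;
}

// Example: the device runs the task flow locally, so the data used by the task
// (e.g., health records) never leaves the device.
async function runLocally(
  flow: ProcessFlow,
  structuredQuery: Record<string, string>
): Promise<string> {
  const result = await flow.taskFlow(structuredQuery);
  return flow.uiFlow(result, "phone");
}
```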
In some examples, the server 1106 also provides the candidate text representation (and the speech recognition confidence score) and/or the candidate actionable intent (and the intent confidence score) to the electronic device 1104.
At operation 1118, the electronic device 1104 uses the process flow provided by the server 1106 to determine a task based on the natural language input. In some examples, the electronic device 1104 uses the candidate text representations and the candidate actionable intents provided by the server 1106 to determine the task. For example, the electronic device may initiate a dialog with the user to obtain additional information based on the candidate text representations (e.g., to disambiguate parameters specified in the user request). As another example, the electronic device 1104 can generate and/or complete a structured query for one or more of the candidate actionable intents.
In some examples, the electronic device 1104 determines the task based on a context of the electronic device 1104. As described above, the contextual information may include, but is not limited to, information stored on the electronic device, information describing the location and/or environment of the electronic device, and/or information regarding one or more tasks previously performed by the electronic device 1104.
At operation 1120, the electronic device performs the task. In some examples, the electronic device 1104 performs the task using the process flow provided by the server 1106. The electronic device 1104 may, for example, use the process flow to perform the task corresponding to the structured query for the highest-ranked candidate actionable intent.
In some examples, performing the task includes retrieving private data from a memory of the electronic device. The electronic device may, for example, access a portion of memory (e.g., a database) dedicated to storing private data. Additionally or alternatively, in some examples, performing the task includes requesting private data from a third-party application, for example, using an intent object data structure. Additional exemplary descriptions of operations using third-party applications may be found in U.S. provisional patent application 62/348,929, entitled "APPLICATION INTEGRATION WITH A DIGITAL ASSISTANT", filed on 1/6/2016, and U.S. patent application 62/444,162, entitled "APPLICATION INTEGRATION WITH A DIGITAL ASSISTANT", filed on 9/1/2017, the contents of which are hereby incorporated by reference in their entirety.
At operation 1122, the electronic device 1104 provides an output regarding whether the task has been performed. In some examples, the output is a natural language output generated by the electronic device 1104. The output may provide user-specific information in response to the user request (e.g., "you have burned 150 calories today"), or may indicate that the requested information is not available (e.g., "no data related to the request was found").
In some examples, the output is a visual output (e.g., on a touch-sensitive display of electronic device 1104). Thus, output may be provided according to a User Interface (UI) flow of the process flow. The UI flow may indicate a manner in which to display one or more outputs. In some examples, the UI flow indicates to a digital assistant of electronic device 1104 a manner of displaying the output, and the digital assistant causes electronic device 1104 to display the output accordingly. In some examples, the output is based on a type of the electronic device. A device with relatively complex display capabilities (e.g., a mobile phone) may, for example, have more comprehensive or detailed output than a device with relatively simple display capabilities (e.g., a smart watch).
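As a rough sketch of such device-dependent output, the function below renders a fuller result on a phone than on a watch. The TaskResult shape and the summary/detail split are assumptions for illustration only.

```typescript
// Illustrative UI-flow rendering that adapts the output to the device type.
interface TaskResult {
  summary: string;    // short summary, e.g. "150 calories burned today"
  details?: string[]; // optional per-item breakdown
}

function renderOutput(result: TaskResult, deviceType: "phone" | "watch"): string {
  if (deviceType === "watch") {
    // Relatively simple display capabilities: show only the summary.
    return result.summary;
  }
  // Relatively complex display capabilities: show the summary plus any detail lines.
  return [result.summary, ...(result.details ?? [])].join("\n");
}
```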
The description above is made with respect to the server 1106 providing the process flow, textual representations, and executable intents to the electronic device 1104 during operation. In some examples, however, the server 1106 also selectively determines a task and provides the task to the electronic device 1104. For example, referring to FIG. 11B, after identifying the domain of the natural language input (operation 1130), the server 1106 may selectively determine a task corresponding to the natural language input and provide the task to the electronic device 1104 in conjunction with the process flow, the textual representations, and the executable intents. The electronic device 1104 may then select from the task provided by the server 1106 (operation 1130) and the task provided by the electronic device (operation 1118) and perform the selected task. In some examples, the electronic device 1104 selects the task for the executable intent associated with the highest intent confidence score.
In some examples, the server 1106 selectively determines the tasks based on whether the identified domain is a predetermined type of domain (i.e., a privacy domain). The server 1106 can determine, for example, whether the domain's actionable intent or attributes of the domain are associated with a predetermined type of data (e.g., private data). If the server 1106 determines that the domain is not a predetermined type of domain, the server 1106 determines a task based on the natural language input and provides the task to the electronic device 1104, as described above. Conversely, if the server 1106 determines that the identified domain is a predetermined type of domain, the server 1106 foregoes determining and providing the task.
For example, consider the consecutive natural language inputs "How far did I run today?" and "How about yesterday?", where only by taking the first natural language input into account can the task of the second natural language input (i.e., retrieving the distance that the user ran yesterday) be correctly determined.
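The gating described above (operation 1130) amounts to a server-side check on the identified domain, as in the hedged sketch below. The set of privacy domains and the response shape are illustrative assumptions, not structures defined by the disclosure.

```typescript
// Hypothetical server-side gate: the server determines a candidate task only
// when the identified domain is not a privacy domain.
const PRIVACY_DOMAINS = new Set(["health", "finance"]);

interface ServerResponse {
  processFlowRef: string;  // the device always receives the process flow
  candidateTask?: string;  // omitted when the domain is a privacy domain
}

function buildServerResponse(
  domain: string,
  determineTask: (domain: string) => string
): ServerResponse {
  const response: ServerResponse = { processFlowRef: `flow:${domain}` };
  if (!PRIVACY_DOMAINS.has(domain)) {
    // Non-privacy domain: the server may also determine and provide a task.
    response.candidateTask = determineTask(domain);
  }
  // Privacy domain: forgo determining a task; the device determines it locally
  // using the process flow and its own (private) context.
  return response;
}
```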
The operations described above with reference to fig. 11A-11B are optionally implemented by components depicted in fig. 1-4, 6A-6B, and 7A-7C. For example, the operations of sequences 1100 and 1150 may be implemented by any of the devices (or components thereof) described herein, including but not limited to devices 104, 200, 400, 600, 1104, and 1106. Those of ordinary skill in the art will clearly understand how other processes can be implemented based on the components depicted in fig. 1-4, 6A-6B, and 7A-7C. The operations of sequences 1100, 1150 further illustrate the processes described below, including processes 1200 and 1300 of fig. 12 and 13, respectively.
Fig. 12 illustrates a process 1200 for providing tasks according to various examples. Process 1200 is performed, for example, using one or more electronic devices implementing a digital assistant. In some examples, process 1200 is performed using a client-server system (e.g., system 100), and the blocks of process 1200 are divided in any manner between a server (e.g., DA server 1106) and a client device (e.g., user device 1104). In other examples, the blocks of process 1200 are divided between a server and multiple client devices (e.g., a mobile phone and a smart watch). Thus, while portions of process 1200 are described herein as being performed by a particular device of a client-server system, it should be understood that process 1200 is not so limited. In other examples, process 1200 is performed using only a client device (e.g., user device 1104) or only a plurality of client devices. In process 1200, some blocks are optionally combined, the order of some blocks is optionally changed, and some blocks are optionally omitted. In some examples, additional steps may be performed in conjunction with process 1200.
At block 1205, the electronic device receives a natural language input corresponding to the domain. In some examples, the domain is a domain associated with private data, such as health data.
At block 1210, the electronic device provides natural language input to an external device. In some examples, the external device receives a natural language input; identifying a domain of natural language input; and providing the process flow corresponding to the domain to the electronic device. In some examples, providing the process flow includes providing the first textual representation (or representations) and/or the executable intent (or intents) based on the natural language input.
In some examples, the task is a first task. In some examples, identifying the domain of the natural language input includes determining whether the domain corresponding to the natural language input is a predetermined type of domain. In some examples, providing the process flow to the electronic device includes: in accordance with a determination that the domain corresponding to the natural language input is not a predetermined type of domain, determining a second candidate task associated with the natural language input and providing the second candidate task to the electronic device. In some examples, providing the process flow to the electronic device further comprises: in accordance with a determination that the domain corresponding to the natural language input is a predetermined type of domain, forgoing determining a second candidate task associated with the natural language input. In some examples, determining the second candidate task associated with the natural language input includes determining the second candidate task based on a context of the external device.
At block 1215, the electronic device receives a process flow corresponding to the domain from an external device. In some examples, the process flow is a set of executable instructions, such as JavaScript. In some examples, the set of executable instructions corresponds to the domain. In some examples, the process flow includes an Automated Speech Recognition (ASR) flow, a Natural Language (NL) processing flow, or a combination thereof. In some examples, the process flow includes a user interface flow.
At block 1220, the electronic device determines a task associated with the natural language input using the process flow corresponding to the domain. In some examples, the task includes providing a response to the user request, such as a request for personal and/or private data (e.g., health data). In some examples, determining the task includes determining the task based on a context of the electronic device. The context may include context stored on the electronic device, such as context relating to one or more interactions between the user and the digital assistant. In some examples, determining the task includes determining the task and determining a parameter associated with the task. In some examples, determining the task includes receiving a plurality of candidate tasks and selecting the task from the plurality of candidate tasks. The candidate tasks may be provided by the electronic device and the external device, respectively.
At block 1225, the electronic device performs the task. In some examples, performing the task includes retrieving private data from a database of the electronic device. In some examples, performing the task includes requesting data from a third party application. In some examples, performing the task includes performing the task using the parameter. In some examples, performing the task includes generating a natural language output. In some examples, performing the task includes selecting the task from the first candidate task and the second candidate task.
At block 1230, the electronic device provides output indicating whether the task has been performed. In some examples, providing the output includes displaying information about the domain and/or indicating whether the task executed successfully or unsuccessfully. In some examples, the output is based on a type of the electronic device. In some examples, providing the output includes providing a natural language output. In some examples, providing the output includes providing the output on a touch-sensitive display of the electronic device. In some examples, providing the output on the touch-sensitive display of the electronic device includes providing the output on the touch-sensitive display in accordance with the user interface flow. In some examples, providing the output on a touch-sensitive display of the electronic device includes providing data associated with the domain on the touch-sensitive display. In some examples, providing the output on the touch-sensitive display of the electronic device includes providing the output based on a context of the electronic device.
Fig. 13 illustrates a process 1300 for providing tasks according to various examples. Process 1300 is performed, for example, using one or more electronic devices implementing a digital assistant. In some examples, process 1300 is performed using a client-server system (e.g., system 100), and the blocks of process 1300 are divided in any manner between a server (e.g., DA server 1106) and a client device (e.g., user device 1104). In other examples, the blocks of process 1300 are divided between a server and multiple client devices (e.g., a mobile phone and a smart watch). Thus, while portions of process 1300 are described herein as being performed by a particular device of a client-server system, it should be understood that process 1300 is not so limited. In other examples, process 1300 is performed using only a client device (e.g., user device 1104) or only a plurality of client devices. In process 1300, some blocks are optionally combined, the order of some blocks is optionally changed, and some blocks are optionally omitted. In some examples, additional steps may be performed in connection with process 1300.
At block 1305, the electronic device receives a natural language input from another electronic device.
At block 1310, the electronic device identifies a domain based on the natural language input. In some examples, identifying the domain includes generating a text string based on the natural language input.
At block 1315, the electronic device determines whether the identified domain is a predetermined type of domain.
At block 1320, in accordance with a determination that the identified domain is not a predetermined type of domain, the electronic device determines a candidate task associated with the natural language input and provides the candidate task to the other electronic device. In some examples, determining the candidate task includes determining the candidate task based on a context of the electronic device.
At block 1325, in accordance with a determination that the identified domain is a predetermined type of domain, the electronic device forgoes determining a second candidate task associated with the natural language input.
At block 1330, the electronic device provides the process flow to the other electronic device. In some examples, the process flow corresponds to the identified domain. In some examples, providing the process flow includes providing a text string to the other electronic device.
According to some implementations, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) is provided that stores one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods or processes described herein.
according to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes means for performing any of the methods and processes described herein.
According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes a processing unit configured to perform any of the methods and processes described herein.
According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods and processes described herein.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications. Others skilled in the art are thereby enabled to best utilize the technology and the various embodiments, with various modifications as are suited to the particular use contemplated.
Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the disclosure and examples as defined by the following claims.
As described above, one aspect of the present technology is to collect and use data available from various sources to improve digital assistant processing between a server and a client. The present disclosure contemplates that, in some instances, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, phone numbers, email addresses, Twitter IDs, home addresses, data or records relating to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, personal information data may be used to improve digital assistant processing between a server and a client through one or more usefulness scores. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user. For example, health and fitness data may be used to provide insight into the overall health condition of a user, or may be used as positive feedback for individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently adhere to privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Furthermore, such collection and sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any steps needed to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different personal data types in each country.
Regardless of the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of an offline personal assistant, the present technology can be configured to allow users to opt in to or opt out of participation in the collection of personal information data during registration for services or at any time thereafter. In another example, users can choose not to provide health data for improving the offline personal assistant. In yet another example, users can choose to limit the collection of contextual information related to call history, messages, emails, calendars, music, and so forth. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed and then reminded again just before personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risk of inadvertent or unauthorized access or use. The risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect the privacy of the user. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, the context may be determined by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content requested by a device associated with the user, other non-personal information available to the offline personal assistant system, or publicly available information.
Claims (16)
1. A method, comprising:
At an electronic device with one or more processors:
receiving a natural language input;
Determining a first task and a first usefulness score associated with the first task based on the natural language input;
Receiving, from another electronic device, a second task and a second usefulness score associated with the second task;
determining whether the first usefulness score is higher than the second usefulness score;
in accordance with a determination that the first usefulness score is higher than the second usefulness score:
Performing the first task determined by the electronic device; and
Providing an output indicating whether the first task has been performed; and
In accordance with a determination that the second usefulness score is higher than the first usefulness score:
Performing the second task received from the other electronic device; and
providing an output indicating whether the second task has been performed.
2. The method of claim 1, wherein determining a first task and a first usefulness score associated with the first task based on the natural language input comprises:
Providing a text string based on the natural language input;
Determining a user intent based on the text string; and
determining the first task based on the user intent.
3. The method of any of claims 1-2, wherein determining a first task and a first usefulness score associated with the first task based on the natural language input comprises:
Determining the first task based on a context of the electronic device.
4. The method of any one of claims 1 to 3, wherein determining whether the first usefulness score is higher than the second usefulness score comprises:
Receiving an input from the other electronic device indicating whether the first usefulness score is higher than the second usefulness score.
5. The method of any of claims 1 to 4, further comprising:
providing, at the electronic device, the natural language input to the other electronic device, wherein the second task and the second usefulness score associated with the second task are based on the natural language input.
6. The method of any of claims 1-5, wherein the first usefulness score is based on a speech-to-text confidence score, a natural language confidence score, or a combination thereof.
7. The method of any of claims 1 to 6, further comprising:
Receiving a second natural language input;
Determining whether the first task is executed or the second task is executed;
In accordance with a determination that the first task is performed, determining, with the electronic device, a third task based on the second natural language input; and
In accordance with a determination that the second task is performed, providing the second natural language input to the other electronic device.
8. The method of any of claims 1-7, wherein determining a first task and a first usefulness score associated with the first task based on the natural language input comprises:
Determining the first task based on a session context of the electronic device.
9. The method of claim 8, further comprising:
Providing the session context of the electronic device to the other electronic device.
10. The method of any of claims 1-9, wherein the natural language input is a natural language speech input.
11. an electronic device, comprising:
one or more processors;
A memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
Receiving a natural language input;
Determining a first task and a first usefulness score associated with the first task based on the natural language input;
Receiving, from another electronic device, a second task and a second usefulness score associated with the second task;
Determining whether the first usefulness score is higher than the second usefulness score;
In accordance with a determination that the first usefulness score is higher than the second usefulness score:
Performing the first task determined by the electronic device; and
providing an output indicating whether the first task has been performed; and
in accordance with a determination that the second usefulness score is higher than the first usefulness score:
performing the second task received from the other electronic device; and
Providing an output indicating whether the second task has been performed.
12. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to:
Receiving a natural language input;
Determining a first task and a first usefulness score associated with the first task based on the natural language input;
Receiving, from another electronic device, a second task and a second usefulness score associated with the second task;
Determining whether the first usefulness score is higher than the second usefulness score;
In accordance with a determination that the first usefulness score is higher than the second usefulness score:
executing the first task; and
providing an output indicating whether the first task has been executed; and
in accordance with a determination that the second usefulness score is higher than the first usefulness score:
executing the second task; and
providing an output indicating whether the second task has been performed.
13. a system, comprising:
means for receiving a natural language input;
means for determining a first task and a first usefulness score associated with the first task based on the natural language input;
Means for receiving a second task and a second usefulness score associated with the second task from another electronic device;
means for determining whether the first usefulness score is higher than the second usefulness score;
In accordance with a determination that the first usefulness score is higher than the second usefulness score, means for:
Performing the first task determined by the electronic device; and
Providing an output indicating whether the first task has been performed; and
in accordance with a determination that the second usefulness score is higher than the first usefulness score, means for:
Performing the second task received from the other electronic device; and
Providing an output indicating whether the second task has been performed.
14. An electronic device, comprising:
One or more processors;
A memory; and
One or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-10.
15. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform the method of any of claims 1-10.
16. An electronic device, comprising:
Apparatus for performing the method of any one of claims 1 to 10.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762504991P | 2017-05-11 | 2017-05-11 | |
US62/504,991 | 2017-05-11 | ||
DKPA201770439 | 2017-06-06 | ||
DKPA201770439A DK201770439A1 (en) | 2017-05-11 | 2017-06-06 | Offline personal assistant |
PCT/US2018/032075 WO2018209093A1 (en) | 2017-05-11 | 2018-05-10 | Offline personal assistant |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110574023A true CN110574023A (en) | 2019-12-13 |
Family
ID=64105118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880028447.6A Pending CN110574023A (en) | 2017-05-11 | 2018-05-10 | offline personal assistant |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3596625A1 (en) |
CN (1) | CN110574023A (en) |
WO (1) | WO2018209093A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11651439B2 (en) | 2019-08-01 | 2023-05-16 | Patty, Llc | System and method for pre-qualifying a consumer for life and health insurance products or services, benefits products or services based on eligibility and referring a qualified customer to a licensed insurance agent, producer or broker to facilitate the enrollment process |
WO2021022257A1 (en) | 2019-08-01 | 2021-02-04 | Patty Llc | Self-optimizing, multi-channel, cognitive virtual benefits product field underwriter and customer service representative |
-
2018
- 2018-05-10 CN CN201880028447.6A patent/CN110574023A/en active Pending
- 2018-05-10 WO PCT/US2018/032075 patent/WO2018209093A1/en unknown
- 2018-05-10 EP EP18732972.7A patent/EP3596625A1/en not_active Ceased
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101356525A (en) * | 2005-11-30 | 2009-01-28 | Microsoft Corporation | Adaptive semantic reasoning engine |
US20110015928A1 (en) * | 2009-07-15 | 2011-01-20 | Microsoft Corporation | Combination and federation of local and remote speech recognition |
EP2575128A2 (en) * | 2011-09-30 | 2013-04-03 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
CN103226949A (en) * | 2011-09-30 | 2013-07-31 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US20130132084A1 (en) * | 2011-11-18 | 2013-05-23 | Soundhound, Inc. | System and method for performing dual mode speech recognition |
US20160336007A1 (en) * | 2014-02-06 | 2016-11-17 | Mitsubishi Electric Corporation | Speech search device and speech search method |
CN106462617A (en) * | 2014-06-30 | 2017-02-22 | Apple Inc. | Intelligent automated assistant for tv user interactions |
WO2016144840A1 (en) * | 2015-03-06 | 2016-09-15 | Apple Inc. | Reducing response latency of intelligent automated assistants |
CN105677765A (en) * | 2015-07-28 | 2016-06-15 | TCL Corporation | Method and system recommending expected function sequence for users |
Non-Patent Citations (1)
Title |
---|
Li Hanqing et al.: "Instruction intent understanding method using deep learning with deep denoising autoencoders", Journal of Shanghai Jiao Tong University *
Also Published As
Publication number | Publication date |
---|---|
EP3596625A1 (en) | 2020-01-22 |
WO2018209093A1 (en) | 2018-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111901481B (en) | Computer-implemented method, electronic device, and storage medium | |
CN112567323B (en) | User activity shortcut suggestions | |
CN110364148B (en) | Natural assistant interaction | |
CN111418007B (en) | Multi-round prefabricated dialogue | |
CN111656439B (en) | Method for controlling electronic device based on delay, electronic device and storage medium | |
CN111480134B (en) | Attention-aware virtual assistant cleanup | |
CN110223698B (en) | Training a speaker recognition model of a digital assistant | |
CN110288994B (en) | Detecting triggering of a digital assistant | |
CN112767929B (en) | Privacy maintenance of personal information | |
CN107491469B (en) | Intelligent task discovery | |
CN109257941B (en) | Method, electronic device and system for synchronization and task delegation of digital assistants | |
CN115088250A (en) | Digital assistant interaction in a video communication session environment | |
CN112567332A (en) | Multimodal input of voice commands | |
CN116414282A (en) | Multi-modal interface | |
CN115221295A (en) | Personal requested digital assistant processing | |
CN115344119A (en) | Digital assistant for health requests | |
CN110603586B (en) | User interface for correcting recognition errors | |
CN118056172A (en) | Digital assistant for providing hands-free notification management | |
CN111524506B (en) | Client server processing of natural language input to maintain privacy of personal information | |
CN109257942B (en) | User-specific acoustic models | |
CN112015873A (en) | Speech assistant discoverability through in-device object location and personalization | |
CN111399714A (en) | User activity shortcut suggestions | |
CN110574023A (en) | offline personal assistant | |
CN115083414A (en) | Multi-state digital assistant for continuous conversation | |
CN110651324B (en) | Multi-modal interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20191213 |
WD01 | Invention patent application deemed withdrawn after publication |