AU2019100034B4 - Improving automatic speech recognition based on user feedback - Google Patents

Improving automatic speech recognition based on user feedback

Info

Publication number
AU2019100034B4
Authority
AU
Australia
Prior art keywords
examples
input
speech
user
recognition result
Prior art date
Legal status
Expired
Application number
AU2019100034A
Other versions
AU2019100034A4
Inventor
Mahesh Krishnamoorthy
Matthias Paulik
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from PCT/US2015/047062 (published as WO 2016/033257 A1)
Application filed by Apple Inc filed Critical Apple Inc
Priority to AU2019100034A
Application granted
Publication of AU2019100034A4
Publication of AU2019100034B4
Anticipated expiration
Current legal status: Expired

Abstract

Systems and processes for processing speech in a digital assistant are provided. In one example process, a first speech input can be received from a user. The first speech input can be processed using a first automatic speech recognition system to produce a first recognition result. An input indicative of a potential error in the first recognition result can be received. The input can be used to improve the first recognition result. For example, the input can include a second speech input that is a repetition of the first speech input. The second speech input can be processed using a second automatic speech recognition system to produce a second recognition result.

Description

IMPROVING AUTOMATIC SPEECH RECOGNITION BASED ON USER FEEDBACK
Cross-Reference to Related Application [0001] This application claims priority from U.S. Provisional Serial No. 62/043,041, filed on August 28, 2014, entitled AUTOMATIC SPEECH RECOGNITION BASED ON USER FEEDBACK and U.S. Non-Provisional Serial No. 14/591,754, filed on January 7, 2015, entitled AUTOMATIC SPEECH RECOGNITION BASED ON USER FEEDBACK, which are hereby incorporated by reference in their entirety for all purposes. Also incorporated by reference in its entirety is PCT/US2015/047062 (published as WO 2016/033257) filed on 26 August 2015.
Field [0002] This relates generally to automatic speech recognition and, more specifically, to improving automatic speech recognition based on user feedback.
Background [0003] Automatic speech recognition (ASR) systems can suffer from transcription errors. These errors can occur due to a variety of reasons, such as garbled speech inputs, speech inputs having noisy backgrounds, or speech inputs containing words that are phonetically similar to other words. Further, in real-time ASR systems, compromises in accuracy can be implemented to achieve acceptable latency times. For example, smaller vocabulary models or less robust speech recognition engines can be implemented. These compromises can contribute to transcription errors. Conventionally, each speech input received by an ASR system can be processed identically. However, processing all speech inputs identically can result in similar transcription errors repeatedly reoccurring, which can lead to frustration on the part of the user and a poor user experience.
Summary [0003A] As used herein, except where the context requires otherwise, the term "comprise" and variations of the term, such as "comprising," "comprises" and "comprised," are not intended to exclude other additives, components, integers or steps. [0003B] According to an aspect of the invention, there is provided a method for processing speech in a digital assistant, the method comprising: at an electronic device with a processor and memory storing one or more programs for execution by the processor: receiving, from a network interface, a first speech input; processing the first speech input using a first automatic speech recognition system to produce a first recognition result; performing a first task corresponding to a first user intent determined from the first speech recognition result; upon performing the first task, receiving an input representing a rejection of the first task; in response to receiving the input, processing at least a portion of the first speech input using a second automatic speech recognition system to produce a second speech recognition result, wherein the first automatic speech recognition system includes one or more speech recognition models, and the second automatic speech recognition system includes one or more speech recognition models that are different from the one or more speech recognition models of the first automatic speech recognition system; determining a combined speech recognition result based on the first speech recognition result and the second speech recognition result; and performing a second task corresponding to a second user intent determined from the combined speech recognition result.
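A minimal sketch of the claimed flow may help orient the reader. The Python below is illustrative only: the recognizer objects, the assistant helpers, and the feedback check are hypothetical stand-ins for the ASR systems and digital assistant modules recited above, not an actual implementation of the claim.

```python
# Illustrative sketch of the two-pass flow (hypothetical objects and methods).
def handle_speech(first_speech_input, asr_primary, asr_secondary, assistant):
    # First pass: lower-latency recognizer produces the first recognition result.
    first_result = asr_primary.recognize(first_speech_input)
    first_intent = assistant.infer_intent(first_result)
    assistant.perform_task(first_intent)

    # Feedback indicating a potential error, e.g., a rejection of the first task.
    feedback = assistant.await_feedback()
    if not feedback.is_rejection:
        return first_result

    # Second pass: a different (typically more accurate) ASR system processes
    # at least a portion of the original speech input.
    second_result = asr_secondary.recognize(first_speech_input)

    # Combine the two recognition results and act on the revised intent.
    combined = assistant.combine_results(first_result, second_result)
    second_intent = assistant.infer_intent(combined)
    assistant.perform_task(second_intent)
    return combined
```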
[0004] Systems and processes for processing speech in a digital assistant are provided. In an example process, a first speech input can be received from a user. The first speech input can be processed using a first automatic speech recognition system to produce a first recognition result. An input indicative of a potential error in the first recognition result can be received. The input can be used to improve the first recognition result.
[0005] In some examples, the input can include a second speech input that is a repetition of the first speech input. The second speech input can be processed using a second automatic speech recognition system to produce a second recognition result.
[0006] In some examples, the user can be prompted to repeat at least a portion of the first speech input. A third speech input that is a repetition of the first speech input can be received from the user. The third speech input can be processed using the second automatic speech recognition system to produce a third recognition result.
[0007] In some examples, the first speech input can be processed using the second automatic speech recognition system to produce a fourth recognition result.
Brief Description of the Drawings [0008] FIG. 1 illustrates a system and environment for implementing a digital assistant according to various examples.
[0009] FIG. 2 illustrates a user device implementing the client-side portion of a digital assistant according to various examples.
[0010] FIG. 3A illustrates a digital assistant system or a server portion thereof according to various examples.
[0011] FIG. 3B illustrates the functions of the digital assistant shown in FIG. 3A according to various examples.
[0012] FIGS. 4A-B illustrate a process for processing speech according to various examples.
[0013] FIG. 5 illustrates a functional block diagram of an electronic device according to various examples.
Detailed Description [0014] In the following description of examples, reference is made to the accompanying drawings, in which are shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the various examples.
[0015] As described above, repeated reoccurrence of similar errors from an ASR system can lead to poor user experience. In various examples described herein, systems and processes for improving speech processing based on user feedback are provided. In some examples, the speech processing can be performed in a digital assistant. In one example process, a first speech input can be received from a user. The first speech input can be processed using a first ASR system to produce a first recognition result. An input indicative of a potential error in the first recognition result can be received. The input can be used to produce an improved recognition result, thereby reducing the probability of similar errors reoccurring.
[0016] In some examples, the input can include a second speech input that is a repetition of the first speech input. Specifically, the user can repeat the first speech input to indicate a potential error in the first recognition result. The second speech input can be processed using a second ASR system to produce a second recognition result. In other examples, the user can be prompted to repeat at least a portion of the first speech input. A third speech input that is a repetition of the first speech input can be received from the user. The third speech input can be processed using the second ASR system to produce a third recognition result. In yet other examples, the first speech input can be processed using the second ASR system to produce a fourth recognition result. In some examples, the second ASR system can be more accurate than the first ASR system. Thus, the second recognition result, third recognition result, and fourth recognition result can each be more accurate than the first recognition result.
[0017] Further, in some examples, a combined result can be determined by performing ASR system combination using the first recognition result and the recognition result produced using the second ASR system (e.g., the second recognition result, the third recognition result, or the fourth recognition result). The combined result can be more accurate than the first recognition result.
[0018] The longer latency and computation times associated with the more accurate second ASR system and with performing ASR system combination can be an acceptable trade-off for reducing the probability of similar errors reoccurring. Specifically, after experiencing the error associated with the first recognition result, the user may prefer to wait longer to obtain a subsequent correct result rather than obtain the same error in a shorter period of time. The systems and processes disclosed herein can thus be implemented to reduce the probability of similar errors reoccurring during speech processing, thereby improving user experience.
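The combination step referenced in paragraph [0017] can be illustrated with a small word-level voting example. The sketch below aligns two hypotheses and, where they disagree, keeps the span with the higher total confidence; it is a simplified stand-in loosely in the spirit of ROVER-style system combination, and the function names and voting rule are assumptions rather than the specific combination method described here.

```python
# Toy combination of two recognition hypotheses, each a list of
# (word, confidence) tuples: align the word sequences and resolve
# disagreements in favor of the span with more total confidence.
from difflib import SequenceMatcher

def combine_hypotheses(first, second):
    words_a = [w for w, _ in first]
    words_b = [w for w, _ in second]
    combined = []
    for op, a0, a1, b0, b1 in SequenceMatcher(a=words_a, b=words_b).get_opcodes():
        if op == "equal":
            combined.extend(first[a0:a1])
        else:
            # Disagreement: keep whichever span carries more total confidence.
            span_a, span_b = first[a0:a1], second[b0:b1]
            conf_a = sum(c for _, c in span_a)
            conf_b = sum(c for _, c in span_b)
            combined.extend(span_a if conf_a >= conf_b else span_b)
    return combined

print(combine_hypotheses(
    [("call", 0.90), ("jim", 0.40), ("carpenter", 0.80)],
    [("call", 0.95), ("tim", 0.70), ("carpenter", 0.85)],
))  # -> [('call', 0.9), ('tim', 0.7), ('carpenter', 0.8)]
```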
1. System and Environment [0019] FIG. 1 illustrates a block diagram of a system 100 according to various examples. In some examples, the system 100 can implement a digital assistant. The terms “digital assistant,” “virtual assistant,” “intelligent automated assistant,” or “automatic digital assistant,” can refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent. For example, to act on an inferred user intent, the system can perform
one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, or the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form.
[0020] Specifically, a digital assistant can be capable of accepting a user request at least partially in the form of a natural language command, request, statement, narrative, and/or inquiry. Typically, the user request can seek either an informational answer or performance of a task by the digital assistant. A satisfactory response to the user request can be a provision of the requested informational answer, a performance of the requested task, or a combination of the two. For example, a user can ask the digital assistant a question, such as “Where am I right now?” Based on the user’s current location, the digital assistant can answer, “You are in Central Park near the west gate.” The user can also request the performance of a task, for example, “Please invite my friends to my girlfriend’s birthday party next week.” In response, the digital assistant can acknowledge the request by saying “Yes, right away,” and then send a suitable calendar invite on behalf of the user to each of the user’s friends listed in the user’s electronic address book. During performance of a requested task, the digital assistant can sometimes interact with the user in a continuous dialogue involving multiple exchanges of information over an extended period of time. There are numerous other ways of interacting with a digital assistant to request information or performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant also can provide responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc.
[0021] An example of a digital assistant is described in Applicant’s U.S. Utility Application Serial No. 12/987,982 for “Intelligent Automated Assistant,” filed January 10, 2011, the entire disclosure of which is incorporated herein by reference.
[0022] As shown in FIG. 1, in some examples, a digital assistant can be implemented according to a client-server model. The digital assistant can include a client-side portion 102a, 102b (hereafter "DA client 102") executed on a user device 104a, 104b, and a server-side portion 106 (hereafter "DA server 106") executed on a server system 108. The DA client 102 can communicate with the DA server 106 through one or more networks 110. The DA client 102 can provide client-side functionalities such as user-facing input and output
processing and communication with the DA server 106. The DA server 106 can provide server-side functionalities for any number of DA clients 102 each residing on a respective user device 104.
[0023] In some examples, the DA server 106 can include a client-facing I/O interface 112, one or more processing modules 114, data and models 116, and an I/O interface to external services 118. The client-facing I/O interface can facilitate the client-facing input and output processing for the digital assistant server 106. The one or more processing modules 114 can utilize the data and models 116 to process speech input and determine the user's intent based on natural language input. Further, the one or more processing modules 114 can perform task execution based on the inferred user intent. In some examples, the DA server 106 can communicate with external services 120 through the network(s) 110 for task completion or information acquisition. The I/O interface to external services 118 can facilitate such communications.
[0024] Examples of the user device 104 can include, but are not limited to, a handheld computer, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a cellular telephone, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, a television set-top box, a remote control, a wearable electronic device, or a combination of any two or more of these data processing devices or other data processing devices. More details on the user device 104 are provided in reference to an exemplary user device 104 shown in FIG. 2.
[0025] Examples of the communication network(s) 110 can include local area networks (“LAN”) and wide area networks (“WAN”), e.g., the Internet. The communication network(s) 110 can be implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
[0026] The server system 108 can be implemented on one or more standalone data processing apparatus or a distributed network of computers. In some examples, the server system 108 can also employ various virtual devices and/or services of third-party service
providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 108.
[0027] Although the digital assistant shown in FIG. 1 can include both a client-side portion (e.g., the DA-client 102) and a server-side portion (e.g., the DA-server 106), in some examples, the functions of a digital assistant can be implemented as a standalone application installed on a user device. In addition, the divisions of functionalities between the client and server portions of the digital assistant can vary in different implementations. For instance, in some examples, the DA client can be a thin-client that provides only user-facing input and output processing functions, and delegates all other functionalities of the digital assistant to a backend server.
2. User Device [0028] FIG. 2 illustrates a block diagram of a user device 104 in accordance with various examples. The user device 104 can include a memory interface 202, one or more processors 204, and a peripherals interface 206. The various components in the user device 104 can be coupled by one or more communication buses or signal lines. The user device 104 can include various sensors, subsystems, and peripheral devices that are coupled to the peripherals interface 206. The sensors, subsystems, and peripheral devices can gather information and/or facilitate various functionalities of the user device 104.
[0029] For example, a motion sensor 210, a light sensor 212, and a proximity sensor 214 can be coupled to the peripherals interface 206 to facilitate orientation, light, and proximity sensing functions. One or more other sensors 216, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, a gyro, a compass, an accelerometer, and the like, can also be connected to the peripherals interface 206 to facilitate related functionalities.
[0030] In some examples, a camera subsystem 220 and an optical sensor 222 can be utilized to facilitate camera functions, such as taking photographs and recording video clips. Communication functions can be facilitated through one or more wired and/or wireless communication subsystems 224, which can include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. An audio subsystem 226 can be coupled to speakers 228 and a microphone 230 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and
telephony functions. The microphone 230 can be configured to receive a speech input from the user.
[0031] In some examples, an I/O subsystem 240 can also be coupled to the peripherals interface 206. The I/O subsystem 240 can include a touch screen controller 242 and/or other input controller(s) 244. The touch screen controller 242 can be coupled to a touch screen 246. The touch screen 246 and the touch screen controller 242 can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave technologies, proximity sensor arrays, and the like. The other input controller(s) 244 can be coupled to other input/control devices 248, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus.
[0032] In some examples, the memory interface 202 can be coupled to memory 250. The memory 250 can include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, secured digital cards, USB memory devices, memory sticks, and the like. In some examples, a non-transitory computer-readable storage medium of the memory 250 can be used to store instructions (e.g., for performing the process 400, described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing the process 400, described below) can be stored on a non-transitory computer-readable storage medium (not shown) of the server system 108, or can be divided between the non-transitory computer-readable storage medium of memory 250 and the non-transitory computer-readable storage medium of the server system 108. In the context of this document, a "non-transitory computer readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.
[0033] In some examples, the memory 250 can store an operating system 252, a communication module 254, a user interface module 256, a sensor processing module 258, a
phone module 260, and applications 262. The operating system 252 can include instructions for handling basic system services and for performing hardware dependent tasks. The communication module 254 can facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. The user interface module 256 can facilitate graphic user interface processing and output processing using other output channels (e.g., speakers). The sensor processing module 258 can facilitate sensor-related processing and functions. The phone module 260 can facilitate phone-related processes and functions. The application module 262 can facilitate various functionalities of user applications, such as electronic messaging, web browsing, media processing, navigation, imaging, and/or other processes and functions.
[0034] As described herein, the memory 250 can also store client-side digital assistant instructions (e.g., in a digital assistant client module 264) and various user data 266 (e.g., user-specific vocabulary data, preference data, and/or other data such as the user’s electronic address book, to-do lists, shopping lists, user-specified name pronunciations, etc.) to provide the client-side functionalities of the digital assistant.
[0035] In various examples, the digital assistant client module 264 can be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gestural input through various user interfaces (e.g., the I/O subsystem 240) of the user device 104. The digital assistant client module 264 can also be capable of providing output in audio (e.g., speech output), visual, and/or tactile forms. For example, output can be provided as voice, sound, alerts, text messages, menus, graphics, videos, animations, vibrations, and/or combinations of two or more of the above. During operation, the digital assistant client module 264 can communicate with the digital assistant server 106 using the communication subsystems 224.
[0036] In some examples, the digital assistant client module 264 can utilize the various sensors, subsystems, and peripheral devices to gather additional information from the surrounding environment of the user device 104 to establish a context associated with a user, the current user interaction, and/or the current user input. In some examples, the digital assistant client module 264 can provide the context information or a subset thereof with the user input to the digital assistant server to help infer the user’s intent. In some examples, the digital assistant can also use the context information to determine how to prepare and deliver outputs to the user.
[0037] In some examples, the context information that accompanies the user input can include sensor information, e.g., lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, etc. In some examples, the context information can also include the physical state of the device, e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion patterns, cellular signal strength, etc. In some examples, information related to the software state of the user device 104, e.g., running processes, installed programs, past and present network activities, background services, error logs, resource usage, etc., can be provided to the digital assistant server as context information associated with a user input.
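For illustration, the context information described in paragraph [0037] might be assembled into a payload along the following lines. The field names and accessor methods are invented for the sketch; they are not an actual client API.

```python
# Illustrative context payload attached to a user input (hypothetical fields).
import time

def gather_context(sensors, device, software):
    return {
        "sensor": {
            "ambient_light": sensors.ambient_light(),      # e.g., lux
            "ambient_noise": sensors.ambient_noise_db(),   # e.g., dB SPL
        },
        "device_state": {
            "orientation": device.orientation(),
            "location": device.location(),                 # e.g., (lat, lon)
            "battery_level": device.battery_level(),       # 0.0 - 1.0
            "signal_strength": device.cellular_signal(),
        },
        "software_state": {
            "foreground_app": software.foreground_app(),
            "running_processes": software.running_processes(),
        },
        "timestamp": time.time(),
    }
```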
[0038] In some examples, the DA client module 264 can selectively provide information (e.g., user data 266) stored on the user device 104 in response to requests from the digital assistant server. In some examples, the digital assistant client module 264 can also elicit additional input from the user via a natural language dialogue or other user interfaces upon request by the digital assistant server 106. The digital assistant client module 264 can pass the additional input to the digital assistant server 106 to help the digital assistant server 106 in intent deduction and/or fulfillment of the user’s intent expressed in the user request.
[0039] In various examples, the memory 250 can include additional instructions or fewer instructions. For example, the DA client module 264 can include any of the sub-modules of the digital assistant module 326 described below in FIG. 3A. Furthermore, various functions of the user device 104 can be implemented in hardware and/or in firmware, including in one or more signal processing and/or application specific integrated circuits.
3. Digital Assistant System [0040] FIG. 3A illustrates a block diagram of an example digital assistant system 300 in accordance with various examples. In some examples, the digital assistant system 300 can be implemented on a standalone computer system. In some examples, the digital assistant system 300 can be distributed across multiple computers. In some examples, some of the modules and functions of the digital assistant can be divided into a server portion and a client portion, where the client portion resides on a user device (e.g., the user device 104) and communicates with the server portion (e.g., the server system 108) through one or more networks, e.g., as shown in FIG. 1. In some examples, the digital assistant system 300 can be an implementation of the server system 108 (and/or the digital assistant server 106) shown in FIG. 1. It should be noted that the digital assistant system 300 is only one example of a
digital assistant system, and that the digital assistant system 300 can have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown in FIG. 3A can be implemented in hardware, software instructions for execution by one or more processors, firmware, including one or more signal processing and/or application specific integrated circuits, or a combination thereof.
[0041] The digital assistant system 300 can include memory 302, one or more processors 304, an input/output (I/O) interface 306, and a network communications interface 308. These components can communicate with one another over one or more communication buses or signal lines 310.
[0042] In some examples, the memory 302 can include a non-transitory computer readable medium, such as high-speed random access memory and/or a non-volatile computer-readable storage medium (e.g., one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices).
[0043] In some examples, the I/O interface 306 can couple input/output devices 316 of the digital assistant system 300, such as displays, keyboards, touch screens, and microphones, to the user interface module 322. The I/O interface 306, in conjunction with the user interface module 322, can receive user inputs (e.g., voice input, keyboard inputs, touch inputs, etc.) and process them accordingly. In some examples, e.g., when the digital assistant is implemented on a standalone user device, the digital assistant system 300 can include any of the components and I/O and communication interfaces described with respect to the user device 104 in FIG. 2. In some examples, the digital assistant system 300 can represent the server portion of a digital assistant implementation, and can interact with the user through a client-side portion residing on a user device (e.g., the user device 104 shown in FIG. 2).
[0044] In some examples, the network communications interface 308 can include wired communication port(s) 312 and/or wireless transmission and reception circuitry 314. The wired communication port(s) can receive and send communication signals via one or more wired interfaces, e.g., Ethernet, Universal Serial Bus (USB), FIREWIRE, etc. The wireless circuitry 314 can receive and send RF signals and/or optical signals from/to communications networks and other communications devices. The wireless communications can use any of a plurality of communications standards, protocols, and technologies, such as GSM, EDGE,
CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. The network communications interface 308 can enable communication between the digital assistant system 300 and networks, such as the Internet, an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices.
[0045] In some examples, memory 302, or the computer readable storage media of memory 302, can store programs, modules, instructions, and data structures including all or a subset of: an operating system 318, a communications module 320, a user interface module 322, one or more applications 324, and a digital assistant module 326. In particular, memory 302, or the computer readable storage media of memory 302, can store instructions for performing the process 400, described below. The one or more processors 304 can execute these programs, modules, and instructions, and read/write from/to the data structures.
[0046] The operating system 318 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) can include various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and for facilitating communications between various hardware, firmware, and software components.
[0047] The communications module 320 can facilitate communications between the digital assistant system 300 and other devices over the network communications interface 308. For example, the communications module 320 can communicate with the communication module 254 of the device 104 shown in FIG. 2. The communications module 320 can also include various components for handling data received by the wireless circuitry 314 and/or wired communications port 312.
[0048] The user interface module 322 can receive commands and/or inputs from a user via the I/O interface 306 (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone), and generate user interface objects on a display. The user interface module 322 can also prepare and deliver outputs (e.g., speech, sound, animation, text, icons, vibrations, haptic feedback, and light, etc.) to the user via the I/O interface 306 (e.g., through displays, audio channels, speakers, and touch-pads, etc.).
[0049] The applications 324 can include programs and/or modules that are configured to be executed by the one or more processors 304. For example, if the digital assistant system is implemented on a standalone user device, the applications 324 can include user applications,
such as games, a calendar application, a navigation application, or an email application. If the digital assistant system 300 is implemented on a server farm, the applications 324 can include resource management applications, diagnostic applications, or scheduling applications, for example.
[0050] The memory 302 can also store the digital assistant module (or the server portion of a digital assistant) 326. In some examples, the digital assistant module 326 can include the following sub-modules, or a subset or superset thereof: an input/output processing module 328, a speech-to-text (STT) processing module 330, a natural language processing module 332, a dialogue flow processing module 334, a task flow processing module 336, and a service processing module 338. Each of these modules can have access to one or more of the following data and models of the digital assistant 326, or a subset or superset thereof: ontology 360, vocabulary index 344, user data 348, task flow models 354, and service models 356.
[0051] In some examples, using the processing modules, data, and models implemented in the digital assistant module 326, the digital assistant can perform at least some of the following: converting speech input into text; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining the task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent.
[0052] In some examples, as shown in FIG. 3B, the I/O processing module 328 can interact with the user through the I/O devices 316 in FIG. 3A or with a user device (e.g., a user device 104 in FIG. 1) through the network communications interface 308 in FIG. 3A to obtain user input (e.g., a speech input) and to provide responses (e.g., as speech outputs) to the user input. The I/O processing module 328 can optionally obtain context information associated with the user input from the user device, along with or shortly after the receipt of the user input. The context information can include user-specific data, vocabulary, and/or preferences relevant to the user input. In some examples, the context information also includes software and hardware states of the device (e.g., the user device 104 in FIG. 1) at the time the user request is received, and/or information related to the surrounding environment of the user at the time that the user request was received. In some examples, the I/O processing module 328 can also send follow-up questions to, and receive answers from, the
user regarding the user request. When a user request is received by the I/O processing module 328 and the user request includes speech input, the I/O processing module 328 can forward the speech input to the STT processing module 330 (or speech recognizer) for speech-to-text conversion.
[0053] The STT processing module 330 can include one or more ASR systems. The one or more ASR systems can process the speech input that is received through the I/O processing module 328 to produce a recognition result. Each ASR system can include a front-end speech pre-processor. The front-end speech pre-processor can extract representative features from the speech input. For example, the front-end speech pre-processor can perform a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system can include one or more speech recognition models (e.g., acoustic models and/or language models) and can implement one or more speech recognition engines. Examples of speech recognition models can include Hidden Markov Models, Gaussian Mixture Models, Deep Neural Network Models, n-gram models, and other statistical models. Examples of speech recognition engines can include dynamic time warping based engines and weighted finite-state transducer (WFST) based engines. The one or more speech recognition models and the one or more speech recognition engines can be used to process the representative features extracted by the front-end speech pre-processor to produce intermediate recognition results (e.g., phonemes, sequences of phonemes, and sub-words), and ultimately, text recognition results (e.g., words, sequences of words, or sequences of tokens). In some examples, the speech input can be processed at least partially by a third-party service or on the user's device (e.g., user device 104) to produce the recognition result. Once the STT processing module 330 produces recognition results containing text (e.g., words, or a sequence of words, or a sequence of tokens), the recognition result can be passed to the natural language processing module 332 for intent deduction.
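The front-end pre-processing step can be illustrated with a short sketch that frames a waveform and takes a Fourier transform per frame to produce a sequence of spectral feature vectors. Production front ends typically go further (e.g., mel filterbanks and normalization); this is only a minimal illustration, and the parameter choices are assumptions.

```python
# Minimal front-end sketch: frame the waveform, window each frame, and take
# an FFT to get one log-magnitude spectral feature vector per frame.
import numpy as np

def spectral_features(samples, sample_rate=16000, frame_ms=25, hop_ms=10):
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    window = np.hanning(frame_len)
    features = []
    for start in range(0, len(samples) - frame_len + 1, hop_len):
        frame = samples[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))        # magnitude spectrum
        features.append(np.log(spectrum + 1e-10))    # log-magnitude features
    return np.array(features)  # shape: (num_frames, frame_len // 2 + 1)

# Example: one second of audio at a 10 ms hop yields 98 feature vectors.
print(spectral_features(np.zeros(16000)).shape)
```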
[0054] In some examples, the STT processing module 330 can include and/or access a vocabulary of recognizable words via a phonetic alphabet conversion module 331. Each vocabulary word can be associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. For example, the vocabulary may include the word “tomato” in association with the candidate pronunciations of “tuh-may-doe” and “tuh-mah-doe.” In some examples, the candidate pronunciations for words can be
determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations can be manually generated, e.g., based on known canonical pronunciations.
[0055] In some examples, the candidate pronunciations can be ranked based on the commonness of the candidate pronunciation. For example, the candidate pronunciation "tuh-may-doe" can be ranked higher than "tuh-mah-doe," because the former is a more commonly used pronunciation (e.g., among all users, for users in a particular geographical region, or for any other appropriate subset of users). In some examples, one of the candidate pronunciations can be selected as a predicted pronunciation (e.g., the most likely pronunciation).
[0056] When a speech input is received, the STT processing module 330 can be used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model), and then attempt to determine words that match the phonemes (e.g., using a language model). For example, if the STT processing module 330 can first identify the sequence of phonemes "tuh-may-doe" corresponding to a portion of the speech input, it can then determine, based on the vocabulary index 344, that this sequence corresponds to the word "tomato."
[0057] In some examples, the STT processing module 330 can use approximate matching techniques to determine words in an utterance. Thus, for example, the STT processing module 330 can determine that the sequence of phonemes "duh-may-doe" corresponds to the word "tomato," even if that particular sequence of phonemes is not one of the candidate phonemes for that word.
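A toy illustration of approximate pronunciation matching follows. The vocabulary, the similarity measure, and the threshold are invented for the sketch and are not the vocabulary index 344 or the matching technique actually used.

```python
# Approximate matching of a recognized phoneme sequence against candidate
# pronunciations; a near miss still resolves to the intended word.
from difflib import SequenceMatcher

VOCABULARY = {
    "tomato": ["tuh-may-doe", "tuh-mah-doe"],
    "potato": ["puh-tay-doe"],
}

def match_word(phoneme_sequence, min_similarity=0.8):
    best_word, best_score = None, 0.0
    for word, pronunciations in VOCABULARY.items():
        for candidate in pronunciations:
            score = SequenceMatcher(a=phoneme_sequence, b=candidate).ratio()
            if score > best_score:
                best_word, best_score = word, score
    return best_word if best_score >= min_similarity else None

print(match_word("duh-may-doe"))  # "tomato" despite the mismatched phoneme
```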
[0058] In some examples, the STT processing module 330 can be capable of determining a combined result based on two or more recognition results. For example, the STT processing module 330 can be capable of performing blocks 414, 424, and 430 of process 400 described below.
[0059] The natural language processing module 332 (“natural language processor”) of the digital assistant can take the sequence of words or tokens (“token sequence”) generated by the STT processing module 330, and attempt to associate the token sequence with one or more “actionable intents” recognized by the digital assistant. An “actionable intent” can represent a task that can be performed by the digital assistant, and can have an associated task flow implemented in the task flow models 354. The associated task flow can be a series of programmed actions and steps that the digital assistant takes in order to perform the task.
The scope of a digital assistant’s capabilities can be dependent on the number and variety of task flows that have been implemented and stored in the task flow models 354, or in other words, on the number and variety of “actionable intents” that the digital assistant recognizes. The effectiveness of the digital assistant, however, can also be dependent on the assistant’s ability to infer the correct “actionable intent(s)” from the user request expressed in natural language.
[0060] In some examples, in addition to the sequence of words or tokens obtained from the STT processing module 330, the natural language processing module 332 can also receive context information associated with the user request, e.g., from the I/O processing module 328. The natural language processing module 332 can optionally use the context information to clarify, supplement, and/or further define the information contained in the token sequence received from the STT processing module 330. The context information can include, for example, user preferences, hardware and/or software states of the user device, sensor information collected before, during, or shortly after the user request, prior interactions (e.g., dialogue) between the digital assistant and the user, and the like. As described herein, context information can be dynamic, and can change with time, location, content of the dialogue, and other factors.
[0061] In some examples, the natural language processing can be based on, e.g., the ontology 360. The ontology 360 can be a hierarchical structure containing many nodes, each node representing either an "actionable intent" or a "property" relevant to one or more of the "actionable intents" or other "properties." As noted above, an "actionable intent" can represent a task that the digital assistant is capable of performing, i.e., it is "actionable" or can be acted on. A "property" can represent a parameter associated with an actionable intent or a sub-aspect of another property. A linkage between an actionable intent node and a property node in the ontology 360 can define how a parameter represented by the property node pertains to the task represented by the actionable intent node.
[0062] The natural language processing module 332 can receive the token sequence (e.g., a text string) from the STT processing module 330, and determine what nodes are implicated by the words in the token sequence. In some examples, if a word or phrase in the token sequence is found to be associated with one or more nodes in the ontology 360 (via the vocabulary index 344), the word or phrase can “trigger” or “activate” those nodes. Based on the quantity and/or relative importance of the activated nodes, the natural language
processing module 332 can select one of the actionable intents as the task that the user intended the digital assistant to perform. In some examples, the domain that has the most "triggered" nodes can be selected. In some examples, the domain having the highest confidence value (e.g., based on the relative importance of its various triggered nodes) can be selected. In some examples, the domain can be selected based on a combination of the number and the importance of the triggered nodes. In some examples, additional factors are considered in selecting the node as well, such as whether the digital assistant has previously correctly interpreted a similar request from a user.
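A minimal sketch of selecting an actionable intent from triggered nodes, combining the count and importance of triggers as described above, might look as follows. The ontology entries, weights, and scoring rule are illustrative assumptions.

```python
# Toy intent selection: score each actionable intent by the number and
# importance of its ontology nodes triggered by the token sequence.
ONTOLOGY = {
    "restaurant_reservation": {"reserve": 1.2, "restaurant": 1.0, "table": 0.8, "cuisine": 0.6},
    "set_reminder": {"remind": 1.2, "tomorrow": 0.5, "time": 0.5},
}

def select_intent(tokens):
    scores = {}
    for intent, nodes in ONTOLOGY.items():
        triggered = [nodes[t] for t in tokens if t in nodes]
        if triggered:
            # Combine trigger count and per-node importance into one score.
            scores[intent] = len(triggered) + sum(triggered)
    return max(scores, key=scores.get) if scores else None

print(select_intent(["reserve", "a", "table", "for", "sushi"]))
# -> "restaurant_reservation"
```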
[0063] User data 348 can include user-specific information, such as user-specific vocabulary, user preferences, user address, user's default and secondary languages, user's contact list, and other short-term or long-term information for each user. In some examples, the natural language processing module 332 can use the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request "invite my friends to my birthday party," the natural language processing module 332 can access user data 348 to determine who the "friends" are and when and where the "birthday party" would be held, rather than requiring the user to provide such information explicitly in his/her request.
[0064] In some examples, once the natural language processing module 332 identifies an actionable intent (or domain) based on the user request, the natural language processing module 332 can generate a structured query to represent the identified actionable intent. In some examples, the structured query can include parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say "Make me a dinner reservation at a sushi place at 7." In this case, the natural language processing module 332 can correctly identify the actionable intent to be "restaurant reservation" based on the user input. According to the ontology, a structured query for a "restaurant reservation" domain may include parameters such as {Cuisine}, {Time}, {Date}, {Party Size}, and the like. In some examples, based on the speech input and the text derived from the speech input using the STT processing module 330, the natural language processing module 332 can generate a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {Cuisine = "Sushi"} and {Time = "7pm"}. However, in this example, the user's utterance contains
insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters such as {Party Size} and {Date} may not be specified in the structured query based on the information currently available. In some examples, the natural language processing module 332 can populate some parameters of the structured query with received context information. For example, if the user requested a sushi restaurant "near me," the natural language processing module 332 can populate a {location} parameter in the structured query with GPS coordinates from the user device 104.
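The partial structured query from this example can be sketched as a simple mapping whose unfilled parameters drive follow-up dialogue, with context (e.g., GPS coordinates) filling what the utterance left out. The function and field names are illustrative, not an actual API.

```python
# Toy partial structured query for the "restaurant reservation" domain.
def build_structured_query(parsed_parameters, context):
    query = {"domain": "restaurant reservation",
             "Cuisine": None, "Time": None, "Date": None,
             "Party Size": None, "Location": None}
    query.update(parsed_parameters)
    # Populate what the utterance left out from available context.
    if query["Location"] is None and "gps" in context:
        query["Location"] = context["gps"]
    missing = [k for k, v in query.items() if v is None]
    return query, missing  # missing parameters drive follow-up dialogue

query, missing = build_structured_query(
    {"Cuisine": "Sushi", "Time": "7pm"}, {"gps": (37.33, -122.01)})
print(missing)  # ['Date', 'Party Size'] -> prompt the user for these
```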
[0065] In some examples, the natural language processing module 332 can pass the structured query (including any completed parameters) to the task flow processing module 336 (“task flow processor”). The task flow processing module 336 can be configured to receive the structured query from the natural language processing module 332, complete the structured query, if necessary, and perform the actions required to “complete” the user’s ultimate request. In some examples, the various procedures necessary to complete these tasks can be provided in task flow models 354. In some examples, the task flow models can include procedures for obtaining additional information from the user, and task flows for performing actions associated with the actionable intent.
[0066] As described above, in order to complete a structured query, the task flow processing module 336 may need to initiate additional dialogue with the user in order to obtain additional information, and/or disambiguate potentially ambiguous utterances. When such interactions are necessary, the task flow processing module 336 can invoke the dialogue flow processing module 334 to engage in a dialogue with the user. In some examples, the dialogue flow processing module 334 can determine how (and/or when) to ask the user for the additional information, and receive and process the user responses. The questions can be provided to and answers can be received from the user through the I/O processing module 328. In some examples, the dialogue flow processing module 334 can present dialogue output to the user via audio and/or visual output, and receive input from the user via spoken or physical (e.g., clicking) responses. Continuing with the example above, when the task flow processing module 336 invokes the dialogue flow processing module 334 to determine the "party size" and "date" information for the structured query associated with the domain "restaurant reservation," the dialogue flow processing module 334 can generate questions such as "For how many people?" and "On which day?" to pass to the user. Once answers are
received from the user, the dialogue flow processing module 334 can then populate the structured query with the missing information, or pass the information to the task flow processing module 336 to complete the missing information from the structured query.
[0067] Once the task flow processing module 336 has completed the structured query for an actionable intent, the task flow processing module 336 can proceed to perform the ultimate task associated with the actionable intent. Accordingly, the task flow processing module 336 can execute the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of “restaurant reservation” can include steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as: {restaurant reservation, restaurant = ABC Cafe, date = 3/12/2012, time = 7pm, party size = 5}, the task flow processing module 336 can perform the steps of: (1) logging onto a server of the ABC Cafe or a restaurant reservation system such as OPENTABLE®, (2) entering the date, time, and party size information in a form on the website, (3) submitting the form, and (4) making a calendar entry for the reservation in the user’s calendar.
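The restaurant reservation task flow described above can be sketched as an ordered list of steps executed against the completed structured query. The portal and calendar objects are hypothetical stand-ins for an online reservation service (such as OPENTABLE) and the user's calendar.

```python
# Illustrative task flow executor for the reservation example (hypothetical
# portal and calendar interfaces; steps mirror the text above).
def run_reservation_task_flow(query, portal, calendar):
    # (1) Connect to the reservation service for the requested restaurant.
    session = portal.log_in(query["restaurant"])
    # (2) Enter date, time, and party size into the reservation form.
    form = session.reservation_form(
        date=query["date"], time=query["time"], party_size=query["party size"])
    # (3) Submit the form.
    confirmation = form.submit()
    # (4) Add a calendar entry for the reservation.
    calendar.add_event(
        title=f"Dinner at {query['restaurant']}",
        date=query["date"], time=query["time"])
    return confirmation
```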
[0068] In some examples, the task flow processing module 336 can employ the assistance of a service processing module 338 to complete a task requested in the user input or to provide an informational answer requested in the user input. For example, the service processing module 338 can act on behalf of the task flow processing module 336 to make a phone call, set a calendar entry, invoke a map search, invoke or interact with other user applications installed on the user device, and invoke or interact with third-party services (e.g., a restaurant reservation portal, a social networking website, a banking portal, etc.). In some examples, the protocols and application programming interfaces (API) required by each service can be specified by a respective service model among the service models 356. The service processing module 338 can access the appropriate service model for a service and generate requests for the service in accordance with the protocols and APIs required by the service according to the service model.
[0069] For example, if a restaurant has enabled an online reservation service, the restaurant can submit a service model specifying the necessary parameters for making a reservation and the APIs for communicating the values of the necessary parameters to the online reservation service. When requested by the task flow processing module 336, the
service processing module 338 can establish a network connection with the online reservation service using the web address stored in the service model, and send the necessary parameters of the reservation (e.g., time, date, party size) to the online reservation interface in a format according to the API of the online reservation service.
[0070] In some examples, the natural language processing module 332, dialogue flow processing module 334, and task flow processing module 336 can be used collectively and iteratively to infer and define the user’s intent, obtain information to further clarify and refine the user intent, and finally generate a response (i.e., an output to the user, or the completion of a task) to fulfill the user’s intent.
[0071] Additional details on the digital assistant can be found in U.S. Utility Application No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, and U.S. Provisional Application No. 61/493,201, entitled "Generating and Processing Data Items That Represent Tasks to Perform," filed June 3, 2011, the entire disclosures of which are incorporated herein by reference.
4. Process for Speech Processing in a Digital Assistant [0072] FIGS. 4A-B illustrate a process 400 for processing speech according to various examples. The process 400 can be performed at an electronic device with one or more processors and memory storing one or more programs for execution by the one or more processors. In some examples, the process 400 can be performed at the user device 104 or the server system 108. In some examples, the process 400 can be performed by the digital assistant system 300 (FIG. 3A), which, as noted above, may be implemented on a standalone computer system (e.g., either the user device 104 or the server system 108) or distributed across multiple computers (e.g., the user device 104, the server system 108, and/or additional or alternative devices or systems). While the following discussion describes the process 400 as being performed by a digital assistant (e.g., the digital assistant system 300), the process is not limited to performance by any particular device, combination of devices, or implementation. Moreover, the individual blocks of the process may be distributed among the one or more computers, systems, or devices in any appropriate manner.
[0073] At block 402 of process 400, with reference to FIG. 4A, a first speech input can be received from a user. In some examples, the first speech input can be received in the course of, or as part of, an interaction with the digital assistant. In other examples, the first speech input can be a dictation to be transcribed by the digital assistant for input into an application
(e.g., email, word processing, messages, web search, and the like) of the electronic device. The first speech input can be received in the form of sound waves, an audio file, or a representative audio signal (analog or digital). In some examples, the first speech input can be sound waves that are received by the microphone (e.g., microphone 230) of the electronic device (e.g., user device 104). In other examples, the first speech input can be a representative audio signal or a recorded audio file that is received by the audio subsystem (e.g., audio subsystem 226), the peripherals interface (e.g., peripherals interface 206), or the processor (e.g., processor 204) of the electronic device. In yet other examples, the first speech input can be a representative audio signal or a recorded audio file that is received by the I/O interface (e.g., I/O interface 306) or the processor (e.g., processor 304) of the digital assistant system.
[0074] In some examples, the first speech input can include a user request. The user request can be any request, including a request that indicates a task that the digital assistant can perform (e.g., making and/or facilitating restaurant reservations, initiating telephone calls or text messages, etc.), a request for a response (e.g., an answer to a question, such as “how far is Earth from the sun?”), and the like.
[0075] At block 404 of process 400, the first speech input can be processed using a first ASR system to produce a first recognition result. Conventional speech-to-text processing techniques can be used to process the first speech input. In some examples, the first speech input can be processed using the STT processing module (e.g., STT processing module 330) of the electronic device to produce the first recognition result. Specifically, the STT processing module can include the first ASR system. One or more speech recognition models (e.g., acoustic models and/or language models) of the first ASR system and one or more speech recognition engines of the first ASR system can be used to process the first speech input.
[0076] In some examples, the first recognition result can include intermediate recognition results such as phonemes and sub-words (e.g., syllables, morphemes and the like). In particular, the first recognition result can include a sequence of phonemes or tokens corresponding to the first speech input. This sequence of phonemes that corresponds to the first speech input can be referred to as a phonetic transcription of the first speech input. In some examples, the first recognition result can include text such as a word or a sequence of words corresponding to at least a portion of the first speech input. Further, in some
examples, processing the first speech input can include determining a confidence measure for each phoneme, sequence of phonemes, sub-word, word, or sequence of words derived from the first speech input. The confidence measure can reflect the confidence or accuracy in the recognition result. In some examples, the first recognition result can thus include confidence measures of the derived phonemes, sequence of phonemes, sub-words, words, or sequence of words.
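To make the notion of per-token confidence measures concrete, the following Python sketch shows one way a recognition result carrying such measures could be represented. It is an illustration only, not part of the disclosed embodiments; the class and field names are assumptions introduced here.

```python
# Minimal sketch (not the patented implementation) of a recognition result
# that carries per-token confidence measures. All names are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class RecognitionToken:
    text: str                 # word or sub-word hypothesis
    phonemes: List[str]       # phonetic transcription of the token
    confidence: float         # 0.0 (no confidence) .. 1.0 (certain)


@dataclass
class RecognitionResult:
    tokens: List[RecognitionToken] = field(default_factory=list)

    @property
    def text(self) -> str:
        return " ".join(t.text for t in self.tokens)

    def low_confidence_tokens(self, threshold: float = 0.5) -> List[RecognitionToken]:
        """Tokens whose confidence measure falls below a predetermined value."""
        return [t for t in self.tokens if t.confidence < threshold]


# Example: "call Jim Carpenter" where "Jim" is uncertain.
result = RecognitionResult([
    RecognitionToken("call", ["K", "AO", "L"], 0.94),
    RecognitionToken("Jim", ["JH", "IH", "M"], 0.41),
    RecognitionToken("Carpenter", ["K", "AA", "R", "P", "AH", "N", "T", "ER"], 0.88),
])
print(result.text, result.low_confidence_tokens())
```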
[0077] At block 406 of process 400, an action based on the first recognition result can be performed. In examples where process 400 is implemented to transcribe speech for input into an application (e.g., email, messaging, word processing, web search, and the like) of the electronic device, the first recognition result can include text transcribed from at least a portion of the first speech input. In these examples, the action can include displaying (e.g., on touchscreen 246) at least a portion of the text of the first recognition result on the electronic device.
[0078] In other examples, the first speech input can include a user request directed to the digital assistant. In these examples, the action can include executing a task to satisfy the user request. Specifically, the action can include executing (e.g., using service processing module 338) a task intended to fulfill the user’s intent (e.g., determined using the natural language processing module 332, the dialogue flow processing module 334, and the task flow processing module 336) associated with the user request. For example, the first speech input can include the user request “call Tim Carpenter.” The first recognition result can thus include the text “call Tim Carpenter.” Based on this text, the digital assistant can execute the task of calling the phone number associated with the person “Tim Carpenter” listed in the user’s contact list.
[0079] In yet other examples, the action can include generating an output (e.g., speech or text) that summarizes, describes, or confirms the intent inferred by the digital assistant from the first recognition result (e.g., using the natural language processing module 332). In one such example, the digital assistant can output the phrase, “Searching the web for information about pine trees,” based on the first recognition result containing the text, “web search for pine trees.” In another such example, the digital assistant can output the question, “Did you wish to call Tim Carpenter?” based on the first recognition result containing the text “call Tim Carpenter.”
[0080] At block 408 of process 400, an input indicative of a potential error in the first recognition result can be received from the user. In some examples, the potential error can be inferred by the user based on the action performed at block 406. In one such example, the first speech input received at block 402 from the user can include, “Please call Tim Carpenter.” However, the action performed by the digital assistant at block 406 can include calling “Jim Carpenter.” In this example, the user may infer that there is a potential error in the first recognition result and thus an input indicative of the potential error can be received from the user at the electronic device.
[0081] The input received at block 408 can be any input that is indicative of a potential error in the first recognition result. In some examples, the input can include a second speech input. In these examples, the second speech input can be processed at block 410 to determine whether the second speech input is a repetition of at least a portion of the first speech input of block 402. A repetition by the user of at least a portion of the first speech input can be indicative of a potential error in the first recognition result.
[0082] At block 410 of process 400, it can be determined whether the input at block 408 includes a second speech input that is a repetition of at least a portion of the first speech input. Determining whether the input includes a second speech input that is a repetition of at least a portion of the first speech input can be desirable to determine whether the input received at block 408 can be used to improve the recognition results and reduce the probability of the same error reoccurring. For example, if it is determined that the input includes a second speech input that is a repetition of at least a portion of the first speech input, the second speech input can be processed to produce a second recognition result (e.g., at block 412) that can be used to improve the recognition result.
[0083] In some examples, the determination can include comparing an audio waveform of the second speech input with an audio waveform of the first speech input. In these examples, the second speech input can be determined to be a repetition of at least a portion of the first speech input if it is determined that the audio waveform of the second speech input is substantially similar to a corresponding portion of the audio waveform of the first speech input. In particular, the second speech input can be determined to be a repetition of at least a portion of the first speech input if a difference between the audio waveform of the second speech input and a corresponding portion of the audio waveform of the first speech input is less than a predetermined threshold value.
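As an illustration of the waveform comparison described above, the sketch below slides the second input across the first and accepts the pair as a repetition when a simple distance between the normalized signals falls below a threshold. The distance measure and threshold are assumptions chosen for clarity; an actual implementation could use a very different similarity metric.

```python
# Illustrative sketch of the waveform comparison in block 410.
# Assumes both inputs are mono float arrays at the same sample rate.
import numpy as np


def is_repetition_by_waveform(first: np.ndarray, second: np.ndarray,
                              threshold: float = 0.35) -> bool:
    """Return True if `second` is substantially similar to some portion of `first`."""
    def normalize(x: np.ndarray) -> np.ndarray:
        x = x - np.mean(x)
        norm = np.linalg.norm(x)
        return x / norm if norm > 0 else x

    # Always slide the shorter signal across the longer one.
    if len(second) > len(first):
        first, second = second, first

    second_n = normalize(second)
    best_distance = np.inf
    hop = max(1, len(second) // 4)
    for start in range(0, len(first) - len(second) + 1, hop):
        window = normalize(first[start:start + len(second)])
        # Distance between unit-energy signals: 0 means identical shape.
        best_distance = min(best_distance, float(np.linalg.norm(window - second_n)))
    return best_distance < threshold
```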
[0084] In other examples, determining whether the second speech input is a repetition of at least a portion of the first speech input can include comparing the phonetic transcription of the second speech input with the phonetic transcription of the first speech input. In these examples, the second speech input can be initially processed using an ASR system (e.g., the first ASR system at block 404, the second ASR system at block 412, and the like) to produce a phonetic transcription of the second speech input. An error rate of the phonemic transcription of the second speech input with respect to the phonemic transcription of a corresponding portion of the first speech input can then be determined. In other words, the phonetic transcription of the second speech input can be compared against the phonetic transcription of the corresponding portion of the first speech input to determine the error rate. In these examples, the second speech input can be determined to be a repetition of at least a portion of the first speech input if the error rate of the phonemic transcription of the second speech input with respect to the phonemic transcription of the corresponding portion of the first speech input is less than a predetermined value.
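The phoneme-level check described above can be illustrated with a plain edit-distance computation: the error rate of the second input's phonemic transcription against the corresponding portion of the first is compared with a predetermined value. The alignment method and the threshold below are illustrative assumptions.

```python
# Sketch of the phoneme-level repetition check in block 410. A plain
# Levenshtein distance stands in for whatever alignment the real system uses.
def phoneme_error_rate(reference: list, hypothesis: list) -> float:
    """Edit distance between phoneme sequences, normalized by reference length."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(1, m)


def is_repetition_by_phonemes(first_portion, second, predetermined_value=0.25):
    return phoneme_error_rate(first_portion, second) < predetermined_value


# "Tim" vs. "Jim": one substitution out of three phonemes.
print(phoneme_error_rate(["T", "IH", "M"], ["JH", "IH", "M"]))  # ~0.33
```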
[0085] It should be recognized that the second speech input can be only a portion of the input received at block 408. For example, the first speech input can include, “Call Tim Carpenter” and the input received at block 408 can include the speech, “No, not Jim, I said ‘Tim Carpenter’.” In this example, the input received at block 408 includes the second speech input “Tim Carpenter,” which is a repetition of at least a portion of the first speech input. Further, it should be recognized that block 410 can include other steps necessary to determine whether the input at block 408 includes a second speech input that is a repetition of at least a portion of the first speech input. For example, block 410 can additionally include determining whether the input received at block 408 includes speech.
[0086] At block 412 of process 400, the second speech input can be processed using a second ASR system to produce a second recognition result. Processing the second speech input using the second ASR system can be similar to processing the first speech input at block 404 using the first ASR system. In some examples, the second speech input can be processed using the STT processing module (e.g., STT processing module 330) of the electronic device to produce the second recognition result. Specifically, the STT processing module can include the second ASR system. One or more speech recognition models (e.g., acoustic models and/or language models) of the second ASR system and one or more speech recognition engines of the second ASR system can be used to process the second speech input.
[0087] In some examples, the second recognition result can include intermediate recognition results such as phonemes and sub-words (e.g., syllables, morphemes, and the like). In particular, the second recognition result can include a sequence of phonemes or tokens corresponding to the second speech input. This sequence of phonemes that corresponds to the second speech input can be referred to as a phonetic transcription of the second speech input. In some examples, the second recognition result can include text such as a word or a sequence of words corresponding to at least a portion of the second speech input. Further, in some examples, processing the second speech input can include determining a confidence measure for each phoneme, sequence of phonemes, sub-word, word, or sequence of words derived from the second speech input. The confidence measure can reflect the confidence or accuracy in the recognition result. In some examples, the second recognition result can thus include confidence measures of the derived phonemes, sequence of phonemes, sub-words, words, or sequence of words.
[0088] In some examples, the second ASR system can be the same ASR system as the first ASR system. Thus, in these examples, the first speech input received at block 402 and the second speech input of the input received at block 408 can both be processed using the same ASR system. For example, the user can enunciate the second speech input more clearly than the first speech input. Further, the user can speak louder and thus the second speech input can have a higher signal-to-noise ratio than the first speech input. Therefore, in some examples, the second recognition result produced can be more accurate than the first recognition result despite using the same ASR system. In this way, the recognition result can be improved using the input received at block 408 and the same error can be avoided.
[0089] In other examples, the second ASR system can be different from the first ASR system. In these examples, the first speech input received at block 402 and the second speech input of the input received at block 408 can be processed using different ASR systems. Using different ASR systems can be advantageous for producing a more accurate recognition result for the second speech input and thus reducing the probability of the same error being made. For example, the error rate of the second ASR system can be lower than the error rate of the first ASR system, and thus the second ASR system can yield more accurate recognition results than the first ASR system. Further, the second ASR system can require greater computational cost than the first ASR system to achieve greater accuracy. Thus, the latency of the second ASR system can be greater than the latency of the first ASR system. In some examples, the one or more speech recognition engines of the second ASR system can be different from the one or more speech recognition engines of the first ASR system. In particular, the speech recognition engine of the second ASR system can be more robust than the speech recognition engine of the first ASR system. In some examples, the one or more speech recognition models of the second ASR system can be different from the one or more speech recognition models of the first ASR system. In particular, the one or more speech recognition models of the second ASR system can have larger vocabularies than the one or more speech recognition models of the first ASR system. Accordingly, in some examples, the second recognition result can have a higher confidence measure than the first recognition result.
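One way to picture the trade-off described in this passage is a two-tier arrangement in which a fast first-pass recognizer handles ordinary requests and a slower, larger-vocabulary recognizer is consulted only after error feedback. The sketch below is a hypothetical arrangement; `first_asr`, `second_asr`, and the `transcribe` interface are placeholders, not a real API.

```python
# Hypothetical two-tier arrangement matching the latency/accuracy trade-off
# described above. The recognizer objects and their interface are placeholders.
class SpeechRecognitionService:
    def __init__(self, first_asr, second_asr):
        self.first_asr = first_asr      # low latency, smaller vocabulary
        self.second_asr = second_asr    # higher latency, lower error rate

    def recognize(self, audio, error_reported: bool = False):
        # Route to the more accurate system only after the user has signalled
        # a potential error, so ordinary requests keep their low latency.
        asr = self.second_asr if error_reported else self.first_asr
        return asr.transcribe(audio)
```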
[0090] In some examples, block 412 can be performed in response to determining at block 410 that the input includes a second speech input that is a repetition of at least a portion of the first speech input. In one such example, the second speech input of the input received at block 408 can be initially processed using a default ASR system (e.g., the first ASR system at block 404) to produce an initial recognition result. Whether the input includes a second speech input that is a repetition of at least a portion of the first speech input can then be determined based on comparing this initial recognition result to the first recognition result. In response to determining that the input of block 408 includes a second speech input that is a repetition of at least a portion of the first speech input, the second speech input can be reprocessed using a more accurate ASR system (e.g., the second ASR system) to produce the second recognition result. The second recognition result can be different from the initial recognition result in that the second recognition result can have a higher confidence measure than the initial recognition result.
[0091] In other examples, block 412 can be performed prior to determining at block 410 whether the input includes a second speech input that is a repetition of at least a portion of the first speech input. In these examples, the second speech input of the input received at block 408 can be initially processed using the second ASR system to produce the second recognition result. Whether the input of block 408 includes a second speech input that is a repetition of at least a portion of the first speech input can then be determined based on comparing the second recognition result to the first recognition result. In response to determining that the input includes a second speech input that is a repetition of at least a portion of the first speech input, the second recognition result can be utilized for subsequent
steps of process 400. For example, the second recognition result can be used to determine a combined result (e.g., at block 414, described below) or the second recognition result can be utilized to perform an action based on the second recognition result (e.g., at block 416, described below).
[0092] At block 414 of process 400, a first combined result can be determined (e.g., using STT processing module 330) based on the first recognition result and the second recognition result. In some examples, the first combined result can comprise at least a portion of the first recognition result and at least a portion of the second recognition result. For example, the second speech input can be a repetition of only a portion of the first speech input. In these examples, the first recognition result and the second recognition result can be combined to produce the first combined result. In a specific example, the first speech input can include the utterance, “How many calories in a kiwi fruit?” and the first recognition result can include the sequence of words, “How many calories in a cure for it.” In this example, the user can repeat only the portion of the first speech input that corresponds to the error in the first recognition result. For example, the second speech input can include the utterance, “in a kiwi fruit” and the second recognition result can include the sequence of words, “in a kiwi fruit.” Thus, in this example, the first recognition result and the second recognition result can be combined to produce the first combined result of “How many calories in a kiwi fruit?” [0093] Further, in some examples, the first recognition result and the second recognition result can be combined by means of ASR system combination to determine the first combined result. ASR system combination can incorporate the recognition results of multiple ASR systems (e.g., the first ASR system and the second ASR system) that apply different speech recognition models and/or speech recognition engines to achieve greater accuracy. Therefore, the first combined result can be more accurate than each of the first recognition result and the second recognition result. Examples of ASR system combination include recognition output voting error reduction (ROVER), cross-adaptation, confusion network combination, and lattice combination.
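The “kiwi fruit” example above can be illustrated with a simple splice: the words of the second recognition result replace the contiguous low-confidence span of the first. This is only a sketch of one possible combination strategy; techniques such as ROVER or confusion network combination, mentioned above, operate quite differently. The confidence values below are invented for the example.

```python
# Illustrative combination for the "kiwi fruit" example: splice the
# re-recognized portion over the low-confidence span of the first result.
def splice_results(first_words, first_confidences, second_words, threshold=0.5):
    """Replace the contiguous low-confidence span of the first result
    with the words of the second result."""
    low = [i for i, c in enumerate(first_confidences) if c < threshold]
    if not low:
        return first_words
    start, end = low[0], low[-1] + 1
    return first_words[:start] + second_words + first_words[end:]


first = ["How", "many", "calories", "in", "a", "cure", "for", "it"]
confs = [0.95, 0.93, 0.90, 0.42, 0.38, 0.31, 0.30, 0.28]
second = ["in", "a", "kiwi", "fruit"]
print(" ".join(splice_results(first, confs, second)))
# -> "How many calories in a kiwi fruit"
```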
[0094] Additional details of implementing ASR system combination can be found in the following references: J. Fiscus, “A Post-Processing System To Yield Reduced Word Error Rates: Recognizer Output Voting Error Reduction (ROVER),” In Proc. IEEE Automatic Speech Recognition and Understanding Workshop, pages 347-354, 1997; S. Stüker, C. Fügen, S. Burger, and M. Wölfel, “Cross-system adaptation and combination for
continuous speech recognition: the influence of phoneme set and acoustic front-end.” In Interspeech, Pittsburgh, PA, USA, September 2006; G. Evermann and P. Woodland, “Posterior Probability Decoding, Confidence Estimation And System Combination,” In Proc. NIST Speech Transcription Workshop, 2000; L. Mangu, E. Brill, A. Stolcke, “Finding Consensus In Speech Recognition: Word Error Minimization And Other Applications Of Confusion Networks,” Computer Speech and Language 14 (4), pages 291-294, 2000; A. Sankar, “Bayesian model combination (baycom) for improved recognition,” In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 845-848, Philadelphia, PA, USA, April 2005; all incorporated herein by reference in their entirety.
[0095] At block 416 of process 400, an action can be performed based on the second recognition result or the first combined result. The action performed at block 416 can be similar to the action performed at block 406. In some examples, the action can include displaying at least a portion of the text of the second recognition result or the text of the first combined result on the electronic device. In other examples, the action can include executing a task to satisfy a user request contained in the first speech input and/or the second speech input. In yet other examples, the action can include generating an output (e.g., speech or text) that summarizes, describes, or confirms the intent inferred by the digital assistant from the second recognition result or the first combined result (e.g., using the natural language processing module 332).
[0096] In examples where block 414 is performed, the action performed at block 416 can be based on the first combined result. However, in other examples, block 414 may not be performed and the action performed at block 416 can be based on the second recognition result. In one such example, the second speech input can be a repetition of the entire first speech input. Further, the second recognition result can have a sufficiently high confidence measure such that ASR system combination need not be performed. Thus, in this example, the action performed at block 416 can be based on the second recognition result.
[0097] Referring back to block 410 of process 400, it can be determined that the input does not include a second speech input that is a repetition of the first speech input. In some examples, it can be determined that the input includes a second speech input; however, the second speech input is not a repetition of the first speech input. In these examples, the second speech input can include a predetermined utterance that is indicative of a potential error in the first recognition result. In some examples, the predetermined utterance can
include specific words, such as, no, error, wrong, incorrect, misunderstand, bad, and the like. In some examples, the predetermined utterance can include specific phrases, such as, “try again,” “what was that?”, “what are you talking about?”, “that was way off,” and the like. In some examples, the predetermined utterance can include playful insults, rebukes, or expressions of frustration to the digital assistant, such as, “forget you!”, “what?!?”, “ugh!”, “you stink,” and the like.
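A minimal sketch of the predetermined-utterance check described in this paragraph is shown below. The word and phrase lists are illustrative assumptions; any real system would use its own vocabulary of error-indicating utterances.

```python
# Sketch of the predetermined-utterance check: does the follow-up speech
# contain a word or phrase that signals an error? Lists are illustrative only.
ERROR_WORDS = {"no", "error", "wrong", "incorrect", "misunderstand", "bad"}
ERROR_PHRASES = ("try again", "what was that", "that was way off", "forget you")


def indicates_error(transcript: str) -> bool:
    text = transcript.lower()
    words = {w.strip(".,!?'\"") for w in text.split()}
    return bool(words & ERROR_WORDS) or any(phrase in text for phrase in ERROR_PHRASES)


print(indicates_error("No, not Jim, I said 'Tim Carpenter'"))  # True
```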
[0098] In other examples, it can be determined at block 410 that the input received at block 408 does not include any speech input. For example, the input received at block 408 can include a non-speech input indicative of a potential error in the first recognition result. In some examples, the input can be a selection of an affordance. Specifically, an affordance can be provided that, when selected, indicates a potential error in the first recognition result. In one example, the affordance can be a physical button. In another example, the affordance can be a touchscreen element. The touchscreen element can be associated with text, such as “report a mistake,” thus informing the user that selection of the touchscreen element will indicate a potential error in the first recognition result.
[0099] In examples where at least a portion of the text of the first recognition result is displayed on the electronic device (e.g., at block 406), the input can be a selection of at least a portion of the displayed text. Specifically, the user can highlight, via the touchscreen (e.g., touchscreen 246) or other input/control devices (e.g., other input/control devices 248), at least a portion of the displayed text of the first recognition result on the electronic device. The selected portion can also indicate the portion of the text of the first recognition result that is associated with the potential error.
[0100] In some examples, the input received at block 408 can be a predetermined motion of the electronic device. The predetermined motion can be detected by a motion sensor (e.g., the motion sensor 210) of the electronic device. In one such example, the predetermined motion can be a shaking motion. Further, the predetermined motion can have a certain motion profile (e.g., a certain speed, frequency, and/or magnitude of movement).
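For illustration only, a shake of the device might be detected roughly as sketched below, by counting acceleration peaks within a short window. The thresholds and the sample format are invented and do not reflect any particular motion-sensing API.

```python
# Rough sketch (illustration only) of treating a shake as error feedback:
# count how many accelerometer samples exceed a magnitude threshold.
def looks_like_shake(accel_samples, magnitude_threshold=2.5, min_peaks=4):
    """accel_samples: iterable of (x, y, z) readings in g over a short window."""
    peaks = 0
    for x, y, z in accel_samples:
        magnitude = (x * x + y * y + z * z) ** 0.5
        if magnitude > magnitude_threshold:
            peaks += 1
    return peaks >= min_peaks


print(looks_like_shake([(0.1, 0.2, 1.0), (2.9, 0.4, 1.1), (3.1, 0.2, 0.9),
                        (2.8, 0.5, 1.0), (3.0, 0.1, 1.2)]))  # True
```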
[0101] In some examples, the input received at block 408 can be a rejection or cancellation associated with the action performed at block 406. In one such example, the action performed at block 406 can include calling “Jim Carpenter” in response to the first speech input of “Call Tim Carpenter.” In this example, the input received at block 408 can include canceling the call to “Jim Carpenter.” In other examples, the input can include
rejecting a task proposed by the digital assistant. In one such example, the action performed at block 406 can include outputting the question, “Would you like to call Jim Carpenter?” based on the first speech input of “Call Tim Carpenter.” In this example, the input received in response to this question can include the response of “no” or a similar input indicative of a rejection to the question.
[0102] At block 418 of process 400, the user can be prompted to repeat at least a portion of the first speech input. Block 418 can be performed in response to receiving at block 408 the input indicative of a potential error in the first recognition result. Further, in some examples, block 418 can be performed in response to determining at block 410 that the input received at block 408 does not include a second speech input that is a repetition of at least a portion of the first speech input. The prompt provided at block 418 can include an output (e.g., speech or text) that requests the user to repeat at least a portion of the first speech input. For example, the digital assistant implemented on the electronic device can prompt the user by outputting, “Sorry about that - can you please repeat your request?” [0103] Block 418 can further include identifying a portion of the first speech input that is associated with the potential error in the first speech recognition. In some examples, the portion of the first speech input associated with the potential error can be identified based on the confidence measure of the first recognition result. In particular, the confidence measure can indicate the confidence of each word or phoneme derived from the first speech input using the first ASR system. The portion of the first speech input corresponding to words or phonemes that have a confidence measure less than a predetermined value can be identified as the portion of the first speech input that is associated with the potential error in the first speech recognition. In a specific example, the first speech input can include, “Call Tim Carpenter” and the first recognition result can include the text, “Call Jim Carpenter.” Further, the first recognition result can include confidence measures for each of the words in the text. In this example, the words “call” and “Carpenter” can each have a confidence measure that is greater than a predetermined value while the word “Jim” can have a confidence measure that is less than the predetermined value. Therefore, in this example, the portion “Tim” in the first speech input can be identified to be associated with the potential error in the first speech recognition. The user can then be prompted to repeat the portion of the first speech input that is identified to be associated with the potential error in the first
speech recognition. In this example, the digital assistant can prompt the user by outputting, “Could you please repeat the name of the person you would like to call?” [0104] At block 420 of process 400, a third speech input can be received from the user. The third speech input can be a repetition of at least a portion of the first speech input. The user may provide the third speech input in response to the prompt provided at block 418. The third speech input can be subsequently used to improve the recognition results and thus reduce the probability of the same error reoccurring.
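Blocks 418 and 420, as just described, can be pictured with the short sketch below: the words of the first recognition result whose confidence falls below a predetermined value are collected and used to build a targeted repeat prompt. The prompt wording and threshold are assumptions for illustration.

```python
# Sketch of building a targeted repeat prompt from low-confidence words.
def build_repeat_prompt(words, confidences, predetermined_value=0.5):
    """Collect low-confidence words from the first result and ask about them."""
    suspect = [w for w, c in zip(words, confidences) if c < predetermined_value]
    if not suspect:
        return None  # nothing to re-ask
    return 'Sorry, could you repeat the part where you said "{}"?'.format(" ".join(suspect))


print(build_repeat_prompt(["call", "Jim", "Carpenter"], [0.94, 0.41, 0.88]))
# -> Sorry, could you repeat the part where you said "Jim"?
```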
[0105] At block 422 of process 400, the third speech input can be processed using the second ASR system to produce a third recognition result. The third speech input can be processed in a similar manner as the second speech input at block 412. In some examples, the third recognition result can include intermediate recognition results such as phonemes and sub-words (e.g., syllables, morphemes, and the like). In some examples, the third recognition result can include text such as a word or a sequence of words corresponding to at least a portion of the third speech input. Further, the third recognition result can include a confidence measure for each phoneme, sequence of phonemes, sub-word, word, or sequence of words derived from the third speech input.
[0106] As described above, the second ASR system can be the same as the first ASR system. In other examples, the second ASR system can be different from the first ASR system. The one or more speech recognition models of the second ASR system can be different from the one or more speech recognition models of the first ASR system. Further, the one or more speech recognition engines of the second ASR system can be different from the one or more speech recognition engines of the first ASR system. In some examples, the error rate of the second ASR system can be lower than an error rate of the first ASR system. In some examples, the latency of the second ASR system can be greater than the latency of the first ASR system.
[0107] At block 424 of process 400, a second combined result can be determined based on the first recognition result and the third recognition result. The second combined result can be determined in a similar manner as the first combined result at block 414. In some examples, the second combined result can comprise at least a portion of the first recognition result and at least a portion of the third recognition result. In some examples, the first recognition result and the third recognition result can be combined by means of ASR system combination to produce the second combined result.
[0108] At block 426 of process 400, an action can be performed based on the third recognition result or the second combined result. The action performed at block 426 can be similar to the action performed at block 416. In some examples, the action can include displaying at least a portion of the text of the third recognition result or the second combined result on the electronic device. In other examples, the action can include executing a task to satisfy a user request contained in the first speech input and/or the third speech input. In yet other examples, the action can include generating an output (e.g., speech or text) that summarizes, describes, or confirms the intent inferred by the digital assistant from the third recognition result or the second combined result (e.g., using the natural language processing module 332). In examples where block 424 is performed, the action performed at block 426 can be based on the second combined result. However, in examples where block 424 is not performed, the action performed at block 426 can be based at least in part on the third recognition result.
[0109] At block 428 of process 400, the first speech input can be processed using a second ASR system to produce a fourth recognition result. In some examples, block 428 can be performed in response to receiving at block 408 the input that is indicative of a potential error in the first recognition result. In some examples, block 428 can be performed in response to determining at block 410 that the input received at block 408 does not include a second speech input that is a repetition of at least a portion of the first speech input.
[0110] The first speech input can be processed using the second ASR system in a similar manner as the second speech input is processed using the second ASR system at block 412. In some examples, the second ASR system can be different from the first ASR system. In one such example, the one or more speech recognition engines of the second ASR system can be different from the one or more speech recognition engines of the first ASR system. In another such example, the one or more speech recognition models of the second ASR system can be different from the one or more speech recognition models of the first ASR system. In some examples, the second ASR system can be more accurate and thus have a lower error rate than the first ASR system. Reprocessing the first speech input using a different and more accurate ASR system can be desirable to reduce the probability of the same error reoccurring. This can result in improved subsequent speech recognition and thus improve the user experience. In some examples, the second ASR system can have a greater latency than the first ASR system due to the greater computing cost associated with a more accurate ASR
system. The greater latency can be an acceptable trade-off for achieving a more accurate recognition result and reducing the probability of the same error reoccurring. In particular, after experiencing an error associated with a recognition result, a user may typically be willing to wait longer for a more accurate result than to experience the same error again in a shorter time period.
[0111] In some examples, only a portion of the first speech input can be processed using the second ASR system to produce the fourth recognition result. In particular, block 428 can include identifying a portion of the first speech input that is associated with the potential error in the first speech recognition. The portion of the first speech input associated with the potential error can then be processed using the second ASR system to produce the fourth recognition result. The portion of the first speech input that is associated with the potential error can be identified in a similar manner as described in block 418. For example, the portion of the first speech input associated with the potential error can be identified based on the confidence measure of the first recognition result.
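As an illustration of reprocessing only the suspect portion, the sketch below uses per-word timings from the first pass to cut the matching audio segment and hand it to the second recognizer. The `word_timings` format and the `transcribe` call are assumed placeholders rather than a real interface.

```python
# Sketch of block 428 when only the suspect portion is reprocessed.
def reprocess_suspect_audio(audio, sample_rate, word_timings, confidences,
                            second_asr, predetermined_value=0.5):
    """word_timings: list of (start_sec, end_sec) per word of the first result."""
    spans = [t for t, c in zip(word_timings, confidences) if c < predetermined_value]
    if not spans:
        return None  # nothing flagged as a potential error
    start = int(min(s for s, _ in spans) * sample_rate)
    end = int(max(e for _, e in spans) * sample_rate)
    segment = audio[start:end]             # slice out only the suspect portion
    return second_asr.transcribe(segment)  # placeholder recognizer interface
```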
[0112] At block 430 of process 400, a third combined result can be determined based on the first recognition result and the fourth recognition result. The third combined result can be determined in a similar manner as the first combined result at block 414 and the second combined result at block 424. In some examples, the third combined result can comprise at least a portion of the first recognition result and at least a portion of the fourth recognition result. In some examples, the first recognition result and the fourth recognition result can be combined by means of ASR system combination to produce the third combined result.
[0113] At block 432 of process 400, an action can be performed based on the fourth recognition result or the third combined result. The action performed at block 432 can be similar to the action performed at block 416 or 426. In some examples, the action can include displaying at least a portion of the text of the fourth recognition result or the third combined result on the electronic device. In other examples, the action can include executing a task to satisfy a user request contained in the first speech input. In yet other examples, the action can include generating an output (e.g., speech or text) that summarizes, describes, or confirms the intent inferred by the digital assistant from the fourth recognition result or the third combined result (e.g., using the natural language processing module 332). In examples where block 430 is performed, the action performed at block 432 can be based on the third combined result. However, in examples where block 430
is not performed, the action performed at block 432 can be based at least in part on the fourth recognition result.
[0114] Although blocks 402 through 432 of process 400 are shown in a particular order in FIGS. 4A-B, it should be appreciated that these blocks can be performed in any order. For instance, in some examples, block 412 can be performed prior to or concurrently with block 410. Further, it should be appreciated that in some cases, one or more blocks of process 400 can be optional and additional blocks can also be performed. For instance, in some examples, process 400 can include blocks 402, 404, 408, and 412 with the rest of the blocks being optional. In other examples, process 400 can include 402, 404, 408, 418, 420, and 422 with the rest of the blocks being optional. In yet other examples, process 400 can include 402, 404, 408, and 428 with the rest of the blocks being optional.
5. Electronic Device [0115] FIG. 5 shows a functional block diagram of an electronic device 500 configured in accordance with the principles of the various described examples. The functional blocks of the device can be optionally implemented by hardware, software, or a combination of hardware and software to carry out the principles of the various described examples. It is understood by persons of skill in the art that the functional blocks described in FIG. 5 can be optionally combined, or separated into sub-blocks to implement the principles of the various described examples. Therefore, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.
[0116] As shown in FIG. 5, an electronic device 500 can include a touch screen display unit 502 configured to display a user interface and receive input from the user, an audio input unit 504 configured to receive speech input, an input unit 506 configured to receive an input from the user, and a speaker unit 508 configured to output audio. In some examples, audio input unit 504 can be configured to receive a speech input in the form of sound waves from a user and transmit the speech input in the form of a representative signal to processing unit 510. The electronic device 500 can further include a processing unit 510 coupled to the touch screen display unit 502, the audio input unit 504, the input unit 506, and the speaker unit 508. In some examples, the processing unit 510 can include a receiving unit 512, a speech processing unit 514, a determining unit 516, a performing unit 518, a prompting unit 520, and an identifying unit 522.
[0117] The processing unit 510 is configured to receive (e.g., from the audio input unit 504 and using the receiving unit 512), from a user, a first speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514) the first speech input using a first automatic speech recognition system to produce a first recognition result. The processing unit 510 is configured to receive (e.g., from the input unit 506 and using the receiving unit 512), from the user, an input indicative of a potential error in the first recognition result. The input can include a second speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514), the second speech input using a second automatic speech recognition system to produce a second recognition result. In some examples, the second speech input is a repetition of at least a portion of the first speech input.
[0118] In some examples, the processing unit 510 is configured to determine (e.g., using the determining unit 516) whether the second speech input comprises a repetition of at least a portion of the first speech input. In some examples, the second speech input is processed using the second automatic speech recognition system to produce the second recognition result in response to determining that the second speech input comprises a repetition of at least a portion of the first speech input. In some examples, determining whether the second speech input comprises a repetition of at least a portion of the first speech input comprises determining whether an error rate of a phonemic transcription of the second speech input with respect to a phonemic transcription of a corresponding portion of the first speech input is less than a predetermined value. In some examples, determining whether the second speech input comprises a repetition of at least a portion of the first speech input comprises comparing an audio waveform of the second speech input with an audio waveform of a corresponding portion of the first speech input.
[0119] In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the first recognition result. In some examples, the action includes displaying at least a portion of text of the first recognition result on the electronic device. In some examples, the first speech input contains a user request and the action includes executing a task to satisfy the user request.
[0120] In some examples, the first automatic speech recognition system and the second automatic speech recognition system are a same automatic speech recognition system. In some examples, the first automatic speech recognition system and the second automatic
speech recognition system are different automatic speech recognition systems. In some examples, the first automatic speech recognition system includes one or more speech recognition models and the second automatic speech recognition system includes one or more speech recognition models that are different from the one or more speech recognition models of the first automatic speech recognition system. In some examples, the first automatic speech recognition system includes a speech recognition engine and the second automatic speech recognition system includes a speech recognition engine that is different from the speech recognition engine of the first automatic speech recognition system.
[0121] In some examples, the processing unit 510 is configured to determine (e.g., using the determining unit 516) a combined result based on the first recognition result and the second recognition result. In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the combined result. In some examples, the combined result is determined by performing automatic speech recognition system combination using the first recognition result and the second recognition result. In some examples, performing automatic speech recognition system combination comprises implementing at least one of recognition output voting error reduction, cross-adaptation, confusion network combination, and lattice combination.
[0122] In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the second recognition result. In some examples, the action includes displaying at least a portion of text of the second recognition result on the electronic device. In some examples, the first speech input contains a user request and the action includes executing a task to satisfy the user request.
[0123] In some examples, the processing unit 510 is configured to receive (e.g., from the audio input unit 504 and using the receiving unit 512), from a user, a first speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514) the first speech input using a first automatic speech recognition system to produce a first recognition result. The processing unit 510 is configured to receive (e.g., from the input unit 506 and using the receiving unit 512), from the user, an input indicative of a potential error in the first recognition result. In some examples, the processing unit 510 is configured to prompt (e.g., using the prompting unit 520 and via the touch screen display unit 502 or the speaker unit 508) the user to repeat at least a portion of the first speech input. In some examples, the processing unit 510 is configured to receive (e.g., from the audio input unit 504
and using the receiving unit 512), from the user, a second speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514), the second speech input using a second automatic speech recognition system to produce a second recognition result.
[0124] In some examples, the input is a speech input that includes a predetermined utterance. In some examples, the input is a predetermined motion of the electronic device. In some examples, the input is a selection of an affordance. In some examples, text of the first recognition result is displayed on the electronic device and the input is a selection of at least a portion of the displayed text. In some examples, the input is associated with a rejection of a proposed task.
[0125] In some examples, the processing unit 510 is configured to identify (e.g., using the identifying unit 522) a portion of the first speech input corresponding to the potential error in the first recognition result. In some examples, processing the first speech input using the first automatic speech recognition system includes determining a confidence measure of each word in a text of the first recognition result. In some examples, the portion of the first speech input associated with the potential error is identified based on the confidence measure of each word in the text. In some examples, the user is prompted to repeat the identified portion of the first speech input corresponding to the potential error.
[0126] In some examples, the processing unit 510 is configured to perform an action associated with the first speech input. In some examples, the action includes displaying at least a portion of text of the first recognition result on the electronic device. In some examples, the first speech input contains a user request and the action includes executing a task to satisfy the user request.
[0127] In some examples, the first automatic speech recognition system and the second automatic speech recognition system are a same automatic speech recognition system. In some examples, the first automatic speech recognition system and the second automatic speech recognition system are different automatic speech recognition systems. In some examples, the first automatic speech recognition system includes one or more speech recognition models, and the second automatic speech recognition system includes one or more speech recognition models that are different from the one or more speech recognition models of the first automatic speech recognition system. In some examples, the first automatic speech recognition system includes a speech recognition engine, and the second
automatic speech recognition system includes a speech recognition engine that is different from the speech recognition engine of the first automatic speech recognition system.
[0128] In some examples, the processing unit 510 is configured to determine (e.g., using the determining unit 516) a combined result based on the first recognition result and the second recognition result. In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the combined result. In some examples, the combined result is determined by performing automatic speech recognition system combination using the first recognition result and the second recognition result. In some examples, performing automatic speech recognition system combination comprises implementing at least one of recognition output voting error reduction, cross-adaptation, confusion network combination, and lattice combination.
[0129] In some examples, the processing unit 510 is configured to receive (e.g., from the audio input unit 504 and using the receiving unit 512), from a user, a speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514) the speech input using a first automatic speech recognition system to produce a first recognition result. The processing unit 510 is configured to receive (e.g., from the input unit 506 and using the receiving unit 512), from the user, an input indicative of a potential error in the first recognition result. In some examples, the processing unit 510 is configured to process (e.g., using the speech processing unit 514), the speech input using a second automatic speech recognition system to produce a second recognition result.
[0130] In some examples, an error rate of the second automatic speech recognition system is lower than an error rate of the first automatic speech recognition system. In some examples, a latency of the second automatic speech recognition system is greater than a latency of the first automatic speech recognition system.
[0131] In some examples, the first automatic speech recognition system includes one or more speech recognition models, and the second automatic speech recognition system includes one or more speech recognition models that are different from the one or more speech recognition models of the first automatic speech recognition system. In some examples, the first automatic speech recognition system includes a speech recognition engine, and the second automatic speech recognition system includes a speech recognition engine that is different from the speech recognition engine of the first automatic speech recognition system.
[0132] In some examples, the input is a speech input containing a predetermined utterance. In some examples, the input is a selection of an affordance. In some examples, the input is associated with a rejection of a proposed task.
[0133] In some examples, the processing unit 510 is configured to identify (e.g., using the identifying unit 522) a portion of the speech input corresponding to the potential error in the first recognition result. In some examples, the identified portion of the speech input corresponding to the potential error is processed using the second automatic speech recognition system to produce the second recognition result.
[0134] In some examples, the processing unit 510 is configured to determine (e.g., using the determining unit 516) a combined result based on the first recognition result and the second recognition result. In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the combined result. In some examples, the combined result is determined by performing automatic speech recognition system combination using the first recognition result and the second recognition result. In some examples, performing system combination comprises implementing at least one of recognition output voting error reduction, cross-adaptation, confusion network combination, and lattice combination.
[0135] In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the first recognition result. In some examples, the processing unit 510 is configured to perform (e.g., using the performing unit 518) an action based on the second recognition result.
[0136] In some examples, the processing unit 510 is configured to receive (e.g., from the audio input unit 504 and using the receiving unit 512), from a user, a first speech input. The processing unit 510 is configured to process (e.g., using the speech processing unit 514) the first speech input using a first automatic speech recognition system to produce a first recognition result. The processing unit 510 is configured to receive (e.g., from the input unit 506 and using the receiving unit 512), from the user, an input indicative of a potential error in the first recognition result. In some examples, the processing unit 510 is configured to determine (e.g., using the determining unit 516) whether the input includes a second speech input that repeats at least a portion of the first speech input. In some examples, in response to determining that the input includes a second speech input that repeats at least a portion of the first speech input, the processing unit 510 is configured to process (e.g., using the speech
processing unit 514), the second speech input using a second automatic speech recognition system to produce a second recognition result. In some examples, in response to determining that the input does not include a second speech input that repeats at least a portion of the first speech input, the processing unit 510 is configured to prompt (e.g., using prompting unit 520) the user to repeat at least a portion of the first speech input, to receive (e.g., from the audio input unit 504 and using receiving unit 512), from the user, a third speech input, and to process (e.g., using the speech processing unit 514) the third speech input using the second automatic speech recognition system to produce a third recognition result.
[0137] Although examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the various examples as defined by the appended claims.

Claims (5)

  WHAT IS CLAIMED IS:
    1. A method for processing speech in a digital assistant, the method comprising: at an electronic device with a processor and memory storing one or more programs for execution by the processor:
    receiving, from a network interface, a first speech input;
    processing the first speech input using a first automatic speech recognition system to produce a first recognition result;
    performing a first task corresponding to a first user intent determined from the first speech recognition result;
    upon performing the first task, receiving an input representing a rejection of the first task;
    in response to receiving the input, processing at least a portion of the first speech input using a second automatic speech recognition system to produce a second speech recognition result, wherein the first automatic speech recognition system includes one or more speech recognition models, and the second automatic speech recognition system includes one or more speech recognition models that are different from the one or more speech recognition models of the first automatic speech recognition system;
    determining a combined speech recognition result based on the first speech recognition result and the second speech recognition result; and performing a second task corresponding to a second user intent determined from the combined speech recognition result.
  2. The method of claim 1, wherein a latency of the second automatic speech recognition system is greater than a latency of the first automatic speech recognition system.
  3. The method of any of claims 1-2, wherein the combined result is determined by performing automatic speech recognition system combination using the first speech recognition result and the second speech recognition result.
  4. The method of any of claims 1-3, wherein performing automatic speech recognition system combination comprises implementing at least one of recognition output voting error reduction, cross-adaptation, confusion network combination, and lattice combination.
  5. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform the methods of any one of claims 1-4.
AU2019100034A 2014-08-28 2019-01-11 Improving automatic speech recognition based on user feedback Expired AU2019100034B4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2019100034A AU2019100034B4 (en) 2014-08-28 2019-01-11 Improving automatic speech recognition based on user feedback

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US62/043,041 2014-08-28
US14/591,754 2015-01-07
PCT/US2015/047062 WO2016033257A1 (en) 2014-08-28 2015-08-26 Improving automatic speech recognition based on user feedback
AU2017100240A AU2017100240A4 (en) 2014-08-28 2017-02-28 Improving automatic speech recognition based on user feedback
AU2017101551A AU2017101551B4 (en) 2014-08-28 2017-11-01 Improving automatic speech recognition based on user feedback
AU2018101475A AU2018101475B4 (en) 2014-08-28 2018-10-02 Improving automatic speech recognition based on user feedback
AU2019100034A AU2019100034B4 (en) 2014-08-28 2019-01-11 Improving automatic speech recognition based on user feedback

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2018101475A Division AU2018101475B4 (en) 2014-08-28 2018-10-02 Improving automatic speech recognition based on user feedback

Publications (2)

Publication Number Publication Date
AU2019100034A4 AU2019100034A4 (en) 2019-02-14
AU2019100034B4 true AU2019100034B4 (en) 2019-09-05

Family

ID=58397907

Family Applications (4)

Application Number Title Priority Date Filing Date
AU2017100240A Ceased AU2017100240A4 (en) 2014-08-28 2017-02-28 Improving automatic speech recognition based on user feedback
AU2017101551A Expired AU2017101551B4 (en) 2014-08-28 2017-11-01 Improving automatic speech recognition based on user feedback
AU2018101475A Expired AU2018101475B4 (en) 2014-08-28 2018-10-02 Improving automatic speech recognition based on user feedback
AU2019100034A Expired AU2019100034B4 (en) 2014-08-28 2019-01-11 Improving automatic speech recognition based on user feedback


Country Status (1)

Country Link
AU (4) AU2017100240A4 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5855000A (en) * 1995-09-08 1998-12-29 Carnegie Mellon University Method and apparatus for correcting and repairing machine-transcribed input using independent or cross-modal secondary input
US6064959A (en) * 1997-03-28 2000-05-16 Dragon Systems, Inc. Error correction in speech recognition
US20060293889A1 (en) * 2005-06-27 2006-12-28 Nokia Corporation Error correction for speech recognition systems
US20100125458A1 (en) * 2006-07-13 2010-05-20 Sri International Method and apparatus for error correction in speech recognition applications
WO2016033257A1 (en) * 2014-08-28 2016-03-03 Apple Inc. Improving automatic speech recognition based on user feedback


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
McNAIR A. E. et al, 'Improving Recognizer Acceptance Through Robust, Natural Speech Repair' Proceedings of the 3rd International Conference on Spoken Language Processing, ICSLP 1994, Yokohama, Japan, September 18-22, 1994 *

Also Published As

Publication number Publication date
AU2017100240A4 (en) 2017-03-30
AU2018101475A4 (en) 2018-11-01
AU2017101551A4 (en) 2017-11-30
AU2017101551B4 (en) 2018-08-30
AU2019100034A4 (en) 2019-02-14
AU2018101475B4 (en) 2018-12-13

Similar Documents

Publication Publication Date Title
US10446141B2 (en) Automatic speech recognition based on user feedback
US11727219B2 (en) System and method for inferring user intent from speech inputs
AU2015261693B2 (en) Disambiguating heteronyms in speech synthesis
US9606986B2 (en) Integrated word N-gram and class M-gram language models
US10796702B2 (en) Method and system for controlling home assistant devices
US9633674B2 (en) System and method for detecting errors in interactions with a voice-based digital assistant
US9966060B2 (en) System and method for user-specified pronunciation of words for speech synthesis and recognition
US10672379B1 (en) Systems and methods for selecting a recipient device for communications
US11676572B2 (en) Instantaneous learning in text-to-speech during dialog
US10699706B1 (en) Systems and methods for device communications
AU2019100034B4 (en) Improving automatic speech recognition based on user feedback
KR101830210B1 (en) Method, apparatus and computer-readable recording medium for improving a set of at least one semantic unit

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
FF Certified innovation patent
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry