GB2544149A - Auto-complete methods for spoken complete value entries - Google Patents

Auto-complete methods for spoken complete value entries

Info

Publication number
GB2544149A
GB2544149A GB1613949.5A GB201613949A
Authority
GB
United Kingdom
Prior art keywords
spoken
complete
value entry
complete value
patent application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1613949.5A
Other versions
GB201613949D0 (en)
Inventor
Nichols Matthew
Nikolaus Miracna Alexander
Charles Miller Kurt
Evans Russell
Koenig Mark
Kriley Bernard
Sadecky Luke
Manuel Brian
Meyer Lauren
George Craig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hand Held Products Inc
Original Assignee
Hand Held Products Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/233,992 (US10410629B2)
Application filed by Hand Held Products Inc
Priority to GB1903587.2A (GB2573631B)
Publication of GB201613949D0
Publication of GB2544149A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/274 - Converting codes to words; Guess-ahead of partial word inputs
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 - Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

A voice-enabled system (fig. 4) for guiding a user through a particular task prompts a user to speak a spoken complete value entry 120 (e.g. workflow steps in a task, or credit card/vehicle identification numbers), receives a spoken subset of the complete long value 130 (e.g. the first few digits of the number, the minimum number of such characters being pre-selected), compares this subset with all the possible complete entries in a suggestion list (received or compiled previously, 105), and predicts the intended complete value in order to automatically complete the entry 150. The user may be alerted should no list entry match the spoken subset, while correct predictions can be confirmed 160 or rejected by the user via, e.g., a graphical user interface. The mobile device (16, fig. 1) may notify a server (12) of the confirmation.

Description

AUTO-COMPLETE METHODS FOR SPOKEN COMPLETE VALUE ENTRIES
Cross-Reference to Priority Application [0001] This application is a non-provisional application of U.S. provisional application Ser. No. 62/206,884 for Auto-Complete for Spoken Long Value Entry in a Speech Recognition System filed August 19, 2015, which is hereby incorporated by reference in its entirety.
Field of the Invention [0002] The present invention relates to auto-complete methods, and more particularly, to auto-complete methods for spoken complete value entries.
Background [0003] Voice-enabled systems help users complete assigned tasks. For example, in a workflow process, a voice-enabled system may guide users through a particular task. The task may be at least a portion of the workflow process comprising at least one workflow stage. As a user completes his/her assigned tasks, a bi-directional dialog or communication stream of information is provided over a wireless network between the user wearing a mobile computing device (herein, "mobile device") and a central computer system that is directing multiple users and verifying completion of their tasks. To direct the user's actions, information received by the mobile device from the central computer system is translated into speech or voice instructions for the corresponding user. To receive the voice instructions and transmit information, the user wears a communications headset (also referred to herein as a "headset assembly" or simply a "headset") communicatively coupled to the mobile device.
[0004] The user may be prompted for a verbal response during completion of the task. The verbal response may be a string of characters, such as digits and/or letters. The string of characters may correspond, for example, to a credit card number, a telephone number, a serial number, a vehicle identification number, or the like. The spoken string of characters (i.e., the verbal response) may be referred to herein as a "spoken complete value entry".
[0005] Unfortunately, the spoken complete value entry is often time-consuming and difficult to speak correctly, especially if the spoken complete value entry is long (i.e., the number of characters in the spoken string of characters is relatively large, referred to herein as a "spoken long value entry"). If the user (i.e., the speaker) makes errors in speaking the spoken complete value entry, conventional systems and methods often require that the user restart the spoken string, causing user frustration and consuming even more time.
[0006] Therefore, a need exists for auto-complete methods for spoken complete value entries, particularly spoken long value entries, and for use in a workflow process.
Summary [0007] An auto-complete method for a spoken complete value entry is provided, according to various embodiments of the present invention. A processor receives a possible complete value entry having a unique subset, prompts a user to speak the spoken complete value entry, receives a spoken subset of the spoken complete value entry, compares the spoken subset with the unique subset of the possible complete value entry, and automatically completes the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset. The spoken subset has a predetermined minimum number of characters.
[0008] An auto-complete method for a spoken complete value entry is provided, according to various embodiments of the present invention. A processor receives one or more possible complete value entries each having a unique subset, prompts a user to speak the spoken complete value entry, receives a spoken subset of the spoken complete value entry, compares the spoken subset with the unique subset of each of the possible complete value entries, automatically completes the spoken complete value entry to match a possible complete value entry of the one or more possible complete value entries if the spoken subset matches the unique subset of that possible complete value entry, and confirms the automatically completed spoken complete value entry as the spoken complete value entry. The spoken subset has a predetermined minimum number of characters.
[0009] An auto-complete method for a spoken complete value entry in a workflow process is provided, according to various embodiments of the present invention. The method comprises receiving, by a processor, a voice assignment to perform the workflow process comprising at least one workflow stage. The processor identifies a task that is to be performed by a user, the task being at least a portion of the workflow process. The processor receives a possible complete value entry having a unique subset and prompts a user to speak the spoken complete value entry. The processor receives a spoken subset of the spoken complete value entry. The spoken subset has a predetermined minimum number of characters. The processor compares the spoken subset with the unique subset of the possible complete value entry, automatically completes the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset, and confirms the automatically completed spoken complete value entry as the spoken complete value entry.
[0010] The foregoing illustrative summary, as well as other exemplary objectives and/or advantages of the present invention, and the manner in which the same are accomplished, are further explained within the following detailed description and its accompanying drawings.
Brief Description of the Drawings [0011] FIG. 1 is a simplified block diagram of a system in which an auto-complete method for spoken complete value entries may be implemented, according to various embodiments; [0012] FIG. 2 is a diagrammatic illustration of hardware and software components of an exemplary server of the system of FIG. 1, according to various embodiments; [0013] FIG. 3 is an illustration of the mobile computing system of the system of FIG. 1, depicting a mobile computing device and an exemplary headset that may be worn by a user performing a task in a workflow process, according to various embodiments; [0014] FIG. 4 is a diagrammatic illustration of hardware and software components of the mobile computing device and the headset of FIG. 3, according to various embodiments; and [0015] FIG. 5 is a flow diagram of an auto-complete method for spoken complete value entries, according to various embodiments.
Detailed Description [0016] Various embodiments are directed to auto-complete methods for spoken complete value entries. According to various embodiments, the spoken complete value entry that a user intends to speak is predicted after only a predetermined minimum number of characters (a spoken subset) have been spoken by the user. The spoken complete value entry is longer and harder to speak than the spoken subset thereof. Various embodiments speed up human-computer interactions. Various embodiments as described herein are especially useful for spoken long value entries as hereinafter described, and for use in workflow processes, thereby improving workflow efficiencies and easing worker frustration.
[0017] As used herein, the "spoken complete value entry" comprises a string of characters. As used herein, the term "string" is any finite sequence of characters (i.e., letters, numerals, symbols, and punctuation marks). Each string has a length, which is the number of characters in the string. The length can be any natural number (i.e., any positive integer). For numerals, valid entry values may be 0-9. For letters, valid entry values may be A-Z. Of course, in languages other than English, valid entry values may be different. The number of characters in a spoken complete value entry is greater than the number of characters in the spoken subset thereof as hereinafter described. As noted previously, the number of characters in the string of characters of a spoken complete value entry comprising a spoken long value entry is relatively large. Exemplary spoken complete value entries may be credit card numbers, telephone numbers, serial numbers, vehicle identification numbers, or the like.
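By way of a non-limiting illustration, the following minimal Python sketch shows one way a complete value entry string, as defined above, could be represented and validated; the helper name is_valid_entry and the restriction to English digits 0-9 and letters A-Z are assumptions made only for this example.

    import string

    # Valid entry values assumed for this sketch: digits 0-9 and letters A-Z.
    VALID_CHARACTERS = set(string.digits + string.ascii_uppercase)

    def is_valid_entry(value: str) -> bool:
        """A complete value entry is a non-empty string of valid characters."""
        return len(value) > 0 and all(ch in VALID_CHARACTERS for ch in value)

    # Example: a vehicle identification number is a valid (long) value entry.
    assert is_valid_entry("1HGCM82633A004352")
    assert not is_valid_entry("")  # the length must be a positive integer
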
[0018] Referring now to FIG. 1, according to various embodiments, an exemplary system 10 is provided in which an auto-complete method 100 for a spoken complete value entry may be implemented. The exemplary depicted system comprises a server 12 and a mobile computing system 16 that are configured to communicate through at least one communications network 18. The communications network 18 may include any collection of computers or communication devices interconnected by communication channels. The communication channels may be wired or wireless. Examples of such communication networks 18 include, without limitation, local area networks (LAN), the internet, and cellular networks.
[0019] FIG. 2 is a diagrammatic illustration of the hardware and software components of the server 12 of system 10 according to various embodiments of the present invention. The server 12 may be a computing system, such as a computer, computing device, disk array, or programmable device, including a handheld computing device, a networked device (including a computer in a cluster configuration), a mobile telecommunications device, a video game console (or other gaming system), etc. As such, the server 12 may operate as a multi-user computer or a single-user computer. The server 12 includes at least one central processing unit (CPU) 30 coupled to a memory 32. Each CPU 30 is typically implemented in hardware using circuit logic disposed on one or more physical integrated circuit devices or chips and may be one or more microprocessors, micro-controllers, FPGAs, or ASICs. Memory 32 may include RAM, DRAM, SRAM, flash memory, and/or another digital storage medium, and is also typically implemented using circuit logic disposed on one or more physical integrated circuit devices, or chips. As such, memory 32 may be considered to include memory storage physically located elsewhere in the server 12, e.g., any cache memory in the at least one CPU 30, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device 34, another computing system (not shown), a network storage device (e.g., a tape drive) (not shown), or another network device (not shown) coupled to the server 12 through at least one network interface 36 (illustrated and referred to hereinafter as "network I/F" 36) by way of the communications network 18.
[0020] The server 12 may optionally (as indicated by dotted lines in FIG. 2) be coupled to at least one peripheral device through an input/output device interface 38 (illustrated as, and hereinafter, "I/O I/F" 38). In particular, the server 12 may receive data from a user through at least one user interface 40 (including, for example, a keyboard, mouse, a microphone, and/or other user interface) and/or output data to the user through at least one output device 42 (including, for example, a display, speakers, a printer, and/or another output device). Moreover, in various embodiments, the I/O I/F 38 communicates with a device that is operative as a user interface 40 and output device 42 in combination, such as a touch screen display (not shown).
[0021] The server 12 is typically under the control of an operating system 44 and executes or otherwise relies upon various computer software applications, sequences of operations, components, programs, files, objects, modules, etc., according to various embodiments of the present invention. In various embodiments, the server 12 executes or otherwise relies on one or more business logic applications 46 that are configured to provide a task message/task instruction to the mobile computing system 16. The task message/task instruction is communicated to the mobile computing system 16 for a user thereof (such as a warehouse worker) to, for example, execute a task in at least one workflow stage of a workflow process.
[0022] Referring now to FIG. 3, according to various embodiments, the mobile computing system comprises a mobile computing device communicatively coupled to a headset. The mobile computing device may comprise a portable and/or wearable mobile computing device 70 worn by a user 76, for example, such as on a belt 78 as illustrated in the depicted embodiment of FIG. 3. In various embodiments, the mobile computing device may be carried or otherwise transported on a vehicle 74 (FIG. 3) used in the workflow process.
[0023] According to various embodiments, FIG. 4 is a diagrammatic illustration of at least a portion of the components of the mobile computing device 70. The mobile computing device 70 comprises a memory 92, program code resident in the memory 92, and a processor 90 communicatively coupled to the memory 92. The mobile computing device 70 further comprises a power supply 98, such as a battery, rechargeable battery, rectifier, and/or another power source, and may comprise a power monitor 75.
[0024] The processor 90 of the mobile computing device 70 is typically implemented in hardware using circuit logic disposed in one or more physical integrated circuit devices, or chips. Each processor may be one or more microprocessors, micro-controllers, field programmable gate arrays, or ASICs, while memory may include RAM, DRAM, SRAM, flash memory, and/or another digital storage medium that is also typically implemented using circuit logic disposed in one or more physical integrated circuit devices, or chips. As such, memory is considered to include memory storage physically located elsewhere in the mobile computing device, e.g., any cache memory in the at least one processor, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device, a computer, and/or another device coupled to the mobile computing device, including coupled to the mobile computing device through at least one network I/F 94 by way of the communications network 18. The mobile computing device 70, in turn, couples to the communications network 18 through the network I/F 94 with at least one wired and/or wireless connection.
[0025] Still referring to FIGS. 3 and 4, according to various embodiments, the mobile computing system 16 may further comprise a user input/output device, such as the headset 72. The headset 72 may be used, for example, in voice-enabled workflow processes. In various embodiments, the user 76 may interface with the mobile computing device 70 (and the mobile computing device interfaces with the user 76) through the headset 72, which may be coupled to the mobile computing device 70 through a cord 80. In various embodiments, the headset 72 is a wireless headset and coupled to the mobile computing device through a wireless signal (not shown). The headset 72 may include one or more speakers 82 and one or more microphones 84. The speaker 82 is configured to play audio (e.g., such as speech output associated with a voice dialog to instruct the user 76 to perform a task, i.e., a "voice assignment"), while the microphone 84 is configured to capture speech input from the user 76 (e.g., such as for conversion to machine readable input). The speech input from the user 76 may comprise a verbal response comprising the spoken complete value entry. As such, and in some embodiments, the user 76 interfaces with the mobile computing device 70 hands-free through the headset 72. The mobile computing device 70 is configured to communicate with the headset 72 through a headset interface 102 (illustrated as, and hereinafter, "headset I/F" 102), which is in turn configured to couple to the headset 72 through the cord 80 and/or wirelessly. For example, the mobile computing device 70 may be coupled to the headset 72 through the BlueTooth® open wireless technology standard that is known in the art.
[0026] Referring now specifically to FIG. 4, in various embodiments, the mobile computing device 70 may additionally include at least one input/output interface 96 (illustrated as, and hereinafter, "I/O I/F" 96) configured to communicate with at least one peripheral 113 other than the headset 72. Exemplary peripherals may include a printer, a headset, an image scanner, an identification code reader (e.g., a barcode reader or an RFID reader), a monitor, a user interface (e.g., keyboard, keypad), an output device, and a touch screen, to name a few. In various embodiments, the I/O I/F 96 includes at least one peripheral interface, including at least one of one or more serial, universal serial bus (USB), PC Card, VGA, HDMI, DVI, and/or other interfaces (for example, other computer, communicative, data, audio, and/or visual interfaces) (none shown). In various embodiments, the mobile computing device 70 may be communicatively coupled to the peripheral(s) 110 through a wired or wireless connection such as the BlueTooth® open wireless technology standard that is known in the art.
[0027] The mobile computing device 70 may be under the control of and/or otherwise rely upon various software applications, components, programs, files, objects, modules, etc. (herein the "program code" that is resident in memory 92) according to various embodiments of the present invention. This program code may include an operating system 104 (e.g., a Windows Embedded Compact operating system as distributed by Microsoft Corporation of Redmond, Wash.) as well as one or more software applications (e.g., configured to operate in an operating system or as "stand-alone" applications).
[0028] In accordance with various embodiments, the program code may include a prediction software program as hereinafter described. As such, the memory 92 may also be configured with one or more task applications 106. The one or more task applications 106 process messages or task instructions (the "voice assignment") for the user 76 (e.g., by displaying and/or converting the task messages or task instructions into speech output). The one or more task application(s) 106 implement a dialog flow. The task application(s) 106 communicate with the server 12 to receive task messages or task instructions. In turn, the task application(s) 106 may capture speech input for subsequent conversion to a useable digital format (e.g., machine readable input) by application(s) 46 to the server 12 (e.g., to update the database 48 of the server 12). As noted previously, the speech input may be a spoken subset or user confirmation as hereinafter described. In the context of a workflow process, according to various embodiments, the processor of the mobile computing device may receive a voice assignment to perform the workflow process comprising at least one workflow stage. The processor may identify a task that is to be performed by a user, the task being at least a portion of the workflow process.
[0029] Referring now to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises collecting one or more possible complete value entries (step 105). The possible complete value entries may be collected from common responses defined by the user or the system 10, or from (verbal) responses expected to be received from the user in a particular context (e.g., the workflow process, the at least one workflow stage, and/or the particular task being performed by the user). The possible complete value entries may also be based on other criteria. The possible complete value entries may be stored in the memory of the server or the memory of the mobile computing device and used to form a suggestion list for purposes as hereinafter described. The collection of one or more possible value entries may be performed at any time prior to receiving the possible complete value entry (step 110) as hereinafter described. It is to be understood that the collection of the one or more possible value entries may only need to be performed once, with the suggestion list prepared therefrom used multiple times as hereinafter described.
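By way of a non-limiting illustration, the following minimal Python sketch shows one way the collection of step 105 could be realized; the function name collect_suggestion_list and the normalization choices are assumptions made only for this example.

    def collect_suggestion_list(expected_responses, user_defined_responses=()):
        """Merge expected and user-defined responses into a de-duplicated suggestion list."""
        suggestion_list = []
        for entry in list(expected_responses) + list(user_defined_responses):
            value = entry.strip().upper()  # normalize whitespace and case
            if value and value not in suggestion_list:
                suggestion_list.append(value)
        return suggestion_list

    # Example (step 105): vehicle identification numbers expected for an inspection task.
    suggestions = collect_suggestion_list(["1HGCM82633A004352", "1HGCM82633A123456"])
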
[0030] In the context of performing a workflow process, when the host system (server) sends down the voice assignment, it can optionally send the list of possible responses (or expected responses) to the mobile device.
[0031] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises receiving a possible complete value entry having a unique subset (step 110). The processor 90 of the mobile computing device 70 is configured to receive the possible complete value entry having the unique subset from the server 12 or elsewhere (e.g., its own memory). Receiving a possible complete value entry comprises receiving the one or more possible complete value entries in the suggestion list. Each of the possible complete value entries has a unique subset configured to match with a particular spoken subset as hereinafter described. The unique subset is a predetermined portion of the possible complete value entry. The number of characters in a unique subset is less than the number of characters in the possible complete value entry. The unique subsets of the possible complete value entries may be stored in the memory of the server or the memory of the mobile computing device.
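By way of a non-limiting illustration, the following minimal Python sketch derives one possible form of unique subset, namely the shortest leading prefix of each possible complete value entry that no other entry in the suggestion list shares; the helper name unique_subsets and the prefix-based construction are assumptions made only for this example.

    def unique_subsets(suggestion_list):
        """Map each possible complete value entry to its shortest unique leading prefix."""
        result = {}
        for entry in suggestion_list:
            others = [other for other in suggestion_list if other != entry]
            for length in range(1, len(entry) + 1):
                prefix = entry[:length]
                if not any(other.startswith(prefix) for other in others):
                    result[entry] = prefix
                    break
        return result

    # unique_subsets(["1HGCM82633A004352", "1HGCM82633A123456"]) maps each entry to
    # its first twelve characters, the point at which the two entries diverge.
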
[0032] For example only, the user may be assigned the task of inspecting vehicles in a workflow process. Prior to each vehicle inspection, the user may be prompted to speak the vehicle identification number (VIN) of the particular vehicle. In this exemplary context, the vehicle identification number of each of the vehicles to be inspected may be considered exemplary expected responses to be received in the context of the particular workflow process, the at least one workflow stage, and/or the particular task being performed by the user. The vehicle identification numbers may thus be possible complete value entries in the suggestion list.
[0033] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises prompting a user to speak a spoken complete value entry (step 120). The processor of the mobile computing device is configured to prompt the user to speak the complete value entry. For example, the server 12 transmits task messages or task instructions to the mobile computing device 70 to perform a task. The processor 90 of the mobile computing device 70 receives the task messages or task instructions from the server 12 and prompts the user for a spoken complete value entry. While task messages and task instructions have been described as possible preludes for a spoken complete value entry from the user, it is to be understood that the processor 90 of the mobile computing device 70 may prompt the user for a spoken complete value entry that does not involve a task at all. For example, a prompt may be for a credit card number, a phone number, a VIN, etc.
[0034] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises receiving a spoken subset of the spoken complete value entry (step 130). When prompted for the spoken complete value entry (step 120), the user begins speaking the complete value entry (i.e., the string of characters that make up the complete value entry). The processor of the mobile computing device receives the spoken subset from the headset when the user speaks into the one or more microphones of the headset. The spoken subset comprises a predetermined minimum number of characters of the complete value entry. The number of characters in a spoken complete value entry is greater than the number of characters in the spoken subset thereof. The predetermined minimum number of characters (i.e., the spoken subset) comprises one or more sequential characters in the string of characters of the spoken complete value entry. The predetermined minimum number of characters in the spoken subset may be (pre)determined in order to differentiate between possible complete value entries having at least one common character. For example, the suggestion list of possible complete value entries may include the following three possible complete value entries: 1234567890222222222, 1245678903111111111, and 1345678900200000000. In this example, as two of the possible complete value entries share the first characters 1 and 2, the predetermined minimum number of characters of the spoken subset may be three. The predetermined number of characters may be selected in a field provided by the prediction software program.
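By way of a non-limiting illustration, the following minimal Python sketch shows one way the predetermined minimum number of characters could be pre-selected for a given suggestion list, namely one more than the longest leading run shared by any two entries; applied to the three example entries above it yields three. The function name minimum_spoken_characters is an assumption made only for this example.

    from itertools import combinations
    from os.path import commonprefix

    def minimum_spoken_characters(suggestion_list):
        """Return one more than the longest leading prefix shared by any two entries."""
        longest_shared = 0
        for first, second in combinations(suggestion_list, 2):
            longest_shared = max(longest_shared, len(commonprefix([first, second])))
        return longest_shared + 1

    entries = ["1234567890222222222",
               "1245678903111111111",
               "1345678900200000000"]
    print(minimum_spoken_characters(entries))  # prints 3
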
[0035] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises comparing the spoken subset with the unique subset of the possible complete value entry (step 140). The processor of the mobile computing device compares the spoken subset with the unique subset of the possible complete value entry. After the predetermined minimum number of characters (i.e., the spoken subset) has been spoken by the user, the processor compares the spoken subset against the possible complete value entries in the suggestion list. More particularly, the processor, configured by the prediction software program, compares the spoken subset against the unique subset of each of the possible complete value entries. The prediction software program predicts the spoken complete value entry that a user intends to speak after only the predetermined minimum number of characters (i.e., the spoken subset) has been spoken by the user. The prediction software program predicts the complete value entry by matching the spoken subset with the unique subset of the one or more possible complete value entries. It is to be understood that the greater the predetermined number of characters in the spoken subset (and in the unique subset), the more apt the suggested spoken complete value entry is to be correct. The complete value entry is the possible complete value entry having the unique subset that matches (i.e., is the same as) the spoken subset of the complete value entry. The unique subset and the spoken subset "match" by having the same characters, in the same order, and in the same position, i.e., the unique subset and the spoken subset are identical.
[0036] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries comprises automatically completing the spoken complete value entry (step 150). The spoken complete value entry is automatically completed (an "automatically completed spoken complete value entry") to match the possible complete value entry if a possible complete value entry included in the suggestion list has a unique subset that matches the spoken subset. If the spoken subset that the user speaks does not match the unique subset of at least one of the possible complete value entries on the suggestion list, the processor may alert the user that the spoken subset may be incorrect.
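By way of a non-limiting illustration, the following minimal Python sketch combines the comparing (step 140) and automatic completion (step 150) described above; it assumes the spoken subset has already been converted to text by the speech recognizer, and predict_complete_value is an illustrative name rather than the prediction software program itself.

    def predict_complete_value(spoken_subset, suggestion_list, minimum_characters):
        """Return the matching possible complete value entry, or None."""
        if len(spoken_subset) < minimum_characters:
            return None  # fewer than the predetermined minimum number of characters
        matches = [entry for entry in suggestion_list
                   if entry.startswith(spoken_subset)]
        if not matches:
            return None  # no match in the suggestion list: the user may be alerted
        if len(matches) == 1:
            return matches[0]  # the spoken subset matches a unique subset: auto-complete
        return None  # still ambiguous: more spoken characters are needed
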
[0037] Still referring to FIG. 5, according to various embodiments, the auto-complete method 100 for spoken complete value entries continues by confirming the auto-completed spoken complete value entry with the user (i.e., confirming that the auto-completed spoken complete value entry matches the spoken complete value entry that the user intended to speak when prompted in step 120) (step 160). The auto-completed spoken complete value entry is a suggestion for the spoken complete value entry. The processor of the mobile computing device confirms the auto-completed spoken complete value entry by speaking it back to the user. After the mobile computing device (more particularly, the processor thereof) speaks back the auto-completed spoken complete value entry, the mobile computing device may ask the user for user confirmation that the suggestion (the auto-completed spoken complete value entry as spoken back by the mobile computing device) is correct.
[0038] The user may accept or decline the suggestion in a number of ways (e.g., via the graphical user interface). If the suggestion is accepted, the auto-complete method 100 for a spoken complete value entry ends. More specifically, the mobile computing device may send a signal to the server 12 that the auto-completed spoken complete value entry has been confirmed. The server 12 (more particularly, the business logic application thereof) may then repeat method 100 for another spoken complete value entry.
[0039] If the suggestion is declined by the user, at least the comparing and automatically completing steps may be repeated until the user accepts the suggestion. According to various embodiments, if the suggestion is declined by the user (i.e., the user does not confirm the automatically completed spoken complete value entry as the spoken complete value entry), the method further comprises removing the possible complete value entry from the suggestion list so it will not be used again to (incorrectly) automatically complete the spoken value entry (step 170).
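By way of a non-limiting illustration, the following minimal Python sketch ties the confirmation (step 160) and removal (step 170) steps together; speak, ask_confirmation, and get_spoken_subset stand in for the mobile device's text-to-speech, confirmation, and speech-capture facilities and are assumptions made only for this example.

    def auto_complete_entry(suggestion_list, minimum_characters,
                            get_spoken_subset, speak, ask_confirmation):
        """Repeat the compare/complete/confirm cycle until a suggestion is accepted."""
        candidates = list(suggestion_list)
        while candidates:
            spoken_subset = get_spoken_subset(minimum_characters)
            matches = [entry for entry in candidates
                       if entry.startswith(spoken_subset)]
            if len(matches) != 1:
                speak("No unique match; please speak the entry again.")  # alert the user
                continue
            suggestion = matches[0]
            speak(suggestion)              # speak back the auto-completed entry (step 160)
            if ask_confirmation():         # the user accepts the suggestion
                return suggestion
            candidates.remove(suggestion)  # the user declines: drop the entry (step 170)
        return None                        # suggestion list exhausted without confirmation
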
[0040] If the suggestion is neither accepted nor declined, the processor 90 may be further configured to generate and transmit an alert to the user 76. The alert may comprise an audible sound, a visual indication, or the like. Additionally, or alternatively, the business logic application may stop until the suggestion is accepted or declined (e.g., the server may discontinue sending task messages and task instructions until the suggestion is accepted or declined).
[0041] Based on the foregoing, it is to be appreciated that various embodiments provide auto-complete methods for spoken complete value entries. Various embodiments speed up human-computer interactions. Various embodiments as described herein are especially useful for spoken long value entries and for use in workflow processes, improving workflow efficiencies and easing worker frustration.
[0042] A person having ordinary skill in the art will recognize that the environments illustrated in FIGS. 1 through 4 are not intended to limit the scope of various embodiments of the present invention. In particular, the server 12 and the mobile computing system 16 may include fewer or additional components, or alternative configurations, consistent with alternative embodiments of the present invention. Thus, a person having skill in the art will recognize that other alternative hardware and/or software environments may be used without departing from the scope of the present invention. For example, a person having ordinary skill in the art will appreciate that the server 12 and mobile computing system 16 may include more or fewer applications disposed therein. As such, other alternative hardware and software environments may be used without departing from the scope of embodiments of the present invention. Moreover, a person having ordinary skill in the art will appreciate that the terminology used to describe various pieces of data, task messages, task instructions, voice dialogs, speech output, speech input, and machine readable input is merely used for purposes of differentiation and is not intended to be limiting. The routines executed to implement the embodiments of the present invention, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions executed by one or more computing systems, will be referred to herein as a "sequence of operations," a "program product," or, more simply, "program code." The program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computing system (e.g., the server 12 and/or mobile computing system 16), and that, when read and executed by one or more processors of the mobile computing system, cause that computing system to perform the steps necessary to execute steps, elements, and/or blocks embodying the various aspects of the present invention.
[0043] While the present invention has been and hereinafter will be described in the context of fully functioning computing systems, those skilled in the art will appreciate that the various embodiments of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of computer readable media used to actually carry out the distribution. Examples of computer readable media include but are not limited to physical and tangible recordable type media such as volatile and nonvolatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., CD-ROMs, DVDs, Blu-Ray disks, etc.), among others. In addition, various program code described hereinafter may be identified based upon the application or software component within which it is implemented in a specific embodiment of the present invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the present invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, APIs, applications, applets, etc.), it should be appreciated that the present invention is not limited to the specific organization and allocation of program functionality described herein.
8,611,309; U.S. Patent No. 8,615,487; U.S. Patent No. 8,616,454; U.S. Patent No. 8,621,123; U.S. Patent No. 8,622,303; U.S. Patent No. 8,628,013; U.S. Patent No. 8,628,015; U.S. Patent No. 8,628,016; U.S. Patent No. 8,629,926; U.S. Patent No. 8,630,491; U.S. Patent No. 8,635,309; U.S. Patent No. 8,636,200; U.S. Patent No. 8,636,212; U.S. Patent No. 8,636,215; U.S. Patent No. 8,636,224; U.S. Patent No. 8,638,806; U.S. Patent No. 8,640,958; U.S. Patent No. 8,640,960; U.S. Patent No. 8,643,717; U.S. Patent No. 8,646,692; U.S. Patent No. 8,646,694; U.S. Patent No. 8,657,200; U.S. Patent No. 8,659,397; U.S. Patent No. 8,668,149; U.S. Patent No. 8,678,285; U.S. Patent No. 8,678,286; U.S. Patent No. 8,682,077; U.S. Patent No. 8,687,282; U.S. Patent No. 8,692,927; U.S. Patent No. 8,695,880; U.S. Patent No. 8,698,949; U.S. Patent No. 8,717,494; U.S. Patent No. 8,717,494; U.S. Patent No. 8,720,783; U.S. Patent No. 8,723,804; U.S. Patent No. 8,723,904; U.S. Patent No. 8,727,223; U.S. Patent No. D702,237; U.S. Patent No. 8,740,082; U.S. Patent No. 8,740,085; U.S. Patent No. 8,746,563; U.S. Patent No. 8,750,445; U.S. Patent No. 8,752,766; U.S. Patent No. 8,756,059; U.S. Patent No. 8,757,495; U.S. Patent No. 8,760,563; U.S. Patent No. 8,763,909; U.S. Patent No. 8,777,108; U.S. Patent No. 8,777,109; U.S. Patent No. 8,779,898; U.S. Patent No. 8,781,520; U.S. Patent No. 8,783,573; U.S. Patent No. 8,789,757; U.S. Patent No. 8,789,758; U.S. Patent No. 8,789,759; U.S. Patent No. 8,794,520; U.S. Patent No. 8,794,522; U.S. Patent No. 8,794,525; U.S. Patent No. 8,794,526; U.S. Patent No. 8,798,367; U.S. Patent No. 8,807,431; U.S. Patent No. 8,807,432; U.S. Patent No. 8,820,630; U.S. Patent No. 8,822,848; U.S. Patent No. 8,824,692; U.S. Patent No. 8,824,696; U.S. Patent No. 8,842,849; U.S. Patent No. 8,844,822; U.S. Patent No. 8,844,823; U.S. Patent No. 8,849,019; U.S. Patent No. 8,851,383; U.S. Patent No. 8,854,633; U.S. Patent No. 8,866,963; U.S. Patent No. 8,868,421; U.S. Patent No. 8,868,519; U.S. Patent No. 8,868,802; U.S. Patent No. 8,868,803; U.S. Patent No. 8,870,074; U.S. Patent No. 8,879,639; U.S. Patent No. 8,880,426; U.S. Patent No. 8,881,983; U.S. Patent No. 8,881,987; U.S. Patent No. 8,903,172; U.S. Patent No. 8,908,995; U.S. Patent No. 8,910,870; U.S. Patent No. 8,910,875; U.S. Patent No. 8,914,290; U.S. Patent No. 8,914,788; U.S. Patent No. 8,915,439; U.S. Patent No. 8,915,444; U.S. Patent No. 8,916,789; U.S. Patent No. 8,918,250; U.S. Patent No. 8,918,564; U.S. Patent No. 8,925,818; U.S. Patent No. 8,939,374; U.S. Patent No. 8,942,480; U.S. Patent No. 8,944,313; U.S. Patent No. 8,944,327; U.S. Patent No. 8,944,332; U.S. Patent No. 8,950,678; U.S. Patent No. 8,967,468; U.S. Patent No. 8,971,346; U.S. Patent No. 8,976,030; U.S. Patent No. 8,976,368; U.S. Patent No. 8,978,981; U.S. Patent No. 8,978,983; U.S. Patent No. 8,978,984; U.S. Patent No. 8,985,456; U.S. Patent No. 8,985,457; U.S. Patent No. 8,985,459; U.S. Patent No. 8,985,461; U.S. Patent No. 8,988,578; U.S. Patent No. 8,988,590; U.S. Patent No. 8,991,704; U.S. Patent No. 8,996,194; U.S. Patent No. 8,996,384; U.S. Patent No. 9,002,641; U.S. Patent No. 9,007,368; U.S. Patent No. 9,010,641; U.S. Patent No. 9,015,513; U.S. Patent No. 9,016,576; U.S. Patent No. 9,022,288; U.S. Patent No. 9,030,964; U.S. Patent No. 9,033,240; U.S. Patent No. 9,033,242; U.S. Patent No. 9,036,054; U.S. Patent No. 9,037,344; U.S. Patent No. 9,038,911; U.S. Patent No. 9,038,915; U.S. Patent No. 9,047,098; U.S. Patent No. 9,047,359; U.S. Patent No. 9,047,420; U.S. 
Patent No. 9,047,525; U.S. Patent No. 9,047,531; U.S. Patent No. 9,053,055; U.S. Patent No. 9,053,378; U.S. Patent No. 9,053,380; U.S. Patent No. 9,058,526; U.S. Patent No. 9,064,165; U.S. Patent No. 9,064,167; U.S. Patent No. 9,064,168; U.S. Patent No. 9,064,254; U.S. Patent No. 9,066,032; U.S. Patent No. 9,070,032; U.S. Design Patent No. D716,285; U.S. Design Patent No. D723,560; U.S. Design Patent No. D730,357; U.S. Design Patent No. D730,901; U.S. Design Patent No. D730,902; U.S. Design Patent No. D733,112; U.S. Design Patent No. D734,339;
International Publication No. 2013/163789; International Publication No. 2013/173985; International Publication No. 2014/019130; International Publication No. 2014/110495; U.S. Patent Application Publication No. 2008/0185432; U.S. Patent Application Publication No. 2009/0134221; U.S. Patent Application Publication No. 2010/0177080; U.S. Patent Application Publication No. 2010/0177076; U.S. Patent Application Publication No. 2010/0177707; U.S. Patent Application Publication No. 2010/0177749; U.S. Patent Application Publication No. 2010/0265880; U.S. Patent Application Publication No. 2011/0202554; U.S. Patent Application Publication No. 2012/0111946; U.S. Patent Application Publication No. 2012/0168511; U.S. Patent Application Publication No. 2012/0168512; U.S. Patent Application Publication No. 2012/0193423; U.S. Patent Application Publication No. 2012/0203647; U.S. Patent Application Publication No. 2012/0223141; U.S. Patent Application Publication No. 2012/0228382; U.S. Patent Application Publication No. 2012/0248188; U.S. Patent Application Publication No. 2013/0043312; U.S. Patent Application Publication No. 2013/0082104; U.S. Patent Application Publication No. 2013/0175341; U.S. Patent Application Publication No. 2013/0175343; U.S. Patent Application Publication No. 2013/0257744; U.S. Patent Application Publication No. 2013/0257759; U.S. Patent Application Publication No. 2013/0270346; U.S. Patent Application Publication No. 2013/0287258; U.S. Patent Application Publication No. 2013/0292475; U.S. Patent Application Publication No. 2013/0292477; U.S. Patent Application Publication No. 2013/0293539; U.S. Patent Application Publication No. 2013/0293540; U.S. Patent Application Publication No. 2013/0306728; U.S. Patent Application Publication No. 2013/0306731; U.S. Patent Application Publication No. 2013/0307964; U.S. Patent Application Publication No. 2013/0308625; U.S. Patent Application Publication No. 2013/0313324; U.S. Patent Application Publication No. 2013/0313325; U.S. Patent Application Publication No. 2013/0342717; U.S. Patent Application Publication No. 2014/0001267; U.S. Patent Application Publication No. 2014/0008439; U.S. Patent Application Publication No. 2014/0025584; U.S. Patent Application Publication No. 2014/0034734; U.S. Patent Application Publication No. 2014/0036848; U.S. Patent Application Publication No. 2014/0039693; U.S. Patent Application Publication No. 2014/0042814; U.S. Patent Application Publication No. 2014/0049120; U.S. Patent Application Publication No. 2014/0049635; U.S. Patent Application Publication No. 2014/0061306; U.S. Patent Application Publication No. 2014/0063289; U.S. Patent Application Publication No. 2014/0066136; U.S. Patent Application Publication No. 2014/0067692; U.S. Patent Application Publication No. 2014/0070005; U.S. Patent Application Publication No. 2014/0071840; U.S. Patent Application Publication No. 2014/0074746; U.S. Patent Application Publication No. 2014/0076974; U.S. Patent Application Publication No. 2014/0078341; U.S. Patent Application Publication No. 2014/0078345; U.S. Patent Application Publication No. 2014/0097249; U.S. Patent Application Publication No. 2014/0098792; U.S. Patent Application Publication No. 2014/0100813; U.S. Patent Application Publication No. 2014/0103115; U.S. Patent Application Publication No. 2014/0104413; U.S. Patent Application Publication No. 2014/0104414; U.S. Patent Application Publication No. 2014/0104416; U.S. Patent Application Publication No. 2014/0104451; U.S. 
Patent Application Publication No. 2014/0106594; U.S. Patent Application Publication No. 2014/0106725; U.S. Patent Application Publication No. 2014/0108010; U.S. Patent Application Publication No. 2014/0108402; U.S. Patent Application Publication No. 2014/0110485; U.S. Patent Application Publication No. 2014/0114530; U.S. Patent Application Publication No. 2014/0124577; U.S. Patent Application Publication No. 2014/0124579; U.S. Patent Application Publication No. 2014/0125842; U.S. Patent Application Publication No. 2014/0125853; U.S. Patent Application Publication No. 2014/0125999; U.S. Patent Application Publication No. 2014/0129378; U.S. Patent Application Publication No. 2014/0131438; U.S. Patent Application Publication No. 2014/0131441; U.S. Patent Application Publication No. 2014/0131443; U.S. Patent Application Publication No. 2014/0131444; U.S. Patent Application Publication No. 2014/0131445; U.S. Patent Application Publication No. 2014/0131448; U.S. Patent Application Publication No. 2014/0133379; U.S. Patent Application Publication No. 2014/0136208; U.S. Patent Application Publication No. 2014/0140585; U.S. Patent Application Publication No. 2014/0151453; U.S. Patent Application Publication No. 2014/0152882; U.S. Patent Application Publication No. 2014/0158770; U.S. Patent Application Publication No. 2014/0159869; U.S. Patent Application Publication No. 2014/0166755; U.S. Patent Application Publication No. 2014/0166759; U.S. Patent Application Publication No. 2014/0168787; U.S. Patent Application Publication No. 2014/0175165; U.S. Patent Application Publication No. 2014/0175172; U.S. Patent Application Publication No. 2014/0191644; U.S. Patent Application Publication No. 2014/0191913; U.S. Patent Application Publication No. 2014/0197238; U.S. Patent Application Publication No. 2014/0197239; U.S. Patent Application Publication No. 2014/0197304; U.S. Patent Application Publication No. 2014/0214631; U.S. Patent Application Publication No. 2014/0217166; U.S. Patent Application Publication No. 2014/0217180; U.S. Patent Application Publication No. 2014/0231500; U.S. Patent Application Publication No. 2014/0232930; U.S. Patent Application Publication No. 2014/0247315; U.S. Patent Application Publication No. 2014/0263493; U.S. Patent Application Publication No. 2014/0263645; U.S. Patent Application Publication No. 2014/0267609; U.S. Patent Application Publication No. 2014/0270196; U.S. Patent Application Publication No. 2014/0270229; U.S. Patent Application Publication No. 2014/0278387; U.S. Patent Application Publication No. 2014/0278391; U.S. Patent Application Publication No. 2014/0282210; U.S. Patent Application Publication No. 2014/0284384; U.S. Patent Application Publication No. 2014/0288933; U.S. Patent Application Publication No. 2014/0297058; U.S. Patent Application Publication No. 2014/0299665; U.S. Patent Application Publication No. 2014/0312121; U.S. Patent Application Publication No. 2014/0319220; U.S. Patent Application Publication No. 2014/0319221; U.S. Patent Application Publication No. 2014/0326787; U.S. Patent Application Publication No. 2014/0332590; U.S. Patent Application Publication No. 2014/0344943; U.S. Patent Application Publication No. 2014/0346233; U.S. Patent Application Publication No. 2014/0351317; U.S. Patent Application Publication No. 2014/0353373; U.S. Patent Application Publication No. 2014/0361073; U.S. Patent Application Publication No. 2014/0361082; U.S. Patent Application Publication No. 2014/0362184; U.S. Patent Application Publication No. 
2014/0363015; U.S. Patent Application Publication No. 2014/0369511; U.S. Patent Application Publication No. 2014/0374483; U.S. Patent Application Publication No. 2014/0374485; U.S. Patent Application Publication No. 2015/0001301; U.S. Patent Application Publication No. 2015/0001304; U.S. Patent Application Publication No. 2015/0003673; U.S. Patent Application Publication No. 2015/0009338; U.S. Patent Application Publication No. 2015/0009610; U.S. Patent Application Publication No. 2015/0014416; U.S. Patent Application Publication No. 2015/0021397; U.S. Patent Application Publication No. 2015/0028102; U.S. Patent Application Publication No. 2015/0028103; U.S. Patent Application Publication No. 2015/0028104; U.S. Patent Application Publication No. 2015/0029002; U.S. Patent Application Publication No. 2015/0032709; U.S. Patent Application Publication No. 2015/0039309; U.S. Patent Application Publication No. 2015/0039878; U.S. Patent Application Publication No. 2015/0040378; U.S. Patent Application Publication No. 2015/0048168; U.S. Patent Application Publication No. 2015/0049347; U.S. Patent Application Publication No. 2015/0051992; U.S. Patent Application Publication No. 2015/0053766; U.S. Patent Application Publication No. 2015/0053768; U.S. Patent Application Publication No. 2015/0053769; U.S. Patent Application Publication No. 2015/0060544; U.S. Patent Application Publication No. 2015/0062366; U.S. Patent Application Publication No. 2015/0063215; U.S. Patent Application Publication No. 2015/0063676; U.S. Patent Application Publication No. 2015/0069130; U.S. Patent Application Publication No. 2015/0071819; U.S. Patent Application Publication No. 2015/0083800; U.S. Patent Application Publication No. 2015/0086114; U.S. Patent Application Publication No. 2015/0088522; U.S. Patent Application Publication No. 2015/0096872; U.S. Patent Application Publication No. 2015/0099557; U.S. Patent Application Publication No. 2015/0100196; U.S. Patent Application Publication No. 2015/0102109; U.S. Patent Application Publication No. 2015/0115035; U.S. Patent Application Publication No. 2015/0127791; U.S. Patent Application Publication No. 2015/0128116; U.S. Patent Application Publication No. 2015/0129659; U.S. Patent Application Publication No. 2015/0133047; U.S. Patent Application Publication No. 2015/0134470; U.S. Patent Application Publication No. 2015/0136851; U.S. Patent Application Publication No. 2015/0136854; U.S. Patent Application Publication No. 2015/0142492; U.S. Patent Application Publication No. 2015/0144692; U.S. Patent Application Publication No. 2015/0144698; U.S. Patent Application Publication No. 2015/0144701; U.S. Patent Application Publication No. 2015/0149946; U.S. Patent Application Publication No. 2015/0161429; U.S. Patent Application Publication No. 2015/0169925; U.S. Patent Application Publication No. 2015/0169929; U.S. Patent Application Publication No. 2015/0178523; U.S. Patent Application Publication No. 2015/0178534; U.S. Patent Application Publication No. 2015/0178535; U.S. Patent Application Publication No. 2015/0178536; U.S. Patent Application Publication No. 2015/0178537; U.S. Patent Application Publication No. 2015/0181093; U.S. Patent Application Publication No. 2015/0181109; U.S. Patent Application No. 13/367,978 for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed February 7, 2012 (Feng et al.); U.S. Patent Application No. 29/458,405 for an Electronic Device, filed June 19, 2013 (Fitch et al.); U.S. Patent Application No. 
29/459,620 for an Electronic Device Enclosure, filed July 2, 2013 (London et al.); U.S. Patent Application No. 29/468,118 for an Electronic Device Case, filed September 26, 2013 (Oberpriller et al.)/ U.S. Patent Application No. 14/150,393 for Indicia-reader Having Unitary Construction Scanner, filed January 8, 2014 (Colavito et al.); U.S. Patent Application No. 14/200,405 for Indicia Reader for Size-Limited Applications filed March 7, 2014 (Feng et al.)/ U.S. Patent Application No. 14/231,898 for Hand-Mounted Indicia-Reading Device with Finger Motion Triggering filed April 1, 2014 (Van Horn et al.); U.S. Patent Application No. 29/486,759 for an Imaging Terminal, filed April 2, 2014 (Oberpriller et al.); U.S. Patent Application No. 14/257,364 for Docking System and Method Using Near Field Communication filed April 21, 2014 (Showering,) ; U.S. Patent Application No. 14/264,173 for Autofocus Lens System for Indicia Readers filed April 29, 2014 (Ackley et al.) ; U.S. Patent Application No. 14/277,337 for MULTIPURPOSE OPTICAL READER, filed May 14, 2014 (Jovanovski et al.); U.S. Patent Application No. 14/283,282 for TERMINAL HAVING ILLUMINATION AND FOCUS CONTROL filed May 21, 2014 (Liu et al.) ; U.S. Patent Application No. 14/327,827 for a MOBILE-PHONE ADAPTER FOR ELECTRONIC TRANSACTIONS, filed July 10, 2014 (Hej1) ; U.S. Patent Application No. 14/334,934 for a SYSTEM AND METHOD FOR INDICIA VERIFICATION, filed July 18, 2014 (Hejl); U.S. Patent Application No. 14/339,708 for LASER SCANNING CODE SYMBOL READING SYSTEM, filed July 24, 2014 (Xian et al.) ; U.S. Patent Application No. 14/340,627 for an AXIALLY REINFORCED FLEXIBLE SCAN ELEMENT, filed July 25, 2014 (Rueblinger et al.); U.S. Patent Application No. 14/446,391 for MULTIFUNCTION POINT OF SALE APPARATUS WITH OPTICAL SIGNATURE CAPTURE filed July 30, 2014 (Good et al.)/ U.S. Patent Application No. 14/452,697 for INTERACTIVE INDICIA READER, filed August 6, 2014 (Todeschini); U.S. Patent Application No. 14/453,019 for DIMENSIONING SYSTEM WITH GUIDED ALIGNMENT, filed August 6, 2014 (Li et al.) ; U.S. Patent Application No. 14/462,801 for MOBILE COMPUTING DEVICE WITH DATA COGNITION SOFTWARE, filed on August 19, 2014 (Todeschini et al.); U.S. Patent Application No. 14/483,056 for VARIABLE DEPTH OF FIELD BARCODE SCANNER filed Sep. 10, 2014 (McCloskey et al.)/ U.S. Patent Application No. 14/513,808 for IDENTIFYING INVENTORY ITEMS IN A STORAGE FACILITY filed Oct. 14, 2014 (Singel et al.); U.S. Patent Application No. 14/519,195 for HANDHELD DIMENSIONING SYSTEM WITH FEEDBACK filed Oct. 21, 2014 (Laffargue et al.); U.S. Patent Application No. 14/519,179 for DIMENSIONING SYSTEM WITH MULTIPATH INTERFERENCE MITIGATION filed Oct. 21, 2014 (Thuries et al.); U.S. Patent Application No. 14/519,211 for SYSTEM AND METHOD FOR DIMENSIONING filed Oct. 21, 2014 (Ackley et al.); U.S. Patent Application No. 14/519,233 for HANDHELD DIMENSIONER WITH DATA-QUALITY INDICATION filed Oct. 21, 2014 (Laffargue et al.); U.S. Patent Application No. 14/519,249 for HANDHELD DIMENSIONING SYSTEM WITH MEASUREMENT-CONFORMANCE FEEDBACK filed Oct. 21, 2014 (Ackley et al.)/ U.S. Patent Application No. 14/527,191 for METHOD AND SYSTEM FOR RECOGNIZING SPEECH USING WILDCARDS IN AN EXPECTED RESPONSE filed Oct. 29, 2014 (Braho et al.)/ U.S. Patent Application No. 14/529,563 for ADAPTABLE INTERFACE FOR A MOBILE COMPUTING DEVICE filed Oct. 31, 2014 (Schoon et al.) ; U.S. Patent Application No. 14/529,857 for BARCODE READER WITH SECURITY FEATURES filed October 31, 2014 (Todeschini et al.) ; U.S. 
Patent Application No. 14/398,542 for PORTABLE ELECTRONIC DEVICES HAVING A SEPARATE LOCATION TRIGGER UNIT FOR USE IN CONTROLLING AN APPLICATION UNIT filed November 3, 2014 (Bian et al.); U.S. Patent Application No. 14/531,154 for DIRECTING AN INSPECTOR THROUGH AN INSPECTION filed Nov. 3, 2014 (Miller et al.); U.S. Patent Application No. 14/533,319 for BARCODE SCANNING SYSTEM USING WEARABLE DEVICE WITH EMBEDDED CAMERA filed Nov. 5, 2014 (Todeschini); U.S. Patent Application No. 14/535,764 for CONCATENATED EXPECTED RESPONSES FOR SPEECH RECOGNITION filed Nov. 7, 2014 (Braho et al.); U.S. Patent Application No. 14/568,305 for AUTO-CONTRAST VIEWFINDER FOR AN INDICIA READER filed Dec. 12, 2014 (Todeschini); U.S. Patent Application No. 14/573,022 for DYNAMIC DIAGNOSTIC INDICATOR GENERATION filed Dec. 17, 2014 (Goldsmith); U.S. Patent Application No. 14/578,627 for SAFETY SYSTEM AND METHOD filed Dec. 22, 2014 (Ackley et al.); U.S. Patent Application No. 14/580,262 for MEDIA GATE FOR THERMAL TRANSFER PRINTERS filed Dec. 23, 2014 (Bowles); U.S. Patent Application No. 14/590,024 for SHELVING AND PACKAGE LOCATING SYSTEMS FOR DELIVERY VEHICLES filed January 6, 2015 (Payne); U.S. Patent Application No. 14/596,757 for SYSTEM AND METHOD FOR DETECTING BARCODE PRINTING ERRORS filed Jan. 14, 2015 (Ackley); U.S. Patent Application No. 14/416,147 for OPTICAL READING APPARATUS HAVING VARIABLE SETTINGS filed January 21, 2015 (Chen et al.); U.S. Patent Application No. 14/614,706 for DEVICE FOR SUPPORTING AN ELECTRONIC TOOL ON A USER'S HAND filed Feb. 5, 2015 (Oberpriller et al.); U.S. Patent Application No. 14/614,796 for CARGO APPORTIONMENT TECHNIQUES filed Feb. 5, 2015 (Morton et al.); U.S. Patent Application No. 29/516,892 for TABLE COMPUTER filed Feb. 6, 2015 (Bidwell et al.); U.S. Patent Application No. 14/619,093 for METHODS FOR TRAINING A SPEECH RECOGNITION SYSTEM filed Feb. 11, 2015 (Pecorari); U.S. Patent Application No. 14/628,708 for DEVICE, SYSTEM, AND METHOD FOR DETERMINING THE STATUS OF CHECKOUT LANES filed Feb. 23, 2015 (Todeschini); U.S. Patent Application No. 14/630,841 for TERMINAL INCLUDING IMAGING ASSEMBLY filed Feb. 25, 2015 (Gomez et al.); U.S. Patent Application No. 14/635,346 for SYSTEM AND METHOD FOR RELIABLE STORE-AND-FORWARD DATA HANDLING BY ENCODED INFORMATION READING TERMINALS filed March 2, 2015 (Sevier); U.S. Patent Application No. 29/519,017 for SCANNER filed March 2, 2015 (Zhou et al.); U.S. Patent Application No. 14/405,278 for DESIGN PATTERN FOR SECURE STORE filed March 9, 2015 (Zhu et al.); U.S. Patent Application No. 14/660,970 for DECODABLE INDICIA READING TERMINAL WITH COMBINED ILLUMINATION filed March 18, 2015 (Kearney et al.); U.S. Patent Application No. 14/661,013 for REPROGRAMMING SYSTEM AND METHOD FOR DEVICES INCLUDING PROGRAMMING SYMBOL filed March 18, 2015 (Soule et al.); U.S. Patent Application No. 14/662,922 for MULTIFUNCTION POINT OF SALE SYSTEM filed March 19, 2015 (Van Horn et al.); U.S. Patent Application No. 14/663,638 for VEHICLE MOUNT COMPUTER WITH CONFIGURABLE IGNITION SWITCH BEHAVIOR filed March 20, 2015 (Davis et al.); U.S. Patent Application No. 14/664,063 for METHOD AND APPLICATION FOR SCANNING A BARCODE WITH A SMART DEVICE WHILE CONTINUOUSLY RUNNING AND DISPLAYING AN APPLICATION ON THE SMART DEVICE DISPLAY filed March 20, 2015 (Todeschini); U.S. Patent Application No. 14/669,280 for TRANSFORMING COMPONENTS OF A WEB PAGE TO VOICE PROMPTS filed March 26, 2015 (Funyak et al.); U.S. Patent Application No. 
14/674,329 for AIMER FOR BARCODE SCANNING filed March 31, 2015 (Bidwell); U.S. Patent Application No. 14/676,109 for INDICIA READER filed April 1, 2015 (Huck); U.S. Patent Application No. 14/676,327 for DEVICE MANAGEMENT PROXY FOR SECURE DEVICES filed April 1, 2015 (Yeakley et al.); U.S. Patent Application No. 14/676,898 for NAVIGATION SYSTEM CONFIGURED TO INTEGRATE MOTION SENSING DEVICE INPUTS filed April 2, 2015 (Showering); U.S. Patent Application No. 14/679,275 for DIMENSIONING SYSTEM CALIBRATION SYSTEMS AND METHODS filed April 6, 2015 (Laffargue et al.); U.S. Patent Application No. 29/523,098 for HANDLE FOR A TABLET COMPUTER filed April 7, 2015 (Bidwell et al.); U.S. Patent Application No. 14/682,615 for SYSTEM AND METHOD FOR POWER MANAGEMENT OF MOBILE DEVICES filed April 9, 2015 (Murawski et al.); U.S. Patent Application No. 14/686,822 for MULTIPLE PLATFORM SUPPORT SYSTEM AND METHOD filed April 15, 2015 (Qu et al.); U.S. Patent Application No. 14/687,289 for SYSTEM FOR COMMUNICATION VIA A PERIPHERAL HUB filed April 15, 2015 (Kohtz et al.); U.S. Patent Application No. 29/524,186 for SCANNER filed April 17, 2015 (Zhou et al.); U.S. Patent Application No. 14/695,364 for MEDICATION MANAGEMENT SYSTEM filed April 24, 2015 (Sewell et al.); U.S. Patent Application No. 14/695,923 for SECURE UNATTENDED NETWORK AUTHENTICATION filed April 24, 2015 (Kubler et al.); U.S. Patent Application No. 29/525,068 for TABLET COMPUTER WITH REMOVABLE SCANNING DEVICE filed April 27, 2015 (Schulte et al.); U.S. Patent Application No. 14/699,436 for SYMBOL READING SYSTEM HAVING PREDICTIVE DIAGNOSTICS filed April 29, 2015 (Nahill et al.); U.S. Patent Application No. 14/702,110 for SYSTEM AND METHOD FOR REGULATING BARCODE DATA INJECTION INTO A RUNNING APPLICATION ON A SMART DEVICE filed May 1, 2015 (Todeschini et al.); U.S. Patent Application No. 14/702,979 for TRACKING BATTERY CONDITIONS filed May 4, 2015 (Young et al.); U.S. Patent Application No. 14/704,050 for INTERMEDIATE LINEAR POSITIONING filed May 5, 2015 (Charpentier et al.); U.S. Patent Application No. 14/705,012 for HANDS-FREE HUMAN MACHINE INTERFACE RESPONSIVE TO A DRIVER OF A VEHICLE filed May 6, 2015 (Fitch et al.); U.S. Patent Application No. 14/705,407 for METHOD AND SYSTEM TO PROTECT SOFTWARE-BASED NETWORK-CONNECTED DEVICES FROM ADVANCED PERSISTENT THREAT filed May 6, 2015 (Hussey et al.); U.S. Patent Application No. 14/707,037 for SYSTEM AND METHOD FOR DISPLAY OF INFORMATION USING A VEHICLE-MOUNT COMPUTER filed May 8, 2015 (Chamberlin); U.S. Patent Application No. 14/707,123 for APPLICATION INDEPENDENT DEX/UCS INTERFACE filed May 8, 2015 (Pape); U.S. Patent Application No. 14/707,492 for METHOD AND APPARATUS FOR READING OPTICAL INDICIA USING A PLURALITY OF DATA SOURCES filed May 8, 2015 (Smith et al.); U.S. Patent Application No. 14/710,666 for PRE-PAID USAGE SYSTEM FOR ENCODED INFORMATION READING TERMINALS filed May 13, 2015 (Smith); U.S. Patent Application No. 29/526,918 for CHARGING BASE filed May 14, 2015 (Fitch et al.); U.S. Patent Application No. 14/715,672 for AUGUMENTED REALITY ENABLED HAZARD DISPLAY filed May 19, 2015 (Venkatesha et al.); U.S. Patent Application No. 14/715,916 for EVALUATING IMAGE VALUES filed May 19, 2015 (Ackley); U.S. Patent Application No. 14/722,608 for INTERACTIVE USER INTERFACE FOR CAPTURING A DOCUMENT IN AN IMAGE SIGNAL filed May 27, 2015 (Showering et al.); U.S. Patent Application No. 29/528,165 for IN-COUNTER BARCODE SCANNER filed May 27, 2015 (Oberpriller et al.); U.S. Patent Application No. 
14/724,134 for ELECTRONIC DEVICE WITH WIRELESS PATH SELECTION CAPABILITY filed May 28, 2015 (Wang et al.); U.S. Patent Application No. 14/724,849 for METHOD OF PROGRAMMING THE DEFAULT CABLE INTERFACE SOFTWARE IN AN INDICIA READING DEVICE filed May 29, 2015 (Barten); U.S. Patent Application No. 14/724,908 for IMAGING APPARATUS HAVING IMAGING ASSEMBLY filed May 29, 2015 (Barber et al.); U.S. Patent Application No. 14/725,352 for APPARATUS AND METHODS FOR MONITORING ONE OR MORE PORTABLE DATA TERMINALS (Caballero et al.); U.S. Patent Application No. 29/528,590 for ELECTRONIC DEVICE filed May 29, 2015 (Fitch et al.); U.S. Patent Application No. 29/528,890 for MOBILE COMPUTER HOUSING filed June 2, 2015 (Fitch et al.); U.S. Patent Application No. 14/728,397 for DEVICE MANAGEMENT USING VIRTUAL INTERFACES CROSS-REFERENCE TO RELATED APPLICATIONS filed June 2, 2015 (Caballero); U.S. Patent Application No. 14/732,870 for DATA COLLECTION MODULE AND SYSTEM filed June 8, 2015 (Powilleit); U.S. Patent Application No. 29/529,441 for INDICIA READING DEVICE filed June 8, 2015 (Zhou et al.); U.S. Patent Application No. 14/735,717 for INDICIA-READING SYSTEMS HAVING AN INTERFACE WITH A USER'S NERVOUS SYSTEM filed June 10, 2015 (Todeschini); U.S. Patent Application No. 14/738,038 for METHOD OF AND SYSTEM FOR DETECTING OBJECT WEIGHING INTERFERENCES filed June 12, 2015 (Amundsen et al.); U.S. Patent Application No. 14/740,320 for TACTILE SWITCH FOR A MOBILE ELECTRONIC DEVICE filed June 16, 2015 (Bandringa); U.S. Patent Application No. 14/740,373 for CALIBRATING A VOLUME DIMENSIONER filed June 16, 2015 (Ackley et al.); U.S. Patent Application No. 14/742,818 for INDICIA READING SYSTEM EMPLOYING DIGITAL GAIN CONTROL filed June 18, 2015 (Xian et al.); U.S. Patent Application No. 14/743,257 for WIRELESS MESH POINT PORTABLE DATA TERMINAL filed June 18, 2015 (Wang et al.); U.S. Patent Application No. 29/530,600 for CYCLONE filed June 18, 2015 (Vargo et al.); U.S. Patent Application No. 14/744,633 for IMAGING APPARATUS COMPRISING IMAGE SENSOR ARRAY HAVING SHARED GLOBAL SHUTTER CIRCUITRY filed June 19, 2015 (Wang); U.S. Patent Application No. 14/744,836 for CLOUD-BASED SYSTEM FOR READING OF DECODABLE INDICIA filed June 19, 2015 (Todeschini et al.); U.S. Patent Application No. 14/745,006 for SELECTIVE OUTPUT OF DECODED MESSAGE DATA filed June 19, 2015 (Todeschini et al.); U.S. Patent Application No. 14/747,197 for OPTICAL PATTERN PROJECTOR filed June 23, 2015 (Thuries et al.); U.S. Patent Application No. 14/747,490 for DUAL-PROJECTOR THREE-DIMENSIONAL SCANNER filed June 23, 2015 (Jovanovski et al.); and U.S. Patent Application No. 14/748,446 for CORDLESS INDICIA READER WITH A MULTIFUNCTION COIL FOR WIRELESS CHARGING AND EAS DEACTIVATION, filed June 24, 2015 (Xie et al.).

* * *

[0045] In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term "and/or" includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.

Claims (19)

Claims
1. An auto-complete method for a spoken complete value entry comprising, by a processor: receiving a possible complete value entry having a unique subset; prompting a user to speak the spoken complete value entry; receiving a spoken subset of the spoken complete value entry, the spoken subset having a predetermined minimum number of characters; comparing the spoken subset with the unique subset of the possible complete value entry; and automatically completing the spoken complete value entry to match the possible complete value entry if the unique subset matches the spoken subset.
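
For illustration only (it forms no part of the claims), the comparison and automatic-completion steps of claim 1 can be sketched in a few lines of Python. The function name, the case-insensitive comparison, and the three-character minimum are assumptions made here; the claim itself fixes none of these details.

```python
MIN_CHARS = 3  # assumed value for the "predetermined minimum number of characters"

def auto_complete(spoken_subset: str, possible_entry: str, unique_subset: str):
    """Return the possible complete value entry when the spoken subset
    matches its unique subset; otherwise return None."""
    if len(spoken_subset) < MIN_CHARS:
        return None                      # spoken subset shorter than the assumed minimum
    if spoken_subset.casefold() == unique_subset.casefold():
        return possible_entry            # automatically complete to the full value
    return None

# Example: speaking the unique subset "12345" completes to the full entry.
# auto_complete("12345", "1234567890", "12345")  ->  "1234567890"
```
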
2. The auto-complete method according to claim 1, wherein receiving the possible complete value entry comprises receiving a suggestion list of one or more possible complete value entries, each possible complete value entry having a unique subset.
3. The auto-complete method according to claim 2, wherein comparing the spoken subset with the unique subset of the possible complete value entry comprises comparing the spoken subset with the unique subset of each possible complete value entry.
4. The auto-complete method according to claim 2, further comprising generating and transmitting an alert if the spoken subset does not match the unique subset of the possible complete value entry or at least one of the possible complete value entries.
5. The auto-complete method according to claim 2, wherein prior to receiving the possible complete value entry, the method further comprises collecting the possible complete value entries from at least one of common responses and responses expected to be received from a user in a particular context.
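
Claims 2 through 5 extend this to a suggestion list: candidates are collected from common or expected responses, the spoken subset is compared with the unique subset of every candidate, and an alert is generated when nothing matches. The sketch below is one possible reading, assuming the "unique subset" is the shortest leading substring that no other candidate shares; the helper names and the exception used as a stand-in for the alert are not taken from the specification.

```python
def collect_suggestions(common_responses, expected_responses):
    """Build the suggestion list of possible complete value entries (claim 5)."""
    return list(dict.fromkeys(list(common_responses) + list(expected_responses)))

def unique_subset(entry, suggestions, min_chars=3):
    """Shortest leading substring of `entry`, at least `min_chars` long, that no
    other suggestion shares -- one reading of a "unique subset"."""
    for n in range(min_chars, len(entry) + 1):
        prefix = entry[:n]
        if not any(s != entry and s.startswith(prefix) for s in suggestions):
            return prefix
    return entry  # fall back to the full entry if no shorter prefix is unique

def match_suggestion(spoken_subset, suggestions):
    """Compare the spoken subset with the unique subset of each candidate
    (claim 3) and return the match, or signal an alert (claim 4)."""
    for entry in suggestions:
        if spoken_subset == unique_subset(entry, suggestions):
            return entry
    raise LookupError("alert: spoken subset matches no possible complete value entry")
```
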
6. The auto-complete method according to claim 2, further comprising prompting the user to confirm the automatically completed spoken value entry as the spoken complete value entry, the automatically completed spoken complete value entry comprising a suggestion.
7. The auto-complete method according to claim 6, wherein prompting the user to confirm the automatically completed spoken complete value entry comprises reading back the automatically completed spoken complete value entry to the user.
8. The auto-complete method according to claim 7, wherein the user accepts or declines the suggestion, and if the user declines the suggestion, the method further comprises repeating, in order, at least the comparing and automatically completing steps.
9. The auto-complete method according to claim 6, wherein the user accepts or declines the suggestion, and if the user declines the suggestion, the method further comprises removing the possible complete value entry from the suggestion list.
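
Claims 6 through 9 add a confirmation loop: the completed entry is read back as a suggestion, and a declined suggestion is removed from the list before the comparison is repeated. A minimal sketch, assuming the device exposes speak and listen callables and a simple yes/accept vocabulary, none of which are specified in the claims:

```python
def confirm_completion(spoken_subset, candidates, speak, listen):
    """candidates: (unique_subset, complete_entry) pairs. Reads back each
    matching entry (claim 7) until the user accepts one (claim 8), removing
    declined entries from the suggestion list (claim 9)."""
    remaining = list(candidates)
    while remaining:
        match = next(((u, e) for u, e in remaining if u == spoken_subset), None)
        if match is None:
            speak("No matching entry found")      # stand-in for the alert of claim 4
            return None
        _, entry = match
        speak(f"Did you mean {entry}?")           # read back the completed entry
        if listen().strip().lower() in ("yes", "accept"):
            return entry                          # confirmed spoken complete value entry
        remaining.remove(match)                   # declined: drop it and repeat the comparison
    return None
```

Passing speak and listen in as callables keeps the sketch independent of any particular speech recogniser or text-to-speech engine.
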
10. An auto-complete method for a spoken complete value entry comprising, by a processor: receiving one or more possible complete value entries each having a unique subset; prompting a user to speak the spoken complete value entry; receiving a spoken subset of the spoken complete value entry, the spoken subset having a predetermined minimum number of characters; comparing the spoken subset with the unique subset of each possible complete value entry as choices; automatically completing the spoken complete value entry to match a possible complete value entry of the one or more possible complete value entries if the spoken subset matches the unique subset of the possible complete value entry; and confirming the automatically completed spoken complete value entry as the spoken complete value entry.
11. The auto-complete method according to claim 10, further comprising generating and transmitting an alert if the spoken subset does not match the unique subset of at least one of the one or more possible complete value entries.
12. The auto-complete method according to claim 10, wherein prior to receiving the possible complete value entry, the method further comprises collecting the one or more possible complete value entries from at least one of common responses and responses expected to be received from the user in a particular context.
13. The auto-complete method according to claim 10, wherein the automatically completed spoken complete value entry comprises a suggestion for the user to accept or decline, and if the user declines the suggestion, the method further comprises repeating, in order, at least the comparing and automatically completing steps.
14. The auto-complete method according to claim 10, wherein confirming the automatically completed spoken complete value entry as the spoken complete value entry comprises reading back the automatically completed spoken complete value entry to the user.
15. The auto-complete method according to claim 10, wherein the automatically completed spoken complete value entry comprises a suggestion for the user to accept or decline, and if the user declines the suggestion, the method further comprises removing the possible complete value entry from the suggestion list.
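
Claim 10 strings the same steps together end to end: prompt, receive the spoken subset, compare it against every candidate, automatically complete, and confirm. A short usage example, reusing the unique_subset and confirm_completion helpers sketched above and substituting console input/output for the voice interface; the sample entries are hypothetical:

```python
def demo_spoken_auto_complete():
    suggestions = ["1234567890", "1234998877", "5550001111"]  # hypothetical value entries
    speak, listen = print, input                              # console stand-ins for the voice interface

    speak("Please speak the value")                           # prompt the user
    spoken_subset = listen().strip()                          # receive the spoken subset
    if len(spoken_subset) < 3:                                # assumed minimum character count
        speak("Entry too short")
        return None

    candidates = [(unique_subset(e, suggestions), e) for e in suggestions]
    return confirm_completion(spoken_subset, candidates, speak, listen)
```
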
16. An auto-complete method for a spoken complete value entry in a workflow process, the method comprising: receiving, by a processor, a voice assignment to perform the workflow process comprising at least one workflow stage; identifying, by the processor, a task that is to be performed by a user, the task being at least a portion of the workflow process; receiving, by the processor, a possible complete value entry having a unique subset; prompting a user to speak the spoken complete value entry; receiving a spoken subset of the spoken complete value entry, the spoken subset having a predetermined minimum number of characters; comparing the spoken subset with the unique subset of the possible complete value entry; and confirming the automatically completed spoken complete value entry as the spoken complete value entry.
17. The auto-complete method according to claim 16, wherein receiving the possible complete value entry comprises receiving a suggestion list of one or more possible complete value entries, each possible complete value entry having a unique subset and comparing the spoken subset with the unique subset of the possible complete value entry comprises comparing the spoken subset with the unique subset of each possible complete value entry.
18. The auto-complete method according to claim 16, wherein prior to receiving the possible complete value entry, the method further comprises collecting the possible complete value entry from at least one of a common response and a response expected to be received in a context of performing at least one of the workflow process, the at least one workflow stage, and the task.
19. The auto-complete method according to claim 17, wherein the automatically completed spoken complete value entry comprises a suggestion for the user to accept or decline, and if the user declines the suggestion, the method further comprises removing the possible complete value entry from the suggestion list.
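
Claim 16 places the method inside a workflow process: a voice assignment is received, a task within a workflow stage is identified, and the auto-completed value is confirmed for that task. The sketch below assumes a dictionary-shaped voice assignment and a get_spoken_value callable standing in for the prompt/compare/complete/confirm steps sketched earlier; neither structure is dictated by the claims.

```python
from typing import Callable, List, Optional

def run_voice_assignment(voice_assignment: dict,
                         get_spoken_value: Callable[[List[str]], Optional[str]]) -> dict:
    """Walk each workflow stage, identify its task, and record the confirmed
    spoken complete value entry for that task."""
    results = {}
    for stage in voice_assignment.get("stages", []):               # at least one workflow stage
        task = stage["task"]                                       # task the user is to perform
        suggestions = (task.get("expected_responses", [])
                       + task.get("common_responses", []))         # per-context candidates (claim 18)
        value = get_spoken_value(suggestions)                      # prompt, compare, complete, confirm
        if value is not None:
            results[task["name"]] = value
    return results
```
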
GB1613949.5A 2015-08-19 2016-08-15 Auto-complete methods for spoken complete value entries Withdrawn GB2544149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1903587.2A GB2573631B (en) 2015-08-19 2016-08-15 Auto-complete methods for spoken complete value entries

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562206884P 2015-08-19 2015-08-19
US15/233,992 US10410629B2 (en) 2015-08-19 2016-08-11 Auto-complete methods for spoken complete value entries

Publications (2)

Publication Number Publication Date
GB201613949D0 GB201613949D0 (en) 2016-09-28
GB2544149A true GB2544149A (en) 2017-05-10

Family

ID=56985893

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1613949.5A Withdrawn GB2544149A (en) 2015-08-19 2016-08-15 Auto-complete methods for spoken complete value entries

Country Status (1)

Country Link
GB (1) GB2544149A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4566065A (en) * 1983-04-22 1986-01-21 Kalman Toth Computer aided stenographic system
US7657423B1 (en) * 2003-10-31 2010-02-02 Google Inc. Automatic completion of fragments of text
US8645825B1 (en) * 2011-08-31 2014-02-04 Google Inc. Providing autocomplete suggestions

Also Published As

Publication number Publication date
GB201613949D0 (en) 2016-09-28

Similar Documents

Publication Publication Date Title
US10529335B2 (en) Auto-complete methods for spoken complete value entries
US11646028B2 (en) Multiple inspector voice inspection
US11704085B2 (en) Augmented reality quick-start and user guide
US20160202951A1 (en) Portable dialogue engine
US11443363B2 (en) Confirming product location using a subset of a product identifier
US11244264B2 (en) Interleaving surprise activities in workflow
US10313340B2 (en) Method and system for tracking an electronic device at an electronic device docking station
US10262660B2 (en) Voice mode asset retrieval
US9830488B2 (en) Real-time adjustable window feature for barcode scanning and process of scanning barcode with adjustable window feature
US20170169198A1 (en) Generation of randomized passwords for one-time usage
US20170123598A1 (en) System and method for focus on touch with a touch sensitive screen display
US20170171035A1 (en) Easy wi-fi connection system and method
US20180183990A1 (en) Method and system for synchronizing illumination timing in a multi-sensor imager
US20180063310A1 (en) Systems and methods for identifying wireless devices for correct pairing
US20170337402A1 (en) Tool verification systems and methods for a workflow process
US10163044B2 (en) Auto-adjusted print location on center-tracked printers
GB2544149A (en) Auto-complete methods for spoken complete value entries
US20180253270A1 (en) Automatic printing language detection algorithm
US20160179369A1 (en) Host controllable pop-up soft keypads
EP3016023A1 (en) Scanner with illumination system

Legal Events

Date Code Title Description
R108 Alteration of time limits (patents rules 1995)

Free format text: EXTENSION APPLICATION

Effective date: 20170217

Free format text: EXTENSION ALLOWED

Effective date: 20170330

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)