US20140324428A1 - System and method of improving speech recognition using context - Google Patents


Info

Publication number
US20140324428A1
US20140324428A1 (application US13/874,304)
Authority
US
United States
Prior art keywords
contextual information
speech recognition
speech
user
method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/874,304
Other versions
US9626963B2
Inventor
Eric J. Farraro
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PayPal Inc
Original Assignee
eBay Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by eBay Inc
Priority to US13/874,304 (granted as US9626963B2)
Assigned to eBay Inc. (assignor: Eric J. Farraro)
Publication of US20140324428A1
Assigned to PayPal, Inc. (assignor: eBay Inc.)
Application granted
Publication of US9626963B2
Status: Active

Classifications

    • G — Physics
    • G10 — Musical instruments; acoustics
    • G10L — Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding
    • G10L15/00 — Speech recognition
    • G10L15/06 — Creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 — Training
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/28 — Constructional details of speech recognition systems
    • G10L15/30 — Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48 — Techniques specially adapted for particular use
    • G10L25/51 — Techniques specially adapted for comparison or discrimination
    • G10L25/78 — Detection of presence or absence of voice signals
    • G10L25/84 — Detection of presence or absence of voice signals for discriminating voice from noise
    • G10L2015/0635 — Training: updating or merging of old and new templates; mean values; weighting
    • G10L2015/226 — Taking into account non-speech characteristics

Abstract

A system and method are provided for improving speech recognition accuracy. Contextual information about user speech may be received, and then speech recognition analysis can be performed on the user speech using the contextual information. This allows the system and method to improve accuracy when performing tasks like searching and navigating using speech recognition.

Description

    BACKGROUND
  • Speech recognition involves the translation of spoken words, typically recorded by a microphone, into text. Speech recognition is used in a variety of different applications. With the rise in popularity of mobile devices, such as smartphones, and of in-dash computing systems utilized in vehicles, there has been an increase in the use of speech recognition software. Despite advances in speech recognition algorithms, accuracy of results remains a problem. As the size of the vocabulary (also known as a dictionary) grows, accuracy declines because more words can be confused with one another. Thus, as the number of different applications that utilize speech recognition grows, there is a desire to provide for larger and larger vocabularies.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a network diagram depicting a client-server system, within which one example embodiment may be deployed.
  • FIG. 2 is a block diagram illustrating a mobile device in accordance with an example embodiment.
  • FIG. 3 is a block diagram illustrating ambient noise being used to improve speech recognition in accordance with an example embodiment.
  • FIG. 4 is a block diagram illustrating information from one or more sensors other than a microphone being used to improve the accuracy of speech recognition.
  • FIG. 5 is a flow diagram illustrating a method, in accordance with an example embodiment, of improving accuracy of speech recognition.
  • FIG. 6 is a flow diagram illustrating a method, in accordance with another example embodiment, of improving accuracy of speech recognition.
  • FIG. 7 is a flow diagram illustrating a method, in accordance with another example embodiment, of improving accuracy of speech recognition.
  • FIG. 8 shows a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • DETAILED DESCRIPTION
  • The description that follows includes illustrative systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.
  • Although the present embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the embodiments. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
  • In an example embodiment, contextual information may be utilized to improve speech recognition. Contextual information is information derived from data other than the speech itself, but that provides context to the speech. This may include, for example, information about the location of the user, temperature and weather, ambient noises, time of day, speed, acceleration, etc. It should be noted that while some definitions of the term “context” may be broad enough to encompass other phrases spoken during the same sentence or paragraph, for purposes of this disclosure context will be limited to non-speech information. For example, words spoken just before or just after an analyzed word, while potentially useful in aiding in the determination of what the analyzed word is, shall not be considered contextual information for the analyzed word for purposes of this disclosure.
  • In some example embodiments, the contextual information is information gathered from a different sensor than the sensor detecting the speech. For example, the contextual information may be information derived from a global positioning system (GPS) module in a mobile device having a microphone that is recording the speech. In other embodiments, the contextual information is gathered from the same sensor detecting the speech, but the contextual information itself is not speech, such as ambient sounds or music playing in the background while a user is speaking.
  • In some example embodiments, the detected speech is used to perform searches. These searches may include, for example, general Internet queries, or specific marketplace queries on one or more specific ecommerce sites. Searching, however, is merely one example of potential applications for the techniques described in this disclosure.
  • FIG. 1 is a network diagram depicting a client-server system 100, within which one example embodiment may be deployed. A networked system 102, in the example forms of a network-based marketplace or publication system, provides server-side functionality, via a network 104 (e.g., the Internet or Wide Area Network (WAN)), to one or more clients. FIG. 1 illustrates, for example, a web client 106 (e.g., a browser) and a programmatic client 108 executing on respective client machines 110 and 112.
  • An Application Program Interface (API) server 114 and a web server 116 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 118. The application servers 118 host one or more marketplace applications 120 and payment applications 122. The application servers 118 are, in turn, shown to be coupled to one or more database servers 124 that facilitate access to one or more databases 126.
  • The marketplace applications 120 may provide a number of marketplace functions and services to users that access the networked system 102. The payment applications 122 may likewise provide a number of payment services and functions to users. The payment applications 122 may allow users to accumulate value (e.g., in a commercial currency, such as the U.S. dollar, or a proprietary currency, such as “points”) in accounts, and then later to redeem the accumulated value for products (e.g., goods or services) that are made available via the marketplace applications 120. While the marketplace and payment applications 120 and 122 are shown in FIG. 1 to both form part of the networked system 102, it will be appreciated that, in alternative embodiments, the payment applications 122 may form part of a payment service that is separate and distinct from the networked system 102.
  • Further, while the system 100 shown in FIG. 1 employs a client-server architecture, the present disclosure is of course not limited to such an architecture, and may equally well find application in a distributed, or peer-to-peer, architecture system, for example. The various marketplace and payment applications 120 and 122 may also be implemented as standalone software programs, which do not necessarily have networking capabilities.
  • The web client 106 accesses the various marketplace and payment applications 120 and 122 via the web interface supported by the web server 116. Similarly, the programmatic client 108 accesses the various services and functions provided by the marketplace and payment applications 120 and 122 via the programmatic interface provided by the API server 114. The programmatic client 108 may, for example, be a seller application (e.g., the TurboLister application developed by eBay Inc., of San Jose, Calif.) to enable sellers to author and manage listings on the networked system 102 in an off-line manner, and to perform batch-mode communications between the programmatic client 108 and the networked system 102.
  • FIG. 1 also illustrates a third party application 128, executing on a third party server machine 130, as having programmatic access to the networked system 102 via the programmatic interface provided by the API server 114. For example, the third party application 128 may, utilizing information retrieved from the networked system 102, support one or more features or functions on a website hosted by the third party. The third party website may, for example, provide one or more promotional, marketplace, or payment functions that are supported by the relevant applications of the networked system 102.
  • FIG. 2 is a block diagram illustrating a mobile device 200 in accordance with an example embodiment. The mobile device 200 may contain a microphone 202, a touchscreen 204, and one or more physical buttons 206. In some example embodiments, the mobile device 200 may also contain a global positioning system module 208, a wireless communications module 210, and an accelerometer 212. The wireless communications module 210 may be designed to communicate wirelessly via any number of different wireless communications standards, including cellular communications such as Code Division Multiple Access (CDMA) and Global System for Mobile Communications (GSM), 3G, 4G, LTE, WiFi, Bluetooth, WiMax, etc. The mobile device 200 may also include a processor 214 and a memory 216. The memory 216 may include any combination of persistent (e.g., hard drive) and/or non-persistent (e.g., Random Access Memory (RAM)) storage.
  • Speech recognition may be performed by, for example, recording user speech using the microphone 202. The speech recognition itself may be performed with a speech recognition module 218 using any number of different speech recognition algorithms, including acoustic modeling, language modeling, and hidden Markov models. Hidden Markov models are statistical models that output a sequence of symbols or quantities. A speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal; in short time scales, speech can be approximated as a stationary process. Hidden Markov models can be trained automatically, and can output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), outputting these repeatedly at short intervals (e.g., every 10 milliseconds). The vectors may comprise cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech, decorrelating the spectrum using a cosine transform, and then taking the most significant coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector. Each word or phoneme may have a different output distribution. A hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes.
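The cepstral front end described above can be sketched in a few lines. The helper below is a toy illustration under stated assumptions, not the patent's implementation: it applies a naive DFT to one short-time frame, takes the log-magnitude spectrum, decorrelates it with a cosine (DCT-II) transform, and keeps the most significant coefficients.

```python
import math

def cepstral_coefficients(frame, n_coeffs=10):
    """Toy cepstral analysis of one short-time speech frame (hypothetical
    helper): DFT -> log-magnitude spectrum -> cosine transform -> keep
    the first n_coeffs coefficients."""
    N = len(frame)
    # Naive discrete Fourier transform magnitude of the frame.
    spectrum = []
    for k in range(N):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / N) for t in range(N))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / N) for t in range(N))
        spectrum.append(math.hypot(re, im))
    # Log-magnitude spectrum (floor avoids log(0)).
    log_spec = [math.log(max(s, 1e-12)) for s in spectrum]
    # Decorrelate with a DCT-II and keep the most significant coefficients.
    coeffs = []
    for i in range(n_coeffs):
        coeffs.append(sum(log_spec[j] * math.cos(math.pi * i * (j + 0.5) / N)
                          for j in range(N)))
    return coeffs

# One synthetic 100-sample frame, standing in for a ~10 ms speech window.
frame = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
vector = cepstral_coefficients(frame)
print(len(vector))  # → 10, one n-dimensional observation vector per interval
```

A real front end would use an FFT, windowing, and mel-scale filtering; the point here is only the transform-then-decorrelate structure the paragraph describes.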
  • These algorithms may also be combined in various combinations to further improve speech recognition. Cepstral normalization may be utilized to normalize for different speaker and recording conditions, and techniques such as vocal tract length normalization and maximum likelihood linear regression may also be used. Further techniques such as heteroscedastic linear discriminant analysis, global semitied covariance transforms, and discriminative training techniques such as maximum mutual information, minimum classification error, and minimum phone error can also be used.
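Of the techniques just listed, cepstral normalization is the simplest to illustrate. The sketch below is an illustrative toy, not code from the disclosure: it subtracts the per-utterance mean from each cepstral vector so constant channel and speaker offsets cancel out.

```python
def cmn(vectors):
    """Cepstral mean normalization: subtract the per-utterance mean
    from every cepstral vector (vectors are plain lists of floats)."""
    n = len(vectors)
    dim = len(vectors[0])
    mean = [sum(v[d] for v in vectors) / n for d in range(dim)]
    return [[v[d] - mean[d] for d in range(dim)] for v in vectors]

frames = [[1.0, 2.0], [3.0, 4.0]]
print(cmn(frames))  # → [[-1.0, -1.0], [1.0, 1.0]]
```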
  • In an example embodiment, scores or other outputs derived from one or more of these speech recognition techniques may be weighted along with information derived from contextual information. This may be performed by or with a context scoring module 220. The context information may be derived from ambient sounds from the microphone 202, or alternatively may be derived from one or more sensors 222. This acts to alter the results of the speech recognition techniques based on the contextual information.
  • Various specific example embodiments will now be described.
  • FIG. 3 is a block diagram illustrating ambient noise being used to improve speech recognition in accordance with an example embodiment. Here, the microphone 202 records input 300. This input 300 may be passed to the speech recognition module 218, which may act to analyze the input and derive one or more potential output words or phonemes 302 from the input 300. Any of the input that is not determined to be a potential output word or phoneme 302 may be considered to be ambient noise 304. The speech recognition module 218 may perform this analysis by comparing the input 300 to various waveforms stored for a dictionary 306. These waveforms may be stored in the memory 216 (shown in FIG. 2). It should be noted that while in this example embodiment the speech recognition module 218 and the memory 216 are depicted on the same device as the microphone 202, in some example embodiments some or all of the speech recognition processing and storage may be performed and/or located on a separate device, such as a server.
  • Also output from the speech recognition module 218 may be various scores 308 for the potential output words or phonemes 302. These scores 308 may indicate the likelihood that each particular output word or phoneme 302 accurately reflects what the user was saying. A context scoring module 220 may then take these scores 308 and modify them based on an analysis of the ambient noise 304. This may include, for example, comparing the ambient noise 304 to various stored waveforms to identify the ambient noise 304 and then altering the values of one or more scores 308 based on these identifications. In another example embodiment, rather than modify the scores 308 directly, the context scoring module 220 alters the dictionary 306 and the speech recognition module 218 reperforms the speech analysis using the modified dictionary 306. The dictionary modification may include replacing the dictionary 306 with an alternate dictionary more appropriate in light of the ambient noise 304, or modifying entries in the dictionary 306 based on the ambient noise 304. In some example embodiments, the modifications to the dictionary 306 may be temporary, for example, expiring once the particular ambient noise is discontinued.
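As a concrete illustration of the score-modification path just described, the snippet below boosts the scores of candidate words related to an identified ambient-noise category. The term table, boost factor, and function names are invented for this sketch and are not part of the disclosure.

```python
# Hypothetical context scoring step: candidate words arrive with
# likelihood scores, the ambient noise is matched to a category, and
# scores for related terms are weighted more heavily.

CONTEXT_TERMS = {
    "music": {"concert", "tickets", "album", "billboard"},
    "baby":  {"stroller", "crib", "diapers", "formula"},
}

def rescore(candidates, ambient_category, boost=1.5):
    """Return candidate->score with context-related words boosted."""
    related = CONTEXT_TERMS.get(ambient_category, set())
    return {word: score * boost if word in related else score
            for word, score in candidates.items()}

scores = {"concert": 0.40, "consort": 0.45, "convert": 0.30}
adjusted = rescore(scores, "music")
best = max(adjusted, key=adjusted.get)
print(best)  # → "concert" — it overtakes "consort" once music is detected
```

The alternative path in the paragraph (modifying the dictionary 306 and re-running recognition) would amount to editing `CONTEXT_TERMS`-style entries into the vocabulary instead of reweighting scores.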
  • In an example embodiment, the ambient noise 304 may include music. The analysis in the context scoring module 220 may include identifying the music being played. Any terms that are related to the identified music, such as the song title, artist, album title, genre, lyrics, band members, etc., may be either weighted more heavily or added to the dictionary 306. Other terms related generally to music (e.g., tickets, concert, billboard, etc.) may also be weighted more heavily or added to the dictionary 306. The presumption is that the user may be more likely to be speaking words related to music in general, or to this particular piece of music, while the music is playing in the background. This is especially true in situations where the user is attempting to perform a search using speech recognition.
  • In another example embodiment, the ambient noise 304 may include background sounds. Examples include birds chirping, a baby crying, traffic noises, etc. If this ambient noise 304 can be identified, then this information can be used to improve speech recognition accuracy. For example, a user performing a search while a baby is crying in the background may be more likely to be searching for baby- or child-related items or pieces of information. Terms related to babies or children may therefore be weighted more heavily or added to the dictionary 306. Likewise, a user performing a search while birds chirp in the background may be more likely to be performing a search about birds, and thus bird-related terms may be weighted more heavily or added to the dictionary 306. The more specifically the context scoring module 220 can identify the ambient sounds, the more specific the added terms may be. For example, bird species may be identified if there are enough sample bird calls accessible during the context analysis. If a specific bird species is identified, terms related to this specific species, in addition to birds generally, could be weighted more heavily or added to the dictionary 306.
  • FIG. 4 is a block diagram illustrating information from one or more sensors other than a microphone being used to improve the accuracy of speech recognition. Here, a sensor 222 may detect sensor information 400, which is then input to a context scoring module 220. As with FIG. 3, the speech recognition module 218 may obtain recorded sounds from a microphone 202, and output various potential words or phonemes 302 and scores 308. The context scoring module 220 may then take these scores 308 and modify them based on an analysis of the sensor information 400. This may include, for example, identifying aspects of the sensor information 400 and altering the values of one or more scores 308 based on these aspects. The exact implementation may vary greatly based on the type of sensor 222 utilized.
  • In an example embodiment, the sensor 222 may be a GPS module, and the aspect of the GPS information may be a location. This location may be further cross-referenced against map information or other information that may provide more contextual information than the location alone. For example, the map may be used to determine whether the location is inside or outside, at home or at work, in a new or foreign city, etc. The scores 308 may then be modified based on this contextual information. For example, if the user is in a new or foreign city, chances are the query is regional in nature. Local points of interest, restaurants, lingo, etc. could be weighted more heavily and/or added to a dictionary.
  • In another example embodiment, the GPS module 208 is used to detect a speed of the user. A user traveling at, for example, 65 miles per hour is more likely to be performing searches about directions or guidance than if the same user were not moving. The dictionary and/or scores could then be modified to reflect this knowledge.
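The two GPS-derived signals above (position and speed) could feed the context scoring step roughly as follows. This is a hedged sketch: the city list, the speed threshold, and the returned labels are all assumptions for illustration, not values from the disclosure.

```python
# Hypothetical context derivation from GPS data: an unfamiliar city
# suggests weighting regional terms, and highway speed suggests weighting
# directions/guidance terms.

FAMILIAR_CITIES = {"San Jose", "San Francisco"}  # e.g., home and work

def gps_context(city, speed_mph):
    """Return a set of context labels used to bias the dictionary/scores."""
    labels = set()
    if city not in FAMILIAR_CITIES:
        labels.add("new_city")  # boost local points of interest, restaurants
    if speed_mph >= 45:
        labels.add("driving")   # boost directions and guidance terms
    return labels

print(sorted(gps_context("Paris", 65)))  # → ['driving', 'new_city']
```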
  • In another example embodiment, rather than modify the scores 308 directly, the context scoring module 220 alters the dictionary 306 and the speech recognition module 218 reperforms the speech analysis using the modified dictionary 306. The dictionary modification may include replacing the dictionary 306 with an alternate dictionary more appropriate in light of the sensor information 400, or modifying entries in the dictionary 306 based on the sensor information 400. In some example embodiments, the modifications to the dictionary 306 may be temporary, for example, expiring once the particular sensor condition is no longer detected.
  • FIG. 5 is a flow diagram illustrating a method 500, in accordance with an example embodiment, of improving accuracy of speech recognition. At operation 502, contextual information about user speech is received. At operation 504, speech recognition analysis is performed on the user speech, wherein the speech recognition analysis uses the contextual information.
  • FIG. 6 is a flow diagram illustrating a method 600, in accordance with another example embodiment, of improving accuracy of speech recognition. At operation 602, user speech is received. This may either be received from another device, such as a mobile device, or may be received directly through a microphone on the device performing the method 600. The user speech is speech spoken by a user. At operation 604, location information is received about the user. This may include, for example, GPS coordinates of the location of a device operated by the user. At operation 606, the location information may be utilized to derive context information about the user speech. This may include, for example, analyzing the location information using preset rules or settings that provide some information about the location that is relevant to the analysis of the user speech. This may include, for example, identification of the location within geographic boundaries (e.g., regions, states, cities, streets, etc.), identification of the location with respect to preset locations frequented by the user (e.g., home, work, etc.), identification of the location with respect to points of interest (e.g., lakes, museums, etc.), and the like. At operation 608, the context information may be used when performing speech recognition analysis on the user speech.
  • FIG. 7 is a flow diagram illustrating a method 700, in accordance with another example embodiment, of improving accuracy of speech recognition. At operation 702, user speech is received. This may either be received from another device, such as a mobile device, or may be received directly through a microphone on the device performing the method 700. The user speech is speech spoken by a user. At operation 704, ambient sounds may be received. This may either be received from another device, such as a mobile device, or may be received directly through a microphone on the device performing the method 700. The ambient sounds reflect non-user speech recorded by the same microphone as the user speech. At operation 706, the ambient sounds may be compared to a catalog of sounds, searching for one or more matching sounds in the catalog. This catalog may contain background sounds, music, non-user speech, or other types of sounds. This catalog may not just contain matching sounds, but contain some metadata about each sound. This may include, for example, an identification of the sounds, or other types of information about the sounds. At operation 708, metadata from one or more matching sounds is used when performing speech recognition analysis on the user speech.
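Operations 704-708 above can be sketched as a lookup against a small sound catalog whose entries carry metadata. The "fingerprints" here are exact tuples purely for illustration; a real system would match acoustic fingerprints approximately, and every name below is an assumption made for this sketch.

```python
# Toy catalog of sounds (operation 706): each entry maps a sound
# "fingerprint" to metadata identifying the sound and listing terms to
# feed into speech recognition (operation 708).

SOUND_CATALOG = {
    ("chirp", "chirp", "trill"): {"label": "birdsong",    "terms": ["bird", "feeder"]},
    ("waah", "waah"):            {"label": "baby crying", "terms": ["diapers", "crib"]},
}

def match_ambient(ambient_features):
    """Return metadata for a matching catalog sound, or None if no match."""
    return SOUND_CATALOG.get(tuple(ambient_features))

meta = match_ambient(["waah", "waah"])
print(meta["label"])  # → baby crying
```

The returned `terms` would then be weighted more heavily or added to the dictionary, exactly as in the ambient-noise embodiments described for FIG. 3.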
  • FIG. 8 shows a diagrammatic representation of a machine in the example form of a computer system 800 within which a set of instructions 824 for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 800 includes a processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 804 and a static memory 806, which communicate with each other via a bus 808. The computer system 800 may further include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 800 also includes an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), a disk drive unit 816, a signal generation device 818 (e.g., a speaker), and a network interface device 820.
  • The disk drive unit 816 includes a computer-readable medium 822 on which is stored one or more sets of instructions 824 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 824 may also reside, completely or at least partially, within the main memory 804 and/or within the processor 802 during execution thereof by the computer system 800, with the main memory 804 and the processor 802 also constituting machine-readable media. The instructions 824 may further be transmitted or received over a network 826 via the network interface device 820.
  • While the machine-readable medium 822 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 824. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies described herein. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
  • Although the inventive concepts have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the inventive concepts. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
  • The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims (20)

1. A system, comprising:
a processor;
a microphone configured to record user speech; and
a speech recognition module configured to analyze the recorded user speech and detect words spoken by a user from the recorded user speech, the speech recognition module using contextual information about the recorded user speech in the analysis.
2. The system of claim 1, wherein the microphone and the speech recognition module are located on a single electronic device.
3. The system of claim 1, wherein the microphone is located on a mobile electronic device and the speech recognition module is located on a server.
4. The system of claim 1, wherein the microphone is further configured to record ambient noise and the system further comprises a context scoring module configured to analyze the ambient noise to determine the contextual information.
5. The system of claim 1, further comprising a sensor, and wherein the contextual information is identified from sensor information detected by the sensor.
6. The system of claim 5, wherein the sensor is a global positioning system module and the contextual information is location.
7. The system of claim 5, wherein the sensor is a global positioning system module and the contextual information is speed.
8. A method comprising:
receiving contextual information about user speech; and
performing speech recognition analysis on the user speech, wherein the speech recognition analysis uses the contextual information.
9. The method of claim 8, wherein the contextual information includes user location.
10. The method of claim 8, wherein the contextual information includes speed of movement of a user.
11. The method of claim 8, wherein the contextual information includes ambient noise recorded by a microphone.
12. The method of claim 11, wherein the ambient noise includes music playing in the background as the user speech is recorded.
13. The method of claim 11, wherein the ambient noise includes background sounds recorded as the user speech is recorded.
14. The method of claim 8, wherein the performing speech recognition analysis includes analyzing the user speech based on a dictionary, and is further configured to alter the dictionary based on the contextual information.
15. The method of claim 14, wherein the dictionary is altered by replacing the dictionary with a different dictionary.
16. The method of claim 14, wherein the dictionary is altered by adding words pertaining to the contextual information to the dictionary.
17. The method of claim 14, wherein the performing speech recognition analysis includes scoring one or more potential terms from the dictionary that match the user speech, and wherein the contextual information is used by weighting the one or more potential terms based on the contextual information.
18. A non-transitory machine-readable storage medium comprising a set of instructions which, when executed by a processor, causes execution of operations comprising:
receiving contextual information about user speech; and
performing speech recognition analysis on the user speech, wherein the speech recognition analysis uses the contextual information.
19. The non-transitory machine-readable storage medium of claim 18, wherein the speech recognition analysis includes utilizing a hidden Markov model.
20. The non-transitory machine-readable storage medium of claim 18, wherein the user speech is recorded by a microphone and the contextual information is detected from a non-microphone sensor.
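The contextual weighting described in claims 14–17 (scoring candidate terms from a dictionary against the user speech, then re-weighting those scores based on contextual information) can be sketched roughly as follows. This is an illustrative reconstruction only, not the patented implementation; all function names, scores, and the boost factor are hypothetical.

```python
# Hypothetical sketch of claims 14-17: candidate dictionary terms are
# scored against recognized speech, then re-weighted by context.
# Scores and the boost factor are illustrative, not from the patent.

def score_candidates(candidates, acoustic_scores, context_terms, boost=1.5):
    """Re-weight acoustic scores so that terms matching the current
    context (e.g., keywords derived from location or ambient noise)
    receive a boost, per the weighting step of claim 17."""
    weighted = {}
    for term in candidates:
        score = acoustic_scores.get(term, 0.0)
        if term in context_terms:
            score *= boost  # contextual weighting
        weighted[term] = score
    best = max(weighted, key=weighted.get)
    return best, weighted

# Example: speech near a stadium is acoustically ambiguous between
# "giants" and "joints"; location-derived context favors "giants".
best, scores = score_candidates(
    candidates=["giants", "joints"],
    acoustic_scores={"giants": 0.48, "joints": 0.52},
    context_terms={"giants"},
)
```

Under this sketch, the location context overrides the slightly higher raw acoustic score of the wrong term, which is the behavior the claims attribute to contextual weighting.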
US13/874,304 2013-04-30 2013-04-30 System and method of improving speech recognition using context Active 2033-11-20 US9626963B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/874,304 US9626963B2 (en) 2013-04-30 2013-04-30 System and method of improving speech recognition using context

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/874,304 US9626963B2 (en) 2013-04-30 2013-04-30 System and method of improving speech recognition using context
US15/490,703 US10176801B2 (en) 2013-04-30 2017-04-18 System and method of improving speech recognition using context

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/490,703 Continuation US10176801B2 (en) 2013-04-30 2017-04-18 System and method of improving speech recognition using context

Publications (2)

Publication Number Publication Date
US20140324428A1 true US20140324428A1 (en) 2014-10-30
US9626963B2 US9626963B2 (en) 2017-04-18

Family

ID=51789968

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/874,304 Active 2033-11-20 US9626963B2 (en) 2013-04-30 2013-04-30 System and method of improving speech recognition using context
US15/490,703 Active 2033-05-20 US10176801B2 (en) 2013-04-30 2017-04-18 System and method of improving speech recognition using context

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/490,703 Active 2033-05-20 US10176801B2 (en) 2013-04-30 2017-04-18 System and method of improving speech recognition using context

Country Status (1)

Country Link
US (2) US9626963B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213796A1 (en) * 2014-01-28 2015-07-30 Lenovo (Singapore) Pte. Ltd. Adjusting speech recognition using contextual information
DE102015226408A1 (en) * 2015-12-22 2017-06-22 Robert Bosch Gmbh Method and apparatus for performing a voice recognition for controlling at least one function of a vehicle
US20170236518A1 (en) * 2016-02-16 2017-08-17 Carnegie Mellon University, A Pennsylvania Non-Profit Corporation System and Method for Multi-User GPU-Accelerated Speech Recognition Engine for Client-Server Architectures
WO2018056846A1 (en) * 2016-09-21 2018-03-29 Motorola Solutions, Inc. Method and system for optimizing voice recognition and information searching based on talkgroup activities

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9973887B2 (en) * 2016-01-21 2018-05-15 Google Llc Sharing navigation data among co-located computing devices

Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5708759A (en) * 1996-11-19 1998-01-13 Kemeny; Emanuel S. Speech recognition using phoneme waveform parameters
US6889189B2 (en) * 2003-09-26 2005-05-03 Matsushita Electric Industrial Co., Ltd. Speech recognizer performance in car and home applications utilizing novel multiple microphone configurations
US20050143970A1 (en) * 2003-09-11 2005-06-30 Voice Signal Technologies, Inc. Pronunciation discovery for spoken words
US20070100637A1 (en) * 2005-10-13 2007-05-03 Integrated Wave Technology, Inc. Autonomous integrated headset and sound processing system for tactical applications
US20080181417A1 (en) * 2006-01-25 2008-07-31 Nice Systems Ltd. Method and Apparatus For Segmentation of Audio Interactions
US20080275699A1 (en) * 2007-05-01 2008-11-06 Sensory, Incorporated Systems and methods of performing speech recognition using global positioning (GPS) information
US20090112593A1 (en) * 2007-10-24 2009-04-30 Harman Becker Automotive Systems Gmbh System for recognizing speech for searching a database
US20090234651A1 (en) * 2008-03-12 2009-09-17 Basir Otman A Speech understanding method and system
US20100082343A1 (en) * 2008-09-29 2010-04-01 Microsoft Corporation Sequential speech recognition with two unequal asr systems
US20100114344A1 (en) * 2008-10-31 2010-05-06 France Telecom Communication system incorporating ambient sound pattern detection and method of operation thereof
US7761296B1 (en) * 1999-04-02 2010-07-20 International Business Machines Corporation System and method for rescoring N-best hypotheses of an automatic speech recognition system
US20110037596A1 (en) * 2006-07-25 2011-02-17 Farhan Fariborz M Identifying activity in an area utilizing sound detection and comparison
US20110136542A1 (en) * 2009-12-09 2011-06-09 Nokia Corporation Method and apparatus for suggesting information resources based on context and preferences
US20110166856A1 (en) * 2010-01-06 2011-07-07 Apple Inc. Noise profile determination for voice-related feature
US8023663B2 (en) * 2002-05-06 2011-09-20 Syncronation, Inc. Music headphones for manual control of ambient sound
US20110300806A1 (en) * 2010-06-04 2011-12-08 Apple Inc. User-specific noise suppression for voice quality improvements
US20110320114A1 (en) * 2010-06-28 2011-12-29 Microsoft Corporation Map Annotation Messaging
US20120035931A1 (en) * 2010-08-06 2012-02-09 Google Inc. Automatically Monitoring for Voice Input Based on Context
US20120076321A1 (en) * 2010-09-28 2012-03-29 Bose Corporation Single Microphone for Noise Rejection and Noise Measurement
US8254964B2 (en) * 2009-02-23 2012-08-28 Sony Ericsson Mobile Communications Ab Method and arrangement relating to location based services for a communication device
US20120224743A1 (en) * 2011-03-04 2012-09-06 Rodriguez Tony F Smartphone-based methods and systems
US20120271631A1 (en) * 2011-04-20 2012-10-25 Robert Bosch Gmbh Speech recognition using multiple language models
US20130054235A1 (en) * 2011-08-24 2013-02-28 Sensory, Incorporated Truly handsfree speech recognition in high noise environments
US20130090926A1 (en) * 2011-09-16 2013-04-11 Qualcomm Incorporated Mobile device context information using speech detection
US20130117016A1 (en) * 2011-11-07 2013-05-09 Dietmar Ruwisch Method and an apparatus for generating a noise reduced audio signal
US8473289B2 (en) * 2010-08-06 2013-06-25 Google Inc. Disambiguating input based on context
US20140019126A1 (en) * 2012-07-13 2014-01-16 International Business Machines Corporation Speech-to-text recognition of non-dictionary words using location data
US20140044269A1 (en) * 2012-08-09 2014-02-13 Logitech Europe, S.A. Intelligent Ambient Sound Monitoring System
US20140314242A1 (en) * 2013-04-19 2014-10-23 Plantronics, Inc. Ambient Sound Enablement for Headsets
USRE45289E1 (en) * 1997-11-25 2014-12-09 At&T Intellectual Property Ii, L.P. Selective noise/channel/coding models and recognizers for automatic speech recognition

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6574596B2 (en) * 1999-02-08 2003-06-03 Qualcomm Incorporated Voice recognition rejection scheme
DE10015960C2 (en) * 2000-03-30 2003-01-16 Micronas Munich Gmbh Speech recognition method and speech recognition device
US20030078777A1 (en) * 2001-08-22 2003-04-24 Shyue-Chin Shiau Speech recognition system for mobile Internet/Intranet communication
US6990445B2 (en) * 2001-12-17 2006-01-24 Xl8 Systems, Inc. System and method for speech recognition and transcription
JP3826032B2 (en) * 2001-12-28 2006-09-27 株式会社東芝 Speech recognition device, speech recognition method and a speech recognition program
US7885420B2 (en) * 2003-02-21 2011-02-08 Qnx Software Systems Co. Wind noise suppression system
US7809565B2 (en) * 2003-03-01 2010-10-05 Coifman Robert E Method and apparatus for improving the transcription accuracy of speech recognition software
US20070299671A1 (en) * 2004-03-31 2007-12-27 Ruchika Kapur Method and apparatus for analysing sound- converting sound into information
US8332218B2 (en) 2006-06-13 2012-12-11 Nuance Communications, Inc. Context-based grammars for automated speech recognition
US8255224B2 (en) 2008-03-07 2012-08-28 Google Inc. Voice recognition grammar selection based on context
US20110218802A1 (en) * 2010-03-08 2011-09-08 Shlomi Hai Bouganim Continuous Speech Recognition
US8744091B2 (en) * 2010-11-12 2014-06-03 Apple Inc. Intelligibility control using ambient noise detection
US9244984B2 (en) * 2011-03-31 2016-01-26 Microsoft Technology Licensing, Llc Location based conversational understanding
US9633654B2 (en) * 2011-12-06 2017-04-25 Intel Corporation Low power voice detection
CN103456301B (en) * 2012-05-28 2019-02-12 中兴通讯股份有限公司 A kind of scene recognition method and device and mobile terminal based on ambient sound



Also Published As

Publication number Publication date
US10176801B2 (en) 2019-01-08
US20170221477A1 (en) 2017-08-03
US9626963B2 (en) 2017-04-18

Similar Documents

Publication Publication Date Title
US9495956B2 (en) Dealing with switch latency in speech recognition
Michalevsky et al. Gyrophone: Recognizing speech from gyroscope signals
US8983839B2 (en) System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment
US9619572B2 (en) Multiple web-based content category searching in mobile search application
US8635243B2 (en) Sending a communications header with voice recording to send metadata for use in speech recognition, formatting, and search mobile search application
JP6397067B2 (en) System and method for integrating a third-party service digital assistant
US9020819B2 (en) Recognition dictionary system and recognition dictionary system updating method
US20090326947A1 (en) System and method for spoken topic or criterion recognition in digital media and contextual advertising
US8527279B2 (en) Voice recognition grammar selection based on context
US20110054899A1 (en) Command and control utilizing content information in a mobile voice-to-speech application
JP4705023B2 (en) Speech recognition device, speech recognition method, and a program
US20110060587A1 (en) Command and control utilizing ancillary information in a mobile voice-to-speech application
EP3091535A2 (en) Multi-modal input on an electronic device
US9697822B1 (en) System and method for updating an adaptive speech recognition model
CN103069480B (en) Speech and noise models for speech recognition
Schalkwyk et al. “Your Word is my Command”: google search by voice: A case study
EP2577653B1 (en) Acoustic model adaptation using geographic information
US20110054895A1 (en) Utilizing user transmitted text to improve language model in mobile dictation application
US9754589B2 (en) Architecture for multi-domain natural language processing
US20130346068A1 (en) Voice-Based Image Tagging and Searching
US10176167B2 (en) System and method for inferring user intent from speech inputs
US9286892B2 (en) Language modeling in speech recognition
US20110112827A1 (en) System and method for hybrid processing in a natural language voice services environment
US20110054898A1 (en) Multiple web-based content search user interface in mobile search application
US20080228496A1 (en) Speech-centric multimodal user interface design in mobile technology

Legal Events

Date Code Title Description
AS Assignment

Owner name: EBAY INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FARRARO, ERIC J.;REEL/FRAME:030321/0584

Effective date: 20130429

AS Assignment

Owner name: PAYPAL, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EBAY INC.;REEL/FRAME:036170/0248

Effective date: 20150717