WO2002080142A2 - Voice recognition system using implicit speaker adaptation - Google Patents
- Publication number
- WO2002080142A2 (PCT/US2002/008727)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- acoustic model
- speaker
- acoustic
- pattern matching
- voice recognition
- Prior art date
- 2001-03-28
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/142—Hidden Markov Models [HMMs]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/065—Adaptation
- G10L15/07—Adaptation to the speaker
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/12—Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/142—Hidden Markov Models [HMMs]
- G10L15/144—Training of HMMs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
Definitions
- the present invention relates to speech signal processing. More particularly, the present invention relates to a novel voice recognition method and apparatus for achieving improved performance through unsupervised training.
- FIG. 1 shows a basic VR system having a preemphasis filter 102, an acoustic feature extraction (AFE) unit 104, and a pattern matching engine 110.
- the AFE unit 104 converts a series of digital voice samples into a set of measurement values (for example, extracted frequency components) called an acoustic feature vector.
- the pattern matching engine 110 matches a series of acoustic feature vectors with the templates contained in a VR acoustic model 112.
- VR pattern matching engines generally employ either Dynamic Time Warping (DTW) or Hidden Markov Model (HMM) techniques. Both DTW and HMM are well known in the art, and are described in detail in Rabiner, L. R. and Juang, B. H., FUNDAMENTALS OF SPEECH RECOGNITION, Prentice Hall, 1993.
- DTW Dynamic Time Warping
- HMM Hidden Markov Model
- the acoustic model 112 is generally either a HMM model or a DTW model.
- a DTW acoustic model may be thought of as a database of templates associated with each of the words that need to be recognized.
- a DTW template consists of a sequence of feature vectors that has been averaged over many examples of the associated word.
- DTW pattern matching generally involves locating a stored template that has minimal distance to the input feature vector sequence representing input speech.
- a template used in an HMM based acoustic model contains a detailed statistical description of the associated speech utterance.
- a HMM template stores a sequence of mean vectors, variance vectors and a set of transition probabilities.
- HMM pattern matching generally involves generating a probability for each template in the model based on the series of input feature vectors associated with the input speech. The template having the highest probability is selected as the most likely input utterance.
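As a concrete illustration of the DTW matching described above, the sketch below (ours, not the patent's; all names are illustrative) computes a minimal warped distance between an input sequence of acoustic feature vectors and one stored template, using Euclidean frame distances; recognition then keeps the template with the smallest distance.

```python
import numpy as np

def dtw_distance(frames: np.ndarray, template: np.ndarray) -> float:
    """Minimal-cost alignment of an input sequence (shape [T1, D])
    against a stored template (shape [T2, D])."""
    t1, t2 = len(frames), len(template)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(frames[i - 1] - template[j - 1])
            # Step choices: diagonal (match) or vertical/horizontal (time warp).
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return float(cost[t1, t2])

# Recognition keeps the template with minimal warped distance:
# best_word = min(templates, key=lambda w: dtw_distance(frames, templates[w]))
```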
- Training refers to the process of collecting speech samples of a particular speech segment or syllable from one or more speakers in order to generate templates in the acoustic model 112.
- Each template in the acoustic model is associated with a particular word or speech segment called an utterance class. There may be multiple templates in the acoustic model associated with the same utterance class.
- “Testing” refers to the procedure for matching the templates in the acoustic model to a sequence of feature vectors extracted from input speech. The performance of a given system depends largely upon the degree of match between the input speech of the end-user and the contents of the database, and hence on the match between the reference templates created through training and the speech samples used for VR testing.
- the two common types of training are supervised training and unsupervised training.
- in supervised training, the utterance class associated with each set of training feature vectors is known a priori.
- the speaker providing the input speech is often provided with a script of words or speech segments corresponding to the predetermined utterance classes.
- the feature vectors resulting from the reading of the script may then be incorporated into the acoustic model templates associated with the correct utterance classes.
- in unsupervised training, the utterance class associated with a set of training feature vectors is not known a priori. The utterance class must be correctly identified before a set of training feature vectors can be incorporated into the correct acoustic model template.
- the end-user provides speech acoustic feature vectors during both training and testing, so that the acoustic model 112 will match strongly with the speech of the end-user.
- An individualized acoustic model that is tailored to a single speaker is also called a speaker dependent (SD) acoustic model.
- Generating an SD acoustic model generally requires the end-user to provide a large amount of supervised training samples. First, the user must provide training samples for a large variety of utterance classes. Also, in order to achieve the best performance, the end-user must provide multiple templates representing a variety of possible acoustic environments for each utterance class.
- acoustic models built from the speech of many different speakers are referred to as speaker independent (SI) acoustic models, and are designed to have the best performance over a broad range of users. SI acoustic models, however, may not be optimized to any single user. A VR system that uses an SI acoustic model will not perform as well for a specific user as a VR system that uses an SD acoustic model tailored to that user.
- ideally, an SD acoustic model would be generated for each individual user. As discussed above, building SD acoustic models using supervised training is impractical. But using unsupervised training to generate an SD acoustic model can take a long time, during which VR performance based on a partial SD acoustic model may be very poor.
- the methods and apparatus disclosed herein are directed to a novel and improved voice recognition (VR) system that utilizes a combination of speaker independent (SI) and speaker dependent (SD) acoustic models.
- SI speaker independent
- SD speaker dependent
- At least one SI acoustic model is used in combination with at least one SD acoustic model to provide a level of speech recognition performance that at least equals that of a purely SI acoustic model.
- the disclosed hybrid SI/SD VR system continually uses unsupervised training to update the acoustic templates in the one or more SD acoustic models.
- the hybrid VR system uses the updated SD acoustic models, alone or in combination with the at least one SI acoustic model, to provide improved VR performance during VR testing.
- the word "exemplary” is used herein to mean "serving as an example, instance, or illustration.” Any embodiment described as an "exemplary embodiment” is not necessarily to be construed as being preferred or advantageous over another embodiment.
- FIG. 1 shows a basic voice recognition system.
- FIG. 2 shows a voice recognition system according to an exemplary embodiment.
- FIG. 3 shows a method for performing unsupervised training.
- FIG. 4 shows an exemplary approach to generating a combined matching score used in unsupervised training.
- FIG. 5 is a flowchart showing a method for performing voice recognition (testing) using both speaker independent (SI) and speaker dependent (SD) matching scores.
- FIG. 6 shows an approach to generating a combined matching score from both speaker independent (SI) and speaker dependent (SD) matching scores.
- FIG. 2 shows an exemplary embodiment of a hybrid voice recognition (VR) system as might be implemented within a wireless remote station 202.
- the remote station 202 communicates through a wireless channel (not shown) with a wireless communication network (not shown).
- the remote station 202 may be a wireless phone communicating with a wireless phone system.
- voice signals from a user are converted into electrical signals in a microphone (MIC) 210 and converted into digital speech samples in an analog-to-digital converter (ADC) 212.
- MIC microphone
- ADC analog-to-digital converter
- the digital sample stream is then filtered using a preemphasis (PE) filter 214, for example a finite impulse response (FIR) filter that attenuates low-frequency signal components.
- PE preemphasis
- FIR finite impulse response
- the filtered samples are then analyzed in an acoustic feature extraction (AFE) unit 216.
- the AFE unit 216 converts digital voice samples into acoustic feature vectors.
- the AFE unit 216 performs a Fourier Transform on a segment of consecutive digital samples to generate a vector of signal strengths corresponding to different frequency bins.
- the frequency bins have varying bandwidths in accordance with a bark scale.
- each acoustic feature vector is extracted from a series of speech samples collected over a fixed time interval. In an exemplary embodiment, these time intervals overlap. For example, acoustic features may be obtained from 20-millisecond intervals of speech data beginning every ten milliseconds, such that each two consecutive intervals share a 10-millisecond segment.
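As a rough sketch of the framing just described, assuming an 8 kHz sample rate (which the text does not specify) and omitting the bark-scale grouping of frequency bins for brevity, each 20-millisecond window beginning every 10 milliseconds yields one vector of per-bin signal strengths:

```python
import numpy as np

def extract_features(samples: np.ndarray, fs: int = 8000) -> np.ndarray:
    """One magnitude-spectrum feature vector per 10 ms of speech."""
    frame_len = int(0.020 * fs)   # 20 ms analysis window
    hop = int(0.010 * fs)         # new frame every 10 ms -> 10 ms overlap
    vectors = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame))  # signal strength per frequency bin
        vectors.append(spectrum)
    return np.array(vectors)
```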
- the acoustic feature vectors generated by the AFE unit 216 are provided to a VR engine 220, which performs pattern matching to characterize the acoustic feature vector based on the contents of one or more acoustic models 230, 232, and 234.
- a speaker-independent Hidden Markov Model (SIHMM) acoustic model 230, a speaker-independent Dynamic Time Warping (SIDTW) acoustic model 232, and a speaker-dependent (SD) acoustic model 234
- SIHMM speaker-independent Hidden Markov Model
- SIDTW speaker-independent Dynamic Time Warping
- SD speaker-dependent
- a remote station 202 might include just the SIHMM acoustic model 230 and the SD acoustic model 234 and omit the SIDTW acoustic model 232.
- a remote station 202 might include a single SIHMM acoustic model 230, an SD acoustic model 234, and two different SIDTW acoustic models 232.
- the SD acoustic model 234 may be of the HMM type or the DTW type or a combination of the two.
- the SD acoustic model 234 is a DTW acoustic model.
- the VR engine 220 performs pattern matching to determine the degree of matching between the acoustic feature vectors and the contents of one or more acoustic models 230, 232, and 234.
- the VR engine 220 generates matching scores based on matching acoustic feature vectors with the different acoustic templates in each of the acoustic models 230, 232, and 234.
- the VR engine 220 generates HMM matching scores based on matching a set of acoustic feature vectors with multiple HMM templates in the SIHMM acoustic model 230.
- the VR engine 220 generates DTW matching scores based on matching the acoustic feature vectors with multiple DTW templates in the SIDTW acoustic model 232.
- the VR engine 220 generates matching scores based on matching the acoustic feature vectors with the templates in the SD acoustic model 234.
- each template in an acoustic model is associated with an utterance class.
- the VR engine 220 combines scores for templates associated with the same utterance class to create a combined matching score to be used in unsupervised training.
- the VR engine 220 combines SIHMM and SIDTW scores obtained from correlating an input set of acoustic feature vectors to generate a combined matching score.
- the VR engine 220 determines whether to store the input set of acoustic feature vectors as an SD template in the SD acoustic model 234.
- unsupervised training to update the SD acoustic model 234 is performed using exclusively SI matching scores. This prevents additive errors that might otherwise result from using an evolving SD acoustic model 234 for unsupervised training of itself. An exemplary method of performing this unsupervised training is described in greater detail below.
- the VR engine 220 uses the various acoustic models (230, 232, 234) during testing.
- the VR engine 220 retrieves matching scores from the acoustic models (230, 232, 234) and generates combined matching scores for each utterance class.
- the combined matching scores are used to select the utterance class that best matches the input speech.
- the VR engine 220 groups consecutive utterance classes together as necessary to recognize whole words or phrases.
- the VR engine 220 then provides information about the recognized word or phrase to a control processor 222, which uses the information to determine the appropriate response to the speech information or command.
- the control processor 222 may provide feedback to the user through a display or other user interface.
- the control processor 222 may send a message through a wireless modem 218 and an antenna 224 to a wireless network (not shown), initiating a mobile phone call to a destination phone number associated with the person whose name was uttered and recognized.
- the wireless modem 218 may transmit signals through any of a variety of wireless channel types including CDMA, TDMA, or FDMA.
- the wireless modem 218 may be replaced with other types of communications interfaces that communicate over a non-wireless channel without departing from the scope of the described embodiments.
- FIG. 3 is a flowchart showing an exemplary method for performing unsupervised training.
- analog speech data is sampled in an analog-to-digital converter (ADC) (212 in FIG. 2).
- ADC analog-to-digital converter
- PE preemphasis
- AFE acoustic feature extraction unit
- the VR engine (220 in FIG. 2) receives the input acoustic feature vectors from the AFE unit 216 and performs pattern matching of the input acoustic feature vectors against the contents of the SI acoustic models (230 and 232 in FIG. 2).
- the VR engine 220 generates matching scores from the results of the pattern matching.
- the VR engine 220 generates SIHMM matching scores by matching the input acoustic feature vectors with the SIHMM acoustic model 230, and generates SIDTW matching scores by matching the input acoustic feature vectors with the SIDTW acoustic model 232.
- Each acoustic template in the SIHMM and SIDTW acoustic models (230 and 232) is associated with a particular utterance class.
- SIHMM and SIDTW scores are combined to form combined matching scores.
- FIG. 4 shows the generation of combined matching scores for use in unsupervised training.
- the speaker independent combined matching score S_COMB_SI for a particular utterance class is a weighted sum according to EQN. 1, S_COMB_SI = W1·SIHMM_T + W2·SIHMM_NT + W3·SIHMM_G + W4·SIDTW_T + W5·SIDTW_NT + W6·SIDTW_G, where:
- SIHMM_T is the SIHMM matching score for the target utterance class
- SIHMM_NT is the next best matching score for a template in the SIHMM acoustic model that is associated with a non-target utterance class (an utterance class other than the target utterance class)
- SIHMM_G is the SIHMM matching score for the "garbage" utterance class
- SIDTW_T is the SIDTW matching score for the target utterance class
- SIDTW_NT is the next best matching score for a template in the SIDTW acoustic model that is associated with a non-target utterance class
- SIDTW_G is the SIDTW matching score for the "garbage" utterance class.
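Read this way, EQN. 1 is a six-term weighted sum. A minimal sketch follows; the weight values are placeholders (the text states only that the weights are tuned for training performance, with the target-score weights taken as negative, per the discussion below, so that a close match, i.e. a low distance-like score, raises the combined score):

```python
def combined_score_si(sihmm_t, sihmm_nt, sihmm_g,
                      sidtw_t, sidtw_nt, sidtw_g,
                      w=(-1.0, 1.0, 0.5, -1.0, 1.0, 0.5)):
    """Weighted sum of EQN. 1; w = (W1, ..., W6), values illustrative."""
    return (w[0] * sihmm_t + w[1] * sihmm_nt + w[2] * sihmm_g +
            w[3] * sidtw_t + w[4] * sidtw_nt + w[5] * sidtw_g)
```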
- the various individual matching scores SIHMM_n and SIDTW_n may be viewed as representing a distance value between a series of input acoustic feature vectors and a template in the acoustic model. The greater the distance between the input acoustic feature vectors and a template, the greater the matching score. A close match between a template and the input acoustic feature vectors yields a very low matching score. If comparing a series of input acoustic feature vectors against two templates associated with different utterance classes yields two nearly equal matching scores, then the VR system may be unable to recognize either as the "correct" utterance class.
- SIHMM_G and SIDTW_G are matching scores for "garbage" utterance classes.
- the template or templates associated with the garbage utterance class are called garbage templates and do not correspond to a specific word or phrase. For this reason, they tend to be equally uncorrelated to all input speech.
- Garbage matching scores are useful as a sort of noise floor measurement in a VR system.
- a series of input acoustic feature vectors should have a much better degree of matching with a template associated with a target utterance class than with the garbage template before the utterance class can be confidently recognized.
- the input acoustic feature vectors should have a higher degree of matching with templates associated with that utterance class than with garbage templates or templates associated with other utterance classes.
- Combined matching scores generated from a variety of acoustic models can more confidently discriminate between utterance classes than matching scores based on only one acoustic model.
- the VR system uses such combined matching scores to determine whether to replace a template in the SD acoustic model (234 in FIG. 2) with one derived from a new set of input acoustic feature vectors.
- the weighting factors (W1 . . . W6) are selected to provide the best training performance over all acoustic environments.
- the weighting factors (W1 . . . W6) are constant for all utterance classes.
- the W_n used to create the combined matching score for a first target utterance class is the same as the W_n value used to create the combined matching score for another target utterance class.
- the weighting factors vary based on the target utterance class.
- Other ways of combining the matching scores than the one shown in FIG. 4 will be obvious to one skilled in the art, and are to be viewed as within the scope of the embodiments described herein.
- fewer or more than six weighted inputs may also be used.
- Another obvious variation would be to generate a combined matching score based on one type of acoustic model. For example, a combined matching score could be generated based on SIHMM_T, SIHMM_NT, and SIHMM_G. Or, a combined matching score could be generated based on SIDTW_T, SIDTW_NT, and SIDTW_G.
- W1 and W4 are negative numbers, and a greater (or less negative) value of S_COMB indicates a greater degree of matching (smaller distance) between a target utterance class and a series of input acoustic feature vectors.
- alternatively, the scores may be defined such that a greater degree of matching corresponds to a lesser value, without departing from the scope of the disclosed embodiments.
- combined matching scores are generated for utterance classes associated with templates in the SIHMM and SIDTW acoustic models (230 and 232).
- the remote station 202 compares the combined matching scores with the combined matching scores stored with corresponding templates in the SD acoustic model.
- a new SD template is generated from the new series of input acoustic feature vectors.
- the series of input acoustic vectors itself constitutes the new SD template.
- the older template is then replaced with the new template, and the combined matching score associated with the new template is stored in the SD acoustic model to be used in future comparisons.
- unsupervised training is used to update one or more templates in a speaker dependent Hidden Markov Model (SDHMM) acoustic model.
- SDHMM speaker dependent Hidden Markov Model
- This SDHMM acoustic model could be used either in place of an SDDTW model or in addition to an SDDTW acoustic model within the SD acoustic model 234.
- the comparison at step 312 also includes comparing the combined matching score of a prospective new SD template with a constant training threshold. Even if there has not yet been any template stored in an SD acoustic model for a particular utterance class, a new template will not be stored in the SD acoustic model unless it has a combined matching score that is better (indicative of a greater degree of matching) than the training threshold value.
- the SD acoustic model is populated by default with templates from the SI acoustic model.
- Such an initialization provides an alternate approach to ensuring that VR performance using the SD acoustic model will start out at least as good as VR performance using just the SI acoustic model.
- the VR performance using the SD acoustic model will surpass VR performance using just the SI acoustic model.
- the VR system allows a user to perform supervised training.
- the user must put the VR system into a supervised training mode before performing such supervised training.
- the VR system has a priori knowledge of the correct utterance class. If the combined matching score for the input speech is better than the combined matching score for the SD template previously stored for that utterance class, then the input speech is used to form a replacement SD template.
- the VR system allows the user to force replacement of existing SD templates during supervised training.
- the SD acoustic model may be designed with room for multiple (two or more) templates for a single utterance class.
- two templates are stored in the SD acoustic model for each utterance class.
- the comparison at step 312 therefore entails comparing the matching score obtained with a new template with the matching scores obtained for both templates in the SD acoustic model for the same utterance class. If the new template has a better matching score than either older template in the SD acoustic model, then at step 314 the SD acoustic model template having the worst matching score is replaced with the new template. If the matching score of the new template is no better than either older template, then step 314 is skipped.
- the matching score obtained with the new template is compared against a matching score threshold. Until a new template has a matching score that is better than this threshold, it will not be used to overwrite the prior contents of the SD acoustic model.
- Obvious variations, such as storing the SD acoustic model templates in sorted order according to combined matching score and comparing new matching scores only against the lowest, are anticipated and are to be considered within the scope of the embodiments disclosed herein. Obvious variations on the number of templates stored in the acoustic model for each utterance class are also anticipated.
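A hedged sketch of this update rule (steps 312 and 314), assuming a dictionary keyed by utterance class, two template slots per class, and the greater-is-better convention for combined scores; all names are illustrative:

```python
def maybe_update_sd_model(sd_model, utterance_class, new_template,
                          new_score, training_threshold, slots=2):
    """Store new_template if its combined score beats the training
    threshold and, once the slots are full, the worst stored template."""
    stored = sd_model.setdefault(utterance_class, [])  # list of (score, template)
    if new_score <= training_threshold:
        return False                                   # below threshold: discard
    if len(stored) < slots:
        stored.append((new_score, new_template))       # fill an empty slot
        return True
    worst = min(range(slots), key=lambda i: stored[i][0])
    if new_score > stored[worst][0]:                   # beats the worst template
        stored[worst] = (new_score, new_template)
        return True
    return False
```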
- FIG. 5 is a flowchart showing an exemplary method for performing VR testing using a combination of SI and SD acoustic models. Steps 302, 304, 306, and 308 are the same as described for FIG. 3. The exemplary method diverges from the method shown in FIG. 3 at step 510. At step 510, the VR engine 220 generates SD matching scores based on comparing the input acoustic feature vectors with templates in the SD acoustic model.
- the SD acoustic model may contain multiple templates for a single utterance class.
- the VR engine 220 generates hybrid combined matching scores for use in VR testing. In an exemplary embodiment, these hybrid combined matching scores are based on both individual SI and individual SD matching scores.
- the word or utterance having the best combined matching score is selected and compared against a testing threshold.
- the weights [W1 . . . W6] used to generate combined scores for training are equal to the weights [W1 . . . W6] used to generate combined scores for testing (as shown in FIG. 6), but the training threshold is not equal to the testing threshold.
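The selection-and-threshold test just described might look like the following sketch (names illustrative; a greater combined score is taken to indicate a better match):

```python
def recognize(hybrid_scores: dict, testing_threshold: float):
    """Pick the utterance class with the best hybrid combined score,
    accepting it only if it clears the testing threshold."""
    best = max(hybrid_scores, key=hybrid_scores.get)
    return best if hybrid_scores[best] > testing_threshold else None
```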
- FIG. 6 shows the generation of hybrid combined matching scores performed at step 512.
- the exemplary embodiment shown operates identically to the combiner shown in FIG. 4, except that the weighting factor W4 is applied to DTW_T instead of SIDTW_T and the weighting factor W5 is applied to DTW_NT instead of SIDTW_NT. DTW_T (the dynamic time warping matching score for the target utterance class) is selected from the best of the SIDTW and SDDTW scores associated with the target utterance class. Similarly, DTW_NT (the dynamic time warping matching score for the remaining non-target utterance classes) is selected from the best of the SIDTW and SDDTW scores associated with non-target utterance classes.
- the SI/SD hybrid score S_COMB_H for a particular utterance class is a weighted sum according to EQN. 2, S_COMB_H = W1·SIHMM_T + W2·SIHMM_NT + W3·SIHMM_G + W4·DTW_T + W5·DTW_NT + W6·SIDTW_G, where SIHMM_T, SIHMM_NT, SIHMM_G and SIDTW_G are the same as in EQN. 1. Specifically, in EQN. 2:
- SIHMM_T is the SIHMM matching score for the target utterance class
- SIHMM_NT is the next best matching score for a template in the SIHMM acoustic model that is associated with a non-target utterance class (an utterance class other than the target utterance class)
- SIHMM_G is the SIHMM matching score for the "garbage" utterance class
- DTW_T is the best DTW matching score for SI and SD templates corresponding to the target utterance class
- DTW_NT is the best DTW matching score for SI and SD templates corresponding to non-target utterance classes
- SIDTW_G is the SIDTW matching score for the "garbage" utterance class.
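Under the same assumptions as the EQN. 1 sketch above, a hybrid combiner might look like the following; because the individual matching scores behave like distances (lower is better), the "best" DTW score on each side is taken as the minimum:

```python
def hybrid_score(sihmm_t, sihmm_nt, sihmm_g,
                 sidtw_t_scores, sddtw_t_scores,
                 sidtw_nt_scores, sddtw_nt_scores,
                 sidtw_g,
                 w=(-1.0, 1.0, 0.5, -1.0, 1.0, 0.5)):
    """EQN. 2: like EQN. 1, but W4 and W5 weight the best DTW scores
    drawn from both the SI and SD templates. Weights are illustrative."""
    dtw_t = min(list(sidtw_t_scores) + list(sddtw_t_scores))
    dtw_nt = min(list(sidtw_nt_scores) + list(sddtw_nt_scores))
    return (w[0] * sihmm_t + w[1] * sihmm_nt + w[2] * sihmm_g +
            w[3] * dtw_t + w[4] * dtw_nt + w[5] * sidtw_g)
```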
- the SI/SD hybrid score S_COMB_H is a combination of individual SI and SD matching scores. The resulting combined matching score does not rely entirely on either SI or SD acoustic models. If the matching score SIDTW_T is better than any SDDTW_T score, then the SI/SD hybrid score is computed from the better SIDTW_T score. Similarly, if the matching score SDDTW_T is better than any SIDTW_T score, then the SI/SD hybrid score is computed from the better SDDTW_T score.
- the VR system may still recognize the input speech based on the SI portions of the SI/SD hybrid scores.
- poor SD matching scores might have a variety of causes including differences between acoustic environments during training and testing or perhaps poor quality input used for training.
- the SI scores are weighted less heavily than the SD scores, or may even be ignored entirely.
- DTW_T is selected from the best of the SDDTW scores associated with the target utterance class, ignoring the SIDTW scores for the target utterance class.
- DTW_NT may be selected from the best of either the SIDTW or SDDTW scores associated with non-target utterance classes, instead of using both sets of scores.
- though the exemplary embodiment is described using only SDDTW acoustic models for speaker dependent modeling, the hybrid approach described herein is equally applicable to a VR system using SDHMM acoustic models, or even a combination of SDDTW and SDHMM acoustic models.
- the weighting factor W1 could be applied to a matching score selected from the best of the SIHMM_T and SDHMM_T scores.
- the weighting factor W2 could be applied to a matching score selected from the best of the SIHMM_NT and SDHMM_NT scores.
- disclosed herein, therefore, are a VR method and apparatus utilizing a combination of SI and SD acoustic models for improved VR performance during unsupervised training and testing.
- DSP digital signal processor
- ASIC application specific integrated circuit
- FPGA field programmable gate array
- a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may be integral to the processor.
- the processor and the storage medium may reside in an ASIC.
- the processor and the storage medium may reside as discrete components in a user terminal.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Probability & Statistics with Applications (AREA)
- Artificial Intelligence (AREA)
- Circuit For Audible Band Transducer (AREA)
- Electrically Operated Instructional Devices (AREA)
- Telephonic Communication Services (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
- Telephone Function (AREA)
- Complex Calculations (AREA)
- Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020037012775A KR100933107B1 (en) | 2001-03-28 | 2002-03-22 | Speech Recognition System Using Implicit Speaker Adaptation |
KR1020097017621A KR101031717B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
CN028105869A CN1531722B (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
DK02725288T DK1374223T3 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system that uses implicit speech customization |
AU2002255863A AU2002255863A1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
KR1020097017599A KR101031744B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
KR1020077024057A KR100933109B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
KR1020077024058A KR100933108B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
KR1020097017648A KR101031660B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
JP2002578283A JP2004530155A (en) | 2001-03-28 | 2002-03-22 | Speech recognition system using technology that adapts implicitly to speaker |
EP02725288A EP1374223B1 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
DE60222249T DE60222249T2 (en) | 2001-03-28 | 2002-03-22 | SPEECH RECOGNITION SYSTEM BY IMPLICIT SPEAKER ADAPTION |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/821,606 | 2001-03-28 | ||
US09/821,606 US20020143540A1 (en) | 2001-03-28 | 2001-03-28 | Voice recognition system using implicit speaker adaptation |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2002080142A2 true WO2002080142A2 (en) | 2002-10-10 |
WO2002080142A3 WO2002080142A3 (en) | 2003-03-13 |
Family
ID=25233818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/008727 WO2002080142A2 (en) | 2001-03-28 | 2002-03-22 | Voice recognition system using implicit speaker adaptation |
Country Status (13)
Country | Link |
---|---|
US (1) | US20020143540A1 (en) |
EP (3) | EP1374223B1 (en) |
JP (5) | JP2004530155A (en) |
KR (6) | KR101031660B1 (en) |
CN (3) | CN101221759B (en) |
AT (3) | ATE372573T1 (en) |
AU (1) | AU2002255863A1 (en) |
DE (2) | DE60233763D1 (en) |
DK (1) | DK1374223T3 (en) |
ES (3) | ES2371094T3 (en) |
HK (2) | HK1092269A1 (en) |
TW (1) | TW577043B (en) |
WO (1) | WO2002080142A2 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020143540A1 (en) * | 2001-03-28 | 2002-10-03 | Narendranath Malayath | Voice recognition system using implicit speaker adaptation |
US20040148169A1 (en) * | 2003-01-23 | 2004-07-29 | Aurilab, Llc | Speech recognition with shadow modeling |
KR20050059766A (en) * | 2003-12-15 | 2005-06-21 | 엘지전자 주식회사 | Voice recognition method using dynamic time warping |
US7949533B2 (en) * | 2005-02-04 | 2011-05-24 | Vococollect, Inc. | Methods and systems for assessing and improving the performance of a speech recognition system |
US8200495B2 (en) | 2005-02-04 | 2012-06-12 | Vocollect, Inc. | Methods and systems for considering information about an expected response when performing speech recognition |
US7827032B2 (en) * | 2005-02-04 | 2010-11-02 | Vocollect, Inc. | Methods and systems for adapting a model for a speech recognition system |
US7895039B2 (en) | 2005-02-04 | 2011-02-22 | Vocollect, Inc. | Methods and systems for optimizing model adaptation for a speech recognition system |
US7865362B2 (en) * | 2005-02-04 | 2011-01-04 | Vocollect, Inc. | Method and system for considering information about an expected response when performing speech recognition |
US8762148B2 (en) * | 2006-02-27 | 2014-06-24 | Nec Corporation | Reference pattern adaptation apparatus, reference pattern adaptation method and reference pattern adaptation program |
US20070219801A1 (en) * | 2006-03-14 | 2007-09-20 | Prabha Sundaram | System, method and computer program product for updating a biometric model based on changes in a biometric feature of a user |
US8244545B2 (en) * | 2006-03-30 | 2012-08-14 | Microsoft Corporation | Dialog repair based on discrepancies between user model predictions and speech recognition results |
EP2019985B1 (en) * | 2006-05-12 | 2018-04-04 | Nuance Communications Austria GmbH | Method for changing over from a first adaptive data processing version to a second adaptive data processing version |
CN101154379B (en) * | 2006-09-27 | 2011-11-23 | 夏普株式会社 | Method and device for locating keywords in voice and voice recognition system |
US7552871B2 (en) * | 2006-12-19 | 2009-06-30 | Nordic Id Oy | Method for collecting data fast in inventory systems and wireless apparatus thereto |
US9026444B2 (en) | 2009-09-16 | 2015-05-05 | At&T Intellectual Property I, L.P. | System and method for personalization of acoustic models for automatic speech recognition |
US9478216B2 (en) | 2009-12-08 | 2016-10-25 | Nuance Communications, Inc. | Guest speaker robust adapted speech recognition |
JP2012168477A (en) * | 2011-02-16 | 2012-09-06 | Nikon Corp | Noise estimation device, signal processor, imaging apparatus, and program |
US8914290B2 (en) | 2011-05-20 | 2014-12-16 | Vocollect, Inc. | Systems and methods for dynamically improving user intelligibility of synthesized speech in a work environment |
CN102999161B (en) * | 2012-11-13 | 2016-03-02 | 科大讯飞股份有限公司 | A kind of implementation method of voice wake-up module and application |
JP5982297B2 (en) * | 2013-02-18 | 2016-08-31 | 日本電信電話株式会社 | Speech recognition device, acoustic model learning device, method and program thereof |
US9978395B2 (en) | 2013-03-15 | 2018-05-22 | Vocollect, Inc. | Method and system for mitigating delay in receiving audio stream during production of sound from audio stream |
US9282096B2 (en) | 2013-08-31 | 2016-03-08 | Steven Goldstein | Methods and systems for voice authentication service leveraging networking |
US20150081294A1 (en) * | 2013-09-19 | 2015-03-19 | Maluuba Inc. | Speech recognition for user specific language |
US10405163B2 (en) | 2013-10-06 | 2019-09-03 | Staton Techiya, Llc | Methods and systems for establishing and maintaining presence information of neighboring bluetooth devices |
JP5777178B2 (en) * | 2013-11-27 | 2015-09-09 | 国立研究開発法人情報通信研究機構 | Statistical acoustic model adaptation method, acoustic model learning method suitable for statistical acoustic model adaptation, storage medium storing parameters for constructing a deep neural network, and statistical acoustic model adaptation Computer programs |
CN104700831B (en) * | 2013-12-05 | 2018-03-06 | 国际商业机器公司 | The method and apparatus for analyzing the phonetic feature of audio file |
AU2015266863B2 (en) * | 2014-05-30 | 2018-03-15 | Apple Inc. | Multi-command single utterance input method |
JP6118838B2 (en) * | 2014-08-21 | 2017-04-19 | 本田技研工業株式会社 | Information processing apparatus, information processing system, information processing method, and information processing program |
US9959863B2 (en) * | 2014-09-08 | 2018-05-01 | Qualcomm Incorporated | Keyword detection using speaker-independent keyword models for user-designated keywords |
US20170011406A1 (en) * | 2015-02-10 | 2017-01-12 | NXT-ID, Inc. | Sound-Directed or Behavior-Directed Method and System for Authenticating a User and Executing a Transaction |
KR102371697B1 (en) | 2015-02-11 | 2022-03-08 | 삼성전자주식회사 | Operating Method for Voice function and electronic device supporting the same |
GB2557132B (en) * | 2015-08-24 | 2021-06-23 | Ford Global Tech Llc | Dynamic acoustic model for vehicle |
US10714121B2 (en) | 2016-07-27 | 2020-07-14 | Vocollect, Inc. | Distinguishing user speech from background speech in speech-dense environments |
EP4293661A3 (en) * | 2017-04-20 | 2024-02-21 | Google LLC | Multi-user authentication on a device |
CN111243606B (en) * | 2017-05-12 | 2023-07-21 | 苹果公司 | User-specific acoustic models |
WO2018208859A1 (en) * | 2017-05-12 | 2018-11-15 | Apple Inc. | User-specific acoustic models |
US10896673B1 (en) * | 2017-09-21 | 2021-01-19 | Wells Fargo Bank, N.A. | Authentication of impaired voices |
CN107993653A (en) * | 2017-11-30 | 2018-05-04 | 南京云游智能科技有限公司 | The incorrect pronunciations of speech recognition apparatus correct update method and more new system automatically |
KR102263973B1 (en) | 2019-04-05 | 2021-06-11 | 주식회사 솔루게이트 | Artificial intelligence based scheduling system |
KR102135182B1 (en) | 2019-04-05 | 2020-07-17 | 주식회사 솔루게이트 | Personalized service system optimized on AI speakers using voiceprint recognition |
CN113261056B (en) * | 2019-12-04 | 2024-08-02 | 谷歌有限责任公司 | Speaker perception using speaker dependent speech models |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6045298A (en) * | 1983-08-22 | 1985-03-11 | 富士通株式会社 | Word voice recognition equipment |
JPS6332596A (en) * | 1986-07-25 | 1988-02-12 | 日本電信電話株式会社 | Voice recognition equipment |
JPH01309099A (en) * | 1987-06-04 | 1989-12-13 | Ricoh Co Ltd | Speech responding device |
US5315689A (en) * | 1988-05-27 | 1994-05-24 | Kabushiki Kaisha Toshiba | Speech recognition system having word-based and phoneme-based recognition means |
JPH02232696A (en) * | 1989-03-06 | 1990-09-14 | Toshiba Corp | Voice recognition device |
JP2989231B2 (en) * | 1989-10-05 | 1999-12-13 | 株式会社リコー | Voice recognition device |
JPH04280299A (en) * | 1991-03-08 | 1992-10-06 | Ricoh Co Ltd | Speech recognition device |
JPH05188991A (en) * | 1992-01-16 | 1993-07-30 | Oki Electric Ind Co Ltd | Speech recognition device |
US5502774A (en) * | 1992-06-09 | 1996-03-26 | International Business Machines Corporation | Automatic recognition of a consistent message using multiple complimentary sources of information |
JPH08314493A (en) * | 1995-05-22 | 1996-11-29 | Sanyo Electric Co Ltd | Voice recognition method, numeral line voice recognition device and video recorder system |
JPH0926799A (en) * | 1995-07-12 | 1997-01-28 | Aqueous Res:Kk | Speech recognition device |
US5719921A (en) * | 1996-02-29 | 1998-02-17 | Nynex Science & Technology | Methods and apparatus for activating telephone services in response to speech |
JPH1097276A (en) * | 1996-09-20 | 1998-04-14 | Canon Inc | Method and device for speech recognition, and storage medium |
US6151575A (en) * | 1996-10-28 | 2000-11-21 | Dragon Systems, Inc. | Rapid adaptation of speech models |
US6003002A (en) * | 1997-01-02 | 1999-12-14 | Texas Instruments Incorporated | Method and system of adapting speech recognition models to speaker environment |
US5893059A (en) * | 1997-04-17 | 1999-04-06 | Nynex Science And Technology, Inc. | Speech recoginition methods and apparatus |
US5913192A (en) * | 1997-08-22 | 1999-06-15 | At&T Corp | Speaker identification with user-selected password phrases |
US6243677B1 (en) * | 1997-11-19 | 2001-06-05 | Texas Instruments Incorporated | Method of out of vocabulary word rejection |
US6226612B1 (en) * | 1998-01-30 | 2001-05-01 | Motorola, Inc. | Method of evaluating an utterance in a speech recognition system |
JP3865924B2 (en) * | 1998-03-26 | 2007-01-10 | 松下電器産業株式会社 | Voice recognition device |
US6223155B1 (en) * | 1998-08-14 | 2001-04-24 | Conexant Systems, Inc. | Method of independently creating and using a garbage model for improved rejection in a limited-training speaker-dependent speech recognition system |
JP2000137495A (en) * | 1998-10-30 | 2000-05-16 | Toshiba Corp | Device and method for speech recognition |
US20020143540A1 (en) * | 2001-03-28 | 2002-10-03 | Narendranath Malayath | Voice recognition system using implicit speaker adaptation |
-
2001
- 2001-03-28 US US09/821,606 patent/US20020143540A1/en not_active Abandoned
-
2002
- 2002-03-22 WO PCT/US2002/008727 patent/WO2002080142A2/en active Application Filing
- 2002-03-22 ES ES07014802T patent/ES2371094T3/en not_active Expired - Lifetime
- 2002-03-22 JP JP2002578283A patent/JP2004530155A/en not_active Withdrawn
- 2002-03-22 AT AT02725288T patent/ATE372573T1/en not_active IP Right Cessation
- 2002-03-22 EP EP02725288A patent/EP1374223B1/en not_active Expired - Lifetime
- 2002-03-22 CN CN200710196697.4A patent/CN101221759B/en not_active Expired - Lifetime
- 2002-03-22 ES ES02725288T patent/ES2288549T3/en not_active Expired - Lifetime
- 2002-03-22 KR KR1020097017648A patent/KR101031660B1/en not_active IP Right Cessation
- 2002-03-22 CN CNA200710196696XA patent/CN101221758A/en active Pending
- 2002-03-22 KR KR1020077024057A patent/KR100933109B1/en not_active IP Right Cessation
- 2002-03-22 AU AU2002255863A patent/AU2002255863A1/en not_active Abandoned
- 2002-03-22 DK DK02725288T patent/DK1374223T3/en active
- 2002-03-22 KR KR1020037012775A patent/KR100933107B1/en not_active IP Right Cessation
- 2002-03-22 KR KR1020097017599A patent/KR101031744B1/en not_active IP Right Cessation
- 2002-03-22 EP EP07014802A patent/EP1850324B1/en not_active Expired - Lifetime
- 2002-03-22 AT AT07014802T patent/ATE525719T1/en not_active IP Right Cessation
- 2002-03-22 EP EP05025989A patent/EP1628289B1/en not_active Expired - Lifetime
- 2002-03-22 ES ES05025989T patent/ES2330857T3/en not_active Expired - Lifetime
- 2002-03-22 DE DE60233763T patent/DE60233763D1/en not_active Expired - Lifetime
- 2002-03-22 KR KR1020077024058A patent/KR100933108B1/en not_active IP Right Cessation
- 2002-03-22 DE DE60222249T patent/DE60222249T2/en not_active Expired - Lifetime
- 2002-03-22 KR KR1020097017621A patent/KR101031717B1/en not_active IP Right Cessation
- 2002-03-22 CN CN028105869A patent/CN1531722B/en not_active Expired - Fee Related
- 2002-03-22 AT AT05025989T patent/ATE443316T1/en not_active IP Right Cessation
- 2002-03-26 TW TW091105907A patent/TW577043B/en not_active IP Right Cessation
-
2006
- 2006-08-14 HK HK06109012.9A patent/HK1092269A1/en not_active IP Right Cessation
-
2007
- 2007-10-26 JP JP2007279235A patent/JP4546512B2/en not_active Expired - Fee Related
-
2008
- 2008-04-09 JP JP2008101180A patent/JP4546555B2/en not_active Expired - Fee Related
- 2008-04-17 HK HK08104363.3A patent/HK1117260A1/en not_active IP Right Cessation
-
2010
- 2010-04-19 JP JP2010096043A patent/JP2010211221A/en active Pending
-
2013
- 2013-03-04 JP JP2013041687A patent/JP2013152475A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5091947A (en) * | 1987-06-04 | 1992-02-25 | Ricoh Company, Ltd. | Speech recognition method and apparatus |
US5734793A (en) * | 1994-09-07 | 1998-03-31 | Motorola Inc. | System for recognizing spoken sounds from continuous speech and method of using same |
EP1011094A1 (en) * | 1998-12-17 | 2000-06-21 | Sony International (Europe) GmbH | Semi-supervised speaker adaption |
WO2002007148A1 (en) * | 2000-07-18 | 2002-01-24 | Qualcomm Incorporated | System and method for voice recognition with a plurality of voice recognition engines |
WO2002021513A1 (en) * | 2000-09-08 | 2002-03-14 | Qualcomm Incorporated | Combining dtw and hmm in speaker dependent and independent modes for speech recognition |
Non-Patent Citations (1)
Title |
---|
FARRELL K R: "Model combination and weight selection criteria for speaker verification" NEURAL NETWORKS FOR SIGNAL PROCESSING IX: PROCEEDINGS OF THE 1999 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP (CAT. NO.98TH8468), 1999, pages 439-448, XP002212899 MADISON, WI, USA, Piscataway, NJ, USA, IEEE, USA ISBN: 0-7803-5673-X * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2409560A (en) * | 2003-12-23 | 2005-06-29 | Ibm | Interactive speech recognition model |
GB2409560B (en) * | 2003-12-23 | 2007-07-25 | Ibm | Interactive speech recognition model |
US8160876B2 (en) | 2003-12-23 | 2012-04-17 | Nuance Communications, Inc. | Interactive speech recognition model |
US8239198B2 (en) | 2005-08-09 | 2012-08-07 | Nuance Communications, Inc. | Method and system for creation of voice training profiles with multiple methods with uniform server mechanism using heterogeneous devices |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1374223B1 (en) | Voice recognition system using implicit speaker adaptation | |
US7024359B2 (en) | Distributed voice recognition system using acoustic feature vector modification | |
US5960397A (en) | System and method of recognizing an acoustic environment to adapt a set of based recognition models to the current acoustic environment for subsequent speech recognition | |
US6836758B2 (en) | System and method for hybrid voice recognition | |
US20020178004A1 (en) | Method and apparatus for voice recognition | |
Sivaraman et al. | Higher Accuracy of Hindi Speech Recognition Due to Online Speaker Adaptation | |
Ming et al. | Speaker verification over handheld devices with realistic noisy speech data | |
Kim et al. | Speaker adaptation techniques for speech recognition with a speaker-independent phonetic recognizer | |
Stokes-Rees | A study of the automatic speech recognition process and speaker adaptation | |
Marí Hilario | Discriminative connectionist approaches for automatic speech recognition in cars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2002725288 Country of ref document: EP Ref document number: 2002578283 Country of ref document: JP Ref document number: 1539/CHENP/2003 Country of ref document: IN Ref document number: 1020037012775 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 028105869 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 2002725288 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWG | Wipo information: grant in national office |
Ref document number: 2002725288 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020077024058 Country of ref document: KR Ref document number: 1020077024057 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020097017621 Country of ref document: KR Ref document number: 1020097017599 Country of ref document: KR Ref document number: 1020097017648 Country of ref document: KR |