US20100057453A1 - Voice activity detection system and method - Google Patents
Voice activity detection system and method
- Publication number
- US20100057453A1 (application US12/515,048)
- Authority
- US
- United States
- Prior art keywords
- frames
- feature vector
- determining
- weighting factor
- classes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
Abstract
Description
- 1. Field
- Embodiments of the invention relate in general to voice activity detection and, more specifically, to discriminating between event types, such as speech and noise.
- 2. Background
- Voice activity detection (VAD) is an essential part of many speech processing tasks such as speech coding, hands-free telephony and speech recognition. For example, in mobile communication the transmission bandwidth over the wireless interface is considerably reduced when the mobile device detects the absence of speech. A second example is the automatic speech recognition system (ASR). VAD is important in ASR because of restrictions regarding memory and accuracy. Inaccurate detection of the speech boundaries causes serious problems such as degradation of recognition performance and deterioration of speech quality.
- VAD has attracted significant interest in speech recognition. In general, two major approaches are used for designing such a system: threshold comparison techniques and model based techniques. In the threshold comparison approach, a variety of features, for example energy, zero crossings and autocorrelation coefficients, are extracted from the input signal and then compared against some thresholds. Some approaches can be found in the following publications: Li, Q., Zheng, J., Zhou, Q., and Lee, C.-H., “A robust, real-time endpoint detector with energy normalization for ASR in adverse environments,” Proc. ICASSP, pp. 233-236, 2001; L. R. Rabiner et al., “Application of an LPC Distance Measure to the Voiced-Unvoiced-Silence Detection Problem,” IEEE Trans. on ASSP, vol. ASSP-25, no. 4, pp. 338-343, August 1977.
- The thresholds are usually estimated from noise-only segments and updated dynamically. Their performance can be improved by using adaptive thresholds or appropriate filtering. See, for example, Martin, A., Charlet, D., and Mauuary, L., “Robust Speech/Nonspeech Detection Using LDA applied to MFCC,” Proc. ICASSP, pp. 237-240, 2001; Monkowski, M., Automatic Gain Control in a Speech Recognition System, U.S. Pat. No. 6,314,396; and Lie Lu, Hong-Jiang Zhang, H. Jiang, “Content Analysis for Audio Classification and Segmentation,” IEEE Trans. Speech & Audio Processing, Vol. 10, No. 7, pp. 504-516, October 2002.
- Alternatively, model-based VAD techniques have been widely introduced to reliably distinguish speech from other complex environment sounds. Some approaches can be found in the following publications: J. Ajmera, I. McCowan, “Speech/Music Discrimination Using Entropy and Dynamism Features in a HMM Classification Framework,” IDIAP-RR 01-26, IDIAP, Martigny, Switzerland, 2001; and T. Hain, S. Johnson, A. Tuerk, P. Woodland, S. Young, “Segment Generation and Clustering in the HTK Broadcast News Transcription System,” DARPA Broadcast News Transcription and Understanding Workshop, pp. 133-137, 1998. Features such as full-band energy, sub-band energy, linear prediction residual energy, or frequency-based features like Mel Frequency Cepstral Coefficients (MFCC) are usually employed in such systems.
- Threshold adaptation and energy-feature based VAD techniques fail to handle the complex acoustic situations encountered in many real-life applications, where the signal energy level is usually highly dynamic and background sounds such as music and non-stationary noise are common. As a consequence, noise events are often recognized as words, causing insertion errors, while speech events corrupted by neighbouring noise events cause substitution errors. Model-based VAD techniques work better in noisy conditions, but their dependency on one single language (since they encode phoneme-level information) reduces their functionality considerably.
- The environment type plays an important role in VAD accuracy. For instance, in a car environment, where high signal-to-noise ratio (SNR) conditions are commonly encountered when the car is stationary, accurate detection is possible. Voice activity detection remains a challenging problem when the SNR is very low and there is high-intensity semi-stationary background noise from the car engine together with high transient noises such as road bumps, wiper noise and door slams. Voice activity detection is equally challenging in other situations where the SNR is low and background noise and high transient noises are present.
- It is therefore highly desirable to develop a VAD method/system which performs well in various environments and in which robustness and accuracy are important considerations.
- It is an aim of embodiments of the present invention to address one or more of the problems discussed above.
- According to a first aspect of an embodiment of the present invention, there is provided a computerized method for discriminating between at least two classes of events, the method comprising the steps of:
-
- receiving a set of frames containing an input signal,
- determining at least two different feature vectors for each of said frames,
- classifying said at least two different feature vectors using respective sets of preclassifiers trained for said at least two classes of events,
- determining values for at least one weighting factor based on outputs of said preclassifiers for each of said frames,
- calculating a combined feature vector for each of said frames by applying said at least one weighting factor to said at least two different feature vectors, and
- classifying said combined feature vector using a set of classifiers trained for said at least two classes of events.
- The computerised method may comprise determining at least one distance between outputs of each of said sets of preclassifiers, and determining values for said at least one weighting factor based on said at least one distance.
- The method may further comprise comparing said at least one distance to at least one predefined threshold, and calculating values for said at least one weighting factor using a formula dependent on said comparison. Said formula may use at least one of said at least one threshold values as input.
- The at least one distance may be based on at least one of the following: Kullback-Leibler distance, Mahalanobis distance, and Euclidean distance.
- An energy-based feature vector may be determined for each of said frames. Said energy-based feature vector may be based on at least one of the following: energy in different frequency bands, log energy, and speech energy contour.
- A model-based feature vector may be determined for each of said frames. Said model-based technique may be based on at least one of the following: an acoustic model, neural networks, and a hybrid neural network and hidden Markov model scheme.
- In one specific embodiment, a first feature vector based on energy in different frequency bands and a second feature vector based on an acoustic model is determined for each of said frames. Said acoustic model in this specific embodiment may be one of the following: a monolingual acoustic model, and a multilingual acoustic model.
- A second aspect of an embodiment of the present invention provides a computerized method for training a voice activity detection system, comprising
-
- receiving a set of frames containing a training signal,
- determining a quality factor for each of said frames,
- labelling said frames into at least two classes of events based on the content of the training signal,
- determining at least two different feature vectors for each of said frames,
- training respective sets of preclassifiers to classify said at least two different feature vectors for said at least two classes of events,
- determining values for at least one weighting factor based on outputs of said preclassifiers for each of said frames,
- calculating a combined feature vector for each of said frames by applying said at least one weighting factor to said at least two different feature vectors, and
- classifying said combined feature vector into said at least two classes of events using a set of classifiers.
- The method may comprise determining thresholds for distances between outputs of said preclassifiers for determining values for said at least one weighting factor.
- A third aspect of the invention provides a voice activity detection system for discriminating between at least two classes of events, the system comprising:
-
- feature vector units for determining at least two different feature vectors for each frame of a set of frames containing an input signal,
- sets of preclassifiers trained for said at least two classes of events for classifying said at least two different feature vectors,
- a weighting factor value calculator for determining values for at least one weighting factor based on outputs of said preclassifiers for each of said frames,
- a combined feature vector calculator for calculating a value for the combined feature vector for each of said frames by applying said at least one weighting factor to said at least two different feature vectors, and
- a set of classifiers trained for said at least two classes of events for classifying said combined feature vector.
- In the voice activity detection system, said weighting factor value calculator may comprise thresholds for distances between outputs of said preclassifiers for determining values for said at least one weighting factor.
- A further aspect of the invention provides a computer program product comprising a computer-usable medium and a computer readable program, wherein the computer readable program when executed on a data processing system causes the data processing system to carry out method steps as described above.
- For a better understanding of embodiments of the present invention and to show how the same may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
-
FIG. 1 shows schematically, as an example, a voice activity detection system in accordance with an embodiment of the invention; -
FIG. 2 shows, as an example, a flowchart of a voice activity detection method in accordance with an embodiment of the invention; -
FIG. 3 shows schematically one example of training a voice activity detection system in accordance with an embodiment of the invention; and -
FIG. 4 shows schematically a further example of training a voice activity detection system in accordance with an embodiment of the invention. - Embodiments of the present invention combine a model-based voice activity detection technique with a voice activity detection technique based on signal energy in different frequency bands. This combination provides robustness to environmental changes, since the information provided by signal energy in different frequency bands and by an acoustic model complement each other. The two types of feature vectors obtained from the signal energy and the acoustic model follow the environmental changes. Furthermore, the voice activity detection technique presented here uses a dynamic weighting factor which reflects the environment associated with the input signal. By combining the two types of feature vectors with such a dynamic weighting factor, the voice activity detection technique adapts to environment changes. Although feature vectors based on an acoustic model and on energy in different frequency bands are discussed in detail below as a concrete example, any other feature vector types may be used, as long as the feature vector types differ from each other and provide complementary information about the input signal.
- A simple and effective feature for speech detection in high SNR conditions is signal energy. Any robust mechanism based on energy must adapt to the relative signal and noise levels and to the overall gain of the signal. Moreover, since the information conveyed in different frequency bands differs depending on the type of phoneme (sonorants, fricatives, glides, etc.), energy bands are used to compute this feature type. A feature vector with m components can be written as (En1, En2, En3, . . . , Enm), where m represents the number of bands. A feature vector based on signal energy is the first type of feature vector used in voice activity detection systems in accordance with embodiments of the present invention. Other energy-based feature vector types include spectral-amplitude features such as log energy and the speech energy contour. In principle, any feature vector which is sensitive to noise can be used.
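- As an editor's illustration (not part of the patent text), such a per-band energy feature vector could be computed along the following lines; the band layout, window and log compression are assumptions:

```python
import numpy as np

def energy_band_fv(frame: np.ndarray, num_bands: int = 8) -> np.ndarray:
    """Energy-based feature vector (En1, ..., Enm) for one frame.

    Splits the power spectrum into num_bands equal-width bands and returns
    the log energy of each band; these choices are illustrative, not
    prescribed by the patent.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    bands = np.array_split(spectrum, num_bands)
    return np.array([np.log(band.sum() + 1e-10) for band in bands])
```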
- Frequency-based speech features, such as mel frequency cepstral coefficients (MFCC) and their derivatives, or Perceptual Linear Predictive (PLP) coefficients, are known to be very effective for achieving robustness to noise in speech recognition systems. Unfortunately, they are not as effective for discriminating speech from other environmental sounds when they are used directly in a VAD system. Therefore, a way of employing them in a VAD system is through an acoustic model (AM).
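- For illustration only, a front end of this kind could extract MFCCs and their derivatives with an off-the-shelf library such as librosa; the frame, hop and coefficient settings below are assumed defaults, not values from the patent:

```python
import librosa
import numpy as np

def mfcc_with_deltas(signal: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Per-frame MFCCs and their first derivatives, one row per frame."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13,
                                n_fft=400, hop_length=160)  # 25 ms / 10 ms
    delta = librosa.feature.delta(mfcc)                     # time derivatives
    return np.vstack([mfcc, delta]).T
```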
- When an acoustic model is used, the functionality of the VAD is typically limited to the language for which the AM has been trained. The use of a feature-based VAD for another language may require a new AM and re-training of the whole VAD system, at an increased cost of computation. It is thus advantageous to use an AM trained on a common phonology which is able to handle more than one language. This minimizes the effort at a small cost in accuracy.
- A multilingual AM requires speech transcription based on a common alphabet across all the languages. To reach a common alphabet, one can start from the previously existing alphabets for each of the involved languages, some of which need to be simplified, and then merge phones present in several languages that correspond to the same IPA symbol. This approach is discussed in F. Palou Cambra, P. Bravetti, O. Emam, V. Fischer, and E. Janke, “Towards a common alphabet for multilingual speech recognition,” in Proc. of the 6th Int. Conf. on Spoken Language Processing, Beijing, 2000. Acoustic modelling for multilingual speech recognition to a large extent makes use of well-established methods for (semi-)continuous Hidden Markov Model training, but a neural network producing the posterior class probability for each class can also be considered for this task. This approach is discussed in V. Fischer, J. Gonzalez, E. Janke, M. Villani, and C. Waast-Richard, “Towards Multilingual Acoustic Modeling for Large Vocabulary Continuous Speech Recognition,” in Proc. of the IEEE Workshop on Multilingual Speech Communications, Kyoto, Japan, 2000; S. Kunzmann, V. Fischer, J. Gonzalez, O. Emam, C. Günther, and E. Janke, “Multilingual Acoustic Models for Speech Recognition and Synthesis,” in Proc. of the IEEE Int. Conference on Acoustics, Speech, and Signal Processing, Montreal, 2004.
- Assuming that both speech and noise observations can be characterized by individual distributions of Gaussian mixture density functions, a VAD system can also benefit from an existing speech recognition system in which the statistical AM is modelled with Gaussian Mixture Models (GMM) within the hidden Markov model framework. An example can be found in E. Marcheret, K. Visweswariah, G. Potamianos, “Speech Activity Detection fusing Acoustic Phonetic and Energy Features,” Proc. Interspeech, 2005. Each class is modelled by a GMM (with a chosen number of mixtures). The class posterior probabilities for speech/noise events are computed on a frame basis and are referred to within this invention as (P1, P2). They represent the second type of FV.
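- A sketch of how the class posteriors (P1, P2) could be produced from per-class GMMs; the use of scikit-learn and of flat class priors are both assumptions of this illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_gmms(speech_feats, noise_feats, n_mix=8):
    """Fit one GMM per event class on labelled acoustic-model features."""
    return (GaussianMixture(n_components=n_mix).fit(speech_feats),
            GaussianMixture(n_components=n_mix).fit(noise_feats))

def class_posteriors(gmm_speech, gmm_noise, frames):
    """Per-frame posteriors (P1, P2); frames is (n_frames, n_features)."""
    log_lik = np.stack([gmm_speech.score_samples(frames),
                        gmm_noise.score_samples(frames)])
    log_lik -= log_lik.max(axis=0)      # stabilise the exponentials
    lik = np.exp(log_lik)
    return (lik / lik.sum(axis=0)).T    # rows: frames, columns: (P1, P2)
```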
- In the following description, a multilingual acoustic model is often used as an example of a model providing feature vectors. It is appreciated that it is straightforward to derive a monolingual acoustic model from a multilingual acoustic model. Furthermore, it is possible to use a specific monolingual acoustic model in a voice detection system in accordance with an embodiment of the invention.
- The first feature vectors (En1, En2, En3, . . . , Enm), relating to the energy in different frequency bands, are input to a first set of pre-classifiers. The second feature vectors, for example (P1, P2) for the two event types, provided by an acoustic model or other relevant model, are input into a second set of pre-classifiers. The pre-classifiers are typically Gaussian mixture pre-classifiers, outputting Gaussian mixture distributions. In place of any of the Gaussian Mixture Models employed in embodiments of this invention, one can also use, for instance, neural networks to estimate the posterior probabilities of each of the classes.
- The number of pre-classifiers in these sets corresponds with the number of event classes the voice activity detection system needs to detect. Typically, there are two event classes: speech and non-speech (or, in other words, speech and noise). But depending on the application, there may be need for a larger number of event classes. A quite common example is to have the following three event classes: speech, noise and silence. The pre-classifiers have been trained for the respective event classes. Training is discussed in some detail below.
- At high SNR (clean environment) the distributions of the two classes are well separated, and any of the pre-classifiers associated with the energy-based models will provide a reliable output. It is also expected that the classification models associated with the (multilingual) acoustic model will provide a reasonably good class separation. At low SNR (noisy environment) the distributions of the two classes associated with the energy bands overlap considerably, making a decision based on the energy-band pre-classifiers alone questionable.
- One FV type is thus more effective than the other, depending on the environment type (noisy or clean). But in real applications the environment changes very often, which requires the presence of both FV types in order to increase the robustness of the voice activity detection system to these changes. Therefore, embodiments of the invention use a scheme in which the two FV types are weighted dynamically depending on the type of the environment.
- There remains the problem of defining the environment in order to decide which of the FVs will provide the most reliable decision. A simple and effective way of inferring the type of the environment involves computing distances between the event type distributions, for example between the speech and noise distributions. Highly discriminative feature vectors, which separate the classes better and lead to large distances between the distributions, are emphasized over feature vectors which do not differentiate between the distributions so well. Based on the distances between the models of the pre-classifiers, a value for the weighting factor is determined.
-
FIG. 1 shows schematically a voice activity detection system 100 in accordance with an embodiment of the invention. FIG. 2 shows a flowchart of the voice activity detection method 200. It is appreciated that the order of the steps in the method 200 may be varied. Also the arrangement of blocks may be varied from that shown in FIG. 1, as long as the functionality provided by the blocks is present in the voice activity detection system 100.
- The voice activity detection system 100 receives input data 101 (step 201). The input data is typically split into frames, which are overlapping consecutive segments of the input speech signal with sizes varying between 10 and 30 ms. The signal energy block 104 determines for each frame a first feature vector (En1, En2, En3, . . . , Enm) (step 202). The front end 102 typically calculates for each frame MFCC coefficients and their derivatives, or perceptual linear predictive (PLP) coefficients (step 204). These coefficients are input to an acoustic model AM 103. In FIG. 1, the acoustic model is, by way of example, shown to be a multilingual acoustic model. The acoustic model 103 provides phonetic acoustic likelihoods as a second feature vector for each frame (step 205). A multilingual acoustic model ensures the usability of a model-dependent VAD at least for any of the languages for which it has been trained.
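- As an illustration of the framing step, with a 25 ms frame and a 10 ms hop assumed (any sizes within the 10-30 ms range mentioned above would do):

```python
import numpy as np

def split_into_frames(signal: np.ndarray, sample_rate: int = 16000,
                      frame_ms: int = 25, hop_ms: int = 10) -> np.ndarray:
    """Split the input signal into overlapping frames (step 201)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    count = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop:i * hop + frame_len]
                     for i in range(count)])
```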
- The first feature vectors (En1, En2, En3, . . . , Enm) provided by the energy band block 104 are input to a first set of pre-classifiers M3, M4 121, 122 (step 203). The second feature vectors (P1, P2) provided by the acoustic model 103 are input into a second set of pre-classifiers M1, M2 111, 112 (step 206). The pre-classifiers M1, M2, M3, M4 are typically Gaussian mixture pre-classifiers, outputting Gaussian mixture distributions. A neural network can also be used to provide the posterior probabilities of each of the classes. The number of pre-classifiers in these sets corresponds to the number of event classes the voice activity detection system 100 needs to detect. FIG. 1 shows the event classes speech/noise as an example, but depending on the application there may be a need for a larger number of event classes. The pre-classifiers have been trained for the respective event classes. In the example in FIG. 1, M1 is the speech model trained only with (P1, P2), M2 is the noise model trained only with (P1, P2), M3 is the speech model trained only with (En1, En2, En3 . . . Enm) and M4 is the noise model trained only with (En1, En2, En3 . . . Enm).
- The voice activity detection system 100 calculates the distances between the distributions output by the pre-classifiers in each set (step 207). In other words, a distance KL12 between the outputs of the pre-classifiers M1 and M2 is calculated and, similarly, a distance KL34 between the outputs of the pre-classifiers M3 and M4. If there are more than two classes of event types, distances can be calculated between all pairs of pre-classifiers in a set or, alternatively, only between some predetermined pairs of pre-classifiers. The distances may be, for example, Kullback-Leibler distances, Mahalanobis distances, or Euclidean distances. Typically the same distance type is used for both sets of pre-classifiers.
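- The Kullback-Leibler distance between two Gaussian mixtures has no closed form; a common simplification, assumed here purely for illustration, is to moment-match each mixture to a single diagonal Gaussian and use the closed-form symmetric divergence:

```python
import numpy as np

def symmetric_kl(mean1, var1, mean2, var2) -> float:
    """Symmetric KL distance between two diagonal Gaussians, usable as
    KL12 (speech vs. noise on (P1, P2)) or KL34 (on the energy-band FV)."""
    m1, v1, m2, v2 = map(np.asarray, (mean1, var1, mean2, var2))
    def kl(ma, va, mb, vb):
        return 0.5 * np.sum(np.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1.0)
    return float(kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1))
```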
- The VAD system 100 combines the feature vectors (P1, P2) and (En1, En2, En3 . . . Enm) into a combined feature vector by applying a weighting factor k to the feature vectors (step 209). The combined feature vector can be, for example, of the following form: (k*En1, k*En2, k*En3, . . . , k*Enm, (1−k)*P1, (1−k)*P2). - A value for the weighting factor k is determined based on the distances KL12 and KL34 (step 208). One example of determining the value for the weighting factor k is the following. During the training phase, when the SNR of the training signal can be computed, a data structure is formed containing SNR class labels and corresponding KL12 and KL34 distances. Table 1 is an example of such a data structure.
-
TABLE 1. Look-up table for distance/SNR correspondence.

| SNR class (per frame) | SNR value (dB) | KL12L | KL12H | KL34L | KL34H |
| --- | --- | --- | --- | --- | --- |
| Low | | KL12L-frame-1 | | KL34L-frame-1 | |
| Low | | KL12L-frame-2 | | KL34L-frame-2 | |
| Low | | KL12L-frame-3 | | KL34L-frame-3 | |
| ... | | ... | | ... | |
| Low | | KL12L-frame-n | | KL34L-frame-n | |
| (thresholds) | THRESHOLD1 | TH12L | TH12H | TH34L | TH34H |
| High | | | KL12H-frame-n+1 | | KL34H-frame-n+1 |
| High | | | KL12H-frame-n+2 | | KL34H-frame-n+2 |
| High | | | KL12H-frame-n+3 | | KL34H-frame-n+3 |
| ... | | | ... | | ... |
| High | | | KL12H-frame-n+m | | KL34H-frame-n+m |

- As Table 1 shows, there may be threshold values that divide the SNR space into ranges. In Table 1, the threshold value THRESHOLD1 divides the SNR space into two ranges: low SNR and high SNR. The distance values KL12 and KL34 are used to predict the current environment type and are computed for each input speech frame (e.g. every 10 ms).
- In Table 1, there is one column for each SNR class and distance pair. In other words, in the specific example here, there are two columns (SNR high, SNR low) for the distance KL12 and two columns (SNR high, SNR low) for the distance KL34. As an alternative to the format of Table 1, it is possible during the training phase to collect all distance values KL12 into one column and all distance values KL34 into a further column; the distinction between low and high SNR can then be made from the entries in the SNR class column.
- Referring back to the training phase and Table 1: at frame x, if the environment is noisy (low SNR), only the pair (KL12L-frame-x, KL34L-frame-x) is computed. At the next frame (x+1), if the environment is still noisy, the pair (KL12L-frame-x+1, KL34L-frame-x+1) is computed; otherwise (high SNR) the pair (KL12H-frame-x+1, KL34H-frame-x+1) is computed. The environment type is determined in the training phase for each frame, and the corresponding KL distances are collected into the look-up table (Table 1). At run time, when the information about the SNR is missing, the distance values KL12 and KL34 are computed for each speech frame. By comparing the KL12 and KL34 values against the corresponding threshold values in the look-up table, the information about the SNR type is retrieved. In this way the type of environment (SNR class) can be recovered.
- In summary, the values in Table 1 or in a similar data structure are collected during the training phase, and the thresholds are determined during the training phase. In the run-time phase, when voice activity detection is carried out, the distance values KL12 and KL34 are compared to the thresholds in Table 1 (or in the similar data structure), and based on the comparison it is determined which SNR class describes the environment of the current frame.
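A run-time sketch of this retrieval step; the exact comparison rule is not fixed by the description, so the rule below (low SNR implies smaller distances, i.e. poorer class separation) is an assumption:

```python
def classify_environment(kl12, kl34, th12_l, th34_l):
    """Retrieve the SNR class of a frame from its KL12/KL34 values and the
    trained thresholds (two-class case of Table 1)."""
    return "low" if (kl12 < th12_l and kl34 < th34_l) else "high"
```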
- After determining the current environment (SNR range), the value for the weighting factor can be determined based on the environment type, for example based on the threshold values themselves, using the following relations:
- 1. for SNR<THRESHOLD1, k = min(TH12L, TH34L)
2. for SNR>THRESHOLD1, k = max(TH12H, TH34H)
- As an alternative to using the threshold values in the calculation of the weighting factor, the distance values KL12 and KL34 themselves can be used. For example, the value for k can be k = min(KL12, KL34) when SNR<THRESHOLD1, and k = max(KL12, KL34) when SNR>THRESHOLD1. This way the voice activity detection system is even more dynamic in taking changes in the environment into account.
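Both variants might be sketched together as follows; the dictionary keys and the dynamic flag are conveniences of the sketch, and any normalisation of k into [0, 1] is left implicit here, as it is in the description:

```python
def weighting_factor(snr_class, kl12, kl34, lut, dynamic=False):
    """Determine k per the relations above. lut holds the trained thresholds,
    e.g. {"TH12L": ..., "TH12H": ..., "TH34L": ..., "TH34H": ...}. With
    dynamic=True the current frame's own distances are used instead (the more
    dynamic alternative in the text)."""
    if snr_class == "low":
        return min(kl12, kl34) if dynamic else min(lut["TH12L"], lut["TH34L"])
    return max(kl12, kl34) if dynamic else max(lut["TH12H"], lut["TH34H"])
```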
- The combined feature vector (Weighted FV*) is input to a set of
classifiers 131, 132 (step 210), which have been trained for speech and noise. If there are more than two event types, the number of pre-classifiers and classifiers acting on the combined feature vector will be in line with the number of event types. The set of classifiers for the combined feature vector typically uses heuristic decision rules, Gaussian mixture models, perceptrons, support vector machines, or other neural networks. The scores provided by the classifiers 131, 132 are then compared, and the frame is assigned to the event class with the best score.
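An argmax decision over Gaussian-mixture scores, one of the options listed above, could look like this (assuming scikit-learn-style models with a score_samples method; the names are hypothetical):

```python
import numpy as np

def classify_frame(combined_fv, class_models):
    """Score the combined feature vector under each trained class model and
    return the event class with the highest score."""
    x = np.asarray(combined_fv)[None, :]
    scores = {label: float(m.score_samples(x)[0]) for label, m in class_models.items()}
    return max(scores, key=scores.get)

# decision = classify_frame(weighted_fv, {"speech": classifier_131, "noise": classifier_132})
```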
FIG. 3 shows schematically the training of the voice activity detection system 100. Preferably, training of the voice activity detection system 100 occurs automatically, by inputting a training signal 301 and switching the system 100 into a training mode. The acoustic FVs computed for each frame in the front end 102 are input into the acoustic model 103 for two reasons: to label the data into speech/noise and to produce another type of FV which is more effective for discriminating speech from noise. The latter reason applies also to the run-time phase of the VAD system. - The labels for each frame can be obtained by one of the following methods: manually, by running a speech recognition system in a forced alignment mode (forced
alignment block 302 in FIG. 3), or by using the output of an already existing speech decoder. For illustrative purposes, the second method of labeling the training data is discussed in more detail in the following, with reference to FIG. 3.
block 303. The acoustic phonetic space for all languages in place is defined by mapping all of the phonemes from the inventory to the discriminative classes. We choose two classes (speech/noise) as an illustrative example, but the event classes and their number can be any depending on the needs imposed by the environment under which the voice activity detection intends to work. The phonetic transcription of the training data is necessary for this step. For instance, the pure silence phonemes, the unvoice fricatives and plosives are chosen for noise class while the rest of phonemes for speech class. - Consider next the class likelihood generation that occurs in the multilingual
- Consider next the class likelihood generation that occurs in the multilingual acoustic model block 103. Based on the output of the acoustic model 103 and on the acoustic features (e.g. the MFCC coefficients input to the multilingual AM, block 103), the speech-detection class posteriors are derived by mapping all the Gaussians of the AM to the corresponding phones and then to the corresponding classes. For example, for the class noise, all Gaussians belonging to noise and silence phones are mapped to noise, and the rest of the Gaussians are mapped to the class speech.
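One plausible reading of this collapse from Gaussians to class posteriors, with hypothetical dictionary inputs (per-Gaussian likelihoods and the Gaussian-to-class map):

```python
def class_posteriors(gaussian_likelihoods, gaussian_to_class):
    """Collapse per-Gaussian likelihoods into per-class posteriors (P1, P2):
    sum the likelihoods of the Gaussians mapped to each class, then normalise."""
    totals = {}
    for g, lik in gaussian_likelihoods.items():
        cls = gaussian_to_class[g]
        totals[cls] = totals.get(cls, 0.0) + lik
    z = sum(totals.values())
    return {cls: v / z for cls, v in totals.items()}
```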
- Viterbi alignment occurs in the forced alignment block 302. Given the correct transcription of the signal, forced alignment determines the phonetic information for each signal segment (frame) using the same mechanism as for speech recognition. This aligns features to allophones (from the AM). The phone-to-class mapping (block 303) then gives the mapping from allophones to phones and finally to classes. The speech/noise labels from forced alignment are treated as the correct labels. - The Gaussian models (
blocks 111, 112) for the defined classes, irrespective of the language, can then be trained. - So, for each input frame, based on the MFCC coefficients, the second feature vectors (P1, P2) are computed by the multilingual acoustic model in
block 103 and aligned to the corresponding class by block 302, which outputs the second feature vectors, together with the SNR information, to the second set of pre-classifiers 111, 112. - The voice
activity detection system 100 inputs the training signal 301 also to the energy bands block 104, which determines the energy of the signal in different frequency bands. The energy bands block 104 inputs the first feature vectors to the first set of pre-classifiers 121, 122. - The voice
activity detection system 100 in the training phase calculates the distance KL12 between the outputs of the pre-classifiers 111, 112, and the distance KL34 between the outputs of the pre-classifiers 121, 122. The voice activity detection system 100 then generates a data structure, for example a look-up table, based on the distances KL12, KL34 between the outputs of the pre-classifiers and on the SNR. - The data structure typically has various environment types, and values of the distances KL12, KL34 associated with these environment types. As an example, Table 1 contains two environment types (SNR low and SNR high). Thresholds are determined in the training phase to separate these environment types. During the training phase, the distances KL12 and KL34 are collected into the columns of Table 1, according to the SNR associated with each KL12, KL34 value. In this way the columns KL12L, KL12H, KL34L, and KL34H are formed.
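A training-phase sketch of this collection step; deriving the thresholds as column means is an assumption, since the description only states that thresholds are determined from the collected values:

```python
import numpy as np

def build_lookup(frame_records, snr_threshold_db):
    """Collect per-frame (snr_db, kl12, kl34) records into the low/high SNR
    columns of Table 1 and derive one threshold per column."""
    cols = {"KL12L": [], "KL12H": [], "KL34L": [], "KL34H": []}
    for snr_db, kl12, kl34 in frame_records:
        if snr_db < snr_threshold_db:
            cols["KL12L"].append(kl12)
            cols["KL34L"].append(kl34)
        else:
            cols["KL12H"].append(kl12)
            cols["KL34H"].append(kl34)
    # e.g. "KL12L" -> "TH12L"; column means stand in for the unspecified rule.
    return {"TH" + name[2:]: float(np.mean(vals)) for name, vals in cols.items() if vals}
```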
- The voice
activity detection system 100 determines the combined feature vector by applying the weighting factor to the first and second feature vectors as discussed above. The combined feature vector is input to the set of classifiers 131, 132. - As mentioned above, it is possible to have more than two SNR classes. Also in this case, thresholds are determined during the training phase to distinguish the SNR classes from one another. Table 2 shows an example where two event classes and three SNR classes are used. In this example there are two SNR thresholds (THRESHOLD1, THRESHOLD2) and eight thresholds for the distance values. Below is an example of a formula for determining values for the weighting factor in this case.
- 1. for SNR<THRESHOLD1, k = min(TH12_L, TH34_L)
-
- 3. for SNR>THRESHOLD2, k = max(TH12_H, TH34_H).
-
TABLE 2. A further example of a look-up table for distance/SNR correspondence.

SNR class | SNR value (dB) | KL12low | KL12med | KL12hi | KL34low | KL34med | KL34hi
---|---|---|---|---|---|---|---
Low | . . . | | | | | |
 | THRESHOLD1 | TH12_L | TH12_LM | | TH34_L | TH34_LM |
Medium | . . . | | | | | |
 | THRESHOLD2 | | TH12_MH | TH12_H | | TH34_MH | TH34_H
High | . . . | | | | | |

- It is furthermore possible to have more than two event classes. In this case there are more pre-classifiers and classifiers in the voice activity detection system. For example, for three event classes (speech, noise, silence), three distances are considered: KL(speech, noise), KL(speech, silence) and KL(noise, silence).
FIG. 4 shows, as an example, the training phase of a voice activity detection system where there are three event classes and two SNR classes (environment types). There are three pre-classifiers (that is, as many as there are event classes) for each feature vector type: three models for the acoustic-model feature vectors and three models for the energy feature vectors. With two SNR classes, as in FIG. 4, the number of distances monitored during the training phase is 6 for each feature vector type, for example KL12H, KL12L, KL13H, KL13L, KL23H, KL23L for the feature vector obtained from the acoustic model. The weighting factor between the FVs depends on the SNR and on the FV type. Therefore, if the number of defined SNR classes and the number of feature vectors remain unchanged, the weighting procedure also remains unchanged. If the third SNR class is medium, a maximum value of 0.5 for the energy-type FV is recommended, but depending on the application it might be slightly adjusted. - It is furthermore feasible to have more than two feature vectors for a frame. The final weighted FV may be of the form (k1*FV1, k2*FV2, k3*FV3, . . . , kn*FVn), where k1+k2+k3+ . . . +kn=1. What needs to be taken into account when using more FVs is their behaviour with respect to the different SNR classes. So, the number of SNR classes could influence the choice of FVs. One FV for one class may be ideal; currently, however, there is no such fine classification in the area of voice activity detection.
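A sketch of this generalised combination, enforcing the unity constraint by normalisation:

```python
import numpy as np

def combine_many(fvs, weights):
    """Generalised combination (k1*FV1, ..., kn*FVn) with the weights
    normalised so that k1 + ... + kn = 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.concatenate([k * np.asarray(fv) for k, fv in zip(w, fvs)])
```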
- The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
- A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
- It is appreciated that although embodiments of the invention have been discussed on the assumption that the values for the dynamic weighting coefficient are updated for each frame, this is not obligatory. It is possible to determine values for the weighting factor, for example, only in every third frame. The "set of frames" in the appended claims does not necessarily need to refer to a set of frames strictly subsequent to one another. The weighting can be done for more than one frame without losing the precision of class separation. Updating the weighting factor values less often may reduce the accuracy of the voice activity detection, but depending on the application, the accuracy may still be sufficient.
- It is appreciated that although in the above description signal to noise ratio has been used as a quality factor reflecting the environment associated with the input signal, other quality factors may additionally or alternatively be applicable.
- This description explicitly describes some combinations of the various features discussed herein. It is appreciated that various other combinations are evident to a skilled person studying this description.
- In the appended claims a computerized method refers to a method whose steps are performed by a computing system containing a suitable combination of one or more processors, memory means and storage means.
- While the foregoing has been with reference to particular embodiments of the invention, it will be appreciated by those skilled in the art that changes in these embodiments may be made without departing from the principles of the invention, the scope of which is defined by the appended claims.
Claims (21)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06124228 | 2006-11-16 | ||
EP06124228.5 | 2006-11-16 | ||
EP06124228 | 2006-11-16 | ||
PCT/EP2007/061534 WO2008058842A1 (en) | 2006-11-16 | 2007-10-26 | Voice activity detection system and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2007/061534 A-371-Of-International WO2008058842A1 (en) | 2006-11-16 | 2007-10-26 | Voice activity detection system and method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/602,390 Continuation US8554560B2 (en) | 2006-11-16 | 2012-09-04 | Voice activity detection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100057453A1 true US20100057453A1 (en) | 2010-03-04 |
US8311813B2 US8311813B2 (en) | 2012-11-13 |
Family
ID=38857912
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/515,048 Expired - Fee Related US8311813B2 (en) | 2006-11-16 | 2007-10-26 | Voice activity detection system and method |
US13/602,390 Active US8554560B2 (en) | 2006-11-16 | 2012-09-04 | Voice activity detection |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/602,390 Active US8554560B2 (en) | 2006-11-16 | 2012-09-04 | Voice activity detection |
Country Status (9)
Country | Link |
---|---|
US (2) | US8311813B2 (en) |
EP (1) | EP2089877B1 (en) |
JP (1) | JP4568371B2 (en) |
KR (1) | KR101054704B1 (en) |
CN (1) | CN101548313B (en) |
AT (1) | ATE463820T1 (en) |
CA (1) | CA2663568C (en) |
DE (1) | DE602007005833D1 (en) |
WO (1) | WO2008058842A1 (en) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010103333A (en) * | 2000-05-09 | 2001-11-23 | 류영선 | The powder manufacturing method for instant bean curd |
US8938389B2 (en) * | 2008-12-17 | 2015-01-20 | Nec Corporation | Voice activity detector, voice activity detection program, and parameter adjusting method |
WO2011010647A1 (en) * | 2009-07-21 | 2011-01-27 | 独立行政法人産業技術総合研究所 | Method and system for estimating mixture ratio in mixed-sound signal, and phoneme identifying method |
WO2011049515A1 (en) * | 2009-10-19 | 2011-04-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and voice activity detector for a speech encoder |
US8626498B2 (en) * | 2010-02-24 | 2014-01-07 | Qualcomm Incorporated | Voice activity detection based on plural voice activity detectors |
JP5575977B2 (en) | 2010-04-22 | 2014-08-20 | クゥアルコム・インコーポレイテッド | Voice activity detection |
CN102446506B (en) * | 2010-10-11 | 2013-06-05 | 华为技术有限公司 | Classification identifying method and equipment of audio signals |
US8898058B2 (en) | 2010-10-25 | 2014-11-25 | Qualcomm Incorporated | Systems, methods, and apparatus for voice activity detection |
WO2012083552A1 (en) * | 2010-12-24 | 2012-06-28 | Huawei Technologies Co., Ltd. | Method and apparatus for voice activity detection |
CN102097095A (en) * | 2010-12-28 | 2011-06-15 | 天津市亚安科技电子有限公司 | Speech endpoint detecting method and device |
US20130090926A1 (en) * | 2011-09-16 | 2013-04-11 | Qualcomm Incorporated | Mobile device context information using speech detection |
US10381002B2 (en) | 2012-10-30 | 2019-08-13 | Google Technology Holdings LLC | Voice control user interface during low-power mode |
US10304465B2 (en) | 2012-10-30 | 2019-05-28 | Google Technology Holdings LLC | Voice control user interface for low power mode |
US9584642B2 (en) | 2013-03-12 | 2017-02-28 | Google Technology Holdings LLC | Apparatus with adaptive acoustic echo control for speakerphone mode |
US10373615B2 (en) | 2012-10-30 | 2019-08-06 | Google Technology Holdings LLC | Voice control user interface during low power mode |
US9454958B2 (en) | 2013-03-07 | 2016-09-27 | Microsoft Technology Licensing, Llc | Exploiting heterogeneous data in deep neural network-based speech recognition systems |
CN104080024B (en) | 2013-03-26 | 2019-02-19 | 杜比实验室特许公司 | Volume leveller controller and control method and audio classifiers |
US8768712B1 (en) | 2013-12-04 | 2014-07-01 | Google Inc. | Initiating actions based on partial hotwords |
EP2945303A1 (en) | 2014-05-16 | 2015-11-18 | Thomson Licensing | Method and apparatus for selecting or removing audio component types |
US9324320B1 (en) * | 2014-10-02 | 2016-04-26 | Microsoft Technology Licensing, Llc | Neural network-based speech processing |
US9842608B2 (en) | 2014-10-03 | 2017-12-12 | Google Inc. | Automatic selective gain control of audio data for speech recognition |
CN105529038A (en) * | 2014-10-21 | 2016-04-27 | 阿里巴巴集团控股有限公司 | Method and system for processing users' speech signals |
US10515301B2 (en) | 2015-04-17 | 2019-12-24 | Microsoft Technology Licensing, Llc | Small-footprint deep neural network |
CN104980211B (en) * | 2015-06-29 | 2017-12-12 | 北京航天易联科技发展有限公司 | A kind of signal processing method and device |
US9959887B2 (en) | 2016-03-08 | 2018-05-01 | International Business Machines Corporation | Multi-pass speech activity detection strategy to improve automatic speech recognition |
US10490209B2 (en) * | 2016-05-02 | 2019-11-26 | Google Llc | Automatic determination of timing windows for speech captions in an audio stream |
CN107564512B (en) * | 2016-06-30 | 2020-12-25 | 展讯通信(上海)有限公司 | Voice activity detection method and device |
CN106782529B (en) * | 2016-12-23 | 2020-03-10 | 北京云知声信息技术有限公司 | Awakening word selection method and device for voice recognition |
US10810995B2 (en) * | 2017-04-27 | 2020-10-20 | Marchex, Inc. | Automatic speech recognition (ASR) model training |
US10403303B1 (en) * | 2017-11-02 | 2019-09-03 | Gopro, Inc. | Systems and methods for identifying speech based on cepstral coefficients and support vector machines |
CN109065027B (en) * | 2018-06-04 | 2023-05-02 | 平安科技(深圳)有限公司 | Voice distinguishing model training method and device, computer equipment and storage medium |
EP3813061A4 (en) * | 2018-06-21 | 2021-06-23 | NEC Corporation | Attribute identifying device, attribute identifying method, and program storage medium |
CN108922556B (en) * | 2018-07-16 | 2019-08-27 | 百度在线网络技术(北京)有限公司 | Sound processing method, device and equipment |
US20200074997A1 (en) * | 2018-08-31 | 2020-03-05 | CloudMinds Technology, Inc. | Method and system for detecting voice activity in noisy conditions |
CN111524536B (en) * | 2019-02-01 | 2023-09-08 | 富士通株式会社 | Signal processing method and information processing apparatus |
CN109754823A (en) * | 2019-02-26 | 2019-05-14 | 维沃移动通信有限公司 | A kind of voice activity detection method, mobile terminal |
CN110349597B (en) * | 2019-07-03 | 2021-06-25 | 山东师范大学 | Voice detection method and device |
KR20210044559A (en) | 2019-10-15 | 2021-04-23 | 삼성전자주식회사 | Method and device for determining output token |
CN112420022B (en) * | 2020-10-21 | 2024-05-10 | 浙江同花顺智能科技有限公司 | Noise extraction method, device, equipment and storage medium |
CN112466056B (en) * | 2020-12-01 | 2022-04-05 | 上海旷日网络科技有限公司 | Self-service cabinet pickup system and method based on voice recognition |
KR102318642B1 (en) * | 2021-04-16 | 2021-10-28 | (주)엠제이티 | Online platform using voice analysis results |
US12022016B2 (en) * | 2022-04-07 | 2024-06-25 | Bank Of America Corporation | System and method for managing exception request blocks in a blockchain network |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4422545A1 (en) | 1994-06-28 | 1996-01-04 | Sel Alcatel Ag | Start / end point detection for word recognition |
JP3721948B2 (en) * | 2000-05-30 | 2005-11-30 | 株式会社国際電気通信基礎技術研究所 | Voice start edge detection method, voice section detection method in voice recognition apparatus, and voice recognition apparatus |
US6754626B2 (en) * | 2001-03-01 | 2004-06-22 | International Business Machines Corporation | Creating a hierarchical tree of language models for a dialog system based on prompt and dialog context |
CN100573663C (en) * | 2006-04-20 | 2009-12-23 | 南京大学 | Mute detection method based on speech characteristics to judge |
US20080300875A1 (en) * | 2007-06-04 | 2008-12-04 | Texas Instruments Incorporated | Efficient Speech Recognition with Cluster Methods |
US8131543B1 (en) * | 2008-04-14 | 2012-03-06 | Google Inc. | Speech detection |
-
2007
- 2007-10-26 EP EP07821894A patent/EP2089877B1/en active Active
- 2007-10-26 WO PCT/EP2007/061534 patent/WO2008058842A1/en active Search and Examination
- 2007-10-26 KR KR1020097009548A patent/KR101054704B1/en not_active IP Right Cessation
- 2007-10-26 CN CN2007800414946A patent/CN101548313B/en not_active Expired - Fee Related
- 2007-10-26 CA CA2663568A patent/CA2663568C/en active Active
- 2007-10-26 JP JP2009536691A patent/JP4568371B2/en active Active
- 2007-10-26 DE DE602007005833T patent/DE602007005833D1/en active Active
- 2007-10-26 AT AT07821894T patent/ATE463820T1/en not_active IP Right Cessation
- 2007-10-26 US US12/515,048 patent/US8311813B2/en not_active Expired - Fee Related
-
2012
- 2012-09-04 US US13/602,390 patent/US8554560B2/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4696039A (en) * | 1983-10-13 | 1987-09-22 | Texas Instruments Incorporated | Speech analysis/synthesis system with silence suppression |
US4780906A (en) * | 1984-02-17 | 1988-10-25 | Texas Instruments Incorporated | Speaker-independent word recognition method and system based upon zero-crossing rate and energy measurement of analog speech signal |
US6314396B1 (en) * | 1998-11-06 | 2001-11-06 | International Business Machines Corporation | Automatic gain control in a speech recognition system |
US6556967B1 (en) * | 1999-03-12 | 2003-04-29 | The United States Of America As Represented By The National Security Agency | Voice activity detector |
US6615170B1 (en) * | 2000-03-07 | 2003-09-02 | International Business Machines Corporation | Model-based voice activity detection system and method using a log-likelihood ratio and pitch |
US20060178877A1 (en) * | 2000-04-19 | 2006-08-10 | Microsoft Corporation | Audio Segmentation and Classification |
US20060224382A1 (en) * | 2003-01-24 | 2006-10-05 | Moria Taneda | Noise reduction and audio-visual speech activity detection |
US20060053007A1 (en) * | 2004-08-30 | 2006-03-09 | Nokia Corporation | Detection of voice activity in an audio signal |
US20070033042A1 (en) * | 2005-08-03 | 2007-02-08 | International Business Machines Corporation | Speech detection fusing multi-class acoustic-phonetic, and energy features |
US20070036342A1 (en) * | 2005-08-05 | 2007-02-15 | Boillot Marc A | Method and system for operation of a voice activity detector |
US20080010065A1 (en) * | 2006-06-05 | 2008-01-10 | Harry Bratt | Method and apparatus for speaker recognition |
US20090076814A1 (en) * | 2007-09-19 | 2009-03-19 | Electronics And Telecommunications Research Institute | Apparatus and method for determining speech signal |
Non-Patent Citations (6)
Title |
---|
Cohn et al. "Semi-supervised Clustering with User Feedback" 2003. * |
Gorriz et al. "Independent Component Analysis Applied to Voice Activity Detection" May 28-31, 2006. * |
Marcheret et al. "The IBM RT06s Evaluation System for Speech Activity Detection in CHIL Seminars" May 1-4, 2006. * |
Shin et al. "SPEECH/NON-SPEECH CLASSIFICATION USING MULTIPLE FEATURES FOR ROBUST ENDPOINT DETECTION" 2000. *
Wang et al. "Feature extraction and dimensionality reduction algorithms and their applications in vowel recognition" 2003. * |
Yamamoto et al. "Robust Endpoint Detection for Speech Recognition Based on Discriminative Feature Extraction" May 14-19, 2006. * |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8131543B1 (en) * | 2008-04-14 | 2012-03-06 | Google Inc. | Speech detection |
US8554348B2 (en) * | 2009-07-20 | 2013-10-08 | Apple Inc. | Transient detection using a digital audio workstation |
US20110015766A1 (en) * | 2009-07-20 | 2011-01-20 | Apple Inc. | Transient detection using a digital audio workstation |
US20120215541A1 (en) * | 2009-10-15 | 2012-08-23 | Huawei Technologies Co., Ltd. | Signal processing method, device, and system |
US8296133B2 (en) | 2009-10-15 | 2012-10-23 | Huawei Technologies Co., Ltd. | Voice activity decision base on zero crossing rate and spectral sub-band energy |
US8554547B2 (en) | 2009-10-15 | 2013-10-08 | Huawei Technologies Co., Ltd. | Voice activity decision base on zero crossing rate and spectral sub-band energy |
US20120022863A1 (en) * | 2010-07-21 | 2012-01-26 | Samsung Electronics Co., Ltd. | Method and apparatus for voice activity detection |
US8762144B2 (en) * | 2010-07-21 | 2014-06-24 | Samsung Electronics Co., Ltd. | Method and apparatus for voice activity detection |
US10796712B2 (en) | 2010-12-24 | 2020-10-06 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting a voice activity in an input audio signal |
US10134417B2 (en) | 2010-12-24 | 2018-11-20 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting a voice activity in an input audio signal |
US9761246B2 (en) * | 2010-12-24 | 2017-09-12 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting a voice activity in an input audio signal |
US20160260443A1 (en) * | 2010-12-24 | 2016-09-08 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting a voice activity in an input audio signal |
US11430461B2 (en) | 2010-12-24 | 2022-08-30 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting a voice activity in an input audio signal |
US10325200B2 (en) | 2011-11-26 | 2019-06-18 | Microsoft Technology Licensing, Llc | Discriminative pretraining of deep neural networks |
US9235799B2 (en) | 2011-11-26 | 2016-01-12 | Microsoft Technology Licensing, Llc | Discriminative pretraining of deep neural networks |
US8965763B1 (en) * | 2012-02-02 | 2015-02-24 | Google Inc. | Discriminative language modeling for automatic speech recognition with a weak acoustic model and distributed training |
US8543398B1 (en) | 2012-02-29 | 2013-09-24 | Google Inc. | Training an automatic speech recognition system using compressed word frequencies |
US9202461B2 (en) | 2012-04-26 | 2015-12-01 | Google Inc. | Sampling training data for an automatic speech recognition system based on a benchmark classification distribution |
US8805684B1 (en) | 2012-05-31 | 2014-08-12 | Google Inc. | Distributed speaker adaptation |
US8571859B1 (en) | 2012-05-31 | 2013-10-29 | Google Inc. | Multi-stage speaker adaptation |
US8880398B1 (en) | 2012-07-13 | 2014-11-04 | Google Inc. | Localized speech recognition with offload |
US8554559B1 (en) | 2012-07-13 | 2013-10-08 | Google Inc. | Localized speech recognition with offload |
US9123333B2 (en) | 2012-09-12 | 2015-09-01 | Google Inc. | Minimum bayesian risk methods for automatic speech recognition |
US9477925B2 (en) | 2012-11-20 | 2016-10-25 | Microsoft Technology Licensing, Llc | Deep neural networks training for speech and pattern recognition |
US9570087B2 (en) * | 2013-03-15 | 2017-02-14 | Broadcom Corporation | Single channel suppression of interfering sources |
US20150071461A1 (en) * | 2013-03-15 | 2015-03-12 | Broadcom Corporation | Single-channel suppression of interfering sources |
US9466292B1 (en) * | 2013-05-03 | 2016-10-11 | Google Inc. | Online incremental adaptation of deep neural networks using auxiliary Gaussian mixture models in speech recognition |
US9997172B2 (en) * | 2013-12-02 | 2018-06-12 | Nuance Communications, Inc. | Voice activity detection (VAD) for a coded speech bitstream without decoding |
US20150154981A1 (en) * | 2013-12-02 | 2015-06-04 | Nuance Communications, Inc. | Voice Activity Detection (VAD) for a Coded Speech Bitstream without Decoding |
US20170294186A1 (en) * | 2014-09-11 | 2017-10-12 | Nuance Communications, Inc. | Method for scoring in an automatic speech recognition system |
US10650805B2 (en) * | 2014-09-11 | 2020-05-12 | Nuance Communications, Inc. | Method for scoring in an automatic speech recognition system |
US10403269B2 (en) | 2015-03-27 | 2019-09-03 | Google Llc | Processing audio waveforms |
US10930270B2 (en) | 2015-03-27 | 2021-02-23 | Google Llc | Processing audio waveforms |
US10121471B2 (en) * | 2015-06-29 | 2018-11-06 | Amazon Technologies, Inc. | Language model speech endpointing |
US10229700B2 (en) * | 2015-09-24 | 2019-03-12 | Google Llc | Voice activity detection |
US10339921B2 (en) | 2015-09-24 | 2019-07-02 | Google Llc | Multichannel raw-waveform neural networks |
US20170092297A1 (en) * | 2015-09-24 | 2017-03-30 | Google Inc. | Voice Activity Detection |
US10347271B2 (en) * | 2015-12-04 | 2019-07-09 | Synaptics Incorporated | Semi-supervised system for multichannel source enhancement through configurable unsupervised adaptive transformations and supervised deep neural network |
US20170162194A1 (en) * | 2015-12-04 | 2017-06-08 | Conexant Systems, Inc. | Semi-supervised system for multichannel source enhancement through configurable adaptive transformations and deep neural network |
US10242696B2 (en) | 2016-10-11 | 2019-03-26 | Cirrus Logic, Inc. | Detection of acoustic impulse events in voice applications |
US10475471B2 (en) * | 2016-10-11 | 2019-11-12 | Cirrus Logic, Inc. | Detection of acoustic impulse events in voice applications using a neural network |
US20180102136A1 (en) * | 2016-10-11 | 2018-04-12 | Cirrus Logic International Semiconductor Ltd. | Detection of acoustic impulse events in voice applications using a neural network |
US20180174574A1 (en) * | 2016-12-19 | 2018-06-21 | Knowles Electronics, Llc | Methods and systems for reducing false alarms in keyword detection |
US10311874B2 (en) | 2017-09-01 | 2019-06-04 | 4Q Catalyst, LLC | Methods and systems for voice-based programming of a voice-controlled device |
CN107808659A (en) * | 2017-12-02 | 2018-03-16 | 宫文峰 | Intelligent sound signal type recognition system device |
CN111199733A (en) * | 2018-11-19 | 2020-05-26 | 珠海全志科技股份有限公司 | Multi-stage recognition voice awakening method and device, computer storage medium and equipment |
US12014728B2 (en) * | 2019-03-25 | 2024-06-18 | Microsoft Technology Licensing, Llc | Dynamic combination of acoustic model states |
US11270720B2 (en) * | 2019-12-30 | 2022-03-08 | Texas Instruments Incorporated | Background noise estimation and voice activity detection system |
CN114930451A (en) * | 2019-12-30 | 2022-08-19 | 德克萨斯仪器股份有限公司 | Background noise estimation and voice activity detection system |
US20220165297A1 (en) * | 2020-11-20 | 2022-05-26 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Method and device for detecting audio signal, and storage medium |
CN112509598A (en) * | 2020-11-20 | 2021-03-16 | 北京小米松果电子有限公司 | Audio detection method and device and storage medium |
US11848029B2 (en) * | 2020-11-20 | 2023-12-19 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Method and device for detecting audio signal, and storage medium |
CN112820324A (en) * | 2020-12-31 | 2021-05-18 | 平安科技(深圳)有限公司 | Multi-label voice activity detection method, device and storage medium |
US20220310081A1 (en) * | 2021-03-26 | 2022-09-29 | Google Llc | Multilingual Re-Scoring Models for Automatic Speech Recognition |
US12080283B2 (en) * | 2021-03-26 | 2024-09-03 | Google Llc | Multilingual re-scoring models for automatic speech recognition |
Also Published As
Publication number | Publication date |
---|---|
JP4568371B2 (en) | 2010-10-27 |
DE602007005833D1 (en) | 2010-05-20 |
JP2010510534A (en) | 2010-04-02 |
US20120330656A1 (en) | 2012-12-27 |
CN101548313A (en) | 2009-09-30 |
KR20090083367A (en) | 2009-08-03 |
CN101548313B (en) | 2011-07-13 |
US8311813B2 (en) | 2012-11-13 |
ATE463820T1 (en) | 2010-04-15 |
EP2089877B1 (en) | 2010-04-07 |
EP2089877A1 (en) | 2009-08-19 |
KR101054704B1 (en) | 2011-08-08 |
WO2008058842A1 (en) | 2008-05-22 |
CA2663568A1 (en) | 2008-05-22 |
CA2663568C (en) | 2016-01-05 |
US8554560B2 (en) | 2013-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8311813B2 (en) | Voice activity detection system and method | |
US6615170B1 (en) | Model-based voice activity detection system and method using a log-likelihood ratio and pitch | |
US8140330B2 (en) | System and method for detecting repeated patterns in dialog systems | |
Ramírez et al. | Efficient voice activity detection algorithms using long-term speech information | |
Evangelopoulos et al. | Multiband modulation energy tracking for noisy speech detection | |
Ramirez et al. | Voice activity detection. fundamentals and speech recognition system robustness | |
US8532991B2 (en) | Speech models generated using competitive training, asymmetric training, and data boosting | |
US6078884A (en) | Pattern recognition | |
Yoo et al. | Formant-based robust voice activity detection | |
Chowdhury et al. | Bayesian on-line spectral change point detection: a soft computing approach for on-line ASR | |
Akbacak et al. | Environmental sniffing: noise knowledge estimation for robust speech systems | |
JP2797861B2 (en) | Voice detection method and voice detection device | |
Nasibov | Decision fusion of voice activity detectors | |
Fujimoto et al. | Frame-wise model re-estimation method based on Gaussian pruning with weight normalization for noise robust voice activity detection | |
Odriozola et al. | An on-line VAD based on Multi-Normalisation Scoring (MNS) of observation likelihoods | |
Beaufays et al. | Using speech/non-speech detection to bias recognition search on noisy data | |
Skorik et al. | On a cepstrum-based speech detector robust to white noise | |
Martin et al. | Voicing parameter and energy based speech/non-speech detection for speech recognition in adverse conditions. | |
Stahl et al. | Phase-processing for voice activity detection: A statistical approach | |
Sinha et al. | Exploring the role of pitch-adaptive cepstral features in context of children's mismatched ASR | |
Kathania et al. | Soft-weighting technique for robust children speech recognition under mismatched condition | |
Beritelli et al. | Adaptive V/UV speech detection based on characterization of background noise | |
Zeng et al. | Robust children and adults speech classification | |
De Leon et al. | Voice activity detection using a sliding-window, maximum margin clustering approach | |
Norouzian et al. | Incorporating formant cues into distributed speech recognition systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VALSAN, ZICA;REEL/FRAME:022688/0227 Effective date: 20090514 |
|
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20161113 |