WO1988007739A1 - An adaptive threshold voiced detector
- Publication number
- WO1988007739A1 (PCT/US1988/000031)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frames
- calculating
- speech
- value
- unvoiced
- Prior art date
- 1987-04-03
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/93—Discriminating between voiced and unvoiced parts of speech signals
Abstract
Apparatus for detecting a fundamental frequency in speech by statistically analyzing a discriminant variable generated by a discriminant voiced detector (102) so as to determine the presence of the fundamental frequency in a changing speech environment. A statistical calculator (103) is responsive to the discriminant variable to first calculate the average of all of the values of the discriminant variable over the present and past speech frames and then to determine the overall probability that any frame will be unvoiced. In addition, the calculator determines two values: one value represents the statistical average that an unvoiced frame's discriminant variable would have, and the other value represents the statistical average of the discriminant values for voiced frames. These latter calculations are performed utilizing not only the average discriminant value but also a weight value and a threshold value which are adaptively determined by a threshold calculator (104) from frame to frame. An unvoiced/voiced determinator (105) makes the unvoiced/voiced decision by utilizing the weight and threshold values.
Description
AN ADAPTIVE THRESHOLD VOICED DETECTOR
Technical Field
This invention relates to determining whether or not speech contains a fundamental frequency, which is commonly referred to as the unvoiced/voiced decision. More particularly, the unvoiced/voiced decision is made by a two-stage voiced detector with the final threshold values being adaptively calculated for the speech environment utilizing statistical techniques.
Background and Problem
In low bit rate voice coders, degradation of voice quality is often due to inaccurate voicing decisions. The difficulty in correctly making these voicing decisions lies in the fact that no single speech parameter or classifier can reliably distinguish voiced speech from unvoiced speech. In order to make the voicing decision, it is known in the art to combine multiple speech classifiers in the form of a weighted sum. This method is commonly called discriminant analysis. Such a method is illustrated in D. P. Prezas, et al., "Fast and Accurate Pitch Detection Using Pattern Recognition and Adaptive Time-Domain Analysis," Proc. IEEE Int. Conf. Acoust., Speech and Signal Proc., Vol. 1, pp. 109-112, April 1986. As described in that article, a frame of speech is declared voiced if a weighted sum of classifiers is greater than a specified threshold, and unvoiced otherwise. The weights and threshold are chosen to maximize performance on a training set of speech where the voicing of each frame is known.
A problem associated with the fixed weighted sum method is that it does not perform well when the speech environment changes. The reason is that the threshold is determined from the training set, which is different from speech subject to background noise, non-linear distortion, and filtering. One method for adapting the threshold value to a changing speech environment is disclosed in the paper of H. Hassanein, et al., "Implementation of the Gold-Rabiner Pitch Detector in a Real Time Environment Using an Improved Voicing Detector," IEEE Transactions on Acoustics, Speech and Signal Processing, 1986, Tokyo, Vol. ASSP-33, No. 1, pp. 319-320. This paper discloses an empirical method which compares three different parameters against independent thresholds associated with these parameters and on the basis of each comparison either increments or decrements by one an adaptive threshold value. The three parameters utilized are energy of the signal, first reflection coefficient, and zero-crossing count. For example, if the energy of the speech signal is less than a predefined energy level, the adaptive threshold is incremented. On the other hand, if the energy of the speech signal is greater than another predefined energy level, the adaptive threshold is decremented by one. After the adaptive threshold has been calculated, it is subtracted from an output of an elementary pitch detector. If the result of the subtraction yields a positive number, the speech frame is declared voiced; otherwise, the speech frame is declared unvoiced. The problem with the disclosed method is that the parameters themselves are not used in the elementary pitch detector. Hence, the adjustment of the adaptive threshold is ad hoc and is not directly linked to the physical phenomena from which it is calculated. In addition, the threshold cannot adapt to rapidly changing speech environments.
Solution
The above described problem is solved and a technical advance is achieved by a voicing decision apparatus that adapts to a changing environment by utilizing adaptive statistical values to make the voicing decision. The statistical values are adapted to the changing environment by utilizing statistics based on an output of a voiced detector. The statistical parameters are calculated as follows. First, the voiced detector generates a general value indicating the presence of a fundamental frequency in a speech frame in response to speech attributes of the frame. Second, the mean for unvoiced ones and voiced ones of the speech frames is calculated in response to the generated value. The two means are then used to determine decision regions, and the determination of the presence of the fundamental frequency is done in response to the decision regions and the present speech frame.
Advantageously, in response to speech attributes of the present and past speech frames, the mean for unvoiced frames is calculated by calculating the probability that the present speech frame is unvoiced, calculating the overall probability that any frame will be unvoiced, and calculating the probability that the present speech frame is voiced. The mean of the unvoiced speech frames is then calculated in response to the probability that the present speech frame is unvoiced and the overall probability. In addition, the mean of the voiced speech frame is calculated in response to the probability that the present speech frame is voiced and the overall probability. Advantageously, the calculations of probabilities are performed utilizing a maximum likelihood statistical operation.
Advantageously, the generation of the general value is performed utilizing a discriminant analysis procedure, and the speech attributes are speech classifiers.
Advantageously, the decision regions are defined by the mean of the unvoiced and voiced speech frames and a weight and threshold value generated in response to the general values of past and present frames and the means of the voiced and unvoiced frames.
The method for detecting the presence of a fundamental frequency in speech frames comprises the steps of: generating a general value in response to a set of classifiers defining speech attributes of a present speech frame to indicate the presence of the fundamental frequency, calculating a set of statistical parameters in response to the general value, and determining the presence of the fundamental frequency in response to the general value and the calculated set of statistical parameters. The step of generating the general value is performed utilizing a discriminant analysis procedure. Further, the step of determining the fundamental frequency comprises the step of calculating a weight and a threshold value in response to the set of parameters.
Brief Description of the Drawing
FIG. 1 illustrates, in block diagram form, the present invention; and FIGS. 2 and 3 illustrate, in greater detail, certain functions performed by the voiced detection apparatus of FIG. 1.
Detailed Description
FIG. 1 illustrates an apparatus for performing the unvoiced/voiced decision operation by first utilizing a discriminant voiced detector to process voice classifiers in order to generate a discriminant variable or general variable. The latter variable is statistically analyzed to make the voicing decision. The statistical analysis adapts the threshold utilized in making the unvoiced/voiced decision so as to give reliable performance in a variety of voice environments. Consider now the overall operation of the apparatus illustrated in FIG. 1. Classifier generator 100 is responsive to each frame of voice to generate classifiers which advantageously may be the log of the speech energy, the log of the LPC gain, the log area ratio of the first reflection coefficient, and the squared correlation coefficient of two speech segments one frame long which are offset by one pitch period. The calculation of these classifiers involves digitally sampling analog speech, forming frames of the digital samples, and processing those frames
and is well known in the art. Generator 100 transmits the classifiers to silence detector 101 and discriminant voiced detector 102 via path 106. Discriminant voiced detector 102 is responsive to the classifiers received via path 106 to calculate the discriminant value, x. Detector 102 performs that calculation by solving the equation: x = c'y + d. Advantageously, "c" is a vector comprising the weights, "y" is a vector comprising the classifiers, and "d" is a scalar representing a threshold value. Advantageously, the components of vector c are initialized as follows: the component corresponding to the log of the speech energy equals 0.3918606, the component corresponding to the log of the LPC gain equals -0.0520902, the component corresponding to the log area ratio of the first reflection coefficient equals 0.5637082, and the component corresponding to the squared correlation coefficient equals 1.361249; and d initially equals -8.36454. After calculating the value of the discriminant variable x, detector 102 transmits this value via path 111 to statistical calculator 103 and subtracter 107. Silence detector 101 is responsive to the classifiers transmitted via path 106 to determine whether speech is actually present on the data being received on path 109 by classifier generator 100. The indication of the presence of speech is transmitted via path 110 to statistical calculator 103 by silence detector 101. For each frame of speech, detector 102 generates and transmits the discriminant value x via path 111. Statistical calculator 103 maintains an average of the discriminant values received via path 111 by averaging in the discriminant value for the present, non-silence frame with the discriminant values for previous non-silence frames. Statistical calculator 103 is also responsive to the signal received via path 110 to calculate the overall probability that any frame is unvoiced and the probability that any frame is voiced. In addition, statistical calculator 103 calculates the statistical value that the discriminant value for the present frame would have if the frame was unvoiced and the statistical value that the discriminant value for the present frame would have if the frame was voiced. Advantageously, that statistical value may be the mean. The calculations performed by calculator 103 are not only based on the present frame but on previous frames as well. Statistical calculator 103 performs these calculations not only on the basis of the discriminant value received for the present frame via path 106 and the average of the classifiers but also on the basis of a weight and a threshold value defining whether a frame is unvoiced or voiced received via
path 113 from threshold calculator 104.
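For concreteness, a minimal Python sketch of this discriminant calculation follows. Only the weight vector c and the scalar d come from the text above; the function name and the classifier vector y are illustrative assumptions:

```python
import numpy as np

# Components of vector c, in the order given in the text: log speech energy,
# log LPC gain, log area ratio of the first reflection coefficient, and
# squared correlation coefficient. D is the initial scalar threshold d.
C = np.array([0.3918606, -0.0520902, 0.5637082, 1.361249])
D = -8.36454

def discriminant_value(y):
    """Compute x = c'y + d for one frame's classifier vector y."""
    return float(C @ y) + D

# Made-up classifier values for a single frame, for illustration only.
y = np.array([10.2, 1.3, 0.8, 0.95])
x = discriminant_value(y)  # larger x leans voiced, smaller leans unvoiced
```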
Calculator 104 is responsive to the probabilities and statistical values of the classifiers for the present frame as generated by calculator 103 and received via path 112 to recalculate the values used as weight value a, and threshold value b for the present frame. Then, these new values of a and b are transmitted back to statistical calculator 103 via path 113.
Calculator 104 transmits the weight, threshold, and statistical values via path 114 to U/V determinator 105. The latter detector is responsive to the information transmitted via paths 114 and 115 to determine whether the frame is unvoiced or voiced and to transmit this decision via path 116.
Consider now in greater detail the operations of blocks 103, 104, 105, and 107 illustrated in FIG. 1. Statistical calculator 103 implements an improved EM algorithm similar to that suggested in the article by N. E. Day entitled "Estimating the Components of a Mixture of Normal Distributions", Biometrika, Vol. 56, No. 3, pp. 463-474, 1969. Utilizing the concept of a decaying average, calculator 103 calculates the average of the discriminant values for the present and previous frames by calculating the following equations 1, 2, and 3:
n = n + 1 if n < 2000    (1)
z = 1/n    (2)
Xn = (1-z) Xn-1 + z xn    (3)
where xn is the discriminant value for the present frame and is received from detector 102 via path 111, and n is the number of frames that have been processed, up to 2000. z represents the decaying average coefficient, and Xn represents the
average of the discriminant values for the present and past frames. Statistical calculator 103 is responsive to receipt of the z, xn and Xn values to calculate the variance value, T, by first calculating the second moment of xn, Qn, as follows:
Qn = (1-z) Qn-1 + z xn²    (4)
After Qn has been calculated, T is calculated as follows:
T = Qn - Xn²    (5)
The mean is subtracted from the discriminant value of the present frame as follows:
xn = xn - Xn    (6)
Next, calculator 103 determines the probability that the frame represented by the present value xn is unvoiced by solving equation 7 shown below:
P(u|xn) = 1 / (1 + exp(a xn + b))    (7)
After solving equation 7, calculator 103 determines the probability that the discriminant value represents a voiced frame by solving the following:
P(v|xn) = 1 - P(u|xn)    (8)
Next, calculator 103 determines the overall probability that any frame will be unvoiced by solving equation 9 for pn:
pn = (1-z) pn-1 + z P(u|xn)    (9)
After determining the probability that a frame will be unvoiced, calculator 103 determines two values, u and v, which give the mean values of the discriminant value for both unvoiced and voiced type frames. Value u, the statistical average unvoiced value, contains the mean discriminant value if a frame is unvoiced, and value v, the statistical average voiced value, gives the mean discriminant value if a frame is voiced. Value u for the present frame is solved by calculating equation 10, and value v is determined for the present frame by calculating equation 11 as follows:
un = (1-z) un-1 + z xn P(u|xn) / pn    (10)
vn = (1-z) vn-1 + z xn P(v|xn) / (1-pn)    (11)
Calculator 103 now communicates the u, v, and T values, and probability pn to threshold calculator 104 via path 112.
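A minimal Python sketch of one update step of statistical calculator 103 follows, implementing equations 1 through 11 with the logistic form assumed above for equation 7. The class name and the starting values for u, v, and pn are illustrative assumptions, not values from the patent:

```python
import math

class StatisticalCalculator:
    """One-pass, decaying-average statistics over the discriminant value
    (equations 1 through 11). A sketch: initial u, v, and p are assumed."""

    def __init__(self):
        self.n = 0       # frames processed, capped at 2000 (eq. 1)
        self.X = 0.0     # decaying average of x (eq. 3)
        self.Q = 0.0     # decaying second moment of x (eq. 4)
        self.p = 0.5     # overall probability a frame is unvoiced (eq. 9)
        self.u = -1.0    # mean discriminant value, unvoiced frames (eq. 10)
        self.v = 1.0     # mean discriminant value, voiced frames (eq. 11)

    def update(self, x, a, b):
        """Fold one non-silence frame's discriminant value x into the
        statistics, using the current weight a and threshold b."""
        if self.n < 2000:
            self.n += 1                              # eq. 1
        z = 1.0 / self.n                             # eq. 2
        self.X = (1 - z) * self.X + z * x            # eq. 3
        self.Q = (1 - z) * self.Q + z * x * x        # eq. 4
        T = self.Q - self.X ** 2                     # eq. 5: variance
        x = x - self.X                               # eq. 6: remove the mean
        t = max(min(a * x + b, 50.0), -50.0)         # clamp to avoid overflow
        p_u = 1.0 / (1.0 + math.exp(t))              # eq. 7 (assumed form)
        p_v = 1.0 - p_u                              # eq. 8
        self.p = (1 - z) * self.p + z * p_u          # eq. 9
        self.u = (1 - z) * self.u + z * x * p_u / self.p          # eq. 10
        self.v = (1 - z) * self.v + z * x * p_v / (1 - self.p)    # eq. 11
        return self.u, self.v, T, self.p
```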
Calculator 104 is responsive to this information to calculate new values for a and b. These new values are then transmitted back to statistical calculator 103 via path 113. This allows rapid adaptation to changing environments. If n is advantageously greater than 99, values a and b are calculated as follows. Value a is determined by solving the following equation:
a = T⁻¹(vn - un) / [1 - pn(1-pn)(un - vn)² T⁻¹]    (12)
Value b is determined by solving the following equation:
b = -a(un + vn)/2 + log[(1-pn)/pn]    (13)
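A sketch of threshold calculator 104 follows, under the reconstruction of equation 12 given above; as the text notes, these updates are applied only once n exceeds 99 (so the variance T is well established). The function name is assumed:

```python
import math

def threshold_update(u, v, T, p):
    """Recompute weight a (eq. 12) and threshold b (eq. 13) from the class
    means u and v, variance T, and overall unvoiced probability p. The
    sign convention in eq. 12 is a reconstruction: a comes out positive
    when voiced frames cluster above unvoiced ones (v > u)."""
    a = ((v - u) / T) / (1.0 - p * (1.0 - p) * (u - v) ** 2 / T)  # eq. 12
    b = -a * (u + v) / 2.0 + math.log((1.0 - p) / p)              # eq. 13
    return a, b
```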
After calculating equations 12 and 13, calculator 104 transmits values a, u, and v to block 105 via path 114. Determinator 105 is responsive to this transmitted information to decide whether the present frame is voiced or unvoiced. If the value a is positive, then a frame is declared voiced if the following equation is true:
a xn - a(un + vn)/2 > 0    (14)
or, if the value a is negative, then a frame is declared voiced if the following equation is true:
a xn - a(un + vn)/2 < 0    (15)
Equation 14 can also be expressed as:
a xn + b - log[(1-pn)/pn] > 0
Equation 15 can also be expressed as:
a xn + b - log[(1-pn)/pn] < 0
If the previous conditions are not met, determinator 105 declares the frame unvoiced.
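A minimal sketch of determinator 105's test (the function name is assumed; x is the mean-removed discriminant value of the present frame):

```python
def is_voiced(x, a, u, v):
    """U/V decision of equations 14 and 15."""
    score = a * x - a * (u + v) / 2.0
    if a > 0:
        return score > 0   # eq. 14: a positive
    return score < 0       # eq. 15: a negative
```

Note that both branches reduce to asking whether x lies above the midpoint (u + v)/2 of the two cluster means, which is why the two-branch structure of blocks 226 through 238 is insensitive to the sign of a.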
In flow chart form, FIGS. 2 and 3 illustrate, in greater detail, the operations performed by the apparatus of FIG. 1. Block 200 implements block 101 of FIG. 1. Blocks 202 through 218 implement statistical calculator 103. Block 222 implements threshold calculator 104, and blocks 226 through 239 implement block 105 of FIG. 1. Subtracter 107 is implemented by both block 208 and block 224. Block 202 calculates the value which represents the average of the discriminant value for the present frame and all previous frames. Block 200 determines whether speech is present in the present frame; and if speech is not present in the present frame, the mean for the discriminant value is subtracted from the present discriminant value by block 224 before control is transferred to decision block 226.
However, if speech is present in the present frame, then the statistical and weight calculations are performed by blocks 202 through 222. First, the average value is found in block 202. Second, the second moment value is calculated in block 206. The latter value along with the mean value X for the present and past frames is then utilized to calculate the variance, T, also in block 206. The mean X is then subtracted from the discriminant value xn in block 208.
Block 210 calculates the probability that the present frame is unvoiced by utilizing the present weight value a, the present threshold value b, and the discriminant value for the present frame, xn. After calculating the probability that the present frame is unvoiced, the probability that the present frame is voiced is calculated by block 212. Then, the overall probability, pn, that any frame will be unvoiced is calculated by block 214.
Blocks 216 and 218 calculate two values: u and v. The value u represents the statistical average value that the discriminant value would have if the frame were unvoiced, whereas value v represents the statistical average value that the discriminant value would have if the frame were voiced. The actual discriminant values for the present and previous frames are clustered around either value u or value v. The discriminant values for the previous and present frames are clustered around value u if these frames had been found to be unvoiced; otherwise, the previous values are clustered around value v. Block 222 then calculates a new weight value a and a new threshold value b. The values a and b
are used in the next sequential frame by the preceding blocks in FIG. 2.
Blocks 226 through 239 implement U/V determinator 105 of FIG. 1. Block 226 determines whether the value a for the present frame is greater than zero. If this condition is true, then decision block 228 is executed. The latter decision block determines whether the test for voiced or unvoiced is met. If the frame is found to be voiced in decision block 228, then the frame is marked as voiced by block 230; otherwise, the frame is marked as unvoiced by block 232. If the value a is less than zero for the present frame, blocks 234 through 238 are executed and function in a similar manner to blocks 228 through 232. It is to be understood that the afore-described embodiment is merely illustrative of the principles of the invention and that other arrangements may be devised by those skilled in the art without departing from the spirit and the scope of the invention.
Claims
1. An apparatus for detecting the presence of a fundamental frequency in frames of speech, comprising: means responsive to a set of classifiers defining speech attributes of one said frames of speech for generating a general value indicating said presence of said fundamental frequency; means responsive to said general value for calculating a set of statistical parameters; and means responsive to said general value and the calculated set of statistical parameters for determining said presence of said fundamental frequency.
2. The apparatus of claim 1 wherein said generating means comprises means for performing a discriminant analysis to generate said general value.
3. The apparatus of claim 1 wherein said determining means comprises means for calculating a threshold value in response to said set of said parameters; means for calculating a weight value in response to said set of said parameters; and means for communicating said weight and threshold values to said means for calculating said set of parameters to be used for calculating another set of parameters for another one of said frames of speech.
4. The apparatus of claim 3 wherein said means for calculating said set of parameters is further responsive to the communicated weight and threshold values and another general value of said other one of said frames for calculating another set of statistical parameters.
5. The apparatus of claim 4 wherein said means for calculating said set of parameters further comprises means for calculating the average of said general values over said present and previous ones of said speech frames; and means responsive to said average of said general values for said present and previous ones of said speech frames and said communicated weight and threshold values and said other general value for determining said other set of statistical parameters.
6. An apparatus for detecting the presence of a fundamental frequency in frames of speech, comprising: means responsive to a set of classifiers defining speech attributes of one of said frames of speech for generating a general value indicating said presence of said fundamental frequency; means for estimating a mean for unvoiced ones of said frames in response to said general value; means for estimating a mean for voiced ones of said frames in response to said general value; means responsive to said mean for unvoiced ones of said frames and said mean of voiced ones of said frames for determining decision regions; and means for making the determination of said presence of said fundamental frequency in response to said decision regions and a present one of said frames.
7. The apparatus of claim 6 wherein said means for determining comprises means for calculating the variance of said general values over said present and previous ones of said speech frames; means for generating said decision regions in response to said mean for unvoiced ones of said frames and said mean of voiced ones of said frames and said variance; said means for estimating said mean of said unvoiced frames comprises means responsive to present and past ones of said frames for calculating the probability that said present one of said frames is unvoiced; means responsive to said present and past ones of said frames and said probability that said present one of said frames is unvoiced for calculating the overall probability that any frame will be unvoiced; said means for estimating said mean of said voiced frames further comprises means for calculating the probability that said present one of said frames is voiced; means responsive to said probability that said present one of said frames is unvoiced and said overall probability and said variance for calculating the said mean of said unvoiced ones of said frames; and means responsive to said probability that said present one of said frames is voiced and said overall probability and said variance for calculating the said mean of said voiced ones of said frames.
8. The apparatus of claim 7 wherein said means for calculating said probability that said present one of said frames is unvoiced performs a maximum likelihood statistical operation.
9. The apparatus of claim 8 wherein said means for calculating said probability that said present one of said frames is unvoiced is further responsive to a weight and threshold value to perform said maximum likelihood statistical operation.
10. A method for detecting the presence of a fundamental frequency in frames of speech comprising the steps of: generating a general value in response to a set of classifiers defining speech attributes of one of said frames of speech to indicate said presence of said fundamental frequency; calculating a set of statistical parameters in response to said general value; and determining said presence of said fundamental frequency in response to said general value and the calculated said set of statistical parameters.
11. The method of claim 10 wherein said step of generating comprises the step of performing a discriminant analysis to generate said general value.
12. The method of claim 11 wherein said step of determining comprises the steps of calculating a threshold value in response to said set of said parameters; calculating a weight value in response to said set of said parameters; and communicating said weight and threshold values to said step of calculating said set of parameters to be used for calculating another set of parameters for another one of said frames of speech.
13. The method of claim 12 wherein said step of calculating said set of parameters is further responsive to the communicated weight and threshold values and another general value of said other one of said frames for calculating another set of statistical parameters.
14. The method of claim 13 wherein said step of calculating said set of parameters further comprises the steps of calculating the average of said general values over said present and previous ones of said speech frames; and determining said other set of statistical parameters in response to said average of said general values for said present and previous ones of said speech frames and said communicated weight and threshold values and said other general values.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AT88903995T ATE83329T1 (en) | 1987-04-03 | 1988-01-12 | DETECTOR FOR VOICED LOUD WITH ADAPTIVE THRESHOLD. |
JP63503536A JPH0795239B2 (en) | 1987-04-03 | 1988-01-12 | Device and method for detecting the presence of a fundamental frequency in a speech frame |
DE8888903995T DE3876569T2 (en) | 1987-04-03 | 1988-01-12 | DETECTOR FOR TUNING LOUD WITH ADAPTIVE THRESHOLD. |
SG609/93A SG60993G (en) | 1987-04-03 | 1993-05-07 | An adaptive threshold voiced detector |
HK217/94A HK21794A (en) | 1987-04-03 | 1994-03-10 | An adaptive threshold voiced detector |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3429887A | 1987-04-03 | 1987-04-03 | |
US034,298 | 1987-04-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1988007739A1 (en) | 1988-10-06 |
Family
ID=21875533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1988/000031 WO1988007739A1 (en) | 1987-04-03 | 1988-01-12 | An adaptive threshold voiced detector |
Country Status (9)
Country | Link |
---|---|
EP (1) | EP0309561B1 (en) |
JP (1) | JPH0795239B2 (en) |
AT (1) | ATE83329T1 (en) |
AU (1) | AU598933B2 (en) |
CA (1) | CA1336208C (en) |
DE (1) | DE3876569T2 (en) |
HK (1) | HK21794A (en) |
SG (1) | SG60993G (en) |
WO (1) | WO1988007739A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0439073A1 (en) * | 1990-01-18 | 1991-07-31 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0442342A1 (en) * | 1990-02-13 | 1991-08-21 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0459384A1 (en) * | 1990-05-28 | 1991-12-04 | Matsushita Electric Industrial Co., Ltd. | Speech signal processing apparatus for cutting out a speech signal from a noisy speech signal |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3875894T2 (en) * | 1987-04-03 | 1993-05-19 | American Telephone & Telegraph | ADAPTIVE MULTIVARIABLE ANALYSIS DEVICE. |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS60114900A (en) * | 1983-11-25 | 1985-06-21 | 松下電器産業株式会社 | Voice/voiceless discrimination |
JPS60200300A (en) * | 1984-03-23 | 1985-10-09 | 松下電器産業株式会社 | Voice head/end detector |
JPS6148898A (en) * | 1984-08-16 | 1986-03-10 | 松下電器産業株式会社 | Voice/voiceless discriminator for voice |
DE3875894T2 (en) * | 1987-04-03 | 1993-05-19 | American Telephone & Telegraph | ADAPTIVE MULTIVARIABLE ANALYSIS DEVICE. |
-
1988
- 1988-01-12 JP JP63503536A patent/JPH0795239B2/en not_active Expired - Fee Related
- 1988-01-12 DE DE8888903995T patent/DE3876569T2/en not_active Expired - Fee Related
- 1988-01-12 EP EP88903995A patent/EP0309561B1/en not_active Expired - Lifetime
- 1988-01-12 WO PCT/US1988/000031 patent/WO1988007739A1/en active IP Right Grant
- 1988-01-12 AT AT88903995T patent/ATE83329T1/en not_active IP Right Cessation
- 1988-01-12 AU AU17007/88A patent/AU598933B2/en not_active Ceased
- 1988-03-29 CA CA000562765A patent/CA1336208C/en not_active Expired - Fee Related
-
1993
- 1993-05-07 SG SG609/93A patent/SG60993G/en unknown
-
1994
- 1994-03-10 HK HK217/94A patent/HK21794A/en not_active IP Right Cessation
Non-Patent Citations (2)
Title |
---|
IEEE Transactions on Acoustics, Speech, and Signal Processing, volume ASSP-31, no. 3, June 1983, IEEE, (New York, US), P. De Souza: "A statistical approach to the design of an adaptive self-normalizing silence detector", pages 678-684 * |
Proceedings: ICASSP 87, 1987 International Conference on Acoustics, Speech, and Signal Processing, 6-9 April 1987, Dallas, Texas, volume 1 of 4, IEEE, (New York, US), D.L. Thompson: "A multivariate voicing decision rule adapts to noise, distortion, and spectral shaping", pages 197-200 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0439073A1 (en) * | 1990-01-18 | 1991-07-31 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
US5195138A (en) * | 1990-01-18 | 1993-03-16 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0614169A1 (en) * | 1990-01-18 | 1994-09-07 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0614171A1 (en) * | 1990-01-18 | 1994-09-07 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0614170A1 (en) * | 1990-01-18 | 1994-09-07 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0442342A1 (en) * | 1990-02-13 | 1991-08-21 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
US5204906A (en) * | 1990-02-13 | 1993-04-20 | Matsushita Electric Industrial Co., Ltd. | Voice signal processing device |
EP0459384A1 (en) * | 1990-05-28 | 1991-12-04 | Matsushita Electric Industrial Co., Ltd. | Speech signal processing apparatus for cutting out a speech signal from a noisy speech signal |
US5220610A (en) * | 1990-05-28 | 1993-06-15 | Matsushita Electric Industrial Co., Ltd. | Speech signal processing apparatus for extracting a speech signal from a noisy speech signal |
Also Published As
Publication number | Publication date |
---|---|
EP0309561A1 (en) | 1989-04-05 |
CA1336208C (en) | 1995-07-04 |
EP0309561B1 (en) | 1992-12-09 |
ATE83329T1 (en) | 1992-12-15 |
DE3876569T2 (en) | 1993-04-08 |
SG60993G (en) | 1993-07-09 |
AU598933B2 (en) | 1990-07-05 |
JPH01502858A (en) | 1989-09-28 |
HK21794A (en) | 1994-03-18 |
DE3876569D1 (en) | 1993-01-21 |
JPH0795239B2 (en) | 1995-10-11 |
AU1700788A (en) | 1988-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0548054B1 (en) | Voice activity detector | |
US5604839A (en) | Method and system for improving speech recognition through front-end normalization of feature vectors | |
US20020165713A1 (en) | Detection of sound activity | |
WO1996034382A1 (en) | Methods and apparatus for distinguishing speech intervals from noise intervals in audio signals | |
WO2000017859A1 (en) | Noise suppression for low bitrate speech coder | |
JPH08505715A (en) | Discrimination between stationary and nonstationary signals | |
US5007093A (en) | Adaptive threshold voiced detector | |
EP0653091B1 (en) | Discriminating between stationary and non-stationary signals | |
US5046100A (en) | Adaptive multivariate estimating apparatus | |
WO1988007739A1 (en) | An adaptive threshold voiced detector | |
US4972490A (en) | Distance measurement control of a multiple detector system | |
JP4673828B2 (en) | Speech signal section estimation apparatus, method thereof, program thereof and recording medium | |
EP0308433B1 (en) | An adaptive multivariate estimating apparatus | |
CN111128244B (en) | Short wave communication voice activation detection method based on zero crossing rate detection | |
EP0310636B1 (en) | Distance measurement control of a multiple detector system | |
KR100798056B1 (en) | Speech processing method for speech enhancement in highly nonstationary noise environments | |
KR970003035B1 (en) | Pitch information detecting method of speech signal | |
Moulsley et al. | An adaptive voiced/unvoiced speech classifier. | |
Li et al. | Voice activity detection under Rayleigh distribution | |
NZ286953A (en) | Speech encoder/decoder: discriminating between speech and background sound |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE FR GB IT LU NL SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1988903995 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1988903995 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1988903995 Country of ref document: EP |