EP1269462B1 - Voice activity detection apparatus and method - Google Patents
- Publication number
- EP1269462B1 (application EP01958309A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio parameter
- unit
- delay
- audio
- averaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
Abstract
Description
- The present invention relates to voice processing systems in general, and to methods and apparatus for detecting voice activity in a low resource environment, in particular.
- Methods and apparatus for detecting voice activity are known in the art. A voice activity detector (VAD) operates under the assumption that speech is present only in part of the audio signal, while there are many intervals which exhibit only silence or background noise.
- A voice activity detector can be used for many purposes, such as suppressing overall transmission activity in a transmission system when there is no speech, thus potentially saving power and channel bandwidth. When the VAD detects that speech activity has resumed, it can reinitiate transmission activity.
- A voice activity detector can also be used in conjunction with speech storage devices, by differentiating audio portions which include speech from those that are "speechless". The portions including speech are then stored in the storage device and the "speechless" portions are not stored.
- Conventional methods for detecting voice are based at least in part on methods for detecting and assessing the power of a speech signal. The estimated power is compared to either a constant or an adaptive threshold in order to reach a decision. The main advantage of these methods is their low complexity, which makes them suitable for low-resource implementations. The main disadvantage of such methods is that background noise can result in "speech" being detected when none is present, or in "speech" which is present not being detected because it is obscured and difficult to detect.
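A detector of this conventional kind can be sketched in a few lines. The frame length, the fixed threshold value and the function names below are illustrative assumptions, not taken from any cited specification; an adaptive variant would track the noise floor and adjust the threshold accordingly.

```python
def frame_power(frame):
    """Mean squared amplitude of one frame of samples."""
    return sum(s * s for s in frame) / len(frame)

def energy_vad(frames, threshold=1e-4):
    """Flag each frame as speech (True) when its estimated power exceeds
    a fixed threshold; the threshold value here is illustrative only."""
    return [frame_power(f) > threshold for f in frames]

quiet = [0.001] * 160   # low-level background noise
loud = [0.1] * 160      # louder, speech-like frame
print(energy_vad([quiet, loud]))  # [False, True]
```

The low complexity is apparent: one multiply-accumulate per sample and a single comparison per frame, which is what makes this family of methods attractive in low-resource environments.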
- Some methods for detecting speech activity are directed at noisy mobile environments and are based on adaptive filtering of the speech signal. This reduces the noise content of the signal prior to the final decision. The frequency spectrum and noise level may vary because the method will be used for different speakers and in different environments. Hence, the input filter and thresholds are often adaptive so as to track these variations. Examples of these methods are provided in GSM specification 06.42 Voice Activity Detector (VAD) for half rate, full rate, and enhanced full rate speech traffic channels respectively. Another such method is the "Multi-Boundary Voice Activity Detection Algorithm" proposed in ITU G.729 Annex B. These methods are more accurate in noisy environments but are significantly more complex to implement.
- All of these methods require the speech signal to be input. Some applications employing speech decompression schemes require carrying out speech detection during the speech decompression process.
- European Patent Application No. 0785419A2 (Benyassine et al.) is directed to a method for voice activity detection which includes the following steps: extracting a predetermined set of parameters from the incoming speech signal for each frame, and making a frame voicing decision of the incoming speech signal for each frame according to a set of difference measures extracted from the predetermined set of parameters.
- International Patent Application No. WO-A-0017856 relates to a method and apparatus for detecting voice activity in a speech signal. It has a filing date of 27 August 1999, a publication date of 30 March 2000, and claims a priority date of 18 September 1998.
- It is an object of the present invention to provide a method and an apparatus for detecting the presence of speech activity, which alleviates the disadvantages of the prior art.
- It is a further object of the present invention to provide a method and an apparatus, which can classify speech activity by utilizing compressed speech parameters.
- In accordance with the present invention, in a first aspect there is provided a voice activity detection apparatus as defined in claim 1 of the accompanying claims.
- In accordance with the present invention, in a second aspect there is provided a method of use as defined in claim 11 of the accompanying claims.
- Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
- Figure 1 is a schematic illustration of apparatus, constructed and operative in accordance with a preferred embodiment of the present invention;
- Figure 2 is an illustration of a method for operating the apparatus of Figure 1, operative in accordance with the present invention;
- Figure 3 is a schematic illustration of a two-state logic structure utilised in the preferred embodiment;
- Figure 4 shows more detail of a step of the method shown in Figure 2; and
- Figure 5 is a schematic illustration of an apparatus constructed and operative in accordance with a further embodiment of the invention.
- The present invention alleviates the disadvantages of the prior art by providing a method which utilizes conventional vocoder output data of a voice related stream for detecting voice activity therein. According to one aspect of the present invention, the method for voice activity detection (VAD) is based on the analysis of audio parameters such as Line Spectral Frequencies (LSF) parameters. The detection is based on a stationarity estimate of spectral characteristics of the incoming speech frames, which are represented by LSF parameters.
- Reference is now made to Figure 1, which is a schematic illustration of an apparatus, generally referenced 100, constructed and operative in accordance with a preferred embodiment of the present invention. Apparatus 100 includes two delay arrays 102 and 110, a plurality of distance measure units 106A, 106B, 106C and 106D, an averaging unit 108, a subtraction unit 114 and a decision logic unit (DLU) 116. Delay array 102 includes a plurality of delay units 104A, 104B, 104C, 104D, 104E, 104F and 104G, all connected in series, so that each adds a further delay to the previous one.
- Delay array 110 includes a plurality of delay units 112A, 112B, 112C and 112D, all connected in series, so that each adds a further delay to the previous one. Apparatus 100 is further connected to a Line Spectral Frequencies (LSF) generation unit 120, which can be a part of the voice encoder (vocoder) apparatus of an audio system. The LSF unit 120 produces LSF values for each received audio frame. It is noted that LSF unit 120 is only one example of an audio parameter generation unit.
- The output of the LSF unit 120 is coupled to the input of delay unit 104A. The input of each of delay units 104A, 104B, 104C and 104D is connected to a respective one of distance measure units 106A, 106B, 106C and 106D. Hence, the input of delay unit 104A is connected to distance measure unit 106A.
- The output of each of the delay units is connected to a distance measure unit. Delay unit 104A has its output connected to distance measure unit 106B. Unit 104B has its output connected to unit 106C. Unit 104C has its output connected to unit 106D.
- The output of delay units 104D, 104E, 104F and 104G is connected to a respective one of distance measure units 106A, 106B, 106C and 106D. The output of delay unit 104D is connected to distance measure unit 106A. Hence, the LSF value L(n) at the input of delay unit 104A is associated with the value L(n-4) at the output of delay unit 104D.
- Similarly, each of the LSF values L(n-1), L(n-2) and L(n-3) is associated with a respective one of LSF values L(n-5), L(n-6) and L(n-7), at a respective one of distance measure units 106B, 106C and 106D.
- The distance measure units 106A, 106B, 106C and 106D are all connected to the averaging unit 108. Averaging unit 108 is further connected to delay unit 112A, to subtraction unit 114 and to DLU 116. The output of each of delay units 112A, 112B, 112C and 112D is connected to DLU 116. The output of delay unit 112A is further connected to the subtraction unit 114.
- Reference is further made to Figure 2, which is an illustration of a method for operating the apparatus 100 of Figure 1, operative in accordance with another preferred embodiment of the present invention.
- In step 150 a plurality of audio parameters are received. Each of the audio parameters is related to a predetermined audio frame. In the present example, the audio parameters include LSF values, which represent the short-time frequency spectrum characteristics of the signal envelope for each audio frame. With respect to Figure 1, delay unit 104A and distance measure unit 106A receive LSF values L(n), where L(n) is a vector L(n) = [l1(n), l2(n), ..., lN(n)], n denotes the index of a selected frame and N denotes the number of spectrum frequencies within an LSF vector.
- It is noted that LSF parameters are derived from the Linear Prediction Coefficients (LPCs), which are widely used by many modern speech compression and analysis schemes and are discussed in detail in A. M. Kondoz, Digital Speech: Coding for Low Bit Rate Communications Systems, New York: John Wiley & Sons, 1994.
- In step 152 the audio parameters are grouped according to a predetermined pattern of audio frames. In the present example, each audio frame is associated with a voice frame which is four places ahead of it. Accordingly, the audio parameters of audio frame n are grouped with the audio parameters of audio frame n-4. In general, the current frame LSF vector L(n) can be applied to an M1-stage (M1 odd) delay line (DL1), where the delay line produces pairs of LSF vectors separated by a fixed number of frames (four in the present example).
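The grouping of frame n with frame n-4 can be sketched in software with a small buffer standing in for the delay line; the deque-based implementation, the generator interface and the toy one-component "LSF vectors" are illustrative assumptions, not the patent's literal hardware structure.

```python
from collections import deque

def paired_frames(lsf_stream, spacing=4):
    """Yield (L(n), L(n - spacing)) pairs, mimicking a delay line:
    each current LSF vector is grouped with the one `spacing` frames earlier."""
    buffer = deque(maxlen=spacing)  # holds L(n-1) ... L(n-spacing)
    for vec in lsf_stream:
        if len(buffer) == spacing:
            yield vec, buffer[0]    # buffer[0] is the oldest entry, L(n-4)
        buffer.append(vec)

stream = [[n / 10] for n in range(7)]   # toy one-component "LSF vectors"
# Three pairs are produced: (L(4), L(0)), (L(5), L(1)) and (L(6), L(2)).
print(list(paired_frames(stream)))
```

Changing `spacing` reproduces the remark that any other distance between frames can be used; combinations of more than two frames would simply read additional positions out of the same buffer.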
- Referring to
Figure 1 ,distance measure unit 106A groups vectorL (n) of frame n with vectorL (n-4) of frame n-4.Distance measure unit 106B groups vectorL (n-1) of frame n-1 with vectorL (n-5) of frame n-5.Distance measure unit 106C groups vectorL (n - 2) of frame n-2 with vectorL (n-6) of frame n-6.Distance measure unit 106D groups vectorL (n-3) of frame n-3 with vectorL (n-7) of frame n-7. - In
step 154, a characteristic value is determined for each group of audio values. In the present example, eachdistance measure unit v =[ν1,ν2...ν N ] where each of the components v(i) of the vector are determined as follows: -
-
-
- The measure a(n) is applied to a second M2-stage delay line. With reference to
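Steps 154 and 156 can be sketched as follows. The squared-difference distance is an assumption standing in for the patent's defining equations, which appear only as images in the source; the choice of M2 = 4 mirrors the four delay units 112A, 112B, 112C and 112D.

```python
from collections import deque

def lsf_distance(l_now, l_past):
    """Stage 1: component-wise values v(i); stage 2: the scalar D(n).
    The squared-difference form is an illustrative assumption."""
    v = [(a - b) ** 2 for a, b in zip(l_now, l_past)]
    return sum(v)

class AveragingUnit:
    """Average the present distance values and keep a(n), a(n-1), ...
    so that g(n) = a(n) - a(n-1) can be fed to the decision logic."""
    def __init__(self, m2=4):
        self.history = deque(maxlen=m2 + 1)   # A(n) = [a(n) ... a(n-M2)]

    def update(self, distances):
        a_n = sum(distances) / len(distances)
        g_n = a_n - self.history[0] if self.history else 0.0
        self.history.appendleft(a_n)          # history[0] is the latest a(n)
        return a_n, g_n

unit = AveragingUnit()
d = [lsf_distance([0.2, 0.5], [0.2, 0.5]), lsf_distance([0.2, 0.5], [0.4, 0.5])]
# On the first update g(n) is 0.0, since no previous average exists yet.
print(unit.update(d))
```

Identical LSF pairs contribute zero distance, so a stationary spectrum drives a(n) toward zero — the stationarity estimate on which the detection is based.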
Figure 1 , the delay line includes fourdelay units delay units A (n) toDLU 116, whereA (n)=[a(n)a(n-1)a(n-2)...a(n-M2)]. Averagingunit 108 further provides the latest average value a(n) toDLU 116 and tosubtraction unit 114. Delay unit 112A provides the previous average value a(n-1) to thesubtraction unit 114.Subtraction unit 114 provides a subtraction value g(n) toDLU 116, where g(n)=a(n)-a(n-1). It is noted that an additional signal energy values e(n) can also be provided as enabling switch to the DLU 116 (step 158) by connectingDLU 116 to a signal energy detector (not shown), which is usually present in most voice oriented communication systems. - In
step 160, a decision is produced according to the values, which are present. Reference is now made toFigures 3 and4 . - With reference to
Figure 3 , ST1 (referenced 170) and ST2 (referenced 172) are states which indicate "speech-on" and "speech-off" modes, respectively. DEC1, DEC2, DEC3 and DEC4 are state transition decision functions, which in the present example, comply with the following rule: DEC3 = NOT (DEC1) and DEC4 = NOT (DEC2). The implementation of each of the decision functions can be according to a Boolean expression, which compares the value e(n) and components of the averaging vectorA (n) with predetermined or variable threshold values. - It is noted that the decision logic can vary according to specific performance requirements to a trade-off between "false alarm", "miss detect" statistics, and the like. The logic can be either constant or adapted to other components such as background noise characteristic estimator, voicing mode if available, periodicity check, and the like. The instantaneous decision result can further be applied to an additional hangover function.
- With reference to
Figure 4 ,step 180 represents the initial stage of the decision phase, wherein the current state of the VAD (speech-on or speech off) is detected. If the current state of the VAD is speech on, then thesystem 100 proceeds to step 182. Otherwise, thesystem 100 proceeds to step 186. - In
step 182, compliance with a speech-on-to-off transition condition is detected. Such a condition includes a predetermined combination of a(n) and e(n) with respect to predetermined values (Note that the threshold can be adaptive in general case). When such compliance is detected then the system proceeds to step 184, which performs a transition in the VAD state to speech-off. Otherwise, this step is repeated until such compliance is detected. With reference toFigure 1 ,DLU 116 detects if the received values comply with the predetermined condition. - In
step 186, compliance with a speech-off-to-on transition condition is detected. Such a condition includes another predetermined combination of a(n) and e(n) with respect to predetermined values. When such compliance is detected then the system proceeds to step 188, which performs a transition in the VAD state to speech-on. Otherwise, this step is repeated until such compliance is detected. - After performing a VAD mode transition (either of
step 184 and 188), the system proceeds back tostep 180. - Reference is now made to
- Reference is now made to Figure 5, which is a schematic illustration of an apparatus, generally referenced 200, constructed and operative in accordance with a further preferred embodiment of the present invention. Apparatus 200 includes a multi-stage delay unit 202, two delay arrays 204 and 210, a distance measure unit 206, an averaging unit 208, a subtraction unit 214 and a decision logic unit (DLU) 216. Delay array 204 includes a plurality of delay units 218A, 218B and 218M2, all connected in series, so that each adds a further delay to the previous one. Delay array 210 includes a plurality of delay units 212A, 212B, 212C and 212D, all connected in series, so that each adds a further delay stage to the previous one. System 200 is further connected to a Line Spectral Frequencies (LSF) generation unit 220, which can be a part of the voice encoder (vocoder) apparatus of an audio system. The LSF unit 220 produces LSF values for each received audio frame.
- The input of multi-stage delay unit 202 is connected to LSF unit 220. The output of multi-stage delay unit 202 is connected to distance measure unit 206. Hence, the LSF value L(n) at the input of delay unit 218A is associated with an M1-stage delayed value L(n-M1) at the output of delay unit 202.
- The output of distance measure unit 206 is connected to averaging unit 208 and to delay array 204. The output of each of the delay units 218A, 218B and 218M2 is connected to the averaging unit 208, so that each provides a previously delayed distance measure output value to the averaging unit 208. For example, delay unit 218A provides a distance measure value which corresponds to the pair L(n-1) and L(n-M1-1). Accordingly, only the first distance measure value has to be calculated; the rest are stored, delayed and provided to the averaging unit 208 at the appropriate timing.
- It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow.
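The saving described for this further embodiment — computing each distance once and replaying it from a delay line instead of recomputing it for every frame position — can be sketched as follows. The squared-difference distance and the particular values of M1 and M2 are illustrative assumptions.

```python
from collections import deque

def averaged_distances(lsf_stream, m1=4, m2=3):
    """Second-embodiment style: compute D(n) once per frame, keep the last
    m2+1 values in a delay line, and average them. distance() is the same
    squared-difference stand-in used earlier; m1 and m2 are illustrative."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    lsf_delay = deque(maxlen=m1)       # supplies the M1-stage delayed L(n - m1)
    d_delay = deque(maxlen=m2 + 1)     # stored distances D(n) ... D(n - m2)
    averages = []
    for vec in lsf_stream:
        if len(lsf_delay) == m1:
            d_delay.append(distance(vec, lsf_delay[0]))  # computed only once
            averages.append(sum(d_delay) / len(d_delay))
        lsf_delay.append(vec)
    return averages

# A step change in the toy spectra produces large averages that decay
# again once the signal becomes stationary.
print(averaged_distances([[0.0]] * 4 + [[1.0]] * 6))
```

Compared with the first embodiment, which holds four distance measure units, a single distance computation per frame feeds the whole average — the trade of arithmetic for storage that the text describes.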
Claims (14)
- Voice activity detection apparatus (100) comprising:an audio parameter generator (120) for producing audio parameters from received audio frames;a multi-stage delay unit (102) connected to the audio parameter generator for producing a series of the audio parameters delayed by different amounts;a distance measure means (106A, 106B) for measuring the distance between predetermined groups of the audio parameters having different delays;a plurality of determination units (106A, 106B) for determining a characteristic value for each group of the audio parameters whose distances have been measured; and, connected to the determination units, an averaging unit (108) for determining an average value for all of the characteristic values.
- Voice activity detection apparatus (100) according to claim 1 wherein the multi-stage delay unit comprises a plurality of audio parameter delay units (104A, 104B) for delaying the audio parameters, the audio parameter delay units being connected together in series so that each audio parameter delay unit following a previous audio parameter delay unit adds a further delay to the delay added by the previous one, the first of said audio parameter delay units being connected to the audio parameter generator (120);
wherein the distance measure means comprises a plurality of distance measure units (106A, 106B) each connected to at least two of said audio parameter delay units for grouping the delayed audio values produced by the audio parameter delay units according to the predetermined delay pattern, and the distance measure units are also the determination units. - The voice activity detection apparatus according to claim 2, wherein said first audio parameter delay unit (104A) operates to receive a plurality of audio parameter values, in respect of a predetermined speech period, from said audio parameter generator, each of the rest of said audio parameter delay units (104B, 104C, 104D) is operable to receive said audio parameter values from a preceding one of said audio parameter delay units (104A, 104B, 104C), and each of said distance measure units is operable to process the audio parameter values received from selected ones of said audio parameter delay units connected thereto, thereby producing differential values, the averaging unit being operable to produce an average value from said differential values.
- The voice activity detection apparatus according to claim 1, claim 2 or claim 3 and further comprising: a plurality of averaging delay units (112A, 112B) connected together in series, the first of said averaging delay units (112A) being further connected to the output of said averaging unit (108); and a digital logic unit (116) connected to said averaging delay units.
- The voice activity detection apparatus according to claim 4, wherein said first averaging delay unit (112A) is operable to receive a plurality of processed audio parameter average values from said averaging unit (108), each of said averaging delay units being operable to delay each said processed audio parameter average value, said digital logic unit being operable to receive a plurality of successive processed audio parameter average values, the latest of said successive processed audio parameter average values being received from said averaging unit and the rest of said successive processed audio parameter average values being received from said averaging delay units, said digital logic unit being operable to process said successive processed audio parameter average values thereby producing a speech presence indication.
- The voice activity detection apparatus according to claim 4 or claim 5, wherein said first audio parameter delay unit (104A) is operable to receive a plurality of audio parameter values from said audio parameter generator (120), each of the rest of said audio parameter delay units (104B, 104C, 104D) is operable to receive said audio parameter values from a preceding one of said audio parameter delay units (104A, 104B, 104C), and each of said distance measure units (106A, 106B) is operable to process together audio parameter values received from selected ones of said audio parameter delay units connected thereto, thereby producing differential values, said averaging unit being operable to produce a processed audio parameter average value from each set of said differential values, and
wherein said first averaging delay unit is operable to receive said processed audio parameter average values from said averaging unit, each of the averaging delay units is operable to delay each of the processed audio parameter average values, and said digital logic unit is operable to receive a plurality of successive processed audio parameter average values, the latest of said successive processed audio parameter average values being received from said averaging unit and the rest of said successive processed audio parameter average values being received from said averaging delay units, the digital logic unit being operable to process said successive processed audio parameter average values thereby producing a speech presence indication.
- The voice activity detection apparatus according to claim 1, claim 4 or claim 5, wherein each of the determination units (218A, 218B) is operable to provide a previously delayed distance measure output to the averaging unit (208), the first (218A) of said determination units being connected to the audio parameter generator (120) through a distance measure unit (206) which is operable to measure a distance between each of a series of differently delayed audio parameters produced by the multi-stage delay unit and an undelayed output of the audio parameter generator (120).
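In claims 4 to 7 the digital logic unit sees the latest processed average together with the delayed copies held in the averaging delay units, and derives a speech presence indication from them. A minimal sketch, assuming a simple any-tap-over-threshold rule with a three-value window (the claims do not fix the decision logic, and the threshold is illustrative):

```python
def speech_presence(averages, threshold=0.5, window=3):
    """Digital logic unit sketch: flag speech when the latest average
    or any of its (window - 1) delayed copies exceeds a threshold.
    This is one plausible rule, not the patented decision logic."""
    recent = averages[-window:]
    return any(a > threshold for a in recent)

# A single spike in the average trace keeps the decision asserted for
# 'window' frames, which behaves like a short hangover period.
trace = [0.1, 0.1, 0.9, 0.2, 0.1, 0.1]
decisions = [speech_presence(trace[: i + 1]) for i in range(len(trace))]
```

The delayed copies give the logic memory: the flag stays up for a few frames after the spike at index 2, then drops once the spike leaves the window.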
- The voice activity detection apparatus according to any one of the preceding claims, wherein said audio parameter includes line spectral frequencies.
- The voice activity detection apparatus according to claim 8, wherein said audio parameter generator comprises a line spectral frequency generator.
- The voice activity detection apparatus according to any one of claims 4 to 9 and further comprising a subtraction unit (114) connected between the input and output of said first averaging delay unit and further to said digital logic unit,
wherein said subtraction unit is operable to produce difference values from processed audio parameter average values received from said averaging unit and from processed audio parameter average values delayed by said first averaging delay unit, and
wherein said digital logic unit is operable to process said difference values together with said successive processed audio parameter average values thereby producing a speech presence indication.
- A method of use of the apparatus according to any one of the preceding claims for detecting speech activity, comprising the steps of: grouping audio parameters, which are associated with a predetermined combination of audio frames, thereby producing a plurality of groups; determining a characteristic value for each of said groups; determining an average value for each of a plurality of selections of a plurality of said characteristic values; and determining the presence of speech activity from selected ones of said average values.
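Claim 10's subtraction unit forms difference values between the current processed average and its copy delayed by the first averaging delay unit, so the digital logic can combine the level of the average with its rate of change. A hedged sketch; the thresholds and the or-combination are assumptions, not taken from the patent:

```python
def decide(averages, level_thresh=0.5, delta_thresh=0.3):
    """Sketch of claim 10: the subtraction unit yields the difference
    between the current average and its one-step-delayed copy; the
    digital logic flags speech on a high level or a sharp rise.
    Thresholds are illustrative only."""
    current, previous = averages[-1], averages[-2]
    delta = current - previous
    return current > level_thresh or delta > delta_thresh

assert decide([0.1, 0.9])       # level high: sustained speech
assert decide([0.05, 0.4])      # sharp rise: speech onset
assert not decide([0.2, 0.1])   # low and falling: noise
```

Using the delta alongside the level lets the detector react to onsets one frame earlier than a level threshold alone would.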
- The method according to claim 11, further comprising the step of detecting the energy of audio samples associated with said audio parameters, prior to said step of determining the presence of speech activity.
- The method according to claim 11 or claim 12 and further comprising the preliminary step of receiving said audio parameters from an audio parameter generator.
- The method according to claim 11, claim 12 or claim 13 and further comprising the preliminary step of producing said audio parameters from a plurality of audio samples.
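The method of claims 11 to 14 runs end to end from audio samples: produce parameters per frame (claim 14), gate on frame energy before the speech decision (claim 12), then apply the grouping and differencing steps. The sketch below uses a stand-in parameter generator and an assumed energy floor and threshold, since the claims name the steps but not their numeric details:

```python
def frame_energy(samples):
    """Mean squared amplitude of one frame of audio samples."""
    return sum(s * s for s in samples) / len(samples)

def detect(frames_of_samples, params_of, energy_floor=1e-4):
    """Method sketch (claims 11-14): derive parameters per frame,
    skip the speech decision for near-silent frames (claim 12), and
    otherwise compare successive parameter vectors. 'params_of' stands
    in for any audio parameter generator; values are illustrative."""
    decisions = []
    history = []
    for samples in frames_of_samples:
        history.append(params_of(samples))
        if frame_energy(samples) < energy_floor or len(history) < 2:
            decisions.append(False)  # too quiet, or no context yet
            continue
        diff = sum(abs(a - b) for a, b in zip(history[-1], history[-2]))
        decisions.append(diff / len(history[-1]) > 0.1)
    return decisions

# Toy parameter generator (stands in for an LSF/LPC analyser).
params_of = lambda s: [sum(s) / len(s), max(s)]
frames = [[0.0] * 4, [0.0] * 4,
          [0.5, -0.5, 0.5, -0.5], [0.5, 0.5, 0.5, 0.5]]
decisions = detect(frames, params_of)
```

The energy gate is cheap and runs first, so the parameter-distance logic is only exercised on frames loud enough to plausibly contain speech.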
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0006312 | 2000-03-15 | ||
GB0006312A GB2360428B (en) | 2000-03-15 | 2000-03-15 | Voice activity detection apparatus and method |
PCT/IB2001/001603 WO2001080220A2 (en) | 2000-03-15 | 2001-03-14 | Voice activity detection apparatus and method |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1269462A2 EP1269462A2 (en) | 2003-01-02 |
EP1269462B1 true EP1269462B1 (en) | 2008-05-14 |
Family
ID=9887716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01958309A Expired - Lifetime EP1269462B1 (en) | 2000-03-15 | 2001-03-14 | Voice activity detection apparatus and method |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1269462B1 (en) |
AT (1) | ATE395683T1 (en) |
AU (1) | AU2001280027A1 (en) |
DE (1) | DE60133998D1 (en) |
GB (1) | GB2360428B (en) |
WO (1) | WO2001080220A2 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2317084B (en) * | 1995-04-28 | 2000-01-19 | Northern Telecom Ltd | Methods and apparatus for distinguishing speech intervals from noise intervals in audio signals |
US5774849A (en) * | 1996-01-22 | 1998-06-30 | Rockwell International Corporation | Method and apparatus for generating frame voicing decisions of an incoming speech signal |
US6385548B2 (en) * | 1997-12-12 | 2002-05-07 | Motorola, Inc. | Apparatus and method for detecting and characterizing signals in a communication system |
US6188981B1 (en) * | 1998-09-18 | 2001-02-13 | Conexant Systems, Inc. | Method and apparatus for detecting voice activity in a speech signal |
-
2000
- 2000-03-15 GB GB0006312A patent/GB2360428B/en not_active Expired - Fee Related
-
2001
- 2001-03-14 EP EP01958309A patent/EP1269462B1/en not_active Expired - Lifetime
- 2001-03-14 WO PCT/IB2001/001603 patent/WO2001080220A2/en active IP Right Grant
- 2001-03-14 AT AT01958309T patent/ATE395683T1/en not_active IP Right Cessation
- 2001-03-14 AU AU2001280027A patent/AU2001280027A1/en not_active Abandoned
- 2001-03-14 DE DE60133998T patent/DE60133998D1/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
ATE395683T1 (en) | 2008-05-15 |
GB2360428B (en) | 2002-09-18 |
GB2360428A (en) | 2001-09-19 |
EP1269462A2 (en) | 2003-01-02 |
WO2001080220A2 (en) | 2001-10-25 |
WO2001080220A3 (en) | 2002-05-23 |
DE60133998D1 (en) | 2008-06-26 |
AU2001280027A1 (en) | 2001-10-30 |
GB0006312D0 (en) | 2000-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0909442B1 (en) | Voice activity detector | |
KR100883712B1 (en) | Method of estimating sound arrival direction, and sound arrival direction estimating apparatus | |
EP3493205B1 (en) | Method and apparatus for adaptively detecting a voice activity in an input audio signal | |
EP0551803B1 (en) | A method of synchronizing and channel estimation in a TDMA radio communication system | |
JP5006279B2 (en) | Voice activity detection apparatus, mobile station, and voice activity detection method | |
KR100770839B1 (en) | Method and apparatus for estimating harmonic information, spectrum information and degree of voicing information of audio signal | |
JP3878482B2 (en) | Voice detection apparatus and voice detection method | |
US8818811B2 (en) | Method and apparatus for performing voice activity detection | |
EP0556992A1 (en) | Noise attenuation system | |
US20070239437A1 (en) | Apparatus and method for extracting pitch information from speech signal | |
EP1548703B1 (en) | Apparatus and method for voice activity detection | |
US6876965B2 (en) | Reduced complexity voice activity detector | |
KR20090080777A (en) | Method and Apparatus for detecting signal | |
JP3418005B2 (en) | Voice pitch detection device | |
EP1269462B1 (en) | Voice activity detection apparatus and method | |
JPH08221097A (en) | Detection method of audio component | |
Beritelli et al. | A low‐complexity speech‐pause detection algorithm for communication in noisy environments | |
CA2279264C (en) | Speech immunity enhancement in linear prediction based dtmf detector | |
US5734679A (en) | Voice signal transmission system using spectral parameter and voice parameter encoding apparatus and decoding apparatus used for the voice signal transmission system | |
JPS61184912A (en) | Constant variable type audible sense weighting filter | |
US6993478B2 (en) | Vector estimation system, method and associated encoder | |
GB2437868A (en) | Estimating noise power spectrum, sorting time frames, calculating the quantile and interpolating values over all remaining frequencies | |
JPH10304023A (en) | Telephone set | |
WO1988007740A1 (en) | Distance measurement control of a multiple detector system | |
JPH0311479B2 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20021125 |
|
17Q | First examination report despatched |
Effective date: 20061201 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REF | Corresponds to: |
Ref document number: 60133998 Country of ref document: DE Date of ref document: 20080626 Kind code of ref document: P |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080825 |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20081014 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080814 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20090217 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090331 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20091130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090314 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090331 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090331 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20091123 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080815 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090314 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080514 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20200327 Year of fee payment: 20 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20200528 Year of fee payment: 20 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R071 Ref document number: 60133998 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: PE20 Expiry date: 20210313 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION Effective date: 20210313 |