EP1269462B1 - Voice activity detection apparatus and method - Google Patents


Info

Publication number
EP1269462B1
EP1269462B1 (application EP01958309A)
Authority
EP
European Patent Office
Prior art keywords
audio parameter
unit
delay
audio
averaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01958309A
Other languages
German (de)
French (fr)
Other versions
EP1269462A2 (en)
Inventor
Mark Shahaf
Yishay Ben-Shimol
Moti Shor-Haham
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Motorola Solutions Israel Ltd
Original Assignee
Motorola Israel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Israel Ltd filed Critical Motorola Israel Ltd
Publication of EP1269462A2 publication Critical patent/EP1269462A2/en
Application granted granted Critical
Publication of EP1269462B1 publication Critical patent/EP1269462B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals



Abstract

Voice activity detection system includes a plurality of audio parameter delay units 104A-104G connected in series therebetween, the first audio parameter delay unit 104A being further connected to an audio parameter generator 120, a plurality of distance measure units 106A-106D, each connected to at least two of the delay units, an averaging unit 108, connected to the distance measure units, a plurality of averaging delay units 112A-112D connected in series therebetween, the first averaging delay unit 112A being further connected to the output of the averaging unit, and a digital logic unit 116 connected to the averaging delay units. In an alternative embodiment (fig 5), there is a single distance measure unit (206) coupled between a multi-stage delay unit (202) and a plurality of delay units (218) connected in series.

Description

    FIELD OF THE INVENTION
  • The present invention relates to voice processing systems in general, and to methods and apparatus for detecting voice activity in a low resource environment, in particular.
  • BACKGROUND OF THE INVENTION
  • Methods and apparatus for detecting voice activity are known in the art. A voice activity detector (VAD) operates under the assumption that speech is present only in part of the audio signal, while many intervals exhibit only silence or background noise.
  • A voice activity detector can be used for many purposes such as suppressing overall transmission activity in a transmission system, when there is no speech, thus potentially saving power and channel bandwidth. When the VAD detects that speech activity has resumed, then it can reinitiate transmission activity.
  • A voice activity detector can also be used in conjunction with speech storage devices, by differentiating audio portions which include speech from those that are "speechless". The portions including speech are then stored in the storage device and the "speechless" portions are not stored.
  • Conventional methods for detecting voice are based at least in part on methods for detecting and assessing the power of a speech signal. The estimated power is compared to either a constant or an adaptive threshold to reach a decision. The main advantage of these methods is their low complexity, which makes them suitable for low-resource implementations. The main disadvantage is that background noise can cause "speech" to be detected when none is present, or can obscure speech that is present so that it goes undetected.
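The power-comparison scheme just described can be sketched in a few lines; the smoothing factor and decision margin below are illustrative choices for the sketch, not values taken from the patent or any specification.

```python
import numpy as np

def energy_vad(frames, alpha=0.95, margin=2.0):
    """Classify each frame as speech (True) or non-speech (False) by
    comparing frame energy against an adaptive noise-floor estimate.
    alpha (smoothing) and margin (threshold factor) are illustrative."""
    noise_floor = None
    decisions = []
    for frame in frames:
        energy = float(np.mean(np.asarray(frame, dtype=float) ** 2))
        if noise_floor is None:
            noise_floor = energy  # bootstrap the estimate from the first frame
        is_speech = energy > margin * noise_floor
        if not is_speech:
            # Track the noise floor only while no speech is detected.
            noise_floor = alpha * noise_floor + (1 - alpha) * energy
        decisions.append(is_speech)
    return decisions
```

Low complexity is exactly the attraction noted above: one multiply-accumulate pass per frame plus a scalar comparison.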
  • Some methods for detecting speech activity are directed at noisy mobile environments and are based on adaptive filtering of the speech signal, which reduces the noise content of the signal prior to the final decision. The frequency spectrum and noise level may vary because the method will be used for different speakers and in different environments. Hence, the input filter and thresholds are often adaptive so as to track these variations. Examples of these methods are provided in GSM specification 06.42 Voice Activity Detector (VAD) for the half rate, full rate, and enhanced full rate speech traffic channels, respectively. Another such method is the "Multi-Boundary Voice Activity Detection Algorithm" proposed in ITU G.729 Annex B. These methods are more accurate in noisy environments but are significantly more complex to implement.
  • All of these methods require the speech signal to be input. Some applications employing speech decompression schemes require carrying out speech detection during the speech decompression process.
  • European Patent application No. 0785419A2 Benyassine et al. is directed to a method for voice activity detection which includes the following steps: extracting a predetermined set of parameters from the incoming speech signal for each frame and making a frame voicing decision of the incoming speech signal for each frame according to a set of difference measures extracted from the predetermined set of parameters.
  • International Patent Application number WO-A-0017856 relates to a method and apparatus for detecting voice activity in a speech signal. WO-A-0017856 has a filing date of 27 August 1999, a publication date of 30 March 2000, and claims a priority date of 18 September 1998.
  • SUMMARY OF THE PRESENT INVENTION
  • It is an object of the present invention to provide a method and an apparatus for detecting the presence of speech activity, which alleviates the disadvantages of the prior art.
  • It is a further object of the present invention to provide a method and an apparatus, which can classify speech activity by utilizing compressed speech parameters.
  • In accordance with the present invention in a first aspect there is provided a voice activity detection apparatus as defined in claim 1 of the accompanying claims.
  • In accordance with the present invention in a second aspect there is provided a method of use as defined in claim 11 of the accompanying claims.
  • Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • Figure 1 is a schematic illustration of apparatus, constructed and operative in accordance with a preferred embodiment of the present invention;
    • Figure 2 is an illustration of a method for operating the apparatus of Figure 1, operative in accordance with the present invention;
    • Figure 3 is a schematic illustration of a two-state logic structure utilised in the preferred embodiment;
    • Figure 4 shows more detail of a step of the method shown in Figure 2; and
    • Figure 5, which is a schematic illustration of an apparatus constructed and operative in accordance with a further embodiment of the invention.
    DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The present invention alleviates the disadvantages of the prior art by providing a method which utilizes conventional vocoder output data of a voice related stream for detecting voice activity therein. According to one aspect of the present invention, the method for voice activity detection (VAD) is based on the analysis of audio parameters such as Line Spectral Frequencies (LSF) parameters. The detection is based on a stationarity estimate of the spectral characteristics of the incoming speech frames, which are represented by LSF parameters.
  • Reference is now made to Figure 1, which is a schematic illustration of an apparatus, generally referenced 100, constructed and operative in accordance with a preferred embodiment of the present invention. Apparatus 100 includes two delay arrays 102 and 110, a plurality of distance measure units 106A, 106B, 106C and 106D, an averaging unit 108, a subtraction unit 114 and decision logic unit (DLU) 116. Delay array 102 includes a plurality of delay units 104A, 104B, 104C, 104D, 104E, 104F and 104G, all connected in series, so that each adds a further delay to the previous one.
  • Delay array 110 includes a plurality of delay units, 112A, 112B, 112C and 112D, all connected in series, so that each adds a further delay to the previous one. Apparatus 100 is further connected to a Line Spectral Frequencies (LSF) generation unit 120, which can be a part of the voice encoder (vocoder) apparatus of an audio system. The LSF unit 120 produces LSF values for each received audio frame. It is noted that LSF unit 120 is only one example for an audio parameter generation unit.
  • The output of the LSF unit 120 is coupled to the input of delay unit 104A. The input of each of delay units 104A, 104B, 104C and 104D is connected to a respective one of distance measure units 106A, 106B, 106C and 106D. For example, the input of delay unit 104A is connected to distance measure unit 106A.
  • The output of each of these delay units is connected to a distance measure unit. Delay unit 104A has its output connected to distance measure unit 106B. Unit 104B has its output connected to unit 106C. Unit 104C has its output connected to unit 106D.
  • The output of delay units 104D, 104E, 104F and 104G is connected to a respective one of distance measure units 106A, 106B, 106C and 106D. For example, the output of delay unit 104D is connected to distance measure unit 106A. Hence, the LSF value L(n) at the input of delay unit 104A is associated with the value L(n-4) at the output of delay unit 104D.
  • Similarly, each of the LSF values L(n-1), L(n-2) and L(n-3) is associated with a respective one of LSF values L(n-5), L(n-6) and L(n-7), at a respective one of distance measure units 106B, 106C and 106D. According to another embodiment of the invention (not shown), the system includes a different number of delay units and can combine more than two LSF values, which are at different distances from each other, such as L(n) + L(n-4) + L(n-6).
  • The distance measure units 106A, 106B, 106C and 106D are all connected to the averaging unit 108. Averaging unit 108 is further connected to delay unit 112A, subtraction unit 114 and to DLU 116. The output of each of delay units 112A, 112B, 112C and 112D is connected to DLU 116. The output of delay unit 112A is further connected to the subtraction unit 114.
  • Reference is further made to Figure 2, which is an illustration of a method for operating the apparatus 100 of Figure 1, operative in accordance with another preferred embodiment of the present invention.
  • In step 150 a plurality of audio parameters are received. Each of the audio parameters is related to a predetermined audio frame. In the present example, the audio parameters include LSF values, which represent the short-time frequency spectrum characteristics of the signal envelope for each audio frame. With respect to Figure 1, delay unit 104A and distance measure unit 106A receive LSF values L(n), where L(n) = [l_1(n), l_2(n), ..., l_N(n)], n denotes the index of a selected frame and N denotes the number of spectrum frequencies within an LSF vector.
  • It is noted that LSF parameters are derived from the Linear Prediction Coefficients (LPC's), which are widely used by many modern speech compression and analysis schemes and are discussed in detail in A. M. Kondoz, Digital Speech: Coding for Low Bit Rate Communications Systems, New York: John Wiley & Sons, 1994.
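For readers unfamiliar with the derivation Kondoz describes, the standard construction forms a symmetric polynomial P(z) and an antisymmetric polynomial Q(z) from the LPC polynomial A(z); the LSFs are the angles of their unit-circle roots. The sketch below follows that textbook recipe and is not code from the patent:

```python
import numpy as np

def lpc_to_lsf(a):
    """Convert LPC coefficients a = [1, a1, ..., ap] into line spectral
    frequencies (radians in (0, pi)) via the textbook P(z)/Q(z) split."""
    a = np.asarray(a, dtype=float)
    # P(z) = A(z) + z^-(p+1) A(1/z),  Q(z) = A(z) - z^-(p+1) A(1/z)
    P = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    Q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    angles = []
    for poly in (P, Q):
        for r in np.roots(poly):
            w = float(np.angle(r))
            # Keep upper-half-plane roots, dropping the fixed roots at 0 and pi.
            if 1e-9 < w < np.pi - 1e-9:
                angles.append(w)
    return np.sort(angles)
```

For a stable order-p predictor this yields p interlaced frequencies, i.e. the per-frame vector L(n) that the apparatus consumes.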
  • In step 152 the audio parameters are grouped according to a predetermined pattern of audio frames. In the present example, each audio frame is associated with the audio frame four positions before it. Accordingly, the audio parameters of audio frame n are grouped with the audio parameters of audio frame n-4. In general, the current frame LSF vector L(n) can be applied to an M1-stage (M1 is odd) delay line (DL1), where the delay line produces pairs of LSF vectors with a delay of K = (M1 + 1) / 2.
  • It is noted that any other number can be used for the distance between the frames. In addition, further combinations can also be used such as combination (n,n-2,n-7) and the like.
  • Referring to Figure 1, distance measure unit 106A groups vector L(n) of frame n with vector L(n-4) of frame n-4. Distance measure unit 106B groups vector L(n-1) of frame n-1 with vector L(n-5) of frame n-5. Distance measure unit 106C groups vector L(n-2) of frame n-2 with vector L(n-6) of frame n-6. Distance measure unit 106D groups vector L(n-3) of frame n-3 with vector L(n-7) of frame n-7.
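The grouping pattern of step 152 amounts to tapping one delay line at two offsets. A minimal sketch, with the history held newest-first so that history[0] is L(n):

```python
def lsf_pairs(history, delay=4, num_pairs=4):
    """Return the vector pairs routed to the distance measure units:
    (L(n-k), L(n-delay-k)) for k = 0 .. num_pairs-1.
    history must hold at least delay + num_pairs entries, newest first."""
    return [(history[k], history[k + delay]) for k in range(num_pairs)]
```

With eight stored vectors this reproduces the four pairings listed for units 106A through 106D; other patterns such as (n, n-2, n-7) would simply index the same history at different offsets.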
  • In step 154, a characteristic value is determined for each group of audio values. In the present example, each distance measure unit 106A, 106B, 106C and 106D performs a two-stage operation. The first operation includes generating a vector v = [v1 v2 ... vN], where each component v(i) of the vector is determined as follows:

    v(i) = l_i(j) · (1 - l_i²(j-K)) - (1 - l_i²(j)) · l_i(j-K),  for i = 1...N, j = n...n-3
  • The second operation includes a transformation of the vector v into a vector D according to the following expression:

    d(i) = Σ_{j=1...T} c_j · v^(2j)(i),  for i = 1...N,

    where c_j are coefficients and T is the number of elements in the summation.
  • The distance measure units 106A, 106B, 106C and 106D provide the D vectors D(n), D(n-1), D(n-2), ..., D(n-(M1-1)/2) to averaging unit 108.
  • In step 156, an average value is determined for all of the present characteristic values. In the present example, averaging unit 108 applies the following averaging expression:

    a(n) = (2 / (M1 - 1)) · Σ_{i=1...N} Σ_{j=0...(M1-1)/2} d_i(n-j)
  • The measure a(n) is applied to a second M2-stage delay line. With reference to Figure 1, the delay line includes four delay units 112A, 112B, 112C and 112D. The delay units 112A, 112B, 112C and 112D provide a vector A(n) to DLU 116, where A(n) = [a(n) a(n-1) a(n-2) ... a(n-M2)]. Averaging unit 108 further provides the latest average value a(n) to DLU 116 and to subtraction unit 114. Delay unit 112A provides the previous average value a(n-1) to the subtraction unit 114. Subtraction unit 114 provides a subtraction value g(n) to DLU 116, where g(n) = a(n) - a(n-1). It is noted that an additional signal energy value e(n) can also be provided as an enabling switch to the DLU 116 (step 158) by connecting DLU 116 to a signal energy detector (not shown), which is usually present in most voice oriented communication systems.
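The averaging and subtraction stages can be mimicked with a short delay queue. The sketch below uses a plain mean of the current D components as a simplified stand-in for the patent's exact averaging expression:

```python
import numpy as np
from collections import deque

def stationarity_measures(d_vectors, m2=4):
    """For each incoming D vector, emit (a(n), g(n)) where a(n) is a
    simplified average of the D components (averaging unit 108) and
    g(n) = a(n) - a(n-1) (subtraction unit 114). The deque plays the
    role of delay array 110."""
    a_hist = deque(maxlen=m2 + 1)
    out = []
    for d in d_vectors:
        a_n = float(np.mean(d))
        g_n = a_n - a_hist[0] if a_hist else 0.0  # a_hist[0] is a(n-1)
        a_hist.appendleft(a_n)
        out.append((a_n, g_n))
    return out
```

Keeping m2 + 1 past averages in the queue is what lets the decision logic see the whole vector A(n) rather than a single scalar.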
  • In step 160, a decision is produced according to the values, which are present. Reference is now made to Figures 3 and 4.
  • With reference to Figure 3, ST1 (referenced 170) and ST2 (referenced 172) are states which indicate "speech-on" and "speech-off" modes, respectively. DEC1, DEC2, DEC3 and DEC4 are state transition decision functions, which in the present example comply with the following rule: DEC3 = NOT(DEC1) and DEC4 = NOT(DEC2). The implementation of each of the decision functions can be according to a Boolean expression, which compares the value e(n) and components of the averaging vector A(n) with predetermined or variable threshold values.
  • It is noted that the decision logic can vary according to specific performance requirements, trading off "false alarm" and "miss detect" statistics, and the like. The logic can be either constant or adapted to other components such as a background noise characteristic estimator, voicing mode if available, a periodicity check, and the like. The instantaneous decision result can further be applied to an additional hangover function.
  • With reference to Figure 4, step 180 represents the initial stage of the decision phase, wherein the current state of the VAD (speech-on or speech off) is detected. If the current state of the VAD is speech on, then the system 100 proceeds to step 182. Otherwise, the system 100 proceeds to step 186.
  • In step 182, compliance with a speech-on-to-off transition condition is detected. Such a condition includes a predetermined combination of a(n) and e(n) with respect to predetermined values (note that, in the general case, the threshold can be adaptive). When such compliance is detected, the system proceeds to step 184, which performs a transition of the VAD state to speech-off. Otherwise, this step is repeated until such compliance is detected. With reference to Figure 1, DLU 116 detects if the received values comply with the predetermined condition.
  • In step 186, compliance with a speech-off-to-on transition condition is detected. Such a condition includes another predetermined combination of a(n) and e(n) with respect to predetermined values. When such compliance is detected then the system proceeds to step 188, which performs a transition in the VAD state to speech-on. Otherwise, this step is repeated until such compliance is detected.
  • After performing a VAD mode transition (either step 184 or step 188), the system proceeds back to step 180.
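The two-state machine of Figures 3 and 4 can be sketched as follows. The thresholds are hypothetical stand-ins for the predetermined combinations of a(n) and e(n); larger a(n) is taken to indicate less stationary, hence speech-like, frames:

```python
def vad_states(measures, off_to_on=2.0, on_to_off=0.5):
    """Walk the speech-on / speech-off state machine over a sequence of
    a(n) measures, mirroring steps 180-188. Thresholds are illustrative."""
    state = "speech-off"
    states = []
    for a_n in measures:
        if state == "speech-on" and a_n < on_to_off:
            state = "speech-off"   # transition of step 184
        elif state == "speech-off" and a_n > off_to_on:
            state = "speech-on"    # transition of step 188
        states.append(state)
    return states
```

Using two different thresholds gives hysteresis, one simple way to avoid chattering between states; a real implementation would also gate on e(n) and apply a hangover function as noted above.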
  • Reference is now made to Figure 5, which is a schematic illustration of apparatus, generally referenced 200, constructed and operative in accordance with a further preferred embodiment of the present invention. Apparatus 200 includes a multi stage delay unit 202, two delay arrays 204 and 210, a distance measure unit 206, an averaging unit 208, a subtraction unit 214 and decision logic unit (DLU) 216. Delay array 204 includes a plurality of delay units 218A, 218B and 218M2, all connected in series, so that each adds a further delay to the previous one. It is noted that delay array 210 includes a plurality of delay units, 212A, 212B, 212C and 212D, all connected in series, so that each adds a further delay stage to the previous one. System 200 is further connected to a Line Spectral Frequencies (LSF) generation unit 220, which can be a part of the voice encoder (vocoder) apparatus of an audio system. The LSF unit 220 produces LSF values for each received audio frame.
  • The input of multi-stage delay unit 202 is connected to LSF unit 220. The output of multi-stage delay unit 202 is connected to distance measure unit 206. Hence, the LSF value L(n) at the input of delay unit 218A is associated with an M1-stage delayed value L(n-M1) at the output of delay unit 202.
  • The output of distance measure unit 206 is connected to averaging unit 208 and to delay array 204. The output of each of the delay units 218A, 218B and 218M2 is connected to the averaging unit 208, so that each provides a previously delayed distance measure output value to the averaging unit 208. For example, delay unit 218A provides a distance measure value corresponding to the pair L(n-1) and L(n-M1-1). Accordingly, only the most recent distance measure value has to be calculated; the rest are stored, delayed and provided to the averaging unit 208 at the appropriate time.
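The pipeline described above can be sketched in a few lines of code: one distance per frame between the current LSF vector L(n) and its M1-frames-delayed counterpart L(n-M1), a cache of the last distances standing in for delay array 204, a moving average standing in for averaging unit 208, and a two-state decision with hysteresis standing in for the DLU. This is a minimal illustrative sketch, not the patented implementation: the class and function names are hypothetical, the Euclidean distance metric, the M1/M2 values and the fixed thresholds are all assumptions (the text notes the thresholds may be adaptive).

```python
from collections import deque

def lsf_distance(lsf_a, lsf_b):
    """Illustrative distance between two LSF vectors (Euclidean; assumed metric)."""
    return sum((a - b) ** 2 for a, b in zip(lsf_a, lsf_b)) ** 0.5

class VadSketch:
    """Sketch of the Figure-5 style pipeline: distance measure, delayed-distance
    cache, averaging, and a speech-on/speech-off state machine with hysteresis."""

    def __init__(self, m1=4, m2=8, on_threshold=0.5, off_threshold=0.2):
        self.lsf_history = deque(maxlen=m1 + 1)  # plays the role of delay unit 202
        self.distances = deque(maxlen=m2 + 1)    # plays the role of delay array 204
        self.on_threshold = on_threshold         # illustrative fixed thresholds
        self.off_threshold = off_threshold
        self.speech_on = False

    def process_frame(self, lsf):
        """Consume one frame's LSF vector; return the current speech flag."""
        self.lsf_history.append(tuple(lsf))
        if len(self.lsf_history) == self.lsf_history.maxlen:
            # Only the newest distance is computed; older ones are reused
            # from the cache, as the text describes for delay array 204.
            d = lsf_distance(self.lsf_history[-1], self.lsf_history[0])
            self.distances.append(d)
        if self.distances:
            a_n = sum(self.distances) / len(self.distances)  # averaging unit 208
            # Decision logic (DLU): hysteresis between the two VAD states.
            if self.speech_on and a_n < self.off_threshold:
                self.speech_on = False
            elif not self.speech_on and a_n > self.on_threshold:
                self.speech_on = True
        return self.speech_on
```

Feeding identical LSF vectors yields zero distances and the detector stays in speech-off; spectrally changing frames raise the average distance above the on-threshold and flip it to speech-on, where it remains until the average falls below the (lower) off-threshold.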
  • It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow.

Claims (14)

  1. Voice activity detection apparatus (100) comprising:
    an audio parameter generator (120) for producing audio parameters from received audio frames;
    a multi-stage delay unit (102) connected to the audio parameter generator for producing a series of the audio parameters delayed by different amounts;
    a distance measure means (106A, 106B) for measuring the distance between predetermined groups of the audio parameters having different delays;
    a plurality of determination units (106A, 106B) for determining a characteristic value for each group of the audio parameters whose distances have been measured; and, connected to the determination units, an averaging unit (108) for determining an average value for all of the characteristic values.
  2. Voice activity detection apparatus (100) according to claim 1 wherein the multi-stage delay unit comprises a plurality of audio parameter delay units (104A, 104B) for delaying the audio parameters, the audio parameter delay units being connected together in series so that each audio parameter delay unit following a previous audio parameter delay unit adds a further delay to the delay added by the previous one, the first of said audio parameter delay units being connected to the audio parameter generator (120);
    wherein the distance measure means comprises a plurality of distance measure units (106A, 106B) each connected to at least two of said audio parameter delay units for grouping the delayed audio values produced by the audio parameter delay units according to the predetermined delay pattern, and the distance measure units are also the determination units.
  3. The voice activity detection apparatus according to claim 2, wherein said first audio parameter delay unit (104A) operates to receive a plurality of audio parameter values, in respect of a predetermined speech period, from said audio parameter generator, each of the rest of said audio parameter delay units (104A, 104B, 104C) is operable to receive said audio parameter values from a preceding one of said audio parameter delay units (104A, 104B, 104C), and each of said distance measure units is operable to process the audio parameter values received from selected ones of said audio parameter delay units connected thereto, thereby producing differential values, the averaging unit being operable to produce an average value from said differential values.
  4. The voice activity detection apparatus according to claim 1, claim 2 or claim 3 and further comprising:
    a plurality of averaging delay units (112A, 112B) connected in series therebetween, the first of said averaging delay units (112A) being further connected to the output of said averaging unit (108); and
    a digital logic unit (116) connected to said averaging delay units.
  5. The voice activity detection apparatus according to claim 4, wherein said first averaging delay unit (112A) is operable to receive a plurality of processed audio parameter average values from said averaging unit (108), each of said averaging delay units being operable to delay each said processed audio parameter average value, said digital logic unit being operable to receive a plurality of successive processed audio parameter average values, the latest of said successive processed audio parameter average values being received from said averaging unit and the rest of said successive processed audio parameter average values being received from said averaging delay units, said digital logic unit being operable to process said successive processed audio parameter average values thereby producing a speech presence indication.
  6. The voice activity detection apparatus according to claim 4 or claim 5, wherein said first audio parameter delay unit (104A) is operable to receive a plurality of audio parameter values from said audio parameter generator (120), each of the rest of said audio parameter delay units (104B, 104C, 104D) is operable to receive said audio parameter values from a preceding one of said audio parameter delay units (104A, 104B, 104C), and each of said distance measure units (106A, 106B) is operable to process together audio parameter values received from selected ones of said audio parameter delay units connected thereto, thereby producing differential values, said averaging unit being operable to produce a processed audio parameter average value from each set of said differential values, and
    wherein said first averaging delay unit is operable to receive said processed audio parameter average values from said averaging unit, each of the averaging delay units is operable to delay each of the processed audio parameter average values, the digital logic unit being operable to receive a plurality of successive processed audio parameter average values, the latest of said successive processed audio parameter average values being received from said averaging unit and the rest of said successive processed audio parameter average values being received from said averaging delay units, the digital logic unit being operable to process said successive processed audio parameter average values thereby producing a speech presence indication.
  7. The voice activity detection apparatus according to claim 1, claim 4 or claim 5, wherein each of the determination units (218A, 218B) is operable to provide a previously delayed distance measure output to the averaging unit (208), the first (218A) of said determination units being connected to the audio parameter generator (120) through a distance measure unit (206) which is operable to measure a distance between each of a series of differently delayed audio parameters produced by the multi-stage delay unit (202) and an undelayed output of the audio parameter generator (120).
  8. The voice activity detection apparatus according to any one of the preceding claims, wherein said audio parameter includes line spectral frequencies.
  9. The voice activity detection apparatus according to claim 8, wherein said audio parameter generator comprises a line spectral frequency generator.
  10. The voice activity detection apparatus according to any one of claims 4 to 9 and further comprising a subtraction unit (114) connected between the input and output of said first averaging delay unit and further to said digital logic unit,
    wherein said subtraction unit is operable to produce difference values from processed audio parameter average values received from said averaging unit and from processed audio parameter average values delayed by said first averaging delay unit, and
    wherein said digital logic unit is operable to process said difference values together with said successive processed audio parameter average values thereby producing a speech presence indication.
  11. A method of use of the apparatus according to any one of the preceding claims for detecting speech activity, comprising the steps of:
    grouping audio parameters, which are associated with a predetermined combination of audio frames, thereby producing a plurality of groups;
    determining a characteristic value for each of said groups;
    determining an average value for each of a plurality of selections of a plurality of said characteristic values; and
    determining the presence of speech activity from selected ones of said average values.
  12. The method according to claim 11, further comprising the step of detecting the energy of audio samples associated with said audio parameters, prior to said step of determining the presence of speech activity.
  13. The method according to claim 11 or claim 12 and further comprising the preliminary step of receiving said audio parameters from an audio generator.
  14. The method according to claim 11, claim 12 or claim 13 and further comprising the preliminary step of producing said audio parameters from a plurality of audio samples.
EP01958309A 2000-03-15 2001-03-14 Voice activity detection apparatus and method Expired - Lifetime EP1269462B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB0006312 2000-03-15
GB0006312A GB2360428B (en) 2000-03-15 2000-03-15 Voice activity detection apparatus and method
PCT/IB2001/001603 WO2001080220A2 (en) 2000-03-15 2001-03-14 Voice activity detection apparatus and method

Publications (2)

Publication Number Publication Date
EP1269462A2 EP1269462A2 (en) 2003-01-02
EP1269462B1 true EP1269462B1 (en) 2008-05-14

Family

ID=9887716

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01958309A Expired - Lifetime EP1269462B1 (en) 2000-03-15 2001-03-14 Voice activity detection apparatus and method

Country Status (6)

Country Link
EP (1) EP1269462B1 (en)
AT (1) ATE395683T1 (en)
AU (1) AU2001280027A1 (en)
DE (1) DE60133998D1 (en)
GB (1) GB2360428B (en)
WO (1) WO2001080220A2 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2317084B (en) * 1995-04-28 2000-01-19 Northern Telecom Ltd Methods and apparatus for distinguishing speech intervals from noise intervals in audio signals
US5774849A (en) * 1996-01-22 1998-06-30 Rockwell International Corporation Method and apparatus for generating frame voicing decisions of an incoming speech signal
US6385548B2 (en) * 1997-12-12 2002-05-07 Motorola, Inc. Apparatus and method for detecting and characterizing signals in a communication system
US6188981B1 (en) * 1998-09-18 2001-02-13 Conexant Systems, Inc. Method and apparatus for detecting voice activity in a speech signal

Also Published As

Publication number Publication date
ATE395683T1 (en) 2008-05-15
GB2360428B (en) 2002-09-18
GB2360428A (en) 2001-09-19
EP1269462A2 (en) 2003-01-02
WO2001080220A2 (en) 2001-10-25
WO2001080220A3 (en) 2002-05-23
DE60133998D1 (en) 2008-06-26
AU2001280027A1 (en) 2001-10-30
GB0006312D0 (en) 2000-05-03

Similar Documents

Publication Publication Date Title
EP0909442B1 (en) Voice activity detector
KR100883712B1 (en) Method of estimating sound arrival direction, and sound arrival direction estimating apparatus
EP3493205B1 (en) Method and apparatus for adaptively detecting a voice activity in an input audio signal
EP0551803B1 (en) A method of synchronizing and channel estimation in a TDMA radio communication system
JP5006279B2 (en) Voice activity detection apparatus, mobile station, and voice activity detection method
KR100770839B1 (en) Method and apparatus for estimating harmonic information, spectrum information and degree of voicing information of audio signal
JP3878482B2 (en) Voice detection apparatus and voice detection method
US8818811B2 (en) Method and apparatus for performing voice activity detection
EP0556992A1 (en) Noise attenuation system
US20070239437A1 (en) Apparatus and method for extracting pitch information from speech signal
EP1548703B1 (en) Apparatus and method for voice activity detection
US6876965B2 (en) Reduced complexity voice activity detector
KR20090080777A (en) Method and Apparatus for detecting signal
JP3418005B2 (en) Voice pitch detection device
EP1269462B1 (en) Voice activity detection apparatus and method
JPH08221097A (en) Detection method of audio component
Beritelli et al. A low‐complexity speech‐pause detection algorithm for communication in noisy environments
CA2279264C (en) Speech immunity enhancement in linear prediction based dtmf detector
US5734679A (en) Voice signal transmission system using spectral parameter and voice parameter encoding apparatus and decoding apparatus used for the voice signal transmission system
JPS61184912A (en) Constant variable type audible sense weighting filter
US6993478B2 (en) Vector estimation system, method and associated encoder
GB2437868A (en) Estimating noise power spectrum, sorting time frames, calculating the quantile and interpolating values over all remaining frequencies
JPH10304023A (en) Telephone set
WO1988007740A1 (en) Distance measurement control of a multiple detector system
JPH0311479B2 (en)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20021125

17Q First examination report despatched

Effective date: 20061201

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 60133998

Country of ref document: DE

Date of ref document: 20080626

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080825

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081014

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080814

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20090217

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090331

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20091130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090314

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090331

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080815

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090314

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080514

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20200327

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20200528

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60133998

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20210313

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210313