EP0993670B1 - Method and apparatus for speech enhancement in a speech communication system - Google Patents
Method and apparatus for speech enhancement in a speech communication system
- Publication number
- EP0993670B1 (application EP98932337A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- speech
- frequency
- amplitude
- formant
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Links
- 238000000034 method Methods 0.000 title claims description 36
- 238000004891 communication Methods 0.000 title claims description 35
- 230000003595 spectral effect Effects 0.000 claims abstract description 57
- 238000004458 analytical method Methods 0.000 claims abstract description 29
- 238000001228 spectrum Methods 0.000 claims description 22
- 230000001965 increasing effect Effects 0.000 claims description 12
- 230000005534 acoustic noise Effects 0.000 claims description 11
- 238000012545 processing Methods 0.000 abstract description 18
- 230000004075 alteration Effects 0.000 description 15
- 230000008569 process Effects 0.000 description 10
- 230000005540 biological transmission Effects 0.000 description 6
- 230000008859 change Effects 0.000 description 6
- 230000003044 adaptive effect Effects 0.000 description 4
- 230000007613 environmental effect Effects 0.000 description 4
- 230000005284 excitation Effects 0.000 description 4
- 230000001419 dependent effect Effects 0.000 description 3
- 238000000926 separation method Methods 0.000 description 3
- 230000001755 vocal effect Effects 0.000 description 3
- 206010011878 Deafness Diseases 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 210000004704 glottis Anatomy 0.000 description 2
- 230000010370 hearing loss Effects 0.000 description 2
- 231100000888 hearing loss Toxicity 0.000 description 2
- 208000016354 hearing loss disease Diseases 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 230000002459 sustained effect Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 230000000873 masking effect Effects 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 230000008707 rearrangement Effects 0.000 description 1
- 230000003252 repetitive effect Effects 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 238000001308 synthesis method Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
- G10L21/0364—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
- G10L2021/0135—Voice conversion or morphing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/15—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being formant information
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/24—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
Definitions
- the present invention relates to a method and apparatus for speech enhancement in a speech communication system, and in particular to such a method and apparatus for enhancing speech to make it more intelligible to a listener in a noisy environment.
- Speech communication systems such as mobile phones and radios are often used in noisy environments, such as inside vehicles. Furthermore, this environmental noise can vary during a conversation. This varying environmental noise can make it very difficult for a listener to understand the speech being output by their phone or radio.
- EP-A-0732686 merely discloses the use of an algorithm to map a speech signal into a given frequency range for transmission
- the paper "Frequency Domain Adaptive Postfiltering for Enhancement of noisy Speech” relates only to the suppression of noise in a speech signal and not to the alteration of the characteristics of speech
- the paper "Formant-Based Processing for Hearing Aids” relates to the modifications of speech signals but not in response to background noise.
- a method for increasing the intelligibility of speech output by a speech communication system to a listener using the system characterised by:
- a speech communication system characterised by:
- the present invention thus monitors the background noise in which a speech communication system is being used (i.e. the external environmental acoustic noise in the vicinity of the listener) and can adjust the characteristics of the speech to be output by the speech communication system to the listener to make it more intelligible in that current background acoustic noise. It therefore provides enhanced intelligibility of speech output as sound by, for example, the loudspeaker or earpiece of a mobile phone or radio when used in noisy environments.
- because the present invention analyses the current background noise, it can take account of changes in the background noise and enhance the speech accordingly.
- the background acoustic noise is therefore preferably continuously analysed and the speech continuously altered on the basis of that analysis. This provides for dynamic enhancement of the speech and is particularly advantageous in environments where background noise can change continuously and significantly, such as in a vehicle.
- the background acoustic environmental noise can be analysed by various techniques, as is known in the art. It can be picked up or sampled using, for example, the usual microphone for picking up the user's speech of the speech communication system (e.g. mobile phone or radio), or a separate microphone.
- An example background noise analysis system would be a process whereby the user's speech (for example in the microphone signal) is detected (using one of many common techniques, such as adding all input noise values in a given time interval and comparing these against a threshold) and the acoustic background noise is analysed during the gaps between the speech periods.
- LPC linear prediction coefficient
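As an illustration of the gap-based noise analysis described above, the following Python sketch detects speech frames with a simple energy threshold and refreshes a smoothed noise spectrum only during the gaps between speech periods. This is a minimal sketch, not the patent's own implementation; the frame length, threshold and smoothing factor are illustrative assumptions.

```python
import numpy as np

def update_noise_estimate(mic_frame, noise_psd, energy_threshold=1e-4, alpha=0.9):
    """Return (is_speech, updated noise power spectrum) for one microphone frame.

    Hypothetical helper: speech is detected by comparing the frame energy
    against a threshold, and the background-noise spectrum estimate is only
    updated in the gaps between speech periods.
    """
    frame = np.asarray(mic_frame, dtype=float)
    frame_energy = np.mean(frame ** 2)
    is_speech = frame_energy > energy_threshold
    if not is_speech:
        # Speech gap: smooth the stored noise spectrum towards the current frame.
        frame_psd = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        noise_psd = alpha * noise_psd + (1.0 - alpha) * frame_psd
    return is_speech, noise_psd
```

The noise estimate would be initialised (for example to zeros of length len(frame)//2 + 1) and carried from frame to frame.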
- the intelligibility of speech to be output by the speech communication system in the current background noise is determined by classifying the contents of the speech into at least two categories, and comparing the amplitude of the speech in one category at one frequency with the noise amplitude at that frequency.
- descriptions of the speech and the background noise in the form of spectral analyses and amplitude scaling factor (gain) are compared to determine if the speech would be audible to a listener in that noise.
- the speech contents could initially be classified into non-speech, voiced speech or unvoiced speech. If non-speech is present (perhaps a pause between words), then the audibility of this is unimportant and so it can be ignored.
- if voiced speech is present, then its intelligibility needs to be determined. This is preferably done by comparing the amplitude of one or more, or most preferably each, spectral peak and/or of one or more, or most preferably each, formant (as is known in the art, voiced speech contains a series of resonant peaks at varying frequencies called formants, which convey a great deal of information and to which spectral peaks in the spectral plot of the speech often correspond) in the voiced speech with the noise amplitude at the frequency of the peak or formant, respectively. If more than one peak or formant is to be considered, then the amplitude of each peak or formant should be compared with the noise amplitude at the frequency of the respective peak or formant.
- the speech is determined to be unintelligible if the noise amplitude at any formant frequency or spectral peak or at a particular number of formant or spectral peak frequencies exceeds the corresponding formant or spectral peak amplitude(s).
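A minimal sketch of such a comparison is given below, assuming the formant (or spectral peak) frequencies and amplitudes and a noise power spectrum are already available; the frequency-to-bin mapping, the audibility margin and the allowed number of masked formants are illustrative assumptions, not values from the patent.

```python
import numpy as np

def speech_is_intelligible(formant_freqs_hz, formant_amps, noise_psd,
                           sample_rate_hz, margin_db=0.0, max_masked=0):
    """Hypothetical check: speech is deemed unintelligible when more than
    `max_masked` formants fall below the noise amplitude at their frequency."""
    n_bins = len(noise_psd)
    masked = 0
    for freq, amp in zip(formant_freqs_hz, formant_amps):
        # Index of the noise-spectrum bin containing the formant centre frequency.
        idx = min(int(round(freq / (sample_rate_hz / 2.0) * (n_bins - 1))), n_bins - 1)
        noise_amp = np.sqrt(noise_psd[idx])
        if 20.0 * np.log10(amp + 1e-12) < 20.0 * np.log10(noise_amp + 1e-12) + margin_db:
            masked += 1
    return masked <= max_masked
```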
- the speech could be classified into vowel and consonant sounds (or other speech sounds).
- a classification is used which is helpful or appropriate to determining intelligibility.
- the classification includes a category which includes formants of the speech (preferably only formants) and that category is compared with the noise.
- the classification is into formant containing and non-formant containing categories.
- the speech can be altered to make it more intelligible in accordance with that determination.
- if the speech is determined to be unintelligible, the speech characteristics are altered, but not otherwise.
- Alteration of the speech characteristics can be done in various ways, as is known in the art. It is preferably done by increasing the volume (amplitude) and/or altering the frequency of speech components and in particular the formants and/or spectral peaks in the speech.
- the speech characteristics will be altered by adjusting the positions of the formants and/or spectral peaks in the speech spectral plot.
- Such alterations will have a more perceptible effect on the speech to a human listener and thus are particularly effective for increasing the intelligibility of the speech.
- one or more peaks or formants could be shifted upwards or downwards in frequency, or the amplitude of one or more peaks or formants could be increased (corresponding to a decrease in bandwidth), or the bandwidth of one or more of the peaks or formants could be increased (corresponding to a decrease in amplitude).
- the volume of the formants can be increased such that they are audible over the background noise.
- this can be an undesirable way of altering the speech characteristics as speech volume levels sufficient to cause hearing loss (if sustained) may be required to make the speech intelligible in certain situations, notably those within noisy motor vehicles.
- the frequency of speech components such as formants or peaks in the speech spectrum is adjusted. This is preferably done to move them to a frequency where the noise level is lower, such that the components, e.g. peaks or formants, are audible (i.e. have an amplitude greater than the noise) at that frequency.
- the alteration of speech characteristics is preferably carried out in accordance with the results of the analysis of the background noise, and may be dependent upon the present or past values of the noise. Using present values of noise, a direct comparison may be made and an alteration made to the speech characteristics; using past values, it is possible to make predictive changes. For example, if the noise analysis indicates the noise amplitude reduces at a particular frequency to a level at which a presently inaudible formant would be audible, the speech characteristics could be altered to change the frequency of that formant to that particular frequency.
- the speech signal could be passed through an adaptive filter, such as a perceptual error weighting filter (as described in Chen, J.-H., Cox, R.V., Lin, Y.-C., Jayant, N., and Melchner, M.J., "A low-delay CELP coder for the CCITT 16 kb/s speech coding standard", IEEE J. Sel. Areas Commun., 1992, 10(5), pp. 830-849) to narrow or widen the formant bandwidth.
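The patent itself prefers LSP-based alteration, but for illustration the sketch below applies a weighting filter of the form A(z/γ1)/A(z/γ2), in the spirit of the perceptual weighting filter cited above, to reshape formant bandwidths; the γ values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def bandwidth_weighting_filter(speech, lpc_coeffs, gamma_num=0.9, gamma_den=0.6):
    """Filter speech through A(z/gamma_num) / A(z/gamma_den).

    lpc_coeffs is the prediction-error polynomial [1, a1, ..., ap]. Scaling the
    coefficients by powers of gamma moves the roots radially, which widens or
    narrows the formant bandwidths of the filtered signal.
    """
    a = np.asarray(lpc_coeffs, dtype=float)
    k = np.arange(len(a))
    num = a * (gamma_num ** k)   # A(z/gamma_num)
    den = a * (gamma_den ** k)   # A(z/gamma_den), stable for gamma_den < 1
    return lfilter(num, den, speech)
```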
- the amplitude peaks could be clipped so that the energy in the unvoiced parts of the speech becomes a more significant part of the total speech energy. This can increase intelligibility but at the expense of sound quality.
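For completeness, a one-function sketch of the clipping option mentioned above; the clip level, expressed as a fraction of the frame maximum, is an illustrative assumption.

```python
import numpy as np

def clip_peaks(speech, fraction=0.5):
    """Clip amplitude peaks so the unvoiced parts carry a larger share of the
    total speech energy (at some cost in sound quality)."""
    limit = fraction * np.max(np.abs(speech))
    return np.clip(speech, -limit, limit)
```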
- the speech characteristics are altered by altering line spectral pair (LSP) data representing the speech.
- LSP line spectral pair
- line spectral pairs are representations of the linear-prediction parameters derived for periods of sound.
- the sound is speech
- the resonant frequencies in the speech or formants can be noted in the linear-prediction spectrum.
- LSP values usually uniquely relate to positions of such resonances or formants in the linear-prediction spectrum.
- LSP data can be used to represent speech, and the Applicants have recognised that by altering the LSP data, characteristics such as the frequency and amplitude of formants in the speech can be adjusted. This allows the speech characteristics to be adjusted relatively easily and in a way that can readily change the speech as perceived by a listener and at a much lower computational overhead than when using, for example, adaptive filtering. Also, such adjustment does not eliminate parts of the speech spectrum, but rather modifies them.
- this embodiment of the present invention is particularly advantageous in such systems which use LSPs for speech transmission, since the LSP information that is transmitted may be altered in the speech communication system when it is received to enhance the intelligibility of the speech. This altered LSP data would then be converted back to linear-prediction parameters and hence reconstructed into speech and output as sound, but with altered characteristics.
- the frequency or the power and bandwidth of specific frequency-domain features, such as formants, found in the speech are altered in this way.
- the LSP alterations can be designed to affect the reconstructed speech in specific ways so as to enhance the intelligibility of the speech over the background noise.
- the particular line spectral pair (LSP) associated with a formant can be identified and its separation (or spacing) then widened or narrowed to increase or decrease the formant bandwidth.
- line spectral pairs can be moved higher or lower in frequency to increase or decrease the frequency of particular formants.
- the LSP information is preferably altered by adding or subtracting values to one or more LSPs (or LSP lines), or by moving one or more LSPs (or LSP lines) in the speech spectrum.
- the values may be determined in accordance with the analysis of the background noise, and may be dependent upon the present or past values of each LSP. Using present values of LSP data, a direct comparison can be made with the ambient noise and an adjustment made to the LSP data; using past values, it is possible to make predictive changes.
- the invention includes making a numerical increment or decrement in the value of any or all of the set of LSPs (or LSP lines) defining the speech.
- individual or groups of LSPs can be moved to: shift one or more spectral peaks or formants in frequency (either upwards or downwards); or change the amplitude (either to increase the amplitude (decrease the bandwidth) or decrease the amplitude (increase the bandwidth)) of one or more spectral peaks or formants.
- the separation between the values of two or more of a set of LSP lines can be narrowed or widened to narrow or widen frequency features (such as spectral peaks or formants) found in the speech frequency spectrum.
- the values of two or more of a set of LSP lines (and most preferably of a pair of LSP lines) can be incremented or decremented, most preferably by identical amounts (either in absolute terms or as a percentage of their original values), to adjust the centre frequency of features (such as spectral peaks or formants) found in the frequency spectrum of the speech.
- line spectral pairs are translated in frequency so as to change the centre frequency of particular peaks or formants in the speech data. As discussed above, this is a particularly advantageous way of changing speech characteristics as heard by a listener, for example to increase intelligibility over background noise.
- any or all of the above adjustments can be used individually or in combination to alter the speech characteristics of the speech to be output by the speech communication system in accordance with the analysis of the background noise of the listener to make the speech output by the speech communication system more intelligible to the listener.
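The sketch below illustrates the two basic LSP manipulations described above, operating on a sorted array of LSP frequencies assumed to be normalised to the 0–π domain. It is a minimal illustration only: which pair brackets a given formant, and the size of the adjustment, are assumptions left to the decision logic.

```python
import numpy as np

def widen_lsp_pair(lsps, i, delta):
    """Widen (delta > 0) or narrow (delta < 0) the spacing of the pair
    (lsps[i], lsps[i+1]); widening the pair increases the bandwidth and thus
    lowers the amplitude of the corresponding formant, and vice versa."""
    out = np.array(lsps, dtype=float)
    out[i] -= delta / 2.0
    out[i + 1] += delta / 2.0
    return np.clip(np.sort(out), 1e-3, np.pi - 1e-3)

def shift_lsp_pair(lsps, i, delta):
    """Move the pair (lsps[i], lsps[i+1]) up (delta > 0) or down (delta < 0)
    in frequency, shifting the centre frequency of the corresponding formant."""
    out = np.array(lsps, dtype=float)
    out[i] += delta
    out[i + 1] += delta
    return np.clip(np.sort(out), 1e-3, np.pi - 1e-3)
```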
- the present invention has been described in relation to speech communication systems, such as mobile phones and radios. It is particularly suited to use in speech decoders, such as would be found for example in mobile phones or mobile radios. However, it would also be applicable (and in particular the aspects relating to LSP alteration would be applicable) to use in speech coders where it was desired to alter the characteristics of the user's input speech to be transmitted by the speech coder (for example to increase intelligibility over the speaker's background noise). It would also be applicable in radio receivers, televisions, or other devices which broadcast speech to listeners.
- the present invention is particularly applicable to use in a speech codec system such as would be used in a mobile phone or radio system.
- a speech codec system such as would be used in a mobile phone or radio system.
- An example of such a codec structure is shown in Figure 1, in the form of a generic CELP coder.
- CELP codebook-excited linear prediction
- FIG. 1 shows input speech 21 being analysed by linear prediction analyser unit or device 2 resulting in linear prediction (LPC) parameters 3.
- LPC linear prediction
- the remainder of the input signal which linear prediction cannot describe is passed to a pitch filter/VQ encoding block 4, which produces parameters representative of, for example, the gain and pitch of the speech.
- the LPC parameters 3 and any other parameters (such as gain and pitch) 5 describing the input speech are quantized by a quantizer 6 and transmitted (as transmission parameters 7) to the CELP decoder 14 which dequantizes them using a dequantizer 8. These dequantized values are then used to recreate speech 15 to be output as sound to a listener.
- the dequantizer 8 reproduces the LPC parameters 3 and the other parameters 5, which are then used by an LPC synthesiser 30 and a pitch filter/VQ decoding block 31, respectively, to reproduce the speech for output as sound 15.
- LPC parameters may alternatively be converted to a different form prior to quantization in the coder (and also converted back to LPC coefficients after dequantization).
- Such forms may include log area ratios, PARCOR (reflection coefficients) and line spectral pairs.
- the LPC parameters are transmitted as LSPs.
- LSPs' refers to the parameters generated by a conversion of linear prediction coefficients using the line spectrum pair approach as described in the paper by Sugamura and Itakura (Sugamura N, Itakura F, "Speech analysis and synthesis methods developed at ECL in NTT - from LPC to LSP - ", Speech Communication, vol. 5, pp. 199-213, 1986).
- the linear prediction coefficients themselves are generated by any of the well-established analysis methods operating on a set of data (speech) such as those described in Makhoul J, "Linear prediction: a tutorial review", Proc. IEEE, vol 63, no. 4, pp. 561-580, 1975.
- LSPs are generated via a mathematical transformation from LPCs and thus have identical information content, but different form. Many other mathematical transformations from LPCs have been determined, but none of the resulting parameters can be altered in the same way as LSPs and as described in the present invention.
- the line spectral pair parameters may be referred to as line spectral frequencies; however, this term is not applied exclusively to LSPs.
- the roots obtained by solving the polynomials P and Q give the line spectral frequency parameters, referred to as line spectral pairs. Many methods exist to determine these roots, as explained in, for example, the paper by Sugamura and Itakura referred to above. The choice of method is irrelevant for the purposes of the present invention.
- the set of LSPs are often scaled. With reference to a 'basic' LSP value, the cosine or sine of these are also referred to as LSPs.
- the basic LSP may reside in one of various domains, i.e. its maximum and minimum values may lie between 0 and π, between 0 and 4000 Hz (half a typical 8 kHz sampling frequency), or within other arbitrary ranges such as 0 to 1.
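As a numerical illustration of the root-finding step described above, the sketch below forms the sum and difference polynomials from an LPC polynomial and returns the LSP frequencies in the 0–π domain. It assumes a stable (minimum-phase) predictor of even order; a practical codec would typically use a Chebyshev-series root search, as covered by the Sugamura and Itakura reference, rather than a general root finder.

```python
import numpy as np

def lpc_to_lsf(a):
    """Return line spectral frequencies (radians, ascending) for the LPC
    polynomial a = [1, a1, ..., ap], with p even."""
    a = np.asarray(a, dtype=float)
    a_ext = np.concatenate([a, [0.0]])
    P = a_ext + a_ext[::-1]          # sum polynomial  A(z) + z^-(p+1) A(1/z)
    Q = a_ext - a_ext[::-1]          # difference polynomial
    lsf = []
    for poly in (P, Q):
        roots = np.roots(poly[::-1])             # roots lie on the unit circle
        angles = np.angle(roots)
        # Keep one of each conjugate pair; drop the trivial roots at 0 and pi.
        angles = angles[(angles > 1e-6) & (angles < np.pi - 1e-6)]
        lsf.extend(angles)
    return np.sort(np.array(lsf))
```

Multiplying the returned values by sample_rate / (2 * π) expresses the same parameters in Hz, one of the alternative domains mentioned above.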
- LSPs line spectral pairs
- Linear prediction is the usage of a fixed-length formula to model an unknown system.
- the formula structure is fixed but the values to be inserted into the formula must be found.
- Linear predictive analysis is the process of finding the best set of values for that formula. These values are the linear prediction coefficients, and the best set of these values is the set that causes the equation output to resemble the output of the system to be modelled most closely, when the inputs to the two systems are identical.
- the reflection coefficient equation is very easy to relate to a real system.
- the LPC analysis is attempting to find the best parameters that model a short period of speech.
- the model is made up of a number of tubes of differing widths but equal lengths, connected in series.
- the reflection coefficients fit well into this physical model as the reflection coefficients relate directly to the difference between each consecutive tube.
- the LSP parameters each relate to the resonant frequency of one of the connected tubes. Half of the parameters are generated assuming that the source end of tubes is open, and half assuming that it is closed. In fact, the glottis opens and closes rapidly and so is neither open nor closed. Thus each true spectral resonance occurs between two nearby line spectral frequencies and these two values are considered to be a pair (thus line spectral pair).
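To make the relationship between the prediction coefficients and the reflection coefficients of the tube model concrete, here is a sketch of the classical Levinson-Durbin recursion, one of the well-established analysis methods covered by the Makhoul tutorial cited above. The autocorrelation input and model order are assumed to be supplied by the caller, and the sign convention for the reflection coefficients varies between texts.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the normal equations for the LPC polynomial a = [1, a1, ..., ap]
    from the autocorrelation sequence r[0..order]; the reflection coefficients
    k of the lossless-tube model fall out of the recursion as a by-product."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / err
        # Update the predictor coefficients using the previous-order solution.
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * a[i - 1::-1]
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err
```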
- an embodiment of the present invention in a speech communication system comprising a speech codec, and using LSP alteration to enhance the intelligibility of speech in a noisy environment, is shown in Figure 2; the signal processing is illustrated in Figures 3 and 4.
- the system as shown in Figure 2 has many features in common with the system of Figure 1 and thus the same reference numerals have been used for the like features of the systems.
- the LSP alteration mechanism may act within a speech codec (a codec comprises both a coding 22 and a decoding 14 mechanism) in the positions shown in Figure 2 (i.e. in the speech decoder 14).
- the speech coder 22 transforms the input speech 21 into a set of condensed parameters 20 suitable for transmission by radio or other means to a receiving unit 14.
- the LPC parameters produced by the linear prediction analyser 2 are converted to line spectral pair data by an LPC to LSP converter 32 before being quantized by the quantizer 6.
- the receiving unit then decodes the transmitted data to reconstruct speech 15.
- the coding unit 22 may reside in an office telephone and the decoding unit 14 within a mobile telephone handset.
- the LSP processing depends upon the degree and type of acoustic noise background 16 that is present in the environment of the listener.
- the analysis unit 12 shown in Figure 2 determines the type and level of background noise by use of a microphone 13 which picks up, inter alia , the actual external background acoustic noise of the listener's environment.
- An example of a noise analysis system would be a process whereby the user's speech is detected (using one of many common techniques, such as adding all input noise values in a given time interval and comparing these against a threshold) and the external acoustic background noise is considered during the gaps between speech periods.
- LPC linear prediction coefficient
- the decision device or unit 11 determines whether the speech data currently being received by the decoder and replayed as sound via the loudspeaker or ear piece of the mobile telephone unit would be intelligible to an average listener in the current background acoustic noise 16 of the mobile telephone unit (i.e. listener).
- if the decision unit determines that the speech is unintelligible, then processing is necessary and the processing unit 10 would alter the dequantized LSP parameters to alter the speech characteristics before passing them to the LSP to LPC converter for subsequent playback to the listener.
- the decision unit may also predict that the speech will shortly become unintelligible.
- Inputs to the decision process are descriptions of speech and background noise, in the form of spectral analyses and amplitude scaling factor (gain). It is necessary to compare the speech and noise data to determine if the speech would be audible to a listener in that noise.
- the comparison is done by initially classifying the contents of the speech signal into non-speech, voiced speech or unvoiced speech. If non-speech is present (perhaps a pause between words), then the audibility of this is unimportant, no enhancement is required, and the LSP-process module would be commanded to perform no processing.
- if voiced speech is present (voiced speech contains a series of resonance peaks at various frequencies called formants), then the amplitude of each formant would be compared to the noise amplitude at that frequency to determine its audibility. If the noise amplitude at any formant frequency exceeds the formant amplitude, then formant adjustment is required.
- the LSP process unit 10 performs mathematical operations on individual LSPs to enhance the speech under the control of the decision unit.
- an automatic examination of the noise amplitudes around the formant frequency might reveal if, perhaps, shifting the formant frequency upwards or downwards by 10% may improve matters. If this is likely (perhaps because the noise amplitude reduces at a frequency 10% lower than the formant frequency), then the LSP processing block is directed to shift the appropriate LSPs by the corresponding amount.
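The sketch below illustrates the kind of automatic examination described above: it compares the noise amplitude at the formant frequency with the noise amplitude 10% below and 10% above it and suggests a shift direction. The 10% step follows the example in the text; the frequency-to-bin mapping is an illustrative assumption.

```python
import numpy as np

def choose_formant_shift(formant_hz, noise_psd, sample_rate_hz, step=0.10):
    """Return a suggested relative frequency shift (+step, -step or 0.0) for a
    masked formant, based on where the background noise is quieter."""
    def noise_amp(freq_hz):
        idx = int(round(freq_hz / (sample_rate_hz / 2.0) * (len(noise_psd) - 1)))
        return np.sqrt(noise_psd[min(max(idx, 0), len(noise_psd) - 1)])

    here = noise_amp(formant_hz)
    lower = noise_amp(formant_hz * (1.0 - step))
    upper = noise_amp(formant_hz * (1.0 + step))
    if min(lower, upper) >= here:
        return 0.0                          # no quieter neighbourhood found
    return -step if lower < upper else step
```

The chosen shift could then be applied to the relevant LSPs, for example with a routine such as the hypothetical shift_lsp_pair sketched earlier.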
- processing may all be described as adding/subtracting values to one or more LSP lines (with adding LSP lines to themselves being equivalent to multiplication).
- the values may be determined by the decision module or may be dependent upon the present or past value of each LSP line.
- an example of such LSP processing is illustrated in Figure 3, in which the frequency spectrum of a period of sound has been plotted, and the 10 LSP lines obtained from analysing this sound have been overlaid. LSP values may be readily converted to and from the LPC parameters from which the spectrum is plotted.
- Figure 3 thus shows the frequency spectrum of the sound obtained from the analysis of speech 21 in the CELP coder 22 of Figure 2.
- the output speech 15 would be reconstructed using the data of Figure 3.
- the LSP processing block 10 would be capable of altering the LSP values in order to change the output speech 15.
- the sound under analysis is speech.
- the spectral peaks evident in the spectral plots will then often, as discussed above, correspond to formants, important constituents of speech that convey a great deal of information.
- the LSP-based adjustments discussed above have thus changed the characteristics of the speech to be output to and as it will be perceived by the listener. For example, in the case of vowels, moderately widening the lines corresponding to spectral peaks (i.e. increasing the bandwidths of the formants) has been found to improve intelligibility.
- the example shown in Figure 2 additionally analyses the noise present in the environment of the listener to determine if the speech to be replayed to that listener is intelligible. If not, then the speech characteristics are altered in the present invention to improve the intelligibility of the speech by moving individual LSPs or groups of LSPs to provide the set of operations described above.
- a well-known psychoacoustic theory states that a sound of given frequency will be masked by a second, coincident sound of similar frequency. If the second sound is loud enough, then the former sound will be inaudible. Thus, in the case of speech, the Applicants have recognised that loud noises with frequencies similar to those of the formants will mask the speech. In order to hear the speech it is necessary to either increase the volume or alter the frequency of the speech components.
- volume alteration is relatively straightforward, but it should be noted that speech volume levels sufficient to cause hearing loss (if sustained) may be required to make speech intelligible in certain situations, notably those within noisy motor vehicles. It is therefore preferred to alter the frequency of speech components.
- the present invention offers a method of reducing the masking of speech by acoustic background noise (and thus improving intelligibility) through an efficient process that may be combined with many of the current standard mobile telephone and radio systems, and standard speech codecs in such systems.
- Speech enhancement results when an analysis of the listener's background noise environment is combined with corrective LSP alteration, which adjusts received transmitted speech data to be replayed to the listener in order to improve the chances of the listener hearing the processed sounds.
- the technique adjusts the values of LSPs found within the speech data codec based upon an analysis of the background acoustic noise environment of the listener.
- the frequency or the power and bandwidth of specific frequency-domain features found in the received speech are altered in this way.
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Telephonic Communication Services (AREA)
- Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
- Interconnected Communication Systems, Intercoms, And Interphones (AREA)
- Document Processing Apparatus (AREA)
- Machine Translation (AREA)
- Telephone Function (AREA)
Claims (19)
- A method for increasing the intelligibility of speech output by a speech communication system to a listener using the system, comprising the steps of: analysing the current background acoustic noise environment of the listener; determining, using the results of the background noise analysis, whether the speech to be output to the listener would be intelligible to the listener in their current background noise environment, by classifying the contents of the speech into at least two categories and comparing the amplitude of the speech in one category at a frequency with the noise amplitude at that frequency; and altering the characteristics of the speech to be output by the speech communication system on the basis of that determination, such that the altered speech has increased intelligibility for the listener in their current background noise environment.
- A method according to claim 1, in which the intelligibility of the speech to be output is determined by classifying the contents of the speech into a category which contains formants of the speech, and by comparing the amplitude of the formant-containing speech category at a frequency with the noise amplitude at that frequency.
- A method according to claim 1 or 2, in which the intelligibility of the speech to be output is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and by comparing the amplitude of the voiced speech at a frequency with the noise amplitude at that frequency.
- A method according to any one of claims 1 to 3, in which the intelligibility of the speech to be output is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and by comparing the amplitude of a spectral peak of the voiced speech, the spectral peak having a centre frequency, with the noise amplitude at the centre frequency of the spectral peak.
- A method according to any one of claims 1 to 4, in which the intelligibility of the speech to be output is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and by comparing the amplitude of a formant of the voiced speech, the formant having a centre frequency, with the noise amplitude at the centre frequency of the formant.
- A method according to any one of claims 1 to 5, in which the speech is determined to be unintelligible if the background noise amplitude at substantially the same frequency as a spectral peak in the speech exceeds the amplitude of the spectral peak.
- A method according to any one of claims 1 to 6, in which the speech is determined to be unintelligible if the background noise amplitude at substantially the same frequency as a formant in the speech exceeds the amplitude of the formant.
- A method according to any one of claims 1 to 7, in which the speech characteristics are altered by altering line spectral pair (LSP) data representing the speech.
- A method according to claim 8, in which the speech characteristics are altered by moving a line spectral pair in the speech spectrum.
- A method according to any one of claims 1 to 9, in which the speech characteristics are altered by altering the frequency of a component in the speech spectrum.
- A method according to claim 10, in which the frequency of a formant in the speech spectrum is altered.
- A method according to claim 11, in which the frequency of a formant in the speech is altered so as to move the formant to a frequency at which the background noise amplitude is lower.
- A method according to any one of claims 10 to 12, in which the speech spectrum includes a spectral peak having a centre frequency, and the centre frequency of the spectral peak in the speech spectrum is altered.
- A speech communication system comprising: means (12) for analysing the current background acoustic noise environment of the speech communication system; means (11) for determining, using the results of the background noise analysis, whether the speech to be output by the speech communication system to a listener listening to the speech communication system would be intelligible to the listener in the current background noise environment; and means (10) for altering, in accordance with the output of the determining means, the characteristics of the speech to be output by the speech communication system to the listener so as to increase the intelligibility of the speech for the listener in the current background noise; in which the means (11) for determining whether the speech to be output would be intelligible comprises means for classifying the contents of the speech into different categories, and means for comparing the amplitude of a speech category at a frequency with the noise amplitude at that frequency.
- A system according to claim 14, in which the means for classifying the contents of the speech into different categories classifies the contents of the speech into a category which contains formants of the speech, and the comparing means compares the amplitude of the formant-containing speech category at a frequency with the noise amplitude at that frequency.
- A system according to claim 14 or 15, in which the means (11) for determining whether the speech to be output would be intelligible comprises means for comparing the noise amplitude at substantially the same frequency as a formant in the speech with the amplitude of the formant.
- A system according to any one of claims 14 to 16, in which the speech is represented by data which includes line spectral pair (LSP) data, and the means (10) for altering the characteristics of the speech to be output by the speech communication system comprises means for altering the line spectral pair (LSP) data representing the speech.
- A system according to any one of claims 14 to 17, in which the means (10) for altering the characteristics of the speech to be output by the speech communication system comprises means for altering the frequency of a component in the speech spectrum.
- A system according to claim 18, in which the means (10) for altering the characteristics of the speech to be output by the speech communication system comprises means for altering the frequency of a formant in the speech so as to move the formant to a frequency at which the noise amplitude is lower.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB9714001.6A GB9714001D0 (en) | 1997-07-02 | 1997-07-02 | Method and apparatus for speech enhancement in a speech communication system |
GB9714001 | 1997-07-02 | ||
PCT/GB1998/001936 WO1999001863A1 (en) | 1997-07-02 | 1998-07-01 | Method and apparatus for speech enhancement in a speech communication system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0993670A1 (de) | 2000-04-19 |
EP0993670B1 (de) | 2002-03-20 |
Family
ID=10815285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- EP98932337A Expired - Lifetime EP0993670B1 (de) | Method and apparatus for speech enhancement in a speech communication system |
Country Status (12)
Country | Link |
---|---|
EP (1) | EP0993670B1 (de) |
JP (1) | JP2002507291A (de) |
KR (1) | KR20010014352A (de) |
CN (1) | CN1265217A (de) |
AT (1) | ATE214832T1 (de) |
AU (1) | AU8227798A (de) |
CA (1) | CA2235455A1 (de) |
DE (1) | DE69804310D1 (de) |
GB (2) | GB9714001D0 (de) |
PL (1) | PL337717A1 (de) |
WO (1) | WO1999001863A1 (de) |
ZA (1) | ZA985607B (de) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1808853A1 (de) | 2006-01-13 | 2007-07-18 | Robert Bosch Gmbh | Beschallungsanlage, Verfahren zur Verbesserung der Sprachqualität und/oder Verständlichkeit von Sprachdurchsagen sowie Computerprogramm |
US8630427B2 (en) | 2005-12-29 | 2014-01-14 | Motorola Solutions, Inc. | Telecommunications terminal and method of operation of the terminal |
US11265660B2 (en) | 2007-01-03 | 2022-03-01 | Lizn Aps | Speech intelligibility enhancing system |
US11817114B2 (en) | 2019-12-09 | 2023-11-14 | Dolby Laboratories Licensing Corporation | Content and environmentally aware environmental noise compensation |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE9903553D0 (sv) * | 1999-01-27 | 1999-10-01 | Lars Liljeryd | Enhancing percepptual performance of SBR and related coding methods by adaptive noise addition (ANA) and noise substitution limiting (NSL) |
FR2794322B1 (fr) * | 1999-05-27 | 2001-06-22 | Sagem | Procede de suppression de bruit |
US7120579B1 (en) | 1999-07-28 | 2006-10-10 | Clear Audio Ltd. | Filter banked gain control of audio in a noisy environment |
US6876968B2 (en) * | 2001-03-08 | 2005-04-05 | Matsushita Electric Industrial Co., Ltd. | Run time synthesizer adaptation to improve intelligibility of synthesized speech |
DE10124189A1 (de) * | 2001-05-17 | 2002-11-21 | Siemens Ag | Verfahren zum Signalempfang |
JP2003255993A (ja) * | 2002-03-04 | 2003-09-10 | Ntt Docomo Inc | 音声認識システム、音声認識方法、音声認識プログラム、音声合成システム、音声合成方法、音声合成プログラム |
AU2003263380A1 (en) * | 2002-06-19 | 2004-01-06 | Koninklijke Philips Electronics N.V. | Audio signal processing apparatus and method |
WO2004068467A1 (en) * | 2003-01-31 | 2004-08-12 | Oticon A/S | Sound system improving speech intelligibility |
KR20050049103A (ko) * | 2003-11-21 | 2005-05-25 | 삼성전자주식회사 | 포만트 대역을 이용한 다이얼로그 인핸싱 방법 및 장치 |
WO2006026812A2 (en) * | 2004-09-07 | 2006-03-16 | Sensear Pty Ltd | Apparatus and method for sound enhancement |
US8280730B2 (en) | 2005-05-25 | 2012-10-02 | Motorola Mobility Llc | Method and apparatus of increasing speech intelligibility in noisy environments |
EP1814109A1 (de) * | 2006-01-27 | 2007-08-01 | Texas Instruments Incorporated | Sprachsignalverstärker zur Modellierung des Lombard-Effekts |
JP2007295347A (ja) * | 2006-04-26 | 2007-11-08 | Mitsubishi Electric Corp | 音声処理装置 |
KR101414233B1 (ko) | 2007-01-05 | 2014-07-02 | 삼성전자 주식회사 | 음성 신호의 명료도를 향상시키는 장치 및 방법 |
JP4926005B2 (ja) * | 2007-11-13 | 2012-05-09 | ソニー・エリクソン・モバイルコミュニケーションズ株式会社 | 音声信号処理装置及び音声信号処理方法、通信端末 |
WO2009086174A1 (en) | 2007-12-21 | 2009-07-09 | Srs Labs, Inc. | System for adjusting perceived loudness of audio signals |
JP5453740B2 (ja) * | 2008-07-02 | 2014-03-26 | 富士通株式会社 | 音声強調装置 |
US8538042B2 (en) | 2009-08-11 | 2013-09-17 | Dts Llc | System for increasing perceived loudness of speakers |
EP2372700A1 (de) * | 2010-03-11 | 2011-10-05 | Oticon A/S | Sprachverständlichkeitsprädikator und Anwendungen dafür |
US9117455B2 (en) | 2011-07-29 | 2015-08-25 | Dts Llc | Adaptive voice intelligibility processor |
CN103002105A (zh) * | 2011-09-16 | 2013-03-27 | 宏碁股份有限公司 | 可增加通讯内容清晰度的移动通讯方法 |
CN103297896B (zh) * | 2012-02-27 | 2016-07-06 | 联想(北京)有限公司 | 一种音频输出方法及电子设备 |
US9015044B2 (en) | 2012-03-05 | 2015-04-21 | Malaspina Labs (Barbados) Inc. | Formant based speech reconstruction from noisy signals |
US9312829B2 (en) | 2012-04-12 | 2016-04-12 | Dts Llc | System for adjusting loudness of audio signals in real time |
EP3010017A1 (de) * | 2014-10-14 | 2016-04-20 | Thomson Licensing | Verfahren und Vorrichtung zur Trennung von Sprachdaten von Hintergrunddaten in der Audiokommunikation |
JP6565206B2 (ja) * | 2015-02-20 | 2019-08-28 | ヤマハ株式会社 | 音声処理装置および音声処理方法 |
EP3107097B1 (de) | 2015-06-17 | 2017-11-15 | Nxp B.V. | Verbesserte sprachverständlichkeit |
US9847093B2 (en) | 2015-06-19 | 2017-12-19 | Samsung Electronics Co., Ltd. | Method and apparatus for processing speech signal |
JP6790732B2 (ja) * | 2016-11-02 | 2020-11-25 | ヤマハ株式会社 | 信号処理方法、および信号処理装置 |
CN108369805B (zh) * | 2017-12-27 | 2019-08-13 | 深圳前海达闼云端智能科技有限公司 | 一种语音交互方法、装置和智能终端 |
CN109346058B (zh) * | 2018-11-29 | 2024-06-28 | 西安交通大学 | 一种语音声学特征扩大系统 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5870292A (ja) * | 1981-10-22 | 1983-04-26 | 日産自動車株式会社 | 車両用音声認識装置 |
US4538295A (en) * | 1982-08-16 | 1985-08-27 | Nissan Motor Company, Limited | Speech recognition system for an automotive vehicle |
WO1987000366A1 (en) * | 1985-07-01 | 1987-01-15 | Motorola, Inc. | Noise supression system |
GB8801014D0 (en) * | 1988-01-18 | 1988-02-17 | British Telecomm | Noise reduction |
US5235669A (en) * | 1990-06-29 | 1993-08-10 | At&T Laboratories | Low-delay code-excited linear-predictive coding of wideband speech at 32 kbits/sec |
CA2056110C (en) * | 1991-03-27 | 1997-02-04 | Arnold I. Klayman | Public address intelligibility system |
FI102337B (fi) * | 1995-09-13 | 1998-11-13 | Nokia Mobile Phones Ltd | Menetelmä ja piirijärjestely audiosignaalin käsittelemiseksi |
GB2306086A (en) * | 1995-10-06 | 1997-04-23 | Richard Morris Trim | Improved adaptive audio systems |
-
1997
- 1997-07-02 GB GBGB9714001.6A patent/GB9714001D0/en not_active Ceased
-
1998
- 1998-04-21 CA CA002235455A patent/CA2235455A1/en not_active Abandoned
- 1998-06-26 ZA ZA9805607A patent/ZA985607B/xx unknown
- 1998-07-01 PL PL98337717A patent/PL337717A1/xx unknown
- 1998-07-01 JP JP50665899A patent/JP2002507291A/ja active Pending
- 1998-07-01 CN CN98807458A patent/CN1265217A/zh active Pending
- 1998-07-01 DE DE69804310T patent/DE69804310D1/de not_active Expired - Lifetime
- 1998-07-01 EP EP98932337A patent/EP0993670B1/de not_active Expired - Lifetime
- 1998-07-01 AT AT98932337T patent/ATE214832T1/de not_active IP Right Cessation
- 1998-07-01 KR KR1019997012508A patent/KR20010014352A/ko not_active Application Discontinuation
- 1998-07-01 GB GB9814279A patent/GB2327835B/en not_active Expired - Fee Related
- 1998-07-01 WO PCT/GB1998/001936 patent/WO1999001863A1/en not_active Application Discontinuation
- 1998-07-01 AU AU82277/98A patent/AU8227798A/en not_active Abandoned
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8630427B2 (en) | 2005-12-29 | 2014-01-14 | Motorola Solutions, Inc. | Telecommunications terminal and method of operation of the terminal |
EP1808853A1 (de) | 2006-01-13 | 2007-07-18 | Robert Bosch Gmbh | Beschallungsanlage, Verfahren zur Verbesserung der Sprachqualität und/oder Verständlichkeit von Sprachdurchsagen sowie Computerprogramm |
US11265660B2 (en) | 2007-01-03 | 2022-03-01 | Lizn Aps | Speech intelligibility enhancing system |
US11817114B2 (en) | 2019-12-09 | 2023-11-14 | Dolby Laboratories Licensing Corporation | Content and environmentally aware environmental noise compensation |
Also Published As
Publication number | Publication date |
---|---|
GB9814279D0 (en) | 1998-09-02 |
GB9714001D0 (en) | 1997-09-10 |
CN1265217A (zh) | 2000-08-30 |
ATE214832T1 (de) | 2002-04-15 |
KR20010014352A (ko) | 2001-02-26 |
AU8227798A (en) | 1999-01-25 |
WO1999001863A1 (en) | 1999-01-14 |
CA2235455A1 (en) | 1999-01-02 |
GB2327835A (en) | 1999-02-03 |
ZA985607B (en) | 2000-06-01 |
PL337717A1 (en) | 2000-08-28 |
GB2327835B (en) | 2000-04-19 |
DE69804310D1 (de) | 2002-04-25 |
JP2002507291A (ja) | 2002-03-05 |
EP0993670A1 (de) | 2000-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0993670B1 (de) | Method and apparatus for speech enhancement in a speech communication system | |
US10885926B2 (en) | Classification between time-domain coding and frequency domain coding for high bit rates | |
US8265940B2 (en) | Method and device for the artificial extension of the bandwidth of speech signals | |
EP1252621B1 (de) | Vorrichtung und verfahren zur sprachsignalmodifizierung | |
US7379866B2 (en) | Simple noise suppression model | |
KR100574031B1 (ko) | 음성합성방법및장치그리고음성대역확장방법및장치 | |
KR102105044B1 (ko) | 낮은 레이트의 씨이엘피 디코더의 비 음성 콘텐츠의 개선 | |
US20080312916A1 (en) | Receiver Intelligibility Enhancement System | |
JPH1097296A (ja) | 音声符号化方法および装置、音声復号化方法および装置 | |
KR100216018B1 (ko) | 배경음을 엔코딩 및 디코딩하는 방법 및 장치 | |
US7603271B2 (en) | Speech coding apparatus with perceptual weighting and method therefor | |
JP2010520503A (ja) | 通信ネットワークにおける方法及び装置 | |
GB2343822A (en) | Using LSP to alter frequency characteristics of speech | |
Vicente-Peña et al. | Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition | |
McLoughlin | CELP and speech enhancement | |
Ekeroth | Improvements of the voice activity detector in AMR-WB | |
Hennix | Decoder based noise suppression | |
Chen | Adaptive variable bit-rate speech coder for wireless applications | |
Kwon | Improved Excitation Modeling for Low-Rate CELP Speech Coding | |
Chen | Perceptual postfiltering for low bit rate speech coders |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20000126 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
RAX | Requested extension states of the european patent have changed |
Free format text: AL PAYMENT 20000126;LT PAYMENT 20000126;LV PAYMENT 20000126;MK PAYMENT 20000126;RO PAYMENT 20000126;SI PAYMENT 20000126 |
|
RIC1 | Information provided on ipc code assigned before grant |
Free format text: 7G 10L 21/02 A, 7G 10L 13/02 B |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
17Q | First examination report despatched |
Effective date: 20001114 |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: IF02 |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AX | Request for extension of the european patent |
Free format text: AL PAYMENT 20000126;LT PAYMENT 20000126;LV PAYMENT 20000126;MK PAYMENT 20000126;RO PAYMENT 20000126;SI PAYMENT 20000126 |
|
LTIE | Lt: invalidation of european patent or patent extension | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: LI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRE;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.SCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: FR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: CH Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020320 |
|
REF | Corresponds to: |
Ref document number: 214832 Country of ref document: AT Date of ref document: 20020415 Kind code of ref document: T |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REF | Corresponds to: |
Ref document number: 69804310 Country of ref document: DE Date of ref document: 20020425 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020620 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020620 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020620 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020621 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20020701 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20020701 Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20020701 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020731 |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20020925 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
EN | Fr: translation not filed | ||
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20030201 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20020701 |
|
26N | No opposition filed |
Effective date: 20021223 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |