AU8227798A - Method and apparatus for speech enhancement in a speech communication system - Google Patents

Method and apparatus for speech enhancement in a speech communication system

Info

Publication number
AU8227798A
AU8227798A (application number AU82277/98A)
Authority
AU
Australia
Prior art keywords
speech
frequency
output
amplitude
altering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU82277/98A
Inventor
Robert James Chance
Ian Vince Mcloughlin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Simoco International Ltd
Original Assignee
Simoco International Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Simoco International Ltd filed Critical Simoco International Ltd
Publication of AU8227798A publication Critical patent/AU8227798A/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003Changing voice quality, e.g. pitch or formants
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003Changing voice quality, e.g. pitch or formants
    • G10L21/007Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L21/013Adapting to target pitch
    • G10L2021/0135Voice conversion or morphing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/15Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being formant information
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/24Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43Signal processing in hearing aids to enhance the speech intelligibility

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Telephonic Communication Services (AREA)
  • Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
  • Interconnected Communication Systems, Intercoms, And Interphones (AREA)
  • Document Processing Apparatus (AREA)
  • Machine Translation (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The characteristics of the speech received by the decoding unit are altered by a processing unit 10, on the basis of an analysis of the listener's current background noise, before the speech is output, so as to enhance its intelligibility to the listener. An analysis unit 12 determines the type and level of the background noise by means of a microphone 13. A decision unit 11 then determines whether the speech currently being received and replayed would be intelligible to an average listener in the current background noise. If unit 11 determines that the speech is readily intelligible, no processing is necessary and the processing unit 10 does not alter the speech passed to it. However, if unit 11 determines that the speech would be unintelligible, unit 10 alters the speech before passing it to the output so as to make it more intelligible. In a particularly preferred embodiment, the speech characteristics are altered by altering line spectral pair/formant data representing the speech.

Description

WO 99/01863 PCT/GB98/01936 - 1 METHOD AND APPARATUS FOR SPEECH ENHANCEMENT IN A SPEECH COMMUNICATION SYSTEM 5 The present invention relates to a method and apparatus for speech enhancement in a speech communication system, and in particular to such a method and apparatus for enhancing speech to make it more 10 intelligible to a listener in a noisy environment. Speech communication systems such as mobile phones and radios are often used in noisy environments, such as inside vehicles. Furthermore, this environmental noise can vary during a conversation. This varying 15 environmental noise can make it very difficult for a listener to understand the speech being output by their phone or radio. According to one aspect of the present invention, there is provided a method for increasing the 20 intelligibility of speech output by a speech communication system to a listener using the system, comprising: analysing the current background acoustic noise environment of the speech communication system; 25 determining using the results of the background noise analysis whether the speech to be output to the listener would be intelligible to the listener in the current background noise; and altering the characteristics of the speech to be 30 output by the speech communication system on the basis of said determination such that the altered speech output by the speech communication system has enhanced intelligibility to the listener in the current background noise. 35 According to a second aspect of the present invention, there is provided a speech communication system comprising: WO99/01863 PCT/GB98/01936 - 2 means for analysing the current background acoustic noise environment of the speech communication system; means for determining using the results of the background noise analysis whether speech to be output by 5 the speech communication system would be intelligible to a listener in the current background noise environment; and means for altering the characteristics of the speech to be output by the speech communication system 10 to enhance the intelligibility of the speech to a listener in the current background noise in accordance with the output of said determining means. The present invention thus monitors the background noise in which a speech communication system is being 15 used (i.e. the external environmental acoustic noise in the vicinity of the listener) and can adjust the characteristics of the speech to be output by the speech communication system to the listener to make it more intelligible in that current background acoustic noise. 20 It therefore provides enhanced intelligibility of speech output as sound by, for example, the loudspeaker or earpiece of a mobile phone or radio when used in noisy environments. Furthermore, because the present invention analyses 25 current background noise, it can take account of changes in the background noise and enhance the speech accordingly. In the present invention the background acoustic noise is therefore preferably continuously analysed and the speech continuously altered on the 30 basis of that analysis. This provides for dynamic enhancement of the speech and is particularly advantageous in environments where background noise can change continuously and significantly, such as in a vehicle. 35 The background acoustic environmental noise can be analysed by various techniques, as is known in the art. 
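The overall flow described above can be summarised in a short sketch. This is an illustration only: the function names (estimate_noise_spectrum, speech_is_intelligible, alter_speech) are hypothetical placeholders standing in for the analysis, decision and processing stages, and are not defined by the patent or by any particular codec.

```python
def enhance_stream(speech_frames, noise_frames,
                   estimate_noise_spectrum, speech_is_intelligible, alter_speech):
    """Continuously analyse the background noise and alter speech only when needed."""
    enhanced = []
    for speech, noise in zip(speech_frames, noise_frames):
        noise_spectrum = estimate_noise_spectrum(noise)            # analysis stage
        if speech_is_intelligible(speech, noise_spectrum):         # decision stage
            enhanced.append(speech)                                # pass through unchanged
        else:
            enhanced.append(alter_speech(speech, noise_spectrum))  # alteration stage
    return enhanced
```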
It can be picked up or sampled using, for example, the WO 99/01863 PCT/GB98/01936 -3 usual microphone for picking up the user's speech of the speech communication system (e.g. mobile phone or radio), or a separate microphone. An example background noise analysis system would 5 be a process whereby the user's speech (for example in the microphone signal) is detected (using one of many common techniques, such as adding all input noise values in a given time interval and comparing these against a threshold) and the acoustic background noise is analysed 10 during the gaps between the speech periods. The sampled noise would then be analysed (perhaps using linear prediction) to determine both its spectral content and its amplitude. LPC (linear prediction coefficient) values resulting from a linear predictive 15 analysis contain sufficient spectral information, and a gain parameter could be used to relate the relative amplitudes of the LPC parameters to absolute amplitudes. The intelligibility of speech to be output by the speech communication system in the current background 20 noise can be determined using any known standard technique to determine whether the speech would be intelligible to an average listener in the current background noise (i.e. any suitable technique for assessing the effect of that noise on the listener's 25 perception of the speech). Preferably, descriptions of the speech and the background noise in the form of spectral analyses and amplitude scaling factor (gain) are compared to determine if the speech would be audible to a listener 30 in that noise. In a preferred embodiment the speech is first classified into two or more categories, and the amplitude of one of the speech categories at one or more frequencies compared with the noise amplitude at those 35 frequencies. In one such comparison process, the speech contents could initially be classified into non-speech, voiced WO99/01863 PCT/GB98/01936 - 4 speech or unvoiced speech. If non-speech is present (perhaps a pause between words), then the audibility of this is unimportant and so it can be ignored. If voiced speech is present, then its 5 intelligibility needs to be determined. This is preferably done by comparing the amplitude of one or more, or most preferably each, spectral peak and/or of one or more, or most preferably each, formant (as is known in the art, voiced speech contains a series of 10 resonant peaks at varying frequencies called formants which convey a great deal of information and to which spectral peaks in the spectral plot of the speech often correspond) in the voiced speech with the noise amplitude at the frequency of the peak or formant, 15 respectively. If more than one peak or formant is to be considered, then the amplitude of each peak or formant should be compared with the noise amplitude at the frequency of the respective peak or formant. Most preferably, the speech is determined to be 20 unintelligible if the noise amplitude at any formant frequency or spectral peak or at a particular number of formant or spectral peak frequencies exceeds the corresponding formant or spectral peak amplitude(s). Such comparison of the relative amplitudes of 25 spectral peaks and formants in the speech with the background noise will give a good indication of the intelligibility of the speech, because it effectively determines the intelligibility of the speech in terms of a human listener model of intelligibility, i.e. 
it 30 assesses the intelligibility of the speech in a manner that models closely a human listener's actual perception of the speech. As a well-known psycho-acoustic theory states, a sound of a given frequency will be masked by a second coincidental sound of similar frequency, and if 35 the second sound is loud enough, then the former sound will be inaudible. Thus the Applicants have recognised that in the case of speech, loud noises with frequencies WO99/01863 PCT/GB98/01936 - 5 similar to those of formants or spectral peaks in the speech will mask the speech. Thus comparison of the amplitude of one or more or each formant or one or more or each spectral peak in the speech with the noise 5 amplitude at the corresponding frequency or frequencies will give a good indication of the audibility of that (or those) formant(s) or spectral peak(s) and thus of the intelligibility of the speech to a human listener. Other speech classifications and categories could 10 be used if desired. For example, the speech could be classified into vowel and consonant sounds (or other speech sounds). Preferably, a classification is used which is helpful or appropriate to determining intelligibility. Thus preferably, as in the above 15 example, the classification includes a category which includes formants of the speech (preferably only formants) and that category is compared with the noise. Preferably the classification is into formant containing and non-formant containing categories. 20 Once the intelligibility of the speech has been determined, the speech can be altered to make it more intelligible in accordance with that determination. Preferably, if it is determined that the speech would be unintelligible, then the speech characteristics are 25 altered, but not otherwise. Alteration of the speech characteristics can be done in various ways, as is known in the art. It is preferably done by increasing the volume (amplitude) and/or altering the frequency of speech components and 30 in particular the formants and/or spectral peaks in the speech. In a particularly preferred such arrangement, the speech characteristics will be altered by adjusting the positions of the formants and/or spectral peaks in the 35 speech spectral plot. Such alterations will have a more perceptible effect on the speech to a human listener and thus are particularly effective for increasing the WO99/01863 PCT/GB98/01936 - 6 intelligibility of the speech. For example, one or more peaks or formants could be shifted upwards or downwards in frequency, or the amplitude of one or more peaks or formants could be increased (corresponding to a decrease 5 in bandwidth), or the bandwidth of one or more of the peaks or formants could be increased (corresponding to a decrease in amplitude). Thus, for example, the volume of the formants can be increased such that they are audible over the 10 background noise. However, this can be an undesirable way of altering the speech characteristics as speech volume levels sufficient to cause hearing loss (if sustained) may be required to make the speech intelligible in certain situations, notably those within 15 noisy motor vehicles. Preferably therefore the frequency of speech components such as formants or peaks in the speech spectrum is adjusted. This is preferably done to move them to a frequency where the noise level is lower, such 20 that the components, e.g. peaks or formants, are audible (i.e. have an amplitude greater than the noise) at that frequency. 
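A minimal sketch of the masking comparison described above, under stated assumptions: the speech and noise for a frame are given as magnitude spectra in dB on a common frequency grid, simple local maxima stand in for formants, and the frame is flagged as masked if the noise level at any such peak exceeds the peak level. A real implementation would use a proper formant or LPC-peak estimate rather than raw spectral maxima.

```python
import numpy as np

def spectral_peaks(spectrum_db):
    """Indices of local maxima of a magnitude spectrum (a crude stand-in for formants)."""
    s = np.asarray(spectrum_db)
    return [i for i in range(1, len(s) - 1) if s[i] > s[i - 1] and s[i] >= s[i + 1]]

def speech_masked(speech_db, noise_db, margin_db=0.0):
    """True if the background noise masks any spectral peak of the speech frame."""
    return any(noise_db[i] + margin_db > speech_db[i] for i in spectral_peaks(speech_db))
```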
The alteration of speech characteristics is preferably carried out in accordance with the results of the analysis of the background noise, and may be dependent upon the present or past values of the noise. Using present values of noise, a direct comparison may be made and an alteration made to the speech characteristics; using past values, it is possible to make predictive changes. For example, if the noise analysis indicates that the noise amplitude reduces at a particular frequency to a level at which a presently inaudible formant would be audible, the speech characteristics could be altered to change the frequency of that formant to that particular frequency.

The actual alteration of speech characteristics can be carried out in a number of ways, as is known in the art. For example, the speech signal could be passed through an adaptive filter, such as a perceptual error weighting filter (as described in Chen, J.-H., Cox, R.V., Lin, Y.-C., Jayant, N., and Melchner, M.J., "A low-delay CELP coder for the CCITT 16 kb/s speech coding standard", IEEE J. Sel. Areas Commun., 1992, 10(5), pp. 830-849), to narrow or widen the formant bandwidth. Alternatively, the amplitude peaks could be clipped so that the energy in the unvoiced parts of the speech becomes a more significant part of the total speech energy. This can increase intelligibility, but at the expense of sound quality.

In a particularly preferred embodiment, the speech characteristics are altered by altering line spectral pair (LSP) data representing the speech. As is known in the art, line spectral pairs are representations of the linear-prediction parameters derived for periods of sound. Where the sound is speech, the resonant frequencies in the speech, or formants, can be noted in the linear-prediction spectrum. LSP values usually relate uniquely to the positions of such resonances or formants in the linear-prediction spectrum. Thus LSP data can be used to represent speech, and the Applicants have recognised that by altering the LSP data, characteristics such as the frequency and amplitude of formants in the speech can be adjusted. This allows the speech characteristics to be adjusted relatively easily, in a way that can readily change the speech as perceived by a listener, and at a much lower computational overhead than when using, for example, adaptive filtering. Also, such adjustment does not eliminate parts of the speech spectrum, but rather modifies them.

Furthermore, many speech communication systems, such as the speech coding/decoding systems used in mobile telephones or modern digital radio systems, utilise a linear-prediction model of speech and convert this to an LSP representation for transmission. The LSP representation is generally used within such speech systems for reasons of information security and transmission efficiency. This embodiment of the present invention is therefore particularly advantageous in systems which use LSPs for speech transmission, since the LSP information that is transmitted may be altered in the speech communication system when it is received to enhance the intelligibility of the speech. This altered LSP data would then be converted back to linear-prediction parameters and hence reconstructed into speech and output as sound, but with altered characteristics.
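For completeness, the filter-based route mentioned above can be illustrated by generic LPC bandwidth expansion: scaling the i-th LPC coefficient by gamma**i evaluates A(z/gamma), which moves the synthesis-filter poles radially and hence widens (for gamma < 1) the formant bandwidths. This is a standard technique offered only as a sketch; it is not the specific perceptual weighting filter of the cited CELP coder.

```python
import numpy as np

def expand_bandwidth(lpc, gamma=0.95):
    """Return the coefficients of A(z/gamma); lpc = [1, a1, ..., ap]."""
    lpc = np.asarray(lpc, dtype=float)
    return lpc * (gamma ** np.arange(len(lpc)))   # a_i -> a_i * gamma**i
```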
It is believed that the adjustment of LSPs 15 representing speech in a speech communication system to change the characteristics of speech output by that system could be advantageous in itself. Thus according to another aspect of the present invention, there is provided a method of altering the 20 characteristics of speech to be output to a listener in a speech communication system in which the speech data to be processed and output by the speech communication system includes line spectral pair data, comprising altering the line spectral pair data in the speech data. 25 According to a further aspect of the present invention, there is provided a speech communication system in which the speech data to be processed by the speech communication system includes line spectral pair data, comprising means for altering the line spectral 30 pair data in the speech data processed by the speech communication system to change the characteristics of the processed speech as heard by a listener. In these aspects of the invention, the alteration of the LSP data in the speech data is preferably used 35 for the purpose of enhancing the intelligibility of the output speech when listened to in a noisy environment (but it could be useful in other situations where it is WO99/1863 PCT/GB98/01936 - 9 desired to alter the characteristics of speech as heard by a listener, e.g. to disguise the speaker's voice). Thus these aspects of the present invention preferably comprise the technique of adjusting the values of LSPs 5 found within the speech data based upon an analysis of the background acoustic noise environment of the system (i.e. the listener). Preferably, the frequency or the power and bandwidth of specific frequency-domain features, such as formants, found in the speech are 10 altered in this way. The LSP alterations can be designed to affect the reconstructed speech in specific ways and in particular to enhance the intelligibility of the speech over the background noise, as discussed above. For example, the 15 particular line spectral pair (LSP) associated with a formant can be identified and its separation (or spacing) then widened or narrowed to increase or decrease the formant bandwidth. Alternatively or additionally, line spectral pairs can be moved higher or 20 lower in frequency to increase or decrease the frequency of particular formants. The LSP information is preferably altered by adding or subtracting values to one or more LSPs (or LSP lines), or by moving one or more LSPs (or LSP lines) in 25 the speech spectrum. The values may be determined in accordance with the analysis of the background noise, and may be dependent upon the present or past values of each LSP. Using present values of LSP data, a direct comparison can be made with the ambient noise and an 30 adjustment made to the LSP data; using past values, it is possible to make predictive changes. In a particularly preferred such arrangement, the invention includes making a numerical increment or decrement in the value of any or all of the set of LSPs 35 (or LSP lines) defining the speech. Thus individual or groups of LSPs can be moved to: shift one or more spectral peaks or formants in frequency (either upwards WO99/01863 PCT/GB98/01936 - 10 or downwards); or change the amplitude (either to increase the amplitude (decrease the bandwidth) or decrease the amplitude (increase the bandwidth)) of one or more spectral peaks or formants. 
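The elementary LSP moves listed above reduce to simple arithmetic on the ordered set of line frequencies. The sketch below assumes the LSPs are held as a sorted one-dimensional array of frequencies in Hz; the helper names are illustrative only.

```python
import numpy as np

def widen_pair(lsf, i, j, delta_hz):
    """Move two LSP lines apart: wider formant bandwidth, lower peak amplitude."""
    out = np.array(lsf, dtype=float)
    out[i] -= delta_hz
    out[j] += delta_hz
    return np.sort(out)

def shift_pair(lsf, i, j, delta_hz):
    """Translate two LSP lines together: move the formant centre frequency."""
    out = np.array(lsf, dtype=float)
    out[[i, j]] += delta_hz
    return np.sort(out)
```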
5 For example, the separation between the values of two or more of a set of LSP lines (and most preferably between a pair of LSP lines) can be narrowed or widened to narrow or widen frequency features (such as spectral peaks or formants) found in the speech frequency 10 spectrum. Alternatively or additionally, the values of two or more of a set of LSP lines (and most preferably of a pair of LSP lines) can be incremented or decremented, most preferably by identical amounts (either in absolute terms or as a percentage of their 15 original values), to adjust the centre frequency of features (such as spectral peaks or formants) found in the frequency spectrum of the speech. In a particularly preferred embodiment, line spectral pairs are translated in frequency so as to 20 change the centre frequency of particular peaks or formants in the speech data. As discussed above, this is a particularly advantageous way of changing speech characteristics as heard by a listener, for example to increase intelligibility over background noise. 25 It is also possible to predict the behaviour of the background noise from an analysis of previous changes in its spectral content, to enable a faster or more appropriate adjustment to the LSPs. This is particularly applicable to repetitive noise such as a 30 siren in a police car, fire appliance or ambulance. Knowledge of which way the frequency of the interfering noise is changing may affect the decision about which way to shift the formant frequencies. Any or all of the above adjustments can be used 35 individually or in combination to alter the speech characteristics of the speech to be output by the speech communication system in accordance with the analysis of WO99/01863 PCT/GB98/01936 - 11 the background noise of the listener to make the speech output by the speech communication system more intelligible to the listener. The present invention has been described in 5 relation to speech communication systems, such as mobile phones and radios. It is particularly suited to use in speech decoders, such as would be found for example in mobile phones or mobile radios. However, it would also be applicable (and in particular the aspects relating to 10 LSP alteration would be applicable) to use in speech coders where it was desired to alter the characteristics of the user's input speech to be transmitted by the speech coder (for example to increase intelligibility over the speaker's background noise). It would also be 15 applicable in radio receivers, televisions, or other devices which broadcast speech to listeners. Also although it has been described with particular reference to increasing the intelligibility of speech, it could also be used to increase the intelligibility of other 20 sounds, such as music. A preferred embodiment of the present invention will now be described by way of example only, and with reference to the accompanying drawings, in which: Figure 1 shows a generic CELP codec structure; 25 Figure 2 shows a block diagram of a typical speech communication system in accordance with the present invention; Figure 3 shows the frequency spectrum of a period of sound, with numbered LSP values for that sound 30 overlaid as vertical lines; and Figure 4 shows the frequency spectrum of a period of sound derived from the LSP values of Figure 3 with specific alterations. The altered LSP values for that sound are overlaid as vertical lines. 
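A plot of the kind shown in Figures 3 and 4 can be reproduced by evaluating the LPC spectral envelope 1/|A(e^{jw})| on a frequency grid and overlaying the LSP positions as vertical lines. The sketch below assumes matplotlib for drawing and takes the LPC coefficients and the LSP values (in radians) as inputs; it is an illustration, not code from the patent.

```python
import numpy as np
import matplotlib.pyplot as plt

def lpc_envelope_db(lpc, n_points=512):
    """Magnitude of 1/A(e^{jw}) in dB at n_points frequencies in [0, pi)."""
    w = np.linspace(0.0, np.pi, n_points, endpoint=False)
    k = np.arange(len(lpc))
    A = np.exp(-1j * np.outer(w, k)) @ np.asarray(lpc, dtype=float)
    return w, -20.0 * np.log10(np.abs(A) + 1e-12)

def plot_envelope_with_lsf(lpc, lsf_radians):
    """Draw the LPC envelope with the LSP lines overlaid, as in Figures 3 and 4."""
    w, env_db = lpc_envelope_db(lpc)
    plt.plot(w, env_db)
    for x in lsf_radians:
        plt.axvline(x, linestyle="--", linewidth=0.8)
    plt.xlabel("frequency (radians)")
    plt.ylabel("envelope (dB)")
    plt.show()
```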
The present invention is particularly applicable to use in a speech codec system such as would be used in a mobile phone or radio system. An example of such a codec structure is shown in Figure 1, in the form of a generic CELP coder. The general CELP (codebook-excited linear prediction) structure was introduced in 1985 (see, for example, Schroeder MR, Atal BS, "Code-excited linear prediction (CELP): high-quality speech at very low bit rates", ICASSP, pp. 937-940, 1985), and many modifications have been made since.

A generic CELP codec structure 22 is shown in Figure 1. Figure 1 shows input speech 21 being analysed by a linear prediction analyser unit or device 2, resulting in linear prediction (LPC) parameters 3. The remainder of the input signal, which linear prediction cannot describe, is passed to a pitch filter, VQ encoding block 4, which produces parameters representative of, for example, the gain and pitch of the speech. These processes are unimportant to the invention and vary widely in their detail between different CELP implementations; however, they result in various other parameters which, together with the LPC parameters, describe the input speech.

The LPC parameters 3 and any other parameters (such as gain and pitch) 5 describing the input speech are quantized by a quantizer 6 and transmitted (as transmission parameters 7) to the CELP decoder 14, which dequantizes them using a dequantizer 8. These dequantized values are then used to recreate speech 15 to be output as sound to a listener. (The dequantizer 8 reproduces the LPC parameters 3 and other parameters 5 by means of an LPC synthesiser 30 and a pitch filter, VQ decoding block 31, respectively, which reproduce the speech for it to be output as sound 15.)

LPC parameters may alternatively be converted to a different form prior to quantization in the coder (and also converted back to LPC coefficients after dequantization). Such forms may include log area ratios, PARCOR (reflection coefficients) and line spectral pairs. Differences in the representation of the LPC parameters used, and in the types (or usage) of pitch filter and vector quantizer (VQ), have led to many CELP variants. A small selection of examples are: MELP (mixed excitation linear prediction); VSELP (vector sum excited linear prediction); SB-CELP (sub-band CELP); LD-CELP (low delay CELP); RELP (residual excitation linear prediction); RPE-LP (residual pulse excitation linear prediction); and others. As noted above, in many such codecs the LPC parameters are transmitted as LSPs.

The terminology 'LSPs' refers to the parameters generated by a conversion of linear prediction coefficients using the line spectrum pair approach described in the paper by Sugamura and Itakura (Sugamura N, Itakura F, "Speech analysis and synthesis methods developed at ECL in NTT - from LPC to LSP -", Speech Communication, vol. 5, pp. 199-213, 1986). The linear prediction coefficients themselves are generated by any of the well-established analysis methods operating on a set of data (speech), such as those described in Makhoul J, "Linear prediction: a tutorial review", Proc. IEEE, vol. 63, no. 4, pp. 561-580, 1975. LSPs are generated via a mathematical transformation from LPCs and thus have identical information content, but different form.
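Before considering that transformation, the linear prediction coefficients themselves are conventionally obtained with the autocorrelation method and the Levinson-Durbin recursion covered in the Makhoul tutorial cited above. The following is a compact textbook sketch under the convention A(z) = 1 + a1·z^-1 + ... + ap·z^-p, not the specific analyser unit 2 of any particular codec.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients [1, a1, ..., ap] of one speech frame (autocorrelation method)."""
    x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]  # r[0]..r[order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12                       # guard against an all-zero frame
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]                # Levinson-Durbin order update
        a[i] = k
        err *= (1.0 - k * k)                               # residual prediction error
    return a
```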
Many other mathematical transformations from LPCs have been determined, but none of the resulting parameters can be altered in the same way as LSPs and as described in the present invention. The line spectral pair parameters may be referred to as line spectral frequencies; however, this term is not applied exclusively to LSPs.

Mathematically speaking, LSP parameters may be defined as the roots of the two polynomials formed by a particular re-arrangement of the coefficients of the inverse linear prediction polynomial. These two polynomials may be called P and Q and are formed from the set of linear prediction coefficients A_p (where p is the index of the array, usually running from 0 to the filter order p), according to the following relationship:

P(z^{-1}) = A_p(z^{-1}) - z^{-(p+1)} A_p(z)

Q(z^{-1}) = A_p(z^{-1}) + z^{-(p+1)} A_p(z)
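A direct numerical reading of the P and Q construction just given, using a general-purpose polynomial root finder purely for illustration (production codecs use the specialised root searches referred to below). The convention assumed is lpc = [1, a1, ..., ap] for A(z), with the line frequencies returned in radians.

```python
import numpy as np

def lpc_to_lsf(lpc):
    """Line spectral frequencies in radians, ascending in (0, pi)."""
    a = np.append(np.asarray(lpc, dtype=float), 0.0)  # A padded to degree p+1
    P = a - a[::-1]        # P(z^-1) = A(z^-1) - z^-(p+1) A(z)
    Q = a + a[::-1]        # Q(z^-1) = A(z^-1) + z^-(p+1) A(z)
    lsf = []
    for poly in (P, Q):
        w = np.angle(np.roots(poly))
        # keep one angle per conjugate pair; drop the trivial roots at z = +1 and z = -1
        lsf.extend(x for x in w if 1e-6 < x < np.pi - 1e-6)
    return np.sort(lsf)
```

Applied to coefficients produced by an LPC analysis such as the lpc() sketch above, this yields an ordered set of lines of the kind overlaid in Figure 3.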
The roots obtained by solving the polynomials P and Q give the line spectral frequency parameters, referred to as line spectral pairs. Many methods exist to determine these roots, as explained in, for example, the paper by Sugamura and Itakura referred to above. The choice of method is irrelevant for the purposes of the present invention.

The set of LSPs is often scaled. With reference to a 'basic' LSP value, the cosine or sine of these values are also referred to as LSPs. In addition, the basic LSP may reside in one of various domains, i.e. its maximum and minimum values may lie between 0 and π, between 0 and 4000 Hz (a typical sampling frequency), or within other arbitrary ranges such as 0 to 1.

As an aid to understanding of the present invention, a non-mathematical description of line spectral pairs (LSPs) will also be considered. As LSPs are derived from LPC and reflection coefficients, it is necessary to cover these first.

Linear prediction is the use of a fixed-length formula to model an unknown system. The formula structure is fixed, but the values to be inserted into the formula must be found. Linear predictive analysis is the process of finding the best set of values for that formula. These values are the linear prediction coefficients, and the best set of these values is the set that causes the equation output to resemble the output of the system to be modelled most closely, when the inputs to the two systems are identical.

If the equation of that formula is re-ordered mathematically, another standard equation can be arrived at. The coefficients of the new equation are called reflection coefficients and can be found easily from the LPC coefficients.

The reflection coefficient equation is very easy to relate to a real system. For speech processing, the LPC analysis is attempting to find the best parameters that model a short period of speech. In physical terms, the model is made up of a number of tubes of different widths but equal length connected in series. The reflection coefficients fit well into this physical model, as they relate directly to the difference between each pair of consecutive tubes.

When air is blown down tubes, resonances occur (as in organ pipes). In a human vocal tract, air originates at the glottis (which opens and closes rapidly) and proceeds through the vocal tract to be expelled at the mouth. The sound relates strongly to the shape of the vocal tract because of these resonances.

The LSP parameters each relate to the resonant frequency of one of the connected tubes. Half of the parameters are generated assuming that the source end of the tubes is open, and half assuming that it is closed. In fact, the glottis opens and closes rapidly and so is neither fully open nor fully closed. Thus each true spectral resonance occurs between two nearby line spectral frequencies, and these two values are considered to be a pair (hence 'line spectral pair').

An embodiment of the present invention in a speech communication system comprising a speech codec, and using LSP alteration to enhance the intelligibility of speech in a noisy environment, is shown in Figure 2, and the signal processing is illustrated in Figures 3 and 4. The system shown in Figure 2 has many features in common with the system of Figure 1, and thus the same reference numerals have been used for the like features of the systems.
The LSP alteration mechanism may act within a 5 speech codec (a codec comprises both a coding 22 and a decoding 14 mechanism) in the positions shown in Figure 2 (i.e. in the speech decoder 14). The speech coder 22 transforms the input speech 21 into a set of condensed parameters 20 suitable for transmission by radio or 10 other means to a receiving unit 14. (It should be noted that in this arrangement the LPC parameters produced by the linear prediction analyser 2 are converted to line spectral pair data by an LPC to LSP converter 32 before being quantized by the quantizer 6.) The receiving unit 15 then decodes the transmitted data to reconstruct speech 15. By way of example, the coding unit 22 may reside in an office telephone and the decoding unit 14 within a mobile telephone handset. In this embodiment alterations to the data received 20 by the decoding unit, where that data comprises LSP information, are performed. This alteration unit is shown in Figure 2 as LSP processor 10. The LSP processing depends upon the degree and type of acoustic noise background 16 that is present in the 25 environment of the listener. The analysis unit 12 shown in Figure 2 determines the type and level of background noise by use of a microphone 13 which picks up, inter alia, the actual external background acoustic noise of the listener's environment. 30 An example of a noise analysis system would be a process whereby the user's speech is detected (using one of many common techniques, such as adding all input noise values in a given time interval and comparing these against a threshold) and the external acoustic 35 background noise is considered during the gaps between speech periods. The sampled noise must then be analysed (perhaps WO99/01863 PCT/GB98/01936 - 17 using linear prediction) to determine both its spectral content and its amplitude. LPC (linear prediction coefficient) values resulting from a linear predictive analysis contain sufficient spectral information, and a 5 gain parameter would relate the relative amplitudes of the LPC parameters to absolute amplitudes. The decision device or unit 11 determines whether the speech data currently being received by the decoder and replayed as sound via the loudspeaker or ear piece 10 of the mobile telephone unit would be intelligible to an average listener in the current background acoustic noise 16 of the mobile telephone unit (i.e. listener). If the decision unit determines that speech is readily intelligible then no processing is necessary and 15 the processing unit 10 would not alter the dequantized LSP parameters 17 which have been passed to it by the standard speech decoder, before passing them to the LSP to LPC converter 33. On the other hand, if the decision unit determines 20 that the speech is unintelligible, then processing is necessary and the processing unit 10 would alter the dequantized LSP parameters to alter the speech characteristics before passing them to the LSP to LPC converter for subsequent playback to the listener. The 25 decision unit may also predict that the speech will shortly become unintelligible. Inputs to the decision process are descriptions of speech and background noise, in the form of spectral analyses and amplitude scaling factor (gain). It is 30 necessary to compare the speech and noise data to determine if the speech would be audible to a listener in that noise. Comparison could be to initially classify the contents of the speech signal into non-speech, voiced 35 speech or unvoiced speech. 
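One of the "many common techniques" alluded to above for this three-way classification is a simple frame-energy and zero-crossing-rate test, sketched here with placeholder thresholds that would need tuning. It is an assumption-laden illustration, not the patent's prescribed detector.

```python
import numpy as np

def classify_frame(frame, energy_floor=1e-4, zcr_voiced=0.12):
    """Label a frame 'non-speech', 'voiced' or 'unvoiced'."""
    x = np.asarray(frame, dtype=float)
    energy = np.mean(x * x)
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))  # fraction of sign changes
    if energy < energy_floor:
        return "non-speech"
    return "voiced" if zcr < zcr_voiced else "unvoiced"
```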
If non-speech was present (perhaps a pause between words), then the audibility of this is unimportant and thus no enhancement is required, and the LSP-process module would be commanded to perform no processing. If voiced speech is present (voiced speech contains a series of resonance peaks at various frequencies called formants), then the amplitude of each formant would be compared to the noise amplitude at that frequency to determine its audibility. If the noise amplitude at any formant frequency exceeds the formant amplitude, then formant adjustment is required. Other known techniques for determining the intelligibility of the speech to be output could be used, if desired.

The LSP process unit 10 performs mathematical operations on individual LSPs to enhance the speech under the control of the decision unit. The exact operations would depend upon the directions of the decision process. One speech enhancement function would entail the shifting of LSP lines to more favourable locations. For example, an automatic examination of the noise amplitudes around the formant frequency might reveal that shifting the formant frequency upwards or downwards by 10% may improve matters. If this is likely (perhaps because the noise amplitude reduces at a frequency 10% lower than the formant frequency), then the LSP processing block is directed to shift the appropriate LSPs by the corresponding amount.

If, for example, the formant that requires moving is located at 600 Hz, then two LSP coefficients would exist, usually very close to and on either side of 600 Hz. If audibility is to be improved by a downwards shift of 10%, then the values of these two LSP parameters would each be multiplied by 0.9 to effect that shift. The LSP adjustment itself is confined to within the LSP process block. As a further example, if the decision module determined that shifting lines 1 and 2 from a set of LSPs downwards in frequency by 10% would improve intelligibility, then the values of lines 1 and 2 would both be multiplied by a factor of 0.9. If the decision module determined that upward shifting of line 3 by 100 Hz improves intelligibility, then an amount would be added to line 3. This amount would be equal to 100 if the LSP parameters were scaled to have values in Hz, or, more generally, 100 x 2π / f_s (where f_s is the sampling rate of the system) if the values of the LSPs are confined to the angular frequency domain. Other types of processing are possible, but they may all be described as adding or subtracting values to one or more LSP lines (with adding LSP lines to themselves being equivalent to multiplication). The values may be determined by the decision module or may be dependent upon the present or past value of each LSP line.

An example of such LSP processing is illustrated in Figure 3, in which the frequency spectrum of a period of sound has been plotted and the 10 LSP lines obtained from analysing this sound have been overlaid. LSP values may be readily converted to and from the LPC parameters from which the spectrum is plotted. For the specific example in question, Figure 3 thus shows the frequency spectrum of the sound obtained from the analysis of speech 21 in the CELP coder 22 of Figure 2. In the case of a standard CELP decoder, operating without the benefit of this invention, the output speech 15 would be reconstructed using the data of Figure 3.
When the invention is included, the LSP processing block 10 would be capable of altering the LSP values in order to change the output speech 15.
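The worked adjustments described above (multiplying a pair of lines by 0.9 for a 10% downward shift, or adding 2π·100/f_s to move a line up by 100 Hz in the angular-frequency domain) amount to the following sketch. It assumes the LSP vector and the indices of the lines to move are already known; the numeric values are invented for illustration.

```python
import numpy as np

def scale_lines(lsf, indices, factor):
    """Multiply selected LSP lines, e.g. factor=0.9 for a 10% downward formant shift."""
    out = np.array(lsf, dtype=float)
    out[list(indices)] *= factor
    return out

def offset_lines_hz(lsf, indices, delta_hz, fs=None):
    """Add a fixed shift: delta_hz directly if the LSPs are in Hz, else 2*pi*delta_hz/fs."""
    out = np.array(lsf, dtype=float)
    step = delta_hz if fs is None else 2.0 * np.pi * delta_hz / fs
    out[list(indices)] += step
    return out

# Worked example from the text (0-based indices): lines 1 and 2 down by 10%,
# line 3 up by 100 Hz, for LSPs held in radians at an assumed 8 kHz sampling rate.
lsf = np.array([0.35, 0.52, 0.71, 1.10, 1.43, 1.80, 2.05, 2.33, 2.64, 2.90])
lsf = scale_lines(lsf, [0, 1], 0.9)
lsf = offset_lines_hz(lsf, [2], 100.0, fs=8000)
```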
For the specific example of Figure 4, certain of the LSP values of the spectrum of Figure 3 have been altered, and a new set of LPC coefficients has thus been generated, forming the spectrum shown in Figure 4. Referring to the LSP values of the original spectrum in Figure 3, three operations have been performed:

1. The separation between lines 1 and 2 has been increased by moving both of the lines further apart (in other words, line 1 has been lowered in frequency and line 2 has been raised).

2. Lines 5 and 6 have been increased in frequency.

3. Line 10 has been increased in frequency.

The three actions have specific consequences for the sound that is transmitted:

1. Lines 1 and 2 lie on either side of a spectral peak. The movement of the two lines has induced this spectral peak both to reduce in amplitude and to become wider (equivalent to an increase in bandwidth).

2. Lines 5 and 6 lie on either side of a second spectral peak. The movement of these two lines has induced that peak to increase in frequency.

3. Line 10 previously lay to the right of a very small spectral 'bump' which is now no longer evident, as the line has been increased in frequency by a substantial amount.

In this specific example of a speech codec, the sound under analysis is speech. The spectral peaks evident in the spectral plots will then often, as discussed above, correspond to formants, important constituents of speech that convey a great deal of information. The LSP-based adjustments discussed above have thus changed the characteristics of the speech to be output to, and as it will be perceived by, the listener. For example, in the case of vowels, moderately widening the lines corresponding to spectral peaks (i.e. increasing the bandwidths of the formants) has been found to improve intelligibility.

The example shown in Figure 2 additionally analyses the noise present in the environment of the listener to determine if the speech to be replayed to that listener is intelligible. If not, then the speech characteristics are altered in the present invention to improve the intelligibility of the speech by the operation of moving individual or groups of LSPs to provide the following set of operations:

1. Shift a peak/formant upwards in frequency.

2. Shift a peak/formant downwards in frequency.

3. Increase the amplitude (decrease the bandwidth) of a peak/formant.

4. Increase the bandwidth (decrease the amplitude) of a peak/formant.

A well-known psychoacoustic theory states that a sound of a given frequency will be masked by a second coincidental sound of similar frequency. If the second sound is loud enough, then the former sound will be inaudible. Thus, in the case of speech, the Applicants have recognised that loud noises with frequencies similar to those of the formants will mask the speech. In order to hear the speech it is necessary either to increase the volume or to alter the frequency of the speech components. Volume alteration is relatively straightforward, but it should be noted that speech volume levels sufficient to cause hearing loss (if sustained) may be required to make speech intelligible in certain situations, notably those within noisy motor vehicles. It is therefore preferred to alter the frequency of speech components.
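Expressed in the same style as the earlier sketches, the three Figure 3/4 operations could look as follows; the numeric LSP values are invented for illustration, since the actual values belong to the spectra in the figures, which are not reproduced here.

```python
import numpy as np

# Hypothetical set of 10 LSP values in radians standing in for the Figure 3 data.
lsf = np.array([0.30, 0.42, 0.80, 1.05, 1.30, 1.46, 1.90, 2.20, 2.55, 2.85])

# 1. Widen the pair around the first peak: line 1 down, line 2 up
#    (bandwidth increases, amplitude decreases).
lsf[0] -= 0.05
lsf[1] += 0.05

# 2. Move lines 5 and 6 up together: the second peak shifts up in frequency.
lsf[[4, 5]] += 0.10

# 3. Move line 10 up by a substantial amount: the small 'bump' it marked disappears.
lsf[9] = min(lsf[9] + 0.25, np.pi - 0.01)

lsf = np.sort(lsf)   # LSPs must remain ordered for a stable synthesis filter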
As can be seen, the present invention offers a method of reducing the masking of speech by acoustic background noise (and thus improving intelligibility) through an efficient process that may be combined with many current standard mobile telephone and radio systems, and with the standard speech codecs used in such systems. Speech enhancement results when an analysis of the listener's background noise environment is combined with corrective LSP alteration, which adjusts the received speech data to be replayed to the listener in order to improve the chances of the listener hearing the processed sounds. The technique adjusts the values of LSPs found within the speech codec data based upon an analysis of the background acoustic noise environment of the listener. Preferably, the frequency or the power and bandwidth of specific frequency-domain features found in the received speech are altered in this way.

Claims (38)

1. A method for increasing the intelligibility of speech output by a speech communication system to a 5 listener using the system, comprising: analysing the current background acoustic noise environment of the listener; determining using the results of the background noise analysis whether the speech to be output to the 10 listener would be intelligible to the listener in their current background noise environment; and altering the characteristics of the speech to be output by the speech communication system on the basis of said determination such that the altered speech has 15 enhanced intelligibility to the listener in their current background noise environment.
2. A method as claimed in claim 1, wherein the intelligibility of the speech to be output is determined 20 by classifying the contents of the speech into at least two categories, and comparing the amplitude of the speech in one category at one frequency with the noise amplitude at that frequency. 25
3. A method as claimed in claim 1 or 2, wherein the intelligibility of the speech to be output is determined by classifying the contents of the speech into a category which contains formants in the speech, and comparing the amplitude of the formant containing speech 30 category at one frequency with the noise amplitude at that frequency.
4. A method as claimed in any one of claims 1 to 3, wherein the intelligibility of the speech to be output 35 is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and comparing the amplitude of the voiced speech at one WO99/01863 PCT/GB98/01936 - 24 frequency with the noise amplitude at that frequency.
5. A method as claimed in any one of claims 1 to 4, wherein the intelligibility of the speech to be output 5 is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and comparing the amplitude of a spectral peak of the voiced speech having a centre frequency, with the noise amplitude at the centre frequency of the spectral peak. 10
6. A method as claimed in any one of claims 1 to 5, wherein the intelligibility of the speech to be output is determined by classifying the contents of the speech into non-speech, voiced speech or unvoiced speech, and 15 comparing the amplitude of a formant of the voiced speech having a centre frequency, with the noise amplitude at the centre frequency of the formant.
7. A method as claimed in any of claims 1 to 6, 20 wherein the speech is determined to be unintelligible if the background noise amplitude at substantially the same frequency as a spectral peak in the speech exceeds the amplitude of the spectral peak. 25
8. A method as claimed in any one of claims 1 to 7, wherein the speech is determined to be unintelligible if the background noise amplitude at substantially the same frequency as a formant in the speech exceeds the amplitude of the formant. 30
9. A method as claimed in any one of claims 1 to 8, wherein the speech characteristics are altered by altering line spectral pair (LSP) data representing the speech. 35
10. A method as claimed in claim 9, wherein the speech characteristics are altered by moving a line spectral WO99/01863 PCT/GB98/01936 - 25 pair in the speech spectrum.
11. A method as claimed in any one of claims 1 to 10, wherein the speech characteristics are altered by 5 altering the frequency of a component in the speech spectrum.
12. A method as claimed in claim 11, wherein the frequency of a formant in the speech spectrum is 10 altered.
13. A method as claimed in claim 12, wherein the frequency of a formant in the speech is altered to move the formant to a frequency where the background noise 15 amplitude is lower.
14. A method as claimed in claim 11, 12 or 13, wherein the speech spectrum includes a spectral peak having a centre frequency, and the centre frequency of the 20 spectral peak in the speech spectrum is altered.
15. A speech communication system comprising: means for analysing the current background acoustic noise environment of the speech communication system; 25 means for determining using the results of the background noise analysis whether speech to be output by the speech communication system to a listener listening to the speech communication system would be intelligible to the listener in the current background noise 30 environment; and means for altering the characteristics of the speech to be output by the speech communication system to the listener to enhance the intelligibility of the speech to the listener in the current background noise 35 in accordance with the output of said determining means.
16. A system as claimed in claim 15, wherein the means WO 99/01863 PCT/GB98/01936 - 26 for determining whether the speech to be output would be intelligible comprises means for classifying the contents of the speech into different categories, and means for comparing the amplitude of one of the speech 5 categories at one frequency with the noise amplitude at that frequency.
17. A system as claimed in claim 16, wherein the means for classifying the contents of the speech into 10 different categories classifies the contents of the speech into a category which contains formants in the speech, and the comparing means compares the amplitude of the formant containing speech category at one frequency with the noise amplitude at that frequency. 15
18. A system as claimed in any one of claims 15 to 17, wherein the means for determining whether the speech to be output would be intelligible comprises means for comparing the noise amplitude at substantially the same 20 frequency as a formant in the speech with the amplitude of the formant.
19. A system as claimed in any one of claims 15 to 18, wherein the speech is represented by data including line 25 spectral pair (LSP) data, and the means for altering the characteristics of the speech to be output by the speech communication system comprises means for altering the line spectral pair (LSP) data representing the speech. 30
20. A system as claimed in any one of claims 15 to 19, wherein the means for altering the characteristics of the speech to be output by the speech communication system comprises means for altering the frequency of a component in the speech spectrum. 35
21. A system as claimed in claim 20, wherein the means for altering the characteristics of the speech to be WO99/01863 PCT/GB98/01936 - 27 output by the speech communication system comprises means for altering the frequency of a formant in the speech to move the formant to a frequency where the noise amplitude is lower. 5
22. A method of altering the characteristics of speech to be output to a listener in a speech communication system in which the speech data to be processed by the speech communication system and output as sound 10 includes line spectral pair data, comprising altering the line spectral pair data in the speech data.
23. A method as claimed in claim 22, wherein the line spectral pair data in the speech data is altered to 15 alter the frequency of a component in the speech spectrum.
24. A method as claimed in claim 23, wherein the frequency of a formant in the speech spectrum is 20 altered.
25. A method as claimed in claim 23 or 24, wherein the centre frequency of a spectral peak in a speech spectrum is altered. 25
26. A method as claimed in any one of claims 22 to 25, wherein the line spectral pair data is altered by changing the frequency of a line spectral pair in the speech spectrum. 30
27. A method as claimed in any one of claims 22 to 26, wherein the line spectral pair data is altered by decreasing the spacing of a line spectral pair in the speech spectrum. 35
28. A speech communication system in which the speech data to be processed by the speech communication system WO99/01863 PCT/GB98/01936 - 28 includes line spectral pair data, comprising means for altering the line spectral pair data in the speech data processed by the speech communication system to change the characteristics of the processed speech as heard by 5 a listener.
29. A system as claimed in claim 28, wherein the means for altering the line spectral pair data comprises means for altering the line spectral pair data in such a 10 manner that the frequency of a component in the speech spectrum is changed.
30. A system as claimed in claim 29, wherein the means for altering the line spectral pair data comprises means 15 for altering the frequency of a formant in the speech spectrum.
31. A system as claimed in claim 29 or 30, wherein the means for altering the line spectral pair data comprises 20 means for altering the frequency of a spectral peak in the speech spectrum.
32. A system as claimed in any one of claims 28 to 31, wherein the means for altering the line spectral pair 25 data comprises means for altering the frequency of a line spectral pair in the speech spectrum.
33. A system as claimed in any one of claims 28 to 32, wherein the means for altering the line spectral pair 30 data comprises means for decreasing the spacing of a line spectral pair in the speech spectrum.
34. A method for increasing the intelligibility of speech output by a speech communication system to a listener using the system, comprising: analysing the current background acoustic noise environment of the listener; comparing, using the results of the background noise analysis, the amplitude of formants in the speech spectrum of the speech to be output to the listener with the amplitude of the background noise; and altering the characteristics of the speech to be output by the speech communication system on the basis of said comparison such that the altered speech has enhanced intelligibility to the listener in their current background noise environment.
35. A speech communication system comprising: means for analysing the current background acoustic noise environment of the speech communication system; means for comparing, using the results of the background noise analysis, the amplitude of formants in the speech spectrum of the speech to be output by the speech communication system with the amplitude of the background noise; and means for altering the characteristics of the speech to be output by the speech communication system to the listener to enhance the intelligibility of the speech to the listener in the current background noise in accordance with the output of said comparison device.
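Illustration (not part of the claims): claims 34 and 35 compare the amplitude of formants in the outgoing speech with the amplitude of the listener's background noise and alter the speech on the basis of that comparison. The Python sketch below is one simplified reading of the comparison step only, not the claimed system: both spectra are assumed to be in dB on a common frequency grid with a shared level reference, and the 3 dB margin, 10 dB maximum boost and smoothing width are invented for the example.

import numpy as np
from scipy.signal import find_peaks

def formant_noise_gains(speech_env_db, noise_db, margin_db=3.0, max_boost_db=10.0):
    """Find peaks of the speech spectral envelope (formant candidates), compare
    each peak level with the background-noise level in the same bin, and return
    a per-bin gain (dB) lifting any masked formant to margin_db above the noise."""
    peaks, _ = find_peaks(speech_env_db)
    gains = np.zeros_like(speech_env_db)
    for k in peaks:
        deficit = (noise_db[k] + margin_db) - speech_env_db[k]
        if deficit > 0.0:
            gains[k] = min(deficit, max_boost_db)
    # spread each per-peak boost over a small band so the correction acts on the
    # whole formant rather than on a single frequency bin
    gains = np.convolve(gains, np.hanning(9), mode="same")
    return np.minimum(gains, max_boost_db)

# toy example: a speech envelope with peaks at bins 40/100/170 against a flat 40 dB noise floor
bins = np.arange(256)
speech_env_db = (30.0 + 20.0 * np.exp(-0.5 * ((bins - 40) / 6.0) ** 2)
                      + 12.0 * np.exp(-0.5 * ((bins - 100) / 8.0) ** 2)
                      + 6.0 * np.exp(-0.5 * ((bins - 170) / 8.0) ** 2))
noise_db = np.full(256, 40.0)
print(formant_noise_gains(speech_env_db, noise_db).max())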
36. A speech communication system, substantially as hereinbefore described with reference to any one of the accompanying drawings.
37. A method for increasing the intelligibility of speech output by a speech communication system to a listener using the system, substantially as hereinbefore described with reference to any one of the accompanying drawings.
38. A method of altering the characteristics of speech to be output to a listener in a speech communication system, substantially as hereinbefore described with reference to any one of the accompanying drawings.
AU82277/98A 1997-07-02 1998-07-01 Method and apparatus for speech enhancement in a speech communication system Abandoned AU8227798A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GBGB9714001.6A GB9714001D0 (en) 1997-07-02 1997-07-02 Method and apparatus for speech enhancement in a speech communication system
GB9714001 1997-07-02
PCT/GB1998/001936 WO1999001863A1 (en) 1997-07-02 1998-07-01 Method and apparatus for speech enhancement in a speech communication system

Publications (1)

Publication Number Publication Date
AU8227798A true AU8227798A (en) 1999-01-25

Family

ID=10815285

Family Applications (1)

Application Number Title Priority Date Filing Date
AU82277/98A Abandoned AU8227798A (en) 1997-07-02 1998-07-01 Method and apparatus for speech enhancement in a speech communication system

Country Status (12)

Country Link
EP (1) EP0993670B1 (en)
JP (1) JP2002507291A (en)
KR (1) KR20010014352A (en)
CN (1) CN1265217A (en)
AT (1) ATE214832T1 (en)
AU (1) AU8227798A (en)
CA (1) CA2235455A1 (en)
DE (1) DE69804310D1 (en)
GB (2) GB9714001D0 (en)
PL (1) PL337717A1 (en)
WO (1) WO1999001863A1 (en)
ZA (1) ZA985607B (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE9903553D0 (en) * 1999-01-27 1999-10-01 Lars Liljeryd Enhancing conceptual performance of SBR and related coding methods by adaptive noise addition (ANA) and noise substitution limiting (NSL)
FR2794322B1 (en) * 1999-05-27 2001-06-22 Sagem NOISE SUPPRESSION PROCESS
EP1210765B1 (en) 1999-07-28 2007-03-07 Clear Audio Ltd. Filter banked gain control of audio in a noisy environment
US6876968B2 (en) * 2001-03-08 2005-04-05 Matsushita Electric Industrial Co., Ltd. Run time synthesizer adaptation to improve intelligibility of synthesized speech
DE10124189A1 (en) * 2001-05-17 2002-11-21 Siemens Ag Signal reception in digital communications system involves generating output background signal with bandwidth greater than that of background signal characterized by received data
JP2003255993A (en) * 2002-03-04 2003-09-10 Ntt Docomo Inc System, method, and program for speech recognition, and system, method, and program for speech synthesis
JP2005530213A (en) * 2002-06-19 2005-10-06 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Audio signal processing device
WO2004068467A1 (en) * 2003-01-31 2004-08-12 Oticon A/S Sound system improving speech intelligibility
KR20050049103A (en) * 2003-11-21 2005-05-25 삼성전자주식회사 Method and apparatus for enhancing dialog using formant
WO2006026812A2 (en) * 2004-09-07 2006-03-16 Sensear Pty Ltd Apparatus and method for sound enhancement
US8280730B2 (en) 2005-05-25 2012-10-02 Motorola Mobility Llc Method and apparatus of increasing speech intelligibility in noisy environments
GB2433849B (en) 2005-12-29 2008-05-21 Motorola Inc Telecommunications terminal and method of operation of the terminal
DE102006001730A1 (en) 2006-01-13 2007-07-19 Robert Bosch Gmbh Sound system, method for improving the voice quality and / or intelligibility of voice announcements and computer program
EP1814109A1 (en) * 2006-01-27 2007-08-01 Texas Instruments Incorporated Voice amplification apparatus for modelling the Lombard effect
JP2007295347A (en) * 2006-04-26 2007-11-08 Mitsubishi Electric Corp Voice processor
WO2018127263A2 (en) * 2017-01-03 2018-07-12 Lizn Aps Speech intelligibility enhancing system
KR101414233B1 (en) 2007-01-05 2014-07-02 삼성전자 주식회사 Apparatus and method for improving speech intelligibility
JP4926005B2 (en) 2007-11-13 2012-05-09 ソニー・エリクソン・モバイルコミュニケーションズ株式会社 Audio signal processing apparatus, audio signal processing method, and communication terminal
WO2009086174A1 (en) 2007-12-21 2009-07-09 Srs Labs, Inc. System for adjusting perceived loudness of audio signals
JP5453740B2 (en) * 2008-07-02 2014-03-26 富士通株式会社 Speech enhancement device
US8538042B2 (en) 2009-08-11 2013-09-17 Dts Llc System for increasing perceived loudness of speakers
EP2372700A1 (en) * 2010-03-11 2011-10-05 Oticon A/S A speech intelligibility predictor and applications thereof
PL2737479T3 (en) 2011-07-29 2017-07-31 Dts Llc Adaptive voice intelligibility enhancement
CN103002105A (en) * 2011-09-16 2013-03-27 宏碁股份有限公司 Mobile communication method capable of improving articulation of communication contents
CN103297896B (en) * 2012-02-27 2016-07-06 联想(北京)有限公司 A kind of audio-frequency inputting method and electronic equipment
US9015044B2 (en) 2012-03-05 2015-04-21 Malaspina Labs (Barbados) Inc. Formant based speech reconstruction from noisy signals
US9312829B2 (en) 2012-04-12 2016-04-12 Dts Llc System for adjusting loudness of audio signals in real time
EP3010017A1 (en) * 2014-10-14 2016-04-20 Thomson Licensing Method and apparatus for separating speech data from background data in audio communication
JP6565206B2 (en) * 2015-02-20 2019-08-28 ヤマハ株式会社 Audio processing apparatus and audio processing method
EP3107097B1 (en) 2015-06-17 2017-11-15 Nxp B.V. Improved speech intelligilibility
US9847093B2 (en) 2015-06-19 2017-12-19 Samsung Electronics Co., Ltd. Method and apparatus for processing speech signal
JP6790732B2 (en) * 2016-11-02 2020-11-25 ヤマハ株式会社 Signal processing method and signal processing device
CN108369805B (en) * 2017-12-27 2019-08-13 深圳前海达闼云端智能科技有限公司 Voice interaction method and device and intelligent terminal
CN109346058B (en) * 2018-11-29 2024-06-28 西安交通大学 Voice acoustic feature expansion system
US11817114B2 (en) * 2019-12-09 2023-11-14 Dolby Laboratories Licensing Corporation Content and environmentally aware environmental noise compensation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5870292A (en) * 1981-10-22 1983-04-26 日産自動車株式会社 Voice recognition equipment for vehicle
US4538295A (en) * 1982-08-16 1985-08-27 Nissan Motor Company, Limited Speech recognition system for an automotive vehicle
WO1987000366A1 (en) * 1985-07-01 1987-01-15 Motorola, Inc. Noise supression system
GB8801014D0 (en) * 1988-01-18 1988-02-17 British Telecomm Noise reduction
US5235669A (en) * 1990-06-29 1993-08-10 At&T Laboratories Low-delay code-excited linear-predictive coding of wideband speech at 32 kbits/sec
CA2056110C (en) * 1991-03-27 1997-02-04 Arnold I. Klayman Public address intelligibility system
FI102337B1 (en) * 1995-09-13 1998-11-13 Nokia Mobile Phones Ltd Method and circuit arrangement for processing an audio signal
GB2306086A (en) * 1995-10-06 1997-04-23 Richard Morris Trim Improved adaptive audio systems

Also Published As

Publication number Publication date
GB9714001D0 (en) 1997-09-10
CN1265217A (en) 2000-08-30
ZA985607B (en) 2000-06-01
GB2327835A (en) 1999-02-03
CA2235455A1 (en) 1999-01-02
GB2327835B (en) 2000-04-19
PL337717A1 (en) 2000-08-28
WO1999001863A1 (en) 1999-01-14
GB9814279D0 (en) 1998-09-02
EP0993670A1 (en) 2000-04-19
JP2002507291A (en) 2002-03-05
KR20010014352A (en) 2001-02-26
EP0993670B1 (en) 2002-03-20
ATE214832T1 (en) 2002-04-15
DE69804310D1 (en) 2002-04-25

Similar Documents

Publication Publication Date Title
EP0993670B1 (en) Method and apparatus for speech enhancement in a speech communication system
US10885926B2 (en) Classification between time-domain coding and frequency domain coding for high bit rates
US8265940B2 (en) Method and device for the artificial extension of the bandwidth of speech signals
KR100574031B1 (en) Speech Synthesis Method and Apparatus and Voice Band Expansion Method and Apparatus
US20090192791A1 (en) Systems, methods and apparatus for context descriptor transmission
KR102105044B1 (en) Improving non-speech content for low rate celp decoder
JPH1097296A (en) Method and device for voice coding, and method and device for voice decoding
US5706392A (en) Perceptual speech coder and method
KR100216018B1 (en) Method and apparatus for encoding and decoding of background sounds
JP2010520503A (en) Method and apparatus in a communication network
GB2343822A (en) Using LSP to alter frequency characteristics of speech
Vicente-Peña et al. Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition
Crochiere et al. A Variable‐Band Coding Scheme for Speech Encoding at 4.8 kb/s
Motlicek et al. Wide-band audio coding based on frequency-domain linear prediction
McLoughlin CELP and speech enhancement
Ekeroth Improvements of the voice activity detector in AMR-WB
Hennix Decoder based noise suppression
Kwon Improved Excitation Modeling for Low-Rate CELP Speech Coding
Chen Adaptive variable bit-rate speech coder for wireless applications
Mermelstein et al. INR

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted