US10360917B2 - Speech/audio signal processing method and apparatus - Google Patents

Speech/audio signal processing method and apparatus

Info

Publication number
US10360917B2
Authority
US
United States
Prior art keywords
signal
spectrum tilt
high frequency
value
current frame
Prior art date
Legal status
Active
Application number
US16/021,621
Other versions
US20180374488A1 (en
Inventor
Zexin LIU
Lei Miao
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to US16/021,621
Publication of US20180374488A1
Priority to US16/457,165 (US10559313B2)
Application granted
Publication of US10360917B2

Classifications

    • G10L 19/24: Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • G10L 19/00: Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals using source filter models or psychoacoustic analysis
    • G10L 21/0224: Noise filtering characterised by the method used for estimating noise, processing in the time domain
    • G10L 19/083: Determination or coding of the excitation function, the excitation function being an excitation gain
    • G10L 19/125: Pitch excitation, e.g. pitch synchronous innovation CELP [PSI-CELP]
    • G10L 21/0232: Noise filtering characterised by the method used for estimating noise, processing in the frequency domain
    • G10L 21/038: Speech enhancement, e.g. noise reduction or echo cancellation, using band spreading techniques
    • G10L 19/0204: Coding or decoding of speech or audio signals using spectral analysis, e.g. transform vocoders or subband vocoders, using subband decomposition

Definitions

  • the present invention relates to the field of digital signal processing technologies, and in particular, to a speech/audio signal processing method and apparatus.
  • Audio is digitized, and is transmitted from one terminal to another terminal by using an audio communications network.
  • the terminal herein may be a mobile phone, a digital telephone terminal, or an audio terminal of any other type, where the digital telephone terminal is, for example, a VoIP telephone, an ISDN telephone, a computer, or a cable communications telephone.
  • the speech/audio signal is compressed at a transmit end and then transmitted to a receive end, and at the receive end, the speech/audio signal is restored by means of decompression processing and is played.
  • a network truncates bit streams at different bit rates, where the bit streams are transmitted from an encoder to the network, and at a decoder, the truncated bit streams are decoded into speech/audio signals of different bandwidths.
  • the output speech/audio signals switch between different bandwidths.
  • An objective of embodiments of the present invention is to provide a speech/audio signal processing method and apparatus, so as to improve aural comfort during bandwidth switching of speech/audio signals.
  • a speech/audio signal processing method includes:
  • obtaining a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame comprises:
  • the first type of signal is a fricative signal
  • the second type of signal is a non-fricative signal
  • the narrow frequency signal is classified as a fricative signal, the rest being non-fricative signals
  • the first predetermined value is 8
  • the first preset range is [0.5, 1].
  • With reference to the first possible implementation manner of the first aspect and the second possible implementation manner of the first aspect, in a third possible implementation manner, the correcting the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal comprises:
  • the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal
  • With reference to the first possible implementation manner of the first aspect and the second possible implementation manner of the first aspect, in a fourth possible implementation manner, the method further comprises:
  • the correcting the initial high frequency signal by using the time-domain global gain parameter comprises:
  • a speech/audio signal processing method includes:
  • performing, by the decoder, weighting processing on an energy ratio and the time-domain global gain parameter, and using an obtained weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame and energy of the initial high frequency signal of the current frame;
  • synthesizing, by the decoder, a narrow frequency time-domain signal of the current frame and the corrected high frequency time-domain signal comprises:
  • obtaining the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of a current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame comprises:
  • classifying the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, wherein the first type of signal is a fricative signal and the second type of signal is a non-fricative signal;
  • the step of limiting the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a spectrum tilt parameter limit value comprises:
  • when the spectrum tilt parameter is less than or equal to the first predetermined value, the value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; and
  • when the spectrum tilt parameter is greater than the first predetermined value, the first predetermined value is used as the spectrum tilt parameter limit value.
  • In a fourth possible implementation manner, the step of limiting the spectrum tilt parameter to a value in a first range to obtain a spectrum tilt parameter limit value comprises:
  • when the spectrum tilt parameter belongs to the first range, the value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value;
  • when the spectrum tilt parameter is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value; and
  • when the spectrum tilt parameter is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
  • obtaining an initial high frequency signal corresponding to a current frame of speech/audio signal comprises:
  • a speech/audio signal processing apparatus includes:
  • a predicting unit configured to: when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
  • a parameter obtaining unit configured to obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame;
  • a correcting unit configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal
  • a synthesizing unit configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
  • the parameter obtaining unit comprises:
  • a classifying unit configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
  • a first limiting unit configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal;
  • a second limiting unit configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
  • the first type of signal is a fricative signal
  • the second type of signal is a non-fricative signal
  • the narrow frequency signal is classified as a fricative, the rest being non-fricatives
  • the first predetermined value is 8
  • the first preset range is [0.5, 1].
  • With reference to the first possible implementation manner of the third aspect and the second possible implementation manner of the third aspect, in a third possible implementation manner, the apparatus further comprises:
  • a weighting processing unit configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal, wherein
  • the correcting unit is configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • the parameter obtaining unit is further configured to obtain a time-domain envelope parameter corresponding to the initial high frequency signal
  • the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
  • a speech/audio signal processing apparatus includes:
  • an acquiring unit configured to: when a speech/audio signal switches bandwidth, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
  • a parameter obtaining unit configured to obtain a time-domain global gain parameter corresponding to the initial high frequency signal
  • a weighting processing unit configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal;
  • a correcting unit configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal
  • a synthesizing unit configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
  • the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal
  • the parameter obtaining unit comprises:
  • a global gain parameter obtaining unit configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a current frame of speech/audio signal and a narrow frequency signal of historical frame.
  • the global gain parameter obtaining unit comprises:
  • a classifying unit configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
  • a first limiting unit configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal;
  • a second limiting unit configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
  • the first type of signal is a fricative signal
  • the second type of signal is a non-fricative signal
  • the narrow frequency signal is classified as a fricative, the rest being non-fricatives
  • the first predetermined value is 8
  • the first preset range is [0.5, 1].
  • the apparatus further comprises:
  • a time-domain envelope obtaining unit configured to use a series of preset values as a high frequency time-domain envelope parameter of the current frame of speech/audio signal
  • the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • the acquiring unit comprises:
  • an excitation signal obtaining unit configured to predict an excitation signal of the high frequency signal according to the current frame of speech/audio signal
  • an LPC coefficient obtaining unit configured to predict an LPC coefficient of the high frequency signal
  • a synthesizing unit configured to synthesize the excitation signal of the high frequency signal and the LPC coefficient of the high frequency signal, to obtain the predicted high frequency signal.
  • the apparatus further comprises:
  • a weighting factor setting unit configured to: when narrowband signals of the current frame of speech/audio signal and a previous frame of speech/audio signal have a predetermined correlation, use a value obtained by attenuating, according to a step size, a weighting factor alfa of an energy ratio corresponding to the previous frame of speech/audio signal as a weighting factor of an energy ratio corresponding to the current audio frame, wherein the attenuation is performed frame by frame until alfa is 0.
  • a high frequency signal is corrected, so as to implement a smooth transition of the high frequency signal between the wide frequency band and the narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band; in addition, because a bandwidth switching algorithm and a coding/decoding algorithm of the high frequency signal before switching are in a same signal domain, this not only ensures that no extra delay is added and that the algorithm is kept simple, but also ensures the performance of the output signal.
  • FIG. 1 is a schematic flowchart of an embodiment of a speech/audio signal processing method according to the present invention
  • FIG. 2 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention.
  • FIG. 3 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention.
  • FIG. 4 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention.
  • FIG. 5 is a schematic structural diagram of an embodiment of a speech/audio signal processing apparatus according to the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of a speech/audio signal processing apparatus according to the present invention.
  • FIG. 7 is a schematic structural diagram of an embodiment of a parameter obtaining unit according to the present invention.
  • FIG. 8 is a schematic structural diagram of an embodiment of a global gain parameter obtaining unit according to the present invention.
  • FIG. 9 is a schematic structural diagram of an embodiment of an acquiring unit according to the present invention.
  • FIG. 10 is a schematic structural diagram of another embodiment of a speech/audio signal processing apparatus according to the present invention.
  • audio codecs and video codecs are widely applied in various electronic devices, for example, a mobile phone, a wireless apparatus, a personal data assistant (PDA), a handheld or portable computer, a GPS receiver/navigator, a camera, an audio/video player, a video camera, a video recorder, and a monitoring device.
  • this type of electronic device includes an audio coder or an audio decoder, where the audio coder or decoder may be directly implemented by a digital circuit or a chip, for example, a DSP (digital signal processor), or be implemented by a software code driving a processor to execute a process in the software code.
  • bandwidth switching includes switching from a narrow frequency signal to a wide frequency signal and switching from a wide frequency signal to a narrow frequency signal.
  • the narrow frequency signal mentioned in the present invention is a speech signal that has only a low frequency component, whose high frequency component is empty after up-sampling and low-pass filtering, while the wide frequency speech/audio signal has both a low frequency signal component and a high frequency signal component.
  • the narrow frequency signal and the wide frequency signal are relative terms. For example, relative to a narrowband signal, a wideband signal is a wide frequency signal; and relative to a wideband signal, a super-wideband signal is a wide frequency signal.
  • a narrowband signal is a speech/audio signal of which a sampling rate is 8 kHz;
  • a wideband signal is a speech/audio signal of which a sampling rate is 16 kHz;
  • a super-wideband signal is a speech/audio signal of which a sampling rate is 32 kHz.
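  • As a small illustration of the bandwidth relationships above, the following sketch lists the quoted sampling rates and a helper for the relative wide/narrow notion; the names and the helper are assumptions for illustration only, not part of the described method.

```python
# Sampling rates quoted above; the dictionary and helper are illustrative assumptions.
SAMPLE_RATE_HZ = {
    "narrowband": 8000,        # narrowband speech/audio signal
    "wideband": 16000,         # wideband speech/audio signal
    "super_wideband": 32000,   # super-wideband speech/audio signal
}

def is_wide_frequency_relative_to(a: str, b: str) -> bool:
    """True when bandwidth class a plays the role of the wide frequency signal relative to b."""
    return SAMPLE_RATE_HZ[a] > SAMPLE_RATE_HZ[b]
```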
  • a switching algorithm is kept in a signal domain for processing, where the signal domain is the same as that of the high frequency coding/decoding algorithm before the switching.
  • when a time-domain coding/decoding algorithm is used for the high frequency signal before the switching, a time-domain switching algorithm is used as the switching algorithm; when a frequency-domain coding/decoding algorithm is used for the high frequency signal before the switching, a frequency-domain switching algorithm is used as the switching algorithm.
  • a time-domain frequency band extension algorithm is used before switching, a similar time-domain switching technology is not used after the switching.
  • a current input audio frame that needs to be processed is a current frame of speech/audio signal.
  • the current frame of speech/audio signal includes a narrow frequency signal and a high frequency signal, that is, a narrow frequency signal of current frame and a high frequency signal of current frame.
  • Any frame of speech/audio signal before the high frequency signal of current frame is a historical frame of speech/audio signal, which also includes a narrow frequency signal of historical frame and a high frequency signal of historical frame.
  • a frame of speech/audio signal previous to the current frame of speech/audio signal is a previous frame of speech/audio signal.
  • an embodiment of a speech/audio signal processing method of the present invention includes:
  • the current frame of speech/audio signal includes a narrow frequency signal of current frame and a high frequency time-domain signal of current frame.
  • Bandwidth switching includes switching from a narrow frequency signal to a wide frequency signal and switching from a wide frequency signal to a narrow frequency signal.
  • the current frame of speech/audio signal is the current frame of wide frequency signal, including a narrow frequency signal and a high frequency signal
  • the initial high frequency signal of the current frame of speech/audio signal is a real signal and may be directly obtained from the current frame of speech/audio signal.
  • the current frame of speech/audio signal is the narrow frequency signal of current frame of which a high frequency time-domain signal of current frame is empty, the initial high frequency signal of the current frame of speech/audio signal is a predicted signal, and a high frequency signal corresponding to the narrow frequency signal of current frame needs to be predicted and used as the initial high frequency signal.
  • the time-domain global gain parameter of the high frequency signal may be obtained by decoding.
  • the time-domain global gain parameter of the high frequency signal may be obtained according to the current frame of signal: the time-domain global gain parameter of the high frequency signal is obtained according to a spectrum tilt parameter of the narrow frequency signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame.
  • S 103 Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of the initial high frequency signal of the current frame of speech/audio signal.
  • a historical frame of final output speech/audio signal is used as the historical frame of speech/audio signal, and the initial high frequency signal is used as the current frame of speech/audio signal.
  • the energy ratio Ratio = Esyn(-1)/Esyn_tmp, where Esyn(-1) represents the energy of the output high frequency time-domain signal syn of the historical frame, and Esyn_tmp represents the energy of the initial high frequency time-domain signal syn_tmp corresponding to the current frame.
  • the correction refers to that the signal is multiplied, that is, the initial high frequency signal is multiplied by the predicted global gain parameter.
  • step S 102 a time-domain envelope parameter and the time-domain global gain parameter that are corresponding to the initial high frequency signal are obtained; therefore, in step S 104 , the initial high frequency signal is corrected by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal; that is, the predicted high frequency signal is multiplied by the time-domain envelope parameter and the predicted time-domain global gain parameter, to obtain the corrected high frequency time-domain signal.
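  • As an illustration of steps S 103 and S 104 above, the following Python sketch computes the energy ratio, weights it with the time-domain global gain to obtain the predicted global gain, and applies the optional envelope. The function name, the linear weighting form alfa*Ratio + (1 - alfa)*gain, and the default alfa are assumptions, not the patent's literal implementation.

```python
import numpy as np

def correct_high_band(syn_tmp, gain_time, env, e_syn_prev, alfa=0.5):
    """Weight the energy ratio with the time-domain global gain and correct the
    initial high frequency signal (illustrative sketch, not the normative method)."""
    # Energy of the current frame of initial high frequency signal (Esyn_tmp).
    e_syn_tmp = float(np.sum(np.asarray(syn_tmp, dtype=float) ** 2)) + 1e-12
    # Energy ratio: Ratio = Esyn(-1) / Esyn_tmp.
    ratio = e_syn_prev / e_syn_tmp
    # Weighting processing (assumed linear form): predicted global gain parameter.
    gain_pred = alfa * ratio + (1.0 - alfa) * gain_time

    # Correction: multiply the initial high band by the predicted global gain.
    syn = np.asarray(syn_tmp, dtype=float) * gain_pred
    if env is not None:
        # Optional time-domain envelope shaping; frame length assumed divisible by len(env).
        syn = syn * np.repeat(np.asarray(env, dtype=float), len(syn) // len(env))

    # Energy Esyn of the corrected high band, kept as Esyn(-1) for the next frame.
    e_syn = float(np.sum(syn ** 2))
    return syn, e_syn
```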
  • the time-domain envelope parameter of the high frequency signal may be obtained by decoding.
  • the time-domain envelope parameter of the high frequency signal may be obtained according to the current frame of signal: a series of predetermined values or a high frequency time-domain envelope parameter of the historical frame may be used as the high frequency time-domain envelope parameter of the current frame of speech/audio signal.
  • a high frequency signal is corrected, so as to implement a smooth transition of the high frequency signal between the wide frequency band and the narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band; in addition, because a bandwidth switching algorithm and a coding/decoding algorithm of the high frequency signal before switching are in a same signal domain, this not only ensures that no extra delay is added and that the algorithm is kept simple, but also ensures the performance of the output signal.
  • Referring to FIG. 2, another embodiment of a speech/audio signal processing method of the present invention includes:
  • the step of predicting the initial high frequency signal corresponding to a narrow frequency signal of current frame includes: predicting an excitation signal of the high frequency signal of the current frame of speech/audio signal according to the narrow frequency signal of current frame; predicting an LPC (linear predictive coding) coefficient of the high frequency signal of the current frame of speech/audio signal; and synthesizing the predicted high frequency excitation signal and the LPC coefficient, to obtain the predicted high frequency signal syn_tmp.
  • parameters such as a pitch period, an algebraic codebook, and a gain may be extracted from the narrow frequency signal, and the high frequency excitation signal is predicted by resampling and filtering.
  • operations such as up-sampling, low-pass, and obtaining of an absolute value or a square may be performed on the narrow frequency time-domain signal or a narrow frequency time-domain excitation signal, so as to predict the high frequency excitation signal.
  • a high frequency LPC coefficient of a historical frame or a series of preset values may be used as the LPC coefficient of the current frame; or different prediction manners may be used for different signal types.
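  • A minimal sketch of one way to realize the prediction described above (S 201): derive a high frequency excitation from the narrow-band excitation by up-sampling and taking an absolute value, reuse a historical frame's high-band LPC coefficients, and run the LPC synthesis filter. The helper name and the particular choice among the listed options are assumptions.

```python
import numpy as np
from scipy.signal import lfilter, resample_poly

def predict_initial_high_band(narrow_excitation, prev_hb_lpc):
    """Sketch of predicting the initial high frequency signal syn_tmp from the
    decoded narrow band (one possible combination of the options listed above)."""
    # Up-sample the narrow frequency time-domain excitation signal.
    exc_up = resample_poly(np.asarray(narrow_excitation, dtype=float), up=2, down=1)
    # Obtain an absolute value to spread energy into the high band.
    hb_excitation = np.abs(exc_up)
    # Reuse the high frequency LPC coefficients of a historical frame
    # (or a series of preset values); prev_hb_lpc[0] is assumed to be 1.
    hb_lpc = prev_hb_lpc
    # Pass the predicted excitation through the LPC synthesis filter 1/A(z).
    syn_tmp = lfilter([1.0], hb_lpc, hb_excitation)
    return syn_tmp
```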
  • S 202 Obtain a time-domain envelope parameter and a time-domain global gain parameter that are corresponding to the predicted high frequency signal.
  • a series of predetermined values may be used as the high frequency time-domain envelope parameter of the current frame.
  • Narrowband signals may be generally classified into several types, a series of values may be preset for each type, and a group of preset time-domain envelope parameters may be selected according to the type of the current frame of narrowband signal; or a group of time-domain envelope values may be set, for example, when the number of time-domain envelopes is M, the preset values may be M values each equal to 0.3536.
  • the obtaining of a time-domain envelope parameter is an optional but not a necessary step.
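  • For the preset-envelope option just described, a minimal sketch (the number of envelopes M and the frame layout are assumptions; 0.3536 is the value quoted above):

```python
import numpy as np

M = 4                                  # assumed number of time-domain envelopes per frame
preset_envelope = np.full(M, 0.3536)   # M preset values used as the high-band time-domain envelope
```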
  • the time-domain global gain parameter of the high frequency signal is obtained according to a spectrum tilt parameter of the narrow frequency signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame, which includes the following steps in an embodiment:
  • S 2021 Classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, where in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; and when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, classify the narrow frequency signal as a fricative, and the rest as non-fricatives.
  • the parameter cor showing the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame may be determined according to an energy magnitude relationship between signals of a same frequency band, or may be determined according to an energy relationship between several same frequency bands, or may be calculated according to a formula showing a self-correlation or a cross-correlation between time-domain signals or showing a self-correlation or a cross-correlation between time-domain excitation signals.
  • the time-domain global gain parameter gain′ is obtained according to the following formula:
  • gain′ = tilt, when tilt ≤ β1; gain′ = β1, when tilt > β1, where tilt is the spectrum tilt parameter, and β1 is the first predetermined value.
  • When the spectrum tilt parameter of the current frame of speech/audio signal belongs to the first range, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
  • the time-domain global gain parameter gain′ is obtained according to the following formula: gain′ = tilt, when tilt belongs to the first range; gain′ = the upper limit of the first range, when tilt is greater than the upper limit; and gain′ = the lower limit of the first range, when tilt is less than the lower limit, where tilt is the spectrum tilt parameter.
  • For a fricative, a spectrum tilt parameter may be any value greater than 5, and for a non-fricative, a spectrum tilt parameter may be any value less than or equal to 5, or may be greater than 5.
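  • The classification and limiting steps above can be sketched as follows; treating the limited tilt directly as gain′ follows the text, the default numeric values are the examples quoted above (5, 8, [0.5, 1]), and the correlation threshold (the "given value") is an assumption.

```python
def time_domain_global_gain(tilt, cor, cor_threshold=0.5,
                            first_predetermined_value=8.0, first_range=(0.5, 1.0)):
    """Sketch: obtain gain' from the spectrum tilt parameter and the inter-frame correlation."""
    # Classification: tilt > 5 together with a small correlation => fricative (first type).
    is_fricative = tilt > 5.0 and cor < cor_threshold

    if is_fricative:
        # First type: limit the spectrum tilt to at most the first predetermined value.
        gain = min(tilt, first_predetermined_value)
    else:
        # Second type: limit the spectrum tilt to a value in the first range [0.5, 1].
        low, high = first_range
        gain = min(max(tilt, low), high)
    return gain
```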
  • S 203 Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of the initial high frequency signal of the current frame of speech/audio signal.
  • the predicted high frequency signal is multiplied by the time-domain envelope parameter and the predicted time-domain global gain parameter, to obtain the high frequency time-domain signal.
  • the time-domain envelope parameter is optional.
  • the predicted high frequency signal may be corrected by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal. That is, the predicted high frequency signal is multiplied by the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • the energy Esyn of the high frequency time-domain signal syn is used to predict a time-domain global gain parameter of a next frame. That is, a value of Esyn is assigned to Esyn(-1).
  • a high frequency band of a narrow frequency signal following a wide frequency signal is corrected, so as to implement a smooth transition of the high frequency part between a wide frequency band and a narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band; in addition, because corresponding processing is performed on the frame during the switching, a problem that occurs during parameter and status updating is indirectly eliminated.
  • because a bandwidth switching algorithm and a coding/decoding algorithm of the high frequency signal before the switching are in a same signal domain, this not only ensures that no extra delay is added and that the algorithm is kept simple, but also ensures the performance of the output signal.
  • Referring to FIG. 3, another embodiment of a speech/audio signal processing method of the present invention includes:
  • a narrow frequency signal switches to a wide frequency signal
  • a previous frame is a narrow frequency signal
  • a current frame is a wide frequency signal
  • S 302 Obtain a time-domain envelope parameter and a time-domain global gain parameter that are corresponding to the high frequency signal.
  • the time-domain envelope parameter and the time-domain global gain parameter may be directly obtained from the high frequency signal of current frame.
  • the obtaining of a time-domain envelope parameter is an optional step.
  • S 303 Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of an initial high frequency signal of a current frame of speech/audio signal.
  • the time-domain global gain parameter is smoothed in the following manner:
  • a value obtained by attenuating, according to a certain step size, a weighting factor alfa of the energy ratio corresponding to the previous frame of speech/audio signal is used as a weighting factor of the energy ratio corresponding to the current audio frame, where the attenuation is performed frame by frame until alfa is 0.
  • When the narrow frequency signals of consecutive frames are correlated, alfa is attenuated frame by frame according to a certain step size until alfa is attenuated to 0; when the narrow frequency signals of the consecutive frames have no correlation, alfa is directly attenuated to 0, that is, a current decoding result is maintained without performing weighting or correcting.
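  • A sketch of the frame-by-frame smoothing of the weighting factor alfa described above; the step size is an assumption, and the returned alfa is the weight later applied to the energy ratio.

```python
def update_alfa(prev_alfa, frames_correlated, step=0.1):
    """Attenuate the weighting factor alfa frame by frame while consecutive narrow-band
    frames remain correlated; drop it to 0 immediately when the correlation is lost,
    so the current decoding result is kept without weighting or correcting."""
    if not frames_correlated:
        return 0.0
    return max(prev_alfa - step, 0.0)
```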
  • the correction refers to that the high frequency signal is multiplied by the time-domain envelope parameter and the predicted time-domain global gain parameter, to obtain the corrected high frequency time-domain signal.
  • the time-domain envelope parameter is optional.
  • the high frequency signal may be corrected by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal. That is, the high frequency signal is multiplied by the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • a high frequency band of a wide frequency signal following a narrow frequency signal is corrected, so as to implement a smooth transition of the high frequency part between a wide frequency band and a narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band; in addition, because corresponding processing is performed on the frame during the switching, a problem that occurs during parameter and status updating is indirectly eliminated.
  • because a bandwidth switching algorithm and a coding/decoding algorithm of the high frequency signal before the switching are in a same signal domain, this not only ensures that no extra delay is added and that the algorithm is kept simple, but also ensures the performance of the output signal.
  • Referring to FIG. 4, another embodiment of a speech/audio signal processing method of the present invention includes:
  • the step of predicting an initial high frequency signal corresponding to a narrow frequency signal of current frame includes: predicting an excitation signal of the high frequency signal of the current frame of speech/audio signal according to the narrow frequency signal of current frame; predicting an LPC coefficient of the high frequency signal of the current frame of speech/audio signal; and synthesizing the predicted high frequency excitation signal and the LPC coefficient, to obtain the predicted high frequency signal syn_tmp.
  • parameters such as a pitch period, an algebraic codebook, and a gain may be extracted from the narrow frequency signal, and the high frequency excitation signal is predicted by resampling and filtering.
  • operations such as up-sampling, low-pass, and obtaining of an absolute value or a square may be performed on the narrow frequency time-domain signal or a narrow frequency time-domain excitation signal, so as to predict the high frequency excitation signal.
  • a high frequency LPC coefficient of a historical frame or a series of preset values may be used as the LPC coefficient of the current frame; or different prediction manners may be used for different signal types.
  • S 402 Obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame.
  • S 2021 Classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, where in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal.
  • when the spectrum tilt parameter tilt>5, and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives.
  • the parameter cor showing the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame may be determined according to an energy magnitude relationship between signals of a same frequency band, or may be determined according to an energy relationship between several same frequency bands, or may be calculated according to a formula showing a self-correlation or a cross-correlation between time-domain signals or showing a self-correlation or a cross-correlation between time-domain excitation signals.
  • the time-domain global gain parameter gain′ is obtained according to the following formula:
  • gain′ = tilt, when tilt ≤ β1; gain′ = β1, when tilt > β1, where tilt is the spectrum tilt parameter, and β1 is the first predetermined value.
  • When the spectrum tilt parameter of the current frame of speech/audio signal belongs to the first range, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
  • the time-domain global gain parameter gain′ is obtained according to the following formula: gain′ = tilt, when tilt belongs to the first range; gain′ = the upper limit of the first range, when tilt is greater than the upper limit; and gain′ = the lower limit of the first range, when tilt is less than the lower limit, where tilt is the spectrum tilt parameter.
  • For a fricative, a spectrum tilt parameter may be any value greater than 5, and for a non-fricative, a spectrum tilt parameter may be any value less than or equal to 5, or may be greater than 5.
  • the initial high frequency signal is multiplied by the time-domain global gain parameter, to obtain the corrected high frequency time-domain signal.
  • step S 403 may include:
  • the energy ratio is a ratio between energy of a high frequency time-domain signal of historical frame and energy of an initial high frequency signal of current frame
  • the method may further include:
  • the correcting the initial high frequency signal by using the predicted global gain parameter includes:
  • a time-domain global gain parameter of a high frequency signal is obtained according to a spectrum tilt parameter and an interframe correlation.
  • By using the narrow frequency spectrum tilt parameter, an energy relationship between a narrow frequency signal and a high frequency signal can be correctly estimated, so as to better estimate the energy of the high frequency signal.
  • By using the interframe correlation, an interframe correlation between high frequency signals can be estimated by making good use of the correlation between narrow frequency frames. In this way, when weighting is performed to obtain a high frequency global gain, the foregoing real information can be used well, and undesirable noise is not introduced.
  • the high frequency signal is corrected by using the time-domain global gain parameter, so as to implement a smooth transition of the high frequency part between the wide frequency band and the narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band.
  • the present invention further provides a speech/audio signal processing apparatus.
  • the apparatus may be located in a terminal device, a network device, or a test device.
  • the speech/audio signal processing apparatus may be implemented by a hardware circuit, or may be implemented by software in combination with hardware.
  • a processor invokes the speech/audio signal processing apparatus, to implement speech/audio signal processing.
  • the speech/audio signal processing apparatus may execute the methods and processes in the foregoing method embodiments.
  • an embodiment of a speech/audio signal processing apparatus includes:
  • an acquiring unit 601 configured to: when a speech/audio signal switches bandwidth, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
  • a parameter obtaining unit 602 configured to obtain a time-domain global gain parameter corresponding to the initial high frequency signal
  • a weighting processing unit 603 configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of historical frame and energy of the initial high frequency signal of current frame;
  • a correcting unit 604 configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal
  • a synthesizing unit 605 configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
  • the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal
  • the parameter obtaining unit 602 includes:
  • a global gain parameter obtaining unit configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a current frame of speech/audio signal and a narrow frequency signal of historical frame.
  • the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal
  • the parameter obtaining unit 602 includes:
  • a time-domain envelope obtaining unit 701 configured to use a series of preset values as a high frequency time-domain envelope parameter of the current frame of speech/audio signal;
  • a global gain parameter obtaining unit 702 configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a current frame of speech/audio signal and a narrow frequency signal of historical frame.
  • the correcting unit 604 is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • an embodiment of the global gain parameter obtaining unit 702 includes:
  • a classifying unit 801 configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
  • a first limiting unit 802 configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal;
  • a second limiting unit 803 configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
  • the first type of signal is a fricative signal
  • the second type of signal is a non-fricative signal
  • the narrow frequency signal is classified as a fricative, the rest being non-fricatives
  • the first predetermined value is 8
  • the first preset range is [0.5, 1].
  • the acquiring unit 601 includes:
  • an excitation signal obtaining unit 901 configured to predict an excitation signal of the high frequency signal according to the current frame of speech/audio signal;
  • an LPC coefficient obtaining unit 902 configured to predict an LPC coefficient of the high frequency signal
  • a generating unit 903 configured to synthesize the excitation signal of the high frequency signal and the LPC coefficient of the high frequency signal, to obtain the predicted high frequency signal.
  • the bandwidth switching is switching from a narrow frequency signal to a wide frequency signal
  • the speech/audio signal processing apparatus further includes:
  • a weighting factor setting unit configured to: when narrowband signals of the current frame of speech/audio signal and a previous frame of speech/audio signal have a predetermined correlation, use a value obtained by attenuating, according to a certain step size, a weighting factor alfa of the energy ratio corresponding to the previous frame of speech/audio signal as a weighting factor of the energy ratio corresponding to the current frame, where the attenuation is performed frame by frame until alfa is 0.
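  • To show how the units described above might fit together, here is a hypothetical wiring of the apparatus; the interfaces, method names, and the frame/state objects are assumptions for illustration, not the patent's implementation.

```python
class SpeechAudioProcessingApparatus:
    """Hypothetical composition of the units of FIG. 6 to FIG. 9; the unit interfaces,
    method names, and frame/state objects are assumptions for illustration."""

    def __init__(self, acquiring_unit, parameter_obtaining_unit,
                 weighting_processing_unit, correcting_unit, synthesizing_unit):
        self.acquire = acquiring_unit            # obtains the initial high frequency signal
        self.get_parameters = parameter_obtaining_unit
        self.weight = weighting_processing_unit  # energy ratio + global gain -> predicted gain
        self.correct = correcting_unit
        self.synthesize = synthesizing_unit

    def process_frame(self, frame, state):
        syn_tmp = self.acquire(frame)                           # acquiring unit 601
        gain, envelope = self.get_parameters(frame, syn_tmp)    # parameter obtaining unit 602
        gain_pred = self.weight(syn_tmp, gain, state)           # weighting processing unit 603
        high_band = self.correct(syn_tmp, gain_pred, envelope)  # correcting unit 604
        return self.synthesize(frame, high_band)                # synthesizing unit 605
```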
  • Referring to FIG. 10, another embodiment of a speech/audio signal processing apparatus includes:
  • a predicting unit 1001 configured to: when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
  • a parameter obtaining unit 1002 configured to obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame;
  • a correcting unit 1003 configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal
  • a synthesizing unit 1004 configured to synthesize the narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
  • the parameter obtaining unit 1002 includes:
  • a classifying unit 801 configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
  • a first limiting unit 802 configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal;
  • a second limiting unit 803 configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
  • the first type of signal is a fricative signal
  • the second type of signal is a non-fricative signal
  • the narrow frequency signal is classified as a fricative, the rest being non-fricatives
  • the first predetermined value is 8
  • the first preset range is [0.5, 1].
  • the speech/audio signal processing apparatus further includes:
  • a weighting processing unit configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of historical frame and energy of the initial high frequency signal of current frame;
  • the correcting unit is configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
  • the parameter obtaining unit is further configured to obtain a time-domain envelope parameter corresponding to the initial high frequency signal; and the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
  • the program may be stored in a computer readable storage medium. When the program runs, the processes of the methods in the embodiments are performed.
  • the storage medium may include: a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Telephone Function (AREA)
  • Transmitters (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present invention discloses a speech/audio signal processing method and apparatus. In an embodiment, the speech/audio signal processing method includes: when a speech/audio signal switches bandwidth, obtaining an initial high frequency signal corresponding to a current frame of speech/audio signal; obtaining a time-domain global gain parameter of the initial high frequency signal; performing weighting processing on an energy ratio and the time-domain global gain parameter, and using an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal; correcting the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal; and synthesizing a current frame of narrow frequency time-domain signal and the corrected high frequency time-domain signal and outputting the synthesized signal.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of U.S. patent application Ser. No. 15/616,188, filed on Jun. 7, 2017, which is a continuation of U.S. patent application Ser. No. 14/470,559, filed on Aug. 27, 2014, now U.S. Pat. No. 9,691,396, which is a continuation of International Application No. PCT/CN2013/072075, filed on Mar. 1, 2013. The International Application No. PCT/CN2013/072075 claims priority to Chinese Patent Application No. 201210051672.6, filed on Mar. 1, 2012. All of the afore-mentioned patent applications are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
The present invention relates to the field of digital signal processing technologies, and in particular, to a speech/audio signal processing method and apparatus.
BACKGROUND
In the field of digital communications, transmission of voice, images, audio, and videos is needed in a wide range of applications such as a mobile phone call, an audio/video conference, broadcast television, and multimedia entertainment. Audio is digitized, and is transmitted from one terminal to another terminal by using an audio communications network. The terminal herein may be a mobile phone, a digital telephone terminal, or an audio terminal of any other type, where the digital telephone terminal is, for example, a VoIP telephone, an ISDN telephone, a computer, or a cable communications telephone. To reduce resources occupied by a speech/audio signal during storage or transmission, the speech/audio signal is compressed at a transmit end and then transmitted to a receive end, and at the receive end, the speech/audio signal is restored by means of decompression processing and is played.
In current multirate speech/audio coding, because network statuses differ, the network truncates, at different bit rates, the bit streams transmitted from an encoder, and a decoder decodes the truncated bit streams into speech/audio signals of different bandwidths. As a result, the output speech/audio signals switch between different bandwidths.
Sudden switching between signals of different bandwidths causes obvious aural discomfort. In addition, because updating the states of filters used for time-frequency or frequency-time transforms generally requires parameters from consecutive frames, errors may occur during the updating of these states if no proper processing is performed during bandwidth switching, which causes abrupt energy changes and deterioration of aural quality.
SUMMARY
An objective of embodiments of the present invention is to provide a speech/audio signal processing method and apparatus, so as to improve aural comfort during bandwidth switching of speech/audio signals.
According to a first aspect of the present invention, a speech/audio signal processing method includes:
when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtaining an initial high frequency signal corresponding to a current frame of speech/audio signal;
obtaining a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame;
correcting the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal; and
synthesizing a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and outputting the synthesized signal.
In a first possible implementation manner of the first aspect, wherein the obtaining a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame comprises:
classifying the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame;
when the current frame of speech/audio signal is a first type of signal, limiting the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value;
when the current frame of speech/audio signal is a second type of signal, limiting the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value; and
using the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, wherein the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative signal, the rest being non-fricative signals; the first predetermined value is 8; and the first preset range is [0.5, 1].
With reference to any one of the first aspect, the first possible implementation manner of the first aspect and the second possible implementation manner of the first aspect, in a third possible implementation manner, wherein the correcting the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal comprises:
performing weighting processing on an energy ratio and the time-domain global gain parameter, and using an obtained weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal; and
correcting the initial high frequency signal by using the predicted global gain parameter.
With reference to any one of the first aspect, the first possible implementation manner of the first aspect and the second possible implementation manner of the first aspect, in a fourth possible implementation manner, further comprising:
obtaining a time-domain envelope parameter corresponding to the initial high frequency signal, wherein
the correcting the initial high frequency signal by using the time-domain global gain parameter comprises:
correcting the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
According to a second aspect of the present invention, a speech/audio signal processing method includes:
when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtaining, by a decoder, an initial high frequency signal corresponding to a current frame of speech/audio signal;
obtaining, by the decoder, a time-domain global gain parameter of the initial high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of the current frame and a narrow frequency signal of a historical frame;
performing, by the decoder, weighting processing on an energy ratio and the time-domain global gain parameter, and using an obtained weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame and energy of the initial high frequency signal of the current frame;
correcting, by the decoder, the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal; and
synthesizing, by the decoder, a narrow frequency time-domain signal of the current frame and the corrected high frequency time-domain signal.
In a first possible implementation manner of the second aspect, wherein the obtaining a time-domain global gain parameter of the initial high frequency signal comprises:
obtaining the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, wherein the obtaining the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of a current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame comprises:
classifying the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, wherein the first type of signal is a fricative signal and the second type of signal is a non-fricative signal;
when the current frame of speech/audio signal is a first type of signal, limiting the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a spectrum tilt parameter limit value;
when the current frame of speech/audio signal is a second type of signal, limiting the spectrum tilt parameter to a value in a first range to obtain a spectrum tilt parameter limit value; and
using the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, wherein the step of limiting the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a spectrum tilt parameter limit value comprises:
when a value of the spectrum tilt parameter is less than or equal to the first predetermined value, the value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value;
when a value of the spectrum tilt parameter is greater than the first predetermined value, the first predetermined value is used as the spectrum tilt parameter limit value.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner, wherein the step of limiting the spectrum tilt parameter to a value in a first range to obtain a spectrum tilt parameter limit value comprises:
when a value of the spectrum tilt parameter belongs to the first range, the value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value;
when a value of the spectrum tilt parameter is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value;
when a value of the spectrum tilt parameter is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
With reference to the second possible implementation manner of the second aspect, in a fifth possible implementation manner, wherein the first predetermined value is 8 and the first range is [0.5, 1].
In a sixth possible implementation manner of the second aspect, wherein the obtaining an initial high frequency signal corresponding to a current frame of speech/audio signal comprises:
predicting a high frequency excitation signal according to the current frame of speech/audio signal;
predicting an LPC coefficient of the high frequency signal; and
synthesizing the high frequency excitation signal and the LPC coefficient of the high frequency signal, to obtain the predicted high frequency signal.
According to a third aspect of the present invention, a speech/audio signal processing apparatus includes:
a predicting unit, configured to: when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
a parameter obtaining unit, configured to obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame;
a correcting unit, configured to correct the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal; and
a synthesizing unit, configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In a first possible implementation manner of the third aspect, wherein the parameter obtaining unit comprises:
a classifying unit, configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
a first limiting unit, configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal; and
a second limiting unit, configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner, wherein the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives; the first predetermined value is 8; and the first preset range is [0.5, 1].
With reference to any one of the third aspect, the first possible implementation manner of the third aspect and the second possible implementation manner of the third aspect, in a third possible implementation manner, further comprising:
a weighting processing unit, configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal, wherein
the correcting unit is configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
With reference to any one of the third aspect, the first possible implementation manner of the third aspect and the second possible implementation manner of the third aspect, in a fourth possible implementation manner, wherein
the parameter obtaining unit is further configured to obtain a time-domain envelope parameter corresponding to the initial high frequency signal; and
the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
According to a fourth aspect of the present invention, a speech/audio signal processing apparatus includes:
an acquiring unit, configured to: when a speech/audio signal switches bandwidth, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
a parameter obtaining unit, configured to obtain a time-domain global gain parameter corresponding to the initial high frequency signal;
a weighting processing unit, configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a historical frame of high frequency time-domain signal and energy of a current frame of initial high frequency signal;
a correcting unit, configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal; and
a synthesizing unit, configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In a first possible implementation manner of the fourth aspect, wherein the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal, and the parameter obtaining unit comprises:
a global gain parameter obtaining unit, configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a current frame of speech/audio signal and a narrow frequency signal of historical frame.
With reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner, wherein the global gain parameter obtaining unit comprises:
a classifying unit, configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
a first limiting unit, configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal; and
a second limiting unit, configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
With reference to the second possible implementation manner of the fourth aspect, in a third possible implementation manner, wherein the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives; the first predetermined value is 8; and the first preset range is [0.5, 1].
With reference to any one of the fourth aspect, the first possible implementation manner of the fourth aspect and the second possible implementation manner of the fourth aspect, in a fourth possible implementation manner, wherein the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal, and the apparatus further comprises:
a time-domain envelope obtaining unit, configured to use a series of preset values as a high frequency time-domain envelope parameter of the current frame of speech/audio signal; and
the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
With reference to any one of the fourth aspect, the first possible implementation manner of the fourth aspect and the second possible implementation manner of the fourth aspect, in a fifth possible implementation manner, wherein the acquiring unit comprises:
an excitation signal obtaining unit, configured to predict an excitation signal of the high frequency signal according to the current frame of speech/audio signal;
an LPC coefficient obtaining unit, configured to predict an LPC coefficient of the high frequency signal; and
a synthesizing unit, configured to synthesize the excitation signal of the high frequency signal and the LPC coefficient of the high frequency signal, to obtain the predicted high frequency signal.
With reference to any one of the fourth aspect, the first possible implementation manner of the fourth aspect and the second possible implementation manner of the fourth aspect, in a sixth possible implementation manner, wherein the bandwidth switching is switching from a narrow frequency signal to a wide frequency signal, and the apparatus further comprises:
a weighting factor setting unit, configured to: when narrowband signals of the current frame of speech/audio signal and a previous frame of speech/audio signal have a predetermined correlation, use a value obtained by attenuating, according to a step size, a weighting factor alfa of an energy ratio corresponding to the previous frame of speech/audio signal as a weighting factor of an energy ratio corresponding to the current audio frame, wherein the attenuation is performed frame by frame until alfa is 0.
In the embodiments of the present invention, during switching between a wide frequency band and a narrow frequency band, a high frequency signal is corrected, so as to implement a smooth transition of the high frequency signal between the wide frequency band and the narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band. In addition, because the bandwidth switching algorithm and the coding/decoding algorithm of the high frequency signal before switching are in a same signal domain, not only is no extra delay added and the algorithm kept simple, but the performance of the output signal is also ensured.
BRIEF DESCRIPTION OF DRAWINGS
To describe the technical solutions in the embodiments of the present invention, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic flowchart of an embodiment of a speech/audio signal processing method according to the present invention;
FIG. 2 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention;
FIG. 3 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention;
FIG. 4 is a schematic flowchart of another embodiment of a speech/audio signal processing method according to the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of a speech/audio signal processing apparatus according to the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a speech/audio signal processing apparatus according to the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of a parameter obtaining unit according to the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a global gain parameter obtaining unit according to the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of an acquiring unit according to the present invention; and
FIG. 10 is a schematic structural diagram of another embodiment of a speech/audio signal processing apparatus according to the present invention.
DESCRIPTION OF EMBODIMENTS
The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
In the field of digital signal processing, audio codecs and video codecs are widely applied in various electronic devices, for example, a mobile phone, a wireless apparatus, a personal digital assistant (PDA), a handheld or portable computer, a GPS receiver/navigator, a camera, an audio/video player, a video camera, a video recorder, and a monitoring device. Usually, this type of electronic device includes an audio coder or an audio decoder, where the audio coder or decoder may be implemented directly by a digital circuit or a chip, for example, a DSP (digital signal processor), or by software code that drives a processor to execute the corresponding processing.
In the prior art, because bandwidths of speech/audio signals transmitted in a network are different, in a process of transmitting speech/audio signals, bandwidths of the speech/audio signals frequently change, and phenomena of switching from a narrow frequency speech/audio signal to a wide frequency speech/audio signal and switching from a wide frequency speech/audio signal to a narrow frequency speech/audio signal exist. Such a process of switching a speech/audio signal between high and low frequency bands is referred to as bandwidth switching. The bandwidth switching includes switching from a narrow frequency signal to a wide frequency signal and switching from a wide frequency signal to a narrow frequency signal. The narrow frequency signal mentioned in the present invention is a speech signal that has only a low frequency component and whose high frequency component is empty after up-sampling and low-pass filtering, while the wide frequency speech/audio signal has both a low frequency signal component and a high frequency signal component. The terms narrow frequency signal and wide frequency signal are relative. For example, for a narrowband signal, a wideband signal is a wide frequency signal; and for a wideband signal, a super-wideband signal is a wide frequency signal. Generally, a narrowband signal is a speech/audio signal of which the sampling rate is 8 kHz; a wideband signal is a speech/audio signal of which the sampling rate is 16 kHz; and a super-wideband signal is a speech/audio signal of which the sampling rate is 32 kHz.
When a coding/decoding algorithm of a high frequency signal before switching is selected between time-domain and frequency-domain coding/decoding algorithms according to different signal types, or when a coding algorithm of the high frequency signal before switching is a time-domain coding algorithm, in order to ensure continuity of output signals during the switching, a switching algorithm is kept in a signal domain for processing, where the signal domain is the same as that of the high frequency coding/decoding algorithm before the switching. That is, when the time-domain coding/decoding algorithm is used for the high frequency signal before the switching, a time-domain switching algorithm is used as a switching algorithm to be used; when the frequency-domain coding/decoding algorithm is used for the high frequency signal before the switching, a frequency-domain switching algorithm is used as a switching algorithm to be used. In the prior art, when a time-domain frequency band extension algorithm is used before switching, a similar time-domain switching technology is not used after the switching.
In speech/audio coding, processing is generally performed by using a frame as a unit. A current input audio frame that needs to be processed is a current frame of speech/audio signal. The current frame of speech/audio signal includes a narrow frequency signal and a high frequency signal, that is, a narrow frequency signal of current frame and a high frequency signal of current frame. Any frame of speech/audio signal before the current frame of speech/audio signal is a historical frame of speech/audio signal, which also includes a narrow frequency signal of historical frame and a high frequency signal of historical frame. The frame of speech/audio signal immediately previous to the current frame of speech/audio signal is the previous frame of speech/audio signal.
Referring to FIG. 1, an embodiment of a speech/audio signal processing method of the present invention includes:
S101: When a speech/audio signal switches bandwidth, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal.
The current frame of speech/audio signal includes a narrow frequency signal of current frame and a high frequency time-domain signal of current frame. Bandwidth switching includes switching from a narrow frequency signal to a wide frequency signal and switching from a wide frequency signal to a narrow frequency signal. In the case of switching from a narrow frequency signal to a wide frequency signal, the current frame of speech/audio signal is the current frame of wide frequency signal, including a narrow frequency signal and a high frequency signal, and the initial high frequency signal of the current frame of speech/audio signal is a real signal and may be directly obtained from the current frame of speech/audio signal. In the case of switching from a wide frequency signal to a narrow frequency signal, the current frame of speech/audio signal is the narrow frequency signal of the current frame, of which the high frequency time-domain signal is empty; the initial high frequency signal of the current frame of speech/audio signal is a predicted signal, and a high frequency signal corresponding to the narrow frequency signal of current frame needs to be predicted and used as the initial high frequency signal.
S102: Obtain a time-domain global gain parameter corresponding to the initial high frequency signal.
In the case of switching from a narrow frequency signal to a wide frequency signal, the time-domain global gain parameter of the high frequency signal may be obtained by decoding. In the case of switching from a wide frequency signal to a narrow frequency signal, the time-domain global gain parameter of the high frequency signal may be obtained according to the current frame of signal: the time-domain global gain parameter of the high frequency signal is obtained according to a spectrum tilt parameter of the narrow frequency signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame.
S103: Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of the initial high frequency signal of the current frame of speech/audio signal.
The finally output speech/audio signal of a historical frame is used as the historical frame of speech/audio signal, and the initial high frequency signal is used as the high frequency signal of the current frame. The energy ratio Ratio=Esyn(−1)/Esyn_tmp, where Esyn(−1) represents the energy of the output high frequency time-domain signal syn of the historical frame, and Esyn_tmp represents the energy of the initial high frequency time-domain signal corresponding to the current frame.
The predicted global gain parameter gain=alfa*Ratio+beta*gain′, where gain′ is the time-domain global gain parameter, alfa+beta=1, and values of alfa and beta are different according to different signal types.
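The weighting in S103 can be illustrated with the following Python sketch, which computes Ratio and gain directly from the formulas above; the function and variable names, the small floor used to avoid division by zero, and the example weight alfa=0.5 are illustrative assumptions rather than values taken from this description.

```python
import numpy as np

def predicted_global_gain(hist_high_band, init_high_band, gain_prime,
                          alfa=0.5, eps=1e-12):
    """Sketch of S103: gain = alfa*Ratio + beta*gain', with alfa + beta = 1.

    hist_high_band : output high frequency time-domain signal syn of the historical frame
    init_high_band : initial high frequency time-domain signal of the current frame
    gain_prime     : time-domain global gain parameter gain' from S102
    alfa           : weighting factor of the energy ratio (signal-type dependent)
    """
    e_syn_prev = float(np.sum(np.asarray(hist_high_band, dtype=float) ** 2))
    e_syn_tmp = float(np.sum(np.asarray(init_high_band, dtype=float) ** 2))
    ratio = e_syn_prev / max(e_syn_tmp, eps)   # Ratio = Esyn(-1) / Esyn_tmp
    beta = 1.0 - alfa                          # alfa + beta = 1
    return alfa * ratio + beta * gain_prime    # predicted global gain parameter gain
```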
S104: Correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal.
The correction means multiplying the signal by a gain; that is, the initial high frequency signal is multiplied by the predicted global gain parameter. In another embodiment, in step S102, a time-domain envelope parameter and the time-domain global gain parameter corresponding to the initial high frequency signal are obtained; therefore, in step S104, the initial high frequency signal is corrected by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal; that is, the initial high frequency signal is multiplied by the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
In the case of switching from a narrow frequency signal to a wide frequency signal, the time-domain envelope parameter of the high frequency signal may be obtained by decoding. In the case of switching from a wide frequency signal to a narrow frequency signal, the time-domain envelope parameter of the high frequency signal may be obtained according to the current frame of signal: a series of predetermined values or a high frequency time-domain envelope parameter of the historical frame may be used as the high frequency time-domain envelope parameter of the current frame of speech/audio signal.
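A minimal sketch of the correction in S104 follows, assuming the time-domain envelope parameter is given as one value per equal-length subframe; the per-subframe application and the helper name are illustrative assumptions.

```python
import numpy as np

def correct_high_band(init_high_band, predicted_gain, envelope=None):
    """Sketch of S104: multiply the initial high frequency signal by the
    predicted global gain parameter and, optionally, by a time-domain
    envelope assumed to hold one value per equal-length subframe."""
    corrected = np.asarray(init_high_band, dtype=float).copy()
    if envelope is not None:
        sub_len = len(corrected) // len(envelope)   # assumed equal subframes
        for i, env in enumerate(envelope):
            corrected[i * sub_len:(i + 1) * sub_len] *= env
    return corrected * predicted_gain
```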
S105: Synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In the foregoing embodiment, during switching between a wide frequency band and a narrow frequency band, a high frequency signal is corrected, so as to implement a smooth transition of the high frequency signal between the wide frequency band and the narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band. In addition, because the bandwidth switching algorithm and the coding/decoding algorithm of the high frequency signal before switching are in a same signal domain, not only is no extra delay added and the algorithm kept simple, but the performance of the output signal is also ensured.
Referring to FIG. 2, another embodiment of a speech/audio signal processing method of the present invention includes:
S201: When a wide frequency signal switches to a narrow frequency signal, predict a predicted high frequency signal corresponding to a narrow frequency signal of current frame.
When a wide frequency signal switches to a narrow frequency signal, the previous frame is the wide frequency signal, and the current frame is the narrow frequency signal. The step of predicting a high frequency signal corresponding to a narrow frequency signal of current frame includes: predicting an excitation signal of the high frequency signal of the current frame of speech/audio signal according to the narrow frequency signal of current frame; predicting an LPC (Linear Predictive Coding) coefficient of the high frequency signal of the current frame of speech/audio signal; and synthesizing the predicted high frequency excitation signal and the LPC coefficient, to obtain the predicted high frequency signal syn_tmp.
In an embodiment, parameters such as a pitch period, an algebraic codebook, and a gain may be extracted from the narrow frequency signal, and the high frequency excitation signal is predicted by resampling and filtering.
In another embodiment, operations such as up-sampling, low-pass filtering, and taking an absolute value or a square may be performed on the narrow frequency time-domain signal or a narrow frequency time-domain excitation signal, so as to predict the high frequency excitation signal.
To predict the LPC coefficient of the high frequency signal, a high frequency LPC coefficient of a historical frame or a series of preset values may be used as the LPC coefficient of the current frame; or different prediction manners may be used for different signal types.
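The sketch below illustrates one combination of the options described above: the absolute value of the narrow frequency time-domain excitation is taken as a crude high frequency excitation, and a high-band LPC coefficient set reused from a historical frame is applied in an all-pole synthesis filter. The function name, the reuse of historical coefficients, and the absence of any resampling step are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import lfilter

def predict_high_band(nb_excitation, hf_lpc_hist):
    """Sketch of S201: predict the initial high frequency signal syn_tmp.

    nb_excitation : narrow frequency time-domain excitation of the current frame
    hf_lpc_hist   : high-band LPC coefficients [a1, ..., ap] taken from a
                    historical frame (one of the options described above)
    """
    hf_excitation = np.abs(np.asarray(nb_excitation, dtype=float))  # crude high-band excitation
    # All-pole LPC synthesis filtering: 1 / A(z), with A(z) = 1 + a1*z^-1 + ... + ap*z^-p
    syn_tmp = lfilter([1.0],
                      np.concatenate(([1.0], np.asarray(hf_lpc_hist, dtype=float))),
                      hf_excitation)
    return syn_tmp
```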
S202: Obtain a time-domain envelope parameter and a time-domain global gain parameter that are corresponding to the predicted high frequency signal.
A series of predetermined values may be used as the high frequency time-domain envelope parameter of the current frame. Narrowband signals may generally be classified into several types, a series of values may be preset for each type, and a group of preset time-domain envelope parameters may be selected according to the type of the narrowband signal of the current frame; or a group of time-domain envelope values may be set, for example, when the number of time-domain envelopes is M, the preset values may be M values of 0.3536. In this embodiment, the obtaining of a time-domain envelope parameter is an optional but not a necessary step.
The time-domain global gain parameter of the high frequency signal is obtained according to a spectrum tilt parameter of the narrow frequency signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame, which includes the following steps in an embodiment:
S2021: Classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, where in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; and when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, classify the narrow frequency signal as a fricative, and the rest as non-fricatives.
The parameter cor showing the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame may be determined according to an energy magnitude relationship between signals of a same frequency band, or may be determined according to an energy relationship between several same frequency bands, or may be calculated according to a formula showing a self-correlation or a cross-correlation between time-domain signals or showing a self-correlation or a cross-correlation between time-domain excitation signals.
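As one example of the options listed above, the following sketch computes cor as a normalized cross-correlation between the narrow frequency time-domain signals of the current and historical frames; the normalization and the function name are assumptions, and any of the other options mentioned (energy relationships between frequency bands, excitation-signal correlations) could be used instead.

```python
import numpy as np

def correlation_parameter(nb_current, nb_historical, eps=1e-12):
    """Sketch: cor as a normalized cross-correlation between the narrow
    frequency time-domain signals of the current and historical frames."""
    x = np.asarray(nb_current, dtype=float)
    y = np.asarray(nb_historical, dtype=float)
    n = min(len(x), len(y))            # align the two frames on a common length
    x, y = x[:n], y[:n]
    num = float(np.dot(x, y))
    den = float(np.sqrt(np.dot(x, x) * np.dot(y, y))) + eps
    return num / den
```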
S2022: When the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal. That is, when the spectrum tilt parameter of the current frame of speech/audio signal is less than or equal to the first predetermined value, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than the first predetermined value, the first predetermined value is used as the spectrum tilt parameter limit value.
The time-domain global gain parameter gain′ is obtained according to the following formula:
gain′ = tilt, when tilt ≤ T1; gain′ = T1, when tilt > T1, where tilt is the spectrum tilt parameter, and T1 is the first predetermined value.
S2023: When the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal. That is, when the spectrum tilt parameter of the current frame of speech/audio signal belongs to the first range, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
The time-domain global gain parameter gain′ is obtained according to the following formula:
gain′ = tilt, when tilt ∈ [a, b]; gain′ = a, when tilt < a; gain′ = b, when tilt > b, where tilt is the spectrum tilt parameter, and [a, b] is the first range.
In an embodiment, a spectrum tilt parameter tilt of a narrow frequency signal and a parameter cor showing a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame are obtained; current frame of signals are classified into two types, fricative and non-fricative, according to tilt and cor; when the spectrum tilt parameter tilt>5 and the correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives; tilt is limited within a value range of 0.5<=tilt<=1.0 and is used as a time-domain global gain parameter of a non-fricative, and tilt is limited to a value range of tilt<=8.0 and is used as a time-domain global gain parameter of a fricative. For a fricative, a spectrum tilt parameter may be any value greater than 5, and for a non-fricative, a spectrum tilt parameter may be any value less than or equal to 5, or may be greater than 5. In order to ensure that the spectrum tilt parameter tilt can be used as an estimated time-domain global gain parameter, tilt is limited within a value range and then used as the time-domain global gain parameter. That is, when tilt>8, tilt is set to 8 and 8 is used as the time-domain global gain parameter of a fricative; when tilt<0.5, tilt is set to 0.5, or when tilt>1.0, tilt is set to 1.0, and 0.5 or 1.0 is used as the time-domain global gain parameter of a non-fricative.
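Putting the example values of this embodiment together, the classification and limiting can be sketched as follows; cor_threshold stands in for the unspecified "given value" and is therefore an assumption.

```python
def time_domain_global_gain(tilt, cor, cor_threshold):
    """Sketch of S2021-S2023 with the example values of this embodiment:
    tilt > 5 together with a low cor marks a fricative; the fricative limit
    is 8.0 and the non-fricative range is [0.5, 1.0]."""
    is_fricative = (tilt > 5.0) and (cor < cor_threshold)
    if is_fricative:
        return min(tilt, 8.0)              # limit to tilt <= 8.0
    return min(max(tilt, 0.5), 1.0)        # limit to the range [0.5, 1.0]
```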
S203: Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of the initial high frequency signal of the current frame of speech/audio signal.
Calculation is performed on the energy ratio Ratio=Esyn(−1)/Esyn_tmp, and the weighted value of gain′ and Ratio is used as the predicted global gain parameter gain of the current frame, that is, gain=alfa*Ratio+beta*gain′, where gain′ is the time-domain global gain parameter, alfa+beta=1, values of alfa and beta are different according to different signal types, Esyn(−1) represents the energy of the finally output high frequency time-domain signal syn of the historical frame, and Esyn_tmp represents the energy of the predicted high frequency time-domain signal syn_tmp of the current frame.
S204: Correct the predicted high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain a corrected high frequency time-domain signal.
The predicted high frequency signal is multiplied by the time-domain envelope parameter and the predicted time-domain global gain parameter, to obtain the high frequency time-domain signal.
In this embodiment, the time-domain envelope parameter is optional. When only the time-domain global gain parameter is included, the predicted high frequency signal may be corrected by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal. That is, the predicted high frequency signal is multiplied by the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
S205: Synthesize the narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
The energy Esyn of the high frequency time-domain signal syn is used to predict a time-domain global gain parameter of a next frame. That is, a value of Esyn is assigned to Esyn(−1).
In the foregoing embodiment, a high frequency band of a narrow frequency signal following a wide frequency signal is corrected, so as to implement a smooth transition of the high frequency part between a wide frequency band and a narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band. In addition, because corresponding processing is performed on the frame during the switching, a problem that occurs during parameter and status updating is indirectly eliminated. By keeping the bandwidth switching algorithm in the same signal domain as the coding/decoding algorithm of the high frequency signal before the switching, not only is no extra delay added and the algorithm kept simple, but the performance of the output signal is also ensured.
Referring to FIG. 3, another embodiment of a speech/audio signal processing method of the present invention includes:
S301: When a narrow frequency signal switches to a wide frequency signal, obtain a high frequency signal of current frame.
When a narrow frequency signal switches to a wide frequency signal, a previous frame is a narrow frequency signal, and a current frame is a wide frequency signal.
S302: Obtain a time-domain envelope parameter and a time-domain global gain parameter that are corresponding to the high frequency signal.
The time-domain envelope parameter and the time-domain global gain parameter may be directly obtained from the high frequency signal of current frame. The obtaining of a time-domain envelope parameter is an optional step.
S303: Perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame of speech/audio signal and energy of an initial high frequency signal of a current frame of speech/audio signal.
Because the current frame is a wide frequency signal, parameters of the high frequency signal may all be obtained by decoding. In order to ensure a smooth transition during switching, the time-domain global gain parameter is smoothed in the following manner:
Calculation is performed on the energy ratio Ratio=Esyn(−1)/Esyn_tmp, where Esyn(−1) represents energy of a finally output high frequency time-domain signal syn of a historical frame, and Esyn_tmp represents energy of a high frequency time-domain signal syn of the current frame.
The weighted value of the time-domain global gain parameter gain′ obtained by decoding and Ratio is used as the predicted global gain parameter gain of the current frame, that is, gain=alfa*Ratio+beta*gain′, where gain′ is the time-domain global gain parameter, alfa+beta=1, and values of alfa and beta are different according to different signal types.
When narrowband signals of the current audio frame and a previous frame of speech/audio signal have a predetermined correlation, a value obtained by attenuating, according to a certain step size, a weighting factor alfa of the energy ratio corresponding to the previous frame of speech/audio signal is used as a weighting factor of the energy ratio corresponding to the current audio frame, where the attenuation is performed frame by frame until alfa is 0.
When narrow frequency signals of consecutive frames are of a same signal type, or a correlation between narrow frequency signals of consecutive frames satisfies a certain condition, that is, the consecutive frames have a certain correlation or signal types of the consecutive frames are similar, alfa is attenuated frame by frame according to a certain step size until alfa is attenuated to 0; when the narrow frequency signals of the consecutive frames have no correlation, alfa is directly attenuated to 0, that is, a current decoding result is maintained without performing weighting or correcting.
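The handling of the weighting factor alfa described above can be sketched as follows; the step size 0.1 and the function name are assumed example values.

```python
def update_alfa(prev_alfa, frames_correlated, step=0.1):
    """Sketch: attenuate alfa frame by frame while consecutive narrow
    frequency signals remain correlated (or of a similar type); otherwise
    set alfa directly to 0 so that the decoded result is kept unchanged."""
    if not frames_correlated:
        return 0.0
    return max(prev_alfa - step, 0.0)
```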
S304: Correct the high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain a corrected high frequency time-domain signal.
The correction refers to that the high frequency signal is multiplied by the time-domain envelope parameter and the predicted time-domain global gain parameter, to obtain the corrected high frequency time-domain signal.
In this embodiment, the time-domain envelope parameter is optional. When only the time-domain global gain parameter is included, the high frequency signal may be corrected by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal. That is, the high frequency signal is multiplied by the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
S305: Synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In the foregoing embodiment, a high frequency band of a wide frequency signal following a narrow frequency signal is corrected, so as to implement a smooth transition of the high frequency part between a wide frequency band and a narrow frequency band, thereby effectively eliminating aural discomfort caused by the switching between the wide frequency band and the narrow frequency band. In addition, because corresponding processing is performed on the frame during the switching, a problem that occurs during parameter and status updating is indirectly eliminated. By keeping the bandwidth switching algorithm in the same signal domain as the coding/decoding algorithm of the high frequency signal before the switching, not only is no extra delay added and the algorithm kept simple, but the performance of the output signal is also ensured.
Referring to FIG. 4, another embodiment of a speech/audio signal processing method of the present invention includes:
S401: When a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal.
When a wide frequency signal switches to a narrow frequency signal, a previous frame is the wide frequency signal, and a current frame is the narrow frequency signal. The step of predicting an initial high frequency signal corresponding to a narrow frequency signal of current frame includes: predicting an excitation signal of the high frequency signal of the current frame of speech/audio signal according to the narrow frequency signal of current frame; predicting an LPC coefficient of the high frequency signal of the current frame of speech/audio signal; and synthesizing the predicted high frequency excitation signal and the LPC coefficient, to obtain the predicted high frequency signal syn_tmp.
In an embodiment, parameters such as a pitch period, an algebraic codebook, and a gain may be extracted from the narrow frequency signal, and the high frequency excitation signal is predicted by resampling and filtering.
In another embodiment, operations such as up-sampling, low-pass filtering, and taking an absolute value or a square may be performed on the narrow frequency time-domain signal or a narrow frequency time-domain excitation signal, so as to predict the high frequency excitation signal.
To predict the LPC coefficient of the high frequency signal, a high frequency LPC coefficient of a historical frame or a series of preset values may be used as the LPC coefficient of the current frame; or different prediction manners may be used for different signal types.
S402: Obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame.
In an embodiment, the following steps are included:
S2021: Classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame, where in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal.
In an embodiment, when the spectrum tilt parameter tilt>5, and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives. The parameter cor showing the correlation between the narrow frequency signal of current frame and the narrow frequency signal of historical frame may be determined according to an energy magnitude relationship between signals of a same frequency band, or may be determined according to an energy relationship between several same frequency bands, or may be calculated according to a formula showing a self-correlation or a cross-correlation between time-domain signals or showing a self-correlation or a cross-correlation between time-domain excitation signals.
S2022: When the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal. That is, when the spectrum tilt parameter of the current frame of speech/audio signal is less than or equal to the first predetermined value, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than the first predetermined value, the first predetermined value is used as the spectrum tilt parameter limit value.
When the current frame of speech/audio signal is a fricative signal, the time-domain global gain parameter gain′ is obtained according to the following formula:
gain′ = tilt, when tilt ≤ T1; gain′ = T1, when tilt > T1, where tilt is the spectrum tilt parameter, and T1 is the first predetermined value.
S2023: When the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal. That is, when the spectrum tilt parameter of the current frame of speech/audio signal belongs to the first range, an original value of the spectrum tilt parameter is kept as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is greater than an upper limit of the first range, the upper limit of the first range is used as the spectrum tilt parameter limit value; when the spectrum tilt parameter of the current frame of speech/audio signal is less than a lower limit of the first range, the lower limit of the first range is used as the spectrum tilt parameter limit value.
When the current frame of speech/audio signal is a non-fricative signal, the time-domain global gain parameter gain′ is obtained according to the following formula:
gain′ = tilt, when tilt ∈ [a, b]; gain′ = a, when tilt < a; gain′ = b, when tilt > b, where tilt is the spectrum tilt parameter, and [a, b] is the first range.
In an embodiment, a spectrum tilt parameter tilt of a narrow frequency signal and a parameter cor showing a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame are obtained; current frame of signals are classified into two types, fricative and non-fricative, according to tilt and cor; when the spectrum tilt parameter tilt>5 and the correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative, the rest being non-fricatives; tilt is limited within a value range of 0.5<=tilt<=1.0 and is used as a time-domain global gain parameter of a non-fricative, and tilt is limited to a value range of tilt<=8.0 and is used as a time-domain global gain parameter of a fricative. For a fricative, a spectrum tilt parameter may be any value greater than 5, and for a non-fricative, a spectrum tilt parameter may be any value less than or equal to 5, or may be greater than 5. In order to ensure that the spectrum tilt parameter tilt can be used as the time-domain global gain parameter, tilt is limited within a value range and then used as the time-domain global gain parameter. That is, when tilt>8, tilt is set to 8 and 8 is used as the time-domain global gain parameter of a fricative signal; when tilt<0.5, tilt is set to 0.5, or when tilt>1.0, tilt is set to 1.0, and 0.5 or 1.0 is used as the time-domain global gain parameter of a non-fricative signal.
S403: Correct the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal.
In an embodiment, the initial high frequency signal is multiplied by the time-domain global gain parameter, to obtain the corrected high frequency time-domain signal.
In another embodiment, step S403 may include:
performing weighting processing on an energy ratio and the time-domain global gain parameter, and using an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of a historical frame and energy of an initial high frequency signal of the current frame; and
correcting the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal; that is, the initial high frequency signal is multiplied by the predicted global gain parameter, to obtain a corrected high frequency time-domain signal.
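The weighting and correction described above can be pictured with the following sketch; the convex-combination form of the weighting, the weighting factor alfa, and the frame length are assumptions made only for illustration, not details taken from this description.

import numpy as np

def predicted_global_gain(energy_ratio, tilt_gain, alfa):
    """Weight the energy ratio and the tilt-based global gain (alfa in [0, 1] assumed)."""
    return alfa * energy_ratio + (1.0 - alfa) * tilt_gain

def correct_high_band(initial_hf, prev_hf, tilt_gain, alfa=0.5):
    """Scale the initial high frequency signal by the predicted global gain."""
    eps = 1e-12  # guard against division by zero
    # Ratio between the energy of the previous frame's high band and the
    # energy of the current frame's initial high band.
    energy_ratio = float(np.sum(prev_hf ** 2) / (np.sum(initial_hf ** 2) + eps))
    gain = predicted_global_gain(energy_ratio, tilt_gain, alfa)
    return gain * initial_hf

# Example with made-up 320-sample frames (e.g. 20 ms at 16 kHz)
rng = np.random.default_rng(0)
prev_hf = 0.05 * rng.standard_normal(320)
initial_hf = 0.20 * rng.standard_normal(320)
corrected_hf = correct_high_band(initial_hf, prev_hf, tilt_gain=0.8)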
Optionally, before step S403, the method may further include:
obtaining a time-domain envelope parameter corresponding to the initial high frequency signal, and
the correcting the initial high frequency signal by using the predicted global gain parameter includes:
correcting the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
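One possible way to picture the combined envelope-and-gain correction is to scale each subframe of the initial high frequency signal by its time-domain envelope value and then apply the global gain; the subframe layout and the envelope values below are assumptions used only for illustration.

import numpy as np

def apply_envelope_and_gain(initial_hf, envelope, global_gain):
    """Shape each subframe with its envelope value, then apply the global gain."""
    n_sub = len(envelope)
    sub_len = len(initial_hf) // n_sub
    out = np.array(initial_hf, dtype=float, copy=True)
    for i, env in enumerate(envelope):
        out[i * sub_len:(i + 1) * sub_len] *= env
    return global_gain * out

# Example: four subframes shaped by a flat "series of preset values" envelope
hf = np.ones(320)
shaped_hf = apply_envelope_and_gain(hf, envelope=[1.0, 1.0, 1.0, 1.0], global_gain=0.8)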
S404: Synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In the foregoing embodiment, when a wide frequency band switches to a narrow frequency band, a time-domain global gain parameter of the high frequency signal is obtained according to a spectrum tilt parameter and an interframe correlation. By using the narrow frequency spectrum tilt parameter, the energy relationship between the narrow frequency signal and the high frequency signal can be correctly estimated, so that the energy of the high frequency signal is better estimated. By using the interframe correlation, the correlation between high frequency signals of adjacent frames can be estimated by making good use of the correlation between narrow frequency frames. In this way, when weighting is performed to obtain the high frequency global gain, this real information is used well and undesirable noise is not introduced. The high frequency signal is corrected by using the time-domain global gain parameter, so as to achieve a smooth transition of the high frequency part between the wide frequency band and the narrow frequency band, thereby effectively eliminating the aural discomfort caused by the switching between the wide frequency band and the narrow frequency band.
In association with the foregoing method embodiments, the present invention further provides a speech/audio signal processing apparatus. The apparatus may be located in a terminal device, a network device, or a test device. The speech/audio signal processing apparatus may be implemented by a hardware circuit, or may be implemented by software in combination with hardware. For example, referring to FIG. 5, a processor invokes the speech/audio signal processing apparatus, to implement speech/audio signal processing. The speech/audio signal processing apparatus may execute the methods and processes in the foregoing method embodiments.
Referring to FIG. 6, an embodiment of a speech/audio signal processing apparatus includes:
an acquiring unit 601, configured to: when a speech/audio signal switches bandwidth, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
a parameter obtaining unit 602, configured to obtain a time-domain global gain parameter corresponding to the initial high frequency signal;
a weighting processing unit 603, configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of historical frame and energy of the initial high frequency signal of current frame;
a correcting unit 604, configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain a corrected high frequency time-domain signal; and
a synthesizing unit 605, configured to synthesize a narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
In an embodiment, the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal, and the parameter obtaining unit 602 includes:
a global gain parameter obtaining unit, configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of the current frame and a narrow frequency signal of a historical frame.
Referring to FIG. 7, in another embodiment, the bandwidth switching is switching from a wide frequency signal to a narrow frequency signal, and the parameter obtaining unit 602 includes:
a time-domain envelope obtaining unit 701, configured to use a series of preset values as a high frequency time-domain envelope parameter of the current frame of speech/audio signal; and
a global gain parameter obtaining unit 702, configured to obtain the time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of the current frame and a narrow frequency signal of a historical frame.
Therefore, the correcting unit 604 is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
Referring to FIG. 8, further, an embodiment of the global gain parameter obtaining unit 702 includes:
a classifying unit 801, configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
a first limiting unit 802, configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal; and
a second limiting unit 803, configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
Further, in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative; otherwise, it is classified as a non-fricative; the first predetermined value is 8; and the first range is [0.5, 1].
Referring to FIG. 9, in an embodiment, the acquiring unit 601 includes:
an excitation signal obtaining unit 901, configured to predict an excitation signal of the high frequency signal according to the current frame of speech/audio signal;
an LPC coefficient obtaining unit 902, configured to predict an LPC coefficient of the high frequency signal; and
a generating unit 903, configured to synthesize the excitation signal of the high frequency signal and the LPC coefficient of the high frequency signal, to obtain the predicted high frequency signal.
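A minimal sketch of what the generating unit 903 does, under the usual linear-prediction model, is to pass the predicted excitation through the all-pole synthesis filter 1/A(z) built from the predicted LPC coefficients; the particular coefficients and excitation below are arbitrary illustrative values, not values from this description.

import numpy as np
from scipy.signal import lfilter

def synthesize_high_band(excitation, lpc):
    """Run the predicted excitation through the all-pole synthesis filter 1/A(z).

    lpc holds a1..ap of A(z) = 1 + a1*z^-1 + ... + ap*z^-p.
    """
    a = np.concatenate(([1.0], lpc))
    return lfilter([1.0], a, excitation)

# Example: white-noise excitation and an arbitrary stable 2nd-order predictor
rng = np.random.default_rng(1)
excitation = rng.standard_normal(320)
predicted_hf = synthesize_high_band(excitation, lpc=[-0.9, 0.4])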
In an embodiment, the bandwidth switching is switching from a narrow frequency signal to a wide frequency signal, and the speech/audio signal processing apparatus further includes:
a weighting factor setting unit, configured to: when the narrowband signal of the current frame of speech/audio signal and the narrowband signal of a previous frame of speech/audio signal have a predetermined correlation, use a value obtained by attenuating, by a certain step size, a weighting factor alfa of the energy ratio corresponding to the previous frame of speech/audio signal as the weighting factor of the energy ratio corresponding to the current frame, where the attenuation is performed frame by frame until alfa reaches 0.
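The frame-by-frame attenuation of the weighting factor alfa can be sketched as follows; the step size of 0.05 is an assumed value, since no particular step size is specified above.

def next_alfa(prev_alfa, step=0.05):
    """Attenuate the energy-ratio weighting factor by one step, stopping at 0."""
    return max(prev_alfa - step, 0.0)

# Example: starting from 1.0, alfa is driven down to 0 over successive frames
alfa = 1.0
for _ in range(25):
    alfa = next_alfa(alfa)
print(alfa)   # 0.0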
Referring to FIG. 10, another embodiment of a speech/audio signal processing apparatus includes:
a predicting unit 1001, configured to: when a speech/audio signal switches from a wide frequency signal to a narrow frequency signal, obtain an initial high frequency signal corresponding to a current frame of speech/audio signal;
a parameter obtaining unit 1002, configured to obtain a time-domain global gain parameter of the high frequency signal according to a spectrum tilt parameter of the current frame of speech/audio signal and a correlation between a narrow frequency signal of current frame and a narrow frequency signal of historical frame;
a correcting unit 1003, configured to correct the initial high frequency signal by using the time-domain global gain parameter, to obtain a corrected high frequency time-domain signal; and
a synthesizing unit 1004, configured to synthesize the narrow frequency time-domain signal of current frame and the corrected high frequency time-domain signal and output the synthesized signal.
Referring to FIG. 8, the parameter obtaining unit 1002 includes:
a classifying unit 801, configured to classify the current frame of speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of speech/audio signal and the correlation between the current frame of speech/audio signal and the narrow frequency signal of historical frame;
a first limiting unit 802, configured to: when the current frame of speech/audio signal is a first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal; and
a second limiting unit 803, configured to: when the current frame of speech/audio signal is a second type of signal, limit the spectrum tilt parameter to a value in a first range, to obtain a spectrum tilt parameter limit value, and use the spectrum tilt parameter limit value as the time-domain global gain parameter of the high frequency signal.
Further, in an embodiment, the first type of signal is a fricative signal, and the second type of signal is a non-fricative signal; when the spectrum tilt parameter tilt>5 and a correlation parameter cor is less than a given value, the narrow frequency signal is classified as a fricative; otherwise, it is classified as a non-fricative; the first predetermined value is 8; and the first range is [0.5, 1].
Optionally, in an embodiment, the speech/audio signal processing apparatus further includes:
a weighting processing unit, configured to perform weighting processing on an energy ratio and the time-domain global gain parameter, and use an obtained weighted value as a predicted global gain parameter, where the energy ratio is a ratio between energy of a high frequency time-domain signal of historical frame and energy of the initial high frequency signal of current frame; and
the correcting unit is configured to correct the initial high frequency signal by using the predicted global gain parameter, to obtain the corrected high frequency time-domain signal.
In another embodiment, the parameter obtaining unit is further configured to obtain a time-domain envelope parameter corresponding to the initial high frequency signal; and the correcting unit is configured to correct the initial high frequency signal by using the time-domain envelope parameter and the time-domain global gain parameter.
A person of ordinary skill in the art may understand that all or a part of the processes of the methods in the embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer readable storage medium. When the program runs, the processes of the methods in the embodiments are performed. The storage medium may include: a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM).
The above are merely exemplary embodiments for illustrating the present invention, but the scope of the present invention is not limited thereto. Modifications or variations may readily be made by persons skilled in the art without departing from the spirit and scope of the present invention.

Claims (20)

The invention claimed is:
1. A speech/audio signal processing method, comprising:
obtaining, by a decoder, an initial high frequency time-domain signal corresponding to a current frame of a speech/audio signal when a signal of the current frame is a narrow frequency signal and a signal of a previous frame is a wide frequency signal, wherein the previous frame is adjacent to the current frame;
obtaining, by the decoder, a time-domain global gain parameter of the initial high frequency time-domain signal according to a spectrum tilt parameter of the current frame of the speech/audio signal and a correlation between the narrow frequency signal of the current frame and a narrow frequency signal of the previous frame;
performing, by the decoder, weighting processing on an energy ratio and the time-domain global gain parameter to obtain a weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a high frequency time-domain signal of the previous frame and energy of the initial high frequency time-domain signal of the current frame;
correcting, by the decoder, the initial high frequency time-domain signal by using the predicted global gain parameter to obtain a corrected high frequency time-domain signal;
synthesizing, by the decoder, a synthesized signal by a narrow frequency time-domain signal of the current frame and the corrected high frequency time-domain signal; and
outputting, by the decoder, the synthesized signal.
2. The method according to claim 1, wherein the obtaining the time-domain global gain parameter of the initial high frequency time-domain signal according to the spectrum tilt parameter of the current frame of the speech/audio signal and the correlation between the narrow frequency signal of the current frame and the narrow frequency signal of the previous frame comprises:
classifying the current frame of the speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of the speech/audio signal and the correlation between the narrow frequency signal of the current frame and the narrow frequency signal of the previous frame;
when the current frame of the speech/audio signal is the first type of signal, limiting the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a first limited spectrum tilt parameter value;
when the current frame of the speech/audio signal is the second type of signal, limiting the spectrum tilt parameter to a value in a first range to obtain a second limited spectrum tilt parameter value; and
setting the first limited spectrum tilt parameter value or the second limited spectrum tilt parameter value as the time-domain global gain parameter of the initial high frequency time-domain signal.
3. The method according to claim 2, wherein the limiting the spectrum tilt parameter to less than or equal to the first predetermined value to obtain the first limited spectrum tilt parameter value comprises:
setting a value of the spectrum tilt parameter as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than or equal to the first predetermined value; and
setting a first predetermined value as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the first predetermined value.
4. The method according to claim 2, wherein the limiting the spectrum tilt parameter to the value in the first range to obtain the second limited spectrum tilt parameter value comprises:
setting a value of the spectrum tilt parameter as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter belongs to the first range;
setting an upper limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the upper limit of the first range; and
setting a lower limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than the lower limit of the first range.
5. The method according to claim 2, wherein the first type of signal is a fricative signal and the second type of signal is a non-fricative signal.
6. The method according to claim 2, wherein the first predetermined value is 8 and the first range is [0.5, 1].
7. The method according to claim 1, wherein the obtaining the initial high frequency time-domain signal corresponding to the current frame of the speech/audio signal comprises:
predicting a high frequency excitation signal according to the current frame of the speech/audio signal;
predicting a linear predictive coding (LPC) coefficient; and
synthesizing the initial high frequency time-domain signal by the high frequency excitation signal and the LPC coefficient.
8. A speech/audio signal processing apparatus, comprising:
a memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more processors execute the instructions to:
obtain an initial high frequency time-domain signal corresponding to a current frame of a speech/audio signal when a signal of the current frame is a narrow frequency signal and a signal of a previous frame is a wide frequency signal, wherein the previous frame is adjacent to the current frame;
obtain a time-domain global gain parameter of the initial high frequency time-domain signal according to a spectrum tilt parameter of the current frame of the speech/audio signal and a correlation between the narrow frequency signal of the current frame and a narrow frequency signal of the previous frame;
perform weighting processing on an energy ratio and the time-domain global gain parameter to obtain a weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a high frequency time-domain signal of the previous frame and energy of the initial high frequency time-domain signal of the current frame;
correct the initial high frequency time-domain signal by using the predicted global gain parameter to obtain a corrected high frequency time-domain signal;
synthesize a synthesized signal by a narrow frequency time-domain signal of the current frame and the corrected high frequency time-domain signal; and
output the synthesized signal.
9. The apparatus according to claim 8, wherein the one or more processors execute the instructions to:
classify the current frame of the speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of the speech/audio signal and the correlation between the narrow frequency signal of the current frame and the narrow frequency signal of the previous frame;
when the current frame of the speech/audio signal is the first type of signal, limit the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a first limited spectrum tilt parameter value;
when the current frame of the speech/audio signal is the second type of signal, limit the spectrum tilt parameter to a value in a first range to obtain a second limited spectrum tilt parameter value; and
set the first limited spectrum tilt parameter value or the second limited spectrum tilt parameter value as the time-domain global gain parameter of the initial high frequency time-domain signal.
10. The apparatus according to claim 9, wherein the one or more processors execute the instructions to:
set a value of the spectrum tilt parameter as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than or equal to the first predetermined value; and
set a first predetermined value as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the first predetermined value.
11. The apparatus according to claim 9, wherein the one or more processors execute the instructions to:
set a value of the spectrum tilt parameter as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter belongs to the first range;
set an upper limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the upper limit of the first range; and
set a lower limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than the lower limit of the first range.
12. The apparatus according to claim 9, wherein the first type of signal is a fricative signal and the second type of signal is a non-fricative signal.
13. The apparatus according to claim 9, wherein the first predetermined value is 8 and the first range is [0.5, 1].
14. The apparatus according to claim 8, wherein the one or more processors execute the instructions to:
predict a high frequency excitation signal according to the current frame of the speech/audio signal;
predict a linear predictive coding (LPC) coefficient; and
synthesize the initial high frequency time-domain signal by the high frequency excitation signal and the LPC coefficient.
15. A non-transitory computer-readable medium storing computer instructions, that when executed by one or more processors of a speech/audio signal processing apparatus, cause the one or more processors to perform steps of:
obtaining an initial high frequency time-domain signal corresponding to a current frame of a speech/audio signal when a signal of the current frame is a narrow frequency signal and a signal of a previous frame is a wide frequency signal, wherein the previous frame is adjacent to the current frame;
obtaining a time-domain global gain parameter of the initial high frequency time-domain signal according to a spectrum tilt parameter of the current frame of the speech/audio signal and a correlation between the narrow frequency signal of the current frame and a narrow frequency signal of the previous frame;
performing weighting processing on an energy ratio and the time-domain global gain parameter to obtain a weighted value as a predicted global gain parameter, wherein the energy ratio is a ratio between energy of a high frequency time-domain signal of the previous frame and energy of the initial high frequency time-domain signal of the current frame;
correcting the initial high frequency time-domain signal by using the predicted global gain parameter to obtain a corrected high frequency time-domain signal;
synthesizing a synthesized signal by a narrow frequency time-domain signal of the current frame and the corrected high frequency time-domain signal; and
outputting the synthesized signal.
16. The non-transitory computer-readable medium according to claim 15, wherein the obtaining the time-domain global gain parameter of the initial high frequency time-domain signal according to a spectrum tilt parameter of the current frame of the speech/audio signal and a correlation between the narrow frequency signal of the current frame and the narrow frequency signal of the previous frame comprises:
classifying the current frame of the speech/audio signal as a first type of signal or a second type of signal according to the spectrum tilt parameter of the current frame of the speech/audio signal and the correlation between the narrow frequency signal of the current frame and the narrow frequency signal of the previous frame;
limiting the spectrum tilt parameter to less than or equal to a first predetermined value to obtain a first limited spectrum tilt parameter value when the current frame of the speech/audio signal is the first type of signal;
limiting the spectrum tilt parameter to a value in a first range to obtain a second limited spectrum tilt parameter value when the current frame of the speech/audio signal is the second type of signal; and
setting the first limited spectrum tilt parameter value or the second limited spectrum tilt parameter value as the time-domain global gain parameter of the initial high frequency time-domain signal.
17. The non-transitory computer-readable medium according to claim 16, wherein the limiting the spectrum tilt parameter to less than or equal to the first predetermined value to obtain the first limited spectrum tilt parameter value comprises:
setting a value of the spectrum tilt parameter as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than or equal to the first predetermined value; and
setting a first predetermined value as the first limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the first predetermined value.
18. The non-transitory computer-readable medium according to claim 16, wherein the limiting the spectrum tilt parameter to the value in the first range to obtain the second limited spectrum tilt parameter value comprises:
setting a value of the spectrum tilt parameter as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter belongs to the first range;
setting an upper limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is greater than the upper limit of the first range; and
setting a lower limit of the first range as the second limited spectrum tilt parameter value when the value of the spectrum tilt parameter is less than the lower limit of the first range.
19. The non-transitory computer-readable medium according to claim 16, wherein the first type of signal is a fricative signal and the second type of signal is a non-fricative signal.
20. The non-transitory computer-readable medium according to claim 16, wherein the first predetermined value is 8 and the first range is [0.5, 1].

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/021,621 US10360917B2 (en) 2012-03-01 2018-06-28 Speech/audio signal processing method and apparatus
US16/457,165 US10559313B2 (en) 2012-03-01 2019-06-28 Speech/audio signal processing method and apparatus

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
CN201210051672.6 2012-03-01
CN201210051672 2012-03-01
CN201210051672.6A CN103295578B (en) 2012-03-01 2012-03-01 A kind of voice frequency signal processing method and device
PCT/CN2013/072075 WO2013127364A1 (en) 2012-03-01 2013-03-01 Voice frequency signal processing method and device
US14/470,559 US9691396B2 (en) 2012-03-01 2014-08-27 Speech/audio signal processing method and apparatus
US15/616,188 US10013987B2 (en) 2012-03-01 2017-06-07 Speech/audio signal processing method and apparatus
US16/021,621 US10360917B2 (en) 2012-03-01 2018-06-28 Speech/audio signal processing method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/616,188 Continuation US10013987B2 (en) 2012-03-01 2017-06-07 Speech/audio signal processing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/457,165 Continuation US10559313B2 (en) 2012-03-01 2019-06-28 Speech/audio signal processing method and apparatus

Publications (2)

Publication Number Publication Date
US20180374488A1 US20180374488A1 (en) 2018-12-27
US10360917B2 true US10360917B2 (en) 2019-07-23

Family

ID=49081655

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/470,559 Active 2033-12-16 US9691396B2 (en) 2012-03-01 2014-08-27 Speech/audio signal processing method and apparatus
US15/616,188 Active US10013987B2 (en) 2012-03-01 2017-06-07 Speech/audio signal processing method and apparatus
US16/021,621 Active US10360917B2 (en) 2012-03-01 2018-06-28 Speech/audio signal processing method and apparatus
US16/457,165 Active US10559313B2 (en) 2012-03-01 2019-06-28 Speech/audio signal processing method and apparatus

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/470,559 Active 2033-12-16 US9691396B2 (en) 2012-03-01 2014-08-27 Speech/audio signal processing method and apparatus
US15/616,188 Active US10013987B2 (en) 2012-03-01 2017-06-07 Speech/audio signal processing method and apparatus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/457,165 Active US10559313B2 (en) 2012-03-01 2019-06-28 Speech/audio signal processing method and apparatus

Country Status (20)

Country Link
US (4) US9691396B2 (en)
EP (3) EP3534365B1 (en)
JP (3) JP6010141B2 (en)
KR (3) KR101667865B1 (en)
CN (2) CN103295578B (en)
BR (1) BR112014021407B1 (en)
CA (1) CA2865533C (en)
DK (1) DK3534365T3 (en)
ES (3) ES2629135T3 (en)
HU (1) HUE053834T2 (en)
IN (1) IN2014KN01739A (en)
MX (2) MX364202B (en)
MY (1) MY162423A (en)
PL (1) PL3534365T3 (en)
PT (2) PT3193331T (en)
RU (2) RU2616557C1 (en)
SG (2) SG11201404954WA (en)
TR (1) TR201911006T4 (en)
WO (1) WO2013127364A1 (en)
ZA (1) ZA201406248B (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364657B (en) 2013-07-16 2020-10-30 超清编解码有限公司 Method and decoder for processing lost frame
CN104517610B (en) * 2013-09-26 2018-03-06 华为技术有限公司 The method and device of bandspreading
EP3058569B1 (en) * 2013-10-18 2020-12-09 Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V. Concept for encoding an audio signal and decoding an audio signal using deterministic and noise like information
CN105745705B (en) 2013-10-18 2020-03-20 弗朗霍夫应用科学研究促进协会 Encoder, decoder and related methods for encoding and decoding an audio signal
US20150170655A1 (en) * 2013-12-15 2015-06-18 Qualcomm Incorporated Systems and methods of blind bandwidth extension
KR101864122B1 (en) * 2014-02-20 2018-06-05 삼성전자주식회사 Electronic apparatus and controlling method thereof
CN105225666B (en) 2014-06-25 2016-12-28 华为技术有限公司 The method and apparatus processing lost frames
WO2019002831A1 (en) 2017-06-27 2019-01-03 Cirrus Logic International Semiconductor Limited Detection of replay attack
GB2563953A (en) 2017-06-28 2019-01-02 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB201713697D0 (en) 2017-06-28 2017-10-11 Cirrus Logic Int Semiconductor Ltd Magnetic detection of replay attack
GB201801528D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Method, apparatus and systems for biometric processes
GB201801526D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
GB201801530D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
GB201801527D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Method, apparatus and systems for biometric processes
GB201801532D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for audio playback
GB2567503A (en) * 2017-10-13 2019-04-17 Cirrus Logic Int Semiconductor Ltd Analysing speech signals
GB201804843D0 (en) 2017-11-14 2018-05-09 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB201803570D0 (en) 2017-10-13 2018-04-18 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB201801664D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of liveness
GB201801874D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Improving robustness of speech processing system against ultrasound and dolphin attacks
GB201719734D0 (en) * 2017-10-30 2018-01-10 Cirrus Logic Int Semiconductor Ltd Speaker identification
GB201801663D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of liveness
GB201801659D0 (en) 2017-11-14 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of loudspeaker playback
US11475899B2 (en) 2018-01-23 2022-10-18 Cirrus Logic, Inc. Speaker identification
US11264037B2 (en) 2018-01-23 2022-03-01 Cirrus Logic, Inc. Speaker identification
US11735189B2 (en) 2018-01-23 2023-08-22 Cirrus Logic, Inc. Speaker identification
US10692490B2 (en) 2018-07-31 2020-06-23 Cirrus Logic, Inc. Detection of replay attack
US10915614B2 (en) 2018-08-31 2021-02-09 Cirrus Logic, Inc. Biometric authentication
US11037574B2 (en) 2018-09-05 2021-06-15 Cirrus Logic, Inc. Speaker recognition and speaker change detection
CN111554309A (en) * 2020-05-15 2020-08-18 腾讯科技(深圳)有限公司 Voice processing method, device, equipment and storage medium
CN112927709B (en) * 2021-02-04 2022-06-14 武汉大学 Voice enhancement method based on time-frequency domain joint loss function
CN113470691B (en) * 2021-07-08 2024-08-30 浙江大华技术股份有限公司 Automatic gain control method of voice signal and related device thereof
CN115294947B (en) * 2022-07-29 2024-06-11 腾讯科技(深圳)有限公司 Audio data processing method, device, electronic equipment and medium

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000025305A1 (en) 1998-10-27 2000-05-04 Voiceage Corporation High frequency content recovering method and device for over-sampled synthesized wideband signal
US20030012221A1 (en) 2001-01-24 2003-01-16 El-Maleh Khaled H. Enhanced conversion of wideband signals to narrowband signals
JP2003044098A (en) 2001-07-26 2003-02-14 Nec Corp Device and method for expanding voice band
US6606591B1 (en) 2000-04-13 2003-08-12 Conexant Systems, Inc. Speech coding employing hybrid linear prediction coding
WO2006028009A1 (en) 2004-09-06 2006-03-16 Matsushita Electric Industrial Co., Ltd. Scalable decoding device and signal loss compensation method
US7058079B1 (en) 1999-04-26 2006-06-06 Lucent Technologies Inc. Method for making a call in a multiple bit-rate transmission channel bit-rate switching method, corresponding network section and transmission network
WO2007000988A1 (en) 2005-06-29 2007-01-04 Matsushita Electric Industrial Co., Ltd. Scalable decoder and disappeared data interpolating method
US7191123B1 (en) 1999-11-18 2007-03-13 Voiceage Corporation Gain-smoothing in wideband speech and audio signal decoder
US20080027718A1 (en) 2006-07-31 2008-01-31 Venkatesh Krishnan Systems, methods, and apparatus for gain factor limiting
WO2008076534A2 (en) 2006-12-13 2008-06-26 Motorola, Inc. Code excited linear prediction speech coding
CN101335002A (en) 2007-11-02 2008-12-31 华为技术有限公司 Method and apparatus for audio decoding
JP2009134260A (en) 2007-10-30 2009-06-18 Nippon Telegr & Teleph Corp <Ntt> Voice musical sound false broadband forming device, voice speech musical sound false broadband forming method, and its program and its record medium
KR20090080777A (en) 2008-01-22 2009-07-27 성균관대학교산학협력단 Method and Apparatus for detecting signal
CN101499278A (en) 2008-02-01 2009-08-05 华为技术有限公司 Audio signal switching and processing method and apparatus
US20090222261A1 (en) 2006-01-18 2009-09-03 Lg Electronics, Inc. Apparatus and Method for Encoding and Decoding Signal
CN101751925A (en) 2008-12-10 2010-06-23 华为技术有限公司 Tone decoding method and device
US20100228557A1 (en) 2007-11-02 2010-09-09 Huawei Technologies Co., Ltd. Method and apparatus for audio decoding
CN101964189A (en) 2010-04-28 2011-02-02 华为技术有限公司 Audio signal switching method and device
RU2414009C2 (en) 2006-01-18 2011-03-10 ЭлДжи ЭЛЕКТРОНИКС ИНК. Signal encoding and decoding device and method
WO2011027709A1 (en) 2009-09-04 2011-03-10 三菱重工業株式会社 Outdoor unit of air conditioner
WO2011050347A1 (en) 2009-10-23 2011-04-28 Qualcomm Incorporated Determining an upperband signal from a narrowband signal
CN102044250A (en) 2009-10-23 2011-05-04 华为技术有限公司 Band spreading method and apparatus
JP2011112311A (en) 2009-11-30 2011-06-09 Daikin Industries Ltd Outdoor unit of air conditioner
US20110270614A1 (en) 2010-04-28 2011-11-03 Huawei Technologies Co., Ltd. Method and Apparatus for Switching Speech or Audio Signals
WO2012110482A2 (en) 2011-02-14 2012-08-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise generation in audio codecs
JP6010141B2 (en) 2012-03-01 2016-10-19 ▲ホア▼▲ウェイ▼技術有限公司Huawei Technologies Co.,Ltd. Voice / audio signal processing method and apparatus

Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002528777A (en) 1998-10-27 2002-09-03 ボイスエイジ コーポレイション Method and apparatus for high frequency component recovery of an oversampled synthesized wideband signal
WO2000025305A1 (en) 1998-10-27 2000-05-04 Voiceage Corporation High frequency content recovering method and device for over-sampled synthesized wideband signal
US7058079B1 (en) 1999-04-26 2006-06-06 Lucent Technologies Inc. Method for making a call in a multiple bit-rate transmission channel bit-rate switching method, corresponding network section and transmission network
US7191123B1 (en) 1999-11-18 2007-03-13 Voiceage Corporation Gain-smoothing in wideband speech and audio signal decoder
US6606591B1 (en) 2000-04-13 2003-08-12 Conexant Systems, Inc. Speech coding employing hybrid linear prediction coding
US20030012221A1 (en) 2001-01-24 2003-01-16 El-Maleh Khaled H. Enhanced conversion of wideband signals to narrowband signals
JP2003044098A (en) 2001-07-26 2003-02-14 Nec Corp Device and method for expanding voice band
US20040243402A1 (en) 2001-07-26 2004-12-02 Kazunori Ozawa Speech bandwidth extension apparatus and speech bandwidth extension method
US20070265837A1 (en) 2004-09-06 2007-11-15 Matsushita Electric Industrial Co., Ltd. Scalable Decoding Device and Signal Loss Compensation Method
WO2006028009A1 (en) 2004-09-06 2006-03-16 Matsushita Electric Industrial Co., Ltd. Scalable decoding device and signal loss compensation method
WO2007000988A1 (en) 2005-06-29 2007-01-04 Matsushita Electric Industrial Co., Ltd. Scalable decoder and disappeared data interpolating method
EP1898397A1 (en) 2005-06-29 2008-03-12 Matsushita Electric Industrial Co., Ltd. Scalable decoder and disappeared data interpolating method
US20090222261A1 (en) 2006-01-18 2009-09-03 Lg Electronics, Inc. Apparatus and Method for Encoding and Decoding Signal
RU2414009C2 (en) 2006-01-18 2011-03-10 ЭлДжи ЭЛЕКТРОНИКС ИНК. Signal encoding and decoding device and method
US20080027718A1 (en) 2006-07-31 2008-01-31 Venkatesh Krishnan Systems, methods, and apparatus for gain factor limiting
CN101496101A (en) 2006-07-31 2009-07-29 高通股份有限公司 Systems, methods, and apparatus for gain factor limiting
WO2008076534A2 (en) 2006-12-13 2008-06-26 Motorola, Inc. Code excited linear prediction speech coding
JP2009134260A (en) 2007-10-30 2009-06-18 Nippon Telegr & Teleph Corp <Ntt> Voice musical sound false broadband forming device, voice speech musical sound false broadband forming method, and its program and its record medium
US20100228557A1 (en) 2007-11-02 2010-09-09 Huawei Technologies Co., Ltd. Method and apparatus for audio decoding
CN101335002A (en) 2007-11-02 2008-12-31 华为技术有限公司 Method and apparatus for audio decoding
KR20090080777A (en) 2008-01-22 2009-07-27 성균관대학교산학협력단 Method and Apparatus for detecting signal
CN101499278A (en) 2008-02-01 2009-08-05 华为技术有限公司 Audio signal switching and processing method and apparatus
CN101751925A (en) 2008-12-10 2010-06-23 华为技术有限公司 Tone decoding method and device
WO2011027709A1 (en) 2009-09-04 2011-03-10 三菱重工業株式会社 Outdoor unit of air conditioner
CN102044250A (en) 2009-10-23 2011-05-04 华为技术有限公司 Band spreading method and apparatus
WO2011050347A1 (en) 2009-10-23 2011-04-28 Qualcomm Incorporated Determining an upperband signal from a narrowband signal
US20110099004A1 (en) 2009-10-23 2011-04-28 Qualcomm Incorporated Determining an upperband signal from a narrowband signal
JP2011112311A (en) 2009-11-30 2011-06-09 Daikin Industries Ltd Outdoor unit of air conditioner
CN101964189A (en) 2010-04-28 2011-02-02 华为技术有限公司 Audio signal switching method and device
US20110270614A1 (en) 2010-04-28 2011-11-03 Huawei Technologies Co., Ltd. Method and Apparatus for Switching Speech or Audio Signals
EP2485029A1 (en) 2010-04-28 2012-08-08 Huawei Technologies Co., Ltd. Audio signal switching method and device
WO2012110482A2 (en) 2011-02-14 2012-08-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise generation in audio codecs
JP6010141B2 (en) 2012-03-01 2016-10-19 ▲ホア▼▲ウェイ▼技術有限公司Huawei Technologies Co.,Ltd. Voice / audio signal processing method and apparatus
US9691396B2 (en) * 2012-03-01 2017-06-27 Huawei Technologies Co., Ltd. Speech/audio signal processing method and apparatus
US20170270933A1 (en) 2012-03-01 2017-09-21 Huawei Technologies Co.,Ltd. Speech/audio signal processing method and apparatus
US10013987B2 (en) * 2012-03-01 2018-07-03 Huawei Technologies Co., Ltd. Speech/audio signal processing method and apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ITU-T Recommendation G.729.1 (05/2006), "G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729," May 29, 2006, XP017404590, 98 pages.
H.W. Kim et al., "The Trend of G.729.1 Wideband Multi-codec Technology," ETRI Electronics and Telecommunications Trend Analysis vol. 21, No. 6 (Dec. 2006), 18 pages.
Ragot S. et al., "ITU-T G.729.1: An 8-32 kbit/s Scalable Coder Interoperable with G.729 for Wideband Telephony and Voice over IP," Apr. 15, 2007, XP031463903, 4 pages.
S. RAGOT ; B. KOVESI ; R. TRILLING ; D. VIRETTE ; N. DUC ; D. MASSALOUX ; S. PROUST ; B. GEISER ; M. GARTNER ; S. SCHANDL ; H. TAD: "ITU-T G.729.1: AN 8-32 Kbit/S Scalable Coder Interoperable with G.729 for Wideband Telephony and Voice Over IP", 2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING 15-20 APRIL 2007 HONOLULU, HI, USA, IEEE, PISCATAWAY, NJ, USA, 15 April 2007 (2007-04-15), Piscataway, NJ, USA, pages IV - IV-532, XP031463903, ISBN: 978-1-4244-0727-9

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10559313B2 (en) * 2012-03-01 2020-02-11 Huawei Technologies Co., Ltd. Speech/audio signal processing method and apparatus

Also Published As

Publication number Publication date
MX345604B (en) 2017-02-03
EP2821993A4 (en) 2015-02-25
RU2585987C2 (en) 2016-06-10
US10013987B2 (en) 2018-07-03
KR101702281B1 (en) 2017-02-03
TR201911006T4 (en) 2019-08-21
EP3193331B1 (en) 2019-05-15
US20190318747A1 (en) 2019-10-17
WO2013127364A1 (en) 2013-09-06
ES2741849T3 (en) 2020-02-12
US9691396B2 (en) 2017-06-27
CN103295578A (en) 2013-09-11
JP2015512060A (en) 2015-04-23
EP3193331A1 (en) 2017-07-19
ES2629135T3 (en) 2017-08-07
CN103295578B (en) 2016-05-18
DK3534365T3 (en) 2021-04-12
HUE053834T2 (en) 2021-07-28
PT3193331T (en) 2019-08-27
SG10201608440XA (en) 2016-11-29
RU2616557C1 (en) 2017-04-17
KR20140124004A (en) 2014-10-23
EP2821993A1 (en) 2015-01-07
CN105469805A (en) 2016-04-06
BR112014021407B1 (en) 2019-11-12
US20150006163A1 (en) 2015-01-01
MX364202B (en) 2019-04-16
KR101667865B1 (en) 2016-10-19
US10559313B2 (en) 2020-02-11
EP3534365B1 (en) 2021-01-27
PL3534365T3 (en) 2021-07-12
BR112014021407A2 (en) 2019-04-16
CA2865533C (en) 2017-11-07
JP6558748B2 (en) 2019-08-14
ES2867537T3 (en) 2021-10-20
KR20160121612A (en) 2016-10-19
CA2865533A1 (en) 2013-09-06
IN2014KN01739A (en) 2015-10-23
CN105469805B (en) 2018-01-12
MY162423A (en) 2017-06-15
JP6378274B2 (en) 2018-08-22
US20170270933A1 (en) 2017-09-21
ZA201406248B (en) 2016-01-27
JP6010141B2 (en) 2016-10-19
MX2014010376A (en) 2014-12-05
RU2014139605A (en) 2016-04-20
EP2821993B1 (en) 2017-05-10
US20180374488A1 (en) 2018-12-27
JP2018197869A (en) 2018-12-13
JP2017027068A (en) 2017-02-02
EP3534365A1 (en) 2019-09-04
KR20170013405A (en) 2017-02-06
PT2821993T (en) 2017-07-13
KR101844199B1 (en) 2018-03-30
SG11201404954WA (en) 2014-10-30

Similar Documents

Publication Publication Date Title
US10360917B2 (en) Speech/audio signal processing method and apparatus
US20220044692A1 (en) Method, Apparatus, and System for Processing Audio Data
US9406307B2 (en) Method and apparatus for polyphonic audio signal prediction in coding and networking systems
US9830920B2 (en) Method and apparatus for polyphonic audio signal prediction in coding and networking systems
EP2660812A1 (en) Bandwidth expansion method and apparatus
CN105761724B (en) Voice frequency signal processing method and device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4