WO2015007114A1 - Decoding method and decoding device - Google Patents

Decoding method and decoding device

Info

Publication number
WO2015007114A1
WO2015007114A1 (PCT/CN2014/077096)
Authority
WO
WIPO (PCT)
Prior art keywords
subframe
frame
gain
current frame
subframes
Prior art date
Application number
PCT/CN2014/077096
Other languages
English (en)
French (fr)
Inventor
王宾 (Wang Bin)
苗磊 (Miao Lei)
刘泽新 (Liu Zexin)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2016522198A (JP6235707B2)
Priority to EP19162439.4A (EP3594942B1)
Priority to KR1020157033903A (KR101800710B1)
Priority to SG11201509150UA
Priority to NZ714039A
Priority to CA2911053A (CA2911053C)
Priority to MX2015017002A (MX352078B)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to KR1020177033206A (KR101868767B1)
Priority to EP14826461.7A (EP2983171B1)
Priority to AU2014292680A (AU2014292680B2)
Priority to RU2015155744A (RU2628159C2)
Priority to ES14826461T (ES2746217T3)
Priority to BR112015032273-5A (BR112015032273B1)
Priority to UAA201512807A (UA112401C2)
Publication of WO2015007114A1
Priority to IL242430A (IL242430B)
Priority to ZA2015/08155A (ZA201508155B)
Priority to US14/985,831 (US10102862B2)
Priority to US16/145,469 (US10741186B2)

Classifications

    • GPHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005 Correction of errors induced by the transmission channel, if related to the coding algorithm
    • G10L19/02 Speech or audio signals analysis-synthesis techniques using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0204 Speech or audio signals analysis-synthesis techniques using spectral analysis, using subband decomposition
    • G10L19/0208 Subband vocoders
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • G10L21/038 Speech enhancement using band spreading techniques
    • G10L21/0388 Details of processing therefor

Definitions

  • The present invention relates to the field of codecs, and in particular, to a decoding method and a decoding apparatus.

Background Art
  • Band extension technology is usually used to increase the bandwidth; it is divided into time-domain band extension and frequency-domain band extension.
  • packet loss rate is a key factor affecting signal quality. In the case of packet loss, it is necessary to recover the lost frame as accurately as possible.
  • the decoding end determines whether frame loss occurs by parsing the code stream information. If no frame loss occurs, normal decoding processing is performed. If frame loss occurs, frame loss processing is required.
  • When performing frame loss processing, the decoding end obtains a high-band signal according to the decoding result of the previous frame, and performs gain adjustment on the high-band signal using a preset fixed subframe gain and a global gain obtained by multiplying the global gain of the previous frame by a fixed attenuation factor, to obtain the final high-band signal.
  • Embodiments of the present invention provide a decoding method and a decoding apparatus capable of reducing the noise introduced by frame loss processing, thereby improving voice quality.
  • In a first aspect, a decoding method is provided, including: in a case where the current frame is determined to be a lost frame, synthesizing a high-band signal according to a decoding result of a previous frame of the current frame; determining a subframe gain of at least two subframes of the current frame according to a subframe gain of a subframe of at least one frame before the current frame and a gain gradient between the subframes of the at least one frame; determining a global gain of the current frame; and adjusting the synthesized high-band signal according to the global gain and the subframe gain of the at least two subframes to obtain a high-band signal of the current frame.
  • With reference to the first aspect, in a first possible implementation, determining the subframe gain of the at least two subframes of the current frame includes: determining a subframe gain of a start subframe of the current frame according to the subframe gain of the subframe of the at least one frame and the gain gradient between the subframes of the at least one frame; and determining a subframe gain of the subframes other than the start subframe according to the subframe gain of the start subframe of the current frame and the gain gradient between the subframes of the at least one frame.
  • With reference to the first possible implementation, in a second possible implementation, determining the subframe gain of the start subframe of the current frame includes: estimating a first gain gradient between a last subframe of the previous frame of the current frame and the start subframe of the current frame according to a gain gradient between the subframes of the previous frame of the current frame; and estimating the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient.
  • Estimating the first gain gradient between the last subframe of the previous frame of the current frame and the start subframe of the current frame includes: performing weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, wherein, when the weighted averaging is performed, the weight of a gain gradient between subframes closer to the current frame is larger.
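As a minimal sketch of the weighted averaging described above (the function name and the default weights are illustrative, not the patent's quantized values), the first gain gradient can be computed from the previous frame's inter-subframe gradients, with later gradients weighted more heavily:

```python
def first_gain_gradient(gain_grad_prev, weights=None):
    """Weighted average of the previous frame's inter-subframe gain
    gradients GainGrad[n-1, j]; gradients closer to the current frame
    (larger j) receive larger weights."""
    if weights is None:
        # Illustrative monotonically increasing weights that sum to 1.
        raw = [j + 1 for j in range(len(gain_grad_prev))]
        total = sum(raw)
        weights = [r / total for r in raw]
    return sum(g * w for g, w in zip(gain_grad_prev, weights))
```

With gradients `[0.2, 0.4, 0.6]`, the default weights are `1/6, 2/6, 3/6`, so the most recent gradient dominates the estimate.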
  • The subframe gain of the start subframe is obtained by the following formulas:
  • GainShapeTemp[n, 0] = GainShape[n-1, I-1] + φ1 * GainGradFEC[0]
  • GainShape[n, 0] = GainShapeTemp[n, 0] * φ2
  • where GainShape[n-1, I-1] is the subframe gain of the (I-1)-th subframe of the (n-1)-th frame, GainShape[n, 0] is the subframe gain of the start subframe of the current frame, and GainShapeTemp[n, 0] is the intermediate value of the subframe gain of the start subframe;
  • 0 ≤ φ1 ≤ 1.0 and 0 ≤ φ2 ≤ 1.0; φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
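The two formulas above can be sketched directly (a hypothetical helper; the actual values of φ1 and φ2 are selected by the decoder from the frame type and loss count, which this sketch takes as plain arguments):

```python
def start_subframe_gain(prev_last_gain, first_grad, phi1, phi2):
    """GainShape[n, 0] for the start subframe of the lost frame: an
    intermediate value extrapolates the last received subframe gain
    along the first gain gradient, then phi2 scales the result."""
    temp = prev_last_gain + phi1 * first_grad   # GainShapeTemp[n, 0]
    return temp * phi2                          # GainShape[n, 0]
```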
  • Alternatively, estimating the first gain gradient between the last subframe of the previous frame of the current frame and the start subframe of the current frame includes: using the gain gradient between the subframe preceding the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.
  • When the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame includes I subframes, the subframe gain of the start subframe is obtained by the following formulas:
  • GainShapeTemp[n, 0] = GainShape[n-1, I-1] + λ1 * GainGradFEC[0]
  • GainShapeTemp[n, 0] = min(λ2 * GainShape[n-1, I-1], GainShapeTemp[n, 0])
  • GainShape[n, 0] = max(λ3 * GainShape[n-1, I-1], GainShapeTemp[n, 0])
  • where GainShape[n-1, I-1] is the subframe gain of the (I-1)-th subframe of the previous frame of the current frame, GainShape[n, 0] is the subframe gain of the start subframe, and GainShapeTemp[n, 0] is the intermediate value of the subframe gain of the start subframe;
  • λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes in the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
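The min/max pair above bounds the extrapolated gain relative to the last received subframe gain. A sketch of this clamped variant (hypothetical helper name; λ1, λ2, λ3 passed in rather than derived from frame type and loss count):

```python
def start_subframe_gain_clamped(prev_last_gain, first_grad,
                                lam1, lam2, lam3):
    """Bounded estimate of GainShape[n, 0]: lam2 caps upward growth and
    lam3 floors downward decay, both relative to the last received
    subframe gain GainShape[n-1, I-1]."""
    temp = prev_last_gain + lam1 * first_grad    # GainShapeTemp[n, 0]
    temp = min(lam2 * prev_last_gain, temp)      # upper bound
    return max(lam3 * prev_last_gain, temp)      # lower bound
```

For example, a large positive gradient is capped at `lam2 * prev_last_gain`, and a large negative one is floored at `lam3 * prev_last_gain`, which keeps the concealed gain from jumping far away from the last good frame.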
  • Estimating the subframe gain of the start subframe of the current frame includes: estimating the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • Determining the subframe gain of the subframes other than the start subframe according to the subframe gain of the start subframe of the current frame includes: estimating a gain gradient between at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame; and estimating the subframe gain of the subframes other than the start subframe according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe of the current frame.
  • Each frame includes I subframes, and estimating the gain gradient between at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame includes: performing weighted averaging on the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame and the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame, to estimate the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, wherein the weight of the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame is greater than the weight of the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame.
  • The gain gradient between the subframes of the current frame is determined by the following formula:
  • GainGradFEC[i+1] = GainGrad[n-2, i] * β1 + GainGrad[n-1, i] * β2
  • where GainGradFEC[i+1] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, GainGrad[n-2, i] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame, GainGrad[n-1, i] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame, and β2 > β1.
  • The subframe gain of the subframes other than the start subframe is determined by the following formulas:
  • GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i] * β3
  • GainShape[n, i] = GainShapeTemp[n, i] * β4
  • where GainShape[n, i] is the subframe gain of the i-th subframe of the current frame, GainShapeTemp[n, i] is the intermediate value of the subframe gain of the i-th subframe of the current frame, and 0 ≤ β3 ≤ 1.0, 0 ≤ β4 ≤ 1.0;
  • β3 is determined by the multiple relationship between GainGrad[n-1, i] and GainGrad[n-1, i+1] and by the sign of GainGrad[n-1, i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
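Putting the two preceding formula groups together, the gains of the remaining subframes can be sketched as follows (hypothetical helper; the β weights are passed in, whereas the patent derives them from frame type, loss count, and gradient relationships):

```python
def remaining_subframe_gains(temp0, grads_n2, grads_n1,
                             beta1, beta2, beta3, beta4):
    """GainShape[n, i] for i = 1..I-1 of the lost frame.  Each current
    gradient GainGradFEC[i+1] blends the matching gradients of the two
    previous frames; the gains are then accumulated from the start
    subframe's intermediate value temp0 = GainShapeTemp[n, 0]."""
    num_grads = len(grads_n1)              # I - 1 gradients per frame
    grad_fec = [0.0] * (num_grads + 1)     # grad_fec[0] is not used here
    for i in range(num_grads):
        grad_fec[i + 1] = beta1 * grads_n2[i] + beta2 * grads_n1[i]
    temps, gains = [temp0], []
    for i in range(1, num_grads + 1):
        temps.append(temps[i - 1] + beta3 * grad_fec[i])  # GainShapeTemp[n, i]
        gains.append(beta4 * temps[i])                    # GainShape[n, i]
    return gains
```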
  • In another implementation, each frame includes I subframes, and estimating the gain gradient between at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame includes: estimating the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame by performing weighted averaging on the I gain gradients between the I+1 subframes preceding the i-th subframe of the current frame.
  • When each frame includes four subframes, the gain gradient between at least two subframes of the current frame is determined by the following formulas:
  • GainGradFEC[1] = GainGrad[n-1, 0] * γ1 + GainGrad[n-1, 1] * γ2 + GainGrad[n-1, 2] * γ3 + GainGradFEC[0] * γ4
  • GainGradFEC[2] = GainGrad[n-1, 1] * γ1 + GainGrad[n-1, 2] * γ2 + GainGradFEC[0] * γ3 + GainGradFEC[1] * γ4
  • GainGradFEC[3] = GainGrad[n-1, 2] * γ1 + GainGradFEC[0] * γ2 + GainGradFEC[1] * γ3 + GainGradFEC[2] * γ4
  • where GainGradFEC[j] is the gain gradient between the j-th subframe and the (j+1)-th subframe of the current frame, GainGrad[n-1, j] is the gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received before the current frame; the subframe gain of the subframes other than the start subframe in the at least two subframes is then determined from these gain gradients and the subframe gain of the start subframe.
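A sketch of this four-subframe case, assuming (as the extraction suggests) that each GainGradFEC[j] is a weighted average over the four most recent gradients, including the already-estimated ones, and taking hypothetical weights as an argument:

```python
def current_frame_gradients(grad_fec0, grads_n1, weights):
    """GainGradFEC[1..3] for a four-subframe frame: each value is a
    weighted combination of the four most recent gradients, taken from
    a sliding window over GainGrad[n-1, 0..2] and the already-estimated
    GainGradFEC values."""
    g1, g2, g3, g4 = weights
    hist = list(grads_n1) + [grad_fec0]   # oldest -> newest gradient
    fec = []
    for _ in range(3):
        w0, w1, w2, w3 = hist[-4:]        # four most recent gradients
        fec.append(g1 * w0 + g2 * w1 + g3 * w2 + g4 * w3)
        hist.append(fec[-1])              # estimated gradient joins history
    return fec
```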
  • With reference to the first aspect or any one of the foregoing possible implementations, in a further possible implementation, estimating the subframe gain of the subframes other than the start subframe includes: estimating the subframe gain of the subframes other than the start subframe according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • With reference to the first aspect or any one of the foregoing possible implementations, in a fourteenth possible implementation, estimating the global gain of the current frame includes: estimating a global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; and estimating the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
  • In a second aspect, a decoding method is provided, including: in a case where the current frame is determined to be a lost frame, synthesizing a high-band signal according to a decoding result of a previous frame of the current frame; determining a subframe gain of at least two subframes of the current frame; estimating a global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; estimating a global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame; and adjusting the synthesized high-band signal according to the global gain and the subframe gain of the at least two subframes to obtain a high-band signal of the current frame.
  • GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
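The document later gives the global-gain relation GainFrame = GainFrame_prevfrm * GainAtten. A sketch of that step follows; the attenuation table is purely illustrative, since the patent only states that GainAtten depends on the type of the last received frame and on the number of consecutive lost frames:

```python
def conceal_global_gain(prev_global_gain, last_frame_voiced, num_lost):
    """GainFrame = GainFrame_prevfrm * GainAtten.  The attenuation
    values below are hypothetical placeholders: attenuate faster for
    unvoiced last frames and for longer loss bursts."""
    if last_frame_voiced:
        gain_atten = max(0.5, 1.0 - 0.1 * num_lost)
    else:
        gain_atten = max(0.3, 1.0 - 0.2 * num_lost)
    return prev_global_gain * gain_atten
```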
  • In a third aspect, a decoding apparatus is provided, including: a generating module, configured to synthesize a high-band signal according to a decoding result of a previous frame of a current frame in a case where the current frame is determined to be a lost frame; a determining module, configured to determine a subframe gain of at least two subframes of the current frame according to a subframe gain of a subframe of at least one frame before the current frame and a gain gradient between the subframes of the at least one frame, and to determine a global gain of the current frame; and an adjusting module, configured to adjust the high-band signal synthesized by the generating module according to the global gain determined by the determining module and the subframe gain of the at least two subframes, to obtain a high-band signal of the current frame.
  • The determining module is configured to determine a subframe gain of a start subframe of the current frame according to the subframe gain of the subframe of the at least one frame and the gain gradient between the subframes of the at least one frame, and to determine the subframe gain of the subframes other than the start subframe of the at least two subframes according to the subframe gain of the start subframe of the current frame and the gain gradient between the subframes of the at least one frame.
  • The determining module estimates a first gain gradient between the last subframe of the previous frame of the current frame and the start subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame, and estimates the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient.
  • The determining module performs weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, wherein, when the weighted averaging is performed, the weight of a gain gradient between subframes closer to the current frame is larger.
  • When the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula:
  • GainGradFEC[0] = Σ GainGrad[n-1, j] * αj, where GainGradFEC[0] is the first gain gradient and GainGrad[n-1, j] is the gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame;
  • the subframe gain of the start subframe is obtained by the following formulas:
  • GainShapeTemp[n, 0] = GainShape[n-1, I-1] + φ1 * GainGradFEC[0]
  • GainShape[n, 0] = GainShapeTemp[n, 0] * φ2
  • where GainShape[n-1, I-1] is the subframe gain of the (I-1)-th subframe of the (n-1)-th frame, GainShape[n, 0] is the subframe gain of the start subframe of the current frame, and GainShapeTemp[n, 0] is the intermediate value of the subframe gain of the start subframe;
  • 0 ≤ φ1 ≤ 1.0 and 0 ≤ φ2 ≤ 1.0; φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • the determining module takes the gain gradient between the subframe before the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.
  • The subframe gain of the start subframe is obtained by the following formulas:
  • GainShapeTemp[n, 0] = GainShape[n-1, I-1] + λ1 * GainGradFEC[0]
  • GainShapeTemp[n, 0] = min(λ2 * GainShape[n-1, I-1], GainShapeTemp[n, 0])
  • GainShape[n, 0] = max(λ3 * GainShape[n-1, I-1], GainShapeTemp[n, 0])
  • where GainShape[n-1, I-1] is the subframe gain of the (I-1)-th subframe of the previous frame of the current frame, GainShape[n, 0] is the subframe gain of the start subframe, and GainShapeTemp[n, 0] is the intermediate value of the subframe gain of the start subframe;
  • 0 ≤ λ1 ≤ 1.0, 1 ≤ λ2, 0 ≤ λ3 ≤ 1.0; λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes in the previous frame of the current frame;
  • λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • The determining module is configured to estimate the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • The determining module estimates a gain gradient between at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame, and estimates the subframe gain of the subframes other than the start subframe of the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe.
  • Each frame includes I subframes, and the determining module is configured to perform weighted averaging on the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame and the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame, and to estimate the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, wherein the weight of the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame is greater than the weight of the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame.
  • The gain gradient between at least two subframes of the current frame is determined by the following formula:
  • GainGradFEC[i+1] = GainGrad[n-2, i] * β1 + GainGrad[n-1, i] * β2
  • where GainGradFEC[i+1] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, GainGrad[n-2, i] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame before the previous frame of the current frame, GainGrad[n-1, i] is the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame, and β2 > β1;
  • the subframe gain of the subframes other than the start subframe in the at least two subframes is determined by the following formulas:
  • GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i] * β3
  • GainShape[n, i] = GainShapeTemp[n, i] * β4
  • where GainShape[n, i] is the subframe gain of the i-th subframe of the current frame, GainShapeTemp[n, i] is the intermediate value of the subframe gain of the i-th subframe of the current frame, and 0 ≤ β3 ≤ 1.0, 0 ≤ β4 ≤ 1.0;
  • β3 is determined by the multiple relationship between GainGrad[n-1, i] and GainGrad[n-1, i+1] and by the sign of GainGrad[n-1, i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • Alternatively, the determining module estimates the gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame by performing weighted averaging on the I gain gradients between the I+1 subframes preceding the i-th subframe of the current frame.
  • When each frame includes four subframes, the gain gradient between at least two subframes of the current frame is determined by the following formulas:
  • GainGradFEC[1] = GainGrad[n-1, 0] * γ1 + GainGrad[n-1, 1] * γ2 + GainGrad[n-1, 2] * γ3 + GainGradFEC[0] * γ4
  • GainGradFEC[2] = GainGrad[n-1, 1] * γ1 + GainGrad[n-1, 2] * γ2 + GainGradFEC[0] * γ3 + GainGradFEC[1] * γ4
  • GainGradFEC[3] = GainGrad[n-1, 2] * γ1 + GainGradFEC[0] * γ2 + GainGradFEC[1] * γ3 + GainGradFEC[2] * γ4
  • where GainGradFEC[j] is the gain gradient between the j-th subframe and the (j+1)-th subframe of the current frame, GainGrad[n-1, j] is the gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received;
  • the subframe gain of the subframes other than the start subframe in the at least two subframes is determined by the following formulas:
  • GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i], i = 1, 2, 3
  • where GainShapeTemp[n, 0] is the intermediate value of the subframe gain of the start subframe.
  • The determining module estimates the subframe gain of the subframes other than the start subframe in the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • The determining module estimates the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, and estimates the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
  • In a fourth aspect, a decoding apparatus is provided, including: a generating module, configured to synthesize a high-band signal according to a decoding result of a previous frame of a current frame in a case where the current frame is determined to be a lost frame; a determining module, configured to determine a subframe gain of at least two subframes of the current frame, estimate a global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, and estimate the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame; and an adjusting module, configured to adjust the high-band signal synthesized by the generating module according to the global gain determined by the determining module and the subframe gain of the at least two subframes, to obtain the high-band signal of the current frame.
  • The global gain of the current frame is determined by the following formula:
  • GainFrame = GainFrame_prevfrm * GainAtten
  • where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, and GainAtten is the global gain gradient, determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
  • In the embodiments of the present invention, the subframe gain of the subframes of the current frame is determined according to the subframe gains of the subframes before the current frame and the gain gradient between the subframes before the current frame, and the high-band signal is adjusted using the determined subframe gain of the current frame. Because the subframe gain of the current frame is obtained from the gradient (variation trend) of the subframe gains of the subframes before the current frame, the transition before and after the frame loss has better continuity, thereby reducing the noise of the reconstructed signal and improving voice quality.
  • FIG. 1 is a schematic flow chart of a decoding method in accordance with an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of a decoding method according to another embodiment of the present invention.
  • Figure 3A is a trend diagram showing the variation of the subframe gain of the previous frame of the current frame according to an embodiment of the present invention.
  • Figure 3B is a trend diagram showing the variation of the subframe gain of the previous frame of the current frame according to another embodiment of the present invention.
  • Figure 3C is a trend diagram showing the variation of the subframe gain of the previous frame of the current frame according to still another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a process of estimating a first gain gradient, in accordance with an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a process of estimating a gain gradient between at least two subframes of a current frame, in accordance with an embodiment of the present invention.
  • FIG. 6 is a schematic flow chart of a decoding process in accordance with an embodiment of the present invention.
  • Figure 7 is a schematic block diagram of a decoding apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a decoding apparatus according to another embodiment of the present invention.
  • Figure 9 is a schematic block diagram of a decoding apparatus according to another embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a decoding device according to an embodiment of the present invention.

Detailed Description
  • In speech coding, the speech signal is generally subjected to framing processing, that is, the speech signal is divided into a plurality of frames. When a person speaks, the vibration of the glottis has a certain frequency (corresponding to the pitch period). When the pitch period is small and the frame length is too long, multiple pitch periods may exist in one frame, so that the calculated pitch period is not accurate; therefore, one frame may be divided into multiple subframes.
  • In time-domain bandwidth extension, the core encoder encodes the low-band information of the signal to obtain parameters such as the pitch period, the algebraic codebook and the respective gains, and performs LPC (Linear Predictive Coding) analysis on the high-band information of the signal. At the decoding end, the LSF parameters, the subframe gains and the global gain are inversely quantized, and the LSF parameters are converted into LPC parameters to obtain an LPC synthesis filter; the pitch period, the algebraic codebook and the respective gains are obtained by the core decoder, and a high-band excitation signal is obtained based on the pitch period, the algebraic codebook and the respective gains; the high-band excitation signal is passed through the LPC synthesis filter to synthesize a high-band signal; finally, the high-band signal is gain-adjusted according to the subframe gains and the global gain to recover the high-band signal.
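The final gain-adjustment step described above can be sketched as follows (a hypothetical helper operating on a plain list of samples; the real decoder works on fixed-size subframes of the synthesized high-band signal):

```python
def adjust_high_band(signal, subframe_gains, global_gain):
    """Scale each subframe of the synthesized high-band signal by its
    subframe gain, then apply the global gain.  The signal is split
    evenly across the subframes for this sketch."""
    sub_len = len(signal) // len(subframe_gains)
    out = []
    for i, gain in enumerate(subframe_gains):
        segment = signal[i * sub_len:(i + 1) * sub_len]
        out.extend(global_gain * gain * sample for sample in segment)
    return out
```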
  • whether the frame loss occurs in the current frame may be determined by parsing the code stream information, and if the frame loss does not occur in the current frame, the normal decoding process described above is performed. If the frame loss occurs in the current frame, that is, the current frame is a lost frame, the frame loss processing needs to be performed, that is, the lost frame needs to be recovered.
  • FIG. 1 is a schematic flow chart of a decoding method in accordance with an embodiment of the present invention.
  • The method of FIG. 1 may be performed by a decoder and includes the following steps.
  • the high frequency band signal is synthesized according to the decoding result of the previous frame of the current frame.
  • the decoding end determines whether frame loss occurs by parsing the code stream information. If no frame loss occurs, normal decoding processing is performed, and if frame loss occurs, frame dropping processing is performed.
  • the frame loss processing is performed, first, the high-band excitation signal is generated according to the decoding parameters of the previous frame; secondly, the LPC parameter of the previous frame is copied as the LPC parameter of the current frame, thereby obtaining the LPC synthesis filter; finally, The high-band excitation signal is passed through an LPC synthesis filter to obtain a synthesized high-band signal.
• The subframe gain of a subframe may refer to the ratio of the difference between the synthesized high-band signal of the subframe and the original high-band signal to the synthesized high-band signal. For example, the subframe gain may indicate the ratio of the difference between the amplitude of the synthesized high-band signal of the subframe and the amplitude of the original high-band signal to the amplitude of the synthesized high-band signal.
  • the gain gradient between the sub-frames is used to indicate the trend and extent of the sub-frame gain between adjacent sub-frames, i.e., the amount of gain variation.
  • the gain gradient between the first subframe and the second subframe may refer to a difference between a subframe gain of the second subframe and a subframe gain of the first subframe, and embodiments of the present invention are not limited thereto.
  • the gain gradient between sub-frames can also refer to the sub-frame gain attenuation factor.
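To make these definitions concrete, the following is a minimal sketch (the function names and the values in the comments are illustrative, not from the embodiment):

```python
def subframe_gain(original_amp, synthesized_amp):
    # Ratio of the difference between the synthesized and original high-band
    # amplitudes to the synthesized amplitude, per the definition above.
    return (synthesized_amp - original_amp) / synthesized_amp

def gain_gradient(gain_first, gain_second):
    # Gain gradient between two adjacent subframes: the amount of gain change,
    # e.g. the second subframe's gain minus the first subframe's gain.
    return gain_second - gain_first
```

A frame's sequence of subframe gains thus yields a sequence of gradients that captures the trend and degree of gain change used by the estimation below.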
• The gain variation from the last subframe of the previous frame to the starting subframe of the current frame may be estimated according to the trend and degree of change of the subframe gains between the subframes of the previous frame.
• The subframe gain of the starting subframe of the current frame is then estimated using this gain variation and the subframe gain of the last subframe of the previous frame. Next, the gain variations between the subframes of the current frame are estimated according to the trend and degree of change of the subframe gains between the subframes of at least one frame before the current frame. Finally, the subframe gains of the other subframes of the current frame are estimated using these gain variations and the estimated subframe gain of the starting subframe.
  • the global gain of a frame may refer to the ratio of the difference between the synthesized high band signal of the frame and the original high band signal to the synthesized high band signal.
  • the global gain may represent the ratio of the difference between the amplitude of the synthesized high frequency band signal and the amplitude of the original high frequency band signal to the amplitude of the synthesized high frequency band signal.
  • the global gain gradient is used to indicate the trend and extent of the global gain between adjacent frames.
• The global gain gradient between one frame and another frame may refer to the difference between the global gain of one frame and the global gain of the other frame, and embodiments of the present invention are not limited thereto; for example, the global gain gradient can also refer to a global gain attenuation factor.
  • the global gain of the previous frame of the current frame can be multiplied by a fixed attenuation factor to estimate the global gain of the current frame.
• Embodiments of the present invention may determine the global gain gradient based on the type of the last frame received prior to the current frame and the number of consecutive lost frames before the current frame, and estimate the global gain of the current frame based on the determined global gain gradient.
  • the amplitude of the high band signal of the current frame can be adjusted according to the global gain, and the amplitude of the high band signal of the subframe can be adjusted according to the subframe gain.
• In the embodiment of the present invention, the subframe gains of the subframes of the current frame are determined according to the subframe gains of the subframes before the current frame and the gain gradients between the subframes before the current frame, and the synthesized high-band signal is adjusted using the determined subframe gains of the current frame. Since the subframe gains of the current frame are obtained from the gradient (change trend and degree) of the subframe gains of the subframes before the current frame, the transition before and after the frame loss has better continuity, thereby reducing noise in the reconstructed signal and improving speech quality.
• The gain gradient between the last two subframes of the previous frame may be used as the estimated value of the first gain gradient, and the embodiment of the present invention is not limited thereto; a weighted average of the gain gradients between multiple subframes of the previous frame may also yield the estimate of the first gain gradient.
• The estimated value of the gain gradient between two adjacent subframes of the current frame may be: the gain gradient between the two subframes at the corresponding positions in the previous frame of the current frame; or it may be a weighted average of the gain gradients between several pairs of adjacent subframes preceding the two adjacent subframes.
• The estimated value of the subframe gain of the starting subframe of the current frame may be obtained from the subframe gain of the last subframe of the previous frame and the first gain gradient, for example as their sum, or as the product of the subframe gain of the last subframe of the previous frame and a factor derived from the first gain gradient.
• Specifically: performing weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain a first gain gradient, where, when performing the weighted averaging, the weight of the gain gradient between subframes closer to the current frame in the previous frame is larger; and estimating the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame, the first gain gradient, the type of the last frame received before the current frame (the last normal frame type), and the number of consecutive lost frames before the current frame.
• For example, the two gain gradients between the last three subframes in the previous frame may be used: the gain gradient between the third-to-last and second-to-last subframes and the gain gradient between the second-to-last and last subframes are weighted averaged to obtain the first gain gradient.
• Alternatively, the gain gradients between all adjacent subframes in the previous frame may be weighted averaged.
• The weight of the gain gradient between subframes closer to the current frame in the previous frame may be set to a larger value, so that the estimated value of the first gain gradient is closer to its actual value, the transition before and after the frame loss has better continuity, and speech quality is improved.
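The weighted-average estimation of the first gain gradient described above can be sketched as follows; this is a hedged illustration, and the weight values in the test are assumptions chosen only so that gradients nearer the current frame weigh more:

```python
def first_gain_gradient(prev_frame_gains, weights):
    # Gradients between adjacent subframes of the previous frame.
    grads = [b - a for a, b in zip(prev_frame_gains, prev_frame_gains[1:])]
    # One weight per gradient; weights sum to 1 and the later (closer-to-the-
    # current-frame) gradients get the larger weights.
    assert len(weights) == len(grads) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * g for w, g in zip(weights, grads))
```

With monotonically increasing weights, a recent steep change in subframe gain dominates the estimate, which is the stated intent of the weighting.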
• When estimating the subframe gains, the estimated gains may be adjusted according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame. Specifically, the gain gradients between the subframes of the current frame may first be estimated; then, using these gain gradients together with the subframe gain of the last subframe of the previous frame of the current frame, and taking the last normal frame type before the current frame and the number of consecutive lost frames before the current frame as decision conditions, the subframe gains of all the subframes of the current frame are estimated.
• The type of the last frame received before the current frame may refer to the type of the most recent normal frame (non-lost frame) received by the decoding end before the current frame. For example, suppose the encoding end sends 4 frames to the decoding end, the decoding end correctly receives the first and second frames, and the third and fourth frames are lost; then the last normal frame before the frame loss refers to the second frame.
• The type of a frame may include: (1) a frame with one of several characteristics such as unvoiced, mute, noise, or voiced ending (UNVOICED_CLAS frame); (2) a transition from unvoiced to voiced, where the voiced sound starts but is still weak (UNVOICED_TRANSITION frame); (3) a transition after voiced sound, a frame with weak voiced characteristics (VOICED_TRANSITION frame); (4) a frame with voiced characteristics, where the previous frame is a voiced or voiced onset frame (VOICED_CLAS frame); (5) an onset frame of apparent voiced sound (ONSET frame); (6) an onset frame of mixed harmonics and noise (SIN_ONSET frame); (7) a frame with inactive characteristics (INACTIVE_CLAS frame).
• The number of consecutive lost frames may refer to the number of consecutive lost frames after the last normal frame, that is, the position of the current lost frame within the run of consecutive lost frames. For example, the encoding end sends 5 frames to the decoding end, the decoding end correctly receives the first and second frames, and the third to fifth frames are lost. If the current lost frame is the 4th frame, the number of consecutive lost frames is 2; if the current lost frame is the 5th frame, the number of consecutive lost frames is 3.
• For example, in a case where the type of the current frame (the lost frame) is the same as the type of the last frame received before the current frame and the number of consecutive lost frames is less than or equal to a threshold (for example, 3), the estimated values of the gain gradients between the subframes of the current frame are close to the actual values of the gain gradients between the subframes of the current frame.
• Otherwise, the estimated values of the gain gradients between the subframes of the current frame may be far from the actual values. Therefore, the estimated gains can be adjusted based on the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
• If the decoding end determines that the last normal frame is the onset frame of a voiced or unvoiced sound, it may be determined that the current frame is also likely a voiced frame or an unvoiced frame.
• Whether the type of the current frame is the same as the type of the last frame received before the current frame can be determined according to the last normal frame type before the current frame and the number of consecutive lost frames before the current frame. If they are the same, the gain adjustment coefficient takes a larger value; if not, it takes a smaller value.
• According to an embodiment of the present invention, the first gain gradient is obtained by the following formula (1):
• GainGradFEC[0] = Σ GainGrad[n-1, j] * αj, summed over j = 0, 1, ..., I-2,  (1)
• where GainGradFEC[0] is the first gain gradient, GainGrad[n-1, j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, and the weights αj are larger for gradients closer to the current frame; the subframe gain of the starting subframe is obtained by the following formulas (2) and (3):
• GainShapeTemp[n,0] = GainShape[n-1, I-1] + φ1 * GainGradFEC[0],  (2)
• GainShape[n,0] = GainShapeTemp[n,0] * φ2,  (3)
• where GainShape[n-1, I-1] is the subframe gain of the (I-1)th subframe of the (n-1)th frame, GainShape[n,0] is the subframe gain of the starting subframe of the current frame, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 ≤ φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
• If the type of the last frame received before the current frame is an unvoiced frame and the first gain gradient is positive, φ1 takes a smaller value, for example, less than a preset threshold; if the first gain gradient is negative, φ1 takes a larger value, for example, greater than the preset threshold.
• If the type of the last frame received before the current frame is a voiced frame and the first gain gradient is positive, φ1 takes a larger value, for example, greater than the preset threshold; if the first gain gradient is negative, φ1 takes a smaller value, for example, less than the preset threshold.
• If the type of the current frame is determined to be the same as the type of the last frame received before the current frame, φ2 takes a larger value, for example, greater than a preset threshold; otherwise, φ2 takes a smaller value, for example, less than the preset threshold.
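A minimal sketch of formulas (1) to (3); the α, φ1 and φ2 values in the test are illustrative assumptions, whereas the embodiment derives them from the last-frame type, the sign of the first gain gradient, and the number of consecutive lost frames:

```python
def estimate_start_gain(prev_gains, alphas, phi1, phi2):
    # Gradients between adjacent subframes of frame n-1.
    grads = [b - a for a, b in zip(prev_gains, prev_gains[1:])]
    # Formula (1): weighted average of the previous frame's gain gradients.
    gain_grad_fec0 = sum(a * g for a, g in zip(alphas, grads))
    # Formula (2): intermediate value from the last subframe gain of frame n-1.
    temp = prev_gains[-1] + phi1 * gain_grad_fec0
    # Formula (3): scale the intermediate value to get the starting subframe gain.
    return temp * phi2
```

With φ2 < 1 the estimate is attenuated relative to the extrapolated trend, which is how the decision conditions damp the gain when the frame type is uncertain.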
• According to another embodiment, the gain gradient between the second-to-last subframe and the last subframe of the previous frame of the current frame is used as the first gain gradient; and the subframe gain of the starting subframe of the current frame is estimated according to the subframe gain and the first gain gradient of the last subframe of the previous frame of the current frame, the type of the last frame received before the current frame, and the number of consecutive lost frames before the current frame.
• In this case, the first gain gradient is obtained by the following formula (4):
• GainGradFEC[0] = GainGrad[n-1, I-2],  (4)
• where GainGradFEC[0] is the first gain gradient and GainGrad[n-1, I-2] is the gain gradient between the (I-2)th subframe and the (I-1)th subframe of the previous frame of the current frame;
• the subframe gain of the starting subframe is obtained by the following formulas (5), (6), and (7):
• GainShapeTemp[n,0] = GainShape[n-1, I-1] + λ1 * GainGradFEC[0],  (5)
• The current frame may also be a voiced frame or an unvoiced frame. In this case, the greater the ratio of the subframe gain of the last subframe in the previous frame to the subframe gain of the second-to-last subframe, the larger the value of λ1; the smaller that ratio, the smaller the value of λ1.
• The value of λ1 when the type of the last frame received before the current frame is an unvoiced frame is larger than the value of λ1 when the type of the last frame received before the current frame is a voiced frame.
• If the last normal frame type is an unvoiced frame, the current number of consecutive lost frames is 1, and the current lost frame immediately follows the last normal frame, then the lost frame has a strong correlation with the last normal frame, the energy of the lost frame is close to the energy of the last normal frame, and the coefficients in formulas (6) and (7) can be close to 1; for example, one can be 1.2 and the other 0.8.
• According to an embodiment of the present invention, a weighted average may be performed on the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame and the gain gradient between the ith subframe and the (i+1)th subframe of the frame before the previous frame, where the weight of the gain gradient from the previous frame of the current frame is greater than the weight of the gain gradient from the frame before the previous frame. The subframe gains of the subframes other than the starting subframe in the at least two subframes are then estimated based on the gain gradients between the at least two subframes of the current frame, the subframe gain of the starting subframe, the type of the last frame received before the current frame, and the number of consecutive lost frames before the current frame.
• According to an embodiment of the present invention, the gain gradient between at least two subframes of the current frame is determined by the following formula (8):
• GainGradFEC[i+1] = GainGrad[n-2, i] * β1 + GainGrad[n-1, i] * β2,  (8)
• where GainGradFEC[i+1] is the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, GainGrad[n-2, i] is the gain gradient between the ith subframe and the (i+1)th subframe of the frame before the previous frame of the current frame, GainGrad[n-1, i] is the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame, and β2 > β1.
• The subframe gains of the subframes other than the starting subframe in the at least two subframes are determined by the following formulas:
• GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i] * β3;
• GainShape[n,i] = GainShapeTemp[n,i] * β4;
• where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, 0 < β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by the multiple relationship between GainGrad[n-1,i] and GainGrad[n-1,i+1] and the sign of GainGrad[n-1,i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
• If GainGradFEC[0] is a positive value, then the larger the ratio of GainGrad[n-1,i+1] to GainGrad[n-1,i], the larger the value of β3; if GainGradFEC[0] is a negative value, then the larger the ratio of GainGrad[n-1,i+1] to GainGrad[n-1,i], the smaller the value of β3.
• Depending on the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, β4 takes a smaller value, for example, less than a preset threshold, or a larger value, for example, greater than the preset threshold.
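The estimation of the remaining subframe gains, formula (8) followed by the intermediate-value recursion, can be sketched as follows; the β values in the test are illustrative assumptions:

```python
def estimate_remaining_gains(grads_nm2, grads_nm1, start_temp, b1, b2, b3, b4):
    # start_temp: intermediate value GainShapeTemp[n,0] of the starting subframe.
    # grads_nm2 / grads_nm1: per-position gain gradients of frames n-2 and n-1.
    gains, temp = [], start_temp
    for g2, g1 in zip(grads_nm2, grads_nm1):
        grad_fec = g2 * b1 + g1 * b2   # formula (8); b2 > b1 favors frame n-1
        temp = temp + grad_fec * b3    # intermediate value GainShapeTemp[n,i]
        gains.append(temp * b4)        # subframe gain GainShape[n,i]
    return gains
```

Because each intermediate value builds on the previous one, a single blended gradient propagates smoothly across the lost frame's subframes instead of jumping to a fixed value.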
• According to an embodiment of the present invention, each frame includes I subframes, and estimating the gain gradients between at least two subframes of the current frame according to the gain gradients between the subframes of the at least one frame includes:
• performing weighted averaging on the I gain gradients between the I+1 subframes preceding the ith subframe of the current frame, and estimating the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, ..., I-2, and the weight of the gain gradient between subframes closer to the ith subframe is larger;
• and estimating the subframe gains of the subframes other than the starting subframe in the at least two subframes according to the gain gradients between the at least two subframes of the current frame, the subframe gain of the starting subframe, the type of the last frame received before the current frame, and the number of consecutive lost frames before the current frame.
• According to an embodiment of the present invention, the gain gradients between at least two subframes of the current frame are determined by the following formulas (illustrated for I = 4):
• GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4,
• GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4,
• GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4,
• where GainGradFEC[j] is the gain gradient between the jth subframe and the (j+1)th subframe of the current frame, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, j = 0, 1, 2, ..., I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received; the subframe gains of the subframes other than the starting subframe are then determined by formulas (14), (15) and (16):
• GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i]),  (15)
• GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i]),  (16)
• where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of that subframe gain, and γ5 and γ6 are determined by the type of the last frame received before the current frame.
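The clamping in formulas (15) and (16) bounds each estimated subframe gain relative to the gain of the same subframe in the previous frame; a minimal sketch with illustrative γ5 and γ6 values:

```python
def clamp_gain(prev_gain, temp_gain, g5, g6):
    # Formula (15): cap the intermediate gain at g5 times the previous frame's
    # gain for the same subframe position (upper bound).
    temp = min(g5 * prev_gain, temp_gain)
    # Formula (16): keep the result at least g6 times the previous frame's gain
    # (lower bound), so the gain cannot collapse or explode across the loss.
    return max(g6 * prev_gain, temp)
```

This min/max pair keeps the recovered gain within a corridor around the last good frame's gain, which is what gives the concealed frame its energy continuity.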
• According to an embodiment of the present invention: estimating the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; and estimating the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
• When estimating the global gain, the estimation may be based on the global gain of at least one frame before the current frame (e.g., the previous frame), using the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame to estimate the global gain of the lost frame.
• According to an embodiment of the present invention, the global gain of the current frame is determined by the following formula (17):
• GainFrame = GainFrame_prevfrm * GainAtten,  (17)
• where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
• For example, the decoding end may determine that the global gain gradient is 1 in the case where it determines that the type of the current frame is the same as the type of the last frame received before the current frame and the number of consecutive lost frames is less than or equal to 3.
• In that case, the global gain of the current lost frame can follow the global gain of the previous frame, so the global gain gradient can be determined to be 1.
• Otherwise, the decoding end can determine that the global gain gradient is a smaller value, that is, the global gain gradient can be smaller than a preset threshold.
• For example, the threshold can be set to 0.5.
  • the decoding end may determine the global gain gradient in a case where it is determined that the last normal frame is the start frame of the voiced frame, such that the global gain gradient is greater than the preset first threshold. If the decoding end determines that the last normal frame is the start frame of the voiced frame, it may be determined that the current lost frame is likely to be a voiced frame, and then the global gain gradient may be determined to be a larger value, that is, the global gain gradient may be greater than a preset threshold. .
  • the decoding end may determine the global gain gradient in the case where it is determined that the last normal frame is the start frame of the unvoiced frame, such that the global gain gradient is less than the preset threshold. For example, if the last normal frame is the start frame of the unvoiced frame, then the current lost frame is likely to be an unvoiced frame, then the decoder can determine that the global gain gradient is a small value, ie the global gain gradient can be less than the preset threshold.
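The global-gain logic of formula (17) together with the GainAtten decision rules above can be sketched as follows; the concrete attenuation values are assumptions for illustration (only the 0.5 threshold and the "≤ 3 consecutive losses" rule come from the text), and the frame-type labels reuse the classes listed earlier:

```python
def global_gain(prev_global_gain, last_frame_type, n_lost, current_type_guess):
    # Choose the global gain gradient GainAtten per the decision rules.
    if current_type_guess == last_frame_type and n_lost <= 3:
        gain_atten = 1.0    # follow the previous frame's global gain
    elif last_frame_type == "ONSET":
        gain_atten = 0.8    # voiced onset: larger value, above the 0.5 threshold
    else:
        gain_atten = 0.4    # e.g. unvoiced onset: smaller value, below threshold
    return prev_global_gain * gain_atten   # formula (17)
```

Unlike a fixed attenuation factor, GainAtten here adapts to how confident the decoder can be that the lost frame resembles the last good one.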
• Embodiments of the present invention estimate the subframe gain gradient and the global gain gradient using conditions such as the type of the last frame received before the frame loss occurs and the number of consecutive lost frames, and then combine them with the subframe gains and global gain of at least one previous frame to determine the subframe gains and global gain of the current frame; the two gains are used to perform gain control on the reconstructed high-band signal to output the final high-band signal.
• The embodiment of the present invention does not use fixed values for the subframe gains and global gain required for decoding when frame loss occurs, thereby avoiding the signal-energy discontinuity caused by setting a fixed gain value in the case of frame loss, making the transition before and after the frame loss more natural and stable, weakening the noise phenomenon, and improving the quality of the reconstructed signal.
  • FIG. 2 is a schematic flow chart of a decoding method according to another embodiment of the present invention.
  • the method of Figure 2 is performed by a decoder and includes the following.
  • the high frequency band signal is synthesized according to the decoding result of the previous frame of the current frame.
• According to an embodiment of the present invention, the global gain of the current frame is determined by the following formula:
• GainFrame = GainFrame_prevfrm * GainAtten,
• where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
• FIGS. 3A through 3C are graphs showing trends in the variation of the subframe gain of the previous frame, in accordance with an embodiment of the present invention.
• FIG. 4 is a schematic diagram of a process of estimating a first gain gradient, in accordance with an embodiment of the present invention.
• FIG. 5 is a schematic diagram of a process of estimating a gain gradient between at least two subframes of a current frame, in accordance with an embodiment of the present invention.
  • Figure 6 is a schematic flow diagram of a decoding process in accordance with an embodiment of the present invention.
• The embodiment of FIG. 6 is an example of the decoding method described above.
  • the decoding end parses the code stream information received from the encoding end.
• In the normal decoding process, the LSF parameters, the subframe gains, and the global gain are inverse-quantized, and the LSF parameters are converted into LPC parameters to obtain the LPC synthesis filter; the pitch period, the algebraic codebook, and the respective gains are obtained by the core decoder, and the high-band excitation signal is obtained based on parameters such as the pitch period, the algebraic codebook, and the respective gains; the high-band excitation signal is passed through the LPC synthesis filter to synthesize the high-band signal; finally, the high-band signal is gain-adjusted according to the subframe gains and the global gain to restore the final high-band signal.
  • the frame loss processing includes steps 625 to 660.
• This embodiment is described by taking four subframe gains per frame as an example.
• Let the current frame be the nth frame, that is, the nth frame is the lost frame; the previous frame is the (n-1)th frame; and the frame before the previous frame is the (n-2)th frame.
• The gains of the four subframes of the nth frame are GainShape[n,0], GainShape[n,1], GainShape[n,2] and GainShape[n,3]; the gains of the four subframes of the (n-1)th frame are GainShape[n-1,0], GainShape[n-1,1], GainShape[n-1,2] and GainShape[n-1,3]; and the gains of the four subframes of the (n-2)th frame are GainShape[n-2,0], GainShape[n-2,1], GainShape[n-2,2] and GainShape[n-2,3].
• The embodiment of the present invention applies different estimation algorithms to the subframe gain GainShape[n,0] of the first subframe of the nth frame (that is, the subframe gain of the current frame numbered 0) and to the subframe gains of the last three subframes.
• The estimation process for the subframe gain GainShape[n,0] of the first subframe is: a gain variation is computed from the trend and degree of change between the subframe gains of the (n-1)th frame, and the subframe gain GainShape[n,0] of the first subframe is estimated using this gain variation and the fourth subframe gain GainShape[n-1,3] of the (n-1)th frame (that is, the subframe gain of the previous frame numbered 3), combined with the type of the last frame received before the current frame and the number of consecutive lost frames.
• The estimation flow for the last three subframes is: a gain variation is computed from the trend and degree of change between the subframe gains of the (n-1)th frame and the subframe gains of the (n-2)th frame, and the gains of the last three subframes are estimated using this gain variation and the already estimated subframe gain of the first subframe of the nth frame, combined with the type of the last frame received before the current frame and the number of consecutive lost frames.
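The two-stage flow just described (the first subframe from the trend within frame n-1, the last three subframes from the trend across frames n-2 and n-1) can be sketched, heavily simplified and with illustrative coefficients in place of the embodiment's type-dependent decision logic:

```python
def estimate_subframe_gains(gains_nm1, gains_nm2, phi1, phi2, b1, b2, b3, b4):
    # Stage 1: first subframe of frame n from the trend within frame n-1
    # (here simplified to the last gradient of frame n-1).
    grads_nm1 = [b - a for a, b in zip(gains_nm1, gains_nm1[1:])]
    first = (gains_nm1[-1] + phi1 * grads_nm1[-1]) * phi2
    # Stage 2: last three subframes from the trend across frames n-2 and n-1,
    # weighting the nearer frame n-1 more heavily (b2 > b1).
    grads_nm2 = [b - a for a, b in zip(gains_nm2, gains_nm2[1:])]
    gains, temp = [first], first
    for g2, g1 in zip(grads_nm2, grads_nm1):
        temp = temp + (g2 * b1 + g1 * b2) * b3
        gains.append(temp * b4)
    return gains
```

For a stationary signal (flat gains in both reference frames) the estimate simply carries the previous gain forward, which matches the continuity goal stated above.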
  • the trend and degree (or gradient) of the gain of the n-1th frame are monotonically increasing.
  • the trend and degree (or gradient) of the gain of the n-1th frame are monotonically decreasing.
• In this case, the formula for calculating the first gain gradient can be as follows:
• GainGradFEC[0] = GainGrad[n-1,1] * α1 + GainGrad[n-1,2] * α2,
• where GainGradFEC[0] is the first gain gradient, that is, the gain gradient between the last subframe of the (n-1)th frame and the first subframe of the nth frame, GainGrad[n-1,1] is the gain gradient between the 1st subframe and the 2nd subframe of the (n-1)th frame, GainGrad[n-1,2] is the gain gradient between the 2nd subframe and the 3rd subframe of the (n-1)th frame, α1 + α2 = 1.0, and α2 > α1.
  • the trend and degree (or gradient) of the gain of the n-1th frame are not monotonous (e.g., random).
• In this case, the first gain gradient is calculated as follows:
• GainGradFEC[0] = GainGrad[n-1,0] * α1 + GainGrad[n-1,1] * α2 + GainGrad[n-1,2] * α3, where α1 + α2 + α3 = 1.0.
• Embodiments of the present invention may calculate the intermediate value GainShapeTemp[n,0] of the subframe gain GainShape[n,0] of the first subframe of the nth frame from the type of the last frame received before the nth frame and the first gain gradient GainGradFEC[0]. The specific steps are as follows:
• GainShapeTemp[n,0] = GainShape[n-1,3] + φ1 * GainGradFEC[0],
• where φ1 is determined by the type of the last frame received before the nth frame and the sign of GainGradFEC[0]; GainShape[n,0] is then calculated from the intermediate value GainShapeTemp[n,0]:
• GainShape[n,0] = GainShapeTemp[n,0] * φ2,
• where φ2 is determined by the type of the last frame received before the nth frame and the number of consecutive lost frames before the nth frame.
• An embodiment of the present invention may estimate the gain gradients GainGradFEC[i+1] between the subframes of the current frame according to the gain gradients between the subframes of the (n-1)th frame and the gain gradients between the subframes of the (n-2)th frame, for example using formula (8) above:
• β3 can be determined from GainGrad[n-1,x], for example from the ratio of GainGrad[n-1,i+1] to GainGrad[n-1,i] and the sign of GainGrad[n-1,i+1].
• GainShape[n,i] = GainShapeTemp[n,i] * β4,
• where β4 is determined by the type of the last frame received before the nth frame and the number of consecutive lost frames before the nth frame.
• The global gain gradient GainAtten can be determined by the type of the last frame received before the current frame and the number of consecutive lost frames, where 0 < GainAtten ≤ 1.0.
• The global gain of the current lost frame can then be obtained by the following formula:
• GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame_prevfrm is the global gain of the previous frame.
• Compared with the conventional frame loss processing method in the time-domain high-band extension technology, this embodiment makes the transition at the time of frame loss more natural and stable, weakens the click phenomenon caused by frame loss, and improves the quality of the speech signal.
• Steps 640 and 645 of the embodiment of FIG. 6 may be replaced by the following steps:
• In the second step, based on the subframe gain of the last subframe of the (n-1)th frame, combined with the type of the last frame received before the current frame and the first gain gradient GainGradFEC[0], the intermediate value of the subframe gain of the first subframe is calculated:
• GainShapeTemp[n,0] = GainShape[n-1,3] + λ1 * GainGradFEC[0],
• where GainShape[n-1,3] is the fourth subframe gain of the (n-1)th frame, 0 < λ1 ≤ 1.0, and λ1 is determined by the type of the last frame received before the nth frame and the multiple relationship between the last two subframe gains in the previous frame.
• In the third step, GainShape[n,0] is calculated from the intermediate value GainShapeTemp[n,0]:
• Step 550 of the embodiment of FIG. 5 may be replaced by the following steps: In the first step, the gain gradients GainGradFEC[1] to GainGradFEC[3] of the subframes of the nth frame are predicted according to GainGrad[n-1,x] and GainGradFEC[0]:
• GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4,
• GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4,
• GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4,
• where γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received before the current frame.
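The γ-weighted prediction in this step slides a four-gradient history window forward, feeding each newly predicted gradient back in and always giving the most recent gradient the largest weight γ4. A minimal sketch with illustrative weights:

```python
def predict_gradients(grads_nm1, ggf0, gammas):
    # gammas = (g1, g2, g3, g4) with g4 > g3 > g2 > g1 and sum 1.0.
    # History window starts as GainGrad[n-1,0..2] followed by GainGradFEC[0].
    hist = list(grads_nm1) + [ggf0]
    preds = []
    for _ in range(3):                 # predict GainGradFEC[1..3]
        preds.append(sum(w * x for w, x in zip(gammas, hist[-4:])))
        hist.append(preds[-1])         # feed the prediction back into the window
    return preds
```

Because the weights sum to 1, a constant gradient history is reproduced unchanged, so a steady gain trend continues smoothly through the lost frame.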
• In the second step, the subframe gains GainShape[n,1] to GainShape[n,3] of the nth frame are calculated from the intermediate values GainShapeTemp[n,1] to GainShapeTemp[n,3]:
• GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i]),
• GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i]),
• where i = 1, 2, 3, and γ5 and γ6 are determined by the type of the last frame received before the current frame.
  • FIG. 7 is a schematic block diagram of a decoding apparatus 700 in accordance with an embodiment of the present invention.
  • the decoding device 700 includes a generating module 710, a determining module 720, and an adjusting module 730.
  • the generating module 710 is configured to synthesize the high frequency band signal according to the decoding result of the previous frame of the current frame in the case of determining that the current frame is a lost frame.
• The determining module 720 is configured to determine, according to the subframe gain of a subframe of at least one frame before the current frame and the gain gradients between the subframes of the at least one frame, the subframe gains of at least two subframes of the current frame, and to determine the global gain of the current frame.
  • the adjusting module 730 is configured to adjust the high frequency band signal synthesized by the generating module according to the global gain determined by the determining module and the subframe gain of the at least two subframes to obtain a high frequency band signal of the current frame.
  • According to an embodiment of the present invention, the determining module 720 determines the subframe gain of the starting subframe of the current frame according to the subframe gains of the subframes of the at least one frame and the gain gradient between the subframes of the at least one frame, and determines the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the subframe gain of the starting subframe and the gain gradient between the subframes of the at least one frame.
  • According to an embodiment of the present invention, the determining module 720 estimates a first gain gradient between the last subframe of the previous frame of the current frame and the starting subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame, and estimates the subframe gain of the starting subframe according to the subframe gain of the last subframe of the previous frame and the first gain gradient.
  • According to an embodiment of the present invention, the determining module 720 performs weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, and estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, where, in the weighted averaging, a gain gradient between subframes closer to the current frame in the previous frame is given a larger weight.
  • When the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula:
  • GainGradFEC[0] = Σ GainGrad[n-1,j] * α_j (summed over j = 0, 1, 2, …, I-2),
  • where GainGradFEC[0] is the first gain gradient, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, and Σ α_j = 1; and the subframe gain of the starting subframe is obtained by the following formulas:
  • GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1 * GainGradFEC[0]
  • GainShape[n,0] = GainShapeTemp[n,0] * φ2;
  • where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the (n-1)th frame, GainShape[n,0] is the subframe gain of the starting subframe of the current frame, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 ≤ φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
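A small sketch of this estimate follows, assuming that the gain gradient GainGrad[n-1,j] is the difference between adjacent subframe gains (an assumption for illustration; the patent defines the gradient elsewhere), and with placeholder values for α, φ1 and φ2 that merely satisfy the stated constraints. The function name is hypothetical.

```python
# Hypothetical sketch: weighted-average first gain gradient, then
# extrapolate the starting-subframe gain of the lost frame.
def starting_subframe_gain(gains_prev, alpha=(0.1, 0.2, 0.7), phi1=1.0, phi2=0.8):
    # gains_prev: GainShape[n-1, 0..I-1], subframe gains of the previous frame
    # alpha: nondecreasing weights summing to 1 (later subframes weigh more)
    grads = [gains_prev[j + 1] - gains_prev[j]       # GainGrad[n-1, j]
             for j in range(len(gains_prev) - 1)]
    grad_fec0 = sum(g * a for g, a in zip(grads, alpha))   # GainGradFEC[0]
    shape_temp = gains_prev[-1] + phi1 * grad_fec0         # GainShapeTemp[n,0]
    return shape_temp * phi2                               # GainShape[n,0]
```

For a steadily rising previous frame the extrapolation continues the trend before φ2 attenuates it, which is the continuity property the method aims for.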
  • According to an embodiment of the present invention, the determining module 720 takes the gain gradient between the subframe preceding the last subframe of the previous frame of the current frame and the last subframe of the previous frame as the first gain gradient, and estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • In this case the first gain gradient is GainGradFEC[0] = GainGrad[n-1,I-2], the gain gradient between the (I-2)th subframe and the (I-1)th subframe of the previous frame of the current frame, and the subframe gain of the starting subframe is obtained by the following formulas:
  • GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1 * GainGradFEC[0],
  • GainShapeTemp[n,0] = min(λ2 * GainShape[n-1,I-1], GainShapeTemp[n,0]),
  • GainShape[n,0] = max(λ3 * GainShape[n-1,I-1], GainShapeTemp[n,0]),
  • where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the previous frame of the current frame, GainShape[n,0] is the subframe gain of the starting subframe, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 < λ1 ≤ 1.0, 1 < λ2 ≤ 2, 0 < λ3 ≤ 1.0, λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
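The min/max pair acts as a clamp: the extrapolated starting gain is kept within fixed multiples of the last received subframe gain. A hedged sketch with illustrative λ values (the real values depend on frame type and loss-run length, which the patent leaves to tables); the function name is mine:

```python
# Hypothetical sketch: extrapolate one gradient step, then clamp the result
# to the band [lam3 * last_gain, lam2 * last_gain].
def starting_gain_clamped(last_gain, grad_last, lam1=1.0, lam2=1.2, lam3=0.8):
    # last_gain: GainShape[n-1, I-1]; grad_last: GainGrad[n-1, I-2]
    temp = last_gain + lam1 * grad_last          # GainShapeTemp[n,0]
    temp = min(lam2 * last_gain, temp)           # upper clamp
    return max(lam3 * last_gain, temp)           # lower clamp, GainShape[n,0]
```

The clamp prevents a single steep gradient at the end of the previous frame from producing an implausibly loud or quiet start to the concealed frame.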
  • According to an embodiment of the present invention, each frame includes I subframes, and the determining module 720 performs weighted averaging on the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame and the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and the weight of the gain gradient of the previous frame is larger than that of the frame preceding the previous frame; the determining module 720 then estimates the subframe gains of the subframes other than the starting subframe according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame. According to an embodiment of the invention, the gain gradient between the at least two subframes of the current frame is determined by the following formula:
  • GainGradFEC[i+1] = GainGrad[n-2,i] * β1 + GainGrad[n-1,i] * β2,
  • where GainGradFEC[i+1] is the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, GainGrad[n-2,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, GainGrad[n-1,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, …, I-2; and the subframe gains of the subframes other than the starting subframe are determined by the following formulas:
  • GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i] * β3;
  • GainShape[n,i] = GainShapeTemp[n,i] * β4;
  • where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by the multiple relationship between GainGrad[n-1,i] and GainGrad[n-1,i+1] and the sign of GainGrad[n-1,i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
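The per-subframe recursion above can be sketched as follows. This is an illustrative sketch under stated assumptions: β1 < β2 with β1 + β2 = 1.0 as required, while β3, β4 and the function name are placeholders (the real coefficients come from frame-type and loss-run tables).

```python
# Hypothetical sketch: blend the gradients of frames n-2 and n-1 to predict
# each gradient of the lost frame n, then accumulate and attenuate.
def predict_remaining_gains(grads_n2, grads_n1, start_gain,
                            beta1=0.4, beta2=0.6, beta3=1.0, beta4=0.9):
    # grads_n2: GainGrad[n-2, i]; grads_n1: GainGrad[n-1, i], i = 0..I-2
    # start_gain: GainShapeTemp[n, 0], gain of the starting subframe
    shape_temp = start_gain
    gains = []
    for g2, g1 in zip(grads_n2, grads_n1):
        grad_fec = g2 * beta1 + g1 * beta2       # GainGradFEC[i+1]
        shape_temp = shape_temp + grad_fec * beta3   # GainShapeTemp[n, i+1]
        gains.append(shape_temp * beta4)             # GainShape[n, i+1]
    return gains
```

Weighting the more recent frame n-1 more heavily (β2 > β1) biases the prediction toward the latest observed trend while the older frame damps outliers.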
  • According to an embodiment of the present invention, the determining module 720 performs weighted averaging on the I gain gradients between the I+1 subframes preceding the ith subframe of the current frame to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and a gain gradient between subframes closer to the ith subframe is given a larger weight, and estimates the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • When the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:
  • GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4
  • GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4
  • GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4
  • where GainGradFEC[j] is the gain gradient between the jth subframe and the (j+1)th subframe of the current frame, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, j = 0, 1, 2, …, I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:
  • GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], where i = 1, 2, 3 and GainShapeTemp[n,0] is the first gain gradient;
  • GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i]),
  • GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i]),
  • where GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, i = 1, 2, 3, GainShape[n,i] is the subframe gain of the ith subframe of the current frame, γ5 and γ6 are determined by the type of the last frame received and the number of consecutive lost frames before the current frame, 1 < γ5 ≤ 2, and 0 ≤ γ6 ≤ 1.
  • According to an embodiment of the present invention, the determining module 720 estimates the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, and estimates the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
  • the global gain of the current frame is determined by the following formula:
  • GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
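A sketch of the attenuation rule. The table mapping (frame type, loss-run length) to GainAtten is entirely hypothetical, as are the keys and the function name; the patent only requires 0 < GainAtten ≤ 1.0 and that the value depend on those two inputs.

```python
# Hypothetical sketch of the global-gain update during frame loss concealment.
def global_gain(prev_global_gain, frame_type, n_lost, table=None):
    # Placeholder attenuation table; a real decoder would tune these values.
    table = table or {("voiced", 1): 0.9, ("voiced", 2): 0.8,
                      ("unvoiced", 1): 0.95}
    gain_atten = table.get((frame_type, n_lost), 0.5)  # 0 < GainAtten <= 1.0
    return prev_global_gain * gain_atten               # GainFrame
```

Because GainAtten never exceeds 1.0, the global gain decays monotonically over a run of consecutive lost frames, fading the concealed signal rather than letting it drift louder.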
  • FIG. 8 is a schematic block diagram of a decoding apparatus 800 according to another embodiment of the present invention.
  • the decoding device 800 includes: a generating module 810, a determining module 820, and an adjusting module 830.
  • the generating module 810 in the case of determining that the current frame is a lost frame, synthesizes the high-band signal based on the decoding result of the previous frame of the current frame.
  • The determining module 820 determines the subframe gains of at least two subframes of the current frame, estimates the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, and estimates the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
  • the adjustment module 830 adjusts the high-band signal synthesized by the generating module to obtain the high-band signal of the current frame according to the global gain determined by the determining module and the subframe gain of the at least two subframes.
  • GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
  • FIG. 9 is a schematic block diagram of a decoding device 900 in accordance with an embodiment of the present invention.
  • the decoding device 900 includes a processor 910, a memory 920, and a communication bus 930.
  • The processor 910 is configured to call, by using the communication bus 930, code stored in the memory 920, so as to: in the case of determining that the current frame is a lost frame, synthesize a high frequency band signal according to the decoding result of the previous frame of the current frame; determine the subframe gains of at least two subframes of the current frame according to the subframe gains of the subframes of at least one frame before the current frame and the gain gradient between the subframes of the at least one frame; determine the global gain of the current frame; and adjust the synthesized high frequency band signal according to the global gain and the subframe gains of the at least two subframes to obtain the high frequency band signal of the current frame.
  • According to an embodiment of the present invention, the processor 910 determines the subframe gain of the starting subframe of the current frame according to the subframe gains of the subframes of the at least one frame and the gain gradient between the subframes of the at least one frame, and determines the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the subframe gain of the starting subframe and the gain gradient between the subframes of the at least one frame.
  • According to an embodiment of the present invention, the processor 910 estimates a first gain gradient between the last subframe of the previous frame of the current frame and the starting subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame, and estimates the subframe gain of the starting subframe according to the subframe gain of the last subframe of the previous frame and the first gain gradient.
  • According to an embodiment of the present invention, the processor 910 performs weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, and estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, where, in the weighted averaging, a gain gradient between subframes closer to the current frame in the previous frame is given a larger weight.
  • When the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula:
  • GainGradFEC[0] = Σ GainGrad[n-1,j] * α_j (summed over j = 0, 1, 2, …, I-2), where GainGradFEC[0] is the first gain gradient,
  • GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, and Σ α_j = 1; and the subframe gain of the starting subframe is obtained by the following formulas:
  • GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1 * GainGradFEC[0]
  • GainShape[n,0] = GainShapeTemp[n,0] * φ2;
  • where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the (n-1)th frame, GainShape[n,0] is the subframe gain of the starting subframe of the current frame, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 ≤ φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • According to an embodiment of the present invention, the processor 910 takes the gain gradient between the subframe preceding the last subframe of the previous frame of the current frame and the last subframe of the previous frame as the first gain gradient, and estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • In this case the first gain gradient is GainGradFEC[0] = GainGrad[n-1,I-2], the gain gradient between the (I-2)th subframe and the (I-1)th subframe of the previous frame of the current frame, and the subframe gain of the starting subframe is obtained by the following formulas:
  • GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1 * GainGradFEC[0],
  • GainShapeTemp[n,0] = min(λ2 * GainShape[n-1,I-1], GainShapeTemp[n,0]),
  • GainShape[n,0] = max(λ3 * GainShape[n-1,I-1], GainShapeTemp[n,0]),
  • where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the previous frame of the current frame, GainShape[n,0] is the subframe gain of the starting subframe, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 < λ1 ≤ 1.0, 1 < λ2 ≤ 2, 0 < λ3 ≤ 1.0, λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • According to an embodiment of the present invention, each frame includes I subframes, and the processor 910 performs weighted averaging on the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame and the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where the weight of the gain gradient of the previous frame is larger; the processor 910 then estimates the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • The gain gradient between the at least two subframes of the current frame is determined by the following formula:
  • GainGradFEC[i+1] = GainGrad[n-2,i] * β1 + GainGrad[n-1,i] * β2,
  • where GainGradFEC[i+1] is the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, GainGrad[n-2,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, GainGrad[n-1,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, …, I-2; and the subframe gains of the subframes other than the starting subframe are determined by the following formulas:
  • GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i] * β3;
  • GainShape[n,i] = GainShapeTemp[n,i] * β4;
  • where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by the multiple relationship between GainGrad[n-1,i] and GainGrad[n-1,i+1] and the sign of GainGrad[n-1,i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • According to an embodiment of the present invention, the processor 910 performs weighted averaging on the I gain gradients between the I+1 subframes preceding the ith subframe of the current frame to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where a gain gradient between subframes closer to the ith subframe is given a larger weight, and estimates the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
  • When each frame includes four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:
  • GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4
  • GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4
  • GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4
  • where GainGradFEC[j] is the gain gradient between the jth subframe and the (j+1)th subframe of the current frame, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, j = 0, 1, 2, …, I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:
  • GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], where i = 1, 2, 3 and GainShapeTemp[n,0] is the first gain gradient;
  • GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i]),
  • GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i]),
  • where GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, i = 1, 2, 3, GainShape[n,i] is the subframe gain of the ith subframe of the current frame, γ5 and γ6 are determined by the type of the last frame received and the number of consecutive lost frames before the current frame, 1 < γ5 ≤ 2, and 0 ≤ γ6 ≤ 1.
  • According to an embodiment of the present invention, the processor 910 estimates the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame, and estimates the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
  • FIG. 10 is a schematic structural diagram of a decoding device 1000 according to an embodiment of the present invention.
  • the decoding device 1000 includes a processor 1010, a memory 1020, and a communication bus 1030.
  • The processor 1010 is configured to call, by using the communication bus 1030, code stored in the memory 1020, so as to: in the case of determining that the current frame is a lost frame, synthesize a high frequency band signal according to the decoding result of the previous frame of the current frame; determine the subframe gains of at least two subframes of the current frame; estimate the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; estimate the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame; and adjust the synthesized high frequency band signal according to the global gain and the subframe gains of the at least two subframes to obtain the high frequency band signal of the current frame.
  • GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct connection or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
  • The components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium.
  • The technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes
  • instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Error Detection And Correction (AREA)

Abstract

A decoding method and a decoding device. The decoding method includes: in the case of determining that the current frame is a lost frame, synthesizing a high frequency band signal according to the decoding result of the previous frame (110); determining the subframe gains of multiple subframes of the current frame according to the subframe gains of the subframes of at least one frame before the current frame and the gain gradient between the subframes of the at least one frame (120); determining the global gain of the current frame (130); and adjusting the synthesized high frequency band signal according to the global gain and the subframe gains of the multiple subframes to obtain the high frequency band signal of the current frame (140). Because the subframe gains of the current frame are obtained from the gradient of the subframe gains of the subframes before the current frame, the transition before and after the frame loss has better continuity, which reduces noise in the reconstructed signal and improves speech quality.

Description

Decoding Method and Decoding Device. This application claims priority to Chinese Patent Application No. 201310298040.4, filed with the Chinese Patent Office on July 16, 2013 and entitled "Decoding Method and Decoding Device", which is incorporated herein by reference in its entirety. TECHNICAL FIELD

The present invention relates to the field of encoding and decoding, and in particular, to a decoding method and a decoding device. BACKGROUND

With continuous technological progress, users have increasingly high requirements on voice quality, and increasing the voice bandwidth is the main method of improving voice quality. The bandwidth is usually increased by using a band extension technique; band extension techniques are classified into time-domain band extension techniques and frequency-domain band extension techniques.

In the time-domain band extension technique, the packet loss rate is a key factor affecting signal quality. In the case of packet loss, lost frames need to be recovered as correctly as possible. The decoder determines whether frame loss has occurred by parsing bitstream information; if no frame loss has occurred, normal decoding is performed, and if frame loss has occurred, frame loss concealment needs to be performed.

When performing frame loss concealment, the decoder obtains a high frequency band signal according to the decoding result of the previous frame, performs gain adjustment on the high frequency band signal by using preset fixed subframe gains and a global gain obtained by multiplying the global gain of the previous frame by a fixed attenuation factor, and obtains the final high frequency band signal.

Because the subframe gains used during frame loss concealment are preset fixed values, a spectral discontinuity may occur, so that the transition before and after the frame loss is discontinuous, noise appears in the reconstructed signal, and speech quality is degraded. SUMMARY
Embodiments of the present invention provide a decoding method and a decoding device, which can reduce the noise phenomenon during frame loss concealment, thereby improving speech quality.

According to a first aspect, a decoding method is provided, including: in the case of determining that the current frame is a lost frame, synthesizing a high frequency band signal according to the decoding result of the previous frame of the current frame; determining the subframe gains of at least two subframes of the current frame according to the subframe gains of the subframes of at least one frame before the current frame and the gain gradient between the subframes of the at least one frame; determining the global gain of the current frame; and adjusting the synthesized high frequency band signal according to the global gain and the subframe gains of the at least two subframes to obtain the high frequency band signal of the current frame.

With reference to the first aspect, in a first possible implementation, the determining the subframe gains of at least two subframes of the current frame according to the subframe gains of the subframes of at least one frame before the current frame and the gain gradient between the subframes of the at least one frame includes: determining the subframe gain of the starting subframe of the current frame according to the subframe gains of the subframes of the at least one frame and the gain gradient between the subframes of the at least one frame; and determining the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the subframe gain of the starting subframe of the current frame and the gain gradient between the subframes of the at least one frame. With reference to the first possible implementation, in a second possible implementation, the determining the subframe gain of the starting subframe of the current frame includes: estimating a first gain gradient between the last subframe of the previous frame of the current frame and the starting subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame; and estimating the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient.
With reference to the second possible implementation, in a third possible implementation, the estimating a first gain gradient between the last subframe of the previous frame of the current frame and the starting subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame includes: performing weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, where, in the weighted averaging, a gain gradient between subframes closer to the current frame in the previous frame of the current frame is given a larger weight.

With reference to the second or the third possible implementation, in a fourth possible implementation, when the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula: GainGradFEC[0] = Σ GainGrad[n-1,j] * α_j (summed over j = 0, 1, 2, …, I-2), where GainGradFEC[0] is the first gain gradient, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, and Σ α_j = 1; and the subframe gain of the starting subframe is obtained by the following formulas:

GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1 * GainGradFEC[0]

GainShape[n,0] = GainShapeTemp[n,0] * φ2; where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the (n-1)th frame, GainShape[n,0] is the subframe gain of the starting subframe of the current frame, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 ≤ φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
With reference to the second possible implementation, in a fifth possible implementation, the estimating a first gain gradient includes: taking the gain gradient between the subframe preceding the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.

With reference to the second or the fifth possible implementation, in a sixth possible implementation, when the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula: GainGradFEC[0] = GainGrad[n-1,I-2], where GainGradFEC[0] is the first gain gradient and GainGrad[n-1,I-2] is the gain gradient between the (I-2)th subframe and the (I-1)th subframe of the previous frame of the current frame; and the subframe gain of the starting subframe is obtained by the following formulas:

GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1 * GainGradFEC[0],

GainShapeTemp[n,0] = min(λ2 * GainShape[n-1,I-1], GainShapeTemp[n,0]),

GainShape[n,0] = max(λ3 * GainShape[n-1,I-1], GainShapeTemp[n,0]),

where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the previous frame of the current frame, GainShape[n,0] is the subframe gain of the starting subframe, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 < λ1 ≤ 1.0, 1 < λ2 ≤ 2, 0 < λ3 ≤ 1.0, λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
With reference to any one of the second to the sixth possible implementations, in a seventh possible implementation, the estimating the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient includes: estimating the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame. With reference to any one of the first to the seventh possible implementations, in an eighth possible implementation, the determining the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the subframe gain of the starting subframe of the current frame and the gain gradient between the subframes of the at least one frame includes: estimating the gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame; and estimating the subframe gains of the subframes other than the starting subframe according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe.

With reference to the eighth possible implementation, in a ninth possible implementation, each frame includes I subframes, and the estimating the gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame includes: performing weighted averaging on the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame and the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and the weight of the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame is larger than the weight of the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame.
With reference to the eighth or the ninth possible implementation, in a tenth possible implementation, when the previous frame of the current frame is the (n-1)th frame and the current frame is the nth frame, the gain gradient between the at least two subframes of the current frame is determined by the following formula:

GainGradFEC[i+1] = GainGrad[n-2,i] * β1 + GainGrad[n-1,i] * β2,

where GainGradFEC[i+1] is the gain gradient between the ith subframe and the (i+1)th subframe, GainGrad[n-2,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, GainGrad[n-1,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, …, I-2; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:

GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i] * β3;

GainShape[n,i] = GainShapeTemp[n,i] * β4;

where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by the multiple relationship between GainGrad[n-1,i] and GainGrad[n-1,i+1] and the sign of GainGrad[n-1,i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame. With reference to the eighth possible implementation, in an eleventh possible implementation, each frame includes I subframes, and the estimating the gain gradient between the at least two subframes of the current frame includes: performing weighted averaging on the I gain gradients between the I+1 subframes preceding the ith subframe of the current frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and a gain gradient between subframes closer to the ith subframe is given a larger weight.
With reference to the eighth or the eleventh possible implementation, in a twelfth possible implementation, when the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:

GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4

GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4

GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4

where GainGradFEC[j] is the gain gradient between the jth subframe and the (j+1)th subframe of the current frame, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, j = 0, 1, 2, …, I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:

GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], where i = 1, 2, 3 and GainShapeTemp[n,0] is the first gain gradient;

GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i]),

GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i]),

where i = 1, 2, 3, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, GainShape[n,i] is the subframe gain of the ith subframe of the current frame, γ5 and γ6 are determined by the type of the last frame received and the number of consecutive lost frames before the current frame, 1 < γ5 ≤ 2, and 0 ≤ γ6 ≤ 1. With reference to any one of the eighth to the twelfth possible implementations, in a thirteenth possible implementation, the estimating the subframe gains of the subframes other than the starting subframe includes: estimating the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame. With reference to the first aspect or any one of the foregoing possible implementations, in a fourteenth possible implementation, the estimating the global gain of the current frame includes: estimating the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; and estimating the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
With reference to the fourteenth possible implementation, in a fifteenth possible implementation, the global gain of the current frame is determined by the following formula: GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.

According to a second aspect, a decoding method is provided, including: in the case of determining that the current frame is a lost frame, synthesizing a high frequency band signal according to the decoding result of the previous frame of the current frame; determining the subframe gains of at least two subframes of the current frame; estimating the global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame; estimating the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame; and adjusting the synthesized high frequency band signal according to the global gain and the subframe gains of the at least two subframes to obtain the high frequency band signal of the current frame.

With reference to the second aspect, in a first possible implementation, the global gain of the current frame is determined by the following formula: GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the last frame received and the number of consecutive lost frames before the current frame.
According to a third aspect, a decoding device is provided, including: a generating module, configured to synthesize a high frequency band signal according to the decoding result of the previous frame of the current frame in the case of determining that the current frame is a lost frame; a determining module, configured to determine the subframe gains of at least two subframes of the current frame according to the subframe gains of the subframes of at least one frame before the current frame and the gain gradient between the subframes of the at least one frame, and to determine the global gain of the current frame; and an adjusting module, configured to adjust the high frequency band signal synthesized by the generating module according to the global gain determined by the determining module and the subframe gains of the at least two subframes, to obtain the high frequency band signal of the current frame.

With reference to the third aspect, in a first possible implementation, the determining module determines the subframe gain of the starting subframe of the current frame according to the subframe gains of the subframes of the at least one frame and the gain gradient between the subframes of the at least one frame, and determines the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the subframe gain of the starting subframe of the current frame and the gain gradient between the subframes of the at least one frame.

With reference to the first possible implementation of the third aspect, in a second possible implementation, the determining module estimates a first gain gradient between the last subframe of the previous frame of the current frame and the starting subframe of the current frame according to the gain gradient between the subframes of the previous frame of the current frame, and estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient.

With reference to the second possible implementation of the third aspect, in a third possible implementation, the determining module performs weighted averaging on the gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, where, in the weighted averaging, a gain gradient between subframes closer to the current frame in the previous frame of the current frame is given a larger weight.
With reference to the first or the second possible implementation of the third aspect, in a fourth possible implementation, the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, each frame includes I subframes, and the first gain gradient is obtained by the following formula:

GainGradFEC[0] = Σ GainGrad[n-1,j] * α_j (summed over j = 0, 1, 2, …, I-2), where GainGradFEC[0] is the first gain gradient,

GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, and Σ α_j = 1; and the subframe gain of the starting subframe is obtained by the following formulas:

GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1 * GainGradFEC[0]

GainShape[n,0] = GainShapeTemp[n,0] * φ2

where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the (n-1)th frame, GainShape[n,0] is the subframe gain of the starting subframe of the current frame, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 ≤ φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by the type of the last frame received before the current frame and the sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
With reference to the second possible implementation of the third aspect, in a fifth possible implementation, the determining module takes the gain gradient between the subframe preceding the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.

With reference to the second or the fifth possible implementation of the third aspect, in a sixth possible implementation, when the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes I subframes, the first gain gradient is obtained by the following formula: GainGradFEC[0] = GainGrad[n-1,I-2], where GainGradFEC[0] is the first gain gradient and GainGrad[n-1,I-2] is the gain gradient between the (I-2)th subframe and the (I-1)th subframe of the previous frame of the current frame; and the subframe gain of the starting subframe is obtained by the following formulas:

GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1 * GainGradFEC[0],

GainShapeTemp[n,0] = min(λ2 * GainShape[n-1,I-1], GainShapeTemp[n,0]),

GainShape[n,0] = max(λ3 * GainShape[n-1,I-1], GainShapeTemp[n,0]),

where GainShape[n-1,I-1] is the subframe gain of the (I-1)th subframe of the previous frame of the current frame, GainShape[n,0] is the subframe gain of the starting subframe, GainShapeTemp[n,0] is the intermediate value of the subframe gain of the starting subframe, 0 < λ1 ≤ 1.0, 1 < λ2 ≤ 2, 0 < λ3 ≤ 1.0, λ1 is determined by the type of the last frame received before the current frame and the multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.
With reference to any one of the second to the sixth possible implementations of the third aspect, in a seventh possible implementation, the determining module estimates the subframe gain of the starting subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.

With reference to any one of the first to the seventh possible implementations of the third aspect, in an eighth possible implementation, the determining module estimates the gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame, and estimates the subframe gains of the subframes other than the starting subframe among the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the starting subframe.

With reference to the eighth possible implementation of the third aspect, in a ninth possible implementation, each frame includes I subframes, and the determining module performs weighted averaging on the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame and the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and the weight of the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame is larger than the weight of the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame.
With reference to the eighth or the ninth possible implementation of the third aspect, in a tenth possible implementation, the gain gradient between the at least two subframes of the current frame is determined by the following formula:

GainGradFEC[i+1] = GainGrad[n-2,i] * β1 + GainGrad[n-1,i] * β2,

where GainGradFEC[i+1] is the gain gradient between the ith subframe and the (i+1)th subframe, GainGrad[n-2,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the frame preceding the previous frame of the current frame, GainGrad[n-1,i] is the gain gradient between the ith subframe and the (i+1)th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, …, I-2; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:

GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i] * β3;

GainShape[n,i] = GainShapeTemp[n,i] * β4;

where GainShape[n,i] is the subframe gain of the ith subframe of the current frame, GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by the multiple relationship between GainGrad[n-1,i] and GainGrad[n-1,i+1] and the sign of GainGrad[n-1,i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames before the current frame.

With reference to the eighth possible implementation of the third aspect, in an eleventh possible implementation, the determining module performs weighted averaging on the I gain gradients between the I+1 subframes preceding the ith subframe of the current frame, to estimate the gain gradient between the ith subframe and the (i+1)th subframe of the current frame, where i = 0, 1, …, I-2 and a gain gradient between subframes closer to the ith subframe is given a larger weight.
With reference to the eighth or the eleventh possible implementation of the third aspect, in a twelfth possible implementation, when the previous frame of the current frame is the (n-1)th frame, the current frame is the nth frame, and each frame includes four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:

GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4

GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4

GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4

where GainGradFEC[j] is the gain gradient between the jth subframe and the (j+1)th subframe of the current frame, GainGrad[n-1,j] is the gain gradient between the jth subframe and the (j+1)th subframe of the previous frame of the current frame, j = 0, 1, 2, …, I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3 and γ4 are determined by the type of the last frame received; and the subframe gains of the subframes other than the starting subframe among the at least two subframes are determined by the following formulas:

GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], where i = 1, 2, 3 and GainShapeTemp[n,0] is the first gain gradient;

GainShapeTemp[n,i] = min(γ5 * GainShape[n-1,i], GainShapeTemp[n,i])

GainShape[n,i] = max(γ6 * GainShape[n-1,i], GainShapeTemp[n,i])

where GainShapeTemp[n,i] is the intermediate value of the subframe gain of the ith subframe of the current frame, i = 1, 2, 3, GainShape[n,i] is the gain of the ith subframe of the current frame, γ5 and γ6 are determined by the type of the last frame received and the number of consecutive lost frames before the current frame, 1 < γ5 ≤ 2, and 0 ≤ γ6 ≤ 1.
结合第八种至第十二种可能的实现方式中的任何一种,在第十三种可能 的实现方式中, 确定模块根据当前帧的至少两个子帧间的增益梯度和起始 子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧 以前的连续丟失帧的数目, 估计上述至少两个子帧中除起始子帧之外的其 它子帧的子帧增益。
结合第三方面或上述任何一种可能的实现方式, 在第十四种可能的实现方式中, 确定模块根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前的连续丢失帧的数目估计当前帧的全局增益梯度; 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的全局增益。
结合第三方面的第十四种可能的实现方式, 在第十五种可能的实现方式中, 当前帧的全局增益由以下公式确定: GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
第四方面, 提供了一种解码装置, 包括: 生成模块, 用于在确定当前帧 为丟失帧的情况下, 根据当前帧的前一帧的解码结果合成高频带信号; 确定 模块, 用于确定当前帧的至少两个子帧的子帧增益, 根据在当前帧之前接收 到的最后一个帧的类型、 当前帧以前的连续丟失帧的数目估计当前帧的全局 增益梯度, 并且根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前 帧的全局增益; 调整模块, 用于根据确定模块确定的全局增益和至少两个子 帧的子帧增益,对生成模块合成的高频带信号进行调整以得到当前帧的高频 带信号。
结合第四方面, 在第一种可能的实现方式中, GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
本发明的实施例可以在确定当前帧为丟失帧时,根据当前帧之前的子帧 的子帧增益和当前帧之前的子帧间的增益梯度确定当前帧的子帧的子帧增 益, 并利用所确定的当前帧的子帧增益对高频带信号进行调整。 由于当前帧 的子帧增益是根据当前帧之前的子帧的子帧增益的梯度(变化趋势)得到的, 使得丟帧前后的过渡有更好的连续性, 从而减少了重建信号的杂音, 提高了 语音质量。 附图说明
为了更清楚地说明本发明实施例的技术方案, 下面将对本发明实施例中 所需要使用的附图作简单地介绍, 显而易见地, 下面所描述的附图仅仅是本 发明的一些实施例, 对于本领域普通技术人员来讲, 在不付出创造性劳动的 前提下, 还可以根据这些附图获得其他的附图。
图 1是根据本发明的一个实施例的一种解码方法的示意性流程图。
图 2是根据本发明的另一实施例的解码方法的示意性流程图。
图 3A是根据本发明的一个实施例的当前帧的前一帧的子帧增益的变化趋势图。
图 3B是根据本发明的另一实施例的当前帧的前一帧的子帧增益的变化趋势图。
图 3C是根据本发明的又一实施例的当前帧的前一帧的子帧增益的变化趋势图。
图 4是根据本发明的实施例的估计第一增益梯度的过程的示意图。
图 5是根据本发明的实施例的估计当前帧的至少两个子帧间的增益梯度 的过程的示意图。 图 6是根据本发明的实施例的解码过程的示意性流程图。
图 7是根据本发明的一个实施例的解码装置的示意性结构图。
图 8是根据本发明的另一实施例的解码装置的示意性结构图。
图 9是根据本发明的另一实施例的解码装置的示意性结构图。
图 10是根据本发明的实施例的解码装置的示意性结构图。 具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行 清楚、 完整地描述, 显然, 所描述的实施例是本发明一部分实施例, 而不是 全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有作出创 造性劳动前提下所获得的所有其他实施例, 都属于本发明保护的范围。
在进行语音信号处理时, 为了降低编解码器在进行语音信号处理时的运 算复杂度及处理时延, 一般会将语音信号进行分帧处理, 即将语音信号分为 多个帧。 另外, 在语音发生时, 声门的振动具有一定的频率(对应于基音周 期), 当基音周期较小时, 如果帧长过长, 会导致一帧内会有多个基音周期 存在, 这样计算的基音周期不准确, 因此, 可以将一帧分为多个子帧。
在时域频带扩展技术中, 在编码时, 首先, 由核心编码器对信号的低频 带信息进行编码, 得到的基音周期、 代数码书及各自增益等参数, 并对信号 的高频带信息进行线性预测编码(Linear Predictive Coding, LPC )分析, 得 到高频带 LPC参数, 从而得到 LPC合成滤波器; 其次, 基于基音周期、 代 数码书及各自增益等参数计算得到高频带激励信号, 并由高频带激励信号经 过 LPC合成滤波器合成高频带信号; 然后, 比较原始高频带信号与合成高 频带信号得到子帧增益和全局增益; 最后, 将 LPC 参数转化为 (Linear Spectrum Frequency, LSF )参数, 并将 LSF参数与子帧增益和全局增益量化 后进行编码。
在解码时, 首先, 对 LSF参数、 子帧增益和全局增益进行反量化, 并将 LSF参数转化成 LPC参数, 从而得到 LPC合成滤波器; 其次, 利用由核心 解码器得到基音周期、 代数码书及各自增益等参数, 基于基音周期、 代数码 书及各自增益等参数得到高频带激励信号, 并由高频带激励信号经过 LPC 合成滤波器合成高频带信号; 最后根据子帧增益和全局增益对高频带信号进 行增益调整以恢复丟失帧的高频带信号。 根据本发明的实施例, 可以通过解析码流信息确定当前帧是否发生帧丟 失, 如果当前帧没有发生帧丟失, 则执行上述正常的解码过程。 如果当前帧 发生帧丟失, 即当前帧为丟失帧, 则需要对进行丟帧处理, 即需要恢复丟失 帧。
图 1是根据本发明的实施例的一种解码方法的示意性流程图。 图 1的方法可以由解码器来执行, 包括下列内容。
110, 在确定当前帧为丟失帧的情况下, 根据当前帧的前一帧的解码结 果合成高频带信号。
例如, 解码端通过解析码流信息判断是否发生帧丟失, 若没有发生帧丟 失, 则进行正常的解码处理, 若发生帧丟失, 则进行丟帧处理。 在进行丟帧 处理时, 首先, 才艮据前一帧的解码参数生成高频带激励信号; 其次, 复制前 一帧的 LPC参数作为当前帧的 LPC参数,从而得到 LPC合成滤波器;最后, 将高频带激励信号经过 LPC合成滤波器得到合成的高频带信号。
120, 根据当前帧之前的至少一帧的子帧的子帧增益和上述至少一帧的 子帧之间的增益梯度, 确定当前帧的至少两个子帧的子帧增益。
一个子帧的子帧增益可以指该子帧的合成高频带信号和原始高频带信 号之间的差值与合成高频带信号的比值, 例如, 子帧增益可以表示子帧的合 成高频带信号的幅值和原始高频带信号的幅值之间的差值与合成高频带信 号的幅值的比值。
子帧之间的增益梯度用于指示相邻子帧之间的子帧增益的变化趋势和 程度, 即增益变化量。 例如, 第一子帧与第二子帧之间的增益梯度可以指第 二子帧的子帧增益与第一子帧的子帧增益之间的差值,本发明的实施例并不 限于此, 例如, 子帧之间的增益梯度也可以指子帧增益衰减因子。
例如, 可以根据前一帧的子帧之间的子帧增益的变化趋势和程度估计出 前一帧的最后一个子帧到当前帧的起始子帧 (第一个子帧) 的增益变化量, 并利用该增益变化量与前一帧的最后一个子帧的子帧增益估计出当前帧的 起始子帧的子帧增益; 然后, 根据当前帧之前的至少一帧的子帧之间的子帧 增益的变化趋势和程度估计出当前帧的子帧之间的增益变化量; 最后, 利用 该增益变化量和已经估计出的起始子帧的子帧增益,估计出当前帧的其它子 帧的子帧增益。
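上述外推思路可以用下面的 Python 草稿直观表示(仅为示意: 其中把"增益梯度"具体化为相邻子帧增益之差, 增益数值均为演示用的假设值, 并非本发明的规范实现):

```python
# 示意: 以相邻子帧增益之差作为增益梯度, 由前一帧的变化趋势
# 外推丢失帧各子帧的子帧增益(数值均为演示用的假设值)
prev = [0.9, 0.8, 0.7, 0.6]        # 前一帧四个子帧的子帧增益
grad_last = prev[-1] - prev[-2]    # 前一帧最后两个子帧之间的增益梯度
start_gain = prev[-1] + grad_last  # 外推出当前帧起始子帧的子帧增益
gains = [start_gain]
for j in range(len(prev) - 1):     # 再用前一帧各相邻子帧间的梯度依次外推其余子帧
    gains.append(gains[-1] + (prev[j + 1] - prev[j]))
```

在该假设数据下, 前一帧增益单调递减, 外推得到的当前帧各子帧增益约为 [0.5, 0.4, 0.3, 0.2], 延续了同一变化趋势, 这正是"丢帧前后过渡有更好的连续性"的含义。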
130, 确定当前帧的全局增益。 一帧的全局增益可以指该帧的合成高频带信号和原始高频带信号之间 的差值与合成高频带信号的比值。 例如, 全局增益可以表示合成高频带信号 的幅值和原始高频带信号的幅值的差值与合成高频带信号的幅值的比值。
全局增益梯度用于指示相邻帧之间的全局增益的变化趋势和程度。一帧 与另一帧之间的全局增益梯度可以指一帧的全局增益与另一帧的全局增益 的差值, 本发明的实施例并不限于此, 例如, 一帧与另一帧之间的全局增益 梯度也可以指全局增益衰减因子。
例如, 可以将当前帧的前一帧的全局增益乘以固定的衰减因子估计出当 前帧的全局增益。 特别地, 本发明的实施例可以根据在当前帧之前接收到的 最后一个帧的类型和当前帧以前的连续丟失帧的数目来确定全局增益梯度, 并才艮据确定的全局增益梯度估计当前帧的全局增益。
140, 根据全局增益和至少两个子帧的子帧增益, 对所合成的高频带信 号进行调整(或控制) 以得到当前帧的高频带信号。
例如, 可以根据全局增益调整当前帧的高频带信号的幅值, 并且可以根 据子帧增益调整子帧的高频带信号的幅值。
本发明的实施例可以在确定当前帧为丟失帧时,根据当前帧之前的子帧 的子帧增益和当前帧之前的子帧间的增益梯度确定当前帧的子帧的子帧增 益, 并利用所确定的当前帧的子帧增益对高频带信号进行调整。 由于当前帧 的子帧增益是根据当前帧之前的子帧的子帧增益的梯度 (变化趋势和程度) 得到的,使得丟帧前后的过渡有更好的连续性,从而减少了重建信号的杂音, 提高了语音质量。
根据本发明的实施例, 在 120中, 根据上述至少一帧的子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定当前帧的起始子帧的子帧增益; 根据当前帧的起始子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 在 120中, 根据当前帧的前一帧的子帧之间的增益梯度, 估计当前帧的前一帧的最后一个子帧与当前帧的起始子帧之间的第一增益梯度; 根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯度, 估计当前帧的起始子帧的子帧增益; 根据上述至少一帧的子帧之间的增益梯度, 估计当前帧的至少两个子帧间的增益梯度; 根据当前帧的至少两个子帧间的增益梯度和当前帧的起始子帧的子帧增益, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 可以将前一帧的最后两个子帧之间的增益梯度作 为第一增益梯度的估计值, 本发明的实施例并不限于此, 可以对前一帧的多 个子帧之间的增益梯度进行加权平均得到第一增益梯度的估计值。
例如, 当前帧的两个相邻子帧之间的增益梯度的估计值可以为: 当前帧的前一帧中与这两个相邻子帧在位置上相对应的两个子帧之间的增益梯度与当前帧的前一帧的前一帧中与这两个相邻子帧在位置上相对应的两个子帧之间的增益梯度的加权平均; 或者当前帧的两个相邻子帧之间的增益梯度的估计值可以为: 当前帧的两个相邻子帧之前的若干相邻子帧之间的增益梯度的加权平均。
例如,在两个子帧之间的增益梯度指这两个子帧的增益之间的差值的情 况下, 当前帧的起始子帧的子帧增益的估计值可以为前一帧的最后一个子帧 的子帧增益和第一增益梯度之和。在两个子帧之间的增益梯度指这两个子帧 之间的子帧增益衰减因子情况下, 当前帧的起始子帧的子帧增益可以为前一 帧的最后一个子帧的子帧增益与第一增益梯度的乘积。
在 120中,对当前帧的前一帧的至少两个子帧之间的增益梯度进行加权 平均, 得到第一增益梯度, 其中, 在进行加权平均时, 当前帧的前一帧中距 当前帧越近的子帧之间的增益梯度所占的权重越大; 并且根据当前帧的前一 帧的最后一个子帧的子帧增益和第一增益梯度, 以及在当前帧之前接收到的 最后一个帧的类型(或称为最后一个正常帧类型)和当前帧以前的连续丟失 帧的数目, 估计当前帧的起始子帧的子帧增益。
例如, 在前一帧的子帧之间的增益梯度为单调递增或单调递减的情况 下, 可以将前一帧中的最后三个子帧之间的两个增益梯度(倒数第三个子帧 与倒数第二个子帧之间的增益梯度以及倒数第二个子帧与最后一个子帧之 间的增益梯度)进行加权平均来得到第一增益梯度。 在前一帧的子帧之间的 增益梯不是单调递增或单调递减的情况下, 可以将前一帧中的所有相邻子帧 之间的增益梯度进行加权平均。 因为当前帧之前的两个相邻子帧距离当前帧 越近, 这两个相邻子帧上传输的语音信号与当前帧上传输的语音信号的相关 性越大, 这样, 相邻子帧之间的增益梯度与第一增益梯度的实际值可能越接 近。 因此, 在估计第一增益梯度时, 可以将前一帧中距当前帧越近的子帧之 间的增益梯度的所占的权重设置越大的值, 这样可以使得第一增益梯度的估 计值更接近第一增益梯度的实际值,从而使得丟帧前后的过渡有更好的连续 性, 提高了语音的质量。
根据本发明的实施例, 在估计子帧增益的过程中, 可以根据在当前帧之 前接收到的最后一个帧的类型以及当前帧以前的连续丟失帧的数目对估计 出的增益进行调整。 具体地, 可以首先估计当前帧的各个子帧之间的增益梯 度, 再利用各个子帧之间的增益梯度, 再结合当前帧的前一帧的最后一个子 帧的子帧增益, 并以当前帧之前的最后一个正常帧类型和当前帧以前的连续 丟失帧的数目为判决条件, 估计出当前帧的所有子帧的子帧增益。
例如, 当前帧之前接收到的最后一个帧的类型可以是指解码端接收到当前帧之前的最近的一个正常帧(非丢失帧)的类型。 例如, 假设编码端向解码端发送了 4帧, 其中解码端正确地接收了第 1帧和第 2帧, 而第 3帧和第 4帧丢失, 那么丢帧前最后一个正常帧可以指第 2帧。 通常, 帧的类型可以包括: (1)具有清音、 静音、 噪声或浊音结尾等几种特性之一的帧(UNVOICED_CLAS frame); (2)清音到浊音过渡, 浊音开始但还比较微弱的帧(UNVOICED_TRANSITION frame); (3)浊音之后的过渡, 浊音特性已经很弱的帧(VOICED_TRANSITION frame); (4)浊音特性的帧, 其之前的帧为浊音或者浊音开始帧(VOICED_CLAS frame); (5)明显浊音的开始帧(ONSET frame); (6)谐波和噪声混合的开始帧(SIN_ONSET frame); (7)非活动特性帧(INACTIVE_CLAS frame)。
连续丟失帧的数目可以指最后一个正常帧之后的连续丟失帧的数目或 者可以指当前丟失帧为连续丟失帧的第几帧。 例如, 编码端向解码端发送了 5帧, 解码端正确接收了第 1帧和第 2帧, 第 3帧至第 5帧均丟失。 如果当 前丟失帧为第 4帧, 那么连续丟失帧的数目就是 2; 如果当前丟失帧为第 5 帧, 那么连续丟失帧的数目为 3。
例如, 在当前帧(丢失帧)的类型与在当前帧之前接收到的最后一个帧的类型相同且连续丢失帧的数目小于等于一个阈值(例如, 3)的情况下, 当前帧的子帧间的增益梯度的估计值接近当前帧的子帧间的增益梯度的实际值; 反之, 当前帧的子帧间的增益梯度的估计值远离当前帧的子帧间的增益梯度的实际值。 因此, 可以根据在当前帧之前接收到的最后一个帧的类型和连续丢失帧的数目对估计出的当前帧的子帧间的增益梯度进行调整, 使得调整后的当前帧的子帧间的增益梯度更接近增益梯度的实际值, 从而使得丢帧前后的过渡有更好的连续性, 提高了语音的质量。
例如, 在连续丟失帧的数目小于某个阔值时, 如果解码端确定最后一个 正常帧为浊音帧或清音帧的开始帧, 则可以确定当前帧可能也为浊音帧或清 音帧。 换句话说, 可以根据当前帧之前的最后一个正常帧类型和当前帧以前 的连续丟失帧的数目为判决条件,确定当前帧的类型是否与在当前帧之前接 收到的最后一个帧的类型是否相同, 如果相同, 则调整增益的系数取较大的 值, 如果不相同, 则调整增益的系数取较小的值。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧时, 第一增益梯度由下列公式(1 )得到:
GainGradFEC[0] = ∑_{j=0}^{I-2} GainGrad[n-1,j]*αj, (1)
其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, αj+1≥αj, ∑_{j=0}^{I-2} αj = 1, j = 0, 1, 2, …, I-2; 其中起始子帧的子帧增益由下列公式(2)和(3)得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1*GainGradFEC[0] (2)
GainShape[n,0] = GainShapeTemp[n,0]*φ2 (3)
其中 GainShape[n-1,I-1]为第 n-1帧的第 I-1子帧的子帧增益, GainShape[n,0]为当前帧的起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<φ1≤1.0, 0<φ2≤1.0, φ1由在当前帧之前接收到的最后一个帧的类型和第一增益梯度的正负符号确定, φ2由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
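公式(1)至(3)的计算流程可以写成如下 Python 草稿(示意性质: 函数名、入参组织方式以及示例中的权重与 φ1、φ2 取值均为假设, 实际取值按上文由帧类型等条件判决得到):

```python
def estimate_start_subframe_gain(prev_gains, alphas, phi1, phi2):
    """按公式(1)-(3)估计丢失帧起始子帧的子帧增益(示意)。

    prev_gains: 前一帧各子帧增益 GainShape[n-1, 0..I-1]
    alphas:     I-1 个加权系数, 距当前帧越近权重越大, 且总和为 1
    phi1, phi2: 对应公式中的 φ1、φ2(此处直接作为入参传入)
    """
    I = len(prev_gains)
    # 前一帧相邻子帧之间的增益梯度 GainGrad[n-1, j]
    grads = [prev_gains[j + 1] - prev_gains[j] for j in range(I - 1)]
    # 公式(1): 第一增益梯度为各梯度的加权平均
    gain_grad_fec0 = sum(g * a for g, a in zip(grads, alphas))
    # 公式(2): 起始子帧增益中间值
    temp = prev_gains[-1] + phi1 * gain_grad_fec0
    # 公式(3): 起始子帧增益
    return temp * phi2
```

例如前一帧增益为 [0.2, 0.4, 0.6, 0.8] 且权重偏向靠后的梯度时, 估计出的起始子帧增益会延续约 +0.2 的上升趋势。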
例如, 当当前帧之前接收到的最后一个帧的类型为浊音帧或清音帧时, 如果第一增益梯度为正, 则φ1的取值较小, 例如, 小于预设的阈值; 如果第一增益梯度为负, 则φ1的取值较大, 例如, 大于预设的阈值。
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧或清音帧的开始帧时, 如果第一增益梯度为正, 则φ1的取值较大, 例如, 大于预设的阈值; 如果第一增益梯度为负, 则φ1的取值较小, 例如, 小于预设的阈值。
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧或清音帧时, 且连续丢失帧的数目小于等于 3时, φ2取较小的值, 例如, 小于预设的阈值。
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧的开始帧或清音帧的开始帧时, 且连续丢失帧的数目小于等于 3时, φ2取较大的值, 例如, 大于预设的阈值。
例如, 对于同一类型的帧来说, 连续丢失帧的数目越小, φ2的取值越大。
在 120中, 将当前帧的前一帧的最后一个子帧之前的子帧与当前帧的前一帧的最后一个子帧之间的增益梯度作为第一增益梯度; 并且根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯度, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计当前帧的起始子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧时, 第一增益梯度由下列公式(4 )得到:
GainGradFEC[0] = GainGrad[n-1,I-2], (4)
其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,I-2]为当前帧的前一帧的第 I-2子帧与第 I-1子帧之间的增益梯度,
其中起始子帧的子帧增益由下列公式(5)、(6)和(7)得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1*GainGradFEC[0], (5)
GainShapeTemp[n,0] = min(λ2*GainShape[n-1,I-1], GainShapeTemp[n,0]), (6)
GainShape[n,0] = max(λ3*GainShape[n-1,I-1], GainShapeTemp[n,0]), (7)
其中 GainShape[n-1,I-1]为当前帧的前一帧的第 I-1子帧的子帧增益, GainShape[n,0]为起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<λ1≤1.0, 1<λ2<2, 0<λ3≤1.0, λ1由在当前帧之前接收到的最后一个帧的类型和当前帧的前一帧中的最后两个子帧的子帧增益的倍数关系确定, λ2和λ3由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
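公式(4)至(7)的"外推+钳位"逻辑可以写成如下 Python 草稿(示意: λ1、λ2、λ3 的具体判决规则从略, 直接作为入参假设传入):

```python
def estimate_start_gain_clamped(prev_gains, lam1, lam2, lam3):
    """按公式(4)-(7)估计起始子帧增益(示意): 先用前一帧最后一个增益梯度外推,
    再把结果限制在前一帧最后子帧增益的 [lam3*g, lam2*g] 范围内。"""
    g_last = prev_gains[-1]
    grad_fec0 = prev_gains[-1] - prev_gains[-2]   # 公式(4): GainGradFEC[0] = GainGrad[n-1, I-2]
    temp = g_last + lam1 * grad_fec0              # 公式(5): 中间值
    temp = min(lam2 * g_last, temp)               # 公式(6): 上限钳位
    return max(lam3 * g_last, temp)               # 公式(7): 下限钳位
```

钳位保证了估计出的起始子帧增益与前一帧最后一个子帧的子帧增益相比落在一定范围内, 对应上文 λ2 取 1.2、λ3 取 0.8 的例子。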
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧或清音帧时, 当前帧可能也为浊音帧或清音帧, 这时, 如果前一帧中的最后一个子帧的子帧增益与倒数第二个子帧的子帧增益的比值越大, 则λ1的取值越大; 如果前一帧中的最后一个子帧的子帧增益与倒数第二个子帧的子帧增益的比值越小, 则λ1的取值越小。 另外, 在当前帧之前接收到的最后一个帧的类型为清音帧时的λ1的取值大于在当前帧之前接收到的最后一个帧的类型为浊音帧时的λ1的取值。
例如, 如果最后一个正常帧类型为清音帧, 且当前连续丢帧数目为 1, 则当前丢失帧紧接在最后一个正常帧后面, 丢失帧与最后一个正常帧有很强的相关性, 可判决丢失帧的能量与最后一个正常帧能量比较接近, λ2和λ3的取值可以接近于 1, 例如, λ2可取值 1.2, λ3可取值 0.8。
在 120中, 对当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度和当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重大于当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重; 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 在 120中, 可以对当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度和当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重大于当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重, 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧 时, 当前帧的至少两个子帧间的增益梯度由下列公式(8 )来确定:
GainGradFEC[i+1] = GainGrad[n-2,i]*β1 + GainGrad[n-1,i]*β2, (8)
其中 GainGradFEC[i+1]为第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-2,i]为当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-1,i]为当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, β2>β1, β2+β1=1.0, i=0,1,2,...,I-2; 其中上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式(9)和(10)确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i]*β3; (9)
GainShape[n,i] = GainShapeTemp[n,i]*β4; (10)
其中, GainShape[n,i]为当前帧的第 i子帧的子帧增益, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, 0≤β3≤1.0, 0<β4≤1.0, β3由 GainGrad[n-1,i]与 GainGrad[n-1,i+1]的倍数关系和 GainGrad[n-1,i+1]的正负符号确定, β4由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
例如, 如果 GainGrad[n-1,i+1]为正值, 则 GainGrad[n-1,i+1]与 GainGrad[n-1,i]的比值越大, β3的取值越大; 如果 GainGradFEC[0]为负值, 则 GainGrad[n-1,i+1]与 GainGrad[n-1,i]的比值越大, β3的取值越小。
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧或清音帧时, 且连续丢失帧的数目小于等于 3时, β4取较小的值, 例如, 小于预设的阈值。
例如, 当在当前帧之前接收到的最后一个帧的类型为浊音帧的开始帧或清音帧的开始帧时, 且连续丢失帧的数目小于等于 3时, β4取较大的值, 例如, 大于预设的阈值。
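上文公式(8)至(10)的逐子帧递推过程可以用如下 Python 草稿表示(示意: β1~β4 的判决规则从略, 直接作为入参假设传入):

```python
def estimate_other_subframe_gains(grads_n2, grads_n1, start_gain,
                                  beta1, beta2, beta3, beta4):
    """按公式(8)-(10)估计除起始子帧外其余子帧的子帧增益(示意)。

    grads_n2: 第 n-2 帧相邻子帧间的增益梯度 GainGrad[n-2, i]
    grads_n1: 第 n-1 帧相邻子帧间的增益梯度 GainGrad[n-1, i]
    """
    # 公式(8): 两帧梯度加权平均, 前一帧(n-1)的权重 beta2 更大
    grad_fec = [g2 * beta1 + g1 * beta2 for g2, g1 in zip(grads_n2, grads_n1)]
    gains, temp = [], start_gain
    for g in grad_fec:
        temp = temp + g * beta3      # 公式(9): 子帧增益中间值, 逐子帧累加
        gains.append(temp * beta4)   # 公式(10): 子帧增益
    return gains
```

注意递推变量 temp 保存的是公式(9)的中间值 GainShapeTemp[n,i], 而非乘以 β4 之后的最终子帧增益, 这与公式中 GainShapeTemp 与 GainShape 的区分一致。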
例如, 对于同一类型的帧来说, 连续丢失帧的数目越小, β4的取值越大。
根据本发明的实施例, 每个帧包括 I个子帧, 根据上述至少一帧的子帧之间的增益梯度, 估计当前帧的至少两个子帧间的增益梯度, 包括:
对当前帧的第 i子帧之前的 I+1个子帧之间的 I个增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 距第 i子帧越近的子帧之间的增益梯度所占的权重越大;
其中, 根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 估计上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益, 包括: 根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括四个子帧时, 当前帧的至少两个子帧间的增益梯度由以下公式(11)、(12)和(13)确定:
GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4 (11)
GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4 (12)
GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4 (13)
其中 GainGradFEC[j]为当前帧的第 j子帧与第 j+1子帧之间的增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, j = 0, 1, 2, …, I-2, γ1+γ2+γ3+γ4=1.0, γ4>γ3>γ2>γ1, 其中γ1、γ2、γ3、γ4由接收到的最后一个帧的类型确定, 其中上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式(14)、(15)和(16)确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], (14) 其中 i = 1,2,3, 其中 GainShapeTemp[n,0]为第一增益梯度;
GainShapeTemp[n,i] = min(γ5*GainShape[n-1,i], GainShapeTemp[n,i]) (15)
GainShape[n,i] = max(γ6*GainShape[n-1,i], GainShapeTemp[n,i]) (16)
其中, i = 1,2,3, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, GainShape[n,i]为当前帧的第 i子帧的子帧增益, γ5和γ6由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定, 1<γ5<2, 0≤γ6≤1。
例如, 如果最后一个正常帧类型为清音帧, 且当前连续丢帧数目为 1, 则当前丢失帧紧接在最后一个正常帧后面, 丢失帧与最后一个正常帧有很强的相关性, 可判决丢失帧的能量与最后一个正常帧能量比较接近, γ5和γ6的取值可以接近于 1, 例如, γ5可取值 1.2, γ6可取值 0.8。
在 130中, 根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前 的连续丟失帧的数目估计当前帧的全局增益梯度; 根据全局增益梯度和当前 帧的前一帧的全局增益, 估计当前帧的全局增益。
例如, 在估计全局增益时, 可以以当前帧之前的至少一帧(例如, 前一帧)的全局增益为基础, 并利用在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目等条件, 估计出丢失帧的全局增益。
根据本发明的实施例, 当前帧的全局增益由以下公式(17 )确定:
GainFrame = GainFrame_prevfrm*GainAtten, (17)
其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
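公式(17)的全局增益衰减可以用如下 Python 草稿表示(示意: GainAtten 的判决分支与 0.95、0.5 等取值取自本文后面的举例, 帧类型名称沿用前文列出的类型名, 整体仅为假设性演示, 并非规范实现):

```python
def estimate_global_gain(prev_global_gain, last_frame_type, num_lost):
    """按公式(17) GainFrame = GainFrame_prevfrm * GainAtten 估计全局增益(示意)。

    last_frame_type: 当前帧之前接收到的最后一个正常帧的类型
    num_lost:        当前帧以前的连续丢失帧的数目
    """
    if last_frame_type == "UNVOICED_CLAS":
        gain_atten = 0.95   # 例如摩擦音等清音类帧, 能量较平稳, 衰减接近 1
    elif num_lost > 1:
        gain_atten = 0.5    # 连续丢帧较多时加大衰减
    else:
        gain_atten = 1.0    # 紧跟正常帧的第一个丢失帧, 跟随前一帧能量
    return prev_global_gain * gain_atten
```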
例如,解码端可以在确定当前帧的类型与在当前帧之前接收到的最后一 个帧的类型相同且连续丟失帧的数目小于或等于 3的情况下,确定全局增益 梯度为 1。换句话说, 当前丟失帧的全局增益可以跟随之前的帧的全局增益, 因此可以确定全局增益梯度为 1。
例如, 如果可以确定最后一个正常帧为清音帧或浊音帧, 且连续丢失帧的数目小于或等于 3, 解码端可以确定全局增益梯度为较小的值, 即全局增益梯度可以小于预设的阈值。 例如, 该阈值可以设为 0.5。
例如, 解码端可以在确定最后一个正常帧为浊音帧的开始帧的情况下, 确定全局增益梯度, 使得全局增益梯度大于预设的第一阈值。 如果解码端确定最后一个正常帧为浊音帧的开始帧, 则可以确定当前丢失帧很可能为浊音帧, 那么可以确定全局增益梯度为较大的值, 即全局增益梯度可以大于预设的阈值。
根据本发明的实施例, 解码端可以在确定最后一个正常帧为清音帧的开始帧的情况下, 确定全局增益梯度, 使得全局增益梯度小于预设的阈值。 例如, 如果最后一个正常帧为清音帧的开始帧, 那么当前丢失帧很可能为清音帧, 那么解码端可以确定全局增益梯度为较小的值, 即全局增益梯度可以小于预设的阈值。
本发明的实施例利用发生丢帧之前接收到的最后一个帧的类型以及连续丢失帧的数目等条件估计出子帧增益梯度和全局增益梯度, 然后结合先前的至少一帧的子帧增益和全局增益确定当前帧的子帧增益和全局增益, 并利用这两个增益对重建的高频带信号进行增益控制输出最终的高频带信号。 本发明的实施例在发生丢帧时解码所需的子帧增益和全局增益的值并未采用固定值, 从而避免了在发生丢帧的情况下由于设定固定的增益值而导致的信号能量不连续, 使得丢帧前后的过渡更加自然平稳, 削弱杂音现象, 提高了重建信号的质量。
图 2是根据本发明的另一实施例的解码方法的示意性流程图。 图 2的方 法由解码器执行, 包括下列内容。
210 , 在确定当前帧为丟失帧的情况下, 根据当前帧的前一帧的解码结 果合成高频带信号。
220, 确定当前帧的至少两个子帧的子帧增益。
230 , 根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前的连 续丟失帧的数目估计当前帧的全局增益梯度。
240, 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的 全局增益。
250 , 根据全局增益和至少两个子帧的子帧增益, 对所合成的高频带信 号进行调整以得到当前帧的高频带信号。
根据本发明的实施例, 当前帧的全局增益由以下公式确定:
GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
图 3A至图 3C是根据本发明的实施例的前一帧的子帧增益的变化趋势图。 图 4是根据本发明的实施例的估计第一增益梯度的过程的示意图。 图 5是根据本发明的实施例的估计当前帧的至少两个子帧间的增益梯度的过程的示意图。 图 6是根据本发明的实施例的一种解码过程的示意性流程图。 图 6的实施例是图 1的方法的例子。
610, 解码端对从编码端接收到的码流信息进行解析。
615, 根据从码流信息中解析出的丟帧标志, 判断是否发生帧丟失。 620 , 如果没有发生帧丟失, 则根据从码流中得到的码流参数进行正常 的解码处理。
在解码时, 首先, 对 LSF参数和子帧增益和全局增益进行反量化, 并将 LSF参数转化成 LPC参数, 从而得到 LPC合成滤波器; 其次, 利用由核心 解码器得到基音周期、 代数码书及各自增益等参数, 基于基音周期、 代数码 书及各自增益等参数得到高频带激励信号, 并由高频带激励信号经过 LPC 合成滤波器合成高频带信号; 最后根据子帧增益和全局增益对高频带信号进 行增益调整恢复最终的高频带信号。
如果发生了帧丟失, 则进行丟帧处理。 丟帧处理包括步骤 625至 660。
625 , 利用由核心解码器得到前一帧的基音周期、 代数码书及各自增益 等参数,并基于基音周期、代数码书及各自增益等参数得到高频带激励信号。
630, 复制前一帧的 LPC参数。
635,根据前一帧的 LPC得到 LPC合成滤波器, 并将高频带激励信号经 过 LPC合成滤波器合成高频带信号。
640, 根据前一帧的子帧之间的增益梯度, 估计前一帧的最后一个子帧 到当前帧的起始子帧的第一增益梯度。
本实施例以每帧共有四个子帧增益为例进行说明。 设当前帧为第 n帧, 即第 n帧为丢失帧, 前一帧为第 n-1帧, 前一帧的前一帧为第 n-2帧, 第 n帧的四个子帧的增益为 GainShape[n,0], GainShape[n,1], GainShape[n,2]和 GainShape[n,3], 依次类推, 第 n-1帧的四个子帧的增益为 GainShape[n-1,0], GainShape[n-1,1], GainShape[n-1,2]和 GainShape[n-1,3], 第 n-2帧的四个子帧的增益为 GainShape[n-2,0], GainShape[n-2,1], GainShape[n-2,2]和 GainShape[n-2,3]。 本发明的实施例对第 n帧的第一个子帧的子帧增益 GainShape[n,0] (即当前帧的编号为 0的子帧增益)和后三个子帧的子帧增益采用不同的估计算法。 第一个子帧的子帧增益 GainShape[n,0]的估计流程为: 由第 n-1帧子帧增益之间的变化趋势和程度求取一个增益变化量, 利用这个增益变化量和第 n-1帧的第四个子帧增益 GainShape[n-1,3] (即前一帧的编号为 3的子帧增益), 结合在当前帧之前接收到的最后一个帧的类型以及连续丢失帧的数目估计出第一个子帧的子帧增益 GainShape[n,0]; 后三个子帧的估计流程为: 由第 n-1帧的子帧增益和第 n-2帧的子帧增益之间的变化趋势和程度求取一个增益变化量, 利用这个增益变化量和已经估计出的第 n帧的第一个子帧的子帧增益, 结合在当前帧之前接收到的最后一个帧的类型以及连续丢失帧的数目估计出后三个子帧增益。
如图 3A所示, 第 n-1帧的增益的变化趋势和程度 (或梯度) 为单调递 增。 如图 3B所示, 第 n-1帧的增益的变化趋势和程度(或梯度) 为单调递 减。 第一增益梯度的计算公式可以如下:
GainGradFEC[0] = GainGrad[n-1,1]*α1 + GainGrad[n-1,2]*α2
其中, GainGradFEC[0]为第一增益梯度, 即第 n-1帧的最后一个子帧与第 n帧的第一个子帧之间的增益梯度, GainGrad[n-1,1]为第 n-1帧的第 1子帧到第 2子帧之间的增益梯度, GainGrad[n-1,2]为第 n-1帧的第 2子帧到第 3子帧之间的增益梯度, α2>α1, α1+α2=1, 即距第 n帧越近的子帧之间的增益梯度所占的权重越大, 例如, α1=0.1, α2=0.9。
如图 3C所示, 第 n-1帧的增益的变化趋势和程度(或梯度) 为不单调 (例如, 是随机的)。 增益梯度计算公式如下:
GainGradFEC[0] = GainGrad[n-1,0]*α1 + GainGrad[n-1,1]*α2 + GainGrad[n-1,2]*α3
其中, α3>α2>α1, α1+α2+α3=1.0, 即距第 n帧越近的子帧之间的增益梯度所占的权重越大, 例如, α1=0.2, α2=0.3, α3=0.5。
645 , 根据前一帧的最后一个子帧的子帧增益和第一增益梯度, 估计当 前帧的起始子帧的子帧增益。
本发明的实施例可以由第 n帧之前接收到的最后一个帧的类型和第一增 益梯度 GainGradFEC[0]计算第 n帧的第一个子帧的子帧增益 GainShape[n,0] 的中间量 GainShapeTemp[n,0]。 具体步骤如下:
GainShapeTemp[n,0] = GainShape[n-1,3] + φ1*GainGradFEC[0],
其中, 0≤φ1≤1.0, φ1由第 n帧之前接收到的最后一个帧的类型和 GainGradFEC[0]的正负确定。
由中间量 GainShapeTemp[n,0]计算得到 GainShape[n,0]:
GainShape[n,0] = GainShapeTemp[n,0]*φ2
其中φ2由第 n帧之前接收到的最后一个帧的类型和第 n帧以前的连续丢失帧的数目确定。
650, 根据上述至少一帧的子帧之间的增益梯度, 估计当前帧的多个子帧间的增益梯度; 根据当前帧的多个子帧间的增益梯度和起始子帧的子帧增益, 估计当前帧中除起始子帧之外的其它子帧的子帧增益。
参见图 5, 本发明的实施例可以根据第 n-1帧的子帧间的增益梯度和第 n-2帧的子帧间的增益梯度来估计当前帧的至少两个子帧间的增益梯度 GainGradFEC[i+1]:
GainGradFEC[i+1] = GainGrad[n-2,i]*β1 + GainGrad[n-1,i]*β2, 其中 i = 0,1,2, β1+β2=1.0, 即距第 n帧越近的子帧间的增益梯度所占的权重越大, 例如, β1=0.4, β2=0.6。
按照下列公式计算各个子帧的子帧增益的中间量 GainShapeTemp[n,i]:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i]*β3
其中, i = 1,2,3; 0≤β3≤1.0, β3可以由 GainGrad[n-1,x]确定, 例如, 当 GainGrad[n-1,2]大于 10.0*GainGrad[n-1,1]且 GainGrad[n-1,1]大于 0时, β3取值为 0.8。
按照下列公式计算各个子帧的子帧增益:
GainShape[n,i] = GainShapeTemp[n,i]*β4
其中, i = 1,2,3, β4由第 n帧之前接收到的最后一个帧的类型和第 n帧以前的连续丢失帧的数目决定。
655, 根据当前帧之前接收到的最后一个帧的类型、 当前帧以前的连续 丟失帧的数目估计全局增益梯度。
全局增益梯度 GainAtten可以由当前帧之前接收到的最后一个帧的类型和连续丢失帧的数目确定, 0<GainAtten≤1.0。 例如, 确定全局增益梯度的基本原则可以是: 当在当前帧之前接收到的最后一个帧的类型为摩擦音时, 全局增益梯度取接近于 1的值, 如 GainAtten = 0.95; 当连续丢失帧的数目大于 1时, 全局增益梯度取较小(例如, 接近于 0)的值, 如 GainAtten = 0.5。
660, 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的 全局增益。 当前丟失帧的全局增益可以由下列公式得到:
GainFrame=GainFrame_prevfrm*GainAtten, 其中, GainFrame_prevfrm 为前一帧的全局增益。
665 , 根据全局增益和各子帧增益对合成的高频带信号进行增益调整, 从而恢复当前帧的高频带信号。 该步骤与常规技术类似, 在此不再赘述。
本发明的实施例改进了时域高频带扩展技术中的常规丢帧处理方法, 使得发生丢帧时的过渡更加自然平稳, 削弱了丢帧所导致的杂音(click)现象, 提高了语音信号的质量。
可选地, 作为另一实施例, 图 6的实施例的 640和 645可以替代为下列步骤:
第一步: 将第 n-1帧(前一帧)中倒数第二个子帧的子帧增益到最后一个子帧的子帧增益的变化梯度 GainGrad[n-1,2]作为第一增益梯度 GainGradFEC[0], 即 GainGradFEC[0] = GainGrad[n-1,2]。
第二步: 以第 n-1帧的最后一个子帧的子帧增益为基础, 结合在当前帧之前接收到的最后一个帧的类型和第一增益梯度 GainGradFEC[0]计算第一个子帧增益 GainShape[n,0]的中间量 GainShapeTemp[n,0]:
GainShapeTemp[n,0] = GainShape[n-1,3] + λ1*GainGradFEC[0]
其中, GainShape[n-1,3]为第 n-1帧的第四个子帧增益, 0<λ1≤1.0, λ1由第 n帧之前接收到的最后一个帧的类型和前一帧中最后两个子帧增益的倍数关系确定。
第三步: 由中间量 GainShapeTemp[n,0]计算得到 GainShape[n,0]:
GainShapeTemp[n,0] = min(λ2*GainShape[n-1,3], GainShapeTemp[n,0]),
GainShape[n,0] = max(λ3*GainShape[n-1,3], GainShapeTemp[n,0]),
其中, λ2和λ3由在当前帧之前接收到的最后一个帧的类型和连续丢失帧的数目确定, 并且使得所估计的第一个子帧的子帧增益 GainShape[n,0]与第 n-1帧的最后一个子帧的子帧增益 GainShape[n-1,3]相比在一定的范围内。
可选地, 作为另一实施例, 图 6的实施例的 650可以替代为下列步骤:
第一步: 根据 GainGrad[n-1,x]和 GainGradFEC[0]来预测估计第 n帧的各个子帧间的增益梯度 GainGradFEC[1]~GainGradFEC[3]:
GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4
GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4
GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4
其中γ1+γ2+γ3+γ4=1.0, γ4>γ3>γ2>γ1, γ1、γ2、γ3、γ4由在当前帧之前接收到的最后一个帧的类型确定。
第二步: 计算第 n帧的各个子帧的子帧增益 GainShape[n,1]~GainShape[n,3]的中间量 GainShapeTemp[n,1]~GainShapeTemp[n,3]:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], 其中 i = 1,2,3, GainShapeTemp[n,0]为第 n帧的第一个子帧的子帧增益。
第三步: 由中间量 GainShapeTemp[n,1]~GainShapeTemp[n,3]计算得到第 n帧的各个子帧的子帧增益 GainShape[n,1]~GainShape[n,3]:
GainShapeTemp[n,i] = min(γ5*GainShape[n-1,i], GainShapeTemp[n,i]),
GainShape[n,i] = max(γ6*GainShape[n-1,i], GainShapeTemp[n,i]),
其中, i = 1,2,3, γ5和γ6由第 n帧之前接收到的最后一个帧的类型和第 n帧以前的连续丢失帧的数目确定。
图 7是根据本发明的实施例的一种解码装置 700的示意性结构图。 解码装置 700包括生成模块 710、 确定模块 720和调整模块 730。
生成模块 710用于在确定当前帧为丟失帧的情况下,根据当前帧的前一 帧的解码结果合成高频带信号。确定模块 720用于根据当前帧之前的至少一 帧的子帧的子帧增益和上述至少一帧的子帧之间的增益梯度,确定当前帧的 至少两个子帧的子帧增益, 并且确定当前帧的全局增益。 调整模块 730用于 根据确定模块确定的全局增益和至少两个子帧的子帧增益对生成模块合成 的高频带信号进行调整以得到当前帧的高频带信号。
根据本发明的实施例, 确定模块 720根据上述至少一帧的子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定当前帧的起始子帧的子帧增益, 并且根据当前帧的起始子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 确定模块 720根据当前帧的前一帧的子帧之间的增益梯度, 估计当前帧的前一帧的最后一个子帧与当前帧的起始子帧之间的第一增益梯度, 根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯度, 估计当前帧的起始子帧的子帧增益, 根据上述至少一帧的子帧之间的增益梯度, 估计当前帧的至少两个子帧间的增益梯度, 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例,确定模块 720对当前帧的前一帧的至少两个子帧 之间的增益梯度进行加权平均, 得到第一增益梯度, 并且根据当前帧的前一 帧的最后一个子帧的子帧增益和第一增益梯度, 以及当前帧之前接收到的最 后一个帧的类型和当前帧以前的连续丟失帧的数目,估计当前帧的起始子帧 的子帧增益, 其中在进行加权平均时, 当前帧的前一帧中距当前帧越近的子 帧之间的增益梯度所占的权重越大。
根据本发明的实施例, 当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧, 第一增益梯度由下列公式得到:
GainGradFEC[0] = ∑_{j=0}^{I-2} GainGrad[n-1,j]*αj,
其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, αj+1≥αj, ∑_{j=0}^{I-2} αj = 1, j = 0, 1, 2, ..., I-2, 其中起始子帧的子帧增益由下列公式得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1*GainGradFEC[0]
GainShape[n,0] = GainShapeTemp[n,0]*φ2
其中 GainShape[n-1,I-1]为第 n-1帧的第 I-1子帧的子帧增益, GainShape[n,0]为当前帧的起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<φ1≤1.0, 0<φ2≤1.0, φ1由在当前帧之前接收到的最后一个帧的类型和第一增益梯度的正负符号确定, φ2由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例,确定模块 720将当前帧的前一帧的最后一个子帧 之前的子帧与当前帧的前一帧的最后一个子帧之间的增益梯度作为第一增 益梯度, 并且根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯 度, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丟失 帧的数目, 估计当前帧的起始子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧时, 第一增益梯度由下列公式得到: GainGradFEC[0] = GainGrad[n-1,I-2], 其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,I-2]为当前帧的前一帧的第 I-2子帧到第 I-1子帧之间的增益梯度, 其中起始子帧的子帧增益由下列公式得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1*GainGradFEC[0],
GainShapeTemp[n,0] = min(λ2*GainShape[n-1,I-1], GainShapeTemp[n,0]),
GainShape[n,0] = max(λ3*GainShape[n-1,I-1], GainShapeTemp[n,0]),
其中 GainShape[n-1,I-1]为当前帧的前一帧的第 I-1子帧的子帧增益, GainShape[n,0]为起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<λ1≤1.0, 1<λ2<2, 0<λ3≤1.0, λ1由在当前帧之前接收到的最后一个帧的类型和当前帧的前一帧的最后两个子帧的子帧增益的倍数关系确定, λ2和λ3由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例, 每个帧包括 I个子帧, 确定模块 720对当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度和当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重大于当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重; 确定模块 720根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当前帧的至少两个子帧间的增益梯度由下列公式来确定:
GainGradFEC[i+1] = GainGrad[n-2,i]*β1 + GainGrad[n-1,i]*β2
其中 GainGradFEC[i+1]为第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-2,i]为当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-1,i]为当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, β2>β1, β1+β2=1.0, i=0,1,2,...,I-2; 其中至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i]*β3
GainShape[n,i] = GainShapeTemp[n,i]*β4
其中, GainShape[n,i]为当前帧的第 i子帧的子帧增益, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, 0≤β3≤1.0, 0<β4≤1.0, β3由 GainGrad[n-1,i]与 GainGrad[n-1,i+1]的倍数关系和 GainGrad[n-1,i+1]的正负符号确定, β4由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例, 确定模块 720对当前帧的第 i子帧之前的 I+1个子帧之间的 I个增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 距第 i子帧越近的子帧之间的增益梯度所占的权重越大, 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括四个子帧时, 当前帧的至少两个子帧间的增益梯度由以下公式确 定:
GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4
GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4
GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4
其中 GainGradFEC[j]为当前帧的第 j子帧与第 j+1子帧之间的增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, j = 0, 1, 2, …, I-2, γ1+γ2+γ3+γ4=1.0, γ4>γ3>γ2>γ1, 其中γ1、γ2、γ3、γ4由接收到最后一个帧的类型确定, 其中至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], 其中 i = 1,2,3, 其中 GainShapeTemp[n,0]为第一增益梯度;
GainShapeTemp[n,i] = min(γ5*GainShape[n-1,i], GainShapeTemp[n,i]),
GainShape[n,i] = max(γ6*GainShape[n-1,i], GainShapeTemp[n,i]),
其中, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, i = 1,2,3, GainShape[n,i]为当前帧的第 i子帧的增益, γ5和γ6由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定, 1<γ5<2, 0≤γ6≤1。
根据本发明的实施例, 确定模块 720根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前的连续丢失帧的数目估计当前帧的全局增益梯度; 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的全局增益。
根据本发明的实施例, 当前帧的全局增益由以下公式确定:
GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
图 8是根据本发明的另一实施例的解码装置 800的示意性结构图。 解码装置 800包括: 生成模块 810、 确定模块 820和调整模块 830。
生成模块 810在确定当前帧为丟失帧的情况下,根据当前帧的前一帧的 解码结果合成高频带信号。确定模块 820确定当前帧的至少两个子帧的子帧 增益, 根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前的连续丟 失帧的数目估计当前帧的全局增益梯度, 并且根据全局增益梯度和当前帧的 前一帧的全局增益, 估计当前帧的全局增益。 调整模块 830根据确定模块确 定的全局增益和至少两个子帧的子帧增益,对生成模块合成的高频带信号进 行调整以得到当前帧的高频带信号。
根据本发明的实施例, GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
图 9是根据本发明的实施例的一种解码装置 900的示意性结构图。 解码装置 900包括处理器 910、 存储器 920和通信总线 930。
处理器 910用于通过通信总线 930调用存储器 920中存储的代码, 以在 确定当前帧为丟失帧的情况下,根据当前帧的前一帧的解码结果合成高频带 信号; 根据当前帧之前的至少一帧的子帧的子帧增益和上述至少一帧的子帧 之间的增益梯度, 确定当前帧的至少两个子帧的子帧增益, 并且确定当前帧 的全局增益, 并且根据全局增益和至少两个子帧的子帧增益对所合成的高频 带信号进行调整以得到当前帧的高频带信号。
根据本发明的实施例, 处理器 910根据上述至少一帧的子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定当前帧的起始子帧的子帧增益, 并且根据当前帧的起始子帧的子帧增益和上述至少一帧的子帧之间的增益梯度, 确定上述至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 处理器 910根据当前帧的前一帧的子帧之间的增益梯度, 估计当前帧的前一帧的最后一个子帧与当前帧的起始子帧之间的第一增益梯度, 根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯度, 估计当前帧的起始子帧的子帧增益, 根据上述至少一帧的子帧之间的增益梯度, 估计当前帧的至少两个子帧间的增益梯度, 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 处理器 910对当前帧的前一帧的至少两个子帧之 间的增益梯度进行加权平均, 得到第一增益梯度, 并且根据当前帧的前一帧 的最后一个子帧的子帧增益和第一增益梯度, 以及当前帧之前接收到的最后 一个帧的类型和当前帧以前的连续丟失帧的数目,估计当前帧的起始子帧的 子帧增益, 其中在进行加权平均时, 当前帧的前一帧中距当前帧越近的子帧 之间的增益梯度所占的权重越大。
根据本发明的实施例, 当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧, 第一增益梯度由下列公式得到:
GainGradFEC[0] = ∑_{j=0}^{I-2} GainGrad[n-1,j]*αj,
其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, αj+1≥αj, ∑_{j=0}^{I-2} αj = 1, j = 0, 1, 2, ..., I-2, 其中起始子帧的子帧增益由下列公式得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + φ1*GainGradFEC[0]
GainShape[n,0] = GainShapeTemp[n,0]*φ2
其中 GainShape[n-1,I-1]为第 n-1帧的第 I-1子帧的子帧增益, GainShape[n,0]为当前帧的起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<φ1≤1.0, 0<φ2≤1.0, φ1由在当前帧之前接收到的最后一个帧的类型和第一增益梯度的正负符号确定, φ2由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例, 处理器 910将当前帧的前一帧的最后一个子帧之 前的子帧与当前帧的前一帧的最后一个子帧之间的增益梯度作为第一增益 梯度, 并且根据当前帧的前一帧的最后一个子帧的子帧增益和第一增益梯 度, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丟失 帧的数目, 估计当前帧的起始子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括 I个子帧时, 第一增益梯度由下列公式得到: GainGradFEC[0] = GainGrad[n-1,I-2], 其中 GainGradFEC[0]为第一增益梯度, GainGrad[n-1,I-2]为当前帧的前一帧的第 I-2子帧到第 I-1子帧之间的增益梯度, 其中起始子帧的子帧增益由下列公式得到:
GainShapeTemp[n,0] = GainShape[n-1,I-1] + λ1*GainGradFEC[0],
GainShapeTemp[n,0] = min(λ2*GainShape[n-1,I-1], GainShapeTemp[n,0]),
GainShape[n,0] = max(λ3*GainShape[n-1,I-1], GainShapeTemp[n,0]),
其中 GainShape[n-1,I-1]为当前帧的前一帧的第 I-1子帧的子帧增益, GainShape[n,0]为起始子帧的子帧增益, GainShapeTemp[n,0]为起始子帧的子帧增益中间值, 0<λ1≤1.0, 1<λ2<2, 0<λ3≤1.0, λ1由在当前帧之前接收到的最后一个帧的类型和当前帧的前一帧的最后两个子帧的子帧增益的倍数关系确定, λ2和λ3由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例, 每个帧包括 I个子帧, 处理器 910对当前帧的前 一帧的第 i子帧与第 i+1子帧之间的增益梯度和当前帧的前一帧的前一帧的 第 i子帧与第 i+1子帧之间的增益梯度进行加权平均, 估计当前帧的第 i子 帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...J-2, 当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重大于当前帧的前一帧的前一 帧的第 i子帧与第 i+1子帧之间的增益梯度所占的权重; 根据当前帧的至少 两个子帧间的增益梯度和起始子帧的子帧增益, 以及当前帧之前接收到的最 后一个帧的类型和当前帧以前的连续丟失帧的数目,估计至少两个子帧中除 起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当前帧的至少两个子帧间的增益梯度由下列公式 来确定:
GainGradFEC[i+1] = GainGrad[n-2,i]*β1 + GainGrad[n-1,i]*β2
其中 GainGradFEC[i+1]为第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-2,i]为当前帧的前一帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, GainGrad[n-1,i]为当前帧的前一帧的第 i子帧与第 i+1子帧之间的增益梯度, β2>β1, β1+β2=1.0, i=0,1,2,...,I-2; 其中至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i]*β3
GainShape[n,i] = GainShapeTemp[n,i]*β4
其中, GainShape[n,i]为当前帧的第 i子帧的子帧增益, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, 0≤β3≤1.0, 0<β4≤1.0, β3由 GainGrad[n-1,i]与 GainGrad[n-1,i+1]的倍数关系和 GainGrad[n-1,i+1]的正负符号确定, β4由在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
根据本发明的实施例, 处理器 910对当前帧的第 i子帧之前的 I+1个子帧之间的 I个增益梯度进行加权平均, 估计当前帧的第 i子帧与第 i+1子帧之间的增益梯度, 其中 i = 0, 1...I-2, 距第 i子帧越近的子帧之间的增益梯度所占的权重越大, 并且根据当前帧的至少两个子帧间的增益梯度和起始子帧的子帧增益, 以及在当前帧之前接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目, 估计至少两个子帧中除起始子帧之外的其它子帧的子帧增益。
根据本发明的实施例, 当当前帧的前一帧为第 n-1帧, 当前帧为第 n帧, 每个帧包括四个子帧时, 当前帧的至少两个子帧间的增益梯度由以下公式确 定:
GainGradFEC[1] = GainGrad[n-1,0]*γ1 + GainGrad[n-1,1]*γ2 + GainGrad[n-1,2]*γ3 + GainGradFEC[0]*γ4
GainGradFEC[2] = GainGrad[n-1,1]*γ1 + GainGrad[n-1,2]*γ2 + GainGradFEC[0]*γ3 + GainGradFEC[1]*γ4
GainGradFEC[3] = GainGrad[n-1,2]*γ1 + GainGradFEC[0]*γ2 + GainGradFEC[1]*γ3 + GainGradFEC[2]*γ4
其中 GainGradFEC[j]为当前帧的第 j子帧与第 j+1子帧之间的增益梯度, GainGrad[n-1,j]为当前帧的前一帧的第 j子帧与第 j+1子帧之间的增益梯度, j = 0, 1, 2, …, I-2, γ1+γ2+γ3+γ4=1.0, γ4>γ3>γ2>γ1, 其中γ1、γ2、γ3、γ4由接收到最后一个帧的类型确定, 其中至少两个子帧中除起始子帧之外的其它子帧的子帧增益由以下公式确定:
GainShapeTemp[n,i] = GainShapeTemp[n,i-1] + GainGradFEC[i], 其中 i = 1,2,3, 其中 GainShapeTemp[n,0]为第一增益梯度;
GainShapeTemp[n,i] = min(γ5*GainShape[n-1,i], GainShapeTemp[n,i])
GainShape[n,i] = max(γ6*GainShape[n-1,i], GainShapeTemp[n,i])
其中, GainShapeTemp[n,i]为当前帧的第 i子帧的子帧增益中间值, i = 1,2,3, GainShape[n,i]为当前帧的第 i子帧的增益, γ5和γ6由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定, 1<γ5<2, 0≤γ6≤1。
根据本发明的实施例, 处理器 910根据在当前帧之前接收到的最后一个帧的类型、 当前帧以前的连续丢失帧的数目估计当前帧的全局增益梯度; 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的全局增益。
根据本发明的实施例, 当前帧的全局增益由以下公式确定: GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
图 10是根据本发明的实施例的解码装置 1000的示意性结构图。解码装 置 1000包括处理器 1010、 存储器 1020和通信总线 1030。
处理器 1010, 用于通过通信总线 1030调用存储器 1020中存储的代码, 以在确定当前帧为丟失帧的情况下,根据当前帧的前一帧的解码结果合成高 频带信号, 确定当前帧的至少两个子帧的子帧增益, 根据在当前帧之前接收 到的最后一个帧的类型、 当前帧以前的连续丟失帧的数目估计当前帧的全局 增益梯度, 根据全局增益梯度和当前帧的前一帧的全局增益, 估计当前帧的 全局增益, 并且根据全局增益和至少两个子帧的子帧增益, 对所合成的高频 带信号进行调整以得到当前帧的高频带信号。
根据本发明的实施例, GainFrame = GainFrame_prevfrm*GainAtten, 其中 GainFrame为当前帧的全局增益, GainFrame_prevfrm为当前帧的前一帧的全局增益, 0 < GainAtten ≤ 1.0, GainAtten为全局增益梯度, 并且 GainAtten由接收到的最后一个帧的类型和当前帧以前的连续丢失帧的数目确定。
本领域普通技术人员可以意识到, 结合本文中所公开的实施例描述的各 示例的单元及算法步骤, 能够以电子硬件、 或者计算机软件和电子硬件的结 合来实现。 这些功能究竟以硬件还是软件方式来执行, 取决于技术方案的特 定应用和设计约束条件。 专业技术人员可以对每个特定的应用来使用不同方 法来实现所描述的功能, 但是这种实现不应认为超出本发明的范围。 所属领域的技术人员可以清楚地了解到, 为描述的方便和简洁, 上述描 述的系统、 装置和单元的具体工作过程, 可以参考前述方法实施例中的对应 过程, 在此不再赘述。
在本申请所提供的几个实施例中, 应该理解到, 所揭露的系统、 装置和 方法, 可以通过其它的方式实现。 例如, 以上所描述的装置实施例仅仅是示 意性的, 例如, 所述单元的划分, 仅仅为一种逻辑功能划分, 实际实现时可 以有另外的划分方式, 例如多个单元或组件可以结合或者可以集成到另一个 系统, 或一些特征可以忽略, 或不执行。 另一点, 所显示或讨论的相互之间 的耦合或直接辆合或通信连接可以是通过一些接口, 装置或单元的间接耦合 或通信连接, 可以是电性, 机械或其它的形式。 为单元显示的部件可以是或者也可以不是物理单元, 即可以位于一个地方, 或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或 者全部单元来实现本实施例方案的目的。
另外, 在本发明各个实施例中的各功能单元可以集成在一个处理单元 中, 也可以是各个单元单独物理存在, 也可以两个或两个以上单元集成在一 个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时, 可以存储在一个计算机可读取存储介质中。 基于这样的理解, 本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来, 该计算机软件产品存储在一个存储介质中, 包括若干指令用以使得一台计算机设备(可以是个人计算机, 服务器, 或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。 而前述的存储介质包括: U盘、 移动硬盘、 只读存储器(ROM, Read-Only Memory)、 随机存取存储器(RAM, Random Access Memory)、 磁碟或者光盘等各种可以存储程序代码的介质。
以上所述, 仅为本发明的具体实施方式, 但本发明的保护范围并不局限 于此, 任何熟悉本技术领域的技术人员在本发明揭露的技术范围内, 可轻易 想到变化或替换, 都应涵盖在本发明的保护范围之内。 因此, 本发明的保护 范围应以权利要求的保护范围为准。

Claims

权利要求
1、 一种解码方法, 其特征在于, 包括:
在确定当前帧为丟失帧的情况下,根据所述当前帧的前一帧的解码结果 合成高频带信号;
根据所述当前帧之前的至少一帧的子帧的子帧增益和所述至少一帧的 子帧之间的增益梯度, 确定所述当前帧的至少两个子帧的子帧增益;
确定所述当前帧的全局增益;
根据所述全局增益和所述至少两个子帧的子帧增益,对所合成的高频带 信号进行调整以得到所述当前帧的高频带信号。
2、 根据权利要求 1 所述的方法, 其特征在于, 所述根据所述当前帧之前的至少一帧的子帧的子帧增益和所述至少一帧的子帧之间的增益梯度, 确定所述当前帧的至少两个子帧的子帧增益, 包括:
根据所述至少一帧的子帧的子帧增益和所述至少一帧的子帧之间的增 益梯度, 确定所述当前帧的起始子帧的子帧增益;
根据所述当前帧的起始子帧的子帧增益和所述至少一帧的子帧之间的增益梯度, 确定所述至少两个子帧中除所述起始子帧之外的其它子帧的子帧增益。
3、 根据权利要求 2所述的方法, 其特征在于, 所述根据所述至少一帧 的子帧的子帧增益和所述至少一帧的子帧之间的增益梯度,确定所述当前帧 的起始子帧的子帧增益, 包括:
根据所述当前帧的前一帧的子帧之间的增益梯度,估计所述当前帧的前 一帧的最后一个子帧与所述当前帧的起始子帧之间的第一增益梯度;
根据所述当前帧的前一帧的最后一个子帧的子帧增益和所述第一增益 梯度, 估计所述当前帧的起始子帧的子帧增益。
4、 根据权利要求 3所述的方法, 其特征在于, 所述根据所述当前帧的 前一帧的子帧之间的增益梯度,估计所述当前帧的前一帧的最后一个子帧与 所述当前帧的起始子帧之间的第一增益梯度, 包括:
对所述当前帧的前一帧的至少两个子帧之间的增益梯度进行加权平均, 得到所述第一增益梯度, 其中, 在进行所述加权平均时, 所述当前帧的前一 帧中距所述当前帧越近的子帧之间的增益梯度所占的权重越大。
5. The method according to claim 3 or 4, wherein, when the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame comprises I subframes, the first gain gradient is obtained by the following formula:

GainGradFEC[0] = Σ_{j=0}^{I-2} GainGrad[n-1, j] * α_j,

where GainGradFEC[0] is the first gain gradient, GainGrad[n-1, j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, Σ_{j=0}^{I-2} α_j = 1, and j = 0, 1, 2, ..., I-2; and
the subframe gain of the start subframe is obtained by the following formulas:

GainShapeTemp[n, 0] = GainShape[n-1, I-1] + φ1 * GainGradFEC[0],
GainShape[n, 0] = GainShapeTemp[n, 0] * φ2,

where GainShape[n-1, I-1] is a subframe gain of the (I-1)-th subframe of the (n-1)-th frame, GainShape[n, 0] is the subframe gain of the start subframe of the current frame, GainShapeTemp[n, 0] is an intermediate value of the subframe gain of the start subframe, 0 < φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by a type of the last frame received before the current frame and a sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
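As a non-normative illustration of the weighted-average estimate in claim 5, the following Python sketch computes the first gain gradient from the gradients of frame n-1 and derives the start-subframe gain. The weight vector `alpha` and the factors `phi1` and `phi2` are placeholder values chosen only to satisfy the stated constraints (weights non-decreasing toward the current frame and summing to 1; 0 < φ1, φ2 ≤ 1.0), not values taken from this publication:

```python
def estimate_start_subframe_gain(prev_gains, phi1=0.8, phi2=0.9):
    """prev_gains: subframe gains of frame n-1 (length I)."""
    I = len(prev_gains)
    # Gain gradients between adjacent subframes of frame n-1 (assumed to be
    # simple differences of adjacent subframe gains).
    grads = [prev_gains[j + 1] - prev_gains[j] for j in range(I - 1)]
    # Placeholder weights: increase toward the current frame, sum to 1.
    alpha = [(j + 1) / (I * (I - 1) / 2) for j in range(I - 1)]
    # First gain gradient GainGradFEC[0]: weighted average of the gradients.
    gain_grad_fec0 = sum(g * a for g, a in zip(grads, alpha))
    # Intermediate value GainShapeTemp[n, 0], then GainShape[n, 0].
    temp = prev_gains[-1] + phi1 * gain_grad_fec0
    return temp * phi2
```

With a flat previous frame the estimate is just the attenuated last subframe gain; with rising gains, the estimated start gain continues the trend.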
6. The method according to claim 3, wherein the estimating a first gain gradient between a last subframe of the previous frame of the current frame and the start subframe of the current frame according to a gain gradient between subframes of the previous frame of the current frame comprises:
using a gain gradient between the subframe previous to the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.
7. The method according to claim 3 or 6, wherein, when the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame comprises I subframes, the first gain gradient is obtained by the following formula:

GainGradFEC[0] = GainGrad[n-1, I-2],

where GainGradFEC[0] is the first gain gradient, and GainGrad[n-1, I-2] is a gain gradient between the (I-2)-th subframe and the (I-1)-th subframe of the previous frame of the current frame; and
the subframe gain of the start subframe is obtained by the following formulas:

GainShapeTemp[n, 0] = GainShape[n-1, I-1] + λ1 * GainGradFEC[0],
GainShapeTemp[n, 0] = min(λ2 * GainShape[n-1, I-1], GainShapeTemp[n, 0]),
GainShape[n, 0] = max(λ3 * GainShape[n-1, I-1], GainShapeTemp[n, 0]),

where GainShape[n-1, I-1] is a subframe gain of the (I-1)-th subframe of the previous frame of the current frame, GainShape[n, 0] is the subframe gain of the start subframe, GainShapeTemp[n, 0] is an intermediate value of the subframe gain of the start subframe, 0 < λ1 < 1.0, 1 < λ2 < 2, 0 < λ3 < 1.0, λ1 is determined by the type of the last frame received before the current frame and a multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
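The clamping in claim 7 can be sketched as follows, assuming (as is conventional, though not stated in the claim itself) that a gain gradient is the difference between adjacent subframe gains; `lam1`, `lam2`, and `lam3` are placeholder factors chosen within the claimed ranges:

```python
def estimate_start_gain_clamped(prev_gains, lam1=0.9, lam2=1.5, lam3=0.5):
    """prev_gains: subframe gains of frame n-1 (at least two values).
    Placeholder factors satisfy 0 < lam1 < 1.0, 1 < lam2 < 2, 0 < lam3 < 1.0."""
    # First gain gradient: gradient between the last two subframes of frame n-1.
    gain_grad_fec0 = prev_gains[-1] - prev_gains[-2]
    # Extrapolate, then clamp against multiples of the last known subframe gain.
    temp = prev_gains[-1] + lam1 * gain_grad_fec0
    temp = min(lam2 * prev_gains[-1], temp)   # cap upward jumps
    return max(lam3 * prev_gains[-1], temp)   # floor downward jumps
```

The min/max pair keeps the estimated start gain within a band around the last received subframe gain, which limits audible artifacts when the extrapolation overshoots.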
8. The method according to any one of claims 3 to 7, wherein the estimating the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient comprises:
estimating the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
9. The method according to any one of claims 2 to 8, wherein the determining a subframe gain of another subframe except the start subframe in the at least two subframes according to the subframe gain of the start subframe of the current frame and the gain gradient between the subframes of the at least one frame comprises:
estimating a gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame; and
estimating the subframe gain of the another subframe except the start subframe in the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe of the current frame.
10. The method according to claim 9, wherein each frame comprises I subframes, and the estimating a gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame comprises:
performing weighted averaging on a gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame and a gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame, to estimate a gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, where i = 0, 1, ..., I-2, and the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame occupies a larger weight than the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame.
11. The method according to claim 9 or 10, wherein, when the previous frame of the current frame is the (n-1)-th frame and the current frame is the n-th frame, the gain gradient between the at least two subframes of the current frame is determined by the following formula:

GainGradFEC[i+1] = GainGrad[n-2, i] * β1 + GainGrad[n-1, i] * β2,

where GainGradFEC[i+1] is a gain gradient between the i-th subframe and the (i+1)-th subframe, GainGrad[n-2, i] is a gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame, GainGrad[n-1, i] is a gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, ..., I-2; and
the subframe gain of the another subframe except the start subframe in the at least two subframes is determined by the following formulas:

GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i] * β3,
GainShape[n, i] = GainShapeTemp[n, i] * β4,

where GainShape[n, i] is a subframe gain of the i-th subframe of the current frame, GainShapeTemp[n, i] is an intermediate value of the subframe gain of the i-th subframe of the current frame, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by a multiple relationship between GainGrad[n-1, i] and GainGrad[n-1, i+1] and a sign of GainGrad[n-1, i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
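A minimal sketch of claim 11's interpolation across the two frames preceding the loss, and of chaining the estimated gradients from the start-subframe gain. The weights `beta1`/`beta2` (β2 > β1, β1 + β2 = 1.0) and the factors `beta3`/`beta4` are placeholder values, not values from this publication:

```python
def estimate_gain_gradients(grads_n2, grads_n1, beta1=0.4, beta2=0.6):
    """grads_n2 / grads_n1: gain gradients between adjacent subframes of
    frames n-2 and n-1 (each of length I-1). The more recent frame's
    gradient gets the larger weight, per claim 10."""
    return [g2 * beta1 + g1 * beta2 for g2, g1 in zip(grads_n2, grads_n1)]

def apply_gradients(start_gain, fec_grads, beta3=1.0, beta4=0.9):
    """Chain the estimated gradients GainGradFEC[i] from the start subframe
    gain to obtain the remaining subframe gains of the lost frame."""
    gains, temp = [start_gain], start_gain
    for g in fec_grads:
        temp = temp + g * beta3     # intermediate value GainShapeTemp[n, i]
        gains.append(temp * beta4)  # GainShape[n, i]
    return gains
```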
12. The method according to claim 9, wherein each frame comprises I subframes, and the estimating a gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame comprises:
performing weighted averaging on I gain gradients between (I+1) subframes previous to the i-th subframe of the current frame, to estimate a gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, where i = 0, 1, ..., I-2, and a gain gradient between subframes closer to the i-th subframe occupies a larger weight.
13. The method according to claim 9 or 12, wherein, when the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame comprises four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:

GainGradFEC[1] = GainGrad[n-1, 0] * γ1 + GainGrad[n-1, 1] * γ2 + GainGrad[n-1, 2] * γ3 + GainGradFEC[0] * γ4,
GainGradFEC[2] = GainGrad[n-1, 1] * γ1 + GainGrad[n-1, 2] * γ2 + GainGradFEC[0] * γ3 + GainGradFEC[1] * γ4,
GainGradFEC[3] = GainGrad[n-1, 2] * γ1 + GainGradFEC[0] * γ2 + GainGradFEC[1] * γ3 + GainGradFEC[2] * γ4,

where GainGradFEC[j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the current frame, GainGrad[n-1, j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, j = 0, 1, 2, ..., I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3, and γ4 are determined by the type of the received last frame; and
the subframe gain of the another subframe except the start subframe in the at least two subframes is determined by the following formulas:

GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i], where i = 1, 2, 3, and GainShapeTemp[n, 0] is the first gain gradient;
GainShapeTemp[n, i] = min(γ5 * GainShape[n-1, i], GainShapeTemp[n, i]),
GainShape[n, i] = max(γ6 * GainShape[n-1, i], GainShapeTemp[n, i]),

where i = 1, 2, 3, GainShapeTemp[n, i] is an intermediate value of the subframe gain of the i-th subframe of the current frame, GainShape[n, i] is the subframe gain of the i-th subframe of the current frame, γ5 and γ6 are determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame, 1 < γ5 < 2, and 0 ≤ γ6 ≤ 1.
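The four-subframe case of claim 13 is recursive: each freshly estimated gradient is fed back into the weighted sum for the next one. A sketch, with placeholder weights `gammas` satisfying γ1 + γ2 + γ3 + γ4 = 1.0 and γ4 > γ3 > γ2 > γ1 (the real weights depend on the type of the last received frame):

```python
def estimate_fec_gradients(grads_prev, g0, gammas=(0.1, 0.2, 0.3, 0.4)):
    """grads_prev: the three gain gradients of frame n-1; g0: GainGradFEC[0].
    Returns GainGradFEC[0..3] for a four-subframe frame."""
    a, b, c = grads_prev
    fec = [g0]
    # Each step shifts the window by one and appends the newest FEC estimate,
    # which always receives the largest weight gammas[3].
    fec.append(a * gammas[0] + b * gammas[1] + c * gammas[2] + fec[0] * gammas[3])
    fec.append(b * gammas[0] + c * gammas[1] + fec[0] * gammas[2] + fec[1] * gammas[3])
    fec.append(c * gammas[0] + fec[0] * gammas[1] + fec[1] * gammas[2] + fec[2] * gammas[3])
    return fec
```

Because the weights sum to 1, a frame with constant gradients yields constant estimated gradients, i.e. the trend of the last received frame is continued.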
14. The method according to any one of claims 9 to 13, wherein the estimating the subframe gain of the another subframe except the start subframe in the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe comprises:
estimating, according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame, the subframe gain of the another subframe except the start subframe in the at least two subframes.
15. The method according to any one of claims 1 to 14, wherein the estimating the global gain of the current frame comprises:
estimating a global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame; and
estimating the global gain of the current frame according to the global gain gradient and a global gain of the previous frame of the current frame.
16. The method according to claim 15, wherein the global gain of the current frame is determined by the following formula:
GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame.
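Claim 16's global-gain update is a simple attenuation of the previous frame's global gain. The sketch below is illustrative only: the claim fixes just GainFrame = GainFrame_prevfrm * GainAtten with 0 < GainAtten ≤ 1.0, while the particular attenuation schedule here (a frame-type base decayed per lost frame, with a floor) is an assumption:

```python
def estimate_global_gain(prev_global_gain, last_frame_type, num_lost):
    """prev_global_gain: GainFrame_prevfrm; last_frame_type: type of the last
    frame received before the current frame; num_lost: number of consecutive
    lost frames previous to and including the current frame (>= 1)."""
    # Placeholder schedule: attenuate more aggressively for unvoiced frames
    # and for longer loss bursts; clamp so 0 < GainAtten <= 1.0.
    base = 0.9 if last_frame_type == "voiced" else 0.8
    gain_atten = max(base ** num_lost, 0.1)
    return prev_global_gain * gain_atten  # GainFrame
```

Deeper loss bursts thus fade the high band out gradually instead of cutting it, which is the usual goal of gain-domain concealment.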
17. A decoding method, comprising:
in a case in which it is determined that a current frame is a lost frame, synthesizing a high-band signal according to a decoding result of a previous frame of the current frame;
determining subframe gains of at least two subframes of the current frame;
estimating a global gain gradient of the current frame according to a type of the last frame received before the current frame and a number of consecutive lost frames previous to the current frame;
estimating a global gain of the current frame according to the global gain gradient and a global gain of the previous frame of the current frame; and
adjusting, according to the global gain and the subframe gains of the at least two subframes, the synthesized high-band signal to obtain a high-band signal of the current frame.
18. The method according to claim 17, wherein the global gain of the current frame is determined by the following formula:
GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame.
19. A decoding apparatus, comprising:
a generation module, configured to: in a case in which it is determined that a current frame is a lost frame, synthesize a high-band signal according to a decoding result of a previous frame of the current frame;
a determination module, configured to determine subframe gains of at least two subframes of the current frame according to subframe gains of subframes of at least one frame previous to the current frame and a gain gradient between the subframes of the at least one frame, and determine a global gain of the current frame; and
an adjustment module, configured to adjust, according to the global gain determined by the determination module and the subframe gains of the at least two subframes, the high-band signal synthesized by the generation module, to obtain a high-band signal of the current frame.
20. The decoding apparatus according to claim 19, wherein the determination module determines a subframe gain of a start subframe of the current frame according to the subframe gains of the subframes of the at least one frame and the gain gradient between the subframes of the at least one frame, and determines a subframe gain of another subframe except the start subframe in the at least two subframes according to the subframe gain of the start subframe of the current frame and the gain gradient between the subframes of the at least one frame.
21. The decoding apparatus according to claim 20, wherein the determination module estimates a first gain gradient between a last subframe of the previous frame of the current frame and the start subframe of the current frame according to a gain gradient between subframes of the previous frame of the current frame, and estimates the subframe gain of the start subframe of the current frame according to a subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient.
22. The decoding apparatus according to claim 21, wherein the determination module performs weighted averaging on gain gradients between at least two subframes of the previous frame of the current frame to obtain the first gain gradient, wherein, in the weighted averaging, a gain gradient between subframes of the previous frame of the current frame that are closer to the current frame occupies a larger weight.
23. The decoding apparatus according to claim 21 or 22, wherein the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, each frame comprises I subframes, and the first gain gradient is obtained by the following formula:

GainGradFEC[0] = Σ_{j=0}^{I-2} GainGrad[n-1, j] * α_j,

where GainGradFEC[0] is the first gain gradient, GainGrad[n-1, j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, α_{j+1} ≥ α_j, Σ_{j=0}^{I-2} α_j = 1, and j = 0, 1, 2, ..., I-2; and
the subframe gain of the start subframe is obtained by the following formulas:

GainShapeTemp[n, 0] = GainShape[n-1, I-1] + φ1 * GainGradFEC[0],
GainShape[n, 0] = GainShapeTemp[n, 0] * φ2,

where GainShape[n-1, I-1] is a subframe gain of the (I-1)-th subframe of the (n-1)-th frame, GainShape[n, 0] is the subframe gain of the start subframe of the current frame, GainShapeTemp[n, 0] is an intermediate value of the subframe gain of the start subframe, 0 < φ1 ≤ 1.0, 0 < φ2 ≤ 1.0, φ1 is determined by a type of the last frame received before the current frame and a sign of the first gain gradient, and φ2 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
24. The decoding apparatus according to claim 21, wherein the determination module uses a gain gradient between the subframe previous to the last subframe of the previous frame of the current frame and the last subframe of the previous frame of the current frame as the first gain gradient.
25. The decoding apparatus according to claim 21 or 24, wherein, when the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame comprises I subframes, the first gain gradient is obtained by the following formula:

GainGradFEC[0] = GainGrad[n-1, I-2],

where GainGradFEC[0] is the first gain gradient, and GainGrad[n-1, I-2] is a gain gradient between the (I-2)-th subframe and the (I-1)-th subframe of the previous frame of the current frame; and
the subframe gain of the start subframe is obtained by the following formulas:

GainShapeTemp[n, 0] = GainShape[n-1, I-1] + λ1 * GainGradFEC[0],
GainShapeTemp[n, 0] = min(λ2 * GainShape[n-1, I-1], GainShapeTemp[n, 0]),
GainShape[n, 0] = max(λ3 * GainShape[n-1, I-1], GainShapeTemp[n, 0]),

where GainShape[n-1, I-1] is a subframe gain of the (I-1)-th subframe of the previous frame of the current frame, GainShape[n, 0] is the subframe gain of the start subframe, GainShapeTemp[n, 0] is an intermediate value of the subframe gain of the start subframe, 0 < λ1 < 1.0, 1 < λ2 < 2, 0 < λ3 < 1.0, λ1 is determined by the type of the last frame received before the current frame and a multiple relationship between the subframe gains of the last two subframes of the previous frame of the current frame, and λ2 and λ3 are determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
26. The decoding apparatus according to any one of claims 21 to 25, wherein the determination module estimates the subframe gain of the start subframe of the current frame according to the subframe gain of the last subframe of the previous frame of the current frame and the first gain gradient, as well as the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
27. The decoding apparatus according to any one of claims 20 to 26, wherein the determination module estimates a gain gradient between the at least two subframes of the current frame according to the gain gradient between the subframes of the at least one frame, and estimates the subframe gain of the another subframe except the start subframe in the at least two subframes according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe.
28. The decoding apparatus according to claim 27, wherein each frame comprises I subframes, and the determination module performs weighted averaging on a gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame and a gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame, to estimate a gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, where i = 0, 1, ..., I-2, and the gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame occupies a larger weight than the gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame.
29. The decoding apparatus according to claim 27 or 28, wherein the gain gradient between the at least two subframes of the current frame is determined by the following formula:

GainGradFEC[i+1] = GainGrad[n-2, i] * β1 + GainGrad[n-1, i] * β2,

where GainGradFEC[i+1] is a gain gradient between the i-th subframe and the (i+1)-th subframe, GainGrad[n-2, i] is a gain gradient between the i-th subframe and the (i+1)-th subframe of the frame previous to the previous frame of the current frame, GainGrad[n-1, i] is a gain gradient between the i-th subframe and the (i+1)-th subframe of the previous frame of the current frame, β2 > β1, β1 + β2 = 1.0, and i = 0, 1, 2, ..., I-2; and
the subframe gain of the another subframe except the start subframe in the at least two subframes is determined by the following formulas:

GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i] * β3,
GainShape[n, i] = GainShapeTemp[n, i] * β4,

where GainShape[n, i] is a subframe gain of the i-th subframe of the current frame, GainShapeTemp[n, i] is an intermediate value of the subframe gain of the i-th subframe of the current frame, 0 ≤ β3 ≤ 1.0, 0 < β4 ≤ 1.0, β3 is determined by a multiple relationship between GainGrad[n-1, i] and GainGrad[n-1, i+1] and a sign of GainGrad[n-1, i+1], and β4 is determined by the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame.
30. The decoding apparatus according to claim 27, wherein the determination module performs weighted averaging on I gain gradients between (I+1) subframes previous to the i-th subframe of the current frame, to estimate a gain gradient between the i-th subframe and the (i+1)-th subframe of the current frame, where i = 0, 1, ..., I-2, and a gain gradient between subframes closer to the i-th subframe occupies a larger weight.
31. The decoding apparatus according to claim 27 or 30, wherein, when the previous frame of the current frame is the (n-1)-th frame, the current frame is the n-th frame, and each frame comprises four subframes, the gain gradient between the at least two subframes of the current frame is determined by the following formulas:

GainGradFEC[1] = GainGrad[n-1, 0] * γ1 + GainGrad[n-1, 1] * γ2 + GainGrad[n-1, 2] * γ3 + GainGradFEC[0] * γ4,
GainGradFEC[2] = GainGrad[n-1, 1] * γ1 + GainGrad[n-1, 2] * γ2 + GainGradFEC[0] * γ3 + GainGradFEC[1] * γ4,
GainGradFEC[3] = GainGrad[n-1, 2] * γ1 + GainGradFEC[0] * γ2 + GainGradFEC[1] * γ3 + GainGradFEC[2] * γ4,

where GainGradFEC[j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the current frame, GainGrad[n-1, j] is a gain gradient between the j-th subframe and the (j+1)-th subframe of the previous frame of the current frame, j = 0, 1, 2, ..., I-2, γ1 + γ2 + γ3 + γ4 = 1.0, γ4 > γ3 > γ2 > γ1, and γ1, γ2, γ3, and γ4 are determined by the type of the received last frame; and
the subframe gain of the another subframe except the start subframe in the at least two subframes is determined by the following formulas:

GainShapeTemp[n, i] = GainShapeTemp[n, i-1] + GainGradFEC[i], where i = 1, 2, 3, and GainShapeTemp[n, 0] is the first gain gradient;
GainShapeTemp[n, i] = min(γ5 * GainShape[n-1, i], GainShapeTemp[n, i]),
GainShape[n, i] = max(γ6 * GainShape[n-1, i], GainShapeTemp[n, i]),

where GainShapeTemp[n, i] is an intermediate value of the subframe gain of the i-th subframe of the current frame, i = 1, 2, 3, GainShape[n, i] is the subframe gain of the i-th subframe of the current frame, γ5 and γ6 are determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame, 1 < γ5 < 2, and 0 ≤ γ6 ≤ 1.
32. The decoding apparatus according to any one of claims 27 to 31, wherein the determination module estimates, according to the gain gradient between the at least two subframes of the current frame and the subframe gain of the start subframe, as well as the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame, the subframe gain of the another subframe except the start subframe in the at least two subframes.
33. The decoding apparatus according to any one of claims 19 to 32, wherein the determination module estimates a global gain gradient of the current frame according to the type of the last frame received before the current frame and the number of consecutive lost frames previous to the current frame; and
estimates the global gain of the current frame according to the global gain gradient and the global gain of the previous frame of the current frame.
34. The decoding apparatus according to claim 33, wherein the global gain of the current frame is determined by the following formula:
GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame.
35. A decoding apparatus, comprising:
a generation module, configured to: in a case in which it is determined that a current frame is a lost frame, synthesize a high-band signal according to a decoding result of a previous frame of the current frame;
a determination module, configured to determine subframe gains of at least two subframes of the current frame, estimate a global gain gradient of the current frame according to a type of the last frame received before the current frame and a number of consecutive lost frames previous to the current frame, and estimate a global gain of the current frame according to the global gain gradient and a global gain of the previous frame of the current frame; and
an adjustment module, configured to adjust, according to the global gain determined by the determination module and the subframe gains of the at least two subframes, the high-band signal synthesized by the generation module, to obtain a high-band signal of the current frame.
36. The decoding apparatus according to claim 35, wherein GainFrame = GainFrame_prevfrm * GainAtten, where GainFrame is the global gain of the current frame, GainFrame_prevfrm is the global gain of the previous frame of the current frame, 0 < GainAtten ≤ 1.0, GainAtten is the global gain gradient, and GainAtten is determined by the type of the received last frame and the number of consecutive lost frames previous to the current frame.
PCT/CN2014/077096 2013-07-16 2014-05-09 Decoding method and decoding apparatus WO2015007114A1 (zh)

Priority Applications (18)

Application Number Priority Date Filing Date Title
BR112015032273-5A BR112015032273B1 (pt) 2013-07-16 2014-05-09 Método de decodificação e aparelho de decodificação para sinal de fala
KR1020157033903A KR101800710B1 (ko) 2013-07-16 2014-05-09 디코딩 방법 및 디코딩 디바이스
SG11201509150UA SG11201509150UA (en) 2013-07-16 2014-05-09 Decoding method and decoding apparatus
NZ714039A NZ714039A (en) 2013-07-16 2014-05-09 Decoding method and decoding apparatus
CA2911053A CA2911053C (en) 2013-07-16 2014-05-09 Decoding method and decoding apparatus for speech signal
MX2015017002A MX352078B (es) 2013-07-16 2014-05-09 Metodo de decodificacion y aparato de decodificacion.
AU2014292680A AU2014292680B2 (en) 2013-07-16 2014-05-09 Decoding method and decoding apparatus
KR1020177033206A KR101868767B1 (ko) 2013-07-16 2014-05-09 디코딩 방법 및 디코딩 디바이스
EP14826461.7A EP2983171B1 (en) 2013-07-16 2014-05-09 Decoding method and decoding device
JP2016522198A JP6235707B2 (ja) 2013-07-16 2014-05-09 復号方法および復号装置
RU2015155744A RU2628159C2 (ru) 2013-07-16 2014-05-09 Способ декодирования и устройство декодирования
ES14826461T ES2746217T3 (es) 2013-07-16 2014-05-09 Método de decodificación y dispositivo de decodificación
EP19162439.4A EP3594942B1 (en) 2013-07-16 2014-05-09 Decoding method and decoding apparatus
UAA201512807A UA112401C2 (uk) 2013-07-16 2014-09-05 Спосіб декодування та пристрій декодування
IL242430A IL242430B (en) 2013-07-16 2015-11-03 Decoding method and decoding device
ZA2015/08155A ZA201508155B (en) 2013-07-16 2015-11-04 Decoding method and decoding device
US14/985,831 US10102862B2 (en) 2013-07-16 2015-12-31 Decoding method and decoder for audio signal according to gain gradient
US16/145,469 US10741186B2 (en) 2013-07-16 2018-09-28 Decoding method and decoder for audio signal according to gain gradient

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310298040.4 2013-07-16
CN201310298040.4A CN104299614B (zh) 2013-07-16 2013-07-16 解码方法和解码装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/985,831 Continuation US10102862B2 (en) 2013-07-16 2015-12-31 Decoding method and decoder for audio signal according to gain gradient

Publications (1)

Publication Number Publication Date
WO2015007114A1 true WO2015007114A1 (zh) 2015-01-22

Family

ID=52319313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/077096 WO2015007114A1 (zh) 2013-07-16 2014-05-09 解码方法和解码装置

Country Status (20)

Country Link
US (2) US10102862B2 (zh)
EP (2) EP3594942B1 (zh)
JP (2) JP6235707B2 (zh)
KR (2) KR101800710B1 (zh)
CN (2) CN104299614B (zh)
AU (1) AU2014292680B2 (zh)
BR (1) BR112015032273B1 (zh)
CA (1) CA2911053C (zh)
CL (1) CL2015003739A1 (zh)
ES (1) ES2746217T3 (zh)
HK (1) HK1206477A1 (zh)
IL (1) IL242430B (zh)
MX (1) MX352078B (zh)
MY (1) MY180290A (zh)
NZ (1) NZ714039A (zh)
RU (1) RU2628159C2 (zh)
SG (1) SG11201509150UA (zh)
UA (1) UA112401C2 (zh)
WO (1) WO2015007114A1 (zh)
ZA (1) ZA201508155B (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299614B (zh) * 2013-07-16 2017-12-29 华为技术有限公司 解码方法和解码装置
US10109284B2 (en) 2016-02-12 2018-10-23 Qualcomm Incorporated Inter-channel encoding and decoding of multiple high-band audio signals
CN107248411B (zh) * 2016-03-29 2020-08-07 华为技术有限公司 丢帧补偿处理方法和装置
CN108023869B (zh) * 2016-10-28 2021-03-19 海能达通信股份有限公司 多媒体通信的参数调整方法、装置及移动终端
CN108922551B (zh) * 2017-05-16 2021-02-05 博通集成电路(上海)股份有限公司 用于补偿丢失帧的电路及方法
JP7139238B2 (ja) 2018-12-21 2022-09-20 Toyo Tire株式会社 高分子材料の硫黄架橋構造解析方法
CN113473229B (zh) * 2021-06-25 2022-04-12 荣耀终端有限公司 一种动态调节丢帧阈值的方法及相关设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1732512A (zh) * 2002-12-31 2006-02-08 诺基亚有限公司 用于隐蔽压缩域分组丢失的方法和装置
CN1989548A (zh) * 2004-07-20 2007-06-27 松下电器产业株式会社 语音解码装置及补偿帧生成方法
US20090248404A1 (en) * 2006-07-12 2009-10-01 Panasonic Corporation Lost frame compensating method, audio encoding apparatus and audio decoding apparatus
CN101836254A (zh) * 2008-08-29 2010-09-15 索尼公司 频带扩大装置和方法、编码装置和方法、解码装置和方法及程序
CN102915737A (zh) * 2011-07-31 2013-02-06 中兴通讯股份有限公司 一种浊音起始帧后丢帧的补偿方法和装置

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9512284D0 (en) * 1995-06-16 1995-08-16 Nokia Mobile Phones Ltd Speech Synthesiser
JP3707116B2 (ja) * 1995-10-26 2005-10-19 ソニー株式会社 音声復号化方法及び装置
US7072832B1 (en) 1998-08-24 2006-07-04 Mindspeed Technologies, Inc. System for speech encoding having an adaptive encoding arrangement
US6636829B1 (en) * 1999-09-22 2003-10-21 Mindspeed Technologies, Inc. Speech communication system and method for handling lost frames
CA2388439A1 (en) * 2002-05-31 2003-11-30 Voiceage Corporation A method and device for efficient frame erasure concealment in linear predictive based speech codecs
KR100501930B1 (ko) * 2002-11-29 2005-07-18 삼성전자주식회사 적은 계산량으로 고주파수 성분을 복원하는 오디오 디코딩방법 및 장치
US7146309B1 (en) * 2003-09-02 2006-12-05 Mindspeed Technologies, Inc. Deriving seed values to generate excitation values in a speech coder
DK1875463T3 (en) * 2005-04-22 2019-01-28 Qualcomm Inc SYSTEMS, PROCEDURES AND APPARATUS FOR AMPLIFIER FACTOR GLOSSARY
US7831421B2 (en) * 2005-05-31 2010-11-09 Microsoft Corporation Robust decoder
EP1898397B1 (en) * 2005-06-29 2009-10-21 Panasonic Corporation Scalable decoder and disappeared data interpolating method
JP4876574B2 (ja) * 2005-12-26 2012-02-15 ソニー株式会社 信号符号化装置及び方法、信号復号装置及び方法、並びにプログラム及び記録媒体
US8374857B2 (en) * 2006-08-08 2013-02-12 Stmicroelectronics Asia Pacific Pte, Ltd. Estimating rate controlling parameters in perceptual audio encoders
US8346546B2 (en) * 2006-08-15 2013-01-01 Broadcom Corporation Packet loss concealment based on forced waveform alignment after packet loss
EP2054878B1 (en) 2006-08-15 2012-03-28 Broadcom Corporation Constrained and controlled decoding after packet loss
US7877253B2 (en) * 2006-10-06 2011-01-25 Qualcomm Incorporated Systems, methods, and apparatus for frame erasure recovery
EP2538406B1 (en) 2006-11-10 2015-03-11 Panasonic Intellectual Property Corporation of America Method and apparatus for decoding parameters of a CELP encoded speech signal
CN103383846B (zh) * 2006-12-26 2016-08-10 华为技术有限公司 改进语音丢包修补质量的语音编码方法
US8688437B2 (en) * 2006-12-26 2014-04-01 Huawei Technologies Co., Ltd. Packet loss concealment for speech coding
CN101321033B (zh) 2007-06-10 2011-08-10 华为技术有限公司 帧补偿方法及系统
JP5618826B2 (ja) * 2007-06-14 2014-11-05 ヴォイスエイジ・コーポレーション Itu.t勧告g.711と相互運用可能なpcmコーデックにおいてフレーム消失を補償する装置および方法
CN101207665B (zh) * 2007-11-05 2010-12-08 华为技术有限公司 一种衰减因子的获取方法
CN100550712C (zh) 2007-11-05 2009-10-14 华为技术有限公司 一种信号处理方法和处理装置
KR101413967B1 (ko) * 2008-01-29 2014-07-01 삼성전자주식회사 오디오 신호의 부호화 방법 및 복호화 방법, 및 그에 대한 기록 매체, 오디오 신호의 부호화 장치 및 복호화 장치
CN101588341B (zh) * 2008-05-22 2012-07-04 华为技术有限公司 一种丢帧隐藏的方法及装置
US8712764B2 (en) * 2008-07-10 2014-04-29 Voiceage Corporation Device and method for quantizing and inverse quantizing LPC filters in a super-frame
US8428938B2 (en) * 2009-06-04 2013-04-23 Qualcomm Incorporated Systems and methods for reconstructing an erased speech frame
CN101958119B (zh) * 2009-07-16 2012-02-29 中兴通讯股份有限公司 一种改进的离散余弦变换域音频丢帧补偿器和补偿方法
MY167980A (en) * 2009-10-20 2018-10-09 Fraunhofer Ges Forschung Multi- mode audio codec and celp coding adapted therefore
CA2821577C (en) * 2011-02-15 2020-03-24 Voiceage Corporation Device and method for quantizing the gains of the adaptive and fixed contributions of the excitation in a celp codec
KR20160007581A (ko) 2013-05-14 2016-01-20 쓰리엠 이노베이티브 프로퍼티즈 컴파니 피리딘- 또는 피라진-함유 화합물
CN104299614B (zh) 2013-07-16 2017-12-29 华为技术有限公司 解码方法和解码装置


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2983171A4 *

Also Published As

Publication number Publication date
CL2015003739A1 (es) 2016-12-02
KR20160003176A (ko) 2016-01-08
CN107818789B (zh) 2020-11-17
CN104299614A (zh) 2015-01-21
AU2014292680B2 (en) 2017-03-02
ZA201508155B (en) 2017-04-26
ES2746217T3 (es) 2020-03-05
US10102862B2 (en) 2018-10-16
JP2016530549A (ja) 2016-09-29
UA112401C2 (uk) 2016-08-25
EP2983171A1 (en) 2016-02-10
NZ714039A (en) 2017-01-27
CA2911053C (en) 2019-10-15
BR112015032273B1 (pt) 2021-10-05
US10741186B2 (en) 2020-08-11
KR101800710B1 (ko) 2017-11-23
JP6235707B2 (ja) 2017-11-22
MX2015017002A (es) 2016-04-25
RU2015155744A (ru) 2017-06-30
EP2983171B1 (en) 2019-07-10
KR20170129291A (ko) 2017-11-24
EP3594942A1 (en) 2020-01-15
MX352078B (es) 2017-11-08
US20190035408A1 (en) 2019-01-31
IL242430B (en) 2020-07-30
EP3594942B1 (en) 2022-07-06
CN107818789A (zh) 2018-03-20
SG11201509150UA (en) 2015-12-30
JP2018028688A (ja) 2018-02-22
BR112015032273A2 (pt) 2017-07-25
CA2911053A1 (en) 2015-01-22
HK1206477A1 (zh) 2016-01-08
RU2628159C2 (ru) 2017-08-15
CN104299614B (zh) 2017-12-29
US20160118055A1 (en) 2016-04-28
MY180290A (en) 2020-11-27
AU2014292680A1 (en) 2015-11-26
JP6573178B2 (ja) 2019-09-11
KR101868767B1 (ko) 2018-06-18
EP2983171A4 (en) 2016-06-29

Similar Documents

Publication Publication Date Title
WO2015007114A1 Decoding method and decoding apparatus
WO2015154397A1 Noise signal processing and generation method, codec, and coding/decoding system
KR101924767B1 Voice frequency code stream decoding method and device
WO2014077254A1 Audio encoding device, audio encoding method, audio encoding program, audio decoding device, audio decoding method, and audio decoding program
WO2017166800A1 Frame loss compensation processing method and apparatus
US10984811B2 Audio coding method and related apparatus
WO2013078974A1 Method for estimating parameters of an inactive sound signal, and method and system for generating comfort noise
RU2666471C2 Method and apparatus for processing frame loss
WO2019037714A1 Encoding method and encoding apparatus for stereo signal
JP6264673B2 Method and decoder for processing lost frames

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14826461

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2911053

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 242430

Country of ref document: IL

WWE Wipo information: entry into national phase

Ref document number: 2014826461

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2014292680

Country of ref document: AU

Date of ref document: 20140509

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20157033903

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2016522198

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: MX/A/2015/017002

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 2015155744

Country of ref document: RU

Kind code of ref document: A

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112015032273

Country of ref document: BR

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: A201512807

Country of ref document: UA

ENP Entry into the national phase

Ref document number: 112015032273

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20151222