WO2001003115A1 - Decodeur audio et procede de compensation d'erreur de codage - Google Patents
- Publication number
- WO2001003115A1 (PCT/JP2000/004323)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/005—Correction of errors induced by the transmission channel, if related to the coding algorithm
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/12—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being prediction coefficients
Definitions
- the present invention relates to a speech decoding device and a code error compensation method used for a mobile communication system, a speech recording device, and the like that encode and transmit a speech signal.
- voice coding devices that compress voice information and encode it at a low bit rate are used for effective use of radio waves and storage media.
- the decoding side detects the error and uses an error compensation method to suppress the deterioration of the decoded voice quality.
- As a conventional technique of this type, there is the CS-ACELP (conjugate-structure algebraic-code-excited linear-prediction) codec of ITU-T Recommendation G.729 ("Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear-prediction (CS-ACELP)").
- FIG. 1 is a block diagram showing a configuration of a speech decoding apparatus including error compensation of the CS-ACELP coding scheme.
- speech decoding is performed in 10 ms frame units (decoding units), and the presence or absence of error detection on the transmission path is notified to the speech decoding device in frame units.
- the received coded data in a frame in which a transmission path error is not detected are separated in the data separation unit 1 into parameters required for decoding.
- an adaptive excitation is generated by adaptive excitation codebook 3 using the lag parameter decoded by lag parameter decoding section 2, and a fixed excitation is generated by fixed excitation codebook 4.
- the driving excitation is generated by multiplying the decoded gains in multiplier 6 and adding the results in adder 7.
- decoded speech is generated via LPC synthesis filter 9 and post filter 10, using the LPC parameters decoded by LPC parameter decoding section 8.
- In a frame in which a transmission path error is detected, an adaptive excitation is generated using, as the lag parameter, the lag parameter of the most recent frame in which no error was detected; a fixed excitation is generated by giving a random fixed excitation code to fixed excitation codebook 4; and, as the gain parameters, the adaptive excitation gain of the previous frame and a value obtained by attenuating the fixed excitation gain are used to generate the driving excitation. LPC synthesis and post-filter processing are then performed using the LPC parameters of the previous frame to obtain decoded speech.
- the speech decoding apparatus can perform error compensation processing when a transmission path error occurs.
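As a non-limiting illustration (not the G.729 reference implementation), the conventional concealment summarized above, reuse the last good lag, substitute a random fixed-codebook vector, keep the adaptive-excitation gain, and attenuate the fixed-excitation gain, could be sketched as follows; all names, the attenuation constant, and the subframe length are illustrative assumptions:

```python
import random

ATTENUATION = 0.98  # illustrative per-erasure decay of the fixed-codebook gain

def conceal_subframe(state, subframe_len=40, rng=random.Random(0)):
    """state holds the last good 'lag', 'g_adaptive', 'g_fixed' and the
    past 'excitation' samples (a plain list of floats)."""
    lag = state["lag"]                               # reuse previous frame's lag
    past = state["excitation"]
    # Adaptive excitation: repeat the last pitch cycle of the past excitation.
    cycle = past[-lag:]
    adaptive = (cycle * (subframe_len // lag + 1))[:subframe_len]
    # Fixed excitation: random code vector (placeholder for an algebraic code).
    fixed = [rng.choice((-1.0, 0.0, 1.0)) for _ in range(subframe_len)]
    g_a = state["g_adaptive"]                        # adaptive gain kept as-is
    g_f = state["g_fixed"] * ATTENUATION             # fixed gain attenuated
    exc = [g_a * a + g_f * f for a, f in zip(adaptive, fixed)]
    state["g_fixed"] = g_f                           # decay accumulates over erasures
    state["excitation"] = past + exc                 # update excitation history
    return exc
```

Repeated erasures keep attenuating `g_fixed`, which is why long error bursts fade out rather than producing loud artifacts.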
- An object of the present invention is to provide a speech decoding apparatus and an error compensation method that can realize more improved decoded speech quality in a frame in which an error is detected.
- Another object of the present invention is to include, in the speech coding parameters, mode information representing the characteristics of each short section (frame) of the speech, and to adaptively calculate, in the speech decoding apparatus, the lag parameter and the gain parameter used for speech decoding according to the mode information.
- A further object of the present invention is to adaptively control, in the speech decoding apparatus, the ratio between the adaptive excitation gain and the fixed excitation gain according to the mode information. A still further object is to adaptively control the adaptive excitation gain parameter and the fixed excitation gain parameter used for speech decoding according to the value of the decoded gain parameter in a normal decoding unit in which no error is detected, immediately after a decoding unit of the coded data in which an error was detected.
- FIG. 1 is a block diagram showing the configuration of a conventional speech decoding apparatus
- FIG. 2 is a block diagram illustrating a configuration of a wireless communication system including a speech encoding device and a speech decoding device according to an embodiment of the present invention
- FIG. 3 is a block diagram showing the configuration of the speech decoding apparatus according to Embodiment 1 of the present invention.
- FIG. 4 is a block diagram showing an internal configuration of a lag parameter decoding section in the speech decoding apparatus according to Embodiment 1 of the present invention
- FIG. 5 is a block diagram showing an internal configuration of a gain parameter decoding section in the speech decoding apparatus according to Embodiment 1 of the present invention
- FIG. 6 is a block diagram showing a configuration of a speech decoding apparatus according to Embodiment 2 of the present invention.
- FIG. 7 is a block diagram showing an internal configuration of a gain parameter decoding section in the speech decoding apparatus according to Embodiment 2 of the present invention.
- FIG. 8 is a block diagram showing a configuration of a speech decoding apparatus according to Embodiment 3 of the present invention.
- FIG. 9 is a block diagram showing the internal configuration of the gain parameter decoding section in the speech decoding apparatus according to Embodiment 3 of the present invention.
- FIG. 2 is a block diagram showing a configuration of a wireless communication device including the speech decoding device according to Embodiment 1 of the present invention.
- the wireless communication device refers to a communication terminal device such as a base station device or a mobile station in a digital wireless communication system.
- On the transmitting side, speech is converted into an electrical analog signal by a speech input device 101 such as a microphone; the analog speech signal is output to A/D converter 102, converted into a digital speech signal, and output to speech coding section 103.
- the voice coding unit 103 performs voice coding processing on the digital voice signal, and outputs the coded information to the modulation / demodulation unit 104.
- the modulation and demodulation unit 104 digitally modulates the encoded voice signal and sends it to the radio transmission unit 105.
- Radio transmission section 105 performs a predetermined radio transmission process on the modulated signal. This signal is transmitted via antenna 106.
- the received signal received by antenna 107 is subjected to predetermined radio reception processing by radio receiving section 108 and sent to modulation/demodulation section 104.
- the modulation and demodulation section 104 performs demodulation processing on the received signal, and outputs the demodulated signal to the speech decoding section 109.
- Speech decoding section 109 performs decoding processing on the demodulated signal to obtain a digital decoded speech signal, and outputs it to D/A converter 110.
- D/A converter 110 converts the digital decoded speech signal output from speech decoding section 109 into an analog decoded speech signal and outputs it to a speech output device 111 such as a speaker.
- the audio output device 111 converts the electrical analog decoded audio signal into decoded audio and outputs it.
- FIG. 3 is a block diagram showing a configuration of the speech decoding device according to Embodiment 1 of the present invention.
- the error compensation method in this speech decoding device is based on the assumption that an error is detected on the speech decoding side with respect to the encoded data obtained by encoding the input speech signal on the speech encoding side. It operates to suppress the quality degradation of the decoded speech during speech decoding.
- speech decoding is performed in a fixed short section of about 10 to 50 ms (called a frame).
- the detection result of whether an error has occurred in the received data is reported as an error detection flag in the frame unit.
- Error detection may use, for example, a CRC (Cyclic Redundancy Check), and is performed in advance outside the speech decoding apparatus. The data subject to error detection may be the entire coded data of each frame, or only the perceptually important coded data.
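As a non-limiting illustration of how a per-frame error detection flag could be produced outside the decoder, the sketch below appends a CRC to each frame on the channel side and checks it on reception. The use of `zlib.crc32` with a 4-byte trailer is an illustrative assumption, not the CRC scheme of any particular standard:

```python
import zlib

def protect(frame: bytes) -> bytes:
    """Append a CRC32 of the payload (done on the encoder/channel side)."""
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def check(received: bytes) -> tuple[bytes, bool]:
    """Return (payload, error_detected) for one received frame.
    error_detected corresponds to the error detection flag in the text."""
    payload, crc = received[:-4], received[-4:]
    error = zlib.crc32(payload).to_bytes(4, "big") != crc
    return payload, error
```

The boolean returned by `check` plays the role of the error detection flag notified to the decoder in frame units.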
- The speech coding parameters (transmission parameters) include at least mode information representing the characteristics of each frame of the speech signal.
- Data separation section 201 separates the coded data into the individual speech coding parameters. The mode information, LPC parameters, lag parameters, and gain parameters are then decoded by mode information decoding section 202, LPC parameter decoding section 203, lag parameter decoding section 204, and gain parameter decoding section 205, respectively.
- the mode information indicates the state of the audio signal in units of frames.
- modes such as voiced, unvoiced, and transient
- the encoding side performs encoding according to these states.
- the mode is determined on the encoding side using, for example, the pitch prediction gain.
- For example, frames are classified into four modes, unvoiced, transient, voiced (weak periodicity), and voiced (strong periodicity), and encoding is performed according to the mode.
- the adaptive excitation codebook 206 is used to generate an adaptive excitation signal based on the lag parameter
- the fixed excitation codebook 207 is used to generate a fixed excitation signal based on the fixed excitation code.
- Each generated excitation signal is multiplied by a gain using the decoded gain parameters in multiplier 208, and the two excitation signals are added in adder 209 to generate the driving excitation. Decoded speech is then generated and output via LPC synthesis filter 210 and post filter 211.
- the data separating unit 201 separates the data into each encoded parameter.
- the mode information decoding unit 202 extracts the decoding mode information of the previous frame and uses this as the mode information of the current frame.
- Lag parameter decoding section 204 and gain parameter decoding section 205 are given the lag parameter code, the gain parameter code, and the mode information of the current frame obtained by data separation section 201.
- the lag parameter and the gain parameter used in the current frame are adaptively calculated according to the mode information. Details of this calculation method will be described later.
- The decoding method for the LPC parameters and the fixed excitation parameters is arbitrary; as in the conventional technique, the LPC parameters of the previous frame may be used, and a fixed excitation signal generated by giving a random fixed excitation code may be used for the fixed excitation parameters.
- An arbitrary noise signal generated by a random number generator may also be used as the fixed excitation signal.
- decoding may be performed using the fixed excitation code separated from the encoded data of the current frame as it is.
- the decoded speech is generated via driving excitation signal generation, LPC synthesis, and post-filtering, in the same way as when no error is detected.
- FIG. 4 is a block diagram showing an internal configuration of lag parameter decoding section 204 in the speech decoding apparatus shown in FIG. 3.
- Lag decoding section 301 decodes the lag code of the current frame.
- the intra-frame and inter-frame lag change detection sections 302 and 303 measure changes in the decoded lag parameters within and between frames.
- The lag parameter for one frame consists of a plurality of lag parameters corresponding to the subframes in that frame; intra-frame lag change detection is performed by detecting whether any of these differ from one another by more than a certain threshold.
- Inter-frame lag change detection is performed by comparing the lag parameters in the current frame with the lag parameter of the previous frame (its last subframe) and detecting whether there is a difference greater than a certain threshold.
- Lag parameter determination section 304 finally determines the lag parameter used in the current frame.
- A method of determining this lag parameter will now be described.
- In the conventional method, the lag parameter used in the previous frame is unconditionally used as the value of the current frame.
- In the present embodiment, the parameters decoded from the coded data of the current frame are used, under the condition that the lag change within the frame and between frames is limited.
- That is, when the intra-frame lag fluctuates beyond the threshold, or when the inter-frame lag change is measured and the variation from the previous frame (or previous subframe) is large (the difference exceeds the threshold), the lag parameter of the previous frame is used instead of the decoded value.
- L(is) indicates the decoded lag parameter
- L'(is) indicates the lag parameter used in the current frame
- NS indicates the number of subframes
- Lprev indicates the lag parameter of the previous frame (or previous subframe)
- Tha and Thb indicate thresholds
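The lag-determination rule described above can be sketched as follows; the threshold values and the exact fallback policy are illustrative assumptions, stated in terms of the symbols L(is), Lprev, Tha, and Thb defined in the text:

```python
THA = 10  # max allowed lag difference between subframes of one frame (Tha)
THB = 10  # max allowed lag difference from the previous (sub)frame (Thb)

def select_lags(decoded_lags, l_prev):
    """decoded_lags: decoded lag L(is) per subframe of the error-detected frame;
    l_prev: lag Lprev of the last subframe of the previous good frame.
    Returns the lags L'(is) actually used in the current frame."""
    intra_ok = max(decoded_lags) - min(decoded_lags) <= THA   # intra-frame check
    inter_ok = all(abs(l - l_prev) <= THB for l in decoded_lags)  # inter-frame check
    if intra_ok and inter_ok:
        return list(decoded_lags)        # trust the received lag codes
    return [l_prev] * len(decoded_lags)  # fall back to the past lag
```

When both checks pass, the decoded lags of the possibly erroneous frame are used actively, which is exactly the conditional use the text advocates for transient frames.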
- Alternatively, only intra-frame lag change detection section 302 or only inter-frame lag change detection section 303 may be used, and the lag parameter to be used in the current frame may be determined from the intra-frame fluctuation information alone or the inter-frame fluctuation information alone.
- The above processing may also be applied only when the mode information indicates transient; in the unvoiced case, the lag parameter decoded from the coded data of the current frame may be used as is.
- the lag change detection is performed on the lag parameter decoded from the lag code.
- the lag change detection may be directly performed on the lag code value.
- A transient frame is a frame, such as a speech onset, in which the lag parameter plays an important role. For this reason, as described above, in a transient frame the decoded lag parameter obtained from the coded data of the current frame can be used actively but conditionally, so as to avoid degradation due to a code error. As a result, the decoded speech quality can be improved compared with the conventional method of unconditionally using the previous frame's lag parameter.
- FIG. 5 is a block diagram showing an internal configuration of gain parameter decoding section 205 in the speech decoding apparatus shown in FIG. 3.
- Gain decoding section 401 decodes the gain parameter from the gain code of the current frame.
- When the gain decoding method differs according to the mode information (for example, a different table is used for decoding), decoding is performed accordingly.
- The mode information used at this time is the one decoded from the coded data of the current frame.
- As the gain parameter encoding method, a method is assumed in which the gain value is expressed by a combination of a parameter representing the power of the frame (or subframe) and a parameter representing the relative relationship to that power.
- In a frame in which an error is detected, the value of the previous frame, or a value obtained by attenuating it, is used.
- the switching unit 402 switches the processing according to the error detection flag and the mode information.
- the decoding gain parameter is output as is.
- processing is switched according to mode information for frames in which errors are detected.
- the voiced frame gain compensating unit 404 calculates a gain parameter used in the current frame.
- For example, the gain parameter of the previous frame held in gain buffer 403 may be attenuated by a certain factor.
- The unvoiced/transient frame gain control section 405 controls the gain value using the gain parameter decoded by gain decoding section 401. Specifically, based on the gain parameter of the previous frame obtained from gain buffer 403, an upper limit and/or a lower limit on the relative change from that value is set, and the decoded gain parameter, range-limited by the upper (and lower) limits, is used as the gain parameter of the current frame. Equation (2) below shows an example of the limiting method when an upper limit is set for the adaptive excitation gain and the fixed excitation gain.
- Ga: adaptive excitation gain parameter
- Ga_prev, Gc_prev: adaptive and fixed excitation gain parameters of the previous subframe
- Tha, Thc: thresholds
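The range limiting suggested by equation (2) can be sketched as follows for the upper-limit case: the decoded gains are used, but their growth relative to the previous subframe is capped. Treating Tha and Thc as multiplicative growth factors, and their values, is an illustrative assumption:

```python
THA = 2.0  # max allowed growth factor for the adaptive excitation gain (Tha)
THC = 2.0  # max allowed growth factor for the fixed excitation gain (Thc)

def limit_gains(g_a, g_c, g_a_prev, g_c_prev):
    """Clamp decoded gains (g_a, g_c) of an unvoiced/transient error frame
    against the previous-subframe values (g_a_prev, g_c_prev)."""
    g_a = min(g_a, THA * g_a_prev)  # upper-limit the adaptive excitation gain
    g_c = min(g_c, THC * g_c_prev)  # upper-limit the fixed excitation gain
    return g_a, g_c
```

A lower limit on the decrease could be added symmetrically with `max(...)` where the scheme calls for it.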
- In this way, as in the lag parameter decoding section, the gain parameter code of the current frame, which may contain a code error, is used actively but conditionally so as to avoid degradation due to the code error.
- the lag parameter and gain parameter decoding sections decode the lag and gain parameters used for speech decoding.
- By adaptively calculating these based on the mode information, it is possible to provide an error compensation method that realizes further improved decoded speech quality.
- That is, when the mode information of the current frame indicates transient (or transient or unvoiced) and there is little change in the decoded lag parameter within the frame or between frames, the lag parameter determination section uses the decoded lag parameter obtained from the coded data of the current frame as the current frame's lag parameter; under other conditions it uses a past lag parameter as the current frame's lag parameter. This makes it possible to provide an error compensation method that improves decoded speech quality, especially when the error-detected frame is a speech onset.
- In addition, for the gain parameter decoded from the coded data of a frame containing an error, the unvoiced/transient frame gain control section controls the output gain by specifying an upper limit on the increase and/or a lower limit on the decrease relative to the past gain parameter. This suppresses the decoded gain parameter from taking an abnormal value due to the error, and provides an error compensation method that realizes further improved decoded speech quality.
- the error compensation method using the speech decoding apparatus shown in FIG. 3 is intended for a speech coding scheme that includes mode information representing features of each short section of a speech signal as coding parameters.
- This error compensation method can also be applied to a speech coding scheme in which speech mode information is not included in the coding parameters.
- the decoding side may be provided with a mode calculation unit that calculates mode information representing characteristics of each short section of the audio signal from the decoding parameters or the decoded signal.
- The above description applies to the so-called CELP (Code Excited Linear Prediction) type, in which the driving excitation is represented by the sum of an adaptive excitation and a fixed excitation and decoded speech is generated by LPC synthesis.
- the error compensating method of the present invention can be widely applied to any speech coding method in which pitch period information and gain information of a sound source or a speech signal are used as encoding parameters.
- FIG. 6 is a block diagram showing a configuration of a speech decoding apparatus according to Embodiment 2 of the present invention.
- the error compensation method in the speech decoding apparatus according to the present embodiment is similar to that in Embodiment 1, where an error is detected on the decoding side with respect to the encoded data obtained by encoding the input speech signal on the speech encoding side. In such a case, it operates to suppress the quality degradation of the decoded speech at the time of speech decoding in the speech decoding device.
- Speech decoding is performed in units of a fixed short section (called a frame) of about 10 to 50 ms, and whether or not an error has occurred in the received data is detected in frame units; the detection result is notified as an error detection flag.
- Error detection is performed in advance outside the speech decoding apparatus; the data subject to error detection may be all the coded data of each frame, or only the perceptually important coded data.
- The target speech coding parameters include at least mode information representing the characteristics of each frame of the speech signal and gain parameters representing the gain information of the adaptive excitation signal and the fixed excitation signal.
- the coded data is first separated into respective coding parameters by a data separation unit 501.
- In a frame in which an error is detected, mode information decoding section 502 outputs the decoded mode information of the previous frame and uses it as the mode information of the current frame. This mode information is sent to gain parameter decoding section 505.
- Lag parameter decoding section 504 decodes the lag parameter used in the current frame.
- The method is arbitrary; as in the conventional method, the lag parameter of the most recent frame in which no error was detected may be used.
- the gain parameter decoding unit 505 calculates a gain parameter using the mode information by a method described later.
- The decoding method for the LPC parameters and the fixed excitation parameters is arbitrary. As in the conventional method, the LPC parameters of the previous frame may be used, and a fixed excitation signal generated by giving a random fixed excitation code may be used for the fixed excitation parameters. An arbitrary noise signal generated by a random number generator may also be used as the fixed excitation signal, or decoding may be performed using the fixed excitation code separated from the coded data of the current frame as is. As in the case where no error is detected, decoded speech is generated from the obtained parameters through driving excitation signal generation, LPC synthesis, and post-filtering.
- FIG. 7 is a block diagram showing the internal configuration of gain parameter decoding section 505 in the speech decoding apparatus shown in FIG. 6.
- Gain decoding section 601 decodes the gain parameter from the gain code of the current frame.
- When the gain decoding method differs according to the mode information (for example, a different table is used for decoding), decoding is performed accordingly.
- the processing is switched by the switching unit 602 according to the error detection flag. For frames in which no errors are detected, the decoding gain parameter is output as is.
- In a frame in which an error is detected, adaptive excitation/fixed excitation gain ratio control section 604 takes the gain parameters of the previous frame held in gain buffer 603 (the adaptive excitation gain and the fixed excitation gain), performs adaptive/fixed excitation gain ratio control according to the mode information, and outputs the resulting gain parameters. Specifically, if the mode information of the current frame indicates voiced, control is performed so that the ratio of the adaptive excitation gain is high; if it indicates transient or unvoiced, the ratio of the adaptive excitation gain is reduced.
- In this control, it is preferable to keep the power of the driving excitation input to the LPC synthesis filter (the sum of the adaptive excitation and the fixed excitation) the same as before the ratio control.
- However, while error-detected frames continue (including a run of a single frame), it is preferable to perform control that attenuates the power of the driving excitation.
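The gain-ratio control of Embodiment 2 can be sketched as follows: the mode decides how the excitation energy is split between the adaptive and fixed components while the total driving-excitation power is preserved, and the power is attenuated while erasures continue. The ratio table and the decay factor below are illustrative assumptions, not values from the specification:

```python
import math

RATIO = {"voiced": 0.9, "transient": 0.3, "unvoiced": 0.1}  # adaptive share of power
DECAY = 0.9  # power attenuation applied per consecutive erased frame

def ratio_control(g_a_prev, g_c_prev, mode, erased_run=1):
    """Redistribute previous-frame gains (g_a_prev, g_c_prev) according to mode,
    keeping (or attenuating) the total excitation power g_a**2 + g_c**2."""
    power = g_a_prev ** 2 + g_c_prev ** 2   # simple proxy for excitation power
    power *= DECAY ** erased_run            # attenuate during a run of erasures
    share = RATIO[mode]
    g_a = math.sqrt(share * power)          # adaptive gain dominates when voiced
    g_c = math.sqrt((1.0 - share) * power)  # fixed gain dominates otherwise
    return g_a, g_c
```

Using squared gains as the power proxy is a simplification; in a real decoder the power of the actual excitation vectors would be held constant, as the text specifies.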
- Alternatively, a gain code buffer for holding past gain codes may be provided, and in a frame in which an error is detected, gain decoding section 601 may decode the gain using the gain code of the previous frame, with the adaptive/fixed excitation gain ratio control then applied to the decoded result.
- In this way, when the current frame to be error-compensated is voiced, the adaptive excitation component is made dominant so that the output is more voiced and stationary, and in the unvoiced/transient modes the fixed excitation component is made dominant. This suppresses degradation due to inappropriate periodic components from the adaptive excitation and improves perceptual quality.
- As described above, in a frame in which an error is detected, the adaptive excitation/fixed excitation gain ratio control section controls the gain parameters of the previous frame (the adaptive excitation gain and the fixed excitation gain) according to the mode information, making it possible to provide an error compensation method that realizes improved decoded speech quality.
- Although the speech decoding apparatus shown in FIG. 6 has been described for a speech coding scheme that includes, as a coding parameter, mode information representing the characteristics of each short section of the speech signal, the error compensation method of the present invention can also be applied to a speech coding scheme in which speech mode information is not included in the coding parameters.
- the decoding side may be provided with a mode calculation unit that calculates mode information representing characteristics of each short section of the audio signal from the decoding parameters or the decoded signal.
- FIG. 8 is a block diagram showing a configuration of a voice decoding device according to Embodiment 3 of the present invention.
- As in Embodiments 1 and 2, the error compensation method in the speech decoding apparatus according to the present embodiment operates to suppress quality degradation of the decoded speech during speech decoding when an error is detected on the decoding side in the coded data obtained by encoding the input speech signal on the encoding side.
- speech decoding is performed in units of a fixed short section (called a frame) of about 10 to 50 ms, and it is detected in each frame unit whether or not an error has occurred in the received data. The result is reported as an error detection flag.
- Error detection is performed in advance outside the speech decoding apparatus; the data subject to error detection may be the entire coded data of each frame, or only the perceptually important coded data.
- the speech coding parameters include at least gain parameters representing gain information of the adaptive excitation signal and the fixed excitation signal.
- An adaptive excitation and a fixed excitation are generated, and the driving excitation is produced by multiplying them by the gains decoded by gain parameter decoding section 705 (by the method described later) in multiplier 706 and adding the results in adder 707. Then, using this excitation and the LPC parameters decoded by LPC parameter decoding section 708, decoded speech is generated via LPC synthesis filter 709 and post filter 710.
- a decoded speech is generated in the same manner as a frame in which no error is detected.
- the decoding method of each parameter except for the gain parameter is optional, but the parameter of the previous frame may be used for the LPC parameter and the lag parameter in the same manner as in the past.
- For the fixed excitation parameter, a fixed excitation signal generated by giving a random fixed excitation code may be used, an arbitrary noise signal generated by a random number generator may be used as the fixed excitation signal, or decoding may be performed using the fixed excitation code separated from the coded data of the current frame as is.
- FIG. 9 is a block diagram showing the internal configuration of gain parameter decoding section 705 in the speech decoding apparatus shown in FIG.
- a gain decoding unit 801 decodes a gain parameter from the gain parameter code of the current frame.
- the error state monitor unit 802 determines the state of error detection based on the presence or absence of error detection. This state indicates whether the current frame is a normal frame, an error detection frame, or a normal frame immediately after error detection.
- the processing is switched by the switching unit 803 according to the above state.
- in a normal frame, the gain parameter decoded by the gain decoding unit 801 is output as it is.
- the gain parameter in the error detection frame is calculated.
- the calculation method is arbitrary: a value obtained by attenuating the adaptive excitation gain and the fixed excitation gain of the previous frame may be used, as in conventional methods; the gain code of the previous frame may be decoded again and used as the gain parameter of the current frame; or the mode-dependent lag/gain parameter control or the mode-dependent gain parameter ratio control described in the first or second embodiment may be used.
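The conventional fallback mentioned above can be sketched as follows; the attenuation factor of 0.9 is an illustrative assumption, not a value from this description.

```python
def compensate_gains(prev_adaptive_gain: float, prev_fixed_gain: float,
                     attenuation: float = 0.9) -> tuple:
    """Gain compensation for an error-detected frame: reuse the previous
    frame's adaptive and fixed excitation gains, attenuated so that a
    corrupted frame does not retain full energy."""
    return prev_adaptive_gain * attenuation, prev_fixed_gain * attenuation
```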
- the adaptive excitation / fixed excitation gain control unit 806 performs the following processing on the normal frame after the error detection.
- control is performed in which an upper limit value is specified for the value of the adaptive excitation gain (coefficient value by which the adaptive excitation is multiplied).
- the upper limit may be a fixed value (for example, 1.0), an upper limit proportional to the decoded adaptive excitation gain value may be determined, or a combination thereof may be used.
- the fixed sound source gain is also controlled so as to maintain the ratio between the adaptive sound source gain and the fixed sound source gain correctly.
- An example of a specific implementation method is shown in the following equation (3):
- Equation (3): if Ga > 1.0, then Gf ← Gf × (1.0 / Ga) and Ga ← 1.0
- where Ga is the adaptive excitation gain and Gf is the fixed excitation gain
- because the adaptive excitation gain is decoded in dependence on the decoded excitation of the previous frame, after the error compensation processing of the previous frame it may differ from its original value, and in some cases the amplitude of the decoded speech increases abnormally and the quality is degraded. By imposing the upper limit, this quality degradation can be suppressed.
- the excitation signal in a normal frame after error detection becomes more similar to the signal that would be decoded if there were no error, and the decoded speech quality can be improved.
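A minimal sketch of the equation (3) control, assuming the fixed upper limit of 1.0 named above:

```python
def limit_adaptive_gain(ga: float, gf: float, upper: float = 1.0) -> tuple:
    """Equation (3): clip the adaptive excitation gain Ga to the upper
    limit and scale the fixed excitation gain Gf by the same factor,
    so the Ga:Gf ratio of the decoded gains is preserved."""
    if ga > upper:
        scale = upper / ga
        return upper, gf * scale
    return ga, gf  # within the limit: gains pass through unchanged
```

With `ga = 2.0`, `gf = 0.5`, the pair becomes `(1.0, 0.25)`: both gains are halved, so their ratio is unchanged.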
- the above error compensation method may also be implemented as software.
- a program of the above error compensation method may be stored in a ROM, and the CPU may operate in accordance with that program.
- alternatively, the program, the adaptive excitation codebook, and the fixed excitation codebook may be stored in a computer-readable storage medium, the program and both codebooks may be loaded from the storage medium into the RAM of the computer, and the computer may operate in accordance with the program. In such a case, the same operations and effects as those of the first to third embodiments are obtained.
- a speech decoding apparatus includes: a receiving unit that receives data having encoded transmission parameters including mode information, a lag parameter, and a gain parameter; a decoding unit that decodes the mode information, the lag parameter, and the gain parameter; and a determining unit that, for a decoding unit in which an error has been detected in the data, adaptively determines the lag parameter and the gain parameter used in that decoding unit, using mode information for a decoding unit earlier than that decoding unit.
- the lag parameter and the gain parameter used for speech decoding are adaptively calculated based on the decoded mode information. Therefore, it is possible to realize more improved decoded speech quality.
- the determining unit includes a detection unit that detects variation of the lag parameter within the decoding unit and/or between decoding units, and a configuration is adopted in which the lag parameter used in the decoding unit is determined based on the mode information and the detection result.
- the lag parameter used for speech decoding is adaptively calculated based on the decoded mode information and on the result of detecting variation of the lag parameter within the decoding unit and/or between decoding units, so that more improved decoded speech quality can be realized.
- when the mode indicated by the mode information is a transient mode or an unvoiced mode, and the detection unit detects no variation of the lag parameter by more than a predetermined amount within the decoding unit and/or between decoding units, the lag parameter for the current decoding unit is used; in other cases, the lag parameter for a past decoding unit is used.
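The lag selection just described can be sketched as follows; the threshold `max_delta` and the mode labels are illustrative assumptions of this sketch, not values from the description.

```python
def select_lag(current_lag: int, recent_lags: list, mode: str,
               max_delta: int = 10) -> int:
    """Only in the transient or unvoiced mode, and only when the lag has
    not varied by more than a predetermined amount within the frame and/or
    against recent frames, is the current decoding unit's lag used; in all
    other cases the lag of a past decoding unit is reused."""
    if mode in ("transient", "unvoiced"):
        if all(abs(current_lag - lag) <= max_delta for lag in recent_lags):
            return current_lag
    return recent_lags[-1]
```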
- the determining unit includes a limit control unit that, when the mode indicated by the mode information is the transient mode or the unvoiced mode, limits the range of the gain parameter based on the gain parameter for a past decoding unit, and determines a gain parameter within the limited range as the gain parameter.
- the gain parameter decoded from the coded data of the current decoding unit is not used unchanged; instead, the output gain is controlled by specifying an upper limit of increase and a lower limit of decrease from the past gain parameter. This suppresses the gain parameter decoded from coded data that may contain errors from taking an abnormal value, so that more improved decoded speech quality can be realized.
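A minimal sketch of such range limiting; the 2.0 and 0.5 factors bounding the allowed increase and decrease are illustrative assumptions.

```python
def limit_gain_range(decoded_gain: float, past_gain: float,
                     max_up: float = 2.0, max_down: float = 0.5) -> float:
    """Clamp a gain decoded from possibly erroneous data to a window around
    the past decoding unit's gain: an upper limit on how much it may
    increase and a lower limit on how much it may decrease."""
    lower = past_gain * max_down
    upper = past_gain * max_up
    return min(max(decoded_gain, lower), upper)
```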
- the speech decoding apparatus of the present invention receives data having coded transmission parameters including mode information, lag parameters, fixed excitation parameters, and gain parameters including an adaptive excitation gain and a fixed excitation gain.
- a receiving unit that receives the data; a decoding unit that decodes the mode information, the lag parameter, the fixed excitation parameter, and the gain parameter; and a ratio control unit that, in a decoding unit in which an error is detected in the data, controls the ratio between the adaptive excitation gain and the fixed excitation gain using mode information for a decoding unit earlier than that decoding unit.
- the ratio control unit controls the gain ratio so as to raise the ratio of the adaptive excitation gain when the mode indicated by the mode information is the voiced mode, and to lower the ratio of the adaptive excitation gain when the mode is the transient mode or the unvoiced mode.
- the ratio between the adaptive excitation gain and the fixed excitation gain is adaptively controlled according to the mode information when decoding the gain parameter in a decoding unit in which an error is detected in the encoded data. Therefore, the decoded speech quality of the error detection decoding unit can be more audibly improved.
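The mode-dependent ratio control might be sketched as below; the weighting factors are illustrative assumptions, and only the direction of the adjustment (raise the adaptive share for voiced frames, lower it for transient/unvoiced frames) follows the description.

```python
def control_gain_ratio(adaptive_gain: float, fixed_gain: float,
                       mode: str) -> tuple:
    """For an error-detected decoding unit, reweight the adaptive and fixed
    excitation gains by mode: voiced speech is strongly periodic, so the
    adaptive share is raised; transient and unvoiced frames are not, so
    the adaptive share is lowered."""
    if mode == "voiced":
        return adaptive_gain * 1.2, fixed_gain * 0.8
    if mode in ("transient", "unvoiced"):
        return adaptive_gain * 0.5, fixed_gain * 1.2
    return adaptive_gain, fixed_gain
```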
- the speech decoding apparatus includes: a receiving unit that receives data having encoded transmission parameters including a lag parameter, a fixed excitation parameter, and a gain parameter including an adaptive excitation gain and a fixed excitation gain; a decoding unit that decodes the lag parameter, the fixed excitation parameter, and the gain parameter; and a defining unit that defines an upper limit of the gain parameter in a normal decoding unit immediately after a decoding unit in which an error is detected.
- control is performed so as to define the upper limit of the decoded adaptive excitation gain parameter in a normal decoding unit in which no error is detected immediately after a decoding unit in which an error is detected in the encoded data. Therefore, it is possible to suppress the deterioration of the decoded voice quality due to an abnormal increase in the amplitude of the decoded voice signal in the normal decoding unit immediately after the error detection.
- in the above configuration, the speech decoding apparatus of the present invention adopts a configuration in which the defining unit controls the fixed excitation gain so as to maintain a predetermined ratio with respect to the adaptive excitation gain in the range where the upper limit is defined.
- the ratio between the adaptive excitation gain and the fixed excitation gain is controlled so as to keep the value it would have in error-free decoding, so the excitation signal in the normal decoding unit immediately after error detection becomes more similar to the error-free case, and the decoded speech quality can be improved.
- a speech decoding apparatus includes: a receiving unit that receives data having encoded transmission parameters including a lag parameter and a gain parameter; a decoding unit that decodes the lag parameter and the gain parameter;
- a mode calculation unit that obtains mode information from a decoded parameter or a decoded signal obtained by decoding the data; and a determining unit that, for a decoding unit in which an error is detected in the data, adaptively determines the lag parameter and the gain parameter used for that decoding unit using mode information for a decoding unit earlier than that decoding unit.
- thus, even for a speech coding method that does not include speech mode information among the coding parameters, the lag parameter and the gain parameter used for speech decoding can be adaptively calculated based on mode information calculated on the decoding side, and more improved decoded speech quality can be realized.
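When the bitstream carries no mode bit, the decoder can estimate a mode from parameters it has already decoded. The classifier below is purely an illustrative assumption (the threshold values and the three-way split are not from this description): a high adaptive excitation gain suggests periodic, voiced speech, and a sharp frame-energy rise suggests a transient.

```python
def estimate_mode(adaptive_gain: float, frame_energy: float,
                  prev_energy: float) -> str:
    """Decoder-side mode estimate from decoded parameters/signals."""
    if prev_energy > 0.0 and frame_energy / prev_energy > 2.0:
        return "transient"   # sharp energy rise
    if adaptive_gain > 0.7:
        return "voiced"      # strong pitch predictor
    return "unvoiced"
```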
- a speech decoding apparatus for receiving data having an encoded transmission parameter including a lag parameter, a fixed source parameter, and a gain parameter including an adaptive source gain and a fixed source gain.
- a decoding unit that decodes the lag parameter, the fixed excitation parameter, and the gain parameter; a mode calculation unit that obtains mode information from a decoded parameter or a decoded signal obtained by decoding the data; and a ratio control unit that, in a decoding unit in which an error has been detected in the data, controls the ratio between the adaptive excitation gain and the fixed excitation gain using mode information for a decoding unit earlier than that decoding unit.
- when gain parameter decoding is performed in a decoding unit in which an error is detected in the coded data, the ratio between the adaptive excitation gain and the fixed excitation gain is adaptively controlled according to the mode information calculated on the decoding side, so the decoded speech quality of the error detection decoding unit can be more audibly improved.
- the code error compensation method includes the steps of: decoding the mode information, the lag parameter, and the gain parameter in data having encoded transmission parameters including mode information, a lag parameter, and a gain parameter; and, in a decoding unit in which an error is detected in the data, adaptively determining the lag parameter and the gain parameter used in that decoding unit using mode information for a decoding unit earlier than that decoding unit.
- a lag parameter and a gain parameter used for speech decoding are adaptively calculated based on the decoded mode information. Therefore, it is possible to realize more improved decoded speech quality.
- the code error compensation method is the method as set forth above, further comprising detecting variation of the lag parameter within the decoding unit and/or between decoding units, and determining the lag parameter used in the decoding unit based on the mode information and the detection result.
- the lag parameter used for speech decoding is adaptively calculated based on the decoded mode information and on the result of detecting variation of the lag parameter within the decoding unit and/or between decoding units, so that more improved decoded speech quality can be realized.
- when the mode indicated by the mode information is a transient mode or an unvoiced mode, and no variation of the lag parameter by a predetermined amount or more is detected within the decoding unit and/or between decoding units, the lag parameter for the current decoding unit is used; otherwise, the lag parameter for a past decoding unit is used.
- when the mode indicated by the mode information is the transient mode or the unvoiced mode, the range of the gain parameter is limited based on the gain parameter for a past decoding unit, and a gain parameter within the limited range is determined as the gain parameter.
- the gain parameter decoded from the coded data of the current decoding unit is not used unchanged; instead, the output gain is controlled by specifying an upper limit of increase and a lower limit of decrease from the past gain parameter. This suppresses the gain parameter decoded from coded data that may contain errors from taking an abnormal value, so that more improved decoded speech quality can be realized.
- the code error compensation method according to the present invention includes the steps of: receiving data having encoded transmission parameters including mode information, a lag parameter, a fixed excitation parameter, and a gain parameter including an adaptive excitation gain and a fixed excitation gain; decoding these parameters; and, in a decoding unit in which an error is detected in the data, controlling the ratio between the adaptive excitation gain and the fixed excitation gain using mode information for a decoding unit earlier than that decoding unit.
- in the above method, when the mode indicated by the mode information is the voiced mode, the ratio of the adaptive excitation gain is raised, and when the mode indicated by the mode information is the transient mode or the unvoiced mode, the gain ratio between the adaptive excitation gain and the fixed excitation gain is controlled so as to lower the ratio of the adaptive excitation gain.
- the ratio between the adaptive excitation gain and the fixed excitation gain is adaptively controlled in accordance with the mode information when performing gain parameter decoding in a decoding unit in which an error is detected in the encoded data. Therefore, the decoded speech quality of the error detection decoding unit can be more audibly improved.
- the code error compensation method includes the steps of: receiving data having encoded transmission parameters including a lag parameter, a fixed excitation parameter, and a gain parameter including an adaptive excitation gain and a fixed excitation gain; decoding the lag parameter, the fixed excitation parameter, and the gain parameter; and defining an upper limit of the gain parameter in a normal decoding unit immediately after a decoding unit in which an error is detected.
- control is performed so as to define the upper limit of the decoded adaptive excitation gain parameter in a normal decoding unit in which no error is detected immediately after a decoding unit in which an error is detected in the coded data. Therefore, it is possible to suppress the deterioration of the decoded voice quality due to the abnormal increase in the amplitude of the decoded voice signal in the normal decoding unit immediately after the error detection.
- the fixed excitation gain is controlled so as to maintain a predetermined ratio with respect to the adaptive excitation gain in a range in which the upper limit is defined.
- the ratio between the adaptive excitation gain and the fixed excitation gain is controlled so as to keep the value it would have in error-free decoding, so the excitation signal in the normal decoding unit immediately after error detection becomes more similar to the error-free case, and the decoded speech quality can be improved.
- a code error compensation method includes the steps of: receiving data having encoded transmission parameters including a lag parameter and a gain parameter; decoding the lag parameter and the gain parameter; calculating mode information from a decoded parameter or a decoded signal obtained by decoding the data; and, for a decoding unit in which an error is detected in the data, adaptively determining the lag parameter and the gain parameter used for that decoding unit using mode information for a decoding unit earlier than that decoding unit.
- even for a speech coding method that does not include speech mode information among the coding parameters, the lag parameter and the gain parameter used for speech decoding can be adaptively calculated based on mode information calculated on the decoding side, and more improved decoded speech quality can be realized.
- a recording medium of the present invention is a computer-readable recording medium storing a program, the program causing a computer to execute: a procedure of decoding the mode information, the lag parameter, and the gain parameter in data having encoded transmission parameters including mode information, a lag parameter, and a gain parameter; and a procedure of, in a decoding unit in which an error is detected in the data, adaptively determining the lag parameter and the gain parameter used in that decoding unit using mode information for a decoding unit earlier than that decoding unit.
- the lag parameter and the gain parameter used for speech decoding are adaptively determined based on the decoded mode information. Since the calculation is performed, more improved decoded speech quality can be realized.
- a recording medium is a computer-readable recording medium storing a program, the program causing a computer to execute: a procedure of decoding the mode information, the lag parameter, and the gain parameter in data having encoded transmission parameters including mode information, a lag parameter, and a gain parameter; and a procedure of, in a decoding unit in which an error is detected in the data, using mode information for a decoding unit earlier than that decoding unit, controlling the gain ratio between the adaptive excitation gain and the fixed excitation gain so as to raise the ratio of the adaptive excitation gain when the mode indicated by the mode information is the voiced mode, and to lower the ratio of the adaptive excitation gain when the mode is the transient mode or the unvoiced mode.
- the ratio between the adaptive excitation gain and the fixed excitation gain is adaptively controlled according to the mode information when gain parameter decoding is performed in a decoding unit in which an error is detected in the encoded data, so the decoded speech quality of the error detection decoding unit can be more audibly improved.
- the recording medium of the present invention is a computer-readable recording medium storing a program, the program causing a computer to execute: a procedure of decoding the lag parameter and the gain parameter in data having encoded transmission parameters including a lag parameter and a gain parameter; and a procedure of defining an upper limit of the gain parameter in the normal decoding unit immediately after a decoding unit in which an error was detected, and controlling the fixed excitation gain so as to maintain a predetermined ratio with respect to the adaptive excitation gain in the range where the upper limit is defined.
- with this medium, it is possible to suppress the deterioration of the decoded speech quality due to an abnormal increase in the amplitude of the decoded speech signal in the normal decoding unit immediately after error detection.
- as described above, according to the speech decoding apparatus and the code error compensation method of the present invention, when speech is decoded in a frame in which an error has been detected in the encoded data, the lag parameter decoding unit and the gain parameter decoding unit adaptively calculate the lag parameter and the gain parameter used for speech decoding based on the decoded mode information, whereby more improved decoded speech quality can be realized. Further, according to the present invention, in gain parameter decoding for a frame in which an error is detected in the encoded data, the ratio between the adaptive excitation gain and the fixed excitation gain is adaptively controlled according to the mode information in the gain parameter decoding unit.
- the decoded sound quality of the error detection frame can be more audibly improved by controlling the gain ratio of the adaptive excitation to be low.
- furthermore, in the gain parameter decoding unit, for a normal frame in which no error is detected immediately after a frame in which an error has been detected in the encoded data, the adaptive excitation gain parameter and the fixed excitation gain parameter used for decoding are adaptively controlled in accordance with the value of the decoded gain parameter. More specifically, control is performed so as to define an upper limit for the decoded adaptive excitation gain parameter. As a result, it is possible to suppress the deterioration of the decoded voice quality due to an abnormal increase in the amplitude of the decoded voice signal in the normal frame after the error detection.
- the present invention can be applied to a base station device and a communication terminal device in a digital wireless communication system. As a result, wireless communication that is resistant to transmission errors can be performed.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Detection And Prevention Of Errors In Transmission (AREA)
- Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/018,317 US7171354B1 (en) | 1999-06-30 | 2000-06-30 | Audio decoder and coding error compensating method |
EP00942405A EP1207519B1 (en) | 1999-06-30 | 2000-06-30 | Audio decoder and coding error compensating method |
CA2377597A CA2377597C (en) | 1999-06-30 | 2000-06-30 | Speech decoder and code error compensation method |
AU57064/00A AU5706400A (en) | 1999-06-30 | 2000-06-30 | Audio decoder and coding error compensating method |
US11/641,009 US7499853B2 (en) | 1999-06-30 | 2006-12-19 | Speech decoder and code error compensation method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP18571299A JP4464488B2 (ja) | 1999-06-30 | 1999-06-30 | 音声復号化装置及び符号誤り補償方法、音声復号化方法 |
JP11/185712 | 1999-06-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/641,009 Continuation US7499853B2 (en) | 1999-06-30 | 2006-12-19 | Speech decoder and code error compensation method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001003115A1 true WO2001003115A1 (fr) | 2001-01-11 |
Family
ID=16175542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2000/004323 WO2001003115A1 (fr) | 1999-06-30 | 2000-06-30 | Decodeur audio et procede de compensation d'erreur de codage |
Country Status (8)
Country | Link |
---|---|
US (2) | US7171354B1 (ja) |
EP (2) | EP1207519B1 (ja) |
JP (1) | JP4464488B2 (ja) |
KR (1) | KR100439652B1 (ja) |
CN (1) | CN1220177C (ja) |
AU (1) | AU5706400A (ja) |
CA (1) | CA2377597C (ja) |
WO (1) | WO2001003115A1 (ja) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7447639B2 (en) * | 2001-01-24 | 2008-11-04 | Nokia Corporation | System and method for error concealment in digital audio transmission |
US7069208B2 (en) | 2001-01-24 | 2006-06-27 | Nokia, Corp. | System and method for concealment of data loss in digital audio transmission |
JP4433668B2 (ja) | 2002-10-31 | 2010-03-17 | 日本電気株式会社 | 帯域拡張装置及び方法 |
CN1989548B (zh) * | 2004-07-20 | 2010-12-08 | 松下电器产业株式会社 | 语音解码装置及补偿帧生成方法 |
KR100686174B1 (ko) | 2005-05-31 | 2007-02-26 | 엘지전자 주식회사 | 오디오 에러 은닉 방법 |
FR2897977A1 (fr) | 2006-02-28 | 2007-08-31 | France Telecom | Procede de limitation de gain d'excitation adaptative dans un decodeur audio |
AU2007318506B2 (en) * | 2006-11-10 | 2012-03-08 | Iii Holdings 12, Llc | Parameter decoding device, parameter encoding device, and parameter decoding method |
CN101286319B (zh) * | 2006-12-26 | 2013-05-01 | 华为技术有限公司 | 改进语音丢包修补质量的语音编码方法 |
US8688437B2 (en) | 2006-12-26 | 2014-04-01 | Huawei Technologies Co., Ltd. | Packet loss concealment for speech coding |
CN101226744B (zh) | 2007-01-19 | 2011-04-13 | 华为技术有限公司 | 语音解码器中实现语音解码的方法及装置 |
KR101411900B1 (ko) * | 2007-05-08 | 2014-06-26 | 삼성전자주식회사 | 오디오 신호의 부호화 및 복호화 방법 및 장치 |
US8204753B2 (en) * | 2007-08-23 | 2012-06-19 | Texas Instruments Incorporated | Stabilization and glitch minimization for CCITT recommendation G.726 speech CODEC during packet loss scenarios by regressor control and internal state updates of the decoding process |
CN101552008B (zh) * | 2008-04-01 | 2011-11-16 | 华为技术有限公司 | 语音编码方法及装置、语音解码方法及装置 |
US9197181B2 (en) * | 2008-05-12 | 2015-11-24 | Broadcom Corporation | Loudness enhancement system and method |
US8645129B2 (en) * | 2008-05-12 | 2014-02-04 | Broadcom Corporation | Integrated speech intelligibility enhancement system and acoustic echo canceller |
KR20100006492A (ko) | 2008-07-09 | 2010-01-19 | 삼성전자주식회사 | 부호화 방식 결정 방법 및 장치 |
WO2010130093A1 (zh) * | 2009-05-13 | 2010-11-18 | 华为技术有限公司 | 编码处理方法、编码处理装置与发射机 |
US8762136B2 (en) * | 2011-05-03 | 2014-06-24 | Lsi Corporation | System and method of speech compression using an inter frame parameter correlation |
KR102070430B1 (ko) | 2011-10-21 | 2020-01-28 | 삼성전자주식회사 | 프레임 에러 은닉방법 및 장치와 오디오 복호화방법 및 장치 |
CA2928974C (en) | 2013-10-31 | 2020-06-02 | Jeremie Lecomte | Audio decoder and method for providing a decoded audio information using an error concealment modifying a time domain excitation signal |
ES2746034T3 (es) * | 2013-10-31 | 2020-03-04 | Fraunhofer Ges Forschung | Decodificador de audio y método para proporcionar una información de audio decodificada usando un ocultamiento de error sobre la base de una señal de excitación de dominio de tiempo |
US9953660B2 (en) * | 2014-08-19 | 2018-04-24 | Nuance Communications, Inc. | System and method for reducing tandeming effects in a communication system |
JP6516099B2 (ja) * | 2015-08-05 | 2019-05-22 | パナソニックIpマネジメント株式会社 | 音声信号復号装置および音声信号復号方法 |
WO2020201040A1 (en) * | 2019-03-29 | 2020-10-08 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for error recovery in predictive coding in multichannel audio frames |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0430200A (ja) * | 1990-05-28 | 1992-02-03 | Nec Corp | 音声復号化方法 |
JPH05113798A (ja) * | 1991-07-15 | 1993-05-07 | Nippon Telegr & Teleph Corp <Ntt> | 音声復号方法 |
JPH0744200A (ja) * | 1993-07-29 | 1995-02-14 | Nec Corp | 音声符号化方式 |
JPH07239699A (ja) * | 1994-02-28 | 1995-09-12 | Hitachi Ltd | 音声符号化方法およびこの方法を用いた音声符号化装置 |
JPH08211895A (ja) * | 1994-11-21 | 1996-08-20 | Rockwell Internatl Corp | ピッチラグを評価するためのシステムおよび方法、ならびに音声符号化装置および方法 |
JPH08320700A (ja) * | 1995-05-26 | 1996-12-03 | Nec Corp | 音声符号化装置 |
JPH09134798A (ja) * | 1995-11-08 | 1997-05-20 | Jeol Ltd | 高周波装置 |
JPH09185396A (ja) * | 1995-12-28 | 1997-07-15 | Olympus Optical Co Ltd | 音声符号化装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5657418A (en) * | 1991-09-05 | 1997-08-12 | Motorola, Inc. | Provision of speech coder gain information using multiple coding modes |
US5495555A (en) * | 1992-06-01 | 1996-02-27 | Hughes Aircraft Company | High quality low bit rate celp-based speech codec |
JP2746033B2 (ja) | 1992-12-24 | 1998-04-28 | 日本電気株式会社 | 音声復号化装置 |
US5574825A (en) | 1994-03-14 | 1996-11-12 | Lucent Technologies Inc. | Linear prediction coefficient generation during frame erasure or packet loss |
JP3616432B2 (ja) * | 1995-07-27 | 2005-02-02 | 日本電気株式会社 | 音声符号化装置 |
JP3308783B2 (ja) | 1995-11-10 | 2002-07-29 | 日本電気株式会社 | 音声復号化装置 |
JP3092652B2 (ja) | 1996-06-10 | 2000-09-25 | 日本電気株式会社 | 音声再生装置 |
-
1999
- 1999-06-30 JP JP18571299A patent/JP4464488B2/ja not_active Expired - Fee Related
-
2000
- 2000-06-30 EP EP00942405A patent/EP1207519B1/en not_active Expired - Lifetime
- 2000-06-30 WO PCT/JP2000/004323 patent/WO2001003115A1/ja active IP Right Grant
- 2000-06-30 CA CA2377597A patent/CA2377597C/en not_active Expired - Fee Related
- 2000-06-30 KR KR10-2001-7016812A patent/KR100439652B1/ko not_active IP Right Cessation
- 2000-06-30 EP EP10180814A patent/EP2276021B1/en not_active Expired - Lifetime
- 2000-06-30 US US10/018,317 patent/US7171354B1/en not_active Expired - Fee Related
- 2000-06-30 CN CNB008097739A patent/CN1220177C/zh not_active Expired - Fee Related
- 2000-06-30 AU AU57064/00A patent/AU5706400A/en not_active Abandoned
-
2006
- 2006-12-19 US US11/641,009 patent/US7499853B2/en not_active Expired - Fee Related
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0430200A (ja) * | 1990-05-28 | 1992-02-03 | Nec Corp | 音声復号化方法 |
JPH05113798A (ja) * | 1991-07-15 | 1993-05-07 | Nippon Telegr & Teleph Corp <Ntt> | 音声復号方法 |
JPH0744200A (ja) * | 1993-07-29 | 1995-02-14 | Nec Corp | 音声符号化方式 |
JPH07239699A (ja) * | 1994-02-28 | 1995-09-12 | Hitachi Ltd | 音声符号化方法およびこの方法を用いた音声符号化装置 |
JPH08211895A (ja) * | 1994-11-21 | 1996-08-20 | Rockwell Internatl Corp | ピッチラグを評価するためのシステムおよび方法、ならびに音声符号化装置および方法 |
JPH08320700A (ja) * | 1995-05-26 | 1996-12-03 | Nec Corp | 音声符号化装置 |
JPH09134798A (ja) * | 1995-11-08 | 1997-05-20 | Jeol Ltd | 高周波装置 |
JPH09185396A (ja) * | 1995-12-28 | 1997-07-15 | Olympus Optical Co Ltd | 音声符号化装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1207519A4 * |
Also Published As
Publication number | Publication date |
---|---|
JP2001013998A (ja) | 2001-01-19 |
KR100439652B1 (ko) | 2004-07-12 |
CN1359513A (zh) | 2002-07-17 |
EP1207519B1 (en) | 2013-02-27 |
CN1220177C (zh) | 2005-09-21 |
US20070100614A1 (en) | 2007-05-03 |
CA2377597C (en) | 2011-06-28 |
EP2276021B1 (en) | 2012-10-24 |
EP1207519A4 (en) | 2005-08-24 |
JP4464488B2 (ja) | 2010-05-19 |
EP2276021A3 (en) | 2011-01-26 |
US7171354B1 (en) | 2007-01-30 |
CA2377597A1 (en) | 2001-01-11 |
EP2276021A2 (en) | 2011-01-19 |
AU5706400A (en) | 2001-01-22 |
KR20020027378A (ko) | 2002-04-13 |
EP1207519A1 (en) | 2002-05-22 |
US7499853B2 (en) | 2009-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7499853B2 (en) | Speech decoder and code error compensation method | |
EP2070082B1 (en) | Methods and apparatus for frame erasure recovery | |
US9318117B2 (en) | Method and arrangement for controlling smoothing of stationary background noise | |
US7426465B2 (en) | Speech signal decoding method and apparatus using decoded information smoothed to produce reconstructed speech signal to enhanced quality | |
US10607624B2 (en) | Signal codec device and method in communication system | |
EP1096476B1 (en) | Speech signal decoding | |
JP3568255B2 (ja) | 音声符号化装置及びその方法 | |
US8195469B1 (en) | Device, method, and program for encoding/decoding of speech with function of encoding silent period | |
JP2001265390A (ja) | 複数レートで動作する無音声符号化を含む音声符号化・復号装置及び方法 | |
JP3660676B2 (ja) | 音声符号化装置及びその方法 | |
JPH11272297A (ja) | ピッチ強調方法及びその装置 | |
JP3571709B2 (ja) | 音声符号化装置及びその方法 | |
JP3475958B2 (ja) | 無音声符号化を含む音声符号化・復号装置、復号化方法及びプログラムを記録した記録媒体 | |
JP3817562B2 (ja) | 音声復号化装置及びその方法 | |
JP3936369B2 (ja) | 音声復号化装置及びその方法 | |
JP2005316497A (ja) | 音声復号化装置及びその方法 | |
JP2004078235A (ja) | 複数レートで動作する無音声符号化を含む音声符号化・復号装置 | |
JPH05173595A (ja) | コード励振線形予測符号化方法 | |
JP2004004946A (ja) | 音声復号装置 | |
JPH09297600A (ja) | 音声復号装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 00809773.9 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10018317 Country of ref document: US Ref document number: IN/PCT/2001/1336/KOL Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2377597 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2000942405 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020017016812 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1020017016812 Country of ref document: KR |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 2000942405 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1020017016812 Country of ref document: KR |