EP1852851A1 - Enhanced audio encoding/decoding device and method (Dispositif et procédé de codage/décodage audio améliorés)


Info

Publication number
EP1852851A1
Authority
EP
European Patent Office
Prior art keywords
frequency
module
domain
spectrum
coefficients
Prior art date
Legal status
Withdrawn
Application number
EP05738242A
Other languages
German (de)
English (en)
Inventor
Xingde Pan
Martin Dietz
Andreas Ehret
Holger HÖRICH
Xiaoming Zhu
Michael Schug
Weimin Ren
Lei Wang
Hao Deng
Fredrik Henn
Current Assignee
Beijing E-World Technology Co Ltd
BEIJING MEDIA WORKS Co Ltd
Coding Technologies Sweden AB
Original Assignee
Beijing E-World Technology Co Ltd
BEIJING MEDIA WORKS Co Ltd
Coding Technologies Sweden AB
Priority date
Filing date
Publication date
Application filed by Beijing E-World Technology Co Ltd, BEIJING MEDIA WORKS Co Ltd, Coding Technologies Sweden AB filed Critical Beijing E-World Technology Co Ltd
Publication of EP1852851A1


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032 - Quantisation or dequantisation of spectral components
    • G10L19/038 - Vector quantisation, e.g. TwinVQ audio
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/028 - Noise substitution, i.e. substituting non-tonal spectral components by noisy source

Definitions

  • The invention relates to audio encoding and decoding and, in particular, to an enhanced audio encoding/decoding device and method based on a perceptual (psychoacoustic) model.
  • Digital audio signals need to be encoded, i.e. compressed, for storage and transmission.
  • The object of encoding audio signals is to achieve a transparent representation thereof using as few bits as possible; that is, the decoded output audio signals should be almost the same as the originally input audio signals.
  • The CD came into existence and reflects many advantages of representing audio signals digitally, such as high fidelity, large dynamic range and great robustness.
  • all these advantages are achieved at the cost of a very high data rate.
  • The sampling rate required for digitizing a stereo signal of CD quality is 44.1 kHz, and each sample is uniformly quantized with 16 bits, so the uncompressed data rate reaches 1.41 Mb/s. This brings great inconvenience to the transmission and storage of the data, which are limited by bandwidth and cost, especially in multimedia and wireless transmission applications.
  • Therefore, the data rate in new network and wireless multimedia digital audio systems must be reduced without damaging the audio quality.
  • The MPEG-1 and MPEG-2 BC techniques are high-quality encoding techniques mainly used for mono and stereo audio signals.
  • Because the MPEG-2 BC encoding technique gives emphasis to backward compatibility with the MPEG-1 technique, it cannot realize high-quality encoding of five channels at a code rate lower than 540 kbps.
  • For this reason the MPEG-2 AAC technique was put forward, which can realize high-quality encoding of five-channel signals at a rate of 320 kbps.
  • Fig. 1 is a block diagram of the MPEG-2 AAC encoder.
  • Said encoder comprises a gain controller 101, a filter bank 102, a time-domain noise shaping module 103, an intensity/coupling module 104, a psychoacoustical model, a second order backward adaptive predictor 105, a sum-difference stereo module 106, a bit allocation and quantization encoding module 107, and a bit stream multiplexing module 108, wherein the bit allocation and quantization encoding module 107 further comprises a compression ratio/distortion processing controller, a scale factor module, a non-uniform quantizer, and an entropy encoding module.
  • The filter bank 102 uses a modified discrete cosine transform (MDCT) whose resolution is signal-adaptive: an MDCT of 2048 points is used for steady-state signals, while an MDCT of 256 points is used for transient signals. Thus, for a signal sampled at 48 kHz, the maximum frequency resolution is 23 Hz and the maximum time resolution is 2.6 ms.
  • A sine window and a Kaiser-Bessel window can be used in the filter bank 102: the sine window is used when the harmonic spacing of the input signal is less than 140 Hz, while the Kaiser-Bessel window is used when the spacing of strong components in the input signal is greater than 220 Hz.
  • The time-domain noise shaping technique performs linear prediction analysis on the spectral coefficients in the frequency domain, then shapes the quantization noise in the time domain according to said analysis, thereby controlling the pre-echo.
  • the intensity/coupling module 104 is used for stereo encoding of the signal intensity.
  • The sense of direction of hearing is related to the change in the relevant signal intensity (the signal envelope) but is independent of the signal waveform; that is, a constant-envelope signal has no influence on the sense of direction. This characteristic, together with the correlation among multiple sound channels, can therefore be exploited to combine several channels into one common channel to be encoded, which forms the intensity/coupling technique.
  • the second order backward adaptive predictor 105 is used for removing the redundancy of the steady state signal and improving the encoding efficiency.
  • the sum-difference stereo (M/S) module 106 operates on sound channel pairs.
  • A sound channel pair refers to the two channels of the left-right channels or the left-right surround channels in, for example, a two-channel or multi-channel signal.
  • the M/S module 106 achieves the effect of reducing code rate and improving encoding efficiency by means of the correlation between the two sound channels in the sound channel pair.
  • the bit allocation and quantization encoding module 107 is realized by a nested loop, wherein the non-uniform quantizer performs lossy encoding, while the entropy encoding module performs lossless encoding, thus removing redundancy and reducing correlation.
  • The nested loop comprises an inner loop and an outer loop: the inner loop adjusts the step size of the non-uniform quantizer until the provided bits are used up, and the outer loop estimates the encoding quality of the signal by the ratio between the quantization noise and the masking threshold; a toy sketch of this two-loop structure is given below.
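The two-loop interaction can be made concrete with a small sketch. This is a toy model, not the MPEG-2 AAC reference algorithm: the bit count, the step-size rule and the loop caps are illustrative assumptions.

```python
import numpy as np

def nested_loop_quantize(x, mask, bit_budget):
    """Toy AAC-style nested-loop quantizer.

    Inner loop: coarsen the quantizer step until the spent bits fit the budget.
    Outer loop: amplify (finer-quantize) coefficients whose quantization noise
    exceeds the masking threshold, then re-run the inner loop.
    """
    sf = np.zeros(len(x))                    # per-coefficient amplification
    q, step = np.zeros(len(x)), 1.0
    for _ in range(32):                      # outer (distortion-control) loop
        step = 1.0
        while True:                          # inner (rate-control) loop
            q = np.round(x * 2.0 ** sf / step)
            bits = np.sum(np.ceil(np.log2(np.abs(q) + 1.0)) + 1.0)  # crude bit count
            if bits <= bit_budget or not q.any():
                break
            step *= 2.0                      # coarser step -> fewer bits
        noise = (x - q * step / 2.0 ** sf) ** 2
        bad = noise > mask
        if not bad.any():                    # quantization noise fully masked
            break
        sf[bad] += 1.0                       # request finer quantization there
    return q, sf

q, sf = nested_loop_quantize(np.random.randn(16) * 10.0,
                             mask=np.full(16, 0.05), bit_budget=64)
```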
  • the encoded signals are formed into an encoded audio stream through the bit stream multiplexing module 108 to be output.
  • When the gain controller 101 is used, the signal is split into four PQF frequency bands, and an MDCT generates 256 spectral coefficients for each band, resulting in altogether 1024 spectral coefficients.
  • The high-frequency PQF bands can be neglected in the decoder to obtain signals of a lower sampling rate.
  • Fig. 2 is a schematic block diagram of the corresponding MPEG-2 AAC decoder.
  • Said decoder comprises a bit stream demultiplexing module 201, a lossless decoding module 202, an inverse quantizer 203, a scale factor module 204, a sum-difference stereo (M/S) module 205, a prediction module 206, an intensity/coupling module 207, a time-domain noise shaping module 208, a filter bank 209 and a gain control module 210.
  • the encoded audio stream is demultiplexed by the bit stream demultiplexing module 201 to obtain the corresponding data stream and control stream.
  • the inverse quantizer 203 is a non-uniform quantizer bank realized by a companding function, which is used for transforming the integer quantized values into a reconstruction spectrum.
  • The scale factor module in the encoder takes the difference between the current scale factor and the previous one and performs Huffman encoding on the difference, so the scale factor module 204 in the decoder can obtain the corresponding difference through Huffman decoding, from which the real scale factor can be recovered.
  • the M/S module 205 converts the sum-difference sound channel into a left-right sound channel under the control of the side information.
  • a prediction module 206 is used in the decoder for performing prediction decoding.
  • The intensity/coupling module 207 performs intensity/coupling decoding under the control of the side information, then outputs to the time-domain noise shaping module 208 for time-domain noise shaping decoding, and in the end the filter bank 209 performs synthesis filtering by the inverse modified discrete cosine transform (IMDCT).
  • the PQF frequency band of high frequency can be neglected through the gain control module 210 so as to obtain signals of low sampling rate.
  • The MPEG-2 AAC encoding/decoding technique is suitable for audio signals of medium and high code rates, but its encoding quality is poor for low or very low code rate audio signals; moreover, said technique involves many encoding/decoding modules, so its implementation is highly complex and difficult to realize in real time.
  • Fig. 3 is a schematic drawing of the structure of an encoder using the Dolby AC-3 technique, which comprises a transient signal detection module 301, a modified discrete cosine transform (MDCT) filter bank 302, a spectral envelope/exponent encoding module 303, a mantissa encoding module 304, a forward-backward adaptive perceptual model 305, a parameter bit allocation module 306, and a bit stream multiplexing module 307.
  • The audio signal is determined by the transient signal detection module 301 to be either a steady-state signal or a transient signal. Meanwhile, the time-domain data is mapped to frequency-domain data through the signal-adaptive MDCT filter bank 302, wherein a long window of 512 points is applied to steady-state signals, and a pair of short windows is applied to transient signals.
  • The spectral envelope/exponent encoding module 303 encodes the exponent portion of the signal according to the requirements of code rate and frequency resolution in three modes, i.e. the D15, D25 and D45 encoding modes.
  • The AC-3 technique uses differential encoding in frequency for the spectral envelope, because an increment of at most ±2 is needed, each increment representing a level change of 6 dB.
  • Absolute value encoding is used for the first, DC term, and differential encoding is used for the remaining exponents.
  • In the D15 mode each exponent requires about 2.33 bits, three differentials being grouped and encoded in a 7-bit word (see the packing sketch after this list).
  • The D15 encoding mode sacrifices time resolution to provide fine frequency resolution.
  • D15 is transmitted only occasionally; usually the spectral envelope of every 6 audio blocks (one data frame) is transmitted at one time.
  • When the envelope estimate is encoded with lower frequency resolution, the D25 and D45 encoding modes are generally used.
  • The D25 encoding mode provides intermediate frequency resolution and time resolution: differential encoding is performed on every other frequency coefficient, so each exponent needs about 1.15 bits. The D25 mode can be used if the spectrum is steady over two to three blocks and then changes abruptly.
  • The D45 encoding mode performs differential encoding on every four frequency coefficients, so each exponent needs about 0.58 bits.
  • The D45 encoding mode provides very high time resolution but low frequency resolution, so it is generally used for encoding transient signals.
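The bit arithmetic behind the D15 figure can be checked directly: each differential takes one of five values (-2..+2), so three of them have 5^3 = 125 combinations and fit in one 7-bit word, i.e. about 2.33 bits per exponent. A small sketch of that grouping (the real AC-3 bit-stream packing has more machinery around it):

```python
def pack_d15(diffs):
    """Pack three D15 exponent differentials (each in -2..+2) into one
    7-bit group value: 5**3 = 125 <= 128 combinations."""
    assert len(diffs) == 3 and all(-2 <= d <= 2 for d in diffs)
    m = [d + 2 for d in diffs]             # map -2..+2 onto 0..4
    return m[0] * 25 + m[1] * 5 + m[2]     # base-5 packing, value in 0..124

def unpack_d15(group):
    """Recover the three differentials from a 7-bit group value."""
    return [group // 25 - 2, group // 5 % 5 - 2, group % 5 - 2]

assert unpack_d15(pack_d15([-2, 0, 1])) == [-2, 0, 1]
```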
  • The forward-backward adaptive perceptual model 305 is used for estimating the masking threshold of each frame of the signal, wherein the forward adaptive portion is applied only in the encoder to estimate a group of optimal perceptual model parameters through an iterative loop under the code rate constraint; said parameters are then transferred to the backward adaptive portion to estimate the masking threshold of each frame.
  • the backward adaptive portion is applied both to the encoder and the decoder.
  • The parameter bit allocation module 306 analyzes the spectral envelope of the audio signals according to the masking rule to determine the number of bits allocated to each mantissa. Said module 306 performs an overall bit allocation for all the sound channels by using a bit reservoir. When encoding in the mantissa encoding module 304, bits are taken recurrently from the bit reservoir to be allocated to all the sound channels, and the quantization of the mantissas is adjusted according to the number of bits that can be obtained.
  • The AC-3 encoder also uses a high frequency coupling technique, in which the high frequency portion of the coupled signal is divided into 18 sub-bands according to the critical bandwidths of the human ear, and some of the channels are selected to be coupled starting from a certain sub-band. Finally, the AC-3 audio stream is formed through the bit stream multiplexing module 307 and output.
  • Fig. 4 is a schematic drawing of the flow of decoding using Dolby AC-3.
  • The bit stream encoded by the AC-3 encoder is input, and data frame synchronization and error detection are performed on it. If a data error is detected, error concealment or muting is performed; the bit stream is then unpacked to obtain the primary information and the side information, on which exponent decoding is performed.
  • For exponent decoding, two pieces of side information are needed: one is the number of packed exponents, the other is the exponent strategy adopted, such as the D15, D25 or D45 mode.
  • Bit allocation is then performed again using the decoded exponents and the bit allocation side information to indicate the number of bits used by each packed mantissa, thereby obtaining a group of bit allocation pointers, each corresponding to an encoded mantissa.
  • The bit allocation pointers indicate the quantizer used for each mantissa and the number of bits each mantissa occupies in the code stream.
  • Each encoded mantissa value is de-quantized into a de-quantized value, and a mantissa that occupies zero bits is recovered as zero or is replaced by a random dither value under the control of the dither flag.
  • The de-coupling operation is then carried out, which recovers the high frequency portion of each coupled channel, including the exponents and the mantissas, from the common coupling channel and the coupling factors.
  • If matrix processing was used for a certain sub-band, then at the decoding end the sum and difference channel values of said sub-band are converted back into left-right channel values through matrix recovery.
  • The code stream includes a dynamic range control value for each audio block; dynamic range compression is performed according to said value to change the amplitudes of the coefficients, including exponents and mantissas.
  • The frequency-domain coefficients are inversely transformed into time-domain samples, the time-domain samples are windowed, and adjacent blocks are overlap-added to reconstruct the PCM audio signal.
  • Where necessary, a down-mixing processing is performed on the audio signal to finally output the PCM stream.
  • The Dolby AC-3 encoding technique is aimed mainly at high bit rate, multi-channel surround signals; when the encoding bit rate of the 5.1 channels is lower than 384 kbps the encoding quality is poor, and the encoding efficiency for mono and two-channel stereo signals is also low.
  • In summary, the existing encoding and decoding techniques cannot ensure encoding and decoding quality across very low, low and high code rates and for mono and two-channel signals, and their implementation is complex.
  • The technical problem to be solved by this invention is to provide an enhanced audio encoding/decoding device and method so as to overcome the low encoding efficiency and poor encoding quality of the prior art with respect to low code rate audio signals.
  • the enhanced audio encoding device of the invention comprises a signal type analyzing module, a psychoacoustical analyzing module, a time-frequency mapping module, a frequency-domain linear prediction and vector quantization module, a quantization and entropy encoding module, and a bit-stream multiplexing module.
  • the signal type analyzing module is configured to analyze the signal type of the input audio signal and output the audio signal to the psychoacoustical analyzing module and the time-frequency mapping module, and to output the result of signal type analysis to the bit-stream multiplexing module at the same time;
  • the psychoacoustical analyzing module is configured to calculate a masking threshold and a signal-to-masking ratio of the input audio signal and output them to said quantization and entropy encoding module;
  • the time-frequency mapping module is configured to convert the time-domain audio signal into frequency-domain coefficients;
  • the frequency-domain linear prediction and vector quantization module is configured to perform linear prediction and multi-stage vector quantization on the frequency-domain coefficients and output the residual sequence to the quantization and entropy encoding module and output the side information to the bit-stream multiplexing module;
  • the quantization and entropy encoding module is configured to perform quantization and entropy encoding on the residual sequence under the control of the signal-to-masking ratio output from the psychoacoustical analyzing module, and to output the result to the bit-stream multiplexing module; and the bit-stream multiplexing module is configured to multiplex the received data to form an audio encoding code stream.
  • the enhanced audio decoding device of the invention comprises a bit-stream demultiplexing module, an entropy decoding module, an inverse quantizer bank, an inverse frequency-domain linear prediction and vector quantization module, and a frequency-time mapping module.
  • the bit-stream demultiplexing module is configured to demultiplex the compressed audio data stream and output the corresponding data signals and control signals to the entropy decoding module and the inverse frequency-domain linear prediction and vector quantization module;
  • the entropy decoding module is configured to decode said signals, recover the quantized values of the spectrum so as to output to the inverse quantizer bank;
  • the inverse quantizer bank is configured to reconstruct the inverse quantization spectrum and output it to the inverse frequency-domain linear prediction and vector quantization module;
  • the inverse frequency-domain linear prediction and vector quantization module is configured to perform inverse quantization and inverse linear prediction filtering on the inverse quantization spectrum to obtain the spectrum-before-prediction, and to output it to the frequency-time mapping module; and the frequency-time mapping module is configured to map the spectrum back to the time domain to obtain the time-domain audio signal.
  • The invention is applicable to Hi-Fi compression encoding of audio signals with various sampling rate and channel configurations: it supports sampling rates from 8 kHz to 192 kHz, all practical channel configurations, and audio encoding/decoding over a wide range of target code rates.
  • Figs. 1-4 are the schematic drawings of the structures of the encoders of the prior art, which have been introduced in the background art, so they will not be elaborated herein.
  • the audio encoding device of the present invention comprises a signal type analyzing module 50, a psychoacoustical analyzing module 51, a time-frequency mapping module 52, a frequency-domain linear prediction and vector quantization module 53, a quantization and entropy encoding module 54, and a bit-stream multiplexing module 55.
  • the signal type analyzing module 50 is configured to analyze the signal type of the input audio signal; the psychoacoustical analyzing module 51 is configured to calculate a masking threshold and a signal-to-masking ratio of the audio signal; the time-frequency mapping module 52 is configured to convert the time-domain audio signal into frequency-domain coefficients; the frequency-domain linear prediction and vector quantization module 53 is configured to perform linear prediction and multi-stage vector quantization on the frequency-domain coefficients and to output the residual sequence to the quantization and entropy encoding module 54, and to output the side information to the bit-stream multiplexing module 55 at the same time; the quantization and entropy encoding module 54 is configured to perform quantization and entropy encoding of the residual coefficients under the control of the signal-to-masking ratio output from the psychoacoustical analyzing module 51 and to output them to the bit-stream multiplexing module 55; and the bit-stream multiplexing module 55 is configured to multiplex the received data to form audio encoding code stream.
  • After the digital audio signal is input to the signal type analyzing module 50, the signal type is analyzed, and then the signal is input to the psychoacoustical analyzing module 51 and the time-frequency mapping module 52.
  • The masking threshold and the signal-to-masking ratio of said frame of audio signal are calculated in the psychoacoustical analyzing module 51, and the signal-to-masking ratio is transmitted as a control signal to the quantization and entropy encoding module 54; meanwhile, the time-domain audio signal is converted into frequency-domain coefficients by the time-frequency mapping module 52, and said frequency-domain coefficients are transmitted to the frequency-domain linear prediction and vector quantization module 53.
  • The optimal distortion measure criterion is used to search for and calculate the code word indexes in the respective stages of code books; the code word indexes are transferred as side information to the bit-stream multiplexing module 55, while the residual sequence obtained through prediction analysis is output to the quantization and entropy encoding module 54.
  • said residual sequence/frequency-domain coefficients are quantized and entropy encoded in the quantization and entropy encoding module 54.
  • the encoded data and the side information are input to the bit-stream multiplexing module 55 to be multiplexed to form a code stream of enhanced audio encoding.
  • the signal type analyzing module 50 is configured to analyze the signal type of the input audio signal.
  • The signal type analyzing module 50 determines whether the signal is a slowly varying signal or a fast varying signal by analyzing the forward and backward masking effects based on an adaptive threshold and waveform prediction. If the signal is of a fast varying type, the relevant parameters of the abrupt component are calculated, such as the location where the abrupt signal occurs and its intensity.
  • The psychoacoustical analyzing module 51 is mainly configured to calculate the masking threshold, the perceptual entropy and the signal-to-masking ratio of the input audio signal.
  • The number of bits needed for transparent encoding of the current signal frame can be dynamically estimated from the perceptual entropy calculated by the psychoacoustical analyzing module 51, thereby adjusting the bit allocation among frames.
  • the psychoacoustical analyzing module 51 outputs the signal-to-masking ratio of each sub-band to the quantization and entropy encoding module 54 to control it.
  • The time-frequency mapping module 52 is configured to convert the audio signal from a time-domain signal into frequency-domain coefficients; it is formed of a filter bank, which can specifically be a discrete Fourier transform (DFT) filter bank, a discrete cosine transform (DCT) filter bank, a modified discrete cosine transform (MDCT) filter bank, a cosine modulation filter bank, a wavelet transform filter bank, etc.
  • the frequency-domain coefficients obtained from the time-frequency mapping are transmitted to the frequency-domain linear prediction and vector quantization module 53 to undergo linear prediction and vector quantization.
  • The frequency-domain linear prediction and vector quantization module 53 consists of a linear prediction analyzer, a linear prediction filter, a transformer, and a vector quantizer. Frequency-domain coefficients are input to the linear prediction analyzer for prediction analysis to obtain the prediction gain and prediction coefficients. If the prediction gain meets a certain condition, the frequency-domain coefficients are input to the linear prediction filter to be filtered, thereby obtaining the prediction residual sequence of the frequency-domain coefficients.
  • the residual sequence is directly output to the quantization and entropy encoding module 54, while the prediction coefficients are transformed into line spectrum pair frequency LSF coefficients through the transformer, and then are sent to the vector quantizer for a multi-stage vector quantization, and the quantized relevant side information is transmitted to the bit-stream multiplexing module 55.
  • Performing a frequency-domain linear prediction processing on the audio signals can effectively suppress the pre-echo and obtain greater encoding gain.
  • For a real signal x(t), the power spectral density is the Fourier transform of the signal's autocorrelation, PSD(f) = F{∫ x(τ)·x*(τ − t) dτ}, so the squared Hilbert envelope of the signal in the time domain and the power spectral density of the signal in the frequency domain correspond to each other.
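In symbols, this is the Wiener-Khinchin theorem together with its frequency-domain dual; the following LaTeX restatement is a reconstruction of the argument, not a formula quoted verbatim from the patent:

```latex
% Time domain: the PSD is the Fourier transform of the autocorrelation.
\mathrm{PSD}(f) = \mathcal{F}\Big\{ \int x(\tau)\, x^{*}(\tau - t)\, \mathrm{d}\tau \Big\}
% Dually, the squared Hilbert envelope of the analytic signal x_a(t) is the
% inverse Fourier transform of the autocorrelation of its spectrum X_a(f):
\lvert x_a(t) \rvert^{2} = \mathcal{F}^{-1}\Big\{ \int X_a(\varphi)\, X_a^{*}(\varphi - f)\, \mathrm{d}\varphi \Big\}
% Hence predictability across frequency corresponds to a shaped temporal
% envelope, which is what frequency-domain linear prediction exploits
% to control pre-echo.
```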
  • the quantization and entropy encoding module 54 further comprises a non-linear quantizer bank and an encoder, wherein the quantizer can be either a scalar quantizer or a vector quantizer.
  • the vector quantizer can be further divided into the two categories of memoryless vector quantizer and memory vector quantizer.
  • In a memoryless vector quantizer, each input vector is quantized independently of the previous vectors, while a memory vector quantizer quantizes a vector taking the previous vectors into account, i.e. it uses the correlation among the vectors.
  • Main memoryless vector quantizers include full searching vector quantizer, tree searching vector quantizer, multi-stage vector quantizer, gain/waveform vector quantizer and separating mean vector quantizer; and the main memory vector quantizers include prediction vector quantizer and finite state vector quantizer.
  • the non-linear quantizer bank further comprises M sub-band quantizers.
  • The scale factors are mainly used to perform the quantization: all the frequency-domain coefficients of the M scale factor sub-bands are non-linearly compressed, then the frequency-domain coefficients of each sub-band are quantized using its scale factor to obtain a quantization spectrum represented by integers, which is output to the encoder.
  • The first scale factor in each frame of the signal is used as the common scale factor and output to the bit-stream multiplexing module 55, while the remaining scale factors are differentially encoded with respect to their respective preceding scale factors and output to the encoder.
  • The scale factors in said step are constantly varying values, which are adjusted according to the bit allocation strategy; a toy quantizer/dequantizer pair is sketched below.
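A minimal sketch of such a compress-then-quantize band quantizer and its inverse. The 3/4 power law and the step-size rule 2^(sf/4) follow common practice in perceptual coders; the rounding offset and all constants here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def quantize_band(coeffs, scale_factor):
    """Non-linearly compress a band's coefficients (|x|^(3/4)), then quantize
    with a step size derived from the band's scale factor."""
    step = 2.0 ** (scale_factor / 4.0)
    return np.sign(coeffs) * np.floor((np.abs(coeffs) / step) ** 0.75 + 0.5)

def dequantize_band(q, scale_factor):
    """Inverse mapping: expand (|q|^(4/3)) and rescale."""
    step = 2.0 ** (scale_factor / 4.0)
    return np.sign(q) * np.abs(q) ** (4.0 / 3.0) * step

band = np.array([12.3, -4.1, 0.7, 25.0])
q = quantize_band(band, scale_factor=8)        # integer quantization spectrum
rec = dequantize_band(q, scale_factor=8)       # reconstructed coefficients
```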
  • The present invention provides an overall perceptual bit allocation strategy with minimum distortion, the details of which are as follows:
  • the frequency-domain coefficients form a plurality of M-dimensional vectors to be input to the non-linear quantizer bank.
  • Each M-dimensional vector is spectrum smoothed according to the smoothing factor, i.e. reducing the dynamic range of the spectrum.
  • The vector quantizer finds the code word in the code book that has the shortest distance to the vector to be quantized according to the subjective perception distance measure criterion and transfers the corresponding code word index to the encoder (a nearest-codeword search is sketched below).
  • The smoothing factor is adjusted based on the bit allocation strategy of the vector quantization, while the bit allocation strategy of the vector quantization is controlled according to the perceptual priority among different sub-bands.
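A sketch of the nearest-codeword search; the weighted squared error here is a hypothetical stand-in for the "subjective perception distance measure criterion" named above:

```python
import numpy as np

def vq_search(vector, codebook, weights=None):
    """Return the index of the codebook row nearest to `vector` under a
    (possibly perceptually weighted) squared-error distance."""
    w = np.ones_like(vector) if weights is None else weights
    dists = np.sum(w * (codebook - vector) ** 2, axis=1)
    return int(np.argmin(dists))               # code word index for the encoder

codebook = np.random.randn(256, 8)              # 256 code words, dimension 8
idx = vq_search(np.random.randn(8), codebook)
```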
  • the entropy encoding technique is used to further remove the statistical redundancy of the quantized coefficients and the side information.
  • Entropy encoding is a source coding technique whose basic idea is to allocate shorter code words to symbols that appear with greater probability and longer code words to symbols that appear with less probability, so that the average code word length is minimized.
  • entropy encoding mainly includes Huffman encoding, arithmetic encoding or run length encoding, etc.
  • The entropy encoding in the present invention can be any of said encoding methods; a Huffman code construction is sketched below.
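For illustration, a compact Huffman construction that realizes the shorter-codes-for-frequent-symbols idea; the symbol frequencies are made up:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return Huffman code lengths per symbol: each heap merge adds one
    bit to every symbol inside the two merged subtrees."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return lengths

print(huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))
# frequent 'a' gets a short code, rare 'f' a long one
```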
  • In the encoder, entropy encoding is performed on the quantization spectrum output by the scalar quantizer and on the differentially processed scale factors, obtaining the code book sequence numbers, the encoded values of the scale factors, and the losslessly encoded quantization spectrum; the code book sequence numbers are then themselves entropy encoded to obtain their encoded values. The encoded values of the scale factors, the encoded values of the code book sequence numbers, and the losslessly encoded quantization spectrum are output to the bit-stream multiplexing module 55.
  • the code word indexes quantized by the vector quantizer are one-dimensional or multi-dimensional entropy encoded in the encoder to obtain the encoded values of the code word indexes, then the encoded values of the code word indexes are output to the bit-stream multiplexing module 55.
  • the bit-stream multiplexing module 55 receives the side information output from the frequency-domain linear prediction and vector quantization module 53 and the code stream including the common scale factor, encoded values of the scale factors, encoded values of the code book sequence numbers and the quantization spectrum of lossless encoding or the encoded values of the code word indexes output from the quantization and entropy encoding module 54, and then multiplexes them to obtain the compressed audio data stream.
  • The encoding method based on the encoder described above includes: analyzing the signal type of the input audio signal; calculating the signal-to-masking ratio of the signal whose type has been analyzed; performing a time-frequency mapping on the signal to obtain the frequency-domain coefficients of the audio signal; performing a standard linear prediction analysis on the frequency-domain coefficients to obtain the prediction gain and prediction coefficients; determining whether the prediction gain exceeds a predetermined threshold and, if it does, performing frequency-domain linear prediction error filtering on the frequency-domain coefficients based on the prediction coefficients to obtain the linear prediction residual sequence, transforming the prediction coefficients into line spectrum pair frequency coefficients, performing a multi-stage vector quantization on said coefficients to obtain the side information, and quantizing and entropy encoding the residual sequence; if the prediction gain does not exceed the predetermined threshold, quantizing and entropy encoding the frequency-domain coefficients; and multiplexing the encoded data with the side information to form the compressed audio code stream.
  • The signal type analyzing step determines whether the signal is of a fast varying type or of a slowly varying type by performing forward and backward masking effect analysis based on an adaptive threshold and waveform prediction. The specific steps are: decomposing the input audio data into frames; decomposing each input frame into a plurality of sub-frames and searching for the local extreme points of the absolute values of the PCM data in each sub-frame; selecting the sub-frame peak value from the local extreme points of each sub-frame; for a certain sub-frame peak value, predicting the typical sample value of a plurality of (typically four) sub-frames that are forward delayed with respect to said sub-frame by means of a plurality of (typically three) sub-frame peak values before said sub-frame; calculating the difference and the ratio between said sub-frame peak value and the predicted typical sample value; and, if the predicted difference and ratio are both larger than the predetermined thresholds, determining that said sub-frame has a jump signal and confirming that said sub-frame contains the local abrupt component, so that the signal is classified as a fast varying type.
  • The modified discrete cosine transform (MDCT) and cosine modulation filtering are taken as examples below to illustrate the time-frequency mapping process.
  • For the MDCT, the time-domain signals of M samples from the previous frame and the time-domain signals of M samples of the present frame are selected first; a window is then applied to the 2M samples of these two frames, and the MDCT is performed on the windowed signal to obtain M frequency-domain coefficients, as illustrated in the sketch below.
  • A sine window can be used as the window function.
  • A biorthogonal transform can also be used, in which said restriction on the window function is relaxed by using a specific analysis filter and synthesis filter.
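A direct-form sketch of the 2M-in, M-out MDCT described above, using the sine window mentioned in the text (real codecs use an FFT-based fast form; the frame length is an example value):

```python
import numpy as np

def mdct(frame):
    """MDCT of a 2M-sample frame -> M frequency-domain coefficients
    (direct O(M^2) evaluation of the MDCT basis)."""
    M = len(frame) // 2
    n = np.arange(2 * M)
    window = np.sin(np.pi / (2 * M) * (n + 0.5))      # sine window
    k = np.arange(M)[:, None]
    basis = np.cos(np.pi / M * (n + 0.5 + M / 2.0) * (k + 0.5))
    return basis @ (window * frame)

prev_frame = np.random.randn(1024)   # M samples from the previous frame
cur_frame = np.random.randn(1024)    # M samples of the present frame
coeffs = mdct(np.concatenate([prev_frame, cur_frame]))   # M = 1024 coefficients
```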
  • For cosine modulation filtering, likewise, the time-domain signals of M samples from the previous frame and M samples of the present frame are selected first; a window is applied to the 2M samples of these two frames, and cosine modulation filtering is performed on the windowed signal to obtain M frequency-domain coefficients.
  • The impulse response length of the analysis window (analysis prototype filter) P_a(n) of the M-sub-band cosine modulation filter bank is N_a, and the impulse response length of the synthesis window (synthesis prototype filter) P_s(n) is N_s.
  • The calculation of the masking threshold and signal-to-masking ratio of the pre-processed audio signal includes the following steps:
  • A convolution operation is respectively performed on the sub-band energy e[b] and the unpredictability c[b] with the spreading function to obtain the sub-band energy spread e_s[b] and the sub-band unpredictability spread c_s[b], the spreading function of masker i with respect to sub-band b being denoted s[i, b].
  • A tonality index t[b] is then derived for each sub-band: t[b] = 1 means said sub-band signal is a pure tone, while t[b] = 0 means said sub-band signal is white noise.
  • Step 3: calculating the signal-to-noise ratio (SNR) needed for each sub-band.
  • Step 4: calculating the masking threshold of each sub-band and the perceptual entropy of the signal.
  • Step 5: calculating the signal-to-masking ratio (SMR) of each sub-band signal.
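The five steps can be condensed into a sketch like the following; the spreading matrix, the tonality-to-SNR mapping, and all constants are illustrative assumptions rather than the patent's actual psychoacoustic model:

```python
import numpy as np

def signal_to_mask_ratio(spectrum, band_edges, spread, tonality):
    """Condensed steps 1-5: band energies are smeared by the spreading
    matrix, a tonality-dependent offset sets the required SNR, the masking
    threshold follows, and SMR = band energy / masking threshold."""
    e = np.array([np.sum(spectrum[lo:hi] ** 2)             # band energy e[b]
                  for lo, hi in zip(band_edges[:-1], band_edges[1:])])
    e_s = spread @ e                                       # energy spread e_s[b]
    offset_db = 18.0 * tonality + 6.0 * (1.0 - tonality)   # required SNR (dB)
    threshold = e_s * 10.0 ** (-offset_db / 10.0)          # masking threshold
    return e / np.maximum(threshold, 1e-12)                # SMR per sub-band

spread = np.eye(4) + 0.1 * np.eye(4, k=1) + 0.1 * np.eye(4, k=-1)
smr = signal_to_mask_ratio(np.random.randn(64), [0, 16, 32, 48, 64],
                           spread, tonality=np.array([0.9, 0.6, 0.3, 0.1]))
```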
  • A linear prediction and vector quantization is then performed on the frequency-domain coefficients.
  • A standard linear prediction analysis is performed on the frequency-domain coefficients, which includes calculating the autocorrelation matrix and obtaining the prediction gain and the prediction coefficients by recursively executing the Levinson-Durbin algorithm. It is then determined whether the calculated prediction gain exceeds a predetermined threshold: if it does, frequency-domain linear prediction error filtering is performed on the frequency-domain coefficients based on the prediction coefficients; otherwise, the frequency-domain coefficients are left unprocessed and the next step quantizes and entropy encodes them directly.
  • Linear prediction includes forward prediction and backward prediction.
  • Forward prediction refers to predicting the current value by using the values before a certain moment,
  • the backward prediction refers to predicting the current value by using the values after a certain moment.
  • the forward prediction will be used as an example to explain the linear prediction error filtering.
  • The frequency-domain coefficients X(k) output from the time-frequency transform can be represented by the residual sequence E(k) and a group of prediction coefficients a_i.
  • Said group of prediction coefficients a_i are transformed into line spectrum pair frequency (LSF) coefficients, on which a multi-stage vector quantization is performed.
  • the vector quantization uses the optimal distortion measurement criterion (e.g. nearest neighboring criterion) to search and calculate the code word indexes of the respective stages of code book, thereby determining the code words corresponding to the prediction coefficients and outputting the code word indexes as the side information.
  • The residual sequence E(k) is quantized and entropy encoded.
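The analysis chain of the last few paragraphs, sketched end to end: Levinson-Durbin turns the autocorrelation of the spectrum into prediction coefficients and a gain, and the error filter E(k) = X(k) − Σ a_i·X(k−i) yields the residual. The gain definition and the threshold value are assumptions for illustration:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation r[0..order] ->
    prediction coefficients a[1..order] and prediction gain."""
    a, err = np.zeros(order), r[0]
    for i in range(order):
        k = (r[i + 1] - np.dot(a[:i], r[1:i + 1][::-1])) / err
        a[:i] -= k * a[:i][::-1]        # update previous coefficients
        a[i] = k
        err *= 1.0 - k * k              # residual energy shrinks each step
    return a, r[0] / err                # gain = input energy / residual energy

def fd_lpc_residual(X, a):
    """Prediction error across frequency: E(k) = X(k) - sum_i a_i X(k-i)."""
    E = X.copy()
    for i, ai in enumerate(a, start=1):
        E[i:] -= ai * X[:-i]
    return E

X = np.cumsum(np.random.randn(256))     # a strongly correlated test 'spectrum'
r = np.array([np.dot(X[:len(X) - l], X[l:]) for l in range(9)])
a, gain = levinson_durbin(r, order=8)
if gain > 2.0:                          # hypothetical prediction-gain threshold
    E = fd_lpc_residual(X, a)           # residual sequence to be quantized
```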
  • the frequency-domain coefficients or the residual sequence is quantized and entropy encoded based on said signal-to-masking ratio, wherein the quantization can be scalar quantization or vector quantization.
  • the scalar quantization comprises the steps of non-linearly compressing the frequency-domain coefficients in all the scale factor bands; using the scale factor of each sub-band to quantize the frequency-domain coefficients of said sub-band to obtain the quantization spectrum represented by an integer; selecting the first scale factor in each frame of signal as the common scale factor; and differentiating the rest of the scale factors from their respective previous scale factors.
  • the vector quantization comprises the steps of forming a plurality of multi-dimensional vector signals with the frequency-domain coefficients; performing spectrum smoothing for each M-dimensional vector according to the smoothing factor; searching for the code word from the code book that has the shortest distance from the vector to be quantized according to the subjective perception distance measure criterion to obtain the code word indexes.
  • the entropy encoding step comprises entropy encoding the quantization spectrum and the differentiated scale factors to obtain the sequence numbers of the code book, the encoded values of the scale factors and the quantization spectrum of lossless encoding; and entropy encoding the sequence numbers of the code book to obtain the encoded values thereof.
  • a one-dimensional or multi-dimensional entropy encoding is performed on the code word indexes to obtain the encoded values of the code word indexes.
  • the entropy encoding method described above can be any one of the existing Huffman encoding, arithmetic encoding or run length encoding method.
  • the encoded audio signal is obtained, which is multiplexed together with the common scale factor, side information and the result of signal type analysis to obtain the compressed audio code stream.
  • Fig. 6 is a schematic drawing of the structure of the audio decoding device according to the present invention.
  • the audio decoding device comprises a bit-stream demultiplexing module 801, an entropy decoding module 802, an inverse quantizer bank 803, an inverse frequency-domain linear prediction and vector quantization module 804, and a frequency-time mapping module 805.
  • the compressed audio code stream is demultiplexed by the bit-stream demultiplexing module 801 to obtain the corresponding data signal and control signal which are output to the entropy decoding module 802 and the inverse frequency-domain linear prediction and vector quantization module 804; the data signal and control signal are decoded in the entropy decoding module 802 to recover the quantized values of the spectrum.
  • Said quantized values are reconstructed in the inverse quantizer bank 803 to obtain the inversely quantized spectrum.
  • the inversely quantized spectrum is then output to the inverse frequency-domain linear prediction and vector quantization module 804 for inverse quantization and inverse linear prediction filtering to obtain the spectrum-before-prediction, which is output to the frequency-time mapping module 805, then the time-domain audio signal of low frequency band is obtained after the frequency-time mapping.
  • the bit-stream demultiplexing module 801 decomposes the compressed audio code stream to obtain the corresponding data signal and control signal and to provide the corresponding decoding information for other modules.
  • the compressed audio data stream is demultiplexed, and the signals output to the entropy decoding module 802 include the common scale factor, the scale factor encoded values, the encoded values of the code book sequence numbers, and the quantized spectrum of the lossless encoding, or the encoded values of the code word indexes; the control information of inverse frequency-domain linear prediction and vector quantization is output to the inverse frequency-domain linear prediction and vector quantization module 804.
  • If the quantization and entropy encoding module 54 uses the scalar quantizer, then in the decoding device the entropy decoding module 802 receives the common scale factor, the encoded values of the scale factors, the encoded values of the code book sequence numbers, and the losslessly encoded quantization spectrum output from the bit-stream demultiplexing module 801; code book sequence number decoding, spectrum coefficient decoding and scale factor decoding are performed thereon to reconstruct the quantization spectrum and to output the integer representation of the scale factors and the quantized values of the spectrum to the inverse quantizer bank 803.
  • the decoding method used by the entropy decoding module 802 corresponds to the encoding method used by entropy encoding in the encoding device, which is, for example, Huffman decoding, arithmetic decoding or run length decoding, etc.
  • Upon receipt of the quantized values of the spectrum and the integer representation of the scale factors, the inverse quantizer bank 803 inversely quantizes the quantized values of the spectrum into a reconstructed spectrum without scaling (the inverse quantization spectrum) and outputs the inverse quantization spectrum to the inverse frequency-domain linear prediction and vector quantization module 804.
  • the inverse quantizer bank 803 can be either a uniform quantizer bank or a non-uniform quantizer bank realized by a companding function.
  • In this embodiment the quantizer bank in the encoder uses the scalar quantizer, so in the decoding device the inverse quantizer bank 803 also uses the scalar inverse quantizer.
  • the quantized values of the spectrum are non-linearly expanded first, then all the spectrum coefficients (inverse quantization spectrum) in the corresponding scale factor band are obtained by using each scale factor.
  • If vector quantization was used in encoding, the entropy decoding module 802 receives the encoded values of the code word indexes output from the bit-stream demultiplexing module 801.
  • the encoded values of the code word indexes are decoded by the entropy decoding method corresponding to the entropy encoding method used in encoding, thereby obtaining the corresponding code word indexes.
  • the code word indexes are output to the inverse quantizer bank 803.
  • the quantized values (inverse quantization spectrum) are obtained and output to the frequency-time mapping module 805.
  • the inverse quantizer bank 803 uses the inverse vector quantizer.
  • The technique of frequency-domain linear prediction vector quantization is used to suppress the pre-echo and to obtain greater encoding gain. Therefore, in the decoder, the inverse quantization spectrum and the control information of inverse frequency-domain linear prediction vector quantization output from the bit-stream demultiplexing module 801 are input to the inverse frequency-domain linear prediction and vector quantization module 804 to recover the spectrum before linear prediction.
  • the inverse frequency-domain linear prediction and vector quantization module 804 comprises an inverse vector quantizer, an inverse transformer, and an inverse linear prediction filter, wherein the inverse vector quantizer is used for inversely quantizing the code word indexes to obtain the line spectrum pair frequency LSF coefficients; the inverse transformer is used for inversely transforming the line spectrum frequency LSF coefficients into prediction coefficients, and the inverse linear prediction filter is used for inversely filtering the inverse quantization spectrum based on the prediction coefficients to obtain the spectrum-before-prediction and output it to the frequency-time mapping module 805.
  • Time-domain audio signals of low frequency channel can be obtained by a mapping processing of the inverse quantization spectrum or the spectrum-before-prediction by the frequency-time mapping module 805.
  • the frequency-time mapping module 805 can be a filter bank of inverse discrete cosine transformation (IDCT), a filter bank of inverse discrete Fourier transformation (IDFT), a filter bank of inverse modified discrete cosine transformation (IMDCT), a filter bank of inverse wavelet transformation, and a cosine modulation filter bank, etc.
  • the decoding method based on the above-mentioned decoder comprises: demultiplexing the compressed audio code stream to obtain the data information and control information; entropy decoding said information to obtain the quantized value of the spectrum; inversely quantizing the quantized values of the spectrum to obtain the inverse quantization spectrum; determining if the control information contains information concerning that the inverse quantization spectrum needs to undergo the inverse frequency-domain linear prediction vector quantization, if it does, performing the inverse vector quantization to obtain the prediction coefficients, and performing an inverse linear prediction filtering on the inverse quantization spectrum according to the prediction coefficients to obtain the spectrum-before-prediction; frequency-time mapping the spectrum-before-prediction to obtain the time-domain audio signals of low frequency band; if the control information does not contain information concerning that the inverse quantization spectrum needs to undergo the inverse frequency-domain linear prediction vector quantization, frequency-time mapping the inverse quantization spectrum to obtain the time-domain audio signals of low frequency band.
  • If the demultiplexed information includes the encoded values of the code book sequence numbers, the common scale factor, the encoded values of the scale factors, and the losslessly encoded quantization spectrum, it means that the spectrum coefficients were quantized in the encoding device by the scalar quantization technique.
  • the entropy decoding steps include: decoding the encoded values of the code book sequence numbers to obtain the code book sequence numbers of all the scale factor bands; decoding the quantization coefficients of all the scale factor bands according to the code book corresponding to the code book sequence number; and decoding the scale factors of all the scale factor bands to reconstruct the quantization spectrum.
  • the entropy decoding method used in said process corresponds to the entropy encoding method used in the encoding method, which is, for example, run length decoding method, Huffman decoding method, or arithmetic decoding method, etc.
  • The entropy decoding process is described below by using as examples the decoding of the code book sequence numbers by the run length decoding method, the decoding of the quantization coefficients by the Huffman decoding method, and the decoding of the scale factors by the Huffman decoding method.
  • the code book sequence numbers of all the scale factor bands are obtained through the run length decoding method.
  • The decoded code book sequence numbers are integers within a certain range. Suppose said range is [0, 11]; then only the code book sequence numbers within said valid range, i.e. between 0 and 11, correspond to Huffman code books of the spectrum coefficients.
  • Otherwise, a certain code book sequence number can be selected to correspond to it; typically, sequence number 0 is selected.
  • The Huffman code book of spectrum coefficients corresponding to the code book number is used to decode the quantization coefficients of all the scale factor bands. If the code book number of a scale factor band is within the valid range, for example between 1 and 11 in this embodiment, said code book number corresponds to a spectrum coefficient code book, and said code book is used to decode the quantization spectrum to obtain the code word indexes of the quantization coefficients of the scale factor bands; subsequently, the code word indexes are unpacked to obtain the quantization coefficients. If the code book number of the scale factor band is not between 1 and 11, said code book number does not correspond to any spectrum coefficient code book, and the quantization coefficients of said scale factor band do not need to be decoded but are all directly set to zero.
  • the scale factor is used to reconstruct the spectrum value on the basis of the inverse quantization spectrum coefficients. If the code book number of the scale factor band is within the valid range, each code book number corresponds to a scale factor.
  • The code stream portion occupied by the first (common) scale factor is read first; the remaining scale factors are then Huffman decoded to obtain the difference between each scale factor and its preceding one, and said differences are added to the values of the preceding scale factors to obtain the respective scale factors. If the quantization coefficients of the present sub-band are all zero, the scale factor of said sub-band does not have to be decoded.
  • the quantized values of the spectrum and the integer representation of the scale factors are obtained, then the quantized values of the spectrum are inversely quantized to obtain the inverse quantization spectrum.
  • the inverse quantization processing includes non-linearly expanding the quantized values of the spectrum, and obtaining all the spectrum coefficients (inverse quantization spectrum) in the corresponding scale factor band according to each scale factor.
  • The entropy decoding steps include decoding the encoded values of the code word indexes by means of the entropy decoding method corresponding to the entropy encoding method used in the encoding device so as to obtain the code word indexes, and then inversely quantizing the code word indexes to obtain the inverse quantization spectrum.
  • An inverse frequency-domain linear prediction vector quantization is performed on the inverse quantization spectrum. First, it is determined from the control information whether said frame of the signal has undergone frequency-domain linear prediction vector quantization; if it has, the code word indexes resulting from the vector quantization of the prediction coefficients are obtained from the control information, the quantized line spectrum pair frequency (LSF) coefficients are obtained according to the code word indexes, and the prediction coefficients are calculated on that basis; subsequently, linear prediction synthesis is performed on the inverse quantization spectrum to obtain the spectrum-before-prediction.
  • The residual sequence E(k) and the calculated prediction coefficients a_i are synthesized by frequency-domain linear prediction to obtain the spectrum-before-prediction X(k), which is then frequency-time mapped.
  • If the control information indicates that said signal frame has not undergone the frequency-domain linear prediction vector quantization, the inverse frequency-domain linear prediction vector quantization is not performed, and the inverse quantization spectrum is directly frequency-time mapped.
  • the method of performing a frequency-time mapping on the inverse quantization spectrum corresponds to the time-frequency mapping method in the encoding method, which can be inverse discrete cosine transformation (IDCT), inverse discrete Fourier transformation (IDFT), inverse modified discrete cosine transformation (IMDCT), and inverse wavelet transformation, etc.
  • the frequency-time mapping process is illustrated below by taking inverse modified discrete cosine transformation IMDCT as an example.
  • the frequency-time mapping process includes three steps: IMDCT transformation, time-domain window adding processing and time-domain superposing operation.
  • The IMDCT transformation is performed on the spectrum-before-prediction or the inverse quantization spectrum to obtain the transformed time-domain signal x_{i,n}.
  • Windowing is then performed in the time domain on the time-domain signal obtained from the IMDCT transformation.
  • Typical window functions include, among others, Sine window and Kaiser-Bessel window.
  • said restriction to the window function can be modified by using double orthogonal transformation with a specific analysis filter and synthesizing filter.
  • The time-domain superposing (overlap-add) operation is timeSam_{i,n} = preSam_{i,n} + preSam_{i-1,n+N/2}, wherein i denotes the frame sequence number, n denotes the sample sequence number, 0 ≤ n < N/2, and N is 2048.
  • The time-domain audio signals of the low frequency band are thereby obtained.
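A direct-form sketch of the three steps (IMDCT, windowing, overlap-add) with the N = 2048 frame length given above; the IMDCT normalization and the sine window are conventional choices, not details quoted from the patent:

```python
import numpy as np

def imdct(X):
    """Inverse MDCT: M coefficients -> 2M time samples (direct form)."""
    M = len(X)
    n = np.arange(2 * M)[:, None]
    k = np.arange(M)
    return (2.0 / M) * np.cos(np.pi / M * (n + 0.5 + M / 2.0) * (k + 0.5)) @ X

def overlap_add(pre_cur, pre_prev):
    """timeSam[i, n] = preSam[i, n] + preSam[i-1, n + N/2], 0 <= n < N/2."""
    N = len(pre_cur)                         # N = 2048 in the text
    return pre_cur[:N // 2] + pre_prev[N // 2:]

N = 2048
window = np.sin(np.pi / N * (np.arange(N) + 0.5))       # sine window
X_prev, X_cur = np.random.randn(N // 2), np.random.randn(N // 2)
pcm = overlap_add(window * imdct(X_cur), window * imdct(X_prev))  # N/2 samples
```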
  • Fig. 7 is a schematic drawing of the structure of embodiment one of the encoding device of the present invention.
  • this embodiment has a multi-resolution analyzing module 56 added between the output of the frequency-domain linear prediction and vector quantization module 53 and the input of the quantization and entropy encoding module 54.
  • the encoding device of the present invention increases the time resolution of the encoded fast varying signals by means of a multi-resolution analyzing module 56.
  • the residual sequence or frequency-domain coefficients output from the frequency-domain linear prediction and vector quantization module 53 are input to the multi-resolution analyzing module 56.
• if the signal is of a fast varying type, a frequency-domain wavelet transformation or frequency-domain modified discrete cosine transformation (MDCT) is performed to obtain the multi-resolution representation of the residual sequence/frequency-domain coefficients, which is then output to the quantization and entropy encoding module 54.
• if the signal is of a slowly varying type, the residual sequence/frequency-domain coefficients are directly output to the quantization and entropy encoding module 54 without being processed.
• the multi-resolution analyzing module 56 performs a time-and-frequency-domain reorganization of the input frequency-domain data to improve the time resolution of the frequency-domain data at the cost of reduced frequency precision, thereby automatically adapting to the time-frequency characteristics of fast varying signals. Accordingly, the effect of suppressing the pre-echo is achieved without constantly adjusting the form of the filter bank in the time-frequency mapping module 52.
• the multi-resolution analyzing module 56 comprises a frequency-domain coefficient transformation module and a reorganization module, wherein the frequency-domain coefficient transformation module is used for transforming the frequency-domain coefficients into time-frequency plane coefficients, and the reorganization module is used for reorganizing the time-frequency plane coefficients according to a certain rule.
  • the frequency-domain coefficient transformation module can use the filter bank of frequency-domain wavelet transformation, or the filter bank of frequency-domain MDCT transformation, etc.
• the operation process of the multi-resolution analyzing module 56 is described below by taking frequency-domain wavelet transformation and frequency-domain MDCT transformation as examples.
• the wavelet basis of the frequency-domain wavelet or wavelet packet transformation may either be fixed or adaptive.
• the scale coefficients of the Haar wavelet basis are (1/√2, 1/√2).
• Fig. 8 shows the schematic drawing of the filtering structure that performs wavelet transformation by using the Haar wavelet basis, wherein H 0 represents low-pass filtering (the filtering coefficients are (1/√2, 1/√2)), H 1 represents high-pass filtering (the filtering coefficients are (1/√2, -1/√2)), and "↓2" represents a duple down-sampling operation.
• a Haar wavelet transformation is performed on the high frequency portions of the MDCT coefficients to obtain coefficients X 2 (k), X 3 (k), X 4 (k), X 5 (k), X 6 (k), X 7 (k) of different time-frequency intervals, and the division of the corresponding time-frequency plane is as shown in Fig. 9.
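A minimal sketch of one Haar analysis step is given below; the exact tree structure (which bands are split, and how often) follows Fig. 8 in the patent and is only approximated here.

    import numpy as np

    def haar_step(c):
        # H0 = (1/sqrt(2), 1/sqrt(2)) and H1 = (1/sqrt(2), -1/sqrt(2)),
        # each followed by the duple ("↓2") down-sampling
        s = 1.0 / np.sqrt(2.0)
        even, odd = c[0::2], c[1::2]
        return s * (even + odd), s * (even - odd)  # (low band, high band)

    # illustrative use on the high-frequency half of 1024 MDCT coefficients;
    # repeated splits yield coefficients of different time-frequency
    # intervals, comparable to X2(k)..X7(k) in the text
    mdct_coeffs = np.random.randn(1024)
    low, high = haar_step(mdct_coeffs[512:])
    low2, high2 = haar_step(low)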
• different wavelet transformation structures can be used for processing so as to obtain other similar time-frequency plane divisions. Therefore, the time-frequency plane division during signal analysis can be adjusted as desired to meet different requirements on time and frequency resolution.
• the time-frequency plane coefficients are reorganized in the reorganization module according to a certain rule. For example, the time-frequency plane coefficients can be organized in the frequency direction first, with the coefficients in each frequency band organized in the time direction; the organized coefficients are then arranged in the order of sub-window and scale factor band.
• M-point MDCT transformations are performed sequentially on said N-point frequency-domain data, so that the frequency precision of the time-frequency domain data is reduced while the time precision is increased.
• MDCT transformations of different lengths can be used in different frequency-domain ranges, thereby obtaining different time-frequency plane divisions, i.e. different time and frequency precision.
  • the reorganization module reorganizes the time-frequency data output from the filter bank of the frequency-domain MDCT transformation.
• One way of reorganization is to organize the time-frequency plane coefficients in the frequency direction first, with the coefficients in each frequency band organized in the time direction at the same time; the organized coefficients are then arranged in the order of sub-window and scale factor band.
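The following sketch illustrates the frequency-domain MDCT variant: a short M-point MDCT slides along the N frequency-domain coefficients with 50% overlap, trading frequency precision for time precision. The edge padding and resulting block count are assumptions for illustration.

    import numpy as np

    def mdct(x):
        # forward MDCT of an M-sample block into M/2 coefficients
        M = len(x)
        half = M // 2
        n = np.arange(M)
        k = np.arange(half)
        phase = (2.0 * np.pi / M) * np.outer(k + 0.5, n + 0.5 + half / 2.0)
        return np.cos(phase) @ x

    def fd_mdct_analysis(freq_coeffs, M=16):
        # slide an M-point MDCT (hop M/2) along the frequency axis; each
        # block of M/2 outputs is one tile of the time-frequency plane
        hop = M // 2
        x = np.concatenate([np.zeros(hop), freq_coeffs, np.zeros(hop)])
        return np.array([mdct(x[i:i + M])
                         for i in range(0, len(x) - M + 1, hop)])

    tiles = fd_mdct_analysis(np.random.randn(1024))  # tiles of 8 coefficients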
  • the basic flow thereof is the same as that of the encoding method based on the encoding device as shown in Fig. 5, and the difference therebetween is that the former further includes the following steps: before quantizing and entropy encoding the residual sequence/frequency-domain coefficients, if the signal is a fast varying signal, performing a multi-resolution analysis on the residual sequence/frequency-domain coefficients; if the signal is not a fast varying signal, directly quantizing and entropy encoding the residual sequence/frequency-domain coefficients.
  • the multi-resolution analysis can use frequency-domain wavelet transformation method or frequency-domain MDCT transformation method.
  • the frequency-domain wavelet analysis method includes: wavelet transforming the frequency-domain coefficients to obtain the time-frequency plane coefficients; reorganizing said time-frequency plane coefficients according to a certain rule.
  • the MDCT transformation includes: MDCT transforming the frequency-domain coefficients to obtain the time-frequency plane coefficients; reorganizing said time-frequency plane coefficients according to a certain rule.
  • the reorganization method includes: organizing the time-frequency plane coefficients in the frequency direction, and organizing the coefficients in each frequency band in the time direction, then arranging the organized coefficients in the order of sub-window and scale factor band.
  • Fig. 10 is a schematic drawing of embodiment one of the decoding device of the present invention.
  • Said decoding device has a multi-resolution integration module 806 added on the basis of the decoding device as shown in Fig. 6.
  • Said multi-resolution integration module 806 is between the output of the inverse quantizer bank 803 and the input of the inverse frequency-domain linear prediction and vector quantization module 804 for multi-resolution integrating the inverse quantization spectrum.
  • the technique of multi-resolution filtering is used for the fast varying type signals to increase the time resolution of the encoded fast varying type signals.
  • the multi-resolution integration module 806 is used to recover the frequency-domain coefficients of the fast varying signals before multi-resolution analysis.
  • the multi-resolution integration module 806 comprises a coefficient reorganization module and a coefficient transformation module, wherein the coefficient transformation module may use a filter bank of frequency-domain inverse wavelet transformation or a filter bank of frequency-domain IMDCT transformation.
• the basic flow thereof is the same as that of the decoding method of the decoding device as shown in Fig. 6, and the difference is that the former further includes the steps of, after obtaining the inverse quantization spectrum, performing a multi-resolution integration thereon, and then determining whether it is necessary to perform an inverse frequency-domain linear prediction vector quantization on the multi-resolution integrated inverse quantization spectrum.
  • the method of multi-resolution integration is described below by taking the frequency-domain IMDCT transformation as an example.
• the method specifically includes: reorganizing the coefficients of the inverse quantization spectrum, then performing a plurality of IMDCT transformations on the groups of reorganized coefficients to obtain the inverse quantization spectrum before the multi-resolution analysis.
• This process is described in detail by using 128 IMDCT transformations (8 inputs and 16 outputs each). Firstly, the coefficients of the inverse quantization spectrum are arranged in the order of sub-window and scale factor band; then they are reorganized in the order of frequency, so that the 128 coefficients of each sub-window are organized together in the order of frequency.
  • the coefficients that are arranged in the order of sub-window are organized in frequency direction with 8 in each group, and the 8 coefficients in each group are arranged in time sequence, thus there are altogether 128 groups of coefficients in the frequency direction.
• a 16-point IMDCT transformation is performed on each group of coefficients, and the 16 outputs of the IMDCT transformation of each group are added in an overlapping manner to obtain 8 frequency-domain data. This operation is performed 128 times from the low frequency direction to the high frequency direction to obtain 1024 frequency-domain coefficients.
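A compact sketch of this integration loop is given below, assuming the same IMDCT convention as the earlier sketches and omitting any windowing the codec may apply:

    import numpy as np

    def imdct16(X8):
        # 16-output IMDCT of one 8-coefficient group (8 inputs, 16 outputs)
        M, half = 16, 8
        n = np.arange(M)
        k = np.arange(half)
        phase = (2.0 * np.pi / M) * np.outer(n + 0.5 + half / 2.0, k + 0.5)
        return (2.0 / M) * (np.cos(phase) @ X8)

    def multires_integrate(groups):
        # groups: 128 groups of 8 coefficients, ordered from low to high
        # frequency; each group contributes 8 frequency-domain data after
        # overlap-adding the 16 IMDCT outputs of neighbouring groups
        prev_tail = np.zeros(8)
        out = []
        for X8 in groups:
            y = imdct16(np.asarray(X8, dtype=float))
            out.append(y[:8] + prev_tail)
            prev_tail = y[8:]
        return np.concatenate(out)  # 128 * 8 = 1024 coefficients

    spectrum = multires_integrate(np.random.randn(128, 8))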
  • Fig. 11 is the schematic drawing of the second embodiment of the encoding device of the present invention.
  • said embodiment has a sum-difference stereo (M/S) encoding module 57 added between the output of the frequency-domain linear prediction and vector quantization module 53 and the input of the quantization and entropy encoding module 54.
  • the psychoacoustical analyzing module 51 outputs the masking threshold of the sum-difference sound channel to the quantization and entropy encoding module 54.
  • the psychoacoustical analyzing module 51 calculates not only the masking threshold of the single sound channel of the audio signals, but also the masking threshold of the sum-difference sound channels.
  • the sum-difference stereo encoding module 57 can also be located between the quantizer bank and the encoder in the quantization and entropy encoding module 54.
• the sum-difference stereo module 57 makes use of the correlation between the two sound channels in a sound channel pair to convert the frequency-domain coefficients/residual sequence of the left-right sound channels into the frequency-domain coefficients/residual sequence of the sum-difference sound channels, thereby reducing the code rate and improving the encoding efficiency. Hence, it is only suitable for multi-channel signals of the same signal type; for mono signals or multi-channel signals of different signal types, the sum-difference stereo encoding is not performed.
• the encoding method of the encoding device as shown in Fig. 11 is substantially the same as the encoding method of the encoding device as shown in Fig. 5, and the difference is that the former further includes the following steps before quantizing and entropy encoding the residual sequence/frequency-domain coefficients: determining whether the audio signals are multi-channel signals; if they are multi-channel signals, determining whether the signal types of the left-right sound channels are the same; if the signal types are the same, determining whether the scale factor bands corresponding to the two sound channels meet the conditions of sum-difference stereo encoding; if they meet the conditions, performing a sum-difference stereo encoding on the residual sequence/frequency-domain coefficients to obtain the residual sequence/frequency-domain coefficients of the sum-difference sound channels; if they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the frequency-domain coefficients are not processed.
• the sum-difference stereo encoding can be applied not only before the quantization, but also after the quantization and before the entropy encoding; that is, after quantizing the residual sequence/frequency-domain coefficients, it is determined whether the audio signals are multi-channel signals; if they are, it is determined whether the signals of the left-right sound channels are of the same type; if the signal types are the same, it is determined whether the scale factor bands meet the encoding conditions. If they meet the conditions, a sum-difference stereo encoding is performed on the quantization spectrum to obtain the quantization spectrum of the sum-difference sound channels. If they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the frequency-domain coefficients are not processed.
• the residual sequence/frequency-domain coefficients of the left-right sound channels at the scale factor band are linearly transformed and replaced with the residual sequence/frequency-domain coefficients of the sum-difference sound channels:
• [M, S]ᵀ = ½ · [[1, 1], [1, -1]] · [L, R]ᵀ, i.e. M = (L + R)/2 and S = (L - R)/2, wherein M denotes the residual sequence/frequency-domain coefficients of the sum sound channel, S denotes the residual sequence/frequency-domain coefficients of the difference sound channel, L denotes the residual sequence/frequency-domain coefficients of the left sound channel, and R denotes the residual sequence/frequency-domain coefficients of the right sound channel.
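In code, the per-band transform and its inverse are one line each; this sketch applies the matrix above to the coefficients of one scale factor band:

    import numpy as np

    def ms_encode(L, R):
        # M = (L + R) / 2, S = (L - R) / 2
        return (L + R) / 2.0, (L - R) / 2.0

    def ms_decode(M, S):
        # inverse: L = M + S, R = M - S
        return M + S, M - S

    L = np.array([1.0, 2.0, 3.0])   # left-channel coefficients of one band
    R = np.array([0.5, 1.5, 2.5])   # right-channel coefficients
    M, S = ms_encode(L, R)
    assert np.allclose(ms_decode(M, S)[0], L)
    assert np.allclose(ms_decode(M, S)[1], R)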
  • Fig. 12 is a schematic drawing of embodiment two of the decoding device of the present invention.
  • said decoding device has a sum-difference stereo decoding module 807 added between the output of the inverse quantizer bank 803 and the input of the inverse frequency-domain linear prediction and vector quantization module 804 to receive the result of signal type analysis and the sum-difference stereo control signal output from the bit-stream demultiplexing module 801, and to transform the inverse quantization spectrum of the sum-difference sound channels into the inverse quantization spectrum of the left-right sound channels according to said control information.
• in the sum-difference stereo control signal there is a flag bit indicating whether the present sound channel pair needs sum-difference stereo decoding. If it does, there is also a flag bit on each scale factor band to indicate whether the corresponding scale factor band needs to be sum-difference stereo decoded, and the sum-difference stereo decoding module 807 determines, on the basis of the flag bit of the scale factor band, whether it is necessary to perform sum-difference stereo decoding on the inverse quantization spectrum in some of the scale factor bands. If the sum-difference stereo encoding is performed in the encoding device, the sum-difference stereo decoding must be performed on the inverse quantization spectrum in the decoding device.
• the sum-difference stereo decoding module 807 can also be located between the output of the entropy decoding module 802 and the input of the inverse quantizer bank 803 to receive the sum-difference stereo control signal and the result of signal type analysis output from the bit-stream demultiplexing module 801.
• the decoding method of the decoding device as shown in Fig. 12 is substantially the same as the decoding method of the decoding device as shown in Fig. 6, and the difference is that the former further includes the following steps: after obtaining the inverse quantization spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the inverse quantization spectrum according to the sum-difference stereo control signal. If it is necessary, it is determined, on the basis of the flag bit on each scale factor band, whether said scale factor band needs a sum-difference stereo decoding.
  • the inverse quantization spectrum of the sum-difference sound channels in said scale factor band is transformed into inverse quantization spectrum of the left-right sound channels before the subsequent processing; if the signal types are not the same or it is unnecessary to perform the sum-difference stereo decoding, the inverse quantization spectrum is not processed and the subsequent processing is directly performed.
• the sum-difference stereo decoding can also be performed after the entropy decoding and before the inverse quantization; that is, after obtaining the quantized values of the spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the quantized values of the spectrum according to the sum-difference stereo control signal.
  • Fig. 13 is a schematic drawing of the structure of the third embodiment of the encoding device of the present invention.
  • said embodiment has a frequency band spreading module 58 and a re-sampling module 59 added.
  • the frequency band spreading module 58 is used for analyzing the originally input audio signal on the entire frequency band to extract the spectrum envelope of the high frequency portion and the parameters representing the correlation between the low and high frequency spectrum, and to output them as the frequency band spreading information to the bit-stream multiplexing module 55; and the re-sampling module 59 is used for re-sampling the originally input audio signal to change the sampling rate thereof.
  • the re-sampling includes up-sampling and down-sampling.
  • the re-sampling is described below using down-sampling as an example.
  • the re-sampling module 59 comprises a low-pass filter and a down-sampler, wherein the low-pass filter is used for limiting the frequency band of the audio signals and eliminating the aliasing that might be caused by down-sampling.
  • the input audio signal is down-sampled after being low-pass filtered.
• assume the input audio signal is s(n); after being filtered by the low-pass filter having an impulse response of h(n), said signal is output as v(n); the sequence obtained by an M-times down-sampling of v(n) is x(n), i.e. x(n) = v(Mn). The sampling rate of the re-sampled audio signal x(n) is thus reduced by a factor of M as compared to the sampling rate of the originally input audio signal s(n).
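A minimal down-sampling sketch follows; the windowed-sinc low-pass used here is an illustrative stand-in for whatever filter h(n) the encoder actually uses.

    import numpy as np

    def resample_down(s, h, M):
        # v(n) = (s * h)(n): band-limit the signal, then keep every
        # M-th sample: x(n) = v(M n)
        v = np.convolve(s, h, mode="same")
        return v[::M]

    M = 2
    t = np.arange(-16, 17)                   # 33-tap illustrative filter
    h = np.sinc(t / M) / M * np.hamming(33)  # cutoff near fs / (2M)
    s = np.random.randn(1000)                # stand-in input signal s(n)
    x = resample_down(s, h, M)               # sampling rate reduced M times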
• the basic principle of frequency band spreading is that, for most audio signals, there is a strong correlation between the characteristics of the high frequency portion and those of the low frequency portion, so the high frequency portions of the audio signals can be effectively reconstructed from the low frequency portions and therefore need not be transmitted. To ensure a correct reconstruction of the high frequency portions, only a small amount of frequency band spreading information is transmitted in the compressed audio code stream.
• the frequency band spreading module 58 comprises a parameter extracting module and a spectrum envelope extracting module. Signals are input to the parameter extracting module, which extracts the parameters representing the spectrum characteristics of the input signals in different time-frequency regions; then, in the spectrum envelope extracting module, the spectrum envelope of the high frequency portion of the signal is estimated at a certain time-frequency resolution. To ensure that the time-frequency resolution is best suited to the characteristics of the present input signals, the time-frequency resolution of the spectrum envelope can be selected freely.
  • the parameters of the spectrum characteristics of the input signals and the spectrum envelope of the high frequency portion are used as the output of frequency band spreading to be sent to the bit-stream multiplexing module 55 for multiplexing.
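The envelope extraction can be pictured as averaging energies over a grid of high-frequency bands. The band grid and the single time slot below are simplifications; the patent allows the time-frequency resolution to be chosen freely.

    import numpy as np

    def high_band_envelope(spectrum, split, bands=8):
        # average energy of each high-frequency band (one time slot shown)
        high = spectrum[split:]
        edges = np.linspace(0, len(high), bands + 1).astype(int)
        return np.array([np.mean(high[a:b] ** 2)
                         for a, b in zip(edges[:-1], edges[1:])])

    spectrum = np.random.randn(1024)               # stand-in frequency data
    env = high_band_envelope(spectrum, split=512)  # 8 envelope values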
  • the encoding method based on the encoding device as shown in Fig. 13 is substantially the same as the encoding method based on the encoding device as shown in Fig. 5, and the difference is that the former further includes the following steps: re-sampling the audio signal before analyzing the type thereof; analyzing the input audio signal on the entire frequency band to extract the high frequency spectrum envelope and the parameters of the signal spectrum characteristics thereof as the control signal of the frequency-band spreading, which are multiplexed together with the audio encoded signal and the side information to obtain the compressed audio code stream.
  • the re-sampling includes the two steps of limiting the frequency band of the audio signal and performing a multiple down-sampling on the audio signal whose frequency band is limited.
  • Fig. 14 is a schematic drawing of the structure of embodiment three of the decoding device of the present invention.
  • said decoding device has a frequency band spreading module 808 added, which receives the frequency band spreading control information output from the bit stream demultiplexing module 801 and the time-domain audio signal of low frequency channel output from the frequency-time mapping module 805, and which reconstructs the high frequency signal portion through spectrum shift and high frequency adjustment to output the wide band audio signal.
• the decoding method based on the decoding device as shown in Fig. 14 is substantially the same as the decoding method based on the decoding device as shown in Fig. 6, and the difference lies in that the former further includes the step of reconstructing the high frequency portion of the time-domain audio signal according to the frequency band spreading control information and the time-domain audio signal, thereby obtaining the wide band audio signal.
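The decoder side can be sketched as the mirror image: copy low-band content upward (spectrum shift) and scale each band to the transmitted envelope (high frequency adjustment). Operating directly on spectral lines, as below, is a simplification of the actual processing.

    import numpy as np

    def reconstruct_high_band(low, env):
        # spectrum shift: reuse the low-band coefficients as the high band
        high = low.copy()
        edges = np.linspace(0, len(high), len(env) + 1).astype(int)
        for e, (a, b) in zip(env, zip(edges[:-1], edges[1:])):
            cur = np.mean(high[a:b] ** 2) + 1e-12
            high[a:b] *= np.sqrt(e / cur)   # match each band to the envelope
        return np.concatenate([low, high])  # wide band spectrum

    low = np.random.randn(512)              # decoded low-band lines
    env = np.full(8, 0.1)                   # transmitted envelope energies
    wide = reconstruct_high_band(low, env)  # 1024 wide-band lines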
  • Fig. 15 is a schematic drawing of the structure of the fourth embodiment of the encoding device of the present invention, which has a frequency band spreading module 58 and a re-sampling module 59 added on the basis of the encoding device as shown in Fig. 7.
  • the connection between said frequency band spreading module 58 and re-sampling module 59 and other modules, and the function and operation principle of these two modules are the same as those shown in Fig. 13, so they will not be elaborated herein.
  • the encoding method based on the encoding device as shown in Fig. 15 is substantially the same as the encoding method based on the encoding device as shown in Fig. 7, and the difference is that the former further includes the following steps: re-sampling the audio signal before analyzing the type thereof; analyzing the input audio signal on the entire frequency band to extract the high frequency spectrum envelope and the parameters of the spectrum characteristics thereof; and multiplexing them together with the audio encoded signal and the side information to obtain the compressed audio code stream.
  • Fig. 16 is a schematic drawing of embodiment four of the decoding device of the present invention.
  • said decoding device has a frequency band spreading module 808 added.
• the connection between said frequency band spreading module 808 and other modules, and the function and operation principle thereof, are the same as those shown in Fig. 14, so they will not be elaborated herein.
• the decoding method based on the decoding device as shown in Fig. 16 is substantially the same as the decoding method based on the decoding device as shown in Fig. 10, and the difference is that said decoding method further includes the step of reconstructing the high frequency portion of the audio signal according to the frequency band spreading control information and the time-domain audio signal, thereby obtaining an audio signal of wide frequency band.
  • Fig. 17 is a schematic drawing of the structure of the fifth embodiment of the encoding device of the present invention.
• said embodiment has a sum-difference stereo encoding module 57 added between the output of the multi-resolution analyzing module 56 and the input of the quantization and entropy encoding module 54, or between the quantizer bank and the encoder in the quantization and entropy encoding module 54.
  • the function and operation principle of the sum-difference stereo encoding module 57 are the same as those shown in Fig. 11, so they will not be elaborated herein.
  • the encoding method of the encoding device as shown in Fig. 17 is substantially the same as the encoding method of the encoding device as shown in Fig. 7, and the difference is that the former further includes the steps of determining whether the audio signals are multi-channel signals after multi-resolution analysis of the residual sequence/frequency-domain coefficients. If they are multi-channel signals, determining whether the types of the signals of the left-right sound channels are the same, and if the signal types are the same, determining whether the scale factor bands meet the encoding conditions.
  • Fig. 18 is a schematic drawing of the structure of embodiment five of the decoding device of the present invention.
  • said decoding device has a sum-difference stereo decoding module 807 added between the output of the inverse quantizer bank 803 and the input of the multi-resolution integration module 806, or between the output of the entropy decoding module 802 and the input of the inverse quantizer bank 803.
  • the function and operation principle of the sum-difference stereo decoding module 807 are the same as those shown in Fig. 12, so they will not be elaborated herein.
  • the decoding method of the decoding device as shown in Fig. 18 is substantially the same as the decoding method of the decoding device as shown in Fig. 10, and the difference is that the former further includes the following steps: after obtaining the inverse quantization spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the inverse quantization spectrum according to the sum-difference stereo control signal.
  • Fig. 19 is the schematic drawing of the sixth embodiment of the encoding device of the present invention.
  • this embodiment has a frequency band spreading module 58 and a re-sampling module 59 added.
  • the connection between said frequency band spreading module 58 and re-sampling module 59 and other modules, and the functions and operation principles of these two modules are the same as those in Fig. 13, so they will not be elaborated herein.
  • the encoding method based on the encoding device as shown in Fig. 19 is substantially the same as the encoding method based on the encoding device as shown in Fig. 17, and the difference is that the former further includes the following steps: re-sampling the audio signal before analyzing the type thereof; analyzing the input audio signal on the entire frequency band to extract the high frequency spectrum envelope and the parameters of the spectrum characteristics thereof; and multiplexing them together with the audio encoded signal and the side information to obtain the compressed audio code stream.
• Fig. 20 is a schematic drawing of embodiment six of the decoding device of the present invention. On the basis of the decoding device as shown in Fig. 18, said decoding device has a frequency band spreading module 808 added.
• the connection between said frequency band spreading module 808 and other modules, and the function and principle thereof, are the same as those shown in Fig. 14, so they will not be elaborated herein.
  • the decoding method based on the decoding device as shown in Fig. 20 is substantially the same as the decoding method based on the decoding device as shown in Fig. 18, and the difference is that said decoding method further includes the step of reconstructing the high frequency portion of the audio signal according to the frequency band spreading control information and the time-domain audio signal, thereby to obtain audio signals of wide frequency band.
  • Fig. 21 is a schematic drawing of the seventh embodiment of the encoding device of the present invention.
  • said embodiment has a frequency band spreading module 58 and a re-sampling module 59 added.
• the connection between said frequency band spreading module 58 and re-sampling module 59 and other modules, and the functions and operation principles of said two modules, are the same as those in Fig. 13, so they will not be elaborated herein.
  • the encoding method of the encoding device as shown in Fig. 21 is substantially the same as the encoding method of the encoding device as shown in Fig. 11, and the difference is that said encoding method further includes the steps of re-sampling the audio signal before analyzing the type thereof; analyzing the input audio signal on the entire frequency band to extract the high frequency spectrum envelope and the parameters of the spectrum characteristics thereof; and multiplexing them together with the audio encoded signal and the side information to obtain the compressed audio code stream.
  • Fig. 22 is a schematic drawing of embodiment seven of the decoding device of the present invention.
• said decoding device has a frequency band spreading module 808 added.
  • the connection between said frequency band spreading module 808 and other modules, and the function and principle thereof are the same as those shown in Fig. 14, so they will not be elaborated herein.
  • the decoding method based on the decoding device as shown in Fig. 22 is substantially the same as the decoding method based on the decoding device as shown in Fig. 12, and the difference is that said decoding method further includes the step of reconstructing the high frequency portion of the audio signal according to the frequency band spreading control information and the time-domain audio signal, thereby to obtain audio signals of wide frequency band.
  • the seven embodiments of the encoding device as described above may also include a gain control module which receives the audio signals output from the signal type analyzing module 59, controls the dynamic range of the fast varying type signals, and eliminates the pre-echo in audio processing. The output thereof is connected to the time-frequency mapping module 52 and the psychoacoustical analyzing module 51, meanwhile, the amount of gain adjustment is output to the bit-stream multiplexing module 55.
  • the gain control module controls only the fast varying type signal, while the slowly varying signal is directly output without being processed.
• the gain control module adjusts the time-domain energy envelope of the signal to increase the gain value of the signal before the fast varying point, so that the amplitudes of the time-domain signal before and after the fast varying point are close to each other; the time-domain signal whose time-domain energy envelope has been adjusted is then output to the time-frequency mapping module 52. Meanwhile, the amount of gain adjustment is output to the bit-stream multiplexing module 55.
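The gain adjustment can be sketched as follows; locating the fast varying point and the exact gain rule are simplified assumptions here.

    import numpy as np

    def gain_control(frame, transient_idx):
        # raise the gain before the fast varying point so the amplitudes on
        # both sides become close; return the adjusted frame and the amount
        # of gain adjustment for the bit stream
        pre = np.max(np.abs(frame[:transient_idx])) + 1e-12
        post = np.max(np.abs(frame[transient_idx:])) + 1e-12
        g = post / pre
        out = frame.copy()
        out[:transient_idx] *= g
        return out, g

    adjusted, g = gain_control(np.random.randn(2048), transient_idx=1200)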
  • the encoding method based on said encoding device is substantially the same as the encoding method based on the above described encoding device, and the difference lies in that the former further includes the step of performing a gain control on the signal whose signal type has been analyzed.
  • the seven embodiments of the decoding device as described above may also include an inverse gain control module which is located after the output of the frequency-time mapping module 805 to receive the result of signal type analysis and the information of the amount of gain adjustment output from the bit-stream demultiplexing module 801, thereby adjusting the gain of the time-domain signal and controlling the pre-echo.
  • the inverse gain control module controls the fast varying signals but leaves the slowly varying signals unprocessed.
  • the inverse gain control module adjusts the energy envelope of the reconstructed time-domain signal according to the information of the amount of gain adjustment, reduces the amplitude value of the signal before the fast varying point, and adjusts the energy envelope back to the original state of low in the front and high in the back.
• the amplitude value of the quantization noise before the fast varying point will be reduced along with the amplitude value of the signal, thereby controlling the pre-echo.
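The decoder-side counterpart simply divides the leading samples by the transmitted gain, as in this sketch; quantization noise in that region is attenuated by the same factor.

    import numpy as np

    def inverse_gain_control(frame, transient_idx, g):
        # restore the original low-in-front / high-in-back energy envelope
        out = frame.astype(float).copy()
        out[:transient_idx] /= g
        return out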
  • the decoding method based on said decoding device is substantially the same as the decoding method based on the above described decoding device, and the difference lies in that the former further includes the step of performing an inverse gain control on the reconstructed time-domain signals.

EP05738242A 2004-04-01 2005-04-01 Dispositif et procede de codage/decodage audio ameliores Withdrawn EP1852851A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200410030945 2004-04-01
PCT/CN2005/000441 WO2005096274A1 (fr) 2004-04-01 2005-04-01 Dispositif et procede de codage/decodage audio ameliores

Publications (1)

Publication Number Publication Date
EP1852851A1 true EP1852851A1 (fr) 2007-11-07

Family

ID=35064018

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05738242A Withdrawn EP1852851A1 (fr) 2004-04-01 2005-04-01 Dispositif et procede de codage/decodage audio ameliores

Country Status (2)

Country Link
EP (1) EP1852851A1 (fr)
WO (1) WO2005096274A1 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8027242B2 (en) * 2005-10-21 2011-09-27 Qualcomm Incorporated Signal coding and decoding based on spectral dynamics
US8392176B2 (en) 2006-04-10 2013-03-05 Qualcomm Incorporated Processing of excitation in audio coding and decoding
• EP2088588B1 (fr) 2006-11-10 2013-01-09 Panasonic Corporation Parameter decoding device, parameter encoding device, and parameter decoding method
US8428957B2 (en) 2007-08-24 2013-04-23 Qualcomm Incorporated Spectral noise shaping in audio coding based on spectral dynamics in frequency sub-bands
US9841941B2 (en) 2013-01-21 2017-12-12 Dolby Laboratories Licensing Corporation System and method for optimizing loudness and dynamic range across different playback devices
• EP2830065A1 2013-07-22 2015-01-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for decoding an encoded audio signal using a transition filter around a transition frequency
• WO2016162165A1 (fr) * 2015-04-10 2016-10-13 Thomson Licensing Method and device for encoding multiple audio signals, and method and device for decoding a mixture of multiple audio signals with improved separation
• US10354668B2 (en) * 2017-03-22 2019-07-16 Immersion Networks, Inc. System and method for processing audio data
• KR102603621B1 (ko) * 2019-01-08 2023-11-16 엘지전자 주식회사 Signal processing device and image display device comprising the same
• CN116032709B (zh) * 2022-12-06 2024-04-12 中国电子科技集团公司第三十研究所 Method and device for blind demodulation and modulation feature analysis of FSK signals without prior knowledge
• CN117152448A (zh) * 2023-05-11 2023-12-01 中南大学 Flotation process froth image feature selection method based on differential frequency-domain features
• CN117498892B (zh) * 2024-01-02 2024-05-03 深圳旷世科技有限公司 UWB-based audio transmission method, device, terminal and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
• KR960012475B1 (ko) * 1994-01-18 1996-09-20 대우전자 주식회사 Per-channel bit allocation device for a digital audio encoding apparatus
• EP0720316B1 (fr) 1994-12-30 1999-12-08 Daewoo Electronics Co., Ltd Adaptive encoding device for digital sound encoding and bit allocation method
• CN1154084C (zh) * 2002-06-05 2004-06-16 北京阜国数字技术有限公司 Audio encoding/decoding method based on pseudo-wavelet filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2005096274A1 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9076453B2 (en) 2007-03-02 2015-07-07 Telefonaktiebolaget Lm Ericsson (Publ) Methods and arrangements in a telecommunications network
US8731917B2 (en) 2007-03-02 2014-05-20 Telefonaktiebolaget Lm Ericsson (Publ) Methods and arrangements in a telecommunications network
• RU2616863C2 (ru) * 2010-03-11 2017-04-18 Фраунхофер-Гезелльшафт цур Фёрдерунг дер ангевандтен Форшунг Е.Ф. Signal processor, window former, encoded media signal, signal processing method and window forming method
US10580425B2 (en) 2010-10-18 2020-03-03 Samsung Electronics Co., Ltd. Determining weighting functions for line spectral frequency coefficients
US9773507B2 (en) 2010-10-18 2017-09-26 Samsung Electronics Co., Ltd. Apparatus and method for determining weighting function having for associating linear predictive coding (LPC) coefficients with line spectral frequency coefficients and immittance spectral frequency coefficients
US9311926B2 (en) 2010-10-18 2016-04-12 Samsung Electronics Co., Ltd. Apparatus and method for determining weighting function having for associating linear predictive coding (LPC) coefficients with line spectral frequency coefficients and immittance spectral frequency coefficients
US9536530B2 (en) 2011-02-14 2017-01-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Information signal representation using lapped transform
US9595262B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Linear prediction based coding scheme using spectral domain noise shaping
• TWI488177B * 2011-02-14 2015-06-11 Fraunhofer Ges Forschung Linear prediction based coding scheme using spectral domain noise shaping
US9037457B2 (en) 2011-02-14 2015-05-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio codec supporting time-domain and frequency-domain coding modes
US9153236B2 (en) 2011-02-14 2015-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio codec using noise synthesis during inactive phases
• CN103477387B (zh) * 2011-02-14 2015-11-25 弗兰霍菲尔运输应用研究公司 Linear prediction based coding scheme using spectral domain noise shaping
• RU2575993C2 (ru) * 2011-02-14 2016-02-27 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Linear prediction based coding scheme using noise shaping in the spectral domain
AU2012217156B2 (en) * 2011-02-14 2015-03-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Linear prediction based coding scheme using spectral domain noise shaping
US9384739B2 (en) 2011-02-14 2016-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for error concealment in low-delay unified speech and audio coding
• CN103477387A (zh) * 2011-02-14 2013-12-25 弗兰霍菲尔运输应用研究公司 Linear prediction based coding scheme using spectral domain noise shaping
• WO2012110476A1 (fr) * 2011-02-14 2012-08-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Linear prediction based coding scheme using noise shaping in the spectral domain
US9583110B2 (en) 2011-02-14 2017-02-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for processing a decoded audio signal in a spectral domain
US9595263B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Encoding and decoding of pulse positions of tracks of an audio signal
US9047859B2 (en) 2011-02-14 2015-06-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for encoding and decoding an audio signal using an aligned look-ahead portion
US9620129B2 (en) 2011-02-14 2017-04-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a portion of an audio signal using a transient detection and a quality result
US8825496B2 (en) 2011-02-14 2014-09-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise generation in audio codecs
• JP2014510306A (ja) * 2011-02-14 2014-04-24 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Linear prediction based coding scheme using spectral domain noise shaping
US9721578B2 (en) 2012-05-18 2017-08-01 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US9881629B2 (en) 2012-05-18 2018-01-30 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US10074379B2 (en) 2012-05-18 2018-09-11 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US10217474B2 (en) 2012-05-18 2019-02-26 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US10388296B2 (en) 2012-05-18 2019-08-20 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US10522163B2 (en) 2012-05-18 2019-12-31 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US9401152B2 (en) 2012-05-18 2016-07-26 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US10950252B2 (en) 2012-05-18 2021-03-16 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US11708741B2 (en) 2012-05-18 2023-07-25 Dolby Laboratories Licensing Corporation System for maintaining reversible dynamic range control information associated with parametric audio coders
US11830507B2 (en) 2018-08-21 2023-11-28 Dolby International Ab Coding dense transient events with companding
• CN113194385A (zh) * 2021-01-14 2021-07-30 四川湖山电器股份有限公司 Subband adaptive feedback cancellation method and system based on step size control

Also Published As

Publication number Publication date
WO2005096274A1 (fr) 2005-10-13

Similar Documents

Publication Publication Date Title
EP1852851A1 (fr) Dispositif et procede de codage/decodage audio ameliores
• JP4950210B2 (ja) Audio compression
• CA2853987C (fr) Scalable compressed audio bit stream; encoder/decoder using a hierarchical filter bank and multichannel joint coding
• EP1873753A1 (fr) Improvements to an audio encoding/decoding method and device
• TWI541797B (zh) Apparatus and method for encoding and decoding encoded audio using temporal noise shaping
• RU2449387C2 (ru) Method and device for signal processing
• JP5539203B2 (ja) Improved transform coding of speech and audio signals
US7181404B2 (en) Method and apparatus for audio compression
• EP2490215A2 (fr) Method and apparatus for extracting an important spectral component from an audio signal, and low-bit-rate audio signal encoding and/or decoding method and apparatus using the same
US9037454B2 (en) Efficient coding of overcomplete representations of audio using the modulated complex lapped transform (MCLT)
• JP2005535940A (ja) Method and apparatus for scalable encoding and method and apparatus for scalable decoding
• JP2013528824A (ja) Audio or video encoder, audio or video decoder, and method for processing multi-channel audio or video signals using prediction with variable prediction direction
• WO1998000837A1 (fr) Audio signal encoding and decoding methods, and audio signal encoder and decoder
• CN101390159A (zh) Method for reliably identifying and attenuating echoes in a digital signal in a decoder, and corresponding device
• CN103366749B (zh) Sound encoding/decoding device and method thereof
• CN103366750B (zh) Sound encoding/decoding device and method thereof
• CN104751850B (zh) Vector quantization encoding/decoding method and device for audio signals
• WO2005096508A1 (fr) Enhanced audio encoding and decoding equipment and associated method
• RU2409874C9 (ru) Audio signal compression
• CN116114016A (zh) Audio quantizer and audio dequantizer and related methods
AU2011205144B2 (en) Scalable compressed audio bit stream and codec using a hierarchical filterbank and multichannel joint coding
AU2011221401B2 (en) Scalable compressed audio bit stream and codec using a hierarchical filterbank and multichannel joint coding
• JPH05114863A (ja) High-efficiency encoding device and decoding device

Legal Events

Date Code Title Description
PUAJ Public notification under rule 129 epc

Free format text: ORIGINAL CODE: 0009425

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070817

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

RIN1 Information on inventor provided before grant (corrected)

Inventor name: MARTIN, DIETZ

Inventor name: HENN, FREDRIK

Inventor name: WANG, LEI

Inventor name: EHRET, ANDREAS

Inventor name: REN, WEIMIN

Inventor name: DENG, HAO

Inventor name: HOERICH, HOLGER

Inventor name: ZHU, XIAOMING

Inventor name: SCHUG, MICHAEL

Inventor name: PAN, XINGDE

RIN1 Information on inventor provided before grant (corrected)

Inventor name: REN, WEIMIN

Inventor name: WANG, LEI

Inventor name: ZHU, XIAOMING

Inventor name: HOERICH, HOLGER

Inventor name: EHRET, ANDREAS

Inventor name: PAN, XINGDE

Inventor name: SCHUG, MICHAEL

Inventor name: HENN, FREDRIK

Inventor name: MARTIN, DIETZ

Inventor name: DENG, HAO

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20090323