EP1873753A1 - Improvements to an audio encoding/decoding method and device


Info

Publication number
EP1873753A1
Authority
EP
European Patent Office
Prior art keywords
frequency
module
coefficients
signal
domain
Prior art date
Legal status
Withdrawn
Application number
EP05742018A
Other languages
German (de)
English (en)
Inventor
Xingde PAN
Dietz Martin
Andreas Ehret
Holger HÖRICH
Xiaoming ZHU
Michael Schug
Weimin REN
Lei WANG
Hao DENG
Fredrik Henn
Current Assignee
Beijing E-World Technology Co Ltd
BEIJING MEDIA WORKS Co Ltd
Coding Technologies Sweden AB
Original Assignee
Beijing E-World Technology Co Ltd
BEIJING MEDIA WORKS Co Ltd
Coding Technologies Sweden AB
Priority date
Filing date
Publication date
Application filed by Beijing E-World Technology Co Ltd, BEIJING MEDIA WORKS Co Ltd, and Coding Technologies Sweden AB
Publication of EP1873753A1

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 — Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 — Speech or audio signal analysis-synthesis techniques for redundancy reduction using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032 — Quantisation or dequantisation of spectral components

Definitions

  • The invention relates to audio encoding and decoding, and in particular to an enhanced audio encoding/decoding device and method based on a perceptual model.
  • Digital audio signals need to be encoded or compressed for storage and transmission.
  • The object of encoding audio signals is to achieve a transparent representation using as few bits as possible, i.e. the decoded output is almost indistinguishable from the originally input audio signal.
  • The CD demonstrated many advantages of representing audio signals digitally, such as high fidelity, large dynamic range and great robustness.
  • All these advantages, however, come at the cost of a very high data rate.
  • The sampling rate required for digitizing a stereo signal of CD quality is 44.1 kHz, and each sample is uniformly quantized with 16 bits, so the uncompressed data rate reaches 1.41 Mb/s. This brings great inconvenience to the transmission and storage of data, which are limited by bandwidth and cost, especially in multimedia and wireless transmission applications.
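  • For reference, the quoted rate follows directly from the CD parameters:

$$44.1\,\mathrm{kHz}\times 16\,\mathrm{bits}\times 2\ \mathrm{channels}=1411.2\,\mathrm{kb/s}\approx 1.41\,\mathrm{Mb/s}$$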
  • The data rate in new network and wireless multimedia digital audio systems must therefore be reduced without damaging audio quality.
  • The MPEG-1 and MPEG-2 BC techniques are high-quality encoding techniques mainly used for mono and stereo audio signals.
  • Because the MPEG-2 BC encoding technique emphasizes backward compatibility with the MPEG-1 technique, it cannot realize high-quality encoding of five sound channels at a code rate lower than 540 kbps.
  • The MPEG-2 AAC technique was therefore put forward, which can realize high-quality encoding of five-channel signals at a rate of 320 kbps.
  • Fig. 1 is a block diagram of the MPEG-2 AAC encoder.
  • Said encoder comprises a gain controller 101, a filter bank 102, a time-domain noise shaping module 103, an intensity/coupling module 104, a psychoacoustical model, a second-order backward adaptive predictor 105, a sum-difference stereo module 106, a bit allocation and quantization encoding module 107, and a bit stream multiplexing module 108, wherein the bit allocation and quantization encoding module 107 further comprises a compression ratio/distortion processing controller, a scale factor module, a non-uniform quantizer, and an entropy encoding module.
  • The filter bank 102 uses a modified discrete cosine transformation (MDCT) whose resolution is signal-adaptive: a 2048-point MDCT is used for steady state signals, while a 256-point MDCT is used for transient state signals. Thus, for a signal sampled at 48 kHz, the maximum frequency resolution is 23 Hz and the maximum time resolution is 2.6 ms.
  • A sine window or a Kaiser-Bessel window can be used in the filter bank 102: the sine window is used when the harmonic spacing of the input signal is less than 140 Hz, while the Kaiser-Bessel window is used when the spacing of strong components in the input signal is greater than 220 Hz.
  • The time-domain noise shaping module 103 performs linear prediction analysis on the spectral coefficients in the frequency domain, then shapes the quantization noise according to said analysis, thereby controlling the pre-echo.
  • the intensity/coupling module 104 is used for stereo encoding of the signal intensity.
  • The perceived direction of sound is related to changes in the relevant signal intensity (the signal envelope) but is irrelevant to the signal waveform; that is, a constant-envelope signal has no influence on the perceived direction. This characteristic, together with the correlation among multiple sound channels, can be exploited to combine several sound channels into one common channel to be encoded, which forms the intensity/coupling technique.
  • the second order backward adaptive predictor 105 is used for removing the redundancy of the steady state signal and improving the encoding efficiency.
  • the sum-difference stereo (M/S) module 106 operates on sound channel pairs.
  • A sound channel pair refers to the two channels of the left-right channels or the left-right surround channels in, for example, two-channel or multi-channel signals.
  • the M/S module 106 achieves the effect of reducing code rate and improving encoding efficiency by means of the correlation between the two sound channels in the sound channel pair.
  • the bit allocation and quantization encoding module 107 is realized by a nested loop, wherein the non-uniform quantizer performs lossy encoding, while the entropy encoding module performs lossless encoding, thus removing redundancy and reducing correlation.
  • The nested loop comprises an inner loop and an outer loop: the inner loop adjusts the step size of the non-uniform quantizer until the provided bits are used up, and the outer loop estimates the encoding quality of the signal using the ratio between the quantization noise and the masking threshold, as sketched below.
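  • A minimal sketch of these two loops follows; the callbacks (quantize, count_bits, quant_noise, amplify) are hypothetical stand-ins for the real coder's non-uniform quantizer, bit counter, noise estimator and scale-factor-band amplifier, not the standard's exact procedure:

```python
def nested_loop_quantize(spectrum, masking, bit_budget,
                         quantize, count_bits, quant_noise, amplify,
                         max_rounds=32):
    """Sketch of the nested rate/distortion loops described above."""
    q = None
    for _ in range(max_rounds):
        step = 0
        # Inner loop: coarsen the global step size until the frame
        # fits into the provided bits.
        while count_bits(quantize(spectrum, step)) > bit_budget:
            step += 1
        q = quantize(spectrum, step)
        # Outer loop: compare quantization noise with the masking
        # threshold in every band.
        bad = [b for b, (n, m) in enumerate(zip(quant_noise(spectrum, q, step),
                                                masking)) if n > m]
        if not bad:
            return q                       # noise is masked everywhere
        spectrum = amplify(spectrum, bad)  # boost distorted bands, retry
    return q                               # best effort after max_rounds
```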
  • the encoded signals are formed into an encoded audio stream through the bit stream multiplexing module 108 to be output.
  • Fig. 2 is a schematic block diagram of the corresponding MPEG-2 AAC decoder.
  • Said decoder comprises a bit stream demultiplexing module 201, a lossless decoding module 202, an inverse quantizer 203, a scale factor module 204, a sum-difference stereo (M/S) module 205, a prediction module 206, an intensity/coupling module 207, a time-domain noise shaping module 208, a filter bank 209 and a gain control module 210.
  • the encoded audio stream is demultiplexed by the bit stream demultiplexing module 201 to obtain the corresponding data stream and control stream.
  • the inverse quantizer 203 is a non-uniform quantizer bank realized by a companding function, which is used for transforming the integer quantized values into a reconstruction spectrum.
  • The scale factor module in the encoder differentiates each scale factor from its predecessor and Huffman encodes the differences, so the scale factor module 204 in the decoder can obtain the corresponding differences through Huffman decoding and recover the real scale factors from them.
  • the M/S module 205 converts the sum-difference sound channel into a left-right sound channel under the control of the side information.
  • a prediction module 206 is used in the decoder for performing prediction decoding.
  • The intensity/coupling module 207 performs intensity/coupling decoding under the control of the side information, then outputs to the time-domain noise shaping module 208 for time-domain noise shaping decoding; finally, synthesis filtering is performed by the filter bank 209, which adopts an inverse modified discrete cosine transformation (IMDCT).
  • The high-frequency PQF bands can be neglected through the gain control module 210 so as to obtain signals of a lower sampling rate.
  • The MPEG-2 AAC encoding/decoding technique is suitable for audio signals of medium and high code rates, but its encoding quality is poor for low or very low code rate audio signals; moreover, it involves many encoding/decoding modules, so it is highly complex and not easy to implement in real time.
  • Fig. 3 is a schematic drawing of the structure of an encoder using the Dolby AC-3 technique, which comprises a transient state signal detection module 301, a modified discrete cosine transformation (MDCT) filter bank 302, a spectral envelope/exponent encoding module 303, a mantissa encoding module 304, a forward-backward adaptive perceptual model 305, a parameter bit allocation module 306, and a bit stream multiplexing module 307.
  • The audio signal is classified by the transient state signal detection module 301 as either a steady state signal or a transient state signal. Meanwhile, the time-domain data is mapped to frequency-domain data through the signal-adaptive MDCT filter bank 302, wherein a long window of 512 points is applied to steady state signals, and a pair of short windows is applied to transient state signals.
  • The spectral envelope/exponent encoding module 303 encodes the exponent portion of the signal according to the requirements of code rate and frequency resolution in three modes, i.e. the D15, D25 and D45 encoding modes.
  • The AC-3 technique uses differential encoding for the spectral envelope in frequency, because increments of at most ±2 are needed, each increment representing a level change of 6 dB.
  • Absolute value encoding is used for the first, DC term, and differential encoding is used for the remaining exponents.
  • In the D15 mode each exponent requires about 2.33 bits, three differentials being grouped and encoded in a 7-bit word.
  • the D15 encoding mode sacrifices the time resolution to provide refined frequency resolution.
  • D15 is transmitted only occasionally; usually the spectral envelope is transmitted once per 6 audio blocks (one data frame).
  • When the spectrum is to be encoded with lower frequency resolution, the D25 and D45 encoding modes are generally used.
  • The D25 encoding mode provides a compromise between frequency and time resolution: differential encoding is performed on every other frequency coefficient, so each exponent needs about 1.15 bits. The D25 mode can be used if the spectrum is steady over two to three blocks and then changes abruptly.
  • The D45 encoding mode performs differential encoding on every fourth frequency coefficient, so each exponent needs about 0.58 bits.
  • The D45 encoding mode provides very high time resolution but low frequency resolution, so it is generally used for encoding transient state signals.
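  • These per-exponent costs follow from the 7-bit grouping: each 7-bit group carries three differentials, and D25 and D45 share one exponent across 2 and 4 coefficients respectively, matching the approximate figures quoted above:

$$\mathrm{D15}:\ \tfrac{7}{3}\approx 2.33,\qquad \mathrm{D25}:\ \tfrac{7}{6}\approx 1.17,\qquad \mathrm{D45}:\ \tfrac{7}{12}\approx 0.58\ \text{bits per coefficient}$$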
  • The forward-backward adaptive perceptual model 305 is used for estimating the masking threshold of each frame of the signal, wherein the forward adaptive portion is applied only in the encoder to estimate a group of optimal perceptual model parameters through an iterative loop under the code rate constraint; said parameters are then transferred to the backward adaptive portion to estimate the masking threshold of each frame.
  • the backward adaptive portion is applied both to the encoder and the decoder.
  • the parameter bit allocation module 306 analyzes the frequency spectrum envelope of the audio signals according to the masking rule to determine the number of bits allocated to each mantissa. Said module 306 performs an overall bit allocation for all the sound channels by using a bit reservoir.
  • Bits are repeatedly taken from the bit reservoir and allocated to all the sound channels.
  • the quantization of the mantissa is adjusted according to the number of bits that can be obtained.
  • The AC-3 encoder also uses a high frequency coupling technique, in which the high frequency portion of the coupled signal is divided into 18 sub-bands according to the critical bands of the human ear, and some of the sound channels are selected to be coupled starting from a certain sub-band. Finally, the AC-3 audio stream is formed through the bit stream multiplexing module 307 and output.
  • Fig. 4 is a schematic drawing of the flow of decoding using Dolby AC-3.
  • The bit stream encoded by the AC-3 encoder is input, and frame synchronization and error detection are performed on it. If a data error is detected, error concealment or muting is applied. The bit stream is then unpacked to obtain the primary information and the side information, and exponent decoding is performed.
  • For exponent decoding, two pieces of side information are needed: one is the number of packed exponents, the other is the exponent strategy adopted, such as the D15, D25 or D45 mode.
  • Bit allocation is then performed again from the decoded exponents and the bit allocation side information to indicate the number of bits used by each packed mantissa, thereby obtaining a group of bit allocation pointers, each corresponding to an encoded mantissa.
  • The bit allocation pointers indicate the quantizer used for each mantissa and the number of bits it occupies in the code stream.
  • Each encoded mantissa value is de-quantized into a de-quantized value, and a mantissa that occupies zero bits is recovered as zero or replaced by a random dither value under the control of the dither flag.
  • The de-coupling operation is then carried out, which recovers the high frequency portion of each coupled sound channel, including the exponents and the mantissas, from the common coupling channel and the coupling factors.
  • If matrix processing was used for a certain sub-band, the sum and difference channel values of said sub-band are converted back into left-right channel values through matrix recovery at the decoding end.
  • The code stream includes the dynamic range control value of each audio block; dynamic range compression is performed using said value to change the amplitude of the coefficients, including exponents and mantissas.
  • The frequency-domain coefficients are inversely transformed into time-domain samples, the time-domain samples are windowed, and adjacent blocks are overlap-added to reconstruct the PCM audio signal.
  • If required, a down-mixing processing is performed on the audio signal to finally output the PCM stream.
  • The Dolby AC-3 encoding technique mainly targets high bit rate multi-channel surround signals; when the encoding bit rate of a 5.1-channel signal is lower than 384 kbps, the encoding quality degrades noticeably, and the encoding efficiency for mono and two-channel stereo signals is also low.
  • In summary, the existing encoding and decoding techniques cannot ensure the encoding and decoding quality of audio signals across very low, low and high code rates and for mono and dual channel signals, and their implementation is complex.
  • the technical problem to be solved by this invention is to provide an enhanced audio encoding/decoding device and method so as to overcome the low encoding efficiency and poor encoding quality with respect to the low code rate audio signals in the prior art.
  • the enhanced audio encoding device of the invention comprises a psychoacoustical analyzing module, a time-frequency mapping module, a quantization and entropy encoding module, a bit-stream multiplexing module, a signal characteristic analyzing module and a multi-resolution analyzing module.
  • The signal characteristic analyzing module is configured to analyze the signal type of the input audio signal, to output the audio signal to the psychoacoustical analyzing module and the time-frequency mapping module, and to output the result of the signal type analysis to the bit-stream multiplexing module;
  • the psychoacoustical analyzing module is configured to calculate a masking threshold and a signal-to-masking ratio of the audio signal, and output them to said quantization and entropy encoding module;
  • the time-frequency mapping module is configured to convert the time-domain audio signal into frequency-domain coefficients and output them to the multi-resolution analyzing module;
  • the multi-resolution analyzing module is configured to perform a multi-resolution analysis on the frequency-domain coefficients of signals of a fast varying type based on the signal type analysis result output from the signal characteristic analyzing module, and to output them to the quantization and entropy encoding module;
  • The quantization and entropy encoding module is configured to perform quantization and entropy encoding on the frequency-domain coefficients under the control of the signal-to-masking ratio, and to output the encoded data to the bit-stream multiplexing module, which multiplexes the encoded data and control information into the compressed audio code stream.
  • the enhanced audio decoding device of the invention comprises a bit-stream demultiplexing module, an entropy decoding module, an inverse quantizer bank, a frequency-time mapping module, and a multi-resolution integration module.
  • the bit-stream demultiplexing module is configured to demultiplex the compressed audio data stream and output the corresponding data signals and control signals to the entropy decoding module and the multi-resolution integration module;
  • the entropy decoding module is configured to decode said signals, recover the quantized values of the spectrum so as to output them to the inverse quantizer bank;
  • the inverse quantizer bank is configured to reconstruct the inverse quantization spectrum and output it to the multi-resolution integration module,
  • the multi-resolution integration module is configured to perform multi-resolution integration on the inverse quantization spectrum and to output it to the frequency-time mapping module; and the frequency-time mapping module is configured to perform a frequency-time mapping on the spectrum coefficients to output the time-domain audio signals.
  • The invention is applicable to Hi-Fi compression encoding of audio signals in multiple sampling rate and sound channel configurations, and supports audio signals with sampling rates from 8 kHz to 192 kHz. It likewise supports all possible sound channel configurations and audio encoding/decoding over a wide range of target code rates.
  • Figs. 1-4 are the schematic drawings of the structures of the encoders of the prior art, which have been introduced in the background art, so they will not be elaborated herein.
  • the audio encoding device of the present invention comprises a signal characteristic analyzing module 50, a psychoacoustical analyzing module 51, a time-frequency mapping module 52, a multi-resolution analyzing module 53, a quantization and entropy encoding module 54, and a bit-stream multiplexing module 55.
  • the signal characteristic analyzing module 50 is configured to analyze the signal type of the input audio signal and output the audio signal to the psychoacoustical analyzing module 51 and time-frequency mapping module 52, and to output the result of signal type analysis to the bit-stream multiplexing module 55;
  • the psychoacoustical analyzing module 51 is configured to calculate a masking threshold and a signal-to-masking ratio of the input audio signal, and output them to the quantization and entropy encoding module 54;
  • the time-frequency mapping module 52 is configured to convert the time-domain audio signal into frequency-domain coefficients and output them to the multi-resolution analyzing module 53;
  • The multi-resolution analyzing module 53 is configured to perform a multi-resolution analysis on the frequency-domain coefficients of signals of a fast varying type based on the signal type analysis result output from the signal characteristic analyzing module 50, and to output them to the quantization and entropy encoding module 54;
  • The quantization and entropy encoding module 54 is configured to quantize and entropy encode the frequency-domain coefficients under the control of the signal-to-masking ratio output from the psychoacoustical analyzing module 51, and to output the encoded data to the bit-stream multiplexing module 55.
  • the digital audio signal is analyzed as to the signal type in the signal characteristic analyzing module 50, and the type information of the audio signal is output to the bit stream multiplexing module 55; meanwhile, the audio signal is output to the psychoacoustical analyzing module 51 and the time-frequency mapping module 52.
  • The masking threshold and the signal-to-masking ratio of each frame of the audio signal are calculated in the psychoacoustical analyzing module 51, and the signal-to-masking ratio is transmitted as a control signal to the quantization and entropy encoding module 54. Meanwhile, the time-domain audio signal is converted into frequency-domain coefficients by the time-frequency mapping module 52. The multi-resolution analyzing module 53 performs a multi-resolution analysis on the frequency-domain coefficients of fast varying type signals so as to increase their time resolution, and outputs the result to the quantization and entropy encoding module 54. Under the control of the signal-to-masking ratio output from the psychoacoustical analyzing module 51, quantization and entropy encoding are performed in the quantization and entropy encoding module 54; the encoded data and control signals are then multiplexed in the bit-stream multiplexing module 55 to form an enhanced audio encoding code stream.
  • the signal characteristic analyzing module 50 is configured to analyze the signal type of the input audio signal and output the type information of the audio signal to the bit-stream multiplexing module 55, and to output the audio signal to the psychoacoustical analyzing module 51 and time-frequency mapping module 52 at the same time.
  • The signal characteristic analyzing module 50 determines whether the signal is a slowly varying signal or a fast varying signal by analyzing the forward and backward masking effects based on adaptive thresholds and waveform prediction. If the signal is of a fast varying type, the relevant parameters of the abrupt component, such as the location where the abrupt signal occurs and its intensity, are also calculated.
  • The psychoacoustical analyzing module 51 is mainly configured to calculate the masking threshold, the signal-to-masking ratio and the perceptual entropy of the input audio signal.
  • The number of bits needed for transparent encoding of the current signal frame can be dynamically estimated from the perceptual entropy calculated by the psychoacoustical analyzing module 51, thereby adjusting the bit allocation among frames.
  • the psychoacoustical analyzing module 51 outputs the signal-to-masking ratio of each sub-band to the quantization and entropy encoding module 54 to control it.
  • The time-frequency mapping module 52 is configured to convert the audio signal from a time-domain signal into frequency-domain coefficients; it is formed of a filter bank, which can specifically be a discrete Fourier transformation (DFT) filter bank, a discrete cosine transformation (DCT) filter bank, a modified discrete cosine transformation (MDCT) filter bank, a cosine modulated filter bank, or a wavelet transformation filter bank, etc.
  • the encoding device of the present invention increases the time resolution for the encoded fast varying signals by means of the multi-resolution analyzing module 53.
  • the frequency-domain coefficients output from the time-frequency mapping module 52 are input to the multi-resolution analyzing module 53.
  • If the signal is of a fast varying type, a frequency-domain wavelet transformation or a frequency-domain modified discrete cosine transformation (MDCT) is performed to obtain a multi-resolution representation of the frequency-domain coefficients, which is output to the quantization and entropy encoding module 54; if the signal is of a slowly varying type, the frequency-domain coefficients are output directly to the quantization and entropy encoding module 54 without processing.
  • the multi-resolution analyzing module 53 comprises a frequency-domain coefficient transformation module and a reorganization module, wherein the frequency-domain coefficient transformation module is used for transforming the frequency-domain coefficients into time-frequency plane coefficients; and the reorganization module is used for reorganizing the time-frequency plane coefficients according to a certain rule.
  • The frequency-domain coefficient transformation module can use a frequency-domain wavelet transformation filter bank, a frequency-domain MDCT filter bank, etc.
  • the quantization and entropy encoding module 54 further comprises a non-linear quantizer bank and an encoder, wherein the quantizer can be either a scalar quantizer or a vector quantizer.
  • the vector quantizer can be further divided into the two categories of memoryless vector quantizer and memory vector quantizer.
  • In a memoryless vector quantizer, each input vector is quantized independently of the previous vectors, while a memory vector quantizer takes the previous vectors into account when quantizing a vector, i.e. it exploits the correlation among vectors.
  • The main memoryless vector quantizers include the full search vector quantizer, the tree search vector quantizer, the multi-stage vector quantizer, the gain/waveform vector quantizer and the separate mean vector quantizer; the main memory vector quantizers include the prediction vector quantizer and the finite state vector quantizer.
  • the non-linear quantizer bank further comprises M sub-band quantizers.
  • The quantization is performed using scale factors: the frequency-domain coefficients in each of the M scale factor bands are first non-linearly compressed, then the coefficients of each band are quantized using its scale factor to obtain a quantization spectrum represented by integers, which is output to the encoder. The first scale factor of each frame is output to the bit-stream multiplexing module 55 as the common scale factor, and the remaining scale factors are output to the encoder after differential processing with respect to their respective preceding scale factors.
  • the scale factors in said step are constantly varying values, which are adjusted according to the bit allocation strategy.
  • The present invention provides a global perceptual bit allocation strategy with minimum distortion, detailed as follows (a sketch follows this list):
  • Each sub-band quantizer is initialized with an appropriate scale factor so that the quantized values of the spectrum coefficients of all sub-bands are zero.
  • The quantization noise of each sub-band then equals its energy, and the noise-to-masking ratio NMR of each sub-band equals its signal-to-masking ratio SMR.
  • The number of bits consumed by quantization is zero, and the number of remaining bits B1 equals the number of target bits B.
  • The sub-band with the largest noise-to-masking ratio NMR is then searched for. If its NMR is not more than 1, the scale factors remain unchanged, the allocation result is output and the bit allocation ends; otherwise, the scale factor of the corresponding sub-band quantizer is reduced by one unit and the number of additional bits ΔBi(Qi) needed by said sub-band is calculated.
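  • A compact sketch of this greedy allocation follows, under the assumption (not stated explicitly above) that allocation also stops once the remaining budget B1 cannot cover the next increment ΔBi(Qi); bits_needed and nmr_after are hypothetical callbacks into the coder's quantizer model:

```python
def allocate_bits(smr, total_bits, bits_needed, nmr_after):
    """Greedy minimum-distortion bit allocation sketch.

    smr[i]            -- signal-to-masking ratio of sub-band i
    bits_needed(i, s) -- hypothetical callback: extra bits dB_i(Q_i) for
                         the s-th one-unit scale factor reduction
    nmr_after(i, s)   -- hypothetical callback: NMR of sub-band i after
                         s reductions
    """
    nmr = list(smr)            # all-zero quantization: NMR == SMR
    steps = [0] * len(smr)     # scale factor reductions per sub-band
    remaining = total_bits     # B1 = B initially
    while True:
        i = max(range(len(nmr)), key=lambda j: nmr[j])
        if nmr[i] <= 1.0:      # every band below masking: done
            return steps
        extra = bits_needed(i, steps[i] + 1)
        if extra > remaining:  # assumed stop rule: budget exhausted
            return steps
        remaining -= extra
        steps[i] += 1
        nmr[i] = nmr_after(i, steps[i])
```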
  • the frequency-domain coefficients form a plurality of M-dimensional vectors to be input to the non-linear quantizer bank.
  • Each M-dimensional vector is spectrum-smoothed according to a smoothing factor, i.e. the dynamic range of the spectrum is reduced; the vector quantizer then finds the code word in the code book with the shortest distance to the vector being quantized, according to a subjective perceptual distance measure, and transfers the corresponding code word index to the encoder, as sketched below.
  • The smoothing factor is adjusted based on the bit allocation strategy of the vector quantization, which in turn is controlled according to the perceptual priority among the different sub-bands.
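  • A minimal sketch of the code word search; the weighted Euclidean distance here stands in for the subjective perceptual distance measure, whose exact form the text does not specify:

```python
import numpy as np

def vq_encode(vectors, codebook, smoothing, weights):
    """Nearest-code-word search sketch. `smoothing` reduces the dynamic
    range of each vector; the weighted Euclidean distance is an assumed
    stand-in for the perceptual distance measure."""
    codebook = np.asarray(codebook, float)      # (num_codewords, M)
    weights = np.asarray(weights, float)        # per-dimension weights
    indexes = []
    for v in np.asarray(vectors, float):        # each M-dimensional vector
        v = v / smoothing                       # spectrum smoothing
        d = (weights * (codebook - v) ** 2).sum(axis=1)
        indexes.append(int(np.argmin(d)))       # shortest-distance code word
    return indexes
```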
  • the entropy encoding technique is used to further remove the statistical redundancy of the quantized coefficients and the side information.
  • Entropy encoding is a source encoding technique whose basic idea is to allocate shorter code words to symbols with higher probability of appearance and longer code words to symbols with lower probability, so that the average code word length is minimized.
  • entropy encoding mainly includes Huffman encoding, arithmetic encoding or run length encoding method.
  • The entropy encoding in the present invention can be any of said encoding methods; a minimal illustration of the principle follows.
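  • As an illustration of the principle only (not the patent's actual code books), a minimal Huffman construction that yields shorter codes for more probable symbols:

```python
import heapq

def huffman_code_lengths(freqs):
    """Build Huffman code lengths from symbol frequencies: frequent
    symbols receive shorter codes, minimizing the average code length."""
    heap = [(f, [sym]) for sym, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in freqs}
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)   # two least probable subtrees
        f2, s2 = heapq.heappop(heap)
        for sym in s1 + s2:
            lengths[sym] += 1          # merged symbols gain one code bit
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return lengths
```

  • For example, huffman_code_lengths({'a': 8, 'b': 2, 'c': 1, 'd': 1}) assigns the frequent symbol 'a' a 1-bit code and the rare symbols 'c' and 'd' 3-bit codes.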
  • In the encoder, entropy encoding is performed on the quantization spectrum output by the scalar quantizer and on the differentially processed scale factors, yielding the code book sequence numbers, the encoded values of the scale factors, and the losslessly encoded quantization spectrum; the code book sequence numbers are then themselves entropy encoded to obtain their encoded values. The encoded values of the scale factors, the encoded values of the code book sequence numbers, and the losslessly encoded quantization spectrum are output to the bit-stream multiplexing module 55.
  • the code word indexes quantized by the vector quantizer are one-dimensional or multi-dimensional entropy encoded in the encoder to obtain the encoded values of the code word indexes, then the encoded values of the code word indexes are output to the bit-stream multiplexing module 55.
  • the encoding method based on said encoder as described above includes analyzing the signal type of the input audio signal; calculating the signal-to-masking ratio of the audio signal; performing a time-frequency mapping on the audio signal to obtain the frequency-domain coefficients of the audio signal; performing multi-resolution analysis, quantization and entropy encoding on the frequency-domain coefficients; and multiplexing the result of signal type analysis and the encoded audio code stream to obtain the compressed audio code stream.
  • The signal type is determined by forward and backward masking effect analysis based on adaptive thresholds and waveform prediction. The specific steps are: decomposing the input audio data into frames; decomposing each frame into a plurality of sub-frames and searching each sub-frame for the local extremal points of the absolute values of the PCM data; selecting the sub-frame peak value from the local extremal points of each sub-frame; for a given sub-frame peak value, predicting the typical sample value over a plurality of (typically four) following sub-frames by means of a plurality of (typically three) sub-frame peak values before said sub-frame; calculating the difference and the ratio between said sub-frame peak value and the predicted typical sample value; and, if both the difference and the ratio exceed predetermined thresholds, determining that said sub-frame contains a jump signal and has a local extremal point capable of backward masking the pre-echo. If such a sub-frame exists, the frame is treated as a fast varying type signal (a sketch of this test follows).
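  • A sketch of the sub-frame peak test; the averaging predictor and both threshold values are illustrative assumptions, since the text fixes neither:

```python
import numpy as np

def is_fast_varying(frame, num_sub=8, diff_thr=0.1, ratio_thr=2.0):
    """Sketch of the adaptive-threshold / waveform-prediction test
    described above."""
    subs = np.array_split(np.asarray(frame, float), num_sub)
    peaks = [np.max(np.abs(s)) for s in subs]   # sub-frame peak values
    for i in range(3, num_sub):
        predicted = np.mean(peaks[i - 3:i])     # predict from 3 prior peaks
        diff = peaks[i] - predicted
        ratio = peaks[i] / predicted if predicted > 0 else np.inf
        if diff > diff_thr and ratio > ratio_thr:
            return True                         # abrupt jump: fast varying
    return False                                # slowly varying
```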
  • The modified discrete cosine transformation (MDCT) and cosine modulation filtering are taken as examples below to illustrate the time-frequency mapping process.
  • For the MDCT, the M time-domain samples of the previous frame and the M time-domain samples of the present frame are selected first; a windowing operation is then performed on these 2M samples; finally, an MDCT is applied to the windowed signal to obtain M frequency-domain coefficients.
  • A sine window can be used as the window function, as in the sketch below.
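  • A direct O(M²) sketch of this windowing-plus-MDCT step with the sine window; real coders use a fast transform, and the frame length M is arbitrary here:

```python
import numpy as np

def mdct(prev_frame, curr_frame):
    """Direct MDCT sketch: window 2M samples (M from the previous frame,
    M from the current one) with a sine window, then transform them into
    M frequency-domain coefficients."""
    x = np.concatenate([prev_frame, curr_frame]).astype(float)
    M = len(curr_frame)
    n = np.arange(2 * M)
    window = np.sin(np.pi / (2 * M) * (n + 0.5))          # sine window
    xw = x * window
    k = np.arange(M)[:, None]
    basis = np.cos(np.pi / M * (n[None, :] + 0.5 + M / 2) * (k + 0.5))
    return basis @ xw                                     # M coefficients
```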
  • Said restriction on the window function can be relaxed by using a biorthogonal transformation with specific analysis and synthesis filters.
  • For cosine modulation filtering, likewise, the M time-domain samples of the previous frame and the M time-domain samples of the present frame are selected first, a windowing operation is performed on these 2M samples, and finally cosine modulation filtering is performed on the windowed signal to obtain M frequency-domain coefficients.
  • The impulse response length of the analysis window (analysis prototype filter) Pa(n) of the M sub-band cosine modulated filter bank is Na, and the impulse response length of the synthesis window (synthesis prototype filter) Ps(n) is Ns.
  • The masking threshold and the signal-to-masking ratio of the re-sampled signal are then calculated in the psychoacoustical analyzing module 51.
  • The multi-resolution analyzing module 53 reorganizes the input frequency-domain data in the time-frequency domain to improve its time resolution at the cost of frequency precision, thereby automatically adapting to the time-frequency characteristics of fast varying type signals and suppressing the pre-echo without adjusting the form of the filter bank in the time-frequency mapping module 52.
  • the multi-resolution analysis includes the two steps of frequency-domain coefficient transformation and reorganization, wherein the frequency-domain coefficients are transformed into time-frequency plane coefficients through frequency-domain coefficient transformation, and the time-frequency plane coefficients are grouped by reorganization according to a certain rule.
  • The wavelet basis of the frequency-domain wavelet or wavelet packet transformation may be either fixed or adaptive.
  • The scale (low-pass) coefficients of the Haar wavelet basis are {1/√2, 1/√2}, and the wavelet (high-pass) coefficients are {1/√2, −1/√2}.
  • Fig. 6 shows the filtering structure that performs the wavelet transformation using the Haar wavelet basis, wherein H0 denotes low-pass filtering (filter coefficients {1/√2, 1/√2}), H1 denotes high-pass filtering (filter coefficients {1/√2, −1/√2}), and "↓2" denotes down-sampling by a factor of two.
  • A Haar wavelet transformation is performed on the high frequency portion of the frequency-domain coefficients to obtain the coefficients X2(k), X3(k), X4(k), X5(k), X6(k) and X7(k) of different time-frequency intervals; the corresponding division of the time-frequency plane is shown in Fig. 7.
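  • One analysis stage of Fig. 6 can be sketched as follows; how often and to which sub-bands it is cascaded to produce X2(k) through X7(k) follows the tree of Fig. 6, which is not reproduced here:

```python
import numpy as np

def haar_step(c):
    """One Haar analysis stage: low-pass H0 and high-pass H1 filtering,
    each followed by the factor-2 down-sampling shown in Fig. 6."""
    c = np.asarray(c, float)
    s = 1.0 / np.sqrt(2.0)
    approx = s * (c[0::2] + c[1::2])   # H0 coefficients {1/√2, 1/√2}
    detail = s * (c[0::2] - c[1::2])   # H1 coefficients {1/√2, -1/√2}
    return approx, detail
```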
  • Different wavelet transformation structures can be used to obtain other, similar time-frequency plane divisions; the time-frequency plane division used during signal analysis can therefore be adjusted as desired to meet different requirements on time and frequency resolution.
  • The time-frequency plane coefficients are reorganized in the reorganization module according to a certain rule: for example, the coefficients can be organized in the frequency direction first, with the coefficients in each frequency band organized in the time direction, and the organized coefficients are then arranged in the order of sub-window and scale factor band.
  • Frequency-domain MDCT transformations of different lengths are used in different frequency-domain ranges, thereby to obtain different time-frequency plane divisions, i.e. different time and frequency precision.
  • the reorganization module reorganizes the time-frequency domain data output from the filter bank of the frequency-domain MDCT transformation.
  • One way of reorganization is to organize the time-frequency plane coefficients in the frequency direction first, and the coefficients in each frequency band are organized in the time direction at the same time, then the organized coefficients are arranged in the order of sub-window and scale factor band.
  • Quantization and entropy encoding further include the two steps of non-linear quantization and entropy encoding, wherein the quantization can be scalar quantization or vector quantization.
  • The scalar quantization comprises the steps of non-linearly compressing the frequency-domain coefficients in all the scale factor bands; quantizing the frequency-domain coefficients of each sub-band using its scale factor to obtain the quantization spectrum represented by integers; selecting the first scale factor of each frame as the common scale factor; and differentiating the remaining scale factors from their respective preceding scale factors. A sketch of such a quantizer follows.
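  • A band-wise sketch of the non-linear scalar quantizer; the 3/4-power companding, the 2^(sf/4) step size and the 0.4054 rounding offset are assumptions borrowed from common transform coders, since the text only specifies non-linear compression plus scale factor quantization:

```python
import numpy as np

def scalar_quantize_band(coeffs, scale_factor):
    """Quantize one scale factor band to an integer spectrum."""
    x = np.abs(np.asarray(coeffs, float)) * 2.0 ** (-scale_factor / 4.0)
    q = np.floor(x ** 0.75 + 0.4054).astype(int)   # non-linear compression
    return np.sign(coeffs).astype(int) * q         # keep coefficient signs
```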
  • The vector quantization comprises the steps of forming a plurality of multi-dimensional vector signals from the frequency-domain coefficients; performing spectrum smoothing on each M-dimensional vector according to the smoothing factor; and searching the code book for the code word with the shortest distance to the vector being quantized, according to the subjective perceptual distance measure, to obtain the code word index.
  • the entropy encoding step comprises entropy encoding the quantization spectrum and the differentiated scale factors to obtain the sequence numbers of the code book, the encoded value of the scale factors and the quantization spectrum of lossless encoding; and entropy encoding the sequence numbers of the code book to obtain the encoded values thereof.
  • a one-dimensional or multi-dimensional entropy encoding is performed on the code word indexes to obtain the encoded values of the code word indexes.
  • Said entropy encoding method can be any one of the existing Huffman encoding, arithmetic encoding or run length encoding method.
  • the encoded audio code stream is obtained, which is multiplexed together with the common scale factor and the result of signal type analysis to obtain the compressed audio code stream.
  • Fig. 8 is a schematic drawing of the structure of the audio decoding device according to the present invention.
  • the audio decoding device comprises a bit-stream demultiplexing module 60, an entropy decoding module 61, an inverse quantizer bank 62, a multi-resolution integration module 63 and a frequency-time mapping module 64.
  • the compressed audio code stream is demultiplexed by the bit-stream demultiplexing module 60 to obtain the corresponding data signal and control signal which are output to the entropy decoding module 61 and the multi-resolution integration module 63; the data signal and control signal are decoded in the entropy decoding module 61 to recover the quantized values of the spectrum.
  • Said quantized values are reconstructed in the inverse quantizer bank 62 to obtain the inverse quantization spectrum, which is output to the multi-resolution integration module 63; after multi-resolution integration the result is passed to the frequency-time mapping module 64, and the time-domain audio signal is obtained through the frequency-time mapping.
  • the bit-stream demultiplexing module 60 decomposes the compressed audio code stream to obtain the corresponding data signal and control signal and to provide the corresponding decoding information for other modules.
  • the compressed audio data stream is demultiplexed to output signals to the entropy decoding module 61, said signals including the common scale factor, the scale factor encoded values, the encoded values of the code book sequence number, and the quantized spectrum of the lossless encoding, or the encoded values of the code word indexes, and to output the information of the signal type to the multi-resolution integration module 63.
  • If the quantization and entropy encoding module 54 uses the scalar quantizer, then in the decoding device the entropy decoding module 61 receives the common scale factor, the encoded values of the scale factors, the encoded values of the code book sequence numbers, and the losslessly encoded quantization spectrum output from the bit-stream demultiplexing module 60; code book sequence number decoding, spectrum coefficient decoding and scale factor decoding are performed thereon to reconstruct the quantized spectrum, and the integer representation of the scale factors and the quantized values of the spectrum are output to the inverse quantizer bank 62.
  • the decoding method used by the entropy decoding module 61 corresponds to the encoding method used by entropy encoding in the encoding device, which is, for example, Huffman decoding, arithmetic decoding or run length decoding, etc.
  • Upon receipt of the quantized values of the spectrum and the integer representation of the scale factors, the inverse quantizer bank 62 inversely quantizes the quantized values of the spectrum into a reconstructed spectrum without scaling (the inverse quantization spectrum), and outputs the inverse quantization spectrum to the multi-resolution integration module 63.
  • The inverse quantizer bank 62 can be either a uniform quantizer bank or a non-uniform quantizer bank realized by a companding function. If the quantizer bank in the encoding device uses a scalar quantizer, the inverse quantizer bank 62 in the decoding device also uses a scalar inverse quantizer. In the scalar inverse quantizer, the quantized values of the spectrum are first non-linearly expanded, then all the spectrum coefficients (the inverse quantization spectrum) in each scale factor band are obtained using its scale factor.
  • the entropy decoding module 61 receives the encoded values of the code word indexes output from the bit-stream demultiplexing module 60, and decodes the encoded values of the code word indexes by the entropy decoding method corresponding to the entropy encoding method used in entropy encoding, thereby obtaining the corresponding code word index.
  • the code word indexes are output to the inverse quantizer bank 62, and by looking up the code book, the quantized values (inverse quantization spectrum) are obtained and are output to the multi-resolution integration module 63.
  • In this case the inverse quantizer bank 62 uses an inverse vector quantizer. After multi-resolution integration, the inverse quantization spectrum is mapped by the frequency-time mapping module 64 to obtain the time-domain audio signal.
  • the frequency-time mapping module 64 can be a filter bank of inverse discrete cosine transformation (IDCT), a filter bank of inverse discrete Fourier transformation (IDFT), a filter bank of inverse modified discrete cosine transformation (IMDCT), a filter bank of inverse wavelet transformation, and a cosine modulation filter bank, etc.
  • the decoding method based on the above-mentioned decoder comprises: demultiplexing the compressed audio code stream to obtain the data information and control information; entropy decoding said information to obtain the quantized values of the spectrum; inversely quantizing the quantized values of the spectrum to obtain the inverse quantization spectrum; multi-resolution integrating the inverse quantization spectrum and then performing a frequency-time mapping thereon to obtain the time-domain audio signal.
  • the entropy decoding steps include: decoding the encoded values of the code book sequence numbers to obtain the code book sequence numbers of all the scale factor bands; decoding the quantization coefficients of all the scale factor bands according to the code book corresponding to the code book sequence numbers; and decoding the scale factors of all the scale factor bands to reconstruct the quantization spectrum.
  • the entropy decoding method used in said process corresponds to the entropy encoding method used in the encoding method, which is, for example, run length decoding method, Huffman decoding method, or arithmetic decoding method, etc.
  • the entropy decoding process is described below by using as examples the decoding of the code book sequence number by the run length decoding method, the decoding of the quantization coefficients by the Huffman decoding method, and the decoding of the scale factor by the Huffman decoding method.
  • the code book sequence numbers of all the scale factor bands are obtained through the run length decoding method.
  • The decoded code book sequence numbers are integers within a certain range. Supposing said range is [0, 11], only the code book sequence numbers within said valid range, i.e. between 0 and 11, correspond to Huffman code books of spectrum coefficients.
  • For a value outside this range, a certain code book sequence number can be selected to correspond to it; typically, sequence number 0 is selected.
  • The Huffman code book of spectrum coefficients corresponding to each code book number is used to decode the quantization coefficients of all the scale factor bands. If the code book number of a scale factor band is within the valid range, for example between 1 and 11 in this embodiment, said code book number corresponds to a spectrum coefficient code book, and said code book is used to decode the quantization spectrum to obtain the code word indexes of the quantization coefficients of that scale factor band; subsequently, the code word indexes are unpacked to obtain the quantization coefficients. If the code book number of a scale factor band is not between 1 and 11, said code book number does not correspond to any spectrum coefficient code book, the quantization coefficients of said scale factor band need not be decoded, and they are all set directly to zero.
  • the scale factors are used to reconstruct the spectrum values on the basis of the inverse quantization spectrum coefficients. If the code book number of the scale factor band is within the valid range, each code book number corresponds to a scale factor.
  • The code stream portion occupied by the first (common) scale factor is read first; the remaining scale factors are then Huffman decoded to obtain the difference between each scale factor and its preceding one, and said differences are added to the values of the preceding scale factors to obtain the respective scale factors. If the quantization coefficients of a sub-band are all zero, the scale factor of said sub-band need not be decoded.
  • the quantized values of the spectrum and the integer representation of the scale factors are obtained, then the quantized values of the spectrum are inversely quantized to obtain the inverse quantization spectrum.
  • The inverse quantization processing includes non-linearly expanding the quantized values of the spectrum and obtaining all the spectrum coefficients (the inverse quantization spectrum) in each scale factor band from its scale factor.
  • the entropy decoding steps include: decoding the encoded values of the code word indexes by means of the entropy decoding method corresponding to the entropy encoding method used in the encoding device so as to obtain the code word indexes, then inversely quantizing the code word indexes to obtain the inverse quantization spectrum.
  • On the encoding side, the frequency-domain coefficients of a fast varying type signal are multi-resolution analyzed and the multi-resolution representation of the frequency-domain coefficients is quantized and entropy encoded; if the signal is not of a fast varying type, the frequency-domain coefficients are quantized and entropy encoded directly.
  • the multi-resolution integration can use frequency-domain wavelet transformation method or frequency-domain MDCT transformation method.
  • The frequency-domain wavelet integration method includes: reorganizing said time-frequency plane coefficients according to a certain rule, then performing an inverse wavelet transformation on the time-frequency plane coefficients to obtain the frequency-domain coefficients.
  • The frequency-domain MDCT integration includes: reorganizing said time-frequency plane coefficients according to a certain rule, then performing several inverse MDCT transformations on the time-frequency plane coefficients to obtain the frequency-domain coefficients.
  • the reorganization method includes: organizing the time-frequency plane coefficients in the frequency direction, and the coefficients in each frequency band are organized in the time direction, then the organized coefficients are arranged in the order of sub-window and scale factor band.
  • the method of performing a frequency-time mapping on the frequency-domain coefficients corresponds to the time-frequency mapping method in the encoding method, which can be inverse discrete cosine transformation (IDCT), inverse discrete Fourier transformation (IDFT), inverse modified discrete cosine transformation (IMDCT), and inverse wavelet transformation, etc.
  • the frequency-time mapping process is illustrated below by taking inverse modified discrete cosine transformation IMDCT as an example.
  • the frequency-time mapping process includes three steps: IMDCT transformation, time-domain window adding processing and time-domain superposing operation.
  • The IMDCT transformation is performed on the spectrum before prediction or on the inverse quantization spectrum to obtain the transformed time-domain signal x_{i,n}.
  • Windowing is then performed in the time domain on the signal obtained from the IMDCT transformation.
  • Typical window functions include, among others, Sine window and Kaiser-Bessel window.
  • Said restriction on the window function can be relaxed by using a biorthogonal transformation with a specific analysis filter and synthesis filter.
  • Finally, the windowed time-domain signals of adjacent blocks are overlap-added to obtain the time-domain audio signal, as sketched below.
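  • The three steps can be sketched as follows (direct IMDCT, sine windowing, 50% overlap-add), mirroring the MDCT sketch given earlier; a fast transform would replace the direct matrix product in practice:

```python
import numpy as np

def imdct(X):
    """Direct inverse MDCT: M coefficients -> 2M time-domain samples."""
    M = len(X)
    n = np.arange(2 * M)
    k = np.arange(M)[:, None]
    basis = np.cos(np.pi / M * (n[None, :] + 0.5 + M / 2) * (k + 0.5))
    return (2.0 / M) * (np.asarray(X, float) @ basis)

def synthesize(blocks):
    """Window each IMDCT output (sine window) and overlap-add adjacent
    blocks to reconstruct the time-domain audio signal."""
    M = len(blocks[0])
    n = np.arange(2 * M)
    window = np.sin(np.pi / (2 * M) * (n + 0.5))
    overlap = np.zeros(M)
    out = []
    for X in blocks:
        y = imdct(X) * window
        out.append(y[:M] + overlap)   # first half + tail of previous block
        overlap = y[M:]
    return np.concatenate(out)
```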
  • Fig. 9 is a schematic drawing of the first embodiment of the encoding device of the present invention.
  • This embodiment has a frequency-domain linear prediction and vector quantization module 56 added between the output of the multi-resolution analyzing module 53 and the input of the quantization and entropy encoding module 54, which outputs the residual sequence to the quantization and entropy encoding module 54 and outputs the quantized code word indexes as side information to the bit-stream multiplexing module 55.
  • The frequency-domain linear prediction and vector quantization module 56 performs linear prediction and multi-stage vector quantization on the frequency-domain coefficients at each time interval.
  • The frequency-domain coefficients output from the multi-resolution analyzing module 53 are transmitted to the frequency-domain linear prediction and vector quantization module 56.
  • A standard linear prediction analysis is performed on the frequency-domain coefficients at each time interval. If the prediction gain meets the given condition, linear prediction error filtering is performed on the frequency-domain coefficients, the resulting prediction coefficients are transformed into line spectrum frequency (LSF) coefficients, and an optimal distortion measure criterion is used to search for the code word indexes in the respective code books; the code word indexes are transferred as side information to the bit-stream multiplexing module 55, while the residual sequence obtained from the prediction analysis is output to the quantization and entropy encoding module 54.
  • The frequency-domain linear prediction and vector quantization module 56 consists of a linear prediction analyzer, a linear prediction filter, a transformer, and a vector quantizer. The frequency-domain coefficients are input to the linear prediction analyzer for prediction analysis to obtain the prediction gain and the prediction coefficients. Frequency-domain coefficients that meet a certain condition are output to the linear prediction filter and filtered, yielding a residual sequence; the residual sequence is output directly to the quantization and entropy encoding module 54, while the prediction coefficients are transformed into line spectrum frequency (LSF) coefficients by the transformer, the LSF parameters are sent to the vector quantizer for multi-stage vector quantization, and the quantized signals are transmitted to the bit-stream multiplexing module 55.
  • Performing a frequency-domain linear prediction processing on the audio signals can effectively suppress the pre-echo and obtain greater encoding gain.
  • If the real signal is x(t) and C(f) is the one-sided spectrum corresponding to the positive frequency components of x(t), then the Hilbert envelope of the signal is related to the autocorrelation function of the signal spectrum. Dually, the power spectral density is the Fourier transform of the time-domain autocorrelation, $\mathrm{PSD}(f) = \mathcal{F}\{\int x(\tau)\,x^{*}(\tau - t)\,d\tau\}$, so the squared Hilbert envelope of the signal in the time domain and the power spectral density of the signal in the frequency domain correspond to each other.
  • The encoding method based on the encoding device shown in Fig. 9 is substantially the same as that based on the encoding device shown in Fig. 5; the difference is that the following steps are added: after the multi-resolution analysis of the frequency-domain coefficients, a standard linear prediction analysis is performed on the frequency-domain coefficients at each time interval to obtain the prediction gain and the prediction coefficients; if the prediction gain exceeds a predetermined threshold, a frequency-domain linear prediction error filtering is performed on the frequency-domain coefficients based on the prediction coefficients to obtain the residual sequence, the prediction coefficients are transformed into line spectrum pair frequency coefficients, a multi-stage vector quantization is performed on said coefficients to obtain the side information, and the residual sequence is quantized and entropy encoded; if the prediction gain does not exceed the predetermined threshold, the frequency-domain coefficients are quantized and entropy encoded directly.
  • The standard linear prediction analysis performed on the frequency-domain coefficients at each time interval includes calculating the autocorrelation matrix and obtaining the prediction gain and the prediction coefficients by recursively executing the Levinson-Durbin algorithm, as sketched below. It is then determined whether the calculated prediction gain exceeds a predetermined threshold: if it does, linear prediction error filtering is performed on the frequency-domain coefficients based on the prediction coefficients; otherwise, the frequency-domain coefficients are left unprocessed and the next step, quantizing and entropy encoding the frequency-domain coefficients, is executed.
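  • A compact sketch of the recursion; the prediction order and the gain definition (total-to-residual energy ratio) are conventional assumptions rather than values fixed by the text:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion sketch: autocorrelation values
    r[0..order] -> prediction coefficients a[1..order] and the
    prediction gain."""
    r = np.asarray(r, float)
    a = np.zeros(order + 1)
    err = r[0]                      # zeroth-order prediction error energy
    for m in range(1, order + 1):
        k = (r[m] - np.dot(a[1:m], r[m - 1:0:-1])) / err  # reflection coeff.
        a_new = a.copy()
        a_new[m] = k
        a_new[1:m] = a[1:m] - k * a[m - 1:0:-1]
        a = a_new
        err *= (1.0 - k * k)        # residual energy shrinks each order
    return a[1:], r[0] / err        # coefficients a_i and prediction gain
```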
  • Linear prediction includes forward prediction and backward prediction.
  • Forward prediction refers to predicting the current value by using the values before a certain moment, while backward prediction refers to predicting the current value by using the values after a certain moment.
  • the forward prediction will be used as an example to explain the linear prediction error filtering.
  • the frequency-domain coefficients X(k) output after the time-frequency transformation can be represented by the residual sequence E(k) and a group of prediction coefficients a_i; under the error-filter convention assumed above, E(k) = X(k) + Σ_{i=1..p} a_i·X(k−i) (see the sketch below).
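  • A minimal sketch of this forward prediction error filtering under the same assumed convention (zero history before the first coefficient is an assumption of the example):

```python
import numpy as np

def lp_error_filter(X, a):
    """Forward prediction error filtering across the frequency index k:
    E(k) = X(k) + sum_{i=1..p} a[i] * X(k - i)."""
    p = len(a) - 1  # a[0] == 1.0 by convention
    E = np.zeros_like(X)
    for k in range(len(X)):
        acc = X[k]
        for i in range(1, min(p, k) + 1):
            acc += a[i] * X[k - i]
        E[k] = acc
    return E
```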
  • said group of prediction coefficients a_i are transformed into line spectrum frequency (LSF) coefficients, and a multi-stage vector quantization is performed thereon.
  • the vector quantization uses the optimal distortion measurement criterion (e.g. the nearest neighbor criterion) to search and calculate the code word indexes of the respective stages of the code book, thereby determining the code words corresponding to the prediction coefficients and outputting the code word indexes as the side information.
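  • A hedged sketch of such a multi-stage vector quantizer with a nearest-neighbor (minimum Euclidean distortion) search; the two stages, codebook sizes, and random placeholder codebooks are assumptions of the example, not the patent's trained codebooks.

```python
import numpy as np

def msvq_search(lsf, codebooks):
    """Multi-stage VQ: each stage quantizes the residual left by the
    previous stages and contributes one code word index."""
    target = np.asarray(lsf, dtype=float).copy()
    indexes = []
    for cb in codebooks:                        # cb shape: (num_codewords, dim)
        d = np.sum((cb - target) ** 2, axis=1)  # Euclidean distortion per code word
        j = int(np.argmin(d))                   # nearest neighbor criterion
        indexes.append(j)
        target -= cb[j]                         # pass the residual to the next stage
    return indexes

# usage with placeholder codebooks: two stages, 16 entries, 10-dimensional LSF vector
rng = np.random.default_rng(0)
books = [rng.standard_normal((16, 10)) for _ in range(2)]
indexes = msvq_search(rng.standard_normal(10), books)
```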
  • the residual sequence E(k) is quantized and entropy encoded.
  • Fig. 10 is a schematic drawing of embodiment one of the decoding device.
  • Said decoding device has an inverse frequency-domain linear prediction and vector quantization module 65 added on the basis of the decoding device as shown in Fig. 8.
  • Said inverse frequency-domain linear prediction and vector quantization module 65 is between the output of the inverse quantizer bank 62 and the input of the multi-resolution integration module 63, and the bit-stream demultiplexing module 60 outputs control information of inverse frequency-domain linear prediction vector quantization thereto for inverse quantizing and inverse linear prediction filtering the inverse quantization spectrum (residual spectrum), thereby obtaining the spectrum before prediction and outputting it to the multi-resolution integration module 63.
  • the technique of frequency-domain linear prediction vector quantization is used to suppress the pre-echo and to obtain greater encoding gain. Therefore, in the decoder, the inverse quantization spectrum and the control information of inverse frequency-domain linear prediction vector quantization output from the bit-stream demultiplexing module 60 are input to the inverse frequency-domain linear prediction and vector quantization module 65 to recover the spectrum before the linear prediction.
  • the inverse frequency-domain linear prediction and vector quantization module 65 comprises an inverse vector quantizer, an inverse transformer, and an inverse linear prediction filter, wherein the inverse vector quantizer is used for inversely quantizing the code word indexes to obtain the line spectrum frequency (LSF) coefficients, the inverse transformer is used for inverse transforming the LSF coefficients into prediction coefficients, and the inverse linear prediction filter is used for inverse filtering the inverse quantization spectrum based on the prediction coefficients to obtain the spectrum before prediction and output it to the multi-resolution integration module 63.
  • the decoding method of the decoding device as shown in Fig. 10 is substantially the same as the decoding method of the decoding device as shown in Fig. 8, and the difference is that the former further includes the following steps: after obtaining the inverse quantization spectrum, determining whether the control information indicates that the inverse quantization spectrum needs to undergo the inverse frequency-domain linear prediction vector quantization; if it does, performing the inverse vector quantization to obtain the prediction coefficients, and performing a linear prediction synthesis on the inverse quantization spectrum according to the prediction coefficients to obtain the spectrum before prediction; and multi-resolution integrating the spectrum before prediction.
  • the residual sequence E(k) and the prediction coefficients a_i obtained above are synthesized by frequency-domain linear prediction to obtain the spectrum X(k) before prediction, which is then frequency-time mapped.
  • if the control information indicates that said signal frame has not undergone the frequency-domain linear prediction vector quantization, the inverse frequency-domain linear prediction vector quantization will not be performed, and the inverse quantization spectrum is directly frequency-time mapped.
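  • For illustration, and under the same assumed conventions as the encoder sketches above, the inverse linear prediction filter reconstructs the spectrum from the residual by the synthesis recursion; round-tripping it with lp_error_filter reproduces X(k) up to floating-point error.

```python
import numpy as np

def lp_synthesis(E, a):
    """Inverse of lp_error_filter: X(k) = E(k) - sum_{i=1..p} a[i] * X(k - i)."""
    p = len(a) - 1
    X = np.zeros_like(E)
    for k in range(len(E)):
        acc = E[k]
        for i in range(1, min(p, k) + 1):
            acc -= a[i] * X[k - i]
        X[k] = acc
    return X
```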
  • Fig. 11 is the schematic drawing of the second embodiment of the encoding device of the present invention.
  • said embodiment has a sum-difference stereo (M/S) encoding module 57 added between the output of the multi-resolution analyzing module 53 and the input of the quantization and entropy encoding module 54.
  • the psychoacoustical analyzing module 51 calculates not only the mono masking threshold of the audio signal, but also the masking threshold of the sum-difference sound channel to be output to the quantization and entropy encoding module 54.
  • the sum-difference stereo module 57 can also be located between the quantizer bank and the encoder in the quantization and entropy encoding module 54.
  • the sum-difference stereo module 57 makes use of the correlation between the two sound channels in the sound channel pair to convert the frequency-domain coefficients/residual sequence of the left-right sound channels into the frequency-domain coefficients/residual sequence of the sum-difference sound channels, thereby reducing the code rate and improving the encoding efficiency. Hence, it is only suitable for multi-channel signals of the same signal type, while for mono signals or multi-channel signals of different signal types, the sum-difference stereo encoding is not performed.
  • the encoding method of the encoding device as shown in Fig. 11 is substantially the same as the encoding method of the encoding device as shown in Fig. 5, and the difference is that the former further includes the steps of determining whether the audio signals are multi-channel signals before quantizing and entropy encoding the frequency-domain coefficients; if they are multi-channel signals, determining whether the types of the signals of the left-right sound channels are the same; if the signal types are the same, determining whether the scale factor bands corresponding to the two sound channels meet the conditions of sum-difference stereo encoding; if they meet the conditions, performing a sum-difference stereo encoding to obtain the frequency-domain coefficients of the sum-difference sound channels; if they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the frequency-domain coefficients are not processed.
  • the sum-difference stereo encoding can be applied not only before the quantization, but also after the quantization and before the entropy encoding; that is, after quantizing the frequency-domain coefficients, it is determined if the audio signals are multi-channel signals; if they are, it is determined if the signals of the left-right sound channels are of the same type; if the signal types are the same, it is determined if the scale factor bands corresponding to the two sound channels meet the conditions of sum-difference stereo encoding; if they meet the conditions, a sum-difference stereo encoding is performed thereon (see the sketch below); if they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the sum-difference stereo encoding is not performed on the frequency-domain coefficients.
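  • A minimal sketch of the sum-difference mapping on one scale factor band; the 0.5 scaling and the normalized-correlation selection test are assumptions of the example, not conditions mandated by the text.

```python
import numpy as np

def ms_encode_band(left, right, threshold=0.8):
    """Sum-difference encode one scale factor band when the two channels
    are similar enough; returns (mid, side, used_ms)."""
    num = abs(np.dot(left, right))
    den = np.sqrt(np.dot(left, left) * np.dot(right, right)) + 1e-12
    if num / den < threshold:      # assumed similarity criterion per band
        return left, right, False  # keep plain left/right for this band
    mid = 0.5 * (left + right)     # sum channel
    side = 0.5 * (left - right)    # difference channel
    return mid, side, True
```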
  • Fig. 12 is a schematic drawing of embodiment two of the decoding device.
  • said decoding device has a sum-difference stereo decoding module 66 added between the output of the inverse quantizer bank 62 and the input of the multi-resolution integration module 63 to receive the result of signal type analysis and the sum-difference stereo control signal output from the bit-stream demultiplexing module 60, and to transform the inverse quantization spectrum of the sum-difference sound channels into the inverse quantization spectrum of the left-right sound channels according to said control information.
  • in the sum-difference control signal there is a flag bit for indicating whether the present sound channel pair needs a sum-difference stereo decoding; if it does, there is also a flag bit on each scale factor band to indicate whether the corresponding scale factor band needs to be sum-difference stereo decoded, and the sum-difference stereo decoding module 66 determines, on the basis of the flag bit of the scale factor band, whether it is necessary to perform sum-difference stereo decoding on the inverse quantization spectrum in some of the scale factor bands. If the sum-difference stereo encoding is performed in the encoding device, then the sum-difference stereo decoding must be performed on the inverse quantization spectrum in the decoding device.
  • the sum-difference stereo decoding module 66 can also be located between the output of the entropy decoding module 61 and the input of the inverse quantizer bank 62 to receive the sum-difference stereo control signal and the result of signal type analysis output from the bit-stream demultiplexing module 60.
  • the decoding method of the decoding device as shown in Fig. 12 is substantially the same as the decoding method of the decoding device as shown in Fig. 8, and the difference is that the former further includes the following steps: after obtaining the inverse quantization spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the inverse quantization spectrum according to the sum-difference stereo control signal; if it is necessary, it is determined, on the basis of the flag bit on each scale factor band, whether said scale factor band needs a sum-difference stereo decoding; if it does, the inverse quantization spectrum of the sum-difference sound channels in said scale factor band is transformed into the inverse quantization spectrum of the left-right sound channels before the subsequent processing; if the signal types are not the same or it is unnecessary to perform the sum-difference stereo decoding, the inverse quantization spectrum is not processed and the subsequent processing is directly performed.
  • the sum-difference stereo decoding can also be performed after the entropy decoding and before the inverse quantization, that is, after obtaining the quantized values of the spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the quantized values of the spectrum according to the sum-difference stereo control signal, if it is necessary, it is determined, on the basis of the flag bit on each scale factor band, if said scale factor band needs a sum-difference stereo decoding; if it needs, the quantized values of the spectrum of the sum-difference sound channels in said scale factor band are transformed into the quantized values of the spectrum of the left-right sound channels before the subsequent processing; if the signal types are not the same or it is unnecessary to perform the sum-difference stereo decoding, the quantized values of the spectrum are not processed and the subsequent processing is directly performed.
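  • Matching the encoder sketch above (including its assumed 0.5 scaling), the per-band inverse mapping driven by the scale-factor-band flag bit might look as follows.

```python
def ms_decode_band(mid, side, flag):
    """Invert the sum-difference mapping for one scale factor band when its
    flag bit is set; otherwise the band already holds left/right data."""
    if not flag:
        return mid, side
    left = mid + side    # (L+R)/2 + (L-R)/2 = L
    right = mid - side   # (L+R)/2 - (L-R)/2 = R
    return left, right
```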
  • Fig. 13 is a schematic drawing of the structure of the third embodiment of the encoding device of the present invention.
  • said embodiment has a sum-difference stereo encoding module 57 added between the output of the frequency-domain linear prediction and vector quantization module 56 and the input of the quantization and entropy encoding module 54.
  • the psychoacoustical analyzing module 51 outputs the masking threshold of the sum-difference sound channels to the quantization and entropy encoding module 54.
  • the sum-difference stereo encoding module 57 can also be located between the quantizer bank and the encoder in the quantization and entropy encoding module 54 to receive the result of signal type analysis output from the psychoacoustical analyzing module 51.
  • the encoding method of the encoding device as shown in Fig. 13 is substantially the same as the encoding method of the encoding device as shown in Fig. 9, and the difference is that the former further includes the steps of determining whether the audio signals are multi-channel signals before quantizing and entropy encoding the frequency-domain coefficients; if they are multi-channel signals, determining whether the types of the signals of the left-right sound channels are the same; if the signal types are the same, determining whether the scale factor bands meet the encoding conditions; if they meet the conditions, performing a sum-difference stereo encoding on said scale factor bands; if they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the sum-difference stereo encoding is not performed.
  • the sum-difference stereo encoding can be applied not only before the quantization, but also after the quantization and before the entropy encoding; that is, after quantizing the frequency-domain coefficients, it is determined if the audio signals are multi-channel signals; if they are, it is determined if the signals of the left-right sound channels are of the same type; if the signal types are the same, it is determined if the scale factor bands meet the encoding conditions; if they meet the conditions, a sum-difference stereo encoding is performed thereon; if they do not meet the conditions, the sum-difference stereo encoding is not performed. If the signals are mono signals or multi-channel signals of different types, the sum-difference stereo encoding is not performed.
  • Fig. 14 is a schematic drawing of the structure of embodiment three of the decoding device of the present invention.
  • said decoding device has a sum-difference stereo decoding module 66 added between the output of the inverse quantizer bank 62 and the input of the inverse frequency-domain linear prediction and vector quantization module 65, and the bit-stream demultiplexing module 60 outputs sum-difference stereo control signal thereto.
  • the sum-difference stereo decoding module 66 can also be located between the output of the entropy decoding module 61 and the input of the inverse quantizer bank 62 to receive the sum-difference stereo control signal output from the bit-stream demultiplexing module 60.
  • the function and the operating principle of the sum-difference stereo decoding module 66 are the same as those shown in Fig. 12, so they will not be elaborated again.
  • the decoding method of the decoding device as shown in Fig. 14 is substantially the same as the decoding method of the decoding device as shown in Fig. 10, and the difference is that the former further includes the following steps: after obtaining the inverse quantization spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the inverse quantization spectrum according to the sum-difference stereo control signal; if it is necessary, it is determined, on the basis of the flag bit on each scale factor band, whether said scale factor band needs a sum-difference stereo decoding; if it does, the inverse quantization spectrum of the sum-difference sound channels in said scale factor band is transformed into the inverse quantization spectrum of the left-right sound channels before the subsequent processing; if the signal types are not the same or it is unnecessary to perform the sum-difference stereo decoding, the inverse quantization spectrum is not processed and the subsequent processing is directly performed.
  • the sum-difference stereo decoding can also be performed before the inverse quantization, that is, after obtaining the quantized values of the spectrum, if the result of signal type analysis shows that the signal types are the same, it is determined whether it is necessary to perform a sum-difference stereo decoding on the quantized values of the spectrum according to the sum-difference stereo control signal, if it is necessary, it is determined, on the basis of the flag bit on each scale factor band, if said scale factor band needs a sum-difference stereo decoding, if it needs, the quantized values of the spectrum of the sum-difference sound channels in said scale factor band are transformed into the quantized value of the spectrum of the left-right sound channels before the subsequent processing; if the signal types are not the same or it is unnecessary to perform the sum-difference stereo decoding, the quantized values of the spectrum are not processed and the subsequent processing is directly performed.
  • Fig. 15 is the schematic drawing of the fourth embodiment of the encoding device of the present invention.
  • this embodiment has a re-sampling module 590 and a frequency band spreading module 591 added, wherein the re-sampling module 590 re-samples the input audio signals to change the sampling rate thereof, and then outputs the audio signals with a changed sampling rate to the signal characteristic analyzing module 50; the frequency band spreading module 591 is used for analyzing the input audio signals on the entire frequency band to extract the spectrum envelope of the high frequency portion and the characteristics of its relationship with the low frequency portion, and to output them to the bit-stream multiplexing module 55.
  • the re-sampling module 590 is used for re-sampling the input audio signals.
  • the re-sampling includes up-sampling and down-sampling.
  • the re-sampling is described below using down-sampling as an example.
  • the re-sampling module 590 comprises a low-pass filter and a down-sampler, wherein the low-pass filter is used for limiting the frequency band of the audio signals and eliminating the aliasing that might be caused by down-sampling.
  • the input audio signal is down-sampled after being low-pass filtered: if the input audio signal is s(n) and the low-pass filter has the impulse response h(n), the filtered output is v(n) = Σ_m h(m)·s(n−m), and an M-times down-sampling of v(n) gives x(n) = v(Mn).
  • the sampling rate of the re-sampled audio signal x(n) is thus reduced by a factor of M as compared to the sampling rate of the originally input audio signal s(n). A sketch of such a re-sampler is given below.
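  • A self-contained sketch of such a re-sampler (windowed-sinc low-pass filtering followed by decimation); the filter length, the Hamming window, and the cutoff at fs/(2M) are assumptions of the example.

```python
import numpy as np

def downsample(s, M, taps=63):
    """Band-limit s(n) below pi/M with a low-pass filter h(n), giving
    v(n) = (s * h)(n), then keep every M-th sample: x(n) = v(M*n)."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(n / M) / M * np.hamming(taps)  # windowed-sinc LPF, cutoff fs/(2M)
    v = np.convolve(s, h, mode="same")
    return v[::M]
```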
  • after being input to the frequency band spreading module 591, the original audio signals are analyzed on the entire frequency band to extract the spectrum envelope of the high frequency portion and the characteristics of its relationship with the low frequency portion, which are output to the bit-stream multiplexing module 55 as the frequency band spreading control information.
  • the basic principle of frequency band spreading is that, for most audio signals, there is a strong correlation between the characteristics of the high frequency portion and those of the low frequency portion, so the high frequency portions of the audio signals can be effectively reconstructed from the low frequency portions and therefore need not be transmitted. In order to ensure a correct reconstruction of the high frequency portions, only a few frequency band spreading control signals need to be transmitted in the compressed audio code stream.
  • the frequency band spreading module 591 comprises a parameter extracting module and a spectrum envelope extracting module. Signals are input to the parameter extracting module, which extracts the parameters representing the spectrum characteristics of the input signals in different time-frequency regions; then, in the spectrum envelope extracting module, the spectrum envelope of the high frequency portion of the signal is estimated at a certain time-frequency resolution. In order to ensure that the time-frequency resolution is best suited to the characteristics of the present input signals, the time-frequency resolution of the spectrum envelope can be selected freely (a sketch of such an envelope estimate is given below).
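  • As a hedged illustration of estimating the high-frequency spectrum envelope on a time-frequency grid (the FFT size, hop, number of bands, and RMS measure are assumptions of the example, not the patent's normative resolution):

```python
import numpy as np

def hf_envelope(x, fft_size=256, hop=128, bands=8):
    """Per frame, split the upper half of the spectrum into `bands` groups
    and take the RMS of each group as its envelope value."""
    win = np.hanning(fft_size)
    env = []
    for start in range(0, len(x) - fft_size + 1, hop):
        spec = np.fft.rfft(win * x[start:start + fft_size])
        hf = np.abs(spec[len(spec) // 2:])   # high-frequency bins only
        groups = np.array_split(hf, bands)
        env.append([np.sqrt(np.mean(g ** 2)) for g in groups])
    return np.array(env)                     # shape: (frames, bands)
```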
  • the parameters of the spectrum characteristics of the input signals and the spectrum envelope of the high frequency portion are used as the control signal for frequency band spreading to be output to the bit-stream multiplexing module 55 for multiplexing.
  • the bit-stream multiplexing module 55 receives the code stream including the common scale factor, encoded values of the scale factors, encoded values of the code book sequence numbers and the quantization spectrum of lossless encoding or the encoded values of the code word indexes output from the quantization and entropy encoding module 54 and the frequency band spreading control signal output from the frequency band spreading module 591, and then multiplexes them to obtain the compressed audio data stream.
  • the encoding method based on the encoding device as shown in Fig. 15 specifically includes: analyzing the input audio signal on the entire frequency band, and extracting the high frequency spectrum envelope and the parameters of the signal spectrum characteristics as the frequency band spreading control signal; re-sampling the input audio signal and analyzing the signal type; calculating the signal-to-masking ratio of the re-sampled signal; time-frequency mapping the re-sampled signal to obtain the frequency-domain coefficients of the audio signal; quantizing and entropy encoding the frequency-domain coefficients; multiplexing the frequency band spreading control signal and the encoded audio code stream to obtain the compressed audio code stream, wherein the re-sampling includes the two steps of limiting the frequency band of the audio signal and performing a multiple down-sampling on the audio signal whose frequency band is limited.
  • Fig. 16 is a schematic drawing of the structure of embodiment four of the decoding device.
  • said embodiment has a frequency band spreading module 68 added, which receives the frequency band spreading control information output from the bit-stream demultiplexing module 60 and the low-frequency time-domain audio signal output from the frequency-time mapping module 64, and which reconstructs the high frequency signal portion through spectrum shift and high frequency adjustment to output the wideband audio signal.
  • the decoding method based on the decoding device as shown in Fig. 16 is substantially the same as the decoding method based on the decoding device as shown in Fig. 8, and the difference lies in that the former further includes the step of reconstructing the high frequency portion of the audio signal according to the frequency band spreading control information and the time-domain audio signal after obtaining the time-domain audio signal, thereby obtaining the wideband audio signal.
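  • A strongly simplified sketch of this reconstruction by spectrum shift and high frequency adjustment: the low-band bins are copied upward and each high-band group is rescaled to the transmitted envelope. The patching rule and band grid mirror the hf_envelope example above and are assumptions, not the patent's exact procedure.

```python
import numpy as np

def reconstruct_hf(low_spec, band_env):
    """Patch the low-band bins into the empty high band (spectrum shift),
    then scale each high-band group to match its transmitted envelope."""
    full = np.concatenate([low_spec, low_spec.copy()])  # spectrum shift
    hf = full[len(low_spec):]                           # view into the high band
    groups = np.array_split(np.arange(len(hf)), len(band_env))
    for idx, target in zip(groups, band_env):
        rms = np.sqrt(np.mean(np.abs(hf[idx]) ** 2)) + 1e-12
        hf[idx] *= target / rms                         # high frequency adjustment
    return full
```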
  • Figs. 17, 19 and 21 are the fifth to the seventh embodiments of the encoding device, which respectively have a re-sampling module 590 and a frequency band spreading module 591 added thereto on the basis of the encoding devices as shown in Figs. 11, 9 and 13.
  • the connection of these two modules with other modules, and the function and principle of these two modules are the same as those shown in Fig. 15, so they will not be elaborated herein.
  • Figs. 18, 20 and 22 are the fifth to the seventh embodiments of the decoding device, which respectively have a frequency band spreading module 68 added thereto on the basis of the decoding devices as shown in Figs. 12, 10 and 14 to receive the frequency band spreading control information output from the bit-stream demultiplexing module 60 and the time-domain audio signals of low frequency channel output from the frequency-time mapping module 64, then the high frequency signal portion is reconstructed through frequency spectrum shift and high frequency adjustment to output audio signals of wide frequency band.
  • the seven embodiments of the encoding device as described above may also include a gain control module which receives the audio signals output from the signal characteristic analyzing module 50, controls the dynamic range of the fast varying type signals, and eliminates the pre-echo in audio processing. The output thereof is connected to the time-frequency mapping module 52 and the psychoacoustical analyzing module 51, meanwhile, the amount of gain adjustment is output to the bit-stream multiplexing module 55.
  • the gain control module controls only the fast varying type signals, while the slowly varying signals are directly output without being processed.
  • the gain control module adjusts the time-domain energy envelope of the signal to increase the gain value of the signal before the fast varying point, so that the amplitudes of the time-domain signal before and after the fast varying point are close to each other; then the time-domain signals whose time-domain energy envelope are adjusted are output to the time-frequency mapping module 52, meanwhile, the amount of gain adjustment is output to the bit-stream multiplexing module 55.
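  • A minimal sketch of this pre-transient gain adjustment; locating the fast varying point and the RMS-ratio gain law are assumptions of the example (0 < t_fast < len(x) is required).

```python
import numpy as np

def gain_control(x, t_fast):
    """Raise the signal before the fast varying point t_fast so the
    amplitudes on both sides of the transient are close; returns the
    adjusted signal and the applied gain (sent as side information)."""
    rms = lambda s: np.sqrt(np.mean(s ** 2)) + 1e-12
    g = rms(x[t_fast:]) / rms(x[:t_fast])  # equalize the time-domain envelope
    y = x.astype(float).copy()
    y[:t_fast] *= g
    return y, g
```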
  • the encoding method based on said encoding device is substantially the same as the encoding method based on the above described encoding device, and the difference lies in that the former further includes the step of performing a gain control on the signal whose signal type has been analyzed.
  • the seven embodiments of the encoding device as described above may also include an inverse gain control module which is located after the output of the frequency-time mapping module 64 to receive the result of signal type analysis and the information of the amount of gain adjustment output from the bit-stream demultiplexing module 60, thereby adjusting the gain of the time-domain signal and controlling the pre-echo.
  • the inverse gain control module controls the fast varying type signals but leaves the slowly varying type signals unprocessed.
  • the inverse gain control module adjusts the energy envelope of the reconstructed time-domain signal according to the information of the amount of gain adjustment, reduces the amplitude value of the signal before the fast varying point, and adjusts the energy envelope back to the original state of low in the front and high in the back.
  • the amplitude value of the quantization noise before the fast varying point will be reduced along with the amplitude value of the signal, thereby controlling the pre-echo.
  • the decoding method based on said decoding device is substantially the same as the decoding method based on the above described decoding device, and the difference lies in that the former further includes the step of performing an inverse gain control on the reconstructed time-domain signals.
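  • Continuing the gain-control sketch above under the same assumptions, the inverse gain control simply divides out the transmitted gain, restoring the low-in-front envelope and shrinking the quantization noise with it.

```python
def inverse_gain_control(y, t_fast, g):
    """Undo gain_control: restore the original envelope before t_fast;
    the quantization noise in that region is attenuated by the same factor."""
    x = y.copy()
    x[:t_fast] /= g
    return x
```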
