WO2009132662A1 - Encoding/decoding for improved frequency response - Google Patents

Encoding/decoding for improved frequency response

Info

Publication number
WO2009132662A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
signal
control information
frequency band
scheme
Prior art date
Application number
PCT/EP2008/003432
Other languages
English (en)
Inventor
Juha OJANPERÄ
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/EP2008/003432 priority Critical patent/WO2009132662A1/fr
Publication of WO2009132662A1 publication Critical patent/WO2009132662A1/fr

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - using predictive techniques
    • G10L19/06 - Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients
    • G10L19/02 - using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0204 - using subband decomposition
    • G10L19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • the present invention relates to a method, apparatus, and computer program product for encoding and/or decoding a signal, such as - but not limited to - a stereo or multi-channel audio signal, in a transmitter, receiver, or transceiver device (such as a mobile terminal, user equipment or the like).
  • Fig. 1 shows a schematic block diagram of an audio coding system which contains an encoder 100 at the transmitting side and a decoder 200 at the receiving side.
  • the encoder 100 is responsible for adapting the incoming audio data AS to a bitrate level at which the bandwidth conditions of the transmission channel 150 are not violated.
  • the encoding process may discard irrelevant information from the audio signal and the decoder 200 simply reverses the encoding process to obtain the decoded audio signal with little or no audible degradation.
  • Another usage scenario for the audio coding system is archiving, where the transmission channel 150 corresponds to a storage medium and the target is to achieve as low a bitrate as possible in order to save storage space.
  • An option to decrease the bitrate required for encoding an audio signal without significant quality degradation is to apply prediction. It is possible to predict either (parts of) the input audio signal itself or some parameters used to describe the input signal. Predictive techniques typically provide coding gain for compression of relatively stationary input signals, or of signals that in some other way include redundant components. Speech coding is an example of the former type, whereas multi-channel audio coding fits both categories.
  • a method comprises:
  • an apparatus at the encoder side comprises:
  • a predictor component with selectable prediction, configured to apply at least two prediction schemes with different characteristics to predict at least one frequency band of a received input signal to be encoded;
  • a selector configured to select a prediction scheme for said at least one frequency band
  • an encoder component configured to generate a prediction control information based on the selected prediction scheme and to provide said prediction control information together with an encoded version of said input signal.
  • an apparatus at the decoder side comprises:
  • a receiver component configured to receive a prediction control information for at least one frequency band of a received encoded signal
  • a prediction controller configured to select based on said prediction control information a prediction scheme from at least two prediction schemes with different characteristics to be used for decoding said at least one frequency band of said encoded signal.
  • the proposed encoding and decoding method and apparatuses can improve the encoded audio quality.
  • the bitrate required for encoding a signal, for example a multi-channel signal, can be decreased without quality degradation by applying selectable and thus adaptive prediction.
  • the input signal itself, parts thereof, or some parameters used to describe it may be supplied to the selectable prediction.
  • the proposed prediction control information may comprise a side information on the selected prediction scheme for some or all frequency bands of the encoded signal and associated control information, for example information on the frequency bands to which the prediction selection information applies.
  • the prediction scheme may be selected for a predetermined time period, e.g. for a single frame period or for multiple frame periods.
  • selection of the prediction scheme may be based on a comparison of prediction performances or on overall coding gains resulting from said at least two prediction schemes.
  • a de-quantized version of the residual signal may be calculated and used for updating a past value buffer used in the at least two prediction schemes.
  • the at least two prediction schemes may comprise a first prediction scheme which is based on backward adaptive prediction where frequency bins are predicted based on previous frames using quantized samples as input, and a second prediction scheme where a filtered signal from a previous time frame is used to predict a current frame.
  • the second prediction scheme may comprise first and second prediction rounds, wherein in the second prediction round, filter coefficients used for filtering the signal from the previous time frame are calculated only from spectral samples that provide coding gain in the first prediction round.
  • Fig. 1 shows a schematic block diagram of an audio coding system in which the present invention can be implemented
  • Fig. 2 shows a schematic block diagram of an encoder according to a first example embodiment
  • Fig. 3 shows a schematic block diagram of a decoder according to a second example embodiment
  • Fig. 4 shows a schematic block diagram of a selectable prediction encoder according to a third example embodiment
  • Fig. 5 shows a schematic block diagram of a selectable prediction encoder according to a fourth example embodiment
  • Fig. 6 shows a flow diagram of an encoding procedure according to a fifth example embodiment
  • Fig. 7 shows a flow diagram of a decoding procedure according to a sixth example embodiment.
  • Fig. 8 shows a schematic diagram of a software-based implementation according to a seventh example embodiment.
  • Example embodiments will now be described based on an encoding and decoding method and apparatus for a stereo audio signal.
  • the proposed encoding/decoding procedure or encoder/decoder can be used for any single-channel or multi-channel signal.
  • Audio data reduction relies on an understanding of the hearing mechanism and thus is a form of perceptual encoding.
  • Since the ear is only able to extract a certain proportion of the information in a given sound, all additional sound can be regarded as redundant.
  • Predictive coding uses the knowledge of previous samples to predict the values of the subsequent samples. It is then only necessary to provide the difference between the predicted value and the actual value for the receiver/decoder.
  • the receiver contains an identical predictor computing the predicted value to which the transmitted difference is added to obtain the original value. In real-life applications it may be necessary to quantize the predicted values prior to transmission, enabling the receiver/decoder to recreate a quantized approximation of the original value.
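  • As an illustration of this predictive-coding loop (a generic sketch, not the specific predictor of this patent), the following Python fragment transmits only quantized prediction residuals; the trivial previous-sample predictor and the step size are illustrative assumptions.

```python
import numpy as np

def encode(samples, step=0.05):
    """Transmit only quantized residuals; the decoder runs the same predictor."""
    prev_rec = 0.0                  # last reconstructed sample (shared predictor state)
    residuals = []
    for x in samples:
        pred = prev_rec             # trivial predictor: repeat the previous reconstruction
        q = int(round((x - pred) / step))   # uniform quantization of the residual
        residuals.append(q)
        prev_rec = pred + q * step  # track exactly what the decoder will reconstruct
    return residuals

def decode(residuals, step=0.05):
    prev_rec = 0.0
    out = []
    for q in residuals:
        prev_rec = prev_rec + q * step      # identical predictor + de-quantized residual
        out.append(prev_rec)
    return np.array(out)

if __name__ == "__main__":
    sig = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))       # slowly varying test signal
    rec = decode(encode(sig))
    print("max reconstruction error:", np.max(np.abs(rec - sig)))   # bounded by step / 2
```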
  • a transform of a waveform is computed periodically. Since the transform of an audio signal changes slowly, it needs to be sent much less often than audio samples.
  • the receiver performs an inverse transform, which may be based on, for example, the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), or a wavelet transform.
  • In MPEG coding, the audio signal is split into a number of frequency bands (sub-bands, consisting of one or several "frequency bins") which are processed according to their own levels.
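  • As a small illustration of such a sub-band split, the sketch below groups frequency bins into bands and reports a per-band level; the band edges are arbitrary assumptions, not the band tables of any MPEG standard.

```python
import numpy as np

# Hypothetical band edges in bins; real codecs use standardized, non-uniform tables.
SB_OFFSET = [0, 4, 8, 16, 32, 64]

def band_energies(spectrum: np.ndarray) -> list:
    """Group frequency bins into sub-bands and return the energy level of each band."""
    return [float(np.sum(spectrum[lo:hi] ** 2))
            for lo, hi in zip(SB_OFFSET[:-1], SB_OFFSET[1:])]

# Example: levels of a 64-bin magnitude spectrum
print(band_energies(np.abs(np.fft.rfft(np.random.randn(126)))))
```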
  • variable bit rate can be defined as follows.
  • One possibility is a VBR encoder outputting a bit stream, which may have a variable number of bits in successive frames. That is, each frame may contain a different number of bits.
  • Bit rate may vary, for example, in large predefined increments/decrements or it may vary by as little as one bit resolution.
  • Another possibility is an embedded encoder always outputting a bit stream consisting of several layers of significance, and either the transmitter, the network and/or the decoder can select the appropriate number of layers of coded signals considering the conditions of the transport network or the desired quality level.
  • the effective bit rate may vary, possibly resulting in a variable level of quality provided by the codec. For example, a higher quality signal with a wide audio band can be supported when using all the layers, and a normal quality signal with a narrower audio band can be supported when using only the core layer.
  • the bit-rate may be dependent upon many factors such as network congestion, priority, QoS, etc.
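  • A hedged sketch of the embedded-layer idea: the bitstream carries a core layer plus enhancement layers, and the transmitter, network or decoder may drop trailing layers to fit the available bit budget. The layer structure and the byte-budget rule below are purely illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EmbeddedFrame:
    core: bytes                  # always kept: normal quality, narrower audio band
    enhancements: List[bytes]    # optional layers that widen the band / raise quality

def truncate_for_channel(frame: EmbeddedFrame, budget_bytes: int) -> bytes:
    """Keep the core and as many enhancement layers as the bit budget allows."""
    out = bytearray(frame.core)
    for layer in frame.enhancements:
        if len(out) + len(layer) > budget_bytes:
            break                # drop this and all later (less significant) layers
        out += layer
    return bytes(out)
```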
  • Fig. 2 shows a schematic block diagram of an encoder according to a first example embodiment based on a two-channel (stereo) input signal.
  • the incoming left (L) and right (R) channels are supplied to respective converters or transformers 110-1, 110-2, where they are transformed to the frequency domain.
  • a modified discrete cosine transform (MDCT) may be used for this transformation; alternatively, a different transform, for example the Fast Fourier Transform (FFT), can be used.
  • the mono (mid) signal may be derived from the transformed channels as M_f = 0.5 · (L_f + R_f) and encoded with an EV-VBR (embedded variable bitrate) mono encoder 120.
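  • A minimal sketch of this mid-channel computation after the transform stage; the side signal computed alongside is a common companion in mid/side stereo coding but is an assumption here, since only the mid formula appears above.

```python
import numpy as np

def mid_side(left_f: np.ndarray, right_f: np.ndarray):
    """Frequency-domain mid/side conversion: M_f = 0.5 * (L_f + R_f)."""
    mid = 0.5 * (left_f + right_f)
    side = 0.5 * (left_f - right_f)   # assumed companion signal, not stated in the text
    return mid, side
```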
  • the selectable prediction encoder 150 derives and forwards the signal to be quantized to a stereo encoder 160 and generates a prediction control information which identifies a selected prediction mechanism or scheme.
  • This prediction control information is then passed to a bitstream multiplexer (MUX) 170, where the different output streams of the mono encoder 120, selectable prediction encoder 150 and stereo encoder 160 are multiplexed into a single output bitstream BS for storage (e.g. on an audio medium) or transmission to a receiving end with a corresponding decoder.
  • In an alternative example embodiment, the output streams of the selectable prediction encoder 150 and the mono and stereo encoders 120, 160 are not multiplexed into a single bitstream, but all bitstreams are stored or transmitted separately.
  • Another alternative example embodiment may multiplex these three output signals only partially, for example by multiplexing the outputs of the mono and stereo encoders, 120 and 160, respectively, into one bitstream and handling the output of the selectable prediction encoder 150 as its own bitstream.
  • the output of the selectable prediction encoder 150 may be multiplexed into the same bitstream with the output of the stereo encoder 160, thereby handling the output of the mono encoder 120 as a separate bitstream.
  • the outputs of the mono encoder 120 and the selectable prediction encoder 150 are multiplexed into the same bitstream, while the output of the stereo encoder 160 is handled as a separate bitstream.
  • the stereo encoder 160 is responsible for quantizing the input signal to the desired bitrate. It is to be noted that any stereo encoder can be used here.
  • Fig. 3 shows a schematic block diagram of a corresponding decoder according to a second example embodiment.
  • the bitstream BS is demultiplexed in a demultiplexer (DEMUX) 270 and the mono signal is decoded with an EV-VBR mono decoder 220.
  • the demultiplexing step may not be needed.
  • employing partial multiplexing - i.e. grouping together the outputs of a selectable prediction decoder 240 and a stereo decoder 230 or grouping together the outputs of the mono decoder 220 and the stereo decoder 230, as described above for the encoder - may require only partial demultiplexing.
  • the stereo decoder 230 extracts the quantized stereo signal which is then enhanced with an output information (i.e. prediction control information) of the selectable prediction decoder 240.
  • the synthesized enhanced stereo channels L f and Rf are then subjected to an inverse transformation (e.g. inverse MDCT in the current example) in respective inverse transformers 210-1 , 210-2 to obtain time domain left (L) and right (R) channels for the receiver side.
  • Fig. 4 shows a more detailed block diagram of a selectable prediction encoder according to a third example embodiment.
  • the input signal is predicted using two prediction schemes A and B in respective predictor functions or units 1510-A (prediction scheme A) and 1510-B (prediction scheme B).
  • a performance analysis is conducted in performance analysis units 1520-1 and 1520-2 to select the predictor that provides the desired performance.
  • a prediction gain is calculated for both prediction schemes in the respective performance analysis units 1520-1 and 1520-2, and a comparison is made in a performance comparator function or unit 1530 to see which prediction scheme provides the highest gain, if any.
  • the performance comparator may use different criteria. For example, prediction scheme A is selected if its performance exceeds that of B by a predetermined margin (or vice versa). In yet another example embodiment, prediction scheme A is selected if its performance exceeds a predetermined threshold regardless of the performance of prediction scheme B (or vice versa).
  • the prediction control information is generated and provided to the MUX 170.
  • the prediction control information may consist of e.g. an indicator identifying the selected prediction scheme, one or several prediction parameters, and/or an indication of the frequency band(s) to which this prediction control information applies.
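  • One way to picture such prediction control information is as a small per-decision record; the field names below are hypothetical and merely mirror the items listed above (scheme indicator, optional parameters, band indication).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PredictionControlInfo:
    scheme: int                               # e.g. 0 = no prediction, 1 = scheme A, 2 = scheme B
    bands: List[int]                          # frequency band indices this entry applies to
    parameters: Optional[List[float]] = None  # e.g. a codebook index or filter parameters
```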
  • the combined prediction and quantization performance is considered in performance analysis units 1520-1 and 1520- 2 when selecting the most appropriate prediction schemes for the current frame.
  • the prediction control signal is generated based on the quantized value of the prediction error.
  • the residual signal is then determined in a residual extraction function or unit 1540, as follows:
  • Equation (1) is repeated for 0 ≤ i < N, where N is the number of frequency bands to which prediction is to be applied. As an example, the following values could be used:
  • the frequency resolution is 25 Hz, which means that with the given example frequency bands only frequencies up to 600 Hz are predicted.
  • the frequencies above 600 Hz are quantized in such a way that only the original signal, not the residual, is present, that is:
  • a frequency division different from the example above can be used.
  • higher or lower frequency resolution can be used.
  • the mechanism proposed in this invention may be applied to any number of frequency bands (or frequency bins) - including only a single frequency band or all frequency bands - and these frequency bands (or frequency bins) need not be the lowest ones on the frequency axis, nor consecutive bands (bins) on the frequency axis.
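  • A sketch of the band-wise residual extraction described above: within the predicted bands the residual (input minus prediction) is passed on for quantization, while bins above the last predicted band carry the original signal. The sbOffset table below is illustrative; with the 25 Hz resolution of the example, bin 24 corresponds to 600 Hz.

```python
import numpy as np

# Illustrative band edges in bins (N = 6 predicted bands); bin 24 = 600 Hz at 25 Hz resolution.
SB_OFFSET = [0, 4, 8, 12, 16, 20, 24]

def extract_residual(input_spec: np.ndarray, predicted_spec: np.ndarray) -> np.ndarray:
    """Residual in the predicted bands, original signal in all bins above them."""
    out = input_spec.copy()                  # bins above sbOffset[N]: original signal
    last = SB_OFFSET[-1]
    out[:last] = input_spec[:last] - predicted_spec[:last]   # band-wise residual
    return out
```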
  • the obtained residual signal is supplied to a stereo encoder function or unit 1550, where the residual signal is quantized and supplied to the MUX 170. Furthermore, a de-quantized version of the residual signal is calculated in a de-quantizer function or unit 1560 to update past values buffers 1570 which are used in the prediction schemes A and B of the respective predictor functions or units 1510-A and 1510-B. It is noted that the decoder functionalities basically correspond to the encoder side as well, so that their operation will only briefly be described later.
  • the prediction scheme A of the first predictor function or unit 1510-A may for example be based on a backward adaptive prediction where each spectral bin is predicted over each successive frame using already quantized output samples (from the past frames) as an input. Using only preceding quantized spectral samples in the prediction loop has an advantage that no side information is needed for the transmission of the predictor coefficients.
  • An estimate of the current spectral bin and the resulting prediction error can be calculated as x̂(n) = Σ_{i=1..P} a_i · x̃(n-i) (3) and e(n) = x(n) - x̂(n), where:
  • x̃(n-i) are the past reconstructed spectral bins
  • x(n) is the input spectral bin
  • a_i are the predictor coefficients
  • P is the predictor order.
  • the predicted frame is obtained by applying equation (3) to each spectral bin separately.
  • the quantized prediction error may then be transmitted to the decoder, where identical operations can be performed to get the predicted frame to be added to the quantized prediction error. More information about the operation of this predictor used as the example predictor A can be found for example in L. Yin, M. Suonio, M. Vaananen, "A new backward predictor for MPEG audio coding", 103rd AES Convention, New York 1997, Preprint 4521.
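  • A minimal sketch in the spirit of prediction scheme A: each spectral bin is predicted across frames from already reconstructed (quantized) values of that bin, so no predictor coefficients need to be transmitted. The normalized LMS adaptation used here is an assumption; the cited Yin et al. predictor defines its own adaptation.

```python
import numpy as np

class BackwardBinPredictor:
    """Per-bin backward adaptive predictor over successive frames (scheme A flavour)."""

    def __init__(self, num_bins: int, order: int = 2, mu: float = 0.1):
        self.mu = mu
        self.coeffs = np.zeros((num_bins, order))    # a_i for every spectral bin
        self.history = np.zeros((num_bins, order))   # past reconstructed bins x~(n-1..n-P)

    def predict(self) -> np.ndarray:
        # x_hat(n) = sum_i a_i * x~(n - i), evaluated for all bins at once
        return np.sum(self.coeffs * self.history, axis=1)

    def update(self, reconstructed: np.ndarray) -> None:
        """Adapt the coefficients from the prediction error and shift the history."""
        err = reconstructed - self.predict()
        norm = np.sum(self.history ** 2, axis=1) + 1e-9
        self.coeffs += self.mu * (err / norm)[:, None] * self.history   # NLMS step (assumed)
        self.history = np.roll(self.history, 1, axis=1)
        self.history[:, 0] = reconstructed                              # newest quantized frame
```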
  • a predictor used as the example prediction scheme B of the second predictor function or unit 1510-B is described next.
  • a filtered signal from the previous frame may be used to predict the current frame.
  • let D̃_{f-1} denote the quantized samples from the previous frame and D_f the input signal of the current frame.
  • the transfer function of the filter may then be expressed as follows:
  • the filter coefficients may be determined by minimizing a mean squared error function as defined by:
  • the codeword referred to by the codebook index i that minimizes the selected error criterion, for example a squared error function, between the quantized coefficients and the original filter coefficients can be selected for transmission.
  • the performance analysis, for example the computation of the prediction gain for a predicted signal x and an input signal y, can be carried out in the performance analysis units 1520-1 and 1520-2 as follows:
  • predGain(x, y) = pe(x, y) (11)
  • T is the masking threshold for a current frame as determined by a psychoacoustical model of the encoder (this block is not explicitly shown in Fig. 2).
  • the performance comparator function or unit 1530 may determine the predicted signal for the residual extraction and the bitstream signalling according to the following procedure, as indicated by a first example pseudo-code: if (predGain_A > predGain_B)
  • D̂_f(j) = 0 for sbOffset[i] ≤ j < sbOffset[i+1] if diff_gain(i) ≤ 0; do nothing, otherwise
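  • A sketch of the per-band selection logic implied by the comparator description: compute a gain for each scheme, pick the better one if it wins by a margin (or clears a threshold), and otherwise disable prediction for that band. The energy-ratio gain and the margin/threshold values are assumptions standing in for the perceptual measure pe(x, y) of equation (11).

```python
import numpy as np

def prediction_gain(predicted: np.ndarray, target: np.ndarray) -> float:
    """Energy-ratio gain in dB; a stand-in for the perceptual measure pe(x, y)."""
    err = np.sum((target - predicted) ** 2) + 1e-12
    return 10.0 * np.log10((np.sum(target ** 2) + 1e-12) / err)

def select_schemes(pred_a, pred_b, target, sb_offset, margin_db=0.5, min_gain_db=0.0):
    """Per-band choice: 0 = no prediction, 1 = scheme A, 2 = scheme B."""
    choices = []
    for lo, hi in zip(sb_offset[:-1], sb_offset[1:]):
        gain_a = prediction_gain(pred_a[lo:hi], target[lo:hi])
        gain_b = prediction_gain(pred_b[lo:hi], target[lo:hi])
        scheme, gain = (1, gain_a) if gain_a >= gain_b + margin_db else (2, gain_b)
        if gain <= min_gain_db:          # neither scheme helps: signal "do nothing"
            scheme = 0
        choices.append(scheme)
    return choices
```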
  • Fig. 5 shows a more detailed block diagram of a selectable prediction decoder according to a fourth example embodiment.
  • the signalling information (i.e. prediction control information) from the demultiplexer 270 may be decoded in a prediction signal extraction function or unit 2410, for example according to the following procedure, as indicated by a second pseudo-code:
  • the predicted signal is obtained in a target signal prediction function or unit 2420 in case prediction was enabled for the current frame. If the example prediction scheme A is signalled, the predicted signal is determined as described for example in L. Yin, M. Suonio, M. Vaananen, "A new backward predictor for MPEG audio coding", 103rd AES Convention, New York 1997, Preprint 4521.
  • the predicted signal can be obtained based on the following equation: D̂_f(i) = Σ_j D̃_{f-1}(i + j) · ĥ(j), 0 ≤ i < sbOffset[N] (13)
  • where D̃_{f-1} are the de-quantized samples from the previous frame and ĥ are the quantized filter coefficients as determined by the transmitted/received codebook index.
  • the residual signal may then be updated in a residual signal update function or unit 2430 according to the following equation:
  • Equation (14) may be repeated for 0 ≤ i < N.
  • the signal is then passed e.g. to the stereo channel synthesizer 250 of Fig. 3.
  • the predictors can be updated in a prediction update function or unit 2440.
  • this update can be performed in accordance with, for example, L. Yin, M. Suonio, M. Vaananen, "A new backward predictor for MPEG audio coding", 103rd AES Convention, New York 1997, Preprint 4521.
  • this can be based on the following equation for 0 ≤ i < sbOffset[N].
  • the operation for the example prediction scheme B may be improved with slightly increased computational complexity.
  • the filter coefficients may be calculated twice. In the first round, the calculations as already described are used. In the second round, the filter coefficients may be calculated only from those spectral samples that provided coding gain in the first round. This operation can be expressed as follows:
  • L is the number of samples present in the modified input signals which can be determined as follows:
  • Equation (16) is repeated for 0 ≤ i < N.
  • the subsequent steps in the computation remain unchanged. This enhanced mode of operation may improve the adaptivity of the filter coefficients to the target signal.
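  • A hedged sketch of this two-round idea for scheme B: fit filter coefficients on the whole previous frame, keep only the spectral samples where the first-round prediction reduced the error, and refit on that subset. The least-squares fit and the gain test are assumptions; equations (15)-(16) are not reproduced here.

```python
import numpy as np

def two_round_fit(prev_frame: np.ndarray, cur_frame: np.ndarray, order: int = 3) -> np.ndarray:
    """Fit filter h so that sum_j prev[i + j] * h[j] approximates cur[i], in two rounds."""
    n = len(cur_frame) - order
    A = np.stack([prev_frame[j:j + n] for j in range(order)], axis=1)
    target = cur_frame[:n]

    h1, *_ = np.linalg.lstsq(A, target, rcond=None)           # round 1: all spectral samples
    gained = np.abs(target - A @ h1) < np.abs(target)          # samples where prediction helped

    if np.count_nonzero(gained) >= order:                      # round 2: refit on those L samples
        h2, *_ = np.linalg.lstsq(A[gained], target[gained], rcond=None)
        return h2
    return h1
```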
  • Fig. 6 shows a basic flow diagram of a selectable prediction encoding procedure according to a fifth example embodiment.
  • In step S101, at least two prediction methods are executed for each selected frequency band and a performance analysis is conducted. Then, in step S102, the results of the performance analysis are compared for each of the selected prediction schemes and for each selected frequency band.
  • In step S103, the predictor with the better or best performance out of all or a subset of the selected prediction schemes is selected, and an information (i.e. prediction control information) is transmitted for each selected frequency band (possibly together with an indication of the frequency band(s) to which the prediction control information applies) to provide a continuous adaptation of predictors at the decoder side.
  • Fig. 7 shows a basic flow diagram of a selectable prediction decoding procedure according to a sixth example embodiment.
  • In step S201, the received predictor or prediction control information is read for each frequency band. Then, in step S202, the spectral samples are decoded based on the received prediction control information. Finally, in step S203, the predictors are updated based on the received prediction control information, to achieve adaptive prediction and thus improved signal quality.
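  • A compact sketch of this decoding flow: read the control information per band (S201), rebuild the spectral samples by adding the selected prediction to the de-quantized residual (S202), and feed the result back into the predictors so they stay in sync with the encoder (S203). The control-information record and predictor objects are the hypothetical ones sketched earlier.

```python
import numpy as np

def decode_frame(control_infos, residual_spec, predictors, sb_offset):
    """S201-S203: read control info, reconstruct spectral samples, update predictors."""
    out = residual_spec.copy()
    for info in control_infos:                      # S201: one record per band group
        for band in info.bands:
            lo, hi = sb_offset[band], sb_offset[band + 1]
            if info.scheme != 0:                    # S202: add the selected prediction
                out[lo:hi] += predictors[info.scheme].predict()[lo:hi]
    for pred in predictors.values():                # S203: keep predictors in sync
        pred.update(out)
    return out
```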
  • Fig. 8 shows a schematic block diagram of an alternative software-based seventh example embodiment of the proposed encoding and decoding functionalities for achieving selectable prediction encoding/decoding.
  • the seventh embodiment is based on a software-controlled computer or processor 300 which can be implemented for example in the selectable prediction blocks 150 and 240 of Figs. 2 and 3.
  • the processor 300 comprises a processing unit 375, which may be any processor or control unit which performs control based on software routines of a control program stored in a memory 376.
  • Program code instructions are fetched from the memory 376 and are loaded to the processing unit 375 of the processor 300 in order to perform the processing steps of the above functionalities, as described in connection with the block and flow diagrams of Figs. 4, 5, 6 and 7.
  • These processing steps may be performed on the basis of input data DI and may generate output data DO, wherein the input data DI may correspond to the channel streams at the encoder side and to the prediction control information and spectral samples in the decoder case.
  • the output data DO may correspond to the prediction control information at the encoder side and to the improved samples at the decoder side.
  • the above embodiments can be implemented in hardware by a discrete analog or digital circuit, signal processor, or a chip or chip set (e.g. an ASIC (Application Specific Integrated Circuit)), or in software either in an ASIP (Application Specific Integrated Processor), a DSP (Digital Signal Processor), or any other processor or computer device.
  • ASIC Application Specific Integrated Circuit
  • ASIP Application Specific Integrated Processor
  • DSP Digital Signal Processor
  • a method, apparatus, and computer program product for encoding and/or decoding an input signal have been described, wherein at least two prediction schemes with different characteristics are applied to predict at least one frequency band of the input signal to be encoded at the encoder side.
  • a prediction scheme which provides better performance is then selected for at least one frequency band, and a prediction control information is generated based on the selected prediction scheme and provided together with an encoded version of the input signal.
  • the prediction control information is received for at least one frequency band of a received encoded signal, and a prediction scheme from at least two prediction schemes with different characteristics to be used for decoding said at least one frequency band of the received encoded signal is selected based on the prediction control information.
  • the present invention can be implemented or used in any coding and decoding system and is not limited to a stereo audio signal. Moreover, any prediction scheme and any number of prediction schemes could be used for the proposed selection.
  • the embodiments can be realized in hardware, software, or a combination of hardware and software. They can be realized in a centralized fashion in one processing system or in a distributed fashion where different elements are spread across several interconnected processing systems. Any kind of processing system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware and software can be a processing system with an application that, when being loaded and executed, controls the processing system such, that it carries out the methods described herein.
  • the embodiments also can be embedded in an application product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a processing system is able to carry out these methods.
  • an application in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • an application can include, but is not limited to, a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a processing system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to a method, an apparatus, and a computer program product for encoding and/or decoding an input signal, wherein at least two prediction schemes with different characteristics are applied to predict at least one frequency band of the input signal to be encoded at the encoder side. A prediction scheme which provides better performance is then selected for said at least one frequency band, and prediction control information is generated based on the selected prediction scheme and provided together with an encoded version of said input signal. At the decoder side, the prediction control information is received for at least one frequency band of a received encoded signal, and a prediction scheme from among at least two prediction schemes with different characteristics to be used for decoding said at least one frequency band of the received encoded signal is selected based on the prediction control information.
PCT/EP2008/003432 2008-04-28 2008-04-28 Encoding/decoding for improved frequency response WO2009132662A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2008/003432 WO2009132662A1 (fr) 2008-04-28 2008-04-28 Encoding/decoding for improved frequency response

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2008/003432 WO2009132662A1 (fr) 2008-04-28 2008-04-28 Encoding/decoding for improved frequency response

Publications (1)

Publication Number Publication Date
WO2009132662A1 (fr)

Family

ID=40238688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2008/003432 WO2009132662A1 (fr) 2008-04-28 2008-04-28 Encoding/decoding for improved frequency response

Country Status (1)

Country Link
WO (1) WO2009132662A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5115469A (en) * 1988-06-08 1992-05-19 Fujitsu Limited Speech encoding/decoding apparatus having selected encoders
US6721700B1 (en) * 1997-03-14 2004-04-13 Nokia Mobile Phones Limited Audio coding method and apparatus
US20040105551A1 (en) * 1998-10-13 2004-06-03 Norihiko Fuchigami Audio signal processing apparatus
EP1587062A1 (fr) * 1999-07-05 2005-10-19 Nokia Corporation Method for improving the coding efficiency of an audio signal



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08735399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08735399

Country of ref document: EP

Kind code of ref document: A1