EP1905034A1 - Virtual source location information based channel level difference quantization and dequantization method - Google Patents

Virtual source location information based channel level difference quantization and dequantization method

Info

Publication number
EP1905034A1
Authority
EP
European Patent Office
Prior art keywords
cld
quantization
vsli
channel audio
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP06783342A
Other languages
German (de)
French (fr)
Other versions
EP1905034B1 (en)
EP1905034A4 (en)
Inventor
Jeong Il Seo
Kyeong Ok Kang
Jin Woo Hong
Kwang Ki Kim
Seung Kwon Beack
Min Soo Hahn
Sang Bae Chon
Koeng Mo Sung
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020060066822A external-priority patent/KR100755471B1/en
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Publication of EP1905034A1 publication Critical patent/EP1905034A1/en
Publication of EP1905034A4 publication Critical patent/EP1905034A4/en
Application granted granted Critical
Publication of EP1905034B1 publication Critical patent/EP1905034B1/en
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032: Quantisation or dequantisation of spectral components
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing


Abstract

Methods for Spatial Audio Coding (SAC) of a multi-channel audio signal and decoding of an audio bitstream generated by the SAC are provided. More particularly, methods of efficient quantization and dequantization of Channel Level Difference (CLD) used as a spatial parameter when SAC-based encoding of a multi-channel audio signal is performed are provided. A method of CLD quantization includes extracting sub-band-specific CLDs from an N-channel audio signal (N>1), and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.

Description

[DESCRIPTION]
[Invention Title]
VIRTUAL SOURCE LOCATION INFORMATION BASED CHANNEL LEVEL DIFFERENCE QUANTIZATION AND DEQUANTIZATION METHOD
[Technical Field]
The present invention relates to Spatial Audio Coding (SAC) of a multichannel audio signal and decoding of an audio bitstream generated by the SAC, and more particularly, to efficient quantization and dequantization of Channel Level Difference (CLD) used as a spatial parameter when SAC-based encoding of a multi-channel audio signal is performed.
[Background Art]
Spatial Audio Coding (SAC) is technology for efficiently compressing a multi-channel audio signal while maintaining compatibility with an existing stereo audio system. In the Moving Picture Experts Group (MPEG), SAC technology has been standardized and named "MPEG Surround" since 2002, and is described in detail in the ISO/IEC working document, ISO/IEC CD 14996-x (published on February 18, 2005 and hereinafter referred to as "SAC standard document").
Specifically, the SAC approach is an encoding approach for improving transmission efficiency by encoding N number of multi-channel audio signals (N>2) using both a down-mix signal, which is mixed into mono or stereo, and a set of ancillary spatial parameters, which represent a human perceptual characteristic of the multi-channel audio signal. The spatial parameters can include Channel Level Difference (CLD) representing a level difference between two channels according to time-frequency, Inter-channel Correlation/Coherence (ICC) representing correlation or coherence between two channels according to time-frequency, Channel Prediction Coefficient (CPC) for making it possible to reproduce a third channel from two channels by prediction, and so on.
The CLD is a core element in restoring a power gain of each channel, and is extracted in various ways in the process of SAC encoding. As illustrated in FIG. 1A, on the basis of one reference channel, the CLD is expressed by a power ratio of the reference channel to each of the other channels. For example, if there are six channel signals L, R, C, LFE, Ls and Rs, five power ratios can be obtained based on one reference channel, and CLD1 through CLD5 correspond to levels obtained by applying a base-10 logarithm to each of the five power ratios.
Meanwhile, as illustrated in FIG. 1B, a multi-channel signal is divided into a plurality of channel pairs, each of the channel pairs is analyzed as a stereo pair, and, in each analysis step, one CLD value is extracted. This is carried out by step-by-step use of a plurality of One-To-Two (OTT) modules, which take two input channels to one output channel. In each OTT, any one of the input stereo signals is recognized as a reference channel, and a base-10 logarithmic value of a power ratio of the reference channel to the other channel is output as a CLD value.
The CLD value has a dynamic range between -∞ and +∞. Hence, to express
the CLD value with a finite number of bits, efficient quantization is required. Typically, CLD quantization is performed by using a normalized quantization table. An example of such a quantization table is given in the SAC standard document (see page 41, Table 57). Because the full range of CLD values cannot be expressed with only a finite number of bits, the dynamic range of the CLD value is limited to a predetermined level or less. Thereby, quantization error is introduced, and thus spectrum information is distorted. For example, when 5 bits are used for the CLD quantization, the dynamic range of the CLD value will be limited to the range between -25 dB and +25 dB.
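To make the limitation concrete, the short Python sketch below (my own illustration, not code from the patent) computes a per-band CLD as a base-10 log power ratio and clips it to the ±25 dB range implied by the 5-bit example above.

```python
# Illustrative sketch only: per-band CLD as a log power ratio, then the clipping
# that a finite, +/-25 dB quantization table imposes on extreme bands.
import math

def cld_db(power_ref, power_other, eps=1e-12):
    """CLD in dB: 10*log10 of the power ratio between two channels."""
    return 10.0 * math.log10((power_other + eps) / (power_ref + eps))

def clip_to_table_range(cld, limit_db=25.0):
    """A bounded quantization table forces the CLD into [-limit_db, +limit_db]."""
    return max(-limit_db, min(limit_db, cld))

raw = cld_db(power_ref=1.0, power_other=1e4)   # strongly one-sided band: 40 dB
print(raw, clip_to_table_range(raw))           # 40.0 -> 25.0 (information lost)
```

Bands whose true level difference exceeds the table range all collapse onto the boundary value, which is the spectrum distortion described above.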
[Disclosure]
[Technical Problem]
The present invention is directed to Channel Level Difference (CLD) quantization and dequantization methods capable of minimizing sound deterioration in the process of Spatial Audio Coding (SAC)-based encoding of a multi-channel audio signal. The present invention is also directed to CLD quantization and dequantization methods capable of minimizing sound deterioration using advantages of quantization of Virtual Source Location Information (VSLI), which is replaceable with CLD, in the process of SAC-based encoding of a multi-channel audio signal.
In addition, the present invention is directed to improving quality of sound without additional complexity by providing a VSLI-based CLD quantization table, which can replace the CLD quantization table used for CLD quantization and dequantization in a Moving Picture Experts Group (MPEG)-4 SAC system.
[Technical Solution]
A first aspect of the present invention provides a method for quantizing a Channel Level Difference (CLD) parameter used as a spatial parameter when Spatial Audio coding (SAC)-based encoding of an N-channel audio signal (N>1) is performed. The CLD quantization method comprises the steps of extracting CLDs for each band from the N-channel audio signal, and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization
table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A second aspect of the present invention provides a computer-readable recording medium on which is recorded a computer program for performing the CLD quantization method. A third aspect of the present invention provides a method for encoding an N-
channel audio signal (N>1) based on Spatial Audio Coding (SAC). The method comprises the steps of down-mixing and encoding the N-channel audio signal, extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each band, from the N-channel audio signal and quantizing the extracted spatial parameters. In the step of quantizing the extracted spatial parameters, the CLD is quantized by reference to a VSLI-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A fourth aspect of the present invention provides an apparatus for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC). The apparatus comprises an SAC encoding means down-mixing the N-channel audio signal to generate a down-mix signal and extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each band, from the N-channel audio signal, an audio encoding means generating a compressed audio bitstream from the down-mix signal generated by the SAC encoding means, a spatial parameter quantizing means quantizing the spatial parameters extracted by the SAC encoding means, and a spatial parameter encoding means encoding the quantized spatial parameter levels. The spatial parameter quantizing means quantizes the CLD by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A fifth aspect of the present invention provides a method for dequantizing an encoded Channel Level Difference (CLD) quantization value when an encoded N-channel audio bitstream (N>1) is decoded based on Spatial Audio Coding (SAC). The CLD dequantization method comprises the steps of performing Huffman decoding on the encoded CLD quantization value, and dequantizing the decoded CLD quantization value by using a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from
VSLI quantization values of the N-channel audio signal.
A sixth aspect of the present invention provides a computer-readable recording medium on which is recorded a computer program for performing the CLD dequantization method.
A seventh aspect of the present invention provides a method for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC). The method comprises the steps of decoding the encoded N-channel audio bitstream, dequantizing a quantization value of at least one spatial parameter received together with the encoded N-channel audio bitstream, and synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal. In the step of dequantizing a quantization value of at least one spatial parameter, a Channel Level Difference (CLD) included in the spatial parameter is dequantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
An eighth aspect of the present invention provides an apparatus for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC). The apparatus comprises means for decoding the encoded N-channel audio bitstream, means for decoding quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream, means for dequantizing the quantization values of the spatial parameter, and means for synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal. The means for dequantizing the quantization value of the spatial parameter dequantizes a Channel Level Difference (CLD) included in the spatial parameter by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Advantageous Effects]
The VSLI-based CLD quantization table created according to the present invention can replace the CLD quantization table used in an existing SAC system. By using the VSLI-based CLD quantization table according to the present invention, sound deterioration resulting from quantization can be minimized. In addition, by using the Huffman codebook proposed in the present invention for compressing CLD indexes, it is possible to reduce the bit rate required to transmit the CLD.
[Description of Drawings]
FIGS. 1A and 1B conceptually illustrate a process of extracting Channel
Level Difference (CLD) values from multi-channel signals;
FIG. 2 schematically illustrates a configuration of a spatial audio coding (SAC) system to which the present invention is to be applied;
FIGS. 3A and 3B are views for explaining a concept of VSLI serving as a reference of CLD quantization in accordance with the present invention; and FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention.
[Mode for Invention]
Hereinafter, exemplary embodiments of the present invention will be described in detail. However, the present invention is not limited to the exemplary embodiments disclosed below, but can be implemented in various forms. Therefore,
these exemplary embodiments are provided for complete disclosure of the present invention and to fully convey the scope of the present invention to those of ordinary skill in the art. FIG. 2 schematically illustrates a configuration of a spatial audio coding
(SAC) system to which the present invention is to be applied. As illustrated, the SAC system can be divided into an encoding part that generates, encodes, and transmits a down-mix signal and spatial parameters from an N-channel audio signal, and a decoding part that restores the N-channel audio signal from the down-mix signal and spatial parameters transmitted from the encoding part. The encoding part includes an SAC encoder 210, an audio encoder 220, a spatial parameter quantizer 230, and a spatial parameter encoder 240. The decoding part includes an audio decoder 250, a spatial parameter decoder 260, a spatial parameter dequantizer 270, and an SAC decoder 280. The SAC encoder 210 generates a down-mix signal from the input N-channel audio signal and analyzes spatial characteristics of the N-channel audio signal, thereby extracting spatial parameters such as Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC).
Specifically, the N-channel (N > 1) signal input into the SAC encoder 210 is decomposed into frequency bands by means of an analysis filter bank. In order to split a signal into sub-bands of a frequency domain with low complexity, a quadrature mirror filter (QMF) is used. Spatial characteristics related to spatial perception are analyzed from sub-band signals, and spatial parameters such as CLD, ICC, and CPC are selectively extracted according to an encoding operation mode. Further, the sub-band signals are down-mixed and converted into a down-mix signal of a time domain by means of a QMF synthesis bank.
Alternatively, the down-mix signal may be replaced by a down-mix signal which is pre-produced by an acoustic engineer (or an artistic/hand-mixed down-mix signal). At this time, the SAC encoder 210 adjusts and transmits the spatial parameters on the basis of the pre-produced down-mix signal, thereby optimizing multi-channel restoration at the decoder.
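As an orientation aid, the following Python sketch outlines the encoder-side flow just described. It is a simplification under stated assumptions: an FFT stands in for the QMF analysis bank, the 20 equal-width band edges are arbitrary, the plain averaging down-mix is only a placeholder for the patent's down-mix processing, and the function and variable names are mine.

```python
# Rough encoder-side skeleton (not the patent's QMF implementation): band split,
# per-band channel powers for spatial-parameter extraction, and a mono down-mix.
import numpy as np

def encode_frame(left, right, n_bands=20):
    L = np.fft.rfft(left)                      # stand-in for QMF analysis
    R = np.fft.rfft(right)
    edges = np.linspace(0, len(L), n_bands + 1, dtype=int)
    band_powers = [
        (float(np.sum(np.abs(L[a:b]) ** 2)), float(np.sum(np.abs(R[a:b]) ** 2)))
        for a, b in zip(edges[:-1], edges[1:])
    ]
    downmix = 0.5 * (left + right)             # placeholder mono down-mix
    return downmix, band_powers                # powers feed CLD/ICC/VSLI extraction

frame = np.random.randn(2, 1024)
mix, powers = encode_frame(frame[0], frame[1])
```

The per-band channel powers collected here are the quantities from which the CLD, or equivalently the VSLI angle discussed below, is derived.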
The audio encoder 220 compresses the down-mix signal generated by the SAC encoder 210 or the artistic down-mix signal by using an existing audio compression technique (e.g. Moving Picture Experts Group (MPEG)-4, Advanced Audio Coding (AAC), MPEG-4 High Efficiency Advanced Audio Coding (HE-AAC), MPEG-4 Bit Sliced Arithmetic Coding (BSAC) etc.), thereby generating a compressed audio bitstream. Meanwhile, the spatial parameters generated by the SAC encoder 210 are transmitted after being quantized and encoded by the spatial parameter quantizer 230 and the spatial parameter encoder 240. The spatial parameter quantizer 230 is provided with a quantization table, which is to be used to quantize each of the CLD, ICC and CPC. As described below, in order to minimize sound deterioration caused by quantizing the CLD using an existing normalized CLD quantization table, a
Virtual Source Location Information (VSLI)-based CLD quantization table can be used in the spatial parameter quantizer 230.
The spatial parameter encoder 240 performs entropy encoding in order to compress the spatial parameters quantized by the spatial parameter quantizer 230, and preferably performs Huffman encoding on quantization indexes of the spatial parameters using a Huffman codebook. As described below, the present invention proposes a new Huffman codebook in order to maximize transmission efficiency of CLD quantization indexes. The audio decoder 250 decodes the audio bitstream compressed through the existing audio compression technique (e.g. MPEG-4, AAC, MPEG-4 HE-AAC, MPEG-4 BSAC, etc.).
The spatial parameter decoder 260 and the spatial parameter dequantizer 270 are modules for performing the inverse of the quantization and encoding performed by the spatial parameter quantizer 230 and the spatial parameter encoder 240. The spatial parameter decoder 260 decodes the encoded quantization indexes of the spatial parameters on the basis of the Huffman codebook, and the spatial parameter dequantizer 270 obtains the spatial parameters corresponding to the quantization indexes from the quantization table. In analogy to the quantization and encoding of the spatial parameters, the VSLI-based CLD quantization table and the Huffman codebook proposed in the present invention are used for the processes of decoding and dequantization of the spatial parameters.
The SAC decoder 280 restores the N multi-channel audio signals by synthesis of the audio bitstream decoded by the audio decoder 250 and the spatial parameters obtained by the spatial parameter dequantizer 270. Alternatively, when decoding of the multi-channel audio signals is impossible, only the down-mix signal can be decoded by using an existing audio decoder, so that independent service is possible. Therefore, the SAC system can provide compatibility with an existing mono or stereo audio coding system.
The present invention is concerned with providing CLD quantization and dequantization capable of minimizing sound deterioration resulting from quantization by utilizing advantages of the quantization of the VSLI representing a spatial audio image of the multi-channel audio signal. The present invention is based on the fact that, in expressing an azimuth angle of the spatial audio image, human ears have
difficulty in recognizing an error of 3° or less. The VSLI expressed with the azimuth
angle has a limited dynamic range of 90°, so that quantization error caused by limitation of the dynamic range upon quantization can be avoided. When the CLD quantization table is designed on the basis of the advantages of the quantization of the VSLI, sound deterioration resulting from the quantization can be minimized. FIGS. 3A and 3B are views for explaining a concept of VSLI serving as a reference of CLD quantization in accordance with the present invention. FIG. 3A illustrates a stereo speaker environment in which two speakers are located at an angle
of 60°, and FIG. 3B is a view in which a stereo audio signal in the stereo speaker
environment of FIG. 3A is represented by power of a down-mixed signal and by VSLI. As illustrated, the stereo or multi-channel audio signal can be represented by the magnitude vector of a down-mix audio signal and the VSLI that can be obtained by analyzing the power of each channel of the multi-channel audio signal. The multi-channel audio signal represented in this way can be restored by projecting the magnitude vector according to the location vector of a sound source.
As illustrated in FIGS. 3A and 3B, assuming that power of a signal of the left speaker is PL, power of a signal of the right speaker is PR, and angles of the left and right speakers are AL and AR respectively, the VSLI of the sound source can be found by Equations 1 and 2.
Equation 1
θ = arctan(√(PR / PL)), with θ expressed in degrees (0° ≤ θ ≤ 90°)
Equation 2
VSLI = θ × (AR - AL) / 90 + AL
The VSLI calculated in this way has a value between AL and AR. PL and PR can be restored from the VSLI as follows: First, the VSLI is mapped to a value, VSLI', between 0° and 90° using a Constant Power Panning (CPP) rule, as in
Equation 3.
Equation 3
VSLI' = (VSLI - AL) / (AR - AL) × 90
By using the VSLI' mapped in this way and power PD of the down-mixed signal, PL and PR are calculated using Equations 4 and 5.
Equation 4
PL = PD × (cos(VSLI'))²
Equation 5
PR = PD × (sin(VSLI'))²
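Read together, Equations 1 through 5 amount to the following Python sketch. The speaker azimuths AL = -30° and AR = +30° are an assumption meant to match the 60° stereo setup of FIG. 3A, and the function names are mine.

```python
# Sketch of Equations 1-5 (angles in degrees): VSLI from channel powers, and
# restoration of the channel powers from VSLI and the down-mix power.
import math

def vsli_from_powers(p_left, p_right, a_left=-30.0, a_right=30.0):
    theta = math.degrees(math.atan(math.sqrt(p_right / p_left)))  # Eq. 1: 0..90 deg
    return theta * (a_right - a_left) / 90.0 + a_left             # Eq. 2

def powers_from_vsli(vsli, p_downmix, a_left=-30.0, a_right=30.0):
    vsli_p = (vsli - a_left) / (a_right - a_left) * 90.0          # Eq. 3: VSLI'
    p_left = p_downmix * math.cos(math.radians(vsli_p)) ** 2      # Eq. 4
    p_right = p_downmix * math.sin(math.radians(vsli_p)) ** 2     # Eq. 5
    return p_left, p_right

v = vsli_from_powers(1.0, 1.0)       # equal powers -> 0 deg (center image)
print(v, powers_from_vsli(v, 2.0))   # ~0.0, (1.0, 1.0)
```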
As previously described, the subject matter of the present invention concerns applying the advantages of quantization of the VSLI to quantization of the spatial parameter, the CLD. In the stereo speaker environment of FIG. 3A, the CLD can be expressed as in Equation 6.
Equation 6
CLD = 10 log10(PR / PL)
The CLD can be derived from the VSLI according to Equation 7.
Equation 7
CLD = 20 log10(tan(VSLI')) = 20 log10(tan((VSLI - AL) / (AR - AL) × 90))
Further, as defined in Equation 8 below, the CLD can be obtained by taking the natural logarithm, instead of the base-10 logarithm, of the VSLI.
Equation 8
CLD = 20 loge(tan(VSLI')) = 20 loge(tan((VSLI - AL) / (AR - AL) × 90))
The CLD values obtained by Equations 7 and 8 can be directly used as spatial parameters of a general SAC system.
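The conversions of Equations 6 through 8 can be sketched as follows; as above, this is my own illustration, with VSLI' taken as the 0° to 90° panning angle and degrees used throughout.

```python
# Sketch of Equations 6-8: CLD from the panning angle VSLI', in either the
# base-10 or the natural-log variant, plus the inverse mapping back to VSLI'.
import math

def cld_from_vsli_p(vsli_p, natural_log=False):
    t = math.tan(math.radians(vsli_p))          # valid for VSLI' strictly between 0 and 90
    return 20.0 * (math.log(t) if natural_log else math.log10(t))

def vsli_p_from_cld(cld, natural_log=False):
    base = math.e if natural_log else 10.0
    return math.degrees(math.atan(base ** (cld / 20.0)))

print(cld_from_vsli_p(45.0))   # centered image -> 0 dB
print(vsli_p_from_cld(0.0))    # 0 dB -> 45 degrees
```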
As previously described, because the CLD has a dynamic range between -∞
and +∞, problems occur in performing quantization using a finite number of bits.
The main problem is quantization error caused by limitation of the dynamic range. Because the entire dynamic range of the CLD cannot be expressed with only a finite number of bits, the dynamic range of the CLD is limited to a predetermined level or less. As a result, quantization error is introduced, and the spectrum information is distorted. If 5 bits are used for the CLD quantization, the dynamic range of the CLD is limited to between -25 dB and +25 dB. In contrast, because the VSLI has a finite dynamic range of 90°, such
quantization error caused by limitation of the dynamic range upon quantization can
be avoided.
In one embodiment, upon quantization of the VSLI, if 5 bits are used for the CLD quantization and a linear quantizer is applied, the number of quantization levels
is 31 and a quantization interval is 3°. The validity of the VSLI quantization
approach can be verified from the fact that people fail to recognize a difference of 3°
or less when recognizing the spatial image of an audio signal.
When the advantages of this VSLI quantization are applied to the CLD quantization of the stereo coding method, the CLD quantization table used in the existing SAC system can be replaced by a VSLI-based quantization table.
In one embodiment, quantization values of the VSLI on which 5-bit linear
quantization is performed at a quantization interval of 3° and CLD conversion levels
corresponding to the VSLI quantization values are given in Table 1.
Table 1. VSLI Quantization Values and CLD Values
Further, a VSLI decision level for the VSLI quantization is decided by a middle value between neighboring quantization values. The middle value is converted into the CLD and used as a decision level of the CLD quantization. The VSLI-based CLD quantization decision level has a value other than the middle value between neighboring quantization values as seen in Table 2, unlike ordinary CLD quantization in which the decision level has the middle value between neighboring quantization values.
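The construction just described can be sketched as follows. The 31 quantization angles are assumed to be 0°, 3°, ..., 90° (one reading of the "31 levels at a 3° interval" statement), the decision levels are the midpoints of neighbouring angles, and both are converted to CLD with Equation 7 or 8; the boundary angles map to minus and plus infinity dB.

```python
# Sketch of building the VSLI-based CLD quantization table: 31 angles at a
# 3-degree spacing, midpoint decision angles, both converted to CLD values.
import math

def vsli_cld_table(natural_log=True):
    log = math.log if natural_log else math.log10
    def to_cld(angle_deg):
        if angle_deg <= 0.0:
            return float("-inf")               # tan(0) = 0
        if angle_deg >= 90.0:
            return float("inf")                # tan(90 deg) diverges
        return 20.0 * log(math.tan(math.radians(angle_deg)))
    angles = [3.0 * i for i in range(31)]                             # 0, 3, ..., 90
    decisions = [(a + b) / 2.0 for a, b in zip(angles[:-1], angles[1:])]
    return [to_cld(a) for a in angles], [to_cld(d) for d in decisions]

values, decisions = vsli_cld_table()
print(len(values), len(decisions))   # 31 quantization values, 30 decision levels
```

Because the CLD is a nonlinear (tangent plus logarithm) function of the angle, the converted decision levels are not midway between neighbouring CLD quantization values, which is exactly the property noted above.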
FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention. As illustrated, when
quantizing the VSLI at a uniform angle on the basis of 45°, the decision level
between the quantized angles is the middle value between two angles. However, when this VSLI decision level is converted into a CLD value, it can be found that the VSLI decision level has a value other than the middle value between two neighboring CLD values. Table 2 below lists the decision levels of the VSLI quantization and the corresponding CLD values. Table 2
Tables 3 through 7 below are VSLI-based CLD quantization tables created by using Tables 1 and 2, wherein Table 3 gives the CLD quantization values down to the fourth decimal place, Table 4 down to the third decimal place, Table 5 down to the second decimal place, Table 6 down to the first decimal place, and Table 7 to the integer. The CLD quantization value using the VSLI can be calculated by taking a base-10 logarithm or natural logarithm. When taking the natural logarithm, e rather than 10 is used as the base when spectrum information is restored by using the CLD value.
Table 3. VSLI-based CLD Quantization Table (Fourth Decimal Place)
Table 4. VSLI-based CLD Quantization Table (Third Decimal Place)
Table 5. VSLI-based CLD Quantization Table (Second Decimal Place)
Table 6. VSLI-based CLD Quantization Table (First Decimal Place)
Table 7. VSLI-based CLD Quantization Table (Integer)
Next, the decision levels on the VSLI-based CLD quantization tables classified by decimal place are given in Tables 8, 9, 10, 11 and 12. Table 8
VSLI-based CLD Quantization Decision Levels (Fourth Decimal Place)
Table 9
VSLI-based CLD Quantization Decision Levels (Third Decimal Place)
Table 10
VSLI-based CLD Quantization Decision Levels (Second Decimal Place)
Table 11
VSLI-based CLD Quantization Decision Levels (First Decimal Place)
Table 12
VSLI-based CLD Quantization Decision Levels (Integer)
As shown in Tables 7 and 12, when the CLD quantization values and the CLD quantization decision levels are expressed as integers by taking the base-10 logarithm, it can be seen that there is a problem that some of the CLD quantization values are identical to some of the CLD quantization decision levels. Hence, the CLD quantization values and decision levels using the natural logarithm are preferably used for actual quantization. In other words, when intending to use the VSLI-based CLD quantization table and the VSLI-based CLD quantization decision levels, both of which are expressed to the integer, the CLD quantization values are derived by taking the natural logarithm rather than the base-10 logarithm of the VSLI. The VSLI-based CLD quantization table created in this way is employed in the spatial parameter quantizer 230 and the spatial parameter dequantizer 270 of the SAC system illustrated in FIG. 2, so that sound deterioration resulting from the CLD quantization error can be minimized. Further, the present invention proposes a Huffman codebook capable of optimizing Huffman encoding of the CLD quantization indexes derived on the basis of the above-described VSLI-based CLD quantization table.
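A minimal sketch of how such a table could be used inside the spatial parameter quantizer 230 and dequantizer 270 follows; it rebuilds the natural-log table from the previous sketch inline, and the bisection-based index search is my own choice, not a procedure stated in the patent.

```python
# Sketch: quantize a CLD value against the VSLI-derived decision levels and
# dequantize an index by table lookup (natural-log variant, as preferred above).
import bisect
import math

def _to_cld(angle_deg):
    if angle_deg <= 0.0:
        return float("-inf")
    if angle_deg >= 90.0:
        return float("inf")
    return 20.0 * math.log(math.tan(math.radians(angle_deg)))

VALUES = [_to_cld(3.0 * i) for i in range(31)]           # quantization values
DECISIONS = [_to_cld(1.5 + 3.0 * i) for i in range(30)]  # decision levels

def quantize_cld(cld):
    return bisect.bisect_right(DECISIONS, cld)   # index 0..30 fits in 5 bits

def dequantize_cld(index):
    return VALUES[index]

idx = quantize_cld(3.7)
print(idx, dequantize_cld(idx))
```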
In the SAC system, the multi-channel audio signal is processed after being split into sub-bands of a frequency domain by means of a filter bank. When the multi-channel audio signal is split into 20 sub-bands, a differential coding method is applied to a quantization index of each sub-band, thereby classifying the quantization indexes into the quantization index of the first sub-band and the other 19 differential indexes between neighboring sub-bands. Alternatively, they may be divided into differential indexes between neighboring frames. A probability distribution is calculated with respect to each of the three types of indexes classified in this way, and then the Huffman coding method is applied to each of the three types of indexes. Thereby, Huffman codebooks described in Tables 13 and 14 below can be obtained. Table 13 is the Huffman codebook for the index of the first sub-band, and Table 14 is the Huffman codebook for the other indexes between neighboring sub-bands.
Table 13
Table 14
In this manner, the Huffman codebooks proposed in the present invention are employed in the spatial parameter encoder 240 and the spatial parameter decoder 260 of the SAC system illustrated in FIG. 2, so that a bit rate required to transmit the CLD quantization indexes can be reduced. Alternatively, when the number of bits used for Huffman encoding of the 20 sub-bands exceeds 100, 5-bit Pulse Code Modulation (PCM) coding can be performed on each sub-band.
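The index-coding strategy can be sketched as below. The codebooks shown are placeholders keyed by symbol value, not the Huffman codes of Tables 13 and 14, the frame-differential mode mentioned above is omitted, and the 100-bit threshold triggering the 5-bit PCM fallback follows the text.

```python
# Sketch of differential index coding with a per-class codebook and a PCM
# fallback: first sub-band coded absolutely, the rest as neighbour differences.
def encode_indices(indices, first_book, diff_book):
    diffs = [b - a for a, b in zip(indices[:-1], indices[1:])]
    bits = first_book[indices[0]] + "".join(diff_book[d] for d in diffs)
    if len(bits) > 100:                                   # fallback described above
        bits = "".join(format(i, "05b") for i in indices)
    return bits

# Placeholder codebooks (fixed-length here; real ones would be Huffman-optimal).
first_book = {i: format(i, "05b") for i in range(31)}
diff_book = {d: format(d & 0x3F, "06b") for d in range(-30, 31)}
print(encode_indices([15, 16, 16, 14], first_book, diff_book))
```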
[Industrial Applicability]
The present invention can be provided as a computer program stored on at least one computer-readable medium in the form of at least one product such as a floppy disk, hard disk, CD ROM, flash memory card, PROM, RAM, ROM, or magnetic tape. In general, the computer program can be written in any programming language such as C, C++, or JAVA.
While the invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

[CLAIMS]
[Claim 1]
A Channel Level Difference (CLD) quantization method for quantizing a CLD parameter used as a spatial parameter when Spatial Audio coding (SAC)-based encoding of an N-channel audio signal (N>1) is performed, the CLD quantization
method comprising the steps of: extracting CLDs for each sub-band from the N-channel audio signal; and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 2]
The CLD quantization method according to claim 1, wherein the VSLI quantization value is quantized at a predetermined quantization interval within a
range between 0° and 90°.
[Claim 3]
The CLD quantization method according to claim 2, wherein the
predetermined quantization interval is 3°.
[Claim 4] The CLD quantization method according to claim 1, wherein the CLD quantization values are derived from the VSLI quantization values according to the following equation:
[Claim 5]
The CLD quantization method according to claim 1, wherein the CLD quantization values are derived from the VSLI quantization values according to the following equation:
CLD = 20 loge(tan((VSLI - AL) / (AR - AL) × 90))
[Claim 6]
The CLD quantization method according to claim 1, wherein a decision level for the CLD quantization is derived from a VSLI decision level for VSLI quantization.
[Claim 7]
The CLD quantization method according to claim 1, wherein the VSLI-based
CLD quantization table is as follows:
[Claim 8]
The CLD quantization method according to claim 7, wherein the VSLI-based CLD quantization table is related to the CLD quantization decision levels as follows:
[Claim 9]
The CLD quantization method according to claim 1, further comprising the step of performing Huffman encoding on quantization indexes of the CLD.
[Claim 10] The CLD quantization method according to claim 9, wherein the Huffman encoding is performed on a quantization index of a first sub-band by reference to a Huffman codebook as follows:
[Claim 11 ]
The CLD quantization method according to claim 10, wherein the Huffman encoding is performed on quantization indexes of the remaining sub-bands other than the first sub-band by reference to a Huffman codebook as follows:
[Claim 12]
A computer-readable recording medium on which is recorded a computer program for performing the CLD quantization method according to any one of claims 1 through 11.
[Claim 13] A method for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC), the method comprising the steps of: down-mixing and encoding the N-channel audio signal; extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each sub-band, from the N-channel audio signal; and quantizing the extracted spatial parameters, wherein, in the step of quantizing the extracted spatial parameters, the CLD is quantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 14]
An apparatus for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC), the apparatus comprising: an SAC encoding means for down-mixing the N-channel audio signal to generate a down-mix signal, and extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each sub-band, from the N-channel audio signal; an audio encoding means for generating a compressed audio bitstream from the down-mix signal generated by the SAC encoding means; a spatial parameter quantizing means for quantizing the spatial parameters extracted by the SAC encoding means; and a spatial parameter encoding means for encoding the quantized spatial
parameters, wherein the spatial parameter quantizing means quantizes the CLD by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization
values of the N-channel audio signal.
[Claim 15]
The apparatus according to claim 14, wherein the VSLI-based CLD quantization table is as follows:
[Claim 16]
The apparatus according to claim 15, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
[Claim 17]
A method for dequantizing an encoded Channel Level Difference (CLD) quantization value when an encoded N-channel audio bitstream (N>1) is decoded based on Spatial Audio Coding (SAC), the method comprising the steps of:
performing Huffman decoding on the encoded CLD quantization value; and
dequantizing the decoded CLD quantization value by using a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
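Claim 17 is the mirror of the encoder side: Huffman-decode the received index, then map it back to a dB value through the VSLI-based table. The sketch below assumes the hypothetical first-sub-band prefix codebook from the encoding sketch above; the table lookup itself is a plain index-to-value mapping.

```python
# Inverse of the hypothetical first-band codebook used in the encoding sketch.
FIRST_BAND_DECODE = {"0": 0, "10": 1, "110": -1, "1110": 2, "1111": -2}

def dequantize_first_band(bitstring: str, cld_table: list) -> float:
    """Huffman-decode one first-sub-band CLD index and look up its dB value."""
    prefix = ""
    for bit in bitstring:
        prefix += bit
        if prefix in FIRST_BAND_DECODE:                    # prefix-free, so first hit wins
            index = FIRST_BAND_DECODE[prefix]
            return cld_table[index + len(cld_table) // 2]  # center index 0 in the table
    raise ValueError("incomplete codeword")

table = [-30.0, -15.0, -6.0, 0.0, 6.0, 15.0, 30.0]         # hypothetical coarse table
print(dequantize_first_band("110", table))                 # index -1 -> -6.0 dB
```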
[Claim 18]
The method according to claim 17, wherein the VSLI-based CLD quantization table is as follows:
[Claim 19]
The method according to claim 18, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
[Claim 20]
The method according to claim 17, wherein, in the step of performing Huffman decoding on the encoded CLD quantization value, the CLD quantization value of a first sub-band is decoded by reference to a Huffman codebook as follows:
[Claim 21]
The method according to claim 20, wherein the Huffman decoding is performed on the CLD quantization values of the remaining sub-bands other than the first sub-band by reference to a Huffman codebook as follows:
[Claim 22]
A computer-readable recording medium on which is recorded a computer program for performing the CLD dequantization method according to any one of claims 17 through 21.
[Claim 23]
A method for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC), the method comprising the steps of:
decoding the encoded N-channel audio bitstream;
dequantizing quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream; and
synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal,
wherein, in the step of dequantizing quantization values of at least one spatial parameter, a CLD included in the spatial parameter is dequantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
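Claim 23 restores the channels by applying the dequantized spatial parameters to the decoded down-mix. For a stereo pair the dequantized CLD alone fixes the per-band level split; the sketch below up-mixes a mono down-mix into two channels from per-band CLD values in dB, ignoring ICC/CPC and using a hypothetical power-preserving gain pair (g_l² + g_r² = 2). Names and constants are illustrative, not taken from the patent.

```python
import numpy as np

def upmix_from_cld(downmix: np.ndarray, cld_db_per_band: list) -> tuple:
    """Split a mono down-mix into left/right using per-band CLD values (dB).

    CLD = 10*log10(P_l/P_r); the gains are chosen so that g_l^2 + g_r^2 = 2,
    i.e. the pair carries the same total power as the 0 dB case.
    """
    bands = np.array_split(downmix, len(cld_db_per_band))
    left_parts, right_parts = [], []
    for band, cld in zip(bands, cld_db_per_band):
        ratio = 10.0 ** (cld / 10.0)                       # P_l / P_r
        g_r = np.sqrt(2.0 / (1.0 + ratio))
        g_l = np.sqrt(2.0 - g_r ** 2)
        left_parts.append(g_l * band)
        right_parts.append(g_r * band)
    return np.concatenate(left_parts), np.concatenate(right_parts)

l, r = upmix_from_cld(np.random.randn(1024), [6.0, 0.0, -6.0, 12.0])
print(l.shape, r.shape)
```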
[Claim 24]
An apparatus for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC), the apparatus comprising:
means for decoding the encoded N-channel audio bitstream;
means for decoding quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream;
means for dequantizing the quantization values of the spatial parameter; and
means for synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal,
wherein the means for dequantizing the quantization values of the spatial parameter dequantizes a CLD included in the spatial parameter by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 25]
The apparatus according to claim 24, wherein the VSLI-based CLD quantization table is as follows:
[Claim 26]
The apparatus according to claim 25, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
EP06783342A 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization Not-in-force EP1905034B1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20050065515 2005-07-19
KR20050096256 2005-10-12
KR1020060066822A KR100755471B1 (en) 2005-07-19 2006-07-18 Virtual source location information based channel level difference quantization and dequantization method
PCT/KR2006/002824 WO2007011157A1 (en) 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization method

Publications (3)

Publication Number Publication Date
EP1905034A1 true EP1905034A1 (en) 2008-04-02
EP1905034A4 EP1905034A4 (en) 2009-11-25
EP1905034B1 EP1905034B1 (en) 2011-06-01

Family

ID=37669008

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06783342A Not-in-force EP1905034B1 (en) 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization

Country Status (2)

Country Link
EP (1) EP1905034B1 (en)
WO (1) WO2007011157A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5098458B2 (en) * 2007-06-20 2012-12-12 カシオ計算機株式会社 Speech coding apparatus, speech coding method, and program
KR101613975B1 (en) 2009-08-18 2016-05-02 삼성전자주식회사 Method and apparatus for encoding multi-channel audio signal, and method and apparatus for decoding multi-channel audio signal
WO2011097903A1 (en) 2010-02-11 2011-08-18 华为技术有限公司 Multi-channel signal coding, decoding method and device, and coding-decoding system
CN102157151B (en) 2010-02-11 2012-10-03 华为技术有限公司 Encoding method, decoding method, device and system of multichannel signals
US9055371B2 (en) 2010-11-19 2015-06-09 Nokia Technologies Oy Controllable playback system offering hierarchical playback options
US9456289B2 (en) 2010-11-19 2016-09-27 Nokia Technologies Oy Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof
US9313599B2 (en) 2010-11-19 2016-04-12 Nokia Technologies Oy Apparatus and method for multi-channel signal playback
PL2740222T3 (en) 2011-08-04 2015-08-31 Dolby Int Ab Improved fm stereo radio receiver by using parametric stereo
EP2834995B1 (en) 2012-04-05 2019-08-28 Nokia Technologies Oy Flexible spatial audio capture apparatus
WO2014162171A1 (en) 2013-04-04 2014-10-09 Nokia Corporation Visual audio processing apparatus
EP2997573A4 (en) 2013-05-17 2017-01-18 Nokia Technologies OY Spatial object oriented audio apparatus
GB2575632A (en) * 2018-07-16 2020-01-22 Nokia Technologies Oy Sparse quantization of spatial audio parameters
GB2593672A (en) * 2020-03-23 2021-10-06 Nokia Technologies Oy Switching between audio instances

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016473A (en) * 1998-04-07 2000-01-18 Dolby; Ray M. Low bit-rate spatial coding method and system
US20030035553A1 (en) * 2001-08-10 2003-02-20 Frank Baumgarte Backwards-compatible perceptual coding of spatial cues
ATE426235T1 (en) * 2002-04-22 2009-04-15 Koninkl Philips Electronics Nv DECODING DEVICE WITH DECORORATION UNIT
JP4212591B2 (en) * 2003-06-30 2009-01-21 富士通株式会社 Audio encoding device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FALLER C ET AL: "BINAURAL CUE CODING APPLIED TO STEREO AND MULTI-CHANNEL AUDIO COMPRESSION" PREPRINTS OF PAPERS PRESENTED AT THE AES CONVENTION, XX, XX, vol. 112, no. 5574, 10 May 2002 (2002-05-10), XP009024737 *
JEONGIL SEO ET AL: "A New Cue Parameter for Spatial Audio Coding" JOINT VIDEO TEAM (JVT) OF ISO/IEC MPEG & ITU-T VCEG(ISO/IEC JTC1/SC29/WG11 AND ITU-T SG16 Q6), XX, XX, no. M11264, 13 October 2004 (2004-10-13), XP030040038 *
KWANKI KIM ET AL: "Improved Channel Level Difference Quantization for Spatial Audio Coding" ETRI JOURNAL vol. 29, no. 1, February 2007 (2007-02), pages 99-102, XP002549876 Retrieved from the Internet: URL:http://etrij.etri.re.kr/Cyber/servlet/BrowseAbstract?paperid=LP0608-0149> [retrieved on 2009-10-12] *
See also references of WO2007011157A1 *
VAN DER WAAL R G ET AL: "Subband coding of stereophonic digital audio signals" SPEECH PROCESSING 1. TORONTO, MAY 14 - 17, 1991; [INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH & SIGNAL PROCESSING. ICASSP], NEW YORK, IEEE, US, vol. CONF. 16, 14 April 1991 (1991-04-14), pages 3601-3604, XP010043648 ISBN: 9780780300033 *

Also Published As

Publication number Publication date
EP1905034B1 (en) 2011-06-01
EP1905034A4 (en) 2009-11-25
WO2007011157A1 (en) 2007-01-25

Similar Documents

Publication Publication Date Title
WO2007011157A1 (en) Virtual source location information based channel level difference quantization and dequantization method
US7620554B2 (en) Multichannel audio extension
KR101664434B1 (en) Method of coding/decoding audio signal and apparatus for enabling the method
US20210090581A1 (en) Energy lossless-encoding method and apparatus, audio encoding method and apparatus, energy lossless-decoding method and apparatus, and audio decoding method and apparatus
JP4521032B2 (en) Energy-adaptive quantization for efficient coding of spatial speech parameters
JP4685165B2 (en) Interchannel level difference quantization and inverse quantization method based on virtual sound source position information
KR100954179B1 (en) Near-transparent or transparent multi-channel encoder/decoder scheme
US7627480B2 (en) Support of a multichannel audio extension
KR101428487B1 (en) Method and apparatus for encoding and decoding multi-channel
RU2439718C1 (en) Method and device for sound signal processing
US7848931B2 (en) Audio encoder
EP2665294A2 (en) Support of a multichannel audio extension
USRE46082E1 (en) Method and apparatus for low bit rate encoding and decoding
US20110046946A1 (en) Encoder, decoder, and the methods therefor
US20080252510A1 (en) Method and Apparatus for Encoding/Decoding Multi-Channel Audio Signal
JPWO2006003891A1 (en) Speech signal decoding apparatus and speech signal encoding apparatus
KR20080044707A (en) Method and apparatus for encoding and decoding audio/speech signal
WO2002103685A1 (en) Encoding apparatus and method, decoding apparatus and method, and program
WO1995032499A1 (en) Encoding method, decoding method, encoding-decoding method, encoder, decoder, and encoder-decoder
US20110137661A1 (en) Quantizing device, encoding device, quantizing method, and encoding method
US20100114568A1 (en) Apparatus for processing an audio signal and method thereof
US7181079B2 (en) Time signal analysis and derivation of scale factors
US20080243489A1 (en) Multiple stream decoder
KR20070027669A (en) Low bitrate encoding/decoding method and apparatus
KR20140037118A (en) Method of processing audio signal, audio encoding apparatus, audio decoding apparatus and terminal employing the same

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080110

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20091026

17Q First examination report despatched

Effective date: 20100108

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/00 20060101AFI20101125BHEP

Ipc: G10L 19/02 20060101ALI20101125BHEP

RTI1 Title (correction)

Free format text: VIRTUAL SOURCE LOCATION INFORMATION BASED CHANNEL LEVEL DIFFERENCE QUANTIZATION AND DEQUANTIZATION

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602006022287

Country of ref document: DE

Effective date: 20110714

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110912

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110902

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111003

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110731

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20120330

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110731

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110731

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110801

26N No opposition filed

Effective date: 20120302

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20110901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602006022287

Country of ref document: DE

Effective date: 20120302

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110719

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110901

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110719

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110901

REG Reference to a national code

Ref country code: DE

Ref legal event code: R088

Ref document number: 602006022287

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110601

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20160614

Year of fee payment: 11

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602006022287

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180201