WO2007011157A1 - Virtual source location information based channel level difference quantization and dequantization method - Google Patents


Info

Publication number
WO2007011157A1
Authority
WO
WIPO (PCT)
Application number
PCT/KR2006/002824
Other languages
French (fr)
Inventor
Jeong Il Seo
Kyeong Ok Kang
Jin Woo Hong
Kwang Ki Kim
Seung Kwon Beack
Min Soo Hahn
Sang Bae Chon
Koeng Mo Sung
Original Assignee
Electronics And Telecommunications Research Institute
Priority claimed from KR1020060066822A external-priority patent/KR100755471B1/en
Application filed by Electronics and Telecommunications Research Institute
Priority to AT06783342T priority Critical patent/ATE511691T1/en
Priority to JP2008522700A priority patent/JP4685165B2/en
Priority to EP06783342A priority patent/EP1905034B1/en
Priority to CN2006800259842A priority patent/CN101223598B/en
Publication of WO2007011157A1 publication Critical patent/WO2007011157A1/en

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02: Speech or audio signals analysis-synthesis techniques using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032: Quantisation or dequantisation of spectral components
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • the VSLI-based CLD quantization table created according to the present invention can replace the CLD quantization table used in an existing SAC system.
  • by using the VSLI-based CLD quantization table according to the present invention, sound deterioration can be prevented as much as possible.
  • by using the Huffman codebook proposed in the present invention for compressing CLD indexes, it is possible to reduce the bit rate required to transmit the CLD.
  • FIGS. 1A and 1B conceptually illustrate a process of extracting Channel Level Difference (CLD);
  • FIG. 2 schematically illustrates a configuration of a spatial audio coding (SAC) system to which the present invention is to be applied;
  • FIGS. 3A and 3B are views for explaining a concept of VSLI serving as a reference of CLD quantization in accordance with the present invention
  • FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention.
  • FIG. 2 schematically illustrates a configuration of a spatial audio coding (SAC) system to which the present invention is to be applied.
  • the SAC system can be divided into an encoding part, which generates, encodes, and transmits a down-mix signal and spatial parameters from an N-channel audio signal, and a decoding part, which restores the N-channel audio signal from the down-mix signal and spatial parameters transmitted from the encoding part.
  • the encoding part includes an SAC encoder 210, an audio encoder 220, a spatial parameter quantizer 230, and a spatial parameter encoder 240.
  • the decoding part includes an audio decoder 250, a spatial parameter decoder 260, a spatial parameter dequantizer 270, and an SAC decoder 280.
  • the SAC encoder 210 generates a down-mix signal from the input N- channel audio signal and analyzes spatial characteristics of the N-channel audio signal, thereby extracting spatial parameters such as Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC).
  • the N-channel (N > 1) signal input into the SAC encoder 210 is decomposed into frequency bands by means of an analysis filter bank; typically, a quadrature mirror filter (QMF) bank is used. Spatial characteristics related to spatial perception are analyzed from the sub-band signals, and spatial parameters such as CLD, ICC, and CPC are selectively extracted according to the encoding operation mode. Further, the sub-band signals are down-mixed and converted into a down-mix signal of the time domain by means of a QMF synthesis bank.
  • the down-mix signal may be replaced by a down-mix signal which is pre-produced by an acoustic engineer (or an artistic/hand-mixed down-mix signal).
  • the SAC encoder 210 adjusts and transmits the spatial parameters on the basis of the pre-produced down-mix signal, thereby optimizing multi-channel restoration at the decoder.
  • the audio encoder 220 compresses the down-mix signal generated by the SAC encoder 210 or the artistic down-mix signal by using an existing audio compression technique (e.g. Moving Picture Experts Group (MPEG)-4, Advanced Audio Coding (AAC), MPEG-4 High Efficiency Advanced Audio Coding (HE-AAC), MPEG-4 Bit Sliced Arithmetic Coding (BSAC), etc.), thereby generating a compressed audio bitstream.
  • the spatial parameter quantizer 230 is provided with a quantization table, which is to be used to quantize each of the CLD, ICC, and CPC. As described below, in order to minimize sound deterioration caused by quantizing the CLD using an existing normalized CLD quantization table, a Virtual Source Location Information (VSLI)-based CLD quantization table can be used in the spatial parameter quantizer 230.
  • the spatial parameter encoder 240 performs entropy encoding in order to compress the spatial parameters quantized by the spatial parameter quantizer 230, and preferably performs Huffman encoding on quantization indexes of the spatial parameters using a Huffman codebook. As described below, the present invention proposes a new Huffman codebook in order to maximize transmission efficiency of CLD quantization indexes.
  • the audio decoder 250 decodes the audio bitstream compressed through the existing audio compression technique (e.g. MPEG-4, AAC, MPEG-4 HE-AAC, MPEG-4 BSAC, etc.).
  • the spatial parameter decoder 260 and the spatial parameter dequantizer 270 are modules for performing the inverse of the quantization and encoding performed by the spatial parameter quantizer 230 and the spatial parameter encoder 240.
  • the spatial parameter decoder 260 decodes the encoded quantization indexes of the spatial parameters on the basis of the Huffman codebook, and the spatial parameter dequantizer 270 obtains the spatial parameters corresponding to the quantization indexes from the quantization table.
  • the VSLI-based CLD quantization table and the Huffman codebook proposed in the present invention are used for the processes of decoding and dequantization of the spatial parameters.
  • the SAC decoder 280 restores the N multi-channel audio signals by synthesis of the audio bitstream decoded by the audio decoder 250 and the spatial parameters obtained by the spatial parameter dequantizer 270.
  • the SAC system can provide compatibility with an existing mono or stereo audio coding system.
  • the present invention is concerned with providing CLD quantization capable of minimizing sound deterioration resulting from quantization, by utilizing the advantages of the quantization of the VSLI representing a spatial audio image of the multi-channel audio signal.
  • the present invention is based on the fact that, in expressing an azimuth angle of the spatial audio image, human ears have a limited angular resolution.
  • FIGS. 3 A and 3B are views for explaining a concept of VSLI serving as a reference of CLD quantization in accordance with the present invention.
  • FIG. 3A illustrates a stereo speaker environment in which two speakers are located at angles A_L and A_R.
  • FIG. 3B is a view in which a stereo audio signal in the stereo speaker environment is represented by a virtual source location vector.
  • the stereo or multi-channel audio signal can be represented by the magnitude vector of a down-mix audio signal and the VSLI, which can be obtained by analyzing the power of each channel of the multi-channel audio signal.
  • the multichannel audio signal represented in this way can be restored by projecting the magnitude vector according to the location vector of a sound source.
  • Equation 1
  • the VSLI calculated in this way has a value between A_L and A_R.
  • P_L and P_R can be restored from the VSLI as follows: first, the VSLI is mapped to a value VSLI', between 0° and 90°, using a Constant Power Panning (CPP) rule. By using the VSLI' mapped in this way and the power P_D of the down-mixed signal, P_L and P_R are calculated using Equations 4 and 5.
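Since Equations 4 and 5 are not reproduced in this text, the restoration step can only be sketched under an assumption: the conventional CPP gains gL = cos(theta) and gR = sin(theta), so that the two channel powers sum to the down-mix power. The function name and signature below are illustrative, not part of the patent.

```python
import math

def restore_powers(vsli_prime_deg, p_d):
    """Restore P_L and P_R from VSLI' (degrees, 0..90) and down-mix power P_D,
    assuming the standard Constant Power Panning gains gL = cos(theta) and
    gR = sin(theta), so that P_L + P_R = P_D."""
    theta = math.radians(vsli_prime_deg)
    p_l = p_d * math.cos(theta) ** 2
    p_r = p_d * math.sin(theta) ** 2
    return p_l, p_r

# At 45 degrees the virtual source is centered and the power splits evenly.
p_l, p_r = restore_powers(45.0, 2.0)
print(round(p_l, 6), round(p_r, 6))  # 1.0 1.0
```

Note that under this assumption the restored powers always sum exactly to P_D, which is the defining property of constant power panning.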
  • the subject matter of the present invention concerns applying the advantages of quantization of the VSLI to quantization of the spatial parameter CLD.
  • the CLD can be expressed as in Equation 6.
  • the CLD can be derived from the VSLI according to Equation 7.
  • alternatively, as in Equation 8, the CLD can be obtained by taking the natural logarithm, instead of the base-10 logarithm, of the VSLI.
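Equations 6 through 8 are not preserved in this extraction, so the conversion can only be illustrated under an assumption: with Constant Power Panning, P_L/P_R = cot(theta)^2, which gives CLD = 10*log(P_L/P_R) in either log base. The function below is a hedged sketch of that relation, not the patent's exact formula.

```python
import math

def vsli_to_cld(theta_deg, natural=False, eps=1e-9):
    """Convert a VSLI angle (degrees, 0..90) to a CLD, assuming the CPP
    relation P_L / P_R = cot(theta)**2.  With natural=True the natural
    logarithm replaces the base-10 logarithm."""
    theta = math.radians(min(max(theta_deg, eps), 90.0 - eps))
    ratio = (math.cos(theta) / math.sin(theta)) ** 2   # assumed P_L / P_R
    log = math.log if natural else math.log10
    return 10.0 * log(ratio)

print(round(vsli_to_cld(45.0), 6))  # 0.0 (equal power, centered source)
print(vsli_to_cld(30.0) > 0)        # True (source panned toward the reference channel)
```

The clamp by `eps` reflects the text's observation that the VSLI range is finite while the CLD range is infinite: the end points 0° and 90° would map to infinite CLDs.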
  • the CLDs obtained by Equations 7 and 8 can be directly used as spatial parameters of a general SAC system.
  • although the CLD has a dynamic range between -∞ and +∞, the main problem is quantization error caused by limitation of the dynamic range. Because the entire dynamic range of the CLD cannot be expressed with only a finite number of bits, the dynamic range of the CLD is limited to a predetermined level or less. As a result, quantization error is introduced, and the spectrum information is distorted. If 5 bits are used for the CLD quantization, the dynamic range of the CLD is limited to between -25 dB and +25 dB. In contrast, because the VSLI has a finite dynamic range of 90°, such a limitation of the dynamic range is not needed.
  • if 5 bits are used for the quantization and a linear quantizer is applied over the finite 90° range of the VSLI, the available quantization levels can cover the entire range without clipping.
  • when the advantages of this VSLI quantization are applied to the CLD quantization of the stereo coding method, the CLD quantization table used in the existing SAC system can be replaced by a VSLI-based quantization table.
  • quantization is performed at a quantization interval of 3°, and CLD conversion levels corresponding to the VSLI quantization values are obtained (see Table 1).
  • a VSLI decision level for the VSLI quantization is decided by a middle value between neighboring quantization values.
  • the middle value is converted into the CLD and used as a decision level of the CLD quantization.
  • the VSLI-based CLD quantization decision level therefore has a value other than the middle value between neighboring quantization values, as seen in Table 2, unlike ordinary CLD quantization in which the decision level is the middle value between neighboring quantization values.
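The construction of decision levels in the angle domain can be sketched as follows. The 3° interval comes from the text above; the end-point angles (pulled in from 0° and 90°, which would map to infinite CLDs) and the CPP-based angle-to-CLD conversion are illustrative assumptions, so the resulting numbers are not the patent's Tables 1 and 2.

```python
import math

def cld_from_angle(theta_deg):
    # Assumed CPP relation: CLD = 10 * log10(cot(theta)**2).
    t = math.radians(theta_deg)
    return 10.0 * math.log10((math.cos(t) / math.sin(t)) ** 2)

# Assumed VSLI quantization values on a 3-degree grid; the end points are
# pulled slightly inward since 0 and 90 degrees have no finite CLD.
angles = [1.5] + [3.0 * k for k in range(1, 30)] + [88.5]
cld_values = [cld_from_angle(a) for a in angles]

# Decision levels: midpoints between neighboring quantization values in the
# ANGLE domain, then converted to CLD. Because the conversion is nonlinear,
# these are NOT midpoints in the CLD domain.
decision_angles = [(a + b) / 2.0 for a, b in zip(angles, angles[1:])]
decision_clds = [cld_from_angle(a) for a in decision_angles]

mid_cld = (cld_values[0] + cld_values[1]) / 2.0
print(abs(decision_clds[0] - mid_cld) > 0.1)  # True: decision level != CLD midpoint
```

This makes the bullet above concrete: taking the midpoint in the angle domain and converting it yields a decision level that differs from the midpoint of the neighboring CLD quantization values.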
  • FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention.
  • Tables 3 through 7 below are VSLI-based CLD quantization tables created by using Tables 1 and 2, wherein Table 3 gives the CLD quantization values down to the fourth decimal place, Table 4 down to the third decimal place, Table 5 down to the second decimal place, Table 6 down to the first decimal place, and Table 7 to integer precision.
  • the CLD quantization value using the VSLI can be calculated by taking a base-10 logarithm or a natural logarithm. When the natural logarithm is taken, e rather than 10 is used as the base when spectrum information is restored by using the CLD value.
  • when the CLD quantization values and the CLD quantization decision levels are expressed as integers by taking the base-10 logarithm, it can be seen that some of the CLD quantization values are identical to some of the CLD quantization decision levels, which is a problem.
  • the CLD quantization values and decision levels using the natural logarithm are preferably used for actual quantization.
  • the CLD quantization values are derived by taking the natural logarithm rather than the base-10 logarithm of the VSLI.
  • the VSLI-based CLD quantization table created in this way is employed in the spatial parameter quantizer 230 and the spatial parameter dequantizer 270 of the SAC system illustrated in FIG. 2, so that sound deterioration resulting from the CLD quantization error can be minimized. Further, the present invention proposes a Huffman codebook capable of optimizing Huffman encoding of the CLD quantization indexes derived on the basis of the above-described VSLI-based CLD quantization table.
  • the multi-channel audio signal is processed after being split into sub-bands of a frequency domain by means of a filter bank.
  • a differential coding method is applied to the quantization index of each sub-band, thereby classifying the quantization indexes into the quantization index of the first sub-band and the other 19 differential indexes between neighboring sub-bands.
  • they may be divided into differential indexes between neighboring frames.
  • a probability distribution is calculated with respect to each of the three types of indexes classified in this way, and then the Huffman coding method is applied to each of the three types of indexes.
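The index classification described above can be sketched as follows. The Huffman codebooks themselves (Tables 13 and 14) are not reproduced in this text, so the sketch covers only the differential split and its inverse; the function names and the 20-sub-band frame size are taken from the surrounding description.

```python
def to_differential(indexes):
    """Split a frame's sub-band quantization indexes into the first sub-band's
    absolute index plus the differences between neighboring sub-bands, as in
    the differential coding step described above."""
    first = indexes[0]
    diffs = [b - a for a, b in zip(indexes, indexes[1:])]
    return first, diffs

def from_differential(first, diffs):
    """Inverse of to_differential: accumulate the differences."""
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

idx = [15, 15, 16, 14, 14, 13]           # toy frame (6 sub-bands for brevity)
first, diffs = to_differential(idx)
print(first, diffs)                      # 15 [0, 1, -2, 0, -1]
print(from_differential(first, diffs) == idx)  # True
```

The point of the split is visible even in this toy frame: neighboring sub-bands are correlated, so the differences cluster around zero and are cheaper to Huffman-code than the absolute indexes.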
  • Table 13 is the Huffman codebook for the index of the first sub-band, and Table 14 is the Huffman codebook for the other indexes between neighboring sub-bands.
  • the Huffman codebooks proposed in the present invention are employed in the spatial parameter encoder 240 and the spatial parameter decoder 260 of the SAC system illustrated in FIG. 2, so that the bit rate required to transmit the CLD quantization indexes can be reduced.
  • 5-bit Pulse Code Modulation (PCM) coding can be performed on each sub-band.
  • the present invention can be provided as a computer program stored on at least one computer-readable medium in the form of at least one product such as a floppy disk, hard disk, CD-ROM, flash memory card, PROM, RAM, ROM, or magnetic tape.
  • the computer program can be written in any programming language such as C, C++, or JAVA.


Abstract

Methods for Spatial Audio Coding (SAC) of a multi-channel audio signal and decoding of an audio bitstream generated by the SAC are provided. More particularly, methods of efficient quantization and dequantization of Channel Level Difference (CLD) used as a spatial parameter when SAC-based encoding of a multi-channel audio signal is performed are provided. A method of CLD quantization includes extracting sub-band-specific CLDs from an N-channel audio signal (N>1), and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.

Description

[DESCRIPTION]
[Invention Title]
VIRTUAL SOURCE LOCATION INFORMATION BASED CHANNEL LEVEL DIFFERENCE QUANTIZATION AND DEQUANTIZATION METHOD
[Technical Field]
The present invention relates to Spatial Audio Coding (SAC) of a multi-channel audio signal and decoding of an audio bitstream generated by the SAC, and more particularly, to efficient quantization and dequantization of Channel Level Difference (CLD) used as a spatial parameter when SAC-based encoding of a multi-channel audio signal is performed.
[Background Art]
Spatial Audio Coding (SAC) is technology for efficiently compressing a multi-channel audio signal while maintaining compatibility with an existing stereo audio system. In the Moving Picture Experts Group (MPEG), SAC technology has been standardized under the name "MPEG Surround" since 2002, and is described in detail in the ISO/IEC working document ISO/IEC CD 14996-x (published on February 18, 2005 and hereinafter referred to as the "SAC standard document").
Specifically, the SAC approach is an encoding approach for improving transmission efficiency by encoding N number of multi-channel audio signals (N>2) using both a down-mix signal, which is mixed into mono or stereo, and a set of ancillary spatial parameters, which represent a human perceptual characteristic of the multi-channel audio signal. The spatial parameters can include Channel Level Difference (CLD) representing a level difference between two channels according to time-frequency, Inter-channel Correlation/Coherence (ICC) representing correlation or coherence between two channels according to time-frequency, Channel Prediction Coefficient (CPC) for making it possible to reproduce a third channel from two channels by prediction, and so on.
The CLD is a core element in restoring the power gain of each channel, and is extracted in various ways in the process of SAC encoding. As illustrated in FIG. 1A, on the basis of one reference channel, the CLD is expressed by a power ratio of the reference channel to each of the other channels. For example, if there are six channel signals L, R, C, LFE, Ls and Rs, five power ratios can be obtained based on one reference channel, and CLD1 through CLD5 correspond to levels obtained by applying a base-10 logarithm to each of the five power ratios.
Meanwhile, as illustrated in FIG. 1B, a multi-channel signal is divided into a plurality of channel pairs, each of the channel pairs is analyzed as a stereo pair, and, in each analysis step, one CLD value is extracted. This is carried out by step-by-step use of a plurality of One-To-Two (OTT) modules, each of which takes two input channels to one output channel. In each OTT module, one of the input stereo signals is recognized as a reference channel, and the base-10 logarithmic value of the power ratio of the reference channel to the other channel is output as a CLD value.
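The per-band power-ratio definition above can be made concrete with a short sketch. The function name and the choice of the left channel as the reference are illustrative assumptions; the patent does not specify an API, only the 10*log10 power-ratio relation.

```python
import numpy as np

def cld_per_band(ref_band, other_band, eps=1e-12):
    """CLD for one analysis band: 10*log10 of the power ratio of the
    reference channel to the other channel, as described above.
    eps guards against log of zero for silent bands."""
    p_ref = np.sum(np.abs(ref_band) ** 2) + eps
    p_other = np.sum(np.abs(other_band) ** 2) + eps
    return 10.0 * np.log10(p_ref / p_other)

# Equal-power bands give a CLD of 0 dB; a 10x power advantage gives +10 dB.
l = np.ones(64)
r = np.ones(64)
print(round(cld_per_band(l, r)))                   # 0
print(round(cld_per_band(l * np.sqrt(10.0), r)))   # 10
```

One OTT analysis step would apply this function to each sub-band of a channel pair, emitting one CLD value per band.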
The CLD value has a dynamic range between -∞ and +∞. Hence, to express
the CLD value with a finite number of bits, efficient quantization is required. Typically, CLD quantization is performed by using a normalized quantization table. An example of such a quantization table is given in the SAC standard document (see page 41, Table 57). In this manner, because all CLD values cannot be expressed with only a finite number of bits, the dynamic range of the CLD value is limited to a predetermined level or less. Thereby, quantization error is introduced, and thus spectrum information is distorted. For example, when 5 bits are used for the CLD quantization, the dynamic range of the CLD value will be limited to the range between -25 dB and +25 dB.
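The range limitation can be illustrated with a toy quantizer. Note the actual table in the SAC standard document is non-uniform; the uniform grid below is a simplified stand-in chosen only to show the clipping error at the +/-25 dB boundary.

```python
import numpy as np

def quantize_cld_uniform(cld_db, n_bits=5, max_abs=25.0):
    """Clamp the CLD to [-max_abs, +max_abs] and map it to the nearest of
    2**n_bits - 1 uniformly spaced levels (a simplified stand-in for the
    standard's normalized, non-uniform table)."""
    n_levels = 2 ** n_bits - 1                  # 31 levels for 5 bits
    levels = np.linspace(-max_abs, max_abs, n_levels)
    clipped = np.clip(cld_db, -max_abs, max_abs)
    idx = int(np.argmin(np.abs(levels - clipped)))
    return idx, float(levels[idx])

# A 40 dB CLD is clipped to the 25 dB edge: the range limitation alone
# introduces 15 dB of error before any rounding error is considered.
idx, q = quantize_cld_uniform(40.0)
print(idx, q)  # 30 25.0
```

This is exactly the distortion the VSLI-based approach of the invention avoids, since the VSLI's 90° range is finite and needs no clipping.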
[Disclosure]
[Technical Problem]
The present invention is directed to Channel Level Difference (CLD) quantization and dequantization methods capable of minimizing sound deterioration in the process of Spatial Audio Coding (SAC)-based encoding of a multi-channel audio signal. The present invention is also directed to CLD quantization and dequantization methods capable of minimizing sound deterioration by using the advantages of quantization of Virtual Source Location Information (VSLI), which is replaceable with the CLD, in the process of SAC-based encoding of a multi-channel audio signal.
In addition, the present invention is directed to improving quality of sound without additional complexity by providing a VSLI-based CLD quantization table, which can be replaced by a CLD quantization table used for CLD quantization and dequantization in a Moving Picture Experts Group (MPEG)-4 SAC system.
[Technical Solution]
A first aspect of the present invention provides a method for quantizing a Channel Level Difference (CLD) parameter used as a spatial parameter when Spatial Audio Coding (SAC)-based encoding of an N-channel audio signal (N>1) is performed. The CLD quantization method comprises the steps of extracting CLDs for each band from the N-channel audio signal, and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization
table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A second aspect of the present invention provides a computer-readable recording medium on which is recorded a computer program for performing the CLD quantization method. A third aspect of the present invention provides a method for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC). The method comprises the steps of down-mixing and encoding the N-channel audio signal, extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each band, from the N-channel audio signal, and quantizing the extracted spatial parameters. In the step of quantizing the extracted spatial parameters, the CLD is quantized by reference to a VSLI-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A fourth aspect of the present invention provides an apparatus for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC). The apparatus comprises an SAC encoding means down-mixing the N-channel audio signal to generate a down-mix signal and extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each band, from the N-channel audio signal, an audio encoding means generating a compressed audio bitstream from the down-mix signal generated by the SAC encoding means, a spatial parameter quantizing means quantizing the spatial parameters extracted by the SAC encoding means, and a spatial parameter encoding means encoding the quantized spatial parameter levels. The spatial parameter quantizing means quantizes the CLD by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A fifth aspect of the present invention provides a method for dequantizing an encoded Channel Level Difference (CLD) quantization value when an encoded N-channel audio bitstream (N>1) is decoded based on Spatial Audio Coding (SAC). The CLD dequantization method comprises the steps of performing Huffman decoding on the encoded CLD quantization value, and dequantizing the decoded CLD quantization value by using a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
A sixth aspect of the present invention provides a computer-readable recording medium on which is recorded a computer program for performing the CLD dequantization method.
A seventh aspect of the present invention provides a method for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC). The method comprises the steps of decoding the encoded N-channel audio bitstream, dequantizing a quantization value of at least one spatial parameter received together with the encoded N-channel audio bitstream, and synthesizing the decoded N- channel audio bitstream based on the dequantized spatial parameter to restore an N- channel audio signal. In the step of dequantizing a quantization value of at least one spatial parameter, a Channel Level Difference (CLD) included in the spatial parameter is dequantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
An eighth aspect of the present invention provides an apparatus for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC). The apparatus comprises means for decoding the encoded N-channel audio bitstream, means for decoding quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream, means for dequantizing the quantization values of the spatial parameter, and means for synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal. The means for dequantizing the quantization value of the spatial parameter dequantizes a Channel Level Difference (CLD) included in the spatial parameter by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Advantageous Effects]
The VSLI-based CLD quantization table created according to the present invention can replace the CLD quantization table used in an existing SAC system. By using the VSLI-based CLD quantization table according to the present invention, sound deterioration can be minimized. In addition, by using the Huffman codebook proposed in the present invention to compress CLD indexes, the bit rate required to transmit the CLD can be reduced.
[Description of Drawings]
FIGS. 1A and 1B conceptually illustrate a process of extracting Channel Level Difference (CLD) values from multi-channel signals;
FIG. 2 schematically illustrates a configuration of a spatial audio coding (SAC) system to which the present invention is to be applied;
FIGS. 3A and 3B are views for explaining a concept of VSLI serving as a reference of CLD quantization in accordance with the present invention; and FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention.
[Mode for Invention]
Hereinafter, exemplary embodiments of the present invention will be described in detail. However, the present invention is not limited to the exemplary embodiments disclosed below, but can be implemented in various forms. Therefore,
these exemplary embodiments are provided for complete disclosure of the present invention and to fully convey the scope of the present invention to those of ordinary skill in the art. FIG. 2 schematically illustrates a configuration of a spatial audio coding
(SAC) system to which the present invention is to be applied. As illustrated, the SAC system can be divided into an encoding part for generating, encoding and transmitting a down-mix signal and spatial parameters from an N-channel audio signal, and a decoding part for restoring the N-channel audio signal from the down-mix signal and spatial parameters transmitted from the encoding part. The encoding part includes an SAC encoder 210, an audio encoder 220, a spatial parameter quantizer 230, and a spatial parameter encoder 240. The decoding part includes an audio decoder 250, a spatial parameter decoder 260, a spatial parameter dequantizer 270, and an SAC decoder 280. The SAC encoder 210 generates a down-mix signal from the input N-channel audio signal and analyzes spatial characteristics of the N-channel audio signal, thereby extracting spatial parameters such as Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC).
Specifically, the N-channel (N > 1) multi-channel signal input into the SAC encoder 210 is decomposed into frequency bands by means of an analysis filter bank. In order to split a signal into sub-bands of a frequency domain with low complexity, a quadrature mirror filter (QMF) is used. Spatial characteristics related to spatial perception are analyzed from the sub-band signals, and spatial parameters such as CLD, ICC, and CPC are selectively extracted according to an encoding operation mode. Further, the sub-band signals are down-mixed and converted into a down-mix signal of a time domain by means of a QMF synthesis bank.
Alternatively, the down-mix signal may be replaced by a down-mix signal which is pre-produced by an acoustic engineer (an artistic, or hand-mixed, down-mix signal). In this case, the SAC encoder 210 adjusts and transmits the spatial parameters on the basis of the pre-produced down-mix signal, thereby optimizing multi-channel restoration at the decoder.
The audio encoder 220 compresses the down-mix signal generated by the SAC encoder 210, or the artistic down-mix signal, by using an existing audio compression technique (e.g. Moving Picture Experts Group (MPEG)-4, Advanced Audio Coding (AAC), MPEG-4 High Efficiency Advanced Audio Coding (HE-AAC), MPEG-4 Bit Sliced Arithmetic Coding (BSAC), etc.), thereby generating a compressed audio bitstream. Meanwhile, the spatial parameters generated by the SAC encoder 210 are transmitted after being quantized and encoded by the spatial parameter quantizer 230 and the spatial parameter encoder 240. The spatial parameter quantizer 230 is provided with a quantization table, which is used to quantize each of the CLD, ICC and CPC. As described below, in order to minimize sound deterioration caused by quantizing the CLD using an existing normalized CLD quantization table, a Virtual Source Location Information (VSLI)-based CLD quantization table can be used in the spatial parameter quantizer 230.
The spatial parameter encoder 240 performs entropy encoding in order to compress the spatial parameters quantized by the spatial parameter quantizer 230, and preferably performs Huffman encoding on quantization indexes of the spatial parameters using a Huffman codebook. As described below, the present invention proposes a new Huffman codebook in order to maximize transmission efficiency of CLD quantization indexes. The audio decoder 250 decodes the audio bitstream compressed through the existing audio compression technique (e.g. MPEG-4, AAC, MPEG-4 HE-AAC, MPEG-4 BSAC, etc.).
The spatial parameter decoder 260 and the spatial parameter dequantizer 270 are modules for performing the inverse of the quantization and encoding performed by the spatial parameter quantizer 230 and the spatial parameter encoder 240. The spatial parameter decoder 260 decodes the encoded quantization indexes of the spatial parameters on the basis of the Huffman codebook, and the spatial parameter dequantizer 270 obtains the spatial parameters corresponding to the quantization indexes from the quantization table. In analogy to the quantization and encoding of the spatial parameters, the VSLI-based CLD quantization table and the Huffman codebook proposed in the present invention are used for the processes of decoding and dequantization of the spatial parameters.
The SAC decoder 280 restores the N-channel audio signal by synthesizing the audio bitstream decoded by the audio decoder 250 with the spatial parameters obtained by the spatial parameter dequantizer 270. Alternatively, when decoding of the multi-channel audio signal is impossible, the down-mix signal alone can be decoded by using an existing audio decoder, so that independent service is possible. Therefore, the SAC system can provide compatibility with an existing mono or stereo audio coding system.
The present invention is concerned with providing CLD quantization capable of minimizing the sound deterioration resulting from quantization, by utilizing the advantages of quantizing the VSLI, which represents a spatial audio image of the multi-channel audio signal. The present invention is based on the fact that, in perceiving the azimuth angle of a spatial audio image, human ears have difficulty in recognizing an error of 3° or less. The VSLI, expressed as an azimuth angle, has a limited dynamic range of 90°, so that quantization error caused by limitation of the dynamic range upon quantization can be avoided. When the CLD quantization table is designed on the basis of these advantages of VSLI quantization, sound deterioration resulting from the quantization can be minimized. FIGS. 3A and 3B are views for explaining the concept of VSLI serving as a reference of CLD quantization in accordance with the present invention. FIG. 3A illustrates a stereo speaker environment in which two speakers are located at an angle of 60°, and FIG. 3B is a view in which a stereo audio signal in the stereo speaker environment of FIG. 3A is represented by the power of a down-mixed signal and by VSLI. As illustrated, the stereo or multi-channel audio signal can be represented by the magnitude vector of a down-mix audio signal and the VSLI, which can be obtained by analyzing the power of each channel of the multi-channel audio signal. The multi-channel audio signal represented in this way can be restored by projecting the magnitude vector according to the location vector of a sound source.
As illustrated in FIGS. 3A and 3B, assuming that the power of the left speaker signal is PL, the power of the right speaker signal is PR, and the angles of the left and right speakers are AL and AR respectively, the VSLI of the sound source can be found by Equations 1 and 2.

Equation 1

θ = tan⁻¹(√(PR / PL))

Equation 2

VSLI = θ × (AR − AL)/90 + AL
The VSLI calculated in this way has a value between AL and AR. PL and PR can be restored from the VSLI as follows. First, the VSLI is mapped to a value VSLI' between 0° and 90° using a Constant Power Panning (CPP) rule, as in Equation 3.

Equation 3

VSLI' = (VSLI − AL)/(AR − AL) × 90
By using the VSLI' mapped in this way and the power PD of the down-mixed signal, PL and PR are calculated using Equations 4 and 5.

Equation 4

PL = PD × (cos(VSLI'))²

Equation 5

PR = PD × (sin(VSLI'))²
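Equations 1 through 5 can be sketched in Python. The speaker angles AL = −30° and AR = +30° used as defaults are an assumption corresponding to the 60° stereo layout of FIG. 3A, and the restoration round trip assumes PD = PL + PR, consistent with the constant-power-panning rule:

```python
import math

def vsli_from_powers(p_l, p_r, a_l=-30.0, a_r=30.0):
    """Equations 1-2: panning angle from channel powers, mapped into [AL, AR]."""
    theta = math.degrees(math.atan(math.sqrt(p_r / p_l)))  # Eq. 1, theta in [0, 90)
    return theta * (a_r - a_l) / 90.0 + a_l                # Eq. 2

def powers_from_vsli(vsli, p_d, a_l=-30.0, a_r=30.0):
    """Equations 3-5: restore channel powers from the VSLI and down-mix power."""
    v = (vsli - a_l) / (a_r - a_l) * 90.0                  # Eq. 3, CPP mapping
    p_l = p_d * math.cos(math.radians(v)) ** 2             # Eq. 4
    p_r = p_d * math.sin(math.radians(v)) ** 2             # Eq. 5
    return p_l, p_r
```

For example, a frame with PL = 3 and PR = 1 maps to VSLI = −10° under these assumed speaker angles, and the two powers are recovered exactly from PD = 4.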
As previously described, the subject matter of the present invention concerns applying the advantages of VSLI quantization to quantization of the spatial parameter, the CLD. In the stereo speaker environment of FIG. 3A, the CLD can be expressed as in Equation 6.

Equation 6

CLD = 10log10(PR / PL)
The CLD can be derived from the VSLI according to Equation 7.

Equation 7

CLD = 20log10(tan(VSLI'))
    = 20log10(tan((VSLI − AL)/(AR − AL) × 90))
Further, as defined in Equation 8 below, the CLD can be obtained by taking the natural logarithm, instead of the base-10 logarithm, in the conversion from the VSLI.

Equation 8

CLD = 20loge(tan(VSLI'))
    = 20loge(tan((VSLI − AL)/(AR − AL) × 90))
The CLD values obtained by Equations 7 and 8 can be directly used as spatial parameters of a general SAC system.
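As a sketch, Equations 6 through 8 can be written directly in Python. As before, the speaker angles AL = −30° and AR = +30° are an assumption, and `natural=True` selects the log-base-e variant of Equation 8:

```python
import math

def cld_from_powers(p_l, p_r):
    """Equation 6: channel level difference in dB."""
    return 10.0 * math.log10(p_r / p_l)

def cld_from_vsli(vsli, a_l=-30.0, a_r=30.0, natural=False):
    """Equations 7-8: CLD derived from the VSLI via the CPP-mapped angle VSLI'."""
    v = (vsli - a_l) / (a_r - a_l) * 90.0          # VSLI' in (0, 90)
    t = math.tan(math.radians(v))
    return 20.0 * (math.log(t) if natural else math.log10(t))
```

The two routes agree: equal channel powers map to VSLI' = 45° and a CLD of 0 dB, and a frame with PL = 3, PR = 1 (VSLI = −10° from Equations 1-2) yields the same value from Equation 6 and Equation 7.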
As previously described, because the CLD has a dynamic range between −∞ and +∞, problems occur in performing quantization using a finite number of bits. The main problem is quantization error caused by limitation of the dynamic range. Because the full dynamic range of the CLD cannot be expressed with only a finite number of bits, the dynamic range of the CLD is limited to a predetermined level or less. As a result, quantization error is introduced, and the spectrum information is distorted. If 5 bits are used for the CLD quantization, the dynamic range of the CLD is limited to between −25 dB and +25 dB. In contrast, because the VSLI has a finite dynamic range of 90°, such quantization error caused by limitation of the dynamic range upon quantization can be avoided.
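The range-limitation problem can be illustrated with a small, hypothetical example: a hard-panned frame overflows a ±25 dB CLD range and is clipped, while the equivalent panning angle remains inside the finite 0°-90° VSLI range:

```python
import math

# Hypothetical hard-panned frame: almost all power in the right channel.
p_l, p_r = 1e-6, 1.0

cld = 10.0 * math.log10(p_r / p_l)        # 60 dB, far outside the +/-25 dB range
cld_clamped = max(-25.0, min(25.0, cld))  # range limitation costs 35 dB of error

# The equivalent panning angle never leaves the finite VSLI range.
theta = math.degrees(math.atan(math.sqrt(p_r / p_l)))  # close to, but below, 90 degrees
```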
In one embodiment, if the 5 bits allotted to CLD quantization are instead applied to linear quantization of the VSLI, the number of quantization levels is 31 and the quantization interval is 3°. The validity of the VSLI quantization approach can be verified from the fact that people fail to recognize a difference of 3° or less when perceiving the spatial image of an audio signal.
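A minimal sketch of the 5-bit linear VSLI quantizer described above. The exact placement of the 31 levels is an assumption (the patent's tables appear only as images); here they are taken uniformly at 0°, 3°, ..., 90°:

```python
def quantize_vsli(v):
    """Map a VSLI' angle in [0, 90] degrees to one of 31 indexes (3-degree step)."""
    idx = round(v / 3.0)
    return min(max(idx, 0), 30)   # clamp to indexes 0..30

def dequantize_vsli(idx):
    """Reconstruction level for an index in 0..30."""
    return 3.0 * idx
```

With this grid, the worst-case reconstruction error is 1.5°, which stays under the 3° audibility threshold cited above.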
When the advantages of this VSLI quantization are applied to the CLD quantization of the stereo coding method, the CLD quantization table used in the existing SAC system can be replaced by a VSLI-based quantization table.
In one embodiment, the quantization values of the VSLI, on which 5-bit linear quantization is performed at a quantization interval of 3°, and the CLD conversion levels corresponding to the VSLI quantization values, are given in Table 1.

Table 1. VSLI Quantization Values and CLD Values
[Table 1 is reproduced only as an image in the original document.]
Further, a VSLI decision level for the VSLI quantization is set to the middle value between neighboring quantization values. This middle value is converted into the CLD domain and used as the decision level for the CLD quantization. As seen in Table 2, the resulting VSLI-based CLD quantization decision level is not the middle value between neighboring CLD quantization values, unlike ordinary CLD quantization, in which the decision level is the middle value between neighboring quantization values.
FIG. 4 is a graph showing CLD quantization values converted from VSLI quantization values in accordance with the present invention. As illustrated, when quantizing the VSLI at a uniform angle on the basis of 45°, the decision level between the quantized angles is the middle value between the two angles. However, when this VSLI decision level is converted into a CLD value, it can be found that the VSLI decision level has a value other than the middle value between the two neighboring CLD values. Table 2 below lists the decision levels of the VSLI quantization and the corresponding CLD values.

Table 2
[Table 2 is reproduced only as an image in the original document.]
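This property can be checked numerically. Taking two hypothetical neighboring VSLI levels 3° apart, the midpoint decision level converted to the CLD domain via Equation 7 does not coincide with the midpoint of the two CLD values:

```python
import math

def cld(v_deg):
    """CLD in dB for a CPP-mapped angle (Equation 7)."""
    return 20.0 * math.log10(math.tan(math.radians(v_deg)))

v_lo, v_hi = 60.0, 63.0                   # two neighboring VSLI quantization values
cld_decision = cld((v_lo + v_hi) / 2.0)   # VSLI midpoint mapped into the CLD domain
cld_midpoint = (cld(v_lo) + cld(v_hi)) / 2.0

# cld_decision lies between the two CLD values but is not their midpoint,
# because the VSLI-to-CLD mapping is nonlinear.
```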
Tables 3 through 7 below are VSLI-based CLD quantization tables created by using Tables 1 and 2, wherein Table 3 gives the CLD quantization values down to the fourth decimal place, Table 4 down to the third decimal place, Table 5 down to the second decimal place, Table 6 down to the first decimal place, and Table 7 to the integer. The CLD quantization value using the VSLI can be calculated by taking the base-10 logarithm or the natural logarithm. When taking the natural logarithm, e rather than 10 is used as the base when spectrum information is restored by using the CLD value.
Table 3. VSLI-based CLD Quantization Table (Fourth Decimal Place)
[Table 3 is reproduced only as an image in the original document.]
Table 4. VSLI-based CLD Quantization Table (Third Decimal Place)
[Table 4 is reproduced only as an image in the original document.]
Table 5. VSLI-based CLD Quantization Table (Second Decimal Place)
[Table 5 is reproduced only as an image in the original document.]
Table 6. VSLI-based CLD Quantization Table (First Decimal Place)
[Table 6 is reproduced only as an image in the original document.]
Table 7. VSLI-based CLD Quantization Table (Integer)
[Table 7 is reproduced only as an image in the original document.]
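Because Tables 3 through 7 survive only as images in the source, the quantization values can instead be recomputed from Equations 7 and 8. The sketch below assumes the 31 VSLI levels sit at 0°, 3°, ..., 90°; the two endpoint levels map to ±∞ dB and would need special clamp handling in a real quantizer:

```python
import math

def vsli_cld_table(natural=False, decimals=4):
    """CLD quantization values for assumed VSLI levels 0, 3, ..., 90 degrees."""
    log = math.log if natural else math.log10
    values = []
    for i in range(31):
        v = 3.0 * i
        if i == 0:
            values.append(float("-inf"))   # tan(0) = 0, CLD diverges to -inf
        elif i == 30:
            values.append(float("inf"))    # tan(90) diverges, CLD -> +inf
        else:
            values.append(round(20.0 * log(math.tan(math.radians(v))), decimals))
    return values
```

The table is antisymmetric about the 45° level (index 15, CLD = 0 dB), because tan(90° − v) = 1/tan(v).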
Next, the decision levels for the VSLI-based CLD quantization tables, classified by decimal place, are given in Tables 8 through 12.

Table 8
VSLI-based CLD Quantization Decision Levels (Fourth Decimal Place)
[Table 8 is reproduced only as an image in the original document.]
Table 9
VSLI-based CLD Quantization Decision Levels (Third Decimal Place)
[Table 9 is reproduced only as an image in the original document.]
Table 10
VSLI-based CLD Quantization Decision Levels (Second Decimal Place)
[Table 10 is reproduced only as an image in the original document.]
Table 11
VSLI-based CLD Quantization Decision Levels (First Decimal Place)
[Table 11 is reproduced only as an image in the original document.]
Table 12
VSLI-based CLD Quantization Decision Levels (Integer)
[Table 12 is reproduced only as an image in the original document.]
As shown in Tables 7 and 12, when the CLD quantization values and the CLD quantization decision levels are expressed as integers by taking the base-10 logarithm, some of the CLD quantization values become identical to some of the CLD quantization decision levels. Hence, the CLD quantization values and decision levels obtained using the natural logarithm are preferably used for actual quantization. In other words, when intending to use the VSLI-based CLD quantization table and the VSLI-based CLD quantization decision levels, both expressed as integers, the CLD quantization values are derived by taking the natural logarithm rather than the base-10 logarithm of the VSLI. The VSLI-based CLD quantization table created in this way is employed in the spatial parameter quantizer 230 and the spatial parameter dequantizer 270 of the SAC system illustrated in FIG. 2, so that sound deterioration resulting from CLD quantization error can be minimized. Further, the present invention proposes a Huffman codebook capable of optimizing Huffman encoding of the CLD quantization indexes derived on the basis of the above-described VSLI-based CLD quantization table.
In the SAC system, the multi-channel audio signal is processed after being split into sub-bands of a frequency domain by means of a filter bank. When the multi-channel audio signal is split into 20 sub-bands, a differential coding method is applied to the quantization index of each sub-band, thereby classifying the quantization indexes into the quantization index of the first sub-band and the other 19 differential indexes between neighboring sub-bands. Alternatively, they may be divided into differential indexes between neighboring frames. A probability distribution is calculated with respect to each of the three types of indexes classified in this way, and then the Huffman coding method is applied to each of the three types. Thereby, the Huffman codebooks described in Tables 13 and 14 below can be obtained. Table 13 is the Huffman codebook for the index of the first sub-band, and Table 14 is the Huffman codebook for the differential indexes between neighboring sub-bands.
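The differential-coding step can be sketched as follows; the Huffman stage itself is omitted because the codebooks of Tables 13 and 14 appear only as images in the source:

```python
def to_differential(indexes):
    """Split per-band CLD indexes into the first band's index plus the
    differences between neighboring sub-bands (19 of them for 20 bands)."""
    first = indexes[0]
    diffs = [b - a for a, b in zip(indexes, indexes[1:])]
    return first, diffs

def from_differential(first, diffs):
    """Inverse mapping used at the decoder: cumulative sum of the differences."""
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out
```

Differential coding pays off because neighboring sub-bands carry similar levels, so the differences cluster around zero and Huffman-code compactly.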
Table 13
[Table 13 is reproduced only as an image in the original document.]
Table 14
[Table 14 is reproduced only as an image in the original document.]
In this manner, the Huffman codebooks proposed in the present invention are employed in the spatial parameter encoder 240 and the spatial parameter decoder 260 of the SAC system illustrated in FIG. 2, so that the bit rate required to transmit the CLD quantization indexes can be reduced. Alternatively, when the number of bits used for Huffman encoding of the 20 sub-bands exceeds 100, 5-bit Pulse Code Modulation (PCM) coding can be performed on each sub-band.
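The escape rule in the last sentence can be sketched as a simple decision. Here `huffman_bits` would come from the codebooks of Tables 13 and 14, which are images in the source, so the total code length is passed in rather than computed:

```python
def choose_coding(huffman_bits, num_bands=20):
    """Escape rule: if Huffman coding of the sub-band indexes would cost more
    than 100 bits, fall back to 5-bit PCM per band (20 bands -> 100 bits)."""
    if huffman_bits > 100:
        return "pcm", 5 * num_bands
    return "huffman", huffman_bits
```

This caps the worst-case side-information cost at the PCM rate while keeping the Huffman gain for typical frames.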
[Industrial Applicability]
The present invention can be provided as a computer program stored on at least one computer-readable medium in the form of at least one product such as a floppy disk, hard disk, CD-ROM, flash memory card, PROM, RAM, ROM, or magnetic tape. In general, the computer program can be written in any programming language such as C, C++, or Java.
While the invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

[CLAIMS]
[Claim 1]
A Channel Level Difference (CLD) quantization method for quantizing a CLD parameter used as a spatial parameter when Spatial Audio Coding (SAC)-based encoding of an N-channel audio signal (N>1) is performed, the CLD quantization
method comprising the steps of: extracting CLDs for each sub-band from the N-channel audio signal; and quantizing the CLDs by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 2]
The CLD quantization method according to claim 1, wherein the VSLI quantization value is quantized at a predetermined quantization interval within a
range between 0° and 90°.
[Claim 3]
The CLD quantization method according to claim 2, wherein the
predetermined quantization interval is 3°.
[Claim 4] The CLD quantization method according to claim 1, wherein the CLD quantization values are derived from the VSLI quantization values according to the following equation:
CLD = 20log10(tan((VSLI − AL)/(AR − AL) × 90))
[Claim 5]
The CLD quantization method according to claim 1, wherein the CLD quantization values are derived from the VSLI quantization values according to the following equation:
CLD = 20loge(tan((VSLI − AL)/(AR − AL) × 90))
[Claim 6]
The CLD quantization method according to claim 1, wherein a decision level for the CLD quantization is derived from a VSLI decision level for VSLI quantization.
[Claim 7]
The CLD quantization method according to claim 1, wherein the VSLI-based
CLD quantization table is as follows:
[Table reproduced only as an image in the original document.]
[Claim 8]
The CLD quantization method according to claim 7, wherein the VSLI-based CLD quantization table is related to the CLD quantization decision levels as follows:
[Table reproduced only as an image in the original document.]
[Claim 9]
The CLD quantization method according to claim 1, further comprising the step of performing Huffman encoding on quantization indexes of the CLD.
[Claim 10] The CLD quantization method according to claim 9, wherein the Huffman encoding is performed on a quantization index of a first sub-band by reference to a Huffman codebook as follows:
[Table reproduced only as an image in the original document.]
[Claim 11]
The CLD quantization method according to claim 10, wherein the Huffman encoding is performed on quantization indexes of the remaining sub-bands other than the first sub-band by reference to a Huffman codebook as follows:
[Table reproduced only as an image in the original document.]
[Claim 12]
A computer-readable recording medium on which is recorded a computer program for performing the CLD quantization method according to any one of claims 1 through 11.
[Claim 13] A method for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC), the method comprising the steps of: down-mixing and encoding the N-channel audio signal; extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each sub-band, from the N-channel audio signal; and quantizing the extracted spatial parameters, wherein, in the step of quantizing the extracted spatial parameters, the CLD is quantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 14]
An apparatus for encoding an N-channel audio signal (N>1) based on Spatial Audio Coding (SAC), the apparatus comprising: an SAC encoding means for down-mixing the N-channel audio signal to generate a down-mix signal, and extracting spatial parameters including Channel Level Difference (CLD), Inter-channel Correlation/Coherence (ICC), and Channel Prediction Coefficient (CPC), for each sub-band, from the N-channel audio signal; an audio encoding means for generating a compressed audio bitstream from the down-mix signal generated by the SAC encoding means; a spatial parameter quantizing means for quantizing the spatial parameters extracted by the SAC encoding means; and a spatial parameter encoding means for encoding the quantized spatial
parameters, wherein the spatial parameter quantizing means quantizes the CLD by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization
values of the N-channel audio signal.
[Claim 15]
The apparatus according to claim 14, wherein the VSLI-based CLD quantization table is as follows:
[Table reproduced only as an image in the original document.]
[Claim 16]
The apparatus according to claim 15, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
[Table reproduced only as an image in the original document.]
[Claim 17] A method for dequantizing an encoded Channel Level Difference (CLD) quantization value when an encoded N-channel audio bitstream (N>1) is decoded based on Spatial Audio Coding (SAC), the method comprising the steps of: performing Huffman decoding on the encoded CLD quantization value; and dequantizing the decoded CLD quantization value by using a Virtual Source
Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 18]
The method according to claim 17, wherein the VSLI-based CLD quantization table is as follows:
[Table reproduced only as an image in the original document.]
[Claim 19]
The method according to claim 18, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
[Table reproduced only as an image in the original document.]
[Claim 20]
The method according to claim 17, wherein in the step of performing Huffman decoding on the encoded CLD quantization value, the CLD quantization
value of a first sub-band is decoded by reference to a Huffman codebook as follows:
[Table reproduced only as an image in the original document.]
[Claim 21] The method according to claim 20, wherein Huffman decoding is performed on quantization indexes of the remaining sub-bands other than the first sub-band by reference to a Huffman codebook as follows:
[Table reproduced only as an image in the original document.]
[Claim 22]
A computer-readable recording medium on which is recorded a computer program for performing the CLD dequantization method according to any one of claims 17 through 21.
[Claim 23] A method for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC), the method comprising the steps of: decoding the encoded N-channel audio bitstream; dequantizing quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream; and synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal, wherein, in the step of dequantizing quantization values of at least one spatial parameter, a CLD included in the spatial parameter is dequantized by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N- channel audio signal.
[Claim 24]
An apparatus for decoding an encoded N-channel audio bitstream (N>1) based on Spatial Audio Coding (SAC), the apparatus comprising: means for decoding the encoded N-channel audio bitstream; means for decoding quantization values of at least one spatial parameter received together with the encoded N-channel audio bitstream; means for dequantizing the quantization values of the spatial parameter; and means for synthesizing the decoded N-channel audio bitstream based on the dequantized spatial parameter to restore an N-channel audio signal, wherein the means for dequantizing the quantization value of the spatial parameter dequantizes a CLD included in the spatial parameter by reference to a Virtual Source Location Information (VSLI)-based CLD quantization table designed using CLD quantization values derived from VSLI quantization values of the N-channel audio signal.
[Claim 25]
The apparatus according to claim 24, wherein the VSLI-based CLD quantization table is as follows:
[Table reproduced only as an image in the original document.]
[Claim 26]
The apparatus according to claim 25, wherein the VSLI-based CLD quantization table is related to CLD quantization decision levels as follows:
[Table reproduced only as an image in the original document.]
PCT/KR2006/002824 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization method WO2007011157A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AT06783342T ATE511691T1 (en) 2005-07-19 2006-07-19 QUANTIZATION AND DEQUANTIZATION OF CHANNEL LEVEL DIFFERENCES BASED ON VIRTUAL SOURCE POSITIONING INFORMATION
JP2008522700A JP4685165B2 (en) 2005-07-19 2006-07-19 Interchannel level difference quantization and inverse quantization method based on virtual sound source position information
EP06783342A EP1905034B1 (en) 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization
CN2006800259842A CN101223598B (en) 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR20050065515 2005-07-19
KR10-2005-0065515 2005-07-19
KR10-2005-0096256 2005-10-12
KR20050096256 2005-10-12
KR1020060066822A KR100755471B1 (en) 2005-07-19 2006-07-18 Virtual source location information based channel level difference quantization and dequantization method
KR10-2006-0066822 2006-07-18

Publications (1)

Publication Number Publication Date
WO2007011157A1 true WO2007011157A1 (en) 2007-01-25

Family

ID=37669008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/002824 WO2007011157A1 (en) 2005-07-19 2006-07-19 Virtual source location information based channel level difference quantization and dequantization method

Country Status (2)

Country Link
EP (1) EP1905034B1 (en)
WO (1) WO2007011157A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009003078A (en) * 2007-06-20 2009-01-08 Casio Comput Co Ltd Speech encoding device, speech decoding device, speech encoding method, speech decoding method, and program
WO2013024200A1 (en) * 2011-08-15 2013-02-21 Nokia Corporation Apparatus and method for multi-channel signal playback
US8626518B2 (en) 2010-02-11 2014-01-07 Huawei Technologies Co., Ltd. Multi-channel signal encoding and decoding method, apparatus, and system
US8798276B2 (en) 2009-08-18 2014-08-05 Samsung Electronics Co., Ltd. Method and apparatus for encoding multi-channel audio signal and method and apparatus for decoding multi-channel audio signal
US9055371B2 (en) 2010-11-19 2015-06-09 Nokia Technologies Oy Controllable playback system offering hierarchical playback options
US9299355B2 (en) 2011-08-04 2016-03-29 Dolby International Ab FM stereo radio receiver by using parametric stereo
US9456289B2 (en) 2010-11-19 2016-09-27 Nokia Technologies Oy Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof
US9706324B2 (en) 2013-05-17 2017-07-11 Nokia Technologies Oy Spatial object oriented audio apparatus
US10008210B2 (en) 2010-02-11 2018-06-26 Huawei Technologies Co., Ltd. Method, apparatus, and system for encoding and decoding multi-channel signals
US10148903B2 (en) 2012-04-05 2018-12-04 Nokia Technologies Oy Flexible spatial audio capture apparatus
GB2575632A (en) * 2018-07-16 2020-01-22 Nokia Technologies Oy Sparse quantization of spatial audio parameters
US10635383B2 (en) 2013-04-04 2020-04-28 Nokia Technologies Oy Visual audio processing apparatus
GB2593672A (en) * 2020-03-23 2021-10-06 Nokia Technologies Oy Switching between audio instances

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016473A (en) * 1998-04-07 2000-01-18 Dolby; Ray M. Low bit-rate spatial coding method and system
US20030035553A1 (en) * 2001-08-10 2003-02-20 Frank Baumgarte Backwards-compatible perceptual coding of spatial cues
WO2003090208A1 (en) * 2002-04-22 2003-10-30 Koninklijke Philips Electronics N.V. Parametric representation of spatial audio
US20060074693A1 (en) * 2003-06-30 2006-04-06 Hiroaki Yamashita Audio coding device with fast algorithm for determining quantization step sizes based on psycho-acoustic model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1905034A4 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009003078A (en) * 2007-06-20 2009-01-08 Casio Comput Co Ltd Speech encoding device, speech decoding device, speech encoding method, speech decoding method, and program
US8798276B2 (en) 2009-08-18 2014-08-05 Samsung Electronics Co., Ltd. Method and apparatus for encoding multi-channel audio signal and method and apparatus for decoding multi-channel audio signal
US8626518B2 (en) 2010-02-11 2014-01-07 Huawei Technologies Co., Ltd. Multi-channel signal encoding and decoding method, apparatus, and system
US10008210B2 (en) 2010-02-11 2018-06-26 Huawei Technologies Co., Ltd. Method, apparatus, and system for encoding and decoding multi-channel signals
US9055371B2 (en) 2010-11-19 2015-06-09 Nokia Technologies Oy Controllable playback system offering hierarchical playback options
US9313599B2 (en) 2010-11-19 2016-04-12 Nokia Technologies Oy Apparatus and method for multi-channel signal playback
US9456289B2 (en) 2010-11-19 2016-09-27 Nokia Technologies Oy Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof
US10477335B2 (en) 2010-11-19 2019-11-12 Nokia Technologies Oy Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof
US9794686B2 (en) 2010-11-19 2017-10-17 Nokia Technologies Oy Controllable playback system offering hierarchical playback options
US9299355B2 (en) 2011-08-04 2016-03-29 Dolby International Ab FM stereo radio receiver by using parametric stereo
WO2013024200A1 (en) * 2011-08-15 2013-02-21 Nokia Corporation Apparatus and method for multi-channel signal playback
US10419712B2 (en) 2012-04-05 2019-09-17 Nokia Technologies Oy Flexible spatial audio capture apparatus
US10148903B2 (en) 2012-04-05 2018-12-04 Nokia Technologies Oy Flexible spatial audio capture apparatus
US10635383B2 (en) 2013-04-04 2020-04-28 Nokia Technologies Oy Visual audio processing apparatus
US9706324B2 (en) 2013-05-17 2017-07-11 Nokia Technologies Oy Spatial object oriented audio apparatus
GB2575632A (en) * 2018-07-16 2020-01-22 Nokia Technologies Oy Sparse quantization of spatial audio parameters
GB2593672A (en) * 2020-03-23 2021-10-06 Nokia Technologies Oy Switching between audio instances

Also Published As

Publication number Publication date
EP1905034A4 (en) 2009-11-25
EP1905034A1 (en) 2008-04-02
EP1905034B1 (en) 2011-06-01

Similar Documents

Publication Publication Date Title
EP1905034A1 (en) Virtual source location information based channel level difference quantization and dequantization method
US7620554B2 (en) Multichannel audio extension
KR101664434B1 (en) Method of coding/decoding audio signal and apparatus for enabling the method
US20210090581A1 (en) Energy lossless-encoding method and apparatus, audio encoding method and apparatus, energy lossless-decoding method and apparatus, and audio decoding method and apparatus
JP4521032B2 (en) Energy-adaptive quantization for efficient coding of spatial speech parameters
JP4685165B2 (en) Interchannel level difference quantization and inverse quantization method based on virtual sound source position information
KR100954179B1 (en) Near-transparent or transparent multi-channel encoder/decoder scheme
US7627480B2 (en) Support of a multichannel audio extension
KR101428487B1 (en) Method and apparatus for encoding and decoding multi-channel
RU2439718C1 (en) Method and device for sound signal processing
US7848931B2 (en) Audio encoder
USRE46082E1 (en) Method and apparatus for low bit rate encoding and decoding
EP2665294A2 (en) Support of a multichannel audio extension
US20110046946A1 (en) Encoder, decoder, and the methods therefor
US20080252510A1 (en) Method and Apparatus for Encoding/Decoding Multi-Channel Audio Signal
JPWO2006003891A1 (en) Speech signal decoding apparatus and speech signal encoding apparatus
KR20080044707A (en) Method and apparatus for encoding and decoding audio/speech signal
WO2002103685A1 (en) Encoding apparatus and method, decoding apparatus and method, and program
WO1995032499A1 (en) Encoding method, decoding method, encoding-decoding method, encoder, decoder, and encoder-decoder
US20110137661A1 (en) Quantizing device, encoding device, quantizing method, and encoding method
US20100114568A1 (en) Apparatus for processing an audio signal and method thereof
US7181079B2 (en) Time signal analysis and derivation of scale factors
KR20070027669A (en) Low bitrate encoding/decoding method and apparatus
KR20140037118A (en) Method of processing audio signal, audio encoding apparatus, audio decoding apparatus and terminal employing the same
KR20130012972A (en) Method of encoding audio/speech signal

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680025984.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 4847/KOLNP/2007

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2006783342

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008522700

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE