US20120288099A1 - Method, medium, and system encoding/decoding multi-channel signal - Google Patents


Info

Publication number
US20120288099A1
Authority
US
United States
Prior art keywords: signal, channel signal, parameter, mono, channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/557,848
Other versions
US8718284B2
Inventor
Jung-Hoe Kim
Eun-mi Oh
Konstantly Osipov
Ki-hyun Choo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US13/557,848
Publication of US20120288099A1
Application granted
Publication of US8718284B2
Legal status: Active
Adjusted expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 40/00 Arrangements specially adapted for receiving broadcast information
    • H04H 40/18 Arrangements characterised by circuits or components specially adapted for receiving
    • H04H 40/27 Arrangements characterised by circuits or components specially adapted for receiving specially adapted for broadcast systems covered by groups H04H20/53 - H04H20/95
    • H04H 40/36 Arrangements characterised by circuits or components specially adapted for receiving specially adapted for broadcast systems covered by groups H04H20/53 - H04H20/95 specially adapted for stereophonic broadcast receiving
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L 19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L 19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L 19/12 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/44 Arrangements characterised by circuits or components specially adapted for broadcast
    • H04H 20/46 Arrangements characterised by circuits or components specially adapted for broadcast specially adapted for broadcast systems covered by groups H04H20/53-H04H20/95
    • H04H 20/47 Arrangements characterised by circuits or components specially adapted for broadcast specially adapted for broadcast systems covered by groups H04H20/53-H04H20/95 specially adapted for stereophonic broadcast systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/86 Arrangements characterised by the broadcast information itself
    • H04H 20/88 Stereophonic broadcast systems

Definitions

  • One or more embodiments of the present invention relate to a method, medium, and system encoding/decoding a multi-channel signal and, more particularly, to a method, medium, and system encoding/decoding a multi-channel signal by using stereo parameters.
  • a parametric stereo (PS) technique down-mixes an input stereo signal so as to generate a mono-signal, extracts stereo parameters that represent side information on the stereo signal, encodes the mono-signal and the stereo parameters and transmits the encoded mono-signal and stereo parameters.
  • PS parametric stereo
  • the stereo parameters include an inter-channel intensity difference (IID) corresponding to a difference between intensities of at least two channel signals included in the stereo signal according to energy levels of the channel signals, an inter-channel coherence (ICC) according to a similarity of waveforms of the at least two channel signals, an inter-channel phase difference (IPD) between the at least two channel signals, and an overall phase difference (OPD) that represents how the phase difference between the at least two channel signals is distributed between two channels on the basis of a mono-signal.
  • IID inter-channel intensity difference
  • ICC inter-channel coherence
  • IPD inter-channel phase difference
  • OPD overall phase difference
  • One or more embodiments of the present invention provide a multi-channel signal decoding method and apparatus for efficiently decoding stereo parameters of a multi-channel signal transmitted at a low bit rate to improve the quality of the multi-channel signal, and a computer readable recording medium storing a program for executing the multi-channel signal decoding method.
  • One or more embodiments of the present invention also provide a multi-channel signal encoding method and apparatus for efficiently transmitting stereo parameters that represent side information of a multi-channel signal at a low bit rate, and a computer readable recording medium storing a program for executing the multi-channel encoding method.
  • a method of decoding a multi-channel signal comprising: decoding a down-mixed signal representative of a multi-channel signal; decoding parameters that represent characteristic relations between channels of the multi-channel signal; estimating an additional parameter by using the decoded parameters; and up-mixing the down-mixed signal by using the decoded parameters and the estimated parameter so as to decode the multi-channel signal.
  • a computer readable recording medium storing a program for executing a method of decoding a multi-channel signal comprising: decoding a down-mixed signal representative of a multi-channel signal; decoding parameters that represent characteristic relations between channels of the multi-channel signal; estimating an additional parameter by using the decoded parameters; and up-mixing the down-mixed signal by using the decoded parameters and the estimated parameter so as to decode the multi-channel signal.
  • a method of decoding a multi-channel signal comprising: decoding information on a domain in which a down-mixed signal representative of a multi-channel signal is encoded; decoding the down-mixed signal in a time domain or a frequency domain according to the decoded information; decoding parameters that represent characteristic relations between channels of the multi-channel signal; and up-mixing the decoded down-mixed signal by using the decoded parameters so as to decode the multi-channel signal.
  • a method of encoding a multi-channel signal comprising: encoding a signal obtained by down-mixing a multi-channel signal; extracting parameters that represent characteristic relations between channels of the multi-channel signal from the multi-channel signal; encoding some of the extracted parameters other than a parameter that can be estimated from the some of the extracted parameters; and outputting the encoded down-mixed signal and the encoded parameters as a multi-channel signal encoding result.
  • a multi-channel signal decoding system comprising: a down-mixed signal decoder to decode a down-mixed signal representative of a multi-channel signal; a parameter decoder to decode parameters that represent characteristic relations between channels of the multi-channel signal; an overall phase difference (OPD) estimator to estimate OPD that represents a phase difference between the decoded down-mixed signal and the multi-channel signal by using the decoded parameters; and an up-mixing unit to up-mix the decoded down-mixed signal by using the decoded parameters and the estimated OPD.
  • OPD overall phase difference
  • FIG. 1 is a block diagram of a multi-channel signal encoding system according to an embodiment of the present invention
  • FIG. 2 is a block diagram of a parameter extraction unit included in the multi-channel signal encoding system illustrated in FIG. 1 ;
  • FIG. 3 illustrates a method of extracting an inter-channel phase difference (IPD) and an overall phase difference (OPD) using an IPD/OPD extractor included in the parameter extraction unit illustrated in FIG. 2 ;
  • IPD inter-channel phase difference
  • OPD overall phase difference
  • FIGS. 4A and 4B illustrate an encoding operation of a parameter encoder included in the multi-channel signal encoding system illustrated in FIG. 1 ;
  • FIG. 5 is a block diagram of a multi-channel signal decoding system according to an embodiment of the present invention.
  • FIGS. 6A and 6B illustrate a phase interpolating operation of an OPD estimator included in the multi-channel signal decoding system illustrated in FIG. 5 ;
  • FIG. 7 is a flow chart of a multi-channel signal encoding method according to an embodiment of the present invention.
  • FIG. 8 is a flow chart of a multi-channel signal decoding method according to an embodiment of the present invention.
  • FIG. 1 is a block diagram of a multi-channel signal encoding system according to an embodiment of the present invention.
  • the multi-channel signal encoding system may include a transformation unit 11 , a down-mixing unit 12 , a mono-signal encoding unit 13 , a parameter extraction unit 14 , a parameter encoding unit 15 and a multiplexing unit 16 .
  • a multi-channel signal includes signals of multiple channels.
  • a multi-channel signal input to the multi-channel signal encoding system illustrated in FIG. 1 is a stereo signal including a left-channel signal L and a right-channel signal R.
  • the multi-channel signal is not limited to the stereo signal.
  • the transformation unit 11 transforms the left-channel signal L and the right-channel signal R from the time domain into a predetermined domain through an analysis filter bank.
  • the predetermined domain can be a domain capable of representing both the magnitude and phase of a signal.
  • the predetermined domain can be a domain that represents a signal for each of sub-bands split by a predetermined frequency.
  • the down-mixing unit 12 down-mixes the left-channel signal L and the right-channel signal R transformed by the transformation unit 11 and outputs a mono-signal.
  • down-mixing generates a mono-signal of a single channel from a stereo signal of at least two channels and the number of bits allocated to an encoding operation can be reduced through down-mixing.
  • the mono-signal can be a signal representative of the stereo signal. That is, only the down-mixed mono-signal can be encoded and transmitted without respectively encoding the left-channel signal L and the right-channel signal R included in the stereo signal.
  • Down-mixing normalizes the sum of the left-channel signal L and the right-channel signal R to generate the mono-signal in order to preserve the energy of the stereo signal.
  • the mono-signal encoding unit 13 encodes the down-mixed mono-signal.
  • the mono-signal encoding unit 13 can encode the mono-signal by using different methods according to whether the input stereo signal is a speech signal or a music signal.
  • the configuration of the mono-signal encoding unit 13 according to the type of the input stereo signal will now be explained.
  • the mono-signal encoding unit 13 can include an inverse transformer and an encoder when the input stereo signal is a speech signal.
  • the inverse transformer inversely transforms the down-mixed mono-signal into the time domain and the encoder encodes the inversely transformed mono-signal in the time domain.
  • the encoder can encode the inversely transformed mono-signal according to a code excited linear prediction (CELP) method.
  • CELP method encodes an input signal in the time domain by using linear prediction and long-term prediction.
  • the mono-signal encoding unit 13 can include an inverse transformer and an encoder when the input stereo signal is a music signal.
  • the inverse transformer inversely transforms the down-mixed mono-signal into the time domain.
  • the encoder encodes the inversely transformed mono-signal in the time domain or transforms the inversely transformed mono-signal into the frequency domain and then encodes the mono-signal in the frequency domain.
  • the mono-signal encoding unit 13 can encode the mono-signal down-mixed by the down-mixing unit 12 in the frequency domain when the input stereo signal is a music signal.
  • a method of encoding a signal on the time axis, such as the CELP method, or a method of encoding a signal on the frequency axis by using modified discrete cosine transform (MDCT)/fast Fourier transform (FFT), such as the transform coded excitation (TCX) method, can be used to encode the mono-signal according to characteristics of the input signal.
  • MDCT modified discrete cosine transform
  • FFT fast Fourier transform
  • TCX transform coded excitation
  • the parameter extraction unit 14 extracts stereo parameters representing characteristic relations between the left-channel signal L and the right-channel signal R, which are transformed by the transformation unit 11 . Specifically, the parameter extraction unit 14 can extract IID, ICC, IPD and OPD with respect to the left-channel signal L and the right-channel signal R.
  • a conventional stereo signal encoding system extracts only IID and ICC from among stereo parameters and encodes only the extracted IID and ICC so as to reduce the number of bits allocated to a stereo parameter encoding operation.
  • the parameter extraction unit 14 of the encoding system extracts parameters representing phase information on signals, such as IPD and OPD, as well as IID and ICC.
  • IPD and OPD parameters representing phase information on signals
  • the parameter encoding unit 15 quantizes the stereo parameters extracted by the parameter extraction unit 14 and encodes the quantization result. Specifically, the parameter encoding unit 15 quantizes only the IID, ICC and IPD from among the stereo parameters extracted by the parameter extraction unit 14 and encodes only the quantized IID, ICC and IPD in order to reduce the number of bits allocated to the stereo parameter encoding operation. In other words, the parameter encoding unit 15 does not encode the OPD extracted by the parameter extraction unit 14 or transmit the OPD to a decoding stage, and thus the number of bits allocated to the stereo parameter encoding operation can be reduced.
  • the decoding stage is required to up-mix a signal by using all the extracted stereo parameters in order to output a stereo signal with improved quality. Accordingly, the decoding stage has to estimate a stereo parameter that is not transmitted from the encoding stage by using the stereo parameters transmitted from the encoding stage.
  • the decoding stage can estimate OPD representing a phase difference between the mono-signal and the stereo signal on the basis of IID and IPD because IID represents an inter-channel intensity difference of the stereo signal and IPD represents an inter-channel phase difference of the stereo signal.
  • the mono-signal can be a signal representative of the stereo signal, and thus the phase difference between the mono-signal and the stereo signal can be estimated using IID and IPD. This will be explained in detail with reference to FIG. 5.
  • the parameter encoding unit 15 performs arithmetic encoding on the quantized parameters.
  • Arithmetic encoding is an entropy encoding method that represents individual symbols or sequences of symbols as codes whose lengths depend on how frequently the data symbols statistically occur. The detailed encoding operation of the parameter encoding unit 15 will be explained with reference to FIGS. 4A and 4B.
  • the multiplexing unit 16 multiplexes the encoded mono-signal and the encoded parameters respectively output from the mono-signal encoding unit 13 and the parameter encoding unit 15 and outputs bit streams.
  • FIG. 2 is a block diagram of the parameter extraction unit 14 included in the multi-channel signal encoding system illustrated in FIG. 1 .
  • the parameter extraction unit 14 may include an IID extractor 141 , an IPD/OPD extractor 142 , and an ICC extractor 143 .
  • the parameter extraction unit 14 receives the left-channel signal and the right-channel signal transformed by the transformation unit 11 illustrated in FIG. 1 .
  • the IID extractor 141 extracts IID that represents an intensity difference between the transformed left-channel signal and right-channel signal and outputs the extracted IID to the parameter encoding unit 15 illustrated in FIG. 1 .
  • the IID extractor 141 can extract the IID by using Equation 1.
  • IID(b) = 10·log10( eL(b) / eR(b) )  [Equation 1]
  • b represents a frequency band index
  • eL(b) denotes an average energy level of the left-channel signal in a specific frequency band of the frequency domain
  • eR(b) represents an average energy level of the right-channel signal in the specific frequency band of the frequency domain. Accordingly, IID can be obtained by using a ratio of the energy level of the left-channel signal to the energy level of the right-channel signal in the frequency domain.
  • the IPD/OPD extractor 142 extracts IPD that represents a phase difference between the transformed left-channel signal and right-channel signal and OPD that represents how the phase difference is distributed between the left-channel signal and the right-channel signal and outputs the extracted IPD to the parameter encoding unit 15 illustrated in FIG. 1 .
  • FIG. 3 illustrates a method of extracting IPD and OPD by using the IPD/OPD extractor 142 illustrated in FIG. 2 .
  • the operation of the IPD/OPD extractor 142 is described with reference to FIGS. 2 and 3 .
  • L denotes the left-channel signal in the frequency domain
  • R represents the right-channel signal in the frequency domain
  • M denotes the down-mixed mono-signal.
  • IPD and OPD can be respectively obtained using Equations 2 and 3.
  • L ⁇ R denotes a dot product of the left-channel signal L and the right-channel signal R and IPD represents an angle made by the left-channel signal L and the right-channel signal R.
  • L ⁇ M denotes a dot product of the left-channel signal L and the down-mixed mono-signal M and OPD represents an angle made by the left-channel signal L and the down-mixed mono-signal M.
  • the ICC extractor 143 extracts ICC that is a parameter representing coherence of the transformed left-channel signal and right-channel signal and outputs the extracted ICC to the parameter encoding unit 15 illustrated in FIG. 1 .
  • FIGS. 4A and 4B illustrate the encoding operation of the parameter encoding unit 15 included in the multi-channel signal encoding system illustrated in FIG. 1 .
  • the encoding operation of the parameter encoding unit 15 is described with reference to FIGS. 1 , 4 A and 4 B.
  • In a conventional arithmetic encoding method, a symbol that is a quantized value in a current frame is encoded by obtaining a difference between the symbol of the current frame and a symbol of a previous frame or previous frequency band and encoding the difference.
  • FIG. 4A illustrates a context based arithmetic encoding method.
  • the probability that a symbol is output from a current frame is determined according to a symbol in a previous frame or a previous frequency band on the basis of a context of frames or frequency bands.
  • ai denotes a current symbol
  • b j represents a previous symbol
  • i and j correspond to 0 to N−1 (N is the number of quanta).
  • the probability that a symbol is output from the current frame can be represented as P(ai|bj) using ai and bj.
  • a block indicated by an arrow in FIG. 4A represents a probability value P(a2|b3) when i is 2 and j is 3.
  • the probability that a symbol is output from a current frame is determined by a symbol of a previous frame or previous frequency band and a predetermined variable f on the basis of a context of frames or frequency bands. Accordingly, the probability that a symbol is output from the current frame can be represented as P(ai|bj, fi) using ai, bj and f.
  • the predetermined variable f represents whether two arbitrary symbols from among current symbols continuously increase or decrease; when a variation in each of the two arbitrary symbols is Δ (Δi-1 = ai − ai-1), the variation Δ has a positive value when the symbols increase and a negative value when they decrease.
  • the product of the variations in the two arbitrary symbols has a positive value when the two symbols continuously increase and also when the two symbols continuously decrease (that is, Δi-1·Δi-2 > 0).
  • the product of the variations has a negative value when the two symbols do not continuously increase or decrease (that is, Δi-1·Δi-2 < 0).
  • the variable f is 1 when the two symbols continuously increase or decrease, that is, when the product of the variations has a positive value, and 0 when the product of the variations has a negative value. That is, the probability that a symbol is output from the current frame when two arbitrary symbols of current symbols continuously increase or decrease is higher than the probability that a symbol is output when they do not.
  • FIG. 4B illustrates a context based arithmetic encoding method according to another embodiment of the present invention.
  • the probability that a symbol is output from a current frame is determined by a plurality of symbols in a previous frame or previous frequency band and a predetermined variable f on the basis of a context of frames or frequency bands.
  • a i denotes a current symbol
  • b j and b k represent previous symbols in a predetermined frame or predetermined frequency band
  • i, j and k correspond to 0 to N−1 (N is the number of quanta).
  • the probability that a symbol is output from the current frame can be represented as P(ai|bj, bk, fi) using ai, bj, bk and f.
  • the variable f has been described above already and thus an explanation thereof will be omitted here.
  • the arithmetic encoding method illustrated in FIG. 4B increases the number of predetermined frames or predetermined bands generating previous symbols compared to the arithmetic encoding method illustrated in FIG. 4A . Accordingly, the number of symbols in previous frames or previous frequency bands, which is the basis of context-based arithmetic encoding, is increased, and thus the probability that a symbol is output from the current frame can be more accurately ascertained.
  • FIG. 5 is a block diagram of a multi-channel signal decoding system according to an embodiment of the present invention.
  • the multi-channel signal decoding system may include a demultiplexing unit 51 , a mono-signal decoding unit 52 , a parameter decoding unit 53 , an OPD estimation unit 54 , an up-mixing unit 55 and an inverse transformation unit 56 .
  • the demultiplexing unit 51 demultiplexes bit streams corresponding to an encoded multi-channel signal and outputs an encoded mono-signal and encoded stereo parameters.
  • the mono-signal decoding unit 52 decodes the encoded mono-signal demultiplexed by the demultiplexing unit 51 . Specifically, the mono-signal decoding unit 52 decodes the encoded mono-signal in the time domain when the mono-signal is encoded in the time domain and decodes the encoded mono-signal in the frequency domain when the mono-signal is encoded in the frequency domain.
  • the parameter decoding unit 53 decodes the encoded stereo parameters demultiplexed by the demultiplexer 51 .
  • the encoded stereo parameters can include encoded IID, IPD and ICC. Accordingly, the parameter decoding unit 53 decodes the encoded IID, IPD and ICC and outputs IID, IPD and ICC.
  • the OPD estimation unit 54 estimates OPD that represents a phase difference between the decoded mono-signal and a multi-channel signal by using the decoded IPD and IID.
  • Since OPD is not transmitted from an encoding system, the decoding system is required to estimate OPD by using parameters other than OPD, transmitted from the encoding system, in order to improve the quality of a decoded stereo signal. Accordingly, the decoding system can up-mix the mono-signal by using the parameters transmitted from the encoding system and OPD estimated on the basis of the parameters so as to improve the quality of the up-mixed signal.
  • the OPD estimation unit 54 obtains a first intermediate variable c by using IID according to Equation 4.
  • the first intermediate variable c can be obtained by representing the result, obtained by dividing IID in a specific frequency band by 20, as an exponent of 10.
  • a second intermediate variable c 1 and a third intermediate variable c 2 can be obtained using the first intermediate variable c according to Equations 5 and 6.
  • b denotes a frequency band index
  • the third intermediate variable c 2 can be obtained by multiplying the second intermediate variable c 1 by c(b).
  • the OPD estimation unit 54 can represent a first right-channel signal R̀n,k and a first left-channel signal L̀n,k by using a decoded mono-signal M and the second and third intermediate variables c1 and c2 according to Equations 7 and 8.
  • n denotes a time slot index and k represents a parameter band index.
  • the first right-channel signal R̀n,k can be represented by a product of the second intermediate variable c1 and the decoded mono-signal M.
  • n denotes the time slot index and k represents the parameter band index.
  • the first left-channel signal L̀n,k can be represented by a product of the third intermediate variable c2 and the decoded mono-signal M.
  • a first mono-signal M̀n,k can be represented using the first right-channel signal R̀n,k and the first left-channel signal L̀n,k as follows.
  • a fourth intermediate variable p according to a time slot and a parameter band can be obtained using Equations 7, 8 and 9 according to Equation 10.
  • the fourth intermediate variable p corresponds to a value obtained by dividing the sum of the magnitudes of the first left-channel signal L̀n,k, the first right-channel signal R̀n,k and the first mono-signal M̀n,k by 2.
  • OPD can then be obtained using Equations 11 and 12.
  • θ1 = 2·arctan( √( ((pn,k − |L̀n,k|)·(pn,k − |M̀n,k|)) / (pn,k·(pn,k − |R̀n,k|)) ) )  [Equation 11]
  • θ2 = 2·arctan( √( ((pn,k − |R̀n,k|)·(pn,k − |M̀n,k|)) / (pn,k·(pn,k − |L̀n,k|)) ) )  [Equation 12]
  • θ1, which is obtained using Equation 11, is the phase difference between the decoded mono-signal and the left-channel signal to be up-mixed, and θ2, which is obtained using Equation 12, is the phase difference between the decoded mono-signal and the right-channel signal to be up-mixed.
  • the OPD estimation unit 54 can generate the first left-channel signal and the first right-channel signal with respect to a left-channel signal and a right-channel signal from the decoded mono-signal by using IID of the multi-channel signal, generate the first mono-signal from the first left-channel signal and the first right-channel signal by using IPD of the multi-channel signal, and estimate OPD between the decoded mono-signal and the multi-channel signal using the first left-channel signal, the first right-channel signal and the first mono-signal.
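  • As a rough, non-normative sketch of this estimation (Equations 4 through 12), the Python fragment below follows the quantities named above. The function name estimate_opd is illustrative; the exact form of Equation 5 is not reproduced in this text, so the common parametric-stereo choice c1 = sqrt(2/(1+c^2)) is assumed, Equation 9 is assumed to combine L̀ and R̀ with the IPD angle between them (law of cosines), and the square roots reflect the triangle half-angle reading of Equations 11 and 12.

      import numpy as np

      def estimate_opd(iid_db, ipd, mono_mag):
          """Estimate theta1/theta2 per band from IID (dB), IPD (rad) and |M| (sketch)."""
          c = 10.0 ** (iid_db / 20.0)                 # Equation 4
          c1 = np.sqrt(2.0 / (1.0 + c ** 2))          # assumed form of Equation 5
          c2 = c * c1                                 # Equation 6: c2 = c1 * c(b)
          r1 = c1 * mono_mag                          # |R'|, Equation 7
          l1 = c2 * mono_mag                          # |L'|, Equation 8
          # assumed Equation 9: |M'| from L' and R' with the IPD angle between them
          m1 = np.sqrt(l1 ** 2 + r1 ** 2 + 2.0 * l1 * r1 * np.cos(ipd))
          p = 0.5 * (l1 + r1 + m1)                    # Equation 10 (semi-perimeter)
          theta1 = 2.0 * np.arctan(np.sqrt((p - l1) * (p - m1) / (p * (p - r1) + 1e-12)))
          theta2 = 2.0 * np.arctan(np.sqrt((p - r1) * (p - m1) / (p * (p - l1) + 1e-12)))
          return theta1, theta2                       # Equations 11 and 12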
  • the up-mixing unit 55 up-mixes the decoded mono-signal by using ICC, IID and IPD decoded by the parameter decoding unit 53 and OPD estimated by the OPD estimation unit 54 .
  • up-mixing generates a stereo signal of at least two channels from a mono-signal of a single channel and is the inverse of down-mixing.
  • the up-mixing operation of the up-mixing unit 55 will now be explained in detail.
  • the up-mixing unit 55 can obtain a first phase and a second phase by using the second and third intermediate variables c1 and c2 and the decoded ICC, according to Equations 13 and 14.
  • the up-mixing unit 55 can obtain up-mixed left-channel and right-channel signals by using the first and second phases obtained using Equations 13 and 14, the second and third intermediate variables c1 and c2, θ1, which is obtained using Equation 11, and θ2, which is obtained using Equation 12, when the decoded mono-signal is M and a decorrelated signal is D, as illustrated below.
  • the decoding system can estimate OPD using parameters transmitted from the encoding system although OPD is not transmitted from the encoding system so as to increase the number of parameters used for up-mixing and improve the quality of the up-mixed signal.
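  • The actual mixing equations (Equations 13 through 16) are not reproduced in this text, so the following sketch should be read only as an assumption: it applies a common parametric-stereo mixing rule in which an angle alpha is derived from ICC, an angle beta balances the rotation between the channels, and the estimated phases θ1 and θ2 re-apply the mono-to-channel phase differences to the left and right outputs. The function name upmix and the sign conventions are illustrative, not the patent's notation.

      import numpy as np

      def upmix(m, d, c1, c2, icc, theta1, theta2):
          """Rebuild left/right sub-band signals from mono m and decorrelated d (sketch)."""
          alpha = 0.5 * np.arccos(np.clip(icc, -1.0, 1.0))         # assumed mapping from ICC
          beta = np.arctan(np.tan(alpha) * (c2 - c1) / (c2 + c1))  # assumed balancing angle
          left = c2 * np.exp(1j * theta1) * (np.cos(beta + alpha) * m + np.sin(beta + alpha) * d)
          right = c1 * np.exp(-1j * theta2) * (np.cos(beta - alpha) * m + np.sin(beta - alpha) * d)
          return left, right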
  • the inverse transformation unit 56 inversely transforms the signal up-mixed by the up-mixing unit 55 into the time domain.
  • FIGS. 6A and 6B illustrate a phase interpolating operation of the decoding system illustrated in FIG. 5 .
  • the phase interpolating operation of the decoding system will now be explained with reference to FIGS. 5 , 6 A and 6 B.
  • the phase of the decoded signal is interpolated in order to prevent the signal from abruptly varying with time.
  • the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 20°, 30°, 40° and 50° through interpolation of the signal in the current time slot and in the previous time slot.
  • P 1 denotes the phase of a signal in the previous time slot
  • N 1 denotes the phase of the signal in the current time slot.
  • phase interpolation is performed in a direction indicated by a dotted arrow illustrated in FIG. 6A to estimate the phase in the four time slots between the current time slot and the previous time slot as 90°, 155°, 220° and 285°.
  • the phase interpolation direction can be changed when the absolute value of a difference between P 1 and N 1 is greater than 180°.
  • the absolute value of the difference between P1 and N1 is 325°, which is greater than 180°.
  • the phase interpolation direction is changed to a direction indicated by a solid-line arrow illustrated in FIG. 6A, and thus the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 18°, 11°, 4° and 357° (that is, −3°).
  • P 2 denotes the phase of a signal in the previous time slot and N 2 is the phase of a signal in the current time slot.
  • the conventional phase interpolating method subtracts P2 from N2 and divides the subtraction result by the number of time slots existing between the current time slot and the previous time slot. For example, when N2 is 25°, P2 is 350°, and the number of time slots existing between the current time slot and the previous time slot is 4, phase interpolation is performed along a direction indicated by a dotted arrow illustrated in FIG. 6B, and thus the phase in the four time slots between the current time slot and the previous time slot can be estimated as 285°, 220°, 155° and 90°.
  • the phase interpolation direction can be changed when the absolute value of a difference between P 2 and N 2 is greater than 180°.
  • the absolute value of the difference between P2 and N2 is 325°, which is greater than 180°.
  • the phase interpolation direction is changed to a direction indicated by a solid-line arrow illustrated in FIG. 6B, and thus the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 357° (that is, −3°), 4°, 11° and 18°.
  • the phase interpolating method changes the phase interpolation direction when the absolute value of a difference between signal phases in two arbitrary time slots is greater than 180°, and thus a phase difference between interpolated values can be reduced to gradually vary the signal with time.
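  • A minimal sketch of this wrap-aware interpolation, with illustrative names, is shown below; it reproduces the worked values above (interpolating from a previous phase of 350° to a current phase of 25° over four intermediate slots yields 357°, 4°, 11° and 18°, and swapping the two phases yields the FIG. 6A values 18°, 11°, 4° and 357°).

      import numpy as np

      def interpolate_phase(prev_deg, cur_deg, num_slots):
          """Interpolate the phase across num_slots intermediate time slots (degrees).

          When the raw difference exceeds 180 degrees, the direction is flipped so the
          interpolation goes the short way around the circle, as in FIGS. 6A and 6B.
          """
          diff = cur_deg - prev_deg
          if abs(diff) > 180.0:
              diff -= 360.0 * np.sign(diff)
          step = diff / (num_slots + 1)       # num_slots slots -> num_slots + 1 intervals
          return [(prev_deg + step * (i + 1)) % 360.0 for i in range(num_slots)]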
  • FIG. 7 is a flow chart of a multi-channel signal encoding method according to an embodiment of the present invention.
  • the multi-channel signal encoding method includes operations sequentially performed in the multi-channel signal encoding system illustrated in FIG. 1 , and thus the description of the multi-channel encoding system illustrated in FIG. 1 is applied to the multi-channel encoding method.
  • the down-mixing unit 12 down-mixes a multi-channel signal to a mono-signal and the mono-signal encoding unit 13 encodes the down-mixed mono-signal in operation 700 .
  • the parameter extraction unit 14 extracts parameters that represent characteristic relations between channels of the multi-channel signal from the multi-channel signal in operation 710 .
  • the extracted parameters can include ICC, IPD and OPD.
  • the parameter encoding unit 15 encodes some of the extracted parameters other than a parameter that can be estimated from the some of the extracted parameters in operation 720 . Specifically, the parameter encoding unit 15 quantizes some of the extracted parameters and arithmetic-encodes the quantization result based on the context of the quantization result.
  • the multiplexing unit 16 multiplexes the encoded mono-signal and the encoded parameters in operation 730 .
  • FIG. 8 is a flow chart of a multi-channel signal decoding method according to an embodiment of the present invention.
  • the multi-channel signal decoding method includes operations sequentially performed in the multi-channel signal decoding system illustrated in FIG. 5 , and thus the description of the multi-channel decoding system illustrated in FIG. 5 is applied to the multi-channel decoding method.
  • the mono-signal decoding unit 52 decodes a mono-signal representative of a multi-channel signal in operation 800 .
  • the parameter decoding unit 53 decodes parameters that represent characteristic relations between channels of the multi-channel signal in operation 810 .
  • the OPD estimation unit 54 estimates an additional parameter by using the decoded parameters in operation 820 .
  • the additional parameter can be a phase parameter that represents a phase difference between the decoded mono-signal and the multi-channel signal.
  • the OPD estimation unit 54 can multiply intermediate variables generated from IID of the multi-channel signal by the decoded mono-signal to generate first and second signals, generate a third signal from IPD of the multi-channel signal and the first and second signals, and estimate the phase parameter from the first, second and third signals.
  • the up-mixing unit 55 up-mixes the decoded mono-signal by using the decoded parameters and the estimated parameter to decode the multi-channel signal in operation 830 .
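  • Putting operations 800 through 830 together, a hypothetical per-band decoding step might look like the sketch below; it simply reuses the illustrative estimate_opd and upmix helpers from the sketches above and is not the patent's API.

      def decode_frame(mono_spec, decor_spec, iid_db, ipd, icc):
          """Decode one frame: estimate OPD from the decoded parameters, then up-mix."""
          c = 10.0 ** (iid_db / 20.0)
          c1 = (2.0 / (1.0 + c ** 2)) ** 0.5                               # same assumed Equation 5 as above
          c2 = c * c1
          theta1, theta2 = estimate_opd(iid_db, ipd, abs(mono_spec))       # operation 820
          return upmix(mono_spec, decor_spec, c1, c2, icc, theta1, theta2) # operation 830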
  • embodiments of the present invention can also be implemented through computer readable code/instructions in/on a medium, e.g., a computer readable medium, to control at least one processing element to implement any above described embodiment.
  • a medium e.g., a computer readable medium
  • the medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
  • the computer readable code can be recorded/transferred on a medium in a variety of ways, with examples of the medium including recording media, such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs), and transmission media such as carrier waves, as well as through the Internet, for example.
  • the medium may further be a signal, such as a resultant signal or bitstream, according to embodiments of the present invention.
  • the media may also be a distributed network, so that the computer readable code is stored/transferred and executed in a distributed fashion.
  • the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.

Abstract

A multi-channel signal decoding method is provided. A down-mixed signal representative of a multi-channel signal is decoded, and parameters representing characteristic relations between channels of the multi-channel signal are decoded. An additional parameter is estimated by using the decoded parameters, and the decoded down-mixed signal is up-mixed by using the decoded parameters and the estimated parameter so as to decode the multi-channel signal.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application of prior application Ser. No. 12/107,117, filed on Apr. 22, 2008 in the United States Patent and Trademark Office, which claims priority under 35 U.S.C. §119(a) from Korean Patent Application No. 2007-109729, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • One or more embodiments of the present invention relate to a method, medium, and system encoding/decoding a multi-channel signal and, more particularly, to a method, medium, and system encoding/decoding a multi-channel signal by using stereo parameters.
  • 2. Description of the Related Art
  • A parametric stereo (PS) technique down-mixes an input stereo signal so as to generate a mono-signal, extracts stereo parameters that represent side information on the stereo signal, encodes the mono-signal and the stereo parameters and transmits the encoded mono-signal and stereo parameters. The stereo parameters include an inter-channel intensity difference (IID) corresponding to a difference between intensities of at least two channel signals included in the stereo signal according to energy levels of the channel signals, an inter-channel coherence (ICC) according to a similarity of waveforms of the at least two channel signals, an inter-channel phase difference (IPD) between the at least two channel signals, and an overall phase difference (OPD) that represents how the phase difference between the at least two channel signals is distributed between two channels on the basis of a mono-signal.
  • SUMMARY OF THE INVENTION
  • One or more embodiments of the present invention provide a multi-channel signal decoding method and apparatus for efficiently decoding stereo parameters of a multi-channel signal transmitted at a low bit rate to improve the quality of the multi-channel signal, and a computer readable recording medium storing a program for executing the multi-channel signal decoding method.
  • One or more embodiments of the present invention also provide a multi-channel signal encoding method and apparatus for efficiently transmitting stereo parameters that represent side information of a multi-channel signal at a low bit rate, and a computer readable recording medium storing a program for executing the multi-channel encoding method.
  • Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
  • According to an aspect of the present invention, there is provided a method of decoding a multi-channel signal comprising: decoding a down-mixed signal representative of a multi-channel signal; decoding parameters that represent characteristic relations between channels of the multi-channel signal; estimating an additional parameter by using the decoded parameters; and up-mixing the down-mixed signal by using the decoded parameters and the estimated parameter so as to decode the multi-channel signal.
  • According to another aspect of the present invention, there is provided a computer readable recording medium storing a program for executing a method of decoding a multi-channel signal comprising: decoding a down-mixed signal representative of a multi-channel signal; decoding parameters that represent characteristic relations between channels of the multi-channel signal; estimating an additional parameter by using the decoded parameters; and up-mixing the down-mixed signal by using the decoded parameters and the estimated parameter so as to decode the multi-channel signal.
  • According to another aspect of the present invention, there is provided a method of decoding a multi-channel signal comprising: decoding information on a domain in which a down-mixed signal representative of a multi-channel signal is encoded; decoding the down-mixed signal in a time domain or a frequency domain according to the decoded information; decoding parameters that represent characteristic relations between channels of the multi-channel signal; and up-mixing the decoded down-mixed signal by using the decoded parameters so as to decode the multi-channel signal.
  • According to another aspect of the present invention, there is provided a method of encoding a multi-channel signal comprising: encoding a signal obtained by down-mixing a multi-channel signal; extracting parameters that represent characteristic relations between channels of the multi-channel signal from the multi-channel signal; encoding some of the extracted parameters other than a parameter that can be estimated from the some of the extracted parameters; and outputting the encoded down-mixed signal and the encoded parameters as a multi-channel signal encoding result.
  • According to another aspect of the present invention, there is provided a multi-channel signal decoding system comprising: a down-mixed signal decoder to decode a down-mixed signal representative of a multi-channel signal; a parameter decoder to decode parameters that represent characteristic relations between channels of the multi-channel signal; an overall phase difference (OPD) estimator to estimate OPD that represents a phase difference between the decoded down-mixed signal and the multi-channel signal by using the decoded parameters; and an up-mixing unit to up-mix the decoded down-mixed signal by using the decoded parameters and the estimated OPD.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
  • FIG. 1 is a block diagram of a multi-channel signal encoding system according to an embodiment of the present invention;
  • FIG. 2 is a block diagram of a parameter extraction unit included in the multi-channel signal encoding system illustrated in FIG. 1;
  • FIG. 3 illustrates a method of extracting an inter-channel phase difference (IPD) and an overall phase difference (OPD) using an IPD/OPD extractor included in the parameter extraction unit illustrated in FIG. 2;
  • FIGS. 4A and 4B illustrate an encoding operation of a parameter encoder included in the multi-channel signal encoding system illustrated in FIG. 1;
  • FIG. 5 is a block diagram of a multi-channel signal decoding system according to an embodiment of the present invention;
  • FIGS. 6A and 6B illustrate a phase interpolating operation of an OPD estimator included in the multi-channel signal decoding system illustrated in FIG. 5;
  • FIG. 7 is a flow chart of a multi-channel signal encoding method according to an embodiment of the present invention; and
  • FIG. 8 is a flow chart of a multi-channel signal decoding method according to an embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to embodiments set forth herein. Accordingly, embodiments are merely described below, by referring to the figures, to explain aspects of the present invention.
  • FIG. 1 is a block diagram of a multi-channel signal encoding system according to an embodiment of the present invention.
  • Referring to FIG. 1, the multi-channel signal encoding system may include a transformation unit 11, a down-mixing unit 12, a mono-signal encoding unit 13, a parameter extraction unit 14, a parameter encoding unit 15 and a multiplexing unit 16. In the current embodiment of the present invention, a multi-channel signal includes signals of multiple channels.
  • It is assumed that a multi-channel signal input to the multi-channel signal encoding system illustrated in FIG. 1 is a stereo signal including a left-channel signal L and a right-channel signal R. However, it will be understood by those of ordinary skill in the art that the multi-channel signal is not limited to the stereo signal.
  • The transformation unit 11 transforms the left-channel signal L and the right-channel signal R from the time domain into a predetermined domain through an analysis filter bank. The predetermined domain can be a domain capable of representing both the magnitude and phase of a signal. For example, the predetermined domain can be a domain that represents a signal for each of sub-bands split by a predetermined frequency.
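  • The patent does not prescribe a particular analysis filter bank; as one illustration of a domain that carries both magnitude and phase per sub-band, a windowed FFT (STFT) can be used. The function below, including its name and parameters, is an assumption for illustration only.

      import numpy as np

      def analysis_transform(x, frame_len=1024, hop=512):
          """Transform a time-domain channel into complex sub-band frames (STFT sketch)."""
          window = np.hanning(frame_len)
          frames = []
          for start in range(0, len(x) - frame_len + 1, hop):
              frames.append(np.fft.rfft(x[start:start + frame_len] * window))
          return np.array(frames)             # shape: (num_frames, frame_len // 2 + 1)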
  • The down-mixing unit 12 down-mixes the left-channel signal L and the right-channel signal R transformed by the transformation unit 11 and outputs a mono-signal. Here, down-mixing generates a mono-signal of a single channel from a stereo signal of at least two channels and the number of bits allocated to an encoding operation can be reduced through down-mixing. The mono-signal can be a signal representative of the stereo signal. That is, only the down-mixed mono-signal can be encoded and transmitted without respectively encoding the left-channel signal L and the right-channel signal R included in the stereo signal. Down-mixing normalizes the sum of the left-channel signal L and the right-channel signal R to generate the mono-signal in order to preserve the energy of the stereo signal.
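  • A minimal down-mix sketch is given below. The text only states that the sum of the two channels is normalized to preserve the stereo energy; the specific per-bin gain used here (restoring the combined channel energy) is therefore an assumption.

      import numpy as np

      def downmix(L, R, eps=1e-12):
          """Down-mix two complex sub-band channels into one mono channel (sketch)."""
          s = L + R
          target_energy = np.abs(L) ** 2 + np.abs(R) ** 2
          gain = np.sqrt(target_energy / (np.abs(s) ** 2 + eps))   # assumed normalization rule
          return gain * s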
  • The mono-signal encoding unit 13 encodes the down-mixed mono-signal. The mono-signal encoding unit 13 can encode the mono-signal by using different methods according to whether the input stereo signal is a speech signal or a music signal. The configuration of the mono-signal encoding unit 13 according to the type of the input stereo signal will now be explained.
  • In the current embodiment of the present invention, the mono-signal encoding unit 13 can include an inverse transformer and an encoder when the input stereo signal is a speech signal. The inverse transformer inversely transforms the down-mixed mono-signal into the time domain and the encoder encodes the inversely transformed mono-signal in the time domain. For example, the encoder can encode the inversely transformed mono-signal according to a code excited linear prediction (CELP) method. Here, the CELP method encodes an input signal in the time domain by using linear prediction and long-term prediction.
  • In another embodiment of the present invention, the mono-signal encoding unit 13 can include an inverse transformer and an encoder when the input stereo signal is a music signal. The inverse transformer inversely transforms the down-mixed mono-signal into the time domain. The encoder encodes the inversely transformed mono-signal in the time domain or transforms the inversely transformed mono-signal into the frequency domain and then encodes the mono-signal in the frequency domain.
  • In another embodiment of the present invention, the mono-signal encoding unit 13 can encode the mono-signal down-mixed by the down-mixing unit 12 in the frequency domain when the input stereo signal is a music signal.
  • In another embodiment of the present invention, a method of encoding a signal on the time axis, such as the CELP method, or a method of encoding a signal on the frequency axis by using modified discrete cosine transform (MDCT)/fast Fourier transform (FFT), such as the transform coded excitation (TCX) method, can be used to encode the mono-signal according to characteristics of the input signal.
  • The parameter extraction unit 14 extracts stereo parameters representing characteristic relations between the left-channel signal L and the right-channel signal R, which are transformed by the transformation unit 11. Specifically, the parameter extraction unit 14 can extract IID, ICC, IPD and OPD with respect to the left-channel signal L and the right-channel signal R.
  • A conventional stereo signal encoding system extracts only IID and ICC from among stereo parameters and encodes only the extracted IID and ICC so as to reduce the number of bits allocated to a stereo parameter encoding operation. However, the parameter extraction unit 14 of the encoding system according to the current embodiment of the present invention extracts parameters representing phase information on signals, such as IPD and OPD, as well as IID and ICC. When a signal is decoded using the parameters representing phase information in addition to IID and ICC, the quality of the signal can be improved. The detailed operation of the parameter extraction unit 14 will be explained with reference to FIG. 2.
  • The parameter encoding unit 15 quantizes the stereo parameters extracted by the parameter extraction unit 14 and encodes the quantization result. Specifically, the parameter encoding unit 15 quantizes only the IID, ICC and IPD from among the stereo parameters extracted by the parameter extraction unit 14 and encodes only the quantized IID, ICC and IPD in order to reduce the number of bits allocated to the stereo parameter encoding operation. In other words, the parameter encoding unit 15 does not encode the OPD extracted by the parameter extraction unit 14 or transmit the OPD to a decoding stage, and thus the number of bits allocated to the stereo parameter encoding operation can be reduced.
  • As described above, some of the extracted stereo parameters are transmitted from an encoding stage in order to transmit the stereo parameters at a low bit rate. However, the decoding stage is required to up-mix a signal by using all the extracted stereo parameters in order to output a stereo signal with improved quality. Accordingly, the decoding stage has to estimate a stereo parameter that is not transmitted from the encoding stage by using the stereo parameters transmitted from the encoding stage.
  • According to the current embodiment of the present invention, the decoding stage can estimate OPD representing a phase difference between the mono-signal and the stereo signal on the basis of IID and IPD because IID represents an inter-channel intensity difference of the stereo signal and IPD represents an inter-channel phase difference of the stereo signal. As described above, the mono-signal can be a signal representative of the stereo signal, and thus the phase difference between the mono-signal and the stereo signal can be estimated using IID and IPD. This will be explained in detail with reference to FIG. 5.
  • Specifically, the parameter encoding unit 15 performs arithmetic encoding on the quantized parameters. Arithmetic encoding is an entropy encoding method that represents individual symbols or sequences of symbols as codes whose lengths depend on how frequently the data symbols statistically occur. The detailed encoding operation of the parameter encoding unit 15 will be explained with reference to FIGS. 4A and 4B.
  • The multiplexing unit 16 multiplexes the encoded mono-signal and the encoded parameters respectively output from the mono-signal encoding unit 13 and the parameter encoding unit 15 and outputs bit streams.
  • FIG. 2 is a block diagram of the parameter extraction unit 14 included in the multi-channel signal encoding system illustrated in FIG. 1.
  • Referring to FIG. 2, the parameter extraction unit 14 may include an IID extractor 141, an IPD/OPD extractor 142, and an ICC extractor 143. The parameter extraction unit 14 receives the left-channel signal and the right-channel signal transformed by the transformation unit 11 illustrated in FIG. 1.
  • The IID extractor 141 extracts IID that represents an intensity difference between the transformed left-channel signal and right-channel signal and outputs the extracted IID to the parameter encoding unit 15 illustrated in FIG. 1. The IID extractor 141 can extract the IID by using Equation 1.
  • IID(b) = 10·log10( eL(b) / eR(b) )  [Equation 1]
  • Here, b represents a frequency band index, eL(b) denotes an average energy level of the left-channel signal in a specific frequency band of the frequency domain, and eR(b) represents an average energy level of the right-channel signal in the specific frequency band of the frequency domain. Accordingly, IID can be obtained by using a ratio of the energy level of the left-channel signal to the energy level of the right-channel signal in the frequency domain.
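  • Equation 1 can be evaluated per parameter band as in the sketch below; the banding scheme (band_edges) and the function name are illustrative assumptions.

      import numpy as np

      def iid_per_band(L, R, band_edges, eps=1e-12):
          """IID(b) = 10 * log10(eL(b) / eR(b)) for each band of one frame."""
          iid = []
          for lo, hi in zip(band_edges[:-1], band_edges[1:]):
              e_l = np.mean(np.abs(L[lo:hi]) ** 2) + eps   # average left energy in band b
              e_r = np.mean(np.abs(R[lo:hi]) ** 2) + eps   # average right energy in band b
              iid.append(10.0 * np.log10(e_l / e_r))
          return np.array(iid)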
  • The IPD/OPD extractor 142 extracts IPD that represents a phase difference between the transformed left-channel signal and right-channel signal and OPD that represents how the phase difference is distributed between the left-channel signal and the right-channel signal and outputs the extracted IPD to the parameter encoding unit 15 illustrated in FIG. 1.
  • FIG. 3 illustrates a method of extracting IPD and OPD by using the IPD/OPD extractor 142 illustrated in FIG. 2. The operation of the IPD/OPD extractor 142 is described with reference to FIGS. 2 and 3.
  • In FIG. 3, L denotes the left-channel signal in the frequency domain, R represents the right-channel signal in the frequency domain, and M denotes the down-mixed mono-signal. Here, IPD and OPD can be respectively obtained using Equations 2 and 3.

  • IPD=∠(L·R)  [Equation 2]
  • Here, L·R denotes a dot product of the left-channel signal L and the right-channel signal R and IPD represents an angle made by the left-channel signal L and the right-channel signal R.

  • OPD=∠(L·M)  [Equation 3]
  • Here, L·M denotes a dot product of the left-channel signal L and the down-mixed mono-signal M and OPD represents an angle made by the left-channel signal L and the down-mixed mono-signal M.
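  • A sketch of Equations 2 and 3 for complex sub-band signals follows. The "dot product" is realized here as the complex inner product sum(L * conj(X)) over each band, which is a common choice but an assumption, since the text does not spell it out.

      import numpy as np

      def ipd_opd_per_band(L, R, M, band_edges):
          """IPD(b) = angle(L . R) and OPD(b) = angle(L . M) per band (sketch)."""
          ipd, opd = [], []
          for lo, hi in zip(band_edges[:-1], band_edges[1:]):
              ipd.append(np.angle(np.sum(L[lo:hi] * np.conj(R[lo:hi]))))   # Equation 2
              opd.append(np.angle(np.sum(L[lo:hi] * np.conj(M[lo:hi]))))   # Equation 3
          return np.array(ipd), np.array(opd)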
  • Referring back to FIG. 2, the ICC extractor 143 extracts ICC that is a parameter representing coherence of the transformed left-channel signal and right-channel signal and outputs the extracted ICC to the parameter encoding unit 15 illustrated in FIG. 1.
  • FIGS. 4A and 4B illustrate the encoding operation of the parameter encoding unit 15 included in the multi-channel signal encoding system illustrated in FIG. 1. The encoding operation of the parameter encoding unit 15 is described with reference to FIGS. 1, 4A and 4B.
  • In a conventional arithmetic encoding method, a symbol, which is a quantized value of the current frame, is encoded by obtaining the difference between the symbol of the current frame and a symbol of a previous frame or a previous frequency band and encoding that difference.
  • FIG. 4A illustrates a context based arithmetic encoding method.
  • According to the arithmetic encoding method, the probability that a symbol is output from a current frame is determined according to a symbol in a previous frame or a previous frequency band on the basis of a context of frames or frequency bands. In FIG. 4A, ai denotes a current symbol, bj represents a previous symbol, and i and j correspond to 0 to N−1 (N is the number of quanta). Accordingly, the probability that a symbol is output from the current frame can be represented as P(ai|bj) using ai and bj. For example, a block indicated by an arrow in FIG. 4A represents a probability value P(a2|b3) when i is 2 and j is 3.
  • In an arithmetic encoding method according to another embodiment of the present invention, the probability that a symbol is output from a current frame is determined by a symbol of a previous frame or previous frequency band and a predetermined variable f on the basis of a context of frames or frequency bands. Accordingly, the probability that a symbol is output from the current frame can be represented as P(ai|bj, fi) using ai, bj and f.
  • The predetermined variable f represents whether two arbitrary symbols from among the current symbols continuously increase or decrease. Specifically, when the variation associated with each of the two arbitrary symbols is Δ (Δi-1=ai−ai-1, that is, the difference between a symbol and the symbol preceding it), the variation Δ has a positive value when the symbols increase and a negative value when the symbols decrease.
  • Accordingly, the product of the variations of the two arbitrary symbols has a positive value when the two symbols continuously increase and also when the two symbols continuously decrease (that is, Δi-1·Δi-2>0). However, the product of the variations has a negative value when the two symbols do not continuously increase or decrease (that is, Δi-1·Δi-2<0). The variable f is 1 when the two symbols continuously increase or decrease, that is, when the product of the variations has a positive value, and 0 when the product of the variations has a negative value. That is, the probability that a symbol is output from the current frame when two arbitrary symbols of the current symbols continuously increase or decrease is higher than the probability that a symbol is output from the current frame when the two arbitrary symbols do not continuously increase or decrease.
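  • As a rough sketch only (the probability-model layout named here is an assumption, not the claimed method), the variable f and the enlarged coding context can be derived from the quantized symbol sequence as follows:

    def context_flag(a, i):
        """f = 1 if the last two variations of the symbol sequence a have the same
        sign (the symbols kept increasing or kept decreasing), else 0.
        Here Δi-1 = a[i] - a[i-1] and Δi-2 = a[i-1] - a[i-2]."""
        return 1 if (a[i] - a[i - 1]) * (a[i - 1] - a[i - 2]) > 0 else 0

    # A context-based arithmetic coder could then select its probability model with
    # the enlarged context, e.g. prob = model[(b_j, f)][a_i] instead of
    # prob = model[b_j][a_i]; 'model' is a hypothetical table of conditional
    # symbol probabilities, not a structure defined in this disclosure.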
  • FIG. 4B illustrates a context based arithmetic encoding method according to another embodiment of the present invention. According to the arithmetic encoding method, the probability that a symbol is output from a current frame is determined by a plurality of symbols in a previous frame or previous frequency band and a predetermined variable f on the basis of a context of frames or frequency bands. In FIG. 4B, ai denotes a current symbol, bj and bk represent previous symbols in a predetermined frame or predetermined frequency band, and i, j and k correspond to 0 to N−1 (N is the number of quanta). Accordingly, the probability that a symbol is output from the current frame can be represented as P(ai|bj, bk, fi) using ai, bj, bk and f. The variable f has been described above and thus an explanation thereof is omitted here.
  • As described above, the arithmetic encoding method illustrated in FIG. 4B increases the number of predetermined frames or predetermined bands generating previous symbols compared to the arithmetic encoding method illustrated in FIG. 4A. Accordingly, the number of symbols in previous frames or previous frequency bands, which is the basis of context-based arithmetic encoding, is increased, and thus the probability that a symbol is output from the current frame can be more accurately ascertained.
  • FIG. 5 is a block diagram of a multi-channel signal decoding system according to an embodiment of the present invention.
  • Referring to FIG. 5, the multi-channel signal decoding system may include a demultiplexing unit 51, a mono-signal decoding unit 52, a parameter decoding unit 53, an OPD estimation unit 54, an up-mixing unit 55 and an inverse transformation unit 56.
  • The demultiplexing unit 51 demultiplexes bit streams corresponding to an encoded multi-channel signal and outputs an encoded mono-signal and encoded stereo parameters.
  • The mono-signal decoding unit 52 decodes the encoded mono-signal demultiplexed by the demultiplexing unit 51. Specifically, the mono-signal decoding unit 52 decodes the encoded mono-signal in the time domain when the mono-signal is encoded in the time domain and decodes the encoded mono-signal in the frequency domain when the mono-signal is encoded in the frequency domain.
  • The parameter decoding unit 53 decodes the encoded stereo parameters demultiplexed by the demultiplexing unit 51. The encoded stereo parameters can include encoded IID, IPD and ICC. Accordingly, the parameter decoding unit 53 decodes the encoded IID, IPD and ICC and outputs IID, IPD and ICC.
  • The OPD estimation unit 54 estimates OPD that represents a phase difference between the decoded mono-signal and a multi-channel signal by using the decoded IPD and IID. As described above, since OPD is not transmitted from an encoding system, the decoding system is required to estimate OPD by using parameters other than OPD, transmitted from the encoding system, in order to improve the quality of a decoded stereo signal. Accordingly, the decoding system can up-mix the mono-signal by using the parameters transmitted from the encoding system and OPD estimated on the basis of the parameters so as to improve the quality of the up-mixed signal.
  • The operation of the OPD estimation unit 54 will now be described with reference to Equations 4 through 12.
  • The OPD estimation unit 54 obtains a first intermediate variable c by using IID according to Equation 4.
  • c(b)=10^(IID(b)/20)  [Equation 4]
  • Here, b denotes a frequency band index. The first intermediate variable c can be obtained by raising 10 to the power of the value obtained by dividing IID in a specific frequency band by 20. A second intermediate variable c1 and a third intermediate variable c2 can be obtained using the first intermediate variable c according to Equations 5 and 6.
  • c1(b)=√(2/(1+c²(b)))  [Equation 5]
  • c2(b)=c(b)·√(2/(1+c²(b)))  [Equation 6]
  • Here, b denotes a frequency band index, and the third intermediate variable c2 can be obtained by multiplying the second intermediate variable c1 by c(b).
  • Then, the OPD estimation unit 54 can represent a first right-channel signal {grave over (R)}n,k and a first left-channel signal {grave over (L)}n,k by using a decoded mono-signal M and the second and third intermediate variables c1 and c2 according to Equations 7 and 8.

  • {grave over (R)}n,k=c1·Mn,k  [Equation 7]
  • Here, n denotes a time slot index and k represents a parameter band index. The first right-channel signal {grave over (R)}n,k can be represented by a product of the second intermediate variable c1 and the decoded mono-signal M.

  • {grave over (L)}n,k=c2·Mn,k  [Equation 8]
  • Here, n denotes the time slot index and k represents the parameter band index. The first left-channel signal {grave over (L)}n,k can be represented by a product of the third intermediate variable c2 and the decoded mono-signal M.
  • When IPD is φ, a first mono-signal {grave over (M)}n,k can be represented using the first right-channel signal {grave over (R)}n,k and the first left-channel signal {grave over (L)}n,k as follows.

  • |{grave over (M)}n,k|=√(|{grave over (L)}n,k|²+|{grave over (R)}n,k|²−2·|{grave over (L)}n,k|·|{grave over (R)}n,k|·cos(π−φ))  [Equation 9]
  • A fourth intermediate variable p for a given time slot and parameter band can be obtained from Equations 7, 8 and 9 by using Equation 10.
  • pn,k=(|{grave over (L)}n,k|+|{grave over (R)}n,k|+|{grave over (M)}n,k|)/2  [Equation 10]
  • The fourth intermediate variable p corresponds to a value obtained by dividing the sum of the magnitudes of the first left-channel signal {grave over (L)}n,k, the first right-channel signal {grave over (R)}n,k and the first mono-signal {grave over (M)}n,k by 2. When OPD is φ1, OPD can be obtained using Equation 11.
  • φ1=2·arctan(√((pn,k−|{grave over (L)}n,k|)·(pn,k−|{grave over (M)}n,k|)/(pn,k·(pn,k−|{grave over (R)}n,k|))))  [Equation 11]
  • When a difference between OPD and IPD is φ2, φ2 can be obtained using Equation 12.
  • φ2=2·arctan(√((pn,k−|{grave over (R)}n,k|)·(pn,k−|{grave over (M)}n,k|)/(pn,k·(pn,k−|{grave over (L)}n,k|))))  [Equation 12]
  • φ1, which is obtained using Equation 11, is a phase difference between the decoded mono-signal and a left-channel signal to be up-mixed, and φ2, which is obtained using Equation 12, is a phase difference between the decoded mono-signal and a right-channel signal to be up-mixed.
  • As described above, the OPD estimation unit 54 can generate the first left-channel signal and the first right-channel signal with respect to a left-channel signal and a right-channel signal from the decoded mono-signal by using IID of the multi-channel signal, generate the first mono-signal from the first left-channel signal and the first right-channel signal by using IPD of the multi-channel signal, and estimate OPD between the decoded mono-signal and the multi-channel signal using the first left-channel signal, the first right-channel signal and the first mono-signal.
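  • By way of a non-limiting sketch, the estimation of φ1 and φ2 for one band can be written as follows in Python/NumPy, assuming the reconstructed forms of Equations 4 through 12 above; the function name and the small eps guard are illustrative additions rather than part of the disclosed apparatus:

    import numpy as np

    def estimate_opd(iid_db, ipd, m_mag, eps=1e-12):
        """Estimate the mono-to-left phase (phi1, Equation 11) and the mono-to-right
        phase (phi2, Equation 12) of one band from the decoded IID (dB), IPD (rad)
        and the magnitude of the decoded mono-signal."""
        c = 10.0 ** (iid_db / 20.0)                       # Equation 4
        c1 = np.sqrt(2.0 / (1.0 + c * c))                 # Equation 5
        c2 = c * c1                                       # Equation 6
        r1 = c1 * m_mag                                   # magnitude of the first right-channel signal (Equation 7)
        l1 = c2 * m_mag                                   # magnitude of the first left-channel signal (Equation 8)
        m1 = np.sqrt(l1**2 + r1**2 - 2.0*l1*r1*np.cos(np.pi - ipd))   # first mono-signal magnitude (Equation 9)
        p = (l1 + r1 + m1) / 2.0                          # Equation 10
        phi1 = 2.0 * np.arctan(np.sqrt((p - l1) * (p - m1) / (p * (p - r1) + eps)))  # Equation 11
        phi2 = 2.0 * np.arctan(np.sqrt((p - r1) * (p - m1) / (p * (p - l1) + eps)))  # Equation 12
        return phi1, phi2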
  • The up-mixing unit 55 up-mixes the decoded mono-signal by using ICC, IID and IPD decoded by the parameter decoding unit 53 and OPD estimated by the OPD estimation unit 54. Here, up-mixing generates a stereo signal of at least two channels from a mono-signal of a single channel and is the inverse of down-mixing. The up-mixing operation of the up-mixing unit 55 will now be explained in detail.
  • The up-mixing unit 55 can obtain a first phase α+β and a second phase α−β by using the second and third intermediate variables c1 and c2 when ICC is ρ according to Equations 13 and 14.
  • α+β=(1/2)·arccos(ρ)·(1+(c1−c2)/√2)  [Equation 13]
  • α−β=(1/2)·arccos(ρ)·(1−(c1−c2)/√2)  [Equation 14]
  • Then, the up-mixing unit 55 can obtain up-mixed left-channel and right-channel signals by using the first and second phases α+β and α−β, which are obtained using Equations 13 and 14, the second and third intermediate variables c1 and c2, φ1, which is obtained using Equation 11, and φ2, which is obtained using Equation 12, when the decoded mono-signal is M and a decorrelated signal is D, as illustrated below.

  • L′=(M·cos(α+β)+D·sin(α+β))·exp(jφ1)·c2  [Equation 15]

  • R′=(M·cos(α−β)−D·sin(α−β))·exp(jφ2)·c1  [Equation 16]
  • As described above, the decoding system according to the current embodiment of the present invention can estimate OPD using parameters transmitted from the encoding system although OPD is not transmitted from the encoding system so as to increase the number of parameters used for up-mixing and improve the quality of the up-mixed signal.
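  • Purely as an illustrative sketch of the reconstructed Equations 13 through 16 above (the function name and argument layout are assumptions, not the claimed apparatus), the up-mix of one time/frequency sample could look like:

    import numpy as np

    def upmix(M, D, icc, c1, c2, phi1, phi2):
        """Up-mix one complex mono sample M with a decorrelated sample D into
        left/right samples, using the ICC rho, the IID-derived gains c1 and c2,
        and the estimated phases phi1 and phi2."""
        apb = 0.5 * np.arccos(icc) * (1.0 + (c1 - c2) / np.sqrt(2.0))      # alpha+beta (Equation 13)
        amb = 0.5 * np.arccos(icc) * (1.0 - (c1 - c2) / np.sqrt(2.0))      # alpha-beta (Equation 14)
        L = (M * np.cos(apb) + D * np.sin(apb)) * np.exp(1j * phi1) * c2   # Equation 15
        R = (M * np.cos(amb) - D * np.sin(amb)) * np.exp(1j * phi2) * c1   # Equation 16
        return L, R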
  • The inverse transformation unit 56 inversely transforms the signal up-mixed by the up-mixing unit 55 into the time domain.
  • FIGS. 6A and 6B illustrate a phase interpolating operation of the decoding system illustrated in FIG. 5. The phase interpolating operation of the decoding system will now be explained with reference to FIGS. 5, 6A and 6B.
  • When an encoded multi-channel signal is decoded, the phase of the decoded signal is interpolated in order to prevent the signal from abruptly varying with time. For example, when there are four time slots between a current time slot and a previous time slot, and when the phase of a signal is 60° in the current time slot, and the phase of the signal is 10° in the previous time slot, the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 20°, 30°, 40° and 50° through interpolation of the signal in the current time slot and in the previous time slot. In FIG. 6A, P1 denotes the phase of a signal in the previous time slot and N1 denotes the phase of the signal in the current time slot.
  • According to a conventional signal phase interpolating method, the phase P1 is subtracted from the phase N1 and the interpolation step is obtained by dividing the subtraction result by the number of interpolation intervals, that is, one more than the number of time slots existing between the current time slot and the previous time slot. For example, when N1 is 350°, P1 is 25° and the number of time slots existing between the current time slot and the previous time slot is 4, phase interpolation is performed in a direction indicated by a dotted arrow illustrated in FIG. 6A to estimate the phase in the four time slots between the current time slot and the previous time slot as 90°, 155°, 220° and 285°.
  • In the phase interpolating method according to the current embodiment of the present invention, the phase interpolation direction can be changed when the absolute value of a difference between P1 and N1 is greater than 180°. In the current embodiment of the present invention, the absolute value of the difference between P1 and N1 is 325°, which is greater than 180°. In this case, the phase interpolation direction is changed to a direction indicated by a solid-line arrow illustrated in FIG. 6A, and thus the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 18°, 11°, 4° and 357° (that is, −3°).
  • In FIG. 6B, P2 denotes the phase of a signal in the previous time slot and N2 is the phase of a signal in the current time slot.
  • As described above, the conventional phase interpolating method subtracts P2 from N2 and divides the subtraction result by the number of interpolation intervals between the current time slot and the previous time slot. For example, when N2 is 25°, P2 is 350°, and the number of time slots existing between the current time slot and the previous time slot is 4, phase interpolation is performed along a direction indicated by a dotted arrow illustrated in FIG. 6B, and thus the phase in the four time slots between the current time slot and the previous time slot can be estimated as 285°, 220°, 155° and 90°.
  • In the phase interpolating method according to the current embodiment of the present invention, the phase interpolation direction can be changed when the absolute value of a difference between P2 and N2 is greater than 180°. In the current embodiment of the present invention, the absolute value of the difference between P2 and N2 is 325°, which is greater than 180°. In this case, the phase interpolation direction is changed to a direction indicated by a solid-line arrow illustrated in FIG. 6B, and thus the phase of the signal in the four time slots between the current time slot and the previous time slot can be estimated as 357° (that is, −3°), 4°, 11° and 18°.
  • As described above, the phase interpolating method according to the current embodiment of the present invention changes the phase interpolation direction when the absolute value of a difference between signal phases in two arbitrary time slots is greater than 180°, and thus a phase difference between interpolated values can be reduced to gradually vary the signal with time.
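  • A minimal sketch of this wrap-aware interpolation follows; the helper name and the degree-based interface are illustrative assumptions, and the two commented calls reproduce the FIG. 6A and FIG. 6B examples above:

    def interpolate_phase(prev_deg, curr_deg, n_slots):
        """Interpolate the phase over n_slots intermediate time slots; if the plain
        difference exceeds 180 degrees in magnitude, wrap around 360 degrees so the
        phase varies gradually (FIGS. 6A and 6B)."""
        diff = curr_deg - prev_deg
        if abs(diff) > 180.0:
            diff -= 360.0 if diff > 0 else -360.0      # e.g. 325 degrees becomes -35 degrees
        step = diff / (n_slots + 1)                    # n_slots intermediate values span n_slots+1 intervals
        return [(prev_deg + step * k) % 360.0 for k in range(1, n_slots + 1)]

    # interpolate_phase(25.0, 350.0, 4)  ->  [18.0, 11.0, 4.0, 357.0]   (FIG. 6A example)
    # interpolate_phase(350.0, 25.0, 4)  ->  [357.0, 4.0, 11.0, 18.0]   (FIG. 6B example)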
  • FIG. 7 is a flow chart of a multi-channel signal encoding method according to an embodiment of the present invention.
  • Referring to FIG. 7, the multi-channel signal encoding method includes operations sequentially performed in the multi-channel signal encoding system illustrated in FIG. 1, and thus the description of the multi-channel encoding system illustrated in FIG. 1 is applied to the multi-channel encoding method.
  • Referring to FIGS. 1 and 7, the down-mixing unit 12 down-mixes a multi-channel signal to a mono-signal and the mono-signal encoding unit 13 encodes the down-mixed mono-signal in operation 700.
  • The parameter extraction unit 14 extracts parameters that represent characteristic relations between channels of the multi-channel signal from the multi-channel signal in operation 710. The extracted parameters can include IID, IPD, OPD and ICC.
  • The parameter encoding unit 15 encodes some of the extracted parameters, excluding a parameter that can be estimated from those encoded parameters, in operation 720. Specifically, the parameter encoding unit 15 quantizes some of the extracted parameters and arithmetic-encodes the quantization result based on the context of the quantization result.
  • The multiplexing unit 16 multiplexes the encoded mono-signal and the encoded parameters in operation 730.
  • FIG. 8 is a flow chart of a multi-channel signal decoding method according to an embodiment of the present invention.
  • Referring to FIG. 8, the multi-channel signal decoding method includes operations sequentially performed in the multi-channel signal decoding system illustrated in FIG. 5, and thus the description of the multi-channel decoding system illustrated in FIG. 5 is applied to the multi-channel decoding method.
  • Referring to FIGS. 5 and 8, the mono-signal decoding unit 52 decodes a mono-signal representative of a multi-channel signal in operation 800. The parameter decoding unit 53 decodes parameters that represent characteristic relations between channels of the multi-channel signal in operation 810.
  • The OPD estimation unit 54 estimates an additional parameter by using the decoded parameters in operation 820. The additional parameter can be a phase parameter that represents a phase difference between the decoded mono-signal and the multi-channel signal. The OPD estimation unit 54 can multiply intermediate variables generated from IID of the multi-channel signal by the decoded mono-signal to generate first and second signals, generate a third signal from IPD of the multi-channel signal and the first and second signals, and estimate the phase parameter from the first, second and third signals.
  • The up-mixing unit 55 up-mixes the decoded mono-signal by using the decoded parameters and the estimated parameter to decode the multi-channel signal in operation 830.
  • In addition to the above described embodiments, embodiments of the present invention can also be implemented through computer readable code/instructions in/on a medium, e.g., a computer readable medium, to control at least one processing element to implement any above described embodiment. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
  • The computer readable code can be recorded/transferred on a medium in a variety of ways, with examples of the medium including recording media, such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs), and transmission media such as carrier waves, as well as through the Internet, for example. Thus, the medium may further be a signal, such as a resultant signal or bitstream, according to embodiments of the present invention. The media may also be a distributed network, so that the computer readable code is stored/transferred and executed in a distributed fashion. Still further, as only an example, the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
  • While aspects of the present invention have been particularly shown and described with reference to differing embodiments thereof, it should be understood that these exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Any narrowing or broadening of functionality or capability of an aspect in one embodiment should not be considered as a respective broadening or narrowing of similar features in a different embodiment, i.e., descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in the remaining embodiments.
  • Thus, although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (4)

1. An apparatus for generating a stereo signal from a down-mixed mono signal, the apparatus comprising:
a down-mixed signal decoder to decode the down-mixed mono signal included in a bitstream;
a parameter decoder to decode parameters that represent characteristic relations between channels, included in the bitstream;
a parameter estimator to estimate a parameter representing a phase difference between one of a left signal and a right signal and the down-mixed mono signal, by using the decoded parameters; and
an up-mixing unit to up-mix the decoded down-mixed mono signal by using the decoded parameters and the estimated parameter to generate the stereo signal.
2. The apparatus of claim 1, wherein the decoded parameters comprise a parameter that represents an energy difference between channels of the stereo signal, and a parameter that represents a phase difference between channels of the stereo signal.
3. The apparatus of claim 2, wherein the decoded parameters further comprise a parameter that represents a correlation between channels of the stereo signal.
4. The apparatus of claim 1, wherein the estimated parameter represents the phase difference between the left signal and the down-mixed mono signal.
US13/557,848 2007-10-30 2012-07-25 Method, medium, and system encoding/decoding multi-channel signal Active 2028-05-15 US8718284B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/557,848 US8718284B2 (en) 2007-10-30 2012-07-25 Method, medium, and system encoding/decoding multi-channel signal

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020070109729A KR101505831B1 (en) 2007-10-30 2007-10-30 Method and Apparatus of Encoding/Decoding Multi-Channel Signal
KR2007-109729 2007-10-30
US12/107,117 US8254584B2 (en) 2007-10-30 2008-04-22 Method, medium, and system encoding/decoding multi-channel signal
US13/557,848 US8718284B2 (en) 2007-10-30 2012-07-25 Method, medium, and system encoding/decoding multi-channel signal

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/107,117 Continuation US8254584B2 (en) 2007-10-30 2008-04-22 Method, medium, and system encoding/decoding multi-channel signal

Publications (2)

Publication Number Publication Date
US20120288099A1 true US20120288099A1 (en) 2012-11-15
US8718284B2 US8718284B2 (en) 2014-05-06

Family

ID=40582875

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/107,117 Active 2031-06-29 US8254584B2 (en) 2007-10-30 2008-04-22 Method, medium, and system encoding/decoding multi-channel signal
US13/557,848 Active 2028-05-15 US8718284B2 (en) 2007-10-30 2012-07-25 Method, medium, and system encoding/decoding multi-channel signal
US13/563,808 Active 2028-05-27 US8861738B2 (en) 2007-10-30 2012-08-01 Method, medium, and system encoding/decoding multi-channel signal

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/107,117 Active 2031-06-29 US8254584B2 (en) 2007-10-30 2008-04-22 Method, medium, and system encoding/decoding multi-channel signal

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/563,808 Active 2028-05-27 US8861738B2 (en) 2007-10-30 2012-08-01 Method, medium, and system encoding/decoding multi-channel signal

Country Status (2)

Country Link
US (3) US8254584B2 (en)
KR (1) KR101505831B1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101453732B1 (en) * 2007-04-16 2014-10-24 삼성전자주식회사 Method and apparatus for encoding and decoding stereo signal and multi-channel signal
EP2144230A1 (en) 2008-07-11 2010-01-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Low bitrate audio encoding/decoding scheme having cascaded switches
KR101600352B1 (en) * 2008-10-30 2016-03-07 삼성전자주식회사 / method and apparatus for encoding/decoding multichannel signal
US8666752B2 (en) * 2009-03-18 2014-03-04 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding multi-channel signal
KR101692394B1 (en) * 2009-08-27 2017-01-04 삼성전자주식회사 Method and apparatus for encoding/decoding stereo audio
WO2011039668A1 (en) * 2009-09-29 2011-04-07 Koninklijke Philips Electronics N.V. Apparatus for mixing a digital audio
KR101710113B1 (en) * 2009-10-23 2017-02-27 삼성전자주식회사 Apparatus and method for encoding/decoding using phase information and residual signal
CN102157149B (en) * 2010-02-12 2012-08-08 华为技术有限公司 Stereo signal down-mixing method and coding-decoding device and system
EP3422346B1 (en) 2010-07-02 2020-04-22 Dolby International AB Audio encoding with decision about the application of postfiltering when decoding
EP2612322B1 (en) * 2010-10-05 2016-05-11 Huawei Technologies Co., Ltd. Method and device for decoding a multichannel audio signal
KR101227932B1 (en) * 2011-01-14 2013-01-30 전자부품연구원 System for multi channel multi track audio and audio processing method thereof
TWI450266B (en) * 2011-04-19 2014-08-21 Hon Hai Prec Ind Co Ltd Electronic device and decoding method of audio files
EP3067886A1 (en) 2015-03-09 2016-09-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder for encoding a multichannel signal and audio decoder for decoding an encoded audio signal
EP3961623A1 (en) 2015-09-25 2022-03-02 VoiceAge Corporation Method and system for decoding left and right channels of a stereo sound signal
CN107731238B (en) * 2016-08-10 2021-07-16 华为技术有限公司 Coding method and coder for multi-channel signal
US11817878B2 (en) 2018-11-20 2023-11-14 Maxlinear, Inc. Multi-channel decoder with distributed scheduling

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060004583A1 (en) * 2004-06-30 2006-01-05 Juergen Herre Multi-channel synthesizer and method for generating a multi-channel output signal
US20060165237A1 (en) * 2004-11-02 2006-07-27 Lars Villemoes Methods for improved performance of prediction based multi-channel reconstruction
US20060233379A1 (en) * 2005-04-15 2006-10-19 Coding Technologies, AB Adaptive residual audio coding
US20080002842A1 (en) * 2005-04-15 2008-01-03 Fraunhofer-Geselschaft zur Forderung der angewandten Forschung e.V. Apparatus and method for generating multi-channel synthesizer control signal and apparatus and method for multi-channel synthesizing
US7961889B2 (en) * 2004-12-01 2011-06-14 Samsung Electronics Co., Ltd. Apparatus and method for processing multi-channel audio signal using space information
US7965848B2 (en) * 2006-03-29 2011-06-21 Dolby International Ab Reduced number of channels decoding
US8223976B2 (en) * 2004-04-16 2012-07-17 Dolby International Ab Apparatus and method for generating a level parameter and apparatus and method for generating a multi-channel representation

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100261253B1 (en) * 1997-04-02 2000-07-01 윤종용 Scalable audio encoder/decoder and audio encoding/decoding method
CA2427315C (en) * 2001-08-31 2008-10-14 Samsung Electronics Co., Ltd. Apparatus and method for transmitting and receiving forward channel quality information in a mobile communication system
KR101049751B1 (en) * 2003-02-11 2011-07-19 코닌클리케 필립스 일렉트로닉스 엔.브이. Audio coding
KR100561869B1 (en) * 2004-03-10 2006-03-17 삼성전자주식회사 Lossless audio decoding/encoding method and apparatus
US20060009985A1 (en) * 2004-06-16 2006-01-12 Samsung Electronics Co., Ltd. Multi-channel audio system
WO2006022124A1 (en) * 2004-08-27 2006-03-02 Matsushita Electric Industrial Co., Ltd. Audio decoder, method and program
SE0402650D0 (en) * 2004-11-02 2004-11-02 Coding Tech Ab Improved parametric stereo compatible coding or spatial audio
DE602006015294D1 (en) * 2005-03-30 2010-08-19 Dolby Int Ab MULTI-CHANNEL AUDIO CODING
KR100773562B1 (en) * 2006-03-06 2007-11-07 삼성전자주식회사 Method and apparatus for generating stereo signal
TWI340600B (en) * 2006-03-30 2011-04-11 Lg Electronics Inc Method for processing an audio signal, method of encoding an audio signal and apparatus thereof
KR100829560B1 (en) * 2006-08-09 2008-05-14 삼성전자주식회사 Method and apparatus for encoding/decoding multi-channel audio signal, Method and apparatus for decoding downmixed singal to 2 channel signal
KR101373004B1 (en) * 2007-10-30 2014-03-26 삼성전자주식회사 Apparatus and method for encoding and decoding high frequency signal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8223976B2 (en) * 2004-04-16 2012-07-17 Dolby International Ab Apparatus and method for generating a level parameter and apparatus and method for generating a multi-channel representation
US8538031B2 (en) * 2004-04-16 2013-09-17 Dolby International Ab Method for representing multi-channel audio signals
US20060004583A1 (en) * 2004-06-30 2006-01-05 Juergen Herre Multi-channel synthesizer and method for generating a multi-channel output signal
US20060165237A1 (en) * 2004-11-02 2006-07-27 Lars Villemoes Methods for improved performance of prediction based multi-channel reconstruction
US7961889B2 (en) * 2004-12-01 2011-06-14 Samsung Electronics Co., Ltd. Apparatus and method for processing multi-channel audio signal using space information
US20060233379A1 (en) * 2005-04-15 2006-10-19 Coding Technologies, AB Adaptive residual audio coding
US20080002842A1 (en) * 2005-04-15 2008-01-03 Fraunhofer-Geselschaft zur Forderung der angewandten Forschung e.V. Apparatus and method for generating multi-channel synthesizer control signal and apparatus and method for multi-channel synthesizing
US7965848B2 (en) * 2006-03-29 2011-06-21 Dolby International Ab Reduced number of channels decoding

Also Published As

Publication number Publication date
US20090110201A1 (en) 2009-04-30
KR20090043921A (en) 2009-05-07
US20120294448A1 (en) 2012-11-22
US8861738B2 (en) 2014-10-14
US8254584B2 (en) 2012-08-28
KR101505831B1 (en) 2015-03-26
US8718284B2 (en) 2014-05-06

Similar Documents

Publication Publication Date Title
US8718284B2 (en) Method, medium, and system encoding/decoding multi-channel signal
KR101945309B1 (en) Apparatus and method for encoding/decoding using phase information and residual signal
CN103052983B (en) Audio or video scrambler, audio or video demoder and Code And Decode method
US9384743B2 (en) Apparatus and method for encoding/decoding multichannel signal
RU2645271C2 (en) Stereophonic code and decoder of audio signals
US7916873B2 (en) Stereo compatible multi-channel audio coding
EP2410515B1 (en) Apparatus and method for decoding a multichannel signal
EP2467850B1 (en) Method and apparatus for decoding multi-channel audio signals
US20080077412A1 (en) Method, medium, and system encoding and/or decoding audio signals by using bandwidth extension and stereo coding
US20080120117A1 (en) Method, medium, and apparatus with bandwidth extension encoding and/or decoding
US20100014679A1 (en) Multi-channel encoding and decoding method and apparatus
US9685168B2 (en) Apparatus for encoding/decoding multichannel signal and method thereof
KR20080109299A (en) Method of encoding/decoding audio signal and apparatus using the same
US8976970B2 (en) Apparatus and method for bandwidth extension for multi-channel audio
EP2439736A1 (en) Down-mixing device, encoder, and method therefor
US20120093321A1 (en) Apparatus and method for encoding and decoding spatial parameter
KR101500972B1 (en) Method and Apparatus of Encoding/Decoding Multi-Channel Signal

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8