WO2006104017A1 - Dispositif de codage sonore et procédé de codage sonore - Google Patents

Dispositif de codage sonore et procédé de codage sonore

Info

Publication number
WO2006104017A1
WO2006104017A1 PCT/JP2006/305871 JP2006305871W
Authority
WO
WIPO (PCT)
Prior art keywords
amplitude ratio
quantization
delay difference
prediction parameter
signal
Prior art date
Application number
PCT/JP2006/305871
Other languages
English (en)
Japanese (ja)
Inventor
Koji Yoshida
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Priority to JP2007510437A priority Critical patent/JP4887288B2/ja
Priority to ES06729819.0T priority patent/ES2623551T3/es
Priority to EP06729819.0A priority patent/EP1858006B1/fr
Priority to US11/909,556 priority patent/US8768691B2/en
Priority to CN2006800096953A priority patent/CN101147191B/zh
Publication of WO2006104017A1 publication Critical patent/WO2006104017A1/fr

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032Quantisation or dequantisation of spectral components

Definitions

  • the present invention relates to a speech coding apparatus and speech coding method, and more particularly to a speech coding apparatus and speech coding method for stereo speech.
  • a voice coding scheme having a scalable configuration is desired in order to control traffic on the network and realize multicast communication.
  • a scalable configuration is a configuration in which speech data can be decoded on the receiving side even from a part of the encoded data.
  • coding with a scalable configuration between monaural and stereo (a monaural-stereo scalable configuration) is desired, which allows the receiving side to choose between decoding the stereo signal and decoding a monaural signal using only a part of the encoded data.
  • as a speech coding method having such a monaural-stereo scalable configuration, for example, prediction of signals between channels (hereinafter abbreviated as "ch" as appropriate), that is, prediction from the 1st ch signal to the 2nd ch signal or from the 2nd ch signal to the 1st ch signal, is performed by pitch prediction between channels; in other words, coding is performed using the correlation between the two channels (Non-Patent Document 1).
  • Non-Patent Document 1: Ramprashad, S.A., "Stereophonic CELP coding using cross channel prediction", Proc. IEEE Workshop on Speech Coding, pp. 136-138, Sep. 2000.
  • in Non-Patent Document 1, however, the prediction parameters between channels (the delay and gain of inter-channel pitch prediction) are encoded independently of each other, so the coding efficiency is not high.
  • An object of the present invention is to provide a speech encoding apparatus and speech encoding method that can efficiently encode stereo speech.
  • the speech coding apparatus of the present invention includes a prediction parameter analysis means that obtains the delay difference and the amplitude ratio between a first signal and a second signal as prediction parameters, and a quantization means that obtains quantized prediction parameters from the prediction parameters based on the correlation between the delay difference and the amplitude ratio.
  • stereo sound can be efficiently encoded.
  • FIG. 1 is a block diagram showing a configuration of a speech coding apparatus according to Embodiment 1.
  • FIG. 2 is a block diagram showing a configuration of a second channel prediction unit according to Embodiment 1.
  • FIG. 3 is a block diagram showing a configuration of a prediction parameter quantization unit according to Embodiment 1 (Configuration Example 1).
  • FIG. 4 is a characteristic diagram showing an example of a prediction parameter codebook according to Embodiment 1.
  • FIG. 5 is a block diagram showing a configuration of a prediction parameter quantization unit according to Embodiment 1 (Configuration Example 2).
  • FIG. 6 is a characteristic diagram showing an example of a function used in the amplitude ratio estimation unit according to Embodiment 1.
  • FIG. 7 is a block diagram showing a configuration of a prediction parameter quantization unit according to Embodiment 2 (configuration example 3).
  • FIG. 8 is a characteristic diagram showing an example of a function used in the distortion calculation unit according to the second embodiment.
  • FIG. 9 is a block diagram showing a configuration of a prediction parameter quantization unit according to Embodiment 2 (configuration example 4).
  • FIG. 10 is a characteristic diagram showing an example of functions used in the amplitude ratio correction unit and the amplitude ratio estimation unit according to Embodiment 2.
  • FIG. 11 is a block diagram showing a configuration of a prediction parameter quantization unit according to Embodiment 2 (configuration example 5).
  • the configuration of the speech coding apparatus according to this embodiment is shown in FIG.
  • the speech encoding apparatus 10 shown in FIG. 1 includes a 1st ch encoding unit 11, a 1st ch decoding unit 12, a 2nd ch prediction unit 13, a subtractor 14, and a 2nd ch prediction residual encoding unit 15.
  • in the following description, operation is assumed to be performed on a frame basis.
  • the 1st ch encoding unit 11 encodes the 1st ch audio signal and outputs the resulting encoded data (1st ch encoded data) to the 1st ch decoding unit 12. Further, the 1st ch encoded data is multiplexed with the 2nd ch prediction parameter encoded data and the 2nd ch encoded data and transmitted to a speech decoding apparatus (not shown).
  • the 1st ch decoding unit 12 generates a 1st ch decoded signal from the 1st ch encoded data and outputs it to the 2nd ch prediction unit 13.
  • the 2nd ch prediction parameter encoded data, obtained by encoding the 2nd ch prediction parameters, is output.
  • the second channel prediction parameter encoded data is multiplexed with other encoded data and transmitted to a speech decoding apparatus (not shown).
  • Second channel prediction residual encoding section 15 encodes the second channel prediction residual signal and outputs second channel encoded data.
  • the 2nd ch encoded data is multiplexed with the other encoded data and transmitted to the speech decoding apparatus.
  • FIG. 2 shows the configuration of the second channel prediction unit 13.
  • the second channel prediction unit 13 includes a prediction parameter analysis unit 21, a prediction parameter quantization unit 22, and a signal prediction unit 23.
  • based on the correlation between the channel signals of the stereo signal, the 2nd ch prediction unit 13 predicts the 2nd ch audio signal from the 1st ch audio signal, using parameters based on the delay difference D and the amplitude ratio g of the 2nd ch audio signal relative to the 1st ch audio signal.
  • the prediction parameter analysis unit 21 obtains the delay difference D and the amplitude ratio g of the 2nd ch audio signal relative to the 1st ch audio signal from the 1st ch decoded signal and the 2nd ch audio signal as inter-channel prediction parameters, and outputs them to the prediction parameter quantization unit 22.
  • the prediction parameter quantization unit 22 quantizes the input prediction parameter (delay difference D, amplitude ratio g), and outputs the quantized prediction parameter and the second channel prediction parameter encoded data.
  • the quantized prediction parameter is input to the signal prediction unit 23.
  • the details of the prediction parameter quantization unit 22 will be described later.
  • the signal prediction unit 23 predicts the 2nd ch signal using the 1st ch decoded signal and the quantized prediction parameters, and outputs the prediction signal. Specifically, the 2nd ch prediction signal sp_ch2(n) is obtained from the 1st ch decoded signal sd_ch1(n) according to equation (1).
  • sp_ch2(n) = g · sd_ch1(n − D)   … (1)
  • the prediction parameter analysis unit 21 obtains the prediction parameters (delay difference D, amplitude ratio g) that minimize the distortion Dist of equation (2), that is, the distortion between the 2nd ch audio signal s_ch2(n) and the 2nd ch prediction signal sp_ch2(n).
  • alternatively, the prediction parameter analysis unit 21 may obtain, as the prediction parameters, the delay difference D that maximizes the cross-correlation between the 2nd ch audio signal and the 1st ch decoded signal, together with the ratio g of the average amplitudes per frame.
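As a rough illustration of this analysis step, a minimal sketch in Python follows (the function names, the frame length, and the circular-shift treatment of frame edges are assumptions for illustration; the patent does not specify an implementation). It finds the delay difference by maximizing the cross-correlation, takes the amplitude ratio as the ratio of average amplitudes per frame, and forms the prediction of equation (1):

```python
import numpy as np

def analyze_prediction_parameters(sd_ch1, s_ch2, max_delay=40):
    """Estimate the delay difference D and amplitude ratio g of the 2nd ch
    signal relative to the 1st ch decoded signal, frame by frame."""
    best_D, best_corr = 0, -np.inf
    for D in range(-max_delay, max_delay + 1):
        shifted = np.roll(sd_ch1, D)          # sd_ch1(n - D), circular for simplicity
        corr = np.dot(s_ch2, shifted)         # cross-correlation at lag D
        if corr > best_corr:
            best_corr, best_D = corr, D
    # amplitude ratio: ratio of the average absolute amplitudes per frame
    g = np.mean(np.abs(s_ch2)) / (np.mean(np.abs(sd_ch1)) + 1e-12)
    return best_D, g

def predict_ch2(sd_ch1, D, g):
    """Equation (1): sp_ch2(n) = g * sd_ch1(n - D)."""
    return g * np.roll(sd_ch1, D)
```

With a 2nd ch signal that is exactly a delayed, attenuated copy of the 1st ch signal, the analysis recovers the delay and gain and the prediction residual is zero.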
  • the delay difference D and the amplitude ratio g are correlated with each other owing to the spatial characteristics (e.g., the distance) between the sound source of the signal and the receiving point. The prediction parameter quantization unit 22 uses this relationship to encode the inter-channel prediction parameters (delay difference D, amplitude ratio g) efficiently, realizing equivalent quantization distortion with a smaller number of quantization bits.
  • the configuration of the prediction parameter quantization unit 22 according to the present embodiment is as shown in Fig. 3 <Configuration Example 1> or Fig. 5 <Configuration Example 2>.
  • the delay difference D and the amplitude ratio g are represented as a two-dimensional vector, and vector quantization is performed on the two-dimensional vector.
  • Figure 4 is a characteristic diagram showing the code vectors used for this two-dimensional vector quantization, each plotted as a dot (●).
  • the distortion calculation unit 31 calculates the distortion between the prediction parameter, represented as a two-dimensional vector (D, g) consisting of the delay difference D and the amplitude ratio g, and each code vector of the prediction parameter codebook 33.
  • the minimum distortion search unit 32 searches for the code vector with the smallest distortion among all the code vectors, sends the search result to the prediction parameter codebook 33, and outputs the index corresponding to that code vector as the 2nd ch prediction parameter encoded data.
  • based on the search result, the prediction parameter codebook 33 outputs the code vector with the smallest distortion as the quantized prediction parameter.
  • the distortion Dst(k) between the prediction parameter (D, g) and the k-th code vector (Dc(k), gc(k)) is calculated as Dst(k) = wd · (D − Dc(k))² + wg · (g − gc(k))², where wd and wg are weight constants that adjust the balance between the quantization distortion of the delay difference and the quantization distortion of the amplitude ratio in the distortion calculation.
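Configuration Example 1 can be sketched as a weighted nearest-neighbor search over a small codebook of (D, g) pairs. The codebook values below are made up for illustration; in the patent the codebook reflects the correlation between delay difference and amplitude ratio:

```python
import numpy as np

def quantize_prediction_parameters(D, g, codebook, wd=1.0, wg=1.0):
    """Joint vector quantization of the prediction parameter (D, g).

    codebook: array of shape (K, 2) holding the code vectors (Dc(k), gc(k)).
    wd, wg:   weight constants balancing the delay-difference and
              amplitude-ratio quantization distortions.
    Returns the index (the 2nd ch prediction parameter encoded data) and
    the selected code vector (the quantized prediction parameter)."""
    dst = wd * (codebook[:, 0] - D) ** 2 + wg * (codebook[:, 1] - g) ** 2
    k = int(np.argmin(dst))
    return k, (codebook[k, 0], codebook[k, 1])
```

Only the index k needs to be transmitted; the decoder holds the same codebook and recovers the quantized (D, g) pair by table lookup.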
  • in Configuration Example 2, a function for estimating the amplitude ratio g from the delay difference D is determined in advance; after the delay difference D is quantized, the estimation residual of the amplitude ratio relative to the amplitude ratio estimated from the quantized delay difference using that function is quantized.
  • the delay difference quantization unit 51 quantizes the delay difference D of the prediction parameters, and outputs the quantized delay difference Dq to the amplitude ratio estimation unit 52 and also as a quantized prediction parameter. In addition, the delay difference quantization unit 51 outputs the quantized delay difference index obtained by quantizing the delay difference D as 2nd ch prediction parameter encoded data.
  • the amplitude ratio estimation unit 52 calculates an estimated amplitude ratio gp from the quantized delay difference Dq and outputs it to the amplitude ratio estimation residual quantization unit 53.
  • for this estimation, a function prepared in advance for estimating the amplitude ratio from the quantized delay difference is used. This function is obtained in advance by learning: a plurality of data pairs indicating the correspondence between the quantized delay difference Dq and the estimated amplitude ratio gp are collected from stereo audio signals for learning, and the function is derived from those correspondences.
  • the amplitude ratio estimation residual quantization unit 53 obtains the estimation residual δg of the amplitude ratio g relative to the estimated amplitude ratio gp according to equation (4).
  • the amplitude ratio estimation residual quantization unit 53 quantizes the estimation residual δg obtained by equation (4) and outputs the quantized estimation residual as a quantized prediction parameter.
  • further, the amplitude ratio estimation residual quantization unit 53 outputs the quantized estimation residual index obtained by quantizing the estimation residual δg as 2nd ch prediction parameter encoded data.
  • FIG. 6 shows an example of a function used in the amplitude ratio estimation unit 52.
  • the input prediction parameters (D, g) are shown as a point (a two-dimensional vector) on the coordinate plane of Fig. 6.
  • the amplitude ratio estimation residual quantization unit 53 obtains the estimation residual δg of the amplitude ratio g of the input prediction parameters relative to the estimated amplitude ratio gp, and quantizes this estimation residual δg.
  • the quantization error can be reduced as compared with the case where the amplitude ratio is directly quantized, and as a result, the quantization efficiency can be improved.
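A minimal sketch of Configuration Example 2 follows. The delay codebook, the linear form of the estimation function, the difference form of the residual, and the residual step size are all assumptions for illustration; in the patent the estimation function is learned from stereo signals and equation (4) is not reproduced in this excerpt:

```python
import numpy as np

DELAY_CODEBOOK = np.arange(-8, 9)        # hypothetical quantized-delay candidates

def estimate_amplitude_ratio(Dq, a=0.02):
    """Stand-in for the learned function of Fig. 6: a farther source arrives
    later and weaker, so the estimated amplitude ratio falls as Dq grows."""
    return 1.0 - a * Dq

def quantize_config2(D, g, step=0.05):
    """Quantize D first, estimate gp from the quantized delay difference,
    then scalar-quantize the estimation residual delta_g = g - gp
    (a difference residual is assumed here)."""
    Dq = int(DELAY_CODEBOOK[np.argmin(np.abs(DELAY_CODEBOOK - D))])
    gp = estimate_amplitude_ratio(Dq)
    delta_g = g - gp
    delta_q = round(delta_g / step) * step      # quantized estimation residual
    return Dq, gp + delta_q                     # quantized (D, g)
```

Because the residual delta_g has a much smaller range than g itself, the same step size yields a smaller quantization error than quantizing g directly, which is the efficiency gain the text describes.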
  • in the above, the configuration has been described in which the estimated amplitude ratio gp is obtained from the quantized delay difference Dq using a function for estimating the amplitude ratio from the quantized delay difference, and the estimation residual δg of the input amplitude ratio g relative to the estimated amplitude ratio gp is quantized. Alternatively, a configuration may be adopted in which the input amplitude ratio g is quantized, the estimated delay difference Dp is obtained from the quantized amplitude ratio gq using a function for estimating the delay difference from the quantized amplitude ratio, and the estimation residual δD of the input delay difference D relative to the estimated delay difference Dp is quantized.
  • the speech coding apparatus according to Embodiment 2 differs from Embodiment 1 in the configuration of the prediction parameter quantization unit 22 (FIGS. 2, 3, and 5). In the quantization of the prediction parameters in this embodiment, the delay difference and the amplitude ratio are quantized so that the quantization errors of the two parameters cancel each other audibly. In other words, when the quantization error of the delay difference occurs in the positive direction, the amplitude ratio is quantized so that its quantization error becomes larger; conversely, when the quantization error of the delay difference occurs in the negative direction, the amplitude ratio is quantized so that its quantization error becomes smaller.
  • the delay difference and the amplitude ratio are quantized by adjusting the quantization error of the delay difference and the quantization error of the amplitude ratio so that the stereo localization does not change audibly.
  • as a result, the prediction parameters can be encoded more efficiently; that is, equivalent sound quality can be achieved at a lower coding bit rate, or higher sound quality at the same coding bit rate.
  • the configuration of the prediction parameter quantization unit 22 according to the present embodiment is as shown in Fig. 7 <Configuration Example 3> or Fig. 9 <Configuration Example 4>.
  • Configuration Example 3 differs from Configuration Example 1 (Fig. 3) in how the distortion is calculated.
  • in FIG. 7, the same components as those in FIG. 3 are denoted by the same reference numerals.
  • the distortion calculation unit 71 calculates the distortion between the prediction parameter, represented as a two-dimensional vector (D, g) composed of the delay difference D and the amplitude ratio g, and each code vector of the prediction parameter codebook 33.
  • specifically, the distortion calculation unit 71 maps the input prediction parameter vector (D, g) to the auditorily equivalent point (Dc′(k), gc′(k)) nearest to each code vector (Dc(k), gc(k)), and calculates the distortion Dst(k) between that point and the code vector according to equation (5).
  • here, wd and wg are weight constants that adjust the balance between the quantization distortion of the delay difference and the quantization distortion of the amplitude ratio in the distortion calculation.
  • Dst(k) = wd · (Dc′(k) − Dc(k))² + wg · (gc′(k) − gc(k))²   … (5)
  • as shown in FIG. 8, the auditorily equivalent point nearest to each code vector (Dc(k), gc(k)) corresponds to the foot of the perpendicular dropped from that code vector onto function 81, on which the stereo localization is audibly equivalent to that of the input prediction parameter vector (D, g).
  • this function 81 is a function in which the delay difference D and the amplitude ratio g are proportional in the positive direction; it is based on the auditory characteristic that a larger delay difference combined with a larger amplitude ratio, or a smaller delay difference combined with a smaller amplitude ratio, gives an audibly equivalent localization.
  • that is, every point on function 81 is perceptually equivalent to the input prediction parameter vector (D, g), and the point on function 81 closest to each code vector (Dc(k), gc(k)) (i.e., the foot of the perpendicular) is taken as its auditorily equivalent point.
  • as a result, instead of code vector A (quantization distortion A) or code vector B (quantization distortion B), code vector C (quantization distortion C), which is audibly closer to the input prediction parameter vector in terms of stereo localization, becomes the quantized value, so quantization can be performed with less audible distortion.
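The projection in Configuration Example 3 can be sketched as follows. The slope a of function 81, the weights, and the codebook values are illustrative assumptions; each code vector is projected perpendicularly onto the auditory-equivalence line through the input (D, g), and equation (5) is evaluated at the foot of that perpendicular:

```python
import numpy as np

def auditory_distortions(D, g, codebook, a=0.05, wd=1.0, wg=1.0):
    """For each code vector (Dc, gc), drop a perpendicular onto the line
    through the input (D, g) with slope a (function 81); the foot
    (Dc', gc') is the nearest auditorily equivalent point, and the
    distortion is equation (5):
        Dst(k) = wd*(Dc'(k) - Dc(k))**2 + wg*(gc'(k) - gc(k))**2
    Returns the index of the minimum-distortion code vector and all Dst(k)."""
    dst = []
    for Dc, gc in codebook:
        t = ((Dc - D) + a * (gc - g)) / (1.0 + a * a)   # parameter of the foot
        Dc_p, gc_p = D + t, g + a * t                    # auditorily equivalent point
        dst.append(wd * (Dc_p - Dc) ** 2 + wg * (gc_p - gc) ** 2)
    return int(np.argmin(dst)), dst
```

A code vector lying on the equivalence line has zero distortion even if it is far from the input in the Euclidean sense, which is the behavior illustrated by code vectors A, B, and C in Fig. 8.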
  • in Configuration Example 4, the delay difference quantization unit 51 also outputs the quantized delay difference Dq to the amplitude ratio correction unit 91.
  • the amplitude ratio correction unit 91 corrects the amplitude ratio g to an audibly equivalent value based on the quantization error of the delay difference, and obtains the corrected amplitude ratio g′.
  • the corrected amplitude ratio g ′ is input to the amplitude ratio estimation residual quantization unit 92.
  • the amplitude ratio estimation residual quantization unit 92 obtains the estimation residual δg of the corrected amplitude ratio g′ relative to the estimated amplitude ratio gp according to equation (6).
  • the amplitude ratio estimation residual quantization unit 92 quantizes the estimation residual δg obtained by equation (6) and outputs the quantized estimation residual as a quantized prediction parameter. Further, the amplitude ratio estimation residual quantization unit 92 outputs the quantized estimation residual index obtained by quantizing the estimation residual δg as 2nd ch prediction parameter encoded data.
  • FIG. 10 shows an example of functions used in the amplitude ratio correction unit 91 and the amplitude ratio estimation unit 52.
  • the function 81 used in the amplitude ratio correction unit 91 is the same as the function 81 used in Configuration Example 3, and the function 61 used in the amplitude ratio estimation unit 52 is the same as the function 61 used in Configuration Example 2.
  • as described above, function 81 is a function in which the delay difference D and the amplitude ratio g are proportional in the positive direction. Using function 81 and the quantized delay difference Dq, the amplitude ratio correction unit 91 obtains the corrected amplitude ratio g′, which is audibly equivalent to the amplitude ratio g given the quantization error of the delay difference.
  • the amplitude ratio estimation residual quantization unit 92 obtains the estimation residual δg of the corrected amplitude ratio g′ relative to the estimated amplitude ratio gp, and quantizes this estimation residual δg.
  • in this way, the estimation residual is obtained from the amplitude ratio corrected to an audibly equivalent value based on the quantization error of the delay difference (the corrected amplitude ratio), and that estimation residual is quantized, so quantization with small audible distortion and small quantization error can be performed.
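The correction step of Configuration Example 4 reduces to moving the amplitude ratio along the auditory-equivalence line by the delay quantization error, so that the pair (Dq, g′) localizes like the original (D, g). A one-function sketch, with the slope a of function 81 again an assumed value:

```python
def correct_amplitude_ratio(D, g, Dq, a=0.05):
    """Shift g along function 81 (assumed linear with slope a) by the delay
    quantization error Dq - D to obtain the corrected amplitude ratio g'.
    A positive delay error enlarges g, a negative one shrinks it, matching
    the cancellation behavior described in the text."""
    return g + a * (Dq - D)
```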
  • even in a configuration in which the amplitude ratio is quantized directly, without estimating it from the delay difference, the auditory characteristics relating to the delay difference and the amplitude ratio may be used as in the present embodiment.
  • the configuration of the prediction parameter quantization unit 22 in this case is as shown in FIG. In FIG. 11, the same components as those in Configuration Example 4 (FIG. 9) are denoted by the same reference numerals.
  • the amplitude ratio correction unit 91 corrects the amplitude ratio g to an audibly equivalent value based on the quantization error of the delay difference to obtain the corrected amplitude ratio g′.
  • the corrected amplitude ratio g′ is input to the amplitude ratio quantization unit 1101.
  • the amplitude ratio quantization unit 1101 quantizes the corrected amplitude ratio g 'and outputs the quantized amplitude ratio as a quantization prediction parameter.
  • the amplitude ratio quantization unit 1101 outputs the quantized amplitude ratio index obtained by quantizing the corrected amplitude ratio g′ as 2nd ch prediction parameter encoded data.
  • in the above embodiments, the prediction parameters (delay difference D and amplitude ratio g) have been described as scalar values (one-dimensional values); however, a plurality of prediction parameters obtained over a plurality of time units (frames) may be combined into a vector of two or more dimensions and quantized in the same manner as above.
  • each of the above embodiments can also be applied to a speech coding apparatus having a monaural-stereo scalable configuration.
  • in that case, in the monaural core layer, a monaural signal is generated from the input stereo signal (the 1st ch and 2nd ch audio signals) and encoded. In the stereo enhancement layer, the 1st ch (or 2nd ch) audio signal is predicted from the monaural decoded signal by inter-channel prediction, and the prediction residual signal between this prediction signal and the 1st ch (or 2nd ch) audio signal is encoded.
  • further, CELP coding may be used in the monaural core layer and the stereo enhancement layer; in that case, in the stereo enhancement layer, inter-channel prediction is performed on the monaural excitation signal obtained in the monaural core layer, and the prediction residual may be encoded by CELP excitation coding.
  • in the scalable configuration, the inter-channel prediction parameters are parameters for predicting the 1st ch (or 2nd ch) audio signal from the monaural signal.
  • also, the delay differences Dm1 and Dm2 and the amplitude ratios gm1 and gm2 of the 1st ch audio signal and the 2nd ch audio signal relative to the monaural signal may be combined for the two channel signals and quantized in the same manner as in Embodiment 2.
  • the speech coding apparatus according to each of the above embodiments can also be mounted on wireless communication apparatuses, such as wireless communication mobile station apparatuses and wireless communication base station apparatuses, used in mobile communication systems.
  • each functional block used in the description of each of the above embodiments is typically realized as an LSI, which is an integrated circuit. These blocks may each be integrated into a separate chip, or a single chip may include some or all of them.
  • the method of circuit integration is not limited to LSI; integration may also be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
  • the present invention can be applied to the use of a communication apparatus in a mobile communication system, a packet communication system using the Internet protocol, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Mathematical Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Disclosed is a sound encoding device capable of efficiently encoding stereo sound. In this sound encoding device, a prediction parameter analysis section (21) obtains, as inter-channel prediction parameters, the delay difference D and the amplitude ratio g of a second-channel sound signal relative to a first-channel sound signal, from a first-channel decoded signal and the second-channel sound signal; a prediction parameter quantization section (22) quantizes the prediction parameters; and a signal prediction section (23) predicts the second-channel signal using the first-channel decoded signal and the quantized prediction parameters. The prediction parameter quantization section (22) encodes and quantizes the prediction parameters (the delay difference D and the amplitude ratio g) using the relationship (correlation) between the delay difference D and the amplitude ratio g that arises from the spatial characteristics (for example, the distance) between the sound source of the signal and the receiving point.
PCT/JP2006/305871 2005-03-25 2006-03-23 Dispositif de codage sonore et procédé de codage sonore WO2006104017A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2007510437A JP4887288B2 (ja) 2005-03-25 2006-03-23 音声符号化装置および音声符号化方法
ES06729819.0T ES2623551T3 (es) 2005-03-25 2006-03-23 Dispositivo de codificación de sonido y procedimiento de codificación de sonido
EP06729819.0A EP1858006B1 (fr) 2005-03-25 2006-03-23 Dispositif de codage sonore et procédé de codage sonore
US11/909,556 US8768691B2 (en) 2005-03-25 2006-03-23 Sound encoding device and sound encoding method
CN2006800096953A CN101147191B (zh) 2005-03-25 2006-03-23 语音编码装置和语音编码方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005088808 2005-03-25
JP2005-088808 2005-03-25

Publications (1)

Publication Number Publication Date
WO2006104017A1 true WO2006104017A1 (fr) 2006-10-05

Family

ID=37053274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/305871 WO2006104017A1 (fr) 2005-03-25 2006-03-23 Dispositif de codage sonore et procédé de codage sonore

Country Status (6)

Country Link
US (1) US8768691B2 (fr)
EP (1) EP1858006B1 (fr)
JP (1) JP4887288B2 (fr)
CN (1) CN101147191B (fr)
ES (1) ES2623551T3 (fr)
WO (1) WO2006104017A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008090970A1 (fr) * 2007-01-26 2008-07-31 Panasonic Corporation Dispositif de codage stéréo, dispositif de décodage stéréo, et leur procédé
JP2013148682A (ja) * 2012-01-18 2013-08-01 Fujitsu Ltd オーディオ符号化装置、オーディオ符号化方法及びオーディオ符号化用コンピュータプログラム

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0721079A2 (pt) * 2006-12-13 2014-07-01 Panasonic Corp Dispositivo de codificação, dispositivo de decodificação e método dos mesmos
JP4708446B2 (ja) 2007-03-02 2011-06-22 パナソニック株式会社 符号化装置、復号装置およびそれらの方法
JP4871894B2 (ja) 2007-03-02 2012-02-08 パナソニック株式会社 符号化装置、復号装置、符号化方法および復号方法
US8983830B2 (en) 2007-03-30 2015-03-17 Panasonic Intellectual Property Corporation Of America Stereo signal encoding device including setting of threshold frequencies and stereo signal encoding method including setting of threshold frequencies
KR101428487B1 (ko) * 2008-07-11 2014-08-08 삼성전자주식회사 멀티 채널 부호화 및 복호화 방법 및 장치
PL3779977T3 (pl) * 2010-04-13 2023-11-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Dekoder audio do przetwarzania audio stereo z wykorzystaniem zmiennego kierunku predykcji
EP3723085B1 (fr) 2016-03-21 2022-11-16 Huawei Technologies Co., Ltd. Quantification adaptative de coefficients de matrice pondérés
CN107358959B (zh) * 2016-05-10 2021-10-26 华为技术有限公司 多声道信号的编码方法和编码器
US11176954B2 (en) * 2017-04-10 2021-11-16 Nokia Technologies Oy Encoding and decoding of multichannel or stereo audio signals

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004509365A (ja) * 2000-09-15 2004-03-25 テレフオンアクチーボラゲツト エル エム エリクソン 複数チャネル信号の符号化及び復号化

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS52116103A (en) * 1976-03-26 1977-09-29 Kokusai Denshin Denwa Co Ltd Multistage selection dpcm system
US5651090A (en) * 1994-05-06 1997-07-22 Nippon Telegraph And Telephone Corporation Coding method and coder for coding input signals of plural channels using vector quantization, and decoding method and decoder therefor
JP3180762B2 (ja) * 1998-05-11 2001-06-25 日本電気株式会社 音声符号化装置及び音声復号化装置
US7583805B2 (en) * 2004-02-12 2009-09-01 Agere Systems Inc. Late reverberation-based synthesis of auditory scenes
US7644003B2 (en) * 2001-05-04 2010-01-05 Agere Systems Inc. Cue-based audio coding/decoding
DE60230925D1 (de) * 2001-12-25 2009-03-05 Ntt Docomo Inc Signalcodierung
KR101021079B1 (ko) * 2002-04-22 2011-03-14 코닌클리케 필립스 일렉트로닉스 엔.브이. 파라메트릭 다채널 오디오 표현
EP1500084B1 (fr) 2002-04-22 2008-01-23 Koninklijke Philips Electronics N.V. Representation parametrique d'un signal audio spatial
EP1523863A1 (fr) * 2002-07-16 2005-04-20 Koninklijke Philips Electronics N.V. Codage audio
EP1595247B1 (fr) * 2003-02-11 2006-09-13 Koninklijke Philips Electronics N.V. Codage audio
CN1898724A (zh) * 2003-12-26 2007-01-17 松下电器产业株式会社 语音/乐音编码设备及语音/乐音编码方法
DE602005006777D1 (de) * 2004-04-05 2008-06-26 Koninkl Philips Electronics Nv Mehrkanal-codierer
US8843378B2 (en) * 2004-06-30 2014-09-23 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Multi-channel synthesizer and method for generating a multi-channel output signal
US7756713B2 (en) * 2004-07-02 2010-07-13 Panasonic Corporation Audio signal decoding device which decodes a downmix channel signal and audio signal encoding device which encodes audio channel signals together with spatial audio information
CN1922655A (zh) * 2004-07-06 2007-02-28 Matsushita Electric Industrial Co., Ltd. Audio signal encoding device, audio signal decoding device, method, and program
US7391870B2 (en) * 2004-07-09 2008-06-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E V Apparatus and method for generating a multi-channel output signal
KR100672355B1 (ko) * 2004-07-16 2007-01-24 LG Electronics Inc. Speech coding/decoding method and apparatus therefor
US7720230B2 (en) * 2004-10-20 2010-05-18 Agere Systems, Inc. Individual channel shaping for BCC schemes and the like
SE0402651D0 (sv) * 2004-11-02 2004-11-02 Coding Tech Ab Advanced methods for interpolation and parameter signalling
EP1814104A4 (fr) * 2004-11-30 2008-12-31 Panasonic Corp Stereo encoding apparatus, stereo decoding apparatus, and methods thereof
US7797162B2 (en) * 2004-12-28 2010-09-14 Panasonic Corporation Audio encoding device and audio encoding method
TW200705386A (en) * 2005-01-11 2007-02-01 Agency Science Tech & Res Encoder, decoder, method for encoding/decoding, computer readable media and computer program elements
US7573912B2 (en) * 2005-02-22 2009-08-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Near-transparent or transparent multi-channel encoder/decoder scheme
US7751572B2 (en) * 2005-04-15 2010-07-06 Dolby International Ab Adaptive residual audio coding
ATE378675T1 (de) * 2005-04-19 2007-11-15 Coding Tech Ab Energy-dependent quantization for efficient coding of spatial audio parameters

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004509365A (ja) * 2000-09-15 2004-03-25 Telefonaktiebolaget LM Ericsson Encoding and decoding of multi-channel signals

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EBARA H. ET AL.: "Quality Improvement of a Low-Bit-Rate Speech Coding Method Using a Few-Pulse Excitation Source [Shosu Pulse Kudo Ongen o Mochiiru Tei-Bit Rate Onsei Fugoka Hoshiki no Hinshitsu Kaizen]", IEICE TECHNICAL REPORT, SPEECH, vol. 99, no. 299, 16 September 1999 (1999-09-16), pages 15 - 21, XP008122101 *
See also references of EP1858006A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008090970A1 (fr) * 2007-01-26 2008-07-31 Panasonic Corporation Stereo encoding device, stereo decoding device, and method thereof
JP2013148682A (ja) * 2012-01-18 2013-08-01 Fujitsu Ltd Audio encoding device, audio encoding method, and computer program for audio encoding

Also Published As

Publication number Publication date
JPWO2006104017A1 (ja) 2008-09-04
EP1858006A4 (fr) 2011-01-26
CN101147191A (zh) 2008-03-19
US8768691B2 (en) 2014-07-01
EP1858006B1 (fr) 2017-01-25
EP1858006A1 (fr) 2007-11-21
JP4887288B2 (ja) 2012-02-29
US20090055172A1 (en) 2009-02-26
ES2623551T3 (es) 2017-07-11
CN101147191B (zh) 2011-07-13

Similar Documents

Publication Publication Date Title
JP4887288B2 (ja) Speech coding device and speech coding method
JP5046653B2 (ja) Speech coding device and speech coding method
US7945447B2 (en) Sound coding device and sound coding method
JP4850827B2 (ja) Speech coding device and speech coding method
US8311810B2 (en) Reduced delay spatial coding and decoding apparatus and teleconferencing system
US8457319B2 (en) Stereo encoding device, stereo decoding device, and stereo encoding method
JP5153791B2 (ja) Stereo speech decoding device, stereo speech encoding device, and lost frame compensation method
WO2006118179A1 (fr) Audio coding device and audio coding method
JP4963965B2 (ja) Scalable encoding device, scalable decoding device, and methods thereof
JPWO2007116809A1 (ja) Stereo speech encoding device, stereo speech decoding device, and methods thereof
WO2006049205A1 (fr) Scalable encoding and decoding apparatus
JPWO2009057327A1 (ja) Encoding device and decoding device
US20080255832A1 (en) Scalable Encoding Apparatus and Scalable Encoding Method
WO2009122757A1 (fr) Stereo signal converter, stereo signal inverter, and methods thereof
JPWO2008090970A1 (ja) Stereo encoding device, stereo decoding device, and methods thereof

Legal Events

Date Code Title Description

WWE Wipo information: entry into national phase
Ref document number: 200680009695.3
Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

REEP Request for entry into the european phase
Ref document number: 2006729819
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2006729819
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2007510437
Country of ref document: JP

WWE Wipo information: entry into national phase
Ref document number: 11909556
Country of ref document: US
Ref document number: 1506/MUMNP/2007
Country of ref document: IN

NENP Non-entry into the national phase
Ref country code: DE

NENP Non-entry into the national phase
Ref country code: RU

WWP Wipo information: published in national office
Ref document number: 2006729819
Country of ref document: EP