EP2690621A1 - Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side

Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side

Info

Publication number
EP2690621A1
Authority
EP
European Patent Office
Prior art keywords
data
matrix
signals
downmixing
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12305914.9A
Other languages
English (en)
French (fr)
Inventor
Oliver Wuebbolt
Adrian Murtaza
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP12305914.9A priority Critical patent/EP2690621A1/de
Publication of EP2690621A1 publication Critical patent/EP2690621A1/de
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/02 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L 19/0204 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition

Definitions

  • The invention relates to a method and to an apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side, wherein the decoder side downmixing is controlled by desired playback configuration data and/or desired object positioning data.
  • Audio content providers face consumers in increasingly heterogeneous listening situations, e.g. home theatres, mobile audio, car and in-flight entertainment. Audio content cannot be processed by its creator or broadcaster so as to match every possible consumer listening condition, for example audio/video content played back on a mobile phone. Besides different listening conditions, different listening experiences can also be desirable: for instance, in a live soccer broadcast a consumer can control his own virtual position within the sound scene of the stadium (pitch or stands), or can control the virtual position and the predominance of the commentator.
  • Metadata providers can add guiding metadata to the audio content, such that consumers can control down-mix or dynamic range of selected parts of the audio signal and/or assure high speech intelligibility.
  • For incorporation of such metadata into existing broadcasting chains, it is important that the general audio format is not changed (legacy playback) and that only a small amount of extra information (e.g. as ancillary data) is added to the audio bit stream.
  • MPEG Spatial Audio Object Coding (SAOC), ISO/IEC 23003-1:2007, MPEG audio technologies - Part 1: MPEG Surround, and ISO/IEC 23003-2:2010, MPEG audio technologies - Part 2: Spatial Audio Object Coding, deals with parametric coding techniques for complex audio scenes at bit rates normally used for mono or stereo sound coding, offering at decoder side an interactive rendering of the audio objects mixed into the audio scene.
  • MPEG SAOC was developed starting from Spatial Audio Coding, which is based on a 'channel-oriented' approach, by introducing the concept of audio objects with the purpose of offering even more flexibility at receiver side. Since it is a parametric multiple-object coding technique, the additional cost in terms of bit rate is limited to 2-3 kbit/s for each audio object. Although the bit rate increases with the number of audio objects, it still remains small in comparison with the actual audio data transmitted as a mono/stereo downmix.
  • A standard MPEG SAOC architecture is illustrated in Fig. 1.
  • A variable number N of sound objects Obj.#1, Obj.#2, Obj.#3, Obj.#4 is input to an SAOC encoder 11 that provides an encoded backward compatible mono or stereo downmix signal or signals together with SAOC parameters and side information.
  • The downmix data signal or signals and the side information data are sent as dedicated bit streams to an SAOC decoder 12.
  • The receiver side processing is carried out basically in two steps: first, using the side information, SAOC decoder 12 reconstructs the original sound objects Obj.#1, Obj.#2, Obj.#3, Obj.#4. Second, controlled by interaction/control information, these reconstructed sound objects are re-mixed and rendered in an MPEG Surround renderer 13. For example, the reconstructed sound objects are output as a stereo signal with channels #1 and #2.
  • SAOC parameters are computed for every time/frequency tile and are transmitted to the decoder side (cf. the parameter list OLD, IOC, DMG, DCLD given below).
  • To limit distortions caused by extreme rendering settings, a Distortion Control Unit DCU can be used.
  • The final rendering matrix coefficients are computed as a linear combination of user-specified coefficients and the target coefficients, which are assumed to be distortion-free.
  • The main drawback of the solution offered by MPEG SAOC is the limitation to a maximum of two down-mix channels. Further, the MPEG SAOC standard is not designed for 'Solo/Karaoke' types of application, which involve the complete suppression of one or more audio objects. In the MPEG SAOC standard this problem is tackled by using residual coding for specific audio objects, thereby increasing the bit rate.
  • A problem to be solved by the invention is to overcome these limitations of MPEG SAOC and to allow side information to be added to a legacy multi-channel audio broadcast like 5.1.
  • This problem is solved by the methods disclosed in claims 1 and 2. Apparatuses that utilise these methods are disclosed in claims 3 and 4, respectively.
  • The invention describes how, by adding only a small amount of extra bit rate, a re-mixing of a broadcast audio signal is achieved at decoder or receiver side, using information about the actual mix of the audio signal, audio signal characteristics like correlation and level differences, and the desired audio scene rendering.
  • A second embodiment shows how to determine, already at encoder side, the suitability of the actual multi-channel audio signal for a remix at decoder side.
  • This feature allows for countermeasures (e.g. changing the mixing matrix used, i.e. how the sound objects are mixed into the different channels) if a decoder side re-mixing is not possible without perceivable artefacts, or not possible without problems like having to transmit the audio objects themselves additionally for a short time.
  • The invention could be used to amend the MPEG SAOC standard correspondingly, based on the same building blocks.
  • The inventive encoding method is suited for downmixing spatial audio signals that can be downmixed at receiver side in a manner different from the manner of downmixing at encoder side, wherein said encoding is based on MPEG SAOC and said downmixing at receiver side can be controlled by desired playback configuration data and/or desired object positioning data, said method including the steps:
  • The inventive decoding method is suited for downmixing spatial audio signals processed according to the encoding method in a manner different from the manner of downmixing at encoder side, wherein said downmixing at receiver side can be controlled by desired playback configuration data and/or desired object positioning data, said method including the steps:
  • The inventive encoding apparatus is suited for downmixing spatial audio signals that can be downmixed at receiver side in a manner different from the manner of downmixing at encoder side, wherein said encoding is based on MPEG SAOC and said downmixing at receiver side can be controlled by desired playback configuration data and/or desired object positioning data, said apparatus including:
  • The inventive decoding apparatus is suited for downmixing spatial audio signals processed according to the encoding method in a manner different from the manner of downmixing at encoder side, wherein said downmixing at receiver side can be controlled by desired playback configuration data and/or desired object positioning data, said apparatus including:
  • The inventive spatial audio object coding system with five down-mix channels facilitates a backward compatible transmission at bit rates only slightly higher (due to the extended content of the side information: OLD, IOC, DMG and DCLD) than the bit rates for known 5.1 channel transmission.
  • A number of M = 5 channels containing the ambience signals and a number of L audio objects mixed over the ambience are considered.
  • An example is the stadium ambience of a soccer match plus specific sound effects (ball kicks, whistle) and one or more commentators.
  • M is at least '2' and L is '1' or greater.
  • At decoder side it is not intended to reconstruct the audio objects, but to offer the possibility of re-mixing, attenuating, totally suppressing, and changing the position of the audio objects in the rendered audio scene.
  • In principle, any time/frequency transform can be used.
  • Here, hybrid Quadrature Mirror Filter (hQMF) banks are used for better selectivity in the frequency domain.
  • The spatial audio input signals are processed in non-overlapping multiple-sample temporal slots, in particular 64-sample temporal slots. These temporal slots are used for computing the perceptual cues or characteristics for every successive frame, which has a length of a fixed number of temporal slots, in particular 16 temporal slots.
  • 71 frequency bands are used according to the sensitivity of the human auditory system, and are grouped into K processing bands, K having a value of '2', '3' or '4', thereby obtaining different levels of accuracy.
  • The hQMF filter bank transforms in each case 64 time samples into 71 frequency samples (a sketch of this tiling is given below).
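To illustrate the time/frequency tiling just described, the following Python sketch groups the 71 hybrid bands into K processing bands and collects temporal slots into frames. The concrete (roughly logarithmic) band split and all names are illustrative assumptions; the description only fixes the numbers 64, 16, 71 and K = 2, 3 or 4.

```python
import numpy as np

SAMPLES_PER_SLOT = 64   # one hQMF temporal slot covers 64 time samples
SLOTS_PER_FRAME = 16    # one frame covers 16 non-overlapping temporal slots
NUM_HYBRID_BANDS = 71   # hQMF output bands

def group_bands(num_bands: int = NUM_HYBRID_BANDS, k: int = 3):
    """Split hybrid band indices 0..num_bands-1 into k processing bands.

    A roughly logarithmic split (finer at low frequencies) is assumed here;
    the description only states that the 71 bands are grouped into K bands.
    """
    edges = np.round(np.logspace(0, np.log10(num_bands), k + 1)).astype(int)
    edges[0], edges[-1] = 0, num_bands
    return [np.arange(edges[i], edges[i + 1]) for i in range(k)]

def frame_slots(num_slots: int):
    """Collect temporal slot indices into frames of SLOTS_PER_FRAME slots."""
    return [np.arange(t, min(t + SLOTS_PER_FRAME, num_slots))
            for t in range(0, num_slots, SLOTS_PER_FRAME)]

if __name__ == "__main__":
    print([len(b) for b in group_bands(k=3)])   # band counts per processing band
    print(frame_slots(40)[:2])                  # first two frames of slot indices
```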
  • A basic block diagram of the inventive encoder and decoder is shown in Fig. 2 and Fig. 3, respectively.
  • A hybrid Quadrature Mirror Filter analysis filter bank step or stage 21 is applied to all audio input signals, e.g. ambience channels Ch.#1 to Ch.#5 and sound objects Obj.#1 to Obj.#L (at least one sound object).
  • The number of ambience channels is not limited to five.
  • The ambience channels are not independent sound objects but are usually correlated sound signals.
  • The corresponding filter bank outputs time/frequency domain signals X. These are fed, on one hand, to a down-mixer step or stage 22 that multiplies them with a downmix matrix D and provides, via a hybrid Quadrature Mirror Filter synthesis filter bank step or stage 24 that performs the inverse operation of the analysis filter bank, the audio channels in time domain for transmission, and, on the other hand, to an enhanced MPEG SAOC parameter calculator step or stage 23 (a sketch of the down-mix operation is given below).
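A minimal sketch of the down-mix operation performed in step/stage 22, assuming the hQMF-domain signals X are stored as a complex array of shape (number of input signals, temporal slots, hybrid bands) and D has one row per transmitted channel; the array layout and all names are assumptions for illustration only.

```python
import numpy as np

def downmix(X: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Apply downmix matrix D to the hQMF-domain input signals X.

    X: complex array (num_inputs, num_slots, num_bands) holding the ambience
       channels followed by the audio objects.
    D: real array (num_channels, num_inputs), one row per transmitted channel.
    Returns the downmixed hQMF-domain signals (num_channels, num_slots, num_bands).
    """
    # matrix multiplication over the 'input signal' axis for every t/f sample
    return np.einsum('ci,itk->ctk', D, X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_inputs, num_slots, num_bands = 5 + 2, 16, 71   # 5 ambience channels + 2 objects
    X = rng.standard_normal((num_inputs, num_slots, num_bands)) \
        + 1j * rng.standard_normal((num_inputs, num_slots, num_bands))
    D = rng.uniform(0.0, 1.0, size=(5, num_inputs))     # example 5-channel downmix
    print(downmix(X, D).shape)                          # (5, 16, 71)
```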
  • The synthesis filter bank inverts the operation of the analysis filter bank.
  • The enhanced SAOC parameters determine the rendering flexibility in a decoder and include, as mentioned above, Object Level Differences data OLD, Inter-Object Coherence data IOC, Downmix Gains data DMG and Downmix Channel Level Differences data DCLD, and can comprise Object Energy parameter data NRG. Except for the DMG and DCLD data, these parameters correspond to original MPEG SAOC parameters. From these side information data items a rank can be determined, as described below, in a rank calculator step or stage 25 (i.e. step/stage 25 is optional), and the side information data items and data items regarding any re-mix constraints (described below) are transmitted.
  • The output signals of step/stage 24 together with the output signals of step/stage 25 are used to form an enhanced MPEG SAOC bitstream.
  • The object parameters are quantised and coded efficiently, and are correspondingly decoded and inversely quantised at receiver side.
  • Before transmission, the downmix signal can be compressed, and is correspondingly decompressed at receiver side.
  • The SAOC side information is (or can be) encoded according to the MPEG SAOC standard and is transmitted together with the DMG and DCLD data, e.g. as an ancillary data portion of the downmix bitstream.
  • A hybrid Quadrature Mirror Filter analysis filter bank step or stage 31 corresponding to filter bank 21 receives and processes the transmitted hQMF synthesised data from the enhanced MPEG SAOC bitstream, and feeds them as time/frequency domain data to an enhanced MPEG SAOC decoder or transcoder step or stage 32 that is controlled by an estimation matrix T.
  • A rendering matrix step or stage 35 receives user data regarding perceptual cues, e.g. a desired playback configuration and desired object positions, as well as the transmitted re-mix constraint data items, and therefrom a corresponding rendering matrix A is determined.
  • Matrix A has the size of down-mixing matrix D and its coefficients are based on the coefficients of matrix D.
  • E.g. for completely suppressing one audio object (cf. the 'Solo/Karaoke' discussion above), the last row of rendering matrix A contains zero values only and all other matrix coefficients are identical to the coefficients in matrix D, i.e. each sound object is represented by a different row in matrix A (a sketch of this construction follows below).
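A small sketch of how rendering matrix A could be derived from D for the suppression example given above: A is a copy of D in which the row associated with the object to be suppressed is set to zero. The row-wise convention follows the wording of the description; everything else (names, the choice of the last row) is illustrative.

```python
import numpy as np

def suppress_object(D: np.ndarray, object_row: int = -1) -> np.ndarray:
    """Build rendering matrix A from downmix matrix D.

    Following the example in the description, A has the same size as D, all
    coefficients are copied from D, and the row associated with the object to
    be suppressed (here: the last row) is set to zero.
    """
    A = D.copy()
    A[object_row, :] = 0.0
    return A

if __name__ == "__main__":
    D = np.array([[1.0, 0.0, 0.7],
                  [0.0, 1.0, 0.7],
                  [0.5, 0.5, 1.0]])
    print(suppress_object(D))   # last row becomes zeros, the rest equals D
```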
  • Matrix A, together with an estimated covariance matrix C and a reconstructed down-mixing matrix D, is used in an estimation matrix generator step or stage 36 for determining (as explained below) the estimation matrix T.
  • The estimated covariance matrix C and the reconstructed down-mixing matrix D are determined from the received side information data in a covariance and down-mixing matrix calculation step or stage 34.
  • The estimation matrix T is used for decoding or transcoding the audio signals of the new audio scene.
  • The downmixed signals of step/stage 32 are output as channel signals (e.g. Ch.#1 to Ch.#5) via a hybrid Quadrature Mirror Filter synthesis filter bank step or stage 33 (corresponding to synthesis filter bank 24).
  • The perceptual cues are carried as ancillary data in the main bitstream, at a minimum bit rate.
  • Minimum bit rate means: such that the resulting audio quality is not affected, i.e. the distortions caused by the slightly reduced bit rate available for the audio signals are not audible, or at least not annoying.
  • The Object Level Differences data OLD and the Inter-Object Coherence data IOC are used, the values of which are computed in step/stage 23, e.g. according to Annex D.2 "Calculation of SAOC parameters" of the MPEG SAOC standard, for every frame/frequency processing band tile (l,m), i.e. for every group of 16 non-overlapping temporal slots and every one of the K processing bands:
  • $nrg_{i,j}^{l,m} = \dfrac{\sum_{t \in l} \sum_{k \in m} X_{t,k}^{i} \left(X_{t,k}^{j}\right)^{*}}{\sum_{t \in l} \sum_{k \in m} 1 + \epsilon}$, where the indices i and j stand for the ambience channel number and the audio object number, respectively, m is a current frequency processing band, k is a running frequency sample index within frequency processing band m, l is a current frame, t is a running temporal slot index within frame l, and $\epsilon$ has a small value (e.g. $10^{-9}$) and avoids a division by zero in the following computations.
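The following sketch computes nrg for one tile exactly as in the equation above and then derives OLD and IOC values in the spirit of the MPEG SAOC definitions (energies normalised to the maximum, and normalised real-valued cross-correlation); the helper name tile_parameters and the array layout are assumptions.

```python
import numpy as np

EPS = 1e-9  # the small epsilon that avoids division by zero

def tile_parameters(X_tile: np.ndarray):
    """Compute nrg, OLD and IOC for one frame/processing-band tile (l, m).

    X_tile: complex array (num_signals, num_slots_in_frame, num_bins_in_band),
            i.e. all hQMF samples X_{t,k} belonging to the tile.
    """
    n, t, k = X_tile.shape
    flat = X_tile.reshape(n, t * k)
    # nrg_{i,j} = sum_t sum_k X^i X^{j*} / (sum_t sum_k 1 + eps)
    nrg = flat @ flat.conj().T / (t * k + EPS)
    e = np.real(np.diag(nrg))                               # per-signal energies
    old = e / (e.max() + EPS)                               # Object Level Differences
    ioc = np.real(nrg) / (np.sqrt(np.outer(e, e)) + EPS)    # Inter-Object Coherence
    return nrg, old, ioc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_tile = rng.standard_normal((7, 16, 24)) + 1j * rng.standard_normal((7, 16, 24))
    nrg, old, ioc = tile_parameters(X_tile)
    print(old.round(2))
    print(ioc.round(2))
```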
  • The DMG and DCLD data or parameters are extended versions of the corresponding MPEG SAOC parameters because they are not limited to two channels as in MPEG SAOC.
  • The time/frequency resolution of DMG and DCLD can be adapted to the moving speed of the audio objects in the audio scene. This time/frequency resolution change will not affect the performance of the inventive processing, and therefore it is assumed for simplicity that the time/frequency resolution at which these parameters are computed is equal to the time/frequency resolution at which the processing is done.
  • The perceptual cues are used in step/stage 34 for approximating the covariance matrix C of the original input channels (a sketch of such an approximation is given below).
  • The original rendering or mixing matrix is required at decoder side.
  • For reconstructing it, the Downmix Gains DMG values and the Downmix Channel Level Differences DCLD values from the additional side information are used.
  • Matrix D is computed differently than according to MPEG SAOC, but its resulting content can be assumed to be identical.
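As an illustration of the covariance approximation in step/stage 34, the sketch below uses the usual SAOC-style relation C_ij ≈ IOC_ij · sqrt(OLD_i · OLD_j), which recovers C only up to a common energy scale; the patent's exact reconstruction formulas (including those for rebuilding D from the DMG and DCLD values) are not reproduced on this page, so the closed form below is an assumption.

```python
import numpy as np

def approximate_covariance(old: np.ndarray, ioc: np.ndarray) -> np.ndarray:
    """Approximate the covariance matrix C of the original input signals.

    Uses C_ij ~ IOC_ij * sqrt(OLD_i * OLD_j). The missing absolute energy
    scale is irrelevant for the estimation matrix computed later, because a
    common scale factor of C cancels in T = A C D^H (D C D^H)^{-1}.
    """
    return ioc * np.sqrt(np.outer(old, old))

if __name__ == "__main__":
    old = np.array([1.0, 0.5, 0.25])
    ioc = np.array([[1.0, 0.8, 0.1],
                    [0.8, 1.0, 0.0],
                    [0.1, 0.0, 1.0]])
    print(approximate_covariance(old, ioc))
```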
  • For every frame l and processing band m (cf. the above table), the mixing matrix $D^{l,m}(i,j)$ and the rendering matrix $A^{l,m}(i,j)$ used for remixing the L objects have a size of 5 by N in this embodiment.
  • $T^{l,m} = \arg\min E\left\{\left(Z_{t,m} - \hat{Z}_{t,m}\right)\left(Z_{t,m} - \hat{Z}_{t,m}\right)^{H}\right\}$.
  • $Z_{t,m}$ is not available at decoder side but is used for the derivation of the following equations, and it finally turns out that knowledge of $Z_{t,m}$ at decoder side is not required.
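The derivation itself is not reproduced on this page. The sketch below shows the standard MMSE solution under the assumption that the desired rendered signal is Z = A·X and its estimate is Ẑ = T·D·X with E{X Xᴴ} = C, which yields T = A C Dᴴ (D C Dᴴ)⁻¹ and therefore needs no knowledge of Z at the decoder. The closed form, the regularisation term and all names are assumptions consistent with, but not quoted from, the description.

```python
import numpy as np

def estimation_matrix(A: np.ndarray, C: np.ndarray, D: np.ndarray,
                      eps: float = 1e-9) -> np.ndarray:
    """MMSE estimate of T for Z_hat = T @ D @ X approximating Z = A @ X.

    Minimising E{(Z - Z_hat)(Z - Z_hat)^H} with E{X X^H} = C gives
        T = A C D^H (D C D^H)^{-1},
    so only A, C and D are needed at the decoder. A small eps*I regularises
    the inversion (cf. the eigenvalue weighting described further below).
    """
    G = D @ C @ D.conj().T                      # matrix whose invertibility matters
    return A @ C @ D.conj().T @ np.linalg.inv(G + eps * np.eye(G.shape[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, ch = 7, 5                                # 7 input signals, 5 downmix channels
    D = rng.uniform(size=(ch, n))
    A = D.copy(); A[-1, :] = 0.0                # e.g. suppress one rendered row
    S = rng.standard_normal((n, n)); C = S @ S.T + np.eye(n)   # SPD covariance
    print(estimation_matrix(A, C, D).shape)     # (5, 5)
```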
  • The rank of this matrix $G^{l,m}$ is computed in step/stage 25 before final encoding, and one can proceed with the encoding of the side information if that matrix has full rank.
  • The rank of a matrix is the number of linearly independent columns or rows, and this can be hard to determine numerically.
  • Therefore the effective rank is used, which is a more stable measure for the rank and is described in section 6.3 "Singular Value Decomposition" of the textbook of Gilbert Strang, "Linear Algebra and its Applications", 4th edition, published 19 July 2005.
  • For this purpose, the eigenvalues of matrix $G^{l,m}$ are computed.
  • The rank value can be used for controlling the number K of frequency bands applied in the inventive processing, and thereby the accuracy of the side information parameters.
  • The rank value can also be used for switching residual coding, as in the MPEG SAOC standard, on or off.
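A sketch of an effective-rank computation such as the rank calculator step/stage 25 might use: singular values are computed and those below a relative tolerance are treated as zero. The tolerance value and the function name are assumptions; the idea follows the cited Strang reference.

```python
import numpy as np

def effective_rank(G: np.ndarray, rel_tol: float = 1e-6) -> int:
    """Effective rank of G: number of singular values above rel_tol * sigma_max.

    This is numerically more stable than counting exactly non-zero singular
    values, which is what makes the plain rank 'hard to determine'.
    """
    s = np.linalg.svd(G, compute_uv=False)
    if s.size == 0 or s[0] == 0.0:
        return 0
    return int(np.count_nonzero(s > rel_tol * s[0]))

if __name__ == "__main__":
    G = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],        # linearly dependent on the first row
                  [0.0, 1.0, 1.0]])
    print(effective_rank(G))              # 2: the matrix is rank deficient
```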
  • $G^{l,m} = Q^{l,m}\,U^{l,m}\,\left(Q^{l,m}\right)^{-1}$, where $U^{l,m}$ is a diagonal matrix containing the eigenvalues of $G^{l,m}$ and
  • $Q^{l,m}$ is a unitary matrix, i.e.
  • $\left(Q^{l,m}\right)^{-1} = \left(Q^{l,m}\right)^{H}$.
  • A singular matrix is not very common, and it can be made non-singular by a small weighting of one coefficient of the singular matrix.
  • To ensure that matrix $G^{l,m}$ is invertible, these eigenvalues are modified by adding a weight of $\epsilon$ to each one of them (a sketch of this regularisation is given below).
  • The error introduced by this procedure is of order $\epsilon$ and will not affect the remixing processing in step/stage 32.
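A sketch of the eigenvalue weighting just described: G is eigen-decomposed, ε is added to every eigenvalue, and the matrix is rebuilt, which changes G only by ε·I. The use of a Hermitian eigendecomposition (numpy.linalg.eigh) assumes that G is Hermitian, e.g. G = D C Dᴴ with C a covariance matrix; that assumption is mine, not the patent's.

```python
import numpy as np

def regularise(G: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Make G invertible by adding a weight of eps to each eigenvalue.

    With G = Q U Q^{-1} and unitary Q (so Q^{-1} = Q^H), adding eps to the
    diagonal of U changes G only by eps * I, i.e. an error of order eps.
    """
    w, Q = np.linalg.eigh(G)                 # eigenvalues w, unitary eigenvectors Q
    return (Q * (w + eps)) @ Q.conj().T      # Q diag(w + eps) Q^H

if __name__ == "__main__":
    G = np.array([[1.0, 1.0],
                  [1.0, 1.0]])               # singular (rank 1)
    G_reg = regularise(G, eps=1e-6)
    print(np.linalg.cond(G_reg) < 1e12)      # True: invertible in practice
```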

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP12305914.9A 2012-07-26 2012-07-26 Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side Withdrawn EP2690621A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP12305914.9A EP2690621A1 (de) 2012-07-26 2012-07-26 Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP12305914.9A EP2690621A1 (de) 2012-07-26 2012-07-26 Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side

Publications (1)

Publication Number Publication Date
EP2690621A1 true EP2690621A1 (de) 2014-01-29

Family

ID=47002786

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12305914.9A Withdrawn EP2690621A1 (de) 2012-07-26 2012-07-26 Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side

Country Status (1)

Country Link
EP (1) EP2690621A1 (de)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080049943A1 (en) * 2006-05-04 2008-02-28 Lg Electronics, Inc. Enhancing Audio with Remix Capability
US20110166867A1 (en) * 2008-07-16 2011-07-07 Electronics And Telecommunications Research Institute Multi-object audio encoding and decoding apparatus supporting post down-mix signal
US20120078642A1 (en) * 2009-06-10 2012-03-29 Jeong Il Seo Encoding method and encoding device, decoding method and decoding device and transcoding method and transcoder for multi-object audio signals
US20120177204A1 (en) * 2009-06-24 2012-07-12 Oliver Hellmuth Audio Signal Decoder, Method for Decoding an Audio Signal and Computer Program Using Cascaded Audio Object Processing Stages

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GILBERT STRANG: "Linear Algebra and its Applications", 4th edition, 19 July 2005, section 6.3 "Singular Value Decomposition"

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11769514B2 (en) 2013-04-03 2023-09-26 Dolby Laboratories Licensing Corporation Methods and systems for rendering object based audio
US10276172B2 (en) 2013-04-03 2019-04-30 Dolby Laboratories Licensing Corporation Methods and systems for generating and interactively rendering object based audio
US11270713B2 (en) 2013-04-03 2022-03-08 Dolby Laboratories Licensing Corporation Methods and systems for rendering object based audio
US9805727B2 (en) 2013-04-03 2017-10-31 Dolby Laboratories Licensing Corporation Methods and systems for generating and interactively rendering object based audio
US10832690B2 (en) 2013-04-03 2020-11-10 Dolby Laboratories Licensing Corporation Methods and systems for rendering object based audio
US10553225B2 (en) 2013-04-03 2020-02-04 Dolby Laboratories Licensing Corporation Methods and systems for rendering object based audio
US10002616B2 (en) 2013-10-17 2018-06-19 Socionext Inc. Audio decoding device
WO2015056383A1 (ja) * 2013-10-17 2015-04-23 Panasonic Corporation Audio encoding device and audio decoding device
US9779740B2 (en) 2013-10-17 2017-10-03 Socionext Inc. Audio encoding device and audio decoding device
CN109036441B (zh) * 2014-03-24 2023-06-06 Dolby International AB Method and device for applying dynamic range compression to a higher order ambisonics signal
US11838738B2 (en) 2014-03-24 2023-12-05 Dolby Laboratories Licensing Corporation Method and device for applying Dynamic Range Compression to a Higher Order Ambisonics signal
CN109036441A (zh) * 2014-03-24 2018-12-18 Dolby International AB Method and device for applying dynamic range compression to a higher order ambisonics signal
CN106796804A (zh) * 2014-10-02 2017-05-31 Dolby International AB Decoding method and decoder for dialog enhancement
CN106796804B (zh) * 2014-10-02 2020-09-18 Dolby International AB Decoding method and decoder for dialog enhancement
US10136240B2 (en) 2015-04-20 2018-11-20 Dolby Laboratories Licensing Corporation Processing audio data to compensate for partial hearing loss or an adverse hearing environment
CN108632048A (zh) * 2017-03-22 2018-10-09 Spreadtrum Communications (Shanghai) Co., Ltd. Conference call control method, apparatus and multi-way terminal
CN108632048B (zh) * 2017-03-22 2020-12-22 Spreadtrum Communications (Shanghai) Co., Ltd. Conference call control method, apparatus and multi-way terminal
US11462224B2 (en) 2018-05-31 2022-10-04 Huawei Technologies Co., Ltd. Stereo signal encoding method and apparatus using a residual signal encoding parameter
US11978463B2 (en) 2018-05-31 2024-05-07 Huawei Technologies Co., Ltd. Stereo signal encoding method and apparatus using a residual signal encoding parameter
WO2019227991A1 (zh) * 2018-05-31 2019-12-05 Huawei Technologies Co., Ltd. Stereo signal encoding method and apparatus
CN113678199A (zh) * 2019-03-28 2021-11-19 Nokia Technologies Oy Determination of the significance of spatial audio parameters and associated encoding
CN110739000A (zh) * 2019-10-14 2020-01-31 Wuhan University An audio object coding method adapted to a personalized interactive system

Similar Documents

Publication Publication Date Title
EP2690621A1 (de) Method and apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side
US9257128B2 (en) Apparatus and method for coding and decoding multi object audio signal with multi channel
US9578435B2 (en) Apparatus and method for enhanced spatial audio object coding
JP5189979B2 (ja) Control of spatial audio coding parameters as a function of auditory events
JP5592974B2 (ja) Enhanced coding and parameter representation of multi-channel downmixed object coding
EP2941771B1 (de) Decoder, encoder and method for informed loudness estimation in object-based audio coding systems
US8867753B2 (en) Apparatus, method and computer program for upmixing a downmix audio signal
US8515759B2 (en) Apparatus and method for synthesizing an output signal
CN105518775B (zh) Cancellation of comb-filter artefacts in multi-channel downmixing using adaptive phase alignment
KR101798117B1 (ko) Encoder, decoder and methods for backward compatible multi-resolution spatial audio object coding
US11501785B2 (en) Method and apparatus for adaptive control of decorrelation filters
MX2012005781A (es) Apparatus for providing an upmix signal representation on the basis of the downmix signal representation, apparatus for providing a bit stream representing a multi-channel audio signal, methods, computer programs and bit stream representing a multi-channel audio signal using a linear combination parameter
EP2439736A1 (de) Down-mixing device, encoder, and method therefor
WO2023172865A1 (en) Methods, apparatus and systems for directional audio coding-spatial reconstruction audio processing
RU2485605C2 (ru) Improved method for coding and parametric representation of multi-channel object coding after downmixing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140730