WO2015041477A1 - Audio signal processing method and apparatus - Google Patents
Audio signal processing method and apparatus
- Publication number: WO2015041477A1 (PCT/KR2014/008678)
- Authority
- WO
- WIPO (PCT)
Classifications
- H04S5/005 — Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation, of the pseudo five- or more-channel type, e.g. virtual surround
- G10L19/008 — Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
- G10L19/0204 — Speech or audio coding or decoding using spectral analysis, using subband decomposition
- G10L25/48 — Speech or voice analysis techniques specially adapted for particular use
- H03H17/0229 — Digital frequency-selective networks: measures concerning the coefficients, reducing the number of taps
- H03H17/0248 — Digital filters characterised by a particular frequency response or filtering method
- H03H17/0266 — Filter banks
- H03H17/0272 — Quadrature mirror filters
- H03H21/0012 — Digital adaptive filters
- H04R5/04 — Circuit arrangements for stereophonic arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers
- H04S3/002 — Systems employing more than two channels: non-adaptive circuits for enhancing the sound image or the spatial distribution
- H04S3/008 — Systems employing more than two channels in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
- H04S3/02 — Systems employing more than two channels, of the matrix type, i.e. in which input signals are combined algebraically
- H04S7/30 — Control circuits for electronic adaptation of the sound field
- H04S2400/01 — Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
- H04S2400/03 — Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
- H04S2420/01 — Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
- H04S2420/03 — Application of parametric coding in stereophonic audio systems
- H04S2420/07 — Synergistic effects of band splitting and sub-band processing
- H04S2420/11 — Application of ambisonics in stereophonic audio systems
Definitions
- The present invention relates to a signal processing method and apparatus for effectively reproducing an audio signal, and more particularly to an audio signal processing method and apparatus for implementing binaural rendering that reproduces a multi-channel or multi-object audio signal in stereo.
- Binaural rendering for listening to a multi-channel signal in stereo requires more computation as the length of the target filter increases.
- The filter length may range from 48,000 to 96,000 samples.
- The resulting amount of computation is huge.
- Denoting the input signal of channel m as x_m(n) and the BRIR filters from that channel to the left and right ears as h_m^L(n) and h_m^R(n), binaural filtering can be expressed as the time-domain convolutions y^L(n) = Σ_m x_m(n) * h_m^L(n) and y^R(n) = Σ_m x_m(n) * h_m^R(n).
- the above time-domain convolution is generally performed using fast convolution based on the Fast Fourier Transform (FFT).
- In this case, an FFT corresponding to the number of input channels and an inverse FFT corresponding to the number of output channels must be performed.
- Moreover, because delay must be taken into account, block-wise fast convolution must be performed, which can consume more computation than simply performing fast convolution over the entire filter length.
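- For illustration, the following is a minimal overlap-add sketch of FFT-based block-wise fast convolution; the block length, signal sizes, and random decaying filter are hypothetical choices for the example, not values prescribed by this disclosure:

```python
import numpy as np

def fft_block_convolve(x, h, block_len=4096):
    """Overlap-add fast convolution: filter x with h in blocks of block_len.

    Processing block-by-block bounds the latency to roughly one block
    instead of the whole signal length, at the cost of more FFT calls."""
    n_fft = 1
    while n_fft < block_len + len(h) - 1:   # FFT size >= linear conv length,
        n_fft *= 2                          # rounded up to a power of two
    H = np.fft.rfft(h, n_fft)               # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        blk = x[start:start + block_len]
        Y = np.fft.rfft(blk, n_fft) * H     # spectral multiply = convolution
        out = np.fft.irfft(Y, n_fft)[:len(blk) + len(h) - 1]
        y[start:start + len(out)] += out    # overlap-add the block tails
    return y

# Example: noise through a 4,000-tap decaying random filter.
x = np.random.randn(8000)
h = np.random.randn(4000) * np.exp(-np.arange(4000) / 500.0)
assert np.allclose(fft_block_convolve(x, h), np.convolve(x, h))
```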
- An object of the present invention is to implement the filtering process of binaural rendering, which requires a large amount of computation, with a very low amount of computation while minimizing the loss of sound quality and preserving the stereoscopic impression of the original signal.
- In addition, when there is distortion in the input signal itself, the present invention has an object of minimizing the spreading of that distortion by a high-quality filter.
- The present invention also has an object of implementing a finite impulse response (FIR) filter of very long length with a filter of much shorter length.
- The present invention also has an object of minimizing distortion of the portions affected by the discarded filter coefficients when performing filtering using such a truncated FIR filter.
- the present invention provides an audio signal processing method and an audio signal processing apparatus as follows.
- More specifically, the present invention provides an audio signal processing method comprising: receiving a multi-audio signal including a multi-channel or multi-object signal; receiving truncated subband filter coefficients for filtering the multi-audio signal, the truncated subband filter coefficients being at least a portion of subband filter coefficients obtained from binaural room impulse response (BRIR) filter coefficients for binaural filtering of the multi-audio signal, wherein the length of each truncated subband filter coefficient is determined based on filter order information obtained at least in part by using characteristic information extracted from the corresponding subband filter coefficients, and the length of at least one truncated subband filter coefficient is different from the length of the truncated subband filter coefficients of another subband; and filtering each subband signal of the multi-audio signal using the truncated subband filter coefficients corresponding to that subband signal.
- In addition, the present invention provides an audio signal processing apparatus for performing binaural rendering on a multi-audio signal including a multi-channel or multi-object signal, each of which includes a plurality of subband signals, the apparatus comprising: a fast convolution unit for performing rendering of the direct sound and early reflection parts of each subband signal; and a late reverberation generator for performing rendering of the late reverberation parts of each subband signal.
- The fast convolution unit receives truncated subband filter coefficients for filtering the multi-audio signal, the truncated subband filter coefficients being at least a portion of subband filter coefficients obtained from binaural room impulse response (BRIR) filter coefficients for binaural filtering of the multi-audio signal, wherein the length of each truncated subband filter coefficient is determined based on filter order information obtained at least in part by using characteristic information extracted from the corresponding subband filter coefficients, the length of at least one truncated subband filter coefficient is different from the length of the truncated subband filter coefficients of another subband, and the fast convolution unit filters each subband signal of the multi-audio signal using the truncated subband filter coefficients corresponding to that subband signal.
- the characteristic information may include first reverberation time information of a corresponding subband filter coefficient, and the filter order information may have one value for each subband.
- The length of the truncated subband filter coefficients may have a value of a multiple of a power of 2.
- The plurality of subband filter coefficients and the plurality of subband signals may each include a first subband group of low frequencies and a second subband group of high frequencies, divided based on a preset frequency band, and the filtering may be performed on the truncated subband filter coefficients and subband signals of the first subband group.
- The filtering may be performed using front subband filter coefficients truncated based at least in part on first reverberation time information of the corresponding subband filter coefficients, and the method may further comprise reverberation processing of the subband signal corresponding to the section after the front subband filter coefficients.
- The reverberation processing may comprise: receiving downmix subband filter coefficients for each subband, the downmix subband filter coefficients being obtained by combining the rear subband filter coefficients for each channel or each object of the corresponding subband, where the rear subband filter coefficients are obtained from the section after the front subband filter coefficients of the corresponding subband filter coefficients; generating a downmix subband signal for each subband by downmixing the subband signals for each channel or each object of the corresponding subband; and generating left and right subband reverberation signals of two channels using the downmix subband signal and the downmix subband filter coefficients corresponding thereto.
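- A minimal sketch of this reverberation processing for a single subband is shown below; the array shapes, the averaging used to combine the per-channel rear filters, and the plain sum used for the downmix are assumptions for illustration, since the text does not fix the combining rules:

```python
import numpy as np

def p_part_one_subband(subband_sigs, rear_filters_L, rear_filters_R):
    """subband_sigs: (M, T) complex QMF samples of one subband, M channels.
    rear_filters_L/R: (M, Np) rear subband filter coefficients per channel.
    Returns two-channel left/right subband reverberation signals."""
    # Combine the per-channel rear filters into one downmix filter per ear.
    dmx_filt_L = rear_filters_L.mean(axis=0)
    dmx_filt_R = rear_filters_R.mean(axis=0)
    # Downmix the per-channel subband signals into one subband signal.
    dmx_sig = subband_sigs.sum(axis=0)
    # One convolution per ear instead of one per channel and ear:
    # this is where the complexity saving of the P-part comes from.
    return np.convolve(dmx_sig, dmx_filt_L), np.convolve(dmx_sig, dmx_filt_R)

M, T, Np = 22, 128, 64                       # e.g. 22 channels of one subband
sigs = np.random.randn(M, T) + 1j * np.random.randn(M, T)
fL = np.random.randn(M, Np) * np.exp(-np.arange(Np) / 16.0)
fR = np.random.randn(M, Np) * np.exp(-np.arange(Np) / 16.0)
rev_L, rev_R = p_part_one_subband(sigs, fL, fR)
```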
- In this case, the downmix subband signal may be a mono subband signal, and the downmix subband filter coefficients may reflect the energy decay characteristics of the late reverberation part for the corresponding subband signal.
- The generating of the left and right signals may comprise: generating a decorrelation signal for the filtered mono subband signal; and generating left and right signals of two channels by a weighted sum of the filtered mono subband signal and the decorrelation signal.
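- The mono-downmix variant with decorrelation might look like the following sketch; the fixed-delay decorrelator and the energy-preserving weights are stand-ins, since the actual decorrelation filter and weighting are implementation choices:

```python
import numpy as np

def mono_p_part(dmx_sig, dmx_filt, w_direct=0.7, deco_delay=5):
    """Filter a mono downmix subband signal, then spread it to two channels
    by a weighted sum with a decorrelated copy of itself."""
    filtered = np.convolve(dmx_sig, dmx_filt)
    # Toy decorrelator: a fixed delay of a few QMF samples; real systems
    # typically use all-pass decorrelation filters instead.
    deco = np.zeros_like(filtered)
    deco[deco_delay:] = filtered[:-deco_delay]
    w_diff = np.sqrt(1.0 - w_direct ** 2)      # keep overall energy roughly flat
    y_L = w_direct * filtered + w_diff * deco
    y_R = w_direct * filtered - w_diff * deco  # opposite sign decorrelates L/R
    return y_L, y_R

sig = np.random.randn(256) + 1j * np.random.randn(256)
filt = np.random.randn(64) * np.exp(-np.arange(64) / 16.0)
y_L, y_R = mono_p_part(sig, filt)
```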
- According to another aspect of the present invention, each multi-audio signal includes a plurality of subband signals, and the plurality of subband signals include signals of a first subband group of low frequencies and signals of a second subband group of high frequencies, divided based on a preset frequency band.
- The present invention further provides an audio signal processing apparatus for performing binaural rendering on a multi-audio signal including a multi-channel or multi-object signal, each of which includes a plurality of subband signals, the plurality of subband signals including signals of a first subband group of low frequencies and signals of a second subband group of high frequencies divided based on a preset frequency band, the apparatus comprising: a fast convolution unit for performing rendering of each subband signal of the first subband group; and a tap-delay line processing unit for performing rendering of each subband signal of the second subband group.
- The tap-delay line processing unit receives at least one parameter corresponding to each subband signal of the second subband group and performs tap-delay line filtering on the subband signals of the second subband group using the received parameter.
- The parameter may include one piece of delay information for the corresponding BRIR subband filter coefficients and one piece of gain information corresponding to the delay information.
- In this case, the tap-delay line filtering may be one-tap-delay line filtering using the parameter.
- The delay information may indicate the position of the maximum peak in the corresponding BRIR subband filter coefficients.
- The delay information may have an integer value in units of samples in the QMF domain.
- The gain information may have a complex value.
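- A sketch of such one-tap-delay line filtering is shown below; extracting the delay as the position of the maximum-magnitude coefficient and the gain as the coefficient at that position follows the description above, while the signal sizes are arbitrary:

```python
import numpy as np

def extract_qtdl_params(brir_subband):
    """One delay (position of the maximum peak, in QMF samples) and one
    complex gain (the coefficient at that position) per subband filter."""
    delay = int(np.argmax(np.abs(brir_subband)))
    return delay, brir_subband[delay]

def one_tap_delay_filter(subband_sig, delay, gain):
    """One-tap-delay line filtering: y[n] = gain * x[n - delay]."""
    y = np.zeros(len(subband_sig) + delay, dtype=complex)
    y[delay:] = gain * subband_sig
    return y

brir_sb = (np.random.randn(64) + 1j * np.random.randn(64)) * np.exp(-np.arange(64) / 8.0)
x_sb = np.random.randn(100) + 1j * np.random.randn(100)
d, g = extract_qtdl_params(brir_sb)
y_sb = one_tap_delay_filter(x_sb, d, g)      # one multiply per QMF sample
```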
- The method may further comprise: summing the filtered multi-audio signals into left and right subband signals of two channels for each subband; combining the summed left and right subband signals with the left and right subband signals generated from the multi-audio signals of the first subband group; and QMF-synthesizing each of the combined left and right subband signals.
- The present invention also provides a multimedia signal processing method comprising: receiving a multimedia signal having a plurality of subbands; receiving at least one prototype rendering filter coefficient for filtering each subband signal of the multimedia signal; converting the prototype rendering filter coefficients into a plurality of subband filter coefficients; truncating each subband filter coefficient based on filter order information obtained at least in part by using characteristic information extracted from the corresponding subband filter coefficients, the length of at least one truncated subband filter coefficient being different from the length of the truncated subband filter coefficients of another subband; and filtering the multimedia signal using the truncated subband filter coefficients corresponding to each subband signal.
- The present invention also provides an apparatus for processing a multimedia signal having a plurality of subbands, the apparatus comprising: a parameterization unit which receives at least one prototype rendering filter coefficient for filtering each subband signal of the multimedia signal, converts the prototype rendering filter coefficients into a plurality of subband filter coefficients, and truncates each subband filter coefficient based on filter order information obtained at least in part by using characteristic information extracted from the corresponding subband filter coefficients, the length of at least one truncated subband filter coefficient being different from the length of the truncated subband filter coefficients of another subband; and a rendering unit which receives the multimedia signal and filters it using the truncated subband filter coefficients corresponding to each subband signal.
- The multimedia signal may include a multi-channel or multi-object signal, and the prototype rendering filter coefficients may be BRIR filter coefficients in the time domain.
- the characteristic information may include energy decay time information of a corresponding subband filter coefficient, and the filter order information may have one value for each subband.
- In addition, each multi-audio signal includes a plurality of subband signals, the plurality of subband signals including signals of a first subband group of low frequencies and signals of a second subband group of high frequencies divided based on a preset frequency band.
- In this case, the method comprises: receiving truncated subband filter coefficients for filtering the multi-audio signals of the first subband group, the truncated subband filter coefficients being at least a portion of the subband filter coefficients of the first subband group obtained from binaural room impulse response (BRIR) filter coefficients for binaural filtering of the multi-audio signal, with the length of each truncated subband filter coefficient determined based on filter order information obtained at least in part by using characteristic information extracted from the corresponding subband filter coefficients; and filtering the subband signals of the first subband group using the truncated subband filter coefficients.
- In addition, the present invention provides an audio signal processing apparatus for performing binaural rendering on a multi-audio signal including a multi-channel or multi-object signal, each of which includes a plurality of subband signals, the plurality of subband signals including signals of a first subband group of low frequencies and signals of a second subband group of high frequencies divided based on a preset frequency band, the apparatus comprising: a fast convolution unit for performing rendering of each subband signal of the first subband group; and a tap-delay line processing unit for performing rendering of each subband signal of the second subband group.
- The fast convolution unit receives truncated subband filter coefficients for filtering the multi-audio signals of the first subband group, the truncated subband filter coefficients being at least a portion of the subband filter coefficients of the first subband group obtained from binaural room impulse response (BRIR) filter coefficients for binaural filtering of the multi-audio signal, and filters the subband signals of the first subband group using the truncated subband filter coefficients.
- The tap-delay line processing unit receives at least one parameter corresponding to each subband signal of the second subband group, the parameter being extracted from the BRIR subband filter coefficients corresponding to that subband signal, and performs tap-delay line filtering on the subband signals of the second subband group using the received parameters.
- The left and right subband signals of two channels generated by filtering the subband signals of the first subband group, and the left and right subband signals of two channels generated by tap-delay line filtering the subband signals of the second subband group, may be combined for each channel and then QMF-synthesized.
- According to embodiments of the present invention, when binaural rendering is performed on a multi-channel or multi-object signal, the amount of computation can be dramatically lowered while minimizing the loss of sound quality.
- FIG. 1 is a block diagram illustrating an audio signal decoder according to an embodiment of the present invention.
- Figure 2 is a block diagram showing each configuration of the binaural renderer according to an embodiment of the present invention.
- FIGS. 3 to 7 illustrate various embodiments of an audio signal processing apparatus according to the present invention.
- FIGS. 8 to 10 are diagrams illustrating a method for generating an FIR filter for binaural rendering according to an embodiment of the present invention.
- FIGS. 11 to 14 illustrate various embodiments of the P-part rendering unit of the present invention.
- the audio signal decoder of the present invention includes a core decoder 10, a rendering unit 20, a mixer 30, and a post processing unit 40.
- the core decoder 10 decodes a loudspeaker channel signal, a discrete object signal, an object downmix signal, a pre-rendered signal, and the like.
- the core decoder 10 may use a Unified Speech and Audio Coding (USAC) based codec.
- the rendering unit 20 renders the signal decoded by the core decoder 10 using reproduction layout information.
- the rendering unit 20 may include a format converter 22, an object renderer 24, an OAM decoder 25, a SAOC decoder 26, and a HOA decoder 28.
- the rendering unit 20 performs rendering using any one of the above configurations according to the type of the decoded signal.
- The format converter 22 converts the transmitted channel signals into output speaker channel signals. That is, the format converter 22 performs conversion between the transmitted channel configuration and the speaker channel configuration to be reproduced. If the number of output speaker channels (for example, 5.1 channels) is less than the number of transmitted channels (for example, 22.2 channels), or if the transmitted channel configuration differs from the channel configuration to be reproduced, the format converter 22 performs a downmix on the transmitted channel signals.
- the audio signal decoder of the present invention may generate an optimal downmix matrix using a combination of an input channel signal and an output speaker channel signal, and perform a downmix using the matrix.
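- In code, such a format conversion is a matrix multiplication per time sample; the uniform coefficients below are placeholders only, since real downmix matrices are optimized per input/output channel pair:

```python
import numpy as np

num_in, num_out, T = 24, 6, 1024                 # e.g. 22.2 -> 5.1, T samples
x = np.random.randn(num_in, T)                   # transmitted channel signals
D = np.full((num_out, num_in), 1.0 / num_in)     # placeholder downmix gains
y = D @ x                                        # output speaker channel signals
```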
- the channel signal processed by the format converter 22 may include a pre-rendered object signal.
- at least one object signal may be pre-rendered and mixed with the channel signal before encoding the audio signal.
- the mixed object signal may be converted into an output speaker channel signal by the format converter 22 together with the channel signal.
- the object renderer 24 and the SAOC decoder 26 perform rendering for the object based audio signal.
- the object-based audio signal may include individual object waveforms and parametric object waveforms.
- In the case of individual object waveforms, each object signal is provided to the encoder as a monophonic waveform, and the encoder transmits the respective object signals using single channel elements (SCEs).
- In the case of parametric object waveforms, a plurality of object signals are downmixed into at least one channel signal, and the characteristics of each object and the relationships between them are represented by spatial audio object coding (SAOC) parameters.
- Compressed object metadata corresponding to the objects may be transmitted together with the downmixed signal.
- Object metadata quantizes object attributes in units of time and space to specify the position and gain of each object in three-dimensional space.
- the OAM decoder 25 of the rendering unit 20 receives the compressed object metadata, decodes it, and passes it to the object renderer 24 and / or the SAOC decoder 26.
- the object renderer 24 uses object metadata to render each object signal in accordance with a given playback format.
- each object signal may be rendered to specific output channels based on the object metadata.
- the SAOC decoder 26 recovers the object / channel signal from the decoded SAOC transport channels and parametric information.
- the SAOC decoder 26 may generate an output audio signal based on the reproduction layout information and the object metadata. As such, the object renderer 24 and the SAOC decoder 26 may render the object signal as a channel signal.
- the HOA decoder 28 receives a Higher Order Ambisonics (HOA) signal and HOA side information and decodes it.
- The HOA decoder 28 generates a sound scene by modeling the channel signals or object signals with separate equations. When the speaker positions in the generated sound scene are selected, rendering to the speaker channel signals may be performed.
- the channel-based audio signal and the object-based audio signal processed by the rendering unit 20 are transferred to the mixer 30.
- the mixer 30 adjusts delays of the channel-based waveform and the rendered object waveform and sums them in units of samples.
- the audio signal summed by the mixer 30 is passed to the post processing unit 40.
- the post processing unit 40 includes a speaker renderer 100 and a binaural renderer 200.
- the speaker renderer 100 performs post processing for outputting the multichannel and / or multiobject audio signal transmitted from the mixer 30.
- Such post processing may include dynamic range control (DRC), loudness normalization (LN) and peak limiter (PL).
- the binaural renderer 200 generates a binaural downmix signal of the multichannel and / or multiobject audio signal.
- the binaural downmix signal is a two-channel audio signal such that each input channel / object signal is represented by a virtual sound source located in three dimensions.
- the binaural renderer 200 may receive an audio signal supplied to the speaker renderer 100 as an input signal.
- Binaural rendering is performed based on a Binaural Room Impulse Response (BRIR) filter and may be performed on a time domain or a QMF domain.
- Referring to FIG. 2, the binaural renderer 200 may include a BRIR parameterization unit 210, a fast convolution unit 230, a late reverberation generation unit 240, a QTDL processing unit 250, and a mixer & combiner 260.
- the binaural renderer 200 performs binaural rendering on various types of input signals to generate 3D audio headphone signals (ie, 3D audio two channel signals).
- the input signal may be an audio signal including at least one of a channel signal (ie, a speaker channel signal), an object signal, and a HOA signal.
- the binaural renderer 200 when the binaural renderer 200 includes a separate decoder, the input signal may be an encoded bitstream of the aforementioned audio signal.
- Binaural rendering converts the decoded input signal into a binaural downmix signal, so that surround sound can be experienced over headphones.
- the binaural renderer 200 may perform binaural rendering of the input signal on the QMF domain.
- the binaural renderer 200 may receive a multi-channel (N channels) signal of a QMF domain and perform binaural rendering on the multi-channel signal using a BRIR subband filter of the QMF domain.
- Here, the BRIR subband filter is the time-domain BRIR filter transformed into a subband filter of the QMF domain.
- That is, binaural rendering may be performed by dividing a channel signal or an object signal of the QMF domain into a plurality of subband signals, convolving each subband signal with the corresponding BRIR subband filter, and then summing the results.
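- The following sketch shows this per-subband convolution-and-sum structure with assumed array shapes (M channels, K subbands, complex QMF samples); it is the straightforward form whose complexity the later embodiments reduce:

```python
import numpy as np

def binaural_render_qmf(X, H_L, H_R):
    """X: (M, K, T) QMF-domain signals, M channels, K subbands, T slots.
    H_L, H_R: (M, K, N) BRIR subband filters per channel, subband and ear.
    Convolves every subband signal with its BRIR subband filter and sums
    over channels, giving two-channel QMF-domain output signals."""
    M, K, T = X.shape
    N = H_L.shape[2]
    Y_L = np.zeros((K, T + N - 1), dtype=complex)
    Y_R = np.zeros((K, T + N - 1), dtype=complex)
    for m in range(M):                 # each input channel
        for k in range(K):             # each QMF subband
            Y_L[k] += np.convolve(X[m, k], H_L[m, k])
            Y_R[k] += np.convolve(X[m, k], H_R[m, k])
    return Y_L, Y_R

M, K, T, N = 4, 64, 32, 16
X = np.random.randn(M, K, T) + 1j * np.random.randn(M, K, T)
H_L = np.random.randn(M, K, N) + 1j * np.random.randn(M, K, N)
H_R = np.random.randn(M, K, N) + 1j * np.random.randn(M, K, N)
Y_L, Y_R = binaural_render_qmf(X, H_L, H_R)
```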
- the BRIR parameterization unit 210 converts and edits BRIR filter coefficients and generates various parameters for binaural rendering in the QMF domain.
- the BRIR parameterization unit 210 receives time domain BRIR filter coefficients for a multichannel or multiobject and converts them into QMF domain BRIR filter coefficients.
- the QMF domain BRIR filter coefficients include a plurality of subband filter coefficients respectively corresponding to the plurality of frequency bands.
- the subband filter coefficients indicate each BRIR filter coefficient of the QMF transformed subband domain.
- Subband filter coefficients may also be referred to herein as BRIR subband filter coefficients.
- the BRIR parameterization unit 210 may edit the plurality of BRIR subband filter coefficients of the QMF domain, respectively, and transmit the edited subband filter coefficients to the high speed convolution unit 230.
- the BRIR parameterization unit 210 may be included as one component of the binaural renderer 200, or may be provided as a separate device.
- Meanwhile, the configuration including the fast convolution unit 230, the late reverberation generation unit 240, the QTDL processing unit 250, and the mixer & combiner 260, excluding the BRIR parameterization unit 210, may be classified as the binaural rendering unit 220.
- the BRIR parameterization unit 210 may receive, as an input, a BRIR filter coefficient corresponding to at least one position of the virtual reproduction space.
- Each position of the virtual reproduction space may correspond to each speaker position of the multichannel system.
- each BRIR filter coefficient received by the BRIR parameterization unit 210 may be directly matched to each channel or each object of the input signal of the binaural renderer 200.
- each of the received BRIR filter coefficients may have a configuration independent of the input signal of the binaural renderer 200.
- That is, the BRIR filter coefficients received by the BRIR parameterization unit 210 may not directly match the input signal of the binaural renderer 200, and the number of received BRIR filter coefficients may be smaller or larger than the total number of channels and/or objects of the input signal.
- In this case, the BRIR parameterization unit 210 converts and edits the BRIR filter coefficients corresponding to each channel or each object of the input signal of the binaural renderer 200, and transfers them to the binaural rendering unit 220.
- the corresponding BRIR filter coefficients may be matching BRIR or fallback BRIR for each channel or each object.
- BRIR matching may be determined according to whether or not there is a BRIR filter coefficient targeting the position of each channel or each object in the virtual reproduction space. If there is a BRIR filter coefficient targeting at least one of each channel or the position of each object of the input signal, the corresponding BRIR filter coefficient may be a matching BRIR of the input signal.
- If there is no BRIR filter coefficient targeting the position of a particular channel or object, BRIR filter coefficients targeting the position most similar to that channel or object may be provided as a fallback BRIR for the corresponding channel or object.
- the BRIR parameterization unit 210 may convert and edit all of the received BRIR filter coefficients and transmit the converted BRIR filter coefficients to the binaural rendering unit 220.
- In this case, the operation of selecting the BRIR filter coefficients (or the edited BRIR filter coefficients) corresponding to each channel or each object of the input signal may be performed by the binaural rendering unit 220.
- The binaural rendering unit 220 includes a fast convolution unit 230, a late reverberation generation unit 240, and a QTDL processing unit 250, and receives a multi-audio signal including a multi-channel and/or multi-object signal.
- an input signal including a multichannel and / or multiobject signal is referred to as a multi audio signal.
- According to an embodiment, the binaural rendering unit 220 receives the multi-channel signal of the QMF domain.
- However, the input signal of the binaural rendering unit 220 may also be a time-domain multi-channel signal, a time-domain multi-object signal, and the like.
- the input signal may be an encoded bitstream of the multi audio signal.
- the present invention will be described based on the case of performing BRIR rendering on the multi-audio signal, but the present invention is not limited thereto. That is, the features provided by the present invention may be applied to other types of rendering filters other than BRIR, and may be applied to an audio signal of a single channel or a single object rather than a multi-audio signal.
- the fast convolution unit 230 performs fast convolution between the input signal and the BRIR filter to process direct sound and early reflection on the input signal.
- the high speed convolution unit 230 may perform high speed convolution using a truncated BRIR.
- the truncated BRIR includes a plurality of subband filter coefficients truncated depending on each subband frequency, and is generated by the BRIR parameterization unit 210. In this case, the length of each truncated subband filter coefficient is determined depending on the frequency of the corresponding subband.
- the fast convolution unit 230 may perform variable order filtering in the frequency domain by using truncated subband filter coefficients having different lengths according to subbands.
- fast convolution may be performed between the QMF domain subband audio signal and the truncated subband filters of the corresponding QMF domain for each frequency band.
- Herein, the direct sound & early reflection (D&E) part may be referred to as the front (F)-part.
- the late reverberation generator 240 generates a late reverberation signal with respect to the input signal.
- The late reverberation signal represents the output signal that follows the direct sound and early reflections generated by the fast convolution unit 230.
- the late reverberation generator 240 may process the input signal based on the reverberation time information determined from each subband filter coefficient transmitted from the BRIR parameterization unit 210.
- the late reverberation generator 240 may generate a mono or stereo downmix signal for the input audio signal and perform late reverberation processing on the generated downmix signal.
- the late reverberation (LR) part herein may be referred to as a parametric (P) -part.
- The QMF domain tapped delay line (QTDL) processing unit 250 processes signals in the high-frequency bands of the input audio signal.
- the QTDL processing unit 250 receives at least one parameter corresponding to each subband signal of a high frequency band from the BRIR parameterization unit 210 and performs tap-delay line filtering in the QMF domain using the received parameter.
- According to an embodiment, the binaural renderer 200 separates the input audio signal into low-frequency band signals and high-frequency band signals based on a predetermined constant or a predetermined frequency band; the low-frequency band signals may be processed by the fast convolution unit 230 and the late reverberation generator 240, and the high-frequency band signals by the QTDL processing unit 250, respectively.
- the fast convolution unit 230, the late reverberation generator 240, and the QTDL processing unit 250 output two QMF domain subband signals, respectively.
- the mixer & combiner 260 performs mixing by combining the output signal of the fast convolution unit 230, the output signal of the late reverberation generator 240, and the output signal of the QTDL processing unit 250. At this time, the combination of the output signal is performed separately for the left and right output signals of the two channels.
- the binaural renderer 200 QMF synthesizes the combined output signal to produce a final output audio signal in the time domain.
- the audio signal processing apparatus may refer to the binaural renderer 200 or the binaural rendering unit 220 illustrated in FIG. 2.
- the audio signal processing apparatus may broadly refer to the audio signal decoder of FIG. 1 including a binaural renderer.
- Each binaural renderer illustrated in FIGS. 3 to 7 may represent only a partial configuration of the binaural renderer 200 illustrated in FIG. 2 for convenience of description.
- Hereinafter, embodiments are described mainly for a multi-channel input signal, but unless otherwise stated, the terms channel, multi-channel, and multi-channel input signal are used as concepts that include object, multi-object, and multi-object input signal, respectively.
- the multichannel input signal may be used as a concept including a HOA decoded and rendered signal.
- FIG. 3 illustrates a binaural renderer 200A according to an embodiment of the present invention.
- Binaural rendering using BRIRs can be generalized as M-to-O processing that obtains O output signals from a multi-channel input signal having M channels.
- Binaural filtering can be regarded as filtering using filter coefficients corresponding to each input channel and output channel in this process.
- the original filter set H denotes transfer functions from the speaker position of each channel signal to the left and right ear positions.
- One of these transfer functions measured in a general listening room, that is, a room with reverberation, is called a Binaural Room Impulse Response (BRIR).
- the BRIR contains not only the direction information but also the information of the reproduction space.
- the HRTF and an artificial reverberator may be used to replace the BRIR.
- a binaural rendering using BRIR will be described.
- the present invention is not limited thereto, and the present invention is equally applicable to binaural rendering using various types of FIR filters.
- For example, the BRIR may have a length of 96K samples, and since multi-channel binaural rendering is performed using M * O different filters, a process with a very high computational load is required.
- the BRIR parameterization unit 210 may generate modified filter coefficients from the original filter set H to optimize the calculation amount.
- the BRIR parameterization unit 210 separates the original filter coefficients into F (front) -part coefficients and P (parametric) -part coefficients.
- the F-part represents the direct sound and the early reflection sound (D & E) part
- the P-part represents the late reverberation (LR) part.
- For example, an original filter having a length of 96K samples may be separated into an F-part obtained by cutting only the first 4K samples and a P-part corresponding to the remaining 92K samples.
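- As a sketch, using the example numbers from the text (taking 96K and 4K as 96,000 and 4,000 samples for illustration), this separation is a simple split of the coefficient array:

```python
import numpy as np

brir = np.random.randn(96000)      # stand-in for a 96K-sample time-domain BRIR
f_part_len = 4000                  # cut point from the example in the text
f_part = brir[:f_part_len]         # direct sound + early reflections (F-part)
p_part = brir[f_part_len:]         # remaining 92K samples: late reverb (P-part)
```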
- The binaural rendering unit 220 receives the F-part coefficients and the P-part coefficients from the BRIR parameterization unit 210 and uses them to render the multi-channel input signal.
- According to an embodiment, the fast convolution unit 230 illustrated in FIG. 2 renders the multi-audio signal using the F-part coefficients received from the BRIR parameterization unit 210, and the late reverberation generator 240 renders the multi-audio signal using the P-part coefficients received from the BRIR parameterization unit 210.
- F-part rendering (binaural rendering using the F-part coefficients) may be implemented with a conventional finite impulse response (FIR) filter, while P-part rendering (binaural rendering using the P-part coefficients) may be implemented in a parametric way.
- the complexity-quality control input provided by the user or control system may be used to determine the information generated by the F-part and / or P-part.
- FIG. 4 illustrates a binaural renderer 200B according to another embodiment of the present invention, showing a more detailed method of implementing F-part rendering.
- the P-part rendering unit is omitted in FIG. 4.
- Although FIG. 4 shows a filter implemented in the QMF domain, the present invention is not limited thereto and may be applied to subband processing in other domains as well.
- F-part rendering may be performed by the fast convolution unit 230 on the QMF domain.
- First, the QMF analyzer 222 converts the time-domain input signals x0, x1, ..., x_M-1 into the QMF-domain signals X0, X1, ..., X_M-1.
- Here, the input signals x0, x1, ..., x_M-1 may be a multi-channel audio signal, for example, channel signals corresponding to a 22.2-channel speaker configuration.
- the QMF domain may use 64 subbands in total, but the present invention is not limited thereto.
- the QMF analyzer 222 may be omitted from the binaural renderer 200B.
- In this case, the binaural renderer 200B directly receives the QMF-domain signals X0, X1, ..., X_M-1 as input without QMF analysis.
- When the QMF-domain signals are received directly as input in this way, the QMF used in the binaural renderer according to the present invention is the same as the QMF used in the previous processing unit (for example, SBR).
- The QMF synthesizing unit 224 performs QMF synthesis on the binaurally rendered two-channel left and right signals Y_L and Y_R to generate the two-channel output audio signals yL and yR in the time domain.
- FIGS. 5 through 7 illustrate embodiments of binaural renderers 200C, 200D, and 200E, each of which performs both F-part rendering and P-part rendering.
- In the embodiments of FIGS. 5 to 7, the F-part rendering is performed by the fast convolution unit 230 in the QMF domain, and the P-part rendering is performed by the late reverberation generation unit 240 in the QMF domain or the time domain.
- In FIGS. 5 to 7, detailed description of parts overlapping with the embodiments of the previous drawings will be omitted.
- Referring to FIG. 5, the binaural renderer 200C may perform both F-part rendering and P-part rendering in the QMF domain. That is, the QMF analysis unit 222 of the binaural renderer 200C converts the time-domain input signals x0, x1, ..., x_M-1 into QMF-domain signals X0, X1, ..., X_M-1 and transfers them to the fast convolution unit 230 and the late reverberation generation unit 240, respectively.
- The fast convolution unit 230 and the late reverberation generation unit 240 render the QMF-domain signals X0, X1, ..., X_M-1 to generate the two-channel output signals Y_L, Y_R and Y_Lp, Y_Rp, respectively.
- In this case, the fast convolution unit 230 and the late reverberation generator 240 may perform rendering using the F-part filter coefficients and the P-part filter coefficients received from the BRIR parameterization unit 210, respectively.
- the output signals Y_L, Y_R of the F-part rendering and the output signals Y_Lp, Y_Rp of the P-part rendering are combined by the left and right channels in the mixer & combiner 260 and transmitted to the QMF synthesis unit 224.
- the QMF synthesizing unit 224 QMF synthesizes the input two left and right signals to generate two channel output audio signals yL and yR in the time domain.
- the binaural renderer 200D may perform F-part rendering in the QMF domain and P-part rendering in the time domain, respectively.
- The QMF analyzer 222 of the binaural renderer 200D QMF-converts the time-domain input signal and transfers the converted signal to the fast convolution unit 230.
- the fast convolution unit 230 generates the output signals Y_L and Y_R of two channels by F-part rendering the QMF domain signal.
- the QMF synthesizing unit 224 converts the output signal of the F-part rendering into a time domain output signal and delivers it to the mixer & combiner 260.
- the late reverberation generator 240 directly receives the time domain input signal and performs P-part rendering.
- the output signals yLp and yRp of the P-part rendering are sent to the mixer & combiner 260.
- the mixer & combiner 260 combines the F-part rendering output signal and the P-part rendering output signal in the time domain, respectively, to generate the two-channel output audio signals yL and yR in the time domain.
- In this case, the F-part rendering and the P-part rendering are performed in parallel.
- Referring to FIG. 7, the binaural renderer 200E may perform F-part rendering and P-part rendering sequentially. That is, the fast convolution unit 230 performs F-part rendering on the QMF-converted input signal, the F-part rendered two-channel signals Y_L and Y_R are converted into time-domain signals by the QMF synthesis unit 224, and the converted signals are then delivered to the late reverberation generator 240.
- the late reverberation generator 240 performs P-part rendering on the input two-channel signal to generate two-channel output audio signals yL and yR in the time domain.
- FIGS. 5 to 7 each illustrate an embodiment of performing F-part rendering and P-part rendering, and binaural rendering may also be performed by combining or modifying the embodiments of the respective drawings.
- For example, the binaural renderer may perform P-part rendering separately for each input multi-audio signal, or may downmix the input signals to a two-channel left/right signal or a mono signal and then perform P-part rendering on the downmixed signal.
- FIGS. 8 to 10 illustrate a method for generating an FIR filter for binaural rendering according to an embodiment of the present invention.
- an FIR filter converted to a plurality of subband filters of the QMF domain may be used for binaural rendering in the QMF domain.
- subband filters truncated depending on the subband frequencies may be used for F-part rendering. That is, the fast convolution unit of the binaural renderer may perform variable order filtering in the QMF domain by using truncated subband filters having different lengths according to subbands. 8 to 10 described below may be performed by the BRIR parameterization unit 210 of FIG. 2.
- FIG. 8 shows an embodiment of the length according to each QMF band of the QMF domain filter used for binaural rendering.
- the FIR filter is converted into I QMF subband filters
- Fi represents the truncated subband filter of QMF subband i.
- the QMF domain may use 64 subbands in total, but the present invention is not limited thereto.
- N represents the length (number of taps) of the original subband filter
- The lengths of the truncated subband filters are represented by N1, N2, and N3, respectively; here, the lengths N, N1, N2, and N3 represent the number of taps in the downsampled QMF domain.
- truncated subband filters having different lengths N1, N2, N3 according to each subband may be used for F-part rendering.
- the truncated subband filter is a front filter cut from the original subband filter, and may also be referred to as a front subband filter.
- the rear after truncation of the original subband filter may be referred to as a rear subband filter and may be used for P-part rendering.
- In the present invention, the filter order for each subband may be determined based on parameters extracted from the original BRIR filter, for example, reverberation time (RT) information for each subband filter, an energy decay curve (EDC) value, energy decay time information, and the like.
- The reverberation time varies with frequency, because the attenuation in air and the degree of sound absorption by wall and ceiling materials differ for each frequency. In general, a lower-frequency signal has a longer reverberation time. A long reverberation time means that much information remains in the rear part of the FIR filter.
- Accordingly, the length of each truncated subband filter of the present invention is determined based at least in part on characteristic information (for example, reverberation time information) extracted from the corresponding subband filter.
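- One plausible way to derive such a truncation length, sketched under the assumption that a reverberation-time-like point is read off an energy decay curve (Schroeder backward integration) with a -20 dB threshold:

```python
import numpy as np

def edc_db(h):
    """Energy decay curve: energy remaining after each tap, in dB
    (Schroeder backward integration)."""
    e = np.cumsum(np.abs(h[::-1]) ** 2)[::-1]
    return 10.0 * np.log10(e / e[0] + 1e-30)

def truncation_length(h, drop_db=20.0):
    """First tap where the EDC has fallen by drop_db dB; with 20 dB this is
    an RT20-style truncation point (an assumed, illustrative criterion)."""
    below = np.where(edc_db(h) <= -drop_db)[0]
    return int(below[0]) if below.size else len(h)

taps = np.arange(1024)
h_low = np.random.randn(1024) * np.exp(-taps / 300.0)   # slow decay (low band)
h_high = np.random.randn(1024) * np.exp(-taps / 40.0)   # fast decay (high band)
# Lower subbands decay more slowly, so they get longer truncated filters.
print(truncation_length(h_low), truncation_length(h_high))
```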
- each subband may be classified into a plurality of groups, and the length of each truncated subband filter may be determined according to the classified group.
- For example, each subband may be classified into three zones (Zone 1, Zone 2, and Zone 3), and the truncated subband filters of Zone 1, corresponding to low frequencies, may have a higher filter order (that is, a longer filter length) than the truncated subband filters of Zone 2 and Zone 3, corresponding to high frequencies. In addition, the filter order of the truncated subband filters may gradually decrease toward the higher-frequency zones.
- the length of each truncated subband filter may be determined independently and variably for each subband according to the characteristic information of the original subband filter.
- the length of each truncated subband filter is determined based on the truncation length determined in that subband and is not affected by the length of the truncated subband filter of neighboring or other subbands.
- the length of some or all truncated subband filters of Zone 2 may be longer than the length of at least one truncated subband filter of Zone 1.
- frequency domain variable order filtering may be performed only on a part of subbands classified into a plurality of groups. That is, truncated subband filters having different lengths may be generated only for subbands belonging to some of the classified at least two groups.
- the group in which the truncated subband filter is generated may be a subband group classified into a low frequency band (for example, Zone 1) based on a preset constant or a preset frequency band.
- the length of the truncated filter may be determined based on additional information obtained by the audio signal processing apparatus, such as complexity of the decoder, complexity level (profile), or required quality information.
- the complexity may be determined according to hardware resources of the audio signal processing apparatus or based on a value directly input by the user.
- the quality may be determined according to a user's request, or may be determined by referring to a value transmitted through the bitstream or other information included in the bitstream.
- the quality may be determined according to an estimated value of the quality of the transmitted audio signal. For example, the higher the bit rate, the higher the quality.
- the length of each truncated subband filter may increase proportionally according to complexity and quality, or may vary at different rates for each band.
- In addition, the length of each truncated subband filter may be determined as a multiple of a power of 2 so as to obtain an additional gain in fast processing such as the FFT described later.
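- Rounding the determined length up to the next power of two, one possible reading of 'a multiple of a power of 2', keeps FFT block sizes efficient:

```python
def round_up_pow2(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p <<= 1
    return p

print(round_up_pow2(300))   # -> 512: truncated filter length used for the FFT
```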
- If the determined length of a truncated subband filter exceeds the length of the actual subband filter, the length of the truncated subband filter may be adjusted to the length of the actual subband filter.
- the BRIR parameterization unit generates truncated subband filter coefficients (F-part coefficients) corresponding to each truncated subband filter determined according to the above-described embodiment, and transfers them to the fast convolution unit.
- the fast convolution unit performs frequency domain variable order filtering on each subband signal of the multi-audio signal using the truncated subband filter coefficients.
- FIG. 9 shows another embodiment of the length of each QMF band of the QMF domain filter used for binaural rendering.
- In describing the embodiment of FIG. 9, description of parts identical or corresponding to the embodiment of FIG. 8 will be omitted.
- In FIG. 9, Fi_L and Fi_R represent the truncated subband filters (front subband filters) used for F-part rendering of QMF subband i, and Pi represents the rear subband filter used for P-part rendering of QMF subband i.
- N denotes the length (number of taps) of the original subband filter
- NiF and NiP denote lengths of the front subband filter and the rear subband filter of subband i, respectively.
- NiF and NiP represent the number of taps in the down sampled QMF domain.
- the length of the rear subband filter as well as the front subband filter may be determined based on parameters extracted from the original subband filter. That is, the lengths of the front subband filter and the rear subband filter of each subband are determined based at least in part on the characteristic information extracted from the corresponding subband filter. For example, the length of the front subband filter may be determined based on the first reverberation time information of the corresponding subband filter, and the length of the rear subband filter may be determined based on the second reverberation time information.
- That is, the front subband filter is the front part cut from the original subband filter based on the first reverberation time information, and the rear subband filter may be the rear part following the front subband filter, corresponding to the section between the first reverberation time and the second reverberation time.
- the first reverberation time information may be RT20 and the second reverberation time information may be RT60, but the present invention is not limited thereto.
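- For illustration, a hedged sketch of deriving the two truncation lengths from reverberation time information follows. It assumes RT20 and RT60 are given in seconds per subband, that one QMF-domain tap spans hop_s seconds, and that the front length is rounded up to a power of 2 as described above; all names are illustrative.

```python
def next_pow2(n):
    """Smallest power of 2 greater than or equal to n (n >= 1)."""
    return 1 << (max(1, n) - 1).bit_length()

def truncation_lengths(rt20_s, rt60_s, hop_s):
    n_f = next_pow2(round(rt20_s / hop_s))          # front filter taps (NiF)
    n_p = max(1, round((rt60_s - rt20_s) / hop_s))  # rear filter taps (NiP)
    return n_f, n_p
```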
- Meanwhile, within the second reverberation time lies the point at which the early reflection part switches over to the late reverberation part; from the viewpoint of the full-band BRIR, this point of transition from a section with deterministic characteristics to a section with stochastic characteristics is called the mixing time.
- Before the mixing time, information providing directionality for each position is mainly present, and this information is unique to each channel.
- On the other hand, since the late reverberation part has characteristics common to all channels, it may be efficient to process a plurality of channels at once. Accordingly, the mixing time may be estimated for each subband so that fast convolution is performed through F-part rendering before the mixing time, and processing reflecting the common characteristics of the channels is performed through P-part rendering after the mixing time.
- The length of the F-part, that is, the length of the front subband filter, may be longer or shorter than the length corresponding to the mixing time according to the complexity-quality control.
- In addition to truncation, modeling that reduces each subband filter to a lower order is possible. A typical method is FIR filter modeling using frequency sampling, which allows a filter minimized in the least-squares sense to be designed.
- The lengths of the front subband filter and/or the rear subband filter for each subband may have the same value for each channel of the corresponding subband.
- To reduce such channel-to-channel deviation, the filter length may be determined based on inter-channel or inter-subband relationships.
- To this end, the BRIR parameterization unit extracts first characteristic information (e.g., first reverberation time information) from the subband filters corresponding to the respective channels of the same subband, and combines the extracted first characteristic information to obtain one piece of filter order information (or first truncation point information) for that subband. The front subband filters for the respective channels of the subband may then be determined to have the same length based on the obtained filter order information (or first truncation point information).
- Likewise, the BRIR parameterization unit extracts second characteristic information (e.g., second reverberation time information) from the subband filters corresponding to the respective channels of the same subband, and combines the extracted second characteristic information to obtain second truncation point information to be applied in common to the rear subband filters of the respective channels of that subband.
- Here, the front subband filter may be the front filter truncated from the original subband filter based on the first truncation point information, and the rear subband filter may be the latter filter corresponding to the section following the front subband filter, between the first truncation point and the second truncation point.
- only F-part processing may be performed on subbands of a specific subband group.
- When processing is performed using only the filter up to the first truncation point of a subband, distortion at a level perceptible to the user may occur because of the energy difference of the processed filter relative to processing with the entire subband filter.
- To prevent this, energy compensation may be performed for the region of the subband filter not used for processing, that is, the region after the first truncation point.
- The energy compensation can be performed by dividing the F-part coefficients (front subband filter coefficients) by the filter power up to the first truncation point of the corresponding subband filter and multiplying them by the power of the desired region, that is, the total power of the corresponding subband filter. Accordingly, the energy of the F-part coefficients can be adjusted to equal the energy of the entire subband filter.
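- A minimal sketch of this energy compensation follows. It assumes the full subband filter h_full and the truncation length n_trunc are given, and reads "matching filter energy" as scaling the coefficient amplitudes by the square root of the power ratio; all names are illustrative.

```python
import numpy as np

def energy_compensate(h_full, n_trunc):
    """Scale the truncated F-part so its energy equals the full filter's."""
    f_part = np.asarray(h_full)[:n_trunc]
    p_front = np.sum(np.abs(f_part) ** 2)  # power up to first truncation point
    p_total = np.sum(np.abs(h_full) ** 2)  # total power of the subband filter
    return f_part * np.sqrt(p_total / p_front)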
- the binaural rendering unit may not perform the P-part processing based on the complexity-quality control. In this case, the binaural rendering unit may perform the energy compensation for the F-part coefficients using the P-part coefficients.
- In the above-described embodiments, the filter coefficients of the truncated subband filters having different lengths for each subband are all obtained from one time-domain filter (i.e., a prototype filter). That is, since the one time-domain filter is converted into a plurality of QMF subband filters and the lengths of the filters are varied per subband, each truncated subband filter is obtained from a single prototype filter.
- the BRIR parameterization unit generates front subband filter coefficients (F-part coefficients) corresponding to each front subband filter determined according to the above-described embodiment, and transfers them to the fast convolution unit.
- the fast convolution unit performs frequency domain variable order filtering on each subband signal of the multi-audio signal using the received front subband filter coefficients.
- the BRIR parameterization unit may generate rear subband filter coefficients (P-part coefficients) corresponding to each rear subband filter determined according to the above-described embodiments, and may transfer them to the late reverberation generation unit.
- the late reverberation generator may perform reverberation processing for each subband signal using the received rear subband filter coefficients.
- the BRIR parameterization unit may generate a downmix subband filter coefficient (downmix P-part coefficient) by combining rear subband filter coefficients for each channel, and transmit the downmix subband filter coefficients to the late reverberation generator.
- the late reverberation generator may generate two channels of left and right subband reverberation signals using the received downmix subband filter coefficients.
- FIG. 10 illustrates another embodiment of a method for generating an FIR filter used for binaural rendering.
- Descriptions of parts identical or corresponding to those of FIGS. 8 and 9 are omitted.
- a plurality of QMF transformed subband filters may be classified into a plurality of groups, and different processing may be applied to each classified group.
- For example, the plurality of subbands may be classified into a first subband group (Zone 1) of low frequencies and a second subband group (Zone 2) of high frequencies with respect to a preset frequency band (QMF band i).
- F-part rendering may be performed on the input subband signals of the first subband group
- QTDL processing described below may be performed on the input subband signals of the second subband group.
- the BRIR parameterization unit generates front subband filter coefficients for each subband of the first subband group, and transfers the front subband filter coefficients to the fast convolution unit.
- the fast convolution unit performs F-part rendering on the subband signals of the first subband group by using the received front subband filter coefficients.
- P-part rendering of subband signals of the first subband group may be additionally performed by the late reverberation generator.
- the BRIR parameterization unit obtains at least one parameter from each subband filter coefficient of the second subband group and transfers it to the QTDL processing unit.
- the QTDL processing unit performs tap-delay line filtering on each subband signal of the second subband group using the obtained parameter as described below.
- The predetermined frequency band (QMF band i) that distinguishes the first subband group from the second subband group may be determined based on a predetermined constant value, or may be determined depending on the bitstream characteristics of the transmitted audio input signal. For example, in the case of an audio signal using SBR, the second subband group may be set to correspond to the SBR band.
- According to a further embodiment, the plurality of subbands may be classified into three subband groups based on a first frequency band (QMF band i) and a second frequency band (QMF band j). That is, the plurality of subbands may be classified into a first subband group (Zone 1), which is a low frequency region smaller than or equal to the first frequency band; a second subband group (Zone 2), which is an intermediate frequency region larger than the first frequency band and smaller than or equal to the second frequency band; and a third subband group (Zone 3), which is a high frequency region larger than the second frequency band.
- In this case, F-part rendering and QTDL processing may be performed on the subband signals of the first subband group and the second subband group, respectively, as described above, and rendering may not be performed on the subband signals of the third subband group.
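- A hedged sketch of this three-zone dispatch is shown below, assuming subband indices up to band_i belong to Zone 1, indices up to band_j to Zone 2, and the rest to Zone 3; the boundary convention is illustrative.

```python
def classify_zone(sb_index, band_i, band_j):
    """Select the processing path for a QMF subband index."""
    if sb_index <= band_i:
        return "f_part"   # Zone 1: fast convolution (optionally plus P-part)
    if sb_index <= band_j:
        return "qtdl"     # Zone 2: tap-delay line filtering
    return "skip"         # Zone 3: no rendering
```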
- Next, various embodiments of the P-part rendering of the present invention will be described with reference to FIGS. 11 to 14, that is, various embodiments of the late reverberation generation unit 240 of FIG. 2 performing P-part rendering in the QMF domain.
- In the embodiments of FIGS. 11 to 14, it is assumed that the multichannel input signal is received as subband signals of the QMF domain. Accordingly, the processing of each component of FIGS. 11 to 14, that is, the decorrelator 241, the subband filtering unit 242, the IC matching unit 243, the downmixing unit 244, and the energy attenuation matching unit 246, may be performed for each QMF subband.
- In FIGS. 11 to 14, detailed descriptions of parts overlapping with the embodiments of the previous drawings are omitted.
- Pi (P1, P2, P3, ...) corresponding to the P-part corresponds to the rear portion of each subband filter removed according to the frequency variable truncation.
- The length of the P-part may be defined as the entire filter after the truncation point of each subband filter, or may be defined as a smaller length with reference to the second reverberation time information of the corresponding subband filter.
- P-part rendering may be performed independently for each channel, or may be performed for downmixed channels.
- The P-part rendering may apply different processing for each preset subband group or for each subband, or may apply the same processing to all subbands.
- The processing applicable to the P-part includes energy reduction compensation for the input signal, tap-delay line filtering, processing using an IIR (Infinite Impulse Response) filter, processing using an artificial reverberator, frequency-independent interaural coherence (FIIC) compensation, and frequency-dependent interaural coherence (FDIC) compensation.
- IIR: Infinite Impulse Response
- FIIC: Frequency-Independent Interaural Coherence
- EDR: Energy Decay Relief
- FDIC: Frequency-Dependent Interaural Coherence
- STFT: Short Time Fourier Transform (here applied to the impulse response)
- n: time index
- i: frequency index
- k: frame index
- m: output channel index (L, R)
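- The STFT-domain FDIC equation itself appears as an image in the original publication and is not reproduced in this text. A hedged reconstruction, using a standard interaural coherence definition consistent with the symbols listed above (with H_m(i,k) denoting the STFT of the impulse response of channel m), would be:

$$\mathrm{FDIC}(i)=\frac{\Re\left\{\sum_{k}H_{L}(i,k)\,H_{R}^{*}(i,k)\right\}}{\sqrt{\sum_{k}\left|H_{L}(i,k)\right|^{2}\;\sum_{k}\left|H_{R}(i,k)\right|^{2}}}$$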
- In the numerator, the function Re(x) outputs the real part of the input x, and x* denotes the complex conjugate of x.
- The numerator in the above formula may be replaced by a function that takes the absolute value instead of the real part.
- Since the binaural rendering in the present invention is performed in the QMF domain, the FDIC may instead be defined in the QMF domain by the following equation.
- i is the subband index
- k is the time index in the subband
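- The QMF-domain equation is likewise an image in the original. Under the same assumption as above, it would take the same form with the QMF-domain subband filter coefficients h_m(i,k) in place of the STFT coefficients:

$$\mathrm{FDIC}(i)=\frac{\Re\left\{\sum_{k}h_{L}(i,k)\,h_{R}^{*}(i,k)\right\}}{\sqrt{\sum_{k}\left|h_{L}(i,k)\right|^{2}\;\sum_{k}\left|h_{R}(i,k)\right|^{2}}}$$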
- In general, the FDIC of the late reverberation part is a parameter mainly influenced by the positions of the two microphones when the BRIR is recorded. Assuming that the listener's head is a sphere, the theoretical FDIC of the BRIR, IC_ideal, can satisfy the following equation:
- Here, r is the distance between the listener's ears, that is, the distance between the two microphones, and k is the frequency index.
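- This equation is also not reproduced in the text. A plausible reconstruction, consistent with the later remark that the channel-averaged FDIC converges to a sinc function (Equation 5), is the diffuse-field coherence of two points separated by r:

$$\mathrm{IC}_{\mathrm{ideal}}(k)=\operatorname{sinc}(kr)=\frac{\sin(kr)}{kr}$$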
- the initial reflection sound mainly included in the F-part is very different for each channel.
- the FDIC of the F-part varies very differently from channel to channel.
- In the high frequency band the FDIC varies greatly, but this is because large measurement errors arise from the characteristics of high frequency band signals, whose energy decays rapidly; when the average over the channels is taken, the FDIC converges to almost zero.
- Even for the late reverberation part, the difference in FDIC between channels arises from measurement error, and it can be seen that the average converges to the sinc function as shown in Equation 5.
- the late reverberation generation unit for P-part rendering may be implemented based on the above characteristics.
- the late reverberation generation unit 240A may include a subband filtering unit 242 and downmixing units 244a and 244b.
- The subband filtering unit 242 filters the multichannel input signals X0, X1, ..., X_M-1 for each subband using the P-part coefficients.
- the P-part coefficient is received from a BRIR parameterization unit (not shown) as described above, and may include coefficients of a rear subband filter having different lengths for each subband.
- the subband filtering unit 242 performs fast convolution between the QMF domain subband signal and the rear subband filter of the QMF domain corresponding to each frequency.
- the length of the rear subband filter may be determined based on the RT60 as described above, but may be set to a value larger or smaller than the RT60 according to the complexity-quality control.
- In this way, the multichannel input signals are rendered by the subband filtering unit 242 into the left channel signals X_L0, X_L1, ..., X_L_M-1 and the right channel signals X_R0, X_R1, ..., X_R_M-1, respectively.
- the downmix units 244a and 244b downmix the rendered plurality of left channel signals and the plurality of right channel signals by left and right channels, respectively, to generate two channels of left and right output signals Y_Lp and Y_Rp.
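- A minimal sketch of this per-channel P-part path follows. It assumes equal-length channel signals and that all rear subband filters of the subband share one length so the convolved outputs align; p_left[m] and p_right[m] hold channel m's rear filters for the left and right ears, and all names are illustrative.

```python
import numpy as np

def p_part_render(x_ch, p_left, p_right):
    """Convolve each channel with its rear filters and downmix to two ears."""
    y_lp = sum(np.convolve(x, h) for x, h in zip(x_ch, p_left))
    y_rp = sum(np.convolve(x, h) for x, h in zip(x_ch, p_right))
    return y_lp, y_rp
```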
- The late reverberation generation unit 240B may include a decorrelator 241, an IC matching unit 243, downmixing units 244a and 244b, and energy attenuation matching units 246a and 246b.
- the BRIR parameterization unit (not shown) may include an IC estimator 213 and a downmix subband filter generator 216.
- The late reverberation generation unit 240B may reduce the amount of computation by assuming the same energy decay characteristic for every channel of the late reverberation part. That is, the late reverberation generation unit 240B performs decorrelation and interaural coherence (IC) adjustment on each multichannel signal, downmixes the adjusted input signals and decorrelated signals of the respective channels into left and right channel signals, and then compensates for the energy decay of the downmixed signals to generate the two-channel left and right output signals. More specifically, the decorrelator 241 generates the decorrelated signals D0, D1, ..., D_M-1 for the respective multichannel input signals X0, X1, ..., X_M-1. The decorrelator 241 is a kind of preprocessor for adjusting the coherence between the two ears; a phase randomizer may be used, and the phase of the input signal may be changed in units of 90 degrees for computational efficiency.
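- A hedged sketch of such a 90-degree phase randomizer follows. Rotating a complex QMF sample by a multiple of 90 degrees only swaps and negates its real and imaginary parts, so the operation is cheap; the rotation pattern is illustrative, not the patent's exact scheme.

```python
import numpy as np

def decorrelate_90deg(x, rng=np.random.default_rng(0)):
    """Rotate each complex subband sample by a random multiple of 90 degrees."""
    steps = rng.integers(0, 4, size=len(x))  # 0, 90, 180, or 270 degrees
    return x * (1j ** steps)
```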
- the IC estimator 213 of the BRIR parameterization unit estimates an IC value and transmits the IC value to the binaural rendering unit (not shown).
- the binaural rendering unit may store the received IC value in the memory 255 and transmit the received IC value to the IC matching unit 243.
- Alternatively, the IC matching unit 243 may receive the IC value directly from the BRIR parameterization unit, or may obtain an IC value previously stored in the memory 255.
- The input signal and the decorrelated signal of each channel are rendered by the IC matching unit 243 into the left channel signals X_L0, X_L1, ..., X_L_M-1 and the right channel signals X_R0, X_R1, ..., X_R_M-1.
- The IC matching unit 243 performs a weighted sum of the decorrelated signal and the original input signal for each channel by referring to the IC value, thereby adjusting the coherence between the two channel signals.
- Since the input signal of each channel is a subband-domain signal, the above-described FDIC can be matched.
- the left and right channel signals X_L and X_R on which IC matching is performed may be expressed by the following equation.
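- That equation is an image in the original and is not reproduced here. A hedged reconstruction, using the standard weighted sum of an input signal X and its decorrelated signal D to reach a target coherence value φ, is:

$$X_{L}=\sqrt{\frac{1+\phi}{2}}\,X+\sqrt{\frac{1-\phi}{2}}\,D,\qquad X_{R}=\sqrt{\frac{1+\phi}{2}}\,X-\sqrt{\frac{1-\phi}{2}}\,D$$

- With this choice, the mixing preserves signal power, and the normalized cross-correlation between X_L and X_R equals φ when X and D are uncorrelated and of equal power.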
- the downmix units 244a and 244b downmix the plurality of left channel signals and the plurality of right channel signals rendered through IC matching for each left and right channel to generate two left and right rendering signals.
- the energy attenuation matching units 246a and 246b generate the two channel left and right output signals Y_Lp and Y_Rp by reflecting the energy decay of the two channel left and right rendering signals, respectively.
- the energy attenuation matching units 246a and 246b perform energy attenuation matching using the downmix subband filter coefficients obtained from the downmix subband filter generation unit 216.
- the downmix subband filter coefficients are generated by a combination of rear subband filter coefficients for each channel of the corresponding subband.
- According to an embodiment, the downmix subband filter coefficients may be obtained by taking the square root of the average of the squared amplitude responses of the rear subband filter coefficients of the respective channels for the corresponding subband. Accordingly, the downmix subband filter coefficients reflect the energy reduction characteristics of the late reverberation part for the corresponding subband signal.
- The downmix subband filter coefficients may include coefficients downmixed to mono or stereo depending on the embodiment, and, like the FDIC, may be received directly from the BRIR parameterization unit or obtained from values previously stored in the memory 225.
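- A minimal sketch of generating such downmix P-part coefficients follows, assuming rear_filters stacks the per-channel rear subband filter coefficients for one subband; the name is illustrative.

```python
import numpy as np

def downmix_p_coeffs(rear_filters):
    """Root-mean-square envelope of the per-channel rear filters.

    rear_filters: array-like of shape (M_channels, n_taps).
    """
    mean_sq = np.mean(np.abs(np.asarray(rear_filters)) ** 2, axis=0)
    return np.sqrt(mean_sq)  # carries the average late-reverberation decay
```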
- FIG. 13 illustrates a late reverberation generation unit 240C according to another embodiment of the present invention.
- Each configuration of the late reverberation generation unit 240C of FIG. 13 may be the same as each configuration of the late reverberation generation unit 240B described in the embodiment of FIG. 12, and the data processing order between the elements may be partially different.
- The late reverberation generation unit 240C may further reduce the amount of computation by assuming the same FDIC for every channel of the late reverberation part. That is, the late reverberation generation unit 240C downmixes the multichannel signals into left and right channel signals, adjusts the IC of the downmixed left and right channel signals, and then compensates for the energy decay of the adjusted left and right channel signals to generate the two-channel left and right output signals.
- More specifically, the decorrelator 241 generates the decorrelated signals D0, D1, ..., D_M-1 for the respective multichannel input signals X0, X1, ..., X_M-1.
- Next, the downmix units 244a and 244b downmix the multichannel input signals and the decorrelated signals, respectively, to generate the two-channel downmix signals X_DMX and D_DMX.
- The IC matching unit 243 performs a weighted sum of the two-channel downmix signals with reference to the IC value, thereby adjusting the coherence between the two channel signals.
- The energy attenuation matching units 246a and 246b perform energy compensation on the left and right channel signals X_L and X_R, on which IC matching has been performed by the IC matching unit 243, to output the two-channel left and right output signals Y_Lp and Y_Rp.
- the energy compensation information used for energy compensation may include downmix subband filter coefficients for each subband.
- Each configuration of the late reverberation generation unit 240D of FIG. 14 may be the same as each configuration of the late reverberation generation units 240B and 240C described in the embodiments of FIGS. 12 and 13, but has a more simplified feature.
- First, the downmix unit 244 downmixes the multichannel input signals X0, X1, ..., X_M-1 for each subband to generate a mono downmix signal (i.e., a mono subband signal) X_DMX.
- the energy decay matching unit 246 reflects the energy decay of the generated mono downmix signal.
- downmix subband filter coefficients for each subband may be used to reflect energy attenuation.
- Next, the decorrelator 241 generates a decorrelated signal D_DMX of the mono downmix signal in which the energy decay has been reflected.
- Finally, the IC matching unit 243 performs a weighted sum of the energy-decay-matched mono downmix signal and its decorrelated signal with reference to the FDIC value, thereby generating the two-channel left and right output signals Y_Lp and Y_Rp. According to the embodiment of FIG. 14, the energy decay matching is performed only once, on the mono downmix signal X_DMX, thereby further reducing the amount of computation.
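- An end-to-end hedged sketch of this simplified path follows, chaining the illustrative steps above: mono downmix, energy decay matching via a downmix P-part envelope p_dmx, 90-degree decorrelation, then IC matching toward a target coherence phi. It is a sketch under those naming assumptions, not the patent's exact implementation.

```python
import numpy as np

def late_reverb_240d(x_ch, p_dmx, phi, rng=np.random.default_rng(0)):
    x_dmx = np.sum(np.asarray(x_ch), axis=0)            # mono downmix X_DMX
    x_dmx = np.convolve(x_dmx, p_dmx)[:x_dmx.shape[0]]  # energy decay matching
    steps = rng.integers(0, 4, size=x_dmx.shape[0])
    d_dmx = x_dmx * (1j ** steps)                       # decorrelated D_DMX
    a = np.sqrt((1 + phi) / 2)
    b = np.sqrt((1 - phi) / 2)
    return a * x_dmx + b * d_dmx, a * x_dmx - b * d_dmx  # Y_Lp, Y_Rp
```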
- The embodiments of FIGS. 15 and 16 assume that the multichannel input signal is received as subband signals in the QMF domain. Accordingly, in the embodiments of FIGS. 15 and 16, the tap-delay line filter and the one-tap-delay line filter may perform processing for each QMF subband. In addition, QTDL processing may be performed only on the input signals of the high frequency band, classified based on a predetermined constant or a predetermined frequency band as described above. If SBR (Spectral Band Replication) is applied to the input audio signal, the high frequency band may correspond to the SBR band. In FIGS. 15 and 16, detailed descriptions of parts overlapping with those of the previous drawings are omitted.
- SBR: Spectral Band Replication
- In SBR, the high frequency band is generated using information of the low frequency band, which is encoded and transmitted, together with additional information about the high frequency band signal transmitted by the encoder.
- The SBR band is a high frequency band, and as described above, the reverberation time of this frequency band is very short. That is, the BRIR subband filters of the SBR band carry little valid information and decay quickly. Therefore, rendering the high frequency band corresponding to the SBR band with a small number of taps rather than a full convolution can be very effective in terms of computation relative to the loss in sound quality.
- The QTDL processing unit 250A performs subband filtering on the multichannel input signals X0, X1, ..., X_M-1 using tap-delay line filters.
- The tap-delay line filter performs convolution with only a small number of preset taps for each channel signal. The number of taps used may be determined based on parameters extracted directly from the BRIR subband filter coefficients corresponding to the subband signal in question.
- the parameter includes delay information for each tap to be used in the tap-delay line filter and gain information corresponding thereto.
- the number of taps used in the tap-delay line filter can be determined by complexity-quality control.
- the QTDL processing unit 250A receives, from the BRIR parameterization unit, a set of parameters (gain information and delay information) corresponding to the number of taps for each channel and subband based on the predetermined number of taps.
- The received parameter set is extracted from the BRIR subband filter coefficients corresponding to the subband signal in question, and may be determined according to various embodiments. For example, a parameter set may be received for each of the peaks selected, as many as the predetermined number of taps, from the peaks of the corresponding BRIR subband filter coefficients in order of absolute value, real-part magnitude, or imaginary-part magnitude.
- the delay information of each parameter represents position information of a corresponding peak, and has an integer value of a sample unit in the QMF domain.
- the gain information is determined based on the magnitude of the peak corresponding to the delay information.
- As the gain information, the weighted value of the corresponding peak after energy compensation of the entire subband filter coefficients may be used. The gain information is obtained using both the real-part weight and the imaginary-part weight of the corresponding peak, and thus has a complex value.
- the plurality of channel signals filtered by the tap-delay line filter are summed into two channel left and right output signals Y_L and Y_R for each subband.
- parameters used in each tap-delay line filter of the QTDL processing unit 250A may be stored in a memory during initialization of binaural rendering, and QTDL processing may be performed without additional calculation for parameter extraction.
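- A hedged sketch of tap-delay line filtering for one channel of one subband follows. It selects the largest-magnitude peaks of the BRIR subband filter as (delay, complex gain) pairs and applies only those taps instead of a full convolution; peak selection by absolute value is one of the orderings mentioned above, and all names are illustrative.

```python
import numpy as np

def extract_taps(brir_sb, n_taps):
    """Pick the n_taps largest-magnitude coefficients as (delay, gain) pairs."""
    idx = np.argsort(np.abs(brir_sb))[-n_taps:]
    return [(int(d), brir_sb[d]) for d in idx]

def tap_delay_line(x, taps):
    """Sum delayed, complex-gain-scaled copies of x (delays < len(x) assumed)."""
    y = np.zeros(len(x), dtype=complex)
    for delay, gain in taps:
        if delay < len(x):
            y[delay:] += gain * x[:len(x) - delay]
    return y
```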
- The QTDL processing unit 250B performs subband filtering on the multichannel input signals X0, X1, ..., X_M-1 using one-tap-delay line filters.
- The one-tap-delay line filter can be understood as performing convolution with only one tap for each channel signal.
- the tap used may be determined based on a parameter directly extracted from a BRIR subband filter coefficient corresponding to the corresponding subband signal.
- the parameter includes delay information extracted from the BRIR subband filter coefficients and corresponding gain information.
- L_0, L_1, ..., L_M-1 represent the BRIR delays from the M channels to the left ear, respectively, and R_0, R_1, ..., R_M-1 represent the BRIR delays from the M channels to the right ear, respectively.
- The delay information indicates the position of the maximum peak of the corresponding BRIR subband filter coefficients, in terms of absolute value, real-part value, or imaginary-part value.
- G_L_0, G_L_1, ..., G_L_M-1 represent the gains corresponding to the delay information of the left channel, and G_R_0, G_R_1, ..., G_R_M-1 represent the gains corresponding to the delay information of the right channel, respectively.
- each gain information is determined based on the magnitude of the peak corresponding to the corresponding delay information.
- As each gain, the weighted value of the corresponding peak after energy compensation of the entire subband filter coefficients may be used. The gain information is obtained using both the real-part weight and the imaginary-part weight of the corresponding peak, and thus has a complex value.
- The plurality of channel signals filtered by the one-tap-delay line filters are summed into the two-channel left and right output signals Y_L and Y_R for each subband.
- In addition, the parameters used in each one-tap-delay line filter of the QTDL processing unit 250B may be stored in memory during the initialization of binaural rendering, so that QTDL processing can be performed without additional operations for parameter extraction.
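- A hedged sketch of the one-tap variant and its two-channel summation follows, assuming per-channel delay arrays and complex gains as described above, equal-length channel signals, and delays smaller than the output length; all names are illustrative.

```python
import numpy as np

def qtdl_one_tap(x_ch, delays_l, gains_l, delays_r, gains_r, n_out):
    """One delayed, scaled copy per channel and ear, summed per subband."""
    y_l = np.zeros(n_out, dtype=complex)
    y_r = np.zeros(n_out, dtype=complex)
    for x, dl, gl, dr, gr in zip(x_ch, delays_l, gains_l, delays_r, gains_r):
        y_l[dl:dl + len(x)] += gl * x[:n_out - dl]
        y_r[dr:dr + len(x)] += gr * x[:n_out - dr]
    return y_l, y_r
```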
- the present invention can be applied to a multimedia signal processing apparatus including various types of audio signal processing apparatuses and video signal processing apparatuses.
Claims (12)
- 1. A method for processing an audio signal, comprising: receiving a multi-audio signal including a multichannel or multi-object signal, wherein the multi-audio signal includes a plurality of subband signals, and the plurality of subband signals include signals of a first subband group of low frequencies and signals of a second subband group of high frequencies with respect to a preset frequency band; receiving at least one parameter corresponding to each subband signal of the second subband group, wherein the at least one parameter is extracted from BRIR (Binaural Room Impulse Response) subband filter coefficients corresponding to each subband signal of the second subband group; and performing tap-delay line filtering on the subband signals of the second subband group using the received parameter.
- 2. The method of claim 1, wherein the parameter includes one piece of delay information for the corresponding BRIR subband filter coefficients and one piece of gain information corresponding to the delay information.
- 3. The method of claim 2, wherein the delay information indicates position information of the maximum peak of the BRIR subband filter coefficients.
- 4. The method of claim 2, wherein the delay information has an integer value in units of samples in the QMF domain.
- 5. The method of claim 2, wherein the gain information has a complex value.
- 6. The method of claim 1, further comprising: summing the filtered multi-audio signals into two-channel left and right subband signals for each subband; combining the summed left and right subband signals with the left and right subband signals generated from the multi-audio signal of the first subband group; and QMF-synthesizing each of the combined left and right subband signals.
- 7. An audio signal processing apparatus for performing binaural rendering of a multi-audio signal including a multichannel or multi-object signal, wherein the multi-audio signal includes a plurality of subband signals, and the plurality of subband signals include signals of a first subband group of low frequencies and signals of a second subband group of high frequencies with respect to a preset frequency band, the apparatus comprising: a fast convolution unit for performing rendering of each subband signal of the first subband group; and a tap-delay line processing unit for performing rendering of each subband signal of the second subband group, wherein the tap-delay line processing unit receives at least one parameter corresponding to each subband signal of the second subband group, the at least one parameter being extracted from BRIR (Binaural Room Impulse Response) subband filter coefficients corresponding to each subband signal of the second subband group, and performs tap-delay line filtering on the subband signals of the second subband group using the received parameter.
- 8. The apparatus of claim 7, wherein the parameter includes one piece of delay information for the corresponding BRIR subband filter coefficients and one piece of gain information corresponding to the delay information.
- 9. The apparatus of claim 8, wherein the delay information indicates position information of the maximum peak of the BRIR subband filter coefficients.
- 10. The apparatus of claim 8, wherein the delay information has an integer value in units of samples in the QMF domain.
- 11. The apparatus of claim 8, wherein the gain information has a complex value.
- 12. The apparatus of claim 7, wherein the tap-delay line processing unit sums the filtered multi-audio signals into two-channel left and right subband signals for each subband, and the audio signal processing apparatus further comprises: a mixer for combining the summed left and right subband signals with the left and right subband signals generated from the multi-audio signal of the first subband group; and a QMF synthesis unit for QMF-synthesizing each of the combined left and right subband signals.