WO2015199508A1 - Method, apparatus, and computer-readable recording medium for rendering an acoustic signal
- Publication number
- WO2015199508A1 (PCT/KR2015/006601)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- altitude
- channel
- rendering
- panning
- channels
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/308—Electronic adaptation dependent on speaker or headphone connection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/01—Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/03—Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/13—Aspects of volume control, not necessarily automatic, in stereophonic sound systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/05—Application of the precedence or Haas effect, i.e. the effect of first wavefront, in order to improve sound-source localisation
Definitions
- the present invention relates to a method and apparatus for rendering an acoustic signal, and more particularly, to a rendering method and apparatus that reproduce the position and timbre of a sound image more accurately by modifying an altitude panning coefficient or an altitude filter coefficient when the altitude of an input channel is higher or lower than the altitude defined by a standard layout.
- Stereophonic sound means sound to which spatial information is added so that not only the pitch and timbre of the sound but also a sense of direction and distance is reproduced, giving a sense of presence and allowing a listener who is not located in the space where the sound source was generated to perceive direction, distance, and a sense of space.
- when a multi-channel signal such as a 22.2-channel signal is rendered to 5.1 channels, a three-dimensional sound signal can be reproduced using two-dimensional output channels. However, when the elevation angle of an input channel differs from the reference elevation angle and the input signal is rendered using rendering parameters determined for the reference elevation angle, sound distortion occurs.
- the present invention addresses the problems of the prior art described above; its object is to reduce distortion of the sound image even when the altitude of an input channel is higher or lower than the reference altitude.
- a method of rendering an acoustic signal, the method including: receiving a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; adding a predetermined delay to a front height input channel such that each output channel provides a sound image at a reference altitude; modifying an altitude rendering parameter for the front height input channel based on the added delay; and generating a delayed, altitude-rendered surround output channel for the front height input channel based on the modified altitude rendering parameter, thereby preventing front-back confusion.
- the plurality of output channels are horizontal channels.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the front height channel includes at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045 and CH_U_000 channels.
- the surround output channel includes at least one of CH_M_L110 and CH_M_R110.
- the predetermined delay is determined based on the sampling rate.
- an apparatus for rendering an acoustic signal, the apparatus including: a receiver configured to receive a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; a rendering unit configured to add a predetermined delay to a front height input channel such that each output channel provides a sound image at a reference altitude angle, and to modify an altitude rendering parameter for the front height input channel based on the added delay; and an output unit configured to generate a delayed, altitude-rendered surround output channel for the front height input channel based on the modified altitude rendering parameter, thereby preventing front-back confusion.
- the plurality of output channels are horizontal channels.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the front height input channel includes at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045 and CH_U_000 channels.
- the front height channel includes at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045 and CH_U_000 channels.
- the predetermined delay is determined based on the sampling rate.
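The claims above tie the predetermined delay only to the sampling rate. As a non-limiting sketch of how a fixed delay time (the 5 ms used below is a hypothetical value, not taken from the patent) might be converted into samples and applied to a front height input channel:

```python
def delay_in_samples(delay_seconds, sampling_rate_hz):
    """Convert a fixed time delay into a whole number of samples."""
    return round(delay_seconds * sampling_rate_hz)

def apply_delay(signal, n_samples):
    """Delay a channel by prepending zeros; truncate to keep the length."""
    if n_samples <= 0:
        return list(signal)
    return [0.0] * n_samples + list(signal)[: len(signal) - n_samples]
```

For example, a 5 ms delay corresponds to 240 samples at a 48 kHz sampling rate but to a different sample count at 44.1 kHz, which is why the predetermined delay must depend on the sampling rate.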
- a method of rendering an acoustic signal, the method including: receiving a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; obtaining an altitude rendering parameter for a height input channel such that each output channel provides a sound image at a reference altitude angle; and updating the altitude rendering parameter for a height input channel having a predetermined altitude angle other than the reference altitude angle, wherein updating the altitude rendering parameter includes updating a panning gain for panning a top front center height input channel to a surround output channel.
- the plurality of output channels is a horizontal channel.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the updating of the altitude rendering parameter includes updating the panning gain based on the reference altitude angle and the predetermined altitude angle.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined altitude angle is greater than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains to be applied to each input channel is 1.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to the output channel having the predetermined altitude angle is less than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains is 1.
- an apparatus for rendering an acoustic signal, the apparatus including: a receiver configured to receive a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; and a rendering unit configured to obtain an altitude rendering parameter for a height input channel so that each output channel provides a sound image at a reference altitude angle, and to update the altitude rendering parameter for a height input channel having a predetermined altitude angle other than the reference altitude angle.
- the updated elevation rendering parameter includes a panning gain for panning a height input channel of a top front center to a surround output channel.
- the plurality of output channels is a horizontal channel.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the updated altitude rendering parameter includes an updated panning gain based on the reference elevation angle and the predetermined elevation angle.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined altitude angle is greater than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains to be applied to each input channel is 1.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to the output channel having the predetermined altitude angle is less than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains is 1.
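The gain-update claims above can be illustrated with a small non-limiting sketch. The boost factor and the channel names are hypothetical; the only properties taken from the text are that the ipsilateral gain grows relative to its pre-update value and that the updated gains are renormalized so that the sum of their squares is 1:

```python
import math

def update_altitude_panning_gains(gains, ipsilateral, boost):
    """gains: {output_channel: gain}. Raise the gains of the ipsilateral
    channels by a hypothetical boost factor (> 1), then renormalize so
    that the sum of the squares of all gains is 1 (power preservation)."""
    updated = {ch: g * boost if ch in ipsilateral else g
               for ch, g in gains.items()}
    norm = math.sqrt(sum(g * g for g in updated.values()))
    return {ch: g / norm for ch, g in updated.items()}
```

After the update, the ipsilateral gain is larger and the other gains are smaller than before, while the total panning power stays constant.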
- a method of rendering an acoustic signal, the method including: receiving a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; obtaining an altitude rendering parameter for a height input channel such that each output channel provides a sound image at a reference altitude angle; and updating the altitude rendering parameter for a height input channel having a predetermined altitude angle other than the reference altitude angle, wherein updating the altitude rendering parameter includes obtaining an updated panning gain for a frequency range including a low frequency band, based on the position of the height input channel.
- the updated panning gain is the panning gain for the rear height input channel.
- the plurality of output channels is a horizontal channel.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the updating of the altitude rendering parameter may include applying a weight to an altitude filter coefficient based on the reference altitude angle and the predetermined altitude angle.
- the weight is determined such that the altitude filter characteristic is applied more weakly when the predetermined elevation angle is smaller than the reference elevation angle, and more strongly when the predetermined elevation angle is larger than the reference elevation angle.
- the updating of the altitude rendering parameter includes updating the panning gain based on the reference altitude angle and the predetermined altitude angle.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined altitude angle is greater than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains to be applied to each input channel is 1.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to the output channel having the predetermined altitude angle is less than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains is 1.
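A non-limiting sketch of the filter weighting described in the claims above, assuming the altitude filter is represented by a per-frequency magnitude response and that the weight is the simple ratio of the two angles (an assumed rule; the patent does not fix the formula). Weights below 1 pull the response toward flat (a weaker characteristic); weights above 1 push it away from flat (a stronger one):

```python
def weight_altitude_filter(magnitudes, elev_deg, ref_elev_deg):
    """Blend an altitude-filter magnitude response toward or away from
    flat (1.0). The weight elev_deg / ref_elev_deg is an assumption:
    elevations below the reference weaken the filter characteristic,
    elevations above it strengthen the characteristic."""
    w = elev_deg / ref_elev_deg
    return [1.0 + w * (m - 1.0) for m in magnitudes]
```

With a reference angle of 35 degrees, an input at 17.5 degrees halves each deviation from flat, while an input at 70 degrees doubles it.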
- an apparatus for rendering an acoustic signal, the apparatus including: a receiver configured to receive a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; and a rendering unit configured to obtain an altitude rendering parameter for a height input channel so that each output channel provides a sound image at a reference altitude angle, and to update the altitude rendering parameter for a height input channel having a predetermined altitude angle other than the reference altitude angle.
- the updated altitude rendering parameter includes a panning gain updated for a frequency range including a low frequency band, based on the position of the height input channel.
- the updated panning gain is the panning gain for the rear height input channel.
- the plurality of output channels is a horizontal channel.
- the altitude rendering parameter includes at least one of a panning gain and an altitude filter coefficient.
- the updated altitude rendering parameter includes a weighted altitude filter coefficient based on the reference altitude angle and the predetermined altitude angle.
- the weight is determined such that the altitude filter characteristic is applied more weakly when the predetermined elevation angle is smaller than the reference elevation angle, and more strongly when the predetermined elevation angle is larger than the reference elevation angle.
- the updated altitude rendering parameter includes an updated panning gain based on the reference elevation angle and the predetermined elevation angle.
- an updated altitude panning gain to be applied to an output channel that is ipsilateral to an output channel having a predetermined altitude angle among the updated altitude panning gains is one.
- among the updated altitude panning gains, the updated altitude panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined altitude angle is less than the altitude panning gain before the update, and the sum of the squares of the updated altitude panning gains to be applied to each input channel is 1.
- to implement the present invention, there are further provided another method, another system, and a computer-readable recording medium having recorded thereon a computer program for executing the method.
- according to the present invention, even when the altitude of an input channel is higher or lower than the reference altitude, a stereoscopic signal can be rendered so that distortion of the sound image is reduced. Further, according to the present invention, front-back confusion caused by the surround output channel can be prevented.
- FIG. 1 is a block diagram illustrating an internal structure of a 3D sound reproducing apparatus according to an exemplary embodiment.
- FIG. 2 is a block diagram illustrating a structure of a renderer among the structures of a 3D sound reproducing apparatus according to an exemplary embodiment.
- FIG. 3 is a diagram illustrating a layout of each channel when a plurality of input channels are downmixed into a plurality of output channels according to an exemplary embodiment.
- FIG. 4 is a diagram illustrating a panning unit according to an embodiment when there is a positional deviation between a standard layout and an installation layout of an output channel.
- FIG. 5 is a block diagram illustrating a configuration of a decoder and a stereo sound renderer among the configurations of a stereoscopic sound reproducing apparatus according to an embodiment.
- FIGS. 6 to 8 illustrate layouts of an upper layer according to the elevation of the upper layer in a channel layout, according to an embodiment.
- FIGS. 9 to 11 are diagrams illustrating changes in a sound image and an altitude filter according to the altitude of a channel, according to an embodiment.
- FIG. 12 is a flowchart of a method of rendering a stereo sound signal, according to an embodiment.
- FIG. 13 is a diagram illustrating a phenomenon in which left and right sound images are reversed when an elevation angle of an input channel is greater than or equal to a threshold according to an embodiment.
- FIG. 14 illustrates a horizontal channel and a front height channel according to one embodiment.
- FIG. 15 illustrates a recognition probability of a front height channel according to an embodiment.
- FIG. 16 is a flowchart of a method of preventing front-back confusion, according to an embodiment.
- FIG. 17 illustrates a horizontal channel and a front height channel with delay added to the surround output channel according to one embodiment.
- FIG. 18 illustrates a horizontal channel and a top front center (TFC) channel according to one embodiment.
- a method of rendering an acoustic signal, the method including: receiving a multichannel signal including a plurality of input channels to be converted into a plurality of output channels; adding a predetermined delay to a front height input channel such that each output channel provides a sound image at a reference altitude; modifying an altitude rendering parameter for the front height input channel based on the added delay; and generating a delayed, altitude-rendered surround output channel for the front height input channel based on the modified altitude rendering parameter, thereby preventing front-back confusion.
- FIG. 1 is a block diagram illustrating an internal structure of a 3D sound reproducing apparatus according to an exemplary embodiment.
- the stereoscopic sound reproducing apparatus 100 may output a multi-channel sound signal in which a plurality of input channels are mixed into a plurality of output channels for reproduction. At this time, if the number of output channels is smaller than the number of input channels, the input channels are downmixed to match the number of output channels.
- the output channel of the sound signal may refer to the number of speakers from which sound is output. As the number of output channels increases, the number of speakers for outputting sound may increase.
- the stereoscopic sound reproducing apparatus 100 may render and mix a multichannel sound input signal into the output channels to be reproduced, so that a multichannel sound signal having a large number of input channels can be output and reproduced in an environment having a small number of output channels.
- the multi-channel sound signal may include a channel capable of outputting elevated sound.
- the channel capable of outputting altitude sound may refer to a channel capable of outputting an acoustic signal through a speaker located above the head of the listener to feel the altitude.
- the horizontal channel may refer to a channel capable of outputting a sound signal through a speaker positioned on a horizontal plane with the listener.
- the environment in which the number of output channels described above is small may mean an environment in which sound is output through speakers disposed on a horizontal plane, without an output channel capable of outputting altitude sound.
- a horizontal channel may refer to a channel including a sound signal that may be output through a speaker disposed on the horizontal plane.
- the overhead channel may refer to a channel including an acoustic signal that may be output through a speaker that is disposed on an altitude rather than a horizontal plane and may output altitude sound.
- the stereo sound reproducing apparatus 100 may include an audio core 110, a renderer 120, a mixer 130, and a post processor 140.
- the 3D sound reproducing apparatus 100 may render a multi-channel input sound signal, mix it, and output the mixed channel to an output channel to be reproduced.
- the multi-channel input sound signal may be a 22.2 channel signal
- the output channel to be reproduced may be 5.1 or 7.1 channel.
- the 3D sound reproducing apparatus 100 may perform rendering by determining an output channel to correspond to each channel of the multichannel input sound signal, and may mix the rendered audio signals by combining the signals of the channels corresponding to each channel to be reproduced and outputting the final signal.
- the encoded sound signal is input to the audio core 110 in the form of a bitstream, and the audio core 110 selects a decoder tool suitable for the manner in which the sound signal is encoded, and decodes the input sound signal.
- the renderer 120 may render the multichannel input sound signal into a multichannel output channel according to a channel and a frequency.
- the renderer 120 may render the overhead channels and the horizontal channels of the multichannel sound signal by 3D (three-dimensional) rendering and 2D (two-dimensional) rendering, respectively.
- the structure of the renderer and a detailed rendering method will be described in more detail later with reference to FIG. 2.
- the mixer 130 may combine the signals of the channels corresponding to the horizontal channel by the renderer 120 and output the final signal.
- the mixer 130 may mix signals of each channel for each predetermined section. For example, the mixer 130 may mix signals of each channel for each frame.
- the mixer 130 may mix based on power values of signals rendered in respective channels to be reproduced.
- the mixer 130 may determine the amplitude of the final signal or the gain to be applied to the final signal based on the power values of the signals rendered in the respective channels to be reproduced.
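One plausible reading of the power-based mixing described above, sketched under the assumption (not fixed by the patent) that the final gain is chosen so that the output power of a frame matches the summed power of the rendered channel signals:

```python
import math

def mix_frame(rendered):
    """rendered: list of equal-length per-channel sample lists for one
    frame. Sum them, then scale the sum so that the output power equals
    the summed power of the rendered signals (assumed mixing rule)."""
    n = len(rendered[0])
    summed = [sum(ch[i] for ch in rendered) for i in range(n)]
    target = sum(sum(s * s for s in ch) for ch in rendered)
    actual = sum(s * s for s in summed)
    gain = math.sqrt(target / actual) if actual > 0.0 else 0.0
    return [gain * s for s in summed]
```

This kind of rule compensates for the constructive or destructive interference that occurs when correlated channel signals are added together.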
- the post processor 140 adjusts the output signal of the mixer 130 for each playback device (such as a speaker or headphones), and performs dynamic range control and binauralization on the multiband signal.
- the output sound signal output from the post processor 140 is output through a device such as a speaker, and the output sound signal may be reproduced in 2D or 3D according to the processing of each component.
- the stereoscopic sound reproducing apparatus 100 according to the exemplary embodiment illustrated in FIG. 1 is illustrated based on the configuration of an audio decoder, and an additional configuration is omitted.
- FIG. 2 is a block diagram illustrating a structure of a renderer among the structures of a 3D sound reproducing apparatus according to an exemplary embodiment.
- the renderer 120 includes a filtering unit 121 and a panning unit 123.
- the filtering unit 121 may correct the tone or the like according to the position of the decoded sound signal and may filter the input sound signal by using a HRTF (Head-Related Transfer Function) filter.
- the filtering unit 121 may render the overhead channel that has passed through the HRTF filter in different ways depending on the frequency, in order to render the overhead channel in 3D.
- an HRTF filter enables 3D sound to be recognized not only through simple path differences, such as the interaural level difference (ILD) and the interaural time difference (ITD) between the two ears, but also through the phenomenon in which characteristics of complicated paths, such as reflection, change according to the direction of sound arrival.
- the HRTF filter may process acoustic signals included in the overhead channel so that stereoscopic sound may be recognized by changing sound quality of the acoustic signal.
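The binaural cues mentioned above (ILD and ITD) can be sketched crudely as follows; the delay and attenuation values are illustrative assumptions only, not HRTF data from the patent:

```python
def apply_ild_itd(mono, itd_samples, ild_db):
    """Produce a near-ear / far-ear signal pair from a mono signal: the
    far-ear signal is delayed by itd_samples and attenuated by ild_db
    decibels, mimicking interaural time and level differences."""
    gain_far = 10.0 ** (-ild_db / 20.0)
    near = list(mono)
    far = ([0.0] * itd_samples + [gain_far * s for s in mono])[: len(mono)]
    return near, far
```

A real HRTF additionally encodes direction-dependent spectral coloration, which is what the altitude filter in this document models.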
- the panning unit 123 obtains and applies a panning coefficient to be applied for each frequency band and each channel in order to pan the input sound signal for each output channel.
- Panning the sound signal means controlling the magnitude of a signal applied to each output channel to render a sound source at a specific position between two output channels.
- the panning coefficient can be used interchangeably with the term panning gain.
- the panning unit 123 may render a low frequency signal among the overhead channel signals according to an add-to-closest-channel method, and a high frequency signal according to a multichannel panning method.
- according to the multichannel panning method, a gain value set differently for each channel to be rendered may be applied to each channel signal of the multichannel sound signal, so that each signal is rendered to at least one horizontal channel.
- the signals of each channel to which the gain value is applied may be summed through mixing to be output as the final signal.
- the add-to-closest-channel method, by contrast, does not render each channel of the multichannel sound signal separately over several channels, but renders it to only one channel, so that the listener can perceive a sound quality similar to that of the original signal.
- the stereoscopic sound reproducing apparatus 100 may render a low frequency signal according to the add-to-closest-channel method to prevent the sound quality deterioration that can occur when several channels are mixed into one output channel. That is, when several channels are mixed into one output channel, the sound may be amplified or attenuated by interference between the channel signals, and the sound quality deteriorates; this deterioration can be prevented by mixing only one channel into one output channel.
- each channel of the multichannel sound signal may be rendered to the nearest channel among channels to be reproduced instead of being divided into several channels.
- the stereo sound reproducing apparatus 100 may widen the sweet spot without deteriorating the sound quality by performing rendering in different ways according to frequency. That is, by rendering the low frequency signal, which has strong diffraction characteristics, according to the add-to-closest-channel method, the sound quality degradation that can occur when several channels are mixed into one output channel is prevented.
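A non-limiting sketch of this frequency-dependent rendering: a moving-average lowpass stands in for the band split (a real system would use proper filter banks; the tap count is an assumption), the low band is sent only to the closest output channel (add-to-closest-channel), and the residual high band is panned with per-channel gains (multichannel panning):

```python
def render_band_split(signal, gains, closest, cutoff_taps=8):
    """Split one overhead-channel signal into a low band (moving-average
    lowpass) and the residual high band. The low band goes only to the
    closest output channel; the high band is panned with per-channel
    gains. Channel names and gain values are supplied by the caller."""
    n = len(signal)
    low = [sum(signal[max(0, i - cutoff_taps + 1): i + 1])
           / min(cutoff_taps, i + 1) for i in range(n)]
    high = [signal[i] - low[i] for i in range(n)]
    out = {ch: [g * s for s in high] for ch, g in gains.items()}
    base = out.get(closest, [0.0] * n)
    out[closest] = [b + l for b, l in zip(base, low)]
    return out
```

For a purely low-frequency (constant) input, all energy lands on the closest channel, which is exactly the add-to-closest behavior described above.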
- the sweet spot refers to a predetermined range in which a listener can optimally listen to an undistorted stereoscopic sound.
- when the sweet spot is wide, the listener can optimally listen to undistorted stereoscopic sound over a wide range; when the listener is not located at the sweet spot, the sound quality or the sound image may be distorted.
- FIG. 3 is a diagram illustrating a layout of each channel when a plurality of input channels are downmixed into a plurality of output channels according to an exemplary embodiment.
- stereoscopic sound refers to sound in which the sound signal itself conveys a sense of height and space, and at least two loudspeakers, that is, output channels, are required to reproduce the stereoscopic sound.
- a larger number of output channels is required to reproduce the height, depth, and spatial sense of sound more accurately.
- FIG. 3 is a diagram for explaining a case of reproducing a 22.2 channel stereoscopic signal to a 5.1 channel output system.
- the 5.1-channel system is the generic name for five-channel surround multichannel sound systems and is the most commonly used system for home theaters and cinema sound systems. The 5.1 channels include an FL (Front Left) channel, a C (Center) channel, an FR (Front Right) channel, an SL (Surround Left) channel, and an SR (Surround Right) channel. As can be seen in FIG. 3, since the outputs of the 5.1 channels all lie on the same plane, the system is physically equivalent to a two-dimensional system; to reproduce a three-dimensional sound signal with such a system, a rendering process for giving the sound a three-dimensional effect must be performed.
- 5.1-channel systems are widely used in a variety of applications, from movies to DVD video, DVD sound, Super Audio Compact Disc (SACD) or digital broadcast.
- although the 5.1-channel system provides an improved sense of space compared to a stereo system, there are various limitations in forming a wide listening space.
- since the sweet spot is narrow and a vertical sound image having an elevation angle cannot be provided, the system may not be suitable for a large listening space such as a theater.
- the NHK's proposed 22.2 channel system consists of three layers of output channels.
- the upper layer 310 includes the Voice of God (VOG), T0, T180, TL45, TL90, TL135, TR45, TR90, and TR135 channels.
- the leading T in each channel name denotes the upper layer
- the letter L or R denotes the left or right side, respectively
- the upper layer is often called the top layer.
- the VOG channel exists above the listener's head and has an altitude of 90 degrees and no azimuth. However, if the position deviates slightly so that the channel acquires an azimuth and its elevation angle is no longer 90 degrees, it may no longer serve as a VOG channel.
- the middle layer 320 is in the same plane as the existing 5.1 channel and includes ML60, ML90, ML135, MR60, MR90, and MR135 channels in addition to the 5.1 channel output channel.
- the leading M in each channel name denotes the middle layer
- the number that follows the L or R denotes the azimuth angle from the center channel.
- the low layer 330 includes L0, LL45, and LR45 channels.
- the leading L in each channel name denotes the low layer, and the number that follows denotes the azimuth angle from the center channel.
- the middle layer is also called the horizontal channel
- the VOG, T0, T180, M180, L0, and C channels, corresponding to an azimuth of 0 degrees or 180 degrees, are called vertical channels.
- FIG. 4 is a diagram illustrating a panning unit according to an embodiment when there is a positional deviation between a standard layout and an installation layout of an output channel.
- the original sound field may be distorted, and various techniques have been studied to correct such distortion.
- Common rendering techniques are designed to perform rendering based on speakers, i.e., output channels installed in a standard layout. However, when the output channel is not installed to exactly match the standard layout, distortion of the sound image position and distortion of the timbre occur.
- distortion of the sound image includes level distortion and phase-angle distortion, but listeners are not very sensitive to such distortion when its level is low.
- due to the physical characteristics of the two human ears located on the left and right sides, image distortion is perceived more sensitively when the left-center-right balance of the sound image changes.
- the frontal sound image is perceived especially sensitively.
- therefore, particular attention should be paid so that channels positioned at 0 degrees or 180 degrees of azimuth, such as the VOG, T0, T180, M180, L0, and C channels, are not distorted, even more so than the channels on the left and right.
- the first step is to calculate the panning coefficient of the input multi-channel signal according to the standard layout of the output channel, which corresponds to an initialization process.
- the second step is to modify the calculated coefficients based on the layout in which the output channels are actually installed.
- the sound image of the output signal may be present at a more accurate position.
- the panning unit 123 needs information about an installation layout of the output channel and a standard layout of the output channel.
- the audio input signal refers to an input signal to be reproduced through the C channel
- the audio output signal refers to a modified panning signal output from the L and R channels according to the installation layout.
- the two-dimensional panning method, which considers only azimuth deviation, does not compensate for the effect of altitude deviation when there is an elevation deviation between the standard layout and the installation layout of the output channel. Therefore, if such an altitude deviation exists, the altitude-rise effect caused by it must be corrected through the altitude effect correction unit 124, as shown in FIG. 4.
- FIG. 5 is a block diagram illustrating a configuration of a decoder and a stereo sound renderer among the configurations of a stereoscopic sound reproducing apparatus according to an embodiment.
- the stereoscopic sound reproducing apparatus 100 is illustrated based on the configuration of the decoder 110 and the stereoscopic sound renderer 120, and other components are omitted.
- the sound signal input to the 3D sound reproducing apparatus is an encoded signal and is input in the form of a bitstream.
- the decoder 110 decodes the input sound signal by selecting a decoder tool suitable for the method in which the sound signal is encoded, and transmits the decoded sound signal to the 3D sound renderer 120.
- the stereoscopic renderer 120 includes an initialization unit 125 for obtaining and updating filter coefficients and panning coefficients, and a rendering unit 127 for performing filtering and panning.
- the renderer 127 performs filtering and panning on the acoustic signal transmitted from the decoder.
- the filtering unit 1271 processes information on the position of the sound so that the rendered sound signal may be reproduced at a desired position
- the panning unit 1272 processes information on the tone of the sound, so that the rendered sound signal has a tone appropriate for the desired position.
- the filtering unit 1271 and the panning unit 1272 perform functions similar to those of the filtering unit 121 and the panning unit 123 described with reference to FIG. 2. However, it should be noted that the filtering unit 121 and the panning unit 123 of FIG. 2 are depicted in simplified form, so components such as an initialization unit for obtaining filter coefficients and panning coefficients are omitted there.
- the initialization unit 125 is composed of an altitude rendering parameter obtaining unit 1251 and an altitude rendering parameter updating unit 1252.
- the altitude rendering parameter obtainer 1251 obtains an initial value of the altitude rendering parameter by using a configuration and arrangement of an output channel, that is, a loudspeaker.
- the initial value of the altitude rendering parameter is calculated based on the configuration of the output channel according to the standard layout and the configuration of the input channel according to the altitude rendering setting, or a stored initial value is read according to the mapping relationship between the input and output channels.
- the altitude rendering parameter may include a filter coefficient for use in the filtering unit 1271 or a panning coefficient for use in the panning unit 1272.
- the altitude setting value for altitude rendering may be different from the setting of the input channel.
- using a fixed altitude setting value makes it difficult to achieve the purpose of virtual rendering in which the original input stereo signal is reproduced three-dimensionally more similarly through an output channel having a different configuration from the input channel.
- for example, if the applied sense of altitude is too high, the sound image becomes small and the sound quality deteriorates; if it is too low, it may be difficult to feel the effect of the virtual rendering. Therefore, the sense of altitude needs to be adjusted according to the user's setting or to a degree of virtual rendering suitable for the input channel.
- the altitude rendering parameter updater 1252 updates the altitude rendering parameters from the initial values acquired by the altitude rendering parameter obtainer 1251, based on the altitude information of the input channel or a user-set altitude. At this time, if the speaker layout of the output channel deviates from the standard layout, a process for correcting that influence may be added. In this case, the deviation of the output channel may include deviation information according to an altitude or azimuth difference.
- the output sound signal filtered and panned by the renderer 127 using the altitude rendering parameters acquired and updated by the initializer 125 is reproduced through a speaker corresponding to each output channel.
- FIGS. 6 to 8 illustrate layouts of the upper layer according to its elevation in a channel layout, according to an embodiment.
- if the input channel signal is a 22.2-channel stereoscopic sound signal arranged according to the layout shown in FIG. 3, the upper layer of the input channels has the layouts shown in FIGS. 6 to 8 according to the elevation angle.
- the elevation angles are 0 degrees, 25 degrees, 35 degrees, and 45 degrees, respectively, and the VOG channel corresponding to the elevation angle of 90 degrees is omitted.
- an upper layer with an elevation of 0 degrees lies in the horizontal plane (the middle layer 320).
- FIG. 6 shows the channel arrangement when the upper channels are viewed from the front.
- FIG. 7 shows the channel arrangement when the upper channels are viewed from above.
- FIG. 8 shows the upper channel arrangement in three dimensions. It can be seen that the eight upper-layer channels are arranged at equal intervals, with an azimuth difference of 45 degrees between adjacent channels.
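The equal spacing of the top layer can be checked numerically. The sketch below is a minimal Python illustration (not part of the patent) that converts each channel's azimuth and elevation to listener-centred Cartesian coordinates; the unit radius, the axis convention, and the 35-degree elevation are illustrative assumptions.

```python
import math

def channel_position(azimuth_deg, elevation_deg, radius=1.0):
    """Convert an (azimuth, elevation) pair to listener-centred x/y/z.

    Convention (an assumption, not from the patent): x points to the
    listener's front, y to the left, z upward; azimuth is measured
    counter-clockwise from the front.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.cos(az)
    y = radius * math.cos(el) * math.sin(az)
    z = radius * math.sin(el)
    return (x, y, z)

# The eight top-layer channels sit at 45-degree azimuth intervals.
top_layer = {name: channel_position(az, 35.0)
             for name, az in [("T0", 0), ("TL45", 45), ("TL90", 90),
                              ("TL135", 135), ("T180", 180), ("TR135", -135),
                              ("TR90", -90), ("TR45", -45)]}
```

All eight channels share the same height z, which is what makes them a single "layer".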
- the elevation angle of the stereoscopic sound may be applied differently depending on the content, and, as shown in FIGS. 6 to 8, the position of and distance to each channel vary with the channel's altitude, so the rendering characteristics also differ.
- FIGS. 9 to 11 are diagrams illustrating changes in the sound image and the altitude filter according to the altitude of a channel, according to an embodiment.
- FIG. 9 shows the position of a channel when the elevation angle of the height channel is 0 degrees, 35 degrees, and 45 degrees, respectively.
- FIG. 9 is viewed from behind the listener, and the channels shown are the ML90 or TL90 channels. If the elevation angle is 0 degrees, the channel lies in the horizontal plane and corresponds to the ML90 channel; if the elevation angle is 35 degrees or 45 degrees, the upper-layer channel corresponds to the TL90 channel.
- FIG. 10 is a view for explaining a difference between signals felt by the listener's left and right ears when an acoustic signal is output in each channel positioned as shown in FIG. 9.
- a sound signal is output from the ML90 without an elevation angle, in principle the sound signal is recognized only in the left ear and not in the right ear.
- as the elevation angle of the channel increases, the difference between the sound perceived by the left ear and the sound perceived by the right ear gradually decreases; when the elevation angle reaches 90 degrees, the channel sits directly above the listener's head, that is, it becomes the VOG channel, and the same sound signal is perceived by both ears.
- when the elevation angle is 0 degrees, the interaural level difference (ILD) and the interaural time difference (ITD) are at their maximum, and the listener perceives the sound image of the ML90 channel at the left horizontal position.
- the difference between the acoustic signals perceived by the left and right ears thus changes as the elevation increases, and this difference allows the listener to feel a sense of altitude in the output acoustic signal.
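The shrinking interaural differences can be illustrated with a textbook spherical-head model. The sketch below is an assumption-laden approximation (a Woodworth-style ITD with an 8.75 cm head radius), not the patent's own model; it only demonstrates that the ITD falls to zero as the elevation angle approaches 90 degrees.

```python
import math

HEAD_RADIUS = 0.0875    # metres, a typical textbook value (assumption)
SPEED_OF_SOUND = 343.0  # m/s

def itd_for_elevation(elevation_deg, azimuth_deg=90.0):
    """Woodworth-style ITD estimate for a distant source.

    The effective lateral angle of a source shrinks with elevation:
    sin(lateral) = sin(azimuth) * cos(elevation).  This is a common
    spherical-head approximation, not a formula from the patent.
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    lateral = math.asin(max(-1.0, min(1.0, math.sin(az) * math.cos(el))))
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(lateral) + lateral)

itd_0  = itd_for_elevation(0.0)    # ML90: maximum ITD
itd_35 = itd_for_elevation(35.0)   # TL90 at 35 degrees
itd_45 = itd_for_elevation(45.0)   # TL90 at 45 degrees
itd_90 = itd_for_elevation(90.0)   # VOG: identical signal at both ears
```

The monotonic decrease of the ITD with elevation mirrors the behaviour described above for FIG. 10.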
- the output signal of a channel with an altitude of 35 degrees has a wider sound image and sweet spot and a more natural sound quality than the output signal of a channel with an altitude of 45 degrees.
- the output signal of a channel with an altitude of 45 degrees has a narrower sound image and a narrower sweet spot than that of a channel with an altitude of 35 degrees, but yields a sound field that provides strong immersion.
- the higher the altitude, the stronger the sense of elevation and immersion, but the narrower the sound image. This is because, as the elevation angle increases, the physical position of the channel gradually moves inward and eventually approaches the listener.
- the update of the panning coefficient according to the change of the altitude angle is determined as follows.
- the panning coefficient is updated to make the sound image wider as the altitude angle increases, and the panning coefficient is updated to narrow the sound image as the altitude angle decreases.
- to widen the sound image, the rendering panning coefficient applied to the output channels ipsilateral to the virtual channel being rendered is increased, and the panning coefficients applied to the remaining channels are determined through power normalization.
- the input channels of the 22.2 channels having the elevation angle, to which virtual rendering is applied are CH_U_000 (T0), CH_U_L45 (TL45), CH_U_R45 (TR45), CH_U_L90 (TL90), CH_U_R90 (TR90), and CH_U_L135 (TL135).
- N denotes the number of output channels for rendering an arbitrary virtual channel
- g_i denotes a panning coefficient to be applied to each output channel.
- This process must be performed for each height input channel respectively.
- to narrow the sound image, the rendering panning coefficient applied to the output channels ipsilateral to the virtual channel being rendered is reduced, and the panning coefficients applied to the remaining channels are determined through power normalization.
- the panning coefficient applied to the output channels CH_M_L030 and CH_M_L110 is reduced by 3 dB.
- N denotes the number of output channels for rendering an arbitrary virtual channel
- g_i denotes a panning coefficient to be applied to each output channel.
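The widen/narrow update followed by power normalization described above might be sketched as follows. The gain values and channel indices are illustrative; only the -3 dB figure comes from the text.

```python
import math

def update_panning(gains, ipsi_indices, delta_db):
    """Scale the ipsilateral panning coefficients by delta_db (positive to
    widen the sound image, negative to narrow it, e.g. -3 dB as in the
    text), then power-normalise so that sum(g_i ** 2) == 1.
    """
    g = list(gains)
    scale = 10.0 ** (delta_db / 20.0)
    for i in ipsi_indices:
        g[i] *= scale
    norm = math.sqrt(sum(x * x for x in g))
    return [x / norm for x in g]

# Narrow the image: reduce the share of the two ipsilateral output
# channels (e.g. CH_M_L030 and CH_M_L110) by 3 dB.  The four starting
# gains are placeholders, not values from the patent.
g = update_panning([0.6, 0.5, 0.4, 0.3], ipsi_indices=[0, 1], delta_db=-3.0)
```

After the update the squared gains again sum to one, so the energy of the output signal is preserved, as required by the normalization step.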
- FIG. 11 is a diagram illustrating characteristics of a tone filter according to frequency when an elevation angle of a channel is 35 degrees and an elevation angle is 45 degrees.
- the tone filter of the channel with an elevation angle of 45 degrees exhibits a stronger elevation-dependent magnitude characteristic than the tone filter of the channel with an elevation angle of 35 degrees.
- since the filter magnitude characteristic is expressed on a decibel scale, it is positive in frequency bands where the magnitude of the output signal should be increased and negative in bands where it should be reduced, as shown in FIG. 11.
- the lower the elevation angle the flatter the shape of the filter size appears.
- at low elevation angles, the tone is similar to that of a horizontal-channel signal, while the higher the elevation angle, the greater the tonal change conveying the sense of altitude; raising the elevation angle thus emphasizes the altitude effect. Conversely, as the altitude is lowered, the effect of the tone filter may be weakened to reduce the altitude effect.
- to update the filter coefficients according to a change in elevation angle, the original filter coefficients are updated using a weight based on the default elevation angle and the elevation angle actually to be rendered.
- for example, if the default elevation angle is 45 degrees but the elevation angle actually to be rendered is 35 degrees, the coefficients corresponding to the 45-degree filter of FIG. 11 must be updated to coefficients corresponding to the 35-degree filter. In that case, the filter coefficients must be updated so that both the peaks and the valleys of the frequency response are smoothed relative to the 45-degree filter; conversely, to render an elevation angle higher than 45 degrees, the filter coefficients must be updated so that the peaks and valleys are strengthened relative to the 45-degree filter.
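One simple way to realize such a weight-based update is to scale the filter's dB response by the ratio of the target elevation angle to the default. This particular weight is an illustrative assumption: the text specifies only the direction of the change (smoothed below the default elevation, strengthened above it), not the exact weighting.

```python
def update_filter_db(mag_db, default_elv_deg, target_elv_deg):
    """Scale the filter's per-band dB magnitude by a weight derived from
    the ratio of target to default elevation.  Ratios below one flatten
    the response (peaks and valleys shrink); ratios above one sharpen it.
    """
    w = target_elv_deg / default_elv_deg
    return [w * m for m in mag_db]

# Going from the 45-degree default filter to 35 degrees flattens the
# response.  The dB values below are illustrative, not the real filter.
default_45 = [3.0, -2.0, 0.5, -4.0]
updated_35 = update_filter_db(default_45, 45.0, 35.0)
```

Every band moves toward 0 dB while keeping its sign, which is exactly the "smoothed peaks and valleys" behaviour described above.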
- FIG. 12 is a flowchart of a method of rendering a stereo sound signal, according to an embodiment.
- the renderer receives a multi-channel sound signal including a plurality of input channels (1210).
- the input multi-channel sound signal is converted into a plurality of output channel signals through rendering; for example, an input signal having 22.2 channels is downmixed to an output signal having 5.1 channels, the number of output channels being smaller than the number of input channels.
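Structurally, this downmix is a matrix product of panning coefficients and input channel samples. A shape-only sketch follows; the uniform coefficients are placeholders, not the standard downmix values.

```python
import numpy as np

# Illustrative sizes: 24 input channels (22.2 layout) down to 6 (5.1).
N_IN, N_OUT, N_SAMPLES = 24, 6, 480

rng = np.random.default_rng(0)
x = rng.standard_normal((N_IN, N_SAMPLES))  # one frame of input audio

# D[j, i] is the panning coefficient routing input channel i to output
# channel j.  Placeholder values only; real coefficients come from the
# rendering rules discussed in the text.
D = np.full((N_OUT, N_IN), 1.0 / np.sqrt(N_IN))

y = D @ x  # rendered 5.1-channel output frame
```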
- a rendering parameter is acquired according to a standard layout of an output channel and a default elevation angle for virtual rendering (1220).
- the default elevation angle may vary depending on the renderer.
- however, the satisfaction and effect of the virtual rendering may be improved by setting a predetermined elevation angle instead of the default elevation angle, according to the user's taste or the characteristics of the input signal.
- the rendering parameter is updated (1230).
- the updated rendering parameters may include an updated filter coefficient, obtained by applying to the initial filter coefficient a weight determined based on the elevation-angle deviation, and an updated panning coefficient, obtained by increasing or decreasing the initial panning coefficient according to the result of comparing the preset altitude with the default altitude.
- the deviation of the output channel may include deviation information according to an altitude or azimuth difference.
- FIG. 13 is a diagram illustrating a phenomenon in which left and right sound images are reversed when an elevation angle of an input channel is greater than or equal to a threshold according to an embodiment.
- a person distinguishes the location of a sound image by the time difference, the magnitude difference, and the frequency characteristic difference of the sound reaching both ears.
- when the differences between the signal characteristics reaching the two ears are large, the position is easier to identify, and even if a slight error occurs, front-back confusion of the sound image does not arise.
- the virtual sound source located near the front or rear of the head has little time difference and magnitude difference reaching the two ears, so the position of the virtual sound source should be recognized only by the difference in frequency characteristics.
- FIG. 13 shows the CH_U_L90 channel, viewed from behind the listener and represented by a square.
- the elevation angle of CH_U_L90 is denoted θ
- the ILD and ITD of the acoustic signals reaching the listener's left and right ears become smaller as θ increases, and the acoustic signals perceived by the two ears form increasingly similar sound images.
- the maximum value of the elevation angle θ is 90 degrees; when θ is 90 degrees, the channel becomes the VOG channel directly above the listener's head, so the same acoustic signal reaches both ears.
- as described above, the altitude is raised to provide a sound field with a strong sense of immersion.
- however, as the altitude increases, the sound image and the sweet spot become narrower, so a left-right reversal of the sound image can occur even if the listener's position shifts slightly or a channel is slightly displaced.
- FIG. 13 also shows the positions of the listener and the channel when the listener moves slightly to the left. Since the channel elevation angle θ is large and a strong sense of altitude is formed, even a small movement of the listener greatly changes the relative positions of the left and right channels; in the worst case, the left-channel signal reaching the right ear becomes larger than that reaching the left ear, and, as shown in the right part of FIG. 13, a left-right reversal of the sound image can occur.
- in such a case, the panning coefficient needs to be reduced, but a minimum threshold should be set so that the panning coefficient does not fall below a predetermined value.
- in this way, the left-right reversal of the sound image can be prevented.
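Clamping the panning coefficient to a minimum threshold might look like the sketch below; the threshold value 0.1 is illustrative, not a value from the patent.

```python
def clamp_panning(g, g_min):
    """Keep a panning coefficient from falling below a minimum threshold,
    so that a slight listener movement or channel displacement cannot
    flip the sound image left-to-right (threshold is illustrative).
    """
    return max(g, g_min)

clamped = clamp_panning(0.05, g_min=0.1)   # raised to the threshold
untouched = clamp_panning(0.5, g_min=0.1)  # already above it
```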
- front-back confusion of an acoustic signal may occur due to the reproduction component of the surround channel.
- front-back confusion refers to a phenomenon in which the listener cannot tell whether a virtual sound source in the stereoscopic sound is located in front of or behind them.
- fk is the normalized center frequency of the k-th frequency band
- fs is the sampling frequency, and the initial value of the altitude filter coefficient is the value at the reference elevation angle.
- the altitude panning coefficients for the other height input channels except for the TBC channel CH_U_180 and the VOG channel CH_T_000 should also be updated.
- the altitude is controlled by adjusting the ratio of gains for the SL channel and the SR channel, which are the rear channel to the frontal channel. More details will be described later.
- the input channel is a CH_U_L045 channel
- the output channels ipsilateral to the input channel are CH_M_L030 and CH_M_L110
- the output channels contralateral to the input channel are CH_M_R030 and CH_M_R110.
- the equations below show how the panning gains are determined depending on whether the input channel is a side channel, a front channel, or a rear channel, and how the altitude panning gains are updated from them.
- if the input channel with elevation elv is a front channel (azimuth angle -70 degrees to +70 degrees) or a rear channel (azimuth angle -180 degrees to -110 degrees or 110 degrees to 180 degrees), the ipsilateral and contralateral panning gains are determined by Equations 11 and 12, respectively.
- the altitude panning coefficients may then be updated based on these gains.
- the updated altitude panning coefficients for the output channels ipsilateral to the input channel and those for the output channels contralateral to the input channel are determined by Equations 13 and 14, respectively.
- the panning coefficients obtained by equations (13) and (14) are power normalized according to equations (15) and (16).
- the power normalization process is performed such that the sum of the squares of the panning coefficients of the input channel is 1, so that the energy level of the output signal before the panning coefficient update and the energy level of the output signal after the panning coefficient update can be kept the same.
- the subscript H indicates that the altitude panning coefficient is updated only in the high-frequency region.
- the updated altitude panning coefficients of Equations 13 and 14 apply only in the high frequency band, 2.8 kHz to 10 kHz band.
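Selecting the QMF bands in which the updated coefficients apply could be sketched as below. A 64-band QMF at 48 kHz with band centres at (k + 0.5) * fs / 128 is assumed; this centre-frequency convention is common but is not stated in the text.

```python
def bands_in_range(centre_freqs_hz, lo_hz=2800.0, hi_hz=10000.0):
    """Indices of QMF bands whose centre frequency falls in the range
    where the updated altitude panning coefficients apply
    (2.8 kHz to 10 kHz per the text).
    """
    return [k for k, f in enumerate(centre_freqs_hz) if lo_hz <= f <= hi_hz]

# 64-band QMF at 48 kHz: band k spans roughly k*375 Hz to (k+1)*375 Hz,
# so its centre is taken at (k + 0.5) * 375 Hz (an assumed convention).
fs = 48000.0
n_bands = 64
centres = [(k + 0.5) * fs / (2 * n_bands) for k in range(n_bands)]
high_bands = bands_in_range(centres)
```

The updated coefficients of Equations 13 and 14 would then be applied only to these band indices, while the original coefficients remain in force elsewhere.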
- the altitude panning coefficient is updated not only for the high-frequency band but also for the low-frequency band.
- the updated altitude panning coefficients for the output channels ipsilateral to the input channel and those for the output channels contralateral to the input channel are determined by Equations 17 and 18, respectively.
- the altitude panning gains updated for the low-frequency band are likewise normalized, according to Equations 19 and 20, in order to keep the energy level of the output signal constant.
- the power normalization process is performed such that the sum of the squares of the panning coefficients of the input channel is 1, so that the energy level of the output signal before the panning coefficient update and the energy level of the output signal after the panning coefficient update can be kept the same.
- FIGS. 14 to 17 are diagrams for describing a method of preventing front-back confusion of a sound image, according to an exemplary embodiment.
- FIG. 14 illustrates a horizontal channel and a front height channel according to one embodiment.
- it is assumed that the output channel layout is 5.0 channels (the woofer channel is not shown) and that the front height input channels are rendered to these horizontal output channels.
- the 5.0 channel exists in the horizontal plane 1410 and includes a front center (FC) channel, a front left (FL) channel, a front right (FR) channel, a surround left (SL) channel, and a surround right (SR) channel.
- the front height channels correspond to the upper layer 1420 of FIG. 14, and in this embodiment they include the top front center (TFC) channel, the top front left (TFL) channel, and the top front right (TFR) channel.
- the FC (Front Center), FL (Front Left), FR (Front Right), SL (Surround Left), and SR (Surround Right) output channel signals each include components corresponding to the input signals.
- the number of front height channels and horizontal channels, azimuth angles, and elevation angles of the height channels may be variously determined according to the channel layout.
- the front height channel may include at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045 and CH_U_000.
- the surround channel may include at least one of CH_M_L110 and CH_M_R110.
- a surround output channel can raise the perceived height of a sound by imparting a sense of altitude to it. Therefore, when the signal of a front height input channel is virtually rendered to the 5.0 output channels, which are horizontal channels, the sense of altitude can be provided and adjusted through the output signals of the SL and SR channels, which are surround output channels.
- FIG. 15 illustrates a recognition probability of a front height channel according to an embodiment.
- FIG. 15 is a diagram illustrating the probability that a user perceives the position (front or rear) of the sound image when a front height channel, the TFR channel, is virtually rendered using horizontal output channels.
- the positions marked are those at which users perceived the height channel of the upper layer 1420, and the size of each circle is proportional to the recognition probability.
- most users perceive the sound image at 45 degrees to the right, the position of the originally rendered virtual channel, but many users perceive it at positions other than 45 degrees to the right.
- this phenomenon occurs because each person's HRTF characteristics differ, and it can be seen that some users perceive the sound image as lying behind them, beyond 90 degrees to the right.
- an HRTF (head-related transfer function) is a mathematical transfer function representing the path of sound from a source at an arbitrary position around the head to the eardrum. It varies greatly with the position of the source relative to the center of the head and with the size and shape of the listener's head and pinna. To describe a virtual sound source accurately, the HRTF of the individual listener should be measured and used, but since this is impractical, a non-individualized HRTF, measured by installing microphones at the eardrum positions of a mannequin resembling the human body, is generally used.
- sound is not perceived identically by everyone; it sounds different depending on the surroundings or the listener's psychological state, because the physical phenomena in the space where sound propagates are perceived subjectively and sensorially. The study of acoustic signals as perceived through such subjective or psychological factors of the listener is called psychoacoustics. In addition to physical variables such as sound pressure, frequency, and time, psychoacoustics deals with subjective variables such as loudness, pitch, timbre, and spatial impression.
- psychoacoustic effects vary with the situation; representative examples are the masking effect, the cocktail-party effect, direction perception, distance perception, and the precedence effect. Psychoacoustics-based technology has been applied in various fields to provide more appropriate sound signals to the listener.
- the precedence effect, also known as the Haas effect, is the phenomenon whereby, when different sounds are generated sequentially with a time difference of 1 ms to 30 ms, the combined sound is perceived as coming from the direction of the sound generated first. However, if the onsets of the two sounds differ by more than 50 ms, they are perceived as coming from different directions.
- for example, if the output signal of the right channel is delayed while a sound image is positioned at the center, the sound image shifts to the left and is no longer perceived as a signal reproduced on the right.
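The precedence-effect shift can be demonstrated by delaying one channel of a stereo pair. The sketch below uses a 10 ms delay, an illustrative value inside the 1-30 ms range quoted above; the 1 kHz test tone is likewise arbitrary.

```python
import numpy as np

def delay_channel(x, delay_samples):
    """Delay a channel by an integer number of samples, prepending zeros
    and truncating to the original length (a simple bulk delay)."""
    return np.concatenate([np.zeros(delay_samples), x])[: len(x)]

fs = 48000
t = np.arange(4800) / fs                 # 100 ms of signal
tone = np.sin(2 * np.pi * 1000 * t)      # illustrative 1 kHz burst

left = tone
# ~10 ms delay on the right channel: by the precedence effect the image
# is pulled toward the earlier (left) channel.
right = delay_channel(tone, int(0.010 * fs))
```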
- the surround output channel is used to give a sense of altitude to the sound, as shown in FIG. 15.
- however, the surround output channel signal can cause the front height channel signal to be perceived as being heard from the rear, that is, front-back confusion occurs.
- therefore, among the output signals reproducing the front height channel input signal, the signals of the surround output channels, located at -180 degrees to -90 degrees or +90 degrees to +180 degrees relative to the front, are reproduced later than the signals of the front output channels, located at -90 degrees to +90 degrees relative to the front.
- FIG. 16 is a flowchart of a method of preventing front-back confusion, according to an embodiment.
- the renderer receives a multi-channel sound signal including a plurality of input channels (1610).
- the input multi-channel sound signal is converted into a plurality of output channel signals through rendering; for example, an input signal having 22.2 channels is downmixed to an output signal having 5.1 or 5.0 channels, the number of output channels being smaller than the number of input channels.
- rendering parameters are acquired according to a standard layout of an output channel and a default elevation angle for virtual rendering.
- the basic elevation angle may be variously determined according to the renderer, but the satisfaction and effect of the virtual rendering may be improved by setting the predetermined elevation angle instead of the default elevation angle according to the user's taste or the characteristics of the input signal.
- a time delay is added to the surround output channel for the front height channel (1620).
- that is, among the output signals reproducing the front height channel input signal, the signals of the surround output channels, located at -180 degrees to -90 degrees or +90 degrees to +180 degrees relative to the front, are reproduced later than the signals of the front output channels, located at -90 degrees to +90 degrees relative to the front.
- the renderer modifies the altitude rendering parameter based on the delay added to the surround output channel (1630).
- if the altitude rendering parameter has been modified, the renderer generates the altitude-rendered surround output channel based on the modified altitude rendering parameter (1640).
- that is, the modified altitude rendering parameter is applied to the height input channel signal to render the surround output channel signal.
- generating the delayed, altitude-rendered surround output channel for the front height input channel based on the modified altitude rendering parameter prevents front-back confusion caused by the surround output channel.
- the time delay applied to the surround output channel is about 2.7 ms and about 91.5 cm in distance, which corresponds to 128 samples, or 2 quadrature mirror filter (QMF) samples, at 48 kHz.
- the delay added to the surround output channel to prevent front-back confusion can vary depending on the sampling rate and the playback environment.
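The relationship between the quoted delay, its sample count, and its QMF-slot count can be checked as follows; a 64-band QMF is assumed, so one QMF time slot spans 64 time-domain samples.

```python
def delay_in_samples(delay_s, fs):
    """Round a delay in seconds to a whole number of samples at rate fs."""
    return round(delay_s * fs)

def delay_in_qmf_slots(delay_s, fs, qmf_bands=64):
    """One QMF time slot covers qmf_bands time-domain samples."""
    return delay_in_samples(delay_s, fs) // qmf_bands

fs = 48000
t = 128 / fs                 # ~2.67 ms, the value quoted in the text
dist = 343.0 * t             # ~0.915 m travelled by sound in that time

samples = delay_in_samples(t, fs)        # 128 samples at 48 kHz
qmf_slots = delay_in_qmf_slots(t, fs)    # 2 QMF slots
```

At a different sampling rate the same physical delay maps to a different sample count, which is why the delay is said to vary with the playback environment.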
- the rendering parameter is updated based on this.
- the updated rendering parameters may include an updated filter coefficient, obtained by applying to the initial filter coefficient a weight determined based on the elevation-angle deviation, and an updated panning coefficient, obtained by increasing or decreasing the initial panning coefficient according to the result of comparing the altitude of the input channel with the default altitude.
- delayed QMF samples of the front height input channel are added as an additional input alongside the input QMF samples, and the downmix matrix is expanded with the modified coefficients.
- a specific method of adding a time delay to a given front height input channel and modifying the rendering (downmix) matrix is as follows.
- the QMF sample delay of the input channel and the delayed QMF sample are determined as in Equations 21 and 22.
- in those equations, fs is the sampling frequency, and the subband term denotes the n-th QMF subband sample of the k-th band.
- the time delay applied to the surround output channel is about 2.7 ms and about 91.5 cm in distance, which corresponds to 128 samples, or 2 QMF samples, at 48 kHz.
- the time delay added to the surround output channel to prevent front-back confusion can vary depending on the sampling rate and the playback environment.
- the modified rendering (downmix) matrix is determined as in Equations 23-25.
- in Equations 23 to 25, one matrix denotes the downmix matrix for elevation rendering, another denotes the downmix matrix for normal rendering, and Nout denotes the number of output channels.
- the downmix parameter of the j th output channel for the i th input channel is determined as follows.
- in one case, the downmix parameter to be applied to the output channel is determined by Equation 26.
- in the other case, the downmix parameter to be applied to the output channel is determined by Equation 27.
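Expanding the downmix matrix with a delayed copy of a front height input channel might be sketched as below. This is an illustration of the idea behind Equations 23 to 25, whose exact coefficient layout is not reproduced here; the matrix sizes, the chosen channel index, and the surround-row indices are all placeholders.

```python
import numpy as np

def extend_downmix(D, height_idx, surround_rows):
    """Append a delayed copy of a front height input channel as an extra
    input column: the surround-channel coefficients move to the delayed
    copy and are zeroed for the undelayed original, so only the surround
    outputs receive the delayed signal.
    """
    n_out, n_in = D.shape
    D_ext = np.zeros((n_out, n_in + 1))
    D_ext[:, :n_in] = D
    for j in surround_rows:
        D_ext[j, n_in] = D[j, height_idx]  # delayed copy feeds surrounds
        D_ext[j, height_idx] = 0.0         # original column no longer does
    return D_ext

D = np.full((5, 24), 0.1)  # illustrative 22.2 -> 5.0 downmix matrix
D_ext = extend_downmix(D, height_idx=3, surround_rows=[3, 4])
```

The front output rows keep their original coefficients, while the surround rows now draw the height-channel contribution from the delayed column only.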
- the deviation of the output channel may include deviation information according to an altitude or azimuth difference.
- FIG. 17 illustrates a horizontal channel and a front height channel with delay added to the surround output channel according to one embodiment.
- the embodiment shown in FIG. 17 assumes that the output channel is 5.0 channels (woofer channel not shown) and renders the front height input channel as such a horizontal output channel, as in the embodiment shown in FIG.
- the 5.0 channel exists in the horizontal plane 1410 and includes a front center (FC) channel, a front left (FL) channel, a front right (FR) channel, a surround left (SL) channel, and a surround right (SR) channel.
- the front height channel corresponds to the upper layer 1420 in FIG. 4.
- the front height channel includes a top front center (TFC) channel, a top front left (TFL) channel, and a top front right (TFR) channel.
- the input channel is 22.2 channels
- 24 channels of input signals are rendered (downmixed) to generate five channels of output signals.
- components corresponding to each of the 24 channel input signals are allocated to the 5-channel output signal by the rendering rule.
- the output channel FC channel, FL channel, FR channel, SL channel, and SR channel signals include components corresponding to the input signals, respectively.
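The allocation described above can be viewed as a matrix product: each of the 5 output signals is a gain-weighted sum of the 24 input signals. A minimal sketch with placeholder gains (the actual gain values are supplied by the rendering rules and the modified downmix matrix of Equations 23-27, not by this code):

```python
import numpy as np

N_IN, N_OUT = 24, 5  # 22.2-channel input signals, 5.0 output (woofer not shown)

rng = np.random.default_rng(0)
M = rng.random((N_OUT, N_IN))  # stand-in rendering (downmix) gains, illustrative only
x = rng.random((N_IN, 1024))   # one block of 24-channel input samples

y = M @ x                      # row j of M distributes all 24 inputs to output j
```

Row `j` of `M` holds the component of every input signal allocated to output channel `j` (FC, FL, FR, SL, SR).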
- the number of front height channels and horizontal channels, azimuth angles, and elevation angles of the height channels may be variously determined according to the channel layout.
- the front height channel may include at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045 and CH_U_000.
- the surround channel may include at least one of CH_M_L110 and CH_M_R110.
- a predetermined delay is added to the front height input channel rendered through the surround output channel to prevent front-back confusion caused by the SL channel and the SR channel.
- generating, based on the modified elevation rendering parameter, a delayed elevation-rendered surround output channel for the front height input channel prevents front-back confusion at the surround output channel.
- a method for obtaining the modified elevation rendering parameter based on the delayed acoustic signal and the added delay is shown in Equations 1 to 7. Since this has been described in detail for the embodiment of FIG. 16, a detailed description thereof is omitted for the embodiment of FIG. 17.
- the time delay applied to the surround output channel is about 2.7 ms and about 91.5 cm in distance, which corresponds to 128 samples or 2 QMF samples at 48 kHz.
- the delay added to the surround output channel to prevent front-back confusion can vary depending on the sampling rate and the playback environment.
- FIG. 18 illustrates a horizontal channel and a top front center (TFC) channel according to one embodiment.
- the output channel is 5.0 channel (woofer channel not shown) and the TFC channel is rendered as such a horizontal output channel.
- the 5.0 channel exists in the horizontal plane 1810 and includes a front center (FC) channel, a front left (FL) channel, a front right (FR) channel, a surround left (SL) channel, and a surround right (SR) channel.
- the TFC channel corresponds to the upper layer 1820 in FIG. 4, and assumes that the azimuth angle is 0 degrees and is located at a predetermined elevation angle.
- the panning coefficients and filter coefficients are determined for virtual rendering that provides a sense of elevation at a specific elevation angle.
- the panning coefficients of the FL channel and the FR channel are determined because the TFC channel input signal must have a sound image located in front of the listener.
- the sound image of the TFC channel is determined to be in front.
- the panning coefficients of the FL and FR channels must be the same, and the panning coefficients of the SL and SR channels must be the same.
- since the panning coefficients of the left and right channels for rendering the TFC input channel must be the same, the elevation of the TFC input channel cannot be adjusted through the left-right panning coefficients. Therefore, to render a TFC input channel with a sense of elevation, the panning coefficients between the front and rear channels are adjusted.
- the panning coefficients of the SL channel and the SR channel for virtually rendering the TFC input channel to the elevation angle elv are determined as in Equations 28 and 29, respectively.
- G_vH0,5 (i_in) is the panning coefficient of the SL channel for virtual rendering at the reference elevation angle of 35 degrees,
- G_vH0,6 (i_in) is the panning coefficient of the SR channel for virtual rendering at the reference elevation angle of 35 degrees, and
- i_in is an index for the height input channel. Equations 8 and 9 represent a relationship between an initial value of the panning coefficient and an updated panning coefficient when the height input channel is a TFC channel.
- the power normalization process is performed such that the sum of the squares of the panning coefficients of the input channel is 1, so that the energy level of the output signal before the panning coefficient update and the energy level of the output signal after the panning coefficient update can be kept the same.
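The two constraints above — equal left/right gains with a front-rear energy shift, followed by power normalization so the squared panning coefficients sum to 1 — can be sketched as follows. The linear front/rear weighting in `w` is an illustrative stand-in for Equations 28-29, not the patent's actual formula; only the symmetry and normalization properties are taken from the text.

```python
import math

def tfc_panning(elv_deg: float, ref_deg: float = 35.0):
    """Return gains (FC, FL, FR, SL, SR) for a TFC input at elevation elv_deg."""
    # 0 at the horizontal plane, 1 at the reference elevation (assumed weighting)
    w = min(max(elv_deg / ref_deg, 0.0), 1.0)
    fc, front, rear = 1.0, 1.0 - 0.5 * w, 0.5 * w
    g = [fc, front, front, rear, rear]       # left/right pairs kept equal
    norm = math.sqrt(sum(v * v for v in g))  # power normalization
    return [v / norm for v in g]

g = tfc_panning(20.0)
assert abs(sum(v * v for v in g) - 1.0) < 1e-12  # unit total power preserved
assert g[1] == g[2] and g[3] == g[4]             # sound image stays centered
```

Raising the target elevation shifts more energy toward the surround pair while the normalization keeps the output energy level unchanged before and after the update.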
- Embodiments according to the present invention described above can be implemented in the form of program instructions that can be executed by various computer components and recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- Program instructions recorded on the computer-readable recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Stereophonic System (AREA)
Abstract
Description
Claims (48)
- A method of rendering an acoustic signal, the method comprising: receiving a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; adding a predetermined delay to a frontal height input channel so that each of the output channels provides a sound image having a sense of elevation at a reference elevation angle; modifying an elevation rendering parameter for the frontal height input channel based on the added delay; and preventing front-back confusion by generating, based on the modified elevation rendering parameter, a delayed elevation-rendered surround output channel for the frontal height input channel.
- The method of claim 1, wherein the plurality of output channels are horizontal channels.
- The method of claim 1, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The method of claim 1, wherein the frontal height input channel comprises at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045, and CH_U_000 channels.
- The method of claim 1, wherein the surround output channel comprises at least one of CH_M_L110 and CH_M_R110.
- The method of claim 1, wherein the predetermined delay is determined based on a sampling rate.
- An apparatus for rendering an acoustic signal, the apparatus comprising: a receiver configured to receive a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; a renderer configured to add a predetermined delay to a frontal height input channel so that each of the output channels has a sound image with a sense of elevation at a reference elevation angle, and to modify an elevation rendering parameter for the frontal height input channel based on the added delay; and an output unit configured to prevent front-back confusion by generating, based on the modified elevation rendering parameter, a delayed elevation-rendered surround output channel for the frontal height input channel.
- The apparatus of claim 7, wherein the plurality of output channels are horizontal channels.
- The apparatus of claim 7, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The apparatus of claim 7, wherein the frontal height input channel comprises at least one of CH_U_L030, CH_U_R030, CH_U_L045, CH_U_R045, and CH_U_000 channels.
- The apparatus of claim 7, wherein the surround output channel comprises at least one of CH_M_L110 and CH_M_R110.
- The apparatus of claim 7, wherein the predetermined delay is determined based on a sampling rate.
- A method of rendering an acoustic signal, the method comprising: receiving a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; obtaining an elevation rendering parameter for a height input channel so that each of the output channels provides a sound image having a sense of elevation at a reference elevation angle; and updating the elevation rendering parameter for a height input channel having a predetermined elevation angle other than the reference elevation angle, wherein the updating of the elevation rendering parameter comprises updating a panning gain for panning a top front center height input channel to a surround output channel.
- The method of claim 13, wherein the plurality of output channels are horizontal channels.
- The method of claim 13, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The method of claim 15, wherein the updating of the elevation rendering parameter comprises updating the panning gain based on the reference elevation angle and the predetermined elevation angle.
- The method of claim 16, wherein, when the predetermined elevation angle is smaller than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is greater than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- The method of claim 16, wherein, when the predetermined elevation angle is greater than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is smaller than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- An apparatus for rendering an acoustic signal, the apparatus comprising: a receiver configured to receive a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; and a renderer configured to obtain an elevation rendering parameter for a height input channel so that each of the output channels provides a sound image having a sense of elevation at a reference elevation angle, and to update the elevation rendering parameter for a height input channel having a predetermined elevation angle other than the reference elevation angle, wherein the updated elevation rendering parameter comprises a panning gain for panning a top front center height input channel to a surround output channel.
- The apparatus of claim 19, wherein the plurality of output channels are horizontal channels.
- The apparatus of claim 19, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The apparatus of claim 21, wherein the updated elevation rendering parameter comprises a panning gain updated based on the reference elevation angle and the predetermined elevation angle.
- The apparatus of claim 22, wherein, when the predetermined elevation angle is smaller than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is greater than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- The apparatus of claim 22, wherein, when the predetermined elevation angle is greater than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is smaller than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- A method of rendering an acoustic signal, the method comprising: receiving a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; obtaining an elevation rendering parameter for a height input channel so that each of the output channels provides a sound image having a sense of elevation at a reference elevation angle; and updating the elevation rendering parameter for a height input channel having a predetermined elevation angle other than the reference elevation angle, wherein the updating of the elevation rendering parameter comprises obtaining, based on a position of the height input channel, an updated panning gain for a frequency range including a low frequency band.
- The method of claim 25, wherein the updated panning gain is a panning gain for a rear height input channel.
- The method of claim 25, wherein the plurality of output channels are horizontal channels.
- The method of claim 25, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The method of claim 28, wherein the updating of the elevation rendering parameter comprises applying a weight to the elevation filter coefficient based on the reference elevation angle and the predetermined elevation angle.
- The method of claim 29, wherein the weight is determined such that the elevation filter characteristics appear mildly when the predetermined elevation angle is smaller than the reference elevation angle, and such that the elevation filter characteristics appear strongly when the predetermined elevation angle is greater than the reference elevation angle.
- The method of claim 28, wherein the updating of the elevation rendering parameter comprises updating the panning gain based on the reference elevation angle and the predetermined elevation angle.
- The method of claim 31, wherein, when the predetermined elevation angle is smaller than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is greater than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- The method of claim 31, wherein, when the predetermined elevation angle is greater than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is smaller than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- An apparatus for rendering an acoustic signal, the apparatus comprising: a receiver configured to receive a multichannel signal comprising a plurality of input channels to be converted into a plurality of output channels; and a renderer configured to obtain an elevation rendering parameter for a height input channel so that each of the output channels provides a sound image having a sense of elevation at a reference elevation angle, and to update the elevation rendering parameter for a height input channel having a predetermined elevation angle other than the reference elevation angle, wherein the updated elevation rendering parameter comprises a panning gain updated, based on a position of the height input, for a frequency range including a low frequency band.
- The apparatus of claim 34, wherein the updated panning gain is a panning gain for a rear height input channel.
- The apparatus of claim 34, wherein the plurality of output channels are horizontal channels.
- The apparatus of claim 34, wherein the elevation rendering parameter comprises at least one of a panning gain and an elevation filter coefficient.
- The apparatus of claim 37, wherein the updated elevation rendering parameter comprises an elevation filter coefficient to which a weight is applied based on the reference elevation angle and the predetermined elevation angle.
- The apparatus of claim 38, wherein the weight is determined such that the elevation filter characteristics appear mildly when the predetermined elevation angle is smaller than the reference elevation angle, and such that the elevation filter characteristics appear strongly when the predetermined elevation angle is greater than the reference elevation angle.
- The apparatus of claim 37, wherein the updated elevation rendering parameter comprises a panning gain updated based on the reference elevation angle and the predetermined elevation angle.
- The apparatus of claim 40, wherein, when the predetermined elevation angle is smaller than the reference elevation angle, an updated elevation panning coefficient to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is greater than the elevation panning coefficient before the update, and a sum of squares of the updated elevation panning coefficients to be applied to each input channel is 1.
- The apparatus of claim 40, wherein, when the predetermined elevation angle is greater than the reference elevation angle, an updated elevation panning gain to be applied to an output channel ipsilateral to the output channel having the predetermined elevation angle is smaller than the elevation panning gain before the update, and a sum of squares of the updated elevation panning gains to be applied to each input channel is 1.
- A computer-readable recording medium having recorded thereon a computer program for executing the method of claim 1.
- A computer-readable recording medium having recorded thereon a computer program for executing the method of claim 13.
- A computer-readable recording medium having recorded thereon a computer program for executing the method of claim 25.
- A computer program for executing the method of claim 1.
- A computer program for executing the method of claim 13.
- A computer program for executing the method of claim 25.
Priority Applications (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2017101976A RU2656986C1 (ru) | 2014-06-26 | 2015-06-26 | Способ и устройство для рендеринга акустического сигнала и машиночитаемый носитель записи |
JP2016575113A JP6444436B2 (ja) | 2014-06-26 | 2015-06-26 | 音響信号のレンダリング方法、その装置及び該コンピュータ可読記録媒体 |
CA2953674A CA2953674C (en) | 2014-06-26 | 2015-06-26 | Method and device for rendering acoustic signal, and computer-readable recording medium |
CN201580045447.3A CN106797524B (zh) | 2014-06-26 | 2015-06-26 | 用于渲染声学信号的方法和装置及计算机可读记录介质 |
US15/322,051 US10021504B2 (en) | 2014-06-26 | 2015-06-26 | Method and device for rendering acoustic signal, and computer-readable recording medium |
BR112016030345-8A BR112016030345B1 (pt) | 2014-06-26 | 2015-06-26 | Método de renderização de um sinal de áudio, aparelho para renderização de um sinal de áudio, meio de gravação legível por computador, e programa de computador |
EP15811229.2A EP3163915A4 (en) | 2014-06-26 | 2015-06-26 | Method and device for rendering acoustic signal, and computer-readable recording medium |
MX2017000019A MX365637B (es) | 2014-06-26 | 2015-06-26 | Metodo y dispositivo para representar una señal acustica y medio de grabacion legible por computadora. |
BR122022017776-0A BR122022017776B1 (pt) | 2014-06-26 | 2015-06-26 | Método de renderização de elevação de um sinal de áudio, aparelho para renderização de um sinal de áudio de elevação, e meio de gravação não transitório legível por computador |
AU2015280809A AU2015280809C1 (en) | 2014-06-26 | 2015-06-26 | Method and device for rendering acoustic signal, and computer-readable recording medium |
AU2017279615A AU2017279615B2 (en) | 2014-06-26 | 2017-12-19 | Method and device for rendering acoustic signal, and computer-readable recording medium |
US16/004,774 US10299063B2 (en) | 2014-06-26 | 2018-06-11 | Method and device for rendering acoustic signal, and computer-readable recording medium |
AU2019200907A AU2019200907B2 (en) | 2014-06-26 | 2019-02-08 | Method and device for rendering acoustic signal, and computer-readable recording medium |
US16/379,211 US10484810B2 (en) | 2014-06-26 | 2019-04-09 | Method and device for rendering acoustic signal, and computer-readable recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462017499P | 2014-06-26 | 2014-06-26 | |
US62/017,499 | 2014-06-26 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/322,051 A-371-Of-International US10021504B2 (en) | 2014-06-26 | 2015-06-26 | Method and device for rendering acoustic signal, and computer-readable recording medium |
US16/004,774 Continuation US10299063B2 (en) | 2014-06-26 | 2018-06-11 | Method and device for rendering acoustic signal, and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015199508A1 true WO2015199508A1 (ko) | 2015-12-30 |
Family
ID=54938492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/006601 WO2015199508A1 (ko) | 2014-06-26 | 2015-06-26 | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 |
Country Status (11)
Country | Link |
---|---|
US (3) | US10021504B2 (ko) |
EP (1) | EP3163915A4 (ko) |
JP (2) | JP6444436B2 (ko) |
KR (4) | KR102294192B1 (ko) |
CN (3) | CN106797524B (ko) |
AU (3) | AU2015280809C1 (ko) |
BR (2) | BR122022017776B1 (ko) |
CA (2) | CA3041710C (ko) |
MX (2) | MX365637B (ko) |
RU (2) | RU2656986C1 (ko) |
WO (1) | WO2015199508A1 (ko) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9774974B2 (en) | 2014-09-24 | 2017-09-26 | Electronics And Telecommunications Research Institute | Audio metadata providing apparatus and method, and multichannel audio data playback apparatus and method to support dynamic format conversion |
CN106303897A (zh) | 2015-06-01 | 2017-01-04 | 杜比实验室特许公司 | 处理基于对象的音频信号 |
EP3335436B1 (en) * | 2015-08-14 | 2021-10-06 | DTS, Inc. | Bass management for object-based audio |
JP2019518373A (ja) * | 2016-05-06 | 2019-06-27 | ディーティーエス・インコーポレイテッドDTS,Inc. | 没入型オーディオ再生システム |
US10791153B2 (en) * | 2017-02-02 | 2020-09-29 | Bose Corporation | Conference room audio setup |
KR102483470B1 (ko) * | 2018-02-13 | 2023-01-02 | 한국전자통신연구원 | 다중 렌더링 방식을 이용하는 입체 음향 생성 장치 및 입체 음향 생성 방법, 그리고 입체 음향 재생 장치 및 입체 음향 재생 방법 |
CN109005496A (zh) * | 2018-07-26 | 2018-12-14 | 西北工业大学 | 一种hrtf中垂面方位增强方法 |
EP3726858A1 (en) * | 2019-04-16 | 2020-10-21 | Fraunhofer Gesellschaft zur Förderung der Angewand | Lower layer reproduction |
JP7157885B2 (ja) * | 2019-05-03 | 2022-10-20 | ドルビー ラボラトリーズ ライセンシング コーポレイション | 複数のタイプのレンダラーを用いたオーディオ・オブジェクトのレンダリング |
US11341952B2 (en) | 2019-08-06 | 2022-05-24 | Insoundz, Ltd. | System and method for generating audio featuring spatial representations of sound sources |
TWI735968B (zh) * | 2019-10-09 | 2021-08-11 | 名世電子企業股份有限公司 | 音場型自然環境音效系統 |
CN112911494B (zh) * | 2021-01-11 | 2022-07-22 | 恒大新能源汽车投资控股集团有限公司 | 一种音频数据处理方法、装置及设备 |
DE102021203640B4 (de) * | 2021-04-13 | 2023-02-16 | Kaetel Systems Gmbh | Lautsprechersystem mit einer Vorrichtung und Verfahren zum Erzeugen eines ersten Ansteuersignals und eines zweiten Ansteuersignals unter Verwendung einer Linearisierung und/oder einer Bandbreiten-Erweiterung |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110249819A1 (en) * | 2008-12-18 | 2011-10-13 | Dolby Laboratories Licensing Corporation | Audio channel spatial translation |
KR20130080819A (ko) * | 2012-01-05 | 2013-07-15 | 삼성전자주식회사 | 다채널 음향 신호의 정위 방법 및 장치 |
WO2014041067A1 (en) * | 2012-09-12 | 2014-03-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for providing enhanced guided downmix capabilities for 3d audio |
WO2014058275A1 (ko) * | 2012-10-11 | 2014-04-17 | 한국전자통신연구원 | 오디오 데이터 생성 장치 및 방법, 오디오 데이터 재생 장치 및 방법 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU3427393A (en) * | 1992-12-31 | 1994-08-15 | Desper Products, Inc. | Stereophonic manipulation apparatus and method for sound image enhancement |
AU2002244269A1 (en) * | 2001-03-07 | 2002-09-24 | Harman International Industries, Inc. | Sound direction system |
US7928311B2 (en) * | 2004-12-01 | 2011-04-19 | Creative Technology Ltd | System and method for forming and rendering 3D MIDI messages |
KR100708196B1 (ko) * | 2005-11-30 | 2007-04-17 | 삼성전자주식회사 | 모노 스피커를 이용한 확장된 사운드 재생 장치 및 방법 |
KR101336237B1 (ko) * | 2007-03-02 | 2013-12-03 | 삼성전자주식회사 | 멀티 채널 스피커 시스템의 멀티 채널 신호 재생 방법 및장치 |
ES2452348T3 (es) * | 2007-04-26 | 2014-04-01 | Dolby International Ab | Aparato y procedimiento para sintetizar una señal de salida |
EP2154911A1 (en) * | 2008-08-13 | 2010-02-17 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | An apparatus for determining a spatial output multi-channel audio signal |
JP2011211312A (ja) * | 2010-03-29 | 2011-10-20 | Panasonic Corp | 音像定位処理装置及び音像定位処理方法 |
KR20120004909A (ko) * | 2010-07-07 | 2012-01-13 | 삼성전자주식회사 | 입체 음향 재생 방법 및 장치 |
JP2012049652A (ja) * | 2010-08-24 | 2012-03-08 | Panasonic Corp | マルチチャネルオーディオ再生装置およびマルチチャネルオーディオ再生方法 |
EP2614659B1 (en) * | 2010-09-06 | 2016-06-08 | Dolby International AB | Upmixing method and system for multichannel audio reproduction |
US20120155650A1 (en) | 2010-12-15 | 2012-06-21 | Harman International Industries, Incorporated | Speaker array for virtual surround rendering |
JP5867672B2 (ja) * | 2011-03-30 | 2016-02-24 | ヤマハ株式会社 | 音像定位制御装置 |
US9479886B2 (en) * | 2012-07-20 | 2016-10-25 | Qualcomm Incorporated | Scalable downmix design with feedback for object-based surround codec |
KR101703333B1 (ko) | 2013-03-29 | 2017-02-06 | 삼성전자주식회사 | 오디오 장치 및 이의 오디오 제공 방법 |
WO2015147533A2 (ko) | 2014-03-24 | 2015-10-01 | 삼성전자 주식회사 | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 |
US10149086B2 (en) | 2014-03-28 | 2018-12-04 | Samsung Electronics Co., Ltd. | Method and apparatus for rendering acoustic signal, and computer-readable recording medium |
-
2015
- 2015-06-26 CA CA3041710A patent/CA3041710C/en active Active
- 2015-06-26 EP EP15811229.2A patent/EP3163915A4/en active Pending
- 2015-06-26 US US15/322,051 patent/US10021504B2/en active Active
- 2015-06-26 JP JP2016575113A patent/JP6444436B2/ja active Active
- 2015-06-26 CN CN201580045447.3A patent/CN106797524B/zh active Active
- 2015-06-26 CN CN201910547164.9A patent/CN110213709B/zh active Active
- 2015-06-26 WO PCT/KR2015/006601 patent/WO2015199508A1/ko active Application Filing
- 2015-06-26 CA CA2953674A patent/CA2953674C/en active Active
- 2015-06-26 KR KR1020150091586A patent/KR102294192B1/ko active IP Right Grant
- 2015-06-26 BR BR122022017776-0A patent/BR122022017776B1/pt active IP Right Grant
- 2015-06-26 RU RU2017101976A patent/RU2656986C1/ru active
- 2015-06-26 BR BR112016030345-8A patent/BR112016030345B1/pt active IP Right Grant
- 2015-06-26 MX MX2017000019A patent/MX365637B/es active IP Right Grant
- 2015-06-26 RU RU2018112368A patent/RU2759448C2/ru active
- 2015-06-26 CN CN201910547171.9A patent/CN110418274B/zh active Active
- 2015-06-26 AU AU2015280809A patent/AU2015280809C1/en active Active
-
2017
- 2017-01-04 MX MX2019006683A patent/MX2019006683A/es unknown
- 2017-12-19 AU AU2017279615A patent/AU2017279615B2/en active Active
-
2018
- 2018-06-11 US US16/004,774 patent/US10299063B2/en active Active
- 2018-11-27 JP JP2018220950A patent/JP6600733B2/ja active Active
-
2019
- 2019-02-08 AU AU2019200907A patent/AU2019200907B2/en active Active
- 2019-04-09 US US16/379,211 patent/US10484810B2/en active Active
-
2021
- 2021-08-20 KR KR1020210110307A patent/KR102362245B1/ko active IP Right Grant
-
2022
- 2022-01-28 KR KR1020220013617A patent/KR102423757B1/ko active IP Right Grant
- 2022-07-15 KR KR1020220087385A patent/KR102529122B1/ko active IP Right Grant
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110249819A1 (en) * | 2008-12-18 | 2011-10-13 | Dolby Laboratories Licensing Corporation | Audio channel spatial translation |
KR20130080819A (ko) * | 2012-01-05 | 2013-07-15 | 삼성전자주식회사 | 다채널 음향 신호의 정위 방법 및 장치 |
WO2014041067A1 (en) * | 2012-09-12 | 2014-03-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for providing enhanced guided downmix capabilities for 3d audio |
WO2014058275A1 (ko) * | 2012-10-11 | 2014-04-17 | 한국전자통신연구원 | 오디오 데이터 생성 장치 및 방법, 오디오 데이터 재생 장치 및 방법 |
Non-Patent Citations (2)
Title |
---|
See also references of EP3163915A4 * |
VICTORIA EVELKIN ET AL.: "EFFECT OF LATENCY TIME IN HIGH FREQUENCIES ON SOUND LOCALIZATION.", IEEE 27-TH CONVENTION OF ELECTRICAL AND ELECTRONICS ENGINEERS IN ISRAEL ., 14 November 2012 (2012-11-14), pages 1 - 4, XP032277714 * |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015199508A1 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2016024847A1 (ko) | 음향 신호를 생성하고 재생하는 방법 및 장치 | |
WO2019103584A1 (ko) | 귀 개방형 헤드폰을 이용한 다채널 사운드 구현 장치 및 그 방법 | |
WO2015147532A2 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2015147619A1 (ko) | 음향 신호의 렌더링 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2018074677A1 (ko) | 단말 장치들 간의 멀티미디어 통신에 있어서, 오디오 신호를 송신하고 수신된 오디오 신호를 출력하는 방법 및 이를 수행하는 단말 장치 | |
WO2017191970A2 (ko) | 바이노럴 렌더링을 위한 오디오 신호 처리 방법 및 장치 | |
WO2009131391A1 (en) | Method for generating and playing object-based audio contents and computer readable recording medium for recoding data having file format structure for object-based audio service | |
WO2014157975A1 (ko) | 오디오 장치 및 이의 오디오 제공 방법 | |
WO2010107269A2 (ko) | 멀티 채널 신호의 부호화/복호화 장치 및 방법 | |
WO2015142073A1 (ko) | 오디오 신호 처리 방법 및 장치 | |
WO2012005507A2 (en) | 3d sound reproducing method and apparatus | |
WO2019031652A1 (ko) | 3차원 오디오 재생 방법 및 재생 장치 | |
WO2020231202A1 (ko) | 복수 개의 스피커들을 포함하는 전자 장치 및 그 제어 방법 | |
EP3569001A1 (en) | Method for processing vr audio and corresponding equipment | |
WO2014148844A1 (ko) | 단말 장치 및 그의 오디오 신호 출력 방법 | |
WO2021020823A2 (ko) | 소음 제거 장치 및 방법 | |
WO2020060206A1 (en) | Methods for audio processing, apparatus, electronic device and computer readable storage medium | |
WO2014148845A1 (ko) | 오디오 신호 크기 제어 방법 및 장치 | |
WO2021060680A1 (en) | Methods and systems for recording mixed audio signal and reproducing directional audio | |
WO2018233221A1 (zh) | 多窗口声音输出方法、电视机以及计算机可读存储介质 | |
WO2022158943A1 (ko) | 다채널 오디오 신호 처리 장치 및 방법 | |
WO2016182184A1 (ko) | 입체 음향 재생 방법 및 장치 | |
WO2020040541A1 (ko) | 전자장치, 그 제어방법 및 기록매체 | |
WO2016190460A1 (ko) | 입체 음향 재생 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15811229 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016575113 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 122022017776 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2953674 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15322051 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2017/000019 Country of ref document: MX |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112016030345 Country of ref document: BR |
|
REEP | Request for entry into the european phase |
Ref document number: 2015811229 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015811229 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2017101976 Country of ref document: RU Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2015280809 Country of ref document: AU Date of ref document: 20150626 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112016030345 Country of ref document: BR Kind code of ref document: A2 Effective date: 20161222 |