WO2015056383A1 - Audio encoding device and audio decoding device - Google Patents
Audio encoding device and audio decoding device
- Publication number
- WO2015056383A1 (PCT/JP2014/004247)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- signal
- channel
- encoding
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/002—Dynamic bit allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
Definitions
- the present invention relates to an audio encoding apparatus that compresses and encodes a signal, and an audio decoding apparatus that decodes the encoded signal.
- A system that can handle background sounds in an object-based audio system has been proposed (see, for example, Non-Patent Document 1).
- It has also been proposed to input the background sound as a multi-channel signal, called a multi-channel background object (MBO), compress it into a 1-channel or 2-channel signal with an MPS encoder (MPEG Surround encoder), and treat the result as a single object (see, for example, Non-Patent Document 2).
- MBO: multi-channel background object
- MPS encoder: MPEG Surround encoder
- An audio decoding apparatus according to the present invention decodes an encoded signal obtained by encoding an input signal, where the input signal includes a channel-based audio signal and an object-based audio signal.
- The encoded signal includes a channel-based encoded signal obtained by encoding the channel-based audio signal, an object-based encoded signal obtained by encoding the object-based audio signal, and an audio scene encoded signal obtained by encoding audio scene information extracted from the input signal.
- The audio decoding apparatus includes: separating means for separating the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the encoded signal; audio scene decoding means for extracting and decoding the audio scene encoded signal; a channel base decoder that decodes the channel-based audio signal; an object base decoder that decodes the object-based audio signal using the audio scene information; and audio scene synthesizing means that synthesizes the output signal of the channel base decoder and the output signal of the object base decoder based on separately designated speaker arrangement information and reproduces the synthesized audio scene signal.
- FIG. 1 is a diagram illustrating a configuration of an audio encoding apparatus according to the first embodiment.
- FIG. 2 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 3 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 4 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 5 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 6 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 7 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 8 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 9 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 10 is a diagram illustrating an example of a method for determining the perceptual importance of an audio object.
- FIG. 11 is a diagram illustrating a configuration of a bit stream.
- FIG. 12 is a diagram of a configuration of the audio decoding apparatus according to the second embodiment.
- FIG. 13 is a diagram showing the configuration of the bit stream and the state of skipping reproduction.
- FIG. 15 is a diagram showing the configuration of a channel-based audio system.
- In the channel-based audio system, a renderer allocates the captured sound sources to a 5-ch signal, a channel-based encoder encodes it, and the encoded signal is recorded and transmitted. The channel base decoder then decodes it, and the decoded 5-ch sound field, or a sound field converted to 2 ch or 7.1 ch, is reproduced through speakers.
- The advantage of this system is that, when the speaker configuration on the decoding side matches what the system assumes, an optimal sound field can be reproduced without imposing a load on the decoding side.
- Background sounds, reverberant acoustic signals, and the like can be expressed appropriately by pre-mixing them into the individual channel signals.
- In an object-based audio system, the captured sound sources (guitar, piano, vocals, etc.) are directly encoded as audio objects, recorded, and transmitted, together with the reproduction position information of each sound source. On the decoder side, each audio object is rendered according to the sound source position information and the speaker arrangement.
- For example, each audio object is allocated to the channels so that it is reproduced by the 5-ch speakers at the position corresponding to its reproduction position information.
- the advantage of this system is that an optimal sound field can be reproduced according to the speaker arrangement on the reproduction side.
- In this system, the background sound is input as a multi-channel signal, a multi-channel background object (MBO), but is compressed into a 1-channel or 2-channel signal by the MPS encoder and handled as a single object.
- MBO: multi-channel background object
- FIG. 5 shows the architecture of the SAOC system of Non-Patent Document 1, which handles the MBO.
- An audio encoding apparatus according to the present invention encodes an input signal that includes a channel-based audio signal and an object-based audio signal.
- The apparatus includes: audio scene analysis means that determines an audio scene from the input signal and detects audio scene information; a channel-based encoder that encodes the channel-based audio signal output from the audio scene analysis means; an object-based encoder that encodes the object-based audio signal output from the audio scene analysis means;
- and audio scene encoding means that encodes the audio scene information.
- An audio decoding apparatus according to the present invention decodes an encoded signal obtained by encoding an input signal that includes a channel-based audio signal and an object-based audio signal.
- The encoded signal includes: a channel-based encoded signal obtained by encoding the channel-based audio signal; an object-based encoded signal obtained by encoding the object-based audio signal; and an audio scene encoded signal obtained by encoding audio scene information extracted from the input signal.
- The audio decoding apparatus separates the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the encoded signal.
- With this configuration, audio objects can be skipped appropriately according to the playback situation.
- the audio scene information is perceptual importance information of the audio object, and when the calculation resource necessary for decoding is insufficient, the audio object having low perceptual importance is skipped.
- This configuration enables playback with the sound quality maintained as much as possible even with a processor with a small computing capacity.
- the audio encoding apparatus includes an audio scene analysis unit 100, a channel base encoder 101, an object base encoder 102, an audio scene encoding unit 103, and a multiplexing unit 104.
- the audio scene analysis means 100 determines an audio scene from an input signal composed of a channel-based audio signal and an object-based audio signal, and detects audio scene information.
- The functions of the audio scene analysis means 100 fall roughly into two types: one is to reconfigure the channel-based and object-based audio signals, and the other is to determine the perceptual importance of the audio objects, the individual elements of the object-based audio signal.
- For example, the audio scene analysis means 100 analyzes the input channel-based audio signal and, if a specific channel signal is independent of the other channel signals, incorporates that channel signal into the object-based audio signal.
- In that case, the reproduction position information of the resulting audio object is the position where the speaker of that channel is supposed to be placed.
- For example, if a signal is present only in a specific channel, that channel signal may be treated as an object-based audio signal (audio object); if it is the center channel, the playback position of the audio object is the center.
- Meanwhile, background sounds and reverberant acoustic signals are output as channel-based audio signals.
- In this way, reproduction can be performed on the decoder side with high sound quality and a small amount of calculation.
- Conversely, the audio scene analysis means 100 analyzes the input object-based audio signal and, when a specific audio object is located at a specific speaker position, may mix that audio object into the channel signal output from that speaker.
- For example, when an audio object representing the sound of an instrument is located at the position of the right speaker, it may be mixed into the channel signal output from the right speaker. Doing so reduces the number of audio objects by one, which contributes to a lower bit rate during transmission and recording.
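The reassignment between channel-based and object-based signals described above can be sketched as follows. This is a hypothetical illustration, not the patent's algorithm: the function name, the use of a precomputed inter-channel correlation matrix, and the independence threshold are all assumptions.

```python
def reclassify(channels, objects, corr, threshold=0.1):
    """Promote 'independent' channels to audio objects.

    channels:  list of channel signals (any representation)
    objects:   list of existing audio objects
    corr:      corr[i][j] is a precomputed correlation between channels i and j
    threshold: a channel whose maximum |correlation| with every other channel
               is below this value is treated as independent (assumed criterion)
    """
    remaining, promoted = [], []
    for i, ch in enumerate(channels):
        others = [abs(corr[i][j]) for j in range(len(channels)) if j != i]
        if others and max(others) < threshold:
            # Becomes an object; its playback position would be the
            # position of that channel's speaker.
            promoted.append(ch)
        else:
            remaining.append(ch)  # stays channel-based
    return remaining, objects + promoted
```

A channel that correlates with nothing else (e.g. a lone vocal in one channel) would be promoted, while correlated channels (shared reverberation, background) stay channel-based.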
- The audio scene analysis means 100 determines that an audio object with a high sound pressure level has a higher perceptual importance than an audio object with a low sound pressure level. This reflects the listener's tendency to pay more attention to sounds with a high sound pressure level.
- For example, the sound source 1 (black circle 1) has a higher sound pressure level than the sound source 2 (black circle 2).
- In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
- The audio scene analysis means 100 also determines that an audio object whose reproduction position is approaching the listener has a higher perceptual importance than an audio object whose reproduction position is moving away. This reflects the listener's tendency to pay more attention to an approaching object.
- For example, the sound source 1 (black circle 1) is approaching the listener, and the sound source 2 (black circle 2) is moving away from the listener.
- In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
- Similarly, the audio scene analysis means 100 determines that an audio object whose playback position is in front of the listener has a higher perceptual importance than one whose playback position is behind the listener.
- It likewise determines that an audio object in front of the listener has a higher perceptual importance than one above the listener.
- This is because the listener's sensitivity to objects in front is higher than to objects to the side, and the sensitivity to objects to the side is in turn higher than to objects above or below the listener.
- For example, the sound source 3 (white circle 1) is in front of the listener and the sound source 4 (white circle 2) is behind the listener. In this case, it is determined that the sound source 3 has a higher perceptual importance than the sound source 4.
- Likewise, the sound source 1 (black circle 1) is in front of the listener and the sound source 2 (black circle 2) is above the listener. In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
- The audio scene analysis means 100 determines that an audio object whose playback position moves left and right relative to the listener has a higher perceptual importance than one whose playback position moves back and forth, and that one moving back and forth has a higher perceptual importance than one moving up and down. This is because the listener's sensitivity to left-right movement is higher than to front-back movement, and the sensitivity to front-back movement is higher than to vertical movement.
- For example, the sound source trajectory 1 (black circle 1) moves left and right relative to the listener, the sound source trajectory 2 (black circle 2) moves back and forth, and the sound source trajectory 3 (black circle 3) moves up and down.
- In this case, it is determined that the trajectory 1 has a higher perceptual importance than the trajectory 2, and the trajectory 2 has a higher perceptual importance than the trajectory 3.
- The audio scene analysis means 100 determines that an audio object whose playback position is moving has a higher perceptual importance than one whose playback position is stationary, and that an audio object moving quickly has a higher perceptual importance than one moving slowly. This is because the listener's auditory sensitivity to the movement of a sound source is high.
- the sound source trajectory 1 indicated by the black circle 1 moves relative to the listener, and the sound source trajectory 2 indicated by the black circle 2 is stationary relative to the listener. In this case, it is determined that the sound source trajectory 1 has a higher perceptual importance than the sound source trajectory 2.
- Further, the audio scene analysis means 100 determines that an audio object whose corresponding object is displayed on the screen has a higher perceptual importance than one whose object is not displayed.
- For example, the sound source 1 (black circle 1) is stationary or moving relative to the listener and is also shown on the screen, while the sound source 2 (black circle 2) is at the same position but is not shown on the screen. In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
- The audio scene analysis means 100 also determines that an audio object rendered by few speakers has a higher perceptual importance than one rendered by many speakers. This is based on the idea that an audio object rendered by many speakers is expected to reproduce its sound image more accurately than one rendered by few speakers, so an audio object rendered by few speakers should be encoded more accurately.
- For example, the sound source 1 (black circle 1) is rendered by one speaker, and the sound source 2 (black circle 2) is rendered by four speakers.
- In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
- Further, the audio scene analysis means 100 determines that an audio object containing many frequency components to which auditory sensitivity is high has a higher perceptual importance than one containing many frequency components to which auditory sensitivity is low.
- For example, the sound source 1 (black circle 1) is a sound in the frequency band of the human voice, the sound source 2 (black circle 2) is a sound in a higher frequency band, such as the flight sound of an aircraft, and the sound source 3 (black circle 3) is a sound in a frequency band lower than the human voice, such as a bass guitar.
- Human hearing is highly sensitive to sounds containing the frequency components of the human voice, moderately sensitive to sounds containing frequency components higher than the human voice, such as aircraft flight sounds, and has low sensitivity to sounds containing frequency components lower than the human voice, such as a bass guitar.
- Accordingly, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2, and the sound source 2 has a higher perceptual importance than the sound source 3.
- the audio scene analysis means 100 determines that an audio object that contains many masked frequency components has a lower perceptual importance than an audio object that contains many unmasked frequency components.
- For example, the sound source 1 (black circle 1) is an explosion sound, and the sound source 2 (black circle 2) is a gunshot whose frequency components are largely masked by the explosion sound in human hearing.
- In this case, it is determined that the sound source 1 has a higher perceptual importance than the sound source 2.
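Taken together, the heuristics above amount to a scoring function over audio objects. The following sketch is purely illustrative: the field names, the bonus values, and the idea of summing them into one score are assumptions, not something the patent specifies.

```python
def importance_score(obj):
    """Toy perceptual-importance score combining the heuristics in the text.

    obj is a dict with hypothetical fields; every name and weight here
    is an assumption made for illustration.
    """
    score = obj.get("level_db", 0.0)          # louder -> more important
    if obj.get("approaching"):                 # approaching the listener
        score += 6.0
    # Front > side > rear/above sensitivity.
    position_bonus = {"front": 6.0, "side": 3.0, "rear": 1.0, "above": 0.0}
    score += position_bonus.get(obj.get("position", "front"), 0.0)
    # Left-right motion > front-back > vertical > static.
    motion_bonus = {"lateral": 6.0, "front_back": 3.0, "vertical": 1.0, "static": 0.0}
    score += motion_bonus.get(obj.get("motion", "static"), 0.0)
    if obj.get("on_screen"):                   # object visible on screen
        score += 3.0
    if obj.get("speech_band"):                 # voice-band frequency content
        score += 6.0
    if obj.get("masked"):                      # largely masked components
        score -= 6.0
    return score
```

Objects would then be ranked by this score, and the rank (or a derived code amount) encoded as the audio scene information.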
- the audio scene analysis means 100 determines the perceptual importance of each audio object as described above, and allocates the number of bits according to the total amount when encoding with the object-based encoder and the channel-based encoder.
- For example, the method is as follows. Let the number of channels of the channel-based input signal be A, the number of objects of the object-based input signal be B, the weight for the channel base be a, the weight for the object base be b, and the total number of bits available for encoding be T (T is the total number of bits given to the channel-based and object-based audio signals, that is, the overall budget minus the number of bits given to the audio scene information and to the header information).
- Then T * (b * B / (a * A + b * B)) bits are provisionally allocated to the object-based audio signal as a whole; that is, T * (b / (a * A + b * B)) bits are allocated to each audio object.
- Here, a and b are positive values near 1.0; their specific values may be determined according to the nature of the content and the listener's preferences.
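The provisional allocation above can be written directly in code. The function below is a minimal sketch of the two formulas, with the variable names taken from the text; only the function name is invented.

```python
def allocate_bits(T, A, B, a=1.0, b=1.0):
    """Split T available bits between A channels (weight a) and B objects (weight b).

    Returns (total bits for the channel-based signal, bits per audio object),
    following T * (b*B / (a*A + b*B)) for the object-based total and
    T * (b / (a*A + b*B)) per object.
    """
    denom = a * A + b * B
    if denom == 0:
        return 0.0, 0.0          # no channels and no objects
    object_total = T * (b * B) / denom
    per_object = T * b / denom if B else 0.0
    channel_total = T - object_total
    return channel_total, per_object
```

With a = b = 1.0 and A = B, the budget splits evenly, matching the intuition behind the weights.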
- FIG. 11 (a) shows an example of the distribution of the number of bits allocated in this way for each audio frame.
- the oblique stripe pattern portion indicates the total code amount of the channel-based audio signal.
- the horizontal stripe pattern portion indicates the total amount of code of the object-based audio signal.
- the white portion indicates the total code amount of the audio scene information.
- section 1 is a section in which no audio object exists. Therefore, all bits are assigned to channel-based audio signals.
- Section 2 shows a state when an audio object appears.
- Section 3 shows a case where the total amount of perceptual importance of the audio object is lower than section 2.
- Section 4 shows a case where the total amount of perceptual importance of the audio object is higher than that of section 3.
- Section 5 shows a state where no audio object exists.
- FIGS. 11(b) and 11(c) show examples of how the number of bits allocated to each audio object in a given audio frame and the corresponding information (audio scene information) are arranged in the bit stream.
- the number of bits allocated to each audio object is determined by the perceptual importance for each audio object.
- The perceptual importance (audio scene information) of each audio object may be gathered at a predetermined location in the bitstream, as shown in FIG. 11(b), or attached to the individual audio objects, as shown in FIG. 11(c).
- the channel base encoder 101 encodes the channel base audio signal output from the audio scene analysis unit 100 with the number of bits allocated by the audio scene analysis unit 100.
- the object-based encoder 102 encodes the object-based audio signal output from the audio scene analysis unit 100 with the number of bits allocated by the audio scene analysis unit 100.
- The audio scene encoding means 103 encodes the audio scene information (in the above example, the perceptual importance of the object-based audio signal), for example as the code amount of each audio frame of the object-based audio signal.
- The multiplexing means 104 generates a bit stream by multiplexing the channel-based encoded signal output from the channel base encoder 101, the object-based encoded signal output from the object base encoder 102, and the audio scene encoded signal output from the audio scene encoding means 103.
- That is, a bit stream as shown in FIG. 11(b) or FIG. 11(c) is generated.
- At this time, the object-based encoded signal and the audio scene encoded signal are multiplexed as a pair.
- Here, "as a pair" does not necessarily mean that the two pieces of information are adjacent; it means that each encoded signal and its corresponding code amount are multiplexed in association with each other. This allows the decoder side to control processing according to the audio scene for each audio object. In that sense, it is desirable that the audio scene encoded signal be stored before the object-based encoded signal.
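A toy multiplexer illustrating the "pair" association might look as follows. The tuple-based stream layout is an assumption for illustration, corresponding to the FIG. 11(c) style where each object's scene information is stored before the object payload it describes.

```python
def multiplex(channel_payload, object_payloads, scene_infos):
    """Build a toy bit-stream layout pairing each object with its scene info.

    channel_payload: encoded channel-based signal (opaque bytes here)
    object_payloads: list of encoded object signals
    scene_infos:     list of per-object scene information (e.g. code amounts),
                     same order as object_payloads
    """
    stream = [("channel", channel_payload)]
    for info, payload in zip(scene_infos, object_payloads):
        stream.append(("scene", info))    # scene info stored first...
        stream.append(("object", payload))  # ...so the decoder can decide
    return stream                           # to decode or skip the payload
```

Because the scene entry precedes its object entry, a decoder can read the code amount and seek past the payload without parsing it.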
- As described above, the audio encoding apparatus according to the present embodiment encodes an input signal that includes a channel-based audio signal and an object-based audio signal.
- It includes audio scene analysis means that determines an audio scene from the input signal and detects audio scene information,
- a channel-based encoder that encodes the channel-based audio signal output from the audio scene analysis means,
- an object-based encoder that encodes the object-based audio signal output from the audio scene analysis means,
- and audio scene encoding means that encodes the audio scene information.
- With this audio encoding apparatus, the bit rate can be reduced, because audio objects that can be expressed on a channel basis are mixed into the channel-based signals, reducing the number of audio objects.
- In addition, the degree of rendering freedom on the decoder side can be improved, because sounds that can be converted into audio objects are detected in the channel-based signal and can be recorded and transmitted as audio objects.
- Furthermore, the apparatus can appropriately allocate the number of encoding bits between the channel-based audio signal and the object-based audio signal.
- FIG. 12 is a diagram showing a configuration of the audio decoding apparatus according to the present embodiment.
- the audio decoding apparatus includes a separating unit 200, an audio scene decoding unit 201, a channel base decoder 202, an object base decoder 203, and an audio scene synthesizing unit 204.
- the separating unit 200 separates the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the bit stream input to the separating unit 200.
- the audio scene decoding unit 201 decodes the audio scene encoded signal separated by the separating unit 200 and outputs audio scene information.
- the channel base decoder 202 decodes the channel base encoded signal separated by the separating means 200 and outputs a channel signal.
- The audio scene synthesizing means 204 synthesizes an audio scene based on the channel signal output from the channel base decoder 202, the object signal output from the object base decoder 203, and separately designated speaker arrangement information.
- the separation unit 200 separates the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the input bit stream.
- the audio scene coded signal is obtained by coding perceptual importance information of each audio object.
- The perceptual importance may be encoded as the code amount of each audio object, or as a rank such as first, second, third, and so on, or as both.
- the audio scene encoded signal is decoded by the audio scene decoding means 201, and audio scene information is output.
- the channel base decoder 202 decodes the channel base encoded signal
- the object base decoder 203 decodes the object base encoded signal based on the audio scene information.
- additional information indicating the reproduction status is given to the object base decoder 203.
- the additional information indicating the reproduction status may be information on the computation capacity of the processor that executes the process.
- the above skip processing may be performed based on the information of the code amount.
- When the perceptual importance is represented as a rank (first, second, third, and so on), an audio object with a low rank may simply be read and discarded without processing.
- FIG. 13 shows a case where the perceptual importance is expressed as a code amount, and an audio object found from the audio scene information to have low perceptual importance is skipped using that code amount information.
- the additional information given to the object base decoder 203 may be listener attribute information. For example, if the listener is a child, only audio objects suitable for the listener may be selected and the rest may be discarded.
- In this case as well, the audio object is skipped based on its corresponding code amount.
- Metadata is assigned to each audio object to define, for example, what character the audio object represents.
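The skip processing described above can be sketched as follows. This is an illustrative model, not the patent's decoder: it represents each object by its (importance, code amount) pair from the audio scene information, and uses the code amount as a stand-in for the decoding cost consumed from the available computation budget.

```python
def decode_with_skips(objects, budget):
    """Decode objects in order of perceptual importance until the budget runs out.

    objects: list of (importance, code_amount) pairs read from the
             audio scene information
    budget:  available resources, here proxied by total code amount
    Returns (decoded, skipped) lists. Because the code amount of each
    skipped object is known in advance, the reader can seek past it.
    """
    decoded, skipped = [], []
    for imp, size in sorted(objects, key=lambda o: -o[0]):
        if size <= budget:
            decoded.append((imp, size))
            budget -= size
        else:
            skipped.append((imp, size))   # read and discard (or seek past)
    return decoded, skipped
```

Reading the importance before the payload (as in the pairing described earlier) is what makes this decision possible without decoding the object first.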
- The audio scene synthesizing means 204 determines the signal to be assigned to each speaker based on the channel signal output from the channel base decoder 202, the object signal output from the object base decoder 203, and the separately designated speaker arrangement information, and reproduces it.
- the method is as follows.
- the output signal of the channel base decoder 202 is assigned to each channel as it is.
- The output signal of the object base decoder 203 is distributed (rendered) to each channel so as to form a sound image at the position given by the reproduction position information originally included in the object-based audio.
- Any conventionally known rendering method may be used.
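As one example of such a conventionally known method, constant-power amplitude panning between two speakers can be sketched as follows; the two-speaker setup at ±45 degrees and the angle convention are assumptions chosen for illustration.

```python
import math

def pan_stereo(sample, azimuth_deg):
    """Constant-power pan of one object sample between speakers at -45/+45 deg.

    azimuth_deg: the object's reproduction position, -45 (full left)
                 to +45 (full right). Returns (left_gain * sample,
                 right_gain * sample); the gains satisfy L^2 + R^2 = 1,
                 keeping the total power constant across positions.
    """
    # Map azimuth in [-45, 45] to a pan angle in [0, pi/2].
    theta = (azimuth_deg + 45.0) / 90.0 * (math.pi / 2.0)
    return sample * math.cos(theta), sample * math.sin(theta)
```

A full renderer would apply the same idea per speaker pair of the actual arrangement (e.g. vector base amplitude panning), using each object's reproduction position information.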
- FIG. 14 is a schematic diagram of an audio decoding apparatus with the same configuration as FIG. 12, except that the listener's position information is also input to the audio scene synthesizing means 204.
- In this configuration, an HRTF (head-related transfer function) may be constructed from the listener's position information and the reproduction position information originally included in the object-based audio.
- As described above, the audio decoding apparatus according to the present embodiment decodes an encoded signal obtained by encoding an input signal that includes a channel-based audio signal and an object-based audio signal.
- The encoded signal includes a channel-based encoded signal obtained by encoding the channel-based audio signal, an object-based encoded signal obtained by encoding the object-based audio signal, and an audio scene encoded signal obtained by encoding audio scene information extracted from the input signal.
- The audio decoding apparatus includes separating means for separating the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the encoded signal; audio scene decoding means for extracting and decoding the audio scene encoded signal; a channel base decoder that decodes the channel-based audio signal; an object base decoder that decodes the object-based audio signal using the audio scene information decoded by the audio scene decoding means; and audio scene synthesizing means that synthesizes the output signal of the channel base decoder and the output signal of the object base decoder based on the audio scene information and separately designated speaker arrangement information, and reproduces the synthesized audio scene signal.
- Because the perceptual importance of each audio object is carried as audio scene information, even a processor with limited computational capacity can read or discard audio objects according to their perceptual importance, enabling playback while minimizing the degradation of sound quality.
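As a hedged sketch of such importance-driven discarding (the cost model, metadata fields, and greedy policy are assumptions, not part of the disclosure):

```python
def select_objects(objects, compute_budget):
    """Given per-object metadata dicts with 'importance' and decoding 'cost',
    keep the most perceptually important objects that fit the compute budget.
    Field names and the cost model are illustrative assumptions."""
    decoded = []
    spent = 0
    # consider objects in order of decreasing perceptual importance
    for obj in sorted(objects, key=lambda o: o["importance"], reverse=True):
        if spent + obj["cost"] <= compute_budget:
            decoded.append(obj["name"])
            spent += obj["cost"]
        # otherwise the object is discarded, degrading quality gracefully
    return decoded
```

With a tight budget the least important objects are the first to be dropped, which matches the intent of the passage above.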
- Because the perceptual importance of an audio object is expressed as a code amount and used as audio scene information, the amount of data to skip is known in advance, which makes skipping the read-out of an object very easy.
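A minimal sketch of skip-by-code-amount reading, assuming a toy container in which each object's byte length is stored ahead of its payload (the field layout is an illustrative assumption, not the bitstream defined by this disclosure):

```python
import io
import struct

def read_objects(stream, skip_names):
    """Parse a toy bitstream of records (name_len:u8, name, payload_len:u32,
    payload). Because the length field precedes the payload, unwanted
    objects can be seeked past without decoding them."""
    kept = {}
    while True:
        head = stream.read(1)
        if not head:
            break
        name = stream.read(head[0]).decode()
        (size,) = struct.unpack(">I", stream.read(4))
        if name in skip_names:
            stream.seek(size, io.SEEK_CUR)   # skip: size is known in advance
        else:
            kept[name] = stream.read(size)
    return kept
```

The `seek` call is the point of the passage: the decoder never has to parse the skipped object's payload.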
- In the audio decoding apparatus, by supplying the listener's position information to the audio scene synthesizing means 204, an HRTF can be generated from that position information and the position information of each audio object. This makes it possible to synthesize audio scenes with a strong sense of presence.
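As a hedged sketch of how an HRTF could be chosen from listener and object positions (the flat 2-D geometry and nearest-azimuth lookup are simplifying assumptions; practical systems interpolate measured HRTF sets):

```python
import math

def relative_azimuth(listener_xy, listener_facing_deg, object_xy):
    """Azimuth of the object relative to the listener's facing direction,
    in degrees in (-180, 180]; 0 deg is assumed to be straight ahead."""
    dx = object_xy[0] - listener_xy[0]
    dy = object_xy[1] - listener_xy[1]
    az = math.degrees(math.atan2(dx, dy)) - listener_facing_deg
    return (az + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]

def pick_hrtf(hrtf_table, listener_xy, facing_deg, object_xy):
    """Choose the HRTF entry measured closest to the object's relative
    azimuth; `hrtf_table` maps azimuth (deg) -> filter pair (assumed shape)."""
    az = relative_azimuth(listener_xy, facing_deg, object_xy)
    nearest = min(hrtf_table, key=lambda a: abs(a - az))
    return hrtf_table[nearest]
```

As the listener's reported position changes, the relative azimuth, and therefore the selected HRTF, changes with it, which is what yields the heightened sense of presence described above.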
- The present invention is not limited to this embodiment. Variations obtained by applying modifications conceived by those skilled in the art to this embodiment, without departing from the spirit of the present invention, are also included within the scope of the present invention.
- The audio encoding device and audio decoding device according to the present disclosure can encode background sounds and audio objects appropriately while reducing the amount of computation on the decoding side, and can therefore be widely applied to audio playback equipment and to AV playback equipment with video.
- Audio scene analysis means
- 101 Channel-based encoder
- 102 Object-based encoder
- 103 Audio scene encoding means
- 104 Multiplexing means
- 200 Separation means
- 201 Audio scene decoding means
- 202 Channel-based decoder
- 203 Object-based decoder
- 204 Audio scene synthesis means
Description
Before describing the embodiments of the present disclosure, the findings on which the present disclosure is based will be described.
Hereinafter, the audio encoding device according to Embodiment 1 will be described with reference to the drawings.
Hereinafter, the audio decoding device according to Embodiment 2 will be described with reference to the drawings.
101 Channel-based encoder
102 Object-based encoder
103 Audio scene encoding means
104 Multiplexing means
200 Separation means
201 Audio scene decoding means
202 Channel-based decoder
203 Object-based decoder
204 Audio scene synthesis means
Claims (11)
- An audio encoding device that encodes an input signal, wherein
the input signal consists of a channel-based audio signal and an object-based audio signal,
the audio encoding device comprising:
audio scene analysis means for determining an audio scene from the input signal and detecting audio scene information;
a channel-based encoder that encodes the channel-based audio signal output from the audio scene analysis means;
an object-based encoder that encodes the object-based audio signal output from the audio scene analysis means; and
audio scene encoding means for encoding the audio scene information.
- The audio encoding device according to claim 1, wherein the audio scene analysis means further separates the channel-based audio signal and the object-based audio signal from the input signal and outputs them.
- The audio encoding device according to claim 1, wherein
the audio scene analysis means extracts at least perceptual importance information of the object-based audio signal and, in accordance therewith, determines the number of encoding bits allocated to each of the channel-based audio signal and the object-based audio signal,
the channel-based encoder encodes the channel-based audio signal according to the number of encoding bits, and
the object-based encoder encodes the object-based audio signal according to the number of encoding bits.
- The audio encoding device according to claim 3, wherein the audio scene analysis means detects at least one of:
the number of audio objects contained in the object-based audio signal of the input signal,
the loudness of each of the audio objects,
transitions in the loudness of the audio objects,
the position of each of the audio objects,
the trajectory of the positions of the audio objects,
the frequency characteristics of each of the audio objects,
the masking characteristics of each of the audio objects, and
the relationship between the audio objects and a video signal,
and, in accordance therewith, determines the number of encoding bits allocated to each of the channel-based audio signal and the object-based audio signal.
- The audio encoding device according to claim 3, wherein the audio scene analysis means detects at least one of:
the loudness of each of a plurality of audio objects contained in the object-based audio signal of the input signal,
transitions in the loudness of each of the plurality of audio objects,
the position of each of the audio objects,
the trajectory of the audio objects,
the frequency characteristics of each of the audio objects,
the masking characteristics of each of the audio objects, and
the relationship between the audio objects and a video signal,
and, in accordance therewith, determines the number of encoding bits allocated to each of the audio objects.
- The audio encoding device according to claim 4, wherein
the encoding result of the perceptual importance information of the object-based audio signal is stored in a bitstream as a pair with the encoding result of the object-based audio signal, and
the encoding result of the perceptual importance information is placed before the encoding result of the object-based audio signal.
- The audio encoding device according to claim 5, wherein
the encoding result of the perceptual importance information of each of the audio objects is stored in a bitstream as a pair with the encoding result of that audio object, and
the encoding result of the perceptual importance information is placed before the encoding result of the audio object.
- An audio decoding device that decodes an encoded signal obtained by encoding an input signal, wherein
the input signal consists of a channel-based audio signal and an object-based audio signal,
the encoded signal includes a channel-based encoded signal obtained by encoding the channel-based audio signal, an object-based encoded signal obtained by encoding the object-based audio signal, and an audio scene encoded signal obtained by encoding audio scene information extracted from the input signal,
the audio decoding device comprising:
separating means for separating the channel-based encoded signal, the object-based encoded signal, and the audio scene encoded signal from the encoded signal;
audio scene decoding means for extracting the encoded audio scene information from the encoded signal and decoding it;
a channel-based decoder that decodes the channel-based audio signal;
an object-based decoder that decodes the object-based audio signal using the audio scene information decoded by the audio scene decoding means; and
audio scene synthesizing means that synthesizes the output signal of the channel-based decoder and the output signal of the object-based decoder on the basis of the audio scene information and separately designated speaker arrangement information, and reproduces the synthesized audio scene signal.
- The audio decoding device according to claim 8, wherein the audio scene information is information on the number of encoding bits of each audio object; audio objects not to be reproduced are determined among the audio objects on the basis of separately designated information, and the audio objects not to be reproduced are skipped over on the basis of their numbers of encoding bits.
- The audio decoding device according to claim 8, wherein the audio scene information is perceptual importance information of the audio objects, and when the computational resources required for decoding are insufficient, the audio objects of low perceptual importance are skipped.
- The audio decoding device according to claim 8, wherein the audio scene information is audio object position information, and HRTF (Head-Related Transfer Function) coefficients used when downmixing to each speaker are determined from that information, separately designated reproduction-side speaker arrangement information, and listener position information that is separately designated or assumed in advance.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480056559.4A CN105637582B (zh) | 2013-10-17 | 2014-08-20 | Audio encoding device and audio decoding device |
EP14853892.9A EP3059732B1 (en) | 2013-10-17 | 2014-08-20 | Audio decoding device |
JP2015542491A JP6288100B2 (ja) | 2013-10-17 | 2014-08-20 | Audio encoding device and audio decoding device |
US15/097,117 US9779740B2 (en) | 2013-10-17 | 2016-04-12 | Audio encoding device and audio decoding device |
US15/694,672 US10002616B2 (en) | 2013-10-17 | 2017-09-01 | Audio decoding device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013216821 | 2013-10-17 | ||
JP2013-216821 | 2013-10-17 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/097,117 Continuation US9779740B2 (en) | 2013-10-17 | 2016-04-12 | Audio encoding device and audio decoding device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015056383A1 true WO2015056383A1 (ja) | 2015-04-23 |
Family
ID=52827847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/004247 WO2015056383A1 (ja) | 2013-10-17 | 2014-08-20 | Audio encoding device and audio decoding device |
Country Status (5)
Country | Link |
---|---|
US (2) | US9779740B2 (ja) |
EP (1) | EP3059732B1 (ja) |
JP (1) | JP6288100B2 (ja) |
CN (1) | CN105637582B (ja) |
WO (1) | WO2015056383A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017519239A (ja) * | 2014-05-16 | 2017-07-13 | Qualcomm, Incorporated | Compression of higher-order ambisonic signals |
WO2018198789A1 (ja) * | 2017-04-26 | 2018-11-01 | Sony Corporation | Signal processing device and method, and program |
WO2020105423A1 (ja) * | 2018-11-20 | 2020-05-28 | Sony Corporation | Information processing device and method, and program |
JP2022506501A (ja) * | 2018-10-31 | 2022-01-17 | Sony Interactive Entertainment Inc. | Text annotation of sound effects |
JP2023523081A (ja) * | 2020-04-30 | 2023-06-01 | Huawei Technologies Co., Ltd. | Bit allocation method and apparatus for audio signals |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6439296B2 (ja) * | 2014-03-24 | 2018-12-19 | Sony Corporation | Decoding device and method, and program |
EP3293987B1 (en) * | 2016-09-13 | 2020-10-21 | Nokia Technologies Oy | Audio processing |
US10187740B2 (en) * | 2016-09-23 | 2019-01-22 | Apple Inc. | Producing headphone driver signals in a digital audio signal processing binaural rendering environment |
US11064453B2 (en) | 2016-11-18 | 2021-07-13 | Nokia Technologies Oy | Position stream session negotiation for spatial audio applications |
WO2018200822A1 (en) * | 2017-04-26 | 2018-11-01 | Dts, Inc. | Bit rate control over groups of frames |
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
US11019449B2 (en) * | 2018-10-06 | 2021-05-25 | Qualcomm Incorporated | Six degrees of freedom and three degrees of freedom backward compatibility |
KR102691543B1 (ko) * | 2018-11-16 | 2024-08-02 | Samsung Electronics Co., Ltd. | Electronic device for recognizing an audio scene and method therefor |
AU2020310952A1 (en) * | 2019-07-08 | 2022-01-20 | Voiceage Corporation | Method and system for coding metadata in audio streams and for efficient bitrate allocation to audio streams coding |
US11430451B2 (en) * | 2019-09-26 | 2022-08-30 | Apple Inc. | Layered coding of audio with discrete objects |
CN114822564A (zh) * | 2021-01-21 | 2022-07-29 | Huawei Technologies Co., Ltd. | Bit allocation method and apparatus for audio objects |
CN115472170A (zh) * | 2021-06-11 | 2022-12-13 | Huawei Technologies Co., Ltd. | Method and apparatus for processing a three-dimensional audio signal |
EP4428857A1 (en) * | 2021-11-02 | 2024-09-11 | Beijing Xiaomi Mobile Software Co., Ltd. | Signal encoding and decoding method and apparatus, and user equipment, network side device and storage medium |
WO2023216119A1 (zh) * | 2022-05-10 | 2023-11-16 | Beijing Xiaomi Mobile Software Co., Ltd. | Audio signal encoding method and apparatus, electronic device, and storage medium |
US20240196158A1 (en) * | 2022-12-08 | 2024-06-13 | Samsung Electronics Co., Ltd. | Surround sound to immersive audio upmixing based on video scene analysis |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010506231A (ja) * | 2007-02-14 | 2010-02-25 | LG Electronics Inc. | Method and apparatus for encoding and decoding object-based audio signals |
WO2010109918A1 (ja) * | 2009-03-26 | 2010-09-30 | Panasonic Corporation | Decoding device, encoding/decoding device, and decoding method |
JP2011509591A (ja) * | 2008-01-01 | 2011-03-24 | LG Electronics Inc. | Method and apparatus for processing an audio signal |
US20120314875A1 (en) * | 2011-06-09 | 2012-12-13 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding 3-dimensional audio signal |
EP2690621A1 (en) * | 2012-07-26 | 2014-01-29 | Thomson Licensing | Method and Apparatus for downmixing MPEG SAOC-like encoded audio signals at receiver side in a manner different from the manner of downmixing at encoder side |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100542129B1 (ko) * | 2002-10-28 | 2006-01-11 | Electronics and Telecommunications Research Institute | Object-based three-dimensional audio system and control method thereof |
KR20070046752A (ko) * | 2005-10-31 | 2007-05-03 | LG Electronics Inc. | Signal processing method and apparatus |
EP2100297A4 (en) * | 2006-09-29 | 2011-07-27 | Korea Electronics Telecomm | DEVICE AND METHOD FOR CODING AND DECODING A MULTI-OBJECT AUDIO SIGNAL WITH DIFFERENT CHANNELS |
CN101490744B (zh) * | 2006-11-24 | 2013-07-17 | LG Electronics Inc. | Method and apparatus for encoding and decoding object-based audio signals |
KR101422745B1 (ko) * | 2007-03-30 | 2014-07-24 | Electronics and Telecommunications Research Institute | Apparatus and method for encoding and decoding multi-object audio signals composed of multiple channels |
EP2225894B1 (en) | 2008-01-01 | 2012-10-31 | LG Electronics Inc. | A method and an apparatus for processing an audio signal |
CN101562015A (zh) * | 2008-04-18 | 2009-10-21 | Huawei Technologies Co., Ltd. | Audio processing method and apparatus |
US8396577B2 (en) * | 2009-08-14 | 2013-03-12 | Dts Llc | System for creating audio objects for streaming |
JP5582027B2 (ja) * | 2010-12-28 | 2014-09-03 | Fujitsu Limited | Encoder, encoding method, and encoding program |
US9026450B2 (en) * | 2011-03-09 | 2015-05-05 | Dts Llc | System for dynamically creating and rendering audio objects |
EP2686654A4 (en) * | 2011-03-16 | 2015-03-11 | Dts Inc | CODING AND PLAYING THREE-DIMENSIONAL AUDIO TRACKS |
SG10201604679UA (en) * | 2011-07-01 | 2016-07-28 | Dolby Lab Licensing Corp | System and method for adaptive audio signal generation, coding and rendering |
EP2805326B1 (en) * | 2012-01-19 | 2015-10-14 | Koninklijke Philips N.V. | Spatial audio rendering and encoding |
JP6439296B2 (ja) * | 2014-03-24 | 2018-12-19 | Sony Corporation | Decoding device and method, and program |
- 2014
  - 2014-08-20 JP JP2015542491A patent/JP6288100B2/ja active Active
  - 2014-08-20 CN CN201480056559.4A patent/CN105637582B/zh active Active
  - 2014-08-20 EP EP14853892.9A patent/EP3059732B1/en active Active
  - 2014-08-20 WO PCT/JP2014/004247 patent/WO2015056383A1/ja active Application Filing
- 2016
  - 2016-04-12 US US15/097,117 patent/US9779740B2/en active Active
- 2017
  - 2017-09-01 US US15/694,672 patent/US10002616B2/en active Active
Non-Patent Citations (2)
Title |
---|
JONAS ENGDEGARD; BARBARA RESCH; CORNELIA FALCH; OLIVER HELLMUTH; JOHANNES HILPERT; ANDREAS HOELZER; LEONID TERENTIEV; JEROEN BREEBAART: "Spatial Audio Object Coding (SAOC) The Upcoming MPEG Standard on Parametric Object Based Audio Coding", AES 124TH CONVENTION, 17 May 2008 (2008-05-17)
See also references of EP3059732A4 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017519239A (ja) * | 2014-05-16 | 2017-07-13 | Qualcomm, Incorporated | Compression of higher-order ambisonic signals |
US11900956B2 (en) | 2017-04-26 | 2024-02-13 | Sony Group Corporation | Signal processing device and method, and program |
WO2018198789A1 (ja) * | 2017-04-26 | 2018-11-01 | Sony Corporation | Signal processing device and method, and program |
JPWO2018198789A1 (ja) * | 2017-04-26 | 2020-03-05 | Sony Corporation | Signal processing device and method, and program |
JP7459913B2 (ja) | 2017-04-26 | 2024-04-02 | Sony Group Corporation | Signal processing device and method, and program |
JP7160032B2 (ja) | 2017-04-26 | 2022-10-25 | Sony Group Corporation | Signal processing device and method, and program |
JP2022188258A (ja) * | 2017-04-26 | 2022-12-20 | Sony Group Corporation | Signal processing device and method, and program |
US11574644B2 (en) | 2017-04-26 | 2023-02-07 | Sony Corporation | Signal processing device and method, and program |
JP2022506501A (ja) * | 2018-10-31 | 2022-01-17 | Sony Interactive Entertainment Inc. | Text annotation of sound effects |
WO2020105423A1 (ja) * | 2018-11-20 | 2020-05-28 | Sony Corporation | Information processing device and method, and program |
JPWO2020105423A1 (ja) * | 2018-11-20 | 2021-10-14 | Sony Group Corporation | Information processing device and method, and program |
JP7468359B2 (ja) | 2018-11-20 | 2024-04-16 | Sony Group Corporation | Information processing device and method, and program |
JP2023523081A (ja) * | 2020-04-30 | 2023-06-01 | Huawei Technologies Co., Ltd. | Bit allocation method and apparatus for audio signals |
US11900950B2 (en) | 2020-04-30 | 2024-02-13 | Huawei Technologies Co., Ltd. | Bit allocation method and apparatus for audio signal |
JP7550881B2 (ja) | 2020-04-30 | 2024-09-13 | Huawei Technologies Co., Ltd. | Bit allocation method and apparatus for audio signals |
Also Published As
Publication number | Publication date |
---|---|
US9779740B2 (en) | 2017-10-03 |
EP3059732A4 (en) | 2017-04-19 |
US20160225377A1 (en) | 2016-08-04 |
US10002616B2 (en) | 2018-06-19 |
JP6288100B2 (ja) | 2018-03-07 |
JPWO2015056383A1 (ja) | 2017-03-09 |
CN105637582A (zh) | 2016-06-01 |
EP3059732A1 (en) | 2016-08-24 |
CN105637582B (zh) | 2019-12-31 |
US20170365262A1 (en) | 2017-12-21 |
EP3059732B1 (en) | 2018-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6288100B2 (ja) | Audio encoding device and audio decoding device | |
KR100888474B1 (ko) | Apparatus and method for encoding/decoding multichannel audio signals | |
KR101328962B1 (ko) | Audio signal processing method and apparatus | |
KR101506837B1 (ko) | Method and apparatus for generating a side-information bitstream for multi-object audio signals | |
TWI431610B (zh) | Method and apparatus for encoding and decoding object-based audio signals | |
JP5453514B2 (ja) | Apparatus and method for encoding and decoding multi-object audio signals composed of various channels | |
KR101221916B1 (ko) | Audio signal processing method and apparatus | |
JP5260665B2 (ja) | Audio coding using downmix | |
KR101414455B1 (ko) | Scalable channel decoding method | |
RU2406166C2 (ru) | Methods and devices for encoding and decoding object-based audio signals | |
US9570082B2 (en) | Method, medium, and apparatus encoding and/or decoding multichannel audio signals | |
US20120183148A1 (en) | System for multichannel multitrack audio and audio processing method thereof | |
KR100718132B1 (ko) | Method and apparatus for generating a bitstream of an audio signal, and encoding/decoding method and apparatus using the same | |
KR101434834B1 (ko) | Method and apparatus for encoding/decoding multichannel audio signals | |
KR20070081735A (ko) | Method and apparatus for encoding/decoding an audio signal | |
KR20080030848A (ko) | Method and apparatus for encoding and decoding audio signals | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14853892; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2015542491; Country of ref document: JP; Kind code of ref document: A
| REEP | Request for entry into the european phase | Ref document number: 2014853892; Country of ref document: EP
| WWE | Wipo information: entry into national phase | Ref document number: 2014853892; Country of ref document: EP
| NENP | Non-entry into the national phase | Ref country code: DE