WO2010058931A2 - A method and an apparatus for processing a signal - Google Patents

A method and an apparatus for processing a signal Download PDF

Info

Publication number
WO2010058931A2
Authority
WO
WIPO (PCT)
Prior art keywords
window
frame
feature
current
feature information
Prior art date
Application number
PCT/KR2009/006714
Other languages
French (fr)
Other versions
WO2010058931A3 (en)
Inventor
Sung Yong Yoon
Hyun Kook Lee
Dong Soo Kim
Jae Hyun Lim
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020090109742A (KR20100054749A)
Application filed by LG Electronics Inc.
Publication of WO2010058931A2
Publication of WO2010058931A3

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/022: Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring

Definitions

  • the present invention relates to an apparatus for processing an audio signal and method thereof.
  • Although the present invention is suitable for a wide scope of applications, it is particularly suitable for reducing the number of bits required to transfer a window feature by determining a window feature of a current frame using first window feature information, second window feature information and a window feature of a previous frame and then using the determined window feature.
  • a signal can be coded using 9 kinds of window features in case of coding an audio or speech signal.
  • the number of bits allocated to represent a substantially usable window feature of a current frame may be smaller than the number of bits allocated to represent all window features.
  • An object of the present invention is to provide an apparatus for processing a signal and method thereof, by which a window feature of a current frame represented using fewer bits can be used, by determining a window feature of a current frame using first window feature information, second window feature information and a window feature of a previous frame.
  • the present invention provides the following effects or advantages.
  • in an apparatus for processing a signal and method thereof according to the present invention, by determining a window feature of a current window frame using a window feature of a previous window frame, a feature of a right window of the current window frame applied to a current frame, first window feature information and second window feature information indicating an MDCT unit, it is able to reduce the number of bits allocated to indicate the window feature.
  • FIG. 1 is a diagram for relations between a frame and a window frame used for a signal processing apparatus according to one embodiment of the present invention
  • FIG. 2 is a schematic block diagram of a signal encoding device according to one embodiment of the present invention.
  • FIG. 3 is a detailed block diagram of a first window feature information generating unit and a second window feature information generating unit shown in FIG. 2;
  • FIG. 4 is a schematic block diagram of a signal decoding device according to one embodiment of the present invention.
  • FIG. 5 is a diagram for a method of determining a window feature of a current frame according to one embodiment of the present invention
  • FIG. 6 is a detailed block diagram of a first window feature information receiving unit, a second window feature information receiving unit and a window feature determining unit according to one embodiment of the present invention
  • FIG. 7 and FIG. 8 are diagrams of syntaxes indicating signal decoding methods according to various embodiments of the present invention, respectively;
  • FIG. 9 is a schematic block diagram of a signal encoding device according to another embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of a signal decoding device according to another embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a configuration of a product including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
  • FIG. 12 is a schematic diagram of relations between products including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
  • FIG. 13 is a schematic block diagram of a broadcast signal decoding device including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
  • a method of decoding a signal comprises obtaining a window feature of a previous window frame indicating a window used for the previous window frame; extracting first window feature information indicating a length of a right window slope of a current window frame; when the first window feature information indicates a length of a long window slope, determining a window feature of the current window frame by using the first window feature information and the window feature of the previous window frame; when the first window feature information does not indicate a length of a long window slope, extracting second window feature information indicating a unit of frequency transform of the current window frame and determining a window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame; and decoding an audio signal based on the window feature of the current window frame.
  • the window feature of the current window frame is determined further by using coding mode information of a current frame indicating a coding mode of the current frame, and coding mode information of a previous frame indicating a coding mode of the previous frame.
  • the window feature of the previous frame is determined based on first window feature information of the previous window frame.
  • the window feature of the current window frame is one of a long window feature indicating that the left window and the right window are long windows, and a long stop window feature indicating that the left window is a short window and the right window is a long window.
  • a bit length of each of the first window feature information and the second window feature information is 1 bit.
  • an apparatus for decoding a signal comprises a window feature obtaining unit obtaining a window feature of a previous window frame indicating a window used for the previous window frame; a first window feature information extracting unit extracting first window feature information indicating a length of a right window slope of a current window frame; a second window feature information extracting unit extracting second window feature information indicating a unit of frequency transform of the current window frame when the first window feature information does not indicate a length of a long window slope; and a window feature determining unit determining a window feature of the current window frame based on the first window feature information, the second window feature information and the window feature of the previous window frame, wherein the second window feature information extracting unit does not operate when the first window feature information indicates a length of a long window slope.
  • the concept 'coding' in the present invention includes both encoding and decoding.
  • 'information' in this disclosure is a terminology that generally includes values, parameters, coefficients, elements and the like, and its meaning may occasionally be construed differently, by which the present invention is non-limited.
  • A stereo signal is taken as an example of a signal in this disclosure, by which examples of the present invention are non-limited.
  • a signal in this disclosure may include a plural channel signal having at least three or more channels.
  • FIG. 1 is a conceptual diagram illustrating the unit to which a window is applied at a signal coder side according to the present invention.
  • an audio signal can be partitioned into a plurality of frames 110.
  • a length of a window frame 120/130/... applied to the audio signal may be equal to a sum of a length of a current frame of the audio signal and a length of a previous frame.
  • the length of the window frame 2 140 is equal to a sum of the length of the frame 1 112 of the audio signal and the length of the frame 2 113 of the audio signal.
  • the length of the window frame 3 150 is equal to a sum of the length of the frame 2 113 of the audio signal and the length of the frame 3 114 of the audio signal.
  • the modified discrete cosine transform can then be performed in units of the window-applied audio signal, i.e., in units of 2048 samples if the frames of the audio signal are constructed with 1024 samples.
  • a frame of a general audio signal is named differently from a window frame, which is the unit for applying a window.
  • a window feature applied to a window frame includes an LPD sequence (LPD_sequence) used for a speech signal.
  • a window feature for an audio signal includes an only long sequence (ONLY_LONG_SEQUENCE), a long start sequence (LONG_START_SEQUENCE), an eight short sequence (EIGHT_SHORT_SEQUENCE), a long stop sequence (LONG_STOP_SEQUENCE), a stop start sequence (STOP_START_SEQUENCE), an LPD start sequence (LPD_START_SEQUENCE), a stop 1152 sequence (STOP_1152_SEQUENCE), and a stop start 1152 sequence (STOP_START_1152_SEQUENCE).
  • Referring to FIG. 1, if a current window frame is the window frame 3 150, a window feature of the window frame 3 160 will be applied to the frame 2 113 and the frame 3 114 of the audio signal. Meanwhile, the frame 2 113 is included in the window frame 2 140 that is the previous window frame, and the window feature corresponding to the frame 2 113 is already determined as the window feature of the window frame 2 140.
  • Hence, a shape (i.e., a window shape corresponding to the frame 3) of a left window in the window feature of the window frame 3 150 is determined as a shape that meets a predetermined condition with the previous window frame.
  • a shape of a right window of the window frame 3 150 corresponding to the frame 3 114 is determined according to a characteristic of the audio signal of the frame 3 114, whereby the window feature of the window frame 3 150 can be determined as one of the 8 types.
  • FIG. 2 is a schematic block diagram of a signal encoding device 200 according to one embodiment of the present invention.
  • a signal encoding device 200 includes a receiving unit 210, a right window shape determining unit 220, a first window feature information generating unit 230, a second window feature information generating unit 240, a window applying unit 250, a MDCT unit 260 and a multiplexer 270.
  • the receiving unit 210 is able to receive a window feature of a previous window frame and an input signal of a current frame.
  • a current window frame is a unit that is applied to a current frame and a previous frame.
  • the window frame 3 150 is applicable to the frame 3 114 (i.e., current frame) and the frame 2 113 (i.e., previous frame).
  • the receiving unit 210 is able to receive a window feature of the window frame 2 140 and an input signal corresponding to the frame 3 114.
  • the right window shape determining unit 220 determines a shape of a right window of the window frame 3 150 according to a feature of the input signal (frame 3).
  • the shape of the right window of the current frame is determined differently depending on whether the input signal corresponds to a long frame having a frame length of 1024 samples or a short frame having a frame length of 128 samples.
  • the first window feature information generating unit 230 generates first window feature information based on the determined shape of the right window, and preferably on a length of a slope of the right window. Preferably, it can be determined by referring to a frame length of the input signal, of which details will be explained later with reference to FIG. 3.
  • the second window feature information generating unit 240 generates second window feature information only if the first window feature information does not indicate that the length of the right window slope is a length of a long window slope.
  • the second window feature information can indicate a frequency transform (MDCT) unit of the current window frame, of which details will be explained later with reference to FIG. 3.
  • MDCT: frequency transform (modified discrete cosine transform)
  • the window applying unit 250 is able to apply the determined window feature of the current window frame to the current frame and the previous frame.
  • the MDCT unit 260 transforms a time domain signal into a frequency signal (frequency spectrum) by a window frame unit to which the window feature is applied.
  • the MDCT unit 260 is then able to transfer the transformed frequency signal to a decoder side.
  • the multiplexer 270 enables the window feature of the current window frame, the first window feature information, the second window feature information and the signal transformed into the frequency spectrum (MDCT transformed signal) to be contained in one bitstream and then outputs the bitstream from the encoding device 200.
  • Meanwhile, in case of determining a window feature of a window frame applied to a next frame, the encoding device 200 is able to further include a window feature storing unit (not shown in the drawing) for storing a window feature of the current window frame.
  • FIG. 3 is a detailed block diagram of the first window feature information generating unit and the second window feature information generating unit shown in FIG. 2.
  • a first window feature information generating unit 310 generates first window feature information based on a determined shape of a right window, and preferably on a length of a slope of the right window. More preferably, the first window feature information indicates whether the right window has the same slope as a 1024-sample long window, i.e., whether the slope of the right window has the length of a long window slope, of which meaning is shown in Table 1. [Table 1]
  • a second window feature information generating unit 320 generates second window feature information only if the first window feature information does not indicate that the slope of the right window has the length of a long window slope.
  • the second window feature information is determined based on a frequency transform unit (MDCT unit) of a current window frame.
  • the second window feature information generating unit 320 includes an MDCT unit length determining unit 321 and a second window feature information determining unit 322.
  • the MDCT unit length determining unit 321 determines a length of a unit used in performing the modified discrete cosine transform (MDCT) for transforming a current window frame, constructed in the time domain, into a frequency signal.
  • the MDCT unit length determining unit 321 determines whether the MDCT unit is 1024/1152 samples, corresponding to a length of a long window, or 128 samples, corresponding to a length of a short window.
  • the second window feature information determining unit 322 determines second window feature information based on the MDCT unit length determined by the MDCT unit length determining unit 321. Detailed meaning of the second window feature information is shown in Table 2. [Table 2]
  • FIG. 4 is a schematic block diagram of a signal processing decoding device 400 according to one embodiment of the present invention.
  • a signal processing decoding device 400 includes a demultiplexer 410, an IMDCT unit 420, a first window feature information extracting unit 430, a second window feature information extracting unit 440, a window feature determining unit 450 and a window applying unit 460.
  • the demultiplexer 410 receives an input of the bitstream outputted from the multiplexer 270 of the signal processing encoding device 200 shown in FIG. 2, classifies the inputted bitstream per signal or information, and then outputs the classified signals and/or information respectively.
  • the IMDCT unit 420 receives an input of an encoded signal outputted from the demultiplexer 410 and is then able to perform inverse modified discrete cosine transform (hereinafter abbreviated IMDCT).
  • IMDCT: inverse modified discrete cosine transform
  • the IMDCT follows a general IMDCT method.
  • the first window feature information extracting unit 430 is able to extract the first window feature information from the signal inputted from the demultiplexer 410.
  • the meaning and feature of the first window feature information are equal to those described with reference to FIG. 2 and FIG. 3, of which details are omitted from the following description.
  • the second window feature information extracting unit 440 is able to extract the second window feature information from the signal inputted from the demultiplexer 410.
  • the window feature determining unit 450 receives the first window feature information and the second window feature information as inputs.
  • the demultiplexer 410 is able to output a window feature of a previous window frame, which indicates a window feature applied to the previous window frame.
  • the window feature determining unit 450 is able to receive the window feature of the previous window frame.
  • the window feature determining unit 450 obtains information indicating whether a slope of a right window of a current window frame indicates a length of a long window slope from the first window feature information and also obtains information on an MDCT unit of the current window frame from the second window feature information, thereby being able to determine a shape of the right window (corresponding to the current frame) of the current window frame. Moreover, the window feature determining unit 450 is able to determine a shape of a left window (corresponding to a previous frame) of the current window frame using the window feature of the previous window frame. Thus, the window feature of the current window frame is determined according to the determined shape of the right window of the current window frame and the determined shape of the left window of the current window frame.
  • a current window frame is the window frame 3 150.
  • the left window shape of the window frame 3 150, which was determined in the window frame 2 140 as shown in FIG. 1, can be determined based on the window feature of the previous window frame.
  • the shape of the left window of the current window frame can be determined according to the shape of the right window of the previous window frame.
  • the left window shape of the current window frame can be determined according to the first window feature information that indicates whether the right window of the previous window frame is the length of the long window.
  • mapping relations according to a method of determining a window feature of a current window frame according to a window feature of a previous window frame and first and second window feature information of the current window frame will be described with reference to Tables 3 to 5 and FIG. 6 later.
  • the window applying unit 460 is able to apply a window to the current window frame based on the determined window feature of the current window frame. Thereafter, it is apparent that the current frame can be decoded by performing post-processing on the window applied current window frame.
  • a window feature of a current window frame is defined to have one of four kinds of values such as 0 (binary 00), 1 (binary 01), 2 (binary 10) and 3 (binary 11) and can be expressed using 2 bits.
  • the signal processing apparatus and method according to the present invention represent the first window feature information and the second window feature information as three kinds of cases including 0 (binary 00), 2 (binary 10) and 3 (binary 11).
  • Tables 3 to 5 in the following show the detailed mapping relations according to a method of determining a window feature of a current window frame in association with a window feature of a previous window frame and first and second window feature information of the current window frame.
  • if a window feature of a previous frame is LPD_START_SEQUENCE, i.e., if a shape of a right window of a current window frame has a window shape applied to an LPD frame, the next frame is switched to a speech signal instead of an audio signal using block switching. Hence, it is not necessary to consider a subsequent change of a window feature.
  • LPD_START_SEQUENCE is omitted from a window feature of a previous frame.
  • a window feature of a current window frame is represented as LPD_SEQUENCE.
  • a window feature of a current window frame is determined as either EIGHT_SHORT_SEQUENCE constructed with 8 short windows (128 samples) or LONG_STOP_SEQUENCE indicating that a slope length of a right window of a current window frame is a long window.
  • a window feature of a current window frame is determined as either STOP_1152_SEQUENCE indicating that a slope length of a right window is a long window or STOP_START_1152_SEQUENCE indicating that a slope length of a right window is a short window (a left window in both cases is a long window used for LPD).
  • In this case, available window features of a current window frame are limited to two kinds. Hence, it is able to transfer a window feature of a current window frame using 1 bit only. This is reflected in the mapping relation shown in Table 4.
  • FIG. 5 is a diagram for a method of determining a window feature of a current frame according to the mapping relation shown in Table 4.
  • Each block in FIG. 5 indicates a window frame and a name within the block indicates a window feature of a window frame.
  • a window feature located on the left of an arrow indicates a window feature of a previous window frame.
  • a window feature located on the right of the arrow indicates a window feature of a current window frame.
  • a long window is indicated without shading.
  • a short window is indicated by a slashed shading.
  • an LPD window is indicated by a dotted shading.
  • a block using 8 short windows is indicated by a latticed shading.
  • since LONG_STOP_SEQUENCE uses a short window as a left window and a long window as a right window, a left half of the window frame is represented by a shaded block, while a right half is represented by a non-shaded block.
  • a numeral written next to an arrow indicates a path determined using first window feature information and second window feature information, which is described in Table 4.
  • a window feature of a current window frame depends on a window feature of a previous window frame and can be determined as one of a maximum of three types. Therefore, it is able to reduce the number of used bits in a manner that the window feature of the current frame is represented as one of a maximum of three cases using the first window feature information and the second window feature information. Moreover, it is able to further reduce the total number of bits in a manner that a value having a highest frequency is represented as the value 0 (binary 0) so as to use 1 bit only.
  • when a window frame is switched by block switching, a case of switching from STOP_1152_SEQUENCE to LPD_START_SEQUENCE, a case of switching from LONG_STOP_SEQUENCE to LPD_START_SEQUENCE, a case of switching from LONG_START_SEQUENCE to STOP_START_SEQUENCE, or a case of switching from STOP_START_SEQUENCE to STOP_START_SEQUENCE is possible on a predetermined condition only and is not mandatory. Thus, it is possible to perform coding except in the above cases.
  • the mapping relation for this case is shown in Table 5. In case that the mapping relation shown in Table 5 is established, it is occasionally able to use 1 bit as the number of bits indicating a window feature of a current window frame. Therefore, it is able to considerably reduce the number of used bits. [Table 5]
  • FIG. 6 is a detailed block diagram of another example of the first window feature information extracting unit 430, the second window feature information extracting unit 440 and the window feature determining unit 450 shown in FIG. 4.
  • a first window feature information receiving unit 610 and a second window feature information receiving unit 620 have the same configurations and functions as the former first window feature information extracting unit 430 and the former second window feature information extracting unit 440 shown in FIG. 4, respectively. And, they are able to extract first window feature information and second window feature information of a current window frame by the same method.
  • the second window feature information receiving unit 620 can be activated (on) only if the first window feature information received from the first window feature information receiving unit 610 does not indicate a slope length of a long window. If the first window feature information indicates the slope length of the long window, the second window feature information receiving unit 620 is not activated (off).
  • the window feature determining unit 630 can receive an input of a window feature of a previous window frame as well as the first window feature information and the second window feature information. Moreover, the window feature determining unit 630 is able to further receive coding mode information of a current frame, which indicates a coding mode of the current frame, and coding mode information of a previous frame, which indicates a coding mode of the previous frame.
  • the coding mode indicates whether a current frame is encoded by an audio coding scheme for coding an audio signal or a speech coding scheme for coding a speech signal.
  • the window feature determining unit 630 is able to determine a window feature of a current window frame by means of further using the coding mode information of the previous frame and the coding mode information of the current frame.
  • the above-described signal processing method is able to use a window feature of a previous window frame and coding mode information of a previous frame. Yet, in case that decoding is performed from the middle of a bitstream of a broadcast or the like, it is impossible to know a window feature of a previous window frame, which may cause a problem in determining a window feature of a current window frame. Hence, this problem can be solved in a manner that information related to a window feature of a previous frame is contained in a header part of a bitstream (an illustrative sketch of this header-based approach appears at the end of this Definitions section).
  • FIG. 7 is a syntax indicating a signal processing method according to another embodiment of the present invention.
  • if a previous window feature flag indicating a window feature of a previous window frame is included in a header of a bitstream, it is able to determine a window feature of a current window frame even if decoding starts in the middle of a bitstream such as a broadcast bitstream and the like.
  • FIG. 8 is a syntax indicating a signal processing method according to another embodiment of the present invention.
  • previous coding mode information (last_core_mode) indicating a coding scheme of a previous frame is added to a header of a bitstream.
  • last_core_mode: previous coding mode information
  • in general, a window feature of a previous window frame is necessary to determine a window feature of a current window frame.
  • however, if a coding method of a previous frame uses a speech coding mode, a window feature of a previous window frame is unnecessary, so a previous window feature need not be received.
  • a signal processing apparatus represented by the above-described syntax is thereby able to obtain a window feature of a previous window frame and coding mode information of a previous frame to determine a window feature of a current window frame.
  • FIG. 9 shows an example of a signal processing encoding device according to a further embodiment of the present invention
  • FIG. 10 shows an example of a signal processing decoding device according to a further embodiment of the present invention
  • a signal processing apparatus 900 includes a plural channel encoding unit 910, a band extension coding unit 920, an audio signal encoding unit 930, a speech signal encoding unit 940 and a multiplexer 950.
  • the plural channel encoding unit 910 receives an input of a plural channel signal (a signal having at least two channels) (hereinafter named a multi-channel signal) and then generates a mono or stereo downmix signal by downmixing the multi-channel signal. And, the plural channel encoding unit 910 generates spatial information for upmixing the downmix signal into the multi-channel signal.
  • the spatial information can include channel level difference information, inter-channel correlation information, channel prediction coefficient, downmix gain information and the like. If the signal encoding device 900 receives an input of a mono signal, it is understood that the mono signal can bypass the plural channel encoding unit 910 without being downmixed.
  • the band extension encoding unit 920 is able to generate spectral data corresponding to a low frequency band and band extension information for high frequency band extension in a manner of applying a band extension scheme (SBR) to the downmix signal that is an output of the plural channel encoding unit 910.
  • SBR: band extension scheme
  • spectral data on a partial band (e.g., a high frequency band) of the downmix signal is excluded.
  • the band extension information for reconstructing the excluded data can be generated.
  • the signal generated via the band extension coding unit 920 is inputted to the audio signal encoding unit 930 or the speech signal encoding unit 940.
  • the audio signal encoding unit 930 encodes the downmix signal according to an audio coding scheme.
  • the audio coding scheme may follow the AAC (advanced audio coding) standard or HE-AAC (high efficiency advanced audio coding) standard, by which the present invention is non-limited.
  • the audio signal encoding unit 930 can include a modified discrete cosine transform (MDCT) encoder.
  • MDCT: modified discrete cosine transform
  • if the audio signal encoding unit 930 includes the modified discrete cosine transform (MDCT) encoder, it can include a window feature determining unit 931, a window applying unit 932 and an MDCT unit 933.
  • the window feature determining unit 931 is able to include the right window shape determining unit 220, the first window feature information generating unit 230 and the second window feature information generating unit 240, which are shown in FIG. 2.
  • the window feature determining unit 931, the window applying unit 932 and the MDCT unit 933 have the same configurations and functions as the right window shape determining unit 220, the first window feature information generating unit 230/310, the second window feature information generating unit 240/320, the window applying unit 250 and the MDCT unit 260, which are shown in FIG. 2 and FIG. 3.
  • the window feature determining unit 931 is able to generate the first and second window feature information described with reference to FIG. 2 and FIG. 3. If a specific frame or segment of the downmix signal has a large speech characteristic, the speech signal encoding unit 940 encodes the downmix signal according to a speech coding scheme.
  • the speech coding scheme may follow the AMR-WB (adaptive multi-rate wideband) standard, by which the present invention is non-limited.
  • the speech signal encoding unit 940 can further use a linear prediction coding (LPC) scheme. If a harmonic signal has high redundancy on a time axis, it can be modeled by linear prediction for predicting a present signal from a past signal.
  • LPC: linear prediction coding
  • the speech signal encoding unit 940 can correspond to a time domain encoder.
  • the multiplexer 950 generates at least one bitstream by multiplexing the spatial information, the band extension information, the signals respectively encoded by the audio signal encoding unit 930 and the speech signal encoding unit 940, the first window feature information and the second window feature information together.
  • a signal decoding device 1000 includes a demultiplexer 1010, an audio signal decoding unit 1020, a speech signal decoding unit 1030, a band extension decoding unit 1040 and a plural channel decoding unit 1050.
  • the audio signal decoding unit 1020 includes an IMDCT unit 1021, a window feature determining unit 1022 and a window applying unit 1023.
  • the demultiplexer 1010 extracts first window feature information, second window feature information, a quantized signal, coding mode information, band extension information, spatial information and the like from a signal bitstream.
  • the IMDCT unit 1021 performs inverse modified discrete cosine transform on the inputted signal.
  • the window feature determining unit 1022 determines a window feature of a current window frame using the first window feature information, the second window feature information, a window feature of a previous window frame, the coding mode information and coding mode information of a previous frame (not shown in the drawing).
  • the window feature of the current window frame can be determined by the former method described with reference to FIGs. 4 to 8 and its details are omitted from the following description. If an audio signal (e.g., a spectral coefficient as a result of dequantization) has a large audio characteristic, the audio signal decoding unit 1020 decodes the audio signal according to an audio coding scheme.
  • the audio coding scheme may follow the AAC (advanced audio coding) standard or HE-AAC (high efficiency advanced audio coding) standard. If the audio signal has a large speech characteristic, the speech signal decoding unit 1030 decodes the downmix signal according to a speech coding scheme. In this case, as mentioned in the foregoing description, the speech coding scheme may follow the AMR-WB (adaptive multi-rate wideband) standard, by which the present invention is non-limited.
  • AAC: advanced audio coding
  • HE-AAC: high efficiency advanced audio coding
  • the band extension decoding unit 1040 reconstructs a signal on a high frequency band based on the band extension information by performing a band extension decoding scheme on the output signals from the audio and speech signal decoding units 1020 and 1030. And, the plural channel decoding unit 1050 generates an output channel signal of a multi-channel signal (stereo signal included) using spatial information if the decoded audio signal is a downmix.
  • FIG. 11 is a schematic diagram of a configuration of a product including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
  • FIG. 12 is a schematic diagram of relations between products including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
  • a wire/wireless communication unit 1110 receives a bitstream by wire/wireless communications.
  • the wire/wireless communication unit 1110 includes at least one of a wire communication unit 1111, an infrared communication unit 1112, a Bluetooth unit 1113 and a wireless LAN communication unit 1114.
  • a user authenticating unit 1120 receives an input of user information and then performs user authentication.
  • the user authenticating unit 1120 can include at least one of a fingerprint recognizing unit 1121, an iris recognizing unit 1122, a face recognizing unit 1123 and a voice recognizing unit 1124.
  • the user authentication can be performed in a manner of receiving an input of fingerprint information, iris information, face contour information or voice information, converting the inputted information to user information, and then determining whether the user information matches user data which was previously registered.
  • An input unit 1130 is an input device for enabling a user to input various kinds of commands. And, the input unit 1130 can include at least one of a keypad unit 1131, a touchpad unit 1132 and a remote controller unit 1133, by which examples of the input unit 1130 are non-limited.
  • a signal decoding unit 1140 includes an IMDCT unit 1141, a window feature determining unit 1142 and a window applying unit 1143, which have the same configurations and functions as the former IMDCT unit 420, the former window feature determining unit 450 and the former window applying unit 460 described with reference to FIG. 4, respectively. And, details of the signal decoding unit 1140 are omitted in the following description.
  • a control unit 1150 receives input signals from the input devices and controls all processes of the signal decoding unit 1140 and an output unit 1160.
  • the output unit 1160 is an element for outputting an output signal and the like generated by the signal decoding unit 1140.
  • the output unit 1160 can include a signal output unit 1161 and a display unit 1162. If an output signal is an audio signal, it is outputted via the signal output unit 1161. If an output signal is a video signal, it is outputted via the display unit 1162. Moreover, if metadata is inputted to the input unit 1130, it is displayed on a screen via the display unit 1162.
  • FIG. 12 shows relation between terminals or between terminal and server, which correspond to the product shown in FIG. 11.
  • bidirectional communications of data or bitstream can be performed between a first terminal 1210 and a second terminal 1220 via wire/wireless communication units.
  • the data or bitstream exchanged via the wire/wireless communication unit may include the data including the first window feature information, the second window feature information and the like of the present invention described with reference to FIGs. 1 to 3.
  • wire/wireless communications can be performed between a server 1230 and a first terminal 1240.
  • FIG. 13 is a schematic block diagram of a broadcast signal decoding device including an IMDCT unit 1341, a window feature determining unit 1342 and a window applying unit 1343 according to one embodiment of the present invention.
  • a demultiplexer 1320 receives a plurality of data related to a TV broadcast from a tuner 1310. The received data are separated by the demultiplexer 1320 and are then decoded by a data decoder 1330. Meanwhile, the data separated by the demultiplexer 1320 can be stored in such a storage medium 1350 as an HDD.
  • the data separated by the demultiplexer 1320 are inputted to a signal decoding unit 1340, and the signal decoding unit 1340 decodes an audio signal and a video signal.
  • the signal decoding unit 1340 includes an IMDCT unit 1341, a window feature determining unit 1342, a window applying unit 1343 and a video decoding unit 1344. They have the same configurations and functions of the former units of the same names shown in FIG. 4 and their details are omitted in the following description.
  • An output unit 1370 outputs the video signal and the audio signal outputted from the signal decoding unit 1340.
  • the audio signal may include the signal that is decoded by applying a window feature of a current window frame.
  • the window feature is determined using first window feature information and second window feature information.
  • the data decoded by the signal decoding unit 1340 can be stored in a storage medium 1350 such as an HDD.
  • the signal decoding device 1300 can further include an application manager 1360 capable of controlling a plurality of data received according to an input of information from a user.
  • the application manager 1360 includes a user interface manager 1361 and a service manager 1362.
  • the user interface manager 1361 controls an interface for receiving an input of information from a user. For instance, the user interface manager 1361 is able to control a font type of text displayed on the output unit 1370, a screen brightness, a menu configuration and the like.
  • the service manager 1362 is able to control a received broadcast signal using information inputted by a user.
  • the service manager 1362 is able to provide a broadcast channel setting, an alarm function setting, an adult authentication function, etc.
  • the data outputted from the application manager 1360 are usable by being transferred to the output unit 1370 as well as the signal decoding unit 1340.
  • the present invention represents a window feature of a current window frame affected by a window feature of a previous window frame using the reduced number of bits, thereby raising coding efficiency in processing a signal.
  • the decoding/encoding method to which the present invention is applied can be implemented in a program-recorded medium as computer-readable code.
  • multimedia data having the data structure of the present invention can be stored in the computer-readable recording medium.
  • the computer-readable recording media include all kinds of storage devices in which data readable by a computer system are stored.
  • the computer-readable media include ROM, RAM, CD-ROM, magnetic tapes, floppy discs, optical data storage devices, and the like for example and also include carrier-wave type implementations (e.g., transmission via Internet).
  • a bitstream generated by the encoding method is stored in a computer-readable recording medium or can be transmitted via wire/wireless communication network.
  • the present invention is applicable to encoding and decoding of signals.
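
For illustration of the header-based approach described for FIG. 7 and FIG. 8 above (referenced in the corresponding item of this list), the following minimal Python sketch shows how a decoder might recover the previous-frame state from a bitstream header when decoding starts in the middle of a stream such as a broadcast. It is only a sketch under stated assumptions: the element names last_core_mode and the previous window feature flag follow the description above, but the reader interface, field widths, bit polarities and the helper decode_frame are hypothetical, not the normative syntax.

```python
def decode_from_mid_stream(header_bits, frame_payloads):
    """Recover previous-frame state from the header, then decode frames.

    Illustrative only: element order, widths and polarities are assumptions.
    """
    # last_core_mode: coding scheme of the frame preceding the first decoded one
    # (assumed convention: 0 = audio coding scheme, 1 = speech coding scheme).
    last_core_mode = header_bits.read_bit()
    if last_core_mode == 0:
        # Only when the previous frame was audio-coded is its window feature needed
        # (the flag is assumed to be a single bit here).
        prev_window_feature = header_bits.read_bit()
    else:
        prev_window_feature = None   # a previous window feature need not be received
    state = (last_core_mode, prev_window_feature)
    for payload in frame_payloads:
        state = decode_frame(payload, state)   # hypothetical per-frame decoder
    return state


def decode_frame(payload, state):
    # Placeholder: would determine the current window feature from `state` and the
    # first/second window feature information, then decode and window the frame.
    raise NotImplementedError
```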

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method of decoding an audio signal comprises obtaining a window feature of a previous window frame indicating a window used for the previous window frame; extracting first window feature information indicating a length of a right window slope of a current window frame; when the first window feature information indicates a length of a long window slope, determining a window feature of the current window frame by using the first window feature information and the window feature of the previous window frame; when the first window feature information does not indicate a length of a long window slope, extracting second window feature information indicating a unit of frequency transform of the current window frame and determining a window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame; and decoding an audio signal based on the window feature of the current window frame. Accordingly, by determining a window feature of a current window frame using a window feature of a previous window frame, a feature of a right window of the current window frame applied to a current frame, first window feature information and second window feature information indicating an MDCT unit, it is able to reduce the number of bits allocated to indicate the window feature.

Description

[DESCRIPTION] [INVENTION TITLE]
A METHOD AND AN APPARATUS FOR PROCESSING A SIGNAL
[TECHNICAL FIELD]
The present invention relates to an apparatus for processing an audio signal and method thereof. Although the present invention is suitable for a wide scope of applications, it is particularly suitable for reducing the number of bits for transferring a window feature by determining a window feature of a current frame using first window feature information, second window feature information and a window feature of a previous frame and then using the determined window feature.
[BACKGROUND ART]
Generally, a signal can be coded using 9 kinds of window features in case of coding an audio or speech signal.
[DISCLOSURE] [TECHNICAL PROBLEM]
However, since a window feature used for signal coding of a current frame is dependent on a window feature of a previous frame, the number of bits allocated to represent a substantially usable window feature of a current frame may be smaller than the number of bits allocated to represent all window features.
[TECHNICAL SOLUTION]
Accordingly, the present invention is directed to an apparatus for processing a signal and method thereof that substantially obviate one or more of the problems due to limitations and disadvantages of the related art. An object of the present invention is to provide an apparatus for processing a signal and method thereof, by which a window feature of a current frame represented using fewer bits can be used, by determining a window feature of a current frame using first window feature information, second window feature information and a window feature of a previous frame.
[ADVANTAGEOUS EFFECTS]
Accordingly, the present invention provides the following effects or advantages.
First of all, in an apparatus for processing a signal and method thereof according to the present invention, by determining a window feature of a current window frame using a window feature of a previous window frame, a feature of a right window of the current window frame applied to a current frame, first window feature information and second window feature information indicating an MDCT unit, it is able to reduce the number of bits allocated to indicate the window feature. Secondly, in an apparatus for processing a signal and method thereof according to the present invention, by allocating 0 (binary 0) to a most frequently used window feature among the information indicating window features of a current window frame, it is able to considerably reduce the total number of bits transferred from an encoder.
[DESCRIPTION OF DRAWINGS]
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
In the drawings:
FIG. 1 is a diagram for relations between a frame and a window frame used for a signal processing apparatus according to one embodiment of the present invention;
FIG. 2 is a schematic block diagram of a signal encoding device according to one embodiment of the present invention;
FIG. 3 is a detailed block diagram of a first window feature information generating unit and a second window feature information generating unit shown in FIG. 2;
FIG. 4 is a schematic block diagram of a signal decoding device according to one embodiment of the present invention;
FIG. 5 is a diagram for a method of determining a window feature of a current frame according to one embodiment of the present invention;
FIG. 6 is a detailed block diagram of a first window feature information receiving unit, a second window feature information receiving unit and a window feature determining unit according to one embodiment of the present invention;
FIG. 7 and FIG. 8 are diagrams of syntaxes indicating signal decoding methods according to various embodiments of the present invention, respectively;
FIG. 9 is a schematic block diagram of a signal encoding device according to another embodiment of the present invention;
FIG. 10 is a schematic block diagram of a signal decoding device according to another embodiment of the present invention;
FIG. 11 is a schematic diagram of a configuration of a product including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention;
FIG. 12 is a schematic diagram of relations between products including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention; and
FIG. 13 is a schematic block diagram of a broadcast signal decoding device including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
[BEST MODE]
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, a method of decoding a signal comprises obtaining a window feature of a previous window frame indicating a window used for the previous window frame; extracting first window feature information indicating a length of a right window slope of a current window frame; when the first window feature information indicates a length of a long window slope, determining a window feature of the current window frame by using the first window feature information and the window feature of the previous window frame; when the first window feature information does not indicate a length of a long window slope, extracting second window feature information indicating a unit of frequency transform of the current window frame and determining a window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame; and decoding an audio signal based on the window feature of the current window frame.
Preferably, the window feature of the current window frame is determined further by using coding mode information of a current frame indicating a coding mode of the current frame, and coding mode information of a previous frame indicating a coding mode of the previous frame.
Preferably, the window feature of the previous window frame is determined based on first window feature information of the previous window frame. Preferably, the window feature of the current window frame is one of a long window feature indicating that the left window and the right window are long windows, and a long stop window feature indicating that the left window is a short window and the right window is a long window.
Preferably, a bit length of each of the first window feature information and the second window feature information is 1 bit.
To further achieve these and other advantages and in accordance with the purpose of the present invention, an apparatus for decoding a signal comprises a window feature obtaining unit obtaining a window feature of a previous window frame indicating a window used for the previous window frame; a first window feature information extracting unit extracting first window feature information indicating a length of a right window slope of a current window frame; a second window feature information extracting unit extracting second window feature information indicating a unit of frequency transform of the current window frame when the first window feature information does not indicate a length of a long window slope; and a window feature determining unit determining a window feature of the current window frame based on the first window feature information, the second window feature information and the window feature of the previous window frame, wherein the second window feature information extracting unit does not operate when the first window feature information indicates a length of a long window slope.
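
As a non-normative illustration of the decoding flow summarized above, the following Python sketch reads the two window feature information fields and dispatches to a table lookup. The BitReader class and the helper select_window_feature are hypothetical, and the convention that a first bit of 0 denotes a long right window slope (the single-bit case) is an assumption; the normative meanings are given by Table 1 and Tables 3 to 5 of this disclosure, which are not reproduced here.

```python
class BitReader:
    """Minimal MSB-first bit reader over a bytes object (illustrative only)."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read_bit(self) -> int:
        byte, offset = divmod(self.pos, 8)
        self.pos += 1
        return (self.data[byte] >> (7 - offset)) & 1


LONG_SLOPE = 0  # assumed convention: first bit 0 = right window slope is a long slope


def decode_window_feature(bits: BitReader, prev_window_feature: str) -> str:
    """Determine the window feature of the current window frame."""
    first_info = bits.read_bit()                 # length of the right window slope (1 bit)
    if first_info == LONG_SLOPE:
        # The second window feature information is not transmitted in this case.
        return select_window_feature(prev_window_feature, first_info, None)
    second_info = bits.read_bit()                # unit of frequency transform, i.e. MDCT unit (1 bit)
    return select_window_feature(prev_window_feature, first_info, second_info)


def select_window_feature(prev_feature, first_info, second_info):
    # Placeholder for the mapping of Tables 3 to 5: the previous window frame's
    # feature fixes the left window shape, while the first/second information
    # fix the right window shape, leaving at most a few admissible features.
    raise NotImplementedError("table lookup per Tables 3 to 5")
```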
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
[MODE FOR INVENTION]
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. First of all, the terminologies or words used in this specification and claims are not to be construed as limited to their general or dictionary meanings but should be construed as the meanings and concepts matching the technical idea of the present invention, based on the principle that an inventor is able to appropriately define the concepts of the terminologies to describe the invention in the best way. The embodiments disclosed in this disclosure and the configurations shown in the accompanying drawings are merely preferred embodiments and do not represent all of the technical ideas of the present invention. Therefore, it is understood that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents at the time of filing this application.
First of all, it is understood that the concept 'coding' in the present invention includes both encoding and decoding.
Secondly, 'information' in this disclosure is a terminology that generally includes values, parameters, coefficients, elements and the like, and its meaning may occasionally be construed differently, by which the present invention is non-limited. A stereo signal is taken as an example of a signal in this disclosure, by which examples of the present invention are non-limited. For example, a signal in this disclosure may include a plural channel signal having at least three or more channels.
Since features of an audio signal having a dominant music component are different from those of a speech signal having a dominant speech component, an original sound cannot be faithfully reconstructed if both kinds of signals are coded using the same coding device. Hence, it is necessary to code each signal using a coding device suitable for it. In case of coding an audio signal, samples of frames are transformed into values in a frequency domain through the modified discrete cosine transform (hereinafter abbreviated MDCT). In doing so, a window determined according to a characteristic of the audio signal is applied to each frame and the MDCT is then performed on the window-applied audio signal.
FIG. 1 is a conceptual diagram illustrating the unit to which a window is applied at a signal coder side according to the present invention. Referring to FIG. 1, an audio signal can be partitioned into a plurality of frames 110. In this case, a length of a window frame 120/130/... applied to the audio signal may be equal to a sum of a length of a current frame of the audio signal and a length of a previous frame. For instance, the length of the window frame 2 140 is equal to a sum of the length of the frame 1 112 of the audio signal and the length of the frame 2 113 of the audio signal. And, the length of the window frame 3 150 is equal to a sum of the length of the frame 2 113 of the audio signal and the length of the frame 3 114 of the audio signal. Moreover, the modified discrete cosine transform can then be performed in units of the window-applied audio signal, i.e., in units of 2048 samples if the frames of the audio signal are constructed with 1024 samples. As mentioned in the foregoing description of this disclosure, a frame of a general audio signal is named differently from a window frame, which is the unit for applying a window.
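
As a rough numerical sketch of this framing (not part of the disclosure), the following Python code builds 2048-sample window frames from 1024-sample frames, applies a window and performs the MDCT. A sine window and the direct (slow) MDCT definition are assumed purely for illustration; in the described method the left and right window shapes follow the window feature of each window frame.

```python
import numpy as np

FRAME = 1024  # samples per (long) audio frame


def mdct(x):
    """Direct (slow) MDCT of a 2*N-sample block into N spectral coefficients."""
    n2 = len(x)              # 2048 samples for a long window frame
    n = n2 // 2
    k = np.arange(n)
    t = np.arange(n2)
    basis = np.cos(np.pi / n * (t[None, :] + 0.5 + n / 2) * (k[:, None] + 0.5))
    return basis @ x


def encode_window_frames(signal):
    """Window frame = previous frame + current frame, then MDCT per window frame."""
    window = np.sin(np.pi * (np.arange(2 * FRAME) + 0.5) / (2 * FRAME))  # assumed sine window
    spectra = []
    for start in range(0, len(signal) - 2 * FRAME + 1, FRAME):
        window_frame = signal[start:start + 2 * FRAME]   # e.g. frame 2 + frame 3
        spectra.append(mdct(window * window_frame))      # 1024 coefficients per frame
    return spectra
```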
First of all, a window feature applied to a window frame includes an LPD sequence (LPD_sequence) used for a speech signal. And, a window feature for an audio signal includes an only long sequence (ONLY_LONG_SEQUENCE), a long start sequence (LONG_START_SEQUENCE), an eight short sequence (EIGHT_SHORT_SEQUENCE), a long stop sequence (LONG_STOP_SEQUENCE), a stop start sequence (STOP_START_SEQUENCE), an LPD start sequence (LPD_START_SEQUENCE), a stop 1152 sequence (STOP_1152_SEQUENCE), and a stop start 1152 sequence (STOP_START_1152_SEQUENCE).
Since there are 8 kinds of window features applicable to a window frame in case of an audio signal, they can be represented with 3 bits (log2 8 = 3). Yet, since the actually usable window feature depends on the window feature of the previous window frame, the number of bits required to represent the window feature of a current window frame can be reduced by considering the relation between the actual window feature and the window feature of the previous window frame.
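As a rough illustration of this bit accounting, the sketch below counts the fixed-length bits needed to distinguish a given number of candidate window features; the candidate counts passed in are examples only, and the variable-length signalling used later in this description is not modelled here.

import math

def bits_needed(num_candidates):
    # Fixed-length bits required to distinguish num_candidates window features.
    return math.ceil(math.log2(num_candidates)) if num_candidates > 1 else 0

print(bits_needed(8))  # 3 bits when all eight audio window features are possible
print(bits_needed(3))  # 2 bits when the previous window frame leaves only three choices
print(bits_needed(2))  # 1 bit when only two choices remain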
Referring now to FIG. 1, if the current window frame is the window frame 3 150, the window feature of the window frame 3 160 will be applied to the frame 2 113 and the frame 3 114 of the audio signal. Meanwhile, the frame 2 113 is included in the window frame 2 140, which is the previous window frame. And, the window feature corresponding to the frame 2 113 is already determined as the window feature of the window frame 2 140. Hence, the shape of the left window in the window feature of the window frame 3 150 (i.e., the portion overlapping the frame 2 113) is determined as a shape that meets the following condition with the previous window frame.
[Formula 1]
b + b' = 1
(b: the right window slope of the previous window frame; b': the left window slope of the current window frame, over the overlapping samples)
Moreover, the shape of the right window of the window frame 3 150 corresponding to the frame 3 114 is determined according to a characteristic of the audio signal of the frame 3 114, whereby the window feature of the window frame 3 150 can be determined as one of the 8 types.
Thus, since the window feature of a window frame depends on the window feature of the previous window frame as well as on a characteristic of the input signal, the types that can actually be selected as the window feature of a current window frame are substantially limited. Therefore, a signal processing apparatus and method according to one embodiment of the present invention reduce the number of bits allocated to indicate a window feature by exploiting the fact that the window feature of a current window frame is limited in this way. FIG. 2 is a schematic block diagram of a signal encoding device 200 according to one embodiment of the present invention.
Referring to FIG. 2, the signal encoding device 200 includes a receiving unit 210, a right window shape determining unit 220, a first window feature information generating unit 230, a second window feature information generating unit 240, a window applying unit 250, an MDCT unit 260 and a multiplexer 270.
The receiving unit 210 is able to receive a window feature of a previous window frame and an input signal of a current frame. As mentioned in the foregoing description, a current window frame is a unit that is applied to a current frame and a previous frame. Referring now to FIG. 1, the window frame 3 150 is applicable to the frame 3 114 (i.e., current frame) and the frame 2 113 (i.e., previous frame). Hence, if the current frame is the frame 3 114, the receiving unit 210 is able to receive a window feature of the window frame 2 140 and an input signal corresponding to the frame 3 114.
The right window shape determining unit 220 determines a shape of a right window of the window frame 3 150 according to a feature of the input signal (frame 3).
For instance, the shape of the right window of the current window frame is determined differently depending on whether the input signal uses a long frame having a frame length of 1024 samples or a short frame having a frame length of 128 samples.
The first window feature information generating unit 230 generates first window feature information based on the determined shape of the right window, and preferably on the length of the slope of the right window. Preferably, it can be determined by referring to the frame length of the input signal, of which details will be explained later with reference to FIG. 3.
Only if the first window feature information indicates that the length of the right window slope is not the length of a long window slope does the second window feature information generating unit 240 generate second window feature information. The second window feature information can indicate a frequency transform (MDCT) unit of the current window frame, of which details will be explained later with reference to FIG. 3.
The window applying unit 250 is able to apply the determined window feature of the current window frame to the current frame and the previous frame. The MDCT unit 260 transforms a time domain signal into a frequency signal (frequency spectrum) by a window frame unit to which the window feature is applied. The MDCT unit 260 is then able to transfer the transformed frequency signal to a decoder side.
The multiplexer 270 enables the window feature of the current window frame, the first window feature information, the second window feature information and the signal transformed into the frequency spectrum (MDCT-transformed signal) to be contained in one bitstream and then outputs the bitstream from the encoding device 200. Meanwhile, in case of determining a window frame applied to a next frame (e.g., frame 4) located behind a current frame (e.g., frame 3), the window feature of the current window frame (e.g., the window feature of the window frame 3) is needed as the window feature of a previous window frame. Therefore, the encoding device 200 may further include a window feature storing unit (not shown in the drawing) for storing the window feature of the current window frame.
FIG. 3 is a detailed block diagram of the first window feature information generating unit and the second window feature information generating unit shown in FIG. 2.
Referring to FIG. 3, a first window feature information generating unit 310 generates first window feature information based on the determined shape of the right window, and preferably on the length of the slope of the right window. More preferably, the first window feature information indicates whether the right window has the same slope as a 1024-sample long window, i.e., whether the slope of the right window has the length of a long window slope, of which meaning is shown in Table 1. [Table 1]
[Table 1: meaning of the first window feature information, i.e., whether the slope of the right window of the current window frame has the length of a long window slope; table contents not reproduced in this text.]
A second window feature information generating unit 320 generates second window feature information only if the first window feature information does not indicate that the right window slope has the length of a long window slope. The second window feature information is determined based on the frequency transform unit (MDCT unit) of the current window frame. The second window feature information generating unit 320 includes an MDCT unit length determining unit 321 and a second window feature information determining unit 322.
The MDCT unit length determining unit 321 determines the length of the unit used in performing the modified discrete cosine transform (MDCT) to transform the current window frame, constructed in the time domain, into a frequency signal. Preferably, the MDCT unit length determining unit 321 determines whether the MDCT unit is 1024/1152 samples, corresponding to the length of a long window, or 128 samples, corresponding to the length of a short window. The second window feature information determining unit 322 determines the second window feature information based on the MDCT unit length determined by the MDCT unit length determining unit 321. The detailed meaning of the second window feature information is shown in Table 2. [Table 2]
[Table 2: meaning of the second window feature information, i.e., the MDCT unit of the current window frame (1024/1152-sample long unit vs. 128-sample short unit); table contents not reproduced in this text.]
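The encoder-side behaviour of the units 310 and 320 can be pictured with the following sketch. The concrete 0/1 code points are assumptions (the actual assignments are those of Tables 1 and 2, which are not reproduced here); only the conditional generation of the second window feature information is taken from the description.

LONG_SLOPE = "long"    # right window slope has the length of a long window slope
SHORT_SLOPE = "short"  # right window slope does not have the length of a long window slope

def generate_window_feature_info(right_slope, mdct_unit_samples):
    # First window feature information: does the right window slope have a long length?
    # (0 = long slope, 1 = not long; these bit values are assumed, not taken from Table 1.)
    first_info = 0 if right_slope == LONG_SLOPE else 1
    second_info = None
    if first_info != 0:
        # Second window feature information is generated only when the slope is not long;
        # it signals the MDCT unit of the current window frame (1024/1152 vs 128 samples).
        second_info = 0 if mdct_unit_samples in (1024, 1152) else 1
    return first_info, second_info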
FIG. 4 is a schematic block diagram of a signal processing decoding device 400 according to one embodiment of the present invention. The signal processing decoding device 400 includes a demultiplexer 410, an IMDCT unit 420, a first window feature information extracting unit 430, a second window feature information extracting unit 440, a window feature determining unit 450 and a window applying unit 460.
First of all, the demultiplexer 410 receives an input of the bitstream outputted from the multiplexer 270 of the signal processing encoding device 200 shown in FIG. 2, classifies the inputted bitstream per signal or information, and then outputs the classified signals and/or information respectively.
The IMDCT unit 420 receives an input of an encoded signal outputted from the demultiplexer 410 and is then able to perform inverse modified discrete cosine transform (hereinafter abbreviated IMDCT). In this case, the IMDCT follows a general IMDCT method.
The first window feature information extracting unit 430 is able to extract the first window feature information from the signal inputted from the demultiplexer 410.
The meaning and feature of the first window feature information are equal to those described with reference to FIG. 2 and FIG. 3, of which details are omitted from the following description. The second window feature information extracting unit 440 is able to extract the second window feature information from the signal inputted from the demultiplexer 410. The meaning and feature of the second window feature information are equal to those described with reference to FIG. 2 and FIG. 3, of which details are omitted from the following description.
The window feature determining unit 450 receives the first window feature information and the second window feature information. The demultiplexer 410 is able to output the window feature of a previous window frame, which indicates the window feature applied to the previous window frame. And, the window feature determining unit 450 is able to receive the window feature of the previous window frame.
Hence, the window feature determining unit 450 obtains information indicating whether a slope of a right window of a current window frame indicates a length of a long window slope from the first window feature information and also obtains information on an MDCT unit of the current window frame from the second window feature information, thereby being able to determine a shape of the right window (corresponding to the current frame) of the current window frame. Moreover, the window feature determining unit 450 is able to determine a shape of a left window (corresponding to a previous frame) of the current window frame using the window feature of the previous window frame. Thus, the window feature of the current window frame is determined according to the determined shape of the right window of the current window frame and the determined shape of the left window of the current window frame.
For instance, referring now to FIG. 1, if the current frame to decode is the frame 3 114, the current window frame is the window frame 3 150. In order to determine the window feature of the window frame 3 150, it is possible to determine the right window shape of the window frame 3 150 corresponding to the frame 3 114 using the first window feature information and the second window feature information. In this case, the left window shape of the window frame 3, which was determined in the window frame 2 140 as shown in FIG. 1, can be determined based on the window feature of the previous window frame. In particular, the shape of the left window of the current window frame can be determined according to the shape of the right window of the previous window frame. More particularly, the left window shape of the current window frame can be determined according to the first window feature information that indicates whether the right window of the previous window frame has the length of a long window.
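On the decoder side, the determination performed by the window feature determining unit 450 can be sketched as a table lookup. The transition map below is hypothetical; the real mapping is the one given in Tables 3 to 5, so only the structure of the lookup (previous window feature plus first and second window feature information) is illustrative.

def determine_window_feature(first_info, second_info, prev_window_feature, transitions):
    # transitions: for each previous window feature, the window feature of the current
    # window frame reachable for each (first, second) information pair.
    candidates = transitions[prev_window_feature]
    key = (first_info, second_info if first_info != 0 else None)
    return candidates[key]

# Hypothetical excerpt of a transition map (not the actual contents of Tables 3 to 5):
transitions = {
    "ONLY_LONG_SEQUENCE": {
        (0, None): "ONLY_LONG_SEQUENCE",
        (1, 0): "LONG_START_SEQUENCE",
        (1, 1): "EIGHT_SHORT_SEQUENCE",
    },
}
print(determine_window_feature(1, 1, "ONLY_LONG_SEQUENCE", transitions))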
The detailed mapping relations for determining the window feature of a current window frame from the window feature of the previous window frame and the first and second window feature information of the current window frame will be described later with reference to Tables 3 to 5 and FIG. 5.
Meanwhile, the window applying unit 460 is able to apply a window to the current window frame based on the determined window feature of the current window frame. Thereafter, it is apparent that the current frame can be decoded by performing post-processing on the window-applied current window frame. Generally, a window feature of a current window frame is defined to have one of four kinds of values, i.e., 0(00₂), 1(01₂), 2(10₂) and 3(11₂), and can be expressed using 2 bits. Yet, since the window feature of the current window frame can substantially have no more than three kinds of values, depending on the window feature of the previous window frame, the signal processing apparatus and method according to the present invention represent the first window feature information and the second window feature information as three kinds of cases, i.e., 0(0₂), 2(10₂) and 3(11₂).
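The three-case representation can be realized, for example, as the prefix code sketched below, in which the most frequent case is carried in a single bit and the other two cases in two bits; which window feature each case maps to depends on the previous window frame and is not reproduced here.

# Variable-length code for the three cases 0 ('0'), 2 ('10') and 3 ('11').
CASE_TO_BITS = {0: "0", 2: "10", 3: "11"}
BITS_TO_CASE = {bits: case for case, bits in CASE_TO_BITS.items()}

def read_case(bitstring, pos):
    # Read one case from a bit string starting at pos; return (case, new position).
    if bitstring[pos] == "0":
        return BITS_TO_CASE["0"], pos + 1
    return BITS_TO_CASE[bitstring[pos:pos + 2]], pos + 2

case, pos = read_case("110", 0)  # -> case 3, next position 2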
Tables 3 to 5 below show the detailed mapping relations for determining the window feature of a current window frame in association with the window feature of the previous window frame and the first and second window feature information of the current window frame.
If the window feature of a previous frame is LPD_START_SEQUENCE, i.e., if the shape of the right window of the current window frame has the window shape applied to an LPD frame, the next frame is switched to a speech signal instead of an audio signal using block switching. Hence, it is not necessary to consider a subsequent change of the window feature, and LPD_START_SEQUENCE is omitted from the window features of a previous frame in Table 3. In case that the previous frame uses a speech coding mode, the window feature of the current window frame is represented as LPD_SEQUENCE.
[Table 3]
[Table 3: mapping from the window feature of the previous window frame and the first and second window feature information to the window feature of the current window frame; table contents not reproduced in this text.]
Referring to Table 3, in case that the window feature of the previous window frame is STOP_START_1152_SEQUENCE, the window feature of the current window frame is determined as either EIGHT_SHORT_SEQUENCE, constructed with 8 short windows (128 samples), or LONG_STOP_SEQUENCE, indicating that the slope length of the right window of the current window frame is that of a long window. In case that the window feature of the previous window frame is LPD_SEQUENCE, the window feature of the current window frame is determined as either STOP_1152_SEQUENCE, indicating that the slope length of the right window is that of a long window, or STOP_START_1152_SEQUENCE, indicating that the slope length of the right window is that of a short window (the left window in both cases is a long window used for LPD). Hence, in case that the window feature of the previous window frame is STOP_START_1152_SEQUENCE or LPD_SEQUENCE, the available window features of the current window frame are limited to two kinds, so the window feature of the current window frame can be transferred using only 1 bit. This is reflected in the mapping relation shown in Table 4.
[Table 4]
[Table 4: mapping as in Table 3, using 1-bit signalling where the previous window frame is STOP_START_1152_SEQUENCE or LPD_SEQUENCE; table contents not reproduced in this text.]
FIG. 5 is a diagram for a method of determining a window feature of a current frame according to the mapping relation shown in Table 4. Each block in FIG. 5 indicates a window frame, and the name within the block indicates the window feature of the window frame. A window feature located on the left of an arrow indicates the window feature of the previous window frame, and a window feature located on the right of the arrow indicates the window feature of the current window frame. A long window is indicated without shading. A short window is indicated by slashed shading. An LPD window is indicated by dotted shading. And, a block using 8 short windows is indicated by latticed shading. For example, for LONG_STOP_SEQUENCE, since the left window in the window frame uses a short window and the right window uses a long window, the left half of the window frame is represented by slashed shading while the right half is represented by a non-shaded block.
Moreover, a numeral written next to an arrow indicates a path determined using first window feature information and second window feature information, which is described in Table 4.
Referring to FIG. 5, the window feature of a current window frame depends on the window feature of the previous window frame and can be determined as one of at most three types. Therefore, the number of used bits can be reduced by representing the window feature of the current frame as one of at most three cases using the first window feature information and the second window feature information. Moreover, the total number of bits can be reduced further by representing the value having the highest frequency as the value 0(0₂) so that only 1 bit is used. When a window frame is switched by block switching, a case of switching from STOP_1152_SEQUENCE to LPD_START_SEQUENCE, a case of switching from LONG_STOP_SEQUENCE to LPD_START_SEQUENCE, a case of switching from LONG_START_SEQUENCE to STOP_START_SEQUENCE, or a case of switching from STOP_START_SEQUENCE to STOP_START_SEQUENCE is possible only on a predetermined condition and is not mandatory. Thus, it is possible to perform coding excluding the above cases. The mapping relation for this case is shown in Table 5. In case that the mapping relation shown in Table 5 is established, it is occasionally possible to use 1 bit as the number of bits indicating the window feature of a current window frame. Therefore, the number of used bits can be considerably reduced. [Table 5]
[Table 5: mapping excluding the optional block-switching transitions listed above, allowing 1-bit signalling in further cases; table contents not reproduced in this text.]
FIG. 6 is a detailed block diagram of another example of the first window feature information extracting unit 430, the second window feature information extracting unit 440 and the window feature determining unit 450 shown in FIG. 4.
Referring to FIG. 6, a first window feature information receiving unit 610 and a second window feature information receiving unit 620 are units having the same configurations and functions as the former first window feature information extracting unit 430 and the former second window feature information extracting unit 440 shown in FIG. 4, respectively. And, the first window feature information and the second window feature information of a current window frame can be extracted by the same method. In particular, the second window feature information receiving unit 620 can be activated (on) only if the first window feature information received from the first window feature information receiving unit 610 does not indicate the slope length of a long window. If the first window feature information indicates the slope length of a long window, the second window feature information receiving unit 620 is not activated (off). Like the former window feature determining unit 450 shown in FIG. 4, a window feature determining unit 630 can receive an input of the window feature of a previous window frame as well as the first window feature information and the second window feature information. Moreover, the window feature determining unit 630 is able to further receive coding mode information of the current frame, which indicates the coding mode of the current frame, and coding mode information of the previous frame, which indicates the coding mode of the previous frame. The coding mode indicates whether a frame is encoded by an audio coding scheme for coding an audio signal or by a speech coding scheme for coding a speech signal.
In case of a speech coding scheme, since the length of the frame used differs in an LPC scheme that uses linear prediction, a window feature may become different from that of an audio coding scheme. Therefore, the window feature determining unit 630 is able to determine the window feature of the current window frame by further using the coding mode information of the previous frame and the coding mode information of the current frame.
Thereafter, an audio signal can be decoded by applying the determined window feature of the current window frame to the audio signal. The above-described signal processing method is able to use the window feature of a previous window frame and the coding mode information of a previous frame. Yet, in case that decoding starts in the middle of a bitstream of a broadcast or the like, the window feature of the previous window frame cannot be known, which may cause a problem in determining the window feature of the current window frame. Hence, this problem can be solved by including information related to the window feature of the previous frame in a header part of the bitstream.
FIG. 7 is a syntax indicating a signal processing method according to another embodiment of the present invention.
Referring to FIG. 7, it is able to represent a window feature of a previous window frame by adding a 1-bit previous window feature flag (last_window_sequence) to a header (USACSpecificConfig) of a bitstream. The meaning of the previous window feature flag is shown in Table 6. [Table 6]
[Table 6: meaning of the 1-bit previous window feature flag (last_window_sequence); table contents not reproduced in this text.]
Thus, in a manner that a previous window feature flag indicating a window feature of a previous window frame is included in a header of a bitstream, it is able to determine a window feature of a current window frame even if decoding starts to be performed in the middle of a bitstream such as a broadcast bitstream and the like.
Referring now to FIG. 6, in order to determine a window feature of a current window frame, coding mode information of a previous frame may be needed. Therefore, the syntax needs to be modified to cope with this case.
FIG. 8 is a syntax indicating a signal processing method according to another embodiment of the present invention.
Referring to FIG. 8, it is able to further add previous coding mode information (last_core_mode) indicating a coding scheme of a previous frame to a header of a bitstream. The detailed meaning of the previous coding mode information is shown in Table 7.
[Table 7]
[Table 7: meaning of the previous coding mode information (last_core_mode); table contents not reproduced in this text.]
Referring now to FIG. 8, if the coding method of the previous frame uses an audio coding mode, the window feature of the previous window frame is necessary to determine the window feature of the current window frame. Hence, if the previous coding mode information is set to 0 (last_core_mode = 0), the previous window feature flag (last_window_sequence) can be received. Meanwhile, if the coding method of the previous frame uses a speech coding mode, the window feature of the previous window frame is unnecessary. Hence, the previous window feature flag need not be received.
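The header syntax of FIG. 7 and FIG. 8 can be pictured with the following parsing sketch. The last_window_sequence flag is read as 1 bit, as stated above; treating last_core_mode as a 1-bit field and the MSB-first bit order are assumptions made only for this illustration, not values taken from the actual syntax tables.

class BitReader:
    # Minimal MSB-first bit reader over a bytes object (illustrative only).
    def __init__(self, data):
        self.data, self.pos = data, 0

    def read(self, nbits):
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value

def parse_previous_window_info(reader):
    # Per FIG. 8: the previous window feature flag is present only when the previous
    # frame used the audio coding mode (last_core_mode == 0).
    last_core_mode = reader.read(1)
    last_window_sequence = reader.read(1) if last_core_mode == 0 else None
    return last_core_mode, last_window_sequence

print(parse_previous_window_info(BitReader(bytes([0b01000000]))))  # (0, 1)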
A signal processing apparatus according to one embodiment of the present invention is represented by the above-described syntax, thereby obtaining the window feature of the previous window frame and the coding mode information of the previous frame to determine the window feature of the current window frame.
FIG. 9 shows an example of a signal processing encoding device according to a further embodiment of the present invention, and FIG. 10 shows an example of a signal processing decoding device according to a further embodiment of the present invention. Referring to FIG. 9, a signal processing apparatus 900 includes a plural channel encoding unit 910, a band extension coding unit 920, an audio signal encoding unit 930, a speech signal encoding unit 940 and a multiplexer 950.
The plural channel encoding unit 910 receives an input of a plural channel signal (a signal having at least two channels) (hereinafter named a multi-channel signal) and then generates a mono or stereo downmix signal by downmixing the multi-channel signal. And, the plural channel encoding unit 910 generates spatial information for upmixing the downmix signal into the multi-channel signal. In this case, the spatial information can include channel level difference information, inter-channel correlation information, a channel prediction coefficient, downmix gain information and the like. If the signal encoding device 900 receives an input of a mono signal, it is understood that the mono signal can bypass the plural channel encoding unit 910 without being downmixed. The band extension encoding unit 920 is able to generate spectral data corresponding to a low frequency band and band extension information for high frequency band extension by applying a band extension scheme (SBR) to the downmix signal that is the output of the plural channel encoding unit 910. In particular, spectral data on a partial band (e.g., a high frequency band) of the downmix signal is excluded, and the band extension information for reconstructing the excluded data can be generated.
The signal generated via the band extension coding unit 920 is inputted to the audio signal encoding unit 930 or the speech signal encoding unit 940.
If a specific frame or segment of the downmix signal mainly has an audio characteristic, the audio signal encoding unit 930 encodes the downmix signal according to an audio coding scheme. In this case, the audio coding scheme may follow the AAC (advanced audio coding) standard or HE-AAC (high efficiency advanced audio coding) standard, by which the present invention is non-limited. Meanwhile, the audio signal encoding unit 930 can include a modified discrete cosine transform (MDCT) encoder.
If the audio signal encoding unit 930 includes the modified discrete cosine transform (MDCT) encoder, it can include a window feature determining unit 931, a window applying unit 932 and an MDCT unit 933. The window feature determining unit 931 is able to include the right window shape determining unit 220, the first window feature information generating unit 230 and the second window feature information generating unit 240, which are shown in FIG. 2. The window feature determining unit 931, the window applying unit 932 and the MDCT unit 933 have the same configurations and functions as the right window shape determining unit 220, the first window feature information generating unit 230/310, the second window feature information generating unit 240/320, the window applying unit 250 and the MDCT unit 260, which are shown in FIG. 2/FIG. 3, of which details are omitted from the following description. Meanwhile, the window feature determining unit 931 is able to generate the first and second window feature information described with reference to FIG. 2 and FIG. 3. If a specific frame or segment of the downmix signal has a large speech characteristic, the speech signal encoding unit 940 encodes the downmix signal according to a speech coding scheme. In this case, the speech coding scheme may follow the AMR-WB (adaptive multi-rate wideband) standard, by which the present invention is non-limited. Meanwhile, the speech signal encoding unit 940 can further use a linear prediction coding (LPC) scheme. If a harmonic signal has high redundancy on a time axis, it can be modeled by linear prediction for predicting a present signal from a past signal. In this case, if the linear prediction coding scheme is adopted, coding efficiency can be raised. Besides, the speech signal encoding unit 940 can correspond to a time domain encoder. And, the multiplexer 950 generates at least one bitstream by multiplexing the spatial information, the band extension information, the signals respectively encoded by the audio signal encoding unit 930 and the speech signal encoding unit 940, the first window feature information and the second window feature information together.
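The frame-by-frame routing between the audio and speech encoding paths can be sketched as follows. The classify function and both encoder callbacks are hypothetical placeholders introduced only for illustration; the AAC/HE-AAC and AMR-WB schemes named above are only referenced, not implemented.

def encode_downmix_frame(frame, classify, audio_encoder, speech_encoder):
    # classify(frame) is assumed to label the frame "audio" (music-like) or "speech".
    mode = classify(frame)
    if mode == "audio":
        payload = audio_encoder(frame)   # windowing + MDCT path (FIG. 2 / FIG. 9)
    else:
        payload = speech_encoder(frame)  # time-domain / linear-prediction path
    return mode, payload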
Referring to FIG. 10, a signal decoding device 1000 includes a demultiplexer 1010, an audio signal decoding unit 1020, a speech signal decoding unit 1030, a band extension decoding unit 1040 and a plural channel decoding unit 1050. And, the audio signal decoding unit 1020 includes an IMDCT unit 1021, a window feature determining unit 1022 and a window applying unit 1023. The demultiplexer 1010 extracts first window feature information, second window feature information, a quantized signal, coding mode information, band extension information, spatial information and the like from a signal bitstream.
First of all, it is determined whether an inputted signal will be decoded by an audio coding scheme or a speech coding scheme according to the coding mode information.
If the inputted signal is decoded by the audio coding scheme, the IMDCT unit 1021 performs inverse modified discrete cosine transform on the inputted signal.
Subsequently, the window feature determining unit 1022 determines the window feature of the current window frame using the first window feature information, the second window feature information, the window feature of a previous window frame, the coding mode information and coding mode information (not shown in the drawing) of a previous frame. In this case, the window feature of the current window frame can be determined by the former method described with reference to FIGs. 4 to 8 and its details are omitted from the following description. If an audio signal (e.g., a spectral coefficient as a result of dequantization) has a large audio characteristic, the audio signal decoding unit 1020 decodes the audio signal according to an audio coding scheme. In this case, as mentioned in the foregoing description, the audio coding scheme may follow the AAC (advanced audio coding) standard or HE-AAC (high efficiency advanced audio coding) standard. If the audio signal has a large speech characteristic, the speech signal decoding unit 1030 decodes the downmix signal according to a speech coding scheme. In this case, as mentioned in the foregoing description, the speech coding scheme may follow the AMR-WB (adaptive multi-rate wideband) standard, by which the present invention is non-limited.
The band extension decoding unit 1040 reconstructs a signal on a high frequency band based on the band extension information by performing a band extension decoding scheme on the output signals from the audio and speech signal decoding units 1020 and 1030. And, the plural channel decoding unit 1050 generates an output channel signal of a multi-channel signal (stereo signal included) using spatial information if the decoded audio signal is a downmix.
FIG. 11 is a schematic diagram of a configuration of a product including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention, and FIG. 12 is a schematic diagram of relations between products including an IMDCT unit, a window feature determining unit and a window applying unit according to another embodiment of the present invention.
Referring to FIG. 11, a wire/wireless communication unit 1110 receives a bitstream by wire/wireless communications. In particular, the wire/wireless communication unit 1110 includes at least one of a wire communication unit 1111, an infrared communication unit 1112, a Bluetooth unit 1113 and a wireless LAN communication unit 1114.
A user authenticating unit 1120 receives an input of user information and then performs user authentication. The user authenticating unit 1120 can include at least one of a fingerprint recognizing unit 1121, an iris recognizing unit 1122, a face recognizing unit 1123 and a voice recognizing unit 1124. In this case, the user authentication can be performed in a manner of receiving an input of fingerprint information, iris information, face contour information or voice information, converting the inputted information to user information, and then determining whether the user information matches user data which was previously registered.
An input unit 1130 is an input device for enabling a user to input various kinds of commands. And, the input unit 1130 can include at least one of a keypad unit 1131, a touchpad unit 1132 and a remote controller unit 1133, by which examples of the input unit 1130 are non-limited.
A signal decoding unit 1140 includes an IMDCT unit 1141, a window feature determining unit 1142 and a window applying unit 1143, which have the same configurations and functions as the former IMDCT unit 420, the former window feature determining unit 450 and the former window applying unit 460 described with reference to FIG. 4, respectively. And, details of the signal decoding unit 1140 are omitted in the following description.
A control unit 1150 receives input signals from the input devices and controls all processes of the signal decoding unit 1140 and an output unit 1160.
And, the output unit 1160 is an element for outputting an output signal and the like generated by the signal decoding unit 1140. The output unit 1160 can include a signal output unit 1161 and a display unit 1162. If an output signal is an audio signal, it is outputted via the signal output unit 1161. If an output signal is a video signal, it is outputted via the display unit 1162. Moreover, if metadata is inputted to the input unit 1130, it is displayed on a screen via the display unit 1162. FIG. 12 shows relation between terminals or between terminal and server, which correspond to the product shown in FIG. 11.
Referring to FIG. 12A, it can be observed that bidirectional communications of data or bitstream can be performed between a first terminal 1210 and a second terminal 1220 via wire/wireless communication units. In this case, the data or bitstream exchanged via the wire/wireless communication unit may include the data including the first window feature information, the second window feature information and the like of the present invention described with reference to FIGs. 1 to 3. Referring to FIG. 12B, it can be observed that wire/wireless communications can be performed between a server 1230 and a first terminal 1240.
FIG. 13 is a schematic block diagram of a broadcast signal decoding device including an IMDCT unit 1341, a window feature determining unit 1342 and a window applying unit 1343 according to one embodiment of the present invention.
Referring to FIG. 13, a demultiplexer 1320 receives a plurality of data related to a TV broadcast from a tuner 1310. The received data are separated by the demultiplexer 1320 and are then decoded by a data decoder 1330. Meanwhile, the data separated by the demultiplexer 1320 can be stored in such a storage medium 1350 as an HDD.
The data separated by the demultiplexer 1320 are inputted to a signal decoding unit 1340, and the signal decoding unit 1340 decodes an audio signal and a video signal. The signal decoding unit 1340 includes an IMDCT unit 1341, a window feature determining unit 1342, a window applying unit 1343 and a video decoding unit 1344. They have the same configurations and functions as the former units of the same names shown in FIG. 4, and their details are omitted in the following description.
An output unit 1370 outputs the video signal and the audio signal outputted from the signal decoding unit 1340. The audio signal may include the signal that is decoded by applying a window feature of a current window frame. In this case, the window feature is determined using first window feature information and second window feature information. Moreover, the data decoded by the signal decoding unit 1340 can be stored in a storage medium 1350 such as an HDD.
Meanwhile, the signal decoding device 1300 can further include an application manager 1360 capable of controlling a plurality of data received according to an input of information from a user. The application manager 1360 includes a user interface manager 1361 and a service manager 1362. The user interface manager 1361 controls an interface for receiving an input of information from a user. For instance, the user interface manager 1361 is able to control a font type of text displayed on the output unit 1370, a screen brightness, a menu configuration and the like. Meanwhile, if a broadcast signal is decoded and outputted by the signal decoding unit 1340 and the output unit 1370, the service manager 1362 is able to control a received broadcast signal using information inputted by a user. For instance, the service manager 1362 is able to provide a broadcast channel setting, an alarm function setting, an adult authentication function, etc. The data outputted from the application manager 1360 are usable by being transferred to the output unit 1370 as well as the signal decoding unit 1340.
Accordingly, as a signal processing apparatus according to one embodiment of the present invention is included in a real product, the present invention represents a window feature of a current window frame affected by a window feature of a previous window frame using the reduced number of bits, thereby raising coding efficiency in processing a signal.
The decoding/encoding method to which the present invention is applied can be implemented as computer-readable code on a program-recorded medium. And, multimedia data having the data structure of the present invention can be stored in a computer-readable recording medium. The computer-readable recording media include all kinds of storage devices in which data readable by a computer system are stored. The computer-readable media include ROM, RAM, CD-ROM, magnetic tapes, floppy discs, optical data storage devices, and the like, and also include carrier-wave type implementations (e.g., transmission via the Internet). And, a bitstream generated by the encoding method is stored in a computer-readable recording medium or can be transmitted via a wire/wireless communication network.
While the present invention has been described and illustrated herein with reference to the preferred embodiments thereof, it will be apparent to those skilled in the art that various modifications and variations can be made therein without departing from the spirit and scope of the invention. Thus, it is intended that the present invention covers the modifications and variations of this invention that come within the scope of the appended claims and their equivalents.
[INDUSTRIAL APPLICABILITY] Accordingly, the present invention is applicable to encoding and decoding of signals.

Claims

[CLAIMS]
[Claim 1]
A method of decoding a signal, comprising: obtaining a window feature of previous window frame indicating a window which is used to previous window frame; extracting first window feature information indicating a length of right window slope of current window frame; when the first window feature information indicates a length of long window slope, determining a window feature of the current window frame by using the first window feature information and the window feature of the previous window frame; when the first window feature information does not indicate a length of long window slope, extracting second window feature information indicating a unit of frequency transform of the current window frame; and determining a window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame; and decoding an audio signal based on the window feature of the current window frame.
[Claim 2]
The method of claim 1, wherein the window feature of the current window frame is determined further by using coding mode information of current frame indicating coding mode of the current frame, and coding mode information of previous frame indicating coding mode of the previous frame.
[Claim 3] The method of claim 1, wherein the window feature of the previous frame is determined based on first window feature information of the previous window frame.
[Claim 4]
The method of claim 1, wherein the window feature of the current window frame is one of a long window feature including that left window and right window are long window, and a long stop window feature indicating that left window is short window and right window is long window.
[Claim 5]
The method of claim 1, wherein a bit length of the first window feature information and the second window feature information is each 1 bit.
[Claim 6]
An apparatus of decoding a signal, comprising: a window feature obtaining unit obtaining a window feature of previous window frame indicating a window which is used to previous window frame; a first window feature information extracting unit extracting first window feature information indicating a length of right window slope of current window frame; a second window feature information extracting unit extracting second window feature information indicating a unit of frequency transform of the current window frame, when the first window feature information does not indicate a length of long window; and a window feature determining unit determining a window feature of the current window frame based on the first window feature information, the second window feature information and the window feature of the previous window frame, wherein the second window feature information extracting unit is not performed, when the first window feature information indicates a length of long window slope.
[Claim 7]
The apparatus of claim 6, wherein the window feature of the current window frame is determined further by using coding mode information of current frame indicating coding mode of the current frame, and coding mode information of previous frame indicating coding mode of the previous frame.
[Claim 8]
The apparatus of claim 6, wherein the window feature of the previous frame is determined based on first window feature information of the previous window frame.
[Claim 9] The apparatus of claim 6, wherein the window feature of the current window frame is one of a long window feature including that left window and right window are long window, and a long stop window feature indicating that left window is short window and right window is long window.
[Claim 10] The apparatus of claim 6, wherein a bit length of the first window feature information and the second window feature information is each 1 bit.
[Claim 11]
A method of encoding a signal, comprising: receiving a window feature of previous window frame and input signal of current frame; determining right window shape of current window frame based on the input signal; generating first window feature information indicating a length of right window slope, based on the right window shape; when the first window feature information indicates a length of long window, determining window feature of the current window frame by using the first window feature information and the window feature of the current window frame; when the first window feature information does not indicate a length of long window, generating second window feature information indicating a unit of frequency transform of the current window frame; and determining window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame; applying the window feature of the current window frame of the previous frame and the current frame; and frequency transforming the current window frame based on the window feature of the current window frame. [Claim 12]
An apparatus of encoding a signal, comprising: a receiving unit receiving a window feature of previous window frame and input signal of current frame; a right window shape determining unit determining right window shape of current window frame based on the input signal; a first window feature information generating unit generating first window feature information indicating a length of right window slope, based on the right window shape; a second window feature information generating unit generating second window feature information indicating a unit of frequency transform of the current window frame, when the first window feature information indicates a length of long window; a windowing unit determining window feature of the current window frame by using the first window feature information, the second window feature information and the window feature of the previous window frame, and applying the window feature of the current window frame to the previous frame and the current frame; and a frequency transforming unit frequency transforming the current window frame based on the window feature of the current window frame, wherein the second window feature information generating is not performed, when the first window feature information does not indicate length of long window.
PCT/KR2009/006714 2008-11-14 2009-11-16 A method and an apparatus for processing a signal WO2010058931A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11447708P 2008-11-14 2008-11-14
US61/114,477 2008-11-14
KR1020090109742A KR20100054749A (en) 2008-11-14 2009-11-13 A method and apparatus for processing a signal
KR10-2009-0109742 2009-11-13

Publications (2)

Publication Number Publication Date
WO2010058931A2 true WO2010058931A2 (en) 2010-05-27
WO2010058931A3 WO2010058931A3 (en) 2010-08-05

Family

ID=42198636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2009/006714 WO2010058931A2 (en) 2008-11-14 2009-11-16 A method and an apparatus for processing a signal

Country Status (1)

Country Link
WO (1) WO2010058931A2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0619574A1 (en) * 1993-04-09 1994-10-12 SIP SOCIETA ITALIANA PER l'ESERCIZIO DELLE TELECOMUNICAZIONI P.A. Speech coder employing analysis-by-synthesis techniques with a pulse excitation
US20060122825A1 (en) * 2004-12-07 2006-06-08 Samsung Electronics Co., Ltd. Method and apparatus for transforming audio signal, method and apparatus for adaptively encoding audio signal, method and apparatus for inversely transforming audio signal, and method and apparatus for adaptively decoding audio signal
US20070160219A1 (en) * 2006-01-09 2007-07-12 Nokia Corporation Decoding of binaural audio signals
US20070271480A1 (en) * 2006-05-16 2007-11-22 Samsung Electronics Co., Ltd. Method and apparatus to conceal error in decoded audio signal
US20080140428A1 (en) * 2006-12-11 2008-06-12 Samsung Electronics Co., Ltd Method and apparatus to encode and/or decode by applying adaptive window size


Also Published As

Publication number Publication date
WO2010058931A3 (en) 2010-08-05

Similar Documents

Publication Publication Date Title
CA2705968C (en) A method and an apparatus for processing a signal
EP2182513B1 (en) An apparatus for processing an audio signal and method thereof
US8135585B2 (en) Method and an apparatus for processing a signal
US8060042B2 (en) Method and an apparatus for processing an audio signal
EP2169670B1 (en) An apparatus for processing an audio signal and method thereof
US8380523B2 (en) Method and an apparatus for processing an audio signal
EP2169665A1 (en) A method and an apparatus for processing a signal
WO2011059255A2 (en) An apparatus for processing an audio signal and method thereof
EP2169666A1 (en) A method and an apparatus for processing a signal
WO2011059254A2 (en) An apparatus for processing a signal and method thereof
US20100114568A1 (en) Apparatus for processing an audio signal and method thereof
US9070364B2 (en) Method and apparatus for processing audio signals
WO2010058931A2 (en) A method and an apparatus for processing a signal
KR20100054749A (en) A method and apparatus for processing a signal
WO2010035972A2 (en) An apparatus for processing an audio signal and method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09827696

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09827696

Country of ref document: EP

Kind code of ref document: A2