WO2008022565A1 - Audio decoding - Google Patents
Audio decoding
- Publication number
- WO2008022565A1 (PCT/CN2007/002490)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- code book
- frame
- indexes
- entropy
- transient
- Prior art date
Links
- 238000013139 quantization Methods 0.000 claims abstract description 70
- 230000006870 function Effects 0.000 claims abstract description 56
- 238000000034 method Methods 0.000 claims abstract description 44
- 238000012545 processing Methods 0.000 claims abstract description 27
- 230000005236 sound signal Effects 0.000 claims abstract description 14
- 230000001052 transient effect Effects 0.000 claims description 78
- 238000005070 sampling Methods 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 230000008707 rearrangement Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 239000010409 thin film Substances 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/022—Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
- G10L19/025—Detection of transients or attacks for time/frequency resolution switching
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0204—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
- G10L19/0208—Subband vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G10L19/035—Scalar quantisation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G10L19/038—Vector quantisation, e.g. TwinVQ audio
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/24—Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/022—Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
Definitions
- the present invention pertains to systems, methods and techniques for decoding of audio signals, such as digital audio signals received across a communication channel or read from a storage device.
- the present invention addresses this need by providing, among other things, decoding systems, methods and techniques in which audio data are retrieved from a bit stream by applying code books to specified ranges of quantization indexes (in some cases even crossing boundaries of quantization units) and by identifying a sequence of different windows to be applied within a single frame of the audio data based on window information within the bit stream.
- the invention is directed to systems, methods and techniques for decoding an audio signal from a frame-based bit stream. Each frame includes processing information pertaining to the frame and entropy- encoded quantization indexes representing audio data within the frame.
- the processing information includes: (i) entropy code book indexes, (ii) code book application information specifying ranges of entropy-encoded quantization indexes to which the code books are to be applied, and (iii) window information.
- the entropy-encoded quantization indexes are decoded by applying the identified code books to the corresponding ranges of entropy-encoded quantization indexes.
- Subband samples are then generated by dequantizing the decoded quantization indexes, and a sequence of different window functions that were applied within a single frame of the audio data is identified based on the window information.
- Time-domain audio data are obtained by inverse-transforming the subband samples and using the plural different window functions indicated by the window information.
- Figure 1 is a block diagram illustrating various illustrative environments in which a decoder may be used, according to representative embodiments of the present invention.
- Figures 2A-B illustrate the use of a single long block to cover a frame and the use of multiple short blocks to cover a frame, respectively, according to a representative embodiment of the present invention.
- Figures 3A-C illustrate different examples of a transient frame according to a representative embodiment of the present invention.
- FIG. 4 is a block diagram of an audio signal decoding system 100 according to a representative embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENT(S)
- the present invention pertains to systems, methods and techniques for decoding audio signals, e.g., after retrieval from a storage device or reception across a communication channel.
- Applications in which the present invention may be used include, but are not limited to: digital audio broadcasting, digital television (satellite, terrestrial and/or cable broadcasting), home theatre, digital theatre, laser video disc players, content streaming on the Internet and personal audio players.
- the audio decoding systems, methods and techniques of the present invention can be used, e.g., in conjunction with the audio encoding systems, methods and techniques of the '346 Application.
- a decoder 100 receives as its input a frame-based bit stream 20 that includes, for each frame, the actual audio data within that frame (typically, entropy-encoded quantization indexes) and various kinds of processing information (e.g., including control, formatting and/or auxiliary information).
- the bit stream 20 ordinarily will be input into decoder 100 via a hard-wired connection or via a detachable connector.
- bit stream 20 could have originated from any of a variety of different sources.
- the sources include, e.g., a digital radio-frequency (or other electromagnetic) transmission which is received by an antenna 32 and converted into bit stream 20 in demodulator 34, a storage device 36 (e.g., semiconductor, magnetic or optical) from which the bit stream 20 is obtained by an appropriate reader 38, a cable connection 42 from which bit stream 20 is derived in demodulator 44, or a cable connection 48 which directly provides bit stream 20.
- Bit stream 20 might have been generated, e.g., using any of the techniques described in the '346 Application.
- bit stream 20 itself will have been derived from another signal, e.g., a multiplexed bit stream, such as those multiplexed according to MPEG 2 system protocol, where the audio bit stream is multiplexed with video bit streams of various formats, audio bit stream of other formats, and metadata; or a received radio-frequency signal that was modulated (using any of the known techniques) with redundancy-encoded, interleaved and/or punctured symbols representing bits of audio data.
- the audio data within bit stream 20 have been transformed into subband samples (preferably using a unitary sinusoidal-based transform technique), quantized, and then entropy-encoded.
- the audio data have been transformed using the modified discrete cosine transform (MDCT), quantized and then entropy-encoded using appropriate Huffman encoding.
- MDCT: modified discrete cosine transform
- PCM: pulse-code modulation
- the decoder 100 preferably stores the same code books as are used by the encoder.
- the preferred Huffman code books are set forth in the '760 Application, where the "Code” is the Huffman code in decimal format, the "Bit Increment” is the number of additional bits (in decimal format) required for the current code as compared to the code on the previous line and the "Index" is the unencoded value in decimal format.
- the input audio data are frame-based, with each frame defining a particular time interval and including samples for each of multiple audio channels during that time interval.
- each such frame has a fixed number of samples, selected from a relatively small set of frame sizes, with the selected frame size for any particular time interval depending, e.g., upon the sampling rate and the amount of delay that can be tolerated between frames.
- each frame includes 128, 256, 512 or 1,024 samples, with longer frames being preferred except in situations where reduction of delay is important. In most of the examples discussed below, it is assumed that each frame consists of 1,024 samples. However, such examples should not be taken as limiting.
- the frames are divided into a number of smaller, preferably equal-sized, blocks (sometimes referred to herein as "primary blocks" to distinguish them from MDCT or other transform blocks, which typically are longer). This division is illustrated in Figures 2A&B.
- in Figure 2A, the entire frame 50 is covered by a single primary block 51 (e.g., including 1,024 audio data samples).
- in Figure 2B, the frame 50 is covered by eight contiguous primary blocks 52-59 (e.g., each including 128 audio data samples).
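The two frame layouts of Figures 2A-B can be sketched as follows. This is an illustrative helper only (the function and variable names are hypothetical, not from the patent):

```python
def split_frame_into_primary_blocks(frame, num_blocks):
    """Split a frame of samples into equal-sized contiguous primary blocks.

    num_blocks=1 corresponds to Figure 2A (one long block covering the
    frame); num_blocks=8 corresponds to Figure 2B (eight short blocks).
    """
    if len(frame) % num_blocks != 0:
        raise ValueError("frame length must be divisible by num_blocks")
    size = len(frame) // num_blocks
    return [frame[i * size:(i + 1) * size] for i in range(num_blocks)]

frame = list(range(1024))
long_blocks = split_frame_into_primary_blocks(frame, 1)   # one 1,024-sample block
short_blocks = split_frame_into_primary_blocks(frame, 8)  # eight 128-sample blocks
```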
- Each frame of samples can be classified as a transient frame (i.e., one that includes a signal transient) or a quasistationary frame (i.e., one that does not include a transient).
- a signal transient preferably is defined as a sudden and quick rise (attack) or fall of signal energy.
- Transient signals occur only sparsely and, for purposes of the present invention, it is assumed that no more than two transient signals will occur in each frame.
- transient segment refers to an entire frame, or a segment of a frame, in which the signal has the same or similar statistical properties.
- a quasistationary frame generally consists of a single transient segment, while a transient frame ordinarily will consist of two or three transient segments.
- the transient frame generally will have two transient segments: one covering the portion of the frame before the attack or fall and another covering the portion of the frame after the attack or fall. If both an attack and fall occur in a transient frame, then three transient segments generally will exist, each one covering the portion of the frame as segmented by the attack and fall, respectively.
- FIGS. 3A-C each illustrate a single frame 60 of samples that has been divided into eight equal-sized primary blocks 61-68.
- in Figure 3A, a transient signal 70 occurs in the second block 62, so there are two transient segments, one consisting of block 61 alone and the other consisting of blocks 62-68.
- in Figure 3B, a transient signal 71 occurs in block 64 and another transient signal 72 occurs in block 66, so there are three transient segments, one consisting of blocks 61-63, one consisting of blocks 64-65 and the last consisting of blocks 66-68.
- in Figure 3C, a transient signal 73 occurs in block 68, so there are two transient segments, one consisting of blocks 61-67 and the other consisting of block 68 alone.
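The segmentation rule behind these three examples (each transient starts a new segment) can be sketched as follows. This helper is illustrative; the names are not from the patent:

```python
def transient_segments(num_blocks, transient_blocks):
    """Derive transient-segment boundaries, as (start, end) block ranges,
    from the indices of the blocks containing transients.

    Each transient starts a new segment, reproducing the examples of
    Figures 3A-C (illustrative helper; names are hypothetical).
    """
    starts = sorted(set([0] + list(transient_blocks)))
    starts.append(num_blocks)
    return [(starts[i], starts[i + 1]) for i in range(len(starts) - 1)
            if starts[i] < starts[i + 1]]

# Figure 3A: transient in the second block (index 1) -> two segments
assert transient_segments(8, [1]) == [(0, 1), (1, 8)]
```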
- FIG. 4 is a block diagram of audio signal decoding system 100 according to a representative embodiment of the present invention, in which the solid arrows indicate the flow of audio data, the broken-line arrows indicate the flow of control, formatting and/or auxiliary information, and the broken-line boxes indicate components that in the present embodiment are instantiated only if indicated in the corresponding control data in bit stream 20, as described in more detail below.
- the individual sections, modules or components illustrated in Figure 4 are implemented entirely in computer-executable code, as described below. However, in alternate embodiments any or all of such sections or components may be implemented in any of the other ways discussed herein.
- the bit stream 20 initially is input into demultiplexer 115, which divides the bit stream 20 into frames of data and unpacks the data in each frame in order to separate out the processing information and the audio-signal information.
- the data in bit stream 20 preferably are interpreted as a sequence of frames, with each new frame beginning with the same "synchronization word" (preferably, 0x7FFF).
- each data frame preferably is as follows:
- nFrmHeaderType indicates one of two possible different types of frames
- nFrmHeaderType indicates a General frame header
- the first 10 bits following nFrmHeaderType are interpreted as nNumWord (defined below)
- the next 3 bits are interpreted as nNumNormalCh (defined below)
- nFrmHeaderType indicates an Extension frame header
- the first 13 bits following nFrmHeaderType are interpreted as nNumWord
- the next 6 bits are interpreted as nNumNormalCh, and so on.
- the field "nNumWord” indicates the length of the audio data in the current frame (in 32-bit words) from the beginning of the synchronization word (its first byte) to the end of the error-detection word for the current frame.
- nNumBlocksPerFrm indicates the number of short-window Modified Discrete Cosine Transform (MDCT) blocks corresponding to the current frame of audio data.
- MDCT Modified Discrete Cosine Transform
- one short-window MDCT block contains 128 primary audio data samples (preferably entropy-encoded quantized subband samples), so the number of primary audio data samples corresponding to a frame of audio data is 128*nNumBlocksPerFrm.
- the MDCT block preferably is larger than the primary block and, more preferably, twice the size of the primary block. Accordingly, if the short primary block size consists of 128 audio data samples, then the short MDCT block preferably consists of 256 samples, and if the long primary block consists of 1,024 audio data samples, then the long MDCT block consists of 2,048 samples. More preferably, each primary block consists of the new (next subsequent) audio data samples.
- sampleRateIndex indicates the index of the sampling frequency that was used for the audio signal.
- nNumNormalCh indicates the number of normal channels.
- the number of bits representing this field is determined by the frame header type. In the present embodiment, if nFrmHeaderType indicates a General frame header, then 3 bits are used and the number of normal channels can range from 1 to 8. On the other hand, if nFrmHeaderType indicates an Extension frame header, then 6 bits are used and the number of normal channels can range from 1 to 64.
- nNumLfeCh indicates the number of LFE channels.
- nFrmHeaderType indicates a General frame header
- 1 bit is used and the number of LFE channels can range from 0 to 1.
- nFrmHeaderType indicates an Extension frame header
- 2 bits are used and the number of LFE channels can range from 0 to 3.
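The General/Extension field widths described above can be exercised with a minimal bit reader. This is a sketch only; `BitReader` and `parse_header_fields` are hypothetical helpers, not part of the patent:

```python
class BitReader:
    """Minimal MSB-first bit reader (illustrative; not from the patent)."""
    def __init__(self, data):
        self.bits = ''.join(f'{b:08b}' for b in data)
        self.pos = 0
    def read(self, n):
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

def parse_header_fields(reader, extension):
    """Read nNumWord and nNumNormalCh with the widths given above:
    10+3 bits for a General header, 13+6 bits for an Extension header."""
    if extension:
        return {'nNumWord': reader.read(13), 'nNumNormalCh': reader.read(6)}
    return {'nNumWord': reader.read(10), 'nNumNormalCh': reader.read(3)}

# General header example: nNumWord=5 (10 bits), nNumNormalCh=2 (3 bits)
reader = BitReader(bytes([0x01, 0x50]))
general_fields = parse_header_fields(reader, extension=False)
```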
- bAuxChCfg indicates whether there is any auxiliary data at the end of the current frame, e.g., containing additional channel configuration information.
- nJicCb indicates the starting critical band of joint intensity encoding if joint intensity encoding has been applied in the current frame. Again, this field preferably is present only in the General frame header and does not appear in the Extension frame header.
- the general data structure for each normal channel is as follows:
- not all of the normal channels contain the window sequence information. If the window sequence information is not provided for one or more of the channels, this group of data preferably is copied from the provided window sequence information for channel 0 (ChO), although in other embodiments the information instead is copied from any other designated channel.
- the general data structure for each LFE channel is as follows:
- the window sequence information (provided for normal channels only) preferably includes a MDCT window function index.
- that index is designated as "nWinTypeCurrent" and has the following values and meanings:
- nWinTypeCurrent 0, 1, 2, 3, 4, 5, 6, 7 or 8
- nWinTypeCurrent 9, 10, 11 or 12
- the current frame is made up of nNumBlocksPerFrm (e.g., up to 8) short MDCTs
- nWinTypeCurrent indicates only the first and last window function of these nNumBlocksPerFrm short MDCTs.
- the other short window functions within the frame preferably are determined by the location where the transient appears, in conjunction with the perfect reconstruction requirements (as described in more detail in the '917 Application).
- the received data preferably includes window information that is adequate to fully identify the entire window sequence that was used at the encoder side.
- the field "nNumCluster" indicates the number of transient segments in the current frame.
- the current frame is quasistationary, so the number of transient segments implicitly is 1, and nNumCluster does not need to appear in the bit stream (so it preferably is not transmitted).
- 2 bits are allocated to nNumCluster when a short window function is indicated and its value ranges from 0-2, corresponding to 1-3 transient segments, respectively.
- short window functions may be used even in a quasistationary frame (i.e., a single transient segment). This case can occur, e.g., when the encoder wanted to achieve low coding delay.
- the number of audio data samples in a frame can be less than 1,024 (i.e., the length of a long primary block).
- the encoder might have chosen to include just 256 PCM samples in a frame, in which case it covers those samples with two short blocks (each including 128 PCM samples that are covered by a 256-sample MDCT block) in the frame, meaning that the decoder also applies two short windows.
- the current frame is a transient frame (i.e., includes at least a portion of a transient signal so that nNumCluster indicates more than one transient segment)
- a field "anNumBlocksPerFrmPerCluster[nCluster]" preferably is included in the received data and indicates the length of each transient segment nCluster in terms of the number of short MDCT blocks it occupies.
- Each such word preferably is Huffman encoded (e.g., using HuffDec1_7x1 in Table B.28 of the '760 Application) and, therefore, each transient segment length can be decoded to reconstruct the locations of the transient segments.
- anNumBlocksPerFrmPerCluster[nCluster] preferably does not appear in the bit stream (i.e., it is not transmitted) because the transient segment length is implicit, i.e., a single long block in a frame having a long window function (e.g., 2,048 MDCT samples) or all of the blocks in a frame having multiple (e.g., up to 8) short window functions (e.g., each containing 256 MDCT samples).
- In module 118, the appropriate code books and application ranges are selected based on the corresponding information that was extracted in demultiplexer 115. More specifically, the above-referenced Huffman Code Book Index and Application Range information preferably includes the following fields. The field "anHSNumBands[nCluster]" indicates the number of code book segments in the transient segment nCluster.
- "mnHSBandEdge[nCluster][nBand]*4" indicates the length (in terms of quantization indexes) of the code book segment nBand (i.e., the application range of the Huffman code book) in the transient segment nCluster; each such value itself preferably is Huffman encoded, with HuffDec2_64x1 (as set forth in the '760 Application) being used by module 118 to decode the value for quasistationary frames and HuffDec3_32x1 (also set forth in the '760 Application) being used to decode the value for transient frames.
- mnHS[nCluster][nBand] indicates the Huffman code book index of the code book segment nBand in the transient segment nCluster; each such value itself preferably is Huffman encoded, with HuffDec4_18x1 in the '760 Application being used to decode the value for quasistationary frames and HuffDec5_18x1 in the '760 Application being used to decode the value for transient frames.
- Indexes are then retrieved based on the decoded mnHS[nCluster][nBand] code book indexes as follows:
- each code book application range (i.e., each code book segment)
- Each such codebook segment may cross boundaries of one or more quantization units.
- the codebook segments may have been specified in other ways, e.g., by specifying the starting point for each code book application range. However, it generally will be possible to encode using a fewer total number of bits if the lengths (rather than the starting points) are specified.
- the received information preferably uniquely identifies the application range(s) to which each code book is to be applied, and the decoder 100 uses this information for decoding the actual quantization indexes.
- This approach is significantly different from conventional approaches, in which each quantization unit is assigned a code book, so that the application ranges are not transmitted in conventional approaches.
- the additional overhead ordinarily is more than compensated by the additional efficiencies that can be obtained by flexibly specifying application ranges.
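One way the decoder can recover explicit (start, end) application ranges is sketched below, reading each stored mnHSBandEdge-style value as a cumulative upper boundary in units of four quantization indexes, an assumption consistent with the pseudocode later in this section (the function name is hypothetical):

```python
def segment_ranges(band_edges):
    """Convert mnHSBandEdge-style values (each value*4 taken here as the
    cumulative upper boundary, in quantization indexes, of a code book
    segment) into explicit (start, end) application ranges.

    Illustrative sketch; the exact storage convention is an assumption.
    """
    ranges, start = [], 0
    for edge in band_edges:
        end = edge * 4
        ranges.append((start, end))
        start = end
    return ranges

# e.g. stored edges [8, 16, 32] -> ranges covering indexes 0-32, 32-64, 64-128
```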
- the quantization indexes extracted by demultiplexer 115 are decoded by applying the code books identified in module 118 to their corresponding application ranges of quantization indexes.
- each "quantization unit” preferably is defined by a rectangle of quantization indexes bounded by a critical band in the frequency domain and by a transient segment in the time domain. All quantization indexes within this rectangle belong to the same quantization unit.
- the transient segments preferably are identified, based on the transient segment information extracted by demultiplexer 115, in the manner described above.
- a "critical band" refers to the frequency resolution of the human ear, i.e., the bandwidth Δf within which the human ear is not capable of distinguishing different frequencies.
- the bandwidth Δf preferably rises along with the frequency f, with the relationship between f and Δf being approximately exponential.
- Each critical band can be represented as a number of adjacent subband samples of the filter bank.
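The growth of Δf with f can be illustrated with Zwicker's classic critical-bandwidth approximation. This formula comes from the psychoacoustics literature, not from the patent, which does not specify one:

```python
def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of the ear's critical bandwidth at centre
    frequency f_hz (psychoacoustics literature; not the patent's formula)."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

# bandwidth rises with frequency, roughly exponentially at high f:
assert critical_bandwidth_hz(100) < critical_bandwidth_hz(1000) < critical_bandwidth_hz(8000)
```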
- nMaxBand = anHSNumBands[nCluster]
- nMaxBin = mnHSBandEdge[nCluster][nMaxBand-1]*4
- nMaxBin = Ceil(nMaxBin/anNumBlocksPerCluster[nCluster])
- nCb = 0; while (pnCBEdge[nCb] < nMaxBin) nCb++; anMaxActCb[nCluster] = nCb
- anHSNumBands[nCluster] is the number of code books for transient segment nCluster
- mnHSBandEdge[nCluster][nBand]*4 is the upper boundary of the code book application range for code book nBand of transient segment nCluster
- pnCBEdge[nCb] is the upper boundary of critical band nCb
- anMaxActCb[nCluster] is the number of quantization units for transient segment nCluster.
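The pseudocode above can be rendered as a runnable sketch. The argument names paraphrase the patent's fields, and the reconstruction of the loop body is an assumption based on the variable definitions:

```python
import math

def max_active_critical_bands(hs_num_bands, band_edges, blocks_per_cluster, cb_edges):
    """Count how many critical bands (cb_edges plays the role of pnCBEdge)
    are covered by the highest code book application boundary, yielding
    anMaxActCb[nCluster]. Illustrative sketch of the pseudocode above."""
    n_max_band = hs_num_bands                   # anHSNumBands[nCluster]
    n_max_bin = band_edges[n_max_band - 1] * 4  # mnHSBandEdge[nCluster][nMaxBand-1]*4
    n_max_bin = math.ceil(n_max_bin / blocks_per_cluster)
    n_cb = 0
    while n_cb < len(cb_edges) and cb_edges[n_cb] < n_max_bin:
        n_cb += 1
    return n_cb
```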
- In dequantizer module 124, the quantization step size applicable to each quantization unit is decoded from the bit stream 20, and such step sizes are used to reconstruct the subband samples from the quantization indexes received from decoding module 120.
- "mnQStepIndex[nCluster][nBand]" indicates the quantization step size index of quantization unit (nCluster, nBand) and is decoded by Huffman code book HuffDec6_116x1 for quasistationary frames and by Huffman code book HuffDec7_116x1 for transient frames, both as set forth in the '760 Application.
- each subband sample value preferably is obtained as follows (assuming linear quantization was used at the encoder):
- Subband sample = Quantization step size * Quantization index.
- in alternative embodiments, nonlinear quantization techniques are used.
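The linear dequantization relationship above amounts to a single multiply per index (a minimal sketch; the function name is hypothetical):

```python
def dequantize(indexes, step_size):
    """Linear dequantization per the relationship above:
    subband sample = quantization step size * quantization index."""
    return [step_size * q for q in indexes]

# e.g. step size 0.5 applied to indexes [-2, 0, 3] -> [-1.0, 0.0, 1.5]
```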
- Joint intensity decoding in module 128 preferably is performed only if indicated by the value of bUseJIC. If so, the joint intensity decoder 128 copies the subband samples from the source channel and then multiplies them by the scale factor to reconstruct the subband samples of the joint channel, i.e.,
- Joint channel samples = Scale factor * Source channel samples. In one representative embodiment, the source channel is the front left channel and each other normal channel has been encoded as a joint channel. Preferably, all of the subband samples in the same quantization unit have the same scale factor.
- Sum/difference decoding in module 130 preferably is performed only if indicated by the value of bUseSumDiff. If so, reconstruction of the subband samples in the left/right channel preferably is performed as follows:
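The reconstruction formula itself does not survive in this text. A conventional sum/difference (mid/side) reconstruction, shown here as an assumption rather than the patent's exact formula, recovers the left/right channels as follows:

```python
def sum_diff_decode(sum_samples, diff_samples):
    """Conventional sum/difference reconstruction (an assumption; the
    patent's exact formula is elided in this text):
    left = sum + diff, right = sum - diff."""
    left = [s + d for s, d in zip(sum_samples, diff_samples)]
    right = [s - d for s, d in zip(sum_samples, diff_samples)]
    return left, right
```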
- the encoder, in a process called interleaving, rearranges the subband samples for the current frame of the current channel so as to group together samples within the same transient segment that correspond to the same subband. Accordingly, in the de-interleaving module, the subband samples are rearranged back into their natural order.
- nBin0 = anClusterBin0[nCluster]
- nNumCluster is the number of transient segments
- anNumBlocksPerFrmPerCluster[nCluster] is the transient segment length for transient segment nCluster
- anClusterBin0[nCluster] is the first subband sample location of transient segment nCluster
- afBinInterleaved[q] is the array of subband samples arranged in interleaved order
- afBinNatural[p] is the array of subband samples arranged in natural order.
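The de-interleaving described above can be sketched for a single transient segment. The index mapping shown (subband-major interleaved order back to block-major natural order) is an assumption consistent with the description; the exact mapping in the patent may differ:

```python
def deinterleave_segment(interleaved, num_blocks, num_subbands):
    """Rearrange one transient segment's samples from interleaved order
    (grouped by subband across the segment's blocks) back into natural
    order (block by block). Illustrative sketch; names are hypothetical."""
    natural = [0.0] * (num_blocks * num_subbands)
    q = 0
    for subband in range(num_subbands):
        for block in range(num_blocks):
            natural[block * num_subbands + subband] = interleaved[q]
            q += 1
    return natural
```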
- the subband samples for each frame of each channel are output in their natural order.
- In module 134, the sequence of window functions that was used (at the encoder side) for the transform blocks of the present frame of data is identified.
- In the present embodiment, the MDCT transform was used at the encoder side; however, other types of transforms, preferably unitary and sinusoidal-based, instead may be used.
- nWinTypeCurrent identifies the single long window function that was used for the entire frame. Accordingly, no additional processing needs to be performed in module 134 for long transform-block frames in this embodiment.
- nWinTypeCurrent in the current embodiment only specifies the window function used for the first and the last transform block. Accordingly, the following processing preferably is performed for short transform-block frames.
- the received value for nWinTypeCurrent preferably identifies whether the first block of the current frame and the first block of the next frame contain a transient signal. This information, together with the locations of the transient segments (identified from the received transient segment lengths) and the perfect reconstruction requirements, permits the decoder 100 to determine which window function to use in each block of the frame.
- If the last block of the frame contains a transient, its window function should be WIN_SHORT_BRIEF2BRIEF.
- the window function for the last block of the frame should be WIN_SHORT_Last2SHORT, where Last is determined by the window function of the second last block of the frame via the perfect reconstruction property.
- the window function for the last block of the frame should be WIN_SHORT_Last2BRIEF, where Last is again determined by the window function of the second last block of the frame via the perfect reconstruction property.
- the window functions for the rest of the blocks in the frame can be determined from the transient location(s), each indicated by the start of a transient segment, via the perfect reconstruction property. A detailed procedure for doing this is given in the '917 Application.
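- A sketch of the last-block selection rules described above follows; the decision inputs and the encoding of the window-function names (left edge, "2", right edge) are assumptions pieced together from the fragments above, not a procedure reproduced from the '917 Application:

```python
def right_edge(win_name):
    # Assumed naming convention: the right edge of a window is the
    # suffix after the final '2', e.g. WIN_SHORT_SHORT2BRIEF -> BRIEF.
    return win_name.rsplit("2", 1)[1]

def last_block_window(last_block_has_transient,
                      next_first_block_has_transient,
                      second_last_window):
    # Rule 1: a transient in the last block forces WIN_SHORT_BRIEF2BRIEF.
    if last_block_has_transient:
        return "WIN_SHORT_BRIEF2BRIEF"
    # Otherwise the left edge ("Last") mirrors the right edge of the
    # second-last block's window (perfect reconstruction property), and
    # the right edge depends on whether the next frame's first block
    # contains a transient.
    left = right_edge(second_last_window)
    right = "BRIEF" if next_first_block_has_transient else "SHORT"
    return "WIN_SHORT_%s2%s" % (left, right)
```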
- In module 136, for each transform block of the current frame, the subband samples are inverse transformed, using the window function identified by module 134 for such block, to recover the original data values (subject to any quantization noise that may have been introduced in the course of the encoding, and other numerical inaccuracies).
- the output of module 136 is the reconstructed sequence of PCM samples that was input to the encoder.
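- As a generic illustration of the windowed inverse transform and overlap-add performed in module 136, the following sketch implements a textbook MDCT/IMDCT pair with a sine window; it illustrates the transform family named above, not code from the present application, and all names are illustrative:

```python
import numpy as np

def mdct(x):
    # Forward MDCT: 2*M windowed time samples -> M subband coefficients.
    M = len(x) // 2
    n = np.arange(2 * M)[:, None]
    k = np.arange(M)[None, :]
    return x @ np.cos(np.pi / M * (n + 0.5 + M / 2) * (k + 0.5))

def imdct(X):
    # Inverse MDCT: M coefficients -> 2*M time-aliased samples
    # (scale 2/M so that windowed overlap-add reconstructs exactly).
    M = len(X)
    n = np.arange(2 * M)[:, None]
    k = np.arange(M)[None, :]
    return (2.0 / M) * np.cos(np.pi / M * (n + 0.5 + M / 2) * (k + 0.5)) @ X
```

- Because the sine window satisfies the Princen-Bradley condition, overlap-adding the windowed inverse transforms of 50%-overlapped blocks cancels the time-domain aliasing, so the interior PCM samples are recovered exactly.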
- Such devices typically will include, for example, at least some of the following components interconnected with each other, e.g., via a common bus: one or more central processing units (CPUs); read-only memory (ROM); random access memory (RAM); input/output software and circuitry for interfacing with other devices (e.g., using a hardwired connection, such as a serial port, a parallel port, a USB connection or a FireWire connection, or using a wireless protocol, such as Bluetooth or an 802.11 protocol); software and circuitry for connecting to one or more networks (e.g., using a hardwired connection such as an Ethernet card or a wireless protocol, such as code division multiple access (CDMA), global system for mobile communications (GSM), Bluetooth, an 802.11 protocol, or any other cellular-based or non-cellular-based system), which networks, in turn,
- Suitable devices for use in implementing the present invention may be obtained from various vendors. In the various embodiments, different types of devices are used depending upon the size and complexity of the tasks. Suitable devices include mainframe computers, multiprocessor computers, workstations, personal computers, and even smaller computers such as PDAs, wireless telephones or any other appliance or device, whether stand-alone, hard-wired into a network or wirelessly connected to a network.
- any of the functionality described above can be implemented in software, hardware, firmware or any combination of these, with the particular implementation being selected based on known engineering tradeoffs. More specifically, where the functionality described above is implemented in a fixed, predetermined or logical manner, it can be accomplished through programming (e.g., software or firmware), an appropriate arrangement of logic components (hardware) or any combination of the two, as will be readily appreciated by those skilled in the art.
- the present invention also relates to machine-readable media on which are stored program instructions for performing the methods and functionality of this invention.
- Such media include, by way of example, magnetic disks, magnetic tape, optically readable media such as CD ROMs and DVD ROMs, or semiconductor memory such as PCMCIA cards, various types of memory cards, USB memory devices, etc.
- the medium may take the form of a portable item such as a miniature disk drive or a small disk, diskette, cassette, cartridge, card, stick etc., or it may take the form of a relatively larger or immobile item such as a hard disk drive, ROM or RAM provided in a computer or other device.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Mathematical Physics (AREA)
- Quality & Reliability (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP07800711A EP2054881B1 (en) | 2006-08-18 | 2007-08-17 | Audio decoding |
KR1020097005454A KR101161921B1 (en) | 2006-08-18 | 2007-08-17 | Audio decoding |
JP2009524878A JP5162589B2 (en) | 2006-08-18 | 2007-08-17 | Speech decoding |
DE602007010158T DE602007010158D1 (en) | 2006-08-18 | 2007-08-17 | AUDIO DECODING |
KR1020127005062A KR101401224B1 (en) | 2006-08-18 | 2007-08-17 | Apparatus, method, and computer-readable medium for decoding an audio signal |
AT07800711T ATE486346T1 (en) | 2006-08-18 | 2007-08-17 | AUDIO DECODING |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US82276006P | 2006-08-18 | 2006-08-18 | |
US60/822,760 | 2006-08-18 | ||
US11/558,917 | 2006-11-12 | ||
US11/558,917 US8744862B2 (en) | 2006-08-18 | 2006-11-12 | Window selection based on transient detection and location to provide variable time resolution in processing frame-based data |
US11/669,346 | 2007-01-31 | ||
US11/669,346 US7895034B2 (en) | 2004-09-17 | 2007-01-31 | Audio encoding system |
US11/689,371 | 2007-03-21 | ||
US11/689,371 US7937271B2 (en) | 2004-09-17 | 2007-03-21 | Audio decoding using variable-length codebook application ranges |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008022565A1 true WO2008022565A1 (en) | 2008-02-28 |
Family
ID=39110404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2007/002490 WO2008022565A1 (en) | 2006-08-18 | 2007-08-17 | Audio decoding |
Country Status (2)
Country | Link |
---|---|
US (5) | US7937271B2 (en) |
WO (1) | WO2008022565A1 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9495971B2 (en) | 2007-08-27 | 2016-11-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Transient detector and method for supporting encoding of an audio signal |
KR101924192B1 (en) * | 2009-05-19 | 2018-11-30 | 한국전자통신연구원 | Method and apparatus for encoding and decoding audio signal using layered sinusoidal pulse coding |
PL3723090T3 (en) * | 2009-10-21 | 2022-03-21 | Dolby International Ab | Oversampling in a combined transposer filter bank |
US8958510B1 (en) * | 2010-06-10 | 2015-02-17 | Fredric J. Harris | Selectable bandwidth filter |
US20120082228A1 (en) * | 2010-10-01 | 2012-04-05 | Yeping Su | Nested entropy encoding |
US10104391B2 (en) | 2010-10-01 | 2018-10-16 | Dolby International Ab | System for nested entropy encoding |
US9530419B2 (en) * | 2011-05-04 | 2016-12-27 | Nokia Technologies Oy | Encoding of stereophonic signals |
CN110097889B (en) * | 2013-02-20 | 2023-09-01 | 弗劳恩霍夫应用研究促进协会 | Apparatus and method for generating or decoding encoded signals |
EP2830058A1 (en) | 2013-07-22 | 2015-01-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Frequency-domain audio coding supporting transform length switching |
US20150100324A1 (en) * | 2013-10-04 | 2015-04-09 | Nvidia Corporation | Audio encoder performance for miracast |
US10075266B2 (en) * | 2013-10-09 | 2018-09-11 | Qualcomm Incorporated | Data transmission scheme with unequal code block sizes |
CN105745706B (en) * | 2013-11-29 | 2019-09-24 | 索尼公司 | Device, methods and procedures for extending bandwidth |
FR3024581A1 (en) * | 2014-07-29 | 2016-02-05 | Orange | DETERMINING A CODING BUDGET OF A TRANSITION FRAME LPD / FD |
KR20170136546A (en) | 2015-04-13 | 2017-12-11 | 가부시키가이샤 한도오따이 에네루기 켄큐쇼 | Decoders, receivers, and electronics |
CN110870006B (en) | 2017-04-28 | 2023-09-22 | Dts公司 | Method for encoding audio signal and audio encoder |
US20230085013A1 (en) * | 2020-01-28 | 2023-03-16 | Hewlett-Packard Development Company, L.P. | Multi-channel decomposition and harmonic synthesis |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2972205A (en) | 1957-04-18 | 1961-02-21 | Gazzola | Fishhook disgorger |
CN1208489A (en) * | 1995-12-01 | 1999-02-17 | 数字剧场系统股份有限公司 | Multi-channel predictive subband coder using psychoacoustic adaptive bit allocation |
CN1338104A (en) * | 1999-01-28 | 2002-02-27 | 多尔拜实验特许公司 | Data framing for adaptive-block-length coding system |
WO2002056297A1 (en) * | 2001-01-11 | 2002-07-18 | Sasken Communication Technologies Limited | Adaptive-block-length audio coder |
JP2003233397A (en) * | 2002-02-12 | 2003-08-22 | Victor Co Of Japan Ltd | Device, program, and data transmission device for audio encoding |
CN1677490A (en) * | 2004-04-01 | 2005-10-05 | 北京宫羽数字技术有限责任公司 | Intensified audio-frequency coding-decoding device and method |
WO2006030289A1 (en) | 2004-09-17 | 2006-03-23 | Digital Rise Technology Co., Ltd. | Apparatus and methods for multichannel digital audio coding |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3902948A1 (en) | 1989-02-01 | 1990-08-09 | Telefunken Fernseh & Rundfunk | METHOD FOR TRANSMITTING A SIGNAL |
CN1062963C (en) * | 1990-04-12 | 2001-03-07 | 多尔拜实验特许公司 | Adaptive-block-length, adaptive-transform, and adaptive-window transform coder, decoder, and encoder/decoder for high-quality audio |
DE4020656A1 (en) | 1990-06-29 | 1992-01-02 | Thomson Brandt Gmbh | METHOD FOR TRANSMITTING A SIGNAL |
GB9103777D0 (en) | 1991-02-22 | 1991-04-10 | B & W Loudspeakers | Analogue and digital convertors |
US5285498A (en) * | 1992-03-02 | 1994-02-08 | At&T Bell Laboratories | Method and apparatus for coding audio signals based on perceptual model |
CA2090052C (en) | 1992-03-02 | 1998-11-24 | Anibal Joao De Sousa Ferreira | Method and apparatus for the perceptual coding of audio signals |
US5819213A (en) * | 1996-01-31 | 1998-10-06 | Kabushiki Kaisha Toshiba | Speech encoding and decoding with pitch filter range unrestricted by codebook range and preselecting, then increasing, search candidates from linear overlap codebooks |
US5852806A (en) * | 1996-03-19 | 1998-12-22 | Lucent Technologies Inc. | Switched filterbank for use in audio signal coding |
US5848391A (en) * | 1996-07-11 | 1998-12-08 | Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. | Method subband of coding and decoding audio signals using variable length windows |
JP3707153B2 (en) * | 1996-09-24 | 2005-10-19 | ソニー株式会社 | Vector quantization method, speech coding method and apparatus |
JP3849210B2 (en) * | 1996-09-24 | 2006-11-22 | ヤマハ株式会社 | Speech encoding / decoding system |
JP3206497B2 (en) * | 1997-06-16 | 2001-09-10 | 日本電気株式会社 | Signal Generation Adaptive Codebook Using Index |
US6330531B1 (en) * | 1998-08-24 | 2001-12-11 | Conexant Systems, Inc. | Comb codebook structure |
US6266644B1 (en) * | 1998-09-26 | 2001-07-24 | Liquid Audio, Inc. | Audio encoding apparatus and methods |
JP3323175B2 (en) | 1999-04-20 | 2002-09-09 | 松下電器産業株式会社 | Encoding device |
US7389227B2 (en) * | 2000-01-14 | 2008-06-17 | C & S Technology Co., Ltd. | High-speed search method for LSP quantizer using split VQ and fixed codebook of G.729 speech encoder |
US7010482B2 (en) * | 2000-03-17 | 2006-03-07 | The Regents Of The University Of California | REW parametric vector quantization and dual-predictive SEW vector quantization for waveform interpolative coding |
US6601032B1 (en) * | 2000-06-14 | 2003-07-29 | Intervideo, Inc. | Fast code length search method for MPEG audio encoding |
US6983017B2 (en) * | 2001-08-20 | 2006-01-03 | Broadcom Corporation | Method and apparatus for implementing reduced memory mode for high-definition television |
US7460993B2 (en) * | 2001-12-14 | 2008-12-02 | Microsoft Corporation | Adaptive window-size selection in transform coding |
US7328150B2 (en) * | 2002-09-04 | 2008-02-05 | Microsoft Corporation | Innovations in pure lossless audio compression |
US7299190B2 (en) * | 2002-09-04 | 2007-11-20 | Microsoft Corporation | Quantization and inverse quantization for audio |
TW594674B (en) * | 2003-03-14 | 2004-06-21 | Mediatek Inc | Encoder and a encoding method capable of detecting audio signal transient |
SG120118A1 (en) * | 2003-09-15 | 2006-03-28 | St Microelectronics Asia | A device and process for encoding audio data |
US7325023B2 (en) * | 2003-09-29 | 2008-01-29 | Sony Corporation | Method of making a window type decision based on MDCT data in audio encoding |
US7426462B2 (en) * | 2003-09-29 | 2008-09-16 | Sony Corporation | Fast codebook selection method in audio encoding |
US7548819B2 (en) | 2004-02-27 | 2009-06-16 | Ultra Electronics Limited | Signal measurement and processing method and apparatus |
US20060080090A1 (en) * | 2004-10-07 | 2006-04-13 | Nokia Corporation | Reusing codebooks in parameter quantization |
US7199735B1 (en) * | 2005-08-25 | 2007-04-03 | Mobilygen Corporation | Method and apparatus for entropy coding |
-
2007
- 2007-03-21 US US11/689,371 patent/US7937271B2/en active Active
- 2007-08-17 WO PCT/CN2007/002490 patent/WO2008022565A1/en active Application Filing
-
2011
- 2011-03-28 US US13/073,833 patent/US8271293B2/en active Active
-
2012
- 2012-08-07 US US13/568,705 patent/US8468026B2/en active Active
-
2013
- 2013-05-15 US US13/895,256 patent/US9361894B2/en not_active Expired - Fee Related
-
2016
- 2016-05-21 US US15/161,230 patent/US20160267916A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2972205A (en) | 1957-04-18 | 1961-02-21 | Gazzola | Fishhook disgorger |
CN1208489A (en) * | 1995-12-01 | 1999-02-17 | 数字剧场系统股份有限公司 | Multi-channel predictive subband coder using psychoacoustic adaptive bit allocation |
CN1338104A (en) * | 1999-01-28 | 2002-02-27 | 多尔拜实验特许公司 | Data framing for adaptive-block-length coding system |
WO2002056297A1 (en) * | 2001-01-11 | 2002-07-18 | Sasken Communication Technologies Limited | Adaptive-block-length audio coder |
JP2003233397A (en) * | 2002-02-12 | 2003-08-22 | Victor Co Of Japan Ltd | Device, program, and data transmission device for audio encoding |
CN1677490A (en) * | 2004-04-01 | 2005-10-05 | 北京宫羽数字技术有限责任公司 | Intensified audio-frequency coding-decoding device and method |
WO2006030289A1 (en) | 2004-09-17 | 2006-03-23 | Digital Rise Technology Co., Ltd. | Apparatus and methods for multichannel digital audio coding |
Non-Patent Citations (1)
Title |
---|
See also references of EP2054881A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20160267916A1 (en) | 2016-09-15 |
US9361894B2 (en) | 2016-06-07 |
US20120303375A1 (en) | 2012-11-29 |
US8468026B2 (en) | 2013-06-18 |
US20130253938A1 (en) | 2013-09-26 |
US8271293B2 (en) | 2012-09-18 |
US7937271B2 (en) | 2011-05-03 |
US20110173014A1 (en) | 2011-07-14 |
US20070174053A1 (en) | 2007-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8468026B2 (en) | Audio decoding using variable-length codebook application ranges | |
EP2054881B1 (en) | Audio decoding | |
JP4374233B2 (en) | Progressive Lossless Embedded AudioCoder (PLEAC) using multiple factorial reversible transforms (ProgressiveLosslessEmbeddedAudioCoder: PLEAC) | |
EP1715476B1 (en) | Low-bitrate encoding/decoding method and system | |
KR100903017B1 (en) | Scalable coding method for high quality audio | |
EP2279562B1 (en) | Factorization of overlapping transforms into two block transforms | |
CN105074818A (en) | Methods for parametric multi-channel encoding | |
KR20110046498A (en) | Compression of audio scale factors by two-dimensional transformation | |
CN100489964C (en) | Audio encoding | |
TW594675B (en) | Method and apparatus for encoding and for decoding a digital information signal | |
JP3814611B2 (en) | Method and apparatus for processing time discrete audio sample values | |
KR20100089772A (en) | Method of coding/decoding audio signal and apparatus for enabling the method | |
CN104681028A (en) | Encoding method and encoding device | |
WO2004079923A2 (en) | Method and apparatus for audio compression | |
CN101290774B (en) | Audio encoding and decoding system | |
WO2021143691A1 (en) | Audio encoding and decoding methods and audio encoding and decoding devices | |
KR100300887B1 (en) | A method for backward decoding an audio data | |
US6463405B1 (en) | Audiophile encoding of digital audio data using 2-bit polarity/magnitude indicator and 8-bit scale factor for each subband | |
Chen et al. | Fast time-frequency transform algorithms and their applications to real-time software implementation of AC-3 audio codec | |
KR101260285B1 (en) | BSAC arithmetic decoding method based on plural probability model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07800711 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2009524878 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007800711 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 990/KOLNP/2009 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020097005454 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: RU |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020127005062 Country of ref document: KR |