EP2054881B1 - Audio decoding - Google Patents

Audio decoding

Info

Publication number
EP2054881B1
Authority
EP
European Patent Office
Prior art keywords
frame
code book
indexes
transient
entropy
Prior art date
Legal status
Active
Application number
EP07800711A
Other languages
German (de)
English (en)
Other versions
EP2054881A4 (fr)
EP2054881A1 (fr)
Inventor
Yuli You
Current Assignee
Digital Rise Technology Co Ltd
Original Assignee
Digital Rise Technology Co Ltd
Priority date
Filing date
Publication date
Priority claimed from US11/558,917 (published as US8744862B2)
Priority claimed from US11/689,371 (published as US7937271B2)
Application filed by Digital Rise Technology Co Ltd filed Critical Digital Rise Technology Co Ltd
Publication of EP2054881A1
Publication of EP2054881A4
Application granted
Publication of EP2054881B1
Legal status: Active


Classifications

    • G10L19/022 Blocking, i.e. grouping of samples in time; choice of analysis windows; overlap factoring
    • G10L19/025 Detection of transients or attacks for time/frequency resolution switching
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/02 Speech or audio signal analysis-synthesis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G11B20/10 Digital recording or reproducing
    • H03M7/30 Compression; expansion; suppression of unnecessary data, e.g. redundancy reduction
    • G10L19/032 Quantisation or dequantisation of spectral components
    • G10L19/038 Vector quantisation, e.g. TwinVQ audio

Definitions

  • the present invention pertains to systems, methods and techniques for decoding of audio signals, such as digital audio signals received across a communication channel or read from a storage device.
  • WO 2006/030289 discloses a low bitrate audio coding system with an encoder and a decoder. There is provided a selectively switchable resolution filter bank switching between different resolution modes when detecting transients in a frame.
  • decoding systems, methods and techniques are provided in which audio data are retrieved from a bit stream by applying code books to specified ranges of quantization indexes (in some cases even crossing boundaries of quantization units) and by identifying a sequence of different windows to be applied within a single frame of the audio data based on window information within the bit stream.
  • Figure 1 is a block diagram illustrating various illustrative environments in which a decoder may be used, according to representative embodiments of the present invention.
  • Figures 2A-B illustrate the use of a single long block to cover a frame and the use of multiple short blocks to cover a frame, respectively, according to a representative embodiment of the present invention.
  • FIGS 3A-C illustrate different examples of a transient frame according to a representative embodiment of the present invention.
  • FIG. 4 is a block diagram of an audio signal decoding system 100 according to a representative embodiment of the present invention.
  • the present invention pertains to systems, methods and techniques for decoding audio signals, e.g., after retrieval from a storage device or reception across a communication channel.
  • Applications in which the present invention may be used include, but are not limited to: digital audio broadcasting, digital television (satellite, terrestrial and/or cable broadcasting), home theatre, digital theatre, laser video disc players, content streaming on the Internet and personal audio players.
  • the audio decoding systems, methods and techniques of the present invention can be used, e.g., in conjunction with the audio encoding systems, methods and techniques of the '346 Application.
  • a decoder 100 receives as its input a frame-based bit stream 20 that includes, for each frame, the actual audio data within that frame (typically, entropy-encoded quantization indexes) and various kinds of processing information (e.g., including control, formatting and/or auxiliary information).
  • the bit stream 20 ordinarily will be input into decoder 100 via a hard-wired connection or via a detachable connector.
  • bit stream 20 could have originated from any of a variety of different sources.
  • the sources include, e.g., a digital radio-frequency (or other electromagnetic) transmission which is received by an antenna 32 and converted into bit stream 20 in demodulator 34, a storage device 36 (e.g., semiconductor, magnetic or optical) from which the bit stream 20 is obtained by an appropriate reader 38, a cable connection 42 from which bit stream 20 is derived in demodulator 44, or a cable connection 48 which directly provides bit stream 20.
  • Bit stream 20 might have been generated, e.g., using any of the techniques described in the '346 Application.
  • bit stream 20 itself will have been derived from another signal, e.g., a multiplexed bit stream, such as those multiplexed according to MPEG 2 system protocol, where the audio bit stream is multiplexed with video bit streams of various formats, audio bit stream of other formats, and metadata; or a received radio-frequency signal that was modulated (using any of the known techniques) with redundancy-encoded, interleaved and/or punctured symbols representing bits of audio data.
  • the audio data within bit stream 20 have been transformed into subband samples (preferably using a unitary sinusoidal-based transform technique), quantized, and then entropy-encoded.
  • the audio data have been transformed using the modified discrete cosine transform (MDCT), quantized and then entropy-encoded using appropriate Huffman encoding.
  • Abbreviations used herein: MDCT (modified discrete cosine transform); PCM (pulse-code modulation).
  • the decoder 100 preferably stores the same code books as are used by the encoder.
  • the preferred Huffman code books are set forth in the '760 Application, where the "Code" is the Huffman code in decimal format, the "Bit Increment" is the number of additional bits (in decimal format) required for the current code as compared to the code on the previous line, and the "Index" is the unencoded value in decimal format.
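That incremental "Bit Increment" layout lends itself to a simple table-building step. The sketch below assumes (this is an assumption, not stated in this chunk) that the first row's increment equals its full code length; build_code_table is a hypothetical helper, not a name from the patent or the '760 Application.

```python
def build_code_table(rows):
    """Build a (code, bit_length) -> index lookup from table rows of
    (code, bit_increment, index), where bit_increment is the number of
    additional bits relative to the previous row.  The first row's
    increment is assumed to be its full code length (hypothetical
    reading, not verified against the '760 Application)."""
    table = {}
    length = 0
    for code, increment, index in rows:
        length += increment       # lengths accumulate row by row
        table[(code, length)] = index
    return table
```

A decoder built this way matches received bit patterns against (code, length) pairs of increasing length.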
  • the input audio data are frame-based, with each frame defining a particular time interval and including samples for each of multiple audio channels during that time interval.
  • each such frame has a fixed number of samples, selected from a relatively small set of frame sizes, with the selected frame size for any particular time interval depending, e.g., upon the sampling rate and the amount of delay that can be tolerated between frames.
  • each frame includes 128, 256, 512 or 1,024 samples, with longer frames being preferred except in situations where reduction of delay is important. In most of the examples discussed below, it is assumed that each frame consists of 1,024 samples. However, such examples should not be taken as limiting.
  • the frames are divided into a number of smaller equal-sized contiguous blocks (sometimes referred to herein as "primary blocks" to distinguish them from MDCT or other transform blocks which typically are longer). This division is illustrated in Figures 2A&B .
  • the entire frame 50 is covered by a single primary block 51 (e.g., including 1,024 audio data samples).
  • the frame 50 is covered by eight contiguous primary blocks 52-59 (e.g., each including 128 audio data samples).
  • Each frame of samples can be classified as a transient frame (i.e., one that includes a signal transient) or a quasistationary frame (i.e., one that does not include a transient).
  • a signal transient preferably is defined as a sudden and quick rise (attack) or fall of signal energy. Transient signals occur only sparsely and, for purposes of the present invention, it is assumed that no more than two transient signals will occur in each frame.
  • transient segment refers to an entire frame, or a segment of a frame, in which the signal has the same or similar statistical properties.
  • a quasistationary frame generally consists of a single transient segment, while a transient frame ordinarily will consist of two or three transient segments.
  • the transient frame generally will have two transient segments: one covering the portion of the frame before the attack or fall and another covering the portion of the frame after the attack or fall. If both an attack and fall occur in a transient frame, then three transient segments generally will exist, each one covering the portion of the frame as segmented by the attack and fall, respectively.
  • FIGS 3A-C each illustrate a single frame 60 of samples that has been divided into eight equal-sized primary blocks 61-68.
  • a transient signal 70 occurs in the second block 62, so there are two transient segments, one consisting of block 61 alone and the other consisting of blocks 62-68.
  • a transient signal 71 occurs in block 64 and another transient signal 72 occurs in block 66, so there are three transient segments, one consisting of blocks 61-63, one consisting of blocks 64-65 and the last consisting of blocks 66-68.
  • a transient signal 73 occurs in block 68, so there are two transient segments, one consisting of blocks 61-67 and the other consisting of block 68 alone.
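The segmentations shown in Figures 3A-C can be reproduced mechanically from the transient block positions. The following is an illustrative sketch only; transient_segments and its arguments are hypothetical names, not from the patent.

```python
def transient_segments(num_blocks, transient_blocks):
    """Compute transient-segment boundaries for one frame.

    num_blocks: number of primary blocks in the frame (e.g. 8)
    transient_blocks: sorted 0-based block indexes at which a transient occurs
    Returns a list of (first_block, last_block) pairs, one per segment,
    mirroring the examples of Figures 3A-C."""
    # Each transient after block 0 starts a new segment.
    starts = [0] + [b for b in transient_blocks if b > 0]
    segments = []
    for i, s in enumerate(starts):
        e = starts[i + 1] - 1 if i + 1 < len(starts) else num_blocks - 1
        segments.append((s, e))
    return segments
```

For example, a transient in the second of eight blocks yields two segments: the first block alone, and blocks two through eight, as in Figure 3A.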
  • Figure 4 is a block diagram of audio signal decoding system 100 according to a representative embodiment of the present invention, in which the solid arrows indicate the flow of audio data, the broken-line arrows indicate the flow of control, formatting and/or auxiliary information, and the broken-line boxes indicate components that in the present embodiment are instantiated only if indicated in the corresponding control data in bit stream 20, as described in more detail below.
  • the individual sections, modules or components illustrated in Figure 4 are implemented entirely in computer-executable code, as described below. However, in alternate embodiments any or all of such sections or components may be implemented in any of the other ways discussed herein.
  • the bit stream 20 initially is input into demultiplexer 115, which divides the bit stream 20 into frames of data and unpacks the data in each frame in order to separate out the processing information and the audio-signal information.
  • the data in bit stream 20 preferably are interpreted as a sequence of frames, with each new frame beginning with the same "synchronization word" (preferably, 0x7FFF).
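Locating frame boundaries then amounts to scanning for the synchronization word. Below is a minimal byte-aligned sketch; a real demultiplexer would also validate the header that follows each match, and find_frame_starts is a hypothetical name.

```python
SYNC_WORD = 0x7FFF  # each frame begins with this 16-bit pattern

def find_frame_starts(data: bytes):
    """Return byte offsets at which the 16-bit synchronization word
    0x7FFF appears (simple byte-aligned scan, for illustration)."""
    offsets = []
    for i in range(len(data) - 1):
        if data[i] == 0x7F and data[i + 1] == 0xFF:
            offsets.append(i)
    return offsets
```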
  • the general structure of each data frame preferably is as follows:
    Frame Header: Synchronization word (preferably 0x7FFF), followed by a description of the audio signal, such as sample rate, the number of normal channels, the number of low-frequency effect (LFE) channels and so on.
    Normal Channels 1 to 64: Audio data for all normal channels (up to 64 such channels in the present embodiment).
    LFE Channels 0 to 3: Audio data for all LFE channels (up to 3 such channels in the present embodiment).
    Error Detection: Error-detection code for the current frame of audio data. When an error is detected, the error-handling program is run.
  • if nFrmHeaderType indicates an Extension frame header, the first 13 bits following nFrmHeaderType are interpreted as nNumWord, the next 6 bits are interpreted as nNumNormalCh, and so on.
  • nNumWord indicates the length of the audio data in the current frame (in 32-bit words) from the beginning of the synchronization word (its first byte) to the end of the error-detection word for the current frame.
  • nNumBlocksPerFrm indicates the number of short-window Modified Discrete Cosine Transform (MDCT) blocks corresponding to the current frame of audio data.
  • one short-window MDCT block contains 128 primary audio data samples (preferably entropy-encoded quantized subband samples), so the number of primary audio data samples corresponding to a frame of audio data is 128*nNumBlocksPerFrm.
  • the MDCT block preferably is larger than the primary block and, more preferably, twice the size of the primary block. Accordingly, if the short primary block size consists of 128 audio data samples, then the short MDCT block preferably consists of 256 samples, and if the long primary block consists of 1,024 audio data samples, then the long MDCT block consists of 2,048 samples. More preferably, each primary block consists of the new (next subsequent) audio data samples.
  • nSampleRateIndex indicates the index of the sampling frequency that was used for the audio signal.
  • nSampleRateIndex: Sampling frequency (Hz)
    0: 8000
    1: 11025
    2: 12000
    3: 16000
    4: 22050
    5: 24000
    6: 32000
    7: 44100
    8: 48000
    9: 88200
    10: 96000
    11: 176400
    12: 192000
    13-15: Reserved
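The table can be captured directly as a lookup, shown here with the hypothetical helper sample_rate (indexes 13-15 are reserved):

```python
# Sampling-frequency table indexed by nSampleRateIndex; indexes 13-15 are reserved.
SAMPLE_RATES = [8000, 11025, 12000, 16000, 22050, 24000, 32000,
                44100, 48000, 88200, 96000, 176400, 192000]

def sample_rate(nSampleRateIndex: int) -> int:
    """Return the sampling frequency (Hz) for a decoded nSampleRateIndex."""
    if not 0 <= nSampleRateIndex < len(SAMPLE_RATES):
        raise ValueError("reserved or invalid nSampleRateIndex")
    return SAMPLE_RATES[nSampleRateIndex]
```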
  • nNumNormalCh indicates the number of normal channels.
  • the number of bits representing this field is determined by the frame header type. In the present embodiment, if nFrmHeaderType indicates a General frame header, then 3 bits are used and the number of normal channels can range from 1 to 8. On the other hand, if nFrmHeaderType indicates an Extension frame header, then 6 bits are used and the number of normal channels can range from 1 to 64.
  • nNumLfeCh indicates the number of LFE channels.
  • if nFrmHeaderType indicates a General frame header, 1 bit is used and the number of LFE channels can range from 0 to 1.
  • if nFrmHeaderType indicates an Extension frame header, 2 bits are used and the number of LFE channels can range from 0 to 3.
  • bAuxChCfg indicates whether there is any auxiliary data at the end of the current frame, e.g., containing additional channel configuration information.
  • nJicCb indicates the starting critical band of joint intensity encoding if joint intensity encoding has been applied in the current frame. Again, this field preferably is present only in the General frame header and does not appear in the Extension frame header.
  • as indicated above, all of the data in the header is processing information. As will become apparent below, some of the channel-specific data also is processing information, although the vast majority of such data are audio data samples.
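Header fields such as the 13-bit nNumWord and the 6-bit nNumNormalCh are fixed-width bit fields, so unpacking them requires only a bit reader. The sketch below assumes MSB-first bit order (an assumption; this chunk does not state the bit order), and BitReader is a hypothetical helper.

```python
class BitReader:
    """Minimal MSB-first bit reader (illustrative; bit order is assumed)."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read(self, nbits: int) -> int:
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value

# For an Extension frame header: 13-bit nNumWord, then 6-bit nNumNormalCh.
reader = BitReader(bytes([0x03, 0x20, 0xC0]))
nNumWord = reader.read(13)
nNumNormalCh = reader.read(6)
```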
  • the general data structure for each normal channel is as follows:
    Window Sequence:
      Window function index: indicates the MDCT window function(s).
      The number of transient segments: indicates the number of transient segments (only used for a transient frame).
      Transient segment length: indicates the lengths of the transient segments (only used for a transient frame).
    Huffman Code Book Indexes and Application Ranges:
      The number of code books: the number of Huffman code books which each transient segment uses.
      Application ranges: application range of each Huffman code book.
      Code book indexes: code book index for each Huffman code book.
    Subband Sample Quantization Indexes: quantization indexes of all subband samples.
    Quantization Step Size Indexes: quantization step size index of each quantization unit.
    Sum/Difference Encoding Decision: indicates whether the decoder should perform sum/difference decoding on the samples of a quantization unit.
  • not necessarily all of the normal channels contain the window sequence information. If the window sequence information is not provided for one or more of the channels, this group of data preferably is copied from the provided window sequence information for channel 0 (Ch0), although in other embodiments the information instead is copied from any other designated channel.
  • the general data structure for each LFE channel is as follows:
    Huffman Code Book Indexes and Application Ranges:
      The number of code books: indicates the number of code books.
      Application ranges: application range of each Huffman code book.
      Code book indexes: code book index of each Huffman code book.
    Subband Sample Quantization Indexes: quantization indexes of all subband samples.
    Quantization Step Size Indexes: quantization step size index of each quantization unit.
  • the window sequence information (provided for normal channels only) preferably includes an MDCT window function index.
  • in the preferred embodiments, that index is designated as "nWinTypeCurrent" and has the following values and meanings:
    nWinTypeCurrent: Window Function (Window Function Length, in samples)
    0: WIN_LONG_LONG2LONG (2048)
    1: WIN_LONG_LONG2SHORT (2048)
    2: WIN_LONG_SHORT2LONG (2048)
    3: WIN_LONG_SHORT2SHORT (2048)
    4: WIN_LONG_LONG2BRIEF (2048)
    5: WIN_LONG_BRIEF2LONG (2048)
    6: WIN_LONG_BRIEF2BRIEF (2048)
    7: WIN_LONG_SHORT2BRIEF (2048)
    8: WIN_LONG_BRIEF2SHORT (2048)
    9: WIN_SHORT_SHORT2SHORT (256)
    10: WIN_SHORT_SHORT2BRIEF (256)
    11: WIN_SHORT
  • when nWinTypeCurrent is 9, 10, 11 or 12, the current frame is made up of nNumBlocksPerFrm (e.g., up to 8) short MDCTs, and nWinTypeCurrent indicates only the first and last window function of these nNumBlocksPerFrm short MDCTs.
  • the other short window functions within the frame preferably are determined by the location where the transient appears, in conjunction with the perfect reconstruction requirements (as described in more detail in the '917 Application).
  • the received data preferably includes window information that is adequate to fully identify the entire window sequence that was used at the encoder side.
  • nNumCluster indicates the number of transient segments in the current frame.
  • when a long window function is indicated, the current frame is quasistationary, so the number of transient segments implicitly is 1, and nNumCluster does not need to appear in the bit stream (so it preferably is not transmitted).
  • 2 bits are allocated to nNumCluster when a short window function is indicated, and its value ranges from 0 to 2, corresponding to 1 to 3 transient segments, respectively.
  • short window functions may be used even in a quasistationary frame (i.e., a single transient segment). This case can occur, e.g., when the encoder wanted to achieve low coding delay. In such a low-delay mode, the number of audio data samples in a frame can be less than 1,024 (i.e., the length of a long primary block).
  • the encoder might have chosen to include just 256 PCM samples in a frame, in which case it covers those samples with two short blocks (each including 128 PCM samples that are covered by a 256-sample MDCT block) in the frame, meaning that the decoder also applies two short windows.
  • a field " anNumB / ocksPerFrmPerC / uster[nCluster] " preferably is included in the received data and indicates the length of each transient segment nCluster in terms of the number of short MDCT blocks it occupies.
  • Each such word preferably is Huffman encoded (e.g., using HuffDec1_7x1 in Table B.28 of the '760 Application) and, therefore, each transient segment length can be decoded to reconstruct the locations of the transient segments.
  • anNumBlocksPerFrmPerCluster[nCluster] preferably does not appear in the bit stream (i.e., it is not transmitted) because the transient segment length is implicit, i.e., a single long block in a frame having a long window function (e.g., 2,048 MDCT samples) or all of the blocks in a frame having multiple (e.g., up to 8) short window functions (e.g., each containing 256 MDCT samples).
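Once the per-segment lengths are decoded (or implied), the segment start positions follow by simple accumulation; segment_starts below is a hypothetical helper name.

```python
def segment_starts(lengths):
    """Given anNumBlocksPerFrmPerCluster-style segment lengths (in short
    MDCT blocks), return the starting block index of each transient
    segment (illustrative sketch)."""
    starts, pos = [], 0
    for n in lengths:
        starts.append(pos)
        pos += n
    return starts
```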
  • when the frame is covered by a single long block, the window function for that block is designated by nWinTypeCurrent.
  • the situation generally is a bit more complicated when the frame is covered by multiple short blocks.
  • the reason for the additional complexity is that, due to the perfect reconstruction requirements, the window function for the current block depends upon the window functions that were used in the immediately adjacent previous and subsequent blocks. Accordingly, in the current embodiment of the invention, additional processing is performed in order to identify the appropriate window sequence when short blocks are indicated. This additional processing is described in more detail below in connection with the discussion of module 134.
  • the Huffman Code Book Index and Application Range information also is extracted by demultiplexer 115. This information and the processing of it are described below.
  • in module 118, the appropriate code books and application ranges are selected based on the corresponding information that was extracted in demultiplexer 115. More specifically, the above-referenced Huffman Code Book Index and Application Range information preferably includes the following fields.
  • anHSNumBands[nCluster] indicates the number of code book segments in the transient segment nCluster.
  • the field " mnHSBandEdge[nCluster][nBand] *4" indicates the length (in terms of quantization indexes) of the code book segment nBand (i.e., the application range of the Huffman code book) in the transient segment nCluster; each such value itself preferably is Huffman encoded, with HuffDec2_64x1 (as set forth in the '760 Application) being used by module 18 to decode the value for quasistationary frames and HuffDec3_32x1 (also forth in the '760 Application) being used to decode the value for transient frames.
  • the field "mnHS[nCluster][nBand]" indicates the Huffman code book index of the code book segment nBand in the transient segment nCluster; each such value itself preferably is Huffman encoded, with HuffDec4_18x1 in the '760 Application being used to decode the value for quasistationary frames and HuffDee5_18x1 in the '760 Application being used to decode the value for transient frames.
  • the code books for decoding the actual Subband Sample Quantization Indexes are then retrieved based on the decoded mnHS[nCluster][nBand] code book indexes as follows:
    Code Book Index (mnHS): Dimension; Quantization Index Range; Midtread; Quasistationary Code Book Group; Transient Code Book Group
    0: 0; 0; reserved; reserved; reserved
    1: 4; -1, 1; Yes; HuffDec10_81x4; HuffDec19_81x4
    2: 2; -2, 2; Yes; HuffDec11_25x2; HuffDec20_25x2
    3: 2; -4, 4; Yes; HuffDec12_81x2; HuffDec21_81x2
    4: 2; -8, 8; Yes; HuffDec13_289x2; HuffDec22_289x2
    5: 1; -15, 15; Yes; HuffDec14_31x1; HuffDec23_31x1
    6: 1; -31, 31; Yes; HuffDec15_63x1; HuffDec24_63x1
    7: 1; -63,
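The mapping from mnHS index and frame type to a code book group can be expressed as a small lookup over the rows of the table above (transcribed for indexes 1-6; the remaining rows are truncated in the source, and select_code_book is a hypothetical name).

```python
# Rows transcribed from the code book table above (index 0 is reserved;
# rows beyond 6 are truncated in the source and therefore omitted).
CODE_BOOK_INFO = {
    1: {"dim": 4, "range": (-1, 1),   "quasi": "HuffDec10_81x4",  "transient": "HuffDec19_81x4"},
    2: {"dim": 2, "range": (-2, 2),   "quasi": "HuffDec11_25x2",  "transient": "HuffDec20_25x2"},
    3: {"dim": 2, "range": (-4, 4),   "quasi": "HuffDec12_81x2",  "transient": "HuffDec21_81x2"},
    4: {"dim": 2, "range": (-8, 8),   "quasi": "HuffDec13_289x2", "transient": "HuffDec22_289x2"},
    5: {"dim": 1, "range": (-15, 15), "quasi": "HuffDec14_31x1",  "transient": "HuffDec23_31x1"},
    6: {"dim": 1, "range": (-31, 31), "quasi": "HuffDec15_63x1",  "transient": "HuffDec24_63x1"},
}

def select_code_book(mnHS: int, is_transient_frame: bool) -> str:
    """Return the code book name for a decoded mnHS index and frame type."""
    info = CODE_BOOK_INFO[mnHS]
    return info["transient"] if is_transient_frame else info["quasi"]
```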
  • each code book application range (i.e., each code book segment) may cross boundaries of one or more quantization units.
  • in alternate embodiments, the code book segments may have been specified in other ways, e.g., by specifying the starting point for each code book application range. However, it generally is possible to encode using fewer total bits if the lengths (rather than the starting points) are specified.
  • the received information preferably uniquely identifies the application range(s) to which each code book is to be applied, and the decoder 100 uses this information for decoding the actual quantization indexes.
  • This approach is significantly different than conventional approaches, in which each quantization unit is assigned a code book, so that the application ranges are not transmitted in conventional approaches.
  • the additional overhead ordinarily is more than compensated by the additional efficiencies that can be obtained by flexibly specifying application ranges.
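Splitting a channel's quantization indexes by the decoded application-range boundaries might look like the following sketch, where band_edges holds mnHSBandEdge-style upper boundaries already multiplied by 4 (split_into_segments is a hypothetical name).

```python
def split_into_segments(indexes, band_edges):
    """Split a channel's quantization indexes into code book segments
    using their upper boundaries.  As noted above, these segments may
    cross quantization-unit boundaries (illustrative sketch)."""
    segments, start = [], 0
    for edge in band_edges:
        segments.append(indexes[start:edge])
        start = edge
    return segments
```

Each resulting segment would then be decoded with the code book selected for it.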
  • in decoding module 120, the quantization indexes extracted by demultiplexer 115 are decoded by applying the code books identified in module 118 to their corresponding application ranges of quantization indexes. The result is a fully decoded set of quantization indexes.
  • each "quantization unit” preferably is defined by a rectangle of quantization indexes bounded by a critical band in the frequency domain and by a transient segment in the time domain. All quantization indexes within this rectangle belong to the same quantization unit.
  • the transient segments preferably are identified, based on the transient segment information extracted by demultiplexer 115, in the manner described above.
  • a "critical band” refers to the frequency resolution of the human ear, i.e., the bandwidth ⁇ f within which the human ear is not capable of distinguishing different frequencies.
  • the bandwidth ⁇ f preferably rises along with the frequency f , with the relationship between f and ⁇ f being approximately exponential.
  • Each critical band can be represented as a number of adjacent subband samples of the filter bank.
  • the preferred critical bands for the short and long windows and for the different sampling rates are set forth in tables B.2 through B.27 of the '760 Application.
  • the boundaries of the critical bands are determined in advance for each MDCT block size and sampling rate, with the encoder and decoder using the same critical bands. From the foregoing information, the number of quantization units is reconstructed as follows.
  • anHSNumBands[nCluster] is the number of codebooks for transient segment nCluster
  • mnHSBandEdge[nCluster][nBand] is the upper boundary of codebook application range for codebook nBand of transient segment nCluster
  • pnCBEdge[nBand] is the upper boundary of critical band nBand
  • anMaxActCb[nCluster] is the number of quantization units for transient segment nCluster.
  • in dequantizer module 124, the quantization step size applicable to each quantization unit is decoded from the bit stream 20, and such step sizes are used to reconstruct the subband samples from the quantization indexes received from decoding module 120.
  • " mnostepIndex[nCluster][nBand] " indicates the quantization step size index of quantization unit (nCluster, nBand) and is decoded by Huffman code book HuffDec6_116x1 for quasistationary frames and by Huffman code book HuffDec7_1116x1 for transient frames, both as set forth in the '760 Application.
  • the encoder, in a process called interleaving, rearranges the subband samples for the current frame of the current channel so as to group together samples within the same transient segment that correspond to the same subband. Accordingly, in de-interleaving module 132, the subband samples are rearranged back into their natural order.
  • nNumCluster is the number of transient segments
  • anNumBlocksPerFrmPerCluster[nCluster] is the transient segment length for transient segment nCluster
  • nClusterBin0[nCluster] is the first subband sample location of transient segment nCluster
  • afBinInterleaved[q] is the array of subband samples arranged in interleaved order
  • afBinNatural[p] is the array of subband samples arranged in natural order.
  • the subband samples for each frame of each channel are output in their natural order.
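The de-interleaving can be sketched as follows, assuming (as an illustration of the description above, not the patent's exact memory layout) that each segment stores, for each subband, that subband's sample from each of the segment's blocks.

```python
def deinterleave(interleaved, num_blocks_per_cluster, block_size):
    """Rearrange subband samples from interleaved order back to natural
    (block-by-block) order, per transient segment.

    interleaved: the frame's samples, segment by segment;
    num_blocks_per_cluster: anNumBlocksPerFrmPerCluster-style lengths;
    block_size: subband samples per short MDCT block (e.g. 128).
    Illustrative sketch of de-interleaving module 132."""
    natural, pos = [], 0
    for nblocks in num_blocks_per_cluster:
        seg = interleaved[pos:pos + nblocks * block_size]
        pos += nblocks * block_size
        out = [0] * len(seg)
        # seg[subband*nblocks + block] -> out[block*block_size + subband]
        for subband in range(block_size):
            for block in range(nblocks):
                out[block * block_size + subband] = seg[subband * nblocks + block]
        natural.extend(out)
    return natural
```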
  • in module 134, the sequence of window functions that was used (at the encoder side) for the transform blocks of the present frame of data is identified.
  • in the preferred embodiments, the MDCT transform was used at the encoder side, although in alternate embodiments other types of transforms (preferably unitary and sinusoidal-based) may be used instead.
  • for a frame covered by a single long transform block, nWinTypeCurrent identifies the single long window function that was used for the entire frame. Accordingly, no additional processing needs to be performed in module 134 for long transform-block frames in this embodiment.
  • nWinTypeCurrent in the current embodiment only specifies the window function used for the first and the last transform block. Accordingly, the following processing preferably is performed for short transform-block frames.
  • the received value for nWinTypeCurrent preferably identifies whether the first block of the current frame and the first block of the next frame contain a transient signal. This information, together with the locations of the transient segments (identified from the received transient segment lengths) and the perfect reconstruction requirements, permits the decoder 100 to determine which window function to use in each block of the frame.
  • because the WIN_SHORT_BRIEF2BRIEF window function is used for a block with a transient in the preferred embodiments, the following nomenclature may be used to convey this information.
  • WIN_SHORT_BRIEF2BRIEF indicates that there is a transient in the first block of the current frame and in the first block of the subsequent frame
  • WIN_SHORT_BRIEF2SHORT indicates that there is a transient in the first block of the current frame but not in the first block of the subsequent frame.
  • If the last block of the frame contains a transient, its window function should be WIN_SHORT_BRIEF2BRIEF.
  • the window function for the last block of the frame should be WIN_SHORT_Last2SHORT, where Last is determined by the window function of the second last block of the frame via the perfect reconstruction property.
  • the window function for the last block of the frame should be WIN_SHORT_Last2BRIEF, where Last is again determined by the window function of the second last block of the frame via the perfect reconstruction property.
  • the window functions for the rest of the blocks in the frame can be determined by the transient location(s), which is indicated by the start of a transient segment, via the perfect reconstruction property. A detailed procedure for doing this is given in the '917 Application.
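As a concrete illustration of the rules above, the following sketch derives a name of the form WIN_SHORT_<Left>2<Right> for each short block from the set of transient blocks and from the flag indicating a transient in the first block of the next frame. It is a simplified stand-in for the detailed procedure of the '917 Application: it assumes a BRIEF window half propagates only to the immediately adjacent block, and the function and parameter names are illustrative, not taken from the specification.

```python
def window_sequence(num_blocks, transient_blocks, transient_in_next_frame):
    """Assign a window-function name to each short transform block of a frame.

    transient_blocks: set of block indexes that contain a transient
    (a transient in block 0 is conveyed by nWinTypeCurrent).
    transient_in_next_frame: True if the first block of the next frame
    contains a transient (also conveyed by nWinTypeCurrent).
    """
    names = []
    for b in range(num_blocks):
        if b in transient_blocks:
            # A block containing a transient gets the brief, symmetric window.
            names.append("WIN_SHORT_BRIEF2BRIEF")
            continue
        # Perfect reconstruction: each window half must match the adjacent
        # half of the neighbouring block, so a BRIEF neighbour forces a
        # BRIEF half on this side.
        left = "BRIEF" if (b - 1) in transient_blocks else "SHORT"
        if b + 1 < num_blocks:
            right = "BRIEF" if (b + 1) in transient_blocks else "SHORT"
        else:
            # Last block: the right half depends on the next frame's first block.
            right = "BRIEF" if transient_in_next_frame else "SHORT"
        names.append(f"WIN_SHORT_{left}2{right}")
    return names
```

For instance, a transient in block 1 of a four-block frame forces the preceding block to taper with a BRIEF right half and the following block to open with a BRIEF left half.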
  • module 136 for each transform block of the current frame, the subband samples are inverse transformed using the window function identified by module 134 for such block to recover the original data values (subject to any quantization noise that may have been introduced in the course of the encoding and other numerical inaccuracies).
  • the output of module 136 is the reconstructed sequence of PCM samples that was input to the encoder.
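A minimal sketch of the windowed inverse transform and overlap-add performed by module 136, assuming the standard MDCT with 50% overlap and, for simplicity, a single sine window for every block (the codec actually selects a per-block window as described above); all function names are illustrative.

```python
import math

def sine_window(N):
    # Satisfies the Princen-Bradley condition: w[n]**2 + w[n + N]**2 == 1
    return [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

def mdct(x):
    # 2N windowed time samples -> N subband samples (encoder side)
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N)) for k in range(N)]

def imdct(X):
    # N subband samples -> 2N time-aliased samples (decoder side)
    N = len(X)
    return [(2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                            for k in range(N)) for n in range(2 * N)]

def synthesize(blocks, window):
    """Window each inverse-transformed block and overlap-add at 50% overlap."""
    N = len(blocks[0])
    out = [0.0] * (N * (len(blocks) + 1))
    for b, X in enumerate(blocks):
        y = imdct(X)
        for n in range(2 * N):
            out[b * N + n] += window[n] * y[n]
    return out
```

Time-domain alias cancellation guarantees that, wherever two windowed blocks overlap, the original time samples are recovered exactly (up to quantization noise in a real codec).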
  • Such devices typically will include, for example, at least some of the following components interconnected with each other, e.g., via a common bus: one or more central processing units (CPUs); read-only memory (ROM); random access memory (RAM); input/output software and circuitry for interfacing with other devices (e.g., using a hardwired connection, such as a serial port, a parallel port, a USB connection or a FireWire connection, or using a wireless protocol, such as Bluetooth or an 802.11 protocol); and software and circuitry for connecting to one or more networks (e.g., using a hardwired connection such as an Ethernet card, or using a wireless protocol, such as code division multiple access (CDMA), global system for mobile communications (GSM), Bluetooth, an 802.11 protocol, or any other cellular-based or non-cellular-based system).
  • the process steps to implement the above methods and functionality typically initially are stored in mass storage (e.g., the hard disk), are downloaded into RAM and then are executed by the CPU out of RAM.
  • the process steps initially are stored in RAM or ROM.
  • Suitable devices for use in implementing the present invention may be obtained from various vendors. In the various embodiments, different types of devices are used depending upon the size and complexity of the tasks. Suitable devices include mainframe computers, multiprocessor computers, workstations, personal computers, and even smaller computers such as PDAs, wireless telephones or any other appliance or device, whether stand-alone, hard-wired into a network or wirelessly connected to a network.
  • any of the functionality described above can be implemented in software, hardware, firmware or any combination of these, with the particular implementation being selected based on known engineering tradeoffs. More specifically, where the functionality described above is implemented in a fixed, predetermined or logical manner, it can be accomplished through programming (e.g., software or firmware), an appropriate arrangement of logic components (hardware) or any combination of the two, as will be readily appreciated by those skilled in the art.
  • the present invention also relates to machine-readable media on which are stored program instructions for performing the methods and functionality of this invention.
  • Such media include, by way of example, magnetic disks, magnetic tape, optically readable media such as CD ROMs and DVD ROMs, or semiconductor memory such as PCMCIA cards, various types of memory cards, USB memory devices, etc.
  • the medium may take the form of a portable item such as a miniature disk drive or a small disk, diskette, cassette, cartridge, card, stick etc., or it may take the form of a relatively larger or immobile item such as a hard disk drive, ROM or RAM provided in a computer or other device.
  • functionality sometimes is ascribed to a particular module or component. However, functionality generally may be redistributed as desired among any different modules or components, in some cases completely obviating the need for a particular component or module and/or requiring the addition of new components or modules.
  • the precise distribution of functionality preferably is made according to known engineering tradeoffs, with reference to the specific embodiment of the invention, as will be understood by those skilled in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Amplifiers (AREA)
  • Signal Processing Not Specific To The Method Of Recording And Reproducing (AREA)
  • Diaphragms For Electromechanical Transducers (AREA)

Claims (16)

  1. A method of decoding an audio signal, comprising:
    (a) obtaining a bit stream that includes a plurality of frames, each frame including processing information pertaining to said frame and entropy-coded quantization indexes that represent audio data within said frame, the processing information including:
    (i) a plurality of code book indexes, each code book index identifying a code book,
    (ii) code book application information that specifies the ranges of entropy-coded quantization indexes to which the code books are to be applied, and
    (iii) window information;
    (b) decoding the entropy-coded quantization indexes by applying the code books, identified by the code book indexes, to the ranges of entropy-coded quantization indexes specified by the code book application information;
    (c) generating subband samples, representing said audio data within said frame, by dequantizing the decoded quantization indexes;
    (d) identifying, based on the window information, a sequence of plural different window functions that were applied, during encoding, to contiguous equal-sized transform blocks within said frame of the audio data; and
    (e) obtaining time-domain audio data within said frame by inverse transforming the subband samples and by using, within said frame of the audio data, the sequence of plural different window functions identified by the window information;
    wherein the window information indicates a location of a transient within the frame, and wherein the sequence of plural different window functions is identified in step (d) based on predetermined rules pertaining to the location of the transient; and
    wherein the predetermined rules specify that a different particular window function was applied, during encoding, to the one of the transform blocks that contains the transient than was applied to the transform blocks that do not contain the transient.
  2. The method of claim 1, wherein at least one of the ranges of entropy-coded quantization indexes crosses a boundary of a quantization unit, where a quantization unit is defined by a rectangle of quantization indexes that is delimited by a critical band in the frequency domain and by a transient segment in the time domain.
  3. The method of claim 1, wherein the code book application information identifies a range of entropy-coded quantization indexes for each code book identified by the code book indexes.
  4. The method of claim 1, wherein the code book application information specifies a length of entropy-coded quantization indexes for each code book identified by the code book indexes.
  5. The method of claim 1, wherein the predetermined rules also conform to perfect-reconstruction requirements.
  6. The method of claim 1, wherein the particular window function is narrower than others of the plurality of different window functions within the individual frame of the audio data.
  7. The method of claim 1, wherein the particular window function is symmetric, occupies only a central section of its entire transform block, and has a plurality of 0 values at each end of its transform block.
  8. The method of claim 1, wherein each of (i) the plurality of code book indexes, (ii) the code book application information and (iii) the window information is entropy-coded.
  9. A computer-readable medium storing computer-executable process steps for decoding an audio signal, said process steps comprising:
    (a) obtaining a bit stream that includes a plurality of frames, each frame including processing information pertaining to said frame and entropy-coded quantization indexes that represent audio data within said frame, the processing information including:
    (i) a plurality of code book indexes, each code book index identifying a code book,
    (ii) code book application information that specifies the ranges of entropy-coded quantization indexes to which the code books are to be applied, and
    (iii) window information;
    (b) decoding the entropy-coded quantization indexes by applying the code books, identified by the code book indexes, to the ranges of entropy-coded quantization indexes specified by the code book application information;
    (c) generating subband samples, representing said audio data within said frame, by dequantizing the decoded quantization indexes;
    (d) identifying, based on the window information, a sequence of plural different window functions that were applied, during encoding, to contiguous equal-sized transform blocks within said frame of the audio data; and
    (e) obtaining time-domain audio data within said frame by inverse transforming the subband samples and by using, within said frame of the audio data, the sequence of plural different window functions identified by the window information; wherein the window information indicates a location of a transient within the frame, wherein the sequence of plural different window functions is identified in step (d) based on predetermined rules pertaining to the location of the transient, wherein the predetermined rules specify that a different particular window function was applied, during encoding, to the one of the transform blocks that contains the transient than was applied to the transform blocks that do not contain the transient, and wherein the predetermined rules also conform to perfect-reconstruction requirements.
  10. The computer-readable medium of claim 9, wherein at least one of the ranges of entropy-coded quantization indexes crosses a boundary of a quantization unit, where a quantization unit is defined by a rectangle of quantization indexes that is delimited by a critical band in the frequency domain and by a transient segment in the time domain.
  11. The computer-readable medium of claim 9, wherein the particular window function is symmetric, occupies only a central section of its entire transform block, and has a plurality of 0 values at each end of its transform block.
  12. The computer-readable medium of claim 9, wherein each of (i) the plurality of code book indexes, (ii) the code book application information and (iii) the window information is entropy-coded.
  13. An apparatus for decoding an audio signal, comprising:
    (a) means for obtaining a bit stream that includes a plurality of frames, each frame including processing information pertaining to said frame and entropy-coded quantization indexes that represent audio data within said frame, the processing information including:
    (i) a plurality of code book indexes, each code book index identifying a code book,
    (ii) code book application information that specifies the ranges of entropy-coded quantization indexes to which the code books are to be applied, and
    (iii) window information;
    (b) means for decoding the entropy-coded quantization indexes by applying the code books, identified by the code book indexes, to the ranges of entropy-coded quantization indexes specified by the code book application information;
    (c) means for generating subband samples, representing said audio data within said frame, by dequantizing the decoded quantization indexes;
    (d) means for identifying, based on the window information, a sequence of plural different window functions that were applied, during encoding, to contiguous equal-sized transform blocks within said frame of the audio data; and
    (e) means for obtaining time-domain audio data within said frame by inverse transforming the subband samples and by using, within said frame of the audio data, the sequence of plural different window functions identified by the window information; wherein the window information indicates a location of a transient within the frame, wherein the sequence of plural different window functions is identified in step (d) based on predetermined rules pertaining to the location of the transient, wherein the predetermined rules specify that a different particular window function was applied, during encoding, to the one of the transform blocks that contains the transient than was applied to the transform blocks that do not contain the transient, and wherein the predetermined rules also conform to perfect-reconstruction requirements.
  14. The apparatus of claim 13, wherein at least one of the ranges of entropy-coded quantization indexes crosses a boundary of a quantization unit, where a quantization unit is defined by a rectangle of quantization indexes that is delimited by a critical band in the frequency domain and by a transient segment in the time domain.
  15. The apparatus of claim 13, wherein the particular window function is symmetric, occupies only a central section of its entire transform block, and has a plurality of 0 values at each end of its transform block.
  16. The apparatus of claim 13, wherein each of (i) the plurality of code book indexes, (ii) the code book application information and (iii) the window information is entropy-coded.
EP07800711A 2006-08-18 2007-08-17 Décodage audio Active EP2054881B1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US82276006P 2006-08-18 2006-08-18
US11/558,917 US8744862B2 (en) 2006-08-18 2006-11-12 Window selection based on transient detection and location to provide variable time resolution in processing frame-based data
US11/669,346 US7895034B2 (en) 2004-09-17 2007-01-31 Audio encoding system
US11/689,371 US7937271B2 (en) 2004-09-17 2007-03-21 Audio decoding using variable-length codebook application ranges
PCT/CN2007/002490 WO2008022565A1 (fr) 2006-08-18 2007-08-17 Décodage audio

Publications (3)

Publication Number Publication Date
EP2054881A1 EP2054881A1 (fr) 2009-05-06
EP2054881A4 EP2054881A4 (fr) 2009-09-09
EP2054881B1 true EP2054881B1 (fr) 2010-10-27

Family

ID=39110402

Family Applications (2)

Application Number Title Priority Date Filing Date
EP07785373A Active EP2054883B1 (fr) 2006-08-18 2007-08-17 Système de codage audio
EP07800711A Active EP2054881B1 (fr) 2006-08-18 2007-08-17 Décodage audio

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP07785373A Active EP2054883B1 (fr) 2006-08-18 2007-08-17 Système de codage audio

Country Status (7)

Country Link
US (1) US7895034B2 (fr)
EP (2) EP2054883B1 (fr)
JP (2) JP5162589B2 (fr)
KR (3) KR101168473B1 (fr)
AT (2) ATE486347T1 (fr)
DE (2) DE602007010160D1 (fr)
WO (1) WO2008022564A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419977A (zh) * 2011-01-14 2012-04-18 展讯通信(上海)有限公司 瞬态音频信号的判别方法

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8332216B2 (en) * 2006-01-12 2012-12-11 Stmicroelectronics Asia Pacific Pte., Ltd. System and method for low power stereo perceptual audio coding using adaptive masking threshold
KR101435411B1 (ko) * 2007-09-28 2014-08-28 삼성전자주식회사 심리 음향 모델의 마스킹 효과에 따라 적응적으로 양자화간격을 결정하는 방법과 이를 이용한 오디오 신호의부호화/복호화 방법 및 그 장치
JP5414684B2 (ja) 2007-11-12 2014-02-12 ザ ニールセン カンパニー (ユー エス) エルエルシー 音声透かし、透かし検出、および透かし抽出を実行する方法および装置
WO2009081568A1 (fr) * 2007-12-21 2009-07-02 Panasonic Corporation Codeur, décodeur et procédé de codage
US8457951B2 (en) 2008-01-29 2013-06-04 The Nielsen Company (Us), Llc Methods and apparatus for performing variable black length watermarking of media
CN102081926B (zh) * 2009-11-27 2013-06-05 中兴通讯股份有限公司 格型矢量量化音频编解码方法和系统
CN102222505B (zh) * 2010-04-13 2012-12-19 中兴通讯股份有限公司 可分层音频编解码方法系统及瞬态信号可分层编解码方法
EP3441967A1 (fr) 2011-04-05 2019-02-13 Nippon Telegraph and Telephone Corporation Procédé de décodage, décodeur, programme et support d'enregistrement
ES2659001T3 (es) 2013-01-29 2018-03-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Codificadores de audio, decodificadores de audio, sistemas, métodos y programas informáticos que utilizan una resolución temporal aumentada en la proximidad temporal de inicios o finales de fricativos o africados
UA112833C2 (uk) * 2013-05-24 2016-10-25 Долбі Інтернешнл Аб Аудіо кодер і декодер
JP2017009663A (ja) * 2015-06-17 2017-01-12 ソニー株式会社 録音装置、録音システム、および、録音方法
WO2017064264A1 (fr) 2015-10-15 2017-04-20 Huawei Technologies Co., Ltd. Procédé et appareil de codage et de décodage sinusoïdal
US9762382B1 (en) * 2016-02-18 2017-09-12 Teradyne, Inc. Time-aligning a signal
CN105790854B (zh) * 2016-03-01 2018-11-20 济南中维世纪科技有限公司 一种基于声波的短距离数据传输方法及装置
CN114499690B (zh) * 2021-12-27 2023-09-29 北京遥测技术研究所 一种星载激光通信终端地面模拟装置

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3902948A1 (de) * 1989-02-01 1990-08-09 Telefunken Fernseh & Rundfunk Verfahren zur uebertragung eines signals
DE4020656A1 (de) * 1990-06-29 1992-01-02 Thomson Brandt Gmbh Verfahren zur uebertragung eines signals
GB9103777D0 (en) 1991-02-22 1991-04-10 B & W Loudspeakers Analogue and digital convertors
JP3413691B2 (ja) * 1994-08-16 2003-06-03 ソニー株式会社 情報符号化方法及び装置、情報復号化方法及び装置、並びに情報記録媒体及び情報送信方法
US5956674A (en) 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
US5812971A (en) * 1996-03-22 1998-09-22 Lucent Technologies Inc. Enhanced joint stereo coding method using temporal envelope shaping
US5848391A (en) * 1996-07-11 1998-12-08 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Method subband of coding and decoding audio signals using variable length windows
JP3318824B2 (ja) * 1996-07-15 2002-08-26 ソニー株式会社 デジタル信号符号化処理方法、デジタル信号符号化処理装置、デジタル信号記録方法、デジタル信号記録装置、記録媒体、デジタル信号伝送方法及びデジタル信号伝送装置
US6266003B1 (en) * 1998-08-28 2001-07-24 Sigma Audio Research Limited Method and apparatus for signal processing for time-scale and/or pitch modification of audio signals
US6357029B1 (en) * 1999-01-27 2002-03-12 Agere Systems Guardian Corp. Joint multiple program error concealment for digital audio broadcasting and other applications
US6226608B1 (en) * 1999-01-28 2001-05-01 Dolby Laboratories Licensing Corporation Data framing for adaptive-block-length coding system
JP3518737B2 (ja) * 1999-10-25 2004-04-12 日本ビクター株式会社 オーディオ符号化装置、オーディオ符号化方法、及びオーディオ符号化信号記録媒体
AU2001276588A1 (en) * 2001-01-11 2002-07-24 K. P. P. Kalyan Chakravarthy Adaptive-block-length audio coder
US6983017B2 (en) * 2001-08-20 2006-01-03 Broadcom Corporation Method and apparatus for implementing reduced memory mode for high-definition television
JP3815323B2 (ja) * 2001-12-28 2006-08-30 日本ビクター株式会社 周波数変換ブロック長適応変換装置及びプログラム
JP2003216188A (ja) * 2002-01-25 2003-07-30 Matsushita Electric Ind Co Ltd オーディオ信号符号化方法、符号化装置、及び記憶媒体
JP2003233397A (ja) * 2002-02-12 2003-08-22 Victor Co Of Japan Ltd オーディオ符号化装置、オーディオ符号化プログラム及びオーディオ符号化データ伝送装置
US7328150B2 (en) 2002-09-04 2008-02-05 Microsoft Corporation Innovations in pure lossless audio compression
US7516064B2 (en) * 2004-02-19 2009-04-07 Dolby Laboratories Licensing Corporation Adaptive hybrid transform for signal analysis and synthesis
US7548819B2 (en) * 2004-02-27 2009-06-16 Ultra Electronics Limited Signal measurement and processing method and apparatus
JP4271602B2 (ja) * 2004-03-04 2009-06-03 富士通株式会社 転送データの正当性を判定する装置および方法
JP2005268912A (ja) * 2004-03-16 2005-09-29 Sharp Corp フレーム補間のための画像処理装置およびそれを備えた表示装置
CN1677490A (zh) * 2004-04-01 2005-10-05 北京宫羽数字技术有限责任公司 一种增强音频编解码装置及方法
US7630902B2 (en) * 2004-09-17 2009-12-08 Digital Rise Technology Co., Ltd. Apparatus and methods for digital audio coding using codebook application ranges

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419977A (zh) * 2011-01-14 2012-04-18 展讯通信(上海)有限公司 瞬态音频信号的判别方法
CN102419977B (zh) * 2011-01-14 2013-10-02 展讯通信(上海)有限公司 瞬态音频信号的判别方法

Also Published As

Publication number Publication date
EP2054881A4 (fr) 2009-09-09
US7895034B2 (en) 2011-02-22
ATE486347T1 (de) 2010-11-15
US20070124141A1 (en) 2007-05-31
WO2008022564A1 (fr) 2008-02-28
KR101401224B1 (ko) 2014-05-28
KR20120032039A (ko) 2012-04-04
DE602007010160D1 (de) 2010-12-09
JP5162589B2 (ja) 2013-03-13
JP5162588B2 (ja) 2013-03-13
EP2054883A1 (fr) 2009-05-06
KR20090041439A (ko) 2009-04-28
DE602007010158D1 (de) 2010-12-09
EP2054883B1 (fr) 2010-10-27
EP2054881A1 (fr) 2009-05-06
KR101168473B1 (ko) 2012-07-26
KR101161921B1 (ko) 2012-07-03
ATE486346T1 (de) 2010-11-15
KR20090042972A (ko) 2009-05-04
JP2010501089A (ja) 2010-01-14
JP2010501090A (ja) 2010-01-14
EP2054883A4 (fr) 2009-09-09

Similar Documents

Publication Publication Date Title
EP2054881B1 (fr) Décodage audio
US8468026B2 (en) Audio decoding using variable-length codebook application ranges
EP1715476B1 (fr) Procédé et système d'encodage/de décodage à faible débit binaire
US20100305956A1 (en) Method and an apparatus for processing a signal
JP2003506763A (ja) 高品質オーディオ用縮尺自在符号化方法
EP2279562B1 (fr) Factorisation de transformées chevauchantes en deux transformées par blocs
EP2395503A2 (fr) Procédé de codage et de décodage de signaux audio, et appareil à cet effet
CN100489964C (zh) 音频解码
TW594675B (en) Method and apparatus for encoding and for decoding a digital information signal
US20040172239A1 (en) Method and apparatus for audio compression
CN101290774B (zh) 音频编码和解码系统
US6678647B1 (en) Perceptual coding of audio signals using cascaded filterbanks for performing irrelevancy reduction and redundancy reduction with different spectral/temporal resolution
KR100300887B1 (ko) 디지털 오디오 데이터의 역방향 디코딩 방법
Chen et al. Fast time-frequency transform algorithms and their applications to real-time software implementation of AC-3 audio codec
KR101260285B1 (ko) 다원화된 확률 모형에 기반한 비.에스.에이.씨 산술 복호화방법
Bii MPEG-1 Layer III Standard: A Simplified Theoretical Review
CN113948094A (zh) 音频编解码方法和相关装置及计算机可读存储介质

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090306

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RIN1 Information on inventor provided before grant (corrected)

Inventor name: YOU, YULI

A4 Supplementary search report drawn up and despatched

Effective date: 20090811

17Q First examination report despatched

Effective date: 20090826

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602007010158

Country of ref document: DE

Date of ref document: 20101209

Kind code of ref document: P

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20101027

LTIE Lt: invalidation of european patent or patent extension

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110127

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110227

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110228

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110128

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110207

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20110728

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007010158

Country of ref document: DE

Effective date: 20110728

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110817

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20101027

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: LU

Payment date: 20230720

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: MC

Payment date: 20230823

Year of fee payment: 17

Ref country code: GB

Payment date: 20230728

Year of fee payment: 17

Ref country code: CH

Payment date: 20230902

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230728

Year of fee payment: 17

Ref country code: DE

Payment date: 20230720

Year of fee payment: 17