WO2007052088A1 - Audio compression - Google Patents

Audio compression

Info

Publication number
WO2007052088A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
low frequency
sections
band
high frequency
Application number
PCT/IB2005/003293
Other languages
French (fr)
Inventor
Mikko Tammi
Original Assignee
Nokia Corporation
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/IB2005/003293 priority Critical patent/WO2007052088A1/en
Priority to KR1020087010631A priority patent/KR100958144B1/en
Priority to EP05806493.2A priority patent/EP1943643B1/en
Priority to CN2005800519760A priority patent/CN101297356B/en
Priority to AU2005337961A priority patent/AU2005337961B2/en
Priority to JP2008538430A priority patent/JP4950210B2/en
Priority to BRPI0520729-0A priority patent/BRPI0520729B1/en
Priority to US12/084,677 priority patent/US8326638B2/en
Publication of WO2007052088A1 publication Critical patent/WO2007052088A1/en

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0204Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
    • G10L19/0208Subband vocoders
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

Definitions

  • During tonal signal sections, the quality of the coded signal may decrease when compared to the original. This is because the coded high frequency region may not remain as periodic from one frame to another as in the original signal. The periodicity is lost since some periodic (sinusoidal) components may be missing, or the amplitude of the existing periodic components varies too much from one frame to another.
  • The tonal signal sections with possible quality degradations can be detected.
  • The tonal sections can be detected by comparing the similarities between two successive frames in the Shifted Discrete Fourier Transform (SDFT) domain.
  • SDFT is a useful transformation for this purpose, because it also contains phase information, but is still closely related to the MDCT transformation, which is used in the other parts of the coder.
  • Tonality detection can be performed right after transient detection and before initializing the actual high frequency region coding. Since transient frames generally do not contain tonal components, tonality detection can be applied only when both the present and the previous frame are normal long frames (e.g. 2048 samples).
  • The tonality detection is based on the Shifted Discrete Fourier Transform (SDFT), as indicated above, which can be defined for frames of 2N samples (the defining equation is not reproduced in this text), where h(n) is the window and x(n) is the input signal.
  • The SDFT transformation can be computed first for the tonality analysis, and then the MDCT transformation is obtained straightforwardly as the real part of the SDFT coefficients. This way the tonality detection does not increase the computational complexity significantly.
  • TONALITY = TONAL, if s_lim1 ≤ S ≤ s_lim2; NOT TONAL, if S < s_lim1     (16), where S is the similarity between the successive frames and s_lim1, s_lim2 are tonality thresholds.
  • The tonal detection (62) as described above can be carried out based on the input signal 10.
  • The input frames are divided into three groups: not tonal (64), tonal (66) and strongly tonal (66), as illustrated in FIG. 10.
  • The quality of the tonal sections can be improved by adding additional sinusoids to the high frequency region, and possibly by increasing the number of high frequency sub-bands used to create the high frequency region as described above.
  • If the signal is not tonal (64), the coding is continued as described above.
  • Additional sinusoids can be added to the high frequency spectrum after applying the coding as illustrated above.
  • A fixed number of sinusoids can be added to the MDCT domain spectrum.
  • The sinusoids can straightforwardly be added to the frequencies where the absolute difference between the original and the coded spectrum is largest.
  • The positions and amplitudes of the sinusoids are quantized and submitted to the decoder.
  • Sinusoids can be added to the high frequency region of the spectrum.
  • With X_H(k) and X̂_H(k) representing the original and the coded high frequency sub-band components, respectively, the first sinusoid can be added at an index k_1, which can be obtained from an equation not reproduced in this text.
  • The amplitude (including its sign) of the sinusoid can be defined by an equation that is likewise not reproduced in this text.
  • Equations (17)-(19) can be repeated until a desired number of sinusoids has been added (a sketch of this procedure follows this list). Typically, already four additional sinusoids can result in clearly improved results during tonal sections.
  • The amplitudes of the sinusoids A_i can be quantized and submitted to the decoder 8. The positions k_i of the sinusoids can also be submitted. In addition, the decoder 8 can be informed that the current frame is tonal.
  • The second scaling factor α₂(j) may not improve the quality and may then be eliminated.
  • For strongly tonal sections, special actions can be applied so that the high frequency sub-bands remain very similar from one frame to another.
  • If the number of high frequency sub-bands n_b is relatively low (i.e. 8 or below), it can be increased; 16 high frequency sub-bands generally provide more accurate performance.
  • A higher number of sinusoids can also be added. In general, a good solution is to use two times as many sinusoids as during "normal" tonal sections.
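
The sinusoid addition referenced in the list above can be sketched as follows. In the MDCT domain a single sinusoid corresponds to a single spectral bin, so each iteration picks the bin where the coded high frequency spectrum deviates most from the original and corrects it. Taking the correction amplitude as the signed difference at that bin is an assumption made for this sketch, since equations (17)-(19) themselves are not reproduced in this text.

```python
import numpy as np

def add_sinusoids(X_H_orig: np.ndarray, X_H_coded: np.ndarray, count: int = 4):
    """Add 'count' sinusoids (single MDCT bins) at the frequencies where the
    absolute difference between the original and the coded high frequency
    spectrum is largest.  Returns the corrected spectrum and the (position,
    amplitude) pairs that would be quantized and submitted to the decoder."""
    corrected = X_H_coded.copy()
    sinusoids = []
    for _ in range(count):
        k_i = int(np.argmax(np.abs(X_H_orig - corrected)))  # position of largest error
        A_i = X_H_orig[k_i] - corrected[k_i]                 # amplitude incl. sign (assumed form)
        corrected[k_i] += A_i
        sinusoids.append((k_i, A_i))
    return corrected, sinusoids
```

For strongly tonal frames, the same routine would simply be called with a larger count, e.g. twice the number used for normal tonal sections.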

Abstract

The application relates to audio encoding and decoding. In order to enhance coded audio signals, there is provided a method comprising dividing the audio signal into at least a low frequency band and a high frequency band, dividing the high frequency band into at least two high frequency sub-band signals, and generating parameters that refer at least to the low frequency band signal sections which match best with the high-frequency sub-band signals.

Description

Audio Compression
Technical Field
The present application relates in general to audio compression.
Background
Audio compression is commonly employed in modern consumer devices for storing or transmitting digital audio signals. Consumer devices may be telecommunication devices, video devices, audio players, radio devices and other consumer devices. High compression ratios enable better storage capacity, or more efficient transmission via a communication channel, e.g. a wireless communication channel or a wired communication channel. However, at the same time as a high compression ratio is achieved, the quality of the compressed signal should be maintained at a high level. The target of audio coding is generally to maximize the audio quality in relation to the given compression ratio, i.e. the bit rate.
Numerous audio coding techniques have been developed during the past decades . Advanced audio coding systems utilize effectively the properties of the human ear. The main idea is that the coding noise can be placed in the areas of the signal where it least affects the perceptual quality, so that the data rate can be reduced without introducing audible distortion. Therefore, theories of psychoacoustics are an important part of modern audio coding.
In known audio encoders, the input signal is divided into a limited number of sub-bands. Each of the sub-band signals can be quantized. From the theory of psychoacoustics it is known that the highest frequencies in the spectrum are perceptually less important than the low frequencies. This can be considered to some extent in the coder by allocating fewer bits to the quantization of the high frequency sub-bands than to the low frequency sub-bands.
More sophisticated audio coding utilizes the fact that in most cases there are large dependencies between the low frequency regions and high frequency regions of an audio signal, i.e. the higher half of the spectrum is generally quite similar to the lower half. The low frequency region can be considered the lower half of the audio spectrum, and the high frequency region can be considered the upper half of the audio spectrum. It is to be understood that the border between low and high frequency is not fixed, but may lie between 2 kHz and 15 kHz, or even beyond these borders.
A current approach for coding the high frequency region is known as spectral-band-replication (SBR). This technique is described in M. Dietz, L. Liljeryd, K. Kjörling and O. Kunz, "Spectral Band Replication, a novel approach in audio coding," in 112th AES Convention, Munich, Germany, May 2002, and P. Ekstrand, "Bandwidth extension of audio signals by spectral band replication," in 1st IEEE Benelux Workshop on Model Based Processing and Coding of Audio, Leuven, Belgium, November 2002. The described method can be applied in ordinary audio coders, such as, for example, AAC or MPEG-1 Layer III (MP3) coders, and many other state-of-the-art coders.
The drawback of the method according to the art is that the mere transposition of low frequency bands to high frequency bands may lead to dissimilarities between the original high frequencies and their reconstruction utilizing the transposed low frequencies. Another drawback is that noise and sinusoids need to be added to the frequency spectrum according to known methods.
Therefore, it is an object of the application to provide an improved audio coding technique. It is a further object of the application to provide a coding technique representing the input signal more correctly with reasonably low bit rates.
Summary
In order to overcome the above mentioned drawbacks, the application provides, according to one aspect, a method for encoding audio signals comprising receiving an input audio signal, dividing the audio signal into at least a low frequency band and a high frequency band, dividing the high frequency band into at least two high frequency sub-band signals, determining, within the low frequency band, signal sections which match best with the high-frequency sub-band signals, and generating parameters that refer at least to the low frequency band signal sections which match best with the high-frequency sub-band signals.
The application provides a new approach for coding the high frequency region of an input signal. The input signal can be divided into temporally successive frames. Each of the frames represents a temporal instance of the input signal. Within each frame, the input signal can be represented by its spectral components. The spectral components, or samples, represent the frequencies within the input signal .
Instead of blindly transposing the low frequency region to the high frequencies, the application maximizes the similarity between the original and the coded high frequency spectral components. According to the application, the high frequency region is formed utilizing the already-coded low frequency region of the signal .
By comparing low frequency signal samples with the high frequency sub-bands of the received signal, a signal section within the low frequency band can be found which matches best with an actual high frequency sub-band. The application provides for searching within the whole low frequency spectrum, sample by sample, for a signal section which best resembles a high frequency sub-band. As a signal section corresponds to a sample sequence, the application provides, in other words, finding a sample sequence which matches best with the high frequency sub-band. The sample sequence can start anywhere within the low frequency band, except that the last considered starting point within the low frequency band should be the last sample in the low frequency band minus the length of the high frequency sub-band that is to be matched.
An index or link to the low frequency signal section matching best the actual high frequency sub-band can be used to model the high frequency sub-band. Only the index or link needs to be encoded and stored, or transmitted in order to allow restoring a representation of the corresponding high frequency sub-band at the receiving end.
According to embodiments, the most similar match, i.e. the most similar spectral shape of the signal section and the high frequency sub-band, is searched within the low frequency band. Parameters referring at least to the signal section which is found to be most similar with a high frequency sub-band are created in the encoder. The parameters may comprise scaling factors for scaling the found sections into the high frequency band. At the decoder side, these parameters are used to transpose the corresponding low frequency signal sections to a high frequency region to reconstruct the high frequency sub- bands . Scaling can be applied to the copied low frequency signal sections using scaling factors. According to embodiments, only the scaling factors and the links to the low frequency signal sections need to be encoded.
The shape of the high frequency region follows more closely the original high frequency spectrum than with known methods when using the best matching low frequency signal sections for reproduction of the high frequency sub-bands. The perceptually important spectral peaks can be modeled more accurately, because the amplitude, shape, and frequency position is more similar to the original signal. As the modeled high frequency sub-bands can be compared with the original high frequency sub-bands, it is possible to easily detect missing spectral components, i.e. sinusoids or noise, and then add these.
To enable envelope shaping, embodiments provide utilizing the low frequency signal sections by transposing the low frequency signal samples into high-frequency sub-band signals using the parameters wherein the parameters comprise scaling factors such that an envelope of the transposed low frequency signal sections follows an envelope of the high frequency sub-band signals of the received signal. The scaling factors enable adjusting the energy and shape of the copied low frequency signal sections to match better with the actual high frequency sub-bands .
The parameters can comprise links to low frequency signal sections to represent the corresponding high frequency sub-band signals according to embodiments. The links can be pointers or indexes to the low frequency signal sections. With this information, it is possible to refer to the low frequency signal sections when constructing the high frequency sub-band.
In order to reduce the number of quantization bits, it is possible to normalize the envelope of the high frequency sub-band signals. The normalization provides that both the low and high frequency bands are within a normalized amplitude range. This reduces the number of bits needed for quantization of the scaling factors. The information used for normalization has to be provided by the encoder to construct the representation of the high frequency sub-band in the decoder. Embodiments provide envelope normalization with linear prediction coding. It is also possible to normalize the envelope utilizing cepstral modeling. Cepstral modeling uses the inverse Fourier Transform of the logarithm of the power spectrum of a signal .
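As a hedged illustration of the cepstral variant mentioned above, the following sketch derives a smooth spectral envelope by keeping only the low-quefrency part of the cepstrum (the inverse Fourier transform of the log power spectrum) and then divides it out of the spectrum; the lifter length is an assumed tuning parameter, not a value taken from the application.

```python
import numpy as np

def cepstral_normalize(spectrum: np.ndarray, n_lifter: int = 20):
    """Normalize the spectral envelope of a real-valued spectrum using cepstral
    modeling: the low-quefrency cepstrum gives a smoothed log power spectrum,
    whose exponential is the envelope.  Returns (normalized_spectrum, envelope);
    the envelope information would be sent to the decoder for envelope synthesis."""
    eps = 1e-12
    log_power = np.log(np.abs(spectrum) ** 2 + eps)
    cepstrum = np.fft.irfft(log_power)                 # inverse FT of the log power spectrum
    liftered = np.zeros_like(cepstrum)
    liftered[:n_lifter] = cepstrum[:n_lifter]          # keep the slowly varying (envelope) part
    liftered[-(n_lifter - 1):] = cepstrum[-(n_lifter - 1):]
    smooth_log_power = np.fft.rfft(liftered).real      # smoothed log power spectrum
    envelope = np.exp(0.5 * smooth_log_power)          # magnitude envelope
    return spectrum / (envelope + eps), envelope
```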
Generating scaling factors can comprise generating scaling factors in the linear domain to match at least amplitude peaks in the spectrum. Generating scaling factors can also comprise matching at least energy and/or shape of the spectrum in the logarithmic domain, according to embodiments.
Embodiments provide generating signal samples within the low frequency band and/or the high frequency band using modified discrete cosine transformation (MDCT) . The MDCT transformation provides spectrum coefficients preferably as real numbers. The MDCT transformation according to embodiments can be used with any suitable frame sizes, in particular with frame sizes of 2048 samples for normal frames and 256 samples for transient frames, but also any other value in between.
To obtain the low frequency signal sections which match best with corresponding high-frequency sub-band signals, embodiments provide calculating a similarity measure using a normalized correlation or the Euclidean distance.
In order to encode the input signal, embodiments provide quantizing the low frequency signal samples and quantizing at least the scaling factors. The link to the low frequency signal section can be an integer.
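As a minimal sketch of how the parameters could be packed, the integer index can be written directly, while each scaling factor can go through a uniform scalar quantizer; the value range and bit width below are illustrative assumptions, not values from the application.

```python
def quantize_uniform(value: float, lo: float = -4.0, hi: float = 4.0, bits: int = 5) -> int:
    """Map a scaling factor in [lo, hi] to an integer index of 'bits' bits."""
    levels = (1 << bits) - 1
    clipped = min(max(value, lo), hi)
    return round((clipped - lo) / (hi - lo) * levels)

def dequantize_uniform(index: int, lo: float = -4.0, hi: float = 4.0, bits: int = 5) -> float:
    """Inverse mapping used in the decoder."""
    return lo + index / ((1 << bits) - 1) * (hi - lo)

# Per sub-band j the bit stream then carries the integer i(j) as such, plus the
# quantizer indices of alpha1(j) and alpha2(j), e.g.:
# q = quantize_uniform(alpha1_j); alpha1_hat = dequantize_uniform(q)
```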
It is possible to add additional sinusoids to improve the quality of high frequency signals. To support adding such sinusoids, embodiments provide dividing the input signal into temporally successive frames, and detecting tonal sections within two successive frames of the input signal. The tonal sections can be enhanced by adding additional sinusoids. Sections which are highly tonal can additionally be enhanced by increasing the number of high frequency sub-bands in the corresponding high frequency regions. Input frames can be divided into different tonality groups, e.g. not tonal, tonal, and strongly tonal. Detecting tonal sections can comprise using the Shifted Discrete Fourier Transformation (SDFT). The result of the SDFT can be utilized within the encoder to provide the MDCT transformation.
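A hedged sketch of this tonality grouping: the complex spectra of the previous and the present frame are compared with a normalized correlation and the frame is assigned to one of the three groups. A plain windowed FFT is used here as a stand-in for the SDFT of the application, and the two thresholds are illustrative assumptions.

```python
import numpy as np

def frame_tonality(prev_frame: np.ndarray, cur_frame: np.ndarray, window: np.ndarray,
                   s_lim1: float = 0.6, s_lim2: float = 0.85) -> str:
    """Classify a frame as 'not tonal', 'tonal' or 'strongly tonal' from the
    similarity S of two successive frames' complex spectra."""
    A = np.fft.rfft(window * prev_frame)
    B = np.fft.rfft(window * cur_frame)
    S = np.abs(np.vdot(A, B)) / (np.linalg.norm(A) * np.linalg.norm(B) + 1e-12)
    if S >= s_lim2:
        return "strongly tonal"
    if S >= s_lim1:
        return "tonal"
    return "not tonal"
```

Tonality detection would only be run when both the present and the previous frame are normal long frames, as transient frames generally do not contain tonal components.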
Another aspect of the application is a method for decoding audio signals with receiving an encoded bit stream, decoding from the bit stream at least a low frequency signal and at least parameters referring to low frequency signal sections, utilizing the low frequency signal samples and the parameters referring to the low frequency signal sections for reconstructing at least two high-frequency sub-band signals, and outputting an output signal comprising at least the low frequency signal and at least the two high-frequency sub-band signals.
A further aspect of the application is an encoder for encoding audio signals comprising receiving means arranged for receiving an input audio signal, filtering means arranged for dividing the audio signal into at least a low frequency band and a high frequency band, and further arranged for dividing the high frequency band into at least two high frequency sub-band signals, and coding means arranged for generating parameters that refer at least to low frequency band signal sections which match best with the high-frequency sub-band signals .
Yet, a further aspect of the application is a decoder for decoding audio signals comprising receiving means arranged for receiving an encoded bit stream, decoding means arranged for decoding from the bit stream at least a low frequency signal and at least parameters referring to the low frequency signal sections, generation means arranged for utilizing samples of the low frequency signal and the parameters referring to the low frequency signal sections for reconstructing at least two high- frequency sub-band signals.
A further aspect of the application is a system for digital audio compression comprising a described decoder, and a described encoder.
Yet, a further aspect of the application relates to a computer program product for encoding audio signals, the program comprising instructions operable to cause a processor to receive an input audio signal, divide the audio signal into at least a low frequency band and a high frequency band, divide the high frequency band into at least two high frequency sub-band signals, and generate parameters that refer at least to low frequency band signal sections which match best with high-frequency sub-band signals.
Also provided is a computer program product for decoding bit streams, the program comprising instructions operable to cause a processor to receive an encoded bit stream, decode from the bit stream at least a low frequency signal and at least parameters referring to the low frequency signal sections, utilize samples of the low frequency signal and the parameters referring to the low frequency signal sections for reconstructing at least two high-frequency sub-band signals, and output an output signal comprising at least the low frequency signal and at least two high-frequency sub-band signals.
Brief Description of the Figures
The figures show:
FIG. 1 a system for coding audio signals according to the art ;
FIG. 2 an encoder according to the art;
FIG. 3 a decoder according to the art;
FIG. 4 an SBR encoder;
FIG. 5 an SBR decoder;
FIG. 6 spectral representation of an audio signal in different stages;
FIG. 7 a system according to a first embodiment;
FIG. 8 a system according to a second embodiment;
FIG. 9 a frequency spectrum with envelope normalization;
FIG. 10 coding enhancement using tonal detection.
Detailed Description of the Figures
General audio coding systems consist of an encoder and a decoder, as illustrated schematically in FIG. 1. Illustrated is a coding system 2 with an encoder 4, a storage or media channel 6 and a decoder 8.
The encoder 4 compresses an input audio signal 10 producing a bit stream 12, which is either stored or transmitted through a media channel 6. The bit stream 12 can be received within the decoder 8. The decoder 8 decompresses the bit stream 12 and produces an output audio signal 14. The bit rate of the bit stream 12 and the quality of the output audio signal 14 in relation to the input signal 10 are the main features, which define the performance of the coding system 2.
A typical structure of a modern audio encoder 4 is presented schematically in FIG. 2. The input signal 10 is divided into sub-bands using an analysis filter bank structure 16. Each sub-band can be quantized and coded within coding means 18 utilizing the information provided by a psychoacoustic model 20. The coding can be Huffman coding. The quantization setting as well as the coding scheme can be dictated by the psychoacoustic model 20. The quantized, coded information is used within a bit stream formatter 22 for creating a bit stream 12.
The bit stream 12 can be decoded within a decoder 8 as illustrated schematically in FIG. 3. The decoder 8 can comprise bit stream unpacking means 24, sub-band reconstruction means 26, and a synthesis filter bank 28. The decoder 8 computes the inverse of the encoder 4 and transforms the bit stream 12 back to an output audio signal 14. During the decoding process, the bit stream 12 is de-quantized in the sub-band reconstruction means 26 into sub-band signals. The sub-band signals are fed to the synthesis filter bank 28, which synthesizes the audio signal from the sub-band signals and creates the output signal 14.
It is in many cases possible to efficiently and perceptually accurately synthesize the high frequency region using only the low frequency region and a limited amount of additional control information. Optimally, the coding of the high frequency part only requires a small number of control parameters. Since the whole upper part of the spectrum can be synthesized with a small amount of information, considerable savings can be achieved in the total bit rate.
Current coding schemes, such as MP3pro, utilize these properties of audio signals by introducing an SBR coding scheme in addition to the psychoacoustic coding. In SBR, the high frequency region can be generated separately utilizing the coded low frequency region, as illustrated schematically in FIGs 4 and 5.
FIG. 4 illustrates schematically an encoder 4. The encoder 4 comprises low pass filtering means 30, coding means 31, SBR means 32, envelope extraction means 34 and bit stream formatter 22. The low pass filter 30 first defines a cut-off frequency up to which the input signal 10 is filtered. The effect is illustrated in FIG. 6a. Only frequencies below the cut-off frequency 36 pass the filter.
The coding means 31 carry out quantization and Huffman coding with 32 low frequency sub-bands. The low frequency contents are converted within coding means 31 into the QMF domain. The low frequency contents are transposed based on the output of coder 31. The transposition is done in SBR means 32. The effect of transposition of the low frequencies to the high frequencies is illustrated within FIG. 6b. The transposition is performed blindly such that the low frequency sub-band samples are just copied into high frequency sub-band samples. This is done similarly in every frame of the input signal and independently of the characteristics of the input signal .
In the SBR means 32, the high frequency sub-bands can be adjusted based on additional information. This is done to make particular features of the synthesized high frequency region more similar to the original one. Additional components, such as sinusoids or noise, can be added to the high frequency region to increase the similarity with the original high frequency region. Finally, the envelope is adjusted in envelope extraction means 34 to follow the envelope of the original high frequency spectrum. The effect can be seen in FIG. 6c, where the high frequency components are scaled to more closely match the actual high frequency components of the input signal.
The bit stream 12 comprises the coded low frequency signal together with scaling and envelope adjustment parameters. The bit stream 12 can be decoded within a decoder as illustrated in FIG. 5.
Fig. 5 illustrates a decoder 8 with unpacking means 24, a low frequency decoder 38, high frequency reconstruction means 40, component adjustment means 42, and envelope adjustment means 44. The low frequency sub-bands are reconstructed in the decoder 38. From the low frequency sub-bands, the high frequency sub-bands are statically reconstructed within high frequency reconstruction means 40. Sinusoids can be added and the envelope adjusted in component adjustment means 42, and envelope adjustment means 44.
According to the application, the transposition of low frequency signal samples into high frequency sub-bands is done dynamically, e.g. it is checked which low frequency signal sections match best with a high frequency sub-band. An index to the corresponding low frequency signal sections is created. This index is encoded and used within the decoder for constructing the high frequency sub-bands from the low frequency signal.
FIG. 7 illustrates a coding system with an encoder 4 and a decoder 8. The encoder 4 comprises high frequency coding means 50, a low frequency coder 52, and a bit stream formatter 22. The encoder 4 can be part of a more complex audio coding scheme. The application can be used in almost any audio coder in which good quality is aimed at with low bit rates. For instance, the application can be used completely separately from the actual low bit rate audio coder, e.g. it can be placed in front of a psychoacoustic coder, e.g. AAC, MPEG, etc.
As the high frequency region typically contains spectral shapes similar to those of the low frequency region, good coding performance is generally achieved. This is accomplished with a relatively low total bit rate, as only the indexes of the copied spectrum and the scaling factors need to be transmitted to the decoder.
Within the low frequency coder 52, the low frequency samples X_L(k) are coded. Within the high frequency coder 50, parameters α₁(j), α₂(j) and i(j), representing transformation, scaling and envelope forming, are created for coding, as will be described in more detail below.
The high frequency spectrum is first divided into n_b sub-bands. For each sub-band, the most similar match (i.e. the most similar spectrum shape) is searched from the low frequency region.
The method can operate in the modified discrete cosine transform (MDCT) domain. Due to its good properties (50% overlap with critical sampling, flexible window switching, etc.), the MDCT domain is used in most state-of-the-art audio coders. The MDCT transformation is performed as:
X(k) = Σ_{n=0}^{2N−1} h(n) x(n) cos[ (π/N) (n + 1/2 + N/2) (k + 1/2) ],   0 ≤ k < N     (1)
where x(n) is the input signal, h(n) is the time analysis window with length 2N, and 0 ≤ k < N. Typically in audio coding N is 1024 samples (normal frames) or 128 samples (transients). The spectrum coefficients X(k) can be real numbers. Frame sizes as mentioned, as well as any other frame size, are possible.
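For illustration, a direct (unoptimized) implementation of the MDCT of equation (1) can be sketched as follows; the sine analysis window used in the example is an assumption, the text only requires some window h(n) of length 2N.

```python
import numpy as np

def mdct(frame: np.ndarray, window: np.ndarray) -> np.ndarray:
    """Direct MDCT of one frame of 2N time samples, giving N real coefficients:
    X(k) = sum_n h(n) x(n) cos[(pi/N)(n + 1/2 + N/2)(k + 1/2)], 0 <= k < N."""
    two_n = frame.size
    half = two_n // 2                      # N
    n = np.arange(two_n)
    k = np.arange(half)
    kernel = np.cos((np.pi / half) * np.outer(k + 0.5, n + 0.5 + half / 2))
    return kernel @ (window * frame)

# Example with a "normal" frame (2N = 2048 samples, i.e. N = 1024):
N = 1024
h = np.sin(np.pi * (np.arange(2 * N) + 0.5) / (2 * N))   # assumed sine window
x = np.random.randn(2 * N)                                # stand-in input frame
X = mdct(x, h)                                            # N real MDCT coefficients X(k)
X_L, X_H = X[:N // 2], X[N // 2:]                         # N_L = N/2 split used below
```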
To create the parameters describing the high frequency sub-bands, it is necessary to find the low frequency signal sections which match best the high frequency sub-bands within the high frequency coder 50. The high frequency coder 50 and the low frequency coder 52 can create N MDCT coded components, where X_L(k) represents the low frequency components and X_H(k) represents the high frequency components.
With the low frequency coder 52, N_L low frequency MDCT coefficients X_L(k), 0 ≤ k < N_L, can be coded. Typically N_L = N/2, but other selections are also possible.
Utilizing X_L(k) and the original spectrum X(k), the target is to create a high frequency component X̂_H(k) which is, with the used measures, maximally similar to the original high frequency signal X_H(k) = X(N_L + k), 0 ≤ k < N − N_L. X_L(k) and X̂_H(k) together form the synthesized spectrum X̂(k):

X̂(k) = X_L(k) for 0 ≤ k < N_L,  and  X̂(k) = X̂_H(k − N_L) for N_L ≤ k < N.     (2)
The original high frequency spectrum X_H(k) is divided into n_b non-overlapping bands. In principle, the number of bands as well as the width of the bands can be chosen arbitrarily. For example, eight equal width frequency bands can be used when N equals 1024 samples. Another reasonable choice is to select the bands based on the perceptual properties of human hearing. For example, Bark or equivalent rectangular bandwidth (ERB) scales can be utilized to select the number of bands and their widths.
Within the high frequency coder, the similarity measure between the high frequency signal and the low frequency components can be calculated.
Let X_H^j be a column vector containing the jth band of X_H(k) with a length of w_j samples. X_H^j can be compared with the coded low frequency spectrum X_L(k) as follows:
max over i(j) of S( X_L^{i(j)}, X_H^j ),   0 ≤ i(j) ≤ N_L − w_j     (3)
where S(a, b) is a similarity measure between vectors a and b, and X_L^{i(j)} is a vector containing the indexes i(j) ≤ k < i(j) + w_j of the coded low frequency spectrum X_L(k). The length of the desired low frequency signal section is the same as the length of the current high frequency sub-band; thus, basically the only information needed is the index i(j), which indicates where a respective low frequency signal section begins.
The similarity measure can be used to select the index i(j) which provides the highest similarity. The similarity measure describes how similar the shapes of the vectors are, while their relative amplitude is not important. There are many choices for the similarity measure. One possible implementation is the normalized correlation:
$$S(a, b) = \frac{a^{T} b}{\|a\|\,\|b\|}, \qquad (4)$$
which provides a measure that is not sensitive to the amplitudes of a and b. Another reasonable alternative is a similarity measure based on Euclidean distance:
$$S(a, b) = \frac{1}{\|a - b\|} \qquad (5)$$
Correspondingly, many other similarity measures can be utilized as well.
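A minimal sketch of the matching search of equations (3)-(5) might look like the following; the helper names and the small epsilon terms guarding against division by zero are assumptions added for the example.

```python
import numpy as np

def normalized_correlation(a, b):
    """Shape similarity of equation (4); insensitive to the amplitudes of a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def inverse_euclidean(a, b):
    """Similarity based on the Euclidean distance, in the spirit of equation (5)."""
    return float(1.0 / (np.linalg.norm(a - b) + 1e-12))

def best_match_index(X_L, X_H_band, similarity=normalized_correlation):
    """Search the coded low-frequency spectrum X_L for the section whose
    shape is most similar to the high-frequency sub-band (equation (3));
    returns the start index i(j)."""
    w = len(X_H_band)
    scores = [similarity(X_L[i:i + w], X_H_band) for i in range(len(X_L) - w)]
    return int(np.argmax(scores))
```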
These most similar sections within the low frequency signal samples can be copied to the high frequency sub-bands and scaled using particular scaling factors. The scaling factors ensure that the envelope of the coded high frequency spectrum follows the envelope of the original spectrum. Using the index i(j), the selected vector X_L^{i(j)}, which is most similar in shape to X_H^j, has to be scaled to the same amplitude as X_H^j. There are many different techniques for scaling. For example, scaling can be performed in two phases: first in the linear domain to match the high amplitude peaks in the spectrum, and then in the logarithmic domain to match the energy and shape. Scaling the vector X_L^{i(j)} with these scaling factors results in the coded high frequency component X̂_H^j.
The linear domain scaling is performed simply as
$$\hat{X}_H^j = \alpha_1(j)\, X_L^{i(j)}, \qquad (6)$$
where α1(j) is obtained from
[Equation (7): definition of the linear-domain scaling factor α1(j).]
Notice that α1(j) can take both positive and negative values. Before logarithmic scaling, the signs of the vector samples as well as the maximum logarithmic value of X̂_H^j can be stored:
$$K_j = \operatorname{sign}\!\left(\hat{X}_H^j\right) \qquad (8)$$
$$M_j = \max\!\left(\log_{10}\left|\hat{X}_H^j\right|\right) \qquad (9)$$

Now the logarithmic scaling can be performed and X̂_H^j is updated as
[Equation (10): logarithmic-domain scaling update of X̂_H^j.]
$$\hat{X}_H^j = K_j \, 10^{\hat{X}_H^j}, \qquad (11)$$
where the scaling factor α2(j) is obtained from
[Equation (12): definition of the scaling factor α2(j).]
This scaling factor maximizes the similarity between the waveforms in the logarithmic domain. Alternatively, α2(j) can be selected such that the energies are set to approximately equal levels:
[Equation (13): alternative definition of α2(j) setting the energies to approximately equal levels.]
In the above equations, the purpose of the variable M_j is to make sure that the amplitudes of the largest values in X̂_H^j (i.e. the spectral peaks) are not scaled too high (the first scaling factor α1(j) has already set them to the correct level). The variable K_j is used to store the signs of the original samples, since that information is lost during the transformation to the logarithmic domain. After the bands have been scaled, the synthesized high frequency spectrum X̂H(k) can be obtained by combining the vectors X̂_H^j, j = 0, 1, ..., nb − 1.
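The sketch below illustrates the linear scaling phase and the stored side values of equations (6), (8) and (9). Because the exact expression for α1(j) is not reproduced above, the signed peak-ratio used here is only an assumption chosen to reflect the stated behaviour (matching the largest spectral peaks while allowing negative values); the logarithmic phase with α2(j) is omitted.

```python
import numpy as np

def linear_scale(X_L_section, X_H_band):
    """Linear-domain scaling phase, roughly following equation (6).

    alpha_1 is computed here as the ratio of the signed peak values
    (an assumption, since the original definition is not shown), which
    keeps it free to take negative values as stated in the text.
    """
    p = int(np.argmax(np.abs(X_H_band)))        # peak of the original sub-band
    q = int(np.argmax(np.abs(X_L_section)))     # peak of the matched section
    alpha_1 = X_H_band[p] / X_L_section[q]
    X_H_coded = alpha_1 * X_L_section           # equation (6)
    # Side values stored before the logarithmic phase:
    K = np.sign(X_H_coded)                                   # equation (8), signs
    M = float(np.max(np.log10(np.abs(X_H_coded) + 1e-12)))   # equation (9), max log value
    return X_H_coded, alpha_1, K, M
```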
After the parameters have been selected, they need to be quantized so that the high frequency region reconstruction information can be transmitted to the decoder 8.
To be able to reconstruct X̂H(k) in the decoder 8, the parameters i(j), α1(j) and α2(j) are needed for each band. In the decoder 8, a high frequency generation means 54 utilizes these parameters. Since the index i(j) is an integer, it can be submitted as such. α1(j) and α2(j) can be quantized using, for example, scalar or vector quantization.
The quantized versions of these parameters, α̂1(j) and α̂2(j), are used in the high frequency generation means 54 to construct X̂H(k) according to equations (6) and (10).
A low frequency decoding means 56 decodes the low frequency signal, which together with the reconstructed high frequency sub-bands forms the output signal 14 according to equation (2).
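On the decoder side, the combination of equations (2) and (6) might be sketched as follows; the band-edge bookkeeping and the omission of the logarithmic refinement with α2(j) are simplifying assumptions of this example.

```python
import numpy as np

def reconstruct_spectrum(X_L_hat, band_edges, indices, alpha_1):
    """Rebuild the full spectrum from the decoded low-frequency spectrum
    and the transmitted parameters i(j) and alpha_1(j).

    band_edges gives the sub-band boundaries inside the high-frequency
    region (band_edges[-1] == N - NL). The alpha_2(j) refinement of
    equation (10) is left out of this sketch.
    """
    X_H_hat = np.zeros(band_edges[-1])
    for j in range(len(band_edges) - 1):
        start, stop = band_edges[j], band_edges[j + 1]
        section = X_L_hat[indices[j]:indices[j] + (stop - start)]
        X_H_hat[start:stop] = alpha_1[j] * section     # equation (6)
    return np.concatenate([X_L_hat, X_H_hat])          # equation (2)
```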
The system as illustrated in FIG. 7 may further be enhanced with means for envelope normalization. The system illustrated in FIG. 8 comprises, in addition to the system illustrated in FIG. 7, envelope normalization means 58 and envelope synthesis means 60. In this system, the high frequency coding technique is used to generate an envelope-normalized spectrum using the envelope normalization means 58 in the encoder 4. The actual envelope synthesis is performed in a separate envelope synthesis means 60 in the decoder 8.
The envelope normalization can be performed utilizing, for example, LPC-analysis or cepstral modeling. It should be noted that with envelope normalization, envelope parameters describing the original high frequency spectral envelope have to be submitted to the decoder, as illustrated in FIG. 8.
In SBR, additional sinusoids and noise components are added to the high frequency region. It is possible to do the same in the application described above. If necessary, additional components can be added easily. This is because in the described method it is possible to measure the difference between the original and synthesized spectra and thus to find locations where there are significant differences in the spectral shape. Since, for example, in common BWE coders the coded spectral shape differs significantly from the original spectrum, it is typically more difficult there to decide whether additional sinusoidal or noise components should be added.
It has been noticed that in some cases when the input signal is very tonal, the quality of the coded signal may decrease when compared to the original. This is because the coded high frequency region may not remain as periodic from one frame to another as in the original signal. The periodicity is lost since some periodic (sinusoidal) components may be missing or the amplitude of the existing periodic components varies too much from one frame to another.
To handle tonal sections even when the low frequency signal samples used for reconstructing the high frequency sub-bands do not fully represent the sinusoidal components, two further steps can be provided.
In a first step, the tonal signal sections with possible quality degradations can be detected. The tonal sections can be detected by comparing the similarities between two successive frames in the Shifted Discrete Fourier Transform (SDFT) domain. SDFT is a useful transformation for this purpose, because it also contains phase information but is still closely related to the MDCT transformation, which is used in the other parts of the coder.
Tonality detection can be performed right after transient detection and before initializing the actual high frequency region coding. Since transient frames generally do not contain tonal components, tonality detection can be applied only when both the present and the previous frame are normal long frames (e.g. 2048 samples). The tonality detection is based on the Shifted Discrete Fourier Transform (SDFT), as indicated above, which can be defined for frames of 2N samples as:
$$Y(k) = \sum_{n=0}^{2N-1} h(n)\,x(n)\exp\!\left(\frac{i\,2\pi\,(n + u)(k + v)}{2N}\right), \qquad (14)$$
where h(n) is the window, x(n) is the input signal, and u and v represent time and frequency domain shifts, respectively. These domain shifts can be selected such that u = (N + 1)/2 and v = 1/2, since then it holds that X(k) = real(Y(k)).
Thus, instead of computing the SDFT and MDCT transformations separately, the SDFT transformation can be computed first for the tonality analysis, and the MDCT transformation is then obtained straightforwardly as the real part of the SDFT coefficients. In this way, the tonality detection does not significantly increase the computational complexity.
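A sketch of this shared computation is given below: the SDFT of equation (14) is evaluated once with u = (N + 1)/2 and v = 1/2, and the MDCT coefficients are then simply the real part of the result. The direct matrix evaluation is an assumption made for readability; a practical coder would use a fast transform.

```python
import numpy as np

def sdft(frame, window):
    """Shifted DFT of equation (14) with u = (N + 1)/2 and v = 1/2.

    With these shifts X(k) = real(Y(k)), so the MDCT coefficients used
    elsewhere in the coder come for free from the same transform.
    """
    two_n = len(frame)
    n_half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(n_half)
    u, v = (n_half + 1) / 2.0, 0.5
    phase = 2j * np.pi * np.outer(n + u, k + v) / two_n
    return (window * frame) @ np.exp(phase)

# Y = sdft(frame, window)   # complex coefficients for the tonality analysis
# X = np.real(Y)            # the MDCT coefficients, with no extra transform
```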
With Y_b(k) and Y_{b-1}(k) representing the SDFT transformations of the current and previous frames, respectively, the similarity between the frames can be measured using:
[Equation (15): similarity measure S between Y_b(k) and Y_{b-1}(k) over the coefficients k ≥ NL + 1.]
where NL + 1 corresponds to the limit frequency for high frequency coding. The smaller the parameter S is, the more similar the high frequency spectra are. Based on the value of S, frames can be classified as follows:
$$\text{TONALITY} = \begin{cases} \text{STRONGLY TONAL}, & 0 \le S < s_{\text{lim1}} \\ \text{TONAL}, & s_{\text{lim1}} \le S < s_{\text{lim2}} \\ \text{NOT TONAL}, & s_{\text{lim2}} \le S \end{cases} \qquad (16)$$
Good choices for the limiting factors s_lim1 and s_lim2 are 0.02 and 0.2, respectively. However, other choices can also be made. In addition, different variants can be used; for example, one of the classes can be removed entirely.
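With these thresholds, the classification of equation (16) reduces to two comparisons, as in the following sketch; the string labels are illustrative assumptions only.

```python
def classify_tonality(S, s_lim1=0.02, s_lim2=0.2):
    """Classify a frame from the inter-frame similarity value S (equation (16))."""
    if S < s_lim1:
        return "STRONGLY_TONAL"
    if S < s_lim2:
        return "TONAL"
    return "NOT_TONAL"
```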
As illustrated in FIG. 10, the tonality detection (62) as described above can be carried out based on the input signal 10.
Based on the tonality detection (62), the input frames are divided into three groups: not tonal (64), tonal (66) and strongly tonal (68), as illustrated in FIG. 10.
After the tonality detection (62), in a second step the quality of the tonal sections can be improved by adding additional sinusoids to the high frequency region and possibly by increasing the number of high frequency sub-bands used to create the high frequency region as described above.
The most typical case is that the signal is not tonal (64), and then the coding is continued as described above. If the input signal is classified as tonal (66), additional sinusoids can be added to the high frequency spectrum after applying the coding as illustrated above. A fixed number of sinusoids can be added to the MDCT domain spectrum. The sinusoids can straightforwardly be added to the frequencies where the absolute difference between the original and the coded spectrum is largest. The positions and amplitudes of the sinusoids are quantized and submitted to the decoder.
When a frame is detected to be tonal (or strongly tonal), sinusoids can be added to the high frequency region of the spectrum. With XH(k) and X̂H(k) representing the original and coded high frequency sub-band components, respectively, the first sinusoid can be added at index k1, which can be obtained from
$$k_1 = \arg\max_{k}\left| X_H(k) - \hat{X}_H(k) \right| \qquad (17)$$
The amplitude (including its sign) of the sinusoid can be defined as
$$A_1 = X_H(k_1) - \hat{X}_H(k_1). \qquad (18)$$
Finally, X̂H(k) can be updated as
$$\hat{X}_H(k_1) = \hat{X}_H(k_1) + A_1. \qquad (19)$$

Equations (17)-(19) can be repeated until the desired number of sinusoids has been added. Typically, already four additional sinusoids can yield clearly improved results during tonal sections. The amplitudes A_i of the sinusoids can be quantized and submitted to the decoder 8. The positions k_i of the sinusoids can also be submitted. In addition, the decoder 8 can be informed that the current frame is tonal.
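The sinusoid-addition loop of equations (17)-(19) can be sketched as follows; the default of four sinusoids follows the text, while the function name and the returned position/amplitude lists are assumptions of the example (in the described method these values would additionally be quantized before transmission).

```python
import numpy as np

def add_sinusoids(X_H, X_H_coded, num_sinusoids=4):
    """Add sinusoidal corrections to the coded high-frequency spectrum.

    At each step the MDCT bin with the largest absolute error receives
    the full difference (equations (17)-(19)); the chosen positions and
    amplitudes are collected so they could be sent to the decoder.
    """
    X_H_coded = np.array(X_H_coded, dtype=float, copy=True)
    positions, amplitudes = [], []
    for _ in range(num_sinusoids):
        k1 = int(np.argmax(np.abs(X_H - X_H_coded)))   # equation (17)
        A1 = X_H[k1] - X_H_coded[k1]                   # equation (18)
        X_H_coded[k1] += A1                            # equation (19)
        positions.append(k1)
        amplitudes.append(float(A1))
    return X_H_coded, positions, amplitudes
```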
It has been noticed that during tonal sections the second scaling factor α2 may not improve the quality, and it may then be omitted.
When a strongly tonal section (68) is detected, it is known that the current section is particularly challenging for high frequency region coding. Therefore, adding just sinusoids may not be enough. The quality can be further improved by increasing the accuracy of the high frequency coding. This can be performed by increasing the number of bands used to create the high frequency region.
During strongly tonal sections, the high frequency sub-bands remain very similar from one frame to another. To maintain this similarity also in the coded signal, special actions can be applied. Especially if the number of high frequency sub-bands nb is relatively low (i.e. 8 or below), the number of high frequency sub-bands can be increased. For example, 16 high frequency sub-bands generally provide more accurate performance. In addition to a higher number of bands, a higher number of sinusoids can also be added. In general, a good solution is to use twice as many sinusoids as during "normal" tonal sections.
Increasing the number of high frequency sub-bands as well as the number of sinusoids easily doubles the bit rate of strongly tonal sections compared to "normal" frames. However, strongly tonal sections are a very special case and occur very rarely, so the increase in the average bit rate is very small.

Claims

04 November 2005
1. Method for encoding audio signals with
- receiving an input audio signal,
- dividing the audio signal into at least a low frequency band and a high frequency band,
- dividing the high frequency band into at least two high frequency sub-band signals,
- determining within the low frequency band signal sections which match best with high-frequency sub-band signals, and
- generating parameters that refer at least to the low frequency band signal sections which match best with high-frequency sub-band signals.
2. Method of claim 1, wherein generating parameters further comprises generating at least one scaling factor for scaling the low frequency band signal sections.
3. Method of claim 2, wherein the scaling factor is generated such that an envelope of the low frequency signal sections being transposed into the high- frequency sub-band signals using the parameters follows an envelope of the high frequency sub-band signal of the received signal.
4. Method of claim 2, wherein generating scaling factors comprises generating scaling factors in the linear domain to match at least amplitude peaks in the spectrum.
5. Method of claim 2, wherein generating scaling factors comprises generating scaling factors in the logarithmic domain to match at least energy and/or shape of the spectrum.
6. Method of claim 1, wherein generating parameters comprises generating links to low frequency signal sections which represent the corresponding high frequency sub-band signals.
7. Method of claim 1, wherein determining within the low frequency band signal sections which match best with high-frequency sub-band signals comprises using at least one of
A) normalized correlation,
B) Euclidean distance.
8. Method of claim 1, wherein at least samples of the low frequency signal sections are generated using modified discrete cosine transformation.
9. Method of claim 1, further comprising normalizing the envelope of the high frequency sub-band signals.
10. Method of claim 2, further comprising quantizing samples of the low frequency signal and quantizing at least the scaling factors.
11. Method of claim 1, wherein the input signal is divided into temporally successive frames, and further comprising detecting tonal sections within two successive frames within the input signal.
12. Method of claim 11, wherein detecting tonal sections comprises using Shifted Discrete Fourier Transformation.
13. Method of claim 11, further comprising adding sinusoids to tonal sections.
14. Method of claim 11, further comprising increasing the number of high frequency sub-bands for tonal sections.
15. Method for decoding audio signals with
- receiving an encoded bit stream,
- decoding from the bit stream at least a low frequency signal and at least parameters referring to low frequency signal sections,
- utilizing samples of the low frequency signal and the parameters referring to the low frequency signal sections for reconstructing at least two high-frequency sub-band signals, and
- outputting an output signal comprising at least the low frequency signal and at least two high-frequency sub-band signals.
16. Encoder for encoding audio signals comprising
- receiving means arranged for receiving an input audio signal,
- filtering means arranged for dividing the audio signal into at least a low frequency band and a high frequency band, and further arranged for dividing the high frequency band into at least two high frequency sub-band signals, and
- coding means arranged for generating parameters that refer at least to low frequency band signal sections which match best with the high-frequency sub-band signals.
17. Encoder of claim 16, wherein the coding means are arranged for generating at least one scaling factor for scaling the low frequency band signal sections.
18. Encoder of claim 16, wherein the coding means are arranged for generating the scaling factor such that an envelope of the low frequency signal sections being transposed into high-frequency sub-band signals using the parameters follows an envelope of the high frequency sub-band signals of the received signal.
19. Encoder of claim 16, wherein the filtering means are arranged for dividing the input signal into temporally successive frames, and for detecting tonal sections within two successive frames within the input signal.
20. Encoder of claim 19, wherein the filtering means are arranged for detecting tonal sections using Shifted Discrete Fourier Transformation.
21. Encoder of claim 19, wherein the coding means are arranged for adding sinusoids to tonal sections.
22. Encoder of claim 19, wherein the coding means are arranged for increasing the number of high frequency sub-bands for tonal sections.
23. Decoder for decoding audio signals comprising
- receiving means arranged for receiving an encoded bit stream,
- decoding means arranged for decoding from the bit stream at least a low frequency signal and at least parameters referring to low frequency signal sections,
- generation means arranged for utilizing samples of the low frequency signal and the parameters referring to the low frequency signal sections for reconstructing at least two high-frequency sub-band signals.
24. System for digital audio compression comprising a decoder according to claim 23, and an encoder according to claim 16.
25. Computer program product for encoding audio signals, the program comprising instructions operable to cause a processor to
- receive an input audio signal,
- divide the audio signal into at least a low frequency band and a high frequency band,
- divide the high frequency band into at least two high frequency sub-band signals, and
- generate parameters that refer at least to low frequency band signal sections which match best with high-frequency sub-band signals.
26. Computer program product of claim 25, operable to cause a processor to divide the input signal into temporally successive frames, and to detect tonal sections within two successive frames within the input signal.
27. Computer program product of claim 26, operable to cause a processor to use Shifted Discrete Fourier Transformation for detecting tonal sections.
28. Computer program product of claim 26, operable to cause a processor to increase the number of high frequency sub-bands for tonal sections.
29. Computer program product for decoding bit streams, the program comprising instructions operable to cause a processor to
- receive an encoded bit stream,
- decode from the bit stream at least a low frequency signal and at least parameters referring to low frequency signal sections,
- utilize samples of the low frequency signal and the parameters referring to the low frequency signal sections for reconstructing at least two high-frequency sub-band signals, and
- put out an output signal comprising at least the low frequency signal and at least two high-frequency sub-band signals.
PCT/IB2005/003293 2005-11-04 2005-11-04 Audio compression WO2007052088A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
PCT/IB2005/003293 WO2007052088A1 (en) 2005-11-04 2005-11-04 Audio compression
KR1020087010631A KR100958144B1 (en) 2005-11-04 2005-11-04 Audio Compression
EP05806493.2A EP1943643B1 (en) 2005-11-04 2005-11-04 Audio compression
CN2005800519760A CN101297356B (en) 2005-11-04 2005-11-04 Audio compression
AU2005337961A AU2005337961B2 (en) 2005-11-04 2005-11-04 Audio compression
JP2008538430A JP4950210B2 (en) 2005-11-04 2005-11-04 Audio compression
BRPI0520729-0A BRPI0520729B1 (en) 2005-11-04 2005-11-04 METHOD FOR CODING AND DECODING AUDIO SIGNALS, CODER FOR CODING AND DECODER FOR DECODING AUDIO SIGNS AND SYSTEM FOR DIGITAL AUDIO COMPRESSION.
US12/084,677 US8326638B2 (en) 2005-11-04 2005-11-04 Audio compression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2005/003293 WO2007052088A1 (en) 2005-11-04 2005-11-04 Audio compression

Publications (1)

Publication Number Publication Date
WO2007052088A1 true WO2007052088A1 (en) 2007-05-10

Family

ID=35883664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/003293 WO2007052088A1 (en) 2005-11-04 2005-11-04 Audio compression

Country Status (8)

Country Link
US (1) US8326638B2 (en)
EP (1) EP1943643B1 (en)
JP (1) JP4950210B2 (en)
KR (1) KR100958144B1 (en)
CN (1) CN101297356B (en)
AU (1) AU2005337961B2 (en)
BR (1) BRPI0520729B1 (en)
WO (1) WO2007052088A1 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009059633A1 (en) * 2007-11-06 2009-05-14 Nokia Corporation An encoder
WO2009059631A1 (en) 2007-11-06 2009-05-14 Nokia Corporation Audio coding apparatus and method thereof
WO2010024371A1 (en) * 2008-08-29 2010-03-04 ソニー株式会社 Device and method for expanding frequency band, device and method for encoding, device and method for decoding, and program
WO2010098112A1 (en) 2009-02-26 2010-09-02 パナソニック株式会社 Encoder, decoder, and method therefor
WO2011000408A1 (en) * 2009-06-30 2011-01-06 Nokia Corporation Audio coding
WO2011035813A1 (en) * 2009-09-25 2011-03-31 Nokia Corporation Audio coding
WO2011052191A1 (en) 2009-10-26 2011-05-05 パナソニック株式会社 Tone determination device and method
WO2011058752A1 (en) 2009-11-12 2011-05-19 パナソニック株式会社 Encoder apparatus, decoder apparatus and methods of these
WO2011114192A1 (en) * 2010-03-19 2011-09-22 Nokia Corporation Method and apparatus for audio coding
EP2362376A3 (en) * 2010-02-26 2011-11-02 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Apparatus and method for modifying an audio signal using envelope shaping
WO2011161886A1 (en) 2010-06-21 2011-12-29 パナソニック株式会社 Decoding device, encoding device, and methods for same
WO2012052802A1 (en) * 2010-10-18 2012-04-26 Nokia Corporation An audio encoder/decoder apparatus
RU2464649C1 (en) * 2011-06-01 2012-10-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Audio signal processing method
WO2013035257A1 (en) * 2011-09-09 2013-03-14 パナソニック株式会社 Encoding device, decoding device, encoding method and decoding method
CN103996401A (en) * 2009-10-07 2014-08-20 索尼公司 Decoding device and decoding method
US20140244274A1 (en) * 2011-10-19 2014-08-28 Panasonic Corporation Encoding device and encoding method
WO2014184618A1 (en) * 2013-05-17 2014-11-20 Nokia Corporation Spatial object oriented audio apparatus
US8898057B2 (en) 2009-10-23 2014-11-25 Panasonic Intellectual Property Corporation Of America Encoding apparatus, decoding apparatus and methods thereof
US9361904B2 (en) 2013-01-29 2016-06-07 Huawei Technologies Co., Ltd. Method for predicting bandwidth extension frequency band signal, and decoding device
US9390717B2 (en) 2011-08-24 2016-07-12 Sony Corporation Encoding device and method, decoding device and method, and program
US9406312B2 (en) 2010-04-13 2016-08-02 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9583112B2 (en) 2010-04-13 2017-02-28 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9626978B2 (en) 2012-03-29 2017-04-18 Telefonaktiebolaget Lm Ericsson (Publ) Bandwidth extension of harmonic audio signal
US9659573B2 (en) 2010-04-13 2017-05-23 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9767824B2 (en) 2010-10-15 2017-09-19 Sony Corporation Encoding device and method, decoding device and method, and program
US9875746B2 (en) 2013-09-19 2018-01-23 Sony Corporation Encoding device and method, decoding device and method, and program
US10692511B2 (en) 2013-12-27 2020-06-23 Sony Corporation Decoding apparatus and method, and program

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101393298B1 (en) * 2006-07-08 2014-05-12 삼성전자주식회사 Method and Apparatus for Adaptive Encoding/Decoding
KR101434198B1 (en) * 2006-11-17 2014-08-26 삼성전자주식회사 Method of decoding a signal
US20100250260A1 (en) * 2007-11-06 2010-09-30 Lasse Laaksonen Encoder
KR20100086000A (en) * 2007-12-18 2010-07-29 엘지전자 주식회사 A method and an apparatus for processing an audio signal
ATE500588T1 (en) * 2008-01-04 2011-03-15 Dolby Sweden Ab AUDIO ENCODERS AND DECODERS
WO2009093466A1 (en) * 2008-01-25 2009-07-30 Panasonic Corporation Encoding device, decoding device, and method thereof
WO2009150290A1 (en) * 2008-06-13 2009-12-17 Nokia Corporation Method and apparatus for error concealment of encoded audio data
BR122019023712B1 (en) * 2009-01-28 2020-10-27 Dolby International Ab system for generating an output audio signal from an input audio signal using a transposition factor t, method for transposing an input audio signal by a transposition factor t and storage medium
US8805680B2 (en) * 2009-05-19 2014-08-12 Electronics And Telecommunications Research Institute Method and apparatus for encoding and decoding audio signal using layered sinusoidal pulse coding
CN103559891B (en) 2009-09-18 2016-05-11 杜比国际公司 Improved harmonic wave transposition
PL2800094T3 (en) * 2009-10-21 2018-03-30 Dolby International Ab Oversampling in a combined transposer filter bank
PL2581905T3 (en) 2010-06-09 2016-06-30 Panasonic Ip Corp America Bandwidth extension method, bandwidth extension apparatus, program, integrated circuit, and audio decoding apparatus
US9047875B2 (en) * 2010-07-19 2015-06-02 Futurewei Technologies, Inc. Spectrum flatness control for bandwidth extension
US9236063B2 (en) 2010-07-30 2016-01-12 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for dynamic bit allocation
US9208792B2 (en) 2010-08-17 2015-12-08 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for noise injection
JP5552988B2 (en) * 2010-09-27 2014-07-16 富士通株式会社 Voice band extending apparatus and voice band extending method
JP5743137B2 (en) 2011-01-14 2015-07-01 ソニー株式会社 Signal processing apparatus and method, and program
WO2012144128A1 (en) * 2011-04-20 2012-10-26 パナソニック株式会社 Voice/audio coding device, voice/audio decoding device, and methods thereof
JP5807453B2 (en) * 2011-08-30 2015-11-10 富士通株式会社 Encoding method, encoding apparatus, and encoding program
PL2772913T3 (en) * 2011-10-28 2018-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Encoding apparatus and encoding method
EP2717263B1 (en) 2012-10-05 2016-11-02 Nokia Technologies Oy Method, apparatus, and computer program product for categorical spatial analysis-synthesis on the spectrum of a multichannel audio signal
CN103280222B (en) * 2013-06-03 2014-08-06 腾讯科技(深圳)有限公司 Audio encoding and decoding method and system thereof
CN105745703B (en) 2013-09-16 2019-12-10 三星电子株式会社 Signal encoding method and apparatus, and signal decoding method and apparatus
KR102315920B1 (en) * 2013-09-16 2021-10-21 삼성전자주식회사 Signal encoding method and apparatus and signal decoding method and apparatus
WO2015147434A1 (en) * 2014-03-25 2015-10-01 인텔렉추얼디스커버리 주식회사 Apparatus and method for processing audio signal
US10020002B2 (en) * 2015-04-05 2018-07-10 Qualcomm Incorporated Gain parameter estimation based on energy saturation and signal scaling
US9613628B2 (en) 2015-07-01 2017-04-04 Gopro, Inc. Audio decoder for wind and microphone noise reduction in a microphone array system
DE102017200320A1 (en) * 2017-01-11 2018-07-12 Sivantos Pte. Ltd. Method for frequency distortion of an audio signal
JP2020105231A (en) * 2017-03-22 2020-07-09 Spiber株式会社 Molded article and method for producing molded article
CN109036457B (en) * 2018-09-10 2021-10-08 广州酷狗计算机科技有限公司 Method and apparatus for restoring audio signal
CN110111800B (en) * 2019-04-04 2021-05-07 深圳信息职业技术学院 Frequency band division method and device of electronic cochlea and electronic cochlea equipment
CN113192523A (en) * 2020-01-13 2021-07-30 华为技术有限公司 Audio coding and decoding method and audio coding and decoding equipment
CN113808597A (en) * 2020-05-30 2021-12-17 华为技术有限公司 Audio coding method and audio coding device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040125878A1 (en) * 1997-06-10 2004-07-01 Coding Technologies Sweden Ab Source coding enhancement using spectral-band replication
EP1441330A2 (en) * 2002-12-23 2004-07-28 Samsung Electronics Co., Ltd. Method of encoding and/or decoding digital audio using time-frequency correlation and apparatus performing the method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11120185A (en) * 1997-10-09 1999-04-30 Canon Inc Information processor and method therefor
US6711540B1 (en) * 1998-09-25 2004-03-23 Legerity, Inc. Tone detector with noise detection and dynamic thresholding for robust performance
US7031553B2 (en) * 2000-09-22 2006-04-18 Sri International Method and apparatus for recognizing text in an image sequence of scene imagery
US7447639B2 (en) * 2001-01-24 2008-11-04 Nokia Corporation System and method for error concealment in digital audio transmission
EP1701340B1 (en) * 2001-11-14 2012-08-29 Panasonic Corporation Decoding device, method and program
DE60323331D1 (en) * 2002-01-30 2008-10-16 Matsushita Electric Ind Co Ltd METHOD AND DEVICE FOR AUDIO ENCODING AND DECODING
CN1328707C (en) * 2002-07-19 2007-07-25 日本电气株式会社 Audio decoding device, decoding method, and program


Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101161866B1 (en) 2007-11-06 2012-07-04 노키아 코포레이션 Audio coding apparatus and method thereof
WO2009059631A1 (en) 2007-11-06 2009-05-14 Nokia Corporation Audio coding apparatus and method thereof
US9082397B2 (en) 2007-11-06 2015-07-14 Nokia Technologies Oy Encoder
WO2009059633A1 (en) * 2007-11-06 2009-05-14 Nokia Corporation An encoder
EP2317509A4 (en) * 2008-08-29 2014-06-11 Sony Corp Device and method for expanding frequency band, device and method for encoding, device and method for decoding, and program
EP2317509A1 (en) * 2008-08-29 2011-05-04 Sony Corporation Device and method for expanding frequency band, device and method for encoding, device and method for decoding, and program
WO2010024371A1 (en) * 2008-08-29 2010-03-04 ソニー株式会社 Device and method for expanding frequency band, device and method for encoding, device and method for decoding, and program
US8983831B2 (en) 2009-02-26 2015-03-17 Panasonic Intellectual Property Corporation Of America Encoder, decoder, and method therefor
RU2538334C2 (en) * 2009-02-26 2015-01-10 Панасоник Интеллекчуал Проперти Корпорэйшн оф Америка Encoder, decoder and method therefor
WO2010098112A1 (en) 2009-02-26 2010-09-02 パナソニック株式会社 Encoder, decoder, and method therefor
WO2011000408A1 (en) * 2009-06-30 2011-01-06 Nokia Corporation Audio coding
US8781844B2 (en) 2009-09-25 2014-07-15 Nokia Corporation Audio coding
WO2011035813A1 (en) * 2009-09-25 2011-03-31 Nokia Corporation Audio coding
EP2993667A1 (en) * 2009-10-07 2016-03-09 Sony Corporation Frequency band extending device, method and program
US9691410B2 (en) 2009-10-07 2017-06-27 Sony Corporation Frequency band extending device and method, encoding device and method, decoding device and method, and program
EP3232438A1 (en) * 2009-10-07 2017-10-18 Sony Corporation Frequency band extending device, method and program
CN103996401B (en) * 2009-10-07 2018-01-16 索尼公司 Decoding device and coding/decoding method
EP3584794A1 (en) * 2009-10-07 2019-12-25 SONY Corporation Frequency band extending device and method, encoding device and method, decoding device and method, and program
EP3968322A3 (en) * 2009-10-07 2022-06-01 Sony Group Corporation Frequency band extending device and method, encoding device and method, decoding device and method, and program
CN103996401A (en) * 2009-10-07 2014-08-20 索尼公司 Decoding device and decoding method
US8898057B2 (en) 2009-10-23 2014-11-25 Panasonic Intellectual Property Corporation Of America Encoding apparatus, decoding apparatus and methods thereof
US8670980B2 (en) 2009-10-26 2014-03-11 Panasonic Corporation Tone determination device and method
WO2011052191A1 (en) 2009-10-26 2011-05-05 パナソニック株式会社 Tone determination device and method
WO2011058752A1 (en) 2009-11-12 2011-05-19 パナソニック株式会社 Encoder apparatus, decoder apparatus and methods of these
US8838443B2 (en) 2009-11-12 2014-09-16 Panasonic Intellectual Property Corporation Of America Encoder apparatus, decoder apparatus and methods of these
US9264003B2 (en) 2010-02-26 2016-02-16 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for modifying an audio signal using envelope shaping
US9203367B2 (en) 2010-02-26 2015-12-01 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for modifying an audio signal using harmonic locking
EP2362376A3 (en) * 2010-02-26 2011-11-02 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Apparatus and method for modifying an audio signal using envelope shaping
WO2011104356A3 (en) * 2010-02-26 2012-06-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for modifying an audio signal using envelope shaping
WO2011114192A1 (en) * 2010-03-19 2011-09-22 Nokia Corporation Method and apparatus for audio coding
US10546594B2 (en) 2010-04-13 2020-01-28 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9659573B2 (en) 2010-04-13 2017-05-23 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9679580B2 (en) 2010-04-13 2017-06-13 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US10381018B2 (en) 2010-04-13 2019-08-13 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US10297270B2 (en) 2010-04-13 2019-05-21 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9406312B2 (en) 2010-04-13 2016-08-02 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9583112B2 (en) 2010-04-13 2017-02-28 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US10224054B2 (en) 2010-04-13 2019-03-05 Sony Corporation Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US9076434B2 (en) 2010-06-21 2015-07-07 Panasonic Intellectual Property Corporation Of America Decoding and encoding apparatus and method for efficiently encoding spectral data in a high-frequency portion based on spectral data in a low-frequency portion of a wideband signal
WO2011161886A1 (en) 2010-06-21 2011-12-29 パナソニック株式会社 Decoding device, encoding device, and methods for same
US10236015B2 (en) 2010-10-15 2019-03-19 Sony Corporation Encoding device and method, decoding device and method, and program
US9767824B2 (en) 2010-10-15 2017-09-19 Sony Corporation Encoding device and method, decoding device and method, and program
US9230551B2 (en) 2010-10-18 2016-01-05 Nokia Technologies Oy Audio encoder or decoder apparatus
WO2012052802A1 (en) * 2010-10-18 2012-04-26 Nokia Corporation An audio encoder/decoder apparatus
RU2464649C1 (en) * 2011-06-01 2012-10-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Audio signal processing method
US9858934B2 (en) 2011-06-01 2018-01-02 Samsung Electronics Co., Ltd. Audio-encoding method and apparatus, audio-decoding method and apparatus, recoding medium thereof, and multimedia device employing same
US9390717B2 (en) 2011-08-24 2016-07-12 Sony Corporation Encoding device and method, decoding device and method, and program
US9384749B2 (en) 2011-09-09 2016-07-05 Panasonic Intellectual Property Corporation Of America Encoding device, decoding device, encoding method and decoding method
WO2013035257A1 (en) * 2011-09-09 2013-03-14 パナソニック株式会社 Encoding device, decoding device, encoding method and decoding method
CN106847295A (en) * 2011-09-09 2017-06-13 松下电器(美国)知识产权公司 Code device and coding method
US10269367B2 (en) 2011-09-09 2019-04-23 Panasonic Intellectual Property Corporation Of America Encoding apparatus, decoding apparatus, and methods
US10629218B2 (en) 2011-09-09 2020-04-21 Panasonic Intellectual Property Corporation Of America Encoding apparatus, decoding apparatus, and methods
US9886964B2 (en) 2011-09-09 2018-02-06 Panasonic Intellectual Property Corporation Of America Encoding apparatus, decoding apparatus, and methods
US9741356B2 (en) 2011-09-09 2017-08-22 Panasonic Intellectual Property Corporation Of America Coding apparatus, decoding apparatus, and methods
US20140244274A1 (en) * 2011-10-19 2014-08-28 Panasonic Corporation Encoding device and encoding method
EP2770506A4 (en) * 2011-10-19 2015-02-25 Panasonic Ip Corp America Encoding device and encoding method
US9626978B2 (en) 2012-03-29 2017-04-18 Telefonaktiebolaget Lm Ericsson (Publ) Bandwidth extension of harmonic audio signal
US10002617B2 (en) 2012-03-29 2018-06-19 Telefonaktiebolaget Lm Ericsson (Publ) Bandwidth extension of harmonic audio signal
US9361904B2 (en) 2013-01-29 2016-06-07 Huawei Technologies Co., Ltd. Method for predicting bandwidth extension frequency band signal, and decoding device
US10388295B2 (en) 2013-01-29 2019-08-20 Huawei Technologies Co., Ltd. Method for predicting bandwidth extension frequency band signal, and decoding device
US10607621B2 (en) 2013-01-29 2020-03-31 Huawei Technologies Co., Ltd. Method for predicting bandwidth extension frequency band signal, and decoding device
US9875749B2 (en) 2013-01-29 2018-01-23 Huawei Technologies Co., Ltd. Method for predicting bandwidth extension frequency band signal, and decoding device
US9706324B2 (en) 2013-05-17 2017-07-11 Nokia Technologies Oy Spatial object oriented audio apparatus
WO2014184618A1 (en) * 2013-05-17 2014-11-20 Nokia Corporation Spatial object oriented audio apparatus
US9875746B2 (en) 2013-09-19 2018-01-23 Sony Corporation Encoding device and method, decoding device and method, and program
US10692511B2 (en) 2013-12-27 2020-06-23 Sony Corporation Decoding apparatus and method, and program
US11705140B2 (en) 2013-12-27 2023-07-18 Sony Corporation Decoding apparatus and method, and program

Also Published As

Publication number Publication date
BRPI0520729B1 (en) 2019-04-02
EP1943643A1 (en) 2008-07-16
US20090271204A1 (en) 2009-10-29
US8326638B2 (en) 2012-12-04
AU2005337961A1 (en) 2007-05-10
BRPI0520729A2 (en) 2009-05-26
CN101297356A (en) 2008-10-29
KR100958144B1 (en) 2010-05-18
JP4950210B2 (en) 2012-06-13
KR20080059279A (en) 2008-06-26
BRPI0520729A8 (en) 2016-03-22
JP2009515212A (en) 2009-04-09
AU2005337961B2 (en) 2011-04-21
CN101297356B (en) 2011-11-09
EP1943643B1 (en) 2019-10-09

Similar Documents

Publication Publication Date Title
EP1943643B1 (en) Audio compression
CA2608030C (en) Scalable compressed audio bit stream and codec using a hierarchical filterbank and multichannel joint coding
US7343287B2 (en) Method and apparatus for scalable encoding and method and apparatus for scalable decoding
EP3244407B1 (en) Apparatus and method for modifying a parameterized representation
US7275036B2 (en) Apparatus and method for coding a time-discrete audio signal to obtain coded audio data and for decoding coded audio data
Ravelli et al. Union of MDCT bases for audio coding
US9167367B2 (en) Optimized low-bit rate parametric coding/decoding
CN103366749B (en) A kind of sound codec devices and methods therefor
CN101276587A (en) Audio encoding apparatus and method thereof, audio decoding device and method thereof
CN103366750B (en) A kind of sound codec devices and methods therefor
CN103366751B (en) A kind of sound codec devices and methods therefor
Lee et al. Progressive multi-stage neural audio coding with guided references
RU2409874C9 (en) Audio signal compression
US20100280830A1 (en) Decoder
US8924202B2 (en) Audio signal coding system and method using speech signal rotation prior to lattice vector quantization
AU2011205144B2 (en) Scalable compressed audio bit stream and codec using a hierarchical filterbank and multichannel joint coding
Bartkowiak Low bit rate coding of sparse audio spectra using frequency shift and interleaved MDCT
Reche-Lopez et al. Signal-adaptive Parametric Modelling for High Quality Low Bit Rate Audio Coding

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200580051976.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005806493

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2354/DELNP/2008

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2005337961

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: MX/a/2008/004471

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 2005337961

Country of ref document: AU

Date of ref document: 20051104

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 2005337961

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 1020087010631

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2008538430

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 12084677

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2008111884

Country of ref document: RU

WWP Wipo information: published in national office

Ref document number: 2005806493

Country of ref document: EP

ENP Entry into the national phase

Ref document number: PI0520729

Country of ref document: BR

Kind code of ref document: A2