EP2139000B1 - Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal


Info

Publication number
EP2139000B1
EP2139000B1 (application EP08159018A)
Authority
EP
European Patent Office
Prior art keywords
signal
speech
encoding
mlt
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP08159018A
Other languages
German (de)
English (en)
Other versions
EP2139000A1 (fr)
Inventor
Oliver Wuebbolt
Johannes Boehm
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
THOMSON LICENSING
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP08159018A (EP2139000B1)
Priority to CN2009101503026A (CN101615393B)
Publication of EP2139000A1
Application granted
Publication of EP2139000B1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02: Speech or audio signals analysis-synthesis techniques using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0212: Speech or audio signals analysis-synthesis techniques using orthogonal transformation
    • G10L19/04: Speech or audio signals analysis-synthesis techniques using predictive techniques
    • G10L19/16: Vocoder architecture
    • G10L19/18: Vocoders using multiple modes
    • G10L19/24: Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78: Detection of presence or absence of voice signals

Definitions

  • the invention relates to a method and to an apparatus for encoding or decoding a speech and/or non-speech audio input signal.
  • This wideband speech coder includes an embedded G.729 speech coder, which is used permanently. Therefore the quality for music-like (non-speech) signals is not very good. Although this coder uses transform coding techniques, it is a speech coder.
  • This coder uses a principle structure similar to that of the above-mentioned coder.
  • the processing there is based on time-domain signals, which makes handling the delay in the core encoder/decoder (speech coder) difficult. The inventive processing is therefore based on a common transform in order to reduce this problem.
  • the speech coder is used permanently, which results in non-optimal quality for music-like (non-speech) signals.
  • a disadvantage of the known audio/speech codecs is a clear dependency of the coding quality on the types of content, i.e. music-like audio signals are best coded by audio codecs and speech-like audio signals are best coded by speech codecs.
  • No known codec holds a dominant position for mixed speech/music content.
  • a problem to be solved by the invention is to provide good codec performance for both speech and music, and to further improve the codec performance for such mixed signals. This problem is solved by the methods disclosed in claims 1 and 3. Apparatuses that utilise these methods are disclosed in claims 2 and 4.
  • the inventive joint speech/audio codec uses speech coding techniques as well as audio transform coding techniques.
  • MLT: Modulated Lapped Transform
  • IMLT: inverse Modulated Lapped Transform
  • the MLT output spectrum is separated into frequency bins (low frequencies) assigned to the speech coding section of the codec and the remaining frequency bins (high frequencies) assigned to the transform-based coding section of the codec, wherein the transform length at the codec input and output can be switched input-signal-adaptively.
  • the invention achieves uniformly good codec quality for both speech-like and music-like audio signals, especially at very low bit rates but also at higher bit rates.
  • the inventive method is suited for encoding a speech and/or non-speech audio input signal, including the steps:
  • the inventive apparatus is suited for encoding a speech and/or non-speech audio input signal, said apparatus including means being adapted for:
  • the inventive method is suited for decoding a bit stream representing an encoded speech and/or non-speech audio input signal that was encoded according to the above method, said decoding method including the steps:
  • the inventive apparatus is suited for decoding a bit stream representing an encoded speech and/or non-speech audio input signal that was encoded according to the above encoding method, said apparatus including means being adapted for:
  • state-of-the-art coding for speech-like signals uses linear-prediction-based speech coding, e.g. CELP or ACELP; cf. ISO/IEC 14496-3, Subparts 2 and 3 (MPEG-4 CELP).
  • state-of-the-art coding for general audio or music-like signals is based on a time-frequency transform, e.g. the MDCT.
  • the PCM audio input signal IS is transformed by a Modulated Lapped Transform MLT having a pre-determined length in step/stage 10.
  • a Modified Discrete Cosine Transform MDCT is appropriate for audio coding applications.
  • the MDCT was first called the "Oddly-stacked Time Domain Alias Cancellation Transform" by Princen and Bradley and was published in John P. Princen and Alan B. Bradley, "Analysis/synthesis filter bank design based on time domain aliasing cancellation", IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-34, no. 5, pp. 1153-1161, 1986.
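As an illustration of the transform stage, here is a minimal reference sketch of a windowed MDCT/inverse-MDCT pair with time-domain alias cancellation. This is not the patent's implementation: the O(N^2) direct form, the sine window and the 2/N scaling are common textbook choices used here for clarity.

```python
import math

def sine_window(N):
    # Symmetric sine window of length 2N; satisfies w[n]^2 + w[n+N]^2 = 1,
    # which (together with the window symmetry) enables perfect reconstruction.
    return [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

def mdct(block, window):
    # MDCT of one windowed 2N-sample block -> N frequency bins (direct O(N^2) form).
    N = len(block) // 2
    return [sum(window[n] * block[n] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(bins, window):
    # Inverse MDCT -> 2N windowed samples; overlap-adding successive halves
    # cancels the time-domain aliasing.
    N = len(bins)
    return [(2.0 / N) * window[n] *
            sum(bins[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for k in range(N))
            for n in range(2 * N)]
```

With 50% overlapping blocks, the second half of one inverse transform plus the first half of the next reconstructs the corresponding input samples exactly.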
  • the obtained spectrum is separated into frequency bins belonging to the speech band (representing a low band signal) and the remaining bins (high frequencies) representing a remaining band signal RBS.
  • the speech band bins are transformed back into time domain using the inverse MLT, e.g. an inverse MDCT, with a short transform length with respect to the pre-determined length used in step/stage 10.
  • the resulting time signal has a lower sampling frequency than the input time signal and contains only the corresponding frequencies of the speech band bins.
  • the generated time domain signal is then used as input signal for a speech encoding step/stage 12.
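The band split itself is a simple partition of the MLT spectrum; a sketch follows, where the bin counts are arbitrary examples and not values from the patent. The short inverse MLT over the low bins then yields a critically downsampled time signal for the speech coder.

```python
def split_spectrum(mlt_bins, n_speech_bins):
    # Separate one MLT spectrum into speech-band bins (low frequencies)
    # and remaining-band bins (high frequencies).
    return mlt_bins[:n_speech_bins], mlt_bins[n_speech_bins:]

# An inverse MLT over K of N bins yields K time samples per frame instead of N,
# i.e. the speech-band time signal runs at K/N of the input sampling rate.
```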
  • the output of the speech encoding can be transmitted in the output bit stream OBS, depending on a decision made by a below-described speech/audio switch 15.
  • the encoded 'speech' signal is decoded in a related speech decoding step/stage 13, and the decoded 'speech' signal is transformed back into frequency domain in step/stage 14 using the MLT corresponding to the inverse MLT of step/stage 11 (i.e. an 'opposite type' MLT having the short length) in order to re-generate the speech band signal, i.e. a reconstructed speech signal RSS.
  • By that switch it is decided whether the original low-frequency bins are coded together with the remaining high-frequency bins (in which case the coded 'speech' signal is not transmitted in bit stream OBS), or the difference signal DS is coded together with the remaining high-frequency bins in the following quantisation&coding step/stage 16 (in which case the coded 'speech' signal is transmitted in bit stream OBS).
  • That switch may be operated by using a rate-distortion optimisation.
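Such a rate-distortion decision can be sketched as a Lagrangian cost comparison. The uniform quantiser, the bit-count proxy and the lambda value below are illustrative assumptions, not taken from the patent:

```python
def rd_cost(bins, step, lam):
    # Lagrangian cost J = D + lambda * R for uniformly quantised bins.
    # Rate proxy: magnitude-dependent bit estimate (illustrative only).
    q = [round(b / step) for b in bins]
    distortion = sum((b - qi * step) ** 2 for b, qi in zip(bins, q))
    rate = sum(1 + abs(qi).bit_length() for qi in q)
    return distortion + lam * rate

def choose_speech_path(low_bins, diff_bins, step=0.5, lam=0.01):
    # True: code the difference signal DS (speech coder output is transmitted);
    # False: code the original low-band bins (speech coder output is dropped).
    return rd_cost(diff_bins, step, lam) < rd_cost(low_bins, step, lam)
```

When the speech coder models the low band well, the difference signal is small and cheap to code, so the speech path wins the comparison.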
  • An information item SWI about the decision of switch 15 is included in bit stream OBS for use in the decoding. In this switch, but also in the other steps/stages, the different delays introduced by the cascaded transforms are to be taken into account. The different delays can be balanced using corresponding buffering for these steps/stages.
  • It is possible to use a mixture of original frequency bins and difference-signal frequency bins in the low frequency band as input to step/stage 16. In that case, information about how the mixture is composed is conveyed to the decoding side.
  • The remaining frequency bins output by step/stage 10 (i.e. the high frequencies) are processed in quantisation&coding step/stage 16.
  • In step/stage 16 an appropriate quantisation is used (e.g. like the quantisation techniques used in AAC), and subsequently the quantised frequency bins are coded using e.g. Huffman coding or arithmetic coding.
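A minimal sketch of the "quantise, then entropy-code" idea, using a uniform quantiser and a plain Huffman code built from the symbol statistics. This is only illustrative: AAC's actual quantiser is non-uniform and its Huffman codebooks are predefined in the standard.

```python
import heapq
from collections import Counter

def quantise(bins, step):
    # Uniform quantiser (AAC uses a non-uniform power-law quantiser instead).
    return [round(b / step) for b in bins]

def huffman_table(symbols):
    # Build a Huffman code (symbol -> bit string) from symbol frequencies.
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [[f, [s, ""]] for s, f in sorted(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # extend codes in the lighter subtree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # extend codes in the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {s: code for s, code in heapq.heappop(heap)[1:]}
```

The resulting code is prefix-free, and more frequent quantised values receive shorter codewords.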
  • If the speech/audio switch 15 decides that a music-like input signal is present and therefore the speech coder/decoder or its output is not used at all, the original frequency bins corresponding to the speech band are encoded (together with the remaining frequency bins) in the quantisation&coding step/stage 16.
  • the quantisation&coding step/stage 16 is controlled by a psycho-acoustic model calculation 18 that exploits masking properties of the input signal IS for the quantisation. Corresponding side information SI can be transmitted in the bit stream multiplex to the decoder.
  • Switch 15 can also receive suitable control information (e.g. degree of tonality or spectral flatness, or how noise-like the signal is) from psycho-acoustic model step/stage 18.
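The "degree of tonality or spectral flatness" control information can be sketched with the standard spectral flatness measure, the ratio of the geometric to the arithmetic mean of the power spectrum; the epsilon guard is an implementation detail, and feeding this value to the switch is an illustrative assumption:

```python
import math

def spectral_flatness(power_bins):
    # Spectral flatness measure: close to 1 for noise-like spectra,
    # close to 0 for tonal (peaky) spectra.
    eps = 1e-12
    geo = math.exp(sum(math.log(p + eps) for p in power_bins) / len(power_bins))
    arith = sum(power_bins) / len(power_bins)
    return geo / (arith + eps)
```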
  • a bit stream multiplexer step/stage 17 combines the output code (if present) of the speech encoder 12, the switch information of switch 15, the output code of the quantisation&coding step/stage 16, and optionally side information code SI, and provides the output bit stream OBS.
  • iMDCT: inverse MDCT
  • the inverse MLT steps/stages 22 are arranged between a first grouping step/stage 21 and a second grouping step/stage 23 and provide a doubled number of output values.
  • the number of combined MLT bins, i.e. the transform length of the inverse MLT, defines the resulting time and frequency resolution, wherein a longer inverse MLT delivers a higher time resolution.
  • overlap/add is performed (optionally involving application of window functions) and the output of the inverse MLTs applied on the same input spectrum is sorted such that it results in several (the quantity depends on the size of the inverse MLTs) temporally successive 'short block' spectra which are quantised and coded in step/stage 16.
  • the information about this 'short block coding' mode being used is included in the side information SI.
  • multiple 'short block coding' modes with different inverse MLT transform lengths can be used and signalled in SI.
  • a non-uniform time-frequency resolution over the short block spectra is facilitated, e.g. a higher time resolution for high frequencies and a higher frequency resolution for low frequencies.
  • for the lowest frequencies the inverse MLT can have a length of 2 successive frequency bins, and for the highest frequencies it can have a length of 16 successive frequency bins.
  • a different order of coding the resulting frequency bins can be used; for example, one 'spectrum' may contain not only different frequency bins at one point in time, but may also include the same frequency bin at different points in time.
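The non-uniform grouping can be sketched as follows: consecutive runs of MLT bins are grouped, with short groups (high frequency resolution) at low frequencies and long groups (high time resolution) at high frequencies. The split point is a hypothetical parameter; each group would feed one inner inverse MLT.

```python
def nonuniform_groups(spectrum, split, low_len=2, high_len=16):
    # Group low-frequency bins in runs of low_len and high-frequency bins in
    # runs of high_len; the group length sets the local time/frequency
    # resolution trade-off of the corresponding inner inverse MLT.
    assert split % low_len == 0 and (len(spectrum) - split) % high_len == 0
    groups = [spectrum[i:i + low_len] for i in range(0, split, low_len)]
    groups += [spectrum[i:i + high_len] for i in range(split, len(spectrum), high_len)]
    return groups
```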
  • input-signal-adaptive switching between the processing according to Fig. 1 and the processing according to Fig. 2 is controlled by psycho-acoustic model step/stage 18. For example, if from one frame to the following frame the signal energy in input signal IS rises above a threshold (i.e. there is a transient in the input signal), the processing according to Fig. 2 is carried out. If the signal energy stays below that threshold, the processing according to Fig. 1 is carried out.
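The frame-energy criterion can be sketched as follows; the energy-ratio threshold is a hypothetical parameter, since the text only specifies that the energy "rises above a threshold":

```python
def use_short_blocks(prev_frame, cur_frame, ratio=4.0):
    # Signal-adaptive block switching: a jump in frame energy indicates a
    # transient, so the short-block processing (Fig. 2) is selected.
    e_prev = sum(s * s for s in prev_frame)
    e_cur = sum(s * s for s in cur_frame)
    return e_cur > ratio * max(e_prev, 1e-12)
```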
  • This switching information is included in output bitstream OBS for a corresponding switching in the decoding.
  • the transform block sections can be weighted by a window function, in particular in an overlapping manner, wherein the length of a window function corresponds to the current transform length.
  • Analysis and synthesis windows can be identical, but need not be.
  • the functions of the analysis and synthesis windows h_A(n) and h_S(n) must fulfil certain constraints in the overlapping regions of successive blocks i and i+1 in order to enable perfect reconstruction.
  • a further window function is disclosed in Table 7.33 of the AC-3 audio coding standard.
  • transition window functions are used, e.g. as described in B. Edler, "Codierung von Audiosignalen mit überlappender Transformation und adaptiven Fensterfunktionen", Frequenz, vol. 43, pp. 252-256, 1989, or as used in mp3 and described in the MPEG-1 standard ISO/IEC 11172-3, in particular section 2.4.3.4.10.3, or as in AAC (e.g. as described in the MPEG-4 standard ISO/IEC 14496-3, Subpart 4).
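For the common case of identical analysis and synthesis windows, the well-known constraint on the overlap is the Princen-Bradley condition w(n)^2 + w(n+N)^2 = 1 (together with window symmetry). A quick check that the sine window satisfies it:

```python
import math

def sine_window(N):
    # Length-2N sine window, as used e.g. in MDCT-based audio coders.
    return [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

def princen_bradley_ok(w, tol=1e-12):
    # Check w[n]^2 + w[n+N]^2 == 1 over the overlap region.
    N = len(w) // 2
    return all(abs(w[n] ** 2 + w[n + N] ** 2 - 1.0) < tol for n in range(N))
```

A rectangular window fails this condition, which is one reason dedicated window shapes (sine, KBD, transition windows) are used.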
  • the received or replayed bit stream OBS is demultiplexed in a corresponding step/stage 37, thereby providing code (if present) for the speech decoder 33, the switch information SWI for switch 35, the code and the switching information for the decoding step/stage 36, and optionally side information code SI.
  • if the speech subcoder 11, 12, 13, 14 was used at the encoding side for a current data frame, the corresponding encoded speech band frequency bins of that frame are reconstructed by the speech decoding step/stage 33 and the downstream MLT step/stage 34, thereby providing the reconstructed speech signal RSS.
  • the remaining encoded frequency bins are correspondingly decoded in decoding step/stage 36, whereby the encoder-side quantisation operation is reversed correspondingly.
  • the speech/audio switch 35 operates corresponding to its operation at encoding side, controlled by switch information SWI.
  • if switch information SWI indicates that a music-like input signal is present in the current frame and therefore speech coding/decoding was not used, the frequency bins corresponding to the low band are decoded together with the remaining frequency bins in decoding step/stage 36, thereby providing the reconstructed remaining band signal RRBS and the reconstructed low band signal RLBS.
  • the outputs of step/stage 36 and of switch 35 are correspondingly combined in inverse MLT (e.g. iMDCT) step/stage 30 and synthesised in order to provide the decoded output signal OS.
  • in switch 35, but also in the other steps/stages, the different delays introduced by the cascaded transforms are to be taken into account. The different delays can be balanced using corresponding buffering for these steps/stages.
  • several temporally successive 'short block' spectra are decoded in step/stage 36 and collected in a first grouping step/stage 43. Overlap/add is performed (optionally involving application of window functions). Thereafter each set of temporally successive spectral coefficients is transformed using the corresponding MLT steps/stages 42, which provides a halved number of output values. The generated spectral coefficients are then grouped in a second grouping step/stage 41 into one MLT spectrum with the initial high frequency resolution and transform length.
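The overlap/add step can be sketched generically; the hop size and block lengths below are placeholders, not values from the patent:

```python
def overlap_add(blocks, hop):
    # Sum successive (already windowed) blocks, each shifted by hop samples.
    out = [0.0] * (hop * (len(blocks) - 1) + len(blocks[0]))
    for i, block in enumerate(blocks):
        for n, v in enumerate(block):
            out[i * hop + n] += v
    return out
```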
  • multiple 'short block decoding' modes with different MLT transform lengths can be used as signalled in SI, whereby a non-uniform time-frequency resolution over the short block spectra is facilitated, e.g. a higher time resolution for high frequencies and a higher frequency resolution for low frequencies.
  • a different cascading of the MLTs can be used wherein the order of the inner MLT/inverse MLT pair in the speech encoder is switched.
  • in Fig. 5 a block diagram of a corresponding encoding is depicted, wherein Fig. 1 reference signs denote the same operations as in Fig. 1.
  • the inverse MLT 11 is replaced by an MLT step/stage 51, and the MLT 14 is replaced by an inverse MLT step/stage 54 (i.e. an 'opposite type' MLT). Due to the exchanged order of these MLTs the speech encoder input signal has different properties compared to those in Fig. 1 . Therefore the speech coder 52 and the speech decoder 53 are adapted to these different properties (e.g. such that aliasing components are cancelled out).
  • a 'short block mode' processing can be used as shown in Fig. 6 , wherein MLT steps/stages 62 corresponding to that in Fig. 4 replace the inverse MLT steps/stages 22 in Fig. 2 .
  • the speech decoding step/stage 33 in Fig. 3 is replaced by a correspondingly adapted speech decoding step/stage 73 and the MLT step/stage 34 in Fig. 3 is replaced by a corresponding inverse MLT step/stage 74.
  • a 'short block mode' processing can be used as shown in Fig. 8, wherein inverse MLT steps/stages 82 corresponding to those in Fig. 1 replace the MLT steps/stages 42 in Fig. 4.
  • a different way of block switching is carried out.
  • here, instead of a fixed large MLT 10 (e.g. an MDCT), several short MLTs (or MDCTs) 90 can be switched on; for example, 8 short MDCTs with a transform length of 256 samples can be used.
  • it is not required that the sum of the lengths of the short transforms equals the long transform length, although such equality makes buffer handling even easier.
  • the internal buffer handling is easier than for the long/short block mode switching according to figures 1 to 8 , at the cost of a less sharp band separation between the speech frequency band and the remaining frequency band.
  • the reason for the internal buffer handling being easier is as follows: at least one additional buffer is required for each inverse MLT operation, which in the case of an inner transform necessitates an additional buffer also in the parallel high frequency path. Therefore switching at the outermost transform has the fewest side effects concerning buffers.
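One way to sketch this outermost block switching is a per-frame transform plan. The long length of 2048 is an assumed example only; the text mentions 8 short MDCTs of 256 samples:

```python
LONG_LEN = 2048   # assumed long MDCT length (illustrative, not from the patent)
SHORT_LEN = 256   # short MDCT length; 8 of these are mentioned in the text

def transform_plan(is_transient):
    # Per-frame plan: one long MLT, or several short MLTs switched on instead.
    return [SHORT_LEN] * (LONG_LEN // SHORT_LEN) if is_transient else [LONG_LEN]
```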
  • the Fig. 1 reference signs denote the same operations as in Fig. 1.
  • the MLT 10 is replaced, adaptively to input signal IS, by short MLT steps/stages 90; the inverse MLT 11 is replaced by shorter inverse MLT steps/stages 91, and the MLT 14 is replaced by shorter MLT steps/stages 94. Due to this kind of block switching, the lengths of the first transform 90, 30, the second transform 11, 34, 51, 74 (iMDCT to reconstruct the speech band) and the third transform 14, 54 are coordinated. Furthermore, several short blocks of the speech band signal can be buffered after the iMDCT 91 in Fig. 9 in order to collect enough samples for a complete input frame for the speech coder.
  • encoding of Fig. 9 can also be adapted correspondingly to the encoding described for Fig. 5 .
  • the decoding according to Fig. 3 is adapted correspondingly, i.e. the inverse MLTs 34 and 30 are each replaced by corresponding adaptively switched shorter inverse MLTs.
  • the transform block sections are weighted at encoding side in MLT 90 and at decoding side in inverse MLT 30 by window functions, in particular in an overlapping manner, wherein the length of a window function corresponds to the current transform length.
  • when the transform length is switched, correspondingly shaped long windows (the start and stop windows, or transition windows) are used.


Claims (13)

  1. Method for encoding a speech and/or non-speech audio input signal (IS), said method including the steps of:
    - transforming (10, 90) successive and possibly overlapping sections of said input signal (IS) by at least one initial MLT transform, and dividing the resulting output frequency bins into a low band signal and a remaining band signal (RBS);
    - feeding said low band signal to a speech/audio switch (15) and through a speech encoding/decoding loop including at least one short MLT transform of a first type (11, 51, 91), a speech encoding (12, 52), a corresponding speech decoding (13, 53), and at least one short MLT transform of a second type (14, 54, 94) whose type is opposite to that of said short MLT transform of the first type;
    - quantising and encoding (16) said remaining band signal (RBS), controlled by a psycho-acoustic model receiving at its input said audio input signal (IS);
    - combining (17) the output signal of said quantising and encoding (16), a switch information signal (SWI) of said switch (15), possibly the output signal of said speech encoding (12, 52), and optionally further side coding information (SI), so as to form, for the current section of said input signal (IS), an output bit stream (OBS),
    wherein said speech/audio switch (15) receives said low band signal and a second input signal (DS) derived from the output of said short MLT transform of the second type (14, 54, 94), and decides whether said second input signal bypasses said quantising and encoding step (16) or said low band signal is encoded together with said remaining band signal (RBS) in said quantising and encoding step (16),
    and wherein in the latter case said output signal of said speech encoding (12, 52) is not included in the current section of said output bit stream (OBS).
  2. Apparatus for encoding a speech and/or non-speech audio input signal (IS), said apparatus including means adapted for:
    - transforming (10, 90) successive and possibly overlapping sections of said input signal (IS) by at least one initial MLT transform, and dividing the resulting output frequency bins into a low band signal and a remaining band signal (RBS);
    - feeding said low band signal to a speech/audio switch (15) and through a speech encoding/decoding loop including at least one short MLT transform of a first type (11, 51, 91), a speech encoding (12, 52), a corresponding speech decoding (13, 53), and at least one short MLT transform of a second type (14, 54, 94) whose type is opposite to that of said short MLT transform of the first type;
    - quantising and encoding (16) said remaining band signal (RBS), controlled by a psycho-acoustic model receiving at its input said audio input signal (IS);
    - combining (17) the output signal of said quantising and encoding (16), a switch information signal (SWI) of said switch (15), possibly the output signal of said speech encoding (12, 52), and optionally further side coding information (SI), so as to form, for the current section of said input signal (IS), an output bit stream (OBS),
    wherein said speech/audio switch (15) receives said low band signal and a second input signal (DS) derived from the output of said short MLT transform of the second type (14, 54, 94), and decides whether said second input signal bypasses said quantising and encoding step (16) or said low band signal is encoded together with said remaining band signal (RBS) in said quantising and encoding step (16),
    and wherein in the latter case said output signal of said speech encoding (12, 52) is not included in the current section of said output bit stream (OBS).
  3. Method for decoding a bit stream (OBS) representing an encoded speech and/or non-speech audio input signal (IS) that was encoded according to the method of claim 1, said decoding method including the steps of:
    - demultiplexing (37) successive sections of said bit stream (OBS) so as to recover the output signal of said quantising and encoding step (16), said switch information signal (SWI), possibly the output signal of said speech encoding (12, 52), and said side coding information (SI) if present;
    - if present in a current section of said bit stream (OBS), feeding said output signal of said speech encoding through a speech decoding (33, 73) and said short MLT transform of the second type (34, 74);
    - decoding (36) said output signal of said quantising and encoding step (16), controlled by said side coding information (SI) if present, so as to provide for said current section a reconstructed remaining band signal (RRBS) and a reconstructed low band signal (RLBS);
    - providing a speech/audio switch (15) with said reconstructed low band signal and a second input signal (CS) derived from the output of said MLT transform of the second type (34, 74), and passing, according to said switch information signal (SWI), said reconstructed low band signal (RLBS) or said second input signal (CS);
    - inverse MLT transforming (30) the output signal of said switch (15) combined with said reconstructed remaining band signal (RRBS), and possibly overlapping successive sections, so as to form a current section of the reconstructed output signal (OS).
  4. Apparatus for decoding a bit stream (OBS) representing an encoded speech and/or non-speech audio input signal (IS) that was encoded according to the method of claim 1, said apparatus including means adapted for:
    - demultiplexing (37) successive sections of said bit stream (OBS) so as to recover the output signal of said quantising and encoding step (16), said switch information signal (SWI), possibly the output signal of said speech encoding (12, 52), and said side coding information (SI) if present;
    - if present in a current section of said bit stream (OBS), feeding said output signal of said speech encoding through a speech decoding (33, 73) and said short MLT transform of the second type (34, 74);
    - decoding (36) said output signal of said quantising and encoding step (16), controlled by said side coding information (SI) if present, so as to provide for said current section a reconstructed remaining band signal (RRBS) and a reconstructed low band signal (RLBS);
    - providing a speech/audio switch (15) with said reconstructed low band signal and a second input signal (CS) derived from the output of said MLT transform of the second type (34, 74), and passing, according to said switch information signal (SWI), said reconstructed low band signal (RLBS) or said second input signal (CS);
    - inverse MLT transforming (30) the output signal of said switch (15) combined with said reconstructed remaining band signal (RRBS), and possibly overlapping successive sections, so as to form a current section of the reconstructed output signal (OS).
  5. Method according to claim 1 or 3, or apparatus according to claim 2 or 4, wherein, in the case where a single MLT transform (10) is used at the encoding input and a single inverse MLT transform (30) is used at the decoding output, adaptively to the input signal (IS), several short MLT transforms, each having a length smaller than the length of said single MLT transform (10) and of said single inverse MLT transform (30), respectively, are performed at the input of said quantising and encoding step (16) and at the output of said decoding (36):
    either short inverse MLT transforms (22) at the input of said quantising and encoding step (16) and short MLT transforms (42) at the output of said decoding (36),
    or short MLT transforms (62) at the input of said quantising and encoding step (16) and short inverse MLT transforms (82) at the output of said decoding (36).
  6. Method or apparatus according to claim 5, wherein said short MLT transforms and said short inverse MLT transforms, respectively, are performed if the signal energy in a current section of said input signal (IS) exceeds a threshold level.
  7. Method according to claim 1 or 3, or apparatus according to claim 2 or 4, wherein, adaptively to the input signal (IS), the encoding input is switched from a single MLT transform (10) to several shorter MLT transforms (90), and the output of said decoding (36) is correspondingly switched from a single inverse MLT transform (30) to several shorter inverse MLT transforms.
  8. Method or apparatus according to claim 7, wherein said several shorter MLT transforms and said several shorter inverse MLT transforms, respectively, are carried out if the signal energy in a current section of said input signal (IS) exceeds a threshold level.
  9. Method according to one of claims 1, 3 and 5 to 8, or apparatus according to one of claims 2 and 4 to 8, wherein said second input signal (DS) is the difference signal between said low-bandwidth signal and the output signal (RSS) of said second-type MLT transform (14, 54, 94).
  10. Method according to one of claims 1, 3 and 5 to 8, or apparatus according to one of claims 2 and 4 to 8, wherein said second input signal (DS) is said output signal (RSS) of said second-type MLT transform (14, 54, 94).
  11. Method according to one of claims 1, 3 and 5 to 10, or apparatus according to one of claims 2 and 4 to 10, wherein said switching (15) is controlled by information received from said psycho-acoustic model (18).
  12. Method according to one of claims 1, 3 and 5 to 11, or apparatus according to one of claims 2 and 4 to 11, wherein said switching (15) operates using rate-distortion optimisation.
  13. Method according to one of claims 1, 3 and 5 to 12, or apparatus according to one of claims 2 and 4 to 12, wherein successive sections of said input signal (IS) and successive sections of said output signal (OS) are weighted by a window function having a length corresponding to the associated transform length, in particular in an overlapping manner, and wherein, if the transform length is switched, corresponding transition window functions are used.
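Claims 5 to 8 describe switching adaptively between one long MLT and several shorter transforms whenever the signal energy of the current section exceeds a threshold. The sketch below illustrates that decision rule only; it uses a plain direct MDCT as a stand-in for the MLT, and the frame length, short-transform length and threshold are arbitrary illustrative values, not taken from the patent:

```python
import math

def frame_energy(frame):
    """Mean-square energy of one signal section."""
    return sum(x * x for x in frame) / len(frame)

def mdct(frame):
    """Direct MDCT (one realisation of a modulated lapped transform):
    2N input samples -> N spectral coefficients."""
    n2 = len(frame)
    n = n2 // 2
    return [
        sum(frame[k] * math.cos(math.pi / n * (k + 0.5 + n / 2) * (m + 0.5))
            for k in range(n2))
        for m in range(n)
    ]

def adaptive_transform(frame, threshold, short_len=8):
    """Energy-controlled transform-length switching (cf. claims 5-8):
    a high-energy (transient-like) section gets several short transforms
    for finer time resolution, otherwise one long transform is used."""
    if frame_energy(frame) > threshold:
        return [mdct(frame[i:i + short_len])
                for i in range(0, len(frame), short_len)]
    return [mdct(frame)]

# A quiet section keeps the single long transform; a loud one is split.
quiet = [0.01] * 32
loud = [2.0 if i % 2 == 0 else -2.0 for i in range(32)]
print(len(adaptive_transform(quiet, 0.5)))  # 1 block of 16 coefficients
print(len(adaptive_transform(loud, 0.5)))   # 4 blocks of 4 coefficients
```

A complete MLT codec would additionally weight each section with the window functions of claim 13, including transition windows when the transform length changes.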
EP08159018A 2008-06-25 2008-06-25 Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal Not-in-force EP2139000B1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP08159018A EP2139000B1 (fr) 2008-06-25 2008-06-25 Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal
CN2009101503026A CN101615393B (zh) 2008-06-25 2009-06-19 Method and device for encoding or decoding speech and/or non-speech audio input signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP08159018A EP2139000B1 (fr) 2008-06-25 2008-06-25 Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal

Publications (2)

Publication Number Publication Date
EP2139000A1 EP2139000A1 (fr) 2009-12-30
EP2139000B1 true EP2139000B1 (fr) 2011-05-25

Family

ID=39718977

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08159018A Not-in-force EP2139000B1 (fr) 2008-06-25 2008-06-25 Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal

Country Status (2)

Country Link
EP (1) EP2139000B1 (fr)
CN (1) CN101615393B (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10504532B2 (en) 2014-05-07 2019-12-10 Samsung Electronics Co., Ltd. Method and device for quantizing linear predictive coefficient, and method and device for dequantizing same

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074242B (zh) * 2010-12-27 2012-03-28 武汉大学 System and method for core-layer residual extraction in hierarchical mixed speech/audio coding
CN102103859B (zh) * 2011-01-11 2012-04-11 东南大学 Digital audio encoding and decoding method and apparatus
CN102737636B (zh) * 2011-04-13 2014-06-04 华为技术有限公司 Audio encoding method and apparatus
CN103198834B (zh) * 2012-01-04 2016-12-14 中国移动通信集团公司 Audio signal processing method, apparatus and terminal
KR102626320B1 (ko) * 2014-03-28 2024-01-17 삼성전자주식회사 Method and apparatus for quantizing linear prediction coefficients, and method and apparatus for dequantizing the same
CN107424622B (zh) 2014-06-24 2020-12-25 华为技术有限公司 Audio encoding method and apparatus
CN106033982B (zh) * 2015-03-13 2018-10-12 中国移动通信集团公司 Method, apparatus and terminal for implementing super-wideband speech interworking

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6658383B2 (en) * 2001-06-26 2003-12-02 Microsoft Corporation Method for coding speech and music signals
WO2003065353A1 (fr) * 2002-01-30 2003-08-07 Matsushita Electric Industrial Co., Ltd. Audio encoding and decoding device, and corresponding methods
KR100467617B1 (ko) * 2002-10-30 2005-01-24 삼성전자주식회사 Digital audio encoding method using an improved psychoacoustic model, and apparatus therefor
DE10328777A1 (de) * 2003-06-25 2005-01-27 Coding Technologies Ab Apparatus and method for encoding an audio signal, and apparatus and method for decoding an encoded audio signal
CN1471236A (zh) * 2003-07-01 2004-01-28 北京阜国数字技术有限公司 Signal-adaptive multi-resolution filter bank for perceptual audio coding

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10504532B2 (en) 2014-05-07 2019-12-10 Samsung Electronics Co., Ltd. Method and device for quantizing linear predictive coefficient, and method and device for dequantizing same
US11238878B2 (en) 2014-05-07 2022-02-01 Samsung Electronics Co., Ltd. Method and device for quantizing linear predictive coefficient, and method and device for dequantizing same
US11922960B2 (en) 2014-05-07 2024-03-05 Samsung Electronics Co., Ltd. Method and device for quantizing linear predictive coefficient, and method and device for dequantizing same

Also Published As

Publication number Publication date
CN101615393B (zh) 2013-01-02
EP2139000A1 (fr) 2009-12-30
CN101615393A (zh) 2009-12-30

Similar Documents

Publication Publication Date Title
EP2139000B1 (fr) Method and apparatus for encoding or decoding a speech and/or non-speech audio input signal
EP2255358B1 (fr) Scalable speech and audio encoding using combinatorial encoding of the MDCT spectrum
Neuendorf et al. Unified speech and audio coding scheme for high quality at low bitrates
EP2311032B1 (fr) Audio encoder and decoder for encoding and decoding audio samples
EP2301020B1 (fr) Apparatus and method for encoding/decoding an audio signal using an aliasing switch scheme
EP2186088B1 (fr) Low-complexity spectral analysis/synthesis using selectable time resolution
KR101139172B1 (ko) Technique for encoding/decoding of codebook indices for quantized MDCT spectrum in scalable speech and audio codecs
RU2507572C2 (ru) Audio encoding device and decoder for encoding and decoding frames of a quantized audio signal
EP2044589B1 (fr) Method and apparatus for lossless encoding of a source signal, using a lossy encoded data stream and a lossless extension data stream
CN101371296B (zh) Apparatus and method for encoding and decoding a signal
US20130173275A1 (en) Audio encoding device and audio decoding device
JP2001522156A (ja) Method and apparatus for coding an audio signal, and method and apparatus for decoding a bitstream
US9240192B2 (en) Device and method for efficiently encoding quantization parameters of spectral coefficient coding
AU2013200679B2 (en) Audio encoder and decoder for encoding and decoding audio samples
Jung et al. A bit-rate/bandwidth scalable speech coder based on ITU-T G.723.1 standard
Motlicek et al. Frequency domain linear prediction for QMF sub-bands and applications to audio coding
EP3002751A1 (fr) Audio encoder and decoder for encoding and decoding audio samples
Motlicek et al. Scalable wide-band audio codec based on frequency domain linear prediction
Motlicek et al. Scalable Wide-band Audio Codec based on Frequency Domain Linear Prediction (version 2)
Quackenbush MPEG Audio Compression Future

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

17P Request for examination filed

Effective date: 20100223

17Q First examination report despatched

Effective date: 20100324

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

AKX Designation fees paid

Designated state(s): DE FR GB

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/02 20060101ALI20100830BHEP

Ipc: G10L 19/14 20060101AFI20100830BHEP

Ipc: G10L 19/04 20060101ALI20100830BHEP

Ipc: G10L 11/02 20060101ALI20100830BHEP

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON LICENSING

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602008007198

Country of ref document: DE

Effective date: 20110707

REG Reference to a national code

Ref country code: DE

Ref legal event code: R084

Ref document number: 602008007198

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 20110627

REG Reference to a national code

Ref country code: DE

Ref legal event code: R084

Ref document number: 602008007198

Country of ref document: DE

Effective date: 20110622

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20120228

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602008007198

Country of ref document: DE

Effective date: 20120228

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20150626

Year of fee payment: 8

Ref country code: DE

Payment date: 20150625

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20150622

Year of fee payment: 8

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602008007198

Country of ref document: DE

Representative=s name: KASTEL PATENTANWAELTE, DE

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602008007198

Country of ref document: DE

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602008007198

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20160625

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20170228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160630

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160625