WO2003049081A1 - Low bit rate codec - Google Patents
- Publication number
- WO2003049081A1 (PCT/SE2002/002226)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- signal
- encoding
- encoded
- decoding
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0212—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using orthogonal transformation
Definitions
- the present invention relates to predictive encoding and decoding of a signal, more particularly it relates to predictive encoding and decoding of a signal representing sound, such as speech, audio, or video.
- Real-time transmission over packet switched networks, such as transmission of speech, audio, or video over Internet Protocol based networks (mainly the Internet or intranets), has become increasingly attractive.
- Attractive features include relatively low operating costs, easy integration of new services, and one network for both non-real-time and real-time data.
- Real-time data, typically a speech, audio, or video signal, is converted into a digital signal, i.e. a bitstream, which is divided into portions of suitable size in order to be transmitted in data packets over the packet switched network from a transmitter end to a receiver end.
- Since packet switched networks originally were designed for transmission of non-real-time data, transmission of real-time data over such networks causes some problems.
- Data packets can be lost during transmission, as they can be deliberately discarded by the network due to congestion problems or transmission errors. In non-real-time applications this is not a problem, since a lost packet can be retransmitted. However, retransmission is not a possible solution for real-time applications that are delay sensitive. A packet that arrives too late at a real-time application cannot be used to reconstruct the corresponding signal, since this signal already has been, or should have been, delivered to the receiving end, e.g. for playback by a speaker or for visualization on a display screen. Therefore, a packet that arrives too late is equivalent to a lost packet.
- the main problem with lost or delayed data packets is the introduction of distortion in the reconstructed signal.
- the distortion results from the fact that signal segments conveyed by lost or delayed data packets cannot be reconstructed.
- a predictive coding method encodes a signal pattern based on dependencies between the pattern representations. It encodes the signal for transmission with a fixed bit rate and with a tradeoff between the signal quality and the transmitted bit rate.
- Examples of predictive coding methods used for speech are Linear Predictive Coding (LPC) and Code Excited Linear Prediction (CELP), both of which are well known to a person skilled in the art.
- a coder state is dependent on previously encoded parts of the signal.
- a lost packet will lead to error propagation since information on which the predictive coder state at the receiving end is dependent upon will be lost together with the lost packet. This means that decoding of a subsequent packet will start with an incorrect coder state. Thus, the error due to the lost packet will propagate during decoding and reconstruction of the signal.
- One way to solve this problem of error propagation is to reset the coder state at the beginning of the encoded signal part included in a packet.
- a reset of the coder state will lead to a degradation of the quality of the reconstructed signal.
- Another way of reducing the effect of a lost packet is to use different schemes for including redundancy information when encoding the signal. In this way the coder state after a lost packet can be approximated.
- not only does such a scheme require more bandwidth for transferring the encoded signal, it furthermore only reduces the effect of the lost packet. Since the effect of a lost packet will not be completely eliminated, error propagation will still be present and result in a perceptually lower quality of the reconstructed signal.
- Another problem with state of the art predictive coders is the encoding, and subsequent reconstruction, of sudden signal transitions from a very low to a much higher signal level, e.g. during a voicing onset of a speech signal.
- it takes time before the coder state reflects the sudden transition and, more importantly, the beginning of the voiced period following the transition. This in turn will lead to a degraded quality of the reconstructed signal at the decoding end.
- An object of the present invention is to overcome at least some of the above-mentioned problems in connection with predictive encoding/decoding of a signal which is transmitted in packets. Another object is to enable an improved performance at a decoding end in connection with predictive encoding/decoding when a packet with an encoded signal portion transmitted from an encoding end is lost before being received at the decoding end. Yet another object is to improve the predictive encoding and decoding of a signal which undergoes a sudden increase of its signal power.
- a signal is divided into blocks and then encoded, and eventually decoded, on a block by block basis.
- the idea is to provide predictive encoding/decoding of a block so that the encoding/decoding is independent of any preceding blocks, while still being able to provide predictive encoding/decoding of the beginning part of the block in such a way that the corresponding part of the signal can be reproduced with the same level of quality as other parts of the signal.
- This is achieved by basing the encoding and the decoding of a block on a coded start state located somewhere between the end boundaries of the block. The start state is encoded/decoded using any applicable coding method.
- a second block part and a third block part, if such a third part is determined to exist, on respective sides of the start state and between the block boundaries, are then encoded/decoded using any predictive coding method.
- the two block parts are encoded/decoded in opposite directions with respect to each other. For example, the block part located at the end part of the block is encoded/decoded along the signal pattern as it occurs in time, while the other part located at the beginning of the block is encoded/decoded along the signal pattern backwards in time, from later occurring signal pattern to earlier occurring signal pattern.
- the third block part is encoded in an opposite direction in comparison with the encoding of the second block part.
- decoding of an encoded block is performed in three stages when reproducing a corresponding decoded signal block.
- a predictive decoding method based on the start state is used for reproducing the second part of the block located between the start state and one of the two end boundaries of the block.
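- As a rough illustration of this three-part scheme, the following C sketch (not the patent's reference code; the stub functions, the fixed sub-block count, and the single-sub-block start state are assumptions made for brevity) shows the order in which the block parts would be visited by an encoder or decoder.

```c
#include <stdio.h>

#define NSUB 6   /* illustrative number of sub-blocks per block */

/* Stub coders: in a real codec these would run the start-state quantizer
   and the predictive (e.g. adaptive codebook) coder, respectively. */
static void code_start_state(int sb) { printf("start state in sub-block %d\n", sb); }
static void code_forward(int sb)     { printf("forward  coding of sub-block %d\n", sb); }
static void code_backward(int sb)    { printf("backward coding of sub-block %d\n", sb); }

/* Process one block: code the start state first, then the part after it
   forwards in time, then the part before it backwards in time. */
static void code_block(int start_sb)
{
    code_start_state(start_sb);
    for (int sb = start_sb + 1; sb < NSUB; sb++)   /* second block part */
        code_forward(sb);
    for (int sb = start_sb - 1; sb >= 0; sb--)     /* third block part  */
        code_backward(sb);
}

int main(void)
{
    code_block(2);   /* start state located in the third sub-block */
    return 0;
}
```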
- the signal subject to encoding in accordance with the present invention either corresponds to a digital signal or to a residual signal of an analysis filtered digital signal.
- the signal comprises a sequential pattern which represents sound, such as speech or audio, or any other phenomena that can be represented as a sequential pattern, e.g. a video or an ElectroCardioGram (ECG) signal.
- ECG ElectroCardioGram
- the encoding/decoding of the start state uses a coding method which is independent of previous parts of the signal, thus making the block self-contained with respect to information defining the start state.
- predictive encoding/decoding is preferably used also for the start state.
- the signal block is divided into a set of consecutive intervals and the start state chosen to correspond to one or more consecutive intervals of those intervals that have the highest signal energy.
- the start state can be optimized towards a signal part with relatively high signal energy. In this way an encoding/decoding of the rest of the block is accomplished which is efficient from a perceptual point of view since it can be based on a start state which is encoded/decoded with a high accuracy.
- An advantage of the present invention is that it enables the predictive coding to be performed in such way that the coded block will be self-contained with respect to information in the excitation domain, i.e. the coded information will not be correlated with information in any previously encoded block. Consequently, at decoding, the decoding of the encoded block is based on information self-contained in the encoded block. This means that if a packet carrying an encoded block is lost during transmission, the predictive decoding of subsequent encoded blocks in subsequent received packets will not be affected by lost state information in the lost packet.
- the present invention avoids the problem of error propagation that conventional predictive coding/decoding encounter during decoding when a packet carrying an encoded block is lost before reception at the decoding end. Accordingly, a codec applying the features of the present invention will become more robust to packet loss.
- the start state is chosen so as to be located in the part of the block which is associated with the highest signal power.
- in a speech signal composed of voiced and unvoiced parts, high correlation exists between signal samples within a voiced part and low correlation between signal samples within an unvoiced part.
- the correlation in the transition region between an unvoiced part and a voiced part, and vice versa, is minor and difficult to exploit. From a perceptual point of view it is more important to achieve a good waveform matching when reproducing a voiced part of the signal, whereas the waveform matching for an unvoiced part is less important.
- the present invention is able to more fully exploit the high correlation in the voiced region to the benefit of perception.
- the transition from unvoiced to highly periodic voiced sound takes a few pitch periods.
- the high bit rate of the start state encoding will be applied in a pitch cycle where high periodicity has been established, rather than in one of the very first pitch cycles of the voiced region.
- Fig. 1 shows an overview of the transmitting part of a system for transmission of sound over a packet switched network
- Fig. 2 shows an overview of the receiving part of a system for transmission of sound over a packet switched network
- Fig. 3 shows an example of a residual signal block
- Fig. 4 shows an integer sub-block and a higher resolution target for the start state for the encoding of the residual of Fig. 3;
- Fig. 5 shows a functional block diagram of an encoder encoding a start state in accordance with an embodiment of the invention
- Fig. 6 shows a functional block diagram of a decoder performing a decoding operation corresponding to the encoder in Fig. 5;
- Fig. 7 shows the encoding of a signal from the start state towards the block end boundaries
- Fig. 8 shows a functional block diagram of an adaptive codebook search advantageously exploited by an embodiment of the present invention.
- the encoding and decoding functionality according to the invention is typically included in a codec having an encoder part and a decoder part.
- an embodiment of the invention is shown in a system used for transmission of sound over a packet switched network.
- an encoder 130 operating in accordance with the present invention is included in a transmitting system.
- the sound wave is picked up by a microphone 110 and transduced into an analog electronic signal 115.
- This signal is sampled and digitized by an A/D-converter 120 to result in a sampled signal 125.
- the sampled signal is the input to the encoder 130.
- the output from the encoder is data packets 135.
- Each data packet contains compressed information about a block of samples.
- the data packets are, via a controller 140, forwarded to the packet switched network.
- a decoder 270 operating in accordance with the present invention is included in a receiving system.
- the data packets are received from the packet switched network by a controller 250, and stored in a jitter buffer 260. From the jitter buffer data packets 265 are made available to the decoder 270.
- the output of the decoder is a sampled digital signal 275. Each data packet results in one block of signal samples.
- the sampled digital signal is input to a D/A-converter 280 to result in an analog electronic signal 285. This signal can be forwarded to a sound transducer 290, containing a loudspeaker, to result in a reproduced sound wave.
- LPC linear predictive coding
- APC adaptive predictive coding
- CELP code excited linear prediction
- a codec according to the present invention uses a start state, i.e., a sequence of samples localized within the signal block, to initialize the coding of the remaining parts of the signal block.
- the principle of the invention complies with an open-loop analysis-synthesis approach for the LPC as well as the closed-loop analysis-by-synthesis approach, which is well known from CELP.
- An open-loop coding in a perceptually weighted domain provides an alternative to analysis-by-synthesis to obtain a perceptual weighting of the coding noise. When compared with analysis-by-synthesis this method provides an advantageous compromise between voice quality and computational complexity of the proposed scheme.
- the open-loop coding in a perceptually weighted domain is described later in this description.
- the input to the encoder is the digital signal 125.
- This signal can take the format of 16 bit uniform pulse code modulation (PCM) sampled at 8 kHz and with a direct current (DC) component removed.
- PCM uniform pulse code modulation
- DC direct current
- the input is partitioned into blocks of e.g. 240 samples. Each block is subdivided into, e.g., 6 consecutive sub-blocks of, e.g., 40 samples each.
- any method can be used to extract a spectral envelope from the signal block without diverging from the spirit of the invention.
- One method is outlined as follows: For each input block, the encoder performs a number, e.g. two, of linear-predictive coding (LPC) analyses, each with an order of e.g. 10.
- LPC linear-predictive coding
- the resulting LPC coefficients are encoded, preferably in the form of line spectral frequencies (LSF).
- LSF line spectral frequencies
- the encoding of LSFs is well known to a person skilled in the art. This encoding may exploit correlations between sets of coefficients, e.g., by use of predictive coding for some of the sets.
- the LPC analysis may exploit different, and possibly non-symmetric, window functions in order to obtain a good compromise between smoothness and centering of the windows and lookahead delay introduced in the coding.
- the quantized LPC representations can advantageously be interpolated to result in a larger number of smoothly time varying sets of LSF coefficients. Subsequently the LPC residual is obtained using the quantized and smoothly interpolated LSF coefficients converted into coefficients for an analysis filter.
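- As a minimal sketch of the residual computation just described (an order-10 analysis filter is assumed; the interpolation of the coefficients and their actual values are not shown, and the function below is illustrative, not taken from the patent), the analysis filter A(z) can be run over a block as follows.

```c
#include <stddef.h>

#define LPC_ORDER 10   /* example analysis order from the text */

/* FIR analysis filtering: residual[n] = sum_{k=0..LPC_ORDER} a[k]*x[n-k],
   with a[0] = 1. 'mem' holds the last LPC_ORDER samples preceding the
   block (zeros for the very first block). */
static void lpc_residual(const float *x, size_t len,
                         const float a[LPC_ORDER + 1],
                         const float mem[LPC_ORDER],
                         float *residual)
{
    for (size_t n = 0; n < len; n++) {
        float acc = 0.0f;
        for (int k = 0; k <= LPC_ORDER; k++) {
            long i = (long)n - k;
            acc += a[k] * (i >= 0 ? x[i] : mem[LPC_ORDER + i]);
        }
        residual[n] = acc;
    }
}
```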
- An example of a residual signal block 315 and its partition into sub-blocks 316, 317, 318, 319, 320 and 321 is illustrated in Figure 3; the number of sub-blocks is merely illustrative. In this figure each interval on the time axis indicates a sub-block.
- the identification of a target for a start state within the exemplary residual block in Figure 3 is illustrated in Figure 4. In a simple implementation this target can, e.g., be identified as the two consecutive sub-blocks 317 and 318 of the residual exhibiting the maximal energy of any two consecutive sub-blocks within the block.
- the length of the target can be further shortened and localized with higher time resolution by identifying a subset of consecutive samples 325 of possibly predefined length within the two-sub-block interval.
- a subset can be chosen as a leading or trailing predefined number, e.g. 58, of samples within the two-sub-block interval.
- the choice between the leading and trailing subset can be based on a maximum energy criterion.
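- A sketch of this two-stage localization is given below; the sub-block length of 40, the two-sub-block window and the 58-sample subset are the example values mentioned above, while the function names are purely illustrative.

```c
#include <stddef.h>

#define NSUB       6    /* sub-blocks per block (example value)       */
#define SUBLEN    40    /* samples per sub-block (example value)      */
#define STATE_LEN 58    /* refined start-state length (example value) */

static float energy(const float *x, size_t len)
{
    float e = 0.0f;
    for (size_t i = 0; i < len; i++) e += x[i] * x[i];
    return e;
}

/* Return the sample index of the start-state target within the residual
   block: first pick the two consecutive sub-blocks with maximal energy,
   then pick the leading or trailing STATE_LEN samples of that window,
   again by a maximum energy criterion. */
static size_t locate_start_state(const float residual[NSUB * SUBLEN])
{
    size_t best_sb = 0;
    float best_e = -1.0f;
    for (size_t sb = 0; sb + 1 < NSUB; sb++) {
        float e = energy(residual + sb * SUBLEN, 2 * SUBLEN);
        if (e > best_e) { best_e = e; best_sb = sb; }
    }
    size_t win = best_sb * SUBLEN;                       /* two-sub-block window */
    float e_lead  = energy(residual + win, STATE_LEN);
    float e_trail = energy(residual + win + 2 * SUBLEN - STATE_LEN, STATE_LEN);
    return (e_lead >= e_trail) ? win : win + 2 * SUBLEN - STATE_LEN;
}
```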
- the start state can be encoded with basically any encoding method.
- scalar quantization with predictive noise shaping is used, as illustrated in Figure 5.
- the scalar quantization is pre-pended with an all-pass filtering 520 designed to spread the sample energy on all samples in the start state. It has been found that this results in a good tradeoff between overload and granular noise of a low rate bounded scalar quantizer.
- a simple design of such an all-pass filter is obtained by applying the LPC synthesis filter forwards in time and the corresponding LPC analysis filter backwards in time. To be specific, if the quantized LPC analysis filter is Aq(z), with coefficients 516, then the all-pass filter 520 is given by Aq(z^-1)/Aq(z).
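- A sketch of that dispersion filtering is given below, assuming quantized analysis coefficients aq with aq[0] = 1 and zero filter state outside the start-state buffer; the forward pass realizes 1/Aq(z) and the backward pass realizes Aq(z^-1), together approximating Aq(z^-1)/Aq(z).

```c
#include <stddef.h>

#define LPC_ORDER 10

/* Dispersion filter Aq(z^-1)/Aq(z), applied in place to the start-state
   target: the all-pole synthesis filter 1/Aq(z) is run forwards in time,
   then the FIR analysis filter Aq(z) is run backwards in time.
   aq[0] = 1; filter state outside the buffer is assumed to be zero. */
static void allpass_disperse(float x[], size_t len, const float aq[LPC_ORDER + 1])
{
    /* forward, all-pole: y[n] = x[n] - sum_{k=1..ORDER} aq[k]*y[n-k] */
    for (size_t n = 0; n < len; n++)
        for (size_t k = 1; k <= LPC_ORDER && k <= n; k++)
            x[n] -= aq[k] * x[n - k];

    /* backward, FIR: z[n] = sum_{k=0..ORDER} aq[k]*y[n+k]; updating in
       place is safe because z[n] only reads samples at index n and later */
    for (size_t n = 0; n < len; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k <= LPC_ORDER && n + k < len; k++)
            acc += aq[k] * x[n + k];
        x[n] = acc;
    }
}
```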
- the filtered target 525 is normalized to exhibit a predefined maximal amplitude by the normalization 530 to result in the normalized target 535 and an index of quantized normalization factor 536.
- the weighting of the quantization error is divided into a filtering 540 of the normalized target 535 and a filtering 560 of the quantized target 556. For each sample, the ringing, or zero-input response, obtained from the latter filtering is subtracted from the weighted target 545 to result in the quantization target 547, which is input to the quantizer 550.
- the result is a sequence of indexes 555 of the quantized start state.
- any noise shaping weighting filter 540 and 560 can be applied in this embodiment.
- the same noise shaping is applied in the encoding of the start state as in the subsequent encoding of the remaining signal block, described later.
- memset(targetBuf, 0, FILTERORDER*sizeof(float)); memset(syntOutBuf, 0, FILTERORDER*sizeof(float)); memset(weightOutBuf, 0, FILTERORDER*sizeof(float));
- memset(tmpbuf, 0, FILTERORDER*sizeof(float)); memset(foutbuf, 0, FILTERORDER*sizeof(float));
- numerator[k] = syntDenum[FILTERORDER-k];
- numerator[FILTERORDER] = syntDenum[0];
- tmp = &tmpbuf[FILTERORDER];
- fout = &foutbuf[FILTERORDER];
- ZeroPoleFilter(tmp, numerator, syntDenum, 2*len, FILTERORDER, fout);
- AbsQuant(fout, syntDenum, weightNum, weightDenum, idxVec, len);
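- The quantization loop itself can be illustrated in general terms by the following sketch; the uniform quantizer and the single-tap error-shaping filter are simplifications standing in for the bounded scalar quantizer 550 and the LPC-derived weighting filters 540 and 560, and the function is not part of the reference code quoted above.

```c
#include <math.h>
#include <stddef.h>

#define BETA 0.6f   /* illustrative noise-shaping coefficient */

/* Scalar quantization with noise feedback: the error made on each sample
   is fed through a (here single-tap) shaping filter and subtracted from
   the next target sample, so the coding noise is spectrally shaped. */
static void quantize_noise_shaped(const float *target, size_t len, float step,
                                  int *indexes, float *quantized)
{
    float fed_back = 0.0f;
    for (size_t n = 0; n < len; n++) {
        float t   = target[n] - fed_back;        /* shaped quantization target */
        int   idx = (int)lroundf(t / step);      /* uniform scalar quantizer   */
        float q   = (float)idx * step;
        indexes[n]   = idx;
        quantized[n] = q;
        fed_back = BETA * (q - t);               /* error shaped into next sample */
    }
}
```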
- the indexes 615 of the quantized start state are looked up in the scalar codebook 620 to result in the reconstruction of the quantized start state 625.
- the quantized start state is then de-normalized 630 using the index of the quantized normalization factor 626. This produces the de-normalized start state 635, which is input to the inverse all-pass filter 640, taking coefficients 636, to result in the decoded start state 645.
- memset(tmpbuf, 0, FILTERORDER*sizeof(float)); memset(foutbuf, 0, FILTERORDER*sizeof(float));
- numerator[k] = syntDenum[FILTERORDER-k];
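- A sketch of the corresponding start-state decoding path (codebook lookup, de-normalization, inverse dispersion filtering) is given below; the 3-bit codebook values are placeholders, and the inverse filter simply undoes the two passes of the encoder-side sketch in reverse order under the same zero-state assumption.

```c
#include <stddef.h>

#define LPC_ORDER 10
#define STATE_LEN 58

/* Illustrative 3-bit scalar codebook; the values are placeholders, not
   the codec's actual quantization levels. */
static const float state_cb[8] = {
    -1.0f, -0.6f, -0.3f, -0.1f, 0.1f, 0.3f, 0.6f, 1.0f
};

/* Inverse of the encoder-side dispersion filter: first undo the backward
   FIR pass with an anticausal all-pole filter, then undo the forward
   synthesis pass with the causal FIR analysis filter Aq(z). aq[0] = 1. */
static void inverse_disperse(float x[], size_t len, const float aq[LPC_ORDER + 1])
{
    for (size_t n = len; n-- > 0; )                       /* anticausal all-pole */
        for (size_t k = 1; k <= LPC_ORDER && n + k < len; k++)
            x[n] -= aq[k] * x[n + k];
    for (size_t n = len; n-- > 0; )                       /* causal FIR Aq(z)    */
        for (size_t k = 1; k <= LPC_ORDER && k <= n; k++)
            x[n] += aq[k] * x[n - k];
}

/* Decode the start state: codebook lookup, de-normalization with the
   decoded gain, then inversion of the dispersion filter. */
static void decode_start_state(const int idx[STATE_LEN], float gain,
                               const float aq[LPC_ORDER + 1],
                               float state[STATE_LEN])
{
    for (size_t n = 0; n < STATE_LEN; n++)
        state[n] = gain * state_cb[idx[n]];
    inverse_disperse(state, STATE_LEN, aq);
}
```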
- the remaining samples of the block can be encoded in a multitude of ways that all exploit the start state as an initialization for the state of the encoding algorithm.
- a linear predictive algorithm can be used for the encoding of the remaining samples.
- the application of an adaptive codebook enables an efficient exploitation of the start state during voiced speech segments.
- the encoded start state is used to populate the adaptive codebook.
- an initialization of the state for the error weighting filters is advantageously done using the start state. The specifics of such initializations can be handled in a multitude of ways well known to a person skilled in the art.
- the start state 715, which is an example of the signal 645 and which is a decoded representation of the start state target 325, is extended to an integer sub-block length start state 725. Thereafter, these sub-blocks are used as a start state for the encoding of the remaining sub-blocks within the block A-B (the number of sub-blocks being merely illustrative).
- This encoding can start by either encoding the sub-blocks later in time, or by encoding the sub-blocks earlier in time. While both choices are readily possible under the scope of the invention, we describe in detail only embodiments which start with the encoding of sub-blocks later in time.
- an adaptive codebook and a weighting filter are initialized from the start state for the encoding of sub-blocks later in time. Each of these sub-blocks is subsequently encoded. As an example, this can result in the signal 735 in Figure 7. If more than one sub-block is later in time than the integer sub-block start state within the block, then the adaptive codebook memory is updated with the encoded LPC excitation in preparation for the encoding of the next sub-block. This is done by methods which are well known by a person skilled in the art.
- if the block contains sub-blocks earlier in time than the ones encoded for the start state, then a procedure equal to the one applied for sub-blocks later in time is applied on the time-reversed block to encode these sub-blocks.
- the difference is, when compared to the encoding of the sub-blocks later in time, that now not only the start state, but also the LPC excitation later in time than the start state, is applied in the initialization of the adaptive codebook and the perceptual weighting filter. As an example, this will extend the signal 735 into a full decoded representation 745, which is the resulting decoded representation of the LPC residual 315.
- the signal 745 constitutes the LPC excitation for the decoder.
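- The following sketch illustrates this forward-then-backward ordering with an explicit time reversal; encode_subblock is a stand-in for the adaptive-codebook coder, and the carrying of codebook memory and weighting-filter state across sub-blocks is omitted for brevity.

```c
#include <string.h>

#define NSUB    6
#define SUBLEN  40
#define BLOCKL  (NSUB * SUBLEN)

/* Stand-in for the predictive sub-block coder (adaptive codebook etc.);
   here it just copies the target so the example stays self-contained. */
static void encode_subblock(const float *target, float *decoded)
{
    memcpy(decoded, target, SUBLEN * sizeof(float));
}

static void reverse(float *x, int len)
{
    for (int i = 0, j = len - 1; i < j; i++, j--) {
        float t = x[i]; x[i] = x[j]; x[j] = t;
    }
}

/* Encode the sub-blocks after the start state forwards in time, then
   time-reverse the block and encode the remaining (earlier) sub-blocks,
   which in the reversed domain again appear as "later" sub-blocks. */
static void encode_remaining(const float residual[BLOCKL], float decresidual[BLOCKL],
                             int state_first_sb, int state_nsub)
{
    int nfor  = NSUB - state_first_sb - state_nsub;  /* sub-blocks after the state  */
    int nback = state_first_sb;                      /* sub-blocks before the state */

    for (int i = 0; i < nfor; i++) {
        int sb = state_first_sb + state_nsub + i;
        encode_subblock(&residual[sb * SUBLEN], &decresidual[sb * SUBLEN]);
    }

    if (nback > 0) {
        float rev[BLOCKL], revdec[BLOCKL];
        memcpy(rev, residual, sizeof(rev));
        memcpy(revdec, decresidual, sizeof(revdec));   /* would seed codebook memory */
        reverse(rev, BLOCKL);
        reverse(revdec, BLOCKL);
        for (int i = 0; i < nback; i++) {              /* earlier sub-blocks, reversed */
            int sb = NSUB - nback + i;
            encode_subblock(&rev[sb * SUBLEN], &revdec[sb * SUBLEN]);
        }
        reverse(revdec, BLOCKL);
        memcpy(decresidual, revdec, (size_t)nback * SUBLEN * sizeof(float));
    }
}
```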
- void iLBC_encode( /* main encoder function */ float *speech, /* (i) speech data vector */ unsigned char *bytes, /* (o) encoded data bits */ float *block, /* (o) decoded speech vector */ int mode, /* (i) 1 for standard encoding, 2 for redundant encoding */ float *decresidual, /* (o) decoded residual prior to gain adaption (useful for a redundant encoding unit) */ float *syntdenum, /* (o) decoded synthesis filters (useful for a redundant encoding unit) */ float *weightnum, /* (o) weighting numerator (useful for a redundant encoding unit) */ float *weightdenum /* (o) weighting denumerator (useful for a redundant encoding unit) */ )
- int start, idxForMax, idxVec[STATE_LEN]; float reverseDecresidual[BLOCKL], mem[MEML]; int n, k, kk, meml_gotten, Nfor, Nback, i; int dummy = 0; int gain_index[NSTAGES*NASUB], extra_gain_index[NSTAGES]; int cb_index[NSTAGES*NASUB], extra_cb_index[NSTAGES]; int lsf_i[LSF_NSPLIT*LPC_N]; unsigned char *pbytes; int diff, start_pos, state_first; float en1, en2; int index, gc_index; int subcount, subframe; float weightState[FILTERORDER];
- variable start indicates the beginning of the signal 317, 318 (Figure 4) in integer number of sub-blocks */
- variable start_pos now indicates the beginning of the signal 325 (Figure 4) in integer number of samples */ /* scalar quantization of state */
- This function does a weighted multistage search of shape and gain indexes */
- decresidual contains the signal of which signal 725 in Figure 7 is an example */
- Weighted adaptive codebook search: In the described forward and backward encoding procedures, the adaptive codebook search can be done in an un-weighted residual domain, or a traditional analysis-by-synthesis weighting can be applied, or, as a third method, the search can be performed in a pre-weighted domain as described below.
- the method consists of a pre-weighting of the adaptive codebook memory and the target signal prior to construction of the adaptive codebook and the subsequent search for the best codebook index.
- the advantage of this method, compared to analysis-by-synthesis, is that the weighting filtering of the codebook memory requires fewer computations than the zero-state filter recursion of an analysis-by-synthesis encoding for adaptive codebooks.
- the drawback of this method is that the weighted codebook vectors will have a zero-input component which results from past samples in the codebook memory rather than from past samples of the decoded signal, as in analysis-by-synthesis. This negative effect can be kept low by designing the weighting filter to have low energy in the zero-input component relative to the zero-state component over the length of a codebook vector.
- An implementation of this third method is schematized in Figure 8.
- This buffer is then weighting filtered 830 using the weighted LPC coefficients 836.
- the weighted buffer 835 is then separated 840 into the time samples corresponding to the memory and those corresponding to the target.
- the weighted memory 845 is then used to build the adaptive codebook 850.
- the adaptive codebook 855 need not differ in physical memory location from the weighted memory 845, since time shifted codebook vectors can be addressed the same way as time shifted samples in the memory buffer.
- memcpy(buf, weightState, sizeof(float)*FILTERORDER); memcpy(&buf[FILTERORDER], mem, lMem*sizeof(float)); memcpy(&buf[FILTERORDER+lMem], target, lTarget*sizeof(float)); /* At this point buf is the signal 825 in Fig. 8 */
- index[stage] = best_index;
- gain = gainquant(gain, (float)fabs(gains[stage-1]), 8, &gain_index[stage]);
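- A sketch of the pre-weighted adaptive codebook search described above follows; the memory length, the FIR weighting filter, and the normalized-correlation criterion are illustrative choices rather than the patent's specific parameters.

```c
#include <stddef.h>

#define MEML    85   /* illustrative adaptive-codebook memory length   */
#define SUBLEN  40   /* target (sub-block) length                      */
#define WORDER   2   /* order of the illustrative FIR weighting filter */

/* Weight the concatenated memory-plus-target buffer in one pass, so the
   codebook vectors taken from the memory part are already weighted. */
static void weight_buffer(float buf[MEML + SUBLEN], const float w[WORDER + 1])
{
    for (size_t n = MEML + SUBLEN; n-- > 0; ) {
        float acc = 0.0f;
        for (size_t k = 0; k <= WORDER && k <= n; k++)
            acc += w[k] * buf[n - k];
        buf[n] = acc;        /* in place; earlier samples not yet overwritten */
    }
}

/* Search the weighted memory for the lag whose codebook vector best
   matches the weighted target in a normalized-correlation sense. */
static int search_adaptive_cb(const float buf[MEML + SUBLEN], float *gain)
{
    const float *target = buf + MEML;
    int best_lag = SUBLEN;
    float best_score = -1.0f, best_gain = 0.0f;
    for (int lag = SUBLEN; lag <= MEML; lag++) {     /* lags that fit the memory */
        const float *v = buf + MEML - lag;
        float cc = 0.0f, ee = 0.0f;
        for (int n = 0; n < SUBLEN; n++) {
            cc += v[n] * target[n];
            ee += v[n] * v[n];
        }
        float score = (ee > 0.0f) ? cc * cc / ee : 0.0f;
        if (score > best_score) {
            best_score = score;
            best_lag   = lag;
            best_gain  = (ee > 0.0f) ? cc / ee : 0.0f;
        }
    }
    *gain = best_gain;
    return best_lag;
}
```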
- the decoder covered by the present invention is any decoder that interoperates with an encoder according to the above description. Such a decoder will extract from the encoded data a location for the start state. It will decode the start state and use it as an initialization of a memory for the decoding of the remaining signal frame. In case a data packet is not received, a packet loss concealment could be advantageous.
- This function does a synthesis filtering of the decoded residual */ memcpy(decblock, decresidual, BLOCKL*sizeof(float)); memcpy(old_syntdenum, syntdenum, NSUB*(FILTERORDER+1)*sizeof(float));
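- As a high-level sketch of such a decoder's control flow (illustrative stub functions only; the concealment step and the synthesis filtering shown above are merely indicated):

```c
#include <stdbool.h>

#define NSUB   6
#define SUBLEN 40

/* Illustrative stubs for the decoding steps described above. */
static int  read_start_location(const unsigned char *bytes)                 { (void)bytes; return 2; }
static void decode_start_state_sb(const unsigned char *bytes, float *res)   { (void)bytes; (void)res; }
static void decode_subblock(const unsigned char *bytes, int sb, float *res) { (void)bytes; (void)sb; (void)res; }
static void conceal_lost_block(float *res)                                  { (void)res; }

/* Decode one block independently of all other blocks. */
static void decode_block(const unsigned char *bytes, bool received,
                         float residual[NSUB * SUBLEN])
{
    if (!received) {                       /* lost packet: run concealment */
        conceal_lost_block(residual);
        return;
    }
    int start_sb = read_start_location(bytes);
    decode_start_state_sb(bytes, &residual[start_sb * SUBLEN]);
    for (int sb = start_sb + 1; sb < NSUB; sb++)     /* later sub-blocks   */
        decode_subblock(bytes, sb, &residual[sb * SUBLEN]);
    for (int sb = start_sb - 1; sb >= 0; sb--)       /* earlier sub-blocks */
        decode_subblock(bytes, sb, &residual[sb * SUBLEN]);
    /* followed by LPC synthesis filtering of 'residual', as in the listing above */
}
```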
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Synchronisation In Digital Transmission Systems (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Dc Digital Transmission (AREA)
- Stabilization Of Oscillater, Synchronisation, Frequency Synthesizers (AREA)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2002358365A AU2002358365A1 (en) | 2001-12-04 | 2002-12-03 | Low bit rate codec |
US10/497,530 US7895046B2 (en) | 2001-12-04 | 2002-12-03 | Low bit rate codec |
EP02792126A EP1451811B1 (en) | 2001-12-04 | 2002-12-03 | Low bit rate codec |
AT02792126T ATE437431T1 (en) | 2001-12-04 | 2002-12-03 | LOW BITRATE CODEC |
DE60233068T DE60233068D1 (en) | 2001-12-04 | 2002-12-03 | CODEC WITH LOW BITRATE |
US13/030,929 US8880414B2 (en) | 2001-12-04 | 2011-02-18 | Low bit rate codec |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SE0104059A SE521600C2 (en) | 2001-12-04 | 2001-12-04 | Lågbittaktskodek |
SE0104059-1 | 2001-12-04 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10497530 A-371-Of-International | 2002-12-03 | ||
US13/030,929 Continuation US8880414B2 (en) | 2001-12-04 | 2011-02-18 | Low bit rate codec |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003049081A1 true WO2003049081A1 (en) | 2003-06-12 |
Family
ID=20286184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SE2002/002226 WO2003049081A1 (en) | 2001-12-04 | 2002-12-03 | Low bit rate codec |
Country Status (8)
Country | Link |
---|---|
US (2) | US7895046B2 (en) |
EP (1) | EP1451811B1 (en) |
CN (1) | CN1305024C (en) |
AT (1) | ATE437431T1 (en) |
AU (1) | AU2002358365A1 (en) |
DE (1) | DE60233068D1 (en) |
SE (1) | SE521600C2 (en) |
WO (1) | WO2003049081A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007008001A3 (en) * | 2005-07-11 | 2007-03-22 | Lg Electronics Inc | Apparatus and method of encoding and decoding audio signal |
WO2007124485A2 (en) * | 2006-04-21 | 2007-11-01 | Dilithium Networks Pty Ltd. | Method and apparatus for audio transcoding |
EP2296144A1 (en) * | 2008-12-31 | 2011-03-16 | Huawei Technologies Co., Ltd. | Method and apparatus for distributing sub-frame |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE521600C2 (en) | 2001-12-04 | 2003-11-18 | Global Ip Sound Ab | Lågbittaktskodek |
US7024358B2 (en) * | 2003-03-15 | 2006-04-04 | Mindspeed Technologies, Inc. | Recovering an erased voice frame with time warping |
FR2861491B1 (en) * | 2003-10-24 | 2006-01-06 | Thales Sa | METHOD FOR SELECTING SYNTHESIS UNITS |
US7602867B2 (en) | 2004-08-17 | 2009-10-13 | Broadcom Corporation | System and method for linear distortion estimation by way of equalizer coefficients |
CA2596341C (en) * | 2005-01-31 | 2013-12-03 | Sonorit Aps | Method for concatenating frames in communication system |
TWI285568B (en) * | 2005-02-02 | 2007-08-21 | Dowa Mining Co | Powder of silver particles and process |
SG179433A1 (en) * | 2007-03-02 | 2012-04-27 | Panasonic Corp | Encoding device and encoding method |
US8280539B2 (en) * | 2007-04-06 | 2012-10-02 | The Echo Nest Corporation | Method and apparatus for automatically segueing between audio tracks |
US20100274556A1 (en) * | 2008-01-16 | 2010-10-28 | Panasonic Corporation | Vector quantizer, vector inverse quantizer, and methods therefor |
CA2717584C (en) * | 2008-03-04 | 2015-05-12 | Lg Electronics Inc. | Method and apparatus for processing an audio signal |
CA2729665C (en) * | 2008-07-10 | 2016-11-22 | Voiceage Corporation | Variable bit rate lpc filter quantizing and inverse quantizing device and method |
FR2938688A1 (en) * | 2008-11-18 | 2010-05-21 | France Telecom | ENCODING WITH NOISE FORMING IN A HIERARCHICAL ENCODER |
US9245529B2 (en) * | 2009-06-18 | 2016-01-26 | Texas Instruments Incorporated | Adaptive encoding of a digital signal with one or more missing values |
US8554746B2 (en) | 2010-08-18 | 2013-10-08 | Hewlett-Packard Development Company, L.P. | Multiple-source data compression |
MX2018016263A (en) | 2012-11-15 | 2021-12-16 | Ntt Docomo Inc | Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program. |
US10523490B2 (en) * | 2013-08-06 | 2019-12-31 | Agilepq, Inc. | Authentication of a subscribed code table user utilizing optimized code table signaling |
US10056919B2 (en) | 2014-07-02 | 2018-08-21 | Agilepq, Inc. | Data recovery utilizing optimized code table signaling |
AU2017278253A1 (en) | 2016-06-06 | 2019-01-24 | Agilepq, Inc. | Data conversion systems and methods |
US9934785B1 (en) | 2016-11-30 | 2018-04-03 | Spotify Ab | Identification of taste attributes from an audio signal |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE501981C2 (en) * | 1993-11-02 | 1995-07-03 | Ericsson Telefon Ab L M | Method and apparatus for discriminating between stationary and non-stationary signals |
US5621852A (en) * | 1993-12-14 | 1997-04-15 | Interdigital Technology Corporation | Efficient codebook structure for code excited linear prediction coding |
US6101276A (en) * | 1996-06-21 | 2000-08-08 | Compaq Computer Corporation | Method and apparatus for performing two pass quality video compression through pipelining and buffer management |
FR2762464B1 (en) * | 1997-04-16 | 1999-06-25 | France Telecom | METHOD AND DEVICE FOR ENCODING AN AUDIO FREQUENCY SIGNAL BY "FORWARD" AND "BACK" LPC ANALYSIS |
EP1146713B1 (en) * | 2000-03-03 | 2005-04-27 | NTT DoCoMo, Inc. | Method and apparatus for packet transmission with header compression |
SE522261C2 (en) * | 2000-05-10 | 2004-01-27 | Global Ip Sound Ab | Encoding and decoding of a digital signal |
JP2002101417A (en) * | 2000-09-22 | 2002-04-05 | Oki Electric Ind Co Ltd | Moving image encoding method and device therefor |
US7020284B2 (en) * | 2000-10-06 | 2006-03-28 | Patrick Oscar Boykin | Perceptual encryption and decryption of movies |
US7171355B1 (en) * | 2000-10-25 | 2007-01-30 | Broadcom Corporation | Method and apparatus for one-stage and two-stage noise feedback coding of speech and audio signals |
JP3957460B2 (en) * | 2001-01-15 | 2007-08-15 | 沖電気工業株式会社 | Transmission header compression apparatus, moving picture encoding apparatus, and moving picture transmission system |
SE521600C2 (en) | 2001-12-04 | 2003-11-18 | Global Ip Sound Ab | Lågbittaktskodek |
-
2001
- 2001-12-04 SE SE0104059A patent/SE521600C2/en not_active IP Right Cessation
-
2002
- 2002-12-03 WO PCT/SE2002/002226 patent/WO2003049081A1/en not_active Application Discontinuation
- 2002-12-03 EP EP02792126A patent/EP1451811B1/en not_active Expired - Lifetime
- 2002-12-03 AU AU2002358365A patent/AU2002358365A1/en not_active Abandoned
- 2002-12-03 CN CNB028271866A patent/CN1305024C/en not_active Expired - Lifetime
- 2002-12-03 AT AT02792126T patent/ATE437431T1/en not_active IP Right Cessation
- 2002-12-03 DE DE60233068T patent/DE60233068D1/en not_active Expired - Lifetime
- 2002-12-03 US US10/497,530 patent/US7895046B2/en active Active
-
2011
- 2011-02-18 US US13/030,929 patent/US8880414B2/en not_active Expired - Lifetime
Non-Patent Citations (3)
Title |
---|
ANDERSEN S.V. ET AL.: "Multiplexed predictive coding of speech", 2001 IEEE INTERNATIONAL CONFERENCE ON ACCOUSTICS, SPEECH AND SIGNAL PROCESSING, 2001. PROCEEDINGS, vol. 2, 7 May 2001 (2001-05-07) - 11 May 2001 (2001-05-11), SALT LAKE, CITY, UT, USA, pages 741 - 744, XP002960914 * |
BOYCE J.M.: "Packet loss resilient transmission of MPEG video over the internet", SIGNAL PROCESSING IMAGE COMMUNICATIONS, vol. 15, no. 1-2, September 1999 (1999-09-01), pages 7 - 24, XP002902148 * |
LESLIE B. ET AL.: "Packet loss resilient, scalable audio compression and streaming for IP networks", SECOND INTERNATIONAL CONFERENCE ON 3G MOBILE COMMUNICATION TECHNOLOGIES, 2001. (CONF. PUBL: NO.477), 26 March 2001 (2001-03-26) - 28 March 2001 (2001-03-28), LONDON, UK, pages 119 - 123, XP002960915 * |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8180631B2 (en) | 2005-07-11 | 2012-05-15 | Lg Electronics Inc. | Apparatus and method of processing an audio signal, utilizing a unique offset associated with each coded-coefficient |
US8010372B2 (en) | 2005-07-11 | 2011-08-30 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8554568B2 (en) | 2005-07-11 | 2013-10-08 | Lg Electronics Inc. | Apparatus and method of processing an audio signal, utilizing unique offsets associated with each coded-coefficients |
US8046092B2 (en) | 2005-07-11 | 2011-10-25 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US7830921B2 (en) | 2005-07-11 | 2010-11-09 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US7835917B2 (en) | 2005-07-11 | 2010-11-16 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8510120B2 (en) | 2005-07-11 | 2013-08-13 | Lg Electronics Inc. | Apparatus and method of processing an audio signal, utilizing unique offsets associated with coded-coefficients |
US7930177B2 (en) | 2005-07-11 | 2011-04-19 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signals using hierarchical block switching and linear prediction coding |
US7949014B2 (en) | 2005-07-11 | 2011-05-24 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US7962332B2 (en) | 2005-07-11 | 2011-06-14 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8032386B2 (en) | 2005-07-11 | 2011-10-04 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8510119B2 (en) | 2005-07-11 | 2013-08-13 | Lg Electronics Inc. | Apparatus and method of processing an audio signal, utilizing unique offsets associated with coded-coefficients |
US8050915B2 (en) | 2005-07-11 | 2011-11-01 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signals using hierarchical block switching and linear prediction coding |
US7987009B2 (en) | 2005-07-11 | 2011-07-26 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signals |
US7991012B2 (en) | 2005-07-11 | 2011-08-02 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US7991272B2 (en) | 2005-07-11 | 2011-08-02 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US7996216B2 (en) | 2005-07-11 | 2011-08-09 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8326132B2 (en) | 2005-07-11 | 2012-12-04 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8032368B2 (en) | 2005-07-11 | 2011-10-04 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signals using hierarchical block swithcing and linear prediction coding |
US8032240B2 (en) | 2005-07-11 | 2011-10-04 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US7966190B2 (en) | 2005-07-11 | 2011-06-21 | Lg Electronics Inc. | Apparatus and method for processing an audio signal using linear prediction |
US8417100B2 (en) | 2005-07-11 | 2013-04-09 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US7987008B2 (en) | 2005-07-11 | 2011-07-26 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8055507B2 (en) | 2005-07-11 | 2011-11-08 | Lg Electronics Inc. | Apparatus and method for processing an audio signal using linear prediction |
US8065158B2 (en) | 2005-07-11 | 2011-11-22 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8108219B2 (en) | 2005-07-11 | 2012-01-31 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8121836B2 (en) | 2005-07-11 | 2012-02-21 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8149877B2 (en) | 2005-07-11 | 2012-04-03 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8149876B2 (en) | 2005-07-11 | 2012-04-03 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8149878B2 (en) | 2005-07-11 | 2012-04-03 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8155153B2 (en) | 2005-07-11 | 2012-04-10 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8155152B2 (en) | 2005-07-11 | 2012-04-10 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
US8155144B2 (en) | 2005-07-11 | 2012-04-10 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signal |
WO2007008001A3 (en) * | 2005-07-11 | 2007-03-22 | Lg Electronics Inc | Apparatus and method of encoding and decoding audio signal |
US8255227B2 (en) | 2005-07-11 | 2012-08-28 | Lg Electronics, Inc. | Scalable encoding and decoding of multichannel audio with up to five levels in subdivision hierarchy |
US8275476B2 (en) | 2005-07-11 | 2012-09-25 | Lg Electronics Inc. | Apparatus and method of encoding and decoding audio signals |
US7805292B2 (en) | 2006-04-21 | 2010-09-28 | Dilithium Holdings, Inc. | Method and apparatus for audio transcoding |
WO2007124485A2 (en) * | 2006-04-21 | 2007-11-01 | Dilithium Networks Pty Ltd. | Method and apparatus for audio transcoding |
WO2007124485A3 (en) * | 2006-04-21 | 2008-06-19 | Dilithium Networks Pty Ltd | Method and apparatus for audio transcoding |
EP2538407A3 (en) * | 2008-12-31 | 2013-04-24 | Huawei Technologies Co., Ltd. | Framing method and apparatus |
EP2296144A4 (en) * | 2008-12-31 | 2011-06-22 | Huawei Tech Co Ltd | Method and apparatus for distributing sub-frame |
EP2296144A1 (en) * | 2008-12-31 | 2011-03-16 | Huawei Technologies Co., Ltd. | Method and apparatus for distributing sub-frame |
EP2755203A1 (en) * | 2008-12-31 | 2014-07-16 | Huawei Technologies Co., Ltd. | Framing method and apparatus of an audio signal |
US8843366B2 (en) | 2008-12-31 | 2014-09-23 | Huawei Technologies Co., Ltd. | Framing method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
SE0104059L (en) | 2003-07-03 |
ATE437431T1 (en) | 2009-08-15 |
CN1615509A (en) | 2005-05-11 |
EP1451811A1 (en) | 2004-09-01 |
SE0104059D0 (en) | 2001-12-04 |
SE521600C2 (en) | 2003-11-18 |
US20060153286A1 (en) | 2006-07-13 |
US7895046B2 (en) | 2011-02-22 |
US20110142126A1 (en) | 2011-06-16 |
AU2002358365A1 (en) | 2003-06-17 |
US8880414B2 (en) | 2014-11-04 |
DE60233068D1 (en) | 2009-09-03 |
EP1451811B1 (en) | 2009-07-22 |
CN1305024C (en) | 2007-03-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8880414B2 (en) | Low bit rate codec | |
KR101246991B1 (en) | Audio codec post-filter | |
KR100837451B1 (en) | Method and apparatus for improved quality voice transcoding | |
JP4005359B2 (en) | Speech coding and speech decoding apparatus | |
CN101006495A (en) | Audio encoding apparatus, audio decoding apparatus, communication apparatus and audio encoding method | |
WO2001059757A2 (en) | Method and apparatus for compression of speech encoded parameters | |
WO2000038177A1 (en) | Periodic speech coding | |
EP1141947A2 (en) | Variable rate speech coding | |
EP2041745A1 (en) | Adaptive encoding and decoding methods and apparatuses | |
EP2945158B1 (en) | Method and arrangement for smoothing of stationary background noise | |
JPH10124094A (en) | Voice analysis method and method and device for voice coding | |
US6826527B1 (en) | Concealment of frame erasures and method | |
JP2003501675A (en) | Speech synthesis method and speech synthesizer for synthesizing speech from pitch prototype waveform by time-synchronous waveform interpolation | |
US7684978B2 (en) | Apparatus and method for transcoding between CELP type codecs having different bandwidths | |
CA2293165A1 (en) | Method for transmitting data in wireless speech channels | |
EP1103953A2 (en) | Method for concealing erased speech frames | |
Andersen et al. | RFC 3951: Internet Low Bit Rate Codec (iLBC) | |
JP2004348120A (en) | Voice encoding device and voice decoding device, and method thereof | |
KR100341398B1 (en) | Codebook searching method for CELP type vocoder | |
JP2002073097A (en) | Celp type voice coding device and celp type voice decoding device as well as voice encoding method and voice decoding method | |
JP3350340B2 (en) | Voice coding method and voice decoding method | |
EP1212750A1 (en) | Multimode vselp speech coder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2002792126 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20028271866 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 2002792126 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006153286 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10497530 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |
|
WWP | Wipo information: published in national office |
Ref document number: 10497530 Country of ref document: US |