US6879955B2 - Signal modification based on continuous time warping for low bit rate CELP coding - Google Patents

Signal modification based on continuous time warping for low bit rate CELP coding

Info

Publication number
US6879955B2
Authority
US
United States
Prior art keywords
residual
section
lag
frame
last sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US09/896,272
Other versions
US20030004718A1 (en)
Inventor
Ajit V. Rao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US09/896,272
Assigned to MICROSOFT CORPORATION. Assignment of assignors interest (see document for details). Assignors: RAO, AJIT V.
Priority to JP2002186971A
Priority to EP02014365A
Priority to AT02014365T
Priority to DE60226200T
Publication of US20030004718A1
Priority to US11/032,595
Publication of US6879955B2
Application granted
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignment of assignors interest (see document for details). Assignors: MICROSOFT CORPORATION
Adjusted expiration
Expired - Fee Related

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L 19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/12 Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]

Definitions

  • The function c′(n) can be represented as a straight line from p(k−1) at the beginning of the frame to p(k) at the end of the frame. It represents a smoothly varying pitch period (floating point) for every sample in the current frame.
  • A function c(n) is then formed by rounding each value of c′(n) to the closest multiple of 0.125.
  • Each value of c(n) is thus a multiple of 1/8, and therefore 8*c(n) is an integer pitch period in an 8x over-sampled signal domain.
  • c(n) is referred to as the desired pitch contour.
  • The efficiencies engendered by modifying the residual to match this idealized contour are significant.
  • The pitch period of a frame having such a contour can be transmitted using very few bits, and the decoder can use the pitch to derive the pitch contour, and then use the pitch contour in conjunction with the spike locations from the previous frame to estimate the location of pitch spikes for the current frame.
  • The next process is to mimic the decoder and attempt to reconstruct the locations of the spikes in the current frame residual based on the pitch contour and the modified residual of the previous frame.
  • Although the actual decoder will typically not have access to information about the previous frame's modified residual, it will have access to the excitation signal used to reconstruct the previous frame. Therefore, since the spikes in the excitation signal of a particular frame will align with the spikes in the modified residual of that frame, the decoder's use of the previous excitation signal does not conflict with the coder's use of the previous modified residual.
  • In step 321, the residual modification module 103 uses the pitch contour to delay the previous frame's modified residual and thereby produce a target signal for modification, r_t(n).
  • An exemplary waveform for r_t(n) is shown in FIG. 2 at element 211.
  • This time warping function operates in the 8X over-sampled domain, using a standard interpolation filter with truncated sinc(x) impulse response and 90% pass-band, since the pitch contour c(n) is a multiple of 0.125.
  • The sample index of r′′ is a multiple of 0.125, reflecting the over-sampled condition.
  • The coder can now relocate the spikes in the actual residual to match those in r_t(n).
  • The residual modification module 103 analyzes the unmodified residual signal 203 to identify distinct segments of the signal, each having a single predominant peak surrounded by a low energy region.
  • An exemplary resultant waveform is represented in FIG. 2 at element 205.
  • The residual 203 is cut only at perceptually insignificant low energy points.
  • The coder associates a section of the target signal with an appropriate piece of the unmodified residual.
  • The residual modification module 103 then calculates an optimal warping function for the identified section of the unmodified residual, such that modification via the optimal warping function will align a predominant spike or peak in a segment of the residual 203 with that in the associated section of the target signal 211.
  • The steps taken to calculate an optimal warping function for each section of the residual are illustrated with reference to FIG. 4.
  • FIG. 4 illustrates the derivation of a lag contour l(n) representing the sample-by-sample delay between the residual signal 203 and the modified residual 209 .
  • The problem of finding the optimal warp contour is thus reduced to the problem of finding the optimal lag contour l(n).
  • The lag l_f for the very first sample of the current section of interest is set equal to the lag for the very last sample of the previous section, and a set of candidates for the lag l_1 of the last sample of the current section is identified.
  • In step 401, a set of 2K+1 candidates for the lag l_1 of the last sample is identified within a candidate range, such as {l_f − K, l_f − K + 1, . . . , l_f + K}.
  • The value of K is selected based on parameters such as the computation power available, the periodicity of the speech sample, and the value of l_f. Typical values of K are 0, 1, 2, 3, or 4.
  • To keep the accumulated shift within the desired limit, the value of K is constrained so that it does not allow a shift beyond the desired range, or an asymmetrical range of candidates is used.
  • For example, an acceleration by five sample positions may be permitted if an asymmetrical distribution of candidate lag values is utilized.
  • The optimal lag value for the last sample (and the resultant lag contour) may not even be included in the candidate set itself, but it is preferably situated within the candidate range.
  • In step 403, the coder performs a linear interpolation between the first and last samples of the current section for each candidate lag value identified in step 401, creating a set of 2K+1 candidate lag contours.
  • A candidate lag contour represents a linear function whose first and last values are l_f and l_1 respectively, where l_1 is a candidate value.
  • Each candidate lag contour is then applied to the residual signal to obtain a set of 2K+1 candidate modified residuals, and the correlation between the target signal r_t(n) 211 and each candidate modified residual is calculated in step 407.
  • In step 409, the strength of the correlation is modeled quadratically as a function of the last sample lag value, and the optimal lag value for the last sample is obtained (a code sketch of this search appears at the end of this list).
  • The strength of the correlation for each candidate modified residual is plotted as a function of the associated last sample lag value candidate, as illustrated by the plot points in the graph of FIG. 5.
  • The plot points are divided into sets, each set consisting of three points, with an overlap of one point between adjacent sets.
  • The 2K+1 plot points are thus divided into K overlapping sets of 3 points each; for seven points (K=3), for example, there would be three sets.
  • Each set of three consecutive plot points is modeled according to a quadratic function. In FIG. 5, the three quadratic modeling functions are illustrated as 501, 503 and 505.
  • The maximum of each quadratic function in the range from the first to the last of the associated three points is obtained, and the maximum for the entire section is then calculated.
  • When a quadratic is monotonic over that range, the maximum correlation value will lie at one of the endpoints. In general, however, the maximum for a given set of three points will not always lie at any of the three points, but will often lie somewhere between them.
  • Thus, the optimal lag value for the entire section could be a value that was not in the set of candidates for the lag l_1.
  • The terms “plots” or “plotting” as used herein do not require the creation of a tangible or visible graph. Rather, these terms simply imply the creation of an association between quantities, be it implicit (such as where the axes used are different parameters related to the quantities shown in FIG. 5) or explicit, and be it actual, as in a graphical program data structure, or virtual, as in a set of numbers in memory from which the appropriate relationship can be derived. These terms therefore simply denote the creation of a relationship between the indicated quantities, however that relationship is manifested.
  • The maximum over all quadratics for the current correlation plot is associated with a lag value for the last sample via the appropriate quadratic, and this value is the optimal last sample lag value. It is not necessary that a quadratic function be used to model the sets of points, or that three points be used. For example, the sets could contain more than three points, and the modeling function may be a polynomial of any order, depending upon the acceptable level of complexity. Note also that for a monotonic sequence of points it is not necessary to model the sequence as a polynomial or otherwise, since the highest endpoint is easily determined and represents the maximum of the sequence.
  • In step 411, the residual modification module 103 derives a corresponding lag contour by interpolating linearly over the section from l_f to the optimal l_1 calculated in step 409.
  • In step 331, it is determined whether there are any more pieces in the current frame to be analyzed and shifted. If there are, the flow of operations returns to step 325. Otherwise, the process ends for the current frame at step 333.
  • Element 207 of FIG. 2 illustrates the warped sections of the modified residual 209 separately for clarity.
  • The modified residual 113, illustrated as waveform 209, is finally provided as input to the synthesis filter 105 to yield a reproduction of the original speech signal, the reproduction having regular rather than jittered pitch peaks. From this point, the signal is processed using a technique such as ordinary CELP. However, the bit rate now required to code the signal is greatly reduced relative to that required to code the unmodified signal, due to the increased periodicity of the pitch structure.
  • Processing then begins on the subsequent frame.
  • In an unvoiced segment there are typically no pitch peaks, and so the methodology described herein need not be applied.
  • In that case, all quantities in the algorithm are reset. For example, the indication of accumulated shift is reset to zero.
  • The first voiced frame k is treated as a special case, since the pitch value of the previous frame, p(k−1), is unknown in this frame.
  • The pitch contour in this special frame k is set to a constant function equal to the pitch value of the frame, p(k). The rest of the procedure is identical to that of regular frames.
  • Alternatively, bisection may be used to find the optimal lag value without trying all, or even most, possible lag values.
  • The bisection technique entails identifying two lag candidate values and their associated correlation strengths. The lag candidate with the higher correlation and a new lag candidate lying between the two lag values are then used as endpoints to repeat the bisection process. The process may be terminated after a predetermined number of iterations, or when a lag value yielding a correlation strength above a predetermined threshold is found.
  • A continuous linear warp contour resulting from the methodology described herein is illustrated in FIG. 6.
  • The continuous linear warp contour 601 is shown as a solid black line, while the discontinuous contour 603 used in the prior art RCELP technique is shown as a dashed line.
  • Both contours represent lines drawn through the set of points for signal samples plotted as a function of original time (pre-warp) versus modified time (post-warp).
  • Each straight segment in contour 601 and each separate piece of contour 603 represents a section of the original residual that has been warped according to the respective technique. It can be seen that the RCELP technique often results in missing or overlapped sections, while the continuous linear warp contour of the present invention does not permit overlap or omission.
  • Although the continuous linear warp contour 601 may contain discontinuities in slope, it is continuous rather than simply piece-wise continuous in position.
  • For example, region 605 is occupied by two pieces of the warp contour 603, while section 607 is devoid of data pursuant to the same contour.
  • In contrast, the entire signal space is occupied, without overlap or omission, by contour 601 according to the present invention.
  • The warp contour 601 for adjacent segments may have the same slope or different slopes, depending upon the acceleration or deceleration needed for each segment.
  • The slope of each section of the RCELP contour 603, by contrast, is unitary, because RCELP shifts sections of the signal but does not change the time scale within each section.
  • Generally, program modules include routines, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types.
  • A program may include one or more program modules.
  • The invention may be implemented on a variety of types of machines, including cell phones, personal computers (PCs), hand-held devices, multi-processor systems, microprocessor-based programmable consumer electronics, network PCs, minicomputers, mainframe computers and the like, or on any other machine usable to code or decode audio signals as described herein and to store, retrieve, transmit or receive signals.
  • The invention may also be employed in a distributed computing system, where tasks are performed by remote components that are linked through a communications network.
  • In its most basic configuration, computing device 700 typically includes at least one processing unit 702 and memory 704. Depending on the exact configuration and type of computing device, memory 704 may be volatile (such as RAM), nonvolatile (such as ROM, flash memory, etc.) or some combination of the two. This most basic configuration is illustrated in FIG. 7 within line 706. Additionally, device 700 may also have additional features/functionality. For example, device 700 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 7 by removable storage 708 and non-removable storage 710.
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Memory 704 , removable storage 708 and non-removable storage 710 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 700 . Any such computer storage media may be part of device 700 .
  • Device 700 may also contain one or more communications connections 712 that allow the device to communicate with other devices.
  • Communications connections 712 are an example of communication media.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • Communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
  • The term computer readable media as used herein includes both storage media and communication media.
  • Device 700 may also have one or more input devices 714 such as keyboard, mouse, pen, voice input device, touch-input device, etc.
  • One or more output devices 716 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at greater length here.
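As a concrete illustration of the lag-contour search outlined in the bullets above (steps 401 through 409, FIGS. 4 and 5), the following Python sketch searches a candidate range of last-sample lags, applies a linear lag contour to one section of the residual, correlates each warped candidate with the target, and then models overlapping three-point sets with quadratics to locate the best last-sample lag. The function names, the plain dot-product correlation, and the linear-interpolation warping are illustrative assumptions, not the patent's implementation.

    import numpy as np

    def warp_section(residual, start, end, lag_contour):
        # Read each output sample n from the original residual at position
        # n - lag_contour[n - start]; fractional positions are linearly interpolated.
        n = np.arange(start, end)
        src = n - lag_contour
        i0 = np.floor(src).astype(int)
        frac = src - i0
        i0 = np.clip(i0, 0, len(residual) - 2)
        return (1.0 - frac) * residual[i0] + frac * residual[i0 + 1]

    def optimal_last_sample_lag(residual, target, start, end, lag_first, K=3):
        # Step 401: candidate last-sample lags {l_f - K, ..., l_f + K}.
        candidates = np.arange(lag_first - K, lag_first + K + 1, dtype=float)
        scores = []
        for lag_last in candidates:
            # Step 403: linear lag contour from lag_first to the candidate lag_last.
            contour = np.linspace(lag_first, lag_last, end - start)
            warped = warp_section(residual, start, end, contour)
            # Step 407: correlation of the candidate modified residual with the target.
            scores.append(np.dot(warped, target[start:end]))
        scores = np.asarray(scores)

        # Step 409: overlapping three-point sets, each modeled by a quadratic;
        # take the maximum of each quadratic over its own sub-range.
        best_lag, best_score = candidates[0], scores[0]
        for i in range(0, len(candidates) - 2, 2):
            x, y = candidates[i:i + 3], scores[i:i + 3]
            a, b, c = np.polyfit(x, y, 2)
            points = [x[0], x[2]]
            if a != 0.0:
                vertex = -b / (2.0 * a)          # extremum of the fitted quadratic
                if x[0] <= vertex <= x[2]:
                    points.append(vertex)
            for p in points:
                s = a * p * p + b * p + c
                if s > best_score:
                    best_score, best_lag = s, p
        return best_lag

The returned lag need not coincide with one of the 2K+1 candidates, matching the observation above that the optimal last-sample lag may lie between sampled points.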

Abstract

A signal modification technique facilitates compact voice coding by employing a continuous, rather than piece-wise continuous, time warp contour to modify an original residual signal to match an idealized contour, avoiding edge effects caused by prior art techniques. Warping is executed using a continuous warp contour lacking spatial discontinuities which does not invert or overly distend the positions of adjacent end points in adjacent frames. The linear shift implemented by the warp contour is derived via quadratic approximation or other method, to reduce the complexity of coding to allow for practical and economical implementation. In particular, the algorithm for determining the warp contour uses only a subset of possible contours contained within a sub-range of the range of possible contours. The relative correlation strengths from these contours are modeled as points on a polynomial trace and the optimum warp contour is calculated by maximizing the modeling function.

Description

TECHNICAL FIELD
This invention relates generally to speech coding techniques and, more particularly, relates to techniques for modifying a signal to aid in coding the signal via a low bit-rate coding technique such as codebook excited linear prediction (CELP) coding.
BACKGROUND OF THE INVENTION
In today's highly verbal and highly interactive technical climate, it is often necessary or desirable to transmit human voice electronically from one point to another, sometimes over great distance, and often over channels of limited bandwidth. For example, conversations via cell phone links or via the Internet or other digital electronic networks are now commonplace. Likewise, it is often useful to digitally store human voice, such as on the hard drive of a computer, or in the volatile or nonvolatile memory of a digital recording device. For example, digitally stored human voice may be replayed as part of a telephone answering protocol or an audio presentation.
Channels and media usable for the transmission and/or storage of digital voice are often of limited capacity, and grow more so every day. For example, the advent of quality video for use in conjunction with real time or recorded voice has created a demand for audio/video conferencing over digital networks in real time as well as for non-real time high quality audio/video presentations, such as those receivable in streaming format and those downloadable for storage in their entirety. As video content displaces bandwidth and storage capacity in various transmission channels and storage media, the need to efficiently and properly compress both voice and video becomes imperative. Other scenarios also create a need for extreme and effective compression of voice. For example, increasingly congested cell phone links must be able to accommodate a greater number of users often over channels whose capacity has not changed in keeping with the number of users.
Whatever the motivation, the compression of voice has been and remains an important area of communication technology. Available digital voice coding techniques span a spectrum from inefficient techniques that employ no compression to efficient techniques that achieve compression ratios of four or greater. Generally, existing coders may be classified as either waveform coders or voice coders. Waveform coders actually attempt to describe the sound wave itself and typically do not achieve high rates of compression. Voice coders, or vocoders, take into account the source and peculiarities of human speech rather than simply attempting to map the resultant sound wave, and accordingly may achieve much higher compression rates, albeit at the expense of increased computational complexity. Waveform coders are generally more robust to peculiar human voices, non-speech sounds and high levels of background noise.
Most prevalent voice coders employ techniques based on linear predictive coding. The linear predictive coding technique assumes that for each portion of the speech signal there exists a digital filter that when excited by a certain signal will produce a signal much like the original speech signal portion. In particular, a coder implementing a linear predictive technique will typically first derive a set of coefficients that describe the spectral envelope, or formants, of the speech signal. A filter corresponding to these coefficients is established and used to reduce the input speech signal to a predictive residual. In general terms, the above described filter is an inverse synthesis filter, such that inputting the residual signal into a corresponding synthesis filter will produce a signal that closely approximates the original speech signal.
Typically, the filter coefficients and the residual are transmitted or stored for later and/or distant re-synthesis of the speech signal. While the filter coefficients require little space for storage or little bandwidth, e.g. 1.5 kbps, for transmittal, the predictive residual is a high-bandwidth signal similar to the original speech signal in complexity. Thus, in order to effectively compress the speech signal, the predictive residual must be compressed. The technique of Codebook Excited Linear Prediction (CELP) is used to achieve this compression. CELP utilizes one or more codebook indexes which are usable to select particular vectors, one each from a set of “codebooks”. Each codebook is a collection of vectors. The selected vectors are chosen such that when scaled and summed, they produce a response from the synthesis filter that best approximates the response of the filter to the residual itself. The CELP decoder has access to the same codebooks as the CELP encoder did, and thus the simple indexes are usable to identify the same vectors from the encoder and decoder codebooks.
When the available capacity or bandwidth is ample, it is not difficult to have codebooks that are rich enough to allow for a close approximation to the original residual, however complex. However, as the available capacity or bandwidth decreases, the richness of the CELP codebooks necessarily decreases.
One way to decrease the number of bits needed to mimic the residual signal is to increase its periodicity. That is, redundancies in the original signal are more compactly representable than are non-redundant features. One technique that takes advantage of this principle is Relaxation Codebook Excited Linear Predictive coding (RCELP). An example of this technique is discussed in the article “The RCELP Speech coding Algorithm,” Eur. Trans. On Communications, vol. 4, no. 5, pp. 573-82 (1994), authored by W. B. Kleijn et al, which is incorporated herein by reference in its entirety for all that it discloses. In particular, this article describes a method of uniformly advancing or delaying whole segments of a residual signal such that its modified pitch-period contour matches a synthetic pitch-period contour. Problems with this approach include the fact that as an artifact of the particular warping methodology, certain portions of the original signal may be omitted or repeated. In particular, if two adjacent segments of the signal experience a cumulative compressive shift, portions of the original signal near the overlap may be omitted in the modified signal. Likewise, if two adjacent segments experience a cumulative expansive shift, portions of the original signal near the overlap may be repeated in the modified signal. These artifacts produce an audible distortion in the final reproduced speech.
Other art has suggested a similar approach. See for example the article “Interpolation of the Pitch-Predictor parameters in Analysis-by-Synthesis Speech Coders,” IEEE Transactions of Speech and Audio Processing, vol. 2, no. 1, part I (January, 1994), authored by W. B. Kleijn et al, which is incorporated herein by reference in its entirety for all that it discloses.
All pitch warping approaches suggested in the past have suffered similar shortcomings, including a reduction in quality due to the shifting of segment edges, causing omissions and repeats of the original signal. It is desired to provide a frame warping method to reduce the transmission bit rate for a speech signal, while not introducing signal repeats and omissions, and without increasing the complexity or delay of the coding calculations to the point where real-time communications are not possible.
SUMMARY OF THE INVENTION
The invention employs a continuous, rather than simply piece-wise continuous, time warp contour to modify the original residual signal to match a synthetic contour, thus avoiding edge shifting effects prevalent in the prior art. In particular, the warp contour employed within the invention is continuous, i.e. lacking spatial jumps or discontinuities, and does not invert or overly distend the positions of adjacent end points in adjacent frames.
In order to reduce the complexity of the coding algorithm to allow for practical and economical implementation, the optimum linear shift is derived via a quadratic or other approximation. In particular, the algorithm utilized within the invention to determine the ideal warp contour does not require that every possible warp contour be calculated and utilized to correlate the modified signal to the synthetic signal. In one embodiment, a subset of possible contours from across a subrange of possible contours are calculated. The relative correlation strengths from these contours are then modeled as points on a quadratic curve or other parametric function curve. The optimum warp contour, possibly represented by a point lying someplace between calculated sample points, is then calculated by maximizing the appropriate parametric function. Other simplification techniques such as bisection or piece-wise polynomial modeling may also be used within the invention.
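A minimal sketch of that maximization step, assuming three equally spaced (lag, correlation-strength) samples and a quadratic model: the vertex of the fitted parabola gives the estimated optimum, which may fall between the sampled lags. The function name and the clamping behavior are assumptions made for illustration.

    def parabolic_peak(x, y):
        # Abscissa of the maximum, over [x[0], x[2]], of the quadratic through
        # three equally spaced points (x[0], y[0]), (x[1], y[1]), (x[2], y[2]).
        denom = y[0] - 2.0 * y[1] + y[2]
        if denom >= 0.0:                     # concave-up or collinear: maximum sits at an endpoint
            return x[0] if y[0] >= y[2] else x[2]
        d = 0.5 * (y[0] - y[2]) / denom      # vertex offset from the middle point, in units of spacing
        d = max(-1.0, min(1.0, d))           # keep the estimate inside the sampled range
        return x[1] + d * (x[1] - x[0])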
Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments which proceeds with reference to the accompanying figures.
BRIEF DESCRIPTION OF THE DRAWINGS
While the appended claims set forth the features of the present invention with particularity, the invention, together with its objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:
FIG. 1 is an architectural diagram of an exemplary coder within which an embodiment of the invention may be implemented;
FIG. 2 is a simplified waveform diagram illustrating signal segmentation, time warping, and reconstruction within an embodiment of the invention;
FIGS. 3 a and 3 b are flowcharts illustrating steps taken to effect signal modification within an embodiment of the invention;
FIG. 4 is a flowchart illustrating the steps for calculating an optimal lag contour within an embodiment of the invention;
FIG. 5 is a simplified graph illustrating the plotting of correlation strength as a function of last sample lag values used within an embodiment of the invention to identify an optimal last sample lag;
FIG. 6 is a graphical depiction of warp contours according to the prior art and according to an embodiment of the invention; and
FIG. 7 is a simplified schematic diagram of a computing device upon which an embodiment of the invention may be implemented.
DETAILED DESCRIPTION OF THE INVENTION
In the description that follows, the invention will be described with reference to acts and symbolic representations of operations that are performed by one or more computers, unless otherwise indicated. As such, it will be understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processing unit of the computer of electrical signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the computer in a manner well understood by those skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the invention is being described in the foregoing context, it is not meant to be limiting as those of skill in the art will appreciate that various of the acts and operations described hereinafter may also be implemented in hardware.
A speech encoder is a software module operable to compress a high bit rate input digital audio signal into a lower bit rate signal which is then transmitted across a digital channel, for example the Internet, or stored in a digital memory module, for example a hard disk or CD-R. The transmitted or stored bits are converted by a speech decoder into a decoded digital audio signal. The speech encoder and decoder are often jointly referred to as a speech codec. Speech codecs are designed to produce at the decoder the closest possible reconstruction of the input audio signal, particularly when the input signal is human speech. The most common paradigm used in speech coding is codebook excited linear prediction (CELP). CELP speech coders are based on the principle of short-term prediction and codebook search. The concepts and function of CELP coding are discussed herein to aid the reader. This discussion is not intended to define CELP coding in a manner different from that known in the art.
The task of any speech coder becomes more difficult and complex at low bit-rates due to the few bits available to capture the complex and time-varying nature of human speech. This invention provides a novel methodology for modifying the input digital speech signal prior to encoding it by a speech coder such that fewer bits are required for storage or transmission. The objective of the signal modification is to simplify the structure of the input speech signal's waveform without adversely affecting the perceptual quality of the reconstructed signal. Following signal modification, the modified input speech signal is presented to the speech coder for encoding. Due to the simplified structure of the modified waveform, the speech coder can more proficiently and efficiently perform the task of encoding the signal. As mentioned previously, signal modification is especially advantageous at low bit-rates.
The signal modification technique described herein is based on a model of continuous time warping. Unlike the signal modification technique of RCELP referred to above, continuous time warping modifies the input signal using a continuous warping contour rather than simply a piece-wise continuous contour. The result is a modified speech signal whose waveform has a simple structure, and whose quality is virtually identical to that of the original input signal.
In order to fully understand the invention, it is important to understand the basic facets of the CELP family of codec techniques. Although the various CELP techniques will be well known to those of skill in the art, they will nonetheless be briefly described herein for the reader's convenience. In CELP coding, the decoded speech signal is generated by filtering an excitation signal through a time varying synthesis filter. The encoder sends information about the excitation signal and the synthesis filter to the decoder.
CELP is a waveform matching method; i.e., the choice of excitation signal is optimized via correlation of a proposed synthetic signal with the signal to be modeled, e.g. the residual. Thus, the encoder evaluates short segments of the input speech signal and attempts to generate the closest replica for each segment. In particular, the encoder first generates a set of excitation signals by combining certain allowed signals called “code-vectors”. Each excitation signal in the set thus generated is passed through the synthesis filter, and the filtered excitation signal that generates the closest likeness to the original speech signal, or other signal to be replicated, is selected. Following this search procedure, the encoder transmits to the decoder information about the code-vectors that were combined to generate the selected excitation signal and information about the synthesis filter. Typically, most of the bits are required to transmit information about the code-vectors for formation of the synthesis filter excitation signal, while the synthesis filter parameters themselves typically require less than 1.5 kb/s. Thus, CELP works well at relatively high bit rates, e.g. greater than 4 kbps, where there are sufficient code-vectors to represent the complex nature of the input speech signal. At low bit-rates, due to the small number of code-vectors allowable, the quality of the reproduced signal drops considerably.
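The toy sketch below of that analysis-by-synthesis search is a deliberate simplification: it assumes a single fixed codebook, a caller-supplied synthesis routine, and a plain squared-error match with a single gain, whereas real CELP coders add an adaptive codebook, perceptual weighting, and quantized gains.

    import numpy as np

    def celp_search(target, codebook, synthesize):
        # Pick the code-vector index and gain whose synthesized output best
        # matches the target segment in a squared-error (waveform-matching) sense.
        best_idx, best_gain, best_err = 0, 0.0, np.inf
        for idx, code in enumerate(codebook):
            synth = synthesize(code)                 # excitation through the synthesis filter
            energy = np.dot(synth, synth)
            gain = np.dot(synth, target) / energy if energy > 0.0 else 0.0
            err = np.sum((target - gain * synth) ** 2)
            if err < best_err:
                best_idx, best_gain, best_err = idx, gain, err
        return best_idx, best_gain                   # sent to the decoder along with filter information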
The dominant characteristic of the residual signal for the perceptually important voiced segments of speech is a sequence of roughly periodic spikes. Although these spikes are generally spaced somewhat uniformly, separated by a pitch period, there are often small jitters in the regularity of the locations of these spikes. These jitters, although not perceptually important, consume a majority of the bit budget in low bit-rate waveform coders.
As discussed, RCELP attempted to eliminate this variation by non-continuously warping the residual signal to readjust the locations of the spikes so that they occur in a regular fashion. Modifying the signal in this manner eases the task of a low bit-rate coder since very few bits are needed to send information about the locations of the spikes in the modified signal. Following residual modification, the modified residual signal is transformed back into the speech domain by passing it through an inverse of the prediction filter.
However, RCELP-based signal modification does result in a perceptible degradation of the voice quality due to the sub-optimal properties of the warping function employed. Specifically, in RCELP, potentially overlapping sections of the original residual signal, each containing a single spike, are cut and strung together to generate the modified residual signal. The cut sections may, and often do, overlap resulting in some parts of the residual signal appearing twice in the modified residual while other parts never appear at all.
The invention overcomes the undesirable properties in RCELP's residual modification procedure as discussed by utilizing a continuous time warping algorithm coupled in an embodiment of the invention with an improved warp contour optimization methodology. In summary, the inventive algorithm first identifies pieces of the original residual signal which contain a single spike, as in RCELP. However, unlike RCELP, these pieces are non-overlapping and cover the entire frame. That is, if the cut sections were concatenated, the original residual signal would be obtained—no portion of the residual signal would appear twice, and no portion would be omitted. Essentially, instead of simply cutting and moving pieces as in RCELP, the algorithm either linearly accelerates or linearly decelerates each piece in a continuous and adaptive time warping operation. The objective in warping each piece is to ensure that the spikes in the modified residual signal are separated by regular intervals thereby reducing the bit rate needed to encode the spike positions, achieving the same goal as RCELP, without its shortfalls. As will be discussed, the degree of acceleration or delay is limited to prevent degradation in the quality of the reproduced speech.
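The heart of the continuous warping is simply a linear resampling of each non-overlapping piece: every sample of the piece is used exactly once, so nothing is dropped or repeated, while stretching or shrinking the piece moves its spike onto the desired regular grid. The sketch below is illustrative only; the function name and the plain linear interpolation (rather than the 8x over-sampled sinc interpolation described later) are assumptions.

    import numpy as np

    def linear_warp_piece(piece, new_length):
        # Linearly resample one residual piece to new_length samples:
        # new_length < len(piece) accelerates the piece, new_length > len(piece)
        # decelerates it, and the whole piece is always covered end to end.
        src = np.linspace(0.0, len(piece) - 1.0, new_length)
        i0 = np.minimum(np.floor(src).astype(int), len(piece) - 2)
        frac = src - i0
        return (1.0 - frac) * piece[i0] + frac * piece[i0 + 1]

Concatenating the warped pieces of one frame yields a modified residual that still contains all of the original material, with the spikes repositioned onto regular pitch-period intervals.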
Having described the invention in generality above, the details of the preferred embodiments will be hereinafter described more fully. Referring to FIG. 1, an exemplary architecture for implementing an improved low bit rate coder according to an embodiment of the invention is illustrated. The system is comprised of a digitizer 121, a prediction filter or inverse synthesis filter 101, a linear continuous residual modification module 103, a synthesis filter 105, and a coder such as CELP coder 107, cascaded together.
The prediction filter 101 receives as input a digitized speech signal 109 from the digitizer module 121. There exist various methods known to those of skill in the art by which speech may be converted to a digital electrical signal, and accordingly such techniques will not be discussed in great detail herein. Prediction filter 101, also sometimes referred to as an inverse synthesis filter, is operable to produce a residual signal 111 based on LPC coefficients and an input signal. Those of skill in the art will be familiar with linear predictive coding concepts such as the inverse filter and residual. The residual 111 is input to the residual modification module 103, which converts the signal into a modified residual 113 in a manner to be discussed more fully hereinafter. The modified residual 113 is subsequently input to a synthesis filter 105 to generate a reproduced speech signal 115. The residual modification technique implemented by the residual modification module 103 will allow the modified speech signal 115 to sound very much like the original speech 109 even though the excitation or modified residual 113 is altered from the residual 111. Subsequently, the CELP coder module 107 codes the modified speech signal in a manner well understood by those skilled in the art, and outputs a stream of encoded bits 117 for transmission or storage.
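As a rough orientation, the cascade of FIG. 1 can be summarized by the skeleton below; the function names and signatures are placeholders for the numbered modules, not the patent's API.

    def encode_frame(speech_frame, lpc_extractor, prediction_filter,
                     modify_residual, synthesis_filter, celp_coder):
        lpc = lpc_extractor(speech_frame)                  # LPC extractor 123
        residual = prediction_filter(speech_frame, lpc)    # inverse synthesis filter 101
        modified = modify_residual(residual)               # continuous-warp modification 103
        modified_speech = synthesis_filter(modified, lpc)  # synthesis filter 105
        return celp_coder(modified_speech)                 # CELP coder 107 -> encoded bits 117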
The operation of the modules illustrated in FIG. 1 will now be described in greater detail with reference to FIG. 2 in conjunction with FIGS. 3 a and 3 b. In particular, FIG. 2 shows simplified waveforms 203, 205, 207, 209, 211 having prominent pitch peaks 201. Note that the peak shifts illustrated in FIG. 2 are exaggerated for clarity; actual shift amounts should be limited as will be discussed hereinafter. FIGS. 3 a and 3 b are flowcharts illustrating the steps executed in an embodiment of the invention to code a speech signal. At step 301, an analog speech signal 119 is received by digitizer 121. In step 303, digitizer 121 samples the signal at 8 kHz to obtain a digital sampled audio signal s(n). Subsequently, in step 305, the digitizer groups signal s(n) into non-overlapping frames of 160 samples (20 ms), each of which is further subdivided into 2 non-overlapping subframes of 80 samples (10 ms). Thus, the signal in the kth frame is given by s(160k) . . . s(160k+159). The framed sampled signal 109 is passed from the digitizer 121 to the LPC extractor 123 in step 307.
The LPC extractor 123 acts in a manner well known to those of skill in the art to calculate linear predictive coefficients corresponding to the input signal. In particular, in step 309, the LPC extractor 123 extracts a set of tenth order linear predictive coefficients for each frame by performing correlation analysis and executing the Levinson-Durbin algorithm. The optimal linear prediction coefficients in the kth frame, ak(j), j=1, . . . , 10, are interpolated in step 311 to generate a set of LP coefficients ak,s(j), j=1, . . . , 10, in each subframe, wherein s=0, 1 corresponds to the first and second subframes respectively. The interpolation may be performed by transforming the LP coefficients into the Line Spectral Frequency (LSF) domain, interpolating linearly in the LSF domain, and transforming the interpolated subframe LSF coefficients back to LP coefficients. In step 313, the subframe LP coefficients ak,s are used by the prediction filter 101 to produce the residual signal 111 in a manner well known to those of skill in the art. The residual 111 in the kth frame is represented by r(n), n=160k . . . 160k+159.
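By way of illustration only, the LPC analysis and inverse filtering of steps 309 through 313 can be sketched in Python as follows. The analysis window, the function names (levinson_durbin, lpc_residual), and the omission of the per-subframe LSF-domain interpolation are simplifying assumptions of this sketch, not details of the preferred embodiment.

import numpy as np
from scipy.signal import lfilter

LPC_ORDER = 10
FRAME = 160  # 20 ms at 8 kHz

def levinson_durbin(r, order=LPC_ORDER):
    # Levinson-Durbin recursion: autocorrelation r[0..order] -> prediction
    # polynomial A(z) = 1 + a1*z^-1 + ... + a_p*z^-p.
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

def lpc_residual(signal):
    # Frame-by-frame LPC analysis followed by inverse (prediction) filtering.
    signal = np.asarray(signal, dtype=float)
    residual = np.zeros_like(signal)
    for k in range(len(signal) // FRAME):
        start = k * FRAME
        frame = signal[start:start + FRAME]
        win = frame * np.hamming(FRAME)     # analysis window (assumption)
        r = np.correlate(win, win, mode='full')[FRAME - 1:FRAME + LPC_ORDER]
        r[0] += 1e-9                        # guard against silent frames
        a = levinson_durbin(r)
        # Inverse filter the frame, carrying the preceding samples as history
        # so the residual is continuous across frame boundaries.
        hist = signal[max(0, start - LPC_ORDER):start]
        out = lfilter(a, [1.0], np.concatenate([hist, frame]))
        residual[start:start + FRAME] = out[len(hist):]
    return residual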
The dominant characteristic of the residual signal 111 may be seen in the waveform 203 of FIG. 2. In particular, for voiced segments, the residual 203 is dominated by a sequence of roughly periodic but irregularly spaced peaks or spikes 201. These spikes typically represent glottal pulses that excite the vocal tract during the process of generating voiced speech. The time interval between adjacent spikes is equal to the pitch period. Human speech typically has a pitch period of between about 2.5 ms and 18.5 ms. The interval between spikes is usually not constant, but instead exhibits minor irregularities or jitter.
Steps 315 through 333 describe the operation of residual modification module 103. In step 315, the residual modification module 103 receives the residual signal 111 and determines an integer pitch period for the current frame, the kth frame. The pitch period may be determined by any one of a number of techniques known in the art; one technique usable within this embodiment is to employ correlation analysis in the open loop. Whatever method is used, adequate care should be exercised to avoid undesirable artifacts such as pitch doubling.
At step 317, a sample by sample linear interpolation of the frame pitch period is performed as follows:
c′(n)=p(k)*((n−160k)/160)+p(k−1)*(1−(n−160k)/160), n=160k . . . 160k+159.
The function c′(n) can be represented as a straight line from p(k−1) at the beginning of the frame to p(k) at the end of the frame. It represents a smoothly varying pitch period (floating point) for every sample in the current frame.
In step 319, a function c(n) is formed by rounding each value of c′(n) to the closest multiple of 0.125. Effectively, c(n) is a multiple of ⅛, and therefore 8*c(n) is an integer pitch period in an 8x over-sampled signal domain. Herein, c(n) is referred to as the desired pitch contour. The efficiencies engendered by modifying the residual to match this idealized contour are significant. For example, the pitch period of a frame having such a contour can be transmitted using very few bits, and the decoder can use the pitch to derive the pitch contour, and then use the pitch contour in conjunction with the spike locations from the previous frame to estimate the location of pitch spikes for the current frame.
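As a brief sketch of steps 317 and 319 under the same assumptions, the pitch contour may be computed as follows; the example pitch values are illustrative only.

import numpy as np

def pitch_contour(p_prev, p_curr, frame_len=160):
    # c'(n): linear interpolation of the pitch period across the frame
    t = np.arange(frame_len) / float(frame_len)
    c_prime = p_curr * t + p_prev * (1.0 - t)
    # c(n): round to the nearest multiple of 0.125 (the 8x over-sampled grid)
    return np.round(c_prime * 8.0) / 8.0

# Example with illustrative values only: pitch drifting from 41 to 44 samples.
c = pitch_contour(41.0, 44.0)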
The next process is to mimic the decoder and attempt to reconstruct the locations of the spikes in the current frame residual based on the pitch contour and the modified residual of the previous frame. Although the actual decoder will typically not have access to information about the previous frame's modified residual, it will have access to the excitation signal used to reconstruct the previous frame. Therefore, since the spikes in the excitation signal of a particular frame will align with the spikes in the modified residual of that frame, the decoder's use of the previous excitation signal does not conflict with the coder's use of the previous modified residual.
To predict the spike positions in the current frame, the residual modification module 103 uses the pitch contour to delay the previous frame's modified residual in step 321 to produce a target signal for modification, rt(n). An exemplary waveform for rt(n) is shown in FIG. 2 at element 211. This time warping function operates in the 8X over-sampled domain, using a standard interpolation filter with truncated sinc(x) impulse response and 90% pass-band, since the pitch contour c(n) is a multiple of 0.125. In particular, the 8X over-sampling is employed to obtain interpolated samples of the modified residual r′(n) in the previous frame, to arrive at the over-sampled signal as follows:
r″(n*0.125), n=160*8*(k−1) . . . 160*8*(k−1)+1279.
The sample index of r″ is a multiple of 0.125, representing the over-sampled condition. Subsequently, a delay line operation is performed to obtain the target signal rt(n), as follows:
r d(n*0.125)=r″(n*0.125), n=160*8*(k−1) . . . 160*8*(k−1)+1279
r d(n*0.125)=r d(n*0.125−c(INT(n*0.125))), n=160*8*k . . . 160*8*k+1279
r t(n)=r d(n), n=160*k . . . 160*k+159,
where INT(x) represents the integer closest to the floating point number x, and rd( ) is an intermediate signal. Note that the decoder performs an identical delay line operation on the previous frame's excitation signal.
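The delay line operation above may be sketched as follows. As simplifications of this sketch, fractional delays are realized with linear interpolation (numpy.interp) rather than the truncated-sinc 8x interpolation filter of the preferred embodiment, and only one frame of history is retained.

import numpy as np

def target_signal(prev_modified_residual, c, frame_len=160):
    # rd: a buffer holding the previous frame's modified residual followed by
    # the current frame's target samples as they are computed.
    hist = np.asarray(prev_modified_residual, dtype=float)
    buf = np.concatenate([hist, np.zeros(frame_len)])
    grid = np.arange(len(buf), dtype=float)
    for n in range(frame_len):
        pos = len(hist) + n
        # rt(n) = rd(n - c(n)): read the buffer one local pitch period back
        buf[pos] = np.interp(pos - c[n], grid, buf)
    return buf[len(hist):]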
Having calculated the ideal pitch spike locations represented in the target signal 211, the coder can now relocate the spikes in the actual residual to match those in rt(n). Initially at step 323, the residual modification module 103 analyzes the unmodified residual signal 203 to identify distinct segments of the signal having a single predominant peak surrounded by a low energy region. An exemplary resultant waveform is represented in FIG. 2 at element 205. There are preferably no gaps between pieces of the signal as segmented. In other words, if the pieces of element 205 were to be strung back together at this stage, the result would be the unmodified residual 203. Preferably, the residual 203 is cut only at perceptually insignificant low energy points. Subsequently at step 325 the coder associates a section of the target signal with an appropriate piece of the unmodified residual.
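Step 323 may be sketched as shown below. The peak-picking heuristic (one local maximum of the residual magnitude per pitch-period-length window, with cuts at the lowest-magnitude sample between adjacent peaks) is an assumption of this sketch; the description requires only that the sections be contiguous, non-overlapping, and cut at perceptually insignificant low energy points.

import numpy as np

def segment_residual(residual, pitch):
    # Return (start, end) index pairs of contiguous, non-overlapping sections,
    # each containing at most one dominant peak.
    r = np.abs(np.asarray(residual, dtype=float))
    step = max(1, int(round(pitch)))
    # Crude peak picking: the largest-magnitude sample in each pitch-length window.
    peaks = [n + int(np.argmax(r[n:n + step])) for n in range(0, len(r), step)]
    # Cut at the lowest-magnitude sample between consecutive peaks.
    cuts = [0]
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        lo, hi = p0 + 1, max(p0 + 2, p1)
        cuts.append(lo + int(np.argmin(r[lo:hi])))
    cuts.append(len(r))
    return list(zip(cuts[:-1], cuts[1:]))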
At step 327, the residual modification module 103 calculates an optimal warping function for the identified section of the unmodified residual such that modification via the optimal warping function will align a predominant spike or peak in a segment of the residual 203 with that in the associated section of the target signal 211. The steps taken to calculate an optimal warping function for each section of the residual are illustrated with reference to FIG. 4. In particular, FIG. 4 illustrates the derivation of a lag contour l(n) representing the sample-by-sample delay between the residual signal 203 and the modified residual 209. The quantity l(n) is a multiple of 0.125 such that the modified residual sample r′(m) equals the residual signal sample delayed by l(m) in the oversampled domain. That is:
r′(m)=r″(m−l(m)).
The problem of finding the optimal warp contour is reduced to the problem of finding the optimal lag contour l(n).
At step 401, the lag lf for the very first sample of the current section of interest is set equal to the lag for the very last sample of the previous section, and a set of candidates for the lag l1 of the last sample of the current section is identified. In particular, a set of 2K+1 candidates for the lag l1 of the last sample is identified within a candidate range, such as {lf−K, lf−K+1, . . . , lf+K}. The value of K is selected based on parameters such as the computation power available, the periodicity of the speech sample, and the value of lf. Typical values of K are 0, 1, 2, 3, or 4. Although the range of candidates illustrated by the above expression falls symmetrically about lf, this need not be the case.
Although shifting sections of the residual by small amounts does not have a negative effect on the perceived quality of the reproduced signal, large shifts may have a perceivable negative effect. Thus, it is desirable to limit the amount by which a sample may be shifted to some small number, such as three original (not oversampled) sample increments, including any accumulated shift resulting from the shifting of the previous section or piece. Thus, if the last sample in the previous piece was delayed by the equivalent of two sample positions, then the last sample of the current piece should not be additionally delayed by more than the equivalent of one sample position, or it will experience a total shift of more than three sample positions from its original location. The solution to this problem is to limit the value of K such that it does not allow a shift beyond the desired range, or to use an asymmetrical range of candidates. Thus, in the above example, although a delay of more than one additional sample is prohibited, an acceleration by five sample positions may be permitted if an asymmetrical distribution of candidate lag values is utilized.
Note that fewer than all possible lag candidates are in the candidate set, because the computational power needed to evaluate all possible lag candidates would be prohibitive. Rather, only a subset of possible lag values for the last sample in the current section are used as candidates. Lag values outside of the candidate range are not included in the set, nor are values lying between candidate lag values. Thus, the optimal lag value for the last sample (and resultant lag contour) may not even be included in the candidate set itself, but it is preferably situated within the candidate range.
Next, in step 403, the coder performs a linear interpolation between the first and last samples of the current section for each candidate lag value identified in step 401 to create a set of 2K+1 candidate lag contours. A candidate lag contour represents a linear function such that the first and last values are lf and l1 respectively, where l1 is a candidate value. In step 405, each candidate lag contour is applied to the residual signal to obtain a set of 2K+1 candidate modified residuals, and the correlation between the target signal rt(n) 211 and each candidate modified residual is calculated in step 407.
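Steps 401 through 407 may be sketched as follows, again substituting linear interpolation for the 8x sinc interpolation and using a plain inner product as the correlation measure; both are illustrative simplifications of this sketch.

import numpy as np

def warp_section(residual, start, end, lag_contour):
    # r'(n) = r(n - l(n)) for n in [start, end), with fractional reads
    # realized by linear interpolation.
    residual = np.asarray(residual, dtype=float)
    grid = np.arange(len(residual), dtype=float)
    return np.interp(np.arange(start, end) - lag_contour, grid, residual)

def candidate_correlations(residual, target, start, end, l_f, K=3):
    # For each candidate last-sample lag in {l_f-K, ..., l_f+K}, build a linear
    # lag contour from l_f to the candidate, warp the section, and correlate it
    # with the corresponding section of the target signal rt(n).
    candidates = l_f + np.arange(-K, K + 1, dtype=float)
    scores = []
    for l_last in candidates:
        contour = np.linspace(l_f, l_last, end - start)
        warped = warp_section(residual, start, end, contour)
        scores.append(float(np.dot(warped, target[start:end])))
    return candidates, np.array(scores)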
In step 409, the strength of the correlation is modeled quadratically as a function of the last sample lag value, and the optimal lag value for the last sample is obtained. In particular, the strength of the correlation for each candidate modified residual is plotted as a function of the associated last sample lag candidate, as illustrated by the plot points in the graph of FIG. 5. Next, the plot points are divided into sets, each set consisting of three points, with an overlap of one point between adjacent sets. The 2K+1 plot points are thus divided into K overlapping sets of 3 points each; for seven points, for example, there would be three sets. Each set of three consecutive plot points is modeled according to a quadratic function. In FIG. 5, for example, the three quadratic modeling functions are illustrated as 501, 503 and 505. The maximum of each quadratic function over the range from the first to the last of its associated three points is obtained, and the maximum for the entire section is then calculated. For positive quadratic functions, i.e. those concave upward, as well as for monotonic configurations of points, the maximum correlation value will lie at one of the endpoints. Note that, in general, the maximum for a given set of three points will not always lie at any of the three points, but will often lie somewhere between them. Thus, the optimal lag value for the entire section could be a value that was not in the set of candidates for the lag l1.
Although the plot of FIG. 5 is used herein to graphically depict steps according to an embodiment of the invention, the terms “plot” or “plotting” as used herein do not require the creation of a tangible or visible graph. Rather, these terms simply imply the creation of an association between quantities, be it implicit, such as where the axes used are different parameters related to the quantities shown in FIG. 5, or explicit, and be it actual, as in a graphical program data structure, or virtual, as in a set of numbers in memory from which the appropriate relationship can be derived. Therefore, these terms simply denote the creation of a relationship between the indicated quantities, however such relationship is manifested.
The maximum of all quadratics for the current correlation plot is associated with a lag value for the last sample via the appropriate quadratic, and this value is the optimal last sample lag value. It is not necessary that a quadratic function be used to model the sets of points, or that three points be used. For example, the sets could contain more than three points, and the modeling function may be a polynomial of any order, depending upon the acceptable level of complexity. Note also that for monotonic sequences of points, it is not necessary to model the sequence as a polynomial or otherwise since the highest endpoint is easily determined and represents the maximum of the sequence.
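Step 409 may be sketched in the same vein: each overlapping triple of (candidate lag, correlation) points is fit with a quadratic, each parabola's maximum over its own three-point span is taken, and the lag yielding the overall maximum is returned. The restriction to concave-down parabolas with an interior vertex reflects the observation above that the maximum otherwise lies at an endpoint.

import numpy as np

def best_last_sample_lag(candidates, scores):
    # Start from the best candidate point itself (covers endpoints and
    # monotonic configurations), then refine with quadratic modeling.
    best_lag = float(candidates[int(np.argmax(scores))])
    best_val = float(np.max(scores))
    for i in range(0, len(candidates) - 2, 2):     # triples overlapping by one point
        x, y = candidates[i:i + 3], scores[i:i + 3]
        a2, a1, a0 = np.polyfit(x, y, 2)           # y ~ a2*x^2 + a1*x + a0
        if a2 < 0:                                  # concave down: interior vertex
            xv = -a1 / (2.0 * a2)
            if x[0] <= xv <= x[2]:
                yv = (a2 * xv + a1) * xv + a0
                if yv > best_val:
                    best_lag, best_val = float(xv), float(yv)
    return best_lag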
Having determined the optimal lag value for the last sample of the current dominant peak-containing section or segment of interest, the residual modification module 103 derives in step 411 a corresponding lag contour by interpolating linearly over the section from lf to the optimal l1 calculated in step 409. At step 329 of FIG. 3 b, the lag contour calculated in step 411 of FIG. 4 is applied to the residual as described above, that is: r′(n)=r″(n−l(n)).
Finally, at step 331, it is determined whether there are any more pieces in the current frame to be analyzed and shifted. If there are, the flow of operations returns to step 325. Otherwise, the process ends for the current frame at step 333. Element 207 of FIG. 2 illustrates the warped sections of the modified residual 209 separately for clarity. The modified residual 113 illustrated as waveform 209 is finally provided as input to the synthesis filter 105, to yield a reproduction of the original speech signal, the reproduction having regular rather than jittered pitch peaks. From this point, the signal is processed using a technique such as ordinary CELP. However, the bit rate now required to code the signal will be greatly reduced over that required to code the unmodified signal due to the increased periodicity of the pitch structure.
After a frame is processed, processing begins on the subsequent frame. In the case of an unvoiced segment, there are typically no pitch peaks, and so the methodology described herein need not be applied. During the unvoiced interval, all quantities in the algorithm are reset; for example, the indication of accumulated shift is reset to zero. When voiced speech resumes, the first voiced frame k is treated as a special case since the pitch value of the previous frame, p(k−1), is unknown in this frame. The pitch contour in this special frame k is set to a constant function equal to the pitch value of the frame, p(k). The rest of the procedure is identical to that of regular frames.
Note that techniques other than polynomial modeling may be used within the invention to calculate an optimal last sample lag value and associated lag contour for a given section or piece of a speech signal within a current frame. It is only of consequence for the invention that a subset of the possible lag values, for example half of all possible lag values, be used to create correlation values, since this greatly reduces the computational expense of finding the optimal lag contour. Thus, alternative techniques such as bisection may be used to find the optimal lag value without trying all, or even most, possible lag values. The bisection technique entails identifying two candidate lag values and their associated correlation strengths. The lag candidate with the higher correlation and a new lag candidate lying between the two lag values are then used as endpoints to repeat the bisection process. The process may be terminated after a predetermined number of iterations, or when a lag value yielding a correlation strength above a predetermined threshold is found.
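One loose reading of the bisection alternative is sketched below; score_fn is an assumed callable returning the correlation strength obtained with a given last sample lag.

def bisection_lag_search(score_fn, lag_lo, lag_hi, iters=5):
    # Keep the better-scoring endpoint, move the other endpoint to the midpoint,
    # and repeat for a fixed number of iterations.
    for _ in range(iters):
        mid = 0.5 * (lag_lo + lag_hi)
        if score_fn(lag_lo) >= score_fn(lag_hi):
            lag_hi = mid
        else:
            lag_lo = mid
    return lag_lo if score_fn(lag_lo) >= score_fn(lag_hi) else lag_hi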
A continuous linear warp contour resulting from the methodology described herein is illustrated in FIG. 6. In particular the continuous linear warp contour 601 is shown as a solid black line, while the discontinuous contour 603 used in the prior art RCELP technique is shown as a dashed line. Both contours represent lines drawn through the set of points for signal samples plotted as a function of original time (pre-warp) versus modified time (post-warp). Thus, each straight segment in contour 601 and each separate piece of contour 603 represents a section of the original residual that has been warped according to the respective technique. It can be seen that the RCELP technique often results in missing or overlapped sections, while the continuous linear warp contour of the present invention does not permit overlap or omission. Rather, although the continuous linear warp contour 601 may contain discontinuities in slope, it is continuous rather than simply piece-wise continuous in position. In particular, region 605 is occupied by two pieces of the warp contour 603 while section 607 is devoid of data pursuant to the same contour. On the other hand, the entire signal space is occupied without overlap or omission by contour 601 according to the present invention.
Note that the warp contour 601 for adjacent segments may have the same slope or different slopes, depending upon the acceleration or deceleration needed for each segment. In contrast, the slope of each section of the RCELP contour 603 is unity, because RCELP shifts sections of the signal but does not change the time scale within each section. Thus it can be seen that the method according to the invention warps the time scale within each section in a linear continuous manner such that the peak of each section shifts to the desired location without creating undesirable time scale discontinuities at section edges.
Although it is not required, the present invention may be implemented using instructions, such as program “modules,” that are executed by a computer. Generally, program modules include routines, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. A program may include one or more program modules.
The invention may be implemented on a variety of types of machines, including cell phones, personal computers (PCs), hand-held devices, multi-processor systems, microprocessor-based programmable consumer electronics, network PCs, minicomputers, mainframe computers and the like, or on any other machine usable to code or decode audio signals as described herein and to store, retrieve, transmit or receive signals. The invention may be employed in a distributed computing system, where tasks are performed by remote components that are linked through a communications network.
With reference to FIG. 7, one exemplary system for implementing embodiments of the invention includes a computing device, such as computing device 700. In its most basic configuration, computing device 700 typically includes at least one processing unit 702 and memory 704. Depending on the exact configuration and type of computing device, memory 704 may be volatile (such as RAM), nonvolatile (such as ROM, flash memory, etc.) or some combination of the two. This most basic configuration is illustrated in FIG. 7 within line 706. Additionally, device 700 may also have additional features/functionality. For example, device 700 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 7 by removable storage 708 and non-removable storage 710. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 704, removable storage 708 and non-removable storage 710 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 700. Any such computer storage media may be part of device 700.
Device 700 may also contain one or more communications connections 712 that allow the device to communicate with other devices. Communications connections 712 are an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. As discussed above, the term computer readable media as used herein includes both storage media and communication media.
Device 700 may also have one or more input devices 714 such as keyboard, mouse, pen, voice input device, touch-input device, etc. One or more output devices 716 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at greater length here.
In view of the many possible embodiments to which the principles of this invention may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of invention. For example, those of skill in the art will recognize that the elements of the illustrated embodiments shown in software may be implemented in hardware and vice versa or that the illustrated embodiments can be modified in arrangement and detail without departing from the spirit of the invention. Therefore, the invention as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.

Claims (22)

1. A method of preparing a frame of a digital speech signal for compression comprising the steps of:
producing a linear prediction residual for the frame, the linear prediction residual having irregularly spaced dominant peaks;
dividing the residual into a series of contiguous, non-overlapping sections, each section containing not more than one dominant peak;
deriving an idealized signal having a series of regularly-spaced dominant peaks located in a series of sequential sections;
associating each section of the residual with a corresponding section of the idealized signal;
calculating a linear continuous warp contour for each residual section based on a subset of possible last sample lag values for each residual section within a subrange of possible last sample lag values for each residual section; and
modifying the residual by applying the calculated warp contour to the sections of the residual so that any dominant peak in each section of the residual aligns with the dominant peak in the corresponding section of the idealized signal, whereby dominant pitch peaks of the modified residual are regularly spaced, and no portion of any section of the residual is omitted or repeated in the modified residual.
2. The method according to claim 1 wherein the step of producing a linear prediction residual for the frame further comprises the steps of:
extracting linear prediction coefficients for the frame;
interpolating the linear prediction coefficients for the frame to create linear prediction coefficients for a plurality of sub-frames of the frame; and
producing a prediction residual for each sub-frame, whereby the prediction residual for the frame comprises a set of sub-frame prediction residuals.
3. The method according to claim 1, wherein the step of dividing the residual into a series of contiguous, non-overlapping sections further comprises the step of analyzing the frame to identify an integer pitch period.
4. The method according to claim 3 wherein the step of analyzing the frame to identify an integer pitch period further comprises the step of employing co-relation analysis in the open loop.
5. The method according to claim 1, wherein the step of calculating a linear continuous warp contour for each residual section further comprises the steps of:
establishing a first sample lag for the first sample of the residual section;
identifying a set of candidates for the last sample lag for the last sample of the residual section, the set of candidates consisting of a subset of all possible last sample lag values within a sub-range of all possible last sample lag values;
performing a linear interpolation between the first and last samples of the residual section for each candidate last sample lag to create a set of candidate lag contours;
applying each candidate lag contour to the residual section to obtain a set of candidate modified residuals;
calculating a correlation strength between each candidate modified residual and the corresponding section of the idealized signal to create a set of correlation strengths;
deriving an optimal last sample lag for the residual section based on the set of correlation strengths; and
deriving a linear continuous warp contour by interpolating linearly over the section from the first sample lag to the derived optimal last sample lag for the residual section.
6. The method according to claim 5, wherein the step of deriving an optimal last sample lag for the residual section based on the set of correlation strengths further comprises the steps of:
segregating the set of correlation strengths into overlapping subsections as a function of the last sample lags used to derive the strengths;
representing each subsection as a curve;
calculating the maximum value of each curve, wherein the maximum value is selectable from the group consisting of all possible lag values within a range of possible lag values that includes the last sample lags used to derive the strengths in the subsection; and
calculating the maximum correlation strength for the section based on the maximum values for the curves of the subsections.
7. The method according to claim 6 wherein the curve is a polynomial.
8. The method according to claim 7 wherein the polynomial is a quadratic function.
9. The method according to claim 1, wherein the subrange of possible last sample lag values for each residual section is selected such that the greatest cumulative shift for any sample in the section upon application of the calculated warp contour will be less than four sample positions.
10. A computer readable medium having computer readable instructions for performing a method of preparing a frame of a digital speech signal for compression comprising the steps of:
producing a linear prediction residual for the frame, the linear prediction residual having irregularly spaced dominant peaks;
dividing the residual into a series of contiguous, non-overlapping sections, each section containing not more than one dominant peak;
deriving an idealized signal having a series of regularly-spaced dominant peaks located in a series of sequential sections;
associating each section of the residual with a corresponding section of the idealized signal;
calculating a linear continuous warp contour for each residual section based on a subset of possible last sample lag values for each residual section within a subrange of possible last sample lag values for each residual section; and
modifying the residual by applying the calculated warp contour to the sections of the residual so that any dominant peak in each section of the residual aligns with the dominant peak in the corresponding section of the idealized signal, whereby dominant pitch peaks of the modified residual are regularly spaced, and no portion of any section of the residual is omitted or repeated in the modified residual.
11. The computer readable medium according to claim 10 wherein the step of producing a linear prediction residual for the frame further comprises the steps of:
extracting linear prediction coefficients for the frame;
interpolating the linear prediction coefficients for the frame to create linear prediction coefficients for a plurality of sub-frames of the frame; and
producing a prediction residual for each sub-frame, whereby the prediction residual for the frame comprises a set of sub-frame prediction residuals.
12. The computer readable medium according to claim 10, wherein the step of dividing the residual into a series of contiguous, non-overlapping sections further comprises the step of analyzing the frame to identify an integer pitch period.
13. The computer readable medium according to claim 12 wherein the step of analyzing the frame to identify an integer pitch period further comprises the step of employing co-relation analysis in the open loop.
14. The computer readable medium according to claim 10, wherein the step of calculating a linear continuous warp contour for each residual section further comprises the steps of:
establishing a first sample lag for the first sample of the residual section;
identifying a set of candidates for the last sample lag for the last sample of the residual section, the set of candidates consisting of a subset of all possible last sample lag values within a sub-range of all possible last sample lag values;
performing a linear interpolation between the first and last samples of the residual section for each candidate last sample lag to create a set of candidate lag contours;
applying each candidate lag contour to the residual section to obtain a set of candidate modified residuals;
calculating a correlation strength between each candidate modified residual and the corresponding section of the idealized signal to create a set of correlation strengths;
deriving an optimal last sample lag for the residual section based on the set of correlation strengths; and
deriving a linear continuous warp contour by interpolating linearly over the section from the first sample lag to the derived optimal last sample lag for the residual section.
15. The computer readable medium according to claim 14, wherein the step of deriving an optimal last sample lag for the residual section based on the set of correlation strengths further comprises the steps of:
segregating the set of correlation strengths into overlapping subsections as a function of the last sample lags used to derive the strengths;
representing each subsection as a curve;
calculating the maximum value of each curve, wherein the maximum value is selectable from the group consisting of all possible lag values within a range of possible lag values that includes the last sample lags used to derive the strengths in the subsection; and
calculating the maximum correlation strength for the section based on the maximum values for the curves of the subsections.
16. The computer readable medium according to claim 15 wherein the curve is a polynomial.
17. The computer readable medium according to claim 16 wherein the polynomial is a quadratic function.
18. The computer readable medium according to claim 10, wherein the subrange of possible last sample lag values for each residual section is selected such that the greatest cumulative shift for any sample in the section upon application of the calculated warp contour will be less than four sample positions.
19. The computer readable medium according to claim 10, wherein the computer readable medium comprises a magnetically readable disc medium.
20. The computer readable medium according to claim 10, wherein the computer readable medium comprises an optically readable disc medium.
21. The computer readable medium according to claim 10, wherein the computer readable medium comprises a modulated data signal.
22. The computer readable medium according to claim 10, wherein the computer readable medium comprises volatile computer readable storage.
US20090182556A1 (en) * 2007-10-24 2009-07-16 Red Shift Company, Llc Pitch estimation and marking of a signal representing speech
US20090271183A1 (en) * 2007-10-24 2009-10-29 Red Shift Company, Llc Producing time uniform feature vectors
US20090271198A1 (en) * 2007-10-24 2009-10-29 Red Shift Company, Llc Producing phonitos based on feature vectors
US20090271197A1 (en) * 2007-10-24 2009-10-29 Red Shift Company, Llc Identifying features in a portion of a signal representing speech
US20090271196A1 (en) * 2007-10-24 2009-10-29 Red Shift Company, Llc Classifying portions of a signal representing speech
US20130046533A1 (en) * 2007-10-24 2013-02-21 Red Shift Company, Llc Identifying features in a portion of a signal representing speech
US8326610B2 (en) * 2007-10-24 2012-12-04 Red Shift Company, Llc Producing phonitos based on feature vectors
US8396704B2 (en) * 2007-10-24 2013-03-12 Red Shift Company, Llc Producing time uniform feature vectors
US20100286990A1 (en) * 2008-01-04 2010-11-11 Dolby International Ab Audio encoder and decoder
US20100286991A1 (en) * 2008-01-04 2010-11-11 Dolby International Ab Audio encoder and decoder
US8938387B2 (en) 2008-01-04 2015-01-20 Dolby Laboratories Licensing Corporation Audio encoder and decoder
US8924201B2 (en) 2008-01-04 2014-12-30 Dolby International Ab Audio encoder and decoder
US8494863B2 (en) * 2008-01-04 2013-07-23 Dolby Laboratories Licensing Corporation Audio encoder and decoder with long term prediction
US8484019B2 (en) 2008-01-04 2013-07-09 Dolby Laboratories Licensing Corporation Audio encoder and decoder
US20100198586A1 (en) * 2008-04-04 2010-08-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E. V. Audio transform coding using pitch correction
US8700388B2 (en) * 2008-04-04 2014-04-15 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio transform coding using pitch correction
US20120072209A1 (en) * 2010-09-16 2012-03-22 Qualcomm Incorporated Estimating a pitch lag
US9082416B2 (en) * 2010-09-16 2015-07-14 Qualcomm Incorporated Estimating a pitch lag
TWI483245B (en) * 2011-02-14 2015-05-01 Fraunhofer Ges Forschung Information signal representation using lapped transform
US9037457B2 (en) 2011-02-14 2015-05-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio codec supporting time-domain and frequency-domain coding modes
US9620129B2 (en) 2011-02-14 2017-04-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a portion of an audio signal using a transient detection and a quality result
US9153236B2 (en) 2011-02-14 2015-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio codec using noise synthesis during inactive phases
US9595263B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Encoding and decoding of pulse positions of tracks of an audio signal
US9595262B2 (en) 2011-02-14 2017-03-14 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Linear prediction based coding scheme using spectral domain noise shaping
US9583110B2 (en) 2011-02-14 2017-02-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for processing a decoded audio signal in a spectral domain
US9047859B2 (en) 2011-02-14 2015-06-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for encoding and decoding an audio signal using an aligned look-ahead portion
US9536530B2 (en) 2011-02-14 2017-01-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Information signal representation using lapped transform
TWI564882B (en) * 2011-02-14 2017-01-01 弗勞恩霍夫爾協會 Information signal representation using lapped transform
US9384739B2 (en) 2011-02-14 2016-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for error concealment in low-delay unified speech and audio coding

Also Published As

Publication number Publication date
EP1271471B1 (en) 2008-04-23
ATE393447T1 (en) 2008-05-15
JP4162933B2 (en) 2008-10-08
DE60226200D1 (en) 2008-06-05
US20030004718A1 (en) 2003-01-02
EP1271471A2 (en) 2003-01-02
US20050131681A1 (en) 2005-06-16
US7228272B2 (en) 2007-06-05
EP1271471A3 (en) 2004-01-28
DE60226200T2 (en) 2009-05-14
JP2003122400A (en) 2003-04-25

Similar Documents

Publication Title
US6879955B2 (en) Signal modification based on continuous time warping for low bit rate CELP coding
US6658383B2 (en) Method for coding speech and music signals
EP1886307B1 (en) Robust decoder
US11721349B2 (en) Methods, encoder and decoder for linear predictive encoding and decoding of sound signals upon transition between frames having different sampling rates
RU2486484C2 (en) Time warp contour calculator, audio signal encoder, encoded audio signal representation, methods and computer program
US8121833B2 (en) Signal modification method for efficient coding of speech signals
US7599833B2 (en) Apparatus and method for coding residual signals of audio signals into a frequency domain and apparatus and method for decoding the same
JP4970046B2 (en) Transcoding between indexes of multipulse dictionaries used for coding for digital signal compression
JP2003501675A (en) Speech synthesis method and speech synthesizer for synthesizing speech from pitch prototype waveform by time-synchronous waveform interpolation
US20050091041A1 (en) Method and system for speech coding
US8670982B2 (en) Method and device for carrying out optimal coding between two long-term prediction models
Prandoni et al. R/D optimal linear prediction
US6535847B1 (en) Audio signal processing
Chibani Increasing the robustness of CELP speech codecs against packet losses.
Neuhoff et al. Design of a CELP coder and analysis of various quantization techniques
MX2007015190A (en) Robust decoder

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RAO, AJIT V.;REEL/FRAME:012436/0134

Effective date: 20011017

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0001

Effective date: 20141014

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20170412