EP1464047A2 - Transcoding scheme between CELP-based speech codes

Transcoding scheme between CELP-based speech codes

Info

Publication number
EP1464047A2
Authority
EP
European Patent Office
Prior art keywords
celp
mapping
destination
codec
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP03705707A
Other languages
English (en)
French (fr)
Other versions
EP1464047A4 (de)
Inventor
Marwan A. Jabri
Jianwei Wang
Stephen Gould
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilithium Networks Pty Ltd
Original Assignee
Dilithium Networks Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilithium Networks Pty Ltd filed Critical Dilithium Networks Pty Ltd
Publication of EP1464047A2
Publication of EP1464047A4
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/173 Transcoding, i.e. converting between two coded representations avoiding cascaded coding-decoding
    • G10L19/18 Vocoders using multiple modes
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders

Definitions

  • the present invention generally relates to techniques for processing information. More particularly, the invention provides a method and apparatus for converting CELP frames from one CELP based standard to another CELP based standard, and/or within a single standard but a different mode. Further details of the present invention are provided throughout the present specification and more particularly below.
  • Coding is the process of converting a raw signal (voice, image, video, etc.) into a format amenable to transmission or storage. The coding usually achieves a large amount of compression, but generally requires significant signal processing. The outcome of the coding is a bitstream (sequence of frames) of encoded parameters according to a given compression format.
  • the compression is achieved by removing statistically and perceptually redundant information using various techniques for modeling the signal.
  • the encoded format is referred to as a "compression format" or "parameter space".
  • the decoder takes the compressed bitstream and regenerates the original signal.
  • compression typically leads to information loss.
  • the process of converting between different compression formats and/or reducing the bit rate of a previously encoded signal is known as transcoding. This may be done to conserve bandwidth, or to connect incompatible client and/or server devices. Transcoding differs from the direct compression process in that a transcoder only has access to the compressed signal and does not have access to the original signal.
  • Transcoding can be done using brute-force techniques such as "tandem" transcoding, in which a decompression process is followed by a re-compression process. Since a large amount of processing is often required, and delays may be incurred, to decompress and then re-compress a signal, one can consider transcoding in the compression space or parameter space. Such transcoding aims at mapping between compression formats while remaining in the parameter space wherever possible. This is where the sophisticated algorithms of "smart" transcoding come into play. Although there have been advances in transcoding, it is desirable to further improve transcoding techniques. Further details of the limitations of conventional techniques are described more fully throughout the present specification and more particularly below.
  • the invention provides a method and apparatus for converting CELP frames from one CELP based standard to another CELP based standard, and/or within a single standard but a different mode. Further details of the present invention are provided throughout the present specification and more particularly below. [0009]
  • the invention provides an apparatus for converting CELP frames from one CELP-based standard to another CELP-based standard, and/or within a single standard but to a different mode.
  • the apparatus has a bitstream unpacking module for extracting one or more CELP parameters from a source codec.
  • the apparatus also has an interpolator module coupled to the bitstream unpacking module.
  • the interpolator module is adapted to interpolate between different frame sizes, subframe sizes, and/or sampling rates of the source codec and a destination codec.
  • a mapping module is coupled to the interpolator module.
  • the mapping module is adapted to map the one or more CELP parameters from the source codec to one or more CELP parameters of the destination codec.
  • the apparatus has a destination bitstream packing module coupled to the mapping module.
  • the destination bitstream packing module is adapted to construct at least one destination output CELP frame based upon at least the one or more CELP parameters from the destination codec.
  • a controller is coupled to at least the destination bitstream packing module, the mapping module, the interpolator module, and the bitstream unpacking module.
  • the controller is adapted to oversee operation of one or more of the modules and to receive instructions from one or more external applications.
  • the controller is adapted to provide status information to one or more of the external applications.
  • the invention provides a method for transcoding a CELP based compressed voice bitstream from source codec to destination codec.
  • the method includes processing a source codec input CELP bitstream to unpack one or more CELP parameters from the input CELP bitstream, and interpolating one or more of the unpacked CELP parameters from the source codec format to the destination codec format if the two formats differ in one or more of frame size, subframe size, and/or sampling rate.
  • the method includes encoding the one or more CELP parameters for the destination codec and processing a destination CELP bitstream by at least packing the one or more CELP parameters for the destination codec.
  • the invention provides a method for processing CELP based compressed voice bitstreams from source codec to destination codec formats.
  • the method includes transferring a control signal from a plurality of control signals from an application process and selecting one CELP mapping strategy from a plurality of different CELP mapping strategies based upon at least the control signal from the application.
  • the method also includes performing a mapping process using the selected CELP mapping strategies to map one or more CELP parameters from a source codec format to one or more CELP parameters of a destination codec format.
  • the invention provides a system for processing CELP based compressed voice bitstreams from source codec to destination codec formats.
  • the system includes one or more memories.
  • Such memories may include one or more codes for receiving a control signal from a plurality of control signals from an application process.
  • One or more codes for selecting one CELP mapping strategy from a plurality of different CELP mapping strategies based upon at least the control signal from the application are also included.
  • the one or more memories also include one or more codes for performing a mapping process using the selected CELP mapping strategies to map one or more CELP parameters from a source codec format to one or more CELP parameters of a destination codec format.
  • the transcoding apparatus includes: a source CELP parameter unpacking module that extracts CELP parameters from the input encoded CELP bitstream;
  • a CELP parameter interpolator that converts the source CELP parameters into destination CELP parameters according to the subframe size difference between the source and destination codecs; parameter interpolation is used if the subframe sizes of the source and destination codecs differ.
  • the source CELP parameter unpacking module is a simplified CELP decoder without a formant filter and a post-filter.
  • the CELP parameter interpolator comprises a set of interpolators related to one or more of the CELP parameters.
  • the destination CELP parameter mapping and tuning module includes a parameter mapping strategy switching module, and one or more of the following parameter mapping strategies: a module of CELP parameter direct space mapping, a module of analysis in excitation space mapping, a module of analysis in filtered excitation space mapping.
  • the invention performs transcoding on a subframe by subframe basis. That is, as a frame (of source compressed information) is received by the transcoding system, the transcoder can begin operating on it and producing output subframes. Once a sufficient number of subframes have been produced, a frame (of compressed information according to destination format) can be generated and can be sent to the communication channel if communication is the purpose. If storage is the purpose, the generated frame can be stored as desired.
  • the transcoding operation consists of four operations: (1) bitstream unpacking, (2) subframe buffering and interpolation of source CELP parameters, (3) mapping and tuning to destination CELP parameters, and (4) code packing to produce output frame(s). On receipt of a frame, the transcoder unpacks the bitstream to produce the CELP parameters for each of the subframes contained within the frame (Figure 10, block (1)).
  • the parameters of interest are the LPC coefficients, the excitation (produced from the adaptive and fixed codewords), and the pitch lag. Note that for a low-complexity solution that produces good quality, only decoding to the excitation is required, not full synthesis of the speech waveform. If subframe interpolation is needed, it is done at this point by the smart interpolation engine (Figure 10, block (2)).
  • the subframes are now in a form amenable for processing by the destination parameter mapping and tuning module (Figure 10, block (5)).
  • the short-term LPC filter coefficients are mapped independently of the excitation CELP parameters.
  • Simple linear mapping in the LSP pseudo-frequency space can be used to produce the LSP coefficients for the destination codec.
  • the excitation CELP parameters can be mapped in a number of ways, giving accordingly better quality output at the cost of computational complexity. Three such mapping strategies have been described in this document and are part of the Parameter Mapping & Tuning Strategies module (Figure 10, block (4)): CELP parameter Direct Space Mapping (DSM), analysis in excitation space mapping, and analysis in filtered excitation space mapping.
  • the selection among these strategies is made through the Mapping & Tuning Strategy Switching Module (Figure 10, block (3)).
  • because the three methods trade off quality against computational load, they can be used to provide graceful degradation in quality in the case of the apparatus being overloaded by a large number of simultaneous channels.
  • the performance of the transcoders can thus adapt to the available resources.
  • a transcoding system may be built using only one strategy, yielding a desired quality and performance. In such a case, the Mapping and Tuning Strategy Switching module (Figure 10, Block (3)) would not be incorporated.
  • a voice activity detector (operating in the parameter space) can also be employed at this point, if applicable to the destination standard, to reduce the outbound bandwidth.
  • the mapped parameters can then be packed into destination bitstream format frames ( Figure 10, block (7)) and generated for transmission or storage.
  • the invention covers the algorithms and methods used to perform smart transcoding between CELP-based speech coding standards.
  • the invention also covers transcoding within a single standard in order to perform rate control (by transcoding to lower modes or introducing silence frames through an embedded Voice Activity Detector).
  • FIG. 1 is a simplified block diagram of the decoder stage of a generic CELP coder;
  • FIG. 2 is a simplified block diagram of the encoder stage of a generic CELP coder;
  • FIG. 3 is a simplified block diagram showing a mathematical model of a codec;
  • FIG. 4 is a simplified block diagram showing a mathematical model of a tandem transcodec;
  • FIG. 5 is a simplified block diagram showing a mathematical model of a smart transcodec;
  • FIG. 6 is an illustration of one of the traditional apparatus for CELP based transcoding;
  • FIG. 7 is an illustration of one of the traditional apparatus for CELP based transcoding;
  • FIG. 8 is a simplified block diagram showing generic transcoding between CELP codecs;
  • FIG. 9 is a simplified diagram showing subframe interpolation for GSM-AMR and G.723.1;
  • FIG. 10 depicts a simplified block diagram of a system constructed in accordance with an embodiment of the present invention to transcode an input CELP bitstream from a source CELP codec to an output CELP bitstream of a destination codec;
  • FIG. 11 is a simplified block diagram of a source codec CELP parameters unpack module in greater detail;
  • FIG. 12 is a simplified diagram showing interpolation of subframe and sample-by-sample parameters for G.723.1 to GSM-AMR;
  • FIG. 13 is a simplified block diagram showing the excitation being calibrated by source codec LPC coefficients and destination codec encoded LPC coefficients;
  • FIG. 14 is a simplified block diagram showing the Parameter Mapping & Tuning strategies;
  • FIG. 15 is a simplified block diagram of a destination CELP parameters tuning module in greater detail;
  • FIG. 16 is a simplified diagram showing an embodiment of the destination CELP code packing in frames for GSM-AMR;
  • FIG. 17 depicts an embodiment of a G.723.1 to GSM-AMR transcoder;
  • FIG. 18 depicts an embodiment of a GSM-AMR to G.723.1 transcoder.

DETAILED DESCRIPTION OF THE INVENTION
  • the invention provides a method and apparatus for converting CELP frames from one CELP based standard to another CELP based standard, and/or within a single standard but a different mode. Further details of the present invention are provided throughout the present specification and more particularly below.
  • the invention covers algorithms and methods used to perform smart transcoding between CELP (code excited linear prediction) based coding methods and standards.
  • the invention also covers transcoding within a single standard in order to perform rate control (by transcoding to lower modes or introducing silence frames through an embedded Voice Activity Detector).
  • Speech coding techniques in general can be classified as waveform coders (e.g. the ITU standards G.711, G.726 and G.722) and analysis-by-synthesis (AbS) coders (e.g. CELP-based codecs such as G.723.1 and GSM-AMR).
  • Waveform coders operate in the time domain and are based on a sample-by-sample approach that exploits the correlation between speech samples.
  • Analysis-by-synthesis coders try to imitate the human speech production system with a simplified model of a source (glottis) and a filter (vocal tract) that shapes the output speech spectrum on a frame basis (typically a frame size of 10-30 ms is used).
  • the analysis-by-synthesis types of coders were introduced to provide high quality speech at low bit rates, at the expense of increased computational requirements. Compression techniques are a meaningful way to conserve resources on the communication interface.
  • a CELP-based codec can then be thought of as an algorithm which maps between the sampled speech, x(n), and some parameter space, Λ, using a model of speech production, i.e. it encodes and decodes the digital speech. All CELP-based algorithms operate on frames of speech (which may be further divided into several subframes). In some codecs the speech frames overlap each other.
  • a frame of speech can be defined as a vector of speech samples beginning at some time n, that is, x̄_i = [x(n), x(n+1), ..., x(n+L-1)], where
  • L is the length (number of samples) of the speech frame,
  • i is related to the first frame sample n by the linear relationship n = i(L - K), and
  • K is the number of samples overlapped between frames.
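The frame segmentation just defined can be sketched in a few lines of Python; `split_frames` is an illustrative helper (not part of any codec standard) that realizes the relationship n = i(L - K):

```python
import numpy as np

def split_frames(x, L, K):
    """Split a sampled signal into frames of length L overlapping by K samples.

    Frame i starts at sample n = i * (L - K), the linear relationship
    between the frame index and the first frame sample described above.
    """
    step = L - K
    n_frames = (len(x) - K) // step
    return np.stack([x[i * step : i * step + L] for i in range(n_frames)])

# Example: 160-sample frames (20 ms at 8 kHz) with no overlap (K = 0).
signal = np.arange(480, dtype=float)
frames = split_frames(signal, L=160, K=0)
```

With K > 0, consecutive frames share K samples, as some codecs require.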
  • the compression (lossy encoding) process is a function which maps the speech frames, x̄_i, to parameters, λ_i, and the decoding process maps back from the parameters, λ_i, to an approximation of the original speech frames, x̂_i.
  • the speech frames that are produced by the decoder are not identical to the speech frames that were originally encoded.
  • the codec is designed to produce output speech which is as perceptually similar as possible to the input speech; that is, the encoder must produce parameters which maximize some perceptual criterion measured between the input speech frames and the frames produced by the decoder when processing those parameters.
  • mapping from inputs to parameters, and from parameters to outputs, requires knowledge of all previous inputs or parameters. This can be achieved by maintaining state, S, within the codec, for example in the construction of the adaptive codebook used by CELP-based methods.
  • the encoder state and decoder state must remain synchronized. This is achieved by only updating the state based on data which both sides (encoder and decoder) have, i.e. the parameters.
  • Figure 3 shows a generic model of an encoder, channel, and decoder.
  • the frame parameters, λ_i, used in CELP-based models consist of the linear-predictive coefficients (LPCs) used for short-term prediction of the speech signal (and physically relating to the vocal tract, mouth and nasal cavity, and lips), as well as an excitation signal composed from adaptive and fixed codes.
  • the adaptive codes are used to model long- term pitch information in the speech.
  • the codes (adaptive and fixed) have associated codebooks that are predefined for a specific CELP codec.
  • Figure 1 shows a typical CELP decoder where the adaptive and fixed codebook vectors are scaled independently by a gain factor, then combined and filtered to produce synthesized speech. This speech is usually passed through a post-filter to remove artifacts introduced by the model.
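As a rough sketch of that decoder path (gain-scaled adaptive and fixed codebook vectors summed into an excitation, then shaped by the short-term synthesis filter), using toy 4-sample vectors and a first-order filter rather than any standard's actual codebooks or 10th-order LPCs:

```python
import numpy as np

def lpc_synthesis(excitation, a):
    """All-pole synthesis filter 1/A(z): y[n] = e[n] - sum_k a[k] * y[n-1-k].

    `a` holds the LPC coefficients a_1..a_p (the leading 1 is implicit).
    """
    p = len(a)
    y = np.zeros(len(excitation))
    for n in range(len(excitation)):
        y[n] = excitation[n] - sum(a[k] * y[n - 1 - k]
                                   for k in range(p) if n - 1 - k >= 0)
    return y

# One subframe of decoding: scale and sum the two codebook vectors,
# then shape the result with the short-term synthesis filter.
adaptive = np.array([0.5, -0.2, 0.1, 0.0])
fixed = np.array([0.0, 1.0, 0.0, -1.0])
g_pitch, g_code = 0.8, 0.6
excitation = g_pitch * adaptive + g_code * fixed
speech = lpc_synthesis(excitation, a=np.array([-0.9]))  # single-pole example
```

A real decoder would follow this with the post-filter mentioned above.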
  • the CELP encoding (analysis) process involves preprocessing of the speech signal to remove unwanted frequency components and application of a windowing function, followed by extraction of the short-term LPC parameters. This is typically done using the Levinson-Durbin algorithm.
  • the LPC parameters are converted into Line Spectral Pairs (LSPs) to facilitate quantization and subframe interpolation.
  • the speech is then inverse-filtered by the short-term LPC filter to produce a residual excitation signal. This residual is perceptually weighted to improve quality and is analysed to find an estimate of the pitch of the speech.
  • a closed-loop analysis-by-synthesis method is used to determine the optimal pitch.
  • the adaptive codebook component of the excitation is subtracted from the residual, and the optimal fixed codeword found.
  • the internal memory of the encoder is updated to reflect changes to the codec state (such as the adaptive codebook).
  • the simplest method of transcoding is a brute- force approach called tandem transcoding, see Figure 4. This method performs a full decode of the incoming compressed bits to produce synthesized speech. The synthesized speech is then encoded for the target standard. This method suffers from the huge amount of computation required in re-encoding the signal, as well as from quality degradation issues introduced by pre- and post-filtering of the speech waveform, and from potential delays introduced by the look-ahead-requirements of the encoder.
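The tandem approach can be sketched as a trivial pipeline; `src_decoder` and `dst_encoder` below are placeholders for full codec implementations, and the toy stand-ins only illustrate the data flow:

```python
def tandem_transcode(src_frames, src_decoder, dst_encoder):
    """Brute-force tandem: fully decode to PCM, then fully re-encode.

    Every sample is resynthesized and re-analysed, which is exactly
    the cost that smart (parameter-space) transcoding avoids.
    """
    pcm = []
    for frame in src_frames:
        pcm.extend(src_decoder(frame))  # full synthesis back to samples
    return dst_encoder(pcm)             # full re-analysis of the waveform

# Toy stand-ins to illustrate the data flow only.
toy_decode = lambda frame: [frame] * 160     # one "frame" -> 160 samples
toy_encode = lambda pcm: len(pcm) // 160     # re-frame the PCM stream
n_out = tandem_transcode([0.1, 0.2], toy_decode, toy_encode)
```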
  • Figure 6 illustrates the method used by US 6,260,009 B1.
  • the reconstructed signal which is used as the target signal by the Searcher is produced from the input excitation parameters and the output quantized formant filter coefficients. Due to the differences between the quantized formant filter coefficients in the source and destination codecs, the target signal for the Searcher is degraded, and ultimately the output speech quality of the transcoding is significantly degraded (see Figure 6). Other limitations may be found throughout the present specification and more particularly below.
  • a more recent method (US 2002/0077812 A1), illustrated in Figure 7, performs transcoding by mapping each CELP parameter directly, ignoring the interactions between the CELP parameters.
  • the method is only applicable in a special case that requires very restrictive conditions on the source and destination CELP codecs. For example, it requires Algebraic CELP (ACELP) and the same subframe size in both the source and destination codecs. It does not produce good quality speech for most CELP-based transcoding. This method is only suitable for one of the GSM-AMR modes and does not cover all the modes in GSM-AMR.
  • the invention also covers transcoding within a single standard in order to perform rate control (by transcoding to lower modes or introducing silence frames through an embedded Voice Activity Detector).
  • the following sections discuss the details of the present invention.
  • the invention performs transcoding on a subframe-by-subframe basis. That is, as a frame is received by the transcoding system, the transcoder can begin operating on its subframes and producing output subframes. Once a sufficient number of subframes have been produced, a frame can be generated. If the frame durations defined by the source and destination standards are the same, then one input frame will produce one output frame; otherwise, buffering of input frames or generation of multiple output frames will be needed.
  • the transcoding operation consists of four operations: (1) bitstream unpacking, (2) subframe buffering and interpolation of source CELP parameters, (3) mapping and tuning to destination CELP parameters, and (4) code packing to produce output frame(s) (see Figure 8).
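For codecs with unequal frame durations, the buffering requirement of step (2) follows from the frame-duration ratio; a minimal sketch for the G.723.1 (30 ms) to GSM-AMR (20 ms) direction, with `output_frames_ready` a hypothetical helper:

```python
SRC_FRAME_MS = 30   # G.723.1 frame duration
DST_FRAME_MS = 20   # GSM-AMR frame duration

def output_frames_ready(input_frames_received):
    """Number of complete destination frames available so far:
    buffered signal time divided by the output frame duration."""
    return (input_frames_received * SRC_FRAME_MS) // DST_FRAME_MS

# Every two 30 ms input frames supply exactly three 20 ms output frames,
# so subframes must be buffered across frame boundaries.
ready = [output_frames_ready(k) for k in range(1, 5)]
```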
  • Figure 10 is a block diagram illustrating the principles of a CELP based codec transcoding apparatus according to the present invention.
  • the block comprises a source bitstream unpacking module, a smart interpolation engine, a parameter mapping and tuning module, an optional advanced features module, a control module, and a destination bitstream packing module.
  • the parameter mapping & tuning module comprises a mapping & tuning strategy switching module and parameter mapping & tuning strategies module.
  • the transcoding operation is overseen by the control module.
  • the transcoder unpacks the bitstream to produce the CELP parameters for each of the subframes contained within the frame.
  • the parameters of interest are the LPC coefficients, the excitation (produced from the adaptive and fixed codewords), and the pitch lag.
  • the subframes are now in a form amenable for processing by the destination parameter mapping and tuning module shown in Figure 14.
  • the short-term LPC filter coefficients are mapped independently of the excitation CELP parameters. Simple linear mapping in the LSP pseudo-frequency space can be used to produce the LSP coefficients for the destination codec. More sophisticated non-linear interpolation can also be used.
  • the excitation CELP parameters can be mapped in a number of ways giving accordingly better quality output at the cost of computational complexity. Three such mapping strategies have been described in this document and are part of the Parameter Mapping & Tuning Strategies module (Figure 10, block (4)):
  • the selection of the mapping and tuning strategy is made through the Mapping & Tuning Strategy Switching Module (Figure 10, block (3)).
  • a voice activity detector (operating in the parameter space) can also be employed at this point, if applicable to the destination standard, to reduce the outbound bandwidth.
  • the outputs of the parameter mapping and tuning module are destination CELP codec codes. They are packed into destination bitstream frames according to the codec's CELP frame format. The packing process is needed to put the output bits into a format that can be understood by destination CELP decoders. If the application is storage, the destination CELP parameters could be packed or stored in an application-specific format. The packing process could also be varied if the frames are to be transported according to a multimedia protocol, for example if bit scrambling is to be implemented in the packing process.
  • the apparatus of the present invention provides the capability of adding future optional signal processing functions or modules.

Subframe Interpolation
  • Subframe interpolation may be needed when subframes for different standards represent different time durations in the signal domain, or when a different sampling rate is used.
  • G.723.1 uses frames of 30 ms duration (7.5 ms per subframe);
  • GSM-AMR uses frames of 20 ms duration (5 ms per subframe).
  • sample-by-sample parameters, such as excitation and codeword vectors, and
  • subframe parameters, such as LSP coefficients and pitch lag estimates.
  • the sample-by-sample parameters are mapped by considering their discrete time index and copying to the appropriate location in the target subframe.
  • the subframe parameters are interpolated by some interpolation function to produce a smoothed estimate of the parameters in the target subframe.
  • a smart interpolation algorithm can improve the voice transcoding, not only in terms of computational performance, but more importantly in terms of voice quality.
  • a simple interpolation function is the linear interpolator.
  • subframe-wide parameters (for example, the LSP coefficients), and
  • sample-by-sample parameters (for example, the adaptive and fixed codewords), denoted v[·].
  • each CELP parameter (LSP coefficients, lag, pitch gain, codeword gain, etc.) can use a different interpolation scheme to achieve the best perceptual quality.
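A linear interpolator for a per-subframe parameter track can be sketched with `numpy.interp`, resampling from the 7.5 ms G.723.1 subframe grid to the 5 ms GSM-AMR grid; placing grid points at subframe centres is an illustrative choice, not something mandated by either standard:

```python
import numpy as np

def interpolate_subframe_params(values, src_sf_ms, dst_sf_ms):
    """Linearly interpolate a per-subframe parameter track (e.g. pitch lag
    or one LSP coefficient) from the source subframe grid to the
    destination grid. Grid points are taken at subframe centres; values
    outside the source grid are clamped to the nearest endpoint.
    """
    n_src = len(values)
    duration = n_src * src_sf_ms
    src_t = (np.arange(n_src) + 0.5) * src_sf_ms
    n_dst = int(duration / dst_sf_ms)
    dst_t = (np.arange(n_dst) + 0.5) * dst_sf_ms
    return np.interp(dst_t, src_t, values)

# Four G.723.1 subframes (7.5 ms each) -> six GSM-AMR subframes (5 ms each).
lags = np.array([40.0, 42.0, 44.0, 46.0])
dst_lags = interpolate_subframe_params(lags, 7.5, 5.0)
```

Sample-by-sample parameters, by contrast, would be copied to the corresponding discrete time indices rather than interpolated.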
  • the excitation vectors used as target signals in transcoding are calibrated by applying LPC data from the source and destination codecs.
  • Method 1: Linear Transform of the LSP Coefficients
  • q' = Aq + b
  • where q' is the destination LSP vector (in the pseudo-frequency domain),
  • q is the source (original) LSP vector,
  • A is a linear transform matrix, and
  • b is the bias term.
  • in the simplest case, A reduces to the identity matrix and b reduces to zero.
  • the DC bias term used in the GSM-AMR codec is different from the one used by the G.723.1 codec;
  • the b term in the equation above is used to compensate for this difference.
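Method 1 follows directly from the equation q' = Aq + b; in the sketch below the bias vector is illustrative only, not the actual GSM-AMR or G.723.1 DC offsets:

```python
import numpy as np

def map_lsp(q_src, A=None, b=None):
    """Map a source LSP vector to the destination codec: q' = A @ q + b.

    With matched codecs, A defaults to the identity and b to zero; in
    practice b can absorb a DC-bias difference such as the one between
    the GSM-AMR and G.723.1 LSP representations (codec-specific values).
    """
    q_src = np.asarray(q_src, dtype=float)
    if A is None:
        A = np.eye(len(q_src))
    if b is None:
        b = np.zeros(len(q_src))
    return A @ q_src + b

# Illustrative 3-dimensional example with a small bias correction.
q = np.array([0.25, 0.50, 0.75])
q_dst = map_lsp(q, b=np.array([0.01, 0.0, -0.01]))
```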
  • Method 2: Excitation Vector Calibration by LSP Coefficients
  • the decoded source excitation vector is synthesized with the source LPC coefficients in each subframe to convert it to the speech domain, and is then filtered using the quantized LP parameters of the destination codec to form the target signal for transcoding.
  • This calibration is optional and it can significantly improve the perceptual speech quality where there is a marked difference in the LPC parameters.
  • Figure 13 depicts the excitation calibration approach.
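A minimal sketch of this calibration, assuming simple direct-form LPC filters and first-order illustrative coefficients (real codecs use 10th-order quantized LPCs): the excitation passes through the source synthesis filter 1/A_src(z) into the speech domain, then through the destination analysis filter A_dst(z):

```python
import numpy as np

def synthesis_filter(x, a):
    """1/A(z): y[n] = x[n] - sum_k a[k] * y[n-1-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k in range(len(a)):
            if n - 1 - k >= 0:
                acc -= a[k] * y[n - 1 - k]
        y[n] = acc
    return y

def analysis_filter(x, a):
    """A(z): y[n] = x[n] + sum_k a[k] * x[n-1-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k in range(len(a)):
            if n - 1 - k >= 0:
                acc += a[k] * x[n - 1 - k]
        y[n] = acc
    return y

def calibrate_excitation(exc, a_src, a_dst_quantized):
    speech = synthesis_filter(exc, a_src)            # to the speech domain
    return analysis_filter(speech, a_dst_quantized)  # back via destination LPC

exc = np.array([1.0, 0.0, 0.0, 0.0])
target = calibrate_excitation(exc, np.array([-0.9]), np.array([-0.9]))
```

When the source and destination LPC coefficients match, the cascade is the identity and the excitation passes through unchanged; the calibration only matters where the two LPC sets differ.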
  • This section discusses three strategies for mapping the CELP excitation parameters. They are presented in order of increasing computational complexity and output quality.
  • the core of the invention is the fact that the excitation can be mapped directly without the need to reconstruct the speech signal. This means that significant computation is saved during closed-loop codebook searches since the signals do not need to be filtered by the short-term impulse response, as required by conventional techniques.
  • This mapping works because the incoming bitstream already contains the optimal excitation, according to the source CELP codec, for generating the speech.
  • the invention uses this fact to perform rapid searching in the excitation domain instead of the speech domain.
  • CELP Parameters Direct Space Mapping. This strategy is the simplest transcoding scheme. The mapping is based on similarities of physical meaning between source and destination parameters, and the transcoding is performed directly using analytical formulae without any iterating or searching. The advantage of this scheme is that it does not require a large amount of memory and consumes almost zero MIPS, yet it can still generate intelligible, albeit degraded quality, sound. Note that the CELP parameters direct space mapping method of the present invention is different from the prior-art apparatus shown in Figure 7. This method is generic and applies to all kinds of CELP-based transcoding, regardless of differences in frame or subframe size or in the CELP codes of the source and destination.
  • the LP parameters are still mapped directly from the source codec to the destination codec and the decoded pitch lag is used as the open-loop pitch estimation for the destination codec.
  • the closed-loop pitch search is still performed in the excitation domain.
  • the fixed-codebook search is performed in a filtered excitation space domain. The choice of the type of filter, and whether the target vector is converted to this domain for one or both searches, will depend on the desired quality and complexity requirements.
  • Various filters are applicable, including a lowpass filter to smooth irregularities, a filter that compensates for differences between the characteristics of the excitation in the source and destination codecs, and a filter which enhances perceptually important signal features.
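As an illustration of the first option, a lowpass smoothing of the excitation-domain target might look like the sketch below. The three filter taps are purely hypothetical; a real implementation would choose the filter to match the desired quality/complexity trade-off.

```python
import numpy as np

def filter_excitation_target(x, h=None):
    """Map an excitation-domain target into a filtered excitation space.

    The filter is codec- and quality-dependent; here a short symmetric
    lowpass FIR (hypothetical taps) smooths irregularities before the
    fixed-codebook search.
    """
    if h is None:
        h = np.array([0.15, 0.7, 0.15])  # illustrative lowpass taps
    # Convolve and truncate to the subframe length.
    return np.convolve(x, h)[:len(x)]

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
y = filter_excitation_target(x)
```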
  • VAD Voice Activity Detectors
  • CNG comfort noise generation
  • the GSM-AMR codec uses eight source codecs with bit-rates of 12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15 and 4.75 kbit/s.
  • the codec is based on the code-excited linear predictive (CELP) coding model.
  • CELP code-excited linear predictive
  • a 10th order linear prediction (LP), or short-term, synthesis filter is used.
  • the long-term, or pitch, synthesis filter is implemented using the so-called adaptive codebook approach.
  • the excitation signal at the input of the short- term LP synthesis filter is constructed by adding two excitation vectors from adaptive and fixed (innovative) codebooks. The speech is synthesized by feeding the two properly chosen vectors from these codebooks through the short-term synthesis filter.
  • the optimum excitation sequence in a codebook is chosen using an analysis-by-synthesis search procedure in which the error between the original and synthesized speech is minimized according to a perceptually weighted distortion measure.
  • the perceptual weighting filter used in the analysis-by-synthesis search technique uses the unquantized LP parameters.
  • the coder operates on speech frames of 20 ms corresponding to 160 samples at the sampling frequency of 8 000 sample/s. At each 160 speech samples, the speech signal is analysed to extract the parameters of the CELP model (LP filter coefficients, adaptive and fixed codebooks' indices and gains). These parameters are encoded and transmitted. At the decoder, these parameters are decoded and speech is synthesized by filtering the reconstructed excitation signal through the LP synthesis filter.
  • LP analysis is performed twice per frame for the 12.2 kbit/s mode and once for the other modes.
  • the two sets of LP parameters are converted to line spectrum pairs (LSP) and jointly quantized using split matrix quantization (SMQ) with 38 bits.
  • the single set of LP parameters is converted to line spectrum pairs (LSP) and vector quantized using split vector quantization (SVQ).
  • the speech frame is divided into four subframes of 5 ms each (40 samples).
  • the adaptive and fixed codebook parameters are transmitted every subframe.
  • the quantized and unquantized LP parameters, or their interpolated versions, are used depending on the subframe.
  • the target signal is computed by filtering the LP residual through the weighted synthesis filter with the initial states of the filters having been updated by filtering the error between LP residual and excitation (this is equivalent to the common approach of subtracting the zero input response of the weighted synthesis filter from the weighted speech signal).
  • the target signal is updated by removing the adaptive codebook contribution (filtered adaptive codevector), and this new target is used in the fixed algebraic codebook search (to find the optimum innovation codeword).
  • the gains of the adaptive and fixed codebooks are scalar quantized with 4 and 5 bits respectively, or vector quantized with 6-7 bits (with moving average (MA) prediction applied to the fixed codebook gain).
  • Finally, the filter memories are updated (using the determined excitation signal) for finding the target signal in the next subframe.
  • [0097] In each 20 ms speech frame, a bit allocation of 95, 103, 118, 134, 148, 159, 204 or 244 bits is produced, corresponding to a bit-rate of 4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2 or 12.2 kbps.
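The bit allocations quoted above determine the bit-rates directly, since GSM-AMR frames last 20 ms (50 frames per second). A minimal sketch of that correspondence, using only figures from the text:

```python
# Bits per 20 ms frame for each GSM-AMR mode (figures from the text above).
AMR_BITS = {4.75: 95, 5.15: 103, 5.90: 118, 6.70: 134,
            7.40: 148, 7.95: 159, 10.2: 204, 12.2: 244}

def bitrate_kbps(bits_per_frame, frame_ms=20):
    """Bit-rate implied by a fixed frame size: (frames/s * bits/frame) / 1000."""
    return bits_per_frame * (1000.0 / frame_ms) / 1000.0
```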
  • the G.723.1 coder has two bit rates associated with it, 5.3 and 6.3 kbps. Both rates are a mandatory part of the encoder and decoder. It is possible to switch between the two rates on any 30 ms frame boundary.
  • the coder is based on the principles of linear prediction analysis-by-synthesis coding and attempts to minimize a perceptually weighted error signal.
  • the encoder operates on blocks (frames) of 240 samples each, equal to 30 ms at an 8 kHz sampling rate. Each block is first high-pass filtered to remove the DC component and then divided into four sub-frames of 60 samples each. For every sub-frame, a 10th order linear prediction coder (LPC) filter is computed using the unprocessed input signal.
  • LPC linear prediction coder
  • the LPC filter for the last sub-frame is quantized using a Predictive Split Vector Quantizer (PSVQ).
  • PSVQ Predictive Split Vector Quantizer
  • the unquantized LPC coefficients are used to construct the short term perceptual weighting filter, which is used to filter the entire frame and to obtain the perceptually weighted speech signal.
  • L OL the open loop pitch period
  • This pitch estimation is performed on blocks of 120 samples.
  • the pitch period is searched in the range from 18 to 142 samples.
  • the speech is processed on a 60 samples per sub-frame basis.
  • a harmonic noise shaping filter is constructed.
  • the combination of the LPC synthesis filter, the formant perceptual weighting filter, and the harmonic noise shaping filter is used to create an impulse response. The impulse response is then used for further computations.
  • a closed loop pitch predictor is computed.
  • a fifth order pitch predictor is used.
  • the pitch period is computed as a small differential value around the open loop pitch estimate.
  • the contribution of the pitch predictor is then subtracted from the initial target vector.
  • Both the pitch period and the differential value are transmitted to the decoder.
  • the non periodic component of the excitation is approximated.
  • MP-MLQ multi-pulse maximum likelihood quantization
  • ACELP algebraic codebook excitation
  • FIG. 17 is a block diagram illustrating a transcoder from GSM-AMR to G.723.1 according to a first embodiment of the present invention.
  • the GSM-AMR bitstream consists of 20ms frames of length from 244 bits (31 bytes) for the highest rate mode 12.2kbps, to 95 bits (12 bytes) for the lowest rate mode 4.75kbps codec. There are eight modes in total.
  • Each of the eight GSM-AMR operating modes produces a different bitstream. Since a G.723.1 frame, being 30 ms in duration, spans one and a half GSM-AMR frames, two GSM-AMR frames are needed to produce a single G.723.1 frame.
  • the next G.723.1 frame can then be produced on arrival of a third GSM-AMR frame.
  • two G.723.1 frames are produced for every three GSM-AMR frames processed.
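The 3:2 frame relationship above follows purely from sample counts (160 samples per 20 ms GSM-AMR frame versus 240 samples per 30 ms G.723.1 frame), as this sketch shows:

```python
def g723_frames_available(n_amr_frames):
    """Complete G.723.1 frames producible from n buffered GSM-AMR frames.

    A GSM-AMR frame carries 160 samples (20 ms at 8 kHz); a G.723.1
    frame needs 240 samples (30 ms), hence two output frames per three
    input frames.
    """
    return (n_amr_frames * 160) // 240
```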
  • the excitation vector needs to be formed by combining the adaptive codeword and the fixed (algebraic) codeword.
  • the adaptive codeword is constructed using a 60-tap interpolation filter based on the 1/6th or 1/3rd resolution pitch lag parameter.
  • the fixed codeword is then constructed as defined by the standard and the excitation formed as x[n] = g_p·v[n] + g_c·c[n], where
  • x is the excitation
  • v is the interpolated adaptive codeword
  • c is the fixed codevector
  • g_p and g_c are the adaptive and fixed codebook gains respectively.
  • This excitation is then used to update the memory state of the GSM-AMR unpacker, and by the G.723.1 bitstream packer for mapping.
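The combination of the two codevectors with their decoded gains, x[n] = g_p·v[n] + g_c·c[n], can be sketched as follows (v, c, g_p, g_c as defined in the surrounding text):

```python
import numpy as np

def reconstruct_excitation(v, c, g_p, g_c):
    """Subframe excitation x[n] = g_p*v[n] + g_c*c[n], combining the
    interpolated adaptive codeword v and the fixed codevector c with
    their decoded gains."""
    return g_p * np.asarray(v, float) + g_c * np.asarray(c, float)

x = reconstruct_excitation([1.0, 0.0], [0.0, 1.0], 0.8, 2.5)
```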
  • the adaptive codeword is found for each subframe by forming a linear combination of excitation vectors, and finding the optimal match to the target excitation signal, x[], constructed by the GSM-AMR unpacker.
  • v[] is the reconstructed adaptive codeword
  • u[] is the previous excitation buffer
  • L is the (integer) pitch lag between 18 and 143 inclusive (determined by the GSM-AMR unpacking module)
  • β_j are lag weighting values which determine the gain and lag phase.
  • the vector table of β_j values is searched to optimize the match between the adaptive codeword, v[], and the target excitation vector, x[].
  • x2[] is the target for the fixed codebook search
  • x[] is the excitation derived from the GSM-AMR unpacking
  • v[] is the (interpolated and scaled) adaptive codeword.
  • the fixed codebooks are different for the high and low rate modes of the G.723.1 codec.
  • the high rate uses an MP-MLQ codebook which allows six pulses per subframe for even subframes, and five pulses per subframe for odd subframes, in any position.
  • the low rate mode uses an algebraic codebook (ACELP) which allows four pulses per subframe in restricted locations. Both codebooks use a grid flag to indicate whether the codewords should be shifted by one position.
  • ACELP algebraic codebook
  • index n is set relative to the first sample of the current subframe, and the other parameters have been defined previously.
  • All the mapped parameters are encoded into the outgoing G.723.1 bitstream, and the system is ready to process the next frame.
  • FIG. 18 is a block diagram illustrating a transcoder of G.723.1 to GSM-AMR according to a second embodiment of the present invention.
  • the G.723.1 bitstream consists of frames of length 192 bits (24 bytes) for the high rate (6.3kbps) codec, or 160 bits (20 bytes) for the low rate (5.3kbps) codec.
  • the frames have a very similar structure and differ only in the fixed codebook parameter representation.
  • the 10 LSP parameters used for modeling the short-term vocal tract filter are encoded in the same way for both high and low rates and can be extracted from bits 2 to 25 of the G.723.1 frame.
  • the encoding uses three lookup tables, and the LSP vector is reconstructed by joining the three sub-vectors derived from these tables.
  • Each table has 256 vector entries; the first two tables have 3-element sub-vectors, and the last table has 4-element sub-vectors. Combined, these give a 10-element LSP vector.
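A sketch of the joining step, with randomly filled stand-ins for the three G.723.1 lookup tables (the real table values are fixed by the standard; only the 3 + 3 + 4 = 10 sub-vector dimensions and the 256-entry size are taken from the text above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the three G.723.1 LSP lookup tables: 256 entries each,
# with sub-vector dimensions 3, 3 and 4 (real values fixed by the standard).
TABLES = [rng.standard_normal((256, d)) for d in (3, 3, 4)]

def decode_lsp(indices):
    """Join the three looked-up sub-vectors into one 10-element LSP vector."""
    return np.concatenate([TABLES[k][i] for k, i in enumerate(indices)])

lsp = decode_lsp([0, 255, 17])
```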
  • the adaptive codeword is constructed for each subframe by combining previous excitation vectors.
  • the combination is a weighted sum of the previous excitation at five successive lags. This is best explained via the equation v[n] = Σ_j β_j·u[n − L + j], summed over the five lag taps, where
  • v[] is the reconstructed adaptive codeword
  • u[] is the previous excitation buffer
  • L is the (integer) pitch lag between 18 and 143 inclusive
  • β_j are lag weighting values determined by the pitch gain parameter.
  • the lag parameter, L is extracted directly from the bitstream.
  • the first and third subframes use the full dynamic range of the lag, whereas the second and fourth subframes encode the lag as an offset from the previous subframe.
  • the lag weighting parameters, β_j, are determined by table lookup. As a consequence of the adaptive codeword unpacking, an approximation to a fractional pitch lag and its associated gain can then be calculated.
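The five-tap weighted sum over the previous excitation can be sketched as follows. The exact tap alignment relative to the lag L is an illustrative assumption here (the G.723.1 standard fixes it), and the sketch requires the lag to exceed the reconstructed length so that every read falls in the past:

```python
import numpy as np

def adaptive_codeword(u, L, beta, n_sub):
    """v[n] = sum_j beta[j] * u[end - L + n + j - 2]: a five-tap weighted
    sum of the previous excitation around integer lag L. Requires
    n_sub <= L - 2 so every read falls in the past; the codec itself
    re-uses freshly built samples for shorter lags.
    """
    u = np.asarray(u, float)
    v = np.zeros(n_sub)
    for n in range(n_sub):
        for j in range(5):
            v[n] += beta[j] * u[len(u) - L + n + j - 2]
    return v

v = adaptive_codeword(np.ones(100), L=18,
                      beta=[0.1, 0.2, 0.4, 0.2, 0.1], n_sub=10)
```

Since the taps sum to 1, a constant previous excitation is reproduced unchanged.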
  • the fixed codebooks are different for the high and low rate modes of the G.723.1 codec.
  • the high rate mode uses an MP-MLQ codebook which allows six pulses per subframe for even subframes, and five pulses per subframe for odd subframes, in any position.
  • the low rate mode uses an algebraic codebook (ACELP) which allows four pulses per subframe in restricted locations. Both codebooks use a grid flag to indicate whether the codewords should be shifted by one position. Algorithms for generating the codewords from the encoded bitstream are given in the G.723.1 standard documentation.
  • the (persistent) memory for the codec needs to be updated on completion of processing each subframe.
  • the GSM-AMR parameter mapping part of the transcoder takes the interpolated CELP parameters as explained above, and uses them as a basis for searching the GSM-AMR parameter space.
  • the LSP parameters are simply encoded as received, whilst the other parameters, namely excitation and pitch lag, are used as estimates for a local search in the GSM-AMR space.
  • the following figure shows the main operations which need to take place on each subframe in order to complete the transcoding.
  • the adaptive codeword is formed by searching the vector of previous excitations up to a maximum lag of 143 for a best match with the target excitation.
  • the target excitation is determined from the interpolated subframes.
  • the previous excitation can be interpolated by 1/6 or 1/3 intervals depending on the mode.
  • the optimal lag is found by searching a small region about the pitch lag determined from the G.723.1 unpacking module. This region is searched to find the optimal integer lag, and then refined to determine the fractional part of the lag.
  • the procedure uses a 24-tap interpolation filter to perform the fractional search.
  • the first and third subframes are treated differently from the second and fourth.
  • the interpolated adaptive codeword, v[], is then formed as v[n] = Σ_{i=0..9} ( u[n − L − i]·b60(t + 6i) + u[n − L + 1 + i]·b60(6 − t + 6i) ), where
  • u[] is the previous excitation buffer
  • L is the (integer) pitch lag
  • t is the fractional pitch lag in 1/6th resolution
  • b60 is the 60-tap interpolation filter.
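One simple way to realise such a fractional-lag read of the previous excitation is a windowed-sinc fractional-delay filter. The taps below are a hypothetical stand-in for the standard's fixed b60 table, normalised so that a constant input is reproduced exactly:

```python
import numpy as np

def frac_delay_taps(frac, half=6):
    """Hypothetical windowed-sinc fractional-delay taps. The standard
    specifies a fixed 60-tap b60 table; this is only an illustration,
    normalised so a constant signal passes through unchanged."""
    n = np.arange(-half, half + 1)
    taps = np.sinc(n - frac) * np.hamming(len(n))
    return taps / taps.sum()

def interpolate_past(u, L, t, n_sub=40):
    """Read the previous-excitation buffer u at fractional lag L + t/6."""
    taps = frac_delay_taps(t / 6.0)
    half = (len(taps) - 1) // 2
    v = np.zeros(n_sub)
    for n in range(n_sub):
        base = len(u) - L + n          # integer-lag read position
        for k, h in enumerate(taps):
            v[n] += h * u[base + k - half]
    return v

v = interpolate_past(np.ones(200), L=50, t=3)
```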
  • the pitch gain is calculated and quantised so that it can be encoded and sent to the decoder, and also for calculation of the fixed codebook target vector. All modes calculate the pitch gain in the same way for each subframe,
  • Once the adaptive codebook component of the excitation is found, this component is subtracted from the excitation to leave a residual ready for encoding by the fixed codebook.
  • the residual signal for each subframe is calculated as x2[n] = x[n] − g·v[n], where
  • x2[] is the target for the fixed codebook search
  • x[] is the target for the adaptive codebook search
  • g is the quantised pitch gain
  • v[] is the (interpolated) adaptive codeword.
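A sketch of the pitch-gain computation and target update described above. The correlation-ratio gain with a 1.2 ceiling is the usual CELP convention, assumed here rather than taken from the text, and the gain quantisation tables are omitted:

```python
import numpy as np

def fixed_codebook_target(x, v, g_max=1.2):
    """Pitch gain g = <x,v>/<v,v>, clamped to [0, g_max], and the
    fixed-codebook target x2 = x - g*v (quantisation of g omitted)."""
    x = np.asarray(x, float)
    v = np.asarray(v, float)
    g = float(np.dot(x, v) / max(np.dot(v, v), 1e-12))
    g = min(max(g, 0.0), g_max)
    return g, x - g * v

g, x2 = fixed_codebook_target([0.5, 1.0, 0.0], [1.0, 2.0, 0.0])
```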
  • the fixed codebook search is designed to find the best match to the residual signal after the adaptive codebook component has been removed. This is important for unvoiced speech and for priming of the adaptive codebook.
  • the codebook search used in transcoding can be simpler than the one used in the codecs, since a great deal of analysis of the original speech has already taken place. Also, the signal on which the codebook search is performed is the reconstructed excitation signal instead of synthesized speech, and therefore already possesses a structure more amenable to fixed codebook coding.
  • the gain for the fixed codebook is quantised using a moving average prediction based on the energy of the previous four subframes.
  • the correction factor between the actual and predicted gain is quantised (via table lookup) and sent to the decoder. Exact details are given in the GSM-AMR standard documentation.
  • the (persistent) memory for the codec needs to be updated on completion of processing each subframe. This is done by first shifting the previous excitation buffer, u[], by 40 samples (i.e. one subframe), so that the oldest samples are discarded, and then copying the excitation from the current subframe into the top 40 samples of the buffer,
  • index n is set relative to the first sample of the current subframe, and the other parameters have been defined previously.
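The buffer update described above amounts to a 40-sample shift plus copy; as a sketch:

```python
import numpy as np

def update_excitation_buffer(u, x_sub):
    """Shift the previous-excitation buffer down by one 40-sample
    subframe, discarding the oldest samples, and copy the current
    subframe's excitation into the top 40 samples."""
    x_sub = np.asarray(x_sub, float)
    assert len(x_sub) == 40
    return np.concatenate([np.asarray(u, float)[40:], x_sub])

u = update_excitation_buffer(np.arange(160.0), np.zeros(40))
```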

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP03705707A 2002-01-08 2003-01-08 Transcodierungsschema zwischen auf celp basierenden sprachcodes Withdrawn EP1464047A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US34727002P 2002-01-08 2002-01-08
US347270P 2002-01-08
PCT/US2003/000649 WO2003058407A2 (en) 2002-01-08 2003-01-08 A transcoding scheme between celp-based speech codes

Publications (2)

Publication Number Publication Date
EP1464047A2 true EP1464047A2 (de) 2004-10-06
EP1464047A4 EP1464047A4 (de) 2005-12-07

Family

ID=23363030

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03705707A Withdrawn EP1464047A4 (de) 2002-01-08 2003-01-08 Transcodierungsschema zwischen auf celp basierenden sprachcodes

Country Status (6)

Country Link
EP (1) EP1464047A4 (de)
JP (1) JP2005515486A (de)
KR (1) KR20040095205A (de)
CN (1) CN100527225C (de)
AU (1) AU2003207498A1 (de)
WO (1) WO2003058407A2 (de)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7263481B2 (en) 2003-01-09 2007-08-28 Dilithium Networks Pty Limited Method and apparatus for improved quality voice transcoding
ATE368279T1 (de) 2003-05-01 2007-08-15 Nokia Corp Verfahren und vorrichtung zur quantisierung des verstärkungsfaktors in einem breitbandsprachkodierer mit variabler bitrate
FR2867648A1 (fr) 2003-12-10 2005-09-16 France Telecom Transcodage entre indices de dictionnaires multi-impulsionnels utilises en codage en compression de signaux numeriques
US20050258983A1 (en) * 2004-05-11 2005-11-24 Dilithium Holdings Pty Ltd. (An Australian Corporation) Method and apparatus for voice trans-rating in multi-rate voice coders for telecommunications
FR2871247B1 (fr) 2004-06-04 2006-09-15 Essilor Int Lentille ophtalmique
JP2008511852A (ja) * 2004-08-31 2008-04-17 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ トランスコードのための方法および装置
FR2880724A1 (fr) 2005-01-11 2006-07-14 France Telecom Procede et dispositif de codage optimise entre deux modeles de prediction a long terme
WO2007064256A2 (en) * 2005-11-30 2007-06-07 Telefonaktiebolaget Lm Ericsson (Publ) Efficient speech stream conversion
US7728741B2 (en) * 2005-12-21 2010-06-01 Nec Corporation Code conversion device, code conversion method used for the same and program thereof
US7826536B2 (en) * 2005-12-29 2010-11-02 Nokia Corporation Tune in time reduction
EP1903559A1 (de) * 2006-09-20 2008-03-26 Deutsche Thomson-Brandt Gmbh Verfahren und Vorrichtung zur Transkodierung von Tonsignalen
EP1933306A1 (de) * 2006-12-14 2008-06-18 Nokia Siemens Networks Gmbh & Co. Kg Verfahren und Vorrichtung zur Transcodierung eines Sprachsignals von einem ersten CELP-Format in ein zweites CELP-Format
JP5264913B2 (ja) * 2007-09-11 2013-08-14 ヴォイスエイジ・コーポレーション 話声およびオーディオの符号化における、代数符号帳の高速検索のための方法および装置
CN101459833B (zh) * 2007-12-13 2011-05-11 安凯(广州)微电子技术有限公司 一种用于相似视频码流的转码方法及其转码装置
CN101572093B (zh) * 2008-04-30 2012-04-25 北京工业大学 一种转码方法和装置
US8521520B2 (en) * 2010-02-03 2013-08-27 General Electric Company Handoffs between different voice encoder systems
WO2014202786A1 (en) 2013-06-21 2014-12-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating an adaptive spectral shape of comfort noise
LT3511935T (lt) 2014-04-17 2021-01-11 Voiceage Evs Llc Būdas, įrenginys ir kompiuteriu nuskaitoma neperkeliama atmintis garso signalų tiesinės prognozės kodavimui ir dekodavimui po perėjimo tarp kadrų su skirtingais mėginių ėmimo greičiais
CN104167210A (zh) * 2014-08-21 2014-11-26 华侨大学 一种轻量级的多方会议混音方法和装置
CN117476022A (zh) * 2022-07-29 2024-01-30 荣耀终端有限公司 声音编解码方法以及相关装置、系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08146997A (ja) * 1994-11-21 1996-06-07 Hitachi Ltd 符号変換装置および符号変換システム
EP1202251A2 * 2000-10-30 2002-05-02 Fujitsu Limited Transkodierer mit Verhütung von Kaskadenkodierung von Sprachsignalen
EP1363274A1 (de) * 2001-02-02 2003-11-19 NEC Corporation Einrichtung und verfahren zum konvertieren von sprachcodesequenzen

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5457685A (en) * 1993-11-05 1995-10-10 The United States Of America As Represented By The Secretary Of The Air Force Multi-speaker conferencing over narrowband channels
US5758256A (en) * 1995-06-07 1998-05-26 Hughes Electronics Corporation Method of transporting speech information in a wireless cellular system
US5995923A (en) 1997-06-26 1999-11-30 Nortel Networks Corporation Method and apparatus for improving the voice quality of tandemed vocoders
JP3235654B2 (ja) * 1997-11-18 2001-12-04 日本電気株式会社 無線電話装置
US6260009B1 (en) * 1999-02-12 2001-07-10 Qualcomm Incorporated CELP-based to CELP-based vocoder packet translation
US20030028386A1 (en) * 2001-04-02 2003-02-06 Zinser Richard L. Compressed domain universal transcoder
KR100434275B1 (ko) * 2001-07-23 2004-06-05 엘지전자 주식회사 패킷 변환 장치 및 그를 이용한 패킷 변환 방법


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KYUNG TAE KIM ET AL: "An efficient transcoding algorithm for G.723.1 and EVRC speech coders" IEEE 54TH VEHICULAR TECHNOLOGY CONFERENCE. VTC FALL 2001. PROCEEDINGS 7-11 OCT. 2001 ATLANTIC CITY, NJ, USA; [IEEE VEHICULAR TECHNOLGY CONFERENCE], IEEE 54TH VEHICULAR TECHNOLOGY CONFERENCE. VTC FALL 2001. PROCEEDINGS (CAT. NO.01CH37211) IEEE PISCATA, vol. 3, 7 October 2001 (2001-10-07), pages 1561-1564, XP010562224 ISBN: 978-0-7803-7005-0 *
No further relevant documents disclosed *
See also references of WO03058407A2 *

Also Published As

Publication number Publication date
KR20040095205A (ko) 2004-11-12
EP1464047A4 (de) 2005-12-07
WO2003058407A2 (en) 2003-07-17
AU2003207498A1 (en) 2003-07-24
AU2003207498A8 (en) 2003-07-24
WO2003058407A3 (en) 2003-12-24
CN100527225C (zh) 2009-08-12
JP2005515486A (ja) 2005-05-26
CN1701353A (zh) 2005-11-23

Similar Documents

Publication Publication Date Title
US6829579B2 (en) Transcoding method and system between CELP-based speech codes
US6260009B1 (en) CELP-based to CELP-based vocoder packet translation
KR100837451B1 (ko) 향상된 품질의 음성 변환부호화를 위한 방법 및 장치
JP6692948B2 (ja) 異なるサンプリングレートを有するフレーム間の移行による音声信号の線形予測符号化および復号のための方法、符号器および復号器
EP1464047A2 (de) Transcodierungsschema zwischen auf celp basierenden sprachcodes
US6055496A (en) Vector quantization in celp speech coder
JP2007537494A (ja) 遠隔通信のためのマルチレート音声コーダにおける音声レート変換の方法及び装置
US20050053130A1 (en) Method and apparatus for voice transcoding between variable rate coders
JP2003044097A (ja) 音声信号および音楽信号を符号化する方法
JPH10187196A (ja) 低ビットレートピッチ遅れコーダ
EP1554809A1 (de) Verfahren und vorrichtung zur schnellen celp-if-parameterabbildung
US20040111257A1 (en) Transcoding apparatus and method between CELP-based codecs using bandwidth extension
JPH1063297A (ja) 音声符号化方法および装置
US7684978B2 (en) Apparatus and method for transcoding between CELP type codecs having different bandwidths
JPH0341500A (ja) 低遅延低ビツトレート音声コーダ
Bakır Compressing English Speech Data with Hybrid Methods without Data Loss
KR20060082985A (ko) 음성패킷 전송율 변환 장치 및 방법

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040715

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO

A4 Supplementary search report drawn up and despatched

Effective date: 20051020

17Q First examination report despatched

Effective date: 20090224

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20100413