CN101887727B - Speech code data conversion system and method from HELP code to MELP (Mixed Excitation Linear Prediction) code - Google Patents


Info

Publication number
CN101887727B
CN101887727B (application CN2010101628414A)
Authority
CN
China
Prior art keywords
coding
component
code
melp
help
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101628414A
Other languages
Chinese (zh)
Other versions
CN101887727A (en)
Inventor
吴玉成 (Wu Yucheng)
陈峰 (Chen Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN2010101628414A
Publication of CN101887727A
Application granted
Publication of CN101887727B
Legal status: Expired - Fee Related

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to the technical field of speech transcoding, and in particular to a system and method for converting speech code data from HELP (Harmonic Excited Linear Prediction) coding to MELP (Mixed Excitation Linear Prediction) coding. The conversion system comprises an error-correction module, a code-component separation module, a code inverse-quantization module, a code-conversion module, a quantization module and a code-multiplexing module. The conversion method comprises the following steps: performing error detection and correction on the HELP-coded speech data stream; separating out the code components needed to reconstruct the speech and inverse-quantizing them; converting and quantizing each code component in turn; and finally framing the result according to the MELP speech-coding-algorithm standard to form an MELP-coded speech data stream. Compared with the prior art, the invention requires little computation and introduces little distortion.

Description

Speech code data conversion system and method from HELP coding to MELP coding
Technical field
The present invention relates to the technical field of speech transcoding, and in particular to a system and method for converting speech code data from HELP coding to MELP coding.
Background technology
Speech coding refers to the compression and decompression of speech signals. The digital storage and transmission of speech is widely used in communication systems, and many speech-coding techniques exist for the digital transmission of speech signals. Each aims to make storage and transmission more convenient, to reduce the transmission rate as far as a given application allows, and to guarantee a certain speech quality. Current speech coders each adopt their own speech-coding-algorithm standard; two such algorithms are as follows:
1. HELP (Harmonic Excited Linear Prediction): code rate 2.4 kb/s.
The basic principle of the HELP speech-coding algorithm is shown in Fig. 1, where s(n) is the raw speech. The HELP algorithm proceeds in three main steps: first, the input speech is passed through a linear-prediction filter to obtain the residual signal e(n); the residual is then analyzed and resynthesized under a sinusoidal model, yielding the synthetic residual ê(n); finally, ê(n) is passed through the inverse linear-prediction filter to obtain the reconstructed speech ŝ(n).
As stated, the HELP algorithm transforms the signal into the residual domain through the prediction filter and then analyzes the residual with a sinusoidal model. The encoding analysis of the HELP vocoder therefore has two main parts. The first is computing the coefficients of the linear-prediction filter: the HELP vocoder uses a 10th-order prediction filter, and the coefficients are computed with the Levinson-Durbin algorithm. The second is extracting the sinusoidal-model parameters, chiefly the pitch period and the amplitudes Ak, Bk and phases of the component sine waves. HELP extracts four main parameters: the linear-prediction coefficients (LPC), the pitch period, the spectral-amplitude vector, and the voicing degree.
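The Levinson-Durbin recursion used for the 10th-order LPC analysis can be sketched as follows. This is a generic textbook implementation, not code from the patent; the autocorrelation values are assumed to be computed beforehand from the windowed speech frame.

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations for
    prediction coefficients a[1..order] given autocorrelations r[0..order].
    Returns (coefficients, final prediction error)."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for this stage
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        # symmetric coefficient update
        a_new = a[:]
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)
    return a[1:], err
```

For an AR(1) source with autocorrelation r_k = 0.5^k, the recursion recovers a single nonzero coefficient of -0.5, with the remaining coefficients zero.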
2. MELP (Mixed Excitation Linear Prediction): a U.S. federal standard with a code rate of 2.4 kb/s.
The basic principle of the MELP speech-coding algorithm is as follows. The input speech passes through a fourth-order type-II Chebyshev high-pass filter that removes 50 Hz mains interference. A multi-band mixed-excitation analysis then performs the voiced/unvoiced decision so that the pitch can be extracted accurately. To improve the quality of the synthetic speech, MELP classifies each frame into one of three states: unvoiced, voiced, or jittery voiced. The linear-prediction stage analyzes both the input speech and the residual signal. When the periodicity of a voiced segment is poor, an aperiodic flag tells the decoder to excite the synthesis filter with a matching unstable glottal-pulse source. After error-correction coding, the separately encoded parameters are framed with their respective bit allocations to form the compressed bit stream of the current speech frame. MELP extracts six main parameters: the linear-prediction coefficients (LPC), the pitch period, the Fourier-series magnitudes, the gain (two per frame), the subband voicing strengths, and the aperiodic flag.
In many cases the coding algorithms used by different vocoders are incompatible. When two communication systems encode speech under different speech-coding-algorithm standards, interworking between them requires a bit-stream conversion. The most common approach today is the tandem scheme, in which the target data is first fully decoded and then re-encoded. This traditional conversion scheme has serious drawbacks: heavy computation, and increased speech distortion caused by the second compression and decompression. A simple and effective scheme is therefore needed for converting coded data between two different speech-coding algorithms, so that different communication systems can interwork directly.
Summary of the invention
In view of this, to address the above problem, an object of the present invention is to provide a system for converting speech code data from HELP coding to MELP coding with reduced computation and distortion.
This object is achieved as follows. The system for converting speech code data from HELP coding to MELP coding comprises:
an error-correction module, which receives the HELP-coded speech data stream and performs error detection and correction on it;
a code-component separation module, which separates the code components needed to reconstruct the speech from the error-corrected HELP-coded speech data stream;
a code inverse-quantization module, which inverse-quantizes the code components separated by the code-component separation module and outputs the inverse-quantized code components;
a code-conversion module, which converts each inverse-quantized code component output by the code inverse-quantization module into a code component conforming to the MELP speech-coding standard;
a quantization module, which quantizes the MELP code components output by the code-conversion module;
a code-multiplexing module, which frames the code components output by the quantization module according to the MELP speech-coding-algorithm standard to form the MELP-coded speech data stream.
Further, the code components needed to reconstruct the speech comprise the LSF parameters, the pitch period, the voicing degree, the power and the spectral shape.
Further, the code-conversion module comprises an LSF-parameter conversion submodule, a pitch-period conversion submodule, a voicing-degree conversion submodule, a power conversion submodule and a spectral-shape conversion submodule, which respectively convert the inverse-quantized LSF-parameter, pitch-period, voicing-degree, power and spectral-shape code components. The voicing-degree conversion submodule converts the voicing-degree component into two components, the subband voicing strengths and the aperiodic flag; the power conversion submodule converts the power component into the gain component; and the spectral-shape conversion submodule converts the spectral-shape component into the Fourier-magnitude component.
Another object of the present invention is to provide a method for converting speech code data from HELP coding to MELP coding with reduced computation and distortion.
The method for converting speech code data from HELP coding to MELP coding comprises the following steps:
1) performing error detection and correction on the HELP-coded speech data stream;
2) separating the code components needed to reconstruct the speech from the error-corrected HELP-coded speech data stream;
3) inverse-quantizing the code components separated in step 2) and outputting the inverse-quantized code components;
4) converting each inverse-quantized code component from step 3) into a code component conforming to the MELP speech-coding standard;
5) quantizing the MELP code components obtained in step 4);
6) framing the code components obtained in step 5) according to the MELP speech-coding-algorithm standard to form the MELP-coded speech data stream.
Further, the code components separated in step 2) comprise the LSF parameters, the pitch period, the voicing degree, the power and the spectral shape.
Further, step 4) comprises the following steps:
41) converting the LSF-parameter code component;
42) converting the pitch-period code component;
43) analyzing the voicing-degree code component;
44) analyzing the power code component;
45) analyzing the spectral-shape code component;
46) converting the voicing-degree code component into a subband-voicing-strength code component and an aperiodic-flag code component;
47) obtaining the gain code component from the power code component and the converted pitch-period code component;
48) converting the spectral-shape code component into a Fourier-magnitude code component.
Further, in step 41), the 10 inverse-quantized LSF parameters are sorted in ascending order and a spacing of at least 50 Hz is enforced between adjacent parameters; the resulting LSF vector is then adaptively smoothed. In step 5), the LSF parameters are quantized with multi-stage vector quantization.
Further, in step 42), the original pitch period is multiplied by a scale factor and rounded to obtain the converted pitch-period code component, the scale factor being delta = 2 - (p - 20)/150, where p is the pitch-period value. In step 5), the converted pitch-period code component is quantized with a 99-level uniform quantizer.
Further, step 43) judges whether the current data frame is unvoiced or voiced.
Step 46) specifically comprises the following steps:
461) if the current data frame is unvoiced, setting all five subband voicing strengths V_bp1 to V_bp5 to 0;
462) if the current data frame is voiced, setting the first subband voicing strength V_bp1 = 1, letting i = p_v/0.125 (p_v being the voicing degree), and judging whether i is an integer;
463) when i is an integer, setting V_bp2 = 0 and letting k = (p_v - 0.125)/0.25:
when k = 0, setting V_bp3 to V_bp5 all to 0;
when k = 1, two of the subband voicing strengths are 0, and V_bp3 to V_bp5 are set to 0 or 1 by a statistical method;
when k = 2, one of the subband voicing strengths is 0, and V_bp3 to V_bp5 are set to 0 or 1 by a statistical method;
when k = 3, setting V_bp3 to V_bp5 all to 1;
464) when i is not an integer, setting V_bp2 = 1 and letting k = (p_v - 0.125)/0.25, with the same four cases as in step 463);
465) when the current data frame is voiced, detecting the periodicity of the envelope; if the result is below a threshold, setting the aperiodic flag to 1, otherwise setting it to 0.
Further, in step 47), the power code component is converted into the gain code component by the following formula:

G_i = 10 lg(0.01 + W^2/L), i = 1, 2;

where G_i are the two gain code components, W is the power code component, and L is the analysis-window length: when the current frame is unvoiced, L is 120 samples; when the current frame is voiced, L is the smallest integer multiple of the pitch period not less than 120 samples, halved if it exceeds 320 samples.
In step 5), G1 is uniformly quantized with 3 bits, and G2 is uniformly quantized with 5 bits over the range 10 dB to 77 dB.
In step 48), the spectral-shape parameter is converted into the first 10 harmonic-amplitude vectors of the DFT magnitude spectrum of the current frame's residual signal; after normalization, these 10 harmonic amplitudes become the first 10 pitch-harmonic values of the MELP Fourier magnitudes.
In step 5), the Fourier magnitudes are uniformly quantized with 8 bits.
The beneficial effects of the invention are as follows. The invention dispenses with the pitch-period analysis of the input speech, including the integer and fractional pitch searches, and with other very time-consuming analysis steps of the encoder such as computing the LSF parameters and analyzing the residual signal; it likewise dispenses with the computation needed for HELP decoding. Instead, the MELP parameters are obtained from the HELP parameters through simple conversions, so the conversion cost is minimal. Overall, the proposed conversion scheme saves about 50% of the running time compared with first decoding HELP and then encoding MELP. And because it avoids the repeated conversion distortion that twice encoding and decoding inflicts on the speech-signal parameters, the speech quality is no worse than that of the synthetic speech obtained with the tandem approach.
Description of drawings
To make the objects, technical scheme and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 shows the basic principle of the HELP speech-coding algorithm;
Fig. 2 shows the structure of the system for converting speech code data from HELP coding to MELP coding.
Embodiment
The preferred embodiments of the present invention are described in detail below.
The present invention converts the inverse-quantized values of the LSF parameters, pitch period, voicing degree, power and spectral shape of the HELP speech-coding standard into the LSF parameters, pitch period, subband voicing strengths, aperiodic flag and Fourier-series magnitudes of the MELP speech-coding standard; a target signal is created from the reconstructed speech and the pitch-period synthesis signal, and the coding of the target signal is obtained.
Referring to Fig. 2, the system for converting speech code data from HELP coding to MELP coding comprises:
an error-correction module, which receives the HELP-coded speech data stream and performs error detection and correction on it;
a code-component separation module, which separates the code components needed to reconstruct the speech, comprising the LSF parameters, the pitch period, the voicing degree, the power and the spectral shape, from the error-corrected HELP-coded speech data stream;
a code inverse-quantization module, which inverse-quantizes the code components separated by the code-component separation module and outputs the inverse-quantized code components;
a code-conversion module, which converts each inverse-quantized code component output by the code inverse-quantization module into a code component conforming to the MELP speech-coding standard; the code-conversion module comprises an LSF-parameter conversion submodule, a pitch-period conversion submodule, a voicing-degree conversion submodule, a power conversion submodule and a spectral-shape conversion submodule, which respectively convert the inverse-quantized LSF-parameter, pitch-period, voicing-degree, power and spectral-shape code components; the voicing-degree conversion submodule converts the voicing-degree component into two components, the subband voicing strengths and the aperiodic flag; the power conversion submodule converts the power component into the gain component; and the spectral-shape conversion submodule converts the spectral-shape component into the Fourier-magnitude component;
a quantization module, which quantizes the MELP code components output by the code-conversion module;
a code-multiplexing module, which frames the code components output by the quantization module according to the MELP speech-coding-algorithm standard to form the MELP-coded speech data stream.
The method for converting speech code data from HELP coding to MELP coding comprises the following steps:
1) performing error detection and correction on the HELP-coded speech data stream;
2) separating the code components needed to reconstruct the speech, comprising the LSF parameters, the pitch period, the voicing degree, the power and the spectral shape, from the error-corrected HELP-coded speech data stream;
3) inverse-quantizing the code components separated in step 2) and outputting the inverse-quantized code components;
4) converting each inverse-quantized code component from step 3) into a code component conforming to the MELP speech-coding standard, specifically by the following steps:
41) converting the LSF-parameter code component: the 10 inverse-quantized LSF parameters are sorted in ascending order and a spacing of at least 50 Hz is enforced between adjacent parameters; the resulting LSF vector is then adaptively smoothed;
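Step 41) can be sketched as follows. The sorting and the 50 Hz minimum-spacing rule follow the text; the adaptive-smoothing stage is not specified in the patent and is omitted here, and pushing crowded values upward is one possible spacing policy among several.

```python
def condition_lsf(lsf_hz, min_gap_hz=50.0):
    """Sort inverse-quantized LSF parameters (in Hz) into ascending order
    and enforce a minimum spacing of min_gap_hz between adjacent
    parameters, pushing values upward where they crowd together."""
    out = sorted(lsf_hz)
    for i in range(1, len(out)):
        if out[i] - out[i - 1] < min_gap_hz:
            out[i] = out[i - 1] + min_gap_hz
    return out
```

For example, the unordered set [200, 100, 130, 600] Hz becomes [100, 150, 200, 600] Hz: the 130 Hz value is first sorted into place and then pushed up to keep the 50 Hz gap.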
42) converting the pitch-period code component: the original pitch period is multiplied by a scale factor and rounded to obtain the converted pitch-period code component, the scale factor being delta = 2 - (p - 20)/150, where p is the pitch-period value;
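Step 42) is a one-line conversion; a sketch follows, assuming the pitch period p is measured in samples and using Python's default rounding.

```python
def convert_pitch(p):
    """Scale the HELP pitch period p by delta = 2 - (p - 20)/150 and
    round to the nearest integer, per the conversion rule in step 42)."""
    delta = 2.0 - (p - 20.0) / 150.0
    return round(p * delta)
```

Note that the scale factor shrinks as the pitch period grows: p = 20 doubles to 40, while p = 170 (where delta = 1) is passed through unchanged.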
43) analyzing the voicing-degree code component and judging whether the current data frame is unvoiced or voiced;
44) analyzing the power code component;
45) analyzing the spectral-shape code component;
46) converting the voicing-degree code component into a subband-voicing-strength code component and an aperiodic-flag code component, specifically by the following steps:
461) if the current data frame is unvoiced, setting all five subband voicing strengths V_bp1 to V_bp5 to 0;
462) if the current data frame is voiced, setting the first subband voicing strength V_bp1 = 1, letting i = p_v/0.125 (p_v being the voicing degree), and judging whether i is an integer;
463) when i is an integer, setting V_bp2 = 0 and letting k = (p_v - 0.125)/0.25:
when k = 0, setting V_bp3 to V_bp5 all to 0;
when k = 1, two of the subband voicing strengths are 0, and V_bp3 to V_bp5 are set to 0 or 1 by a statistical method;
when k = 2, one of the subband voicing strengths is 0, and V_bp3 to V_bp5 are set to 0 or 1 by a statistical method;
when k = 3, setting V_bp3 to V_bp5 all to 1;
464) when i is not an integer, setting V_bp2 = 1 and letting k = (p_v - 0.125)/0.25, with the same four cases as in step 463);
465) when the current data frame is voiced, detecting the periodicity of the envelope; if the result is below a threshold, setting the aperiodic flag to 1, otherwise setting it to 0;
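The deterministic branches of steps 461) through 464) can be sketched as follows. The "statistical method" for the intermediate cases is not disclosed in the text, so this sketch substitutes a placeholder policy (voice the lowest remaining subbands first); reading the voicing degree p_v as the variable inside both the i and k expressions is likewise an interpretation of the source.

```python
def map_voicing(is_voiced, p_v):
    """Map the HELP voicing degree p_v to the five MELP subband voicing
    strengths [V_bp1..V_bp5] following steps 461)-464)."""
    if not is_voiced:
        return [0, 0, 0, 0, 0]        # step 461): unvoiced frame, all off
    v = [1, 0, 0, 0, 0]               # step 462): V_bp1 = 1 for voiced frames
    i = p_v / 0.125
    is_int = abs(i - round(i)) < 1e-9
    v[1] = 0 if is_int else 1         # steps 463)/464): V_bp2 from integrality of i
    k = max(0, min(3, round((p_v - 0.125) / 0.25)))
    for j in range(2, 2 + k):         # k of V_bp3..V_bp5 are voiced; the choice
        v[j] = 1                      # of which is a placeholder for the
    return v                          # undisclosed statistical rule
```

The two extreme cases are fixed by the text: k = 0 leaves the three upper subbands unvoiced, and k = 3 voices them all.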
47) obtaining the gain code component from the power code component and the converted pitch-period code component; the power code component is converted into the gain code component by the following formula:

G_i = 10 lg(0.01 + W^2/L), i = 1, 2;

where G_i are the two gain code components, W is the power code component, and L is the analysis-window length: when the current frame is unvoiced, L is 120 samples; when the current frame is voiced, L is the smallest integer multiple of the pitch period not less than 120 samples, halved if it exceeds 320 samples;
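Step 47) can be sketched as below. Reading the formula as G_i = 10 lg(0.01 + W^2/L), and the window rule as "smallest multiple of the pitch period that is at least 120 samples", are both interpretations of the source text.

```python
import math

def analysis_window_length(pitch, is_voiced):
    """Window length L per step 47): 120 samples for an unvoiced frame;
    for a voiced frame, the smallest integer multiple of the pitch period
    that is at least 120 samples, halved if it exceeds 320 samples."""
    if not is_voiced:
        return 120
    L = pitch * math.ceil(120 / pitch)
    if L > 320:
        L //= 2
    return L

def power_to_gain(w, L):
    """One per-frame gain: G = 10 lg(0.01 + W^2 / L)."""
    return 10.0 * math.log10(0.01 + (w * w) / L)
```

The 0.01 floor bounds the gain from below: a silent frame (W = 0) maps to -20 dB rather than minus infinity.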
48) converting the spectral-shape code component into a Fourier-magnitude code component: the spectral-shape parameter is converted into the first 10 harmonic-amplitude vectors of the DFT magnitude spectrum of the current frame's residual signal; after normalization, these 10 harmonic amplitudes become the first 10 pitch-harmonic values of the MELP Fourier magnitudes;
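Step 48) can be sketched as follows, starting from a residual frame rather than from the spectral-shape parameter itself. The text gives neither the bin-selection rule nor the normalization; picking the nearest DFT bin of each pitch harmonic and normalizing to unit RMS (the MELP convention) are assumptions of this sketch.

```python
import cmath
import math

def harmonic_amplitudes(residual, pitch, n_harm=10):
    """First n_harm pitch-harmonic amplitudes of the DFT magnitude
    spectrum of the residual, normalized to unit RMS."""
    N = len(residual)

    def dft_mag(k):
        # magnitude of the k-th DFT bin (naive O(N) per bin, fine for a sketch)
        return abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                       for n, x in enumerate(residual)))

    f0_bin = N / pitch                                  # bin of the fundamental
    mags = [dft_mag(round(h * f0_bin)) for h in range(1, n_harm + 1)]
    rms = math.sqrt(sum(m * m for m in mags) / n_harm)
    return [m / rms for m in mags] if rms > 0 else mags
```

For a pure sinusoid at the fundamental, only the first harmonic carries energy, so after unit-RMS normalization its amplitude is sqrt(n_harm).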
5) quantizing the MELP code components obtained in step 4): the LSF parameters are quantized with multi-stage vector quantization; the converted pitch-period code component is quantized with a 99-level uniform quantizer; the gain G1 is uniformly quantized with 3 bits, and the gain G2 with 5 bits over the range 10 dB to 77 dB; the Fourier magnitudes are uniformly quantized with 8 bits;
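The uniform quantizers of step 5) can be sketched generically. Only the 10 to 77 dB range for G2 is given in the text, so the ranges used for the other components would have to come from the respective coder standards; the helper below just shows the index/reconstruction mechanics.

```python
def uniform_quantize(x, lo, hi, levels):
    """Index of x under a uniform quantizer with `levels` reconstruction
    levels spanning [lo, hi]; x is clamped into the range first."""
    x = min(max(x, lo), hi)
    step = (hi - lo) / (levels - 1)
    return int(round((x - lo) / step))

def uniform_dequantize(idx, lo, hi, levels):
    """Reconstruction value for a quantizer index."""
    return lo + idx * (hi - lo) / (levels - 1)

# Per step 5): pitch uses a 99-level uniform quantizer; G1 uses 3 bits
# (8 levels); G2 uses 5 bits (32 levels) over 10-77 dB; Fourier
# magnitudes use 8 bits (256 levels).
g2_index = uniform_quantize(43.4, 10.0, 77.0, 32)
```

Round-trip error is bounded by half a step, about 1.08 dB for the 5-bit gain quantizer over 10 to 77 dB.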
6) after forward-error-correction coding, framing the code components obtained in step 5) according to the MELP speech-coding-algorithm standard to form the MELP-coded speech data stream.
The above are merely preferred embodiments of the present invention, to which the invention is not limited. Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to embrace them as well.

Claims (4)

1. A method for converting speech code data from HELP coding to MELP coding, characterized by comprising the following steps:
1) performing error detection and correction on the HELP-coded speech data stream;
2) separating the code components needed to reconstruct the speech from the error-corrected HELP-coded speech data stream;
3) inverse-quantizing the code components separated in step 2) and outputting the inverse-quantized code components;
4) converting each inverse-quantized code component from step 3) into a code component conforming to the MELP speech-coding standard;
5) quantizing the MELP code components obtained in step 4);
6) framing the code components obtained in step 5) according to the MELP speech-coding-algorithm standard to form the MELP-coded speech data stream;
step 4) comprising the following steps:
41) converting the LSF-parameter code component;
42) converting the pitch-period code component;
43) analyzing the voicing-degree code component;
44) analyzing the power code component;
45) analyzing the spectral-shape code component;
46) converting the voicing-degree code component into a subband-voicing-strength code component and an aperiodic-flag code component;
47) obtaining the gain code component from the power code component and the converted pitch-period code component;
48) converting the spectral-shape code component into a Fourier-magnitude code component;
wherein in step 42), the pitch period is multiplied by a scale factor and rounded to obtain the converted pitch-period code component, the scale factor being delta = 2 - (p - 20)/150, where p is the pitch-period value; and in step 5), the converted pitch-period code component is quantized with a 99-level uniform quantizer.
2. The method for converting speech code data from HELP coding to MELP coding according to claim 1, characterized in that the code components separated in step 2) comprise the LSF parameters, the pitch period, the voicing degree, the power and the spectral shape.
3. The method for converting speech code data from HELP coding to MELP coding according to claim 1, characterized in that in step 41), the 10 inverse-quantized LSF parameters are sorted in ascending order and a spacing of at least 50 Hz is enforced between adjacent parameters, the resulting LSF vector then being adaptively smoothed; and in step 5), the LSF parameters are quantized with multi-stage vector quantization.
4. The method for converting speech code data from HELP coding to MELP coding according to claim 1, characterized in that in step 47), the power code component is converted into the gain code component by the following formula:

G_i = 10 lg(0.01 + W^2/L), i = 1, 2;

where G_i are the two gain code components, W is the power code component, and L is the analysis-window length: when the current frame is unvoiced, L is 120 samples; when the current frame is voiced, L is the smallest integer multiple of the pitch period not less than 120 samples, halved if it exceeds 320 samples;
in step 5), the gain G1 is uniformly quantized with 3 bits, and the gain G2 with 5 bits over the range 10 dB to 77 dB;
in step 48), the spectral-shape code component is converted into the first 10 harmonic-amplitude vectors of the DFT magnitude spectrum of the current frame's residual signal, and after normalization these 10 harmonic amplitudes become the first 10 pitch-harmonic values of the MELP Fourier magnitudes;
in step 5), the Fourier magnitudes are uniformly quantized with 8 bits.
CN2010101628414A 2010-04-30 2010-04-30 Speech code data conversion system and method from HELP code to MELP (Mixed Excitation Linear Prediction) code Expired - Fee Related CN101887727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101628414A CN101887727B (en) 2010-04-30 2010-04-30 Speech code data conversion system and method from HELP code to MELP (Mixed Excitation Linear Prediction) code


Publications (2)

Publication Number Publication Date
CN101887727A CN101887727A (en) 2010-11-17
CN101887727B true CN101887727B (en) 2012-04-18

Family

ID=43073612


Country Status (1)

Country Link
CN (1) CN101887727B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103050122B (en) * 2012-12-18 2014-10-08 北京航空航天大学 MELP-based (Mixed Excitation Linear Prediction-based) multi-frame joint quantization low-rate speech coding and decoding method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5995923A (en) * 1997-06-26 1999-11-30 Nortel Networks Corporation Method and apparatus for improving the voice quality of tandemed vocoders
US20030195745A1 (en) * 2001-04-02 2003-10-16 Zinser, Richard L. LPC-to-MELP transcoder
CN1186765C (en) * 2002-12-19 2005-01-26 北京工业大学 Method for encoding 2.3kb/s harmonic wave excidted linear prediction speech
US8589151B2 (en) * 2006-06-21 2013-11-19 Harris Corporation Vocoder and associated method that transcodes between mixed excitation linear prediction (MELP) vocoders with different speech frame rates



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant (granted publication date: 20120418)
CF01 Termination of patent right due to non-payment of annual fee (termination date: 20150430)
EXPY Termination of patent right or utility model