CN101009096A - Fuzzy judgment method for sub-band surd and sonant

Fuzzy judgment method for sub-band surd and sonant

Info

Publication number
CN101009096A
Authority
CN
China
Prior art keywords
vbp
subband
value
vector
voiced sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200610165246XA
Other languages
Chinese (zh)
Other versions
CN101009096B (en)
Inventor
崔慧娟
唐昆
李晔
洪侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN200610165246XA priority Critical patent/CN101009096B/en
Publication of CN101009096A publication Critical patent/CN101009096A/en
Application granted granted Critical
Publication of CN101009096B publication Critical patent/CN101009096B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

This invention relates to a fuzzy decision method for sub-band voiced/unvoiced classification, characterized by the following steps: the voicing degrees of the four sub-bands other than the first are obtained with the existing band-pass voicing extraction method; the value of the autocorrelation function of the first sub-band-filtered signal at the pitch period is multiplied by a preset gain factor and taken as the voicing degree of the first sub-band, and if the product exceeds one it is clipped to one; the voicing degrees of all sub-bands are then jointly vector-quantized to obtain the quantized voicing membership of each sub-band, which is used in synthesizing the excitation signal.

Description

Fuzzy decision method for sub-band voiced and unvoiced sounds
Technical field
The invention belongs to the field of speech coding technology, and in particular to low-rate parametric speech coding.
Background technology
Speech coding is widely used in communication systems, in speech storage and playback, and in consumer products with speech functions. In recent years the International Telecommunication Union, several regional organizations and individual countries have issued a series of speech compression coding standards, achieving satisfactory speech quality at bit rates from 1.2 kb/s to 16 kb/s. Current research at home and abroad concentrates mainly on high-quality speech compression below 1.2 kb/s, intended chiefly for radio communication, secure communication and high-capacity speech storage and playback. Excitation synthesis is critically important in low-rate speech coding, and multi-band excitation synthesis is the principal technique in use today. Multi-band excitation synthesis relies mainly on the band-pass voicing degree parameters; the main steps currently used to obtain them are as follows:
(1) the input speech signal samples are divided into frames in time order;
(2) the pitch period parameter is extracted for the current frame;
(3) the residual spectral amplitude parameters are extracted for the current frame;
(4) the current frame of speech is filtered by five 6th-order Butterworth band-pass filters with passbands of 0-500, 500-1000, 1000-2000, 2000-3000 and 3000-4000 Hz respectively;
(5) the pitch period parameter obtained in step (2) is further refined using the speech signal after the first sub-band filter; the refined pitch period is denoted P_n;
(6) the value Vbp_{n,1} of the autocorrelation function of the speech signal after the first sub-band filter, evaluated at the refined pitch period P_n obtained in step (5), is computed;
(7) the values at lag P_n of the autocorrelation functions of the speech signals after the remaining four sub-band filters and of their time envelopes are computed, and for each sub-band the larger of the two is taken:

Vbp_{n,i} = max{Vbp'_{n,i}, Vbp''_{n,i}},  i = 2, 3, 4, 5

where Vbp'_{n,i} is the value at lag P_n of the autocorrelation function of the i-th sub-band of the current frame (frame n), and Vbp''_{n,i} is the value at lag P_n of the autocorrelation function of the time envelope of the i-th sub-band of the current frame (frame n).
(8) if the value Vbp_{n,1} obtained in step (6) is less than 0.6, the values obtained in step (7) are all set to 0, i.e. Vbp_{n,i} = 0 for i = 2, 3, 4, 5; otherwise they are left unchanged;
(9) the values obtained in steps (7) and (8) are binarized: each is compared with the threshold 0.6, and if it is greater than 0.6 the sub-band is considered voiced and its band-pass voicing is set to 1, otherwise the sub-band is unvoiced and its band-pass voicing is set to 0, that is:

Vbp_{n,i} = 1 if Vbp_{n,i} > 0.6, and Vbp_{n,i} = 0 otherwise, for i = 1, 2, 3, 4, 5;

(10) the band-pass voicing of each sub-band obtained in step (9), the residual spectral amplitude parameters from step (3) and the pitch period parameter from step (5) are used together to synthesize the excitation signal.
The prior art above represents the sub-band voicing with a 0/1 decision: each sub-band is either voiced or unvoiced. In reality there is no sharp boundary between voiced and unvoiced within a sub-band, and forcing each sub-band into one of the two classes makes the transitions between speech frames sound unnatural.
As shown in Fig. 1, the prior art uses a simple 0/1 decision to represent the band-pass voicing, which increases the synthetic quality of low-rate parametric speech coding and reduces its naturalness.
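For comparison with the fuzzy decision introduced below, the following Python fragment is a minimal sketch (illustration only, not part of the patent text) of the prior-art 0/1 decision, assuming the sub-band autocorrelation values Vbp_{n,i} from steps (6)-(8) are already available and using the 0.6 threshold named in step (9):

```python
import numpy as np

def hard_band_voicing(vbp, threshold=0.6):
    """Prior-art 0/1 decision: each sub-band is declared voiced (1) or unvoiced (0).

    vbp -- the 5 autocorrelation values Vbp_{n,1..5} for the current frame,
           already gated by step (8) (upper bands zeroed when Vbp_{n,1} < 0.6).
    """
    vbp = np.asarray(vbp, dtype=float)
    return (vbp > threshold).astype(int)

# Example: a frame whose sub-bands are only weakly periodic
print(hard_band_voicing([0.65, 0.58, 0.61, 0.30, 0.55]))  # -> [1 0 1 0 0]
```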
Summary of the invention
The object of the invention is to overcome this shortcoming of the prior art: the old rule, under which a sub-band is simply either unvoiced or voiced, is replaced by a band-pass voiced/unvoiced fuzzy decision method that improves the naturalness of the speech.
The sub-band voiced/unvoiced fuzzy decision method proposed by the present invention comprises the following steps:
(1) the input speech signal samples are divided into frames in time order;
(2) the pitch period parameter is extracted for the current frame;
(3) the residual spectral amplitude parameters are extracted for the current frame;
(4) the current frame of speech is filtered by five 6th-order Butterworth band-pass filters, giving 5 sub-band signals with passbands of 0-500, 500-1000, 1000-2000, 2000-3000 and 3000-4000 Hz respectively;
(5) the pitch period parameter obtained in step (2) is further refined using the speech signal of the first sub-band; the refined pitch period is denoted P_n;
(6) the value Vbp_{n,1} of the autocorrelation function of the first sub-band speech signal at the refined pitch period P_n obtained in step (5) is computed;
(7) the values at lag P_n of the autocorrelation functions of the remaining four sub-band speech signals and of their time envelopes are computed, and for each sub-band the larger of the two is taken:

Vbp_{n,i} = max{Vbp'_{n,i}, Vbp''_{n,i}},  i = 2, 3, 4, 5

where Vbp'_{n,i} is the value at lag P_n of the autocorrelation function of the i-th sub-band of the current frame, i.e. frame n, and Vbp''_{n,i} is the value at lag P_n of the autocorrelation function of the time envelope of the i-th sub-band of frame n;
(8) if the value Vbp_{n,1} obtained in step (6) is less than 0.6, the values obtained in step (7) are all set to 0, i.e. Vbp_{n,i} = 0 for i = 2, 3, 4, 5; otherwise they are left unchanged;
(9) the value Vbp_{n,1} obtained in step (6) is multiplied by a preset gain factor, set to 1.2; if the product is greater than 1 it is set equal to 1, that is:

Vbp_{n,1} = 1 if Vbp_{n,1} × 1.2 > 1;  Vbp_{n,1} = Vbp_{n,1} × 1.2 if Vbp_{n,1} × 1.2 ≤ 1

(10) the values obtained in steps (8) and (9) are taken as the voicing membership of each sub-band and combined into one vector Vbp = (Vbp_{n,1}, Vbp_{n,2}, Vbp_{n,3}, Vbp_{n,4}, Vbp_{n,5}), which is vector-quantized as a whole; the vector quantization uses a full search over the codewords in the codebook to obtain the optimal quantization codeword:

i* = argmin_{i ∈ C} Er(Vbp, Vbp_i)

where Vbp is the input vector to be quantized, Vbp_i is a codeword vector in the codebook, C is the codebook, i is the index of the codeword vector in the codebook, and Er(·) is the distortion measure, here the weighted squared error, i.e. Er(Vbp, Vbp_i) = Σ_{k=1}^{5} W_k (Vbp_k - Vbp_{i,k})², where Vbp_k is the k-th component of the vector to be quantized, Vbp_{i,k} is the k-th component of the i-th codeword vector in the codebook, and W is the weighting vector, here W = [16, 8, 4, 2, 1]; after quantization the quantized voicing membership of each sub-band is obtained (a code sketch of steps (8)-(10) is given after this step list);
(11) the quantized voicing membership of each sub-band obtained in step (10), the residual spectral amplitude parameters from step (3) and the pitch period parameter from step (5) are used together to synthesize the excitation signal.
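The following Python sketch (illustration only, not part of the patent text) shows steps (8)-(10) above under stated assumptions: the autocorrelation values Vbp_{n,i} are taken as given, the 4-entry codebook is a toy example rather than a trained LBG codebook, and the threshold 0.6, gain factor 1.2 and weighting vector W = [16, 8, 4, 2, 1] are the values named in the steps:

```python
import numpy as np

W = np.array([16.0, 8.0, 4.0, 2.0, 1.0])  # weighting vector from step (10)

def fuzzy_band_voicing(vbp, codebook, gain=1.2, threshold=0.6):
    """Steps (8)-(10): gate, gain-scale/clip, and vector-quantize the sub-band voicing.

    vbp      -- the 5 autocorrelation values Vbp_{n,1..5} for the current frame
    codebook -- (N, 5) array of codeword vectors (assumed already trained, e.g. by LBG)
    Returns the quantized 5-dimensional voicing-membership vector.
    """
    v = np.asarray(vbp, dtype=float).copy()
    # Step (8): if the first sub-band is weakly periodic, zero the upper bands.
    if v[0] < threshold:
        v[1:] = 0.0
    # Step (9): scale the first sub-band by the gain factor and clip at 1.
    v[0] = min(v[0] * gain, 1.0)
    # Step (10): full-search VQ with weighted squared error Er = sum_k W_k (x_k - c_k)^2.
    errors = np.sum(W * (codebook - v) ** 2, axis=1)
    return codebook[np.argmin(errors)]

# Toy usage with a hypothetical 4-entry codebook:
example_codebook = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.2, 0.1, 0.0, 0.0],
    [1.0, 0.8, 0.6, 0.4, 0.3],
    [1.0, 1.0, 1.0, 1.0, 1.0],
])
print(fuzzy_band_voicing([0.7, 0.65, 0.5, 0.3, 0.2], example_codebook))
```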
A feature of the present invention is that a fuzzy decision is applied to the band-pass voicing degree parameters in low-rate parametric speech coding. The prior art applied a 0/1 decision to the band-pass voicing parameters, which increased the synthetic quality of the speech and reduced its naturalness. The present invention draws on fuzzy mathematics and uses the autocorrelation function to determine the voicing membership of the current sub-band. Moreover, because the first sub-band is often affected more strongly by noise, its voicing membership is multiplied by a gain factor, so that the degree of voicing of each sub-band is described more accurately and more reasonably.
This method improves the naturalness of the synthesized speech. It is best suited to low-rate parametric speech coding at 600-800 b/s and is intended to be implemented on a DSP signal-processor chip.
Description of drawings
Fig. 1 is a flow chart of the prior-art band-pass voicing decision method.
Fig. 2 is a flow chart of the band-pass voicing fuzzy decision method proposed by the present invention.
Embodiment
The band-pass voicing fuzzy decision method proposed by the present invention is further described below in conjunction with the accompanying drawings and an embodiment.
As shown in Fig. 2, the method of the present invention comprises the following steps:
(1) the input speech signal samples are divided into frames in time order;
(2) the pitch period parameter is extracted for the current frame;
(3) the residual spectral amplitude parameters are extracted for the current frame;
(4) the current frame of speech is filtered by five 6th-order Butterworth band-pass filters, giving 5 sub-band signals with passbands of 0-500, 500-1000, 1000-2000, 2000-3000 and 3000-4000 Hz respectively;
(5) the pitch period parameter obtained in step (2) is further refined using the speech signal of the first sub-band; the refined pitch period is denoted P_n;
(6) the value Vbp_{n,1} of the autocorrelation function of the first sub-band speech signal at the refined pitch period P_n obtained in step (5) is computed;
(7) the values at lag P_n of the autocorrelation functions of the remaining four sub-band speech signals and of their time envelopes are computed, and for each sub-band the larger of the two is taken:

Vbp_{n,i} = max{Vbp'_{n,i}, Vbp''_{n,i}},  i = 2, 3, 4, 5

where Vbp'_{n,i} is the value at lag P_n of the autocorrelation function of the i-th sub-band of the current frame, i.e. frame n, and Vbp''_{n,i} is the value at lag P_n of the autocorrelation function of the time envelope of the i-th sub-band of frame n;
(8) if the value Vbp_{n,1} obtained in step (6) is less than 0.6, the values obtained in step (7) are all set to 0, i.e. Vbp_{n,i} = 0 for i = 2, 3, 4, 5; otherwise they are left unchanged;
(9) the value Vbp_{n,1} obtained in step (6) is multiplied by a preset gain factor, set to 1.2; if the product is greater than 1 it is set equal to 1, that is:

Vbp_{n,1} = 1 if Vbp_{n,1} × 1.2 > 1;  Vbp_{n,1} = Vbp_{n,1} × 1.2 if Vbp_{n,1} × 1.2 ≤ 1

(10) the values obtained in steps (8) and (9) are taken as the voicing membership of each sub-band and combined into one vector Vbp = (Vbp_{n,1}, Vbp_{n,2}, Vbp_{n,3}, Vbp_{n,4}, Vbp_{n,5}), which is vector-quantized as a whole; the vector quantization uses a full search over the codewords in the codebook to obtain the optimal quantization codeword:

i* = argmin_{i ∈ C} Er(Vbp, Vbp_i)

where Vbp is the input vector to be quantized, Vbp_i is a codeword vector in the codebook, C is the codebook, i is the index of the codeword vector in the codebook, and Er(·) is the distortion measure, here the weighted squared error, i.e. Er(Vbp, Vbp_i) = Σ_{k=1}^{5} W_k (Vbp_k - Vbp_{i,k})², where Vbp_k is the k-th component of the vector to be quantized, Vbp_{i,k} is the k-th component of the i-th codeword vector in the codebook, and W is the weighting vector, here W = [16, 8, 4, 2, 1]; after quantization the quantized voicing membership of each sub-band is obtained;
(11) the quantized voicing membership of each sub-band obtained in step (10), the residual spectral amplitude parameters from step (3) and the pitch period parameter from step (5) are used together to synthesize the excitation signal.
Specific embodiments of each step of the above method are described in detail below:
The embodiment of step (1), dividing the input speech signal samples into frames in time order, is as follows: the speech is sampled at 8 kHz and high-pass filtered to remove power-line interference; every 25 ms, i.e. 200 speech samples, forms one frame;
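As an illustration only, a minimal framing sketch in Python assuming the signal is already sampled at 8 kHz; the 4th-order filter and the 60 Hz cut-off of the high-pass are assumptions chosen merely to suppress power-line interference, not values taken from the patent or the MELP standard:

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000          # sampling rate (Hz)
FRAME_LEN = 200    # 25 ms at 8 kHz

def frame_signal(speech, cutoff_hz=60.0):
    """Step (1): high-pass filter (removes power-line hum) and split into 200-sample frames."""
    b, a = butter(4, cutoff_hz / (FS / 2), btype="highpass")
    filtered = lfilter(b, a, speech)
    n_frames = len(filtered) // FRAME_LEN
    return filtered[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
```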
The embodiment of step (2) is: the pitch period parameter p_n of the current frame is obtained by the method specified in the U.S. Government 2400 b/s mixed excitation linear prediction (MELP) speech coding standard;
The embodiment of step (3) is: the residual spectral amplitude parameters of the current frame are obtained by the method specified in the 2400 b/s MELP speech coding standard and written as a vector R of dimension k, R = [r_1, r_2, ..., r_k], k = 10;
The embodiment of step (4) is: the current frame of speech is band-pass filtered by the method specified in the 2400 b/s MELP speech coding standard;
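A sketch of one way to realize the five-band split, assuming 6th-order Butterworth filters as in MELP (a low-pass for the lowest band and a high-pass for the highest band, whose upper edge coincides with the Nyquist frequency); the filter design specified in the MELP standard may differ in detail:

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000
BAND_EDGES = [(0, 500), (500, 1000), (1000, 2000), (2000, 3000), (3000, 4000)]

def split_subbands(frame):
    """Step (4): split a speech frame into the five sub-bands listed above
    using 6th-order Butterworth filters (low-pass for the first band,
    high-pass for the last band, band-pass for the rest)."""
    nyq = FS / 2
    subbands = []
    for lo, hi in BAND_EDGES:
        if lo == 0:
            b, a = butter(6, hi / nyq, btype="lowpass")
        elif hi >= nyq:
            b, a = butter(6, lo / nyq, btype="highpass")
        else:
            b, a = butter(6, [lo / nyq, hi / nyq], btype="bandpass")
        subbands.append(lfilter(b, a, frame))
    return np.stack(subbands)  # shape (5, frame_length)
```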
The embodiment of step (5) is: the pitch period parameter of the current-frame speech signal is further refined by the method specified in the 2400 b/s MELP speech coding standard; the refined pitch period is denoted P_n;
The embodiment of step (6) is: the value of the autocorrelation function of the first sub-band signal of the current frame at the refined pitch period P_n is computed by the method specified in the 2400 b/s MELP speech coding standard and taken as the voicing membership Vbp_1 of that sub-band;
The embodiment of step (7) is: for each of the remaining four sub-band-filtered signals of the current frame and for their envelope signals, the value of the autocorrelation function at lag P_n is computed by the method specified in the 2400 b/s MELP speech coding standard, and for each sub-band the larger of the two values is taken as its voicing membership Vbp_i;
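A sketch of the normalized autocorrelation evaluated at the (integer-rounded) refined pitch lag and of the resulting per-sub-band voicing membership; the time envelope is modeled here as a full-wave-rectified, low-pass-filtered sub-band signal, which is an assumption standing in for the envelope definition used in the MELP standard:

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000

def autocorr_at_lag(x, lag):
    """Normalized autocorrelation of x at an integer lag (0 when undefined)."""
    lag = int(round(lag))
    if lag <= 0 or lag >= len(x):
        return 0.0
    a, b = x[:-lag], x[lag:]
    denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def subband_voicing(subbands, pitch_lag):
    """Steps (6)-(7): first sub-band uses the signal correlation only; sub-bands
    2-5 take the larger of the signal and envelope correlations at the pitch lag."""
    b, a = butter(2, 500 / (FS / 2), btype="lowpass")    # assumed envelope smoother
    vbp = [autocorr_at_lag(subbands[0], pitch_lag)]       # step (6)
    for x in subbands[1:]:
        envelope = lfilter(b, a, np.abs(x))                # rectified + low-pass = time envelope
        vbp.append(max(autocorr_at_lag(x, pitch_lag),
                       autocorr_at_lag(envelope, pitch_lag)))
    return np.array(vbp)
```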
The embodiment of step (8) is: if Vbp_1 < 0.6, then Vbp_i = 0 for i = 2, 3, 4, 5;
The embodiment of step (9) is: Vbp_1 = Vbp_1 × 1.2; if Vbp_1 > 1, then Vbp_1 is set to 1;
The specific practice of the embodiment of step (10) is: the band-pass voicing parameters are vector-quantized, with Vbp = [Vbp_1, Vbp_2, Vbp_3, Vbp_4, Vbp_5]. The vector quantization codebook must be trained separately; the training algorithm is the iterative LBG algorithm. In both codeword training and codeword search the distortion measure is the weighted squared error with weighting vector W, i.e. Er = Σ_{i=1}^{5} W_i (Vbp_i - c_i)², where Er is the distortion, Vbp_i is the i-th component of the training vector or of the vector to be quantized, c_i is the i-th component of the codebook vector, and W = [16, 8, 4, 2, 1]. The codeword search is a full search: all codewords in the codebook are examined and the one with minimum distortion is taken as the final quantization result.
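The full-search quantization itself is sketched after the summary step list above; the following Python fragment sketches LBG-style codebook training with the weighted squared-error distortion just described. The splitting perturbation, iteration count and convergence handling are assumptions, and a real codebook would be trained on a large corpus of voicing-membership vectors:

```python
import numpy as np

W = np.array([16.0, 8.0, 4.0, 2.0, 1.0])  # weighting vector from the distortion measure

def weighted_err(x, c):
    """Weighted squared error Er = sum_i W_i (x_i - c_i)^2 (broadcasts over rows of x)."""
    return np.sum(W * (x - c) ** 2, axis=-1)

def lbg_train(training_vectors, codebook_size, n_iter=20, eps=1e-3):
    """LBG iteration: split the codebook, then alternate nearest-codeword
    assignment (full search, weighted squared error) and centroid update."""
    data = np.asarray(training_vectors, dtype=float)
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split step
        for _ in range(n_iter):
            # assign every training vector to its nearest codeword
            d = np.array([weighted_err(data, c) for c in codebook])  # (n_codewords, n_vectors)
            nearest = np.argmin(d, axis=0)
            # update each codeword as the mean of its assigned vectors
            for k in range(len(codebook)):
                members = data[nearest == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
    return codebook
```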
The specific practice of the embodiment of step (11) is: the excitation signal is synthesized by the method specified in the 2400 b/s MELP speech coding standard, using the band-pass voicing parameters, the pitch period parameter and the residual spectral amplitude parameters.

Claims (3)

1, the method for sub-band surd and sonant fuzzy judgment is characterized in that this method may further comprise the steps:
(1) divides frame in chronological order to the input speech signal sampling point;
(2) present frame is extracted the pitch period parameter;
(3) present frame is extracted surplus spectral amplitude parameter;
(4) the present frame voice signal is carried out filtering through the Butterworth filter on 56 rank, obtain passband and be respectively 0-500,500-1000,1000-2000,5 subband signals of 2000-3000 and 3000-4000Hz;
(5) voice signal according to first subband further improves the pitch period parameter of asking in the step (2), and the gene periodic quantity after the improvement is
Figure A2006101652460002C1
(6) ask for the pitch period parameter that the autocorrelation function of first subband voice signal is asked in step (5)
Figure A2006101652460002C2
Locational value Vbp N, 1
(7) ask for the pitch period parameter that the autocorrelation function of the autocorrelation function of all the other 4 subband voice signals and its time envelope is asked in step (5)
Figure A2006101652460002C3
Locational value, and respectively each subband is got higher value among both:
Vbp n,i=max{Vbp n,i′,Vbp n,i″}i=2,3,4,5
Wherein, Vbp N, i' expression present frame, promptly the autocorrelation function of the i subband of n frame exists Locational value; Vbp N, i" the expression present frame, promptly the autocorrelation function of the temporal envelope of the i subband of n frame exists
Figure A2006101652460002C5
Locational value;
(8) if the value Vbp that asks in the step (6) N, 1Less than 0.6, then the value in the step (7) all is revised as 0, i.e. Vbp N, i=0 i=2,3,4,5, otherwise remain unchanged;
(9) with the value Vbp that asks in the step (6) N, 1Multiply by the gain factor of a setting, this gain factor is set at 1.2, if the back result that multiplies each other then makes it equal 1, that is: greater than 1
Vbp n , 1 = 1 if Vbp n , 1 × 1.2 > 1 Vbp n , 1 × 1.2 if Vbp n , 1 × 1.2 ≤ 1
(10) with the value asked in step (8) and the step (9) the voiced sound degree of membership as each subband, merging becomes a vector Vbp ‾ = ( Vbp n , 1 , Vbp n , 2 , Vbp n , 3 , vbp n , 4 , Vbp n , 5 ) , carry out vector quantization together; Vector quantization adopts the method that the code word in the code book is searched for entirely to obtain the optimum quantization code word:
Wherein
Figure A2006101652460003C2
Represent input vector to be quantified,
Figure A2006101652460003C3
Code word vector in the expression code book, C represents code book, and i is the index value of code word vector in code book, and the distortion measure that Er () function representation is specific adopts the minimum weight square error here, promptly Er ( Vbp ‾ , Vbp ‾ i ) = Σ k = 1 5 W k ( Vbp k - Vbp i , k ) 2 , Vbp wherein kBe k component of vector to be quantified, Vbp I, kBe k component of i code word vector in the code book, W is the weighting factor vector, gets W=[16 here, 8,4,2,1]; Obtain the voiced sound degree of membership of each subband of quantizing after the quantification
Figure A2006101652460003C5
(11) the voiced sound degree of membership of each subband of obtaining in the step (10), surplus spectral amplitude parameter in the step (3) and the pitch period parameter in the step (5) are used for together synthetic pumping signal.
2. The sub-band voiced/unvoiced fuzzy decision method according to claim 1, characterized in that in step (1) each frame comprises 180 or 200 speech samples.
3. The sub-band voiced/unvoiced fuzzy decision method according to claim 1, characterized in that in step (11) the quantized voicing membership of each sub-band obtained in step (10) is used directly to synthesize the excitation signal.
CN200610165246XA 2006-12-15 2006-12-15 Fuzzy judgment method for sub-band surd and sonant Expired - Fee Related CN101009096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200610165246XA CN101009096B (en) 2006-12-15 2006-12-15 Fuzzy judgment method for sub-band surd and sonant

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200610165246XA CN101009096B (en) 2006-12-15 2006-12-15 Fuzzy judgment method for sub-band surd and sonant

Publications (2)

Publication Number Publication Date
CN101009096A true CN101009096A (en) 2007-08-01
CN101009096B CN101009096B (en) 2011-01-26

Family

ID=38697493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200610165246XA Expired - Fee Related CN101009096B (en) 2006-12-15 2006-12-15 Fuzzy judgment method for sub-band surd and sonant

Country Status (1)

Country Link
CN (1) CN101009096B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261836B (en) * 2008-04-25 2011-03-30 清华大学 Method for enhancing excitation signal naturalism based on judgment and processing of transition frames
CN104517614A (en) * 2013-09-30 2015-04-15 上海爱聊信息科技有限公司 Voiced/unvoiced decision device and method based on sub-band characteristic parameter values
CN108461088A (en) * 2018-03-21 2018-08-28 山东省计算中心(国家超级计算济南中心) Based on support vector machines the pure and impure tone parameter of tone decoding end reconstructed subband method
CN110580920A (en) * 2019-08-28 2019-12-17 南京梧桐微电子科技有限公司 Method and system for judging clear and voiced sounds of sub-band of vocoder

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3687181B2 (en) * 1996-04-15 2005-08-24 ソニー株式会社 Voiced / unvoiced sound determination method and apparatus, and voice encoding method
FR2784218B1 (en) * 1998-10-06 2000-12-08 Thomson Csf LOW-SPEED SPEECH CODING METHOD
US6226606B1 (en) * 1998-11-24 2001-05-01 Microsoft Corporation Method and apparatus for pitch tracking
CN1284137C (en) * 2004-11-12 2006-11-08 清华大学 Super frame track parameter vector quantizing method


Also Published As

Publication number Publication date
CN101009096B (en) 2011-01-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110126

Termination date: 20141215

EXPY Termination of patent right or utility model