CN1975861A - Vocoder fundamental tone cycle parameter channel error code resisting method - Google Patents


Info

Publication number
CN1975861A
CN1975861A (application CNA2006101652455A / CN200610165245A)
Authority
CN
China
Prior art keywords
frame
pitch period
period parameter
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006101652455A
Other languages
Chinese (zh)
Other versions
CN1975861B (en)
Inventor
崔慧娟
唐昆
李晔
洪侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2006101652455A priority Critical patent/CN1975861B/en
Publication of CN1975861A publication Critical patent/CN1975861A/en
Application granted granted Critical
Publication of CN1975861B publication Critical patent/CN1975861B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method for resisting channel bit errors in the pitch period parameter of a vocoder. At the decoding end, the pitch period and voicing parameters of the current frame are obtained. If the current frame is not at a voicing transition, its pitch period parameter is compared with that of the previous frame; if the difference between the two exceeds a set threshold, the current frame's pitch period parameter is judged to have been corrupted in transmission. Each bit of the binary representation of the current frame's pitch period parameter is then inverted in turn, the corresponding candidate pitch period parameters are computed, each candidate is compared with the previous frame's value, and the candidate with the smallest difference is taken as the recovered value.

Description

Vocoder fundamental tone cycle parameter channel error code resisting method
Technical field
The invention belongs to the field of speech coding technology, and in particular to techniques for making speech coding robust to channel bit errors.
Background technology
Vocoders are very sensitive to channel bit errors, so algorithms that resist channel errors are an important research problem. In the study of channel errors, error-correcting codes have been researched deeply and applied widely; they protect codewords effectively, but at the cost of redundant bits. In low-rate vocoders, such as 2.4 kb/s and even lower-rate vocoders, very few bits are available, so error-correction coding alone cannot make the vocoder sufficiently robust. Without adding extra bits, a family of algorithms has appeared that improves a vocoder's resistance to channel errors mainly by arranging the indices of the quantization codebook so that codewords close in Hamming distance also correspond to vectors that are as close as possible in Euclidean distance, reducing the distortion caused when a channel error occurs.
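The index-assignment idea described above can be shown with a toy example. This sketch is illustrative only (the numbers and function name are not from the patent): it compares the average decoded error under single-bit index errors for two index assignments of the same 2-bit, 4-level scalar codebook.

```python
# Toy illustration of encoder-side index assignment: arrange codebook
# indices so that codewords close in Hamming distance decode to values
# close in Euclidean distance. All values here are made up.
def avg_single_bit_distortion(codebook):
    """Mean |decoded error| over all indices and all single-bit flips."""
    bits = 2  # 4-level codebook -> 2-bit indices
    errs = [abs(codebook[i] - codebook[i ^ (1 << b)])
            for i in range(len(codebook)) for b in range(bits)]
    return sum(errs) / len(errs)

good = [10, 20, 30, 40]   # natural order: bit-flip neighbors differ little
bad = [10, 30, 40, 20]    # scrambled assignment of the same values

print(avg_single_bit_distortion(good))  # -> 15.0
print(avg_single_bit_distortion(bad))   # -> 20.0
```

With the same codebook values, the scrambled index assignment suffers a larger average distortion per single-bit channel error, which is exactly what index-assignment algorithms try to minimize.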
These, however, are encoder-side algorithms, aimed mainly at improving the encoder's own robustness; how to improve the decoder's robustness has received little study. In fact, some vocoder parameters, such as the pitch period, vary smoothly by nature; this redundancy, which source coding fails to remove, can be used to resist noise introduced during transmission. In other words, the smooth variation of the pitch period parameter can be exploited at the decoding end to detect and recover errors caused by channel bit errors.
Summary of the invention
The objective of the invention is to overcome the shortcomings of the prior art by extending vocoder error-resistance research to the decoding end, focusing on how the decoder can detect errors in the pitch period parameter caused by channel bit errors and correct them.
The pitch-period-parameter channel-error-resistance method proposed by the invention comprises the following steps:
(1) The input speech samples, sampled at a set frequency and high-pass filtered to remove power-line interference, are divided into frames in time order.
(2) The pitch period parameter P_i of the current frame is extracted using the 2400 b/s mixed-excitation linear prediction speech coding algorithm, where i is the frame number of the current frame; with multi-frame joint vector quantization, the pitch period parameters of all frames in the current superframe are extracted.
(3) The pitch period parameter P_i of the current frame is uniformly scalar-quantized within a specified range with a set number of bits, encoded, and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the quantization codebook.
(4) Voicing parameters are extracted for the current frame using the 2400 b/s mixed-excitation linear prediction algorithm; with multi-frame joint vector quantization, the voicing parameters of all frames in the current superframe are extracted.
(5) From the voicing parameters of the 5 subbands extracted in step (4), each subband is marked "0" if unvoiced and "1" if voiced, giving the vector B = [b_1, b_2, b_3, b_4, b_5]. This subband voicing vector B is quantized with 5 bits, the 5 bits corresponding in order to the voicing of the 5 subbands: if the k-th subband is voiced, b_k = 1, otherwise b_k = 0. The quantized value is encoded and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the voicing-parameter quantization codebook.
(6) The voicing parameters of step (5) are decoded, and the subband voicing sequence B of the current frame is examined: if the lowest band, i.e. the first subband, is voiced (b_1 = 1), the current frame is a voiced frame; otherwise it is an unvoiced frame. If the current frame is voiced and the previous frame is unvoiced, the current frame is the first voiced frame of a voicing transition, the immediately following voiced frame is the second voiced frame of the transition, and so on. With multi-frame joint vector quantization, the current superframe is considered voiced only if all frames in it are voiced. If the current frame is unvoiced, or is the first or second voiced frame of a voicing transition, return to step (1); otherwise continue with the next step.
(7) The binary representation P̂_i of the quantized pitch period parameter actually received in step (3) is decoded and inverse-quantized to obtain the pitch period parameter P_i of step (2). It is compared with the previous frame's pitch period parameter P_{i-1}, and the difference |P_i - P_{i-1}| is tested against a preset threshold; with multi-frame joint vector quantization, P_i is the pitch period parameter of the first frame in the current superframe and P_{i-1} that of the last frame in the previous superframe. If the threshold is exceeded, the current frame's pitch period parameter is judged to contain a transmission error and the method continues with the next step; otherwise, return to step (1).
(8) Check whether the previous frame's pitch period parameter underwent error recovery; if so, return to step (1); otherwise proceed to step (9) for error recovery.
(9) Each bit of the received binary representation P̂_i of the current frame's pitch period parameter from step (3) is inverted in turn, giving P̂_i^(n), n = 1, 2, ..., N, where N is the number of bits of P̂_i and P̂_i^(n) is the binary value obtained by inverting its n-th bit. These N candidate binary values are inverse-quantized to the pitch period parameters of step (2), denoted P_i^(1), P_i^(2), ..., P_i^(N), forming the candidate set Ψ_i = {P_i^(1), P_i^(2), ..., P_i^(N)}; with multi-frame joint vector quantization, these are candidate values for the pitch period parameter of the first frame in the current superframe. Each element of the candidate set is compared with the previous frame's pitch period parameter P_{i-1} from step (2) (with multi-frame joint vector quantization, P_{i-1} is the pitch period parameter of the last frame in the previous superframe), and the candidate with the smallest difference is chosen as the current frame's pitch period parameter, that is: P_i = argmin_{P_i^(n) ∈ Ψ_i} |P_i^(n) - P_{i-1}|. This completes the error recovery; return to step (1). With multi-frame joint vector quantization, P_i is only the pitch period parameter of the first frame in the current superframe; the parameter finally recovered is the pitch period parameter vector of the whole superframe.
A feature of the invention is that, without adding redundant bits, it exploits the smooth variation of the pitch period parameter at the decoding end to perform error detection and recovery. To prevent error propagation caused by a wrong recovery, when the previous frame's pitch period parameter was judged erroneous and recovered, the current frame's pitch period parameter is not subjected to error detection and recovery. During detection and recovery, each bit of the received pitch period parameter is inverted in turn, the corresponding pitch period parameters are computed and compared with that of the previous frame, and the candidate with the smallest difference is chosen as the current frame's parameter, so that an erroneous pitch period parameter can be recovered fairly accurately.
This algorithm reduces the pitch-period-parameter error caused by channel bit errors by more than 10%, a clear improvement. It is best suited to the 2400 b/s speech coding algorithm and can be implemented on a digital signal processing (DSP) chip.
Description of drawings
Fig. 1 is the flow chart of the pitch-period-parameter channel-error-resistance method proposed by the invention.
Embodiment
The pitch-period-parameter channel-error-resistance method proposed by the invention is further described below in conjunction with the accompanying drawing and an embodiment:
As shown in Fig. 1, the method comprises the following steps:
(1) The input speech samples, sampled at a set frequency and high-pass filtered to remove power-line interference, are divided into frames in time order.
(2) The pitch period parameter P_i of the current frame is extracted using the 2400 b/s mixed-excitation linear prediction speech coding algorithm, where i is the frame number of the current frame; with multi-frame joint vector quantization, the pitch period parameters of all frames in the current superframe are extracted.
(3) The pitch period parameter P_i of the current frame is uniformly scalar-quantized within a specified range with a set number of bits, encoded, and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the quantization codebook.
(4) Voicing parameters are extracted for the current frame using the 2400 b/s mixed-excitation linear prediction algorithm; with multi-frame joint vector quantization, the voicing parameters of all frames in the current superframe are extracted.
(5) From the voicing parameters of the 5 subbands extracted in step (4), each subband is marked "0" if unvoiced and "1" if voiced, giving the vector B = [b_1, b_2, b_3, b_4, b_5]. This subband voicing vector B is quantized with 5 bits, the 5 bits corresponding in order to the voicing of the 5 subbands: if the k-th subband is voiced, b_k = 1, otherwise b_k = 0. The quantized value is encoded and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the voicing-parameter quantization codebook.
(6) The voicing parameters of step (5) are decoded, and the subband voicing sequence B of the current frame is examined: if the lowest band, i.e. the first subband, is voiced (b_1 = 1), the current frame is a voiced frame; otherwise it is an unvoiced frame. If the current frame is voiced and the previous frame is unvoiced, the current frame is the first voiced frame of a voicing transition, the immediately following voiced frame is the second voiced frame of the transition, and so on. With multi-frame joint vector quantization, the current superframe is considered voiced only if all frames in it are voiced. If the current frame is unvoiced, or is the first or second voiced frame of a voicing transition, return to step (1); otherwise continue with the next step.
(7) The binary representation P̂_i of the quantized pitch period parameter actually received in step (3) is decoded and inverse-quantized to obtain the pitch period parameter P_i of step (2). It is compared with the previous frame's pitch period parameter P_{i-1}, and the difference |P_i - P_{i-1}| is tested against a preset threshold; with multi-frame joint vector quantization, P_i is the pitch period parameter of the first frame in the current superframe and P_{i-1} that of the last frame in the previous superframe. If the threshold is exceeded, the current frame's pitch period parameter is judged to contain a transmission error and the method continues with the next step; otherwise, return to step (1).
(8) Check whether the previous frame's pitch period parameter underwent error recovery; if so, return to step (1); otherwise proceed to step (9) for error recovery.
(9) Each bit of the received binary representation P̂_i of the current frame's pitch period parameter from step (3) is inverted in turn, giving P̂_i^(n), n = 1, 2, ..., N, where N is the number of bits of P̂_i and P̂_i^(n) is the binary value obtained by inverting its n-th bit. These N candidate binary values are inverse-quantized to the pitch period parameters of step (2), denoted P_i^(1), P_i^(2), ..., P_i^(N), forming the candidate set Ψ_i = {P_i^(1), P_i^(2), ..., P_i^(N)}; with multi-frame joint vector quantization, these are candidate values for the pitch period parameter of the first frame in the current superframe. Each element of the candidate set is compared with the previous frame's pitch period parameter P_{i-1} from step (2) (with multi-frame joint vector quantization, P_{i-1} is the pitch period parameter of the last frame in the previous superframe), and the candidate with the smallest difference is chosen as the current frame's pitch period parameter, that is: P_i = argmin_{P_i^(n) ∈ Ψ_i} |P_i^(n) - P_{i-1}|. This completes the error recovery; return to step (1). With multi-frame joint vector quantization, P_i is only the pitch period parameter of the first frame in the current superframe; the parameter finally recovered is the pitch period parameter vector of the whole superframe.
The specific embodiments of each step of the above method are described in detail as follows:
In step (1), the input speech samples are sampled at 8 kHz, high-pass filtered to remove power-line interference, and divided into frames in time order. Every 25 ms, i.e. 200 speech samples, constitutes one frame.
In step (2), the pitch period parameter p_i of the current frame is extracted by the standard extraction method of the U.S. government 2400 b/s mixed-excitation linear prediction (MELP) speech coding algorithm.
In step (3), the current frame's pitch period parameter p_i is uniformly scalar-quantized over the range [18, 145] with 7 bits.
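The uniform scalar quantizer of this step can be sketched as follows. This is an illustrative reading of the embodiment, not code from the patent; with 7 bits over [18, 145] there are 128 levels and the step size works out to exactly 1.0.

```python
# Sketch of step (3): uniform scalar quantization of the pitch period
# over [18, 145] with 7 bits (128 levels, step exactly 1.0).
PMIN, PMAX, BITS = 18, 145, 7
LEVELS = 2 ** BITS                   # 128 levels
STEP = (PMAX - PMIN) / (LEVELS - 1)  # (145 - 18) / 127 = 1.0

def quantize(p: float) -> int:
    """Map a pitch period in [18, 145] to a 7-bit index."""
    p = min(max(p, PMIN), PMAX)      # clamp to the quantizing range
    return round((p - PMIN) / STEP)

def dequantize(idx: int) -> float:
    """Map a 7-bit index back to a pitch period value."""
    return PMIN + idx * STEP

print(quantize(100))   # -> 82
print(dequantize(82))  # -> 100.0
```

The quantization error of any in-range pitch period is at most half a step, i.e. 0.5 samples for this range and bit allocation.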
In step (4), the voicing parameters of 5 subbands are extracted from the current frame by the method of the U.S. government 2400 b/s mixed-excitation linear prediction (MELP) speech coding standard; an unvoiced subband is represented by "0" and a voiced subband by "1", giving the vector
B = [b_1, b_2, b_3, b_4, b_5],
which has 32 possible values corresponding to 32 voicing patterns.
In step (5), the voicing vector B is quantized with 5 bits, the 5 bits corresponding in order to the voicing patterns of subbands 1 to 5: if the i-th subband is voiced, b_i = 1, otherwise b_i = 0.
In step (6), the voicing of the current frame is determined from the first subband of B: if b_1 = 1, the current frame is voiced, otherwise unvoiced. If the current frame is voiced and the previous frame is unvoiced, the current frame is the first voiced frame of a voicing transition, the next voiced frame is the second voiced frame of the transition, and so on.
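The frame classification of this step can be sketched as a small function. This is a hedged reading of the text (function and label names are illustrative): a frame is voiced when b_1 = 1, and the first and second voiced frames right after an unvoiced frame are transition frames, which the method excludes from error checking.

```python
# Sketch of step (6): classify each frame from its first-subband voicing
# bit b1; the first two voiced frames after an unvoiced frame are
# "transition" frames and are skipped by the error check.
def classify(voiced_flags):
    """For each b1 flag, return 'unvoiced', 'transition', or 'steady_voiced'."""
    labels, run = [], 0  # run = consecutive voiced frames since last unvoiced
    for b1 in voiced_flags:
        if b1 == 0:
            labels.append("unvoiced")
            run = 0
        else:
            run += 1
            labels.append("transition" if run <= 2 else "steady_voiced")
    return labels

print(classify([0, 1, 1, 1, 1, 0, 1]))
# -> ['unvoiced', 'transition', 'transition', 'steady_voiced',
#     'steady_voiced', 'unvoiced', 'transition']
```

Only frames labeled steady_voiced here would proceed to the error detection of step (7); at transitions the pitch legitimately jumps, so a large frame-to-frame difference there is not evidence of a channel error.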
In step (7), if |p_i - p_{i-1}| > 25, the current frame's pitch period parameter is judged to contain a transmission error, where p_i and p_{i-1} are the pitch period parameters from step (2) for the current and previous frames.
In step (8), if the previous frame satisfied the condition in step (7) and underwent the recovery of step (9), the current frame is not error-recovered.
In step (9), the current frame's pitch period parameter p_i is written in binary as P̂_i = [bit_1, bit_2, bit_3, bit_4, bit_5, bit_6, bit_7]. Each bit is inverted in turn, and each resulting binary value is inverse-quantized to the pitch period parameter of step (2), forming 7 candidates P_i^(1), P_i^(2), ..., P_i^(7). These 7 candidates are compared with the previous frame's pitch period parameter p_{i-1} (the previous frame's parameter whose transmission was judged correct), and the candidate with the smallest difference is chosen as the recovered pitch period parameter of the current frame, that is:
P_i = argmin_{1 ≤ n ≤ 7} |P_i^(n) - p_{i-1}|
Note that this step depends on how the pitch period parameter is quantized: if the pitch period parameter uses vector quantization, each bit of the binary representation of the vector-quantization index must be inverted, the corresponding actual pitch period parameters computed, and then compared.
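Steps (7) through (9) of the embodiment can be sketched together. This is an illustrative reading under the embodiment's assumptions (7-bit uniform quantizer over [18, 145] with step 1.0, threshold 25); function names are not from the patent.

```python
# Sketch of steps (7)-(9): detect a pitch-period transmission error by
# thresholding against the previous frame, then recover by flipping each
# of the 7 index bits and keeping the candidate closest to the previous
# frame's pitch period.
PMIN, BITS, THRESHOLD = 18, 7, 25

def dequantize(idx: int) -> float:
    return PMIN + idx  # step is exactly 1.0 for the range [18, 145]

def recover(rx_idx: int, prev_pitch: float) -> float:
    """Detect and, if needed, recover a corrupted pitch period index."""
    pitch = dequantize(rx_idx)
    if abs(pitch - prev_pitch) <= THRESHOLD:
        return pitch  # step (7): within threshold, accepted as correct
    # step (9): build the N = 7 single-bit-flip candidates and pick the
    # one closest to the previous frame's pitch period
    candidates = [dequantize(rx_idx ^ (1 << n)) for n in range(BITS)]
    return min(candidates, key=lambda p: abs(p - prev_pitch))

# Example: true pitch 100 (index 82); the channel flips one bit so index
# 82 ^ 64 = 18 arrives, decoding to pitch 36 -- detected and recovered.
print(recover(18, prev_pitch=100.0))  # -> 100.0
```

Because exactly one bit was flipped, the true index is always among the candidates; the smoothness assumption is only needed to pick it out, which is why the method restricts itself to steady voiced frames.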

Claims (3)

1. A vocoder fundamental tone cycle parameter channel error code resisting method, characterized in that the method is realized in a digital integrated circuit chip according to the following steps in order:
(1) The input speech samples, sampled at a set frequency and high-pass filtered to remove power-line interference, are divided into frames in time order.
(2) The pitch period parameter P_i of the current frame is extracted using the 2400 b/s mixed-excitation linear prediction speech coding algorithm, where i is the frame number of the current frame; with multi-frame joint vector quantization, the pitch period parameters of all frames in the current superframe are extracted.
(3) The pitch period parameter P_i of the current frame is uniformly scalar-quantized within a specified range with a set number of bits, encoded, and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the quantization codebook.
(4) Voicing parameters are extracted for the current frame using the 2400 b/s mixed-excitation linear prediction algorithm; with multi-frame joint vector quantization, the voicing parameters of all frames in the current superframe are extracted.
(5) From the voicing parameters of the 5 subbands extracted in step (4), each subband is marked "0" if unvoiced and "1" if voiced, giving the vector B = [b_1, b_2, b_3, b_4, b_5]. This subband voicing vector B is quantized with 5 bits, the 5 bits corresponding in order to the voicing of the 5 subbands: if the k-th subband is voiced, b_k = 1, otherwise b_k = 0. The quantized value is encoded and transmitted over the channel to the decoding end; with multi-frame joint vector quantization, the quantized value is the index of the selected codeword vector in the voicing-parameter quantization codebook.
(6) The voicing parameters of step (5) are decoded, and the subband voicing sequence B of the current frame is examined: if the lowest band, i.e. the first subband, is voiced (b_1 = 1), the current frame is a voiced frame; otherwise it is an unvoiced frame. If the current frame is voiced and the previous frame is unvoiced, the current frame is the first voiced frame of a voicing transition, the immediately following voiced frame is the second voiced frame of the transition, and so on. With multi-frame joint vector quantization, the current superframe is considered voiced only if all frames in it are voiced. If the current frame is unvoiced, or is the first or second voiced frame of a voicing transition, return to step (1); otherwise continue with the next step.
(7) The binary representation P̂_i of the quantized pitch period parameter actually received in step (3) is decoded and inverse-quantized to obtain the pitch period parameter P_i of step (2). It is compared with the previous frame's pitch period parameter P_{i-1}, and the difference |P_i - P_{i-1}| is tested against a preset threshold; with multi-frame joint vector quantization, P_i is the pitch period parameter of the first frame in the current superframe and P_{i-1} that of the last frame in the previous superframe. If the threshold is exceeded, the current frame's pitch period parameter is judged to contain a transmission error and the method continues with the next step; otherwise, return to step (1).
(8) Check whether the previous frame's pitch period parameter underwent error recovery; if so, return to step (1); otherwise proceed to step (9) for error recovery.
(9) Each bit of the received binary representation P̂_i of the current frame's pitch period parameter from step (3) is inverted in turn, giving P̂_i^(n), n = 1, 2, ..., N, where N is the number of bits of P̂_i and P̂_i^(n) is the binary value obtained by inverting its n-th bit. These N candidate binary values are inverse-quantized to the pitch period parameters of step (2), denoted P_i^(1), P_i^(2), ..., P_i^(N), forming the candidate set Ψ_i = {P_i^(1), P_i^(2), ..., P_i^(N)}; with multi-frame joint vector quantization, these are candidate values for the pitch period parameter of the first frame in the current superframe. Each element of the candidate set is compared with the previous frame's pitch period parameter P_{i-1} from step (2) (with multi-frame joint vector quantization, P_{i-1} is the pitch period parameter of the last frame in the previous superframe), and the candidate with the smallest difference is chosen as the current frame's pitch period parameter, that is: P_i = argmin_{P_i^(n) ∈ Ψ_i} |P_i^(n) - P_{i-1}|. This completes the error recovery; return to step (1). With multi-frame joint vector quantization, P_i is only the pitch period parameter of the first frame in the current superframe; the parameter finally recovered is the pitch period parameter vector of the whole superframe.
2. The vocoder fundamental tone cycle parameter channel error code resisting method according to claim 1, characterized in that the number of speech samples per frame in step (1) is normally 200, but is not limited to this and may also be 180 or another fixed number.
3. The vocoder fundamental tone cycle parameter channel error code resisting method according to claim 1, characterized in that the threshold in step (7) is obtained from offline statistics over a large amount of speech and is set to 25.
CN2006101652455A 2006-12-15 2006-12-15 Vocoder fundamental tone cycle parameter channel error code resisting method Expired - Fee Related CN1975861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2006101652455A CN1975861B (en) 2006-12-15 2006-12-15 Vocoder fundamental tone cycle parameter channel error code resisting method


Publications (2)

Publication Number Publication Date
CN1975861A true CN1975861A (en) 2007-06-06
CN1975861B CN1975861B (en) 2011-06-29

Family

ID=38125884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2006101652455A Expired - Fee Related CN1975861B (en) 2006-12-15 2006-12-15 Vocoder fundamental tone cycle parameter channel error code resisting method

Country Status (1)

Country Link
CN (1) CN1975861B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261836B (en) * 2008-04-25 2011-03-30 清华大学 Method for enhancing excitation signal naturalism based on judgment and processing of transition frames
CN101572089B (en) * 2009-05-21 2012-01-25 华为技术有限公司 Test method and device of signal period
CN102656629A (en) * 2009-12-10 2012-09-05 Lg电子株式会社 Method and apparatus for encoding a speech signal
CN104795074A (en) * 2015-03-19 2015-07-22 清华大学 Multi-mode multi-stage codebook joint optimization method
CN106489178A (en) * 2014-07-11 2017-03-08 奥兰治 Using the variable sampling frequency according to frame, post processing state is updated
CN108831509A (en) * 2018-06-13 2018-11-16 西安蜂语信息科技有限公司 Determination method, apparatus, computer equipment and the storage medium of pitch period
CN109256143A (en) * 2018-09-21 2019-01-22 西安蜂语信息科技有限公司 Speech parameter quantization method, device, computer equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106098072B (en) * 2016-06-02 2019-07-19 重庆邮电大学 A kind of 600bps very low speed rate encoding and decoding speech method based on mixed excitation linear prediction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60234195D1 (en) * 2001-08-31 2009-12-10 Kenwood Corp DEVICE AND METHOD FOR PRODUCING A TONE HEIGHT TURN SIGNAL AND DEVICE AND METHOD FOR COMPRESSING, DECOMPRESSING AND SYNTHETIZING A LANGUAGE SIGNAL THEREWITH
CN1186765C (en) * 2002-12-19 2005-01-26 北京工业大学 Method for encoding 2.3kb/s harmonic wave excidted linear prediction speech

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261836B (en) * 2008-04-25 2011-03-30 清华大学 Method for enhancing excitation signal naturalism based on judgment and processing of transition frames
CN101572089B (en) * 2009-05-21 2012-01-25 华为技术有限公司 Test method and device of signal period
CN102656629A (en) * 2009-12-10 2012-09-05 Lg电子株式会社 Method and apparatus for encoding a speech signal
CN102656629B (en) * 2009-12-10 2014-11-26 Lg电子株式会社 Method and apparatus for encoding a speech signal
US9076442B2 (en) 2009-12-10 2015-07-07 Lg Electronics Inc. Method and apparatus for encoding a speech signal
CN106489178A (en) * 2014-07-11 2017-03-08 奥兰治 Using the variable sampling frequency according to frame, post processing state is updated
CN106489178B (en) * 2014-07-11 2019-05-07 奥兰治 Post-processing state is updated using according to the variable sampling frequency of frame
CN104795074A (en) * 2015-03-19 2015-07-22 清华大学 Multi-mode multi-stage codebook joint optimization method
CN104795074B (en) * 2015-03-19 2019-01-04 清华大学 Multi-mode multi-stage codebooks combined optimization method
CN108831509A (en) * 2018-06-13 2018-11-16 西安蜂语信息科技有限公司 Determination method, apparatus, computer equipment and the storage medium of pitch period
CN108831509B (en) * 2018-06-13 2020-12-04 西安蜂语信息科技有限公司 Method and device for determining pitch period, computer equipment and storage medium
CN109256143A (en) * 2018-09-21 2019-01-22 西安蜂语信息科技有限公司 Speech parameter quantization method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN1975861B (en) 2011-06-29

Similar Documents

Publication Publication Date Title
CN1975861B (en) Vocoder fundamental tone cycle parameter channel error code resisting method
US7286982B2 (en) LPC-harmonic vocoder with superframe structure
US7493256B2 (en) Method and apparatus for high performance low bit-rate coding of unvoiced speech
CN1121683C (en) Speech coding
CN102169692B (en) Signal processing method and device
CN101494055B (en) Method and device for CDMA wireless systems
US20030004711A1 (en) Method for coding speech and music signals
CN101261834A (en) Encoding device and encoding method
EP3696813B1 (en) Audio encoder for encoding an audio signal, method for encoding an audio signal and computer program under consideration of a detected peak spectral region in an upper frequency band
CN1165365A (en) Pitch extraction method and device
CN101030377A (en) Method for increasing base-sound period parameter quantified precision of 0.6kb/s voice coder
CN1437747A (en) Closed-loop multimode mixed-domain linear prediction (MDLP) speech coder
CN102985969A (en) Coding device, decoding device, and methods thereof
CN1739143A (en) Method and apparatus for speech reconstruction within a distributed speech recognition system
JPWO2008108078A1 (en) Encoding apparatus and encoding method
CN1159260A (en) Sound decoding device
CN101009098B (en) Sound coder gain parameter division-mode anti-channel error code method
CN101783142B (en) Transcoding method, device and communication equipment
McCree et al. An embedded adaptive multi-rate wideband speech coder
CN1240050C (en) Invariant codebook fast search algorithm for speech coding
Qian et al. Wideband speech recovery from narrowband speech using classified codebook mapping
Thyssen et al. A candidate for the ITU-T G.722 packet loss concealment standard
Xydeas et al. A long history quantization approach to scalar and vector quantization of LSP coefficients
Eriksson et al. On waveform-interpolation coding with asymptotically perfect reconstruction
CN1179322C (en) Method for acquisition of basic speech period and encoding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110629

Termination date: 20141215

EXPY Termination of patent right or utility model