CN101174413B - Sound signal encoder and sound signal decoder - Google Patents


Info

Publication number
CN101174413B
CN101174413B CN2007101529987A CN200710152998A
Authority
CN
China
Prior art keywords
vector
dispersal pattern
dispersal
pulse
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN2007101529987A
Other languages
Chinese (zh)
Other versions
CN101174413A (en)
Inventor
安永和敏
森井利幸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Godo Kaisha IP Bridge 1
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP29513097A external-priority patent/JP3175667B2/en
Priority claimed from JP08571798A external-priority patent/JP3174756B2/en
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of CN101174413A publication Critical patent/CN101174413A/en
Application granted granted Critical
Publication of CN101174413B publication Critical patent/CN101174413B/en

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Abstract

The invention relates to a speech coder and a speech decoder, and in particular to a dispersed-vector generator for a speech coder/decoder and a method of generating the dispersed vector. The generator comprises: a pulse vector generator for generating a pulse vector carrying a signed unit pulse; a dispersion pattern memory for storing a number of fixed dispersion patterns; a dispersion pattern selector for selecting a dispersion pattern from the fixed dispersion patterns; and a dispersed-pulse-vector generator for convolving the pulse vector with the selected dispersion pattern to generate a dispersed pulse vector. The dispersion pattern selector comprises a first selector for preselecting dispersion patterns from the fixed dispersion patterns, and a second selector for selecting the dispersion pattern to be convolved from the dispersion patterns obtained by the preselection.

Description

Sound signal encoder and sound signal decoder
This application is a divisional application of the patent application for invention No. 98801556.0, with an international filing date of October 22, 1998, entitled "Speech coder and speech decoder".
Technical field
The present invention relates to a speech coder and a speech decoder for efficiently coding and decoding speech information.
Background technology
Speech coding technologies for efficiently coding and decoding speech information are currently under development. A CELP-type speech coder based on such technology is described in "Code Excited Linear Prediction: High Quality Speech at Low Bit Rate" (M. R. Schroeder, ICASSP '85, pp. 937-940). This speech coder performs linear prediction on each frame obtained by dividing the input speech at fixed intervals, obtains the prediction residual (excitation signal) from the frame-by-frame linear prediction, and codes this residual using an adaptive codebook storing past driving excitations and a noise codebook storing a plurality of noise code vectors.
Fig. 1 shows a functional block diagram of a conventional CELP-type speech coder.
In this CELP-type speech coder, linear prediction analysis unit 12 performs linear prediction analysis on input speech signal 11. This analysis yields linear predictive coefficients, parameters representing the spectral envelope of speech signal 11. The linear predictive coefficients obtained by linear prediction analysis unit 12 are quantized by linear predictive coefficient coding unit 13, and the quantized coefficients are sent to linear predictive coefficient decoding unit 14. The quantization index is output to code output unit 24 as the linear prediction code. Linear predictive coefficient decoding unit 14 decodes the quantized linear predictive coefficients obtained by linear predictive coefficient coding unit 13 into synthesis filter coefficients, and outputs the synthesis filter coefficients to synthesis filter 15.
Adaptive codebook 17 is a codebook that outputs multiple candidate adaptive code vectors, and consists of a buffer storing several past frames of the driving excitation. An adaptive code vector is a time-series vector expressing the periodic component of the input speech.
Noise codebook 18 is a codebook storing multiple candidate noise code vectors, the number of which corresponds to the number of bits allocated to it. A noise code vector is a time-series vector expressing the aperiodic component of the input speech.
Adaptive code gain weighting unit 19 and noise code gain weighting unit 20 multiply the candidate vectors output by adaptive codebook 17 and noise codebook 18 by the adaptive code gain and noise code gain read from weighting codebook 21, respectively, and output the results to adder 22.
Weighting codebook 21 is a memory storing multiple weights to be multiplied with candidate adaptive code vectors and weights to be multiplied with candidate noise code vectors, the number of which corresponds to the number of bits allocated.
Adder 22 adds the candidate adaptive code vector weighted by adaptive code gain weighting unit 19 and the candidate noise code vector weighted by noise code gain weighting unit 20, producing a candidate driving excitation vector, which it outputs to synthesis filter 15.
Synthesis filter 15 is an all-pole filter whose coefficients are the synthesis filter coefficients obtained by linear predictive coefficient decoding unit 14. When a candidate driving excitation vector is input from adder 22, it outputs a candidate synthetic speech vector.
Distortion calculation unit 16 calculates the distortion between the output of synthesis filter 15 (the candidate synthetic speech vector) and input speech 11, and outputs the distortion value to code specification unit 23. Code specification unit 23 specifies, for the three codebooks (adaptive codebook, noise codebook and weighting codebook), the three code indices (adaptive code index, noise code index and weighting code index) that minimize the distortion calculated by distortion calculation unit 16, and outputs the three specified indices to code output unit 24. Code output unit 24 gathers the linear prediction code obtained by linear predictive coefficient coding unit 13 and the adaptive code index, noise code index and weighting code index specified by code specification unit 23, and outputs them to the transmission line.
Fig. 2 shows a functional block diagram of a conventional CELP-type speech decoder that decodes the signal coded by the above coder. In this decoder, code input unit 31 receives the code sent from the speech coder (Fig. 1), decomposes it into the linear prediction code, adaptive code index, noise code index and weighting code index, and outputs the decomposed codes to linear predictive coefficient decoding unit 32, adaptive codebook 33, noise codebook 34 and weighting codebook 35, respectively.
Linear predictive coefficient decoding unit 32 then decodes the linear prediction code obtained from code input unit 31 into synthesis filter coefficients, which it outputs to synthesis filter 39. The adaptive code vector is read from the position of the adaptive codebook corresponding to the adaptive code index; the noise code vector corresponding to the noise code index is read from the noise codebook; and the adaptive code gain and noise code gain corresponding to the weighting code index are read from the weighting codebook. Adaptive code vector weighting unit 36 multiplies the adaptive code vector by the adaptive code gain and sends it to adder 38; likewise, noise code vector weighting unit 37 multiplies the noise code vector by the noise code gain and sends it to adder 38.
Adder 38 adds the two weighted code vectors to produce the driving excitation vector, which it sends to adaptive codebook 33 to update its buffer, and also to synthesis filter 39 to drive the filter. Synthesis filter 39, driven by the driving excitation vector obtained from adder 38, reproduces synthetic speech using the output of linear predictive coefficient decoding unit 32.
Distortion calculation unit 16 of the CELP-type speech coder generally calculates the distortion E using the following formula (1):
E = ||V - (ga·H·P + gc·H·C)||^2   (1)
V: input speech signal (vector)
H: impulse response convolution matrix of the synthesis filter, i.e. the lower-triangular Toeplitz matrix whose element (n, k) is h(n - k) for k <= n and zero otherwise
where h is the impulse response (vector) of the synthesis filter, and L is the frame length
P: adaptive code vector
C: noise code vector
ga: adaptive code gain
gc: noise code gain
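As a rough numerical illustration (not part of the patent), formula (1) can be evaluated directly; the sketch below builds H as the lower-triangular Toeplitz convolution matrix of the impulse response h and computes E. All function names are illustrative.

```python
import numpy as np

def toeplitz_h(h):
    # Convolution matrix of the synthesis filter: H[n, k] = h(n - k) for k <= n.
    L = len(h)
    H = np.zeros((L, L))
    for n in range(L):
        H[n, :n + 1] = h[n::-1]
    return H

def distortion_formula1(V, h, P, C, ga, gc):
    # E = ||V - (ga*H*P + gc*H*C)||^2, formula (1).
    H = toeplitz_h(h)
    return float(np.sum((V - (ga * H @ P + gc * H @ C)) ** 2))
```

With h a unit impulse, H reduces to the identity and the distortion is simply the squared error between V and the gain-weighted sum of the two code vectors.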
Here, to minimize the distortion E of formula (1), the distortion would have to be calculated in closed loop over all combinations of adaptive code index, noise code index and weighting code index in order to specify each index.
However, a closed-loop search of formula (1) requires an excessive amount of computation, so in general the adaptive code index is first specified by vector quantization using the adaptive codebook, then the noise code index is specified by vector quantization using the noise codebook, and finally the weighting code index is specified by vector quantization using the weighting codebook. The vector quantization processing using the noise codebook in this case is explained further here.
When the adaptive code index and the adaptive code gain have been determined (or provisionally determined) in advance, the distortion evaluation formula (1) becomes the following formula (2):
Ec = ||X - gc·H·C||^2   (2)
where the vector X in formula (2) is the noise excitation information (the target vector for specifying the noise code index) obtained by the following formula (3), using the adaptive code index and adaptive code gain determined (or provisionally determined) in advance:
X = V - ga·H·P   (3)
ga: adaptive code gain
V: speech signal (vector)
H: impulse response convolution matrix of the synthesis filter
P: adaptive code vector
When the noise code gain gc is specified after the noise code index, it is well known that, since gc in formula (2) can take an arbitrary value, the processing that specifies the index of the noise code vector minimizing formula (2) (the vector quantization of the noise excitation information) can be replaced by processing that specifies the index of the noise code vector maximizing the fraction of the following formula (4):
(X^t·H·C)^2 / ||H·C||^2   (4)
That is, when the adaptive code index and adaptive code gain are given (or provisionally given) beforehand, the vector quantization of the noise excitation information becomes the processing of specifying the index of the candidate noise code vector that maximizes the fraction of formula (4) calculated by distortion calculation unit 16.
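As a sanity check (an illustration, not from the patent), the equivalence invoked here can be verified numerically: with the gain gc free, the minimum of formula (2) over gc equals ||X||^2 minus the fraction of formula (4), so maximizing (4) minimizes (2). The function names below are assumptions.

```python
import numpy as np

def criterion_formula4(X, HC):
    # (X^t * HC)^2 / ||HC||^2, the fraction of formula (4).
    return float(np.dot(X, HC) ** 2 / np.dot(HC, HC))

def min_distortion_formula2(X, HC):
    # Formula (2) evaluated at the optimal gain gc = X^t*HC / ||HC||^2.
    gc = np.dot(X, HC) / np.dot(HC, HC)
    return float(np.sum((X - gc * HC) ** 2))
```

For any candidate, min_distortion_formula2(X, HC) equals dot(X, X) - criterion_formula4(X, HC), so ranking candidates by formula (4) in descending order is the same as ranking them by formula (2) in ascending order.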
In early CELP-type coders/decoders, data obtained from random number sequences, of a number of kinds corresponding to the number of allocated bits, were stored in memory and used as the noise codebook. This, however, posed the following problems: a very large memory capacity is required, and at the same time the amount of computation for calculating the formula-(4) distortion for every candidate noise code vector is enormous.
One approach to solving this problem is a CELP-type speech coder/decoder that uses an algebraic excitation vector generation unit producing excitation vectors algebraically, as described in "8 KBIT/S ACELP CODING OF SPEECH WITH 10MS SPEECH-FRAME: A CANDIDATE FOR CCITT STANDARDIZATION" (R. Salami, C. Laflamme and J-P. Adoul, ICASSP '94, pp. II-97 to II-100, 1994).
In a CELP-type speech coder/decoder whose noise codebook uses such an algebraic excitation generation unit, however, the noise excitation information obtained by formula (3) (the target vector for specifying the noise code index) is always expressed by approximation with a small number of pulses, which limits the achievable improvement in speech quality. Indeed, inspection of the elements of the actual noise excitation information X of formula (3) shows that it rarely consists of only a small number of pulse-like elements, which explains this limitation.
Summary of the invention
An object of the present invention is to provide a novel excitation vector generator capable of producing excitation vectors whose shape has high statistical similarity to the shapes of excitation vectors obtained by analyzing actual speech signals.
Another object of the present invention is to provide a CELP-type speech coder/decoder, a speech signal communication system and a speech signal recording system that, by using the above excitation vector generator as the noise codebook, can obtain synthetic speech of higher quality than when an algebraic excitation generation unit is used as the noise codebook.
A first aspect of the present invention is an excitation vector generator comprising: a pulse vector generation unit having N (N >= 1) channels, each generating a pulse vector in which a signed unit pulse is placed at some element on the vector axis; a dispersion pattern selection unit having the function of storing M (M >= 1) dispersion patterns for each of the N channels, together with the function of selecting one dispersion pattern from the M stored dispersion patterns; a pulse vector dispersion unit having the function of convolving, for each channel, the pulse vector output by said pulse vector generation unit with the dispersion pattern selected by said dispersion pattern selection unit, producing N dispersed vectors; and a dispersed vector addition unit having the function of adding the N dispersed vectors produced by said pulse vector dispersion unit to produce an excitation vector. By giving said pulse vector generation unit the function of generating the N (N >= 1) pulse vectors algebraically, and by having said dispersion pattern selection unit store in advance dispersion patterns obtained by learning the shapes (characteristics) of actual speech vectors, excitation vectors closer in shape to actual excitation vectors than those of conventional algebraic excitation generation units can be produced.
A second aspect of the present invention is a CELP-type speech coder/decoder characterized by using said excitation vector generator in its noise codebook. Compared with conventional speech coders/decoders using an algebraic excitation generation unit in the noise codebook, excitation vectors closer to the true shape can be produced; therefore, a speech coder/decoder, a speech signal communication system and a speech signal recording system capable of outputting synthetic speech of higher quality can be obtained.
According to one aspect of the present invention, a dispersed pulse vector generating device for a speech coder/decoder is provided, said dispersed pulse vector generating device comprising:
a pulse vector generation unit for generating a pulse vector carrying a signed unit pulse;
a dispersion pattern selection unit for storing a plurality of fixed dispersion patterns, preselecting dispersion patterns from said plurality of fixed dispersion patterns, and selecting one dispersion pattern from the dispersion patterns obtained by said preselection; and
a pulse vector dispersion unit for convolving said pulse vector with the selected dispersion pattern to generate a dispersed pulse vector. According to another aspect of the present invention, a method for a speech coder/decoder of generating a dispersed pulse vector is provided, said method comprising the steps of:
a pulse vector generation step of generating a pulse vector carrying a signed unit pulse;
a dispersion pattern reading step of reading a plurality of fixed dispersion patterns from a memory;
a dispersion pattern selection step of preselecting dispersion patterns from said plurality of fixed dispersion patterns read, and selecting one dispersion pattern from the dispersion patterns obtained by said preselection; and
a pulse vector dispersion step of convolving said pulse vector with the selected dispersion pattern to generate a dispersed pulse vector.
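The two-stage selection claimed above can be sketched as follows. This is a hypothetical illustration: the scoring functions and all names are assumptions, not definitions from the patent — for instance, a cheap preselection measure followed by a more exact measure.

```python
def select_dispersion_pattern(patterns, coarse_score, exact_score, presel=2):
    # First selector: preselect `presel` patterns by the cheap coarse score.
    ranked = sorted(range(len(patterns)),
                    key=lambda i: coarse_score(patterns[i]), reverse=True)
    preselected = ranked[:presel]
    # Second selector: pick the pattern to convolve using the exact score.
    return max(preselected, key=lambda i: exact_score(patterns[i]))
```

The point of the split is that the coarse score prunes the stored patterns cheaply, so the exact score only has to be evaluated on the preselected subset.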
Description of drawings
Fig. 1 is a functional block diagram of a conventional CELP-type speech coder.
Fig. 2 is a functional block diagram of a conventional CELP-type speech decoder.
Fig. 3 is a functional block diagram of the excitation vector generator according to a first embodiment of the present invention.
Fig. 4 is a functional block diagram of the CELP-type speech coder according to a second embodiment of the present invention.
Fig. 5 is a functional block diagram of the CELP-type speech decoder according to the second embodiment of the present invention.
Fig. 6 is a functional block diagram of the CELP-type speech coder according to a third embodiment of the present invention.
Fig. 7 is a functional block diagram of the CELP-type speech coder according to a fourth embodiment of the present invention.
Fig. 8 is a functional block diagram of the CELP-type speech coder according to a fifth embodiment of the present invention.
Fig. 9 is a block diagram of the vector quantization function in the fifth embodiment.
Fig. 10 is an explanatory diagram of the target extraction algorithm in the fifth embodiment.
Fig. 11 is a functional block diagram of predictive quantization in the fifth embodiment.
Fig. 12 is a functional block diagram of predictive quantization in a sixth embodiment.
Fig. 13 is a functional block diagram of the CELP-type speech coder in a seventh embodiment.
Fig. 14 is a functional block diagram of the distortion calculation unit in the seventh embodiment.
Embodiment
Embodiments of the present invention are described below with reference to the drawings.
(the 1st example)
Fig. 3 shows a functional block diagram of the excitation vector generator according to this embodiment of the present invention. This excitation vector generator comprises: a pulse vector generation unit 101 with a plurality of channels; a dispersion pattern selection unit 102 with dispersion pattern storage units and switches; a pulse vector dispersion unit 103 that disperses the pulse vectors; and a dispersed vector addition unit 104 that adds the dispersed pulse vectors of the channels.
Pulse vector generation unit 101 has N channels (this embodiment describes the case N = 3), each of which generates a vector in which a signed unit pulse is placed at some element on the vector axis (hereinafter called a pulse vector).
Dispersion pattern selection unit 102 has storage units M1-M3 and switches SW1-SW3; the former store M dispersion patterns per channel (this embodiment describes the case M = 2), and the latter each select one dispersion pattern from the M dispersion patterns in the corresponding storage unit M1-M3.
Pulse vector dispersion unit 103 convolves, for each channel, the pulse vector output by pulse vector generation unit 101 with the dispersion pattern output by dispersion pattern selection unit 102, producing N dispersed vectors.
Dispersed vector addition unit 104 adds the N dispersed vectors produced by pulse vector dispersion unit 103 to generate excitation vector 105.
This embodiment describes the case where pulse vector generation unit 101 generates the N pulse vectors (N = 3) algebraically according to the rules given in Table 1 below.
Table 1
[Table 1: candidate pulse positions and polarities for each of the three channels — reproduced as an image in the original document]
The operation of the excitation vector generator configured as described above will now be explained. Dispersion pattern selection unit 102 selects one of the two dispersion patterns stored for each channel and outputs it to pulse vector dispersion unit 103. A specific number is assigned to each possible combination of selected dispersion patterns (M^N = 8 combinations in total).
Pulse vector generation unit 101 then algebraically generates as many pulse vectors as there are channels (three in this embodiment) according to the rules of Table 1.
Pulse vector dispersion unit 103 convolves the dispersion patterns selected by dispersion pattern selection unit 102 with the pulses generated by pulse vector generation unit 101 according to formula (5), generating a dispersed vector for each channel:
ci(n) = Σ_{k=0}^{L-1} wij(n - k)·di(k)   (5)
where n = 0 to L-1
L: dispersed vector length
i: channel number
j: dispersion pattern number (j = 1 to M)
ci: dispersed vector of channel i
wij: j-th dispersion pattern of channel i; the vector length of wij(m) is 2L-1 (m = -(L-1) to L-1), but of the 2L-1 elements only Lij elements are settable and the other elements are zero
di: pulse vector of channel i, di = ±δ(n - pi), n = 0 to L-1
pi: candidate pulse position of channel i
Dispersed vector addition unit 104 adds the three dispersed vectors produced by pulse vector dispersion unit 103 according to formula (6) to produce excitation vector 105:
c(n) = Σ_{i=1}^{N} ci(n)   (6)
c: excitation vector
ci: dispersed vector
i: channel number (i = 1 to N)
n: vector element number (n = 0 to L-1, where L is the excitation vector length)
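Formulas (5) and (6) amount to a per-channel convolution followed by a sum, which can be sketched as below. This is illustrative only: causal patterns of length at most L are assumed here (the patent allows wij indices from -(L-1) to L-1), and the pulse data are stand-ins, not the entries of Table 1.

```python
import numpy as np

def excitation_vector(pulse_vectors, patterns):
    # ci(n) = sum_k wij(n - k) * di(k)  -- formula (5), truncated to L samples
    # c(n)  = sum_i ci(n)               -- formula (6)
    L = len(pulse_vectors[0])
    c = np.zeros(L)
    for d, w in zip(pulse_vectors, patterns):
        c += np.convolve(w, d)[:L]
    return c
```

For a single channel whose pulse vector is a positive unit pulse at position 2, the result is simply the dispersion pattern shifted to start at sample 2 — each pulse "stamps" a signed, shifted copy of its channel's pattern into the excitation.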
In the excitation vector generator configured in this way, varying the combination of dispersion patterns selected by dispersion pattern selection unit 102 and the pulse positions and polarities of the pulse vectors produced by pulse vector generation unit 101 makes it possible to produce a wide variety of excitation vectors.
In the excitation vector generator configured as above, a number can be allocated in advance, one to one, to each of the two kinds of information: the combination of dispersion patterns selected by dispersion pattern selection unit 102, and the combination (pulse positions and pulse polarities) of the pulse vectors produced by pulse vector generation unit 101. Dispersion pattern selection unit 102 can also store in advance dispersion patterns obtained as the result of learning from actual excitation information.
If the above excitation vector generator is used in the excitation information generating section of a speech coder/decoder, the noise excitation information can be transmitted by transmitting two numbers: the number of the dispersion pattern combination selected by the dispersion pattern selection unit, and the number of the pulse vector combination (which determines the pulse positions and polarities) produced by the pulse vector generation unit.
Moreover, when the excitation vector generation unit configured as above is used, excitation vectors whose shape (characteristics) is more similar to actual excitation information can be produced than when a pulse excitation generated algebraically is used.
This embodiment has described the case where dispersion pattern selection unit 102 stores two dispersion patterns per channel, but the same effects are obtained when a number of dispersion patterns other than two is allocated to each channel.
Likewise, this embodiment has described the case where pulse vector generation unit 101 consists of three channels and follows the pulse generation rules of Table 1; the same effects are obtained when the number of channels is different, or when pulse generation rules other than those of Table 1 are adopted.
In addition, a speech signal communication system or speech signal recording system incorporating the above excitation vector generator or speech coder/decoder obtains the effects that the above excitation vector generator possesses.
(the 2nd example)
Fig. 4 shows a functional block diagram of the CELP-type speech coder according to this embodiment, and Fig. 5 shows a functional block diagram of the CELP-type speech decoder according to this embodiment.
The CELP-type speech coder of this embodiment uses the excitation vector generator described in the first embodiment in the noise codebook of the CELP-type speech coder of Fig. 1. The CELP-type speech decoder of this embodiment uses the excitation vector generator of the first embodiment in the noise codebook of the CELP-type speech decoder of Fig. 2. All processing other than the vector quantization of the noise excitation information is therefore identical to that of the devices of Figs. 1 and 2. This embodiment describes the speech coder and speech decoder with the focus on that vector quantization processing. As in the first embodiment, the number of channels is N = 3, the number of dispersion patterns per channel is M = 2, and the pulse vectors are generated according to Table 1.
The noise excitation vector quantization processing in the speech coder of Fig. 4 is the processing of specifying the two numbers (the dispersion pattern combination number, and the pulse position/polarity combination number) that maximize the reference value of formula (4).
When the excitation vector generator of Fig. 3 is used as the noise codebook, the dispersion pattern combination number (8 combinations) and the pulse vector combination number (16384 combinations when polarity is considered) are specified in closed loop.
Dispersion pattern selection unit 215 therefore first selects one dispersion pattern per channel from the two it stores, and outputs them to pulse vector dispersion unit 217. Pulse vector generation unit 216 then algebraically produces as many pulse vectors as there are channels (three in this embodiment) according to the rules of Table 1, and outputs them to pulse vector dispersion unit 217.
Pulse vector dispersion unit 217 convolves, for each channel, the dispersion pattern selected by dispersion pattern selection unit 215 with the pulse vector produced by pulse vector generation unit 216 according to formula (5), generating a dispersed vector.
Dispersed vector addition unit 218 adds the dispersed vectors obtained by pulse vector dispersion unit 217 to generate the excitation vector (which becomes a candidate noise code vector).
Distortion calculation unit 206 then calculates the value of formula (4) using the candidate noise code vector obtained by dispersed vector addition unit 218. This calculation of formula (4) is carried out for all combinations of pulse vectors produced according to the rules of Table 1, and the dispersion pattern combination number and pulse vector combination number (the combination of pulse positions and their polarities) at which the value of formula (4) is maximum, together with that maximum value, are output to code specification unit 213.
Dispersion pattern selection unit 215 then selects, from the stored dispersion patterns, a combination different from the one just selected. For the newly selected dispersion pattern combination, the value of formula (4) is calculated in the same way as above for all pulse vector combinations produced according to the rules of Table 1 by pulse vector generation unit 216. Once again, the dispersion pattern combination number, pulse vector combination number and maximum value at which formula (4) is maximum are output to code specification unit 213.
The above processing is repeated for all combinations of dispersion patterns selectable by dispersion pattern selection unit 215 (8 combinations in total in this embodiment).
Code specification unit 213 compares all eight maximum values calculated by distortion calculation unit 206, selects the largest of them, specifies the two combination numbers producing this maximum (the dispersion pattern combination number and the pulse vector combination number), and outputs them to code output unit 214 as the noise code index.
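The closed-loop search described above can be sketched as a pair of nested loops. This is a toy illustration: the candidate sets below stand in for the 8 dispersion pattern combinations and 16384 pulse combinations of the embodiment, and the function names are assumptions.

```python
import numpy as np
from itertools import product

def closed_loop_search(X, H, pattern_sets, pulse_candidates):
    # Outer loop: every per-channel dispersion pattern combination.
    # Inner loop: every pulse vector combination (stand-in for Table 1).
    # Keeps the pair of numbers maximizing the formula-(4) criterion.
    best_val, best_pat, best_pulse = -1.0, None, None
    for pat_idx in product(*(range(len(s)) for s in pattern_sets)):
        patterns = [s[i] for s, i in zip(pattern_sets, pat_idx)]
        for pulse_no, pulses in enumerate(pulse_candidates):
            L = len(pulses[0])
            c = np.zeros(L)
            for d, w in zip(pulses, patterns):
                c += np.convolve(w, d)[:L]     # formulas (5) and (6)
            HC = H @ c
            denom = float(np.dot(HC, HC))
            if denom == 0.0:
                continue
            val = float(np.dot(X, HC)) ** 2 / denom   # formula (4)
            if val > best_val:
                best_val, best_pat, best_pulse = val, pat_idx, pulse_no
    return best_val, best_pat, best_pulse
```

The two returned indices correspond to the two numbers the coder transmits as the noise code index: the dispersion pattern combination number and the pulse vector combination number.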
On the other hand; In the sound signal decoder of Fig. 5; Coding input block 301 receives the coding that voice coder (Fig. 4) is sent here; The coding that receives is resolved into corresponding linear prediction number, adaptive code number, noise code number (being made up of for 2 kinds dispersal pattern combination number, pulse vector combination number etc.) and weighted code number, and the coding that will decompose gained outputs to linear predictor coefficient decoding unit 302, adaptive codebook 303, noise code book 304 and weighting code book 305 respectively.
In the noise code number, the dispersal pattern combination number outputs to dispersal pattern bank select bit 311, pulse vector combination and number outputs to pulse vector generation unit 312.
Then, linear predictor coefficient decoding unit 302 is obtained number decoding of linear prediction sign indicating number the composite filter coefficient, and is outputed to composite filter 309.At adaptive codebook 303, from reading the adaptive code vector with adaptive code number corresponding position.
In the noise code book 304, dispersal pattern bank select bit 311 is read the dispersal pattern number corresponding with spreading pulse combined and is outputed to pulse vector diffusion unit 313 each passage; Pulse vector generation unit 312 produce the port number shares with number corresponding pulse vector of pulse vector combination and output to pulse vector diffusion unit 313; Dispersal pattern that pulse vector diffusion unit 313 will receive from dispersal pattern bank select bit 311 and the pulse vector that receives from pulse vector generation unit 312 produce the diffusion vector with the convolution algorithm of formula (5), and output to diffusion vector addition device 314.After the diffusion vector addition of diffusion vector addition device 314 with each passage of pulse vector diffusion unit 313 generations, produce the noise code vector.
The adaptive code gain and noise code gain corresponding to the weighting code index are read from weighting codebook 305; the adaptive code vector is multiplied by the adaptive code gain in adaptive code vector weighting unit 306, the noise code vector is likewise multiplied by the noise code gain in noise code vector weighting unit 307, and both are sent to adder 308.
Adder 308 adds the two gain-weighted code vectors to generate the excitation vector, and outputs the generated excitation vector to adaptive codebook 303 to update its buffer and to synthesis filter 309 to drive the filter.
Synthesis filter 309, driven by the excitation vector obtained from adder 308, reproduces synthetic speech 310. Adaptive codebook 303 updates its buffer with the excitation vector received from adder 308.
The dispersion patterns stored per channel in the dispersion pattern storage/selection units of Figs. 4 and 5 are obtained by prior learning, using as cost function equation (7), which results from substituting the excitation vector of equation (6) for C in the distortion criterion of equation (2); the learning reduces the value of this cost function, and the resulting dispersion patterns are stored for each channel.
Through the above operation, an excitation vector whose shape resembles that of the actual noise source information (vector X in equation (4)) can be generated, so that synthetic speech of higher quality is obtained than with a CELP speech coder/decoder whose noise codebook employs an algebraic excitation vector generation unit.
Ec = ‖ X − gc·H·Σ_{i=1}^{N} ci ‖²
   = Σ_{n=0}^{L−1} ( x(n) − gc·(H·Σ_{i=1}^{N} ci)(n) )²,   where ci(n) = Σ_{k=0}^{L−1} wij(n−k)·di(k)   (7)
X: target vector for noise code index determination
gc: noise code gain
H: synthesis filter impulse response convolution matrix
C: noise code vector
i: channel number (i = 1 … N)
j: dispersion pattern number (j = 1 … M)
ci: dispersed vector of channel i
wij: j-th dispersion pattern of channel i
di: pulse vector of channel i
L: excitation vector length (n = 0 … L−1)
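The cost of equation (7) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the function names are invented, the dispersion patterns and pulse vectors are passed as plain arrays, and the synthesis filter matrix H is applied as a convolution with the filter's impulse response.

```python
import numpy as np

def dispersed_excitation(pulses, patterns):
    """Per-channel convolution of a pulse vector with its dispersion
    pattern (equation (5)), summed over the channels."""
    L = len(pulses[0])
    c = np.zeros(L)
    for d_i, w_i in zip(pulses, patterns):
        c += np.convolve(d_i, w_i)[:L]  # truncate to excitation length L
    return c

def cost_Ec(x, gc, h, pulses, patterns):
    """Equation (7): squared error between target x and the gain-scaled,
    synthesis-filtered dispersed excitation."""
    c = dispersed_excitation(pulses, patterns)
    synth = np.convolve(h, c)[:len(x)]  # H·c with impulse response h
    return np.sum((x - gc * synth) ** 2)
```

In the learning described above, the patterns wij would be adjusted so that this cost, averaged over training data, becomes small.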
This example describes the case where M dispersion patterns, each obtained by prior learning so as to reduce the cost of equation (7), are stored in advance for each channel. In practice, however, not all M dispersion patterns need to be obtained by learning; as long as each channel stores in advance at least one dispersion pattern obtained by learning, the operation and effect of improving synthetic speech quality are still obtained.
In this example, the combination index that maximizes the reference value of equation (4) is determined in closed loop over all combinations of the dispersion patterns stored in the dispersion pattern storage/selection unit and all combinations of the candidate pulse vector positions generated by the pulse vector generation unit. The same operation and effect can, however, also be obtained by preselecting candidates using parameters obtained before the noise codebook search (the ideal gain of the adaptive code vector, etc.), or by searching in open loop.
Furthermore, by constructing a speech signal communication system or a speech signal recording system that includes the above speech coder/decoder, the operation and effect of the excitation vector generator described in the first example can be obtained.
(the 3rd example)
Fig. 6 shows the functional block diagram of the CELP speech coder of this example. This example employs, in the noise codebook of the CELP speech coder of the first example, the excitation vector generator, and preselects the dispersion patterns stored in the dispersion pattern storage/selection unit using the ideal adaptive code gain obtained before the noise codebook search. Except for the surroundings of the noise codebook, the coder is identical to the CELP speech coder of Fig. 4. Therefore this example describes the quantization of the noise source information vector in the CELP speech coder of Fig. 6.
This CELP speech coder comprises adaptive codebook 407, adaptive code gain weighting unit 409, noise codebook 408 composed of the excitation vector generator described in the first example, noise code gain weighting unit 410, synthesis filter 405, distortion calculation unit 406, code determining unit 413, dispersion pattern storage/selection unit 415, pulse vector generation unit 416, pulse vector diffusion unit 417, dispersed vector addition unit 418, and adaptive gain judging unit 419.
In this example, however, at least one of the M dispersion patterns (M ≥ 2) stored in dispersion pattern storage/selection unit 415 is obtained by prior learning such that the quantization distortion produced when quantizing the noise source information vector becomes small.
For ease of explanation, the number of channels N of the pulse vector generation unit is taken as 3, the number M of dispersion patterns stored per channel in the dispersion pattern storage/selection unit is taken as 2, and the following case is described: one of the M dispersion patterns (M = 2) is the pattern obtained by the above learning, and the other is a random vector sequence generated by a random number generator (hereinafter called the random pattern). Incidentally, the dispersion pattern obtained by learning, like W11 in Fig. 3, is clearly a pulse-like dispersion pattern of relatively short length.
In the CELP speech coder of Fig. 6, the adaptive codebook entry is determined before the noise source information vector is quantized. Therefore, at the time the noise source information vector is quantized, the adaptive codebook index (adaptive code index) and the ideal adaptive code gain (provisionally determined) can be referred to. In this example, this ideal adaptive code gain is used to preselect the dispersion patterns.
Specifically, immediately after the adaptive codebook search finishes, code determining unit 413 outputs the ideal adaptive code gain it holds to distortion calculation unit 406, and distortion calculation unit 406 outputs the adaptive code gain received from code determining unit 413 to adaptive gain judging unit 419.
Adaptive gain judging unit 419 compares the ideal adaptive gain value received from distortion calculation unit 406 with a preset threshold, and sends a preselection control signal to dispersion pattern storage/selection unit 415 according to the result of the comparison. The control signal indicates: when the adaptive code gain is larger than the threshold, selection of the dispersion pattern obtained by prior learning so that the quantization distortion produced when quantizing the noise source information vector becomes small; and when the adaptive code gain is smaller, preselection of a dispersion pattern different from the one obtained by learning.
As a result, dispersion pattern storage/selection unit 415 preselects, out of the M dispersion patterns (M = 2) stored per channel, the one suited to the magnitude of the adaptive gain, so that the number of dispersion pattern combinations is greatly reduced. Consequently, the distortion need not be calculated for all combinations of dispersion patterns, and the vector quantization of the noise source information can be carried out efficiently with a small amount of computation.
Moreover, the noise code vector becomes pulse-like in shape when the adaptive gain value is large (when the speech is strongly voiced), and random in shape when the adaptive gain value is small (when the speech is weakly voiced). Therefore a noise code vector of suitable shape can be used in the voiced and unvoiced regions of the speech signal, respectively, so that the quality of the synthetic speech can be improved.
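The gain-based preselection of this example can be sketched as below. The function name and the threshold value are assumptions for illustration; the patent does not fix a numerical threshold.

```python
def preselect_pattern(adaptive_gain, learned, random_pat, threshold=0.5):
    """Pick the learned (pulse-like) dispersion pattern when the ideal
    adaptive gain is large (strongly voiced frame), otherwise the random
    pattern. The threshold of 0.5 is illustrative only."""
    return learned if adaptive_gain > threshold else random_pat
```

In the coder this choice would be made once per channel before the noise codebook search, halving the number of pattern combinations to evaluate when M = 2.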
For ease of explanation, this example is limited to the case where the number of channels N of the pulse vector generation unit is 3 and the number M of dispersion patterns stored per channel in the dispersion pattern storage/selection unit is 2. The same operation and effect can, however, also be obtained when the number of channels of the pulse vector generation unit and the number of dispersion patterns per channel differ from the above.
Also for ease of explanation, this example describes the case where, of the M dispersion patterns (M = 2) stored per channel, one is the pattern obtained by the above learning and the other is a random pattern. However, as long as each channel stores in advance at least one dispersion pattern obtained by learning, the same operation and effect can be expected even in cases other than the above.
This example describes the case where the magnitude of the adaptive code gain is used as the means for preselecting dispersion patterns; if, in addition to the adaptive gain magnitude, other parameters expressing the short-term characteristics of the speech signal are used, a further effect can be expected.
Furthermore, by constructing a speech signal communication system or a speech signal recording system that includes the above speech coder, the operation and effect of the excitation vector generator described in the first example can be obtained.
Moreover, this example describes the method of preselecting dispersion patterns using the ideal adaptive excitation gain of the current frame, which can be referred to at the time of noise source information quantization; however, the decoded adaptive excitation gain obtained in the immediately preceding frame may be used instead of the ideal gain of the current frame, and the same structure then yields the same effect.
(the 4th example)
Fig. 7 is the functional block diagram of the CELP speech coder of this example. This example employs, in the noise codebook of the CELP speech coder of the first example, the excitation vector generator, and preselects the plural dispersion patterns stored in the dispersion pattern storage/selection unit using information available at the time the noise source information vector is quantized. The preselection criterion is characterized by the magnitude of the coding distortion (expressed as an S/N ratio) produced when the adaptive codebook entry is determined.
Except for the surroundings of the noise codebook, the coder is identical to the CELP speech coder of Fig. 4. Therefore this example describes the vector quantization of the noise source information in detail.
As shown in Fig. 7, the CELP speech coder of this example comprises adaptive codebook 507, adaptive code gain weighting unit 509, noise codebook 508 composed of the excitation vector generator described in the first example, noise code gain weighting unit 510, synthesis filter 505, distortion calculation unit 506, code determining unit 513, dispersion pattern storage/selection unit 515, pulse vector generation unit 516, pulse vector diffusion unit 517, dispersed vector addition unit 518, and distortion power judging unit 519.
In this example, however, at least one of the M dispersion patterns (M ≥ 2) stored in dispersion pattern storage/selection unit 515 is a random pattern.
For ease of explanation, the number of channels N of the pulse vector generation unit is taken as 3 and the number M of dispersion patterns stored per channel in the dispersion pattern storage/selection unit is taken as 2, and it is assumed that one of the M dispersion patterns (M = 2) is a random pattern and the other is a pattern obtained by prior learning so that the quantization distortion produced when quantizing the noise source information vector becomes small.
In the CELP speech coder of Fig. 7, the adaptive codebook entry is determined before the noise source information vector quantization is carried out. Therefore, at the time the noise source vector quantization is performed, the adaptive codebook index (adaptive code index), the ideal adaptive code gain (provisionally determined) and the target vector used for the adaptive codebook search can be referred to. In this example, the adaptive codebook coding distortion (expressed as an S/N ratio), which can be calculated from these three pieces of information, is used to preselect the dispersion patterns.
Specifically, immediately after the adaptive codebook search finishes, code determining unit 513 outputs the adaptive code index and the adaptive code gain (ideal gain) it holds to distortion calculation unit 506. Distortion calculation unit 506 uses the adaptive code index and adaptive code gain received from code determining unit 513, together with the target vector of the adaptive codebook search, to calculate the coding distortion (S/N ratio) produced by determining the adaptive codebook entry, and outputs the calculated S/N ratio to distortion power judging unit 519.
Distortion power judging unit 519 first compares the S/N ratio received from distortion calculation unit 506 with a preset threshold, and then, according to the result of this comparison, sends a preselection control signal to dispersion pattern storage/selection unit 515. The control signal indicates: when the S/N ratio is large, selection of the dispersion pattern obtained by prior learning so that the coding distortion produced when coding the target vector of the noise codebook search becomes small; and when the S/N ratio is small, selection of the random dispersion pattern.
As a result, in dispersion pattern storage/selection unit 515, only one of the M dispersion patterns (M = 2) stored per channel is preselected, so that the number of dispersion pattern combinations is greatly reduced. Therefore the distortion need not be calculated for all combinations of dispersion patterns, and the noise code index can be determined efficiently with a small amount of computation. Moreover, the noise code vector is pulse-like in shape when the S/N ratio is large and random in shape when the S/N ratio is small, so that the shape of the noise code vector changes with the short-term characteristics of the speech signal and the quality of the synthetic speech can be improved.
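The S/N-based preselection can be sketched as below. The S/N definition shown (target energy over residual energy after the adaptive-codebook contribution) and the threshold value are assumptions for illustration; the patent states only that the S/N ratio is computed from the adaptive code index, the ideal gain and the search target.

```python
import numpy as np

def adaptive_snr_db(target, synth_adaptive, gain):
    """Illustrative S/N (dB) of the adaptive-codebook stage: energy of the
    search target over energy of the residual after subtracting the
    gain-scaled synthesized adaptive contribution."""
    err = target - gain * synth_adaptive
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum(err ** 2))

def preselect_by_snr(snr_db, learned, random_pat, threshold_db=10.0):
    # Large S/N (well-predicted, voiced frame) -> pulse-like learned pattern;
    # small S/N -> random pattern. Threshold is illustrative.
    return learned if snr_db > threshold_db else random_pat
```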
For ease of explanation, this example is limited to the case where the number of channels N of the pulse vector generation unit is 3 and the number M of dispersion patterns stored per channel in the dispersion pattern storage/selection unit is 2. The same operation and effect can, however, also be obtained when the number of channels of the pulse vector generation unit and the kinds of dispersion patterns per channel differ from the above.
Also for ease of explanation, this example describes the case where, of the M dispersion patterns (M = 2) stored per channel, one is obtained by the above learning and the other is a random pattern. However, as long as each channel stores in advance at least one random dispersion pattern, the same operation and effect can be expected even in cases other than the above.
Although this example uses only the magnitude of the coding distortion (expressed as an S/N ratio) produced by determining the adaptive code index as the means for preselecting dispersion patterns, a further effect can be expected if information that more accurately expresses the short-term characteristics of the speech signal is used in addition.
Furthermore, by constructing a speech signal communication system or a speech signal recording system that includes the above speech coder, the operation and effect of the excitation vector generator described in the first example can be obtained.
(The 5th example)
Fig. 8 shows the functional block diagram of the CELP speech coder of the fifth example of the present invention. In this CELP speech coder, LPC analysis unit 600 performs autocorrelation analysis and LPC analysis on input speech data 601 to obtain LPC coefficients, encodes the obtained LPC coefficients to obtain the LPC code, and decodes the obtained LPC code to obtain decoded LPC coefficients.
Next, excitation generation unit 602 takes out the excitation samples stored in adaptive codebook 603 and noise codebook 604 (called the adaptive code vector (or adaptive excitation) and the noise code vector (or noise excitation), respectively) and sends each of them to LPC synthesis unit 605.
LPC synthesis unit 605 filters the two excitations obtained by excitation generation unit 602 with the decoded LPC coefficients obtained by LPC analysis unit 600, thereby obtaining two synthetic speech signals.
Comparing unit 606 analyzes the relation between the two synthetic speech signals obtained by LPC synthesis unit 605 and input speech 601, finds the optimum values (optimum gains) for the two synthetic speech signals, adds the synthetic speech signals after adjusting their powers with the optimum gains to obtain the total synthetic speech, and calculates the distance between this total synthetic speech and the input speech.
Further, for all the excitation samples of adaptive codebook 603 and noise codebook 604, the distance between input speech 601 and each of the many synthetic speech signals obtained by driving excitation generation unit 602 and LPC synthesis unit 605 is calculated, and the index of the excitation sample giving the smallest of the resulting distances is found.
The obtained optimum gains, the excitation sample indices and the two excitations corresponding to those indices are sent to parameter coding unit 607. Parameter coding unit 607 encodes the optimum gains to obtain a gain code, and sends the gain code, the LPC code and the excitation sample indices together to transmission line 608.
An actual excitation signal is generated from the two excitations corresponding to the gain code and the indices, and is stored in adaptive codebook 603 while the old excitation sample is discarded.
Moreover, LPC synthesis unit 605 usually also uses a perceptual weighting filter employing linear predictive coefficients, a high-frequency emphasis filter and long-term prediction coefficients (obtained by long-term prediction analysis of the input speech). The excitation search for the adaptive codebook and the noise codebook is generally carried out in intervals (called subframes) into which the analysis interval is further divided.
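The optimum gains found in comparing unit 606 can be sketched as the least-squares solution for two gains, one per synthetic speech. This is an illustrative reconstruction under the assumption that "optimum" means minimizing the squared error of the total synthetic speech; the function name is invented.

```python
import numpy as np

def optimum_gains_and_distance(x, s1, s2):
    """Solve min over (g1, g2) of ||x - g1*s1 - g2*s2||^2, then return the
    gains and the residual distance between input speech x and the total
    synthetic speech (illustrative sketch of comparing unit 606)."""
    A = np.stack([s1, s2], axis=1)            # L x 2 matrix of synthetic speeches
    g, *_ = np.linalg.lstsq(A, x, rcond=None)  # normal-equation solution
    total = A @ g                              # total synthetic speech
    return g, np.sum((x - total) ** 2)
```

In the actual coder this computation is repeated for every candidate excitation pair, and the pair with the smallest distance is retained.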
Hereinafter, the vector quantization of the LPC coefficients in LPC analysis unit 600 of this example is explained in detail.
Fig. 9 shows the functional block diagram of the vector quantization algorithm carried out in LPC analysis unit 600. The vector quantization block of Fig. 9 comprises target extraction unit 702, quantization unit 703, distortion calculation unit 704, comparing unit 705, decoded vector storage unit 707 and vector smoothing unit 708.
Target extraction unit 702 calculates the quantization target from input vector 701. The target extraction method is now described in detail.
The input vector in this example consists of two kinds of vectors: the parameter vector obtained by analyzing the frame to be coded, and the parameter vector obtained by analyzing a future frame in the same way. Target extraction unit 702 calculates the quantization target using these input vectors and the previous-frame decoded vector stored in decoded vector storage unit 707. Equation (8) shows an example of the calculation.
X(i) = {St(i) + p·(d(i) + St+1(i))/2} / (1 + p)   (8)
X(i): target vector
i: vector element number
St(i), St+1(i): input vectors
t: time (frame number)
p: weighting coefficient (fixed)
d(i): previous-frame decoded vector
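Equation (8) can be sketched directly with array arithmetic. The function name is invented, and the default p = 0.7 is chosen only because the text later reports good performance for 0.5 < p < 1.0.

```python
import numpy as np

def quantization_target(s_t, s_next, d_prev, p=0.7):
    """Equation (8): pull the target from the current parameter vector s_t
    toward the midpoint of the previous decoded vector d_prev and the
    future parameter vector s_next, with fixed weight p."""
    return (s_t + p * (d_prev + s_next) / 2.0) / (1.0 + p)
```

With p = 0 the target is simply s_t (ordinary VQ); as p grows the target approaches the midpoint.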
The idea behind the above target extraction method is as follows. In typical vector quantization, the parameter vector St(i) of the current frame is itself used as the target X(i), and matching is performed with equation (9).
En = Σ_{i=0}^{I−1} (X(i) − Cn(i))²   (9)
En: distance from the n-th code vector
X(i): quantization target
Cn(i): code vector
n: code vector number
i: vector element number
I: vector length
In such vector quantization, the coding distortion translates directly into degradation of sound quality. Even if countermeasures such as predictive vector quantization are taken, a certain amount of coding distortion is unavoidable in ultra-low bit rate coding, and this becomes a serious problem.
Therefore, in this example, attention is paid to the midpoint between the preceding and following decoded vectors as a direction in which errors are hard to perceive, and the decoded vector is guided toward that point, thereby achieving a perceptual improvement. This exploits the property that, when the interpolation characteristics of the parameter vectors are good, degradation is hard to hear as long as temporal continuity is maintained. This is explained below with reference to Fig. 10, which illustrates the vector space.
First, let the decoded vector of the previous frame be d(i) and the future parameter vector be St+1(i) (ideally the future decoded vector should be used, but it cannot be coded within the current frame, so the parameter vector is substituted). Then, although code vector Cn(i):(1) is closer to the parameter vector St(i) than code vector Cn(i):(2), Cn(i):(2) lies nearly on the line between d(i) and St+1(i) and is therefore less likely to produce audible degradation than Cn(i):(1). Using this property, if the target X(i) is moved from St(i) to a vector somewhat closer to the midpoint of d(i) and St+1(i), the decoded vector is guided in a direction in which the perceptual distortion is small.
In this example, this shift of the target is realized by introducing the evaluation measure of equation (10) below.
E = Σ_{i} (X(i) − St(i))² + p·Σ_{i} {X(i) − (d(i) + St+1(i))/2}²   (10)
X(i): quantization target vector
i: vector element number
St(i), St+1(i): input vectors
t: time (frame number)
p: weighting coefficient (fixed)
d(i): previous-frame decoded vector
The first term of equation (10) is the ordinary vector quantization criterion, and the second term is the perceptual weighting component. To quantize with this criterion, the expression is differentiated with respect to each X(i) and the derivative is set to 0, which yields equation (8).
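As a brief check of this step, writing m(i) for the midpoint (d(i) + St+1(i))/2, setting the derivative of (10) to zero gives:

```latex
\frac{\partial E}{\partial X(i)} = 2\,\bigl(X(i)-S_t(i)\bigr) + 2p\,\bigl(X(i)-m(i)\bigr) = 0
\;\Rightarrow\;
X(i) = \frac{S_t(i) + p\,m(i)}{1+p},
\qquad m(i)=\frac{d(i)+S_{t+1}(i)}{2}
```

which is exactly equation (8).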
The weighting coefficient p is a positive constant; when its value is 0 the quantization is identical to ordinary vector quantization, and when it is infinite the target lies exactly at the midpoint. If p is very large, the target departs greatly from the parameter vector St(i) of the current frame and perceptual clarity degrades. Listening experiments on decoded speech confirmed that good performance is obtained for 0.5 < p < 1.0.
Quantization unit 703 quantizes the quantization target obtained by target extraction unit 702 to find the code index, also finds the decoded vector, and sends both to distortion calculation unit 704.
In this example, predictive vector quantization is adopted as the quantization method. Predictive vector quantization is explained below.
Fig. 11 shows the functional block diagram of predictive vector quantization. Predictive vector quantization is an algorithm that predicts the target using vectors obtained by past coding and decoding (synthesized vectors) and vector-quantizes the prediction error.
Vector codebook 800, which stores a plurality of sample prediction error vectors (code vectors), is generated in advance. It is usually generated from many vectors obtained by analyzing many speech data, using the LBG algorithm (IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. COM-28, NO. 1, pp. 84-95, JANUARY 1980).
Prediction unit 802 predicts the quantization target vector 801. The prediction uses the past synthesized vectors stored in state storage unit 803, and the resulting prediction error vector is sent to distance calculation unit 804. Here, as the form of prediction, first-order prediction with a fixed coefficient is taken as an example. Equation (11) below shows the prediction error vector calculation for this form of prediction.
Y(i) = X(i) − β·D(i)   (11)
Y(i): prediction error vector
X(i): quantization target
β: prediction coefficient (scalar)
D(i): synthesized vector of the previous frame
i: vector element number
The value of the prediction coefficient β in the above equation is generally 0 < β < 1.
Distance calculation unit 804 calculates the distance between the prediction error vector obtained by prediction unit 802 and each code vector stored in vector codebook 800. Equation (12) below shows the distance calculation.
En = Σ_{i=0}^{I−1} (Y(i) − Cn(i))²   (12)
En: distance from the n-th code vector
Y(i): prediction error vector
Cn(i): code vector
n: code vector number
i: vector element number
I: vector length
Search unit 805 compares the distances to the code vectors and outputs, as vector code 806, the number of the code vector with the smallest distance. That is, it controls vector codebook 800 and distance calculation unit 804 so as to find, among all code vectors stored in vector codebook 800, the number of the code vector with the smallest distance, and takes that number as vector code 806.
Then, using the final vector code, the vector is decoded from the code vector obtained from vector codebook 800 and the past decoded vector stored in state storage unit 803, and the content of state storage unit 803 is updated with the obtained synthesized vector. The decoded vector can therefore be used for prediction at the next coding.
Decoding in the above prediction form (first-order prediction, fixed coefficient) is carried out with equation (13) below.
Z(i) = CN(i) + β·D(i)   (13)
Z(i): decoded vector (used as D(i) at the next coding)
N: vector code
CN(i): code vector
β: prediction coefficient (scalar)
D(i): synthesized vector of the previous frame
i: vector element number
In the decoder, on the other hand, the code vector is obtained according to the transmitted vector code and decoding is performed. The decoder is provided in advance with the same vector codebook and state storage unit as the coder, and decodes with the same algorithm as the decoding function of the search unit in the above coding algorithm. The above is the vector quantization carried out in quantization unit 703.
Distortion calculation unit 704 calculates the perceptually weighted coding distortion from the decoded vector obtained by quantization unit 703, input vector 701 and the previous-frame decoded vector stored in decoded vector storage unit 707. Equation (14) below shows the calculation.
Ew = Σ_{i} (V(i) − St(i))² + p·Σ_{i} {V(i) − (d(i) + St+1(i))/2}²   (14)
Ew: weighted coding distortion
St(i), St+1(i): input vectors
t: time (frame number)
i: vector element number
V(i): decoded vector
p: weighting coefficient (fixed)
d(i): previous-frame decoded vector
In equation (14), the weighting coefficient p is the same as the coefficient of the target calculation formula used in target extraction unit 702. The weighted coding distortion value, the decoded vector and the vector code are sent to comparing unit 705.
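Equation (14) can be sketched as follows; the function name is invented, and p = 0.7 is again only the midpoint of the range reported to perform well.

```python
import numpy as np

def weighted_distortion(v, s_t, s_next, d_prev, p=0.7):
    """Equation (14): ordinary squared error of the decoded vector v against
    the current parameter vector, plus a p-weighted penalty for deviating
    from the midpoint of the previous decoded and future parameter vectors."""
    mid = (d_prev + s_next) / 2.0
    return np.sum((v - s_t) ** 2) + p * np.sum((v - mid) ** 2)
```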
Comparing unit 705 sends the vector code sent from distortion calculation unit 704 to transmission line 608, and updates the content of decoded vector storage unit 707 with the decoded vector sent from distortion calculation unit 704.
According to the above example, the target vector is modified in target extraction unit 702 from St(i) to a vector somewhat closer to the midpoint of d(i) and St+1(i), so that a weighted search can be performed without the result being perceived as degradation.
So far, the case has been described in which the present invention is applied to low bit rate speech coding techniques used in portable telephones and the like; however, the present invention is applicable not only to speech coding but also to music coders, and to the vector quantization of parameters with good interpolation properties in image coders.
In the above algorithm, the LPC coding in the LPC analysis unit normally transforms the coefficients into a parameter vector that is convenient to code, such as the commonly used LSP (line spectrum pair), and performs vector quantization (VQ) using the Euclidean distance or a weighted Euclidean distance.
In this example, target extraction unit 702 is controlled by comparing unit 705: input vector 701 is sent to vector smoothing unit 708, and target extraction unit 702 receives the input vector modified by vector smoothing unit 708 and extracts the target again.
In this case, comparing unit 705 compares the weighted coding distortion value sent from distortion calculation unit 704 with a reference value prepared inside the comparing unit, and the processing divides into two cases according to the result.
When the distortion is below the reference value, the vector code sent from distortion calculation unit 704 is sent to transmission line 608, and the content of decoded vector storage unit 707 is updated with the decoded vector sent from distortion calculation unit 704; this update is carried out by overwriting the content of decoded vector storage unit 707 with the obtained decoded vector. Processing then moves on to the parameter coding of the next frame.
Otherwise, when the distortion is at or above the reference value, vector smoothing unit 708 is controlled to modify the input vector, and target extraction unit 702, quantization unit 703 and distortion calculation unit 704 are operated once more to perform recoding.
Coding is repeated until the distortion falls below the reference value in comparing unit 705. However, it may never fall below the reference value no matter how many times coding is repeated; therefore comparing unit 705 contains a counter that counts how many times the distortion is judged to be at or above the reference value, stops the repeated coding when the count reaches a certain number, and clears the counter when the distortion falls below the reference value or processing is finished.
Under the control of comparing unit 705, vector smoothing unit 708 modifies the current-frame parameter vector St(i), one of the input vectors, according to equation (15) below, using the input vector obtained from target extraction unit 702 and the previous-frame decoded vector obtained from decoded vector storage unit 707, and sends the modified input vector to target extraction unit 702.
S_t(i) ← (1-q)·S_t(i) + q·(d(i) + S_{t+1}(i))/2    (15)
Here q is the smoothing coefficient, d(i) is the previous-frame decoded vector, and S_{t+1}(i) is the next-frame parameter vector; q expresses how far the current-frame parameter vector is moved toward the midpoint of the previous-frame decoded vector and the next-frame parameter vector. Coding experiments confirmed that good performance is obtained with 0.2 < q < 0.4 and with an upper limit of 5 to 8 on the number of repetitions inside comparing unit 705.
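As a rough sketch in Python (function and variable names are illustrative, and the default q is merely one value inside the 0.2-0.4 range reported above), the per-element update of formula (15) is:

```python
def smooth_target(s_t, d_prev, s_next, q=0.3):
    """Formula (15): move the current-frame parameter vector s_t toward the
    midpoint of the previous-frame decoded vector d_prev and the
    next-frame parameter vector s_next, weighted by smoothing coefficient q."""
    return [(1.0 - q) * s + q * (d + n) / 2.0
            for s, d, n in zip(s_t, d_prev, s_next)]
```

Because the update pulls the quantization target toward the previous-frame decoded vector, re-running the predictive quantizer on the smoothed target tends to lower the weighted coding distortion, which is what the repeated-encoding loop exploits.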
Although quantization unit 703 in this example employs predictive vector quantization, the smoothing described above makes it likely that the weighted coding distortion obtained by distortion calculation unit 704 decreases, because smoothing moves the quantization target closer to the previous-frame decoded vector. Consequently, with the repeated encoding controlled by comparing unit 705, the probability that the distortion compared in comparing unit 705 falls below the reference value increases.
The decoder is provided in advance with a decoding unit corresponding to the quantization unit of the coder, and decodes using the codebook vector received over the transmission line.
This example was also applied to the LSP parameter quantization appearing in CELP-type coding (with predictive VQ as the quantization unit), and speech-signal encoding and decoding experiments were carried out. The results confirmed a definite improvement in perceived sound quality, and the objective value (S/N ratio) also improved. This is the effect of the repeated encoding with vector smoothing, which suppresses the predictive VQ coding distortion even where the spectrum changes abruptly. A drawback of conventional predictive VQ is that, because prediction is based on past synthesized vectors, the spectral distortion actually becomes larger in portions where the spectrum changes abruptly, such as the beginnings of speech segments. With this example, however, smoothing is applied while the distortion is large, until the distortion becomes small; although the target then deviates somewhat from the actual parameter vector, the coding distortion decreases, and the overall degradation when the speech signal is decoded is reduced. Thus, according to this example, not only the perceived sound quality but also the objective value improves.
In this example, the characteristics of the comparing unit and the vector smoothing unit can be exploited so that, when the vector quantization distortion is large, the direction of degradation is steered toward one that is perceptually less noticeable; and when the quantization unit employs predictive vector quantization, performing smoothing plus repeated encoding until the coding distortion becomes small also improves the objective value.
So far, the application of the present invention to low-bit-rate speech coding techniques such as those used in portable telephones has been explained, but the present invention is applicable not only to speech-signal coding but also to the quantization of parameter vectors with good interpolation properties in music coders and image coders.
(the 6th example)
The CELP-type speech coder of the sixth example of the present invention is described below. This example is identical in structure to the fifth example above, except that the quantization algorithm of the quantization unit employs multistage predictive vector quantization. That is, the noise codebook employs the excitation vector generator of the first example above. The quantization algorithm of the quantization unit is now described in detail.
Figure 12 shows the functional block diagram of the quantization unit. In multistage vector quantization, the target vector is first quantized; the resulting codeword is decoded with its codebook, the difference between the decoded vector and the original target (called the coding distortion vector) is computed, and this coding distortion vector is quantized in turn.
Vector codebook 899 and vector codebook 900, each storing a plurality of prediction error vector samples (code vectors), are generated and stored in advance. These codebooks are generated from a large number of training prediction error vectors using the same algorithm as the typical codebook generation method for multistage vector quantization. That is, the codebooks are usually generated with the LBG algorithm (IEEE Transactions on Communications, Vol. COM-28, No. 1, pp. 84-95, January 1980) from a large number of vectors obtained by analyzing a large amount of speech data. The training set for vector codebook 899 is the set of many quantization targets, whereas the training set for vector codebook 900 is the set of the coding distortion vectors produced when those quantization targets are encoded with codebook 899.
First, prediction unit 902 performs prediction on quantization target vector 901. The prediction uses the past synthesized vectors stored in state storage unit 903, and the resulting prediction error vector is delivered to distance calculation unit 904 and distance calculation unit 905.
In this example, prediction with a fixed coefficient and a prediction order of 1 is taken as the form of prediction. Formula (16) below shows the computation of the prediction error vector for this prediction.
Y(i) = X(i) - β·D(i)    (16)
Y(i): prediction error vector
X(i): quantization target
β: prediction coefficient (scalar)
D(i): synthesized vector of the previous frame
i: vector element index
In the above formula, the value of the prediction coefficient β is generally set to 0 < β < 1.
Distance calculation unit 904 calculates the distance between the prediction error vector obtained by prediction unit 902 and each code vector A stored in vector codebook 899. Formula (17) shows the distance calculation.
En = Σ_{i=0}^{I} (Y(i) - C1n(i))^2    (17)
En: distance from the n-th code vector A
Y(i): prediction error vector
C1n(i): code vector A
n: number of code vector A
i: vector element index
I: vector length
Search unit 906 compares the distances to the code vectors A and takes the number of the code vector A with minimum distance as the code of code vector A. That is, it controls vector codebook 899 and distance calculation unit 904 to find the number of the code vector A with minimum distance among all code vectors stored in vector codebook 899, and adopts that number as the code of code vector A. It then delivers the code of code vector A, together with the decoded vector A obtained by referring to vector codebook 899 with that code, to distance calculation unit 905, and also delivers the code of code vector A to the transmission line and to search unit 907.
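A minimal sketch of this first-stage processing, combining formulas (16) and (17) with an exhaustive codebook search (function names and the fixed β are illustrative, not from the patent):

```python
def search_stage1(x, d_prev, codebook_a, beta=0.5):
    """Predict (formula 16), then search codebook A exhaustively for the
    code vector nearest to the prediction error (formula 17).
    Returns (chosen index, prediction error Y, decoded vector A)."""
    y = [xi - beta * di for xi, di in zip(x, d_prev)]   # Y = X - beta*D
    def dist(cv):                                       # squared distance, formula (17)
        return sum((yi - ci) ** 2 for yi, ci in zip(y, cv))
    n = min(range(len(codebook_a)), key=lambda k: dist(codebook_a[k]))
    return n, y, codebook_a[n]
```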
Distance calculation unit 905 obtains the coding distortion vector from the prediction error vector and the decoded vector A received from search unit 906, and also refers to the code of code vector A received from search unit 906 to obtain an amplitude from amplitude storage unit 908. It then calculates the distance between the coding distortion vector and each code vector B stored in vector codebook 900 multiplied by that amplitude, and delivers the distance to search unit 907. Formula (18) shows the distance calculation.
Z(i) = Y(i) - C1N(i)
Em = Σ_{i=0}^{I} (Z(i) - aN·C2m(i))^2    (18)
Z(i): coding distortion vector
Y(i): prediction error vector
C1N(i): decoded vector A
N: code of code vector A
Em: distance from the m-th code vector B
aN: amplitude corresponding to the code of code vector A
C2m(i): code vector B
m: number of code vector B
i: vector element index
I: vector length
Search unit 907 compares the distances to the code vectors B and takes the number of the code vector B with minimum distance as the code of code vector B. That is, it controls vector codebook 900 and distance calculation unit 905 to find the number of the code vector B with minimum distance among all code vectors B stored in vector codebook 900, and adopts that number as the code of code vector B. The codes of code vector A and code vector B are then combined to form vector code 909.
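The second-stage search of formula (18) can be sketched analogously (names illustrative; `a_n` stands for the amplitude looked up for the chosen first-stage code):

```python
def search_stage2(y, decoded_a, codebook_b, a_n):
    """Formula (18): quantize the coding distortion Z = Y - C1N using
    code vectors B scaled by the first-stage-dependent amplitude a_n.
    Returns (chosen index, coding distortion vector Z)."""
    z = [yi - ci for yi, ci in zip(y, decoded_a)]
    def dist(cv):
        return sum((zi - a_n * ci) ** 2 for zi, ci in zip(z, cv))
    m = min(range(len(codebook_b)), key=lambda k: dist(codebook_b[k]))
    return m, z
```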
Search unit 907 also decodes the vector from the codes of code vectors A and B, using the decoded vectors A and B obtained from vector codebooks 899 and 900, the amplitude obtained from amplitude storage unit 908, and the past decoded vector stored in state storage unit 903, and updates the content of state storage unit 903 with the resulting synthesized vector. (The vector decoded here is therefore used for prediction in the next encoding.) The decoding in this example (prediction order 1, fixed coefficient) is performed with formula (19) below.
Z(i) = C1N(i) + aN·C2M(i) + β·D(i)    (19)
Z(i): decoded vector (used as D(i) in the next encoding)
N: code of code vector A
M: code of code vector B
C1N(i): decoded vector A
C2M(i): decoded vector B
aN: amplitude corresponding to the code of code vector A
β: prediction coefficient (scalar)
D(i): synthesized vector of the previous frame
i: vector element index
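The decoding of formula (19), whose output also becomes the prediction state D(i) for the next frame, can be sketched as (names and the fixed β are illustrative):

```python
def decode_stage(c1_n, c2_m, a_n, d_prev, beta=0.5):
    """Formula (19): Z = C1N + aN*C2M + beta*D.  The returned vector is
    stored in the state storage unit and used as D for the next frame."""
    return [c1 + a_n * c2 + beta * d
            for c1, c2, d in zip(c1_n, c2_m, d_prev)]
```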
The amplitudes stored in amplitude storage unit 908 are set in advance, as follows. A large amount of speech data is encoded, and for each code of the first-stage code vector the total coding distortion of formula (20) is computed; the amplitudes are then trained so as to minimize this distortion.
EN = Σ_t Σ_{i=0}^{I} (Y_t(i) - C1N(i) - aN·C2m_t(i))^2    (20)
EN: total coding distortion when the code of code vector A is N
N: code of code vector A
t: times at which the code of code vector A is N
Y_t(i): prediction error vector at time t
C1N(i): decoded vector A
aN: amplitude corresponding to the code of code vector A
C2m_t(i): code vector B at time t
m_t: number of code vector B at time t
i: vector element index
I: vector length
That is, after encoding, each amplitude is updated by setting the derivative of the distortion of formula (20) with respect to that amplitude to zero; amplitude training is performed in this way. The encoding and training steps are then repeated, yielding the optimal amplitudes.
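Setting the derivative of formula (20) with respect to aN to zero yields a closed-form least-squares update. The closed form below is our own derivation (the text only states the zero-derivative condition), and the names are illustrative:

```python
def update_amplitude(z_frames, b_frames):
    """aN = sum_t <Z_t, C2_mt> / sum_t <C2_mt, C2_mt>, where Z_t = Y_t - C1N
    and C2_mt is the second-stage code vector chosen at time t, with the
    sums restricted to frames whose first-stage code was N."""
    num = sum(z * b for zt, bt in zip(z_frames, b_frames)
                    for z, b in zip(zt, bt))
    den = sum(b * b for bt in b_frames for b in bt)
    return num / den
```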
On the other hand, the decoder obtains the code vectors from the transmitted vector code and decodes. The decoder has the same vector codebooks (corresponding to code vectors A and B), amplitude storage unit, and state storage unit as the coder, and decodes with the same algorithm as the decoding function of the search unit (for code vector B) in the encoding algorithm described above.
Thus, in this example, the characteristics of the amplitude storage unit and the distance calculation unit allow the second-stage code vectors to be adapted to the first stage, so that the coding distortion can be reduced with a small amount of computation.
So far, the application of the present invention to low-bit-rate speech coding techniques such as those used in portable telephones has been explained, but the present invention is applicable not only to speech-signal coding but also to the quantization of parameter vectors with good interpolation properties in music coders, image coders, and the like.
(the 7th example)
The CELP-type speech coder of the seventh example of the present invention is described below. This form of the present invention is an example of a coder that can reduce the amount of computation in code search when an ACELP-type noise codebook is employed.
Figure 13 shows the functional block diagram of the CELP-type speech coder of this example. In this CELP-type speech coder, filter coefficient analysis unit 1002 performs linear predictive analysis on input speech signal 1001 to obtain synthesis filter coefficients, and outputs the resulting synthesis filter coefficients to filter coefficient quantization unit 1003. Filter coefficient quantization unit 1003 quantizes the input synthesis filter coefficients and outputs them to synthesis filter 1004.
Synthesis filter 1004 is constructed from the filter coefficients supplied by filter coefficient quantization unit 1003 and is driven by excitation signal 1011. Excitation signal 1011 is obtained by multiplying adaptive vector 1006 output from adaptive codebook 1005 by adaptive gain 1007, multiplying noise vector 1009 output from noise codebook 1008 by noise gain 1010, and adding the two results.
Here, adaptive codebook 1005 is a codebook storing a plurality of adaptive vectors obtained by extracting, at each pitch period, the past excitation signal to the synthesis filter, and noise codebook 1008 is a codebook storing a plurality of noise vectors. Noise codebook 1008 may employ the excitation vector generator of the first example above.
Distortion calculation unit 1013 calculates the distortion between synthesized speech signal 1012, which is the output of synthesis filter 1004 driven by excitation signal 1011, and input speech signal 1001, and performs code search. Code search is the processing of determining the number of the adaptive vector 1006 and the number of the noise vector 1009 that minimize the distortion calculated by distortion calculation unit 1013, while also computing the optimal values of the adaptive gain 1007 and noise gain 1010 by which the respective output vectors are multiplied.
Coding output unit 1014 encodes the quantized filter coefficients obtained from filter coefficient quantization unit 1003, the numbers of the adaptive vector 1006 and noise vector 1009 selected in distortion calculation unit 1013, and the adaptive gain 1007 and noise gain 1010 multiplied onto them, and outputs the result. The information output from coding output unit 1014 is transmitted or stored.
In the code search performed by distortion calculation unit 1013, the adaptive codebook component of the excitation signal is usually searched first, and then the noise codebook component of the excitation signal is searched.
The search for the noise component described above uses the orthogonal search explained below.
In orthogonal search, the noise vector c that maximizes the search reference value Eort (= Nort/Dort) of formula (21) is determined.
Eort (= Nort/Dort) = [{(p^tH^tHp)x - (x^tHp)Hp}^t Hc]^2 / [(c^tH^tHc)(p^tH^tHp) - (p^tH^tHc)^2]    (21)
Nort: numerator term of Eort
Dort: denominator term of Eort
p: the adaptive vector already determined
H: synthesis filter coefficient matrix
H^t: transpose of H
x: target signal (the input speech signal minus the zero-input response of the synthesis filter)
c: noise vector
Orthogonal search is a search method in which the candidate noise vectors are each orthogonalized against the adaptive vector determined in advance, and the one noise vector giving minimum distortion is selected from the orthogonalized noise vectors. Compared with non-orthogonal search, this method improves the accuracy with which the noise vector is determined, and can therefore improve the quality of the synthesized speech signal.
In the ACELP scheme, a noise vector consists of only a small number of pulses with polarity. Exploiting this, the numerator term (Nort) of the search reference value of formula (21) can be transformed into formula (22), which reduces the computation of the numerator term.
Nort = {a_0·ψ(l_0) + a_1·ψ(l_1) + … + a_{n-1}·ψ(l_{n-1})}^2    (22)
a_i: polarity of the i-th pulse (+1/-1)
l_i: position of the i-th pulse
n: number of pulses
ψ: {(p^tH^tHp)x - (x^tHp)Hp}^t H
If the value of ψ in formula (22) is computed in advance as preprocessing and expanded into an array, the numerator term of formula (21) can be calculated simply by adding, with their signs, the n elements of the array ψ corresponding to the pulse positions and squaring the result.
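With ψ precomputed as an array, the numerator of formula (21) reduces to the signed sum of formula (22). A sketch (names illustrative):

```python
def nort(psi, pulses):
    """Formula (22): numerator term = (sum_i a_i * psi[l_i]) ** 2, where
    `pulses` is a list of (polarity, position) pairs of the noise vector."""
    return sum(a * psi[l] for a, l in pulses) ** 2
```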
The distortion calculation unit 1013 that reduces the amount of computation of the denominator term is described in detail below.
Figure 14 shows the functional block diagram of distortion calculation unit 1013. The speech coder of this example has the structure of Figure 13, with adaptive vector 1006 and noise vector 1009 input to distortion calculation unit 1013.
In Figure 14, the following three kinds of processing are performed as preprocessing before the distortion is calculated for the input noise vectors.
(1) Calculation of the first matrix (N): the power (p^tH^tHp) of the vector obtained by passing the adaptive vector through the synthesis filter and the autocorrelation matrix (H^tH) of the synthesis filter coefficients are computed, and each element of the autocorrelation matrix is multiplied by the power, yielding matrix N (= (p^tH^tHp)·H^tH).
(2) Calculation of the second matrix (M): the vector obtained by passing the adaptive vector through the synthesis filter is filtered again in time-reversed order, and matrix M is computed as the outer product of the resulting signal (whose transpose is p^tH^tH) with itself.
(3) Generation of the third matrix (L): matrix M computed in (2) is subtracted from matrix N computed in (1), generating matrix L.
The denominator term (Dort) of formula (21) can then be expanded as formula (23).
Dort = (c^tH^tHc)(p^tH^tHp) - (p^tH^tHc)^2    (23)
     = c^tNc - (r^tc)^2
     = c^tNc - (r^tc)^t(r^tc)
     = c^tNc - c^t r r^t c
     = c^tNc - c^tMc
     = c^t(N - M)c
     = c^tLc
N: (p^tH^tHp)·H^tH ← preprocessing (1)
r: H^tHp (the transpose of p^tH^tH) ← preprocessing (2)
M: r·r^t ← preprocessing (2)
L: N - M ← preprocessing (3)
c: noise vector
Thus, by replacing the computation of the denominator term (Dort) of the search reference value (Eort) of formula (21) with formula (23), the noise codebook component can be determined with less computation.
The denominator term is calculated using the matrix L obtained by the above preprocessing and noise vector 1009.
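The three preprocessing steps can be sketched in plain Python (H is the synthesis filter matrix as a list of rows, p the determined adaptive vector; names are illustrative and no optimization, such as exploiting the Toeplitz structure of H^tH, is attempted):

```python
def precompute_L(H, p):
    """Preprocessing (1)-(3): N = (p^T H^T H p) * (H^T H), r = H^T H p,
    M = r r^T, and L = N - M, so that formula (23) reads Dort = c^T L c."""
    rows, dim = len(H), len(p)
    Hp = [sum(H[k][j] * p[j] for j in range(dim)) for k in range(rows)]
    power = sum(v * v for v in Hp)                          # p^T H^T H p
    r = [sum(H[k][i] * Hp[k] for k in range(rows)) for i in range(dim)]
    HtH = [[sum(H[k][i] * H[k][j] for k in range(rows)) for j in range(dim)]
           for i in range(dim)]                             # autocorrelation matrix
    return [[power * HtH[i][j] - r[i] * r[j] for j in range(dim)]
            for i in range(dim)]                            # L = N - M
```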
Here, for simplicity, the denominator computation method based on formula (23) is explained for the case where the sampling frequency of the input speech signal is 8000 Hz, the unit time width (frame length) of the algebraic-structure noise codebook search is 10 ms, and each noise vector is generated in principle as a combination of five unit pulses of polarity +1/-1 per 10 ms.
Further, assuming that the five unit pulses constituting a noise vector are each formed by selecting one position from the candidate positions defined for groups 0 through 4 shown in Table 2, a candidate noise vector can be written as formula (24).
c = a_0·δ(k-l_0) + a_1·δ(k-l_1) + … + a_4·δ(k-l_4)    (24)
(k = 0, 1, …, 79)
a_i: polarity (+1/-1) of the pulse in group i
l_i: position of the pulse in group i
Table 2
Group number | Polarity | Candidate pulse positions
0 | ±1 | 0, 10, 20, 30, …, 60, 70
1 | ±1 | 2, 12, 22, 32, …, 62, 72
2 | ±1 | 6, 16, 26, 36, …, 66, 76
3 | ±1 | 4, 14, 24, 34, …, 64, 74
4 | ±1 | 8, 18, 28, 38, …, 68, 78
In this case, the denominator term (Dort) of formula (23) can be computed with formula (25) below.
Dort = Σ_{i=0}^{4} Σ_{j=0}^{4} a_i·a_j·L(l_i, l_j)    (25)
a_i: polarity of the pulse in group i
l_i: position of the pulse in group i
L(l_i, l_j): element at row l_i, column l_j of matrix L
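Given the precomputed matrix L, the denominator for any candidate pulse combination is just a 5x5 sum of table lookups, as formula (25) states. A sketch with illustrative names:

```python
def dort(L, pulses):
    """Formula (25): Dort = sum_i sum_j a_i * a_j * L[l_i][l_j] over the
    pulse (polarity, position) pairs, instead of filtering and correlating
    the full noise vector for every candidate."""
    return sum(ai * aj * L[li][lj]
               for ai, li in pulses
               for aj, lj in pulses)
```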
From the above, when an ACELP-type noise codebook is employed, the numerator term (Nort) of the code search reference value of formula (21) can be calculated with formula (22), and the denominator term (Dort) with formula (25). Therefore, rather than evaluating the reference value of formula (21) directly, its numerator and denominator terms are calculated separately with formulas (22) and (25), which substantially reduces the amount of computation for code search.
The explanation of this example above assumed a noise codebook search without preselection. However, the present invention can also be applied, with the same effect, when preselection is used: noise vectors giving large values of formula (22) are preselected, formula (21) is evaluated only for the plurality of candidates retained by the preselection, and the noise vector maximizing that value is selected.

Claims (10)

1. A diffusion pulse vector generating apparatus for a speech coder/decoder, said diffusion pulse vector generating apparatus comprising:
a pulse vector generation unit for generating a pulse vector having unit pulses with polarity;
a dispersal pattern storage/selection unit for storing a plurality of fixed dispersal patterns, preselecting dispersal patterns from said plurality of fixed dispersal patterns, and selecting one dispersal pattern from the dispersal patterns obtained by said preselection; and
a pulse vector diffusion unit for performing a convolution operation on said pulse vector and the selected dispersal pattern to generate a diffusion pulse vector.
2. The diffusion pulse vector generating apparatus as claimed in claim 1, wherein the dispersal pattern is selected with reference to an adaptive codebook gain.
3. The diffusion pulse vector generating apparatus as claimed in claim 1, wherein said pulse vector is generated according to an algebraic codebook table.
4. The diffusion pulse vector generating apparatus as claimed in any one of claims 1-3, wherein said plurality of fixed dispersal patterns, stored in said dispersal pattern memory, are divided into a plurality of kinds according to the characteristics of each dispersal pattern among said plurality of fixed dispersal patterns.
5. The diffusion pulse vector generating apparatus as claimed in claim 4, wherein said plurality of kinds include a first kind comprising pulse-like dispersal patterns and a second kind comprising random-shaped dispersal patterns.
6. A method for generating a diffusion pulse vector in a speech coder/decoder, said method comprising the steps of:
a pulse vector generating step of generating a pulse vector having unit pulses with polarity;
a dispersal pattern reading step of reading a plurality of fixed dispersal patterns from a memory;
a dispersal pattern selecting step of preselecting dispersal patterns from said plurality of fixed dispersal patterns read out, and selecting one dispersal pattern from the dispersal patterns obtained by said preselection; and
a pulse vector diffusing step of performing a convolution operation on said pulse vector and the selected dispersal pattern to generate a diffusion pulse vector.
7. The method for generating a diffusion pulse vector as claimed in claim 6, wherein in said dispersal pattern selecting step, the dispersal pattern on which the convolution operation with said pulse vector is to be performed is selected with reference to an adaptive codebook gain.
8. the method for generation diffusion pulse vector as claimed in claim 6, said pulse vector generates according to this table of algebraic code.
9. The method for generating a diffusion pulse vector as claimed in any one of claims 6-8, wherein said plurality of fixed dispersal patterns are divided into a plurality of kinds according to the characteristics of each dispersal pattern among said plurality of fixed dispersal patterns.
10. The method for generating a diffusion pulse vector as claimed in claim 9, wherein said plurality of kinds include a first kind comprising pulse-like dispersal patterns and a second kind comprising random-shaped dispersal patterns.

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP28941297A JP3235543B2 (en) 1997-10-22 1997-10-22 Audio encoding / decoding device
JP289412/97 1997-10-22
JP295130/97 1997-10-28
JP29513097A JP3175667B2 (en) 1997-10-28 1997-10-28 Vector quantization method
JP08571798A JP3174756B2 (en) 1998-03-31 1998-03-31 Sound source vector generating apparatus and sound source vector generating method
JP85717/98 1998-03-31

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB988015560A Division CN100367347C (en) 1997-02-13 1998-10-22 Sound encoder and sound decoder

Publications (2)

Publication Number Publication Date
CN101174413A CN101174413A (en) 2008-05-07
CN101174413B true CN101174413B (en) 2012-04-18

Family

ID=17742914

Family Applications (8)

Application Number Title Priority Date Filing Date
CN2007103073150A Expired - Lifetime CN101202044B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2006100048275A Expired - Lifetime CN1808569B (en) 1997-10-22 1998-10-22 Voice encoding device,orthogonalization search method, and celp based speech coding method
CN2007103073165A Expired - Lifetime CN101202045B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2007101529987A Expired - Lifetime CN101174413B (en) 1997-10-22 1998-10-22 Sound signal encoder and sound signal decoder
CN2007103073184A Expired - Lifetime CN101202047B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2007101529972A Expired - Lifetime CN101174412B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN200710307317XA Expired - Lifetime CN101202046B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2007103073381A Expired - Lifetime CN101221764B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN2007103073150A Expired - Lifetime CN101202044B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2006100048275A Expired - Lifetime CN1808569B (en) 1997-10-22 1998-10-22 Voice encoding device,orthogonalization search method, and celp based speech coding method
CN2007103073165A Expired - Lifetime CN101202045B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder

Family Applications After (4)

Application Number Title Priority Date Filing Date
CN2007103073184A Expired - Lifetime CN101202047B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2007101529972A Expired - Lifetime CN101174412B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN200710307317XA Expired - Lifetime CN101202046B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder
CN2007103073381A Expired - Lifetime CN101221764B (en) 1997-10-22 1998-10-22 Sound encoder and sound decoder

Country Status (2)

Country Link
JP (1) JP3235543B2 (en)
CN (8) CN101202044B (en)


Also Published As

Publication number Publication date
CN101202047B (en) 2012-04-25
CN101202044A (en) 2008-06-18
CN101202046B (en) 2012-06-20
JPH11126096A (en) 1999-05-11
CN1808569B (en) 2010-05-26
CN101221764B (en) 2013-02-13
CN101174412A (en) 2008-05-07
JP3235543B2 (en) 2001-12-04
CN101202045A (en) 2008-06-18
CN101174413A (en) 2008-05-07
CN101202046A (en) 2008-06-18
CN101202047A (en) 2008-06-18
CN101221764A (en) 2008-07-16
CN1808569A (en) 2006-07-26
CN101202045B (en) 2011-08-10
CN101174412B (en) 2011-06-08
CN101202044B (en) 2012-07-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: INTELLECTUAL PROPERTY BRIDGE NO. 1 CO., LTD.

Free format text: FORMER OWNER: MATSUSHITA ELECTRIC INDUSTRIAL CO, LTD.

Effective date: 20140605

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20140605

Address after: Tokyo, Japan

Patentee after: GODO KAISHA IP BRIDGE 1

Address before: Japan's Osaka kamato City

Patentee before: Matsushita Electric Industrial Co., Ltd.

CX01 Expiry of patent term

Granted publication date: 20120418