WO2005069277A1 - Audio signal encoding method, audio signal decoding method, transmitter, receiver, and wireless microphone system - Google Patents
- Publication number
- WO2005069277A1 (PCT/JP2005/000510)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- audio signal
- vector
- decoding
- subband signals
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0204—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/083—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being an excitation gain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L2019/0001—Codebooks
- G10L2019/0003—Backward prediction of gain
Definitions
- Audio signal encoding method, audio signal decoding method, transmitter, receiver, and wireless microphone system
- The present invention relates to an audio signal encoding method for encoding an audio signal with low delay; an audio signal decoding method for decoding an audio signal encoded by this encoding method back into the original audio signal; a transmitter that encodes the audio signal based on the encoding method and transmits it; a receiver that receives the encoded audio signal and decodes it based on the decoding method; and a wireless microphone system including the transmitter and the receiver.
- As an encoding method for encoding a speech signal with low delay, and a decoding method for decoding the encoded speech signal back into the original speech signal, a subband adaptive differential pulse code modulation encoding method (hereinafter simply referred to as a subband ADPCM encoding method) and a subband adaptive differential pulse code modulation decoding method (hereinafter simply referred to as a subband ADPCM decoding method) are known.
- A conventional system has a transmitter with an encoding unit 204 that encodes a speech signal based on the conventional subband ADPCM encoding method, and a receiver with a decoding unit 215 that decodes the encoded speech signal.
- As shown in FIG. 12, the encoding unit 204 of the transmitter includes a subband division filter bank 204a that divides the audio signal into four bands and generates four subband signals by downsampling at a decimation rate corresponding to the number of divisions; four ADPCM quantizers 220a-220d that encode the four subband signals according to the subband ADPCM encoding method; and a multiplexer 204c that multiplexes the four encoded subband signals into the bitstream.
- The decoding unit 215 of the receiver includes a demultiplexer 215a that extracts the four encoded subband signals from the bitstream; four ADPCM inverse quantizers 230a to 230d that decode the four encoded subband signals based on the conventional subband ADPCM decoding method; and a subband synthesis filter bank 215c that upsamples the four decoded subband signals at an interpolation rate corresponding to the number of divisions and synthesizes the audio signal.
- In operation, the speech signal is divided into four bands and downsampled at a decimation rate corresponding to the number of divisions, so that four subband signals are generated by the subband division filter bank 204a.
- The four subband signals generated by the subband division filter bank 204a are then encoded by the four ADPCM quantizers 220a-220d based on the conventional ADPCM encoding method.
- The four encoded subband signals are then incorporated into the bitstream by the multiplexer 204c.
- At the receiver, the demultiplexer 215a extracts the four encoded subband signals from the bitstream.
- The four encoded subband signals are decoded by the four ADPCM inverse quantizers 230a to 230d.
- Finally, the four subband signals are upsampled at an interpolation rate corresponding to the number of divisions and synthesized into an audio signal by the subband synthesis filter bank 215c (for example, see Patent Document 1).
- Patent Document 1: JP 2002-330075 A
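The conventional subband pipeline described above (divide into bands, downsample, encode each band, then upsample and synthesize) can be sketched as follows. This is a deliberately minimal illustration using a trivial two-band polyphase split with pass-through quantizers, not the patent's four-band ADPCM filter bank; all function names are invented for the sketch.

```python
# Illustrative sketch only: a trivial two-band "lazy" polyphase split
# stands in for the subband division filter bank, and identity
# quantizers stand in for the per-band ADPCM stages.
def subband_split(x):
    """Divide into two 'bands' by polyphase decimation (downsample by 2)."""
    return x[0::2], x[1::2]

def subband_synthesize(even, odd):
    """Upsample each band by interleaving and sum (perfect reconstruction here)."""
    out = [0] * (len(even) + len(odd))
    out[0::2] = even
    out[1::2] = odd
    return out

def codec_roundtrip(x, quantize=lambda band: band):
    """Split, run each band through a quantizer slot, then resynthesize."""
    low, high = subband_split(x)
    return subband_synthesize(quantize(low), quantize(high))
```

With the pass-through quantizer the roundtrip reconstructs the input exactly; a real codec would insert lossy per-band encoders in the `quantize` slot.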
- The present invention has been made to solve these conventional problems. Its object is to provide an audio signal encoding method that can compress a wideband audio signal to about 1/7 to 1/8 of the original with relatively low delay and without degrading sound quality; an audio signal decoding method capable of decoding an audio signal encoded by this encoding method back into the original audio signal with relatively low delay; a transmitter capable of encoding an audio signal based on the encoding method and transmitting it; a receiver capable of receiving the encoded audio signal and decoding it into the original audio signal based on the decoding method; and a wireless microphone system including the transmitter and the receiver.
- The speech signal encoding method of the present invention includes a generation step of dividing a speech signal into a plurality of subbands and downsampling according to the number of divisions to generate a plurality of subband signals, and a quantization step of vector-quantizing the plurality of subband signals using an analysis-by-synthesis method in order to encode them into vector indexes; in the quantization step, the linear prediction coefficients are obtained from past decoded signals by backward adaptation.
- With this configuration, the quantization bit allocation for each subband is made non-uniform according to the frequency energy distribution of the speech signal to be encoded and to auditory characteristics, and backward adaptation keeps the delay low while vector quantization is performed, so a low-delay speech codec with good compression efficiency can be realized.
- In the speech signal encoding method of the present invention, when the plurality of subband signals are vector-quantized in the quantization step, a codebook divided into at least two parts is used, and an excitation vector is generated using the sum of the at least two codebooks.
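The split-codebook idea can be illustrated roughly as follows: the excitation is the sum of one vector from each of two small codebooks, so N×M candidate excitations are searchable while only N+M vectors are stored. The codebook contents and the exhaustive search below are illustrative assumptions, not the patent's actual codebooks or search procedure.

```python
# Hedged sketch of a split excitation codebook. Contents are arbitrary
# illustration values, not trained codebooks.
CODEBOOK_A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
CODEBOOK_B = [[0.5, -0.5], [-0.5, 0.5]]

def search_split_codebook(target):
    """Return (index_a, index_b) whose vector sum best matches target (min squared error)."""
    best = None
    for ia, va in enumerate(CODEBOOK_A):
        for ib, vb in enumerate(CODEBOOK_B):
            err = sum((t - (a + b)) ** 2 for t, a, b in zip(target, va, vb))
            if best is None or err < best[0]:
                best = (err, ia, ib)
    return best[1], best[2]

def excitation_from_indexes(ia, ib):
    """Decoder side: rebuild the excitation as the sum of the two codebook vectors."""
    return [a + b for a, b in zip(CODEBOOK_A[ia], CODEBOOK_B[ib])]
```

Here 3 + 2 stored vectors give 3 × 2 = 6 candidate excitations; with realistic sizes the memory saving is much larger.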
- In the speech signal encoding method of the present invention, in the quantization step, a difference signal indicating the difference between the predicted value of the excitation signal gain obtained by backward adaptation and the true excitation signal gain is generated, and the difference signal is adaptively scalar-quantized.
- With this configuration, the backward-predicted gain and the difference gain can be quantized adaptively and accurately.
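A minimal sketch of this gain handling, under assumed details (a one-tap predictor on the previous decoded gain and a Jayant-style step-size rule, neither of which is specified here): the encoder quantizes only the gain residual, and the decoder tracks it exactly because both sides use only past decoded values.

```python
# Assumption-laden sketch of backward gain adaptation with adaptive
# scalar quantization of the gain residual.
def quantize_gains(gains, step=0.5):
    """Encoder: predict each gain from the previous *decoded* gain,
    quantize the difference to an integer level, adapt the step size."""
    levels, decoded_prev = [], 0.0
    for g in gains:
        predicted = decoded_prev              # backward prediction: decoded past only
        level = round((g - predicted) / step)
        levels.append(level)
        decoded_prev = predicted + level * step
        step *= 1.25 if abs(level) > 1 else 0.8   # Jayant-style step adaptation
        step = min(max(step, 0.05), 4.0)
    return levels

def dequantize_gains(levels, step=0.5):
    """Decoder: same predictor and same step rule, so it stays in sync."""
    gains, decoded_prev = [], 0.0
    for level in levels:
        g = decoded_prev + level * step
        gains.append(g)
        decoded_prev = g
        step *= 1.25 if abs(level) > 1 else 0.8
        step = min(max(step, 0.05), 4.0)
    return gains
```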
- The audio signal decoding method of the present invention decodes an audio signal from an encoded audio signal produced by an audio signal encoding method that includes a generation step of dividing the audio signal into a plurality of subbands and downsampling according to the number of divisions to generate a plurality of subband signals, and a quantization step of vector-quantizing the plurality of subband signals into vector indexes using an analysis-by-synthesis method, the linear prediction coefficients being obtained from past decoded signals by backward adaptation.
- The decoding method includes an inverse quantization step of inversely quantizing the vector indexes in order to decode them into the plurality of subband signals, and a synthesis step of upsampling the plurality of subband signals and performing band synthesis.
- In the inverse quantization step, the linear prediction coefficients are obtained from past decoded signals by backward adaptation.
- The speech signal decoding method of the present invention decodes an audio signal from an encoded audio signal produced by an encoding method that, when vector-quantizing the plurality of subband signals in the quantization step, uses a codebook divided into at least two parts and generates an excitation vector using the sum of the at least two codebooks; in the inverse quantization step, an excitation vector is generated using the sum of the vectors corresponding to two or more vector indexes.
- With this configuration, decoded speech can be obtained from the vector index data.
- The speech signal decoding method of the present invention decodes an encoded speech signal produced by an encoding method that, in the quantization step, generates a difference signal indicating the difference between the predicted value of the excitation signal gain obtained by backward adaptation and the true excitation signal gain, and adaptively scalar-quantizes the difference signal.
- The transmitter of the present invention includes an encoding unit that generates an encoded speech signal from a speech signal based on the speech signal encoding method comprising a generation step of dividing the audio signal into a plurality of subbands and downsampling according to the number of divisions to generate a plurality of subband signals, and a quantization step of vector-quantizing the plurality of subband signals into vector indexes using an analysis-by-synthesis method, the linear prediction coefficients being obtained from past decoded signals by backward adaptation.
- The encoding unit includes a subband division filter bank that divides the audio signal into a plurality of subbands and downsamples according to the number of divisions to generate a plurality of subband signals, and a plurality of quantizers that vector-quantize the plurality of subband signals.
- The plurality of quantizers obtain the linear prediction coefficients from past decoded signals by backward adaptation.
- In the transmitter of the present invention, based on the speech signal encoding method that uses a codebook divided into at least two parts when vector-quantizing the plurality of subband signals and generates the excitation vector using the sum of the at least two codebooks, the plurality of quantizers of the encoding unit use a codebook divided into at least two parts when vector-quantizing the plurality of subband signals and generate an excitation vector using the sum of the at least two codebooks.
- In the transmitter of the present invention, based on the speech signal encoding method in which a difference signal indicating the difference between the predicted value of the excitation signal gain obtained by backward adaptation and the true excitation signal gain is generated and adaptively scalar-quantized, the plurality of quantizers of the encoding unit generate a difference signal between the backward-predicted value of the excitation signal gain and the true excitation signal gain, and adaptively scalar-quantize the difference signal.
- With this configuration, the encoded audio signal can be multiplexed and transmitted.
- The receiver of the present invention includes a decoding unit that decodes an encoded audio signal based on the audio signal decoding method for decoding an encoded audio signal produced by the encoding method comprising a generation step of dividing the audio signal into a plurality of subbands and downsampling according to the number of divisions to generate a plurality of subband signals, and a quantization step of vector-quantizing the plurality of subband signals into vector indexes using an analysis-by-synthesis method, the linear prediction coefficients being obtained from past decoded signals by backward adaptation.
- The decoding unit includes a plurality of inverse quantizers that inversely quantize the vector indexes in order to decode them into a plurality of subband signals, and a subband synthesis filter that upsamples the plurality of subband signals and performs band synthesis; the plurality of inverse quantizers obtain the linear prediction coefficients from past decoded signals by backward adaptation.
- In the receiver of the present invention, based on the speech signal decoding method for decoding an encoded speech signal produced by the encoding method that uses a codebook divided into at least two parts when vector-quantizing the plurality of subband signals and generates an excitation vector using the sum of the at least two codebooks, the plurality of inverse quantizers of the decoding unit generate an excitation vector using the sum of the vectors corresponding to two or more vector indexes.
- In the receiver of the present invention, based on the speech signal decoding method for decoding an encoded speech signal produced by the encoding method in which a difference signal indicating the difference between the predicted value of the excitation signal gain obtained by backward adaptation and the true excitation signal gain is generated and adaptively scalar-quantized, the plurality of inverse quantizers of the decoding unit obtain the excitation signal gain by taking the sum of the backward-predicted value of the excitation signal gain and the inversely quantized gain residual.
- The wireless microphone system of the present invention includes a transmitter that generates an encoded audio signal from an audio signal based on the speech signal encoding method described above, in which the audio signal is divided into a plurality of subbands and downsampled according to the number of divisions to generate a plurality of subband signals, the plurality of subband signals are vector-quantized into vector indexes using an analysis-by-synthesis method, and the linear prediction coefficients are obtained from past decoded signals by backward adaptation, and which transmits the encoded voice signal; and a receiver that receives the encoded voice signal transmitted from the transmitter.
- The encoding unit of the transmitter includes a subband division filter bank that divides the audio signal into a plurality of subbands and downsamples according to the number of divisions to generate a plurality of subband signals, and a plurality of quantizers that vector-quantize the plurality of subband signals in order to encode them into vector indexes; the plurality of quantizers obtain the linear prediction coefficients from past decoded signals by backward adaptation.
- With this configuration, an audio signal can be encoded with high compression efficiency, so the radio transmission band can be used effectively and a multi-channel system can be constructed easily.
- The receiver includes a decoding unit that decodes the encoded audio signal based on the audio signal decoding method for decoding an encoded audio signal produced by the encoding method in which the audio signal is divided into a plurality of subbands and downsampled according to the number of divisions to generate a plurality of subband signals, the linear prediction coefficients being obtained from past decoded signals.
- The decoding unit includes a plurality of inverse quantizers that inversely quantize the vector indexes in order to decode them into a plurality of subband signals, and a subband synthesis filter that upsamples the plurality of subband signals and performs band synthesis.
- The plurality of inverse quantizers obtain the linear prediction coefficients from past decoded signals by backward adaptation.
- As described above, by providing subband division means that divides a wideband audio signal into a plurality of bands and vector quantizers whose internal prediction coefficients and the like are adapted by backward prediction, the present invention can provide an audio signal encoding method, an audio signal decoding method, a transmitter, a receiver, and a wireless microphone system that obtain high-quality decoded audio while achieving low delay and high compression efficiency.
- FIG. 1 is a block diagram of a wireless microphone system according to first to third embodiments of the present invention.
- FIG. 2 is a block diagram of a transmitter of the wireless microphone system of the first to third embodiments of the present invention.
- FIG. 3 is a block diagram of a receiver of the wireless microphone system according to the first to third embodiments of the present invention.
- FIG. 4 is a block diagram of the compression encoding unit of the transmitter of the wireless microphone system of the first to third embodiments of the present invention.
- FIG. 5 is a block diagram of the compressed signal decoding unit of the receiver of the wireless microphone system according to the first to third embodiments of the present invention.
- FIG. 6 is a block diagram of a quantizer for each subband in the compression encoding unit of the transmitter of the wireless microphone system according to the first embodiment of the present invention.
- FIG. 7 is a block diagram of an inverse quantizer for each subband in the compressed signal decoding unit of the receiver of the wireless microphone system according to the first embodiment of the present invention.
- FIG. 8 is a block diagram of a quantizer for each subband in the compression encoding unit of the transmitter of the wireless microphone system according to the second embodiment of the present invention.
- FIG. 9 is a block diagram of an inverse quantizer for each subband in the compressed signal decoding unit of the receiver of the wireless microphone system according to the second embodiment of the present invention.
- FIG. 10 is a block diagram of a quantizer for each subband in the compression encoding unit of the transmitter of the wireless microphone system according to the third embodiment of the present invention.
- FIG. 11 is a block diagram of an inverse quantizer for each subband in the compressed signal decoding unit of the receiver of the wireless microphone system according to the third embodiment of the present invention.
- FIG. 12 is a block diagram of a schematic configuration of a conventional subband ADPCM encoder.
- Wireless microphone system 100 includes a transmitter 101 that encodes an audio signal and transmits the encoded audio signal, and a receiver 102 that receives the encoded audio signal from transmitter 101.
- The transmitter 101 includes a microphone 1 that converts sound into an analog audio signal; an audio signal amplifier 2 that amplifies the analog audio signal converted by the microphone 1; an analog-to-digital converter 3 that samples the analog audio signal amplified by the audio signal amplifier 2 at a predetermined sampling frequency and converts it into a digital audio signal of a predetermined bit rate; a compression encoding unit 4 that encodes the digital audio signal converted by the analog-to-digital converter 3 into a low-bit-rate encoded bit string; an error correction encoding unit 5 that encodes the encoded bit string into a code sequence resistant to transmission path errors; a high-frequency amplifier 7 that digitally modulates the error-correction-coded signal, amplifies it to the required transmission output, and outputs it as an output signal; and a transmission antenna 8 that radiates the output signal amplified by the high-frequency amplifier 7 into space as a radio wave.
- Transmitter 101 further includes a setting unit (not shown) that sets the bit rate of the analog-to-digital conversion unit 3, the bit rate of the compression encoding unit 4, and the transmission channel of the high-frequency amplification unit 7, and a control unit (not shown) that controls each unit according to the settings made by the setting unit.
- The error correction encoding unit 5 uses block coding, convolutional coding, interleaving, and the like to convert the bit string into a code sequence that is resistant to transmission path errors.
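As a toy illustration of the interleaving mentioned here (sizes and layout are arbitrary, not the patent's): a block interleaver writes bits row-wise into an R x C array and reads them out column-wise, so a burst of channel errors is spread across many codewords.

```python
# Illustrative R x C block interleaver, flattened as plain lists.
def interleave(bits, rows, cols):
    """Write row-wise, read column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```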
- The receiver 102 includes a reception antenna 9 that receives the radio wave radiated from the transmitter 101 as an input signal; a high-frequency amplification unit 10 that amplifies the input signal received by the reception antenna 9 and converts it to a preset intermediate frequency signal; an intermediate frequency amplification unit 11 that amplifies the intermediate frequency signal converted by the high-frequency amplification unit 10 and limits it to a preset frequency band; a demodulation unit 12 that demodulates the transmission frame signal from the amplified intermediate frequency signal; a channel code decoding unit 13 that detects the additional information from the demodulated transmission frame signal and decodes the encoded sequence; an error correction unit 14 that performs error correction on the sequence decoded by the channel code decoding unit 13 and decodes it into an encoded bit sequence; and a compressed signal decoding unit 15 that decodes the encoded bit sequence decoded by the error correction unit 14 into a digital audio signal.
- The receiver 102 further includes a setting unit (not shown) that sets the reception channel, the bit rate of the compressed signal decoding unit 15, and the like, and a control unit (not shown) that controls each unit according to the settings made by the setting unit.
- The digital effector unit 16 performs digital effect processing such as howling suppression, equalization, and digital reverberation on the digital audio signal decoded by the compressed signal decoding unit 15.
- The compression encoding unit 4 of the transmitter 101 includes a subband division filter bank 4a that divides a wideband audio signal containing frequency components of 8 kHz or more into four bands and downsamples according to the number of divisions to generate four subband signals, and a vector quantization unit 4b that quantizes the four subband signals based on a low-delay code-excited linear prediction (hereinafter simply referred to as LD-CELP) algorithm.
- The vector quantization unit 4b includes four LD-CELP quantizers 20a to 20d that vector-quantize the four subband signals, respectively.
- Each of the LD-CELP quantizers 20a to 20d obtains the linear prediction coefficients from past decoded signals by backward adaptation.
- LD-CELP is the low-delay code-excited linear prediction algorithm used in ITU-T Recommendation G.728, an international standard for coding telephone-band voice signals at 16 kbit/s.
- Downsampling means resampling a signal sampled at a certain frequency at a lower frequency.
- upsampling means resampling a signal sampled at a certain frequency at a higher frequency.
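These two operations can be sketched in plain Python. Note that a real system filters before decimation and after zero interpolation (the subband filter banks perform this role); the anti-aliasing and anti-imaging filters are omitted here for brevity.

```python
# Resampling primitives, without the lowpass filtering a real codec needs.
def downsample(x, factor):
    """Keep every `factor`-th sample (resampling at a lower rate)."""
    return x[::factor]

def upsample_zero_stuff(x, factor):
    """Resample at a higher rate by inserting factor-1 zeros between samples."""
    out = [0] * (len(x) * factor)
    out[::factor] = x
    return out
```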
- The LD-CELP quantizer 20a includes a vector buffer 21 that buffers subband signals corresponding to the number of dimensions of the quantization vector; an excitation VQ codebook 22 of noise vectors; a backward gain adaptor 24 that linearly predicts the gain from past gain-adjusted excitation vectors; a gain multiplier 23 that multiplies the codebook vector by the gain linearly predicted by the backward gain adaptor 24; a synthesis filter 25 that forms the decoded signal from the signal multiplied by the gain in the gain multiplier 23; a backward coefficient adaptor 26 that linearly predicts the filter coefficients of the synthesis filter 25 from past decoded signals and adaptively updates them; an adder 29 that subtracts the signal calculated by the synthesis filter 25 from the subband signal buffered in the vector buffer 21 to calculate the difference (residual signal); an auditory weighting filter 27 that applies frequency weighting to the residual signal calculated by the adder 29; and a least mean square error calculator 28 that finds the index number in the excitation VQ codebook 22 that minimizes the energy of the frequency-weighted residual signal.
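The analysis-by-synthesis search performed by such a quantizer can be sketched as follows, with heavy simplifications: a toy one-tap synthesis filter stands in for synthesis filter 25, the auditory weighting filter and backward adaptors are omitted, and the codebook contents are arbitrary illustration values.

```python
# Simplified analysis-by-synthesis codebook search for one quantizer.
CODEBOOK = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]]

def synthesize(excitation, a=0.5, state=0.0):
    """Toy one-tap all-pole synthesis filter: y[n] = e[n] + a*y[n-1]."""
    out = []
    for e in excitation:
        state = e + a * state
        out.append(state)
    return out

def search_index(target, gain=1.0):
    """Pass each gain-scaled codebook vector through the synthesis filter
    and return the index with the smallest squared error vs. the target."""
    errors = []
    for vec in CODEBOOK:
        y = synthesize([gain * v for v in vec])
        errors.append(sum((t - yy) ** 2 for t, yy in zip(target, y)))
    return min(range(len(CODEBOOK)), key=errors.__getitem__)
```

If the target was itself synthesized from codebook entry 1, the search recovers index 1 exactly.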
- Each of the LD-CELP quantizers 20b, 20c, and 20d has a configuration similar to that of the LD-CELP quantizer 20a and encodes the subband signal of its own band.
- The LD-CELP quantizers 20a to 20d each output an index number to the multiplexer 4c.
- The multiplexer 4c acquires the index numbers from the LD-CELP quantizers 20a to 20d and incorporates them into the bit stream.
- As shown in FIG. 5, the compressed signal decoding unit 15 of the receiver 102 includes a demultiplexer 15a that decomposes the bit stream into four subband index numbers; a vector inverse quantization unit 15b that decodes the four subband index numbers into four subband signals; and a subband synthesis filter bank 15c that synthesizes the four subband signals and outputs the audio signal.
- The vector inverse quantization unit 15b includes four LD-CELP inverse quantizers 30a to 30d.
- The LD-CELP inverse quantizers 30a to 30d each include an excitation VQ codebook 31, a gain multiplier 32, a backward gain adaptor 33, a synthesis filter 34, and a backward coefficient adaptor 35, and decode the subband signal from the index number.
- In operation, subband signals corresponding to the number of dimensions of the quantization vector are buffered in the vector buffer 21.
- Each noise vector in the excitation VQ codebook 22 is multiplied, in the gain multiplier 23, by the gain linearly predicted by the backward gain adaptor 24 from the previous gain-adjusted excitation vectors; the gain-adjusted excitation vector generated here passes through the synthesis filter 25 to form a decoded signal.
- The coefficients of the synthesis filter 25 are linearly predicted from past decoded signals by the backward coefficient adaptor 26 and adaptively updated.
- The difference (residual signal) between the decoded signal and the input subband signal in the vector buffer 21 is calculated, and after frequency weighting by the auditory weighting filter 27, the least mean square error calculator 28 finds the excitation VQ codebook index that minimizes the residual energy.
- This index number is output from each of the LD-CELP quantizers 20a to 20d, combined into a bit stream by the multiplexer 4c, and transmitted from the transmitter 101.
- On the receiving side, the index numbers separated for each subband by the demultiplexer 15a are decoded into subband signals by the LD-CELP inverse quantizers 30a to 30d.
- Each decoded subband signal is zero-interpolated by the subband synthesis filter bank 15c at an interpolation rate proportional to the number of subband divisions, and after subband synthesis filtering, the per-subband outputs are summed and output as the decoded audio signal.
- In this way, a wideband audio signal is divided into a plurality of subband signals, and the redundancy to be encoded is reduced.
- The wireless microphone system includes a transmitter and a receiver.
- The transmitter includes a microphone 1, an audio signal amplification unit 2, an analog-to-digital conversion unit 3, a compression encoding unit 4, an error correction coding unit 5, a line coding unit 6, a high-frequency amplification unit 7, and a transmission antenna 8.
- The compression encoding unit 4 of the transmitter includes a subband division filter bank 4a that divides a wideband audio signal containing frequency components of 8 kHz or more into four bands and downsamples according to the number of divisions to generate four subband signals; a vector quantization unit 4b that quantizes the four subband signals into vector indexes based on the LD-CELP algorithm and outputs the indexes; and a multiplexer 4c that incorporates the indexes output by the vector quantization unit 4b into the encoded bit string.
- the vector quantization unit 4b includes four LD-CELP Has quantizers 40a through 40d!
- each of the LD-CELP quantizers 40a to 40d includes a vector buffer 41, an excitation VQ codebook A42, an excitation VQ codebook B43, a preselector 44, a candidate codebook A45, a candidate codebook B46, an adder 53, a gain multiplier 47, a backward gain adaptor 48, a synthesis filter 49, a backward coefficient adaptor 50, an adder 54, a perceptual weighting filter 51, and a least mean square error calculator 52.
- the receiver is similar in configuration to the receiver 102 of the wireless microphone system 100 of the first embodiment, and includes a reception antenna 9, a high frequency amplification unit 10, an intermediate frequency amplification unit 11, a demodulation unit 12, a line code decoding unit 13, an error correction unit 14, a compressed signal decoding unit 15, a digital effector unit 16, a digital/analog conversion unit 17, an audio amplification unit 18, and a speaker 19.
- the receiver further includes a setting unit (not shown) that sets the reception channel, the bit rate of the compressed signal decoding unit 15, and the like, and a control unit (not shown) that controls each unit according to the settings made by the setting unit.
- the compressed signal decoding unit 15 of the receiver includes a demultiplexer 15a that extracts the indexes of the four bands from the code bit string, a vector dequantization unit 15b that decodes the four band indexes into four subband signals using a decoding method based on the LD-CELP algorithm, and a subband synthesis filter bank 15c that synthesizes the four subband signals to generate a digital audio signal.
- the vector dequantization unit 15b includes four LD-CELP dequantizers 60a to 60d that perform vector dequantization on the four subband signals, respectively.
- each of the LD-CELP inverse quantizers 60a to 60d includes an excitation VQ codebook A61, an excitation VQ codebook B62, an adder 67, a gain multiplier 63, a backward gain adaptor 64, a synthesis filter 65, and a backward coefficient adaptor 66.
- the input audio signal is bandpass-filtered into several frequency bands by the subband splitting filter bank 4a and down-sampled at a decimation rate proportional to the number of divisions.
- each resulting subband signal is buffered in the vector buffer 41 for the number of quantization vector dimensions.
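The splitting-and-decimation described above can be sketched as follows. This is illustrative numpy code assuming simple FIR band filters, not the patent's actual filter bank 4a.

```python
import numpy as np

def split_subbands(x, band_filters):
    """Bandpass-filter the input with each band's FIR filter, then decimate
    by the number of bands (decimation rate proportional to the number of
    divisions), yielding one down-sampled signal per band."""
    n_bands = len(band_filters)
    x = np.asarray(x, dtype=float)
    return [np.convolve(x, h)[:len(x)][::n_bands] for h in band_filters]
```

A four-band split of a 32 kHz signal, as in the embodiment, would use four such filters and a decimation factor of four, giving 8 kHz subband signals.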
- the preselector 44 selects candidate vectors close to the input signal from the excitation VQ codebook A42 and the excitation VQ codebook B43, respectively, and stores them in the candidate codebook A45 and the candidate codebook B46.
- for this preselection, a sub-optimal method requiring fewer operations than full analysis-by-synthesis is used: the target vector, derived by subtracting the past zero-input response from the input signal, is compared with the zero-state response obtained by exciting the synthesis filter 49 and the perceptual weighting filter 51 with each excitation VQ code vector (the sum of vector elements from excitation VQ codebook A42 and excitation VQ codebook B43) multiplied by the backward-adapted gain, and the combinations that maximize the cross-correlation are retained.
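A correlation-based screening of this kind can be sketched as follows. This is a hypothetical stand-in for the preselector's criterion: the patent correlates weighted zero-state responses, which this sketch simplifies to direct correlation against the target.

```python
import numpy as np

def preselect(target, codebook, n_keep):
    """Rank code vectors by squared correlation with the target, normalized
    by vector energy, and keep the best few; far cheaper than running full
    analysis-by-synthesis over the whole codebook."""
    scored = []
    for i, c in enumerate(codebook):
        c = np.asarray(c, dtype=float)
        energy = float(np.dot(c, c))
        score = float(np.dot(target, c)) ** 2 / energy if energy > 0 else 0.0
        scored.append((score, i))
    scored.sort(key=lambda s: -s[0])
    return [i for _, i in scored[:n_keep]]
```

Only the few surviving indices then enter the expensive closed-loop search, which is the memory and computation saving the text claims.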
- the candidate codebook A45 and candidate codebook B46, which have been preselected in this way, are added together to become excitation vector candidates, and the optimal candidate codebook index number is calculated as the least mean square error by analysis using synthesis.
- the analysis-by-synthesis is the same as in the first embodiment: the sum excitation vector of candidate codebook A45 and candidate codebook B46 is generated and then multiplied by the gain in the gain multiplier 47.
- the gain is adaptively predicted from the past gain-adjusted excitation vectors by the backward gain adaptor 48.
- the gain-adjusted excitation vector is passed through the synthesis filter 49 to obtain the decoded speech.
- the coefficients of the synthesis filter 49 are adaptively updated by the backward coefficient adaptor 50.
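Backward adaptation means the decoder can reproduce the gain from data it already has, so no gain bits are transmitted. A minimal log-domain predictor sketch follows; the smoothing coefficient 0.9 is an assumed illustrative value, not a figure from the patent.

```python
import numpy as np

class BackwardGainAdaptor:
    """Predict the current excitation gain from the energy of past
    gain-scaled excitation vectors (first-order log-domain predictor)."""

    def __init__(self, coeff=0.9, floor=1e-6):
        self.coeff = coeff      # assumed smoothing coefficient
        self.floor = floor      # avoids log of zero on silent frames
        self.log_gain = 0.0     # log10 of the predicted gain

    def predict(self):
        return 10.0 ** self.log_gain

    def update(self, scaled_excitation):
        rms = float(np.sqrt(np.mean(np.square(scaled_excitation))))
        self.log_gain = (self.coeff * self.log_gain
                         + (1.0 - self.coeff) * np.log10(max(rms, self.floor)))
```

Encoder and decoder run identical copies of this adaptor on the same past excitation, so their predictions stay in lock-step; the backward coefficient adaptor updates the synthesis filter coefficients by the same principle.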
- the compressed signal decoding unit 15 of the receiver receives the transmitted VQ indexes, selects excitation candidate vectors from the excitation VQ codebook A61 and excitation VQ codebook B62 (identical to those in the encoder), sums the two vectors to form the excitation vector, adjusts the gain with the gain multiplier 63, and generates a decoded subband signal with the synthesis filter 65.
- the prediction coefficients of the gain multiplier 63 and the synthesis filter 65 are adaptively updated by the backward gain adaptor 64 and the backward coefficient adaptor 66, respectively.
- the decoded subband signals of the respective subbands are band-synthesized by the subband synthesis filter bank 15c.
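The decoder-side flow per vector — summing the two selected code vectors, scaling by the locally derived gain, and running an all-pole synthesis filter — can be sketched as follows. The tiny codebooks and zeroed LPC state in the test are placeholders, not values from the patent.

```python
import numpy as np

def decode_vector(idx_a, idx_b, book_a, book_b, gain, lpc, state):
    """Sum the code vectors chosen from the two split codebooks, apply the
    gain, and filter through an all-pole synthesis filter (direct-form
    recursion); returns the decoded samples and the updated filter state."""
    excitation = gain * (np.asarray(book_a[idx_a], dtype=float)
                         + np.asarray(book_b[idx_b], dtype=float))
    out = []
    state = list(state)
    for e in excitation:
        # s[n] = e[n] - sum_k lpc[k] * s[n-1-k]
        s = e - sum(a * past for a, past in zip(lpc, state))
        out.append(s)
        state = [s] + state[:-1]
    return np.array(out), state
```

The `gain` and `lpc` arguments would come from the decoder's own backward gain and coefficient adaptors, mirroring the encoder.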
- by using a codebook split into two or more parts, performing a preselection that picks sub-optimal candidate code vectors, and running analysis-by-synthesis on only the small number of selected candidates, high-quality encoded and decoded speech can be obtained with less memory usage and computation.
- the compression encoding unit 4 of the transmitter has been described as having a subband splitting filter bank 4a that divides a wideband audio signal containing frequency components of 8 kHz and above into four subbands, down-samples according to the number of divisions, and generates four subband signals; however, the subband splitting filter bank 4a is not limited to dividing into four subbands.
- the wireless microphone system includes a transmitter and a receiver.
- the transmitter is similar in configuration to the transmitter 101 of the wireless microphone system 100 of the first embodiment, and includes the microphone 1, the audio signal amplification unit 2, the analog-digital conversion unit 3, the compression encoding unit 4, the error correction coding unit 5, the line coding unit 6, the high frequency amplification unit 7, and the transmission antenna 8.
- the compression encoding unit 4 of the transmitter includes a subband splitting filter bank 4a that divides a wideband audio signal containing frequency components of 8 kHz and above into four bands, down-samples according to the number of divisions, and generates four subband signals, and a vector quantization unit that quantizes the four subband signals into vector indexes based on the LD-CELP algorithm.
- each of the LD-CELP quantizers 70a to 70d includes a vector buffer 71, an excitation VQ codebook A72, an excitation VQ codebook B73, a preselector 74, a candidate codebook A75, a candidate codebook B76, an adaptive gain adder 77, a gain multiplier 78, a backward gain adaptor 79, a synthesis filter 80, a backward coefficient adaptor 81, a perceptual weighting filter 82, and a least mean square error calculator 83.
- the receiver is similar in configuration to the receiver 102 of the wireless microphone system of the first embodiment, and includes a reception antenna 9, a high frequency amplification unit 10, an intermediate frequency amplification unit 11, a demodulation unit 12, a line code decoding unit 13, an error correction unit 14, a compressed signal decoding unit 15, a digital effector unit 16, a digital/analog conversion unit 17, an audio amplification unit 18, and a speaker 19.
- the receiver further includes a setting unit (not shown) that sets the reception channel, the bit rate of the compressed signal decoding unit 15, and the like, and a control unit (not shown) that controls each unit according to the settings made by the setting unit.
- the compressed signal decoding unit 15 of the receiver includes a demultiplexer 15a that extracts the indexes of the four bands, a vector dequantization unit 15b that decodes the four band indexes into four subband signals using a decoding method based on the LD-CELP algorithm, and a subband synthesis filter bank 15c that synthesizes the four subband signals to generate a digital audio signal.
- the vector dequantization unit 15b includes four LD-CELP dequantizers 90a to 90d that perform vector dequantization on the four subband signals, respectively.
- each of the LD-CELP inverse quantizers 90a to 90d includes an excitation VQ codebook A91, an excitation VQ codebook B92, an adaptive gain adder 93, a gain multiplier 94, a backward gain adaptor 95, a synthesis filter 96, and a backward coefficient adaptor 97.
- the input speech signal is bandpass-filtered into several frequency bands by the subband splitting filter bank 4a and down-sampled at a decimation rate proportional to the number of divisions, producing a plurality of subband signals.
- each subband signal is buffered in the vector buffer 71 for the number of quantization vector dimensions.
- the preselector 74 selects candidate vectors close to the input signal from the excitation VQ codebook A72 and the excitation VQ codebook B73, respectively, and stores them in the candidate codebook A75 and the candidate codebook B76.
- for this preselection, a sub-optimal method requiring fewer operations than full analysis-by-synthesis is used: the target vector, derived by subtracting the past zero-input response from the input signal, is compared with the zero-state response obtained by exciting the synthesis filter 80 and the perceptual weighting filter 82 with each excitation VQ code vector (the sum of vector elements from excitation VQ codebook A72 and excitation VQ codebook B73) multiplied by the gain from the gain multiplier 78, and the combinations that maximize the cross-correlation are retained.
- the preselected entries of candidate codebook A75 and candidate codebook B76 are added together to form excitation vector candidates.
- for each excitation vector candidate, the ideal gain value is calculated, and a differential ideal gain value is obtained by subtracting the backward-predicted gain from the ideal gain value, which reduces the gain dynamic range.
- the differential ideal gain value is quantized and encoded by adaptive scalar quantization in the adaptive gain adder 77. In the analysis-by-synthesis loop, this quantized value is added to the backward-predicted gain, the sum is multiplied with the excitation vector by the gain multiplier 78, and the gain-adjusted excitation vector is passed through the synthesis filter 80 to generate decoded speech, whose difference from the input signal in the vector buffer 71 is then calculated.
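The differential-gain idea — quantize only the gap between the ideal gain and the backward-predicted gain — can be sketched as follows. The quantizer table here is a made-up illustration, not the patent's adaptive scalar quantizer.

```python
import numpy as np

def differential_gain(target, excitation, predicted_gain, levels):
    """Compute the least-squares ideal gain matching the excitation to the
    target, subtract the backward-predicted gain to shrink the dynamic
    range, and scalar-quantize the difference against a small table."""
    excitation = np.asarray(excitation, dtype=float)
    ideal = float(np.dot(target, excitation) / np.dot(excitation, excitation))
    diff = ideal - predicted_gain
    idx = int(np.argmin([abs(diff - q) for q in levels]))
    return idx, predicted_gain + levels[idx]  # index is sent; gain is rebuilt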
- the compressed signal decoding unit 15 of the receiver receives the transmitted excitation VQ indexes and selects excitation candidate vectors from the excitation VQ codebook A91 and excitation VQ codebook B92 (identical to those in the encoder). The sum of these two vectors is used as the excitation vector, and its gain is adjusted by the adaptive gain adder 93 and the gain multiplier 94, which operate in the same manner as in the compression encoding unit 4. A decoded subband signal is then generated from the gain-adjusted excitation vector by the synthesis filter 96. The prediction coefficients of the gain multiplier 94 and the synthesis filter 96 are adaptively updated by the backward gain adaptor 95 and the backward coefficient adaptor 97, respectively. The decoded subband signals of the respective subbands are band-synthesis filtered by the subband synthesis filter bank 15c to generate the decoded speech.
- for the excitation candidate vectors in the quantizer provided for each subband, a codebook divided into two or more parts is used, a preselection that picks sub-optimal candidate code vectors is performed, and analysis-by-synthesis is carried out on only the small number of selected candidates.
- the audio signal encoding method, audio signal decoding method, transmitter, receiver, and wireless microphone system according to the present invention achieve a low transmission information rate together with low delay and high compression efficiency, and are useful as a speech codec for wireless communication with severe transmission band restrictions and for real-time call systems using wired communication.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05703747A EP1748423A4 (en) | 2004-01-19 | 2005-01-18 | AUDIO SIGNAL CODING METHOD, AUSIOSIGNAL DECODING METHOD, TRANSMITTER, RECEIVER AND WIRELESS MICROPHONE SYSTEM |
US10/597,215 US20090024395A1 (en) | 2004-01-19 | 2005-01-18 | Audio signal encoding method, audio signal decoding method, transmitter, receiver, and wireless microphone system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004010040A JP2005202262A (ja) | 2004-01-19 | 2004-01-19 | 音声信号符号化方法、音声信号復号化方法、送信機、受信機、及びワイヤレスマイクシステム |
JP2004-010040 | 2004-01-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005069277A1 true WO2005069277A1 (ja) | 2005-07-28 |
Family
ID=34792293
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/000510 WO2005069277A1 (ja) | 2004-01-19 | 2005-01-18 | 音声信号符号化方法、音声信号復号化方法、送信機、受信機、及びワイヤレスマイクシステム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20090024395A1 (ja) |
EP (1) | EP1748423A4 (ja) |
JP (1) | JP2005202262A (ja) |
CN (1) | CN1910657A (ja) |
WO (1) | WO2005069277A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8077804B2 (en) * | 2007-04-03 | 2011-12-13 | Sony Corporation | Transmitting apparatus, receiving apparatus and transmitting/receiving system for digital data |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101033994B1 (ko) | 2005-09-05 | 2011-05-11 | 현대중공업 주식회사 | 선박항해기록장치용 음성 저장시스템 |
JP4876574B2 (ja) * | 2005-12-26 | 2012-02-15 | ソニー株式会社 | 信号符号化装置及び方法、信号復号装置及び方法、並びにプログラム及び記録媒体 |
JP2008058667A (ja) * | 2006-08-31 | 2008-03-13 | Sony Corp | 信号処理装置および方法、記録媒体、並びにプログラム |
RU2464650C2 (ru) * | 2006-12-13 | 2012-10-20 | Панасоник Корпорэйшн | Устройство и способ кодирования, устройство и способ декодирования |
CN101325059B (zh) * | 2007-06-15 | 2011-12-21 | 华为技术有限公司 | 语音编解码收发方法及装置 |
US8644171B2 (en) * | 2007-08-09 | 2014-02-04 | The Boeing Company | Method and computer program product for compressing time-multiplexed data and for estimating a frame structure of time-multiplexed data |
US8190440B2 (en) * | 2008-02-29 | 2012-05-29 | Broadcom Corporation | Sub-band codec with native voice activity detection |
US8351724B2 (en) * | 2009-05-08 | 2013-01-08 | Sharp Laboratories Of America, Inc. | Blue sky color detection technique |
US20100322513A1 (en) * | 2009-06-19 | 2010-12-23 | Sharp Laboratories Of America, Inc. | Skin and sky color detection and enhancement system |
KR101388901B1 (ko) * | 2009-06-24 | 2014-04-24 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | 오디오 신호 디코더, 오디오 신호를 디코딩하는 방법 및 캐스케이드된 오디오 객체 처리 단계들을 이용한 컴퓨터 프로그램 |
US20110196673A1 (en) * | 2010-02-11 | 2011-08-11 | Qualcomm Incorporated | Concealing lost packets in a sub-band coding decoder |
KR101071540B1 (ko) * | 2011-06-20 | 2011-10-11 | (주)이어존 | 자동으로 페어링 되는 교실용 무선 마이크 시스템 |
CN102436819B (zh) * | 2011-10-25 | 2013-02-13 | 杭州微纳科技有限公司 | 无线音频压缩、解压缩方法及音频编码器和音频解码器 |
US8924203B2 (en) | 2011-10-28 | 2014-12-30 | Electronics And Telecommunications Research Institute | Apparatus and method for coding signal in a communication system |
US9717440B2 (en) * | 2013-05-03 | 2017-08-01 | The Florida International University Board Of Trustees | Systems and methods for decoding intended motor commands from recorded neural signals for the control of external devices or to interact in virtual environments |
CN105094727B (zh) * | 2014-05-23 | 2018-08-21 | 纬创资通股份有限公司 | 扩展屏幕模式下的应用程序运作方法以及平板计算机 |
US10418957B1 (en) * | 2018-06-29 | 2019-09-17 | Amazon Technologies, Inc. | Audio event detection |
US11451931B1 (en) | 2018-09-28 | 2022-09-20 | Apple Inc. | Multi device clock synchronization for sensor data fusion |
BR112021013767A2 (pt) * | 2019-01-13 | 2021-09-21 | Huawei Technologies Co., Ltd. | Método implementado por computador para codificação de áudio, dispositivo eletrônico e meio legível por computador não transitório |
USD881837S1 (en) * | 2019-12-13 | 2020-04-21 | Shenzhen Longxiang Intelligent Interconnection Technology Co., Ltd. | Signal receiving device |
CN115955250B (zh) * | 2023-03-14 | 2023-05-12 | 燕山大学 | 一种高校科研数据采集管理系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0612097A (ja) * | 1992-06-29 | 1994-01-21 | Nippon Telegr & Teleph Corp <Ntt> | 音声の予測符号化方法および装置 |
JPH0667696A (ja) * | 1992-08-21 | 1994-03-11 | Sony Corp | 音声符号化方法 |
JPH09281995A (ja) * | 1996-04-12 | 1997-10-31 | Nec Corp | 信号符号化装置及び方法 |
JPH09297597A (ja) * | 1996-03-06 | 1997-11-18 | Fujitsu Ltd | 高能率音声伝送方法及び高能率音声伝送装置 |
JPH1097298A (ja) * | 1996-09-24 | 1998-04-14 | Sony Corp | ベクトル量子化方法、音声符号化方法及び装置 |
JP2002032100A (ja) * | 2000-05-26 | 2002-01-31 | Lucent Technol Inc | オーディオ信号を符号化する方法 |
JP2003032382A (ja) * | 2001-07-19 | 2003-01-31 | Hitachi Ltd | 字幕付き音声通信装置 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69328450T2 (de) * | 1992-06-29 | 2001-01-18 | Nippon Telegraph And Telephone Corp., Tokio/Tokyo | Verfahren und Vorrichtung zur Sprachkodierung |
JPH08190764A (ja) * | 1995-01-05 | 1996-07-23 | Sony Corp | ディジタル信号処理方法、ディジタル信号処理装置及び記録媒体 |
SE504010C2 (sv) * | 1995-02-08 | 1996-10-14 | Ericsson Telefon Ab L M | Förfarande och anordning för prediktiv kodning av tal- och datasignaler |
GB2318029B (en) * | 1996-10-01 | 2000-11-08 | Nokia Mobile Phones Ltd | Audio coding method and apparatus |
JP3064947B2 (ja) * | 1997-03-26 | 2000-07-12 | 日本電気株式会社 | 音声・楽音符号化及び復号化装置 |
JP3022462B2 (ja) * | 1998-01-13 | 2000-03-21 | 興和株式会社 | 振動波の符号化方法及び復号化方法 |
US6370502B1 (en) * | 1999-05-27 | 2002-04-09 | America Online, Inc. | Method and system for reduction of quantization-induced block-discontinuities and general purpose audio codec |
JP2002330075A (ja) * | 2001-05-07 | 2002-11-15 | Matsushita Electric Ind Co Ltd | サブバンドadpcm符号化方法、復号方法、サブバンドadpcm符号化装置、復号装置およびワイヤレスマイクロホン送信システム、受信システム |
JP3922979B2 (ja) * | 2002-07-10 | 2007-05-30 | 松下電器産業株式会社 | 伝送路符号化方法、復号化方法、及び装置 |
- 2004-01-19 JP JP2004010040A patent/JP2005202262A/ja active Pending
- 2005-01-18 CN CNA2005800025633A patent/CN1910657A/zh active Pending
- 2005-01-18 US US10/597,215 patent/US20090024395A1/en not_active Abandoned
- 2005-01-18 WO PCT/JP2005/000510 patent/WO2005069277A1/ja active Application Filing
- 2005-01-18 EP EP05703747A patent/EP1748423A4/en not_active Withdrawn
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0612097A (ja) * | 1992-06-29 | 1994-01-21 | Nippon Telegr & Teleph Corp <Ntt> | 音声の予測符号化方法および装置 |
JPH0667696A (ja) * | 1992-08-21 | 1994-03-11 | Sony Corp | 音声符号化方法 |
JPH09297597A (ja) * | 1996-03-06 | 1997-11-18 | Fujitsu Ltd | 高能率音声伝送方法及び高能率音声伝送装置 |
JPH09281995A (ja) * | 1996-04-12 | 1997-10-31 | Nec Corp | 信号符号化装置及び方法 |
JPH1097298A (ja) * | 1996-09-24 | 1998-04-14 | Sony Corp | ベクトル量子化方法、音声符号化方法及び装置 |
JP2002032100A (ja) * | 2000-05-26 | 2002-01-31 | Lucent Technol Inc | オーディオ信号を符号化する方法 |
JP2003032382A (ja) * | 2001-07-19 | 2003-01-31 | Hitachi Ltd | 字幕付き音声通信装置 |
Non-Patent Citations (2)
Title |
---|
FUJIWARA H. ET AL: "Multi Media Joho Asshuku.", SHOHAN, KYORITSU SHUPPAN CO., LTD.,, 1 March 2000 (2000-03-01), pages 74 - 78, XP002992128 * |
See also references of EP1748423A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8077804B2 (en) * | 2007-04-03 | 2011-12-13 | Sony Corporation | Transmitting apparatus, receiving apparatus and transmitting/receiving system for digital data |
Also Published As
Publication number | Publication date |
---|---|
US20090024395A1 (en) | 2009-01-22 |
EP1748423A1 (en) | 2007-01-31 |
CN1910657A (zh) | 2007-02-07 |
EP1748423A4 (en) | 2010-03-17 |
JP2005202262A (ja) | 2005-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005069277A1 (ja) | 音声信号符号化方法、音声信号復号化方法、送信機、受信機、及びワイヤレスマイクシステム | |
JP4506039B2 (ja) | 符号化装置及び方法、復号装置及び方法、並びに符号化プログラム及び復号プログラム | |
CN101023471B (zh) | 可伸缩性编码装置、可伸缩性解码装置、可伸缩性编码方法、可伸缩性解码方法、通信终端装置以及基站装置 | |
EP1768105B1 (en) | Speech coding | |
EP1881488B1 (en) | Encoder, decoder, and their methods | |
WO2004097796A1 (ja) | 音声符号化装置、音声復号化装置及びこれらの方法 | |
WO2003091989A1 (en) | Coding device, decoding device, coding method, and decoding method | |
CN103098126A (zh) | 音频编码器、音频解码器及利用复预测处理多信道音频信号的相关方法 | |
JPH08263096A (ja) | 音響信号符号化方法及び復号化方法 | |
US20080140393A1 (en) | Speech coding apparatus and method | |
KR20090007396A (ko) | 손실 인코딩된 데이터 스트림 및 무손실 확장 데이터 스트림을 이용하여 소스 신호를 무손실 인코딩하기 위한 방법 및 장치 | |
CN1918630B (zh) | 量化信息信号的方法和设备 | |
WO2001003122A1 (en) | Method for improving the coding efficiency of an audio signal | |
JP2003323199A (ja) | 符号化装置、復号化装置及び符号化方法、復号化方法 | |
JP4603485B2 (ja) | 音声・楽音符号化装置及び音声・楽音符号化方法 | |
US6141640A (en) | Multistage positive product vector quantization for line spectral frequencies in low rate speech coding | |
JPH1097295A (ja) | 音響信号符号化方法及び復号化方法 | |
JP3092653B2 (ja) | 広帯域音声符号化装置及び音声復号装置並びに音声符号化復号装置 | |
CN101689372B (zh) | 信号分析装置、信号控制装置及其系统、方法 | |
JPH07183857A (ja) | 伝送システム | |
JP4373693B2 (ja) | 音響信号の階層符号化方法および階層復号化方法 | |
CN101911183A (zh) | 信号分析控制、信号分析、信号控制系统、装置以及程序 | |
JP3576485B2 (ja) | 固定音源ベクトル生成装置及び音声符号化/復号化装置 | |
JP4287840B2 (ja) | 符号化装置 | |
EP1334485B1 (en) | Speech codec and method for generating a vector codebook and encoding/decoding speech signals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10597215 Country of ref document: US Ref document number: 200580002563.3 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005703747 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2005703747 Country of ref document: EP |