WO2000063878A1 - Speech encoder, speech processor, and speech processing method (Codeur de parole, processeur de parole et procede de traitement de la parole)


Info

Publication number
WO2000063878A1
WO2000063878A1 (PCT/JP1999/002089)
Authority
WO
WIPO (PCT)
Prior art keywords
speech
signal
filter
vector
audio
Prior art date
Application number
PCT/JP1999/002089
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
Masanao Suzuki
Yasuji Ota
Yoshiteru Tsuchinaga
Original Assignee
Fujitsu Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Limited filed Critical Fujitsu Limited
Priority to EP99913723A priority Critical patent/EP1187337B1/en
Priority to PCT/JP1999/002089 priority patent/WO2000063878A1/ja
Priority to JP2000612922A priority patent/JP3905706B2/ja
Priority to DE69937907T priority patent/DE69937907T2/de
Publication of WO2000063878A1 publication Critical patent/WO2000063878A1/ja
Priority to US09/897,839 priority patent/US6470312B1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters

Definitions

  • The present invention relates to a speech encoding apparatus, a speech processing apparatus, and a speech processing method, and particularly to techniques that assume a speech generation model using A-b-S (Analysis-by-Synthesis) type vector quantization at a low bit rate (specifically, 4 kb/s or less).
  • Specifically, it relates to a speech encoding apparatus that performs speech encoding, under such a model, on speech signals whose fixed-length sections contain signals of a plurality of periods, and to a speech processing apparatus and a speech processing method that perform speech analysis and synthesis assuming a speech generation model using A-b-S type vector quantization.
  • CELP (Code Excited Linear Prediction) is known as a coding method for telephone-band (0.3 to 3.4 kHz) speech, and is widely used in fields such as digital mobile communication and corporate communication systems.
  • CELP transmits linear predictive coding (LPC) coefficients, which represent human vocal tract characteristics, and parameters representing an excitation signal (sound source information) consisting of the pitch component and the noise component of speech.
  • The human vocal tract is modeled as an LPC synthesis filter H(z) expressed by equation (1), and the input (excitation signal) to the LPC synthesis filter is assumed to consist of a pitch period component, representing the periodicity of speech, and a random noise component.
  • The filter coefficients of the LPC synthesis filter and the pitch period component and noise component of the excitation signal are extracted, and information compression is achieved by transmitting the quantization results (quantization indexes).
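The expression for equation (1) did not survive this extraction. The standard LPC synthesis filter used in CELP, consistent with the surrounding description (the sign convention is an assumption), is

$$H(z) = \frac{1}{A(z)} = \frac{1}{1 - \sum_{i=1}^{p} \alpha_i z^{-i}}$$

where the α_i are the LPC filter coefficients and p is the filter order.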
  • FIG. 18 is a diagram showing an encoding algorithm of CELP.
  • First, the input speech signal Sn is input to the LPC analysis means 21.
  • Here, p is the filter order: p = 10 to 12 for telephone-band speech and p = 16 to 20 for wideband speech.
  • the LPC filter coefficients are quantized by scalar quantization, vector quantization, or the like (a quantization unit is not shown), and then the quantized index is transmitted to the decoder side.
  • the excitation signal is quantized.
  • an adaptive codebook Ba storing past excitation signal sequences is prepared.
  • a noise codebook Bn storing various noise signal sequence patterns is prepared for quantization of the noise component.
  • Vector quantization by A-b-S is performed using the codebooks Ba and Bn. That is, the multipliers 22a and 22b first scale the code vectors stored in the respective codebooks by their gains.
  • The adder 23 adds the output values of the multipliers 22a and 22b, and the sum is input to the LPC synthesis filter 24 composed of the LPC filter coefficients.
  • The LPC synthesis filter 24 performs filtering to obtain a reproduced signal Sn*. The arithmetic unit 26 then obtains the error en between the input speech signal Sn and the reproduced signal Sn*.
  • The error power evaluation means 23 controls the switches SWa and SWb over all the patterns in the codebooks Ba and Bn, performs error evaluation, and determines the code vector that minimizes the error en as the optimal code vector.
  • the gain for the optimal code vector selected at that time is defined as the optimal gain. Then, the optimum code vector and the optimum gain are quantized (a quantization unit is not shown) to obtain a quantization index.
  • The quantization index of the LPC filter coefficients, the quantization index of the optimal code vector (in practice, the "delay" value described later for the vector extracted from the adaptive codebook Ba, and the quantization index of the code vector of the noise codebook Bn), and the quantization index of the optimal gain are transmitted to the decoder side.
  • The decoder side has the same codebooks Ba and Bn as the encoder side; it decodes the LPC filter coefficients, the optimal code vector, and the optimal gain from the transmitted information, and reproduces the speech signal with an LPC synthesis filter as on the encoder side.
  • CELP achieves speech compression by modeling the speech generation process and quantizing and transmitting the feature parameters of the model.
  • In CELP, vocal tract information and sound source information are updated for each fixed-length section (frame) of 5 to 10 msec. As a result, coded speech without noticeable degradation can be obtained even when the bit rate is reduced to about 5 to 6 kb/s. A simplified sketch of this search loop follows.
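As a concrete illustration of the A-b-S search just described, the following Python sketch performs a joint exhaustive search over an adaptive and a noise codebook with jointly optimal gains. All interfaces are assumed for illustration; practical CELP coders search the two codebooks sequentially and handle the filter's zero-input response separately, so this is a simplified sketch rather than the patent's implementation.

```python
import numpy as np
from scipy.signal import lfilter

def celp_search(x, lpc, acb, ncb):
    """Minimal A-b-S sketch: find the codevector pair and gains that
    minimize the squared error against the target frame x.

    x   : input speech frame (length N)
    lpc : coefficients a_1..a_p of H(z) = 1 / (1 - sum_i a_i z^-i)
    acb : adaptive codebook, shape (Ka, N)
    ncb : noise codebook, shape (Kn, N)
    """
    den = np.concatenate(([1.0], -np.asarray(lpc)))  # denominator of H(z)
    best = (None, None, None, np.inf)
    for i, ca in enumerate(acb):
        sa = lfilter([1.0], den, ca)                 # synthesize adaptive part
        for j, cn in enumerate(ncb):
            sn = lfilter([1.0], den, cn)             # synthesize noise part
            A = np.stack([sa, sn], axis=1)
            g, *_ = np.linalg.lstsq(A, x, rcond=None)  # jointly optimal gains
            err = float(np.sum((x - A @ g) ** 2))
            if err < best[3]:
                best = (i, j, g, err)
    return best  # (adaptive index, noise index, gains, minimum error)
```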
  • However, the frame length must be 10 msec or more to bring the bit rate down to 4 kb/s or less. In that case, one frame often contains input signals having a plurality of periods, which degrades the quality of the coded speech.
  • Furthermore, the periodicity of the output signal from the adaptive codebook Ba is limited to a single-frame component, so its ability to express periodicity is weak. When one frame of the input signal contains a plurality of periods, the periodicity therefore cannot be represented with high accuracy, and the coding efficiency is degraded.
  • DISCLOSURE OF THE INVENTION
  • Still another object of the present invention is to provide an audio processing method that performs optimal audio processing according to input audio and reproduces high-quality audio.
  • To achieve the above objects, there is provided a speech encoding apparatus 10 that divides a speech signal Sn into sections of a fixed length and performs speech encoding assuming a speech generation model.
  • The apparatus comprises: an adaptive codebook Ba that stores a signal vector sequence of past speech signals; vector extraction means for extracting the signal vector stored at a position shifted from the starting point O of the adaptive codebook Ba by a fixed delay L, together with the neighboring vectors stored in the vicinity of that signal vector; a high-order long-term prediction synthesis filter 12 that generates a long-term predicted speech signal Sna-1 by performing long-term prediction analysis and synthesis relating to the periodicity of the speech signal Sn on the signal vector and the neighboring vectors; and filter coefficient calculating means 13 for calculating the filter coefficients of the long-term prediction synthesis filter 12.
  • It further comprises: a perceptual weighting synthesis filter 14, composed of a linear prediction synthesis filter 14a estimated by linear prediction analysis and synthesis representing vocal tract characteristics and a first perceptual weighting filter 14b connected before or after the linear prediction synthesis filter 14a to weight the auditory characteristics, which generates a reproduced encoded speech signal Sna from the long-term predicted speech signal Sna-1; a second perceptual weighting filter 14b-1 that applies perceptual weighting to the speech signal Sn; and error calculating means that calculates the error between the perceptually weighted speech signal Sn' and the reproduced encoded speech signal Sna.
  • The speech encoding apparatus 10 operates as follows.
  • the adaptive codebook Ba stores a signal vector sequence of a past speech signal.
  • The vector extraction means 11 extracts the signal vector stored at a position shifted by a fixed delay L from the starting point O of the adaptive codebook Ba, and the neighboring vectors stored near that signal vector.
  • The high-order long-term prediction synthesis filter 12 performs long-term prediction analysis and synthesis relating to the periodicity of the speech signal Sn on the signal vector and the neighboring vectors, and generates the long-term predicted speech signal Sna-1.
  • The filter coefficient calculating means 13 calculates the filter coefficients of the long-term prediction synthesis filter 12.
  • The perceptual weighting synthesis filter 14 is composed of the linear prediction synthesis filter 14a, estimated by linear prediction analysis and synthesis representing the vocal tract characteristics, and the first perceptual weighting filter 14b, connected before or after the linear prediction synthesis filter 14a to weight the auditory characteristics; it generates the reproduced encoded speech signal Sna from the long-term predicted speech signal Sna-1.
  • The second perceptual weighting filter 14b-1 applies perceptual weighting to the speech signal Sn.
  • The error calculating means 15 calculates the error En between the perceptually weighted speech signal Sn' and the reproduced encoded speech signal Sna.
  • the minimum error detecting means 16 detects the minimum error from the errors repeatedly calculated by the error calculating means 15 by changing the delay L.
  • The optimum value transmitting means 17 transmits, as the optimum values, the optimum filter coefficients βa, which are the filter coefficients when the minimum error is detected, and the optimum delay La, which is the delay when the minimum error is detected.
  • There is also provided a speech processing apparatus 100 that performs speech analysis and synthesis assuming a speech generation model; it is composed of a speech encoding processing device 1 and a speech decoding processing device 2.
  • The speech encoding processing device 1 includes first speech encoding means 20, which encodes the speech signal and generates encoded information when the speech signal, divided into processing sections of a fixed length, does not contain a plurality of periods, and second speech encoding means 10, which is used when a plurality of periods are contained.
  • The second speech encoding means 10 comprises: an adaptive codebook that stores a signal vector sequence of past speech signals; vector extraction means for extracting the signal vector stored at a position shifted from the start point of the adaptive codebook by a fixed delay, together with the neighboring vectors stored near that signal vector; a high-order long-term prediction synthesis filter that performs long-term prediction analysis and synthesis relating to the periodicity of the speech signal on the signal vector and the neighboring vectors to generate a long-term predicted speech signal; filter coefficient calculating means for calculating the filter coefficients of the long-term prediction synthesis filter; a perceptual weighting synthesis filter, composed of a linear prediction synthesis filter estimated by linear prediction analysis and synthesis representing vocal tract characteristics and a first perceptual weighting filter connected before or after it to weight the auditory characteristics, which generates a reproduced encoded speech signal from the long-term predicted speech signal; a second perceptual weighting filter that applies perceptual weighting to the speech signal; error calculating means; minimum error detecting means for detecting the minimum error; and optimum value transmitting means for transmitting, as the optimum values, the optimum filter coefficients, which are the filter coefficients when the minimum error is detected, and the optimum delay, which is the delay when the minimum error is detected.
  • The speech decoding processing device 2 includes first speech decoding means for decoding the encoded information to reproduce speech, and second speech decoding means for decoding the optimum values to reproduce speech.
  • The first speech encoding means 20 encodes the speech signal and generates encoded information when the speech signal, divided into processing sections of a fixed length, does not contain a plurality of periods.
  • the first audio decoding means 120 decodes the encoded information to reproduce the audio.
  • the second speech decoding means 110 reproduces speech by decoding the optimum value.
  • There is further provided a speech processing method. An adaptive codebook that stores a signal vector sequence of past speech signals is maintained. When the speech signal, divided into processing sections of a fixed length, does not contain a plurality of periods, the speech signal is encoded to generate encoded information; when it does, the signal vector stored at a position shifted from the start point of the adaptive codebook by a fixed delay and the neighboring vectors stored near that signal vector are extracted.
  • Long-term prediction analysis and synthesis relating to the periodicity of the speech signal is then performed on the signal vector and the neighboring vectors using a high-order long-term prediction synthesis filter to generate a long-term predicted speech signal, the filter coefficients of the long-term prediction synthesis filter are calculated, and a reproduced encoded speech signal is generated through a linear prediction synthesis filter estimated by linear prediction analysis and synthesis representing the vocal tract characteristics.
  • The method transmits, as the optimum values, the optimum filter coefficients and the optimum delay, which are the filter coefficients and the delay when the minimum error is detected, and decodes the encoded information or the optimum values to reproduce speech.
  • With the above configuration, when a fixed-length section of the speech signal does not contain signals of a plurality of periods, the speech signal is encoded to generate encoded information.
  • When it does, speech encoding is performed using the high-order long-term prediction synthesis filter estimated by long-term prediction analysis and synthesis and the linear prediction synthesis filter estimated by linear prediction analysis and synthesis to generate the optimum values, and the decoding side decodes the encoded information and the optimum values.
  • FIG. 1 is a diagram illustrating the principle of a speech encoding apparatus according to the present invention.
  • FIG. 2 is a diagram for explaining the order of the LTP synthesis filter.
  • FIG. 3 is a diagram for explaining the order of the LTP synthesis filter.
  • Figure 4 is a flowchart showing the processing procedure for searching for the optimal LTP filter coefficients and the optimal lag.
  • FIG. 5 is a diagram illustrating the principle of the audio processing device.
  • FIG. 6 is a diagram showing a configuration of the first exemplary embodiment.
  • FIG. 7 is a diagram showing an operation when the value of the lag is changed.
  • FIG. 8 is a diagram showing the state update of the adaptive codebook.
  • FIG. 9 is a diagram showing information transmitted by the speech encoding processing device.
  • FIG. 10 is a diagram showing the configuration of the second embodiment.
  • FIG. 11 is a diagram showing an example of the arrangement of poles when the filter is stable.
  • FIG. 12 is a diagram showing an example of the arrangement of poles when the filter is unstable.
  • FIG. 13 is a diagram showing a configuration of the third embodiment.
  • FIG. 14 is a diagram showing the configuration of the fourth embodiment.
  • FIG. 15 is a diagram showing the configuration of the speech decoding processing device.
  • FIG. 16 is a diagram showing the configuration of the speech decoding processing device.
  • FIG. 17 is a flowchart showing the processing procedure of the audio processing method.
  • FIG. 18 is a diagram showing the encoding algorithm of CELP.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 is a diagram illustrating the principle of a speech encoding apparatus according to the present invention.
  • The speech encoding apparatus 10 divides the speech signal Sn into sections of a fixed length (frames; here a bit rate of 4 kb/s or less is assumed) and performs speech encoding assuming a speech generation model.
  • the adaptive codebook Ba stores a signal vector (code vector) sequence of the past speech signal Sn for each frame.
  • The vector extraction means 11 extracts the signal vector stored at a position shifted by a fixed delay L from the starting point of the adaptive codebook Ba, and the neighboring vectors stored in the vicinity of that signal vector.
  • In FIG. 1, the two neighboring vectors CL-1 and CL+1, immediately above and below the signal vector CL, are extracted from the adaptive codebook Ba, but two or more neighbors on each side may be extracted. Alternatively, only the vector above the signal vector CL, or only the vector below it, may be extracted from the adaptive codebook Ba as the neighboring vector. A sketch of this extraction follows.
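A minimal sketch of this extraction step, under an assumed array layout (past excitation in a 1-D array, newest sample last; a lag of L means L samples back from the end):

```python
import numpy as np

def extract_vectors(acb, lag, frame_len, num_left=1, num_right=1):
    """Return the signal vector C_L and its neighbours
    C_{lag-num_left} .. C_{lag+num_right}, keyed by lag value."""
    def past_vector(L):
        seg = acb[len(acb) - L: len(acb) - L + frame_len]
        # if L < frame_len the segment is shorter than a frame;
        # np.resize tiles it periodically, the usual adaptive-codebook
        # extension
        return np.resize(seg, frame_len)

    return {lag + d: past_vector(lag + d)
            for d in range(-num_left, num_right + 1)}
```

For example, extract_vectors(acb, lag=40, frame_len=80) returns the three vectors for lags 39, 40, and 41.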
  • The high-order long-term prediction synthesis filter 12 performs long-term prediction (LTP: Long Term Prediction) analysis and synthesis relating to the periodicity of the speech signal Sn on the extracted signal vector and neighboring vectors to generate the long-term predicted speech signal Sna-1.
  • The filter coefficient calculating means 13 calculates the filter coefficients of the long-term prediction synthesis filter 12.
  • The perceptual weighting synthesis filter 14 is composed of a linear prediction synthesis filter 14a (hereinafter, LPC synthesis filter 14a), estimated by linear prediction (LPC) analysis and synthesis representing the vocal tract characteristics, and a first perceptual weighting filter 14b, connected before or after the LPC synthesis filter 14a to weight the auditory characteristics; it generates the reproduced encoded speech signal Sna from the long-term predicted speech signal Sna-1.
  • The second perceptual weighting filter 14b-1 applies perceptual weighting to the speech signal Sn.
  • The error calculating means 15 calculates the error En between the perceptually weighted speech signal Sn' and the reproduced encoded speech signal Sna.
  • the minimum error detecting means 16 detects the minimum error from the errors repeatedly calculated by the error calculating means 15 by changing the delay L.
  • The optimum value transmitting means 17 transmits, as the optimum values, the optimum filter coefficients βa, which are the filter coefficients when the minimum error is detected, and the optimum delay (lag) La, which is the delay when the minimum error is detected. In practice, the optimum value transmitting means 17 quantizes the optimum filter coefficients βa and transmits the quantized values.
  • Next, the long-term prediction synthesis filter 12 is described. Hereinafter, the long-term prediction synthesis filter 12 is called the LTP synthesis filter 12, its filter coefficients are called the LTP filter coefficients, and the delay is called the lag.
  • Equation (2) gives the transfer function P(z) of the LTP synthesis filter 12.
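The formula behind equation (2) is not reproduced in this extraction. A standard form for a high-order (multi-tap) long-term prediction synthesis filter with lag L, consistent with FIGS. 2 and 3 but stated here as an assumption, is

$$P(z) = \frac{1}{1 - \sum_{i=-J_1}^{J_2} \beta_i z^{-(L+i)}}$$

where J_1 and J_2 count the taps to the left and right of the lag position, so the number of LTP filter coefficients is J_1 + J_2 + 1.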
  • FIG. 2 is a diagram for explaining the order of the LTP synthesis filter 12: taps are taken around the signal vector CL, in the left direction and in the right direction (J2).
  • FIG. 3 is a diagram for explaining the order of the LTP synthesis filter 12 when the signal vector CL lies at the lag-L position and the taps extend in the right direction (J3).
  • The speech encoding apparatus 10 obtains the LTP filter coefficients βi and the lag L that minimize the error evaluation formula of equation (4).
  • Here, X is the target signal vector (the vector of the input speech signal used when calculating the error), and H is the impulse-response matrix of the LPC synthesis filter 14a applied to the candidate vectors at the lag-L position.
  • Expanding equation (4) gives equation (5); partially differentiating equation (5) with respect to βi gives equation (6), where A' denotes the transpose of the matrix A.
  • Setting the derivative to zero yields equation (7), and rearranging gives the normal equations (8); the LTP filter coefficient vector β is obtained from equation (8), where R⁻¹ denotes the inverse of the matrix R.
  • FIG. 4 is a flowchart showing the processing procedure for searching for the optimal LTP filter coefficients βa and the optimal lag La.
  • For each candidate lag, equation (8) is solved to obtain the LTP filter coefficient vector β, and the resulting error is evaluated.
  • The search range of the lag L is arbitrary; when the sampling frequency of the input signal is 8 kHz, the range may be set to 20 ≤ L ≤ 147. A sketch of this search follows.
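The following Python sketch follows the spirit of equations (4) through (8) and FIG. 4 under assumed interfaces: for each lag it solves the normal equations R·β = v for the LTP coefficients, evaluates the residual error, and keeps the minimum. A pseudo-inverse could guard against a singular R; that refinement is omitted here.

```python
import numpy as np

def ltp_search(x, h, acb, lag_range=range(20, 148), taps=(-1, 0, 1)):
    """Search for the optimal lag La and LTP coefficient vector beta.

    x   : perceptually weighted target vector (length N)
    h   : impulse response of the weighting synthesis filter
    acb : past excitation as a 1-D array, newest sample last
    """
    N = len(x)

    def past_vector(L):
        seg = acb[len(acb) - L: len(acb) - L + N]
        return np.resize(seg, N)       # periodic extension when L < N

    best_lag, best_beta, best_err = None, None, np.inf
    for L in lag_range:
        # A = H C: every candidate vector filtered through h
        A = np.stack(
            [np.convolve(h, past_vector(L + d))[:N] for d in taps], axis=1)
        R = A.T @ A                    # covariance matrix of eq. (8)
        v = A.T @ x
        beta = np.linalg.solve(R, v)   # beta = R^-1 v
        err = float(x @ x - v @ beta)  # ||x - A beta||^2 at the optimum
        if err < best_err:
            best_lag, best_beta, best_err = L, beta, err
    return best_lag, best_beta, best_err
```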
  • Although the input to the LTP synthesis filter 12 is a vector from the adaptive codebook Ba here, any other vector may be used: for example, a white noise vector, a pulse vector, or a previously learned noise vector.
  • Conventional CELP performs speech encoding at transmission rates of 4 to 16 kb/s on frames with a short frame length of 5 to 10 msec.
  • At bit rates of 4 kb/s or less, however, the frame length becomes 10 msec or more, and one frame is then highly likely to contain a signal of a plurality of periods.
  • When a signal of a plurality of periods is contained in one frame, the speech encoding apparatus 10 of the present invention therefore extracts from the adaptive codebook Ba not only the signal vector but also the neighboring vectors at nearby positions, performs long-term prediction synthesis on these vectors with the LTP synthesis filter 12, and then performs speech encoding.
  • The periodicity of the speech can thus be expressed well, and the encoding accuracy can be improved.
  • Fig. 5 shows the principle of the speech processing device.
  • The speech processing apparatus 100 is composed of the speech encoding processing device 1 and the speech decoding processing device 2.
  • the audio encoding processing device 1 includes first audio encoding means 20 and second audio encoding means 10.
  • The first speech encoding means 20 encodes the speech signal and generates encoded information when the speech signal, divided into processing sections (frames) of a fixed length, does not contain a plurality of periods.
  • Since the first speech encoding means 20 corresponds in practice to CELP, the case where speech encoding is performed using the first speech encoding means 20 is hereinafter referred to as the CELP mode.
  • The second speech encoding means 10 performs speech encoding when the speech signal, divided into processing sections (frames) of a fixed length, contains a plurality of periods.
  • the second speech encoding unit 10 corresponds to the speech encoding device 10 described above, and a detailed description thereof will be omitted. Note that the case where speech encoding processing is performed using the second speech encoding means 10 is hereinafter referred to as LTP mode.
  • the audio decoding device 2 includes a first audio decoding unit 120 and a second audio decoding unit 110.
  • the first audio decoding means 120 decodes the encoded information to reproduce the audio. That is, decoding processing corresponding to the case where the encoding side encodes in the CELP mode is performed.
  • the second speech decoding unit 110 decodes the optimum value generated by the second speech encoding unit 10 to reproduce the speech. That is, decoding processing corresponding to the case where the encoding side performs encoding in the LTP mode is performed.
  • the audio decoding processing device 2 will be described later with reference to FIGS.
  • FIG. 6 is a diagram showing a configuration of the first exemplary embodiment.
  • The speech encoding processing device 1a is mainly composed of the CELP coder of FIG. 18 (the first speech encoding means 20) and the speech encoding apparatus of FIG. 1 (the second speech encoding means 10).
  • the input audio signal X (n) is divided into frames of a fixed length, and the encoding process is performed in frame units. Let N be the frame length.
  • First, the LPC analysis means 21, the LPC filter coefficient quantization means 19a, and the LPC filter coefficient inverse quantization means 19b, which are used in common by the CELP mode and the LTP mode, are described.
  • The LPC filter coefficient quantization means 19a quantizes the LPC coefficients ai to obtain the quantization index Index Lpc.
  • The LPC filter coefficient inverse quantization means 19b inversely quantizes the quantization index Index Lpc to obtain the inverse-quantized values aqi.
  • Either of the perceptual weighting filters 14b and 14b-1 can be used; for example, the filter W(z) of equation (10) can be used.
  • The perceptual weighting synthesis filter 14 is the cascade connection of H(z) and W(z), and can be expressed as H(z)·W(z).
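Equation (10) is illegible here. A perceptual weighting filter commonly used in CELP coders, given as an assumption rather than the patent's exact formula, is

$$W(z) = \frac{A(z/\gamma_1)}{A(z/\gamma_2)}, \qquad 0 < \gamma_2 < \gamma_1 \le 1,$$

where A(z) is the LPC inverse filter; cascading it with H(z) yields the perceptual weighting synthesis filter 14.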
  • The gain quantization means 32, used in the CELP mode, quantizes the optimal gains determined in the search of the adaptive codebook Ba and the noise codebook Bn. The quantization format is arbitrary: scalar quantization, vector quantization, or the like may be used.
  • the coded information transmitting means 18 transmits coded information such as the optimum gain in the CELP mode to the speech decoding processing device 2 (described later with reference to FIG. 9).
  • The LTP mode is characterized in that the input signal is encoded using the high-order LTP synthesis filter and the LPC synthesis filter.
  • the LTP processing means 30 in the figure includes a vector extraction means 11, an LTP synthesis filter 12, and a filter coefficient calculation means 13.
  • First, the signal vector CL corresponding to the lag L is extracted from the adaptive codebook Ba.
  • From CL and the impulse response h(n) of the perceptual weighting synthesis filter 14, the LTP filter coefficients βi are obtained.
  • CL is input to the LTP synthesis filter 12 composed of the βi to obtain an excitation signal (the long-term predicted speech signal Sna-1 described in FIG. 1), and this excitation signal is passed through the perceptual weighting synthesis filter 14 to generate a reproduced speech vector (the reproduced encoded speech signal Sna described in FIG. 1).
  • The error calculation control means 15a obtains the weighted squared error E between the reproduced speech vector and the input speech signal from equation (4). (In the CELP mode, the error calculation control means 15a calculates the error using the error evaluation formula for the CELP mode.)
  • The minimum error detecting means 16 repeats this process over a predetermined lag range (for example, 20 ≤ L ≤ 147) and detects the minimum of E.
  • The optimum value transmitting means 17 outputs the optimal lag La and the optimal LTP filter coefficients βa obtained at the minimum error to the speech decoding processing device 2.
  • The mode selection means 31 controls switching between the CELP mode and the LTP mode. For example, the switch may be made according to the periodicity of the input speech signal; alternatively, speech encoding may be performed in both the CELP mode and the LTP mode, the coded outputs compared, and the mode with the higher coding quality selected.
  • the switch terminal c in the mode selection means 31 is connected to the terminal a when switching to the CELP mode, and the switch terminal c is connected to the terminal b when switching to the LTP mode.
  • Although two modes are used here, three or more modes may be prepared in total, with one of them being the LTP mode. The processing for switching modes according to the periodicity of the input speech signal is described later.
  • the state updating means 33 in the figure will be described later with reference to FIG.
  • both the CELP mode and the LTP mode are provided, and the mode is switched according to the input audio signal to perform the audio encoding.
  • In the LTP mode, the noise codebook Bn, which contributes little to the periodicity of speech, is not used, and all the quantization bits assigned to the noise codebook Bn in the CELP mode are assigned to the LTP synthesis filter 12 instead.
  • Encoding processing specialized for the periodicity of the input signal can therefore be performed.
  • As a result, sufficient coding performance is obtained even for signals that conventionally could not be encoded well, while for signals that do not contain a plurality of periods in one frame, the CELP mode enables efficient, flexible encoding suited to the input speech signal.
  • FIG. 7 is a diagram showing an operation when the value of the lag L is changed.
  • [S10] A signal vector of frame length is extracted from the position shifted by the lag L from the starting point O of the adaptive codebook Ba.
  • [S11] The signal vector extracted in step S10 is subjected to LTP processing by the LTP processing means 30, and then input to the perceptual weighting synthesis filter 14.
  • The error calculation control means 15a calculates the error between the output signal of the perceptual weighting synthesis filter 14 and the input speech signal.
  • The lag is then changed from L to L+1, and the processing from step S11 is repeated for all L to detect the minimum error. The lag is changed in the same way when searching the noise codebook Bn.
  • FIG. 8 is a diagram showing a state update of the adaptive codebook Ba.
  • The adaptive codebook Ba stores the Lmax most recent past excitation signals (signal vectors); N in the figure represents the frame length, the unit of encoding.
  • The state updating means 33 discards the temporally oldest N samples in the adaptive codebook Ba, shifts the remaining signal toward the older side (to the left), and copies the newly obtained excitation signal (the linear combination of the adaptive codebook Ba output and the noise codebook Bn output) into the part vacated by the shift. The latest excitation signal is therefore always stored on the right (temporally newest) side of the adaptive codebook Ba, as sketched below.
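A minimal sketch of this state update, under the same assumed layout (oldest sample on the left, newest on the right):

```python
import numpy as np

def update_adaptive_codebook(acb, excitation):
    """Shift out the oldest N samples and append the excitation of the
    current frame on the newest (right) side."""
    N = len(excitation)
    acb = np.roll(acb, -N)   # shift left; wrapped samples are overwritten next
    acb[-N:] = excitation    # copy the newest excitation on the right
    return acb
```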
  • FIG. 9 is a diagram showing information transmitted by the voice coding processing apparatus 1a.
  • The items in Table T are the mode information, the lag, the noise codebook index, the gain index, the LPC filter coefficient index, and the LTP filter coefficient index.
  • the mode information is information indicating whether the mode is the CELP mode or the LTP mode (MODE).
  • The lag information indicates the position in the adaptive codebook Ba measured from the start point O (L).
  • the random codebook index is an index obtained when the code vector extracted from the random codebook B n is quantized (Index cl).
  • the gain index is an index when the optimal gain is quantized (Index Gain).
  • The LPC filter coefficient index is the index when the LPC filter coefficients are quantized (Index Lpc).
  • The LTP filter coefficient index is the index when the LTP filter coefficients are quantized (Index Ltp).
  • the coded information transmitting means 18 transmits the information shown in Table T in the CELP mode. Further, the optimum value transmitting means 17 transmits the information shown in Table T in the LTP mode.
  • FIG. 10 is a diagram showing the configuration of the second embodiment.
  • The speech encoding processing device 1b adds stability determination means 41 to the speech encoding processing device 1a.
  • The stability determination means 41 determines the stability of the LTP synthesis filter 12 during the search for the optimal LTP filter coefficients βa and the optimal lag La in the LTP mode. When the filter is determined to be unstable, the LTP filter coefficients and the lag L at that time are excluded from the candidates for selecting the optimum values.
  • The matrix R in equation (8), used to calculate the LTP filter coefficients, is a covariance matrix, so the stability of the LTP synthesis filter 12 composed of the LTP filter coefficients obtained from equation (8) is not necessarily guaranteed.
  • When the filter is stable, the absolute values of the k-parameters (PARCOR coefficients) obtained from its LTP filter coefficients do not exceed 1.
  • When stability is not guaranteed, the k-parameters range more widely than in the stable case, and the quantization efficiency is reduced.
  • Moreover, if the order of the LTP synthesis filter 12 is increased to improve the quality of the reproduced sound, the probability of encountering unstable coefficients increases, and the quantization efficiency may be degraded.
  • In the second embodiment, therefore, the stability of the LTP synthesis filter composed of the LTP filter coefficients obtained during the search for the optimal LTP filter coefficients βa and the optimal lag La is determined, and if the filter is unstable, those LTP filter coefficients and that lag are excluded from the selection candidates.
  • Here, βi are the LTP filter coefficients and p is the LTP filter order.
  • FIG. 11 is a diagram showing an example of the arrangement of poles when the filter is stable, and FIG. 12 shows an example when the filter is unstable. In both figures, the vertical axis is Im{zi} and the horizontal axis is Re{zi}.
  • In this way, the stability of the filter is determined by the stability determination means 41, and when the filter is determined to be unstable, the corresponding filter coefficients and lag of the LTP synthesis filter 12 are excluded from the selection candidates. As a result, only stable parameters are selected.
  • Alternatively, the stability may be determined only for the final optimal LTP filter coefficients βa; in that case, if they are unstable, the LTP mode is not selected and the CELP mode is selected. A sketch of the pole-based stability test follows.
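A minimal sketch of the pole test of FIGS. 11 and 12, assuming the LTP denominator is written with consecutive powers, 1 - Σ βi·z^-i (the multi-tap layout around the lag reduces to a polynomial of this form):

```python
import numpy as np

def is_stable(beta):
    """The all-pole filter 1 / (1 - sum_i beta_i z^-i) is stable iff every
    root of its denominator lies strictly inside the unit circle
    (FIG. 11); one root outside means unstable (FIG. 12)."""
    denom = np.concatenate(([1.0], -np.asarray(beta, dtype=float)))
    poles = np.roots(denom)            # the roots z_i of the denominator
    return bool(np.all(np.abs(poles) < 1.0))
```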
  • FIG. 13 is a diagram showing a configuration of the third embodiment.
  • The speech encoding processing device 1c of the third embodiment adds stability determination means 41 and stabilization processing means 42 to the speech encoding processing device 1a. When the stability determination means 41 determines that the LTP synthesis filter 12 is unstable, the stabilization processing means 42 stabilizes the LTP filter coefficients and the lag L at that time.
  • In the second embodiment, the stability of the LTP synthesis filter 12 was determined, and if it was unstable, the filter coefficients and the corresponding lag were removed from the candidates, so that only stable filter coefficients were obtained.
  • In the third embodiment, the LTP filter coefficients corresponding to each lag are obtained during the search, and the stability of the filter is determined in the same manner as in the second embodiment. If the filter is determined to be unstable, the LTP filter coefficients are corrected so as to stabilize the LTP synthesis filter 12; if it is determined to be stable, the coefficients are not corrected.
  • Any method can be used to stabilize the LTP synthesis filter 12. For example, a method of calculating the pole positions on the z-plane from the filter coefficients and moving any pole outside the unit circle to the inside (hereinafter, the pole moving method) can be used. The pole moving method is described below.
  • First, equation (11) is solved in the same manner as described in the second embodiment, and the roots zi are obtained.
  • Re{zi} is the real part of zi, and Im{zi} is the imaginary part of zi.
  • The stability determination means 41 calculates the roots zi; if all poles lie inside the unit circle, the filter is determined to be stable, and if even one pole lies outside the unit circle, the filter is determined to be unstable.
  • The LTP filter coefficients βi are input to the stabilization processing means 42. If the LTP synthesis filter 12 is stable, the βi are output unchanged as βia; if the filter is unstable, the processing of equation (12) is applied.
  • The βia are used as the LTP coefficients in the error evaluation during the search processing in the LTP mode.
  • Since unstable LTP filter coefficients and lags no longer need to be excluded from the search candidates, coded speech quality can be prevented from deteriorating even if increasing the order of the LTP filter makes unstable coefficients more frequent. A sketch of such a stabilization follows.
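Equation (12) itself is not legible in this extraction; moving offending poles radially inward to a radius ρ < 1 is one standard realization of the pole moving method, sketched below under the same consecutive-power convention as above:

```python
import numpy as np

def stabilize(beta, rho=0.98):
    """Return corrected coefficients beta_a whose poles all lie inside
    the unit circle; stable inputs pass through unchanged."""
    denom = np.concatenate(([1.0], -np.asarray(beta, dtype=float)))
    poles = np.roots(denom)
    mags = np.abs(poles)
    outside = mags >= 1.0
    if not np.any(outside):
        return np.asarray(beta, dtype=float)      # already stable
    moved = poles.copy()
    moved[outside] = poles[outside] / mags[outside] * rho
    # rebuild the monic denominator; conjugate pairs keep it real
    new_denom = np.real(np.poly(moved))
    return -new_denom[1:]                         # back to the beta convention
```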
  • FIG. 14 is a diagram showing the configuration of the fourth embodiment.
  • The speech encoding processing device 1d of the fourth embodiment adds parameter conversion control means 50 to the speech encoding processing device 1a.
  • The parameter conversion control means 50 includes parameter conversion means 51, parameter quantization means 52, parameter inverse conversion means 53, and parameter inverse quantization means 54.
  • The parameter conversion means 51 converts the output ai of the LPC analysis means 21 into the k-parameters ki.
  • The parameter quantization means 52 quantizes the parameters ki and generates Index Lpc.
  • The parameter inverse conversion means 53 inversely transforms Index Lpc to generate kqi.
  • The parameter inverse quantization means 54 inversely quantizes kqi to generate aqi.
  • The optimum value transmitting means 17 includes parameter conversion means 51a and parameter quantization means 52a.
  • In FIG. 14, only the periphery of the parameter conversion control means 50 is shown; the other components are omitted because they are the same as in the speech encoding processing device 1a.
  • Since the LTP synthesis filter 12 is an all-pole resonant circuit with a feedback path from its output to its input, its coefficient sensitivity is high. If the quantization error is large when the LTP filter coefficients are quantized, the LTP synthesis filter 12 on the decoding side, which receives the coefficients, may oscillate, or the spectral distortion may increase, greatly degrading the reproduced speech quality.
  • the LTP filter coefficient is converted into another equivalent parameter and then quantized.
  • Examples of such equivalent parameters are the k-parameters (PARCOR coefficients) and the LSPs (line spectrum pairs).
  • The k-parameters are known to be equivalent to the LPC filter coefficients obtained by LPC analysis of the input signal, and the conversion formula from k-parameters to LPC coefficients is known.
  • The LSPs are likewise a parameter representation equivalent to the LPC filter coefficients, and conversion formulas from the LPC filter coefficients to LSPs and from LSPs back to the LPC coefficients are known.
  • Since LSPs have better quantization characteristics than the LPC filter coefficients, in ordinary CELP coding the LPC filter coefficients are converted into k-parameters or LSPs and then quantized.
  • In the fourth embodiment, the above relational expressions between the LPC filter coefficients and the k-parameters (or LSPs) are applied to the LTP filter coefficients as well.
  • The method of converting the LTP filter coefficients into k-parameters is also called the step-down process, and is expressed by equations (14) and (15), where βi are the LTP filter coefficients and p is the filter order; the k-parameters ki are obtained.
  • Conversely, the k-parameters can be converted back into LTP filter coefficients by the step-up process shown in equations (17) and (18). A sketch of the two recursions follows.
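Equations (14), (15), (17), and (18) did not survive extraction; the textbook step-down/step-up (Levinson) recursion pair is sketched below under the assumed convention A(z) = 1 - Σ ai·z^-i. The two functions are exact inverses of each other.

```python
import numpy as np

def step_down(a):
    """Filter coefficients -> k-parameters (PARCOR), eqs. (14)-(15)."""
    a = list(np.asarray(a, dtype=float))
    k = []
    for i in range(len(a), 0, -1):
        ki = a[i - 1]
        k.insert(0, ki)
        if abs(ki) >= 1.0:
            raise ValueError("unstable filter: |k| >= 1")
        # reduce from order i to order i-1
        a = [(a[j] + ki * a[i - 2 - j]) / (1.0 - ki * ki)
             for j in range(i - 1)]
    return k

def step_up(k):
    """k-parameters -> filter coefficients, eqs. (17)-(18)."""
    a = []
    for i, ki in enumerate(k, start=1):
        a = [a[j] - ki * a[i - 2 - j] for j in range(i - 1)] + [ki]
    return a
```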
  • In this way, the LTP filter coefficients are converted into k-parameters or equivalent parameters such as LSPs and then quantized, which makes it possible to achieve high coded speech quality with a small number of quantization bits.
  • the mode selection means 31 determines the nature of the input voice, and selects the CELP mode or the LTP mode according to the result of the determination.
  • Equation (20) defines Rcc(L) = Σn x(n-L)·x(n-L), the energy of the input signal delayed by L samples.
  • For every L in the range L1 ≤ L ≤ L2, the pitch prediction gain G(L) is calculated, and its maximum value G(L)max is determined.
  • G(L)max is compared with a predetermined threshold Th; if G(L)max is larger than Th, the periodicity is judged to be strong, and the LTP mode is selected.
  • If G(L)max is smaller than Th, the periodicity is judged to be weak, and the CELP mode is selected.
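A sketch of this mode decision, with assumed definitions: the one-tap pitch prediction gain G(L) = Rxx / (Rxx - Rxc(L)² / Rcc(L)) is used as the periodicity measure, and the threshold value is hypothetical.

```python
import numpy as np

def select_mode(x, lag_min=20, lag_max=147, th=2.0):
    """Return "LTP" when the maximal pitch prediction gain over the lag
    range exceeds the threshold Th, otherwise "CELP"."""
    x = np.asarray(x, dtype=float)
    n = np.arange(lag_max, len(x))           # samples with full lag history
    r_xx = float(x[n] @ x[n])
    g_max = 0.0
    for L in range(lag_min, lag_max + 1):
        r_xc = float(x[n] @ x[n - L])        # cross-correlation with delay L
        r_cc = float(x[n - L] @ x[n - L])    # Rcc(L) of eq. (20)
        resid = r_xx - r_xc * r_xc / max(r_cc, 1e-12)
        g_max = max(g_max, r_xx / max(resid, 1e-12))
    return "LTP" if g_max > th else "CELP"
```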
  • Here, the pitch prediction gain of the input speech signal is used as the parameter for the mode decision, but other feature parameters may be used, and the mode may also be determined from a combination of a plurality of feature parameters.
  • In this way, the mode is determined by the mode selection means 31. As a result, the optimal encoding mode can be selected according to the properties of the input speech signal, improving the quality of the coded speech.
  • FIG. 15 is a diagram showing the configuration of the speech decoding processing device.
  • The speech decoding processing device 2a is a decoder that decodes speech from the information output by the speech encoding processing device 1, which has the two modes, the CELP mode and the LTP mode.
  • The information shown in FIG. 9 is input to the speech decoding processing device 2a according to each mode.
  • The LPC synthesis filter 103 is composed of the inverse-quantized LPC filter coefficients and is used in both the CELP mode and the LTP mode.
  • When the mode selection means 106 selects the CELP mode based on the mode information MODE, the following decoding processing is performed.
  • N is the frame length.
  • The gain index Index Gain is input to the gain inverse quantization means 105, and the adaptive codebook gain g0 and the noise codebook gain g1 are supplied to the multipliers 107a and 107b, respectively.
  • The multiplier 107a multiplies the code vector C0(n) extracted from the adaptive codebook Ba by the gain g0 to generate g0·C0(n).
  • The multiplier 107b multiplies the code vector C1(n) extracted from the noise codebook Bn by the gain g1 to generate g1·C1(n).
  • The excitation signal y(n) is then given by equation (23): y(n) = g0·C0(n) + g1·C1(n).
  • Lmax is the adaptive codebook size (the maximum value of the lag).
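A minimal sketch of equation (23) on the decoder side, with the same assumed adaptive-codebook layout as before (newest sample last):

```python
import numpy as np

def celp_excitation(acb, c1, g0, g1, lag, N):
    """y(n) = g0*C0(n) + g1*C1(n): C0 is read from the adaptive codebook
    at the given lag (periodically extended when lag < N), C1 is the
    noise codevector selected by Index c1."""
    seg = acb[len(acb) - lag: len(acb) - lag + N]
    c0 = np.resize(seg, N)
    return g0 * c0 + g1 * np.asarray(c1[:N], dtype=float)
```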
  • When the LTP mode is selected according to the mode information, the following decoding processing is performed.
  • The quantization index Index Ltp is input to the LTP filter coefficient inverse quantization means 104, and the LTP synthesis filter coefficients βi are output.
  • The reproduced signal s(n) is obtained by inputting ya(n) to the LPC synthesis filter 103. The state of the adaptive codebook Ba is also updated by feeding the excitation signal ya(n) created in the current frame back into the adaptive codebook Ba.
  • the method of feedback is arbitrary, but the same method as in the case of the CELP method described above can be used.
  • In this way, the speech decoding processing device 2a can reproduce (decode) high-quality speech from the information encoded by the speech encoding processing device 1.
  • FIG. 16 is a diagram showing a configuration of the speech decoding processing device.
  • The speech decoding processing device 2b is a decoder for decoding speech from information encoded by the speech encoding processing device 1d, which has the two modes, the CELP mode and the LTP mode, as described in the fourth embodiment, and which converts the LTP synthesis filter coefficients into equivalent parameters such as k-parameters or LSPs before quantizing them.
  • Since the speech decoding processing device 2b differs from the speech decoding processing device 2a only in the method of generating the LTP filter coefficients, only the operation when the LTP mode is selected is described.
  • p is the order of the LTP synthesis filter 102.
  • The parameter conversion means 104b performs the conversion by the step-up process of equation (17) described in the fourth embodiment.
  • The LTP synthesis filter 102 is composed of the lag L and the coefficients βi. The output C0(n) corresponding to the lag L is extracted from the adaptive codebook Ba, and C0(n) is input to the LTP synthesis filter 102 to generate the excitation signal ya(n). Further, ya(n) is input to the LPC synthesis filter 103 to create the reproduced signal s(n).
  • The state of the adaptive codebook Ba is updated by feeding the excitation signal ya(n) created in the current frame back into the adaptive codebook Ba. The method of feedback is arbitrary; for example, the methods of equations (24) and (25) can be used.
  • the audio decoding processing device 2b can reproduce (decode) high-quality audio from the information encoded by the audio encoding processing device 1d.
  • Figure 17 is a flowchart showing the processing procedure of the audio processing method.
  • the audio signal is encoded to generate encoded information.
  • [S24] Long-term predicted speech signals are generated by performing long-term prediction analysis and synthesis relating to the periodicity of the speech signal, using the high-order long-term prediction synthesis filter, on the signal vector and the neighboring vectors.
  • the encoded information or the optimal value is decoded to reproduce the audio.
  • As described above, in the speech processing apparatus 100 and the speech processing method of the present invention, when a fixed-length section of the speech signal does not contain signals of a plurality of periods, the speech signal is encoded to generate encoded information; when it does contain signals of a plurality of periods, speech encoding is performed using the high-order long-term prediction synthesis filter estimated by long-term prediction analysis and synthesis to generate the optimum values, and the decoding side decodes the encoded information and the optimum values.
  • the encoding process performed by the first speech encoding unit 20 has been described as CELP, but speech encoding processes other than CELP may be performed.
  • As described above, when a fixed-length section of the speech signal contains signals of a plurality of periods, the speech encoding apparatus of the present invention performs speech encoding using the high-order long-term prediction synthesis filter estimated by long-term prediction analysis and synthesis. This makes it possible to perform optimal speech encoding according to the input speech.
  • The speech processing apparatus of the present invention encodes the speech signal to generate encoded information when a fixed-length section of the speech signal does not contain signals of a plurality of periods; when it does, speech encoding is performed using the high-order long-term prediction synthesis filter estimated by long-term prediction analysis and synthesis to generate the optimum values, and the decoding side decodes the encoded information and the optimum values. This makes it possible to perform optimal speech encoding according to the input speech and to reproduce high-quality speech.
  • Likewise, the speech processing method of the present invention encodes the speech signal to generate encoded information when a fixed-length section of the speech signal does not contain signals of a plurality of periods, and performs speech encoding using the high-order long-term prediction synthesis filter estimated by long-term prediction analysis and synthesis to generate the optimum values when it does.
PCT/JP1999/002089 1999-04-19 1999-04-19 Codeur de parole, processeur de parole et procede de traitement de la parole WO2000063878A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP99913723A EP1187337B1 (en) 1999-04-19 1999-04-19 Speech coding processor and speech coding method
PCT/JP1999/002089 WO2000063878A1 (fr) 1999-04-19 1999-04-19 Codeur de parole, processeur de parole et procede de traitement de la parole
JP2000612922A JP3905706B2 (ja) 1999-04-19 1999-04-19 音声符号化装置、音声処理装置及び音声処理方法
DE69937907T DE69937907T2 (de) 1999-04-19 1999-04-19 Sprachkodiererprozessor und sprachkodierungsmethode
US09/897,839 US6470312B1 (en) 1999-04-19 2001-07-02 Speech coding apparatus, speech processing apparatus, and speech processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP1999/002089 WO2000063878A1 (fr) 1999-04-19 1999-04-19 Codeur de parole, processeur de parole et procede de traitement de la parole

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US09/897,839 Continuation US6470312B1 (en) 1999-04-19 2001-07-02 Speech coding apparatus, speech processing apparatus, and speech processing method

Publications (1)

Publication Number Publication Date
WO2000063878A1 true WO2000063878A1 (fr) 2000-10-26

Family

ID=14235515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1999/002089 WO2000063878A1 (fr) 1999-04-19 1999-04-19 Codeur de parole, processeur de parole et procede de traitement de la parole

Country Status (5)

Country Link
US (1) US6470312B1
EP (1) EP1187337B1
JP (1) JP3905706B2
DE (1) DE69937907T2
WO (1) WO2000063878A1

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7072832B1 (en) 1998-08-24 2006-07-04 Mindspeed Technologies, Inc. System for speech encoding having an adaptive encoding arrangement
DE19934296C2 (de) * 1999-07-21 2002-01-24 Infineon Technologies Ag Prüfanordnung und Verfahren zum Testen eines digitalen elektronischen Filters
US6910007B2 (en) * 2000-05-31 2005-06-21 At&T Corp Stochastic modeling of spectral adjustment for high quality pitch modification
US7103538B1 (en) * 2002-06-10 2006-09-05 Mindspeed Technologies, Inc. Fixed code book with embedded adaptive code book
US9058812B2 (en) * 2005-07-27 2015-06-16 Google Technology Holdings LLC Method and system for coding an information signal using pitch delay contour adjustment
PT2165328T (pt) 2007-06-11 2018-04-24 Fraunhofer Ges Forschung Codificação e descodificação de um sinal de áudio tendo uma parte do tipo impulso e uma parte estacionária
KR101403340B1 (ko) * 2007-08-02 2014-06-09 삼성전자주식회사 변환 부호화 방법 및 장치
US9972301B2 (en) * 2016-10-18 2018-05-15 Mastercard International Incorporated Systems and methods for correcting text-to-speech pronunciation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6068400A (ja) * 1983-09-26 1985-04-18 沖電気工業株式会社 音声分析合成方法
JPH05113800A (ja) * 1991-10-22 1993-05-07 Nippon Telegr & Teleph Corp <Ntt> 音声符号化法
JPH0981174A (ja) * 1995-09-13 1997-03-28 Toshiba Corp 音声合成システムおよび音声合成方法
JPH09134196A (ja) * 1995-11-08 1997-05-20 Matsushita Electric Ind Co Ltd 音声符号化装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265190A (en) * 1991-05-31 1993-11-23 Motorola, Inc. CELP vocoder with efficient adaptive codebook search
WO1995024776A2 (en) * 1994-03-11 1995-09-14 Philips Electronics N.V. Transmission system for quasi-periodic signals
IT1277194B1 (it) * 1995-06-28 1997-11-05 Alcatel Italia Metodo e relativi apparati di codifica e di decodifica di un segnale vocale campionato
JP3499658B2 (ja) 1995-09-12 2004-02-23 株式会社東芝 対話支援装置
US5774837A (en) * 1995-09-13 1998-06-30 Voxware, Inc. Speech coding system and method using voicing probability determination

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KLEIJN W. B. ET AL: "Improved Speech Quality and Efficient Vector Quantization in Selp", PROC. IEEE ICASSP-88, vol. 1, 1988, pages 155 - 158, XP002923665 *
See also references of EP1187337A4 *

Also Published As

Publication number Publication date
EP1187337A4 (en) 2005-05-11
EP1187337B1 (en) 2008-01-02
JP3905706B2 (ja) 2007-04-18
DE69937907T2 (de) 2008-12-24
EP1187337A1 (en) 2002-03-13
DE69937907D1 (de) 2008-02-14
US6470312B1 (en) 2002-10-22

Similar Documents

Publication Publication Date Title
JP7209032B2 (ja) 音声符号化装置および音声符号化方法
JPH0736118B2 (ja) セルプを使用した音声圧縮装置
KR20070028373A (ko) 음성음악 복호화 장치 및 음성음악 복호화 방법
JP2004526213A (ja) 音声コーデックにおける線スペクトル周波数ベクトル量子化のための方法およびシステム
JP3628268B2 (ja) 音響信号符号化方法、復号化方法及び装置並びにプログラム及び記録媒体
WO2000063878A1 (fr) Codeur de parole, processeur de parole et procede de traitement de la parole
WO2002071394A1 (en) Sound encoding apparatus and method, and sound decoding apparatus and method
JP3888097B2 (ja) ピッチ周期探索範囲設定装置、ピッチ周期探索装置、復号化適応音源ベクトル生成装置、音声符号化装置、音声復号化装置、音声信号送信装置、音声信号受信装置、移動局装置、及び基地局装置
US6842732B2 (en) Speech encoding and decoding method and electronic apparatus for synthesizing speech signals using excitation signals
CN110709925A (zh) 音频编码
JP3490325B2 (ja) 音声信号符号化方法、復号方法およびその符号化器、復号器
JPH113098A (ja) 音声符号化方法および装置
JP3578933B2 (ja) 重み符号帳の作成方法及び符号帳設計時における学習時のma予測係数の初期値の設定方法並びに音響信号の符号化方法及びその復号方法並びに符号化プログラムが記憶されたコンピュータに読み取り可能な記憶媒体及び復号プログラムが記憶されたコンピュータに読み取り可能な記憶媒体
JPH09244695A (ja) 音声符号化装置及び復号化装置
JP4525693B2 (ja) 音声符号化装置および音声復号化装置
JP3175667B2 (ja) ベクトル量子化法
JP3232728B2 (ja) 音声符号化方法
JPH11272298A (ja) 音声通信方法及び音声通信装置
JP2004020676A (ja) 音声符号化/復号化方法及び音声符号化/復号化装置
JP4525694B2 (ja) 音声符号化装置
JP2020129115A (ja) 音声信号処理方法
JP2001343984A (ja) 有音/無音判定装置、音声復号化装置及び音声復号化方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: JP

Ref document number: 2000 612922

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 09897839

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1999913723

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1999913723

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1999913723

Country of ref document: EP